In episode 16 of The Kubelist Podcast, Marc speaks with Josh Dolitsky of Blood Orange about the Open Container Initiative. They also discuss working with containers, Josh's introduction to Kubernetes, and the importance of nurturing a diverse community.
Marc Campbell: Hi, again.
Today, I'm here with Josh Dolitsky, founder of Blood Orange, to talk about the Open Container Initiative project, which is in the Linux Foundation.
This is a cool project, and it's about a lot more than the Docker Registry Protocol.
We're going to dig into the OCI spec and talk about the history and the roadmap.
Josh Dolitsky: Hi, how's it going?
Marc: Great. So, before we get started doing all that, I'd love to hear a little bit about your background and how you got into this ecosystem.
Josh: Definitely. Yeah.
So, I've been working as a software engineer since 2013.
Around that time was sort of when Docker was coming out, and at the same time, I was getting into DevOps and Jenkins types of things.
So, I've been working with containers for a long time, but it wasn't until 2017, when I was working on a team at a company called HERE Technologies, that I was introduced to Kubernetes for the first time.
And through that, I ended up doing an open source project that became semi-popular in the ecosystem, called ChartMuseum.
And ChartMuseum is a repository server for Helm Charts.
So, with Helm, which is the package manager tool for Kubernetes, there's a way to share Helm Charts between people, between teams.
So, yeah, I maintain ChartMuseum, which is one of the more popular tools for doing that.
And from there, I got in touch with a lot of people in the open source ecosystem, starting with Helm and then moving into OCI, which is sort of tangentially related to CNCF and that world.
And now, I'm one of the maintainers in OCI.
But it's been a fun journey, and open source is really an interesting place to be right now.
Marc: That's cool. I think there's something there that we'll have to dive into around ChartMuseum.
I can kind of connect a few of the dots there, which I think we'll cover in a bit.
Marc: Great. So, let's jump into the OCI Project. OCI, Open Container Initiative.
Can you describe, like, the charter of the project or what it is that you're trying to do?
Josh: Sure. So, just to be, you know, totally transparent, my involvement in the project kind of comes at a later stage.
I didn't really get involved until early last year, but it's been around since 2015.
You know, it's hard to understand the full history without involving a lot of people who were there at the time, but basically, Docker was coming up with their standards for containers.
And CoreOS came on the scene with their own standards.
And they each got a ton of traction in what they were doing.
And so, so as not to start some sort of container war and the legal battles that come with that, they decided to come together and form this initiative.
And so, the container runtimes and specifications are all now more or less a shared standard that everyone's agreeing on.
Marc: Right and so, at the time, let's go back there.
So, Docker was out, and CoreOS came out.
When you talk about CoreOS coming out with a separate standard, was that based around rkt?
Josh: Yes. So, as far as I understand, rkt was the tool and appc was the spec.
Marc: That's right.
Luckily for the ecosystem, everybody decided to work together and form one standard that we can all create against, and that is now the OCI specifications.
Josh: Correct. And it's a little loaded if you say OCI specification. It's actually a collection of specifications.
So, I think the original one is the runtime specification, and that's where rkt and Docker come in.
But it's since then expanded into the image specification, which defines how a container is laid out and built.
And then, the one that I'm primarily involved with, which ties into all the Helm stuff in a way, is the distribution specification.
Which actually, just a few days ago, reached a 1.0 release.
Marc: Oh, congrats. So, like V1, after quite a bit of work put into it.
Josh: Yeah. It seems like it shouldn't be a lot but it definitely was a lot to get to that point.
Marc: Hey, I mean, we have to make sure that it supports all the various use cases that we have. Not just one, so.
Josh: Right. Totally.
Marc: Okay, so, the OCI project is the runtime, the image, and the distribution spec.
And Docker has a project called Distribution that's in the CNCF.
That's, like, their self-hosted version of the registry. Does that completely implement the OCI spec?
Josh: Yeah. So basically, what happened is the tool came out, and the API that the tool exposed was turned into a spec.
So, Stephen Day, who worked for Docker at the time, I believe, was kind of the one to put together what it did in writing.
The challenge is, with the distribution spec over the past year, we've really buckled down and tried to determine what endpoints, and status codes, and response bodies, and those types of really nitty-gritty details the spec defines, and it turned out there were certain portions of the Distribution project that didn't actually implement its own spec.
So, you know, we submitted PRs and got it to that point.
I think it still has yet to release some of the changes, but they're minor, like pagination of tag listing, and small things like that.
But essentially, you can rely, for the most part, on the CNCF Distribution project being a sort of reference implementation.
It's just that the code came before the spec and there's quite a few new projects, which I can talk all about, that actually come after the spec, which is really interesting.
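As a rough sketch of what those nitty-gritty details look like from a client's point of view, here is how the core pull URLs are laid out. The helper names below are hypothetical, but the `/v2/` path layout follows the distribution spec:

```python
# Hypothetical helpers sketching the URL layout a distribution-spec
# client deals with; the /v2/ paths follow the OCI distribution spec.

def api_version_url(registry: str) -> str:
    """Endpoint a client hits first to check the server speaks the /v2/ API."""
    return f"https://{registry}/v2/"

def manifest_url(registry: str, name: str, reference: str) -> str:
    """Pull (or push, with PUT) a manifest by tag or digest."""
    return f"https://{registry}/v2/{name}/manifests/{reference}"

def blob_url(registry: str, name: str, digest: str) -> str:
    """Pull a blob, meaning a layer or a config, by its content digest."""
    return f"https://{registry}/v2/{name}/blobs/{digest}"
```

A conforming server has to agree with clients not just on these paths, but on status codes and response bodies too, which is exactly what the spec work pinned down.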
Marc: So, you mentioned that you got started on the project about a year, year and a half ago.
What was the impetus for that?
Josh: Yeah, it's really interesting how I became involved.
You know, I use Docker all the time. Had used Docker all the time for doing DevOps-type stuff.
I've never been particularly involved in the building of containers or managing runtimes, anything really like that.
The reason I got involved is really because of my involvement in the Helm project.
So, I mentioned ChartMuseum earlier.
There was kind of a lot of demand around the features that the Helm Chart repo system should provide.
Things like authentication, like OAuth, and authorization.
Like, can this person access this?
And, you know, if you have any familiarity with Helm repos, they're very easy to use in my opinion, but they don't really provide these more, what you might call enterprise-level features.
So, enter somebody who's really made a name for himself in this space, Steve Lasker, who works at Microsoft.
His team started experimenting actually with ChartMuseum.
Like, I was on calls with him back in 2018.
Like, they were considering using ChartMuseum for their registry and they kind of pivoted and turned into, "Can we just put Helm Charts in our container registry?"
So, they started experimenting with this, and I saw that they were doing it and instantly became excited about the idea. Because, you know, as much as I take some pride in the code that I wrote and the system that I helped maintain, I think a healthy sense of considering when's the right time to look to an alternative kicked in. And so, really, what happened is I started experimenting with using some of these new technologies for putting Helm Charts into registries.
And from there, I was kind of introduced to some of the people from OCI, which turned into a project for making sure that registries conform to the spec.
Because it was like we were going a little too far.
Like, can we put Helm Charts in registries? Sure but like, there was still not really a full understanding of what the spec was.
So, it was like, you know, putting the cart before the horse.
So, we really spent a year making sure that the spec was really solid and that there was testing in place so that a registry could say, "I am a distribution-spec-conforming registry."
So, that's work I've been doing over the past year, along with my now ex-coworker, Peter.
And that was, yeah, just released this week.
And now, I think we're really turning back to trying to get a fully working Helm-in-the-registry example.
And even now, in Helm, it's experimental.
Like, you have to set an environment flag to say, "Hey, I explicitly want to use these features."
Even with it experimental, Microsoft, Amazon, and Google all already support this type of stuff.
So, it's pretty promising. So, we're now turning to really finalize this in the Helm clients.
Marc: When you say that Microsoft, Amazon, and Google support it, you're talking about their hosted registries, like ECR and Google Container Registry, right?
Josh: Yeah, exactly.
Marc: Cool. So, let's go back a little bit then, because that's really cool.
And I actually want to spend a little bit of time talking about, you know, exactly what you just described there, which was, you know, Helm Charts in a container registry is cool.
Like, where's the line?
Like, if a registry, or this distribution spec, is essentially, you know, and tell me if I'm oversimplifying this, a well-defined interface that's an object store with a manifest and RBAC on it.
What should we not think about putting in there?
If I can put a Helm Chart in, could I take it all the way to the extreme and just start using it for any type of artifact?
Josh: That is a great question. And I think it depends on who you ask.
There's kind of mixed feelings about this, because a lot of the people involved in the project, you know, like OCI is Open Container Initiative.
So, really, the runtime is the bread and butter there. And the rest of it is to support that.
So, you know, there's definitely a handful, I wouldn't even say a handful, like a large portion of people who, you know, just do container stuff with this.
I would put myself on the complete other end of the spectrum, and I would argue, as someone who built a tool to implement a custom spec, like the Helm repo spec, it's really not worth it.
Like, we should all kind of come together on this is how you upload things to a place.
And, you know, Stephen Day, who originally wrote the spec, in talking to him, it was really designed so that it could handle any type of content.
You know, the spec is very important for containers because, if you're familiar with Docker and the way it does layering, part of it is like, if you lose your internet connection when you're downloading a two gigabyte image, you can pick up where you left off.
And so, there's components that are, you know, have to do with the reliability and the efficiency.
But at the end of the day, it's a really good upload/download system that has, like you said, RBAC enabled. It's not part of the spec, but everything's namespaced.
And so, I would argue, yes.
But put Helm Charts in, yes. Put npm modules in, put whatever the next, you know, big cloud native tool that has something to share around.
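The reliability and efficiency pieces Josh mentions, like picking up an interrupted two-gigabyte pull where you left off, and content-addressed blobs, can be sketched roughly like this. The helper names are made up; the `Range` header and `sha256:`-prefixed digests reflect how the protocol works:

```python
import hashlib

def resume_range_header(bytes_already_fetched: int) -> dict:
    """HTTP header asking the registry for the rest of a blob, so an
    interrupted download can pick up where it left off. Hypothetical
    helper; this is standard HTTP Range semantics."""
    if bytes_already_fetched <= 0:
        return {}
    return {"Range": f"bytes={bytes_already_fetched}-"}

def verify_blob(data: bytes, expected_digest: str) -> bool:
    """Blobs are content-addressed: the digest in the URL doubles as an
    integrity check once all the bytes have arrived."""
    algorithm, _, hex_value = expected_digest.partition(":")
    if algorithm != "sha256":
        return False  # sketch: only sha256 handled here
    return hashlib.sha256(data).hexdigest() == hex_value
```

Content addressing is also why arbitrary artifacts fit so naturally: the registry only ever sees opaque, digest-named blobs.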
Actually, one of the most interesting things that's going on right now is around WebAssembly modules.
So, WebAssembly modules are typically a couple of megabytes.
And back to Microsoft, there is a project they have called Krustlet, which actually replaces containers with WebAssembly in a Kubernetes environment.
And of course, they're using the same API that the kubelet uses to go fetch container images, but instead they've implemented their own code to go fetch a WebAssembly module from a registry.
Marc: That's super interesting.
Are there other, like, even more kind of random ideas that you guys are looking forward to trying to implement?
Like leveraging OCI or anything else that you could?
Josh: You're asking, like, artifacts generally, or a specific artifact?
Marc: I don't know. I don't know. I mean, you tell me.
It seems like the sky's the limit with this. So, what would be like a dream for you in six months?
Like, in a year maybe, and be like, "Oh, we're using it for this."
From the upload download side of things.
Josh: Yeah. So, I would like, you know, when the next big project comes out.
When I say the next big project, I question myself what I even mean by that.
But like, when a project comes out and it's popular, I would like to see that it doesn't reinvent the wheel around distribution.
I would like to see a "project pull," a "project push," and to me, once we get past how we share things and how we distribute things, that's a win, because then we can just focus on, like, what is this project really doing?
And why did I spend so much time focusing on this Helm repo system, instead of focusing on the Helm clients?
You know, maybe I wouldn't be here if that didn't happen and whatever.
But like, I think, you know, more software is not bad, but like, I think once we can agree on these things and, you know, we mentioned the major clouds registries like, these things become commodities.
And once that becomes boring, we can focus on really the interesting problems to solve.
Marc: Yeah, and I think those interesting problems could be, you know, obviously using it to host NPM packages and, you know, I'm putting my Rust packages in a Cargo server. That's great but-
Marc: If I have a common protocol, then I can start implementing, you know, compliance and like common tooling that can be applied to any of those registries across different ecosystems.
Obviously, it's going to have to know what an NPM package is, but I don't have to like reinvent the plumbing for every protocol out there anymore.
Josh: Yeah. The compliance thing is actually an interesting aspect.
Like, when you're considering like a security or production operations standpoint, it's like, you know, my infrastructure has all these things going on.
Why am I going to add a ChartMuseum server? Or why am I going to add a Crates server?
You know, and I'm not saying anything bad about the way that you know, Crates is implemented or anything.
It's just like, it's just another thing to worry about.
It's another thing that can be attacked. That's another thing that can leak your information.
It's another thing to monitor. It's another thing to wake you up at two in the morning.
And when you're only worrying about this one registry and my Kubernetes server, it's just much less, you know. And I don't know much about compliance, but it seems to me that some of these cloud services address those concerns.
And I would imagine that the registry is definitely one of them because it hosts the container images that make up your entire application.
Marc: Yeah, I mean, anything you can push out of the application into the platform layer is a win.
I mean, that's why Kubernetes is popular, right?
Marc: You mentioned that the distribution part of the OCI spec hit 1.0 recently.
What was the milestone? What was the work that was done that you had to cross to be able to call it a V1?
Josh: So, you know, back when Peter and I were recruited to help out with this.
So, you mentioned the Docker Distribution, now CNCF Distribution, project.
At that time, like early 2020, it seemed really neat that I could stand up Docker Distribution, and I could push Helm Charts to it, and I could use Docker to push images to it.
But, you know, not everyone's using that code.
I should be able to implement my own server in Rust, and there is a distribution server in Rust called Trow.
But basically I should be able to implement this without relying on this code, just in principle.
And so, what happened is, the spec itself was like a 3,000-line document.
And so, what we did is we sat down and looked through this document and tried to figure out what exactly the spec says is and isn't allowed.
And that was not really a small task.
You know, there's a lot of people who've been around this project who understand it through kind of, like, tribal knowledge, but coming at it fresh, like I said, I'd been using Docker for a long time, but I didn't really understand the APIs underneath.
And so, we had to really rip it apart, figure out what all the endpoints are, and distill it into the smallest possible way to understand it.
So, we took the document from like 3,000 lines to like 300 lines.
It's something like that. It was like a 10x reduction. Really trying to distill what it is.
At the end of the document, you can see there's now a markdown table of the actual API endpoints.
So, if I'm about to write a client or a server like this is all I need to worry about.
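As a rough paraphrase of that table (the spec repo is the authoritative source, and this is not the complete list), the core endpoints look something like:

```python
# Rough paraphrase of the endpoint table in the OCI distribution spec;
# consult the spec itself for the authoritative, complete list.
CORE_ENDPOINTS = [
    ("GET",    "/v2/",                             "API version check"),
    ("HEAD",   "/v2/<name>/manifests/<reference>", "check manifest exists (pull)"),
    ("GET",    "/v2/<name>/manifests/<reference>", "fetch manifest (pull)"),
    ("GET",    "/v2/<name>/blobs/<digest>",        "fetch blob (pull)"),
    ("POST",   "/v2/<name>/blobs/uploads/",        "start blob upload (push)"),
    ("PUT",    "/v2/<name>/manifests/<reference>", "put manifest (push)"),
    ("GET",    "/v2/<name>/tags/list",             "list tags (content discovery)"),
    ("DELETE", "/v2/<name>/manifests/<reference>", "delete manifest (content management)"),
]

def endpoints_for(method: str) -> list:
    """All paths reachable with a given HTTP method in this sketch."""
    return [path for m, path, _ in CORE_ENDPOINTS if m == method]
```

The last column hints at the four conformance categories discussed below: pull, push, content discovery, and content management.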
But as part of that effort too, we built a really interesting tool that lives inside the distribution spec, GitHub Repo.
If you go there, go to the directory called conformance.
We built something using a Go testing framework called Ginkgo, you know.
And I don't want to get too deep into what, you know, libraries we use and all this type of stuff.
But we made a very simple way to point this tool at a registry, run it, and produce an HTML report that says, "Your registry supports these endpoints, but it doesn't support these endpoints. These are the error messages we got," and it shows all the responses from the registry.
We also broke it down into four categories. So, certain registries, like Docker Hub.
They explicitly don't allow you to delete tags through the API because, you know, I don't know, ask them.
You can go through their UI and delete tags. But you can't do it through the spec.
And so, it turned into like, certain registries don't want to support deletion.
So, we broke it into four different categories.
Which is pull, push, content discovery, and content management.
And so, now you can point this tool at your registry and say, "I want to test all four of these things, or just one of these four things."
And you may come out and say, "My registry supports downloading using the spec but it doesn't support uploading."
We have our own custom way to upload things to the registry or we have our own custom way to delete things or to list things in the registry.
That would be the content discovery.
So, yeah, there's this really full fledged tool.
You know, the spec is really what the repo is for, but the tool allows you to test and make sure that your server does indeed implement the spec.
And like I said, there's some new projects that have come out.
A few of them I can mention: Trow, which is by a smaller company called Container Solutions.
And it's Rust-based. I think it's, like, the first Rust-based server.
And it's meant to, like, live inside of your Kubernetes cluster.
Adrian, who's the main developer of that, was really working with us while we were building the conformance tooling to make sure that his registry, or their registry, implemented it.
Another really, really interesting project is called ZOT. Z-O-T.
And that's by the open source division within Cisco.
So, Rom, who's kind of the lead on that project, at least as we understand it.
He, too, was building this server at the same time we were building the conformance tooling.
So, it'd be like going back and forth: "Oh, should this endpoint return a 201 or a 202?"
And they were really building it as it went.
So, and there's a few more, but you know, those two projects really evolved as the spec was being built.
And while they're probably not as stable as Docker distribution or CNCF distribution, it's brand new code.
And they definitely have features that are, you know, a little niche and do their own thing.
Marc: That's cool. I mean, I think a lot of us like to pick a project, and dig into code, and start writing.
And, you know, maybe the idea of writing a spec isn't the most glamorous-sounding.
So, first, thank you so much for doing that because, you know, we had competing standards and like, you know, getting everybody onto one is amazing.
And taking a fast-moving ecosystem that developers love, where everybody has an opinion, you know, around how the Docker registry should work and the direction they want it to go, and going through the process of actually turning that into a vendor-neutral, agreed-on standard, that's no small undertaking at all.
Josh: Yeah. It's weird working in the open.
Not weird, you know. I think, you know what you're getting into.
Marc: I think it's fair to say that there's not a lack of opinions in the waters that you navigate.
And bless your heart for doing that is, I think, what Marc and I are both trying to say.
Josh: Thanks. There's a lot of like, "Hey, we've already done it this way. So, like, we should just keep it this way." And-
Marc: So, wait, so that brings up a really good point. So, just high level, no specifics.
But how do you go about resolving two strong opinions in this space when you have to kind of come to consensus?
Is there a trick that you use? Is there a pattern that you've developed, or it's just kind of just hard work?
Or what would you say is your technique there?
Josh: Wow. I don't know that I even really have a solution.
There's been moments where I'm just like, this is not working.
And, you know, there's been moments in these meetings where it almost felt like, you know, different vendors were going to go off and go implement their own thing, because like, you know, F it, we're going to do what we're going to do.
Thankfully, the different vendors picked some really strong individuals to represent them in these conversations, and they, you know, worked through it and were able to get to the bottom of it.
But I really don't know the answer.
Like, it's always nice to have people who don't have too much skin in the game. I would consider myself, really, one of those people, but I was also sort of leading the effort.
So, but there's definitely people who will come on these calls, like Nisha, who works for VMware.
I'm not sure whether VMware has a registry offering or not, but she was really great to have around, because if someone came in with a strong opinion from their vendor's registry, she would just kind of tell it how it is.
Like, "This doesn't make sense." Or "Yeah, that's a really good idea."
And I think just having a diverse community of people who are not scared to call things what they are, is really important.
And also for the vendors to be able to give a little.
Which in the end, it all worked out. But yeah, definitely having outsiders is a big, big, big thing.
Marc: Cool. I want to make sure we give the full OCI project a little bit of time too, you know.
We're digging into the distribution, which is actually like super cool.
Let's switch over and talk about the runtime side of the spec.
That controls how an image pulled from distribution is, like, executed in the cluster?
Josh: Yeah, and the reason I focus on distribution is that I'm really probably not the best person to talk to about runtime, but yeah.
So, the runtime spec defines how you take an image, stored in a registry or elsewhere, and actually execute it as a process that runs your application.
Most of OCI is a host of specs and readme-type things.
With a few exceptions, one of which is some code called runc.
And runc is the open source implementation of the runtime spec.
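To give a feel for what the runtime spec covers, here is a heavily pared-down sketch of the config.json that sits in an OCI runtime bundle. The field names follow the runtime spec, but a real bundle needs much more (mounts, namespaces, capabilities), so treat this as illustrative only:

```python
import json

def minimal_runtime_config(command: list) -> dict:
    """Illustrative skeleton of a runtime-spec config.json; a real
    bundle that runc would accept needs many more fields."""
    return {
        "ociVersion": "1.0.2",
        "process": {
            "cwd": "/",
            "args": list(command),  # the process to execute in the container
        },
        "root": {
            "path": "rootfs",  # the unpacked image filesystem
            "readonly": True,
        },
    }

# Pretty-print the sketch for a trivial command:
print(json.dumps(minimal_runtime_config(["sh", "-c", "echo hello"]), indent=2))
```

A runtime like runc takes a bundle shaped roughly like this (config plus an unpacked root filesystem) and turns it into a running process.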
Marc: It's interesting when you start, and this might not be in your wheelhouse, but it's really interesting where the line between the spec and implementation is.
So, can you maybe give us a little color on what the group does, and who's responsible for runc, and who does that?
Are you writing code for runc, or who's doing that, and how tight is that?
Because I think runc is a critical piece of the infrastructure that we all leverage.
Josh: Yep. So, the people I'm familiar with who work on that project are Aleksa and Akihiro, but it really is its own world.
So, even within Open Containers, there seem to be conversations that some people are totally invested in, and others that are not, or are even really taken for granted. Like you said, runc powers everything out there.
But it's like, you know, I'm taking it for granted that it just works and all that type of stuff.
But the code itself is maintained, just like other projects, you know, you would find in cloud native or anything like that.
It has been interesting. Like, they've been really inching towards a 1.0 for a very long time.
Like, I'm looking at the project right now and it is at release candidate number 93.
So, yeah, I'd love to have, you know, one of them on a podcast like this and really understand more about that myself, because the focus of a lot of these OCI calls has actually been the image spec, which is, like, the JSON layout of manifests, and the distribution spec, which I've been talking about.
Marc: Yeah. Great. So, yeah. Let's move over to that image spec. So, you said it's a JSON representation of an image?
Josh: Yeah. So, the OCI image spec, or the image format specification, defines what a container image is, as it's represented, kind of like, on disk.
And so, what it is, is it's a manifest that says, you know, "Here's some data about it."
And in that manifest, it lists a set of, like, file system layers.
And given that definition, I'm able to unpack this thing and, I believe, hand it off to runc so that it can run it.
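A minimal sketch of what that manifest looks like: a config descriptor plus an ordered list of layer descriptors, each addressed by digest. The media-type strings follow the image spec; the helper names here are made up:

```python
import hashlib

def _descriptor(media_type: str, data: bytes) -> dict:
    """An OCI descriptor: media type, content digest, and size."""
    return {
        "mediaType": media_type,
        "digest": "sha256:" + hashlib.sha256(data).hexdigest(),
        "size": len(data),
    }

def image_manifest(config_bytes: bytes, layers: list) -> dict:
    """Assemble an image-spec-shaped manifest from raw blob bytes."""
    return {
        "schemaVersion": 2,
        "mediaType": "application/vnd.oci.image.manifest.v1+json",
        "config": _descriptor("application/vnd.oci.image.config.v1+json", config_bytes),
        "layers": [
            _descriptor("application/vnd.oci.image.layer.v1.tar+gzip", layer)
            for layer in layers
        ],
    }
```

Pulling an image is then just fetching this manifest, fetching each blob it references, and unpacking the layers into a root filesystem.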
Marc: Cool. That totally makes sense.
And it also makes sense that the project is, you know, big enough that it has three distinct areas, with, like, you know, deep expertise in each of those.
I'm kind of curious, like, if you can share a little bit about how the community and the governance works.
It's one project, by the Linux Foundation.
But do runtime, image, and distribution have separate tracks for community meetings?
You know, like, you're obviously at a 1.0 and they're not all?
Josh: So, image spec is 1.0.1. Runtime is 1.0.2. There's just one meeting.
There's been talks recently, especially when it comes to, like, what are we going to do with Helm Charts and new things.
Like, that's definitely a whole group that is so far removed from runtime.
Because we're not even talking about containers anymore.
So, there's been talks recently about, you know, breaking it down into different working groups.
But as far as the governance goes, each project has its own maintainers, and they're totally in control of that project: how things get added or changed, the releases, reviewing issues and pull requests.
So, I'm one of the ones on distribution, but then there's also the TOB, or Technical Oversight Board, and that does more higher-level and legal-type work.
And those people, you know, decide which projects should be in OCI.
And among those different sub-projects or specs, how they should be maintained.
Like, I think things such as a pull request needing two reviewers.
That, I believe, is managed at a higher level by the TOB.
Typically, like, the people who represent the TOB are from a lot of the stakeholder companies.
So, I'm looking right now at, like, Alibaba, Google, AWS, Apple, SUSE, you know.
These are kind of the bigger companies who, not to say a smaller company couldn't hold a chair, really represent people who have a lot at stake in Open Containers being a successful initiative.
Marc: So, I have a question for you. We're going to change this up a little bit.
What is your favorite named project in all the ecosystem? Just from like, you know, like what, like, Krustlet, you mentioned that earlier. I think that's a hilarious one.
Give us another one or two names just to look at, that are obviously exciting projects as well, but let's just go with the name side here.
Josh: Yeah. I do love projects that, like, bring the language into context.
And, like, Rust is really funny to me because it has all of the crab stuff, and also rust, you know, all of the chemical properties of rust, and things like that.
Marc: Oxidation is, is a very big factor in the naming convention, that's for sure.
Josh: Right. Exactly, exactly.
Marc: No, hey, look, we all know the hardest problems in computer science: naming things and cache invalidation.
So, okay. Let's take a minute here. Tell us a little bit about Blood Orange, what you're doing, and just briefly, how OCI has kind of played a role in that project that you're working on.
Josh: Yeah, so, Blood Orange is really just a small consulting shop.
I started doing this in early 2019. I had been working at a company called Codefresh, a hosted CI/CD platform, and I kind of wanted to just do my own thing.
So, I just started doing what I knew best, which is kind of DevOps, and cloud, and delivery, slapped the name on it, Blood Orange, and I help companies, typically smaller to medium-sized companies, kind of get their bearings with CI and get up to speed on Kubernetes.
As I had been involved with Kubernetes, you know, in the two years prior to starting this.
And then, you know, I also do some stuff with open source, and Linux foundation, and really just a mix of things.
You know, some of the open source stuff that I do is kind of just for fun and other things are a little more official, you know.
It's great being able to be involved in open source in this kind of really large ecosystem as a, more like, individual contributor.
I mean, there's pros and cons.
I feel like I'm not tied to picking certain tools, or certain tech, because of where I work.
I can kind of make the call.
Marc: Right. So, you have the lay of the land to decide what you want to work on? Okay.
So, you have had a lot of visibility into various companies.
What do you think the craziest use of Kubernetes, or distribution, is that you've seen?
What's the, no specifics of course for privacy, but high level, what's the craziest thing you've seen?
Josh: So, I did a longer-term project with a company, and they gather a lot of information from the internet, like, using crawlers.
And so, they have a lot of Python code built up that parses webpages and tries to pull out information about them and so forth.
And one of the issues they had is, like, when they were running on just normal EC2s, or even Kubernetes, they were using the same public outbound IP addresses, and they were getting blocked by these places they were trying to scrape web content from.
And so, at the same time that this problem was coming about, we came up with a solution to use Amazon Fargate.
Because Amazon Fargate nodes kind of give you a fresh IP per pod.
And, like, right around then, Amazon released support for Fargate to be used as Kubernetes nodes, which is really crazy, because basically, you have a Kubernetes cluster that has no nodes.
It's just, you're paying for the control plane through Amazon.
And then, we would do a Helm install of this web crawler, and it would spawn thousands and thousands of Fargate instances.
Like, they were Kubernetes pods, but they were on top of individual Fargate instances that had their own IPs and could go off and do their own thing.
That's one thing I can think of that's been a really nonconventional use of Kubernetes that I've been involved in.
Marc: That's cool. Hopefully their AWS bill wasn't a surprise after all of that.
Josh: Yeah. So, that's someone else's problem.
Marc: Yeah, exactly.
I'm also happy, you know, you didn't mention, when we first started talking a while ago, we were talking about the OCI distribution spec, and I was like, "So, Josh, I have this crazy idea. I have some pods running in a cluster with a not very volatile database. Can I, like, kind of just start pushing my SQLite database to the registry?"
And I was, like, kind of shying away after asking that.
Not sure if you'd just laugh at the idea, but you were like, "It would work."
Josh: Yeah. So, unfortunately the appetite for this sort of use, or abuse, of distribution doesn't necessarily translate into the consulting work as much as I had hoped. I always like to kind of combine the things I'm working on in open source with the consulting.
I did have one project where I was able to get them on Helm Charts in the registry, but I think they actually rolled it back, because it was still experimental. So, I would love to see more stuff doing interesting things with registries.
Like, the problem with a lot of the cloud native stuff is — I don't want to say cutting edge.
I never want to refer to things I do as cutting edge.
But these are definitely newish things that are still being vetted.
And as much as I would hope everyone would jump on and be really excited about it, there's definitely that adoption curve.
You know, we're probably waiting a few years before people are like, "Oh yeah, I could just put this in ECR, for sure."
Marc: Well, as these registries all start to support the protocol, though, it would work, right?
And, you know, I think as developers and as engineers, we might say, "Oh, I don't want to like abuse that."
I think you used the word abuse a few minutes ago or like, "Hey, that's a hack built on top of it."
But I think hearing from you, the maintainer of this spec, that, hey, you're welcoming it.
You're looking for those use cases. Like, it's not a hack. Like, this is the future.
This is the direction that we want to take the project. That carries a lot of weight.
Josh: Yeah. I certainly believe in it. I really do.
It's hard to get registries to support and promote it before the tools — the client side — support it, and also vice versa.
Like, convincing a tool to do the registry stuff when the registries don't support it.
So, it's this back and forth, and we have to kind of build the momentum around it.
Which reminds me of something I am working on that is super related.
So, back in March, when we first talked about doing this — the ORAS project.
So, the ORAS project is basically a code library.
Like a Go-based library that lets you build this sort of push/pull into your tool.
And we're actually currently working on a Rust version of that same library.
And so, I hope that through tooling and libraries that help you get there — you know, I want to do maybe 75% of the work for you.
And then, you just kind of plug this in and all of a sudden your thing is in a registry.
So, that's another really interesting project that's going on.
I believe it's in the process of attempting a CNCF sandbox submission.
Marc: Awesome. That's great.
We'll link to the ORAS repo and the project in the show notes here, but that's super cool.
I think the idea is that registries conforming to this spec become ubiquitous, right?
Like, it doesn't really matter where I'm running my infrastructure.
I can click a button and either self-host or get a hosted, managed registry.
These libraries drop in, and I don't have to think about storage for any kind of artifact anymore.
Like, I don't have to worry about that.
I now have this vendor-agnostic storage layer that I can carry with me and run in one cloud provider, another cloud provider, on prem, or wherever that is.
Josh: Totally. And one less service to have to worry about.
I felt the excitement around this since the Helm experiments, you know.
I think this was all the way back at the end of 2018, and now we're here, three years later, and people are still like, "I don't know, is that something we should do?"
And I don't know the best answer for how to, you know, take this a step further, but I'm pretty confident.
Like, Helm's a pretty well used project.
I think once Helm releases support, there are a lot of projects in the ecosystem that use Helm as a component, like Harbor, which is the CNCF registry.
They already support this experiment.
I believe Flux, the Weaveworks tool, already supports it as an experiment. I think the desire is there.
And I hope that other projects besides Helm sort of see the potential in this and, you know, people can kind of work together.
Just like when OCI started, we can work together on containers. We can work together on distribution too.
Marc: One quick question. If I wanted to contribute to OCI, or any of the various things we just went through, what would be the way to get involved?
Josh: So, there is a call that's been pretty regular, every Wednesday at 5:00 PM Eastern.
So, that's the absolute best way. There's an Open Containers Slack, which is also bridged into Matrix and IRC. Or simply open an issue on one of the issue queues.
There's a mailing list on Google Groups.
And considering there is an in-person KubeCon this fall, there'll definitely be kind of a meetup for OCI stuff there.
Or just pinging one of the maintainers and just asking, "Hey, what is this thing?"
Like, I've noticed that, unless someone just can't get around to it, people love to kind of share their opinions on this stuff.
So, you know, don't feel hesitant to reach out to people and just talk about stuff.
Marc: Lots of strong opinions, but you're welcoming more.
Josh: Yes. Yes.
Marc: So, there is a CNCF project called Artifact Hub today.
Is that built around some of the same distribution, OCI specs?
Josh: So, Artifact Hub is more of an aggregator of different CNCF-shared things.
And whether it's a Helm Chart, or operators, or Open Policy Agent bundles — it's not technically tied to distribution, but it does support it.
For example, if a Helm Chart is in a registry, it will support that.
So, if you start hosting Helm Charts in a registry, you could get them added into Artifact Hub, last I checked.
But Artifact Hub is more about meeting the users in the projects where they're at with their distribution and allowing the community to find that off the shelf thing that they want to use.
Marc: Great. That makes sense.
I was just poking around Blood Orange and your GitHub repos.
And it looks like you have other little projects that might be, I don't know, anything worth sharing there? Like, I'm looking right now at one called Bundle Bar.
It looks like, you know, it's kind of a way to store any random artifact, kind of in line with everything we've been talking about. What's the state of that?
Josh: Yeah. So, Bundle Bar is actually like a hosted service that you can use.
And I welcome all sorts of feedback, and people to sign up, and try it out.
But it's a hosted registry that has lifted all restrictions on artifact types altogether.
So, you can publish anything, as long as it's under five megabytes, into this registry and use it as a place to store things publicly.
You know, there's a paid option if you want to do things privately, but it's a very early stage project.
I definitely welcome people to try it out.
And it's really an early-stage experiment around: what does it mean for a registry to conform to distribution?
And what does it mean to store things that aren't containers in a registry? And, you know, to kind of drive that point home, it's explicitly limiting things to be under five megabytes, to push people to think about what other types of things you might put in a registry, from Helm Charts, to WebAssembly modules, to configurations, YAML files, things like that.
Marc: Yeah. That's interesting.
It's like, effectively, you could get the same type of behavior by bringing in Docker distribution yourself, but, you know, maybe it's the nuances of a purpose-built, OCI-distribution-conformant registry that you're hosting to do a certain task — it's just easier. It's always there.
Josh: Totally. Yeah. And the neat thing is, hopefully, in the future, anything you could get into Bundle Bar — or the tooling that works with Bundle Bar, whether it's ORAS, or Docker, or, you know, your own custom instrumented code —
You could do with any of these cloud registries.
And yeah, I think in the future, once we find a more specific niche that we want to fall into, Bundle Bar will probably niche down and do something like that.
But right now it's very general purpose, you know, seeking the right use case to kind of expand into, hopefully one day, a more full-blown product.
Marc: Good. Josh, I'm super excited about the future of the OCI spec — the distribution side in particular, which we spent a lot of time talking about.
I think there are a lot of interesting opportunities waiting to be discovered to make this widely used for everything.
Josh: Totally. Yeah.
Thank you so much for having me on. This was fun. Thanks.
Marc: Thanks, Josh.