In episode 21 of The Kubelist Podcast, Marc Campbell and Benjie De Groot are joined by Taylor Thomas and Matt Butcher. Together they explore the WebAssembly ecosystem, the CNCF Sandbox project Krustlet, and how they're pushing the limits of the container runtime.
About the Guests
Taylor Thomas is a Senior Software Engineer working on Krustlet, Bindle, Wasm, and other open source tooling at Microsoft. He is a regular speaker at various open source conferences and meetups, including various KubeCons and local meetup groups.
Matt Butcher is a Principal Software Developer at Microsoft, where he leads the team of open source developers that manage Helm, Krustlet, CNAB, Brigade, Porter, and several other projects. Matt has a Ph.D. in philosophy, and is the author of nine technical books. He’s also the co-author, with Karen Chu, of The Illustrated Children’s Guide to Kubernetes book series.
Marc Campbell: Hey everyone, welcome back to another episode of the Kubelist Podcast.
I'm excited about today's episode, and Benjie's here with me too. Hey, Benjie, what's new with you?
Benjie De Groot: Hey Marc, gearing up for KubeCon.
There's a lot of talks trying to figure out which ones to check out.
I'm pretty excited to get back into the community.
And just wrapping up the summer, the winter's coming.
For those of us on the East Coast, enjoying the last days of warmth.
How about you Marc, what's going on with you?
Marc: Pretty much the same, getting ready for KubeCon and this episode, the conversation we're about to have.
So Benjie and I are here with a couple of really great software engineers who have written some code you probably use regularly.
We're here today with Matt Butcher and Taylor Thomas. Welcome Matt and Taylor.
Taylor Thomas: Hi everyone, thanks.
Matt Butcher: Thanks.
Marc: So let's get started with a little bit of background here.
Matt, today, we'll start with you.
You're a principal software engineer at Microsoft. Will you share how you got involved in the cloud-native world and what you're doing these days?
Matt: Oh yeah, sure. So about five years ago, I joined Microsoft via an acquisition of Deis.
And at Deis, we had been building on container technologies for years before then.
Many of us contributed to Docker and very, very early Kubernetes.
And in the course of that started a project called Helm that you might have heard of and then several other open source projects including Brigade and Draft that were all very Kubernetes-centric.
And so for the last five years, I've been leading a team at Microsoft called Deis Labs.
And we've been doing a lot of experimentation around the container ecosystem, which then became experimentation around a broader cloud-native ecosystem.
And recently, in the last two years, we've really gotten intrigued by what I think we'll talk about a little bit today, this...
I wouldn't say it's a new technology, it's been around a long time.
But a technology that we're thinking we can newly apply in the cloud-native ecosystem, and that's WebAssembly.
Taylor for quite a while has been on my team.
Taylor of course is also a Helm engineer.
So this is a good time to hand it over to you Taylor.
Taylor: Actually, Matt and I have worked together for a while, not necessarily at the same company, but we worked together for quite a long time.
I came into this cloud-native space in a similar way.
I started doing stuff with Docker around 0.5, 0.6 timeframe-ish.
For those of you who have been here for a while, you know that makes me very old in container years, as we like to joke.
So I came into that and then started experimenting around with different schedulers and stuff, and especially got into Kubernetes.
I also got into Kubernetes fairly early, and I did a lot of platform building with Kubernetes from the very beginning, starting at Intel, which is where I worked before, as well as Nike.
And then I did a lot of stuff at Microsoft and I was also one of the Helm core maintainers.
I was informed, and Matt can correct me if I'm wrong here, but apparently I was the first non-Deis or Google maintainer of Helm when I jumped into that.
So I did that for quite a while. I stepped down, I'm just now an emeritus maintainer there.
But I've been doing a lot of the research work with Matt and the rest of the Deis Labs team around WebAssembly.
In fact, I recently jumped over to Cosmonic which is a WebAssembly related startup.
So been very familiar with this cloud-native space for a while, and that's how I got here.
Matt: And Marc, I think we met you first at the Helm Summit in Amsterdam, if I'm not mistaken.
What was that, a couple years ago now?
Marc: Yeah, that was a few years ago, that was a great event.
I think it really speaks to the early success of Helm too, having an entire Helm Summit.
A lot of the CNCF projects really focused on events around KubeCon.
But Helm Summit was a great couple-day-long event in Amsterdam: good audience, good attendees, a lot of content. It was great.
Matt: Yeah. Thanks.
Marc: Cool. So I think that there are some interesting stuff here to talk about today.
And Taylor, you mentioned platform building, and I think that's a theme we're going to end up chatting a lot about today.
But before we jump in, first congrats on the success of Helm to both of you.
It's been an amazing project. It's really helped drive Kubernetes adoption and growth for sure.
But although you both worked quite a bit on Helm, that's really not what we're here to talk about today.
I'd love to actually schedule that and get another episode and really dive into Helm, the origins.
There's probably some great stories there. Today, we're going to talk about Krustlet.
So I'd love to start off and hear, at a high level, what Krustlet is for somebody who might not be familiar with the project.
Taylor: Yeah. So Krustlet is, it's a fun name.
I can't even remember how we eventually landed on it, but it stands for Kubernetes Rust Kubelet.
And the only reason that Rust has to do with this is around WebAssembly.
So it is a Kubelet implementation. It's pretending to be a Kubelet.
In fact, a lot of its design terminology is borrowed heavily from the Virtual Kubelet project, which is something that one of our sister teams at Microsoft had worked on and donated to the CNCF.
And so it pretends to be a real Kubelet to the Kubernetes API, but on the backend, it's actually built to run WebAssembly modules.
WebAssembly modules are the name for a binary that's been compiled to WebAssembly and can be used in a WebAssembly environment.
So WebAssembly modules are run instead of containers.
So it still uses the same Kubernetes API.
You still create a pod or a deployment or whatever.
And instead of using a container to deploy what it needs to, it uses WebAssembly modules to put it all together.
And so it's basically a way for people who are very familiar with Kubernetes, which in the cloud-native space is the lingua franca right now of working with containers and clusters.
And so it gives the ability for people to try WebAssembly in an environment that they are more familiar with, which is Kubernetes.
And so that's where the project sits in this cloud-native space.
Matt: Cool. And for those that might not be entirely familiar with Kubernetes technology or really the under the hood view of Kubernetes, that Kubelet is the thing that is responsible for saying... when you start up a new pod, the Kubelet is the thing that says, "Oh, I've got some space over here. I can run that pod on the node that I control."
And then the Kubernetes scheduler will delegate it to that Kubelet and that Kubelet then becomes responsible for running the pod for the entirety of its lifecycle, and then letting everybody else know in the cluster, what the deal is with that.
So really, we were trying to plug in a natural layer and say, instead of scheduling a pod that has a bunch of containers attached to it, we just want to schedule a pod that has a bunch of WebAssembly binaries.
Marc: And obviously the benefit there is that I as an organization have made a big investment in Kubernetes tooling, and whatever that is, that's how I'm going to manage my applications.
Then look, there's this amazing interface here.
The Kubelet does execution, but the Kubelet just gets direction from the Kubernetes API server and you're able to say, "With Krustlet, that's fine. We can piggyback on top of that."
And it turns out, we're actually not going to run a Docker container or containerd or anything like this.
We're going to do this completely different thing.
Matt: Yeah, exactly. We wanted to push the limits of what we thought a Kubelet could do.
And in particular, test out the idea that it's not just a matter of being able to swap in and out different container runtimes, but that we could actually swap out different kinds of runnable objects.
And at first we were not sure if this would work at all, because a lot of the terminology is baked into Kubernetes.
It calls them containers, it calls them images.
And of course we're using Wasm binaries and executing in a WebAssembly runtime instead of a container runtime.
So it was a thrilling project to try out. We had several false starts that of course we can talk about.
Where we tried different ways of doing this and they did not work out.
And it was certainly, particularly for Taylor, an adventure deeply into the internals of the lowest levels of Kubernetes.
And at the end, it was simultaneously thrilling and, maybe to use the term lightly, a little bit horrifying what we had done, because we had really stretched the boundaries of what was imagined to be the Kubernetes runtime.
Marc: That's cool.
Taylor, I imagine when you were doing that, there's a lot of projects you'll start writing as a software developer where you say, "Look, I don't know how I'm going to get from here to there, but I know this is possible."
Were you confident that you could actually do this when you started it or was it like, no, this is a total experiment, this may not even be possible?
Taylor: Oh, this was 100% an experiment.
If you go back and look at the Git history of Krustlet, not that long ago, we still had a big warning sign on the README saying, "This is entirely experimental. Please do not build anything in production. We are not going to support your production use case."
And that's actually been removed, because at this point we're getting very close to 1.0.
We're in our alpha releases of 1.0 right now.
But no, when we started with this, we had zero clue if this was going to work.
And honestly, there's still some things where I look and I go, "I'm not sure if we'll ever get whatever the feature is working perfectly here within a Wasm context."
But like Matt said, the big thing with this is that people can plug in any backend, and that's why it's very similar to Virtual Kubelet in that aspect.
They're called providers in Krustlet terminology. In fact, there are community members who've been heavily involved writing one for systemd, and it's insane to see that. Krustlet is both a binary that you can run to execute WebAssembly modules, and we also release it as a Rust library, if you want to, for some reason, assemble your own Kubelet in Rust.
So there's still some features I'm not sure we'll get.
And during the whole time we've been writing it, there's been some times where I'm like, "I am not sure if this is even possible."
Or the kinds of days where you just want to give up on computers, throw your computer out the window, and go work on a goat farm in Montana.
And so those happened quite a bit with Krustlet, if I'm being completely honest.
But like Matt said, I have learned unholy and arcane arts from doing things with Kubernetes that people should never do and never look at.
But really it has been a breeze too, because we've learned a lot about Rust, which might be something we can talk about too, and why we chose Rust and what we've used it for in the cloud-native space.
Kind of rambling there, Marc, but there's just a lot there to unpack.
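The "providers" Taylor mentions can be pictured with a small sketch: a Kubelet-shaped core talks to the Kubernetes API, and a pluggable backend decides how a pod's workload actually runs. The trait and method names below are invented for illustration; Krustlet's real Provider API lives in its Rust library and is more involved.

```rust
// Hypothetical sketch of the "provider" idea: the Kubelet-facing core
// handles Kubernetes API traffic, and a pluggable backend decides how
// a pod's workload actually runs (Wasm, systemd, or anything else).
// These names are invented; see the Krustlet repository for the real API.
trait Provider {
    fn name(&self) -> &'static str;
    // Called when the scheduler assigns a pod to this node.
    fn run_pod(&self, pod_name: &str) -> Result<String, String>;
}

struct WasmProvider;

impl Provider for WasmProvider {
    fn name(&self) -> &'static str { "wasm" }
    fn run_pod(&self, pod_name: &str) -> Result<String, String> {
        // A real provider would fetch the module and hand it to a
        // Wasm runtime; here we just report what would happen.
        Ok(format!("running pod '{}' as a WebAssembly module", pod_name))
    }
}

// The "kubelet" core only knows about the trait, so any backend plugs in.
fn schedule(provider: &dyn Provider, pod_name: &str) -> String {
    match provider.run_pod(pod_name) {
        Ok(msg) => format!("[{}] {}", provider.name(), msg),
        Err(e) => format!("[{}] failed: {}", provider.name(), e),
    }
}

fn main() {
    println!("{}", schedule(&WasmProvider, "hello-wasm"));
}
```

The systemd provider Taylor mentions would be a second `impl Provider` against the same core, which is what makes the design pluggable.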
Marc: There is a lot there.
I think I have a ton of questions now.
I think Benjie, did you have something?
Benjie: Yeah. I just want to back up a second.
For those of us that aren't familiar with Wasm or WebAssembly, can you just tell us what this does, what this gets us, the different areas where we'd want to be using this?
Because I think that we've had some other people on, and we've dove into the edge a bit, but it's really fascinating how we're all thinking about this, and I'd love to hear how you guys think about WebAssembly and what it is.
Matt: Yeah. And I'll describe this in the form of an origin story.
As Deis Labs, we've been trying to push the envelope of what could be done in the cloud-native ecosystem here and there.
And some of those things that we were trying to do, we just kept hitting on little limitations here and there of run times.
We wanted to go down into small devices and be able to run Kubernetes nodes on very constrained devices, way smaller than a Raspberry Pi, devices that might live on the edge, might live in somebody's home as an IoT device or in a factory as an IoT device.
And various experiments like that that we had just been eyeballing and trying to figure out if we could tell interesting stories there.
And we would start to run up against limitations.
Like a container runtime really does take a lot of memory and a lot of processor power.
Also there were limitations about cross architecture stuff.
You have to know a lot about what a node looks like before you can send a container to execute on it, and you need to know the architecture, you need to know the operating system.
And that meant often rebuilding the same kind of thing.
So there were a whole bunch of... performance is another one.
We wanted ultra fast startup times, and there's really only so much you can do to speed up a container startup time.
So we started, just at first, gently bumping up against the walls that are enforced by a container runtime.
And then we started saying, "Okay, these are getting increasingly frustrating as problems to solve."
So we came at it really with the problem first, saying we know what we want.
We want cross platform, ultra fast startup times, smallest binaries we can get.
And we started looking around to see how we could accommodate that.
And we were not having much luck, to be honest.
And we got together for an offsite meeting back in 2019 in Victoria, Canada.
And in the course of throwing around some ideas, it turned out a couple of the people on the team, myself and a couple others had all been independently looking at WebAssembly as an interesting thing.
Now, WebAssembly was a browser technology and it was built with a very specific set of design constraints in mind.
First, of course, it should run in a browser, and if it's going to run in a browser, then it needs to be cross-platform.
And startup time in a browser is a big deal.
Nobody wants to wait and watch a blank screen while something's loading and getting ready to execute.
So performance and size were a big deal.
And more than that, the security model of a browser is really interesting.
Well, if you're talking WebAssembly where it could be a binary file that was originally written in C++ or Rust or something like that and compiled to WebAssembly, you better have a very good security story in the browser because you don't want to open a nice big hole for attacks.
Now that particular set of profiles actually works really well for the cloud.
And this is what got us really excited when we had this meeting in Vancouver.
I don't know. It was way after we were done with the meeting.
We're sitting in a pub somewhere unwinding after a long day of whiteboarding how we were going to do Helm features and stuff like that.
And we're talking about WebAssembly and everybody's getting increasingly excited because we're starting to see the potential of executing WebAssembly on the cloud side, but we didn't know what we were going to build.
So we had this good idea here, and then we're going, but what's step one?
So to answer your question there then, the thing that attracted us to WebAssembly was the fact that the browser security performance model across platform model very much matched the kind of cloud model that we had been in search of.
And then we ended up coming back from Vancouver and saying, "What should we do? What should we do?"
And that's where we alighted on this idea; well, why not see if we could just plug this runtime directly into Kubernetes?
Marc: Yeah. That is actually super cool.
And I'm really just fascinated by the origin story here, because it's not a natural predicted progression of oh, here's the next thing that Kubernetes should have.
It's almost like there's a gap that you manage to jump and say, "Oh, here's this problem."
And Kubernetes is a pragmatic, widely adopted, growing ecosystem.
How can we leverage this to our benefit?
And I think Matt, you described that as, we took an approach here from the problem side.
We started from the problem. First of all, that's good.
That's a great way to solve things. Solutions without problems are hard.
And then they were like, "We can run this on the server side."
And it turns out that worked really well.
In fact, I think that is part of the inspiration behind doing this: seeing technologies that gain a lot of traction and a huge developer community on the web side, and then saying, "Well, why not adapt this for the server side?"
Again, at the end of the day, we want to make a developer's life easier, not just an operator's life easier.
And if that means writing using the same set of tools they were comfortable using for their front end apps, then yeah, by all means sign us up.
Taylor: Yeah. And I would note here too that the developer experience has the potential to be better than what we even have with Docker.
I mean, I think Docker was so much better than what many of us had been used to, like packaging something up so it could be deployed to a virtual machine and all those kinds of aspects of deploying an application.
But the thing is, WebAssembly really sets up and has this wonderful set of features.
And people always ask, "Oh, is it Wasm versus Docker?"
And I don't necessarily think so. I think they sit in their own separate areas.
But compared to Docker, if everything continues on the innovation path that it is, you're looking at first off, that you can build on any operating system and then run it on any operating system like Matt mentioned.
You can use your existing tool chain also like he mentioned, the size is much smaller.
Now, generally speaking, we haven't done huge amounts of extensive testing on this.
But if you write two comparable things, one compiled to WebAssembly and one built as a container, we've seen the WebAssembly module is about 10% of the size.
So you're talking way smaller, and the runtimes are way lighter.
And then one thing that people often miss when they're learning about WebAssembly is that WebAssembly allows you to choose the runtime characteristics, how it's processed and compiled at run time, not beforehand.
So you have multiple runtimes that can run it interpreted.
So if you're on a really constrained device, you can read in an instruction at a time and interpret it.
You can do JIT, or if you're doing something beefier, you can do AOT.
But the thing is, that decision's made at runtime rather than having to do all that optimization before.
So you glue all those things together and it creates a great developer and operator experience for someone who's creating and consuming WebAssembly modules.
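To make the "read in an instruction at a time and interpret it" end of that spectrum concrete, here is a toy stack-machine interpreter in Rust. It is purely illustrative: real Wasm engines implement the actual WebAssembly instruction set and also offer JIT and AOT backends, while this sketch invents a three-instruction language.

```rust
// Toy stack-machine interpreter: a sketch of the interpreted execution
// mode described above. Real Wasm runtimes are far more involved.
#[derive(Clone, Copy)]
enum Op {
    Push(i64), // push a constant onto the stack
    Add,       // pop two values, push their sum
    Mul,       // pop two values, push their product
}

fn interpret(program: &[Op]) -> Option<i64> {
    let mut stack: Vec<i64> = Vec::new();
    for op in program {
        match *op {
            Op::Push(v) => stack.push(v),
            Op::Add => {
                let (b, a) = (stack.pop()?, stack.pop()?);
                stack.push(a + b);
            }
            Op::Mul => {
                let (b, a) = (stack.pop()?, stack.pop()?);
                stack.push(a * b);
            }
        }
    }
    stack.pop() // final value left on the stack, if any
}

fn main() {
    // Computes (2 + 3) * 4 one instruction at a time.
    let program = [Op::Push(2), Op::Push(3), Op::Add, Op::Push(4), Op::Mul];
    println!("{:?}", interpret(&program)); // prints Some(20)
}
```

A JIT or AOT engine would instead translate such instructions to native machine code, which is exactly the decision Taylor notes can be deferred to run time.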
Marc: Yeah. That's cool. It's really like the early days of Docker or one of the big things Docker gave us that we didn't really have before was application portability.
But it turns out that there's actually a much, much deeper layer of portability that you can get with WebAssembly, the way you're describing it to me.
Taylor: Yeah, that's correct.
This is where I'm going to be blunt and I am generally a blunt person.
But Docker, we like to say it's cross-platform, but truly it isn't.
You build a container for Windows, it can only run on Windows.
If you're on a Mac, you're using a shim, that's using a VM.
Whereas with a WebAssembly module, right now you can't compile every single language to cloud-compatible WebAssembly, which is called WASI.
We can talk about what that means in a little bit.
But as more and more languages and targets are added, people can just compile once and then run it literally on any operating system they want, which is truly portable, rather than what we had with Docker.
And you can even see, most of the examples you run, if you try out Krustlet are actually things that I built on my MacBook and will run on any operating system.
That's something I always like to point out to people that, I just compile this on my machine and there's none of this, oh, does this work on my machine because of how WebAssembly works.
Marc: Yeah. That's cool. So you talked about the developer and the operator experience, both being improved with WebAssembly.
Let's start with the developer experience and dive into there a little bit.
For somebody who's never used it before, what languages can I use to write and then target WebAssembly?
Can I take, oh, I have an application, it's written in whatever language, just change the makefile target, and then spit out a WebAssembly module, and then it's just going to run?
What do I have to do to start adopting WebAssembly?
Matt: So there's a handful of languages today that have been able to compile to or execute as WebAssembly for years and years.
And then there's this growing list of languages that are being ported.
And then most exciting, what we're starting to see now is some WebAssembly-first languages that are sprouting up with the idea that WebAssembly is the target, and consequently can tailor their runtimes to it.
So in that first category: C and C++, to some extent Rust, and AssemblyScript, which is a subset of TypeScript.
Those are all languages that have been around for a while and have been able to compile to WebAssembly for a while.
And then we're starting to see momentum gathering behind other languages.
Swift now has a WebAssembly compiler. .NET and Python and Kotlin and a bunch of these big top-tier languages are all in the process of tooling to compile to WebAssembly.
Because again, the very community that's building the browsers is starting to look at this and say, oh, this might be, as you put it before, the Node.js moment, where a technology that was invented for the browser might have some real implications for cloud runtimes, server runtimes.
That one's my favorite.
And then again, in that third category of new languages that are coming out, my personal favorite right now is a language called Grain.
And it is a functional programming language with a bunch of neat little features.
It has a lot of the features I love about Rust, but without necessarily having to master the borrow checker, and a cool little standard library.
And I've been playing around a lot with that one recently, just because it's so pleasant to write and so much fun.
But I know there are other languages like that, that are coming along that are going to be WebAssembly first languages.
Marc: Cool. Yeah. I've written a lot of Go, and I've experimented with Rust and the borrow checker.
Those concepts are phenomenal. I think that there's a learning curve here.
I actually am curious, bringing it back from WebAssembly for a second here into the Krustlet project that you wrote.
Taylor, you mentioned that you wrote it in Rust. Did you have to write it in Rust?
Did you choose to write it in Rust? Why not Go?
What drove that and how did Rust help with it?
Taylor: So we've actually written a whole blog post about this, but it's really interesting to just talk about and just have a good discussion about it because there's two major reasons.
Well, there's several, but the top two are; number one, you have very good WebAssembly support in Rust.
Rust is the most... not to knock on C or C++, which, let's be honest, a lot of us do anyway in cloud-native circles.
But it's the most fully featured language, because it's got the power of something you're expecting from Python or Go or whatever, but it also has first-class WebAssembly support.
And so you can just build straight into a WebAssembly target from Rust.
And so that was one of those reasons that made a good choice.
You can't do that from Go. Even right now, TinyGo has some support, but you can't entirely do it right now.
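As a minimal sketch of what "build straight into a WebAssembly target" looks like: the program below is ordinary Rust using only the standard library, and the same source compiles natively with `cargo build` or to a WASI module with `cargo build --target wasm32-wasi` (after `rustup target add wasm32-wasi`). Those commands are the standard Rust toolchain workflow at the time of this episode; nothing here is Krustlet-specific.

```rust
// The same source builds natively or as a WASI module; under WASI,
// std I/O and environment access go through the system interface
// provided by the host runtime.
use std::env;

fn greeting(name: &str) -> String {
    format!("Hello from Wasm-capable Rust, {}!", name)
}

fn main() {
    // Works identically on both targets; the WASI host supplies args.
    let name = env::args().nth(1).unwrap_or_else(|| "world".to_string());
    println!("{}", greeting(&name));
}
```

The resulting `.wasm` file is the kind of module Krustlet schedules in place of a container image.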
The second reason is that Rust is actually a really, really good fit for cloud-native applications.
To understand that, you have to understand some of the features of Rust.
So first off, there's a security benefit. Because of the ownership system, that borrow checker that we've referred to, you're guaranteed data safety.
In fact, there's times when the borrow checker has actually saved us from bugs that were the same class of bugs that we would find in Helm that the Go race checker wouldn't even find.
And you'd be sitting there really mad and just like, what are you telling me, borrow checker?
This should be easy. And finally look through for a couple hours and all of a sudden you'd be like, oh.
You have this realization that the compiler just saved you from some gnarly race condition because of its ownership model.
And so you get that extra security for free. Rust and how it works, basically eliminates whole classes of bugs.
That doesn't mean you still can't use an escape hatch and cause those bugs or whatever.
But if you are programming within what's called safe Rust, you are going to eliminate whole classes of bugs that easily exist inside of other programming languages.
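A small, generic illustration of the guarantee Taylor describes (this is not code from Helm or Krustlet): in safe Rust, sharing mutable state across threads without synchronization simply doesn't compile, so you are pushed toward types like `Arc<Mutex<_>>` and the data race is ruled out by construction.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Safe Rust won't let multiple threads mutate a plain `i64` at once;
// the ownership rules force shared state into a thread-safe wrapper,
// eliminating that class of race condition at compile time.
fn parallel_sum(inputs: Vec<i64>) -> i64 {
    let total = Arc::new(Mutex::new(0i64));
    let mut handles = Vec::new();
    for chunk in inputs.chunks(2).map(|c| c.to_vec()) {
        let total = Arc::clone(&total);
        handles.push(thread::spawn(move || {
            let partial: i64 = chunk.iter().sum();
            // The lock is required by the type system, not by discipline.
            *total.lock().unwrap() += partial;
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    let result = *total.lock().unwrap();
    result
}

fn main() {
    println!("{}", parallel_sum(vec![1, 2, 3, 4, 5])); // prints 15
}
```

Removing the `Mutex` and mutating a shared integer directly would be a compile error, which is the "borrow checker saved us" moment described above.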
The other thing that it has is a very powerful trait system.
And traits are what... since you're probably, if you're listening to this podcast, familiar with Go at a very minimum and have heard the idea of interfaces in Go, they're similar to interfaces in Go, but they're a lot more powerful in how they can be composed and put together.
And so this powerful trait system allows for very useful generics when you're writing Kubernetes things, which is very, very helpful.
So instead of having this big auto generated client with its 20 different methods, and then you have to generate that for each one, you have something that works for all Kubernetes objects.
And this is combined together to make some really interesting things.
So you can use macros in Rust, which allow you to generate code to, as it's called, derive an implementation for a CRD.
So instead of having to create your thing or pull in a certain library or do whatever and then generate the code and then write it, you can just literally add a single line that says, "Derive this to be a custom resource definition."
And then it works with the built-in client that comes with the Rust Kubernetes library.
And so these generics and the way the system works to keep safety involved is really, really powerful for the cloud-native ecosystem.
And especially with Kubernetes projects, it adds a certain layer of protection there that didn't exist before and ability to write code that is a little bit... well, I'll say a lot of bit, less verbose than other code that you're used to writing in let's say Go.
It was something we were interested in as a team.
It was something newer and up and coming.
It had that connection to WebAssembly, but we also wanted to prove that you could do something real and useful in the cloud-native ecosystem with it.
And I would say that we were successful in that endeavor because it has been very useful and powerful and much easier to write things in.
When we had to duplicate features from the main Kubelet, those were thousands of lines less code and still as expressive and more safe.
So that's where the Rust fits into this and why we chose it over Go.
Matt: And to just add on, maybe at a slightly higher level, the thing that won me over about Rust when writing Kubernetes code was how it felt after having written a lot of Go code, where everything is very explicit.
And it's nice, because you can read it. It's very explicit, but the code gets bigger and bigger and bigger.
And the lines of code you have to read as you're working on this, get bigger and bigger.
Rust felt like magic when I did things like, oh, I can just derive this whole thing or, oh, the generics handle all of this for me and I don't have to worry about what type of resource I'm dealing with here.
Those things just felt great. And I felt so productive so fast.
Even though Rust is rightly critiqued for being a very hard-to-learn language, you really have a steep learning curve until you hit that first plateau, mainly because the borrow checker just works so much differently than reference-counted languages.
But once you hit it, then all of a sudden you experience this huge burst of productivity because the Rust language is just smart enough to sort through a lot of this for you.
So we experimented with it and it was frustrating at first, and then we hit the plateau, and then all of a sudden we just felt so productive and our code felt so clean, and it was a great experience.
Benjie: So wait.
So do we have a Kubelist exclusive, that you're officially announcing Helm is going to be rewritten in Rust?
Is that what I just heard? I think that's what I just heard.
Taylor: Well, we joke about that quite a bit.
Benjie: It was a joke. That was a facetious comment there.
But you guys are making the case to me around Rust. That's for sure.
So you guys had mentioned a second ago, WASI.
Just give us a little nuance, just switching gears.
What is WASI versus Wasm, where does that come into this ecosystem, and how are you guys developing with it?
Matt: And then when you want to start using other features, you import standard libraries.
And WebAssembly, the definition as specified by the W3C and as implemented in all the browsers, defines how WebAssembly operates, what the format looks like, what instructions do, and things like that.
But it doesn't tell you how to interact with the system around you.
So for the WebAssembly runtime, there are no real definitions saying every WebAssembly runtime has to have this kind of thing and has to have that kind of thing, beyond the core facilities that any language runtime would need.
So in the browser context, the browser itself supplies those facilities.
But that model doesn't necessarily make sense on the server side.
We don't necessarily want a document and a window and objects like that. So WASI, which is the WebAssembly System Interface, is an effort to...
Well, originally it was an effort to define a set of POSIX like libraries.
So you could say my WebAssembly runtime needs a file system interface and needs the ability to fetch environment variables and find out what the system clock is and things like that.
And so the earliest implementations of WASI, as that particular working group got going, defined exactly those kinds of things, all of which were very powerful.
But at some point, the WASI working group realized that they could go one better than that and just define a way of describing to a WebAssembly runtime what information the host wants to exchange with the guest module.
And that sounds really abstract because it is.
But you can think of it the way gRPC or any of those serialization frameworks work.
Where a gRPC implementation gives you a protobuf specification and says this is the kind of data I'm passing back and forth, these are the kinds of functions you can call.
Largely, you can think of WASI as a specification that defines the same layer of things for a WebAssembly runtime, that says these are the functions that I as a module can export to the system.
These are the functions that I as the system can export to the module.
And you can start coming up with a common parlance of what you can execute inside your module and what kind of things need to be outside.
But I think that as much as I like that abstraction model and it's definitely the future, the salient part of this is just that WASI is providing the way to get at things like the file system, networking, and environment variables.
And it'll increase in flexibility and increase in scope and we're excited about that. But for right now it's just a system library.
Marc: But WebAssembly has some really cool and powerful isolation functionality that doesn't exist in normal Docker containers.
Does Krustlet and the implementation that you have today extend that?
So if I'm running a multi-tenant environment and I want to actually have better isolation between pods versus deploying two Docker containers.
If I can write this as WebAssembly and use Krustlet, do I actually get some benefit of isolation there?
Taylor: Yes, you do. So there's two things to keep in mind.
First off, any WebAssembly module has its own memory space, and people are doing more and more research at lower levels than I generally get to about how to really provide good isolation between things.
But the other thing is that it's a capabilities-based security model.
So you have to explicitly grant that module access to anything you want it to do.
You want it to access the file system? You have to grant it access to a specific file or directory.
So people will always find a way. That's the thing about security with computers.
But the model in general will make it so that you can't do a breakout and then get to other parts of the file system by default.
Now, I'm sure someone will find a way, like I said, but that kind of stuff exists.
And by default, it comes into Krustlet with WebAssembly.
So when you mount a volume, it's giving you specific access to that directory and nothing else or whatever volumes you have set up inside of Krustlet itself.
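To make that deny-by-default model concrete, here is a toy sketch of the capability check Taylor describes. This is an illustration of the idea only, not actual Krustlet or runtime code:

```rust
use std::path::{Path, PathBuf};

// Toy model of a WASI-style capability check: a guest path resolves
// only if it falls under a directory the host explicitly pre-opened.
// (Real implementations also guard against `..` traversal and
// symlink escapes.)
fn allowed(preopens: &[PathBuf], requested: &Path) -> bool {
    preopens.iter().any(|root| requested.starts_with(root))
}

fn main() {
    // The host granted exactly one directory.
    let preopens = vec![PathBuf::from("/data")];
    println!("{}", allowed(&preopens, Path::new("/data/logs/app.log")));
    println!("{}", allowed(&preopens, Path::new("/etc/passwd")));
}
```

Anything outside the granted set simply does not exist from the module's point of view, which is the "breakout by default is impossible" property.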
But I guess right here would be a good time to point out that WebAssembly is still very, very much the bleeding edge of things.
It doesn't have full functionality with all the features you'd expect yet.
That's something that's constantly ongoing.
So one of the ones to point out, because I'm hoping that if you've listened to this point in the podcast, you'd be like, "Oh, I want to try WebAssembly."
You're like, "Let me go write my new cat blog in WebAssembly."
Well, that is not necessarily possible with WASI right now because WASI doesn't have built-in networking support.
In fact, we wrote our own shim that would get around this issue so you could do outgoing HTTP calls.
And so this has some interesting implications right now, because if you want to use WebAssembly right now, a bunch of other projects popped up to create a bridge and to improve the developer experience around some of those things.
There's things like Atmo, which is one project that is out there.
You have Wasm cloud, which is something I work on at Cosmonic.
And Wasm cloud's another one of those things where it uses something called providers.
I know we've used the word provider a bajillion times at this point.
But it uses a provider to allow people access to key-value stores or to an HTTP server or to an HTTP client and adds that layer in and takes advantage of all those good things we talked about with WebAssembly, but bridges the gap for the currently missing pieces.
And as those pieces are filled in, those types of projects that build on top of it can then leverage it.
And those providers, instead of being written in platform native code, can then be written in WebAssembly instead.
And so these are the things that we're hoping to see in the future from WebAssembly, but it's also important to note right now, it is not a complete and finished totally working spec yet.
We still don't have some of these basic features we need.
But that's not because of lack of trying, it's because it takes time to make sure these are standardized and well done so everyone in the community can use them.
Matt: And I think I'd clarify that by saying, WebAssembly, the run time and the core is stable and has been deployed for a long time.
It's WASI that is very much under development.
But our hope is that the WASI specification as it evolves will ultimately say... instead of us having to define bespoke implementations of this for each of the host runtimes, we'll be able to say, "Here is a common specification."
And the host may implement it and grant a HTTP access, for example, or another WebAssembly module may implement it, but the specification and the interaction model will stay the same.
So I hope that puts a little bit of a box around the abstract part of what WASI is trying to do and explains that versus say the more concrete stuff, which is just any WASI run time gives the guest module access to something that feels like a file system, for example.
Benjie: I feel like I'm in the future right now with what you guys are talking about.
And it feels like we're in dotCloud land, where Docker was five years ago.
This is blowing my mind on a lot of levels here.
Going back for one second about the isolation stuff.
So I have a container, I've got the isolation, but it's a container, obviously.
And you were saying that there's not shared memory spaces.
Who enforces the not sharing of the memory space in a Krustlet context?
That I'm struggling to wrap my brain around. Who's enforcing that?
Matt: I actually really like your statement about, it feels like we're in the dotCloud era because I think that's exactly right.
We feel like we're right up there trying to explore the limits and applicability of a new technology.
And that memory isolation piece is a key part of this. It's key part of the security story.
And so the way we do it now is actually the way a lot of scripting runtimes work, where when you execute a script in say, Lua, you start up a Lua interpreter and you pass the Lua script into it.
WebAssembly functions with essentially a similar model. There's a runtime, and the runtime is responsible, A, for interpreting the bytecodes in the WebAssembly module, but also, B, for determining how much memory that WebAssembly module is allowed to use and what processor resources. And we talked a little bit about the file system, and this is actually a really interesting aspect of having the layer right here.
For a file system, the guest code might think that it's dealing with a file system and it's opening /etc/foo or it's opening my-directory/my-database or whatever.
Whereas the host runtime might be implementing this simply as an in-memory-only representation.
The guest module may never actually hit a real file system at all, or it might be hitting a network file system or a local file system, or a database where the database queries are getting translated real time to file systemy things.
But the guest module doesn't have to know any of that.
And that's one of those exciting features that I see there.
But it's all sandboxed by the WebAssembly runtime, which does all of the enforcement.
To contrast that with the model we have with Docker--
Docker's power comes from the fact that you're sharing the kernel space and you are using very carefully constructed cgroups and other low-level primitives, using the operating system facilities to mount in a directory or to expose certain environment variables or things like that.
WebAssembly is essentially abstracting away that low level stuff and appearing to the module as if the module is running in an operating system, when in fact it's running in an interpreter that is holding very close boundaries around what that thing can do and how it can execute.
Marc: That's cool. Let's dive into what I might want to use it for today.
You said I have lots of different things that I'm going to pull together right now that you've mentioned.
Krustlet's going to hit 1.0 pretty soon.
So you've gone from, please don't run this in production, to let's actually run it in production.
WASI is still relatively early and there's some missing parts but there's shims, different language support.
If somebody out there-- Taylor, I think you mentioned somebody's listening and saying, "Hey, I'm just going to go write my next thing in WebAssembly."
What guardrails would you give them to say, "If your next thing meets the following criteria, it's a good, or if it meets the following criteria, it doesn't work"?
Matt: I think I can probably take the two simpler cases and then pass it to Taylor for the more advanced cases.
In the simplest case, when you're thinking about running something that might be heavier on computation and stuff like that, but doesn't need to be a long-running HTTP service or something like that, there's a very simple way to run a WebAssembly module inside of Krustlet.
So you can think of it really as more the workload there that batch jobs excel at.
That's a good use case and that's what I would consider to be the bare bones default use case.
Because really you're dealing with, I read a file, I write some data back out in the WebAssembly runtime, whereas the host is just piping all that data back and forth to where it needs to go.
So that's an easy base case.
We wanted to extend that a little bit and allow people to write HTTP handlers, functions-as-a-service style, where you're really dealing more with the request/response model.
And we can do that today with WebAssembly and WASI as they are today.
And so we wrote this little program called WAGI which stands for WebAssembly Gateway Interface.
And it is deliberately a nod to CGI, the Common Gateway Interface that those of us who have been doing web development for a really long time know and possibly love or possibly loathe, I'm not sure which.
But it's a very simple specification for how to build a Web Request/Response Model.
And so we built a WAGI handler for Krustlet as well. So if your thing is, I'd like to write some really cool, simple functions as a servicey like things, you can also build that today.
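Since WAGI follows the CGI model, a handler is just a program that reads the request from stdin and prints the response to stdout. A hedged sketch in Rust (the header-then-blank-line framing matches classic CGI; the response text is illustrative):

```rust
use std::io::{self, Read, Write};

// Assemble a minimal CGI-style response: a header, a blank line, then
// the body. WAGI maps the module's stdout to the HTTP response the
// same way CGI mapped a script's stdout.
fn wagi_response(body: &str) -> String {
    format!("Content-Type: text/plain\n\n{body}")
}

fn main() {
    // The request body arrives on stdin, CGI-style.
    let mut request_body = String::new();
    io::stdin().read_to_string(&mut request_body).ok();

    let response = wagi_response("Hello from a WAGI-style handler");
    io::stdout().write_all(response.as_bytes()).unwrap();
}
```

Because the module never opens a socket itself, this works today even though WASI has no built-in networking: the host does the HTTP listening and the module just transforms bytes.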
And I'm going to hand it off to Taylor because the work that Taylor's done in Cosmonic and the Krustlet bridge to that is probably the most sophisticated way that you can get started with WebAssembly today.
Taylor: Yeah. So right now there is a Krustlet Wasm cloud provider.
It is a little out of date because there's just a huge release of Wasm cloud.
We need to go back and update some things.
So I would not say it's ready to do full production workloads there if you're trying to glue it into Kubernetes.
But in general with WebAssembly, your best bet is going to be trying to do one of these kind of bridge systems if you're trying to do something more complex.
And these are platforms that you can really leverage a lot out of.
So Wasm cloud, just because it's the one I know the most out of the rest of them, because I've helped with it and I'm currently working on it, but there's plenty of others too.
But what it does is it's an actor based model.
And so you're buying into the actor based model and the ecosystem of how you sign things and do stuff in that platform.
But it gives you a lot of power because the WebAssembly part is just your business logic.
And so there's a simple example that we have available out there, and we actually just finished one with the Petclinic demo from Java land.
We just finished porting an initial version of that over to WebAssembly with Wasm cloud.
But you just write your business logic connecting it to those things I mentioned before, providers.
So in a key-value example, let's say you just have a simple web service where you can hit it and it'll update how many times that page has been hit inside of a key-value store.
And so instead of you having to set up your key-value store connection or do all those things, your WebAssembly module just says, I need to be able to talk to a key-value store capability and I also need to receive an HTTP request.
And so it connects into both of those capability providers that allow that to happen.
Now, the cool thing about this is that you can hot swap these.
So the business logic doesn't really care what kind of key-value store it's talking to.
It just cares that something that satisfies the contract that it's expecting is there and available for it to use.
And so if you decide you don't want to use Redis, you could switch to another key-value store or you could switch to something in memory if you're testing.
And it's the same thing with the HTTP server.
If you want some really hyper tweaked HTTP server implementation, you can do that, swap it out and the module doesn't care.
And so because the WebAssembly part is encapsulated just to your business logic, it's very, very small.
And then those binaries, like I said, are very, very tiny.
So you focus really only on writing that business logic code and you can pull it in and run it.
And then you can do all sorts of other complicated things like have it talk to other WebAssembly modules and all sorts of stuff like that.
But once again, all of these components are hot swappable, which is one of the other benefits of having WebAssembly.
All that it cares about is that it's calling the right function that's going to return the proper data. And so this is just one example.
Like I said, there's multiple frameworks out there, but that's where you can start doing more complex stuff right now.
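The contract idea Taylor describes can be sketched in plain Rust. This is not the Wasm cloud API, just an illustration of why swapping a Redis provider for an in-memory one leaves the business logic untouched:

```rust
use std::collections::HashMap;

// The actor's business logic depends only on this contract, never on
// a concrete store, so providers can be hot swapped underneath it.
trait KeyValue {
    fn incr(&mut self, key: &str) -> u64;
}

// An in-memory provider, convenient for testing; a Redis-backed
// provider would implement the same trait and drop in unchanged.
struct MemoryStore(HashMap<String, u64>);

impl KeyValue for MemoryStore {
    fn incr(&mut self, key: &str) -> u64 {
        let count = self.0.entry(key.to_string()).or_insert(0);
        *count += 1;
        *count
    }
}

// The page-hit counter from the example: pure business logic that
// works with whatever provider satisfies the contract.
fn handle_hit(store: &mut impl KeyValue, page: &str) -> u64 {
    store.incr(page)
}

fn main() {
    let mut store = MemoryStore(HashMap::new());
    handle_hit(&mut store, "/home");
    println!("hits: {}", handle_hit(&mut store, "/home"));
}
```

In the real platform the module and the provider live on opposite sides of the WebAssembly boundary, but the shape of the dependency is the same: a function contract, not a client library.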
And the nice thing about something like Wasm cloud is that as the project, we're trying to follow the WebAssembly System Interface.
So WASI just all the way through.
So like I had mentioned before, once we start getting support, for example, for sockets, then we're going to write the provider in Wasm.
And so then you could have an incoming HTTP connection that's handled completely by WebAssembly modules.
But it allows you to just glue together a whole system.
And that's where you can see the more complex things right now.
But for simple batch jobs and stuff, you can just start compiling things out to a WASI target because all you need is file system access and that's already there and that's where something like Krustlet works well right now.
If you have something in Kubernetes and you're like, "Hey, I'm not so sure about WebAssembly yet, but I want to try it," Krustlet is a great way to do it.
You attach a Krustlet node into your Kubernetes cluster and then do the kinds of things that Matt was talking about.
Matt: I just love the way that Taylor began by saying, "Yeah, WebAssembly's early. I don't consider it production ready," and then described an implementation of what every large enterprise developer really wants as a back end.
And I think that gets to what Benjie was saying earlier is that there are a lot of good parallels to draw between where WebAssembly is right now and the dotCloud to Docker transition that on one end, things feel rough and it's a little bit Wild West right now, and we're still trying to figure out some big issues.
And then at the other end, the possibilities that we are looking at and that we're watching open up before our eyes day by day is like, this is going to be it.
If we could just fast forward two years till we've solved some of these other things, developers are going to really, really enjoy working in this platform.
Operators are going to really like the fact that they have the right knobs and dials and switches to turn without having to necessarily master the intricacies of every language that their developers are using to deploy.
It's a really, really exciting ecosystem right now.
And just listening to Taylor talk about that got me so excited about the potential and the possibilities out there.
I really do feel very strongly that in the next few years we're going to watch another big wave of innovation happen, and that wave of innovation is going to be centered around WebAssembly.
Benjie: That's super exciting guys.
So a few episodes ago we got to interview someone over from Rancher, and he told us that they are running a K3s cluster on a Raspberry Pi on a satellite in space.
Just a quick fun thing here. Matt and Taylor, what is the craziest edge cool use case that you've even... Forget people using it.
If there's something cool that people are using for this already, I want to hear about that.
But give me your dream. My iWatch is going to run batch GPU, I don't know.
Give me your biggest dream of what we could do with this real edge WebAssembly and Kubernetes cluster type stuff.
Matt: You just pitted us against a satellite. That's not fair.
Benjie: Sorry. Yeah. I mean, that was a little unfair. But come on, get creative.
Taylor: It's a great question though too, because this is stuff that really, when we get frustrated or something, we start thinking about these dreams that we've had.
And so this sounds like conversations I've had with Matt and so many people so many different times.
I remember one thing. So there's two kind of visions I see here.
One of these is just how quickly you can update or swap out components of something.
I remember at a recent KubeCon, I can't remember if it was San Diego or before that, someone from the DOD came and talked about running Kubernetes on an F freaking 16.
I was just blown away. Number one, it is terrifying that a jet fighter is running Kubernetes.
But I just think about the fact that we're using the ability to swap out certain components of the system using a container and how much more secure and small that would be for any type of system, like a satellite, like a jet fighter, like whatever it might be to swap that out.
I can see some really interesting use cases there. But I also would really love a future here where number one, you can compile from any language.
So a developer doesn't have to worry about a Dockerfile or any of those kinds of things.
They can just write their code and compile it and be done. But then that type of code can be glued together into a really interesting platform.
So one of the things we talk about we're actually going to...
If you listen to this before KubeCon, you'll see that we're going to talk about this in one of our KubeCon talks that Matt and I are doing at Wasm Day, but we have this example app that we like to use for this called Chow Time.
And let's say you have... an application's going to give you recommendations about restaurants.
And so then you could be working on a device with no internet connection or with a weak internet connection and just running on a device, or you could be at home and have full connection to an internet and you could have a full recommendation engine.
So the thing is, you could have multiple WebAssembly modules that could do either part.
If you're running locally, you can have a little local database that has your favorite recommendations for your favorite restaurants and the biggest chains.
And it can do some simple machine learning locally that's not going to really destroy your device.
But then you can get back home and automatically when you're back on your internet, it'll swap over to the other implementation that runs on some big cloud server somewhere that can do a big machine learning model.
And it can give you very tailored recommendations based on all of your history and all these different things for whichever restaurant you're going to do.
And all of these components are swappable.
There's some really cool things there about being able to move from device to device or from which backend you're talking to.
And people have tried to do that and done it successfully with some things, but this becomes so much easier and very powerful in the future.
And so you start to think about some of the other things that we can do, which we didn't even get into, like freezing a running WebAssembly module and being able to resume it exactly where it was left off.
There's some really interesting potential there.
But before I keep just completely running my mouth, I should let Matt give his different visions of this too.
Matt: I mean, I think the one that gets me the most excited is that I think we have just experienced with Kubernetes as it is today, distributed computing version one, or maybe version two, if you want to point to some of the academic work before that.
But I think that we could hit another iteration of this.
And to me, the sci-fi future that I want to see is like, when I walk around, my cluster comes with me.
And things join and leave my cluster as I move around in physical space.
So my phone might be the center of my cluster.
And as I walk into my house, my laptop joins that and my home router joins that.
And I can start moving compute loads around inside of the ambient environment that is around me, instead of having to push everything up into the cloud where I don't control anything.
And I don't have visibility into what data they're collecting about me and so on.
And I suppose we could go one up and say, and when I need those cloud resources, when I'm running some really intensive job, it would be nice to just have that piece of the cloud join into my cluster as well, and participate for a while.
In order to get there, the thing that I think we need is a binary format that can run on all of these architectures that can be run securely that has that kind of freeze and unfreeze kind of memory model.
And again, when I think about what I want to see, what I want to live in the future, I think WebAssembly is one of those technologies that's really starting to enable that sort of vision and that sort of experience.
Marc: Yeah. That's really cool.
So every day at Replicated, we're trying to remind everybody that if you're writing software, your opinion of where that software should run shouldn't influence who can run it.
Whether it's running in the cloud or on-prem. But you're taking it to the ultimate extreme here where it actually doesn't run in any one place, it's really morphable.
It goes to wherever you need to be and you don't even have to know beforehand the architecture of the environment where it's going to run in.
You can just choose dynamically at run time based on any type of conditions, how to reshape the architecture of where the application's running based on anything from the data to what's available right now.
Matt: Yeah, exactly.
Marc: So we've spent a lot of time talking about the power of WebAssembly and I think that helps explain a lot about why.
I want to kind of bring it back to Krustlet for a little bit.
So Krustlet is a CNCF Sandbox Project, and it's not your first CNCF project that you guys have created either.
How is it going as a project right now?
If I wanted to be involved with it and actually start running it, is there like a community meeting that we should be involved in?
What's the best way to start getting involved in the Krustlet project?
Taylor: Yeah. So that's a really great question. There's a couple of ways to get involved.
For community meetings, we have one every Monday at, I think it's 1:00 PM Pacific Time, Tech Central Time as I like to call it.
So that's one of the ways to get involved.
There is a Krustlet channel in the Kubernetes Slack that you can reach out to there.
The other way is just trying to use it.
At this point, because we're in the alpha state, we would love feedback for people to actually try it with something semi real they're testing out and just say like, "Hey, I found this bug, or I found this rough edge, or this is hard to use," whatever it might be and give us the feedback that way as well.
And then we have some issues labeled good first issue.
But really we're just looking to see how people are going to use this.
We brought it to a point where it has the table stakes Kubernetes features that need to exist for somebody to do something real in Kubernetes with it.
And after that, we just need to see, how do people use this?
Are they going to use it how we envision it?
Or are they going to take it in a completely different direction?
Because any future roadmap or features that we want to add will be defined just based on how that evolves.
In addition to the continuing evolution of the WASI spec, we can then add those features in as they become available, but also just seeing how people use it and then tailoring it to that use case as it grows.
And so that's where the project's at. It's getting to a point where it's stable, but now we need people to say like, "Are you going to use it? How are you going to use it? What's the kind of feedback that we get from those users?"
Matt: And while Krustlet is a WebAssembly-first project in our minds, it was designed to do more than just that.
And one project, Krator, has even spun out of it. Taylor, you want to talk about that a little bit?
Taylor: Yeah. Krator's a great one to talk about too.
So Krator is something that came out of it.
We designed a state machine for Kubernetes. And this is a true state machine.
It goes from state to state with transitions, like a graph, as opposed to reconciliation loops that you're used to if you've coded deep Kubernetes stuff before.
And so it is an operator framework that uses state machines to drive how that operator is done.
And so that's actually a really, really cool project.
It is used in Krustlet, but it can be used for entirely separate things as well, with custom resources, with built-in Kubernetes resources.
And that project also would love people to use it, try it out, give it a whirl, especially if you're interested in doing Rust inside of Kubernetes, or Rust in cloud-native or Kubernetes-adjacent work, because it is a powerful framework that evolved over a couple of months of iteration on how you could get this dynamic graph working in a strict language like Rust.
And it's quite powerful now.
So that's another way that you can get involved as well is if you're writing Kubernetes operators and you want to do it in Rust, give Krator a whirl.
That one is a great thing to get involved with and we help run that community too.
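The difference from a reconcile loop can be sketched with a plain enum. This is only an illustration of the state-graph idea Taylor describes, not Krator's actual API (the state names and inputs here are invented for the example):

```rust
// Each state is an explicit node in a graph; transitions name the
// next node, instead of one big reconciliation function.
#[derive(Debug, PartialEq, Clone, Copy)]
enum State {
    Registered,
    ImagePull,
    Running,
    Completed,
}

// One transition step: given the current state and observed facts,
// return the next state.
fn next(state: State, image_ready: bool, finished: bool) -> State {
    match state {
        State::Registered => State::ImagePull,
        State::ImagePull if image_ready => State::Running,
        State::ImagePull => State::ImagePull, // retry the pull
        State::Running if finished => State::Completed,
        State::Running | State::Completed => state,
    }
}

fn main() {
    // Drive an object through its graph until it completes.
    let mut s = State::Registered;
    while s != State::Completed {
        s = next(s, true, true);
    }
    println!("{s:?}");
}
```

The payoff is that every legal transition is visible in one place and the compiler checks that no state is left unhandled, which is exactly what a strict language like Rust is good at.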
Marc: Yeah. We'll definitely include links to that in the show notes here.
I'm looking through it here. Definitely, it's a unique and very fascinating approach to writing operators.
It's cool. I'm curious about, if I want to deploy and if I'm using Krustlet today, I write a Pod spec.
I'm going to write a deployment, I'm going to write a stateful set.
And in there there's like this common Pod spec that defines how to run it.
Earlier, you mentioned that there was the image tag and there's the antivirus.
There's like a very, very specific thing. It's designed around the concept of containers.
How does that change or how do I deploy a WebAssembly module using a Pod spec or don't I do that today?
Taylor: It's actually fairly straightforward, and it took a lot of work to get it there, because as you said, it is a bit of a square peg in a round hole, but we did try to make it so that the API is almost completely unchanged.
So right now Krustlet stores its... it expects a WebAssembly module to be stored in an OCI registry.
So Docker Hub doesn't support this, but a lot of the other container registries by the different cloud providers allow you to push arbitrary artifacts up, including things like Helm Charts and other stuff.
But one of those is WebAssembly modules.
And so the only thing you'll need that's different is a tool called wasm-to-oci that can push a WebAssembly module to an OCI registry, but then it's tagged and looks just like a Docker image name and tag when you put it in.
And so outside of that, the only other difference is that you have to have specific tolerations and nodeSelectors on your Pod, because you need to make sure it doesn't land on the non-WebAssembly-enabled Nodes in a cluster.
And so those are well documented in the examples, but it just makes sure that it can tolerate a WebAssembly Node because the WebAssembly Nodes are set up with their taints to repel any non WebAssembly Pods.
That's the only thing that's different. Otherwise you'll write it the exact same as you would write any stateful set, deployment Pod, whatever it is that you do with containers.
So those are just the only two steps that are different.
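Putting Taylor's two differences together, a Pod spec for a Krustlet Node looks almost identical to a container Pod. A sketch following the pattern the Krustlet examples document (the image reference is illustrative, pushed with something like `wasm-to-oci push hello.wasm <registry>/hello-wasm:v1`; check the project's docs for the exact taint keys your version uses):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-wasm
spec:
  containers:
    - name: hello-wasm
      # An OCI reference to a WebAssembly module, not a container image.
      image: myregistry.azurecr.io/hello-wasm:v1
  # Tolerate the taints Krustlet Nodes use to repel container Pods...
  tolerations:
    - key: "kubernetes.io/arch"
      operator: "Equal"
      value: "wasm32-wasi"
      effect: "NoExecute"
    - key: "kubernetes.io/arch"
      operator: "Equal"
      value: "wasm32-wasi"
      effect: "NoSchedule"
  # ...and keep this Pod off the regular container Nodes.
  nodeSelector:
    kubernetes.io/arch: wasm32-wasi
```

Everything else, deployments, stateful sets, labels, works as it would for any container workload.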
Benjie: So wait. A Wasm Node that's just running, what is a Wasm Node in a Kubernetes context?
Taylor: That's some Node running Krustlet. Any Node running Krustlet is a Wasm Node.
So it's just a Wasm-capable Node because it is running Krustlet, not because of anything else that's installed on it.
Benjie: Right. And I mean, when we were talking about this earlier, we skipped over the very obvious, the M1 problem that we all have with the x86 versus ARM stuff.
So that seems like this even solves a bunch of that for me soon, kind of.
Matt: I know. I feel like we should say more about that, but the simple answer is, yes.
Benjie: Yeah. Right. We all have our own biases.
Over at Shipyard, we use single-tenant Kubernetes clusters as our security model for namespace ephemeral environments, basically.
So each organization gets their own Kubernetes cluster.
Not today, maybe, or today, can I start looking at WebAssembly as a... Can we use that there for namespace ephemeral environments? Probably not yet, but close.
Taylor: So it's actually fairly simple to do this just like you would with any other Kubernetes cluster.
The way that the WebAssembly part will work is just that you're going to have Krustlet Nodes there and they're all tied to it.
And they can only do specific workloads such as file I/O, like batch processing type things, or outgoing network calls because of the limitations of the WebAssembly spec.
And so it's good to just keep in mind that pretty much anything you could do with a normal Kubernetes cluster you can do with Krustlet and Krustlet Nodes.
You just have to keep in mind the limitations of WebAssembly itself. And that's it.
Marc: So what are you currently working on?
What's on the roadmap to get to 1.0, and have you started thinking about past 1.0?
Taylor: Well, like we've mentioned throughout here, it is very much close to 1.0. The alpha's out.
We're waiting for just a few little features left that we have in our 1.0 milestone on GitHub before we cut the final alpha release.
And then we're just going to let it sit for a couple weeks.
See if anyone gives any more feedback saying I found this bug or whatever, because we're trying to stay in alpha, so if we need to break an API, we can.
If you've used the Helm project, to some people's chagrin and other people's joy, we are very strict about breaking changes there, and we're very strict here with Krustlet.
We want to make sure that when we publish an API, it's not going to break somebody when we change something in the future. So we're trying to give time so that if somebody points out a really rough edge that we need to sand off, we can break that API, but we're not anticipating that. And then beta will be another few weeks and we'll wait for other stuff to come in, if there's any other bugs or things.
And then after that, we'll cut our first RC, release candidate.
And then once that's released, and if there's no bugs there, then we'll cut the final release, which should be identical to the RC if everything has gone well.
And so that's the final path. There's really not a lot left.
We're just trying to make sure we tie off those last little bit of features.
And then the future roadmap is really just about that, trying to get more feedback from people actually using it.
How are they using it? What are the next things?
Obviously, anything that enters into the WebAssembly system interface will be one of the things we add in almost immediately, because we want to take advantage of all those features that are available to anything compiled to a Wasm 32 WASI target.
But outside of that, we're just trying see what we can get from people using it and where they want to take the project.
Benjie: Now, correct me if I'm wrong, Taylor, but Krustlet 1.0 will support all the applicable features from the Container Storage Interface, CSI, but CNI is still one that is on the horizon.
So if there are any CNI experts out there who are interested in an interesting and challenging project, we have the project for you.
Marc: That's great. Anybody who knows CNI join the community meeting and let's figure out some cool technical challenges.
And then, right now, Krustlet is a Sandbox project.
Getting that 1.0 release out and getting CNI, that's amazing.
That's great. It's a relatively recent Sandbox addition, but have you started thinking at all about what it would look like, and what your goals are, to apply for the incubation level?
Taylor: Bluntly, no, we haven't, but obviously we'll see where the thing evolves to.
And I say this carefully, I don't want people to take away that we're trying to throw all the responsibility on the community and the users.
But really it's up to what people decide to do with it, because it could be that people are like, "You know what, this was a fun project, but I'm doing other things with my WebAssembly."
And if people don't want to do something with it, that would be fine.
It would have proved how you can do WebAssembly and do it in something we know.
Or it could turn out, people are like, "You know what, I'm building our next production platform thing on Krustlet."
And then it'll be like, "Okay, so people are using this for something real. We should probably make sure that as a community, we push this towards incubation and eventual graduation because people are using it for real solid big projects."
And so it could fall anywhere in between there.
And it just depends, like I said, a bit on how the community responds and uses the project and then we can go from there.
Matt: And this is a great opportunity to heap some well-deserved praise on CNCF, because I believe this is the kind of project the Sandbox model was intended for.
Taylor works at Cosmonic.
I work at Microsoft. Kevin is another one of the core maintainers, and I don't even know where he works.
I feel bad admitting that on the show, but we're all from different companies.
We work together highly collaboratively and asynchronously on a highly experimental platform that we all just think has some huge potential.
And CNCF has given us that space in the Sandbox world to say, "Hey, what can we build?"
But also they've given us some mechanism built into the way CNCF functions to say, "Hey, community, tell us if we're building anything of value."
And if we are, then we're seriously going to look at moving up into the incubation phase.
If we had something we thought was a great idea, but other people just don't find as compelling, then the whole goal of the Sandbox would be, okay, here's a place where we can record all the things we learned in the open and perhaps let the ship sail silently into the night.
But I don't think that's going to happen.
I think that what you'll see is some real interest in Krustlet over the next year, year and a half.
And in that case, then what we would be looking for in incubation is something that is really going to meet the needs of this nascent emerging WebAssembly ecosystem.
Marc: Yeah. Echoing that, it's an amazing thing that the Sandbox exists and it's created this governance model that allows you to conduct these experiments.
But pair that also with just the community in general is so receptive to the experimentation and being willing to say, "Oh, wow, this crazy out there idea. I've never thought of before. Let's start throwing it up. When can I put it in production?"
And you're like, "No." The culture of experimentation that exists in the whole CNCF ecosystem is phenomenal.
Marc: Cool. So the last question I had was that, Matt, you and Taylor are both giving presentations at KubeCon. KubeCon is about a week and a half from when we're recording this.
Hopefully we get the episode shipped before KubeCon and so there's still opportunity there.
So I guess, give us a quick intro to the talk that you're going to be giving.
Taylor: Yeah. So there are three different talks we're actually giving. One is at Cloud Native Rejekts.
And that one is called the National Association of W Lovers, which is a Sesame Street reference for the keen-eared among you.
And that's going to be talking over the different projects going on in WebAssembly land.
So it's kind of an overview of the WebAssembly landscape: what's going on right now and what the different projects do.
So that one's going to be showing... It's a virtual conference, so you can see it and sign up for it.
It's on Saturday before KubeCon. And then at Cloud Native Wasm Day, we're going to be giving a panel and a talk.
And the talk is going to be on Bindle, which is an aggregate object storage system that's designed around WebAssembly.
And it answers some of these questions about how you distribute, install, and describe these applications.
It's really, really interesting if you are curious at all about object storage or artifact delivery or WebAssembly in general.
And then the last one is a panel that I believe Matt is actually going to be moderating.
And I'll be speaking on it with a couple of other people involved in WebAssembly. It's the panel version of the talk we're giving at Rejekts.
It's kind of talking over the different things that all of us are working on in the community, the things that excite us about it.
And so those are the three big things from the WebAssembly land. I think there's some others that Matt might have.
Matt: I'm really excited about the panel because we'll have Oscar Spencer from the Grain project.
He's talking about these new programming languages. I think he's going to give a really interesting perspective. Bailey Hayes, she's done some really... you can go watch some of her videos on YouTube. She's done these really cool experiments with storage and WebAssembly. And then some of us from the cloud-native side of the WebAssembly story, I think it's going to be one of those points where you'll see a melting pot of views that'll give you a good takeaway of all the different possibilities that I think are opening up in this ecosystem.
Marc: That sounds great. I'm planning to be there, in person or virtually. Hopefully there's really great participation there. Well, Matt and Taylor, thank you both for your time today, talking about Krustlet and the work that you're doing there. I'm really excited to see the future of the Krustlet project, the concepts you were talking about, and the implementation of WebAssembly as a runtime.
Matt: Yeah. Thanks for having us today.
Taylor: Yeah. It's really been a good time to chat with you, and thanks for letting us go a little bit wild as we talked about the exciting role of WebAssembly.