Marc Campbell: Hi, again, and welcome to another episode of the Kubelist podcast.
We're recording this just as KubeCon is getting started here in LA, where a lot of new stuff is probably going to get announced this week.
But before we jump in, let's do some quick intros.
As always, Benjie from Shipyard is here. Hey, Benjie, what are your plans for KubeCon this year?
Benjie De Groot: So I am going to be attending virtually.
I'm out in New York, not going to make it out this time, kind of disappointed, but safety first, yada yada.
But I am super excited about a few of the talks that are coming on and the virtual stuff looks pretty cool.
And it's still pretty cheap. I want to commend the CNCF, because 75 bucks to attend virtually is really inclusive for a lot of us who can't make it.
So I'm really excited about that and just looking forward to learning a lot from a bunch of these talks.
Marc: Yeah, I think it's really cool that it's such a good hybrid event. Looking forward to it.
So we're here today with Chandler Hoisington from AWS.
Chandler is the GM for Kubernetes for AWS. Welcome Chandler.
Chandler Hoisington: Hey, thanks. Thanks for having me.
Marc: Cool. So before we jump in, I think we're going to spend a lot of time today talking about what's going on in the Kubernetes area at AWS.
But before we dive in, I'd love to give you an opportunity to tell us a little bit about your background.
Like how long you've been at AWS, what got you into the cloud-native ecosystem? Things like that.
Chandler: Yeah, it sounds good.
My title is officially GM of Kubernetes, but I'm really the GM for EKS Anywhere and EKS Distro.
Bob Wise is the GM for kind of the global Kubernetes business.
I think that's one of those legacy titles when you first join and the project is still not publicly announced yet, so they have to give you some kind of title, but I'll keep it because it sounds good.
But yeah, I started my career just like anybody else, got a little lucky and joined a company as the VP of engineering to help kind of re-architect and rebuild one of their legacy platforms.
And at that time that's kind of how I got into containers.
I think that was around 2014 or 2015. And really enjoyed the container space at the time.
It was just starting to come online and I got in touch with the Mesosphere folks, the founders from Mesosphere and just really hit it off with them.
They're really great guys and I joined Mesosphere as the VP of engineering there.
And by the time we got done, I was running product there as well. I had a great time learning the space and figuring out how to position Mesosphere.
And eventually we rebranded as D2iQ and then last year joined Amazon to kind of do much more of the same, help customers with on-premise Kubernetes problems.
Marc: And you're definitely doing some really cool stuff in the Kubernetes space at Amazon.
While you were at D2iQ, how involved were you with that transition?
Help me understand the timing of that shift from Mesos, when they said, "Oh, let's make this more Kubernetes focused."
Chandler: Yeah. I joined pretty much right when we started having those discussions around 2018.
And it became pretty clear that customers were asking for help with Kubernetes.
Mesosphere at the time knew a lot about containers in the space.
I mean, they had to build a lot of the components themselves because of just the strategy they took from 2014 to 2018.
And so they just had a lot of expertise especially on their engineering team.
And customers were asking them for help with Kubernetes as well.
And so eventually we said, "Okay, let's do it."
And that's what led to the rebrand as well, because it seemed a little awkward to sell Kubernetes from a Mesosphere type company.
It's like selling Ford trucks at a Chevy dealer or something like that.
And so that's kind of what contributed to rebranding.
Marc: That's great.
And then you ended up at Amazon, and you mentioned the title was a little bit about an unlaunched project. But I think the word's out, right?
EKS Anywhere is what you've been building.
Chandler: That's right.
Marc: Yeah. Let's just start there. What is EKS Anywhere?
Chandler: Yeah. It was really interesting being an outsider looking into Amazon.
I wasn't very familiar with how passionate the company is about their leadership principles specifically around customer obsession.
And I know that sounds kind of cliche and a lot of companies have values and principles and things like that, but Amazon really does weave it through the fabric of a lot of things you do here.
And when I was joining, customers were asking for help with on-premise Kubernetes. It's as simple as that.
And Amazon really does a good job of listening to them and building the things that will satisfy them.
And so they already knew they wanted to build this before I joined, again because customers were already asking for it.
But when I did join, it was during the pandemic, June or so of last year, 2020. We all got together and said, "What can we do really quickly just to signal to our customers, 'Hey, we're coming to this space, we're coming to help you with on-prem,' but also, is there something we can help some of our more advanced customers with right out of the box?"
And that's actually why we decided to build EKS Distro first.
And EKS Distro is really about taking the core components of EKS itself, the core packages: Kubernetes, the API server, the kube-scheduler. It's 10 or 15 different containers and binaries. We take what we consider that base distro of packages and give it to all of our customers and partners to use.
And so you can have the confidence that you're running the same builds, the same versions, on your on-prem platform or another platform, that you would be running under EKS.
And so we started with EKS Distro just to signal to everyone, "Hey, we're coming to this space," but also to give that value back to a lot of our customers. And we needed to do that work anyway for EKS Anywhere.
So that's kind of what got us going down there.
But then we also pre-announced EKS Anywhere last re:Invent and successfully launched it at GA this September.
And EKS Anywhere builds on top of the EKS Distro and it's easier to explain, because it's really about just being that full featured on-premise Kubernetes platform.
And it's really about having something that helps customers install and manage clusters, but then it's fully supported by Amazon. And all the additional components we put on top of it are fully supported by Amazon.
So we're pretty excited about the direction of EKS Anywhere. It's got a long way to go.
Obviously it's V1, but we have a lot more we'll be doing with it.
Marc: Okay. So the traditional EKS that's been out for a while, that's like a managed, hosted Kubernetes control plane, EKS Distro is the open source version that I can take.
And for anybody who hasn't tried to just do Kubernetes the hard way, it's great.
Everybody should actually do that.
Like Kelsey Hightower's great Kubernetes the Hard Way tutorial. But you don't want to do that in production.
It's great just as an educational experience.
So EKS-D is like, it's all open source, I can run it myself, but then EKS Anywhere takes EKS-D and then kind of wraps it around like supportability, enterprise readiness.
Are the bits between EKS-D and EKS Anywhere the same?
And is it more about support in relationship there or are there actually like software bits that are different too?
Chandler: Yeah. So the bits are the same.
If you were to do the Kelsey Hightower tutorial, you basically would replace his links where he pulls down images and packages.
I think a lot of them come from Google servers.
Obviously, Kelsey works at Google and that's where the default Kubernetes builds go.
And instead it would pull them down from ECR and S3 buckets that we put our builds in.
So it's just a different supply chain for the same components.
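To make that supply-chain point concrete: the EKS Distro builds are published to a public ECR registry, so the swap is mostly a matter of image coordinates. The registry path below is real, but the exact version tag is illustrative:

```yaml
# Pod-spec fragment (illustrative): the same upstream component, pulled
# from the EKS Distro supply chain instead of the default one.
#
# Default upstream build:
#   image: k8s.gcr.io/kube-apiserver:v1.21.2
#
# EKS Distro build of the same component (tag shape is illustrative):
containers:
  - name: kube-apiserver
    image: public.ecr.aws/eks-distro/kubernetes/kube-apiserver:v1.21.2-eks-1-21-4
```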
But to answer your other question: basically, we took EKS Distro, and we're very excited about the Cluster API project.
And we essentially built a wrapper for the Cluster API project, which itself wraps other things in wrappers and wrappers and wrappers.
But basically, we took the cluster API project and we said, "Okay, this is going to be our foundation."
So we pull our underlying builds from EKS Distro then we use cluster API to help customers stand up their first cluster.
And then we layer on top a set of components that we feel like people need to actually run Kubernetes in production like a CNI for example, a network overlay of some kind.
And over time, we might add some monitoring features and logging features and things like that.
And we'll pull from the open source community to help us do that and then we'll support the whole thing, including all of the open source projects.
So that's kind of the idea there.
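As a sketch of what that layering looks like in practice, an EKS Anywhere cluster is declared in a single spec file. The field names and values here are from memory and may not match the current API exactly; treat them as illustrative:

```yaml
# Illustrative EKS Anywhere cluster spec: Kubernetes version (built from
# EKS Distro), control plane and worker counts stood up via Cluster API,
# and the default CNI layered on top. Names and versions are hypothetical.
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-onprem-cluster
spec:
  kubernetesVersion: "1.21"
  controlPlaneConfiguration:
    count: 3
  workerNodeGroupConfigurations:
    - name: workers-1
      count: 3
  clusterNetwork:
    cni: cilium            # the supported default CNI
  datacenterRef:
    kind: VSphereDatacenterConfig
    name: my-vsphere
```

A cluster would then be created with something like `eksctl anywhere create cluster -f cluster.yaml`, though the exact CLI shape is also an assumption here.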
Marc: Those extra components like a CNI that you need, are those configurable?
If I want to run EKS Anywhere in my own environment, can I choose between CNI providers or are you saying, "No, we've done a lot here, we've done a lot of checking. This is the best supportable way, if you want to run the EKS Anywhere, we strongly encourage."
How strong is that encouragement to stick with the CNI that you're recommending?
Chandler: Yeah. I mean, we have to pick one to give the customers who don't have strong opinions a default option.
Because we have a lot of customers that come to us and say, "We don't know, you're the experts."
We're really good at setting up cell towers and figuring out how to connect mobile phones to each other, but we don't know anything about Kubernetes and we don't want to, that's why we're partnering with Amazon.
So you tell us what we should be using in all these areas.
Otherwise, we're going to have to hire a whole team of solution architects to go spend six months doing performance tests and feature bake-offs between all these projects and figure out which one we want.
So why don't you pick a default one for us?
So that's why we will have a default in a lot of these categories, but we also don't want to push away partners and customers who have already made investments and decisions in some of these areas.
And so we're keeping the optionality portion of it available.
So if a customer comes to us and says, "Well, we don't like your choice for key management storage."
Let's say we pick something for key management storage, which we haven't done yet. "And we want to use this project instead," that's something we'll always help them with.
But we do want to provide a set of kind of sane defaults for each of the most important categories on the platform.
Marc: That's good. I think for a lot of non-tech, non-software companies, right?
Amazon, you're in the business of creating a Kubernetes distribution, but if my business is cell phone networks, how Kubernetes is run is just not differentiating to me.
Anything that doesn't just help me get on Kubernetes is not helping me solve the actual problem that I'm trying to solve as a business.
Chandler: Right. It's just noise to them and it's not really improving their bottom line.
But if we can help them make those decisions and lower their costs to just making decisions in general, I think that's something that EKS Anywhere should try to do.
Benjie: I have an interesting follow-up on that one.
So you're giving these enterprise SLAs and support level contracts to people using EKS Anywhere, but if they choose not to use your defaults, what's the line? Say I'm like, "I want to use this sandbox CNCF project for my mesh."
So say I want to use Submariner, and Submariner is early days versus using Istio or Linkerd, how do you guys support that?
Like how do you find balance there?
And is there a line that's too far to cross for what you do support and from a third-party library perspective?
Chandler: Yeah. I think we're going to be more opinionated about what we would consider runtime critical components, like your CNI, CoreDNS, or etcd.
Obviously, I don't know many customers that are going to try to swap those things out.
But for the CNI, and the mesh you might choose, we could be a little bit more opinionated and try to steer customers towards our decisions.
But the way we're supporting these components in the first place is by partnering with the ISVs behind most of these open source projects.
So we're partnering with Isovalent, who backs the Cilium project, and we'll continue to partner with other ISVs in the space who back these projects.
And that's actually how we're able to provide such great support.
Even though you're going to open one ticket with AWS support, that ticket will end up with my engineering team eventually.
And if we can't figure it out, that ticket will end up with Isovalent and you'll get the experts who know that project the best.
So if it's a mission critical component that a customer really wants to use, and it's a showstopper for them and they don't like the choice that we made or it just doesn't work for them for whatever reason, then our first choice would be to partner with the ISV who backs that project.
And if there isn't an ISV, in some cases some open-source projects don't have one, then we have to start to build the internal expertise around that open source project, contribute to it, chop wood and carry water like they like to say in the Kubernetes community.
And so we understand that community and they know that we're here to help and we have customers who want to use their project and we want to help them with it.
So that's really the two-part strategy. It's nothing fancy. It's either partner or get involved.
And if those two things don't work, then we have to work with the customer on a different project.
Marc: So what do you see as the target customer profile that wants to run EKS Anywhere on-prem?
Are they customers that just don't want that Kubernetes sophistication in-house, but they see the value of it? How does that map out, do you see?
Chandler: Yeah, it's really fascinating.
I started my career as a DevOps engineer and when I first got connected with AWS, I'm like, "Wow, this is it. Everyone's going to move to the cloud one day. Data centers are going to be completely obsolete very soon. Once people get okay with all the security problems and all these things, everyone's going to move to the cloud one day."
But once I got into the on-premise Kubernetes space a bit more, I realized there's actually quite a few use cases that could potentially stay on-prem for a long time.
You have highly secure facilities, like the three-letter government agencies, that need to be completely air-gapped, for example.
You have cruise ships that are in the Arctic Ocean or the Atlantic Ocean somewhere with very poor connectivity if any at all.
Then you have things like factories in the Midwest that don't have good internet also.
I mean, there's tons and tons of use cases where people are still going to want to keep on-premise data centers and on-premise workloads in general.
And that's really what we're aiming to solve.
I mean, that's kind of your customer profile: anybody who has a reason to run their own data center.
So it's typically not going to be a startup.
Some startups are building businesses around data centers, but not your traditional startup.
It's more likely to be a large enterprise company with a footprint on-prem that they're not planning on moving to the cloud.
And those are the people that we're trying to help out.
Marc: When you talk about on-prem and you use the word on-prem, do you mean on-prem in like the very literal like I have a data center, I have a server rack that I'm racking and stacking servers, or earlier you mentioned something that like, maybe I'm not an AWS customer, but I like the EKS Anywhere offering, I'm using this other cloud provider.
Can I take the bits and run it there and kind of have the best of both worlds then for what I've decided like the decisions that I want to make and who I want to partner with?
Chandler: Yeah. I mean, the answer is you could.
It's an open source project, and it's backed by Cluster API, and Cluster API has providers for other clouds.
But the big "but" is that my focus, and our team's focus, is really about helping EKS customers with their on-premise workloads.
And the reason why, if we're being totally honest, is that the best place to run Kubernetes on Google Cloud is GKE.
Just like the best place to run Kubernetes on Amazon is EKS.
Those are the best places to run Kubernetes on those clouds and that's our collective opinion here.
And so I don't think customers need help running Kubernetes on GCP.
I think they already have a really great product, which is GKE.
But what I do think customers need help with is running Kubernetes on-premise and that's really what EKS Anywhere is all about.
Marc: So what do you support today? Let's talk about where the project is today as far as like on-prem environments.
Can I literally just bring any on-prem or like what's the minimum requirements?
Chandler: I mean, just like with any MVP V1 product, what we launched is 100% ready for production, but you have to scope it down.
So luckily, the great folks at VMware have put in a ton of work on the cluster API project, and they had a vSphere provider that was pretty far along.
And so we've made some small contributions to that just to get it more aligned with how we like to run the etcd control plane.
And outside of that, we were able to take the work of all these great people in the community and wrap it in our product.
And so we launched with vSphere support right out of the box.
And the next things we're working on are an alternative to vSphere, which will be CloudStack support, and then we're also working on bare-metal support.
So we're planning on launching those two things first half of next year, sometime.
Marc: That's great.
I know that your team or the Amazon team has done a great job with this public containers roadmap that starts to show a little bit about what you're working on and a lot more transparency because you're really working with this open source community and it's like really fast moving and everybody wants to know where you're going.
Chandler: And it's been great to see since joining Amazon, because every project I've worked on so far has been open-sourced, and it's just really great to see Amazon's commitment to making their products available for anyone to use.
So EKS Distro and EKS Anywhere are fully open source, Apache 2.0 licensed.
And what you would be paying Amazon for eventually would be support, but if you want to use the project, you're welcome to download it and use it.
And I just think it's great to see us so dedicated to that strategy and I think it's working out really well.
Benjie: So I come at you and I've got my Linodes running in Texas or whatever, and I'm like, "I want to run EKS Anywhere. I can just download the distro, you guys kind of bootstrap Kubernetes for me, and I have basically an EKS equivalent fully running and everything's great."
And then like three months from now, I'm like, "Okay, I'm ready to really take this to the next level."
I could call up AWS and be like, "Hey guys, will you support my EKS Anywhere cluster running on a bunch of VMs somewhere?"
Is that literally the model? I can do that?
Chandler: Yeah. That's the exact business model I pitched to my VP. So I really hope it works.
Benjie: Okay, great.
Yeah, full disclaimer, I used Linode back in like 2007, so I haven't used it since then, but it was a great experience and I'm sure they're doing great now as well. That's pretty interesting.
Are there any other Amazon projects, or AWS projects rather, that are similar or equivalent to this?
Because this is the first time that I'm really aware of something so open, like you were saying.
Chandler: Yeah. I mean, the Kubernetes team and the Containers org in general has been pretty good about pushing open source projects over the last couple years.
We have Karpenter, which is an auto-scaling project for Kubernetes.
It's another one we pushed out.
But as far as making it a business model, this would be one of the few--
It's the only one I know of, but Amazon is so big that I bet you there's another one in there I just don't know about.
But it's definitely a newer area for Amazon, but again, I feel like understanding the Kubernetes community and the CNCF, this is the right strategy for us.
And customers don't want to feel like they're locked into proprietary software, especially in their own data centers where they are the ones in control.
And they really want to have that kind of security that if they didn't want to pay us our support fee anymore, they could go it on their own.
If budgets were really tight and they had to cut something, they could cut us and they could have their platform engineering team running on their own.
And so I feel like that's the strategy customers are asking for and that's ultimately why we did it.
Marc: That's great.
So kind of going back you mentioned that Amazon, you're going to help install and support, but I think the words you used were runtime critical components for the Kubernetes cluster.
What does that include today? Like we mentioned the CNI, what else is in there?
Chandler: I mean, the CNI is the main one that we launched with. We also launched with the Flux controller.
If you're ever in a meeting with anybody from leadership in the containers space at Amazon, they're going to mention GitOps at least three times in that customer meeting, because we really feel like that's the way customers should be interacting with their clusters. Not through imperative commands, like going to a UI and clicking update, or going to a UI and launching an application with it.
But instead through manifest files and configuration files, really taking advantage of what Kubernetes is best at, which is this reconciliation of state, right?
And that's why we're kind of all in on the whole GitOps model.
And so the Flux Controller is another thing that we launched with and it gives customers the ability to manage their on-prem clusters through configuration files and then check them into source control and let the controllers figure out how to make that a reality.
And so we launched with Flux. Going forward, our roadmap will have a whole bunch of other components, and you guys know what they all are because they're very common to run in Kubernetes in production. We'll need some sort of key management store.
Like I mentioned earlier, we'll need some logging and monitoring solutions.
We're going to need something to back up etcd with too, potentially.
So there's a whole list of things that we'll be launching in the future. But again, just starting off with an MVP product, we knew we absolutely had to have a CNI, and then we also wanted to start pushing customers towards this kind of GitOps mode of working, so we thought that was important to have in the first version as well.
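In generic Flux v2 terms, the GitOps flow described here boils down to two objects: one pointing at a Git repository, one continuously applying a path within it. The repository URL and path below are made up for the example:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: cluster-config
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example-org/cluster-config  # hypothetical repo
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: cluster-config
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: cluster-config
  path: ./clusters/prod
  prune: true  # delete objects removed from Git, keeping cluster state in sync
```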
Marc: Yeah. We're going to come back and we're going to talk more about GitOps in a second.
What about storage? Like we do a lot of Kubernetes installations on a regular basis and it seems like networking and storage are like two really complex variables that are both hard to figure out.
They vary dramatically and they're crazy hard to support out there.
So what are you doing for storage on EKS Anywhere today?
Chandler: I mean, it's a lot easier when you have one kind, or a few kinds, of storage in a cloud and that's all you have to worry about.
But on-premise, obviously there's tons of different types of storage that customers could have.
So we're leaving that currently up to our partners and our customers to figure out.
So if it has a CSI driver, it likely will work with EKS Anywhere.
And we're partnering with the big storage providers like HPE and NetApp and all these folks to sit down and say, "Okay, let's make sure your CSI drivers really do work with EKS Anywhere because our customers are going to want to start using those as soon as they get this going."
And so storage is a tricky one and that's one we really are going to have a partner for strategy with and really make sure that anything that our customers have, any kind of vendor that they've already invested time and training into understanding that storage that that will work with EKS Anywhere as well.
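That "if it has a CSI driver, it likely works" point maps onto the standard Kubernetes storage objects: the vendor's driver sits behind a StorageClass, and workloads only ever see the claim. The driver name and parameters below are hypothetical:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-block
provisioner: csi.example-vendor.com  # whatever CSI driver the storage vendor ships
parameters:
  tier: ssd                          # vendor-specific option, illustrative only
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  storageClassName: fast-block
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 50Gi
```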
Marc: The elasticity that you normally get in the cloud, on AWS services in general, is often going to be missing in on-prem environments. But it is cool that the CSI interface exists, so whether I'm choosing to run something like Longhorn from Rancher, or Rook and Ceph where I've made massive investments, or a more proprietary vendor-supported solution, you're saying it's just going to work as long as that CSI interface is there?
Chandler: Yeah. I think we can all be very thankful to CSI and CNI both for that reason.
And it's funny, because that's something that was really important to us at Mesosphere as well. Obviously there was Kubernetes and there was Mesos, and we wanted to make sure that those two things didn't diverge as far as what vendors had to build against.
And so that's actually how CSI was born.
I remember one of our developers, who's now at Google, working with Google on it at the time.
Marc: Cool. So let's go back to GitOps.
You're installing Flux, I assume, just out of the box on every EKS Anywhere cluster?
Chandler: That's right. And we help people out with obviously the documentation.
You provide a GitHub link and we help you with the whole flow: basically, change something in a config file, check it in, and then let the controllers on your cluster figure out how to make that state a reality.
And that's a very, very important model to us, not just for EKS Anywhere, but for the entire Kubernetes landscape and containers landscape, Amazon, honestly.
Marc: Yeah. I think we've been huge proponents of the GitOps workflow ever since the term was coined and people started really pushing it.
Now there's Argo, there's different tooling.
I think the idea of everything is declarative and sitting in a Repo somewhere and now a team of SREs or DevOps engineers can use tools that they already know.
They don't need these imperative rollback commands or anything like this.
They can literally just use the Git workflows they know on how to revert something if they need to roll it back or diff it to see what the changes have been in the last week.
And relying on Kubernetes reconciliation, it's like, once you actually dig in and you wrap your head around it, there's a learning curve, but there's so much benefit there.
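That reconciliation model is worth a tiny sketch. This toy Python loop is not any real controller's API, just an illustration of the idea: diff the declared state (from Git, or any source of truth) against the observed state and emit converging actions.

```python
# Toy sketch of a reconciliation loop: compare desired state against
# observed state and produce the actions needed to converge.
# All names and shapes here are illustrative, not a real Kubernetes client.

def reconcile(desired: dict, observed: dict) -> dict:
    """Return the actions needed to move observed state toward desired state."""
    actions = {}
    for name, spec in desired.items():
        if observed.get(name) != spec:
            actions[name] = ("apply", spec)   # create or update drifted objects
    for name in observed:
        if name not in desired:
            actions[name] = ("delete", None)  # prune objects no longer declared
    return actions

desired = {"deploy/web": {"replicas": 3}, "svc/web": {"port": 80}}
observed = {"deploy/web": {"replicas": 2}, "deploy/old": {"replicas": 1}}
print(reconcile(desired, observed))
```

The point of the GitOps framing is that nothing in this loop cares where `desired` came from: Git, SVN, or anything else that can be watched for changes.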
Chandler: Yeah. I know GitOps gets some heat because Git itself is not exactly the greatest UX in the world, but it's the UX we have.
"It is what it is" is the best way to say it.
And so I think if you had to pick one source control, I think Git is probably the right way to go.
But over time, it doesn't have to be Git, right?
You can use any kind of source control and have something listening to that for state changes.
And then the beauty of Kubernetes is that's what it's built around and then have something reconcile those state changes.
And it doesn't have to be Git.
Hopefully one day someone invents a better UX for Git, or makes Git's UX better, but it is what we have for now.
Benjie: Yeah. I'm not a Mercurial guy, but I've yet to hear of Mercurial ops or like-- I've never heard of those guys.
Chandler: SVN ops.
Benjie: SVN ops. So look, you're talking about UX, and we've got AWS on, so I have to ask: what's the deal with the console?
Why can't I have a new console for AWS?
I love what you guys do under the hood there, and I'm sorry for putting you on the spot, but why is the AWS console never updated?
It's been like 20 years almost.
Chandler: Oh, it's updated just with news and more and more services every day. It's updated.
I don't have a good answer on that, but what it does bring up is something we did launch, which was the EKS Console. It's kind of a new console for the Kubernetes side of the AWS console house, which is interesting.
And basically we launched that in preview at the same time that we launched EKS Anywhere GA.
And essentially that's allowing customers to hook up any Kubernetes cluster.
This could be a GKE cluster, this could be an OpenShift cluster.
This could be a Rancher cluster, a D2iQ Konvoy cluster, any type of cluster.
You hook all those up to one single pane of glass and see all your Kubernetes clusters in one place.
And so that was actually launched at the same time.
And the interesting thing about the AWS console is like as sub teams, we kind of can own our little piece of it.
And that's something that the Kubernetes team built and they're pretty proud of.
And there's huge roadmap coming for the EKS Console as well that'll be pretty exciting.
Benjie: Oh, that's good to hear. Tell us a little bit more about the EKS Console.
Can you tell us any previews of what might be on the roadmap or just in general, how can I use this thing today if I want to for some of my clusters?
Chandler: Yeah. It's in preview, you can go check it out today. It's very simple.
The approach we took was actually using a long-standing project called the SSM Agent, which is essentially an agent that typically runs on EC2 machines, or it can run on bare metal as well.
And it allows you to do operations on those machines and you can even use it in like a GitOps mode as well.
It's a very powerful agent. And that's the approach we took with the EKS Console.
So to hook up a cluster to the EKS Console, you install the EKS connector agent on your cluster and that does all the authorization for you.
And then over time, that's going to enable us to do more than just show the console as a single pane of glass and it opens up the possibility on the roadmap for more like cloud operations against your cluster.
That team isn't my team exactly.
So I don't know the details of all the roadmap, but it's definitely exciting the conversations I've had with them about their roadmap and it's really going to help customers who have a lot of clusters.
That's really what it's about. It's kind of a multi-cluster strategy there.
And when you have a lot of clusters and you need to make sense of all them and maybe there's a CVE that got pushed and you need to bulk update a bunch of clusters and you want to see which ones haven't been updated yet, you have a place to search on that or you want to install some kind of software across a whole bunch of clusters at once, those are the types of things that EKS Console is really going to help you out with.
Marc: That's really cool.
I think the idea of starting with that single pane of glass, supporting any Kubernetes cluster, obviously there's overlap to what GitOps can do, but kind of going back to that different customer profile, different Kubernetes sophistication everywhere, you may or may not have that tool.
And when it comes down to it, you have 100 or 500 Kubernetes clusters up and you wake up in the morning and there's a CVE that got pushed.
It really doesn't matter what tooling you have.
You need a way to get that out and know and verify that that CVE patch went out everywhere.
Chandler: That's right.
And the dream is to connect the two, the GitOps model and the console so that if you do make a change to the console, instead of just getting pushed out as like a command against an API, instead it gets pushed out into a config file, which then gets checked in and someone does an LGTM on it and then it gets rolled out.
So you can connect the two worlds and you can bet it's something that we'll be looking into for sure.
Marc: Yeah. We've kind of looked at that. We do something related in a different space, not actually managing the Kubernetes cluster itself, but around, "Hey, let's build this workflow that makes sure there are on-ramps and off-ramps for the customers into their existing tooling and how they want to work, and we just want to provide that."
And it sounds like you're taking that same philosophy, just you want to provide whatever pieces of this that the customer wants in order to get a complete story.
Some customer may have one gap, some customer may have an entire pipeline that they need built from you.
Chandler: Yeah, absolutely.
I mean, like you said earlier, every customer is at a different stage in kind of their journey and their sophistication and some don't want to go farther than they are.
And so we need to build tools for them.
And some customers are very advanced, almost as advanced as the cloud providers are at Kubernetes or more in some cases.
And they need a completely different set of things to help them.
And so as a company like Amazon, we have to think about all of those customers and build products across the entire band of sophistication.
Benjie: So since we're completely off topic of EKS Anywhere, I have been following AWS project that's very much in this space.
Curious if you have any thoughts on Bottlerocket and how that might actually play into an even tighter Distro of EKS Anywhere in the long-term or is that just completely out of scope and kind of a ridiculous question?
Chandler: No, it's completely on topic for EKS Anywhere actually.
EKS Anywhere launched with two OSs that we support as our base OSes and one was Ubuntu and the other was Bottlerocket.
And Bottlerocket is the one we obviously prefer because it's an AWS project and we have full control over it.
But yeah, Bottlerocket is really interesting. The Bottlerocket team's done a great job.
I feel like they're helping fill the void of this kind of minimal, container-first OS, and they made some changes to Bottlerocket to support EKS Anywhere, and specifically cluster API and how the OS gets called to actually install Kubernetes across your cluster.
And so they've been a big partner of our Kubernetes team throughout this whole process and Bottlerocket is available with EKS Anywhere.
And so for customers who are interested in that kind of minimal OS experience, Bottlerocket is a really cool option.
Benjie: Backing up to my other example, so you're telling me that I could take EKS Anywhere, I could put it all on some pretty large machine and then kind of use Bottlerocket to split it up and have a bunch of nodes on that individual machine, or am I going crazy here?
Chandler: No, no. Today, it would more look like you could have 10 huge machines in a data center.
You could install vSphere across all those machines and then vSphere would provision Bottlerocket nodes and Kubernetes on top of those using EKS Anywhere.
Benjie: Oh, that's really cool.
See, I've been looking for a way to easily bootstrap a Bottlerocket Kubernetes cluster for a long time because I liked the isolation layer there with Bottlerocket.
So that's super interesting. Okay.
I'm glad that we picked on that one a little bit. Any other internal projects or external projects that EKS Anywhere is working with that we don't know about?
Chandler: Well, our bare-metal stuff isn't quite there to be announced yet, but we're working with the folks over at Packet and Equinix quite a bit.
And they've helped us out a lot with like the Kube project, which has been great. Dan and the folks over there have been super helpful.
And we're also investigating Tinkerbell, which is one of the projects that they're working on as well, open-source projects for provisioning bare-metal clusters using Kubernetes and cluster API. So that's coming down.
Hopefully, we can-- We haven't fully made the decision on which direction we're going there yet, but we're definitely looking into Tinkerbell quite seriously.
We really liked the direction that Equinix has taken with that project.
Marc: When you talked about using community projects when possible and partnering with them, do you find that your team ends up having to spend a lot of time making contributions back and really digging in and helping the roadmap on those projects or how deep are you involved in the open source side of those community projects?
Chandler: Yeah. I mean, obviously we're using quite a few just like anybody does when they're interacting with Kubernetes.
Chandler: We don't consider it necessarily a chore though to give back.
We really like interacting with the communities mainly because we want to build up internal expertise to be able to support our customers on this platform.
And without that expertise, we would have to lean on the good graces of a GitHub issue or something like that.
Or in some cases, like I mentioned, there's some big partners behind some of these projects.
But yeah, we do spend a decent amount of time.
Obviously, a lot of it has to do with do we need a feature on our roadmap?
And if so, how do we start to get involved with that community, chop wood, carry water, so that they know us and we're not just opening a giant PR or submitting a big design document where they're facing, like, who are these folks and why are they asking us for help?
We don't know anything about them.
So we'd like to get involved first and really start to investigate the community and contribute back.
And then open those PRs and start to see if we can make some changes that align with our roadmap.
Marc: Yeah. I imagine as a small project maintainer, somebody from AWS opening up a large PR into the project, that's going to take you aback for a second, like, wait, what's going on here?
Chandler: Yeah, exactly. What are you all doing?
Marc: So I want to kind of talk about the ecosystem in general around Kubernetes.
So I know like on a day-to-day basis, my daily job, and I know Benjie's daily job, we're actually helping Kubernetes developers with specific problems.
And Kubernetes has won the scheduling and orchestration war, I guess, if you will, but it's still not everywhere.
Like everybody either has Kubernetes on their roadmap right now, they're starting to adopt it or they're planning to, but we're like all looking forward to this world where Kubernetes is just a commodity.
Everyone has Kubernetes, you can say, "Hey, here's a Kubernetes manifest, you know what to do with it."
I'm kind of curious, like, how you see EKS Anywhere, EKS-D, EKS kind of taking the helm of that and really driving it forward and saying like, "Look, we're going to help commoditize Kubernetes," or is that not your mission at all here?
Chandler: Yeah. It's actually really fascinating.
When you look at all the different Kubernetes offerings that Amazon has, we've really done a good job I think about meeting the customer where they're at as far as how much they want us to manage and how much they themselves want to manage.
So if you think about it like a spectrum or a scale, on one far end of the scale, you have EKS with Fargate and you give us a container, we deploy it for you, right?
I mean, that's like the most managed experience you can get right now with Kubernetes.
And then if you come the other direction from that, you could have EKS where we manage the control plane still, we're obviously managing a lot of the infrastructure you're in the cloud and then you can have managed node groups where we manage the actual node groups themselves for you. We can do updates and various things to those node groups and you give us the container, but you have a little bit more control.
And then as you keep going down this less managed direction, you can have EKS and you can bring your own worker nodes, kind of the original way EKS launched.
And then if you're like, "Okay, I don't even want EKS. I don't want you to manage anything at all. I'll just run Kubernetes myself on top of AWS."
Some customers choose to do that. And then if you're like, "Okay, I want help with Kubernetes, but I want to manage the infrastructure," we have Outposts where we can bring a rack to your data center and you could run EKS on that today.
And then if you say, "Okay. Well, I don't want you to manage the infrastructure, I want to put it on my own infrastructure, but I need some help with Kubernetes," then you have EKS Anywhere.
And so it kind of-- And that's what we've obviously talked about today.
And then obviously with EKS Distro, you say, "I don't even want that. I just want the bits that you're using under EKS and I'll stand it up myself."
And so it's like we've kind of checked the box on almost every option of manageability you have now as a customer.
And like you said, I think customers over time will move more towards the you guys just take care of it for me strategy, but there's real reasons why customers still want to tune certain knobs or they feel like they want more control or they just can't use the cloud for certain workloads or they want to use outposts for certain workloads and that's why we're kind of meeting them where they are with a Kubernetes solution.
Marc: Yeah, sure.
And you mentioned air-gapped installs earlier, or like maybe a cruise ship in the Arctic, that it might not be--
Like for security reasons, it might just be for like physical reasons that it's an air-gapped environment they're running it on, or through a government agency, does EKS Anywhere support those environments today?
Chandler: Yeah. So we support kind of bring your own container registry today.
And I think that actually is launching in two weeks or something like that, three weeks, the bring your own container registry option.
So you can actually pull the images and binaries down and you can just put them on your own registry on-premise and do the install that way.
And over time we'll help customers more with that by packaging a container registry, if you don't already have one, along with EKS Anywhere.
So that'll be another one of those things on our roadmap that we'll get to. That is a big use case of ours.
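The bring-your-own-registry step Chandler describes amounts to pulling the published images down and rewriting their references so they resolve against an on-premise mirror. A minimal sketch of that rewrite, where the mirror hostname is an invented example and not a real endpoint:

```python
# Hypothetical sketch of bring-your-own-registry: rewrite upstream image
# references so they point at a private, on-premise mirror while keeping
# the repository path and tag intact. The mirror host is an example.

def mirror_reference(image: str, mirror: str) -> str:
    """Swap the registry host for the private mirror, keeping repo path and tag."""
    _, _, path = image.partition("/")
    return f"{mirror}/{path}"

images = [
    "public.ecr.aws/eks-distro/kubernetes/kube-apiserver:v1.21.2-eks-1-21-7",
    "public.ecr.aws/eks-distro/coredns/coredns:v1.8.3-eks-1-21-7",
]
for image in images:
    print(mirror_reference(image, "registry.onprem.example.com"))
```

The actual copy would be done with your registry tooling of choice; the key idea is that only the registry host changes, so manifests stay otherwise identical.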
And a lot of our customers want us to be able to install Kubernetes completely disconnected, and it ranges from totally air-gapped, no connectivity at all, put it on a DVD and give it to us, to like, well, sometimes it'll have connectivity off and on depending on where the ship is in the world and how much bandwidth we can allocate to that, et cetera.
And then to like it needs to survive five days of disconnect or one day of disconnect.
So there's all sorts of disconnected use cases that we're aiming to support. So a big, big strategy of ours.
Marc: Yeah. And it's like install disconnect versus like runtime disconnect.
I'm sure there's like that's more of a spectrum than like a yes or no question, I guess.
Chandler: Right. Exactly.
Benjie: So now we're getting to the edge, pretty, pretty edgy stuff here when we're talking about the Arctic and a ship, which actually makes me curious.
So K3s is a pretty awesome Distro.
We use it at Shipyard and some others I know use it, and I believe Marc does as well.
Is EKS Anywhere going to have like a light version that maybe uses K3s or is that on the roadmap just because when you start talking about these really isolated environments, I definitely can see a lighter weight type of K3s type Distro with Amazon support being pretty interesting to a lot of people.
Chandler: I mean, I can't specifically talk about that yet, but what I can say is Amazon and the Kubernetes strategy in general has really been passionate about checking all those boxes like I mentioned earlier, on that kind of scale of what do you want the Kubernetes cluster to look like and how much do you want to own and what does that architecture of the cluster need to look like for us to kind of satisfy our customers?
And when enough customers ask for a specific use case or feature, we build it.
I mean, that's just the reality of it or big enough customers ask for it. So I mean, that's just the reality of how Amazon works.
I mean, they really are obsessed with helping customers with what they're asking for.
And it's definitely something we could look at in the future.
We also are seeing the K3s project pop up all over and Rancher's done a great job with that project.
And it seems to really be hitting the right nerve with a lot of customers.
Benjie: Yeah. Well, I have to take that opportunity to ask for a free control plane for EKS, just the hosted version, not a big customer, but a decent sized customer.
And I'd love to get a control plane, free control plane. Just had to throw that out there.
Chandler: I'll run it up the line and let them know.
Marc: So Chandler, you talked about like getting EKS into these disconnected or remote or hard to get to environments and the customer would bring their own registry and then they could push the images in and like, great.
It kind of makes me think around the topic of supply chain, like software supply chain, it's like super relevant.
There's been some huge, huge attacks and huge vulnerabilities in the news recently from SolarWinds to many other ones.
I have a feeling over the next few days here at KubeCon there's going to be some talks in the security track around supply chain, there's cool stuff happening.
What are you doing in that space? If I want to like take the Amazon bits, it's great.
It's open source. I can see like how you built the code, but now I'm going to push it and run it in my own registry and in my environment, but it's the Kubernetes control plane.
It's a pretty privileged piece of infrastructure that I'm going to run.
How do you help me make sure that I'm running something that is the code? This is the bits that you wrote.
Chandler: Yeah, absolutely. So that's a great question. EKS Distro is what that was all about.
And we actually publish the exact manifest and we give you the exact hashed binary basically.
So you could technically, if you don't trust us, rebuild the binaries and then compare the hashes against the two and make sure it's the same.
And so we're doing our best to kind of provide that supply chain security.
And that was a big reason why we did EKS Distro because it's not always obvious to people where builds are coming from, especially with open source projects.
And so we wanted to kind of control that supply chain for all the components.
And I think some customers think Kubernetes is just like one binary or something.
It's actually a dozen or two dozen different images and binaries and special builds.
And so we stand up for our own infrastructure internally and rebuild all that stuff and then publish it so that customers, they can have the confidence that their builds were done by us and then they have a way to verify that.
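The verification Chandler describes, rebuilding the binaries yourself and comparing digests against the published manifest, boils down to a hash comparison. A minimal sketch, where the manifest shape and artifact bytes are simplified stand-ins for the real EKS Distro release data:

```python
# Minimal sketch of supply-chain verification: compare the SHA-256 of a
# binary you built or downloaded against the digest published in a release
# manifest. The artifact bytes here are placeholders, not a real build.

import hashlib

def verify(artifact: bytes, expected_sha256: str) -> bool:
    """True if the artifact's digest matches the published one."""
    return hashlib.sha256(artifact).hexdigest() == expected_sha256

blob = b"example kube-apiserver build"
digest = hashlib.sha256(blob).hexdigest()  # stands in for the published digest
print(verify(blob, digest))          # matching build → True
print(verify(b"tampered", digest))   # altered build → False
```

If you rebuild from the same pinned sources and your digest matches the published one, you have evidence the shipped bits correspond to the open source you can read.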
Marc: So that's a lot of work when you think about, oh, I'm going to take--
Somebody who might be new to Kubernetes and they've only run managed Kubernetes, they might not understand that there's like a dozen or two dozen different containers and binaries and processes from the scheduler or controller manager, whatever these are running.
Can you talk a little bit about what were the challenges with building EKS-D and then EKS Anywhere as you dug into it and realized like this is actually going to be a lot more work and this is why every company in the world shouldn't be trying to make their own Distro.
Chandler: Yeah. Well, EKS or Kubernetes, I should say, is built with Prow for their dev builds.
And actually, the release builds end up using something different than Prow on the Google side.
Prow's a great project, but it's really meant to run on the Google infrastructure.
It wasn't really meant to run on the AWS stuff. So we had to make some changes there, and we wanted to use Prow because we want to stay aligned with what the community is doing.
And also Prow's a pretty cool project the way it works with, especially some of the GitHub integrations and things like that they've done.
So we thought Prow was the right way to go for building Kubernetes internally.
And also being Amazon, we're very operational minded and we didn't want to take any dependencies on any outside SaaS platforms like GitHub.
So we actually clone all the repos internally.
We stand up all of our own Prow infrastructure and we do all the builds, publish them to S3, publish them to ECR.
Yeah, it's a lot of work, and then we have to keep all those environments updated and maintained, and new versions of Kubernetes come out.
We have to obviously do new builds and when CVEs come out, we have to be on top of that to do new builds.
So yeah, it's a lot of work and I don't think anybody would want to do that work or should try to do that work unless you're in the business of distributing Kubernetes images and binaries.
Marc: I think when we were first looking at EKS-D when you came out with it, I think there's a lot that you have done with your team and a lot of it is just really out in the open that like people can hook up into in different places.
Like I was just surprised to see, I think like at the time, and I don't know if you're still doing this, there's like a public SNS topic that you could just subscribe to and be like, "Oh, there's a new version available. And I could hook it into any pipeline that I want to from that point forward."
Chandler: Yeah. I thought that was a really cool feature.
I always talk about that with customers is that you could subscribe to this topic either as an email alert or whatever.
And then technically, I don't know if anyone's done this yet, but I think it'd be pretty cool to write some automation to say, "Oh, there's a new version of EKS-D out. Let me pull it down because the SNS topic itself is just the manifest."
Right? It's a CRD essentially.
And let me pull that CRD down, roll it out to a dev cluster and then run my whatever conformance test or whatever you're going to run against it to make sure it doesn't break.
And you could do all that.
We could release something on a Friday, the worst day to release software in the world, and then when you come in on Monday, you could already have your tests passing or giving you results, which I think would be a pretty cool workflow.
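The automation Chandler sketches, where the SNS notification itself carries the release manifest, could start with a small handler that extracts the release name and component versions before kicking off a dev-cluster test run. The message shape below is simplified and invented; the real EKS-D manifest has more fields.

```python
# Hypothetical sketch of SNS-driven release automation: a notification
# delivers the new EKS-D release manifest, and a handler pulls out the
# release name and component versions so a dev-cluster conformance run
# can be triggered. The message schema here is simplified.

import json

def handle_release_message(message: str) -> dict:
    """Extract the release name and a component→version map from the notification."""
    manifest = json.loads(message)
    return {
        "release": manifest["name"],
        "components": {c["name"]: c["version"] for c in manifest["components"]},
    }

message = json.dumps({
    "name": "kubernetes-1-21-eks-7",
    "components": [
        {"name": "kubernetes", "version": "v1.21.2"},
        {"name": "coredns", "version": "v1.8.3"},
    ],
})
summary = handle_release_message(message)
print(summary["release"], summary["components"]["kubernetes"])
```

From there the handler would roll the manifest out to a dev cluster and run whatever conformance suite you use; that part depends entirely on your own pipeline.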
Marc: Do you do stuff like that to get the upstream patches from Kubernetes into EKS-D internally?
Chandler: Yeah. And also, most of the patches are CVEs that we're most concerned about.
And we have folks on my team and on the Kubernetes team on the security committee, the Kubernetes security committee, the PSC, and so we hear about some of that stuff ahead of time and start to work on it.
But those are the biggest things that we try to stay on top of as does the entire community.
Marc: Transitioning there and speaking more about community in general.
So EKS-D is open source, and so is EKS Anywhere.
If I want to just ask questions or get a little bit more involved, can you talk a little about how you actually are managing the community for these projects?
Chandler: Open-sourcing software, I feel like you do for different reasons.
One might be to build a big community around it, one might be to show transparency around what you're building.
There's lots of reasons to do it. For these projects, the primary goal wasn't necessarily to build a giant community.
That giant community already exists, which is Kubernetes and the CNCF.
And so we weren't aiming to build a giant community on both these projects. It might happen.
And if it does, we will support that and with infrastructure on our side.
But the easiest way to ask questions is just to open a GitHub issue. And we really like that.
In fact, we monitor those issues pretty regularly and it's exciting for us because our team built this project and whenever someone's interested or trying it out, we're like, "Oh great. That's cool that you like what we did."
So I would just say go to either one of those projects and just open an issue against them and that's typically the best way to do it and start to get involved.
And we're open to any PRs or issues, or anytime someone wants to jump on a call, our team is very excited to do those types of things.
So we definitely want to support the growing community, I should say, around these projects, but it wasn't our primary goal.
So we don't have like Discords or Slack infrastructure or anything like that set up yet.
Benjie: So do you have any kind of success stories that you can share with us around EKS Anywhere or EKS-D?
Chandler: Oh, good question.
I should have prepared that because I don't know which customers I'm legally allowed to talk about, but we do have customers that are running both these projects and interacting with us for sure.
And I would say we have a lot of interested and successful customers doing this in the telco space.
And I mentioned some of the cruise ships, also the online gaming space, both types of gaming, gambling gaming and computer gaming.
So those are the big use cases and then obviously we mentioned the three letter agencies again.
So those are kind of the types of groups that we're working with today.
And I think that that really is a good example of the types of customers that EKS was really built for.
Benjie: Yeah. So they're really just leveraging you guys for the Distro and then obviously for support like we've been talking about.
Anything that was an unexpected benefit of the Distro or anything like that, that just kind of popped up.
I know it's early, but I'm just curious if there was something that got solved that you're like, "I never thought we would be solving that problem with this."
Chandler: Oh, interesting. Yeah.
Oh, one thing actually I can think of is we have this AWS project called Snow, which are these physical devices that you can buy and they look like little towers and you can buy them and they have compute and storage capacity on them.
The AWS Snow project and a customer who, again, I don't know if I can talk about them, so I apologize.
But anyway, they put them in a Humvee and they ran like six of them with an EKS Distro cluster on them.
And I thought that was a pretty cool project.
And I never thought of that application specifically all combined together working out, but it did.
And it is a pretty cool project.
Benjie: That is a really cool one.
Marc: So Chandler, you just launched EKS Anywhere in September, there is a public roadmap, summarize it for us that somebody who's just listening right now and maybe hasn't actually visited the roadmap, what are the big things that the team is trying to tackle right now?
Not maybe in the next week or two, you talked about that, but like what are you going to ship the rest of this quarter in the beginning of next year?
Chandler: Yeah. So I think about our roadmap in kind of three buckets.
The first bucket is the "all the things we didn't ship at GA" bucket.
Everyone who's done software knows that this is how it goes, right?
And so we got to clean up our upgrade story a little bit.
We have to add support for this thing and that, and get some more testing around certain areas.
There's a whole bunch of work that I would say is three to six months of work of just cleanup.
That's one bucket that a bunch of folks are working on. The second bucket is really about expanding the CAPI providers that we support.
So as I mentioned earlier, we want to provide support for cloud-stack, some customers have asked for cloud-stack support.
So we want to provide support for cloud-stack.
And we also are going to provide support for direct to bare-metal.
So those are the two probably very, very important things for us to get done in a short amount of time.
And then the third piece is really that community piece, that ecosystem piece, adding these additional capabilities onto your cluster after it's up and running, like secret storage and backup tooling and monitoring and logging tooling and those types of components that customers need.
So that at the end of the day, they can install EKS Anywhere.
And there's very little additional work they need to do to get their application into production.
And that's our goal.
Marc: That's cool. And I love the way you're thinking about that too, because like the AMI Marketplace could solve some of these problems, right?
Like cloud marketplaces are one click to launch an application, but with the GitOps integration and everything you're doing, it's like that's not really the problem you want to solve.
It's really around maintainability and supportability of that cluster and whatever workload's running in that cluster long term that's important.
Chandler: That's right. That's absolutely right.
Benjie: Chandler, tell us a little bit about troubleshoot. I know that's a project that's part of this whole thing.
Chandler: Yeah. Obviously support is how we're making our money.
And so we needed a way for customers to get us their logs and a bundle of information about their cluster so that our support team and the service and engineering team could make sense of it.
And it's a big challenge, especially with customers who are air-gapped or when they're on-premise, we can't reach into their cluster and figure out what's going on and just start running commands and tailing logs.
So we need a way to see what's happening.
And the work that Replicated did with the Troubleshoot.sh product was way farther along than anything we would've ever been able to build.
So we're really fortunate to be able to integrate that project.
And it was dead simple to integrate by the way.
We actually had our intern do it or at least start it.
He's no longer here, but he did a great job with it.
And we're really excited about that project.
It gives our support teams and our services teams a lot of information about what's happening.
And there's a lot of other cool features that it has that I don't even know if we're taking full advantage of yet like a redactor.
So when they send you information, you can redact certain PII.
And it also has some really cool features that can even start to diagnose issues before they even send them to you just by looking at the log.
So a great project and we're pretty excited about continuing to build out what it's capable of with EKS Anywhere.
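The redactor Chandler mentions scrubs sensitive values from collected logs before they ever leave the customer's environment. A rough sketch of that idea, where the patterns are illustrative and not Troubleshoot.sh's actual rule set:

```python
# Rough sketch of the redaction idea: scrub values that look like secrets
# or PII from collected log lines before a support bundle is shared.
# These two patterns are examples only, not the project's real redactors.

import re

REDACTIONS = [
    (re.compile(r"password=\S+"), "password=***REDACTED***"),   # credential pairs
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "***IP***"),   # IPv4 addresses
]

def redact(line: str) -> str:
    """Apply each redaction pattern in turn to a single log line."""
    for pattern, replacement in REDACTIONS:
        line = pattern.sub(replacement, line)
    return line

log = "login from 10.1.2.3 with password=hunter2 ok"
print(redact(log))  # → login from ***IP*** with password=***REDACTED*** ok
```

The real project expresses redactors declaratively and also runs analyzers over the collected data, but the core mechanic is this kind of pattern substitution applied before anything is sent to support.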
Marc: Awesome. Thanks. We're pretty happy with that.
I think that's the cool part about this ecosystem, right?
It's like the leverage you get by being able to take the open source work that somebody's doing and add it to the open source work here and then add your own bits on top and then like, sure, we're not shipping GPL software, so you could take it and license it kind of any way you want to, but like the community's not doing that.
We're all just like making it available for everybody else to continue to build the next thing on and leverage is powerful.
Chandler: Yeah. And it's great. And if we have a problem, we just call you all up and say, "Hey, what do you think of this?"
And everyone just works together and it's a really great thing.
Marc: Yeah. Awesome. Chandler, well, I really appreciate you taking the time out of your day today.
Before coming down here, looking forward to hopefully running into you in-person here in KubeCon but if not, seeing what you guys are shipping over the next few months with EKS Anywhere.
Chandler: Yeah. I appreciate it. I had a great time. Thanks for having me.