Open Source Ready
45 MIN

Ep. #23, Kubernetes, AI, and Community Engagement with Davanum Srinivas

about the episode

In episode 23 of Open Source Ready, Brian Douglas and John McBride sit down with Davanum “Dims” Srinivas to discuss the health and future of the Kubernetes community. They explore how corporate changes impact open source contributions, the importance of onboarding programs, and the challenge of sustaining long-term contributors. Dims also shares insights into Kubernetes’ evolving role in AI and GPU workloads. The discussion is equal parts career advice, technical insight, and open source storytelling.

Davanum “Dims” Srinivas is a long-time open source contributor and Kubernetes maintainer who has helped shape the cloud native ecosystem for over seven years. Currently at Nvidia, he focuses on making Kubernetes relevant for AI workloads while championing contributor growth and community health. Known for his leadership in CNCF and Kubernetes governance, Dims brings a wealth of experience in sustaining large-scale open source projects across multiple organizations.

transcript

John McBride: Welcome back everybody to another episode of Open Source Ready, as always, with Brian. How are you doing today?

Brian Douglas: I'm good. I'm hot in the booth, literally in a booth right now. It's like one of the hottest days in San Francisco of the year, so it's fun times.

John: Yeah, I heard there was a heat wave going through. One of my buddies was telling me that they just moved into a new office and they don't have AC. Do you have AC at least somewhere?

Brian: No, we do not have AC. We have windows. Not like Microsoft Windows, actual real windows, but we do work right next to the LinkedIn building, so they have a good bottom floor area. I was actually just out about an hour ago.

John: Nice. Well, we didn't come here to talk about HVAC. We are here today to chat with Dims, who I've been wanting to chat with for a little while. It's honestly an absolute honor.

Dims is legendary in the cloud native ecosystem; he has been working in Kubernetes forever, I think. Dims, why don't you give us an intro and tell us what you've been working on.

Davanum "Dims" Srinivas: Thank you. so my nickname is Dims. I go by Dims everywhere, GitHub and Twitter and Slack and so on. That's actually my nickname. And I have a long story behind the nickname, but there's just too many people with the same name as me, which is Srinivas. So I ended up with the nickname in college and it kind of stuck. My wife calls me Dims too.

John: Oh really? Amazing.

Dims: Yeah, I've been in Kubernetes stuff for probably seven and a half years or so. So across multiple jobs and still sticking to the community. So it's been a lot of fun learning from folks and doing a bunch of hopefully useful work.

John: Yeah, well, I know it's been impactful for me. You and I first rubbed shoulders during the VMware days and then also when we were at AWS. And I know you were critical in getting new people involved in the communities and also getting companies to be involved in these communities.

So one of the big reasons I wanted to bring you here was to chat about the state of these communities. Things seem to be rapidly changing, not only in open source, but also just with how things are changing with companies supporting these open source ecosystems. There was a lot that changed when VMware was acquired by Broadcom.

Now AI is a big player in a lot of this. So big broad question, how do you currently see the cloud native ecosystem in relation to its community and its people? How is the broader health from your view?

Dims: So just like you mentioned, there are ebbs and flows across the companies and across the people, and we always have to be on the lookout for how we get more people in, how we get people who are already there to do more, and so on.

For example, when VMware got bought by Broadcom, we knew that the contributions were going to go down and then maybe pick back up later. We saw some heavy movement in different companies like Intel, around the Open Source Program Office and the aftereffects of it.

So what ends up happening is it does affect the community, it does affect the work that is going on in the community especially because a lot of the things that we end up doing in the community are longer term things.

When people pick up something, they want to see it through. They don't just want to stop at some point. So what ends up happening is when they lose the support that they were getting from their company, they don't know what to do.

So then it becomes, "okay, do I do it on my own or do I hand it off to some other people and who else is interested?" So we think of this as like a funnel in some way. Like you need a lot of people coming in.

So we have the new contributor orientation and we have outreach during KubeCon, and we know that a lot of new people end up learning about cloud native and Kubernetes at KubeCon. A lot of the time it's 60, 70% of attendees who are coming for the first time. So there is a good pool of people trying to come in, but then Kubernetes is a mature project, so we move slowly, for good reasons, because we don't want to break people.

So then it becomes about how to engage those people. And we have some programs for new contributors and new people to get engaged and so on and so forth. So we do try to take a deliberate approach to this, but it's hard to get people and keep people for the longer period of time that we need them for.

John: Yeah, that's something I think about a lot, just being in Cobra and some of these smaller libraries that feed into the cloud native space. You know, it's like, how do you engage people to onboard to these super mature projects? What's been successful?

Dims: The programs that are successful are things that run short term. A really good example is the Kubernetes Release Team process. You sign up for three months, and you become somebody who is taking care of some documentation, or shepherding the enhancements process, or watching the CI signals and things like that.

There is a runbook which tells you what is expected of you at what phase of the release process. And there are some mentors in there who are going to handhold you and there are some leads and there are shadows. So we have this program that churns out people to do some stuff that is very meaningful to us. Otherwise it's just on a handful of people doing work all the time.

We do have those. We have a set of people who are in SIG Release that create the automation, and then the folks who are on the release team run that automation and report bugs and whatnot. And so, by this process, the new people who are coming in get to know everybody who's there in the ecosystem and who they need to talk to.

Who do they escalate to? What does the chain of command look like? Who are the chairs and leads? They get to do networking, they get to know the features that are going to come out, they know who to talk to and how to escalate. All sorts of things they learn.

All these are transferable skills and they can use them later. So a lot of people from those sorts of programs do tend to stick around even after they are done.

So those kinds of things where you set up a process and it's a self-sustaining process over a period of time that accumulates people and introduces them to the community, those have been the ones that have worked really well for us.

John: Yeah, it makes a ton of sense. Something that was very surprising to me, first getting involved in open source was just how personable it really is. Like how much that human to human interaction is just really important.

Like you said, getting to know the people, getting to know the SIG chairs, SIGs being the special interest groups, and really just understanding that big web of who's who and being able to escalate when necessary. It's very daunting, just starting to network and starting to understand who those people are.

Dims: Right. So the way I tell people is, look--

There's always somebody who knows something more than you and there's some set of people who know less than you. You learn from one set and you teach the other set.

Right. So that's the way to look at it. You cannot always be afraid to ask something. One example I give people is like look, if you don't ask a question, we don't know who you are, we don't know what you're interested in, we don't know what you want to learn or what you know. How are we supposed to engage with you if you don't give us a hook for us to talk to you?

When you leave a breadcrumb like that next time we know that, hey, this person was interested in this topic, maybe they still are. And maybe we give them some small tasks to do, or engage them in a conversation. Right?

So a lot of the time, people who come to the Slack, either in the CNCF Slack or on Kubernetes Slack, say, "hey, help me, I want to do stuff." Who are you? How do I know you? What do you do and what do you know and what do you not know? What are your interests?

So by the time we get to know them, they're gone. So it becomes harder if you don't engage. So we are all like you, we all started somewhere. I always say, come talk to us. Sometimes we bite, sometimes we don't.

If we don't respond, it means there's something else going on. We are not trying to box you out or put up a firewall or whatever. It's just that some of us have two jobs or three jobs or whatever else. So that is all it is. Usually we are very friendly.

John: Yeah, managing the community definitely seems like a whole thing in itself. It reminds me, you talking about the who's who and trying to understand that, and just that question of who are you? I felt that very frequently when I was at AWS working on Bottlerocket, and people would just drop into the GitHub repo and I'd be like, "who are you? Are you an AWS person? Are you a customer? Are you just some random person who's trying to run Bottlerocket on your own or something crazy like that?"

This was something we tried chewing on at OpenSauced, which eventually was acquired by the Linux Foundation. So I'm very curious to ask you, what is the current set of tools that you and the other Kubernetes maintainers use to understand the community, understand the who's who, and maybe even the broader health of the community?

Dims: So knowing where people are coming from is definitely important, right? The day before yesterday somebody asked a question, "hey, when are you going to update Kubernetes 1.34 to Golang 1.25?" out of the blue, right? Yeah, okay, we can give you the answer, but then we need to know why you're asking that question and what your purpose is.

So just looking up their name, looking up what they are doing. I'll go to LinkedIn and look up their GitHub handle, do a little bit of background research, and then ask that person themselves, "hey, here is the long answer. Go read this KEP, it has all the answers for you. We've written it down. And the TL;DR is we won't randomly change things; only when the version of Golang, 1.24 or 1.23 or whatever, is going to go out of support do we go update it. Otherwise we won't randomly do it."

It turns out that person is from SUSE and they're asking because they want to do some pre-work on K3s or whatever, so they want to do some additional work beforehand. So it was a good conversation, right? I got to know the person, I know where they are, and next time when there is a problem, we'll tell them.

Like once I know that they exist, next time I'm going to say, "hey, we are going to update 1.34 to 1.25, maybe you should go do some work now." So that is basically the point that I'm trying to make.

Brian: Yeah, it's interesting because I'm actually dealing with a very large open source project. It's not the size of Kubernetes, but we get a lot of those one-off folks asking questions that are very, very pointed questions but no background, no context. A lot of times no avatar.

So it's obviously someone who has a day job, and just like you have multiple jobs, they have multiple jobs, but they're just driving by. It's, "I'm frustrated, or I got to the point where I need to ask this question. Here's the question." No context.

Dims: Yes.

Brian: So yeah, I'm curious, because Kubernetes has a rather large footprint, so you'll have a lot of ways to engage. And as you know, Dims, and as the audience knows, I spent a short amount of time with the CNCF and then also with the Linux Foundation, and we also spent a ton of time with the CNCF during the tenure of OpenSauced, so I got to know a bit about how the CNCF works.

But I'd love to learn more about how you orchestrate some of this information with the TAGs and the TOC, if you can help color that. How do you get your changes implemented? Do you need to be represented by a company? Or can you walk in?

Dims: I will dive right into it. This was a thing that we talked about just a few days ago. We were trying to come up with like "hey, lots of people do AI using Kubernetes, especially inferencing. Is there a set of things that is common across all these people? And what are those things? Can we do like a conformance testing program around it?"

And so first we were thinking about vendors, right? Like vendors are the ones that are doing distros or this or that where there is a Kubernetes available that you can use and does it support AI use cases or not? Right. Like so usually we have a conformance testing program which validates, "hey, this is valid Kubernetes. It does all the things that Kubernetes does. It quacks and walks like Kubernetes, so it is Kubernetes. Right?"
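As a point of reference, the existing conformance program is a standard test suite that anyone can run against a cluster, and Sonobuoy is the usual CLI for running it. A minimal sketch, assuming Sonobuoy is installed and your kubeconfig points at the cluster under test:

```bash
# Run the official certified-conformance suite and wait for completion
sonobuoy run --mode certified-conformance --wait

# Download the results tarball and summarize pass/fail
results=$(sonobuoy retrieve)
sonobuoy results "$results"

# Clean up the test pods
sonobuoy delete --wait
```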

So we needed something like that for AI. But then we realized that we shouldn't be thinking about vendors per se. We should be thinking about the end users of this conformance testing program. It's them who need guidance and confirmation that the Kubernetes they pick has all the capabilities they need.

So we switched it around and we said, hey, we are going to use lazy consensus. We are not going to go vendor by vendor to get an okay from everybody. And we also have values in Kubernetes itself and in the TOC and CNCF, in general, where we are talking about no kingmakers, which is for projects, but also we represent ourselves as much as possible. We represent our employer as well.

You would have heard some of the Kubernetes folks saying, "wear your agenda on your sleeve so everybody knows and nobody has to guess." Right? So yes, you need to bring that into the equation. But people will know who you are as well, right?

If two people from Nvidia go to the same working group, like me and the person who's doing the DRA work, right? So they will know how to distinguish between the two of us. Right? They will know that this person is coming from SIG architecture perspective wearing this hat and that person is coming from a working group for different AI things. And they are representing a different angle of the process.

And at one point I was known as the OpenStack person. And nobody calls me the OpenStack person anymore. Yeah, it's a lot of that. But in general, yes, your employer asks you to go work on this stuff. And yes, CNCF is set up to be a place where vendors aggregate. But as much as possible, we try not to let that get in the way of true collaboration. Especially because, like I said before, there is a bunch of us who stay with the project across the companies that we work at as well.

John: Yeah, that's super interesting. I've always been really fascinated by some of the people that seem to go amorphously between companies staying on these projects, and they're obviously some of the more important people that, in my eyes, the whole thing would sort of fall apart without.

Maybe this gets into the realm of career advice, but how do you achieve that level of prestige? That seems crazy to me.

Dims: Before we get to that specific point, I do want to say, we have a lot of people who come for a specific feature or a specific bug or things like that. If you look at the statistics in the LFX Insights, for example, in the last year there were about 100 people who did 50% of the work.

John: Oh, wow.

Dims: And that translates to around seven organizations sponsoring these hundred folks. Right? And the rest is a long tail. So you can see how many people will come and go. Right.

John: Well, just to double click on that. So just so I understand--

Some 50% of the work is done by about a hundred people across 7ish orgs.

Dims: Right. So the people who come and go are doing very specific work. Like SIG Node storage, you know, they're a storage vendor and they need to get a patch into Kubelet because their CSI driver has a problem or something like that.

So then what ends up happening is we do want to convert those people to stay. Like, you know, you're going to get a change into Kubelet, so why not do some work in SIG Node as well?

So we want to persist those people across things and elevate them through the contributor ladder that we have, and get them to the point where they are active members, you know, chairs and leads of some of the subprojects or special initiatives or whatever it is.

So we need to get them through that ladder to a position where they can work across Kubernetes special interest groups. That is when we know they'll stay, right? Even then we lose people. There are lots of people who go to some specific big companies I won't name, and we never see them back again.

That happens. That's not a wrong thing either. But you learn everything and you go there and you apply the things that you learned here. So that's not a wrong thing or anything. Life changes.

So there is a set of people who have stuck around. That doesn't mean that more people aren't joining the group, and that doesn't mean that some people aren't leaving that group on a periodic basis. The idea is to make sure that the group is big enough to carry the burden across years. So that is the challenge here for the long-term survival of Kubernetes.

We need those kinds of people who are able to influence more of what gets done in Kubernetes and have memory. We can't write everything down. Google Docs get stale and KEPs get stale, PRs die on the vine, and issues don't get addressed, or get auto-closed by the bots.

So there is some amount of organizational memory that is still there in people that we need to be able to persist across time. And that is the hard part for us to make sure that there is enough people.

That doesn't mean that we don't have challenges around funding CI/CD jobs and things like that. That is always there. But from the people perspective, we do want to elevate people to get to the point where they are active contributors, long term contributors. And that is the challenge.

So it is more than a prestige. It is a lot of burden and a lot of work and a lot of pain over time. Yes, the prestige is there but the responsibilities are much higher. That is the issue here. And we do need to spread the responsibility and the pain a little bit more.

Just one simple example from yesterday: Ben the Elder and I were talking about the Kubernetes conformance program and how many of us have approval rights to promote tests to become conformance-binding tests.

And at this point, probably only one other person and I are active, and I was like, "Ben, come put your name in as an approver so I can go on a vacation, right?" So, that is the situation we are in. We are short on people who have been around for a while, or who can come up to speed quickly so we can train them, and who are willing to be there.

John: Yeah, it's a really excellent answer because sharing the load among a group of people who have all the context on an already massive project, that's big. Do you see it getting worse, getting better?

I definitely want to touch on AI as a thing that most developers seem to be using these days. Is that helping? Is that making it worse? What's your views on all that?

Dims: The first thing I'll say is we have to be relevant. Kubernetes has to be relevant. And if you were at KubeCon Paris, we kept saying, oh, DRA is there, DRA is there. But DRA wasn't there at that time. Now DRA is there, but there is more ongoing work around DRA, and there are at least five more KEPs that are going to land in 1.35.

John: And this is Dynamic Resource Allocation, for GPUs.

Dims: Yeah. And it makes it really easy, especially for Nvidia GPUs, for sure. But it's not meant just for that. It's meant to be general purpose across GPUs, and people from Google, Intel, and Nvidia are all working on parts of the DRA project, and it has touched everything.

Whether it is scheduling or Kubelet or SIG Apps, you name it, it has touched almost every basic component that we have, and it's looking good.

For example, Nvidia GB200 GPUs. If you have to use many of these GPUs together in Kubernetes, the only way to do it is using DRA. You can't use the old device plugin. So that is a huge enabler for Nvidia, who is my employer, I forgot to mention right at the beginning.

John: There we go.
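To make that concrete, here is a minimal sketch of what claiming a GPU through DRA can look like, assuming a cluster with DRA enabled and a vendor DRA driver installed; the resource.k8s.io/v1beta1 API version, the gpu.nvidia.com device class name, and the image are illustrative and may differ by release and driver:

```yaml
# A ResourceClaimTemplate asking for one device from a driver-provided GPU class
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaimTemplate
metadata:
  name: single-gpu
spec:
  spec:
    devices:
      requests:
      - name: gpu
        deviceClassName: gpu.nvidia.com   # illustrative; published by the DRA driver
---
# A pod that consumes the claim instead of using the old device plugin API
apiVersion: v1
kind: Pod
metadata:
  name: inference-worker
spec:
  resourceClaims:
  - name: gpu
    resourceClaimTemplateName: single-gpu
  containers:
  - name: worker
    image: example.com/inference:latest   # placeholder image
    resources:
      claims:
      - name: gpu
```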

Brian: I was going to ask about that. You mentioned Nvidia, and I also didn't know about DRA because I wasn't in Paris. But I'm curious, what's Nvidia's involvement in Kubernetes, and what does the outlook look like for GPU-powered inference that can scale?

Dims: Absolutely. So if you ask Jensen, what will he say? Right. We are a hardware company, we sell GPUs, we want to make them available to everybody as much as possible, and everybody should be able to utilize them the best they can, given that they are scarce and given that they are costly.

So utilization has to be very high, and things shouldn't fall over, because you run a lot of these things for a very long time. So we give the community whatever tools we can to make it as easy as possible, as quick as possible, as reliable as possible.

We have to do more work around observability and those kinds of things just to make sure that our GPUs are well used by everybody because of how important it is for everybody's work and their workloads, whether it is training or inference.

John: Yeah, this is a favorite topic of Brian's and mine. We'll chat sometimes about where we see the broader ecosystem of computing going. And I've been a firm believer that Kubernetes will be that platform, even though it was built as a container orchestration platform and then became the platform to build platforms.

And now it seems like the place that, with Nvidia GPUs or TPUs at Google, you would go to actually do inference. Is that a correct assumption?

Dims: I want to say yes, because, you know, I want more people to use Kubernetes. But the other way I also think about it is, hey, what is the set of things that needs to be available in a Kubernetes cluster that will make it really easy for workloads to be moved around?

Today you might have reservations in AWS, and you might not have them by the end of the year, and you might have to run the same workload in Google, or you might have to go to Azure or OCI. So what is the common set of things that you can rely on, so that the workloads won't see the difference, so to say?

So we have to build up that stack on the worker clusters, so to say. What is the observability? How are we monitoring these things, and what are the errors? How do we make sure that if one GPU out of eight is down, we don't just reboot the whole node? Because that's going to take a long time.

And if we can limp on with seven GPUs, for however long, to finish the training or whatever else is running there, then maybe we should do that. We should be able to surface these things not just to the device plugin and the DRA driver, but eventually into the application too, so it can take some proactive measures based on what is happening within the node, within the worker cluster.
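As a concrete example of the kind of per-GPU signal Dims is describing, the node already exposes it; a minimal sketch, assuming the NVIDIA driver and nvidia-smi are present on the node:

```bash
# List each GPU's utilization and memory so automation (or an operator) can
# decide to cordon the node or limp along on the healthy GPUs
nvidia-smi --query-gpu=index,name,utilization.gpu,memory.used,memory.total --format=csv
```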

John: Interesting. Yeah, well, I definitely like Kubernetes for AI. That's for sure.

Before we go to the next section, Dims, I wanted to give you the chance to give your pitch to people who may be listening who are interested in getting involved in Kubernetes. It's a lot of people in sort of the startup ecosystem that listen to this. So maybe companies who are looking at adopting Kubernetes. What would you say to those people who are early on this journey to Kubernetes?

Dims: So a typical thing that I would say to these people is please come.

We need people of all kinds, all stripes, all abilities, all capabilities. Like we will put you to work. We just need you to show up and we need you to show interest and we'll make use of you in whatever way we can.

The simplest thing is come join a couple of meetings in special interest groups. Just volunteer to take notes, don't do anything else. And just by doing that, you can ask questions and you can say, "hey, I wrote this this way. Is that the correct way?"

Engage, get engaged. Spend two hours a week or four hours a week. Limit yourself, put it on your calendar. Review some PRs and look at some issues. Change it around, talk to somebody, do whatever you can to understand.

One thing people will ask is, oh, do I need to know Kubernetes completely or do I need to know Golang completely? We don't care. You come, you look at the PR and you ask some questions and then we'll tell you why we made a choice or what was the background behind it.

And then you learn the language, you learn which piece of code we changed and what test we added. So then you can run the test yourself and see if it works. So don't be afraid and we don't bite.
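If you want to try that last step yourself, the Kubernetes repo supports running the tests for just the package a PR touches. A minimal sketch, assuming a checkout and a working Go toolchain:

```bash
# Clone the project and run unit tests for one component, e.g. the kubelet package
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
make test WHAT=./pkg/kubelet
```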

And we need you. We literally need everybody we can engage, and we welcome everybody. I'll give you one more example. In the other open source projects that I worked on in the past, I never felt that we had good involvement from people in India.

So here we started an "in-dev" channel about four and a half years ago. And now you can see how many people from India are here who are very vocal, who have leadership. They are CNCF ambassadors. They are everywhere.

So we do things like that to encourage people to come join and talk to each other. Don't get me started on good first issues. They have their place, but don't run after good first issues. Just go listen to what is happening in a SIG meeting. You'll get 15 good first issues, because not everything is written down, and not everything can be written down in a form where it's easy for a beginner to start doing stuff.

But if you show some interest, there are a lot of possibilities here. And don't immerse yourself and say, okay, I'm going to do hardcore Kubernetes for two weeks, and then burn out. We don't want that either. Right? So pace yourself, show interest, and engage.

John: Yeah, absolutely. Wasn't there recently a CNCF event in India? Was it a KubeCon India?

Dims: Yeah.

John: Okay.

Dims: India actually had a second KubeCon.

John: Oh, excellent.

Dims: Yeah, so the first KubeCon I missed, but I made it to this KubeCon in Hyderabad. So it was amazing. It was a lot of energy.

Yeah, it felt like a regular KubeCon, of course, with a lot of my friends missing. But I made new friends too.

John: Yeah, KubeCon has been one of my all-time favorite events. I've gone to a number at this point and have had the chance to speak at a few. If anyone's able to go, it's a little expensive, but you can always ask your employer, or there's a student discount, I believe.

Really excellent event and just so many people willing to be around on the hallway track and just talk to you about stuff.

Dims: You don't even have to be a student. You can look for sponsorships. There are four kinds of sponsorships, if I remember right. So you don't even have to be a student to ask, "hey, I need help. I'm looking for a job. I need help. I need to get to KubeCon." That is definitely a way to get there.

And there's so much networking opportunities while you're there as well. So talking about KubeCon, there is a special section of KubeCon called Maintainer Summit. So me and Jordan, we are giving a talk. So if you're coming to KubeCon, head on over to Maintainer Summit and hear us talk about, let me read the title.

John: Yeah, please.

Dims: "Tending the Kubernetes Dependency Tree." Your favorite topic too. And the ending of the title is "Bonsai or Bonfire."

John: Sounds like open source for sure. That's great.

Brian: Yeah. I'm actually looking forward to potentially going to Atlanta this year. So, yeah, highly recommend. Great event for everyone.

Dims: Yes. Yeah, thank you.

John: Well, I want to move us on to reads. So Dims, are you ready to read?

Dims: Sure. Let's get started.

John: Okay, so we have a few reads this week. Why don't we start, Brian, with you? You had a couple of reads. What do you got?

Brian: Yeah, well, one read I'll get out of the way. Dims, you mentioned LFX Insights. I've actually been building a successor to OpenSauced, not the LFX Insights part, but more the contributor side. It's called contributor.info, and this is another part that I was really passionate about, but it just didn't really make sense as a company.

So thanks to AI, I was able to build a bunch of interactions and charts and graphs for contributors, really for the sense of understanding how to approach projects. And we've also got, and I say we, but it's me and one contributor right now, workspaces for projects or companies that are maintaining a large amount of activity.

At my day job right now, Continue, we're actually doing that today, and I built a lot of this to help support that. So I wanted to mention that contributor.info is finally usable; people can leverage it.

I'll skip over to the next pick, which is the Pragmatic Engineer newsletter. I would highly recommend it if you're an engineer and you want to understand how engineering happens at other large organizations. It's a great newsletter.

And he recently had Boris from the Claude Code team sit down and do an interview to talk through how Claude Code is built inside of Anthropic.

And this is really fascinating, because this was like a skunkworks team earlier this year that's growing into a much larger team, it sounds like. But how they handle PR reviews: it sounds like the expectation is that 80 to 90% of the PRs that get put up, and this is open source as well, are reviewed by Claude first.

Which is melting my brain because when I was at GitHub, GitHub was a big practitioner of you review the code first. As the author of the code, you are the first reviewer and by the time it gets to PR, it's last mile time. Like everyone is coming in just to see if it aligns with the stuff they're working on.

So this is like inverting that. Or maybe it's just inverting my experience; maybe everyone's been letting it rip and letting the AI review code all the time. But I put a link into our show notes with the PDF. I do want to mention that this is a paid post, so it is specifically part of his paid newsletter.

But I do want to recommend people to check out the newsletter because I think it'd be worth it if you're looking to level up. But I'll take a breath there because John, I know you've got a lot of experience between VMware and AWS about PR review and knowledge transfer. I'm curious, in this advent of AI, what's your thoughts on how Claude Code is being built?

John: Well, the first thing that stuck out to me was they were trying to get it to do some AppleScript, which the astute listener will remember when I was trying to do a bunch of AppleScript, I think we were talking to some of the Flox people or something. But honestly, hilarious use case. Because nobody wants to write AppleScript. So just get AI to do it.

I don't know. Brian, you know, I talk about this all the time too. I go so back and forth between where in my mind's eye AI should sit in some of these things. And I think maybe one of the conclusions I'm coming to slowly is that any extreme is maybe bad. It's obviously not going away.

The extreme of just totally ignoring these tools and totally ignoring the capabilities that these things have isn't tenable. But then also I think, letting Jesus take the wheel and just letting it go, go, go, and not even looking at the code is a mistake on the other side of the extreme.

So it's interesting to me that they have it do a first pass. But I'd imagine there's things that they had to either revert or that they caught that they then had to go fix. I wonder where that human in the loop is ultimately. Yeah, Dims, what do you think?

Dims: So I've seen various teams do it differently. Like, one team uses CodeRabbit. As soon as you submit the MR in GitLab, CodeRabbit does a review. It makes really good sequence diagrams, apparently, and people love it.

So it gives an additional dimension to the review itself. You're not just looking at the code, you're also looking at a sequence diagram. Seriously, generated from the existing code plus whatever is in the PR. So it was like, mmm, okay. Then your wheels start turning.

So it gives you a little bit more context than just looking at raw code. And people don't just go by exactly what it is telling them to do, but it's your first pass. And then definitely people will chime in and say, "oh, ignore this suggestion."

People do tune these things to the team's liking. They might not like some suggestions. They'll turn things off and turn things on, as well.

It's a tool. You have to learn and you have to know how to use it and you have to live with it. It'll speed you up in certain cases and in certain cases, it's going to kick you, especially if you're not paying attention.

And if you're not actually reading what it's generating, then you're going to pay in the longer term. So you have to face all those things. So it's definitely, like you said, it's not going away and we should learn to live with it.

Brian: I was going to say, I'm actually working on a blog post on how to replace CodeRabbit with a custom solution. All the pieces exist out there. What I've been excited about is if you can get an agent to run, read your code, and follow proper rules, but also run offline, air-gapped, on your own server. That last part is the thing I'm very excited about: running it anywhere, but running it securely.
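As a sketch of the idea Brian describes, the pieces really are small; here Ollama stands in for any self-hosted model endpoint, and the model name and prompt are illustrative rather than a recommendation:

```bash
# Produce the diff for the change under review
git diff main...HEAD > /tmp/pr.diff

# Ask a locally hosted model for a first-pass review; nothing leaves the machine
ollama run llama3 "You are a code reviewer. Point out bugs, risky changes, and style issues in this diff:
$(cat /tmp/pr.diff)"
```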

John: Yeah, because we had talked about that big CodeRabbit vulnerability, or exploitation, that happened a month or so ago. So locally, or I guess on your own hardware, would make a ton of sense.

So I had a couple reads. The first one is maybe relevant to some of the things that, Dims, you've been seeing in the community and stuff.

It's actually a HackerOne submission to the Curl repository, or I guess the Curl CVE system, where the report was pretty obviously generated by AI, and it doesn't even really use any of the APIs from Curl or libcurl.

And I saw this on Hacker News, and basically bagder, Daniel Stenberg, who maintains Curl, was like, "hey, this was obviously written by AI, you're banned."

And Curl had taken a pretty extreme stance that they're not going to accept any AI-generated code submissions, especially since they get hammered by a bunch of these CVE requests. So it was very interesting that this caught wind and people were resonating with just, you know, "you're banned," which maybe is hilarious.

So Dims, I'm curious if you've had to make such a decision or seen any of these and just been like, "you know what, we can't. We're not going to spend time looking at this."

Dims: Oh yeah, I see all kinds of PRs getting opened, because I watch a lot; my GitHub notifications are crazy, and I'm a moderator on a bunch of things. So that's a whole other story. But yes, I do see PR submissions with AI, and issue submissions as well.

So, the thing I'm trying to say here is, at the Kubernetes level and also at the CNCF TOC level, we are trying to figure out what the policy or policies should be. There is some policy written down by the Linux Foundation already, and we are trying to figure out what we should do in Kubernetes itself.

For example, I have an issue open that the Kubernetes Steering Committee is thinking about: "hey, I want a way to say this specific PR was co-developed by Claude."

So then people will know "okay, I have to pay extra attention, it's a human plus AI entity that is doing the stuff so we need to pay more attention."

If I don't know that Claude had input into this PR, then I probably won't look at it the same way. Knowing that maybe triggers me to actually go pay more attention to it, because at the end of the day it is a human reviewer that has to deal with it, right, and accept it or not.

And also it'll give us, over a period of time: how many are coming from Claude, how many are coming from Cursor, how many are coming from somewhere else? It gives us some aggregate. And what is the general quality? How many of them got rejected? How many of them got reworked, and how many times?

So then we can go analyze some more data through the GitHub stuff, and then we can also tune some of the agent markdown files to the things that we like.

So we can say, "hey agent, go look at this. This is the kind of thing that we expect. We expect idiomatic Golang code, and we expect you to do certain things in a certain way, and here is what the PR body should look like, what the title should look like, whatever else."

So then we can add more information in the repository itself to help guide the agent to do better work, so it is less work for the reviewer on the other side.

So we have to think about those things as an open source project. We can't just say no to any activity that uses AI tools.
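What such repository guidance for an agent might look like is easiest to show; this is a hypothetical sketch, not an actual Kubernetes policy file:

```markdown
# AGENTS.md (hypothetical example)
- Write idiomatic Go; run gofmt and `go vet` before proposing a change.
- Every behavior change must come with a unit test in the same PR.
- Never hand-edit generated or vendored files.
- PR title: "<component>: <imperative summary>"; the body must link the
  motivating issue or KEP and describe how the change was tested.
```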

John: Yeah, one thing I wish these platforms like GitHub did, or maybe I just wish it was more of a common thing, is that people used "Co-authored-by" or "Signed-off-by" inside their commit messages.

And I wish Claude or Amp, and I've seen them do this sometimes, not all the time, sometimes I won't even try to make a WIP commit or anything like that, but I wish they just automatically co-authored a commit for me, and then I could push that so it automatically shows up in the data.

Brian: I think most of them do. Claude will do it if you request a commit. And I know this because Continue's got a CLI, so we also co-author as well. That way the commit is yours, but co-authored with Claude, or co-authored with Continue, or whatever.
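For anyone who hasn't seen it, the mechanism is a standard Git trailer in the commit message; a small sketch, with an illustrative subject line and co-author identity:

```bash
# Record AI assistance with a Co-authored-by trailer; GitHub attaches the
# co-author's avatar to the commit alongside yours
git commit -m "kubelet: fix device plugin socket cleanup" \
           -m "Co-authored-by: Claude <noreply@anthropic.com>"
```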

And then they'll have an account that's managed by the company, which will also have an avatar next to yours. So not everyone's doing this, but we were just talking this morning at work about the best practice of how to approach this.

We don't want to make it seem like AI is doing all the work, because the prompter is still, hopefully, thinking and reasoning first before writing the prompt. And if they're not, it's going to be apparent in the code review. But at least there's someone we can attach to the PR who's not just a faceless contributor.

Dims: Absolutely. And we need the data over a long period of time so we can make educated decisions around which tools, and we can recommend to people, "hey, we love these specific tools, they work well with our markdown files," and we can give some additional guidance, so to say.

Brian: Yeah, and I think we're moving into a place where the prompts that people use to generate code, hopefully in open source, but at least at companies and enterprises, are data we're starting to actually store.

So I think Cursor's done this, Continue does this, and a few other ones are basically building a knowledge store to understand which prompts are working, which ones are not, and generate rules based on those prompts. So we're building a sort of war chest of context engineering.

So again, when you were saying you've got to live with it, or at least deal with it, when it comes to this AI world, I think folks are really applying best practices and good engineering practices. And what's exciting is there are a lot of junior engineers, a lot of college grads, who don't have years and years of experience. Maybe they can't just jump in and write some code, but at least they can learn through previously documented experiences and prompts and examples like that.

So who knows, in three months everything will be different. So we'll just have to wait for a quarter.

John: Well, we're running up on time, so I definitely want to respect your time, Dims. Thank you so much for showing up on this. And listeners, remember, stay ready.