O11ycast
34 MIN

Ep. #56, Opinions on Opinions with Kris Nóva

  • Kubernetes
  • Observability
  • Infrastructure
  • Cloud Infrastructure
  • Enterprise Security
Guests: Kris Nóva

about the episode

In episode 56 of o11ycast, Charity, Liz, and Jess speak with Kris Nóva, an author and software engineer with expertise in Linux kernel security. They explore the intersection of platforms and observability, unpacking infrastructure management, the cluster lifecycle, outcome-driven development, and the importance of best practices.

Kris Nóva is a computer scientist, alpinist, author, public speaker and transgender advocate best known for her work on Linux and Kubernetes. She specializes in Linux kernel security, distributed systems and infrastructure management, and open source software engineering. In 2017 she co-authored Cloud Native Infrastructure, published by O’Reilly.

Nóva is well known for her open source contributions, including projects like Linux, Kubernetes, and the Go programming language. A popular public speaker, she is best known for her Kubernetes clusterfuck talks. She co-founded the Cluster API project, created Kubicorn, and created naml for managing Kubernetes resources with pure Go.

transcript

Kris Nóva: So one of my favorite questions to ask folks, whenever I start having conversations around running large enterprise platforms, specifically around Kubernetes, is just what their opinions are. I think that there's a lot of folks in the world who love getting asked that question, and I think there's a lot of folks in the world who will be like, "Hey, maybe you could help me out here a little bit, or maybe you can make a suggestion or a professional recommendation for me."

Really that gives me more of an indicator of who I'm dealing with and whether or not Kube is right for them, more than anything else. What I'm listening for is something along the lines of, "We have a detailed process internally on how we form technical opinions and how we make decisions, specifically around any topic in Kube." Doesn't matter whether it's logging or observability or pipelines or security, or how I build my app, how I run my systems, how I manage my infrastructure or whatever.

If they don't have a good answer to one of those with a formed opinion and feel confident that they've been able to form that opinion and actually instill it, there's other conversations we need to be having at this point. The whole point of Kube is to be flexible, and if somebody can't form an opinion and actually implement something and feel good about it, then Kube is likely the wrong piece of technology for them.

Liz Fong-Jones: So it kind of relates to the idea of shadow IT, right? That if you don't have a process for standardization, people are going to come up with their own mechanisms for developing code that are not necessarily in conformance?

Kris: Exactly. There's always going to be some void in Kubernetes, there's going to be some nebulous, ambiguous shape that somebody needs to go and satisfy with some concrete implementation, and if we're not speaking about those pragmatically and objectively, and talking about the different options we have and the trade-offs with those options, Kube is not going to be checking the boxes that you're looking for.

If you want a WordPress style experience where you turn it on and it yells at you until your MySQL is configured the right way, Kube is not the tool for you. I love Kubernetes, I've built my career around Kubernetes, it's a fantastic tool. However, just because it's great, doesn't mean it's always right for the task at hand.

Jessica Kerr: So if you have a question and you think maybe the answer is Kubernetes, you better be prepared for a lot more questions?

Kris: I think so. I look at Kubernetes like a mesh or a glue. It's like a binding substrate that holds things together. Kubernetes is never the answer. It is the vehicle to get something together.

Jessica: I just pictured a kid with a hot glue gun, "Now I have a hot glue gun, I can fix it."

Kris: Exactly.

Liz: So now would be a good time for you to introduce yourself.

Kris: So hey, what's up, everyone? My name is Kris Nóva. I am a programmer, I'm an engineer, I'm an architect. I do a lot of things, I've written a couple of books, I've worked at a few companies. I do have a good way of forming technical opinions, and yeah, I'm just here trying to make the world better and to help folks create platforms that will actually be effective at making their problems go away.

Liz: So I love what you said about forming a process for forming technical opinions. What does that tend to look like? What does a functional process for engineers to come to a consensus on things given that we're like cats, it's hard to herd us into one place and get us all moving in the same direction?

Kris: Honestly, I think the meta discussion is most of the time I don't really even care what the process is, as much as I care that there is one. Honestly, I don't even care what the opinion is, as much as I care that somebody holds it and sustains it over time. And so I think everybody's going to have their own personal process for how they go and they develop an opinion.

I think for me, most of it has always just been starting with a good problem and then starting to look at the different trade-offs of how things may or may not make sense. Then once you do come to some conclusion, the ability to communicate it quickly and latch onto it and hold onto it and not move it around, so that folks aren't doubting it, I think is important for me.

Liz: So you mentioned working at a variety of different scales throughout your career. How do those things change? Certainly I've experienced this as Honeycomb has grown from an engineering team of 10 to an engineering team of 50. The patterns you use are different, because at a small company you go, "Yes, we can all talk to each other and agree in a single meeting," but that's less possible when you have 50 engineers, or 100 engineers, or 1,000 engineers.

Kris: There's a couple of schools of thought here, and really I look at it as like there's two main ways of doing it. There's this loose coalition of design by committee, collaborative something or other, where there's this lazy consensus as like a way to move forward. If you look at Kubernetes, how the open source project operates, that's the name of the game.

If you want to get a decision made or an opinion instilled, or get some concrete path forward, usually that's going to look like putting something out there, then when enough people neglect to tell you no, that's your sign to go ahead and move forward. I think that the trade off there is time.

I think really the most effective way I've seen at forming an opinion is going about it your own way, coming to your own conclusion and talking to the folks you need to talk to. In my opinion the key to instilling that and making that law is the ability to communicate it quickly. If you can't communicate an opinion in like 30 seconds or less, people are going to poke holes in it and doubt you.

But if you can just walk into a room and confidently be like, "No, we're going with Honeycomb for observability. There's no argument here, and here's why," and, boom, and we're done. And that's it. That gets you 60% of the way there most of the time in my experience.

Liz: I think it's really interesting that on the one hand you're saying it's important to be opinionated, and yet the frameworks I work on like Kubernetes, like Open Telemetry, are just so unopinionated on purpose because they have to be able to comply with whatever people's visions are.

Kris: I think so. I think there's totally an economic thing at play here, too. So if you look at how Kubernetes came to be, it was this cloud abstraction that was supposed to work across all the clouds, and it immediately went to the Linux Foundation, and then there was this subsidiary called the Cloud Native Computing Foundation.

Their whole thing, the whole reason that whole 501(c)(6) exists, is because they want to instill vendor neutrality, which means no opinions: "We don't say Google is better than Amazon, or we don't say Amazon is better than Microsoft. We stay neutral, and we very decisively don't have an opinion on which one we go with, and our technology embraces this methodology and embraces this sort of neutrality at day one."

So when you have this primordial glue of Kubernetes middleware that is just like what you were saying, hot gluing everything together, there's not really going to be any sort of concrete, "Here's what my opinion is on what you get out of the box." It's going to be very loosely coupled and loosely opinionated on day one, and that's the whole value add.

So if you don't have a need for it, with a really firm set of philosophy and opinion behind how you want to go run it, you're just going to be dealing with this unopinionated glue the entire time.

Charity: This whole tension between expertise and openness and opinionatedness, this is why I always get so irritated whenever anyone gives advice like, "Here's how you should do it," without giving the context in which those decisions were being made, because there is no one way to do it. It is always dependent on the problem, the people, their expertise, the rate of growth. There is no answer for how to make technical or engineering decisions that isn't contingent on so many different things. This is why judgment matters.

Liz: Also why consultants always tell you, "It depends," because it does actually really depend.

Charity: It does. No, but I like this point that Jess just made, "OTel is unopinionated about the backend, and to achieve that it's very opinionated about format." Hmm.

Liz: Right. It's one of those things where we have to make sure that we can supply you with all of the bells and whistles. The backend can choose to drop them, that's on your backend. But in order to support the maximum amount of telemetry, we had to say, "Yes, you are going to have to be able to support multiple attributes, and in fact, up to hundreds or thousands of attributes.

This is just something that we're taking for granted, rather than saying we're locking you into the lowest common denominator, we're providing you with the maximum flexibility. And then here's some best practices on how to do it." But I think that gets to the importance of best practices and examples, right?

As you were saying just now, Nova, you have this pile of hot glue, you have to show people what can be done with it in order to convince them that there's value, that there's something there, right?

Kris: Yeah. I think so. I think that, like Charity was saying, there's an art to the ability to come in and form an opinion based off of what it is you're seeing, the context at hand, the constraints at play, and to actually make that make sense and have it be outcome-driven. It takes time, it takes thought, it takes understanding and empathy. It's a whole process. I'm exhausted after I go through that process, right?

Charity: That's partly why it's so incredibly refreshing and interesting and fun, whenever a rule of thumb does show up that you're like, "Yes, I can tell people this is a rule of thumb, that you should follow this, unless you have a reason to break this." Choose boring technology, I think that was so brilliant because it should be the default choice, right?

You have a limited number of innovation tokens, you can't spend many of them or you'll doom your business, right? So how are you going to spend them wisely? Then how are you going to rely on the most boring software possible for most of it? That's advice you can give pretty much everyone, and that's why 10 years later I'm still mentioning this at least once a week, right?

Liz: Yeah. I do want to go back to one of the points that you've just made, Nóva, which is you talk about being outcome driven. What does outcome driven mean most typically? Some people prioritize things like cloud costs, and other people prioritize, "Oh, we must be multicloud." What are some of the dimensions that people tend to care about most often when they are evaluating what should we be basing our platform on? What should we be optimizing for?

Kris: I think one of the things I see a lot and where I see a lot of folks get in the weeds is optimizing for the wrong things, or maybe a better way of saying it is optimizing for the problems that they see every day. So as an architect, one of the things that I have to remind myself is my opinion doesn't really matter. It doesn't really matter what I want to do, it matters how I relate our technology to the business and the needs of the business, and how I can solve whatever it is the business is dealing with.

Right now I'm working at Twilio, and we have our own corporate goals, and there's a lot of outcomes that are important to the company, and I do my best to prioritize them. I see a lot of engineers go and say, "We're going to optimize on our ability to manufacture clusters." Or, "We're going to go optimize on our ability to have high-cardinality logs across the entire stack."

Or insert your flavor of technology that is very important to them today. While I'm not necessarily arguing with any of that, I do think that it takes a special attention to detail to be able to say, "Okay. We're going to build this and we're also going to get the cost savings we're looking for." Or, "We're also going to set ourselves up for success with upgrading our infrastructure." Or, "Migrating from this legacy system to this new technology, while also addressing compliance along the way." Or whatever.

Charity: This gets back to, I don't know if you've read the book Good Strategy Bad Strategy, Kris? Jess and Liz and I all have. It's something I'm a little bit religious about right now, because strategy is so simple and yet people fuck it up all the time by assuming that strategy means goals or wants or numbers.

When in fact, strategy is simply about making a judgment call about how you want to grow the business, right? "Here's the plan, here's how we want to succeed," and then coming up with a limited number of actionable items that will get you there. Right? So to your point, your goal as an architect is to make the business succeed, and as a technical architect your way of getting there is to break it down into, "Okay, which technical things do we prioritize in order to get us there, and how do they roll up to the bottom line?"

You can't just go, "I want to roll out structured logs because somebody on the Twitter says it's better." No. It's, "We have decided that the way that we succeed as a business rolls down to these technical objectives, and we can't achieve them unless we have the ability to do high cardinality, dimensional analysis, and therefore it's going in our list of priorities below cutting costs, but above moving to Kubernetes." Or something like that, right? But it always has to be justified and resourced.

Kris: Yeah, absolutely.

Liz: Yeah. I always say that if you can't deploy at least once a week, at minimum, then your number one priority should be reducing cycle time first, using whatever technology you have to hand that's going to get you there fastest.

Then we can think about things like adding in finer-grained observability, because no observability is going to really help you if you are gated on being able to release any changes to fix the issues that you find with observability. What are some of the things that people stumble over the most when they're doing this?

We talked about poor scope, poor definition, so let's assume that we have an idea of what we want to do. What do people then tend to mess up or not get right?

Kris: There are two things that I would tell myself if I was to go rebuild the platform, or if I was to sit down with somebody who's about to start building a platform. Number one would be start small, and the second thing that I would say is embrace turning things on.

Charity: What does that mean?

Kris: This is more of a reflection of my style, I think. But one of my problems is I'm a perfectionist. I had this very flawed belief for many years in my career that I would be able to build perfect infrastructure, that I would be able to go and actually build the perfect Kubernetes cluster and then manufacture a million of them at scale, and make each one of them identical time and time again. And I think I can get close, and I tried really hard for many years to write the software to actually make this happen.

I do think though that from a business perspective, it's more organic than that. There's a lot of value in just turning on systems with some good, base, core principles.

Like you said, Charity, keep it simple. Keep it simple and give me 10 best security practices and we'll start there. Just get us 60, 70, 80% of the way there and turn it on, and then we'll let the problems come. It's not really about building the perfect infrastructure on day one, it's about not letting the debt pile up and those fires get out of hand. That's one of the things that I would love to go and redo, is just-

Charity: How did you phrase that again? Just turn things on?

Kris: Yeah. Just start small and embrace turning things on.

Charity: I love that. At Honeycomb, I think that what Christine and I did was very similar, was we'd say to each other, "Everything is an experiment." And that's one of our company values now, but it started with us just... Because she's a perfectionist, I am not, but we kept getting blocked which is like, "Well, what if this? What if that?" And we were just like, "You know what? No decision is permanent. Everything is temporary. What's important is that we make a decision, take a step and start going, and we don't stop." Because you die if you stop in the water.

Kris: That's one of the things I say, I don't say, "Die if you stop in the water." But one of the things I say all the time is some decision is better than no decision. Some opinion is better than no opinion. I don't care what you pick, what I care is that we pick something and that we can identify quickly if it's wrong, and that if it's wrong, that it doesn't completely foil our entire quarterly plans or whatever. A mistake becomes something that's very trivial. That's actually more important to me than making it right on the day one.

Charity: Oh yeah. Observability supports that by making it visible, what's going on. And Kubernetes, does this also support being able to turn things on freely because you can also turn them off?

Kris: I'll have to reserve my opinions on how Kube turns things on and off for later. But I think clusters, we have a part of Kubernetes called SIG Cluster Lifecycle, and the whole point of this group was to be like, "You turn a cluster on, you manage it over time and you ultimately turn it off one day." We call them SIGs in Kube, but it was like this whole group of folks and that was their whole job, cluster lifecycle.

Beginning, middle and end of a cluster. I think that what we realized quickly was that the problem wasn't turning Kube on, which don't get me wrong, that can be its own pain in the ass. But getting a cluster online is actually the easy part of the journey. It's keeping it online and keeping it healthy and keeping it upgraded, and not getting completely buried in drift and technical debt. That's the hard part in my opinion, in my experience, in operating Kube at scale.

Liz: I think the other thing I really loved is the idea that you have to get people using your platform to get the operational experience with it in order to know how to further develop it, right? I've seen too many people do these massive lift and shift migrations or these massive migrations that are, "We're going to develop the future platform with the best of everything," and no one is using it yet because it's not ready for three years. By the time it's ready for any traffic, the best has already moved on.

Kris: Yeah. That's start small and embrace turning things on. I'll say that 100 times a day.

Charity: I love that.

Kris: I talk with my partner about this a lot. She comes from PagerDuty, so she's very much in the "how do we keep systems online from an operations perspective" mentality. But we've toyed with this thought experiment of the two ends of the scale: if we had a scale with a left side and a right side, the left side would be perfect, pristine, mechanical systems that we manufactured perfectly, wrapped in plastic, sealed, labeled, ready to go, as mechanical as they could be.

The other end of the scale is just complete chaos, total BASH scripty, just turn it on and we don't know who has the keys or where the root password is, it's just complete, total disarray. If you were to turn both of these systems on and apply observability to either end of the scale, would you be okay? Would you still be able to operate both of these systems independently of each other if you just had a good way of managing it and observing it?

I'm a firm believer that it doesn't matter how completely organic and chaotic your system is, if you can see it, you can respond to it and you can actually control it.

Liz: Totally.

Charity: So maybe you don't have to know how you'll turn things off. If you see it, you have options.

Kris: Right.

Liz: So turning our attention now to that intersection between platforms and observability, where do people start realizing that they need observability? Observability may not necessarily be people's first priority when they're building out a platform, because they're first thinking about, "How do I even make the software run? How do I even get the deploy processes working?" What indicates to people that they've maybe outgrown logging, for instance?

Kris: Outgrown logging? I hope I never outgrow logging, honestly.

Charity: Really?

Kris: I love looking at logs. I'll look at logs till the day I die, I hope I do.

Charity: Define looking at logs though. You mean line by line, one by one, machine by machine, container by container?

Kris: I mean I think there's a big difference between trying to go dig and correlate problems in a huge fleet of infrastructure logs, versus actually sitting in front of a single system and watching the system via its log output. When I say look at logs, I mean I just want to go and look at a single system and see how that one system behaves, versus trying to connect systems together.

Liz: Oh yeah, that makes total sense, right. If you can identify local behavior, it's best to not need to look at the whole system together, right?

Charity: Totally.

Liz: Yeah. It's that minimum viable thing, yeah. I'd agree with you. Gosh, this goes back to the first time I ever appeared on o11ycast as a guest, where Charity and I went back and forth about operational logs, and how a transient buffer that's not necessarily centrally indexed can get you a fair bit of the way, and that people overcomplicate it when they then try to centrally index all the logs together for all of the systems.

Charity: Yeah. I think that I now have this knee jerk response to the word log, which is probably not helpful or fair, and it's mostly a reaction to people who mean logs to be... What they mean is unstructured strings that they're just spewing out haphazardly everywhere, and I'm so over those. I think that those only belong in your development environment, I think that once you're operating anything at scale you have to have structure.

Because most problems, you're only going to see when you're taking a broader look at things, you're looking for patterns. You deep dive into a single machine once you've identified the problem or once you've found an example of it, and that's when you go through it in more detail. But the finding of problems needs a different lens, I think.

Kris: I couldn't agree more. I think one of the things that I've seen a lot is folks working in this constant state of, "We're never going to be on the platform. We're always going to have one foot in the door, and there's always going to be a latest and greatest that's going to be in the future, and we're always going to have something right in the middle, and there's always going to be the folks lagging behind."

I think that structured logging is something that I'm looking at today, and I would love to be able to say plumb all your logs through this Go library and start instrumenting your apps, and go get very intimate with how you frame your data so we can go query it later. I think what I've found to be the most effective conversational tool as an architect is to say I don't care what you do, however if you structure your logs, we're going to have a much better relationship.

I think there's a ton of incentive in saying, "You are empowered to go and manage your logs however you want. If you're just going to give me a string of data when your shit breaks, you're going to be dealing with strings of data, and you may not realize the impact and the consequences of that today, but that's on you." I've noticed really quickly that all of a sudden JSON blobs started showing up in our log lines.

Charity: I love that, I just wrote that down, I'm totally going to tweet that quote out. That's brilliant. Yeah. Because when you dictate to people, when you're like, "Do this, do that," they're like, "Well, fuck you." But if you're like, "You can do what you want, but here's how to help me help you," I think you put everything in a much better light.

Kris: Right. I still see logs showing up and I wouldn't be surprised if there's some XML somewhere in there. I just don't know what I'm going to see and-

Charity: Well, and not everything needs to be done right. You get like 80% of the benefit from like 20% of the shit, right? Whenever I'm talking to people about how to roll out observability or structured logs or whatever, it's never, "Start at the top, at A and work your way to Z." Nobody's going to get past F or G, right? But it's like, "Use it like a headlamp. Where are the problems in your system? Go instrument them first as you're trying to debug them. Don't sit there scrolling past individual machines."

Instrument as you're going and that will help you debug, and you will end up instrumenting and having more visibility into all of the hotspots of your system, and then do the rest as it comes up. You get paged about something? Cool, just go instrument first as you're debugging it and after you've done it two or three times, it becomes a faster and easier way to debug and it leaves a much cleaner system behind for everyone who comes after you to try and debug it later.

Kris: Yeah, totally agree.

Liz: I think that goes to the idea that our systems are living and breathing things that we can interact with and change, and this goes to Jess' discussion of symmathesy; we're not stuck with the system in a fixed state, hammering on it as if it's a black box.

Charity: Yeah. There is no fixed state, no matter how much you'd like for there to be.

Liz: So you were around for some of the earlier days of Kubernetes, and you've seen how the project has really, really grown. I'm always eager to learn from the experiences of people who have been through that stage of growth in open source. One of the conversations we were having on the day that we recorded this episode was at the Open Telemetry governance committee, where we were talking about the challenge of how do we get people to spend their time maintaining the machinery, rather than just building out the newest, shiniest, hottest signal types inside of Open Telemetry. So what advice would you give me as an Open Telemetry maintainer, as a Kubernetes maintainer person?

Kris: Probably the same advice I'd give a platform engineer; start small. I think in open source that usually comes in the form of how do you get things done and how do you get things accomplished as the project grows, and I think that as the project grows, history starts to shift. Even myself, I started a little bit as like, "I'm going to work on the infrastructure side of Kube," and then all of a sudden I had folks starting to reach out to me about security.

It's like, "Well..." Yeah, I did look at that early on and as it turns out now I'm one of the knowledge experts in this area, and I'm like, "I just read a few Google Docs but I happened to read them at the right time when these systems were being designed, and now all of a sudden I'm an expert in this area."

I think that Kube, I would say, is borderline out of control in the amount of sprawl that we see at the project level. It's got all of the same problems as any corporation in the 2022 economic dip: the stock market's going crazy, there's a ton of uncertainty in the world, people change jobs, people all of a sudden move out of engineering and into management, and they stop contributing.

These small things start to fall through the gaps and these big, big efforts where there's hundreds of people and there's this big design by committee and we have to go through this formal process, all of a sudden you're actually much more effective with a group of three or four people who all think and feel and want the same thing, and can operate as a unit and then grow that unit, than you are to try to tackle this whole, out of control monster of a portion of Kubernetes that you see today.

Liz: Yeah. I think that definitely pushing back on increasing scope is something that we've taken to heart, right? We are in the process of saying no, potentially, to a major addition to the project, which we know that we don't have bandwidth for the moment. I also love the other thing that you said about incremental steps, we need to put together a wishlist of, "These are the areas where we think people can contribute, these are the areas that people can take a load off of the existing maintainers."

Yeah, I hear you on that sprawling thing where there's, at the moment, probably only half a dozen people in the world who fit all of Open Telemetry in their head. We cannot operate by consensus of those people alone. There has to be delegation, there has to be trust in a subset of the people to make the right decisions about things.

Kris: It's really similar to platform, you have these large platform teams.

One of the first questions I'll ask anybody when they're talking platform is who operates the stack? This is a fundamental question and this is a great way to see who I'm talking to, it's like looking in somebody's refrigerator, you learn so much about them in like 20 seconds.

Charity: Oh my god, so true.

Kris: Here, Charity, what's your opinion, do you think engineers should operate their own infrastructure? Do you think there should be a centralized infrastructure team?

Charity: Well, yes, and I think engineers should be responsible for the code that they write and birth out into the world, full stop. Smaller teams or some environments that have a very thin layer of infrastructure, then that means they're operating their own infrastructure. But I think that what we're seeing emerge in the market is the existence of platform teams that sit between... You can't expect every engineer to know everything, right?

You don't expect frontend engineers to necessarily understand how to scale up clusters or how to do stuff with the database, I get that. So I think that the platform team is the point of elasticity there, where it helps you scale up. It becomes that fulcrum point that makes lots of infrastructure palatable and tractable by lots of different kinds of engineers by making repeatable patterns, by making reusable code, by making templates, so you don't have to understand, "Here are all the observability tools that we use for something."

No, you self-serve. You're like, "Here's the recipe, here's how I make it monitorable, here's where I make a dashboard or whatever." And so the job of the platform team is to help scale the rest of the company. I think that it's a mistake to say that they should all own their own infrastructure, but they should all own their own code, and the question of how much that infrastructure does or does not consist of is up for grabs.

Kris: Awesome, couldn't have said it better myself. Completely agree. But yeah, I think to your point, Liz, dealing with projects like Open Telemetry, same exact pattern in my opinion. You have a large group of folks, you have folks on the end, and it's like, "Do we completely delegate all responsibility of this one subsystem to this whole team and have them operate it entirely?"

And it's like, "Well, wait a minute. Now they're going to invent a code of conduct for this specific corner of the project and now they're going to go and reimplement how they make decisions and now they're going to go implement a new mailing list." Then it's like, well, now I'm just learning this whole other project just to go familiarize myself with how we do tracing or whatever.

Liz: Yeah. I've already seen this happening, the Open Telemetry Collector is its own entire beast that's almost disjoint in some ways from the per language SDKs. The per language SDKs share a lot more in common because they're trying to implement the same spec. Whereas the collector is this independently running binary which has its own considerations and its own development practices, but also how can we learn from each other?

How can we have the right sized components without creating silos? Yeah, I love that thought process that you mentioned that open source projects are microcosms of what a company might look like, except that we're not paid by the same employer.

Kris: Yeah, exactly. It's just large groups of people. This is age-old stuff here; we have people who group together. This is just how people sort themselves.

Charity: Yeah. When software becomes an identity, that's something that I feel like you have to shed if you want to become post-senior as an engineer. You have to shed that very personal identification that you have, like, "I am a Chef engineer." Or, "I am an engineer who does this language." It always bums me out when I see people say that they're a JavaScript engineer or a Golang engineer. I'm like, "Are you a software engineer who writes Golang? Because those are very different things."

Kris: Yeah. It's a big journey. I just went through this whole exercise over the past year of how important it is to form... I'm obsessed with opinions right now. But how important it is to form an opinion, and as I've been mentoring more principal engineers into the space, I have to sit folks down and be like, "Your opinion is not the company's opinion. You might be a Go engineer, you might be Mr JavaScript or Mr Rust or whatever. But that is not necessarily what the corporation needs, and I hate to break it to you, but you need to go be the voice of this now."

Charity: Nova, thank you so much for coming on here. I miss you so much. I hope we can get together in the next year because I miss violently arguing and agreeing with you on everything. It's fantastic.

Kris: Yeah, this was a ton of fun.

Jessica Kerr: Thank you for your opinions on opinions.

Kris: Exactly. Awesome, thanks for having me.