Third Loop
48 MIN

Ep. #2, Features and Futures with Kent Beck

about the episode

On episode 2 of Third Loop, Kim, Heidi, and Adam sit down with Kent Beck. They explore how Progressive Delivery extends ideas from Agile and Extreme Programming by focusing on safer releases, feature flags, reversibility, and observability in production. The conversation also dives into AI-assisted coding, experimentation, and what it takes to ship software users can actually trust.

Kent Beck is a software engineer best known as a creator of Extreme Programming, a pioneer of test-driven development, and one of the original signatories of the Agile Manifesto. He also helped create JUnit and is the author of Tidy First?, while continuing to write, speak, and explore AI-augmented software development.

transcript

Adam Zimman: All right, here we are on another episode of the Third Loop. We're looking forward to having our wonderful guest join us today, Mr. Kent Beck.

Kent, why don't you go ahead and introduce yourself and tell us what you've been working on recently.

Kent Beck: Thank you very much, Adam, Heidi, and Kim. It's great to be here. I'm an old programmer. I still program regularly and really enjoy it. Things that you may have heard of that I've done are patterns for software development, JUnit and that whole family of testing frameworks, test-driven development, Extreme Programming, and 3X (Explore, Expand, Extract).

And what I'm doing now is lots and lots of augmented coding. I'm happiest when things are most impossible.

Adam: Awesome.

Kent: And so everybody asks, "oh, well, how do we do blah with augmented coding development?"

I say, "nobody knows. You got to try it."

And I just enjoy having those crazy ideas, trying them out. Eventually we'll get this kind of figured out, I assume, but right now, everything's in flux.

Adam: Yeah. And then the other thing that I think a lot of people know you for is being the alphabetically first author of the Agile Manifesto.

Kent: Thank you for that academically correct reference. I appreciate that. A lot of people skip the alphabetically first part.

Adam: Yeah. You know, one of the things that really intrigued us about having you on is that while we were working on our book Progressive Delivery, we encountered a lot of the concepts and the ideas that you have really championed for the industry.

And having had the pleasure of meeting you over the years, and having always really appreciated your insights, we wanted to bring you on and have you talk to us a little bit about some of your thoughts, as a reviewer of the book, on some of the ideas that we brought forward, and hear from you as someone who worked quite a bit--

I know that during the actual drafting of the manifesto, I've heard you tell stories that you were sick the whole time. But at the same time, it wasn't just that one incident. Similar to Progressive Delivery, you saw these patterns of success that were emerging over many years, and you looked to lay those out as a group for others to be able to benefit from.

And you know, this is very similar to how we looked at Progressive Delivery and we wanted to get your take as you've taken a look at some of the ideas that we've brought forward of how do you view Progressive Delivery as different from Agile? Is there anything that kind of appeared to you?

You know, like, we talk a lot about even the name of this podcast is the Third Loop and that's in reference to actually paying attention to user adoption, user consumption. And I think that I look back at Agile and a big part of Agile was that notion of, hey, maybe we should actually talk to the user.

Kent: Yeah. Shocking.

Heidi Waterhouse: Weird, but true. Might work.

Adam: Yeah. So are there any things that have jumped out to you as being a difference between Agile and Progressive Delivery?

Kent: The big one that jumps out to me is that you don't treat implementation as a magic black box.

There's this tendency in Agile to say, well, let's get together every two weeks and the rest of it's going to sort itself out. And in fact, the way the incentives for software development are set up, people just make way too many mistakes for that to actually work.

Heidi: So when you say mistakes, do you mean like mistakes in intent, like they don't understand what the user wants, or mistakes in implementation? Like, "we made bugs?"

Kent: Both. But the "we made bugs" part is both avoidable and inevitable given the way most software development takes place.

Heidi: Mhm.

Kent: You know, the whole, "well, is this good enough to throw it over the wall to QA?" The only refinement we made to that is that there's no QA to throw it to. But the incentives for a person making a programming decision to make sure that they don't break something are really weak compared to the incentives to claim that they're finished. And that's a social problem and a business problem and a culture problem.

And the superficial reading of Agile just treats that as like, well, we'll work that out somehow. And Extreme Programming goes into great detail about how do you achieve that level of reliability and that level of responsibility and Progressive Delivery continues that to say, "no, actually implementing that, where you can release increments of value multiple times a day, that is not just magic that that happens. You have to work extremely hard at it. But once you do, once you're up on your skis, you can really go."

Adam: Yeah, I think one of the things that occurs to me is that there's a very strong intersection with what you have been advocating for decades with Extreme Programming. I also really liked some of the things that you talked about in Tidy First?. I think I actually heard you say in one of your interviews that one of the things you were most disappointed about with regards to the Agile Manifesto was the name Agile, and that if you had had your choice, you probably would have called it Conversational Programming.

And I think that one of the things that we tried to do to encourage that with Progressive Delivery was this notion of radical delegation. And kind of explain it as, how would you give directions and communicate objectives to assure that the outcome is consistent with your expectations?

And so thinking about it in those terms, it's a definition of constraints. It's being able to think about how your intent is going to manifest in your programs, in your applications. And the way that we talk about radical delegation is: Can you hand over the control of whether a line of code or a feature gets turned on and off to someone who is completely removed from the development of that code?

So in other words, could you actually say, I want to turn on this feature for a hundred, a million, a billion users, as a customer support engineer or a customer success manager, and not as a developer? Has the developer done a good enough job creating that code that the on-off control can actually be handed to someone else?
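As an aside for readers, that handoff is easy to picture as code. A minimal sketch, assuming a hypothetical in-process flag store (a real feature management system keeps this state in a service): the flag's state lives outside the shipped code, so flipping it is a data change anyone with access can make, not a deploy. All names here are illustrative.

```python
import hashlib

# Hypothetical flag store. In practice this state lives in a feature
# management service, not an in-process dict; the point is that it is
# data, editable by a support engineer, not code shipped by a developer.
FLAG_STORE = {
    "new_checkout": {"enabled": True, "rollout_pct": 25},
}

def is_enabled(flag_name: str, user_id: str) -> bool:
    """Deterministically bucket a user into a percentage rollout."""
    flag = FLAG_STORE.get(flag_name)
    if not flag or not flag["enabled"]:
        return False
    # Stable hash so the same user always lands in the same bucket.
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).digest()
    bucket = digest[0] * 256 + digest[1]          # 0..65535
    return bucket < 65536 * flag["rollout_pct"] // 100

# Turning the feature on for a billion users is a data change, not a deploy:
FLAG_STORE["new_checkout"]["rollout_pct"] = 100
print(is_enabled("new_checkout", "user-42"))      # True
```

The hashing keeps rollouts deterministic: a given user stays in or out of the 25% as the percentage only moves up.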

And you think about that in terms of-- I know you did a lot of time at Facebook, and the whole idea of everything being everyone's responsibility, that kind of mindset. But this is what we were trying to drive at: how can you build your programs, your services, your applications so that they are resilient, so that that notion of being able to deploy on Fridays is-- Of course you deploy on Fridays. It's a day of the week, right?

Kent: Correct. I program on Fridays, I deploy on Fridays. Get over it.

Adam: Yeah. So I would love to hear your take on this kind of notion of that communication aspect to be able to ensure expected results. In your mind, how does that play into that notion of alignment and things like that?

Kent: Well, first the XP (extreme programming) goal is not to have results match expectations. I prefer to have results not match expectations in positive ways.

Heidi: The faster horse problem?

Kent: Yeah, yeah. So XP is biased towards that exploration, that early phase, to a large degree. The practices are still good for the extraction phase where you're fine tuning a proposition of known value. So that's my first reaction to what you're talking about.

My second reaction to what you're talking about is: why insist on this separation between the programmers building these dials and gauges and the person who gets to turn the knobs? Like, there's an impulse in the industry, and there has been. I mean, my dad was a programmer. I watched this for decades, decades and decades.

Like if only we didn't need programmers or at least we could lock them in a room, then things would be better. And you know, I'm a programmer and sometimes I'm kind of a jerk. And I get it, I understand why people might want to lock me in a room.

And yet sitting together and working on something is such a powerful source of energy and feedback and value that I want to find ways to bring people together, not ways that we can successfully keep people apart.

Adam: It's a great point. I would also offer the mirror image of that, which is I think that there's always going to be value in collaboration and bringing people together. I also think that there's an aspect of dependency management that we were trying to account for. And I've seen kind of in my career where you don't want to be in a situation where the only person that can solve the problem is the person who wrote the code.

Kent: Yeah, a hundred percent.

Adam: And I think that's where it comes in, whether it's quality comments in your code, whether it's documentation, whether it's having this notion of feature management to give you that control plane. These are the kinds of aspects of: how do you make sure you don't back yourself into a corner?

And conversely, as a developer, oftentimes after you've built a set of functionality, you're interested in moving on to the next thing. Now, I know that there's still, like, going to be--

Kent: Oh wait, wait, wait, wait, wait, wait. I think you're projecting a little bit here, Adam.

Adam: Let me finish, let me finish.

Kent: Okay, go ahead.

Adam: So I think that there's this idea of, like, how do you make sure that you, as a developer, you're leveraging and taking advantage of the fact that you are not an individual, that you are on a team?

And so how do you make sure that the thing that you've built isn't something that is solely your responsibility moving forward, that you're the sole proprietor of and worker on? To your point, it would be great if we all worked together on all the things, or at least all the things we felt we could contribute to on the product. So it's not that you abandon it.

Like, I'm not advocating for that notion of throwing it over the fence. I'm more thinking about it in the context of: how do you actually bring the work that you've done to the community so that the community can continue to contribute?

Kent: Yeah, and XP has a lot to say about that.

We rely first on social structures, factors, rituals, rhythms. So there's lots of collaboration at every timescale to make sure that there's no information that only one person understands.

Heidi: Yeah. So we don't want to silo.

Kent: Correct. And this goes against the grain of this Tayloristic, divide and conquer, "we're going to ignore integration" paradigm that people bring to all kinds of industrial activity.

Heidi: So do you think that's still true? Like, XP was literally the book they handed me when I got my first technology job. And ever since then, people have been doing more or less actually Agile implementations where I work. Do you think that people are still in that, like, "I make a widget and I never have to think about it again?"

Kent: Well, you just heard Adam say it, and I'm not blaming or pointing fingers. I think you're reflecting an attitude and a belief that is absolutely widespread, because if only we could get all the worker bees working separately, everybody typing all at the same time, you know, "villain rubs hands and twirls mustache."

Adam: Yeah. You know, one of the things that I'd really appreciated with Tidy First? is that, when I've run product teams, this is something that I have tried to encourage engineering managers and engineering leaders to really think about. I really appreciated your notion of the 50%, because the best that I've ever been able to convince an engineering leader of was 60-40: 60% new features, 40% tech debt. The term that's always been used with me has been tech debt reduction.

Kent: Technical debt. Yeah, yeah. Which is just so defeatist.

Adam: Yeah. So I was a fan of the 50-50, because I think that's actually much more accurate if you want to ensure your code doesn't degrade over time and your velocity doesn't degrade over time.

Kim Harrison: Wait a second. Kent, can you tell us about the 50-50? Not everybody might be familiar.

Kent: Sure. So the phrase I came up with for it, just over the weekend, is "features and futures." Oh yeah, I know. Why didn't I put that in the book three years ago? I don't know.

Adam: Well, you know, second edition.

Kent: So we're dividing our investment between features and futures. And the challenge is it's so much easier to count features than it is to count futures that everybody's biased in that direction. Adam, I think you represent my thought experiment dream world where product goes, "if you need a little more time, that's okay, go ahead and take the time."

And engineering says, "no, no, I think we're good. I think we can deliver more features or more quickly than that."

So we take that usual tug of war like, oh, "why are you wasting all this time on the blah blah blah," and turn it exactly backwards.

And they're pushing in the same direction where sometimes it's 70-30, sometimes it's 30-70, sometimes it's 50-50. It depends on the relative value of features versus futures. And the strategy that leaves so much value on the floor is working on low value features just because they're easy to count and skipping super high value futures just because it's underwater, it's underneath the surface.

You know, you've got this swan gliding smoothly along the surface and underneath is all flailing feet and other stuff.

Adam: Well, I loved your articulation of one of the big aha moments for you, which was when you actually implemented options pricing algorithms and started thinking about the time value of money, or the time value of code. Right? In terms of: building something quickly may in fact be worth more than building something right that's going to take a lot longer.

And how do you think about that in terms of progression? Right. So can you share a little bit with folks of this experience? Because I'm not sure everyone's listened to quite as much of your content as I have.

Kent: In recent days, yes. So the experience was-- I mean, this is one of the advantages of chasing yourself down rabbit holes. I went down the rabbit hole of options pricing formulas. This was 20, 25 years ago. And I just started implementing every option pricing formula I could find, starting with Black-Scholes and going on and on.

And in the process, I got a visceral appreciation for the value of options versus holding on to whatever the underlying instrument is. And the great thing about options is the higher the variance, the more valuable the option. And I learned this at a time when people were telling me if we could just get the spec right up front, and then nothing changed, software development would go so much more smoothly.

And I'd look around and I thought, no, again, it just doesn't. This is a pipe dream. This is a hookah vision. This is just not going to happen. You know, and side note, spec driven development, we can talk about that afterwards. But if you have options and the variance goes up, the option becomes more valuable, not less.

Whereas if we had that perfect spec and the variance went up, the value of the spec would drop to zero. Because it's out of date. It doesn't mean anything anymore. But if you're holding options, the more you're holding options, the higher the variance, the happier you are because the chance that you'll find something that pays off big goes up.
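Kent's variance point is visible directly in the Black-Scholes formula he mentions: hold everything fixed except volatility, and the option price rises. A small illustrative sketch of the standard no-dividend European call formula:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(spot, strike, rate, maturity, vol):
    """Price of a European call option (no dividends)."""
    d1 = (log(spot / strike) + (rate + vol**2 / 2) * maturity) / (vol * sqrt(maturity))
    d2 = d1 - vol * sqrt(maturity)
    return spot * norm_cdf(d1) - strike * exp(-rate * maturity) * norm_cdf(d2)

# Same underlying, same strike, same rate and maturity -- only variance changes:
calm = black_scholes_call(100, 100, 0.05, 1.0, 0.2)
wild = black_scholes_call(100, 100, 0.05, 1.0, 0.4)
print(f"{calm:.2f} < {wild:.2f}")   # higher volatility, higher option value
```

With spot and strike both at 100, doubling the volatility input substantially raises the call's value, which is the "higher the variance, the more valuable the option" intuition in numbers.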

But I had to go through this. You know, I had a 2, 3 month span where I did completely inexplicable work. Like, why are you doing this? How's this going to pay off? It's not. Except for my curiosity.

I've spent my entire career with the curiosity satisfied as the main payoff. It's how I'm wired. I'm not going to fight it anymore.

Did that answer your question, Adam?

Adam: It did. And the reason I wanted to kind of talk about this was I actually was really interested in this in the context of some of your more recent exploration with, as you refer to it, as the genie, if you want to tell people briefly what the genie is.

Kent: So the coding models are a genie, and genies have this long history of stories of they grant wishes and it's not what you want. There's one called the Monkey's Paw, which is in a similar kind of vein, and even creepier because it's actively trying to destroy you, and it sometimes feels that way.

Adam: So in this context, when you think about optionality: with experimentation, the biggest area of pushback that I used to hear was around the cost, right? It was the idea that a failed experiment was wasted time, which I never agreed with. As someone who studied physics and science, I was just like, no, a failed experiment is phenomenal. You have so much more information than you had before you ran that experiment.

Kent: Right.

Adam: But I think that that was, especially when you look at it from a business perspective, that was always the thing of, like, well we want developers working on things that are actually going to be successful when you know, like, that's what we pay our product managers to--

Kent: When we know that we can't predict what's going to be successful. But we're going to insist that they all work on something successful. Reminds me of this story. I don't know if you mind me jumping in with stories.

Adam: Go ahead.

Kent: Big bank bought a trading desk, and they discovered that 90% of the profits came from 10% of the investments. And so they mandated that the trading desk stop making the other 90% of the investments. That was the focus. And I understand why you might think that would pencil out, but you just can't predict which is which, and so stop trying. Back to my first contradiction of you, Adam, which was around matching expectations.

That's why meeting my expectations is not my goal. I want to either exceed them or learn something in the process. And part of the problem is the language that we use, calling it a failed experiment. The only failed experiment is one that you don't learn from. That's a failure for sure. Just because the outcome isn't what you wished it was means nothing. But if you don't learn anything, yeah, then you wasted your time.

Adam: So in the context of AI, both personally and anecdotally with the folks you're speaking with-- I'm seeing a significant turnaround in the number of people that are interested and willing to experiment, because they're like, oh, I can get AI to go in these 15 different directions for me, try things out, and come back to me with a recommendation. Whereas it used to be, well, I had to try one at a time.

Kent: Right. And it would take a while, and it was fun. You know, I want to make a widget that tracks the tides. Oh, I'd have to update Xcode, and I have to blah, blah, blah, blah, blah. And then there's some NOAA API I'd have to figure out. And now it's like, "oh, okay."

Now is a tide tracking widget valuable? Not completely orthogonal question. But can I experiment with it? A hundred percent.

Heidi: Also, I think there's a social thing about if you're experimenting with a genie, you don't embarrass yourself in front of someone else. I think this is a large part of the resistance to pair programming is that you have to look stupid in front of another human. And there's a lot of social gradation and nuance to that that makes it extremely difficult.

Kent: Yeah. If you haven't done the work to build and maintain the relationships that allow you to reveal who you are in a professional setting, that's part of the hard work that Agile just glosses over as if that doesn't matter and that XP talks about in a lot of detail.

Kim: This kind of takes me back to a question I wanted to ask. You said you would have called Agile something else. Tell us about that for a second.

Kent: So I'm definitely an amateur marketer, but I have at least some principles that I use. And one of them is if you want to create that totem for an idea, it should be defensible. It should both draw in the right people and repel the wrong people at the same time. And the problem with Agile is everybody wants to be agile and everybody's going to say they're agile, whatever that means, because it sounds cool. Because who wants to be rigid and fragile and predicting things you can't predict? Nobody wants that.

Everybody. You know, you'll look at a gymnast or a dancer and you think, oh, that's agile. And so everybody's going to say they're agile, even if they haven't done the background work. I knew that from the moment I heard the word. And that's why Extreme Programming-- a lot of failures there, but my hat is absolutely off to you guys for finding better words for communicating with a wider audience than I did.

"Extreme." If you haven't done the work to become an extreme programmer, you're never going to say that that's who you are. Because it just, it sounds-- People are going to go "extreme, really?"

So it was a very defensible brand and Agile was not at all a defensible brand. And you see the consequences of it. You've chosen a word that has recently gone way out of favor. Good for you. "Progressive." Yeah.

Heidi: We thought about that. It took a while to write the book, and we were like, do we want to keep saying that? But there was no other way to say we want that connection to humans, we want that connection to users, and we want that to come back to us.

And I think that was one of the things that I really struggled with. I was mostly a technical writer, so I was on the other side of the fence. People would toss stuff over to me and I'm like, okay, but do we know how people are using this in anger, in the wild? You know, has the product manager ever done the job that they're speccing software for?

Adam: Yeah.

Kent: Has a programmer actually sat down and watched how somebody uses this? No, no. Because that's not the way the incentives are set up. We've had this Taylorist mental model of let's spread the work out. Never mind that, one, the pieces of work interact with each other and two, you don't even know what the work is.

Let's spread the work out, and then there's some magic integration that's going to happen, and everybody's going to behave the way we expect them to behave. We're going to assume that it's easy to integrate, that we know what the work is, that we know how to divide it up, that none of the pieces of work interact with each other, and that the users are going to behave exactly the way we expect.

We're going to base our entire economic system and our viability as a business on those five easily proven assumptions. Aah!

Adam: And the idea that everything that we build, the users are going to love.

Kent: Oh yeah, sure. Because we built it.

Adam: Well, this is one of the core concepts that we talk about is this notion of technological jerk.

Kent: Right.

Adam: The idea of: Technology is released that, from a user perspective, is so jarring and provides such a reaction that it's like physically felt. Right? Like you can actually feel as though the technology has pulled you in a direction you weren't expecting.

Kent: Oh yeah, I'm around non technical people most of the time, when I'm out in my community, and I hear those stories all the time. So the mental model, we recognized this in the early days of XP, but we didn't go as deep as you guys have gone in finding solutions to it. But I did have a metaphor that I liked.

I would like new releases to be like getting your driver's license. So you get your driver's license and you're really excited because the whole world-- Like new things are going to be possible and you don't know exactly what, but you know that freedom and possibility are out there. At the same time you're a little bit nervous because it's a big new responsibility. I'd like releases to have that kind of emotional valence.

Heidi: For the developer or for the user?

Kent: For the user.

Heidi: Okay.

Kent: I like the users to be thinking, oh, this is going to be-- Like, I don't know, I just had my iPhone update and I'm just thinking, "oh for Christ--"

Heidi: Yeah, new phone day used to be exciting and now it's just a tedious drag.

Kent: No, Heidi, it's not a tedious drag. It's a hair-pulling-out moment, which works better for some people than for Adam and me. But sorry I interrupted you; it is definitely worse than just, "oh, my phone got updated. This is going to be a drag," right?

No, buying a new laptop. That's a drag. Because it's like, oh we're going to connect and I'm going to discover just how many apps I actually use.

Heidi: But not, "Oh God, my 2FA is broken."

Kent: Yeah, exactly.

Heidi: Yeah.

Kent: Yeah.

Heidi: So the thing I want to say about driver's licenses is, we're all old enough that this was a one-step process. We took our test, we had our driver's license, you could do anything. For the kids now, that's not how it works. There's this super graduated thing: you can't drive at night, you can't drive with other teenagers. It's a graduated process to give you more capability with less risk.

And I think that's one of the interesting things about when we give users software. Are they aware of, can they comprehend what level of risk an action carries? How do you think we should make that apparent? Like there are some things that aren't risky, but there are some things that are and it changes.

Adam: Well, I think that this is actually similar to your analogy that I really liked Kent, which is the notion of haircuts versus tattoos.

Kent: Yeah. Yeah. This is a parenting thing.

Adam: You know, like, there are some changes that you make that are like haircuts, and then there are some changes that you make that are like tattoos.

Kent: Correct.

Adam: So, like, how do you, like, think about that in the context of, as you introduce change, is it something that is, "hey, look, worst case scenario, don't worry, it'll correct itself" versus a larger change that, "Oh, no, that's going to be a little bit harder." And how do you prepare for the difference between the two?

Kent: Certainly, as engineers, we need to be aware of the reversibility of user actions. And oftentimes making user actions reversible is a lot more work than-- Like implementing undo. I remember when undo came out.

Heidi: Magic.

Kent: Didn't used to be. It wasn't a thing. As a user, it was magic. As an engineer, I was like, "oh, you mean I'm going to have to implement everything twice? Once in the forward direction and once in the backwards direction? I'm not even sure I can implement it in the forward direction because I've never implemented a GUI before."

So that is more work. But if it removes that fear from the user's experience of it, like, no, just undo always, always works. Okay. If you can just assume that it's there and it always works, then it's a whole set of anxiety that you don't experience anymore.
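Kent's "implement everything twice" remark is essentially the command pattern: each forward operation records its inverse on a stack. A toy sketch (the Document class here is purely illustrative, not any particular editor's design):

```python
# Toy command-pattern undo: every forward edit pushes its inverse.
class Document:
    def __init__(self):
        self.text = ""
        self._undo_stack = []   # functions that reverse past edits

    def insert(self, s: str) -> None:
        # The forward direction...
        self.text += s
        # ...and the backward direction, written once per operation.
        self._undo_stack.append(lambda n=len(s): self._truncate(n))

    def _truncate(self, n: int) -> None:
        if n:
            self.text = self.text[:-n]

    def undo(self) -> None:
        if self._undo_stack:        # undo on an empty stack is a no-op
            self._undo_stack.pop()()

doc = Document()
doc.insert("hello")
doc.insert(" world")
doc.undo()
print(doc.text)   # hello
```

Every new operation type means writing both directions, which is exactly the extra work Kent describes, but the user only ever sees "undo always works."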

Adam: Undo is definitely real for me, but I also, I'd say the one that I remember is when autosave was introduced.

Kim: This is a big deal.

Kent: I remember typing and like, save, type, save, type, save, whir, whir, whir, chunka, chunka. Yeah. And then at some point I just stopped worrying about it.

Adam: Right.

Kim: I have lost papers in college and I'm the youngest one here. When autosave came about, oh, my gosh, it changed everything.

Kent: Yeah, absolutely.

Adam: Yeah.

Heidi: I'm fascinated that with the Microsoft perpetual license, you can get Office for, like, $99 or whatever, but the feature that they chose to force you to upgrade to the more cloudy version is autosave.

Kim: I would do it for that.

Heidi: From a pricing perspective, I'm like, yes, you have accurately identified one of your most valuable components. From a user perspective, I'm like, dang it.

Adam: So I've got another topic that I wanted to get your take on. You know, as someone who is generally recognized as one of the greatest proponents of, if not the most influential person behind, test-driven development, I wanted to talk to you a little bit about test-driven development in the context of Progressive Delivery.

And this is a question that we get oftentimes around the use of feature management or feature flags for motivating a more Progressive Delivery of your software. And one of the things that comes up is, well, how do I test for that? I would love to hear your take on thinking about this in the context of test-driven development: using feature flags alongside test-driven development.

And what does that look like? And is that something that you have thought a lot about? Is that something that you've done? Is that something that you disagree with? I would love your take on that.

Kent: My first response is, the most important thing about feature flags is how you retire them. So there needs to be a strategy for, "okay, this has shipped, we're never going to change it, so we're taking it out," because in a large code base worked on by many people, it's so easy to have feature flags that interfere with each other.

And you have this N-squared problem where any feature flag might screw up any other feature flag. Any feature flag setting, off or on, can screw up any other feature flag setting, off or on. So you have this grid, and everything potentially can mess up everything else.

And the smaller you can make the number of feature flags, the smaller that grid is. So the first feature flag implementation I did, this was in Smalltalk back in the day, had an automated "ship it" that would go through all the code, find all the if statements, and collapse them: delete the whole flag check, plus all the code that depended on it. I was very proud of that, and it worked really well. So that's the first thing.
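That Smalltalk "ship it" predates today's tooling, but the same collapse can be sketched with Python's ast module: once a flag is retired, rewrite each check of that flag to just the surviving branch and drop the dead one. The `is_enabled` helper and flag names here are hypothetical, not Kent's original implementation:

```python
import ast

# Flags that have been retired, with their final, permanent value.
RETIRED = {"new_checkout": True}

class CollapseFlags(ast.NodeTransformer):
    """Rewrite `if is_enabled("flag"): ...` to the surviving branch."""

    def visit_If(self, node: ast.If):
        self.generic_visit(node)
        name = self._flag_name(node.test)
        if name in RETIRED:
            branch = node.body if RETIRED[name] else node.orelse
            return branch or ast.Pass()   # returning a list splices it in
        return node

    @staticmethod
    def _flag_name(test):
        # Matches calls shaped like is_enabled("flag_name").
        if (isinstance(test, ast.Call)
                and isinstance(test.func, ast.Name)
                and test.func.id == "is_enabled"
                and test.args
                and isinstance(test.args[0], ast.Constant)):
            return test.args[0].value
        return None

src = """
if is_enabled("new_checkout"):
    checkout_v2()
else:
    checkout_v1()
"""
tree = ast.fix_missing_locations(CollapseFlags().visit(ast.parse(src)))
print(ast.unparse(tree))   # checkout_v2()
```

The dead `checkout_v1()` branch is deleted along with the check itself, which is the "delete the flag plus all the code that depended on it" step.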

Then your second line of defense is design. If you can make the code behind one feature flag completely orthogonal, so that it cannot affect the code behind another feature flag, then in that grid of everything-can-mess-up-everything-else, you've got one blank square. And that's a design problem, not a coding problem.

So within that, I want to reduce the scale of the problem as much as possible. And then within that, yeah, if you have feature flags and they affect each other, you're going to have to test the cross product, and that's going to be expensive.
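The cross product Kent describes grows as 2^n in the number of interacting flags, which a few lines make concrete (the flag names and `run_test` hook are illustrative):

```python
from itertools import product

def every_flag_combination(flags, run_test):
    """Run a test under every on/off setting of the given flags: 2**n cases."""
    for combo in product([False, True], repeat=len(flags)):
        run_test(dict(zip(flags, combo)))

seen = []
every_flag_combination(["new_checkout", "dark_mode", "fast_search"], seen.append)
print(len(seen))   # 8
```

Three flags already mean eight configurations to test; every flag you retire halves that number, which is why retirement comes first.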

But if we want to maintain this feeling among our users that they're excited to see the next thing, that feeling of anticipation of what feature comes next, you don't have a choice. I mean, to me that's the deep alignment between Progressive Delivery and XP: we want our users to be excited about what comes next.

If we want that, then you just got to do the homework to do that. And if that means you go slower, then you go slower. But boohoo, if it was easy, regular people could do it.

Kim: It's interesting you say this, because we were recently meeting with the DORA group, and many people asked, "Well, how do you measure this?" I think you're getting at something: are people delighted? What is the success metric? How does the user feel? Right?

Kent: Which should reflect in profitability.

Kim: One would hope.

Kent: In the end that's the measure everybody agrees matters.

Adam: Absolutely. And you know, ideally that profitability is coming from the value that is being gained by the user, not the value that is being hamstrung and abstracted from the user.

Kent: Yeah, exactly.

Adam: You know, in the course of this, one of the things I've tried to encourage people to think about-- I didn't realize that in Smalltalk you'd actually built that mechanism to clean things up. This is something I'm a strong proponent of: how do you think about your feature flags in a way that makes them part of your overall system? Right.

And how do you think about it, especially at the design phase: if you're going to build with feature flags, how do you put one in as a transitional flag to help you release in a safer fashion, versus intentionally having a flag, especially in the context of large services, that is going to be a long-term flag?

That long-term flag gives you the ability to avoid the problem I saw in my days at VMware, where, oh no, we're just going to build this special release for this one customer. So how do you create that notion of, okay, we want to actually use feature flags for segmentation?

But it also gives you optionality, you know, so that as you continue to build, if certain features become commoditized, you can quickly and easily expand who has access to them, rather than having to completely refactor your code or rebuild the way you've implemented it. Can you make that change more quickly and easily through a feature management system?
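The distinction Adam draws between transitional release flags and long-term segmentation flags can be sketched as data. This is a hypothetical model, not any particular feature management product's API: release flags carry a retirement deadline (echoing Kent's point about retiring flags), while entitlement flags target customer segments indefinitely.

```python
from dataclasses import dataclass
from datetime import date
from typing import FrozenSet, Optional


@dataclass(frozen=True)
class Flag:
    name: str
    kind: str                   # "release" (transitional) or "entitlement" (long-term)
    retire_by: Optional[date]   # release flags carry a retirement deadline
    segments: FrozenSet[str]    # customer segments the flag is on for


def is_enabled(flag, customer_segment, today):
    """Evaluate a flag for one customer segment.

    A release flag past its deadline fails loudly, forcing cleanup rather
    than letting transitional flags accumulate in the code base."""
    if flag.kind == "release" and flag.retire_by and today > flag.retire_by:
        raise RuntimeError(f"{flag.name} is past retirement; remove it")
    return customer_segment in flag.segments
```

Expanding access when a feature becomes commoditized is then a data change (add a segment) rather than a code change, which is the optionality Adam describes.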

Kent: So feature flags are an example of futures. They give us the option of giving someone that experience or not. And yeah, some of them are long term. It bugs me, in an aesthetic engineering way, that there's no difference between the mechanism we use to do gradual rollouts and the mechanism where customer A gets a feature and customer B doesn't.

I don't have an answer to it, but I can just say it like, tickles my sense of engineering aesthetics. And I don't know what to do about it.

Adam: That's fair.

Kent: I have learned to pay attention to that feeling.

Adam: As you think about that, I would love to hear where you end up or where your journey takes you.

Kent: Yeah, I'll drop it in the back room, but the back room's pretty crowded.

Adam: That's fair. The other thing that I haven't heard you talk a lot about, but I wanted to kind of get your take on, is in the context of user satisfaction or user value, how do you think about, as a developer, observability and the instrumentation of your code and how to make sure that you are building in a way that allows for both qualitative and quantitative feedback?

Kent: So the mindset shift that I had to go through was recognizing that I can only observe so much before production.

For a long time I believed that TDD done well meant that you never had to say you were sorry. And I had to realize, oh, red/green is not an accurate statement of the world. So when I started working at very large scales, that's when I learned the hard lessons about observability: not only can't you predict how your code is going to work at scale in production, you have to do extra work to know how it is working in production at scale.

And how can you make that work for you as much as possible? It is extra work to put in observability. But compared to this fantasy world where everything just works because you wrote all the tests, you knew everything, and there were no external dependencies-- it just doesn't work.
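As one small illustration of that "extra work", here is a minimal, hypothetical instrumentation sketch: a decorator that records the latency and outcome of each call, so behavior at production scale can be observed rather than only predicted by tests. The decorator and logger names are invented for the example.

```python
import logging
import time
from functools import wraps

logger = logging.getLogger("orders")


def observed(fn):
    """Wrap a function so every call emits a structured log line with its
    latency and outcome -- the raw material of production observability."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        try:
            result = fn(*args, **kwargs)
            logger.info("%s ok in %.3fs", fn.__name__, time.monotonic() - start)
            return result
        except Exception:
            logger.exception("%s failed after %.3fs", fn.__name__, time.monotonic() - start)
            raise
    return wrapper
```

In a real system these events would feed metrics and traces, but the point stands even at this scale: the green test bar says nothing about what this function is doing in production right now.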

TDD can become predictive to a certain degree, but then there's just a bunch of stuff that happens in production that you wouldn't believe if you hadn't been in production at scale. You know, a rainstorm comes and all the transformers blow at the same time. And, like, really? That's a thing? Yeah.

Adam: Yeah.

Kent: Didn't know. Who knew? But now we know that the code needs to work if that happens too.

Adam: Yeah, one of my favorite Charity Majors quotes is "everybody tests in production. Some people do it on purpose."

Kent: Right. Yeah, exactly. So I had to learn that, and it was a surprise and a shock. If I were a better engineer, the explicit tests I'm writing would mean: if everything's green, then everything's fine. No, that's not true. If anything's red, something's not fine-- yes, that part's absolutely true. But if everything's green, that just means I have to pay attention to what happens in production.

Adam: Yeah. This is something I've also been thinking about a lot: the notion that you can have a system that runs perfectly fine, and it still doesn't mean users are going to be happy with it or getting the most value out of it.

Kent: Or any value.

Adam: Absolutely. I think we're coming close to the top of the hour, so I wanted to close out with the two questions we always end with. One: are there any other examples of technological jerk that you've personally experienced recently? I know you mentioned the phone update, but are there any others that come to mind that you can share with us?

Kent: Oh, car updates are one. Because it can kill me.

Adam: Right.

Kent: I have a new car, and it has lane assist. And lane assist means sometimes the steering wheel is not connected to the wheels.

Heidi: As a person who drives on snowy roads where the snow obscures the white lines, I turn lane assist off. It's trying to kill me. Because the last thing I need when I'm driving in snowy, icy conditions is for the wheel to jerk suddenly.

Kim: Yeah. Oh, I feel this.

Adam: And then the other question is, who else do you think that we should talk to that you think would have interesting things to say about some of the things that we've been talking about around Progressive Delivery that we should look to engage with?

Kent: I would look to people who aren't in the tech world. So there's this book called Alchemy: The Power of Ideas That Don't Make Sense or something like that. And it's written by an ad executive. I would look for people who've been thinking about adjacent problems, but from a very different perspective.

Adam: Yeah, it's interesting that you say that. One of the books that I think was something that was talked about a lot by us as authors while we were writing was Alvin Toffler's Future Shock.

Kent: Future Shock, yeah.

Adam: And that same. The realization that this was back in the 60s, 70s, that people were feeling that technological jerk. That same kind of perspective of things moving faster than they were prepared to handle.

Kent: Yeah. Another set of influences is the architect Christopher Alexander, who's passed, so you can't talk to him, but he talks a lot about how his goal was to shift the responsibility from the architect to the people who are going to be living in the house or working in the office.

Kim: Oh, I like that.

Kent: And that was part of it. The second part, which he talks about in the production of houses, is not making one big investment and then, okay, the house is done, but expecting any structure to be a sequence of investments.

Adam: Interesting. Progressive Delivery of a house.

Kent: Correct.

Heidi: Yeah. Evidently it was a real pain to live in any Frank Lloyd Wright house because he would come back in and put the pillows back where they belonged.

Kent: Correct.

Heidi: Also leaky.

Kent: Yeah. And turns out my girlfriend's an interior designer and she visited a Frank Lloyd Wright house, and they just hadn't kept up the maintenance. It was a maintenance nightmare. So it was kind of dilapidated, falling apart, but you could see these little spots, and you're just like, that is so beautiful in the midst of all of this chaos.

Adam: It's fascinating.

Kent: So it doesn't matter how good it is at release. What matters is how long people can live in it.

Adam: And the effort that it takes to maintain it.

Kent: Yeah.

Adam: Kent, this has been phenomenal. We really appreciate you taking the time with us this morning. And really appreciate also all the contributions that you've made to our industry. I think it has made a lot of programmers better and more thoughtful, and I really appreciate that.

Heidi: Yeah.

Kent: Oh, thank you so much. I really appreciate that you guys have taken this series of ideas, and it is about leveling the power differential between users and engineering and business. Leveling that power differential. And you've wrapped it in language that's so much more easily absorbed by a larger, less technical audience than I was ever able to do. So feels like you've taken the ball further downfield, and I really appreciate that. Thank you so much.

Kim: Thank you.