
Ep. #4, Signals and Levers with Elisabeth Hendrickson and Joel Tosi
- Continuous Delivery
- AI
- DevOps
- Product Engineering
- Testing in Production
- Enterprise Software
- Product Adoption
On episode 4 of Third Loop, Elisabeth Hendrickson and Joel Tosi join the hosts to discuss systems thinking, software delivery, and why organizations often solve the wrong problems. They explore their upcoming book Signals and Levers, unpacking the CREATE framework and the illusions of progress, predictability, and control. The conversation also dives into AI, user trust, feedback loops, and what it really means to improve delivery.
Elisabeth Hendrickson is an engineering leader, consultant, and author known for her work in software quality, systems thinking, and organizational learning. She is the author of Explore It! and co-author of the upcoming Signals and Levers, and is widely recognized for challenging traditional QA practices in favor of more effective, holistic approaches.
Joel Tosi is a software engineering leader, consultant, and educator with a background in building systems and guiding teams through organizational and technical transformation. He is the co-author of Signals and Levers, where he brings practical frameworks for identifying root causes and enabling better decision-making in complex systems.
- Signals and Levers (Elisabeth Hendrickson & Joel Tosi)
- IT Revolution
- Progressive Delivery
- Thinking in Systems (Donella Meadows)
- Donella Meadows
- Ward Cunningham
- Jeff Patton
- Ruth Malan
- Jerry Weinberg
- Future Shock (Alvin Toffler)
- Extreme Programming
- ChatGPT
transcript
Elisabeth Hendrickson: Hi, I'm Elisabeth Hendrickson. I never know what to say about myself. People might know me as Test Obsessed on the Internet. I did write a paper called "Better Testing, Worse Quality?" and then ran hard and far away from QA or anything that looks like it, and ended up in engineering leadership. And I'll just stop there because I don't know what else to say.
Joel Tosi: I'm Joel Tosi. I also don't know what to say. For a while, just was building systems, did some stuff at Red Hat for a while, spent a long time consulting, doing stuff around Dojos, and kind of team learning stuff. And then I was lucky enough to re-encounter Elisabeth's wonderful work. And here we are today.
Adam Zimman: All right, well, I'd like to once again welcome our listeners to Third Loop. And this is a phenomenal opportunity to talk with some other writers that are working on things in a similar space of trying to figure out how we can make software not only easier for people to deliver, but also, coincidentally, making software that actually makes lives better for humans.
So, I'm excited to have you all here with us. This is, you know, a podcast that came out of the book that we wrote on Progressive Delivery, thinking about how we could actually build the right thing for the right people at the right time. So looking forward to a lively discussion.
But why don't we start, give us a pitch of your book which will be coming out this fall.
Joel: I'll try it, and Elisabeth, you can fill in the gaps. So I love what you said, Adam, because in our consulting gigs, it's always been everybody gets kind of wrapped up in building stuff. We want to build the right things, but we get kind of confused in the noise.
In the places I've been, I always felt like managers wanted, like they had the best intentions. Right? Like, nobody ever said a manager wasn't trying to help. They were always trying to help. And I think the challenge was as Elisabeth and I talk about quite a bit, they react where they see the problem and not what's causing the problem.
And I think if I were to summarize the pitch of the book, it's almost that in a nutshell: How do we solve problems where they occur and not where we observe them? And to do that, you actually have to be thinking about the systemic effects and what's causing issues and see different signals.
Heidi Waterhouse: Wait, let's back up a second and say the book is called--. And it is coming out.
Joel: Great catch, Heidi. The book is called Signals and Levers, coming out September 22nd from IT Revolution.
Elisabeth: And the subtitle is Systems Thinking Tools to Unblock Software Delivery.
Adam: Awesome.
Kim Harrison: I love everything about this. I'm very excited today.
Elisabeth: Yay! But now I'm super curious. Tell me more about what makes you excited.
Kim: I like thinking about things more holistically. You're looking at the system, I think Joel pointed out, let's not look at the symptom. Let's actually try to get at the cause and try to consider how to actually fix this and not just slap a little band aid on this little thing that we see in the corner.
Adam: Yeah. I mean, interestingly, you know, this was actually-- That larger picture was a big part of what we talk about with Progressive Delivery. But what we realized was that the canonical DevOps, kind of infinite loop, the whole premise of this podcast is that we realized that there's a third loop.
You know, you've got your developers that are building a thing, you've got your operators that are making sure that it gets delivered, and you got your users. Right? And your users actually have to adopt all of the things that you are producing. And so, you know, our extension of that holistic viewpoint was realizing that we needed to do a better job paying attention to user adoption as part of our build and delivery cycle.
And so how do you actually kind of create that notion of that full three cycle feedback loop that actually pays attention to how people are using our software and what are the things that they're able to do with it and how fast are they able to accommodate changes?
Elisabeth: I love that. And when I was looking at your book, which I'm gonna confess, I haven't actually read your whole book, but I've read parts of it and here's the thing that I love. I feel like our books are coming at the same kinds of topics from slightly different perspectives. Each of which is important obviously.
And what I saw in your tools where you were talking about that third loop and what Progressive Delivery actually means, and you're talking about feature flags, et cetera.
These are all tools that support learning. And ultimately systems thinking requires learning. And it turns out that that's actually a really hard thing for a lot of organizations because it requires that they be willing to invalidate the mental models that they used to start building software to begin with.
Adam: Absolutely. Yeah. So why don't you tell us a little bit more about Signals and Levers and how you are thinking about like, you know, is there a framework that you're proposing in the book or are you actually proposing bringing forward certain other tools that you think are important for people to consider?
Elisabeth: Yes, we are. So Signals and Levers. Let's just start with the fact that we had a really hard time naming this book. And we love the name now, but when we started we had a class that we were teaching that, full credit to Joel, it was Joel's class and I was the one kind of riding his coattails.
But he had this systems thinking class that involved using causal models, Shewhart charts, U curves and pulling those together in a really nice tight self supporting framework for how you get to the point where you can figure out where the pain is caused instead of where the pain is observed, which is one of the things that Joel said earlier today.
So the original name of that class was A Systems Thinking Toolkit. And that was, I mean that was accurate. But when we went to go create the book, we realized that it doesn't really do service to the book. So then, well, we went down an entire path of well, what the heck is happening? But with swear words.
And we didn't think we were going to get that by our publisher, IT Revolution. But we were thinking, you know, the "no a**hole rule" has the asterisks. We figured that maybe we could do something like that. And we do in fact have in our heads the idea that we would do a sweary version of this for like an after dark webinar kind of thing.
"Effing systems, man." I mean, you know, "Unintended Consequences." You get where I'm going. Okay. But then we realized that everything that we were talking about enabled you to see what was going on, "signals," and to figure out where or have a better shot. I'm going to say have a better shot at which levers to pull.
So often organizations are pulling on the lever that is most apparent to them, like a control lever. "We're having problems with quality. So we will add QA at the end of the cycle. Because somebody with quality in their title, that's going to fix the quality problems, right?" Spoiler. No, it actually exacerbates them. See my 2001 paper.
But anyway, so then as we dug deeper into the materials and we realized, all right, context is everything. It depends. We know it depends. We were still using those same original three tools from Joel's original class, the causal models, Shewhart charts (process control charts), and U curves. But there was something else missing which gets at the framework that we introduced in the book, and that was: how do you kind of categorize and reason about these signals that you're getting?
And we realized that there are six dimensions of context. And this brings us to the CREATE framework that helps you kind of understand in different terms a little bit more about what you're seeing that maybe you're looking at the capacity of the system and you're trying to apply a capacity C, CREATE framework, C for capacity.
You're trying to apply a capacity solution to what is actually an E for execute, execution problem. So the six dimensions are capacity, risk, execution, adaptability, trust, and economics. And so we would argue that just about anything you could come up with would fit in at least one of those dimensions.
And that allows you to reason in terms of: What problem are we actually trying to solve? What signals are we getting that tell us that that is a problem? And is there any possibility that we have miscategorized the problem?
And so the reason that the levers keep backfiring is because we keep pulling on say capacity levers when we have an execution problem. So throwing more capacity at it, shocker, isn't fixing the problem as just one example.
Adam: I think that that sounds awesome. So it sounds like with this framework you're able to help people recognize not only patterns, but anti-patterns in terms of like not only how they can do things well, but also give them some ideas and examples of how maybe you recognize or how to recognize when things are going poorly and why.
Elisabeth: A hundred percent. We actually start-- Chapter one is about the three illusions. It used to be the first three chapters, but some of our early readers pointed out, could you give us a win, please? This is too depressing. So that became one chapter.
Adam: That's fair. And the rest became footnotes, I understand.
Elisabeth: Pretty much exactly. Yes. I'm sure you've been through this. Joel, do you want to explain what the three illusions are?
Joel: Illusions of progress. And so what I think is really interesting, especially in Progressive Delivery, a lot of times we think because we're shipping software, therefore we're doing better. Right? And it's just like, well, actually you have an illusion of progress.
If your customers don't like what you're delivering, it doesn't matter if you keep on shipping the same stuff, even if you're doing it faster.
Right? Crazy.
Adam: Well, I mean, just to stop you there, like, I think this is one of the things that--
Heidi: I feel some synergy here. Haha.
Elisabeth: Yes! Haha.
Adam: Yeah. So this was one of those things where I know that when we were towards the end of our book publication cycle, you know, we were finishing up last summer, and so all of a sudden our editors at IT Revolution came to us and were just like, "so this AI thing, do we need to, like, change or have you rewrite your book?" Haha.
And you know, we had a little bit of a like existential moment of like, "do we?" And so, you know, we talked about this as a kind of author group. And the thing that we realized was that this was actually the most amazing reinforcement of the things that we were talking about.
As I think many of us from a practitioner perspective have realized, AI is the ultimate tooling for being able to amplify pre-existing conditions.
And so that whole notion that you just brought up of like, you know, shipping faster doesn't necessarily mean better. I think that we've started to see more and more of this from companies that had really poor practices with regards to, whether it was product market fit, customer adoption, usability, like all of these things.
All of a sudden they're just like, no, I can use generative AI to be able to ship a hundred times more features. And everyone's like, yes, but I still will not adopt any of them. And things are starting to go further and further off the rails for some of these organizations because they didn't fix the fundamentals of, to your point, you know like what is the root of what people need, what people want, that is going to actually drive your business forward.
Joel: Yeah. I think there's a beautiful question that Elisabeth and I were even talking about. Maybe it started a few months ago.
Imagine you're doing things well and AI is actually an amplifier, just like you're saying, Adam. But if you're shipping faster than you can actually learn, now the bottleneck just moved. Even if you have great practices, if you're getting them out there and you can't process the feedback from your customers fast enough, it doesn't matter.
Adam: Yeah. Part of our assertion is that part of your "great processes" need to actually include feedback from your users.
Joel: Yeah.
Heidi: Yeah. So we decided that what it was, was in your phrase, a capacity and not a transformation of the whole thing. And I love this CREATE framework. I'm thinking about it and I wish Donella Meadows were still here because I want to hear her take on it.
So much of what she said in Thinking in Systems was about the effects are not the same as what you think you're putting in. And that's what I hear you saying. It's like you are getting effects. And I always think about this in a biological sense.
Like, we can't measure how much thyroid hormone you have. We can only tell if your body is trying to get more thyroid hormone because of your thyroid stimulating hormone. It's not a thing we can directly measure, but we can see the effects. So how do we, in tech organizations, learn to stop and think a moment about what is causing this outcome?
Elisabeth: Mhm. Oh, and there are so many things that in the tech world we-- So what you're describing is a proxy measurement. We can't actually measure the thyroid hormone, so we measure the thyroid demand hormone--
Heidi: Stimulating.
Elisabeth: Yeah, I don't know anything about biology. I'm really sorry, but--
We can't measure quality. That is not a thing that you can measure. But you can measure customer satisfaction, you can measure defect counts, you can measure all kinds of things that kind of give you some signal about quality.
And we do this all the time, all over the place, everywhere. And the place where people really get themselves in trouble is when they confuse the proxy metric for a measure of the thing that they actually want to know about. And so they start counting defects and saying that's our measure of quality. Well, a defect report is just an opinion. That's all it is. It's an opinion.
"I expected it to work this way. It works that way." Sometimes it's a really important opinion. Like if your customers are telling you you've got defects everywhere, you better listen. But on the flip side, so many organizations that I worked with in the 90s, I was in that QA side of the equation. And I remember Ron Jeffries saying on some mail list somewhere, "oh, I wish QA would stop making up requirements."
And I remember being very offended, like, what are you talking about? And this might have been the early 2000s, but whatever, it was quite a long time ago now. And at the time I was very offended until I really sat down and thought about it. Because every time a QA person logs a bug, they're asserting that the requirements are that the bug not exist, but that may be an undefined behavior that nobody cares about. And so QA is just making stuff up.
Adam: I mean, see, the solution to that was something that I witnessed, you know, in my time at VMware, where there were certain teams that would dismiss the bugs as features. Haha.
Elisabeth: Right, right.
Adam: I totally agree with you though. I mean, one of the things that we talk about is the metric is not the goal. And being able to recognize that there needs to be that notion of higher order thinking and feedback of what are you actually trying to achieve and who are you trying to achieve it with and for.
So that's something that we keep coming back to, is this notion of the user. Ultimately, all software, all software is produced for somebody to do something with it.
Elisabeth: Mhm.
Adam: And that someone, if you're not paying attention to their ability to be successful or their ability to kind of complete the task at hand, whether that's playing a video game or operating a large piece of machinery, or completing some type of banking transaction on the Internet, whatever that is, if you're not paying attention to whether or not they're being successful and whether or not they're actually feeling as though this is a tool that is making their lives in some way easier, better, you know, more straightforward, then you're probably doing it wrong and you're probably going to run into somebody else coming along to do it better.
Elisabeth: Yeah, I mean, it occurred to me as I was reading the first intro of your book. It opens with the story of the parents trying to transfer money in a bank account. Right?
Adam: Yeah.
Elisabeth: And the bank has changed a whole bunch of stuff. And the users, they were not along on the journey for this, and now suddenly, and I think we've all had the experience of having to help someone who is less aware of how technology works, help them overcome some kind of hurdle and figure out how to actually do what they intended to do to begin with.
And to me, if we come back to the CREATE framework, this is about erosion of trust. And that becomes really interesting because now if we do that enough times and our users don't trust us, then even when we did something good, they throw their hands up and they groan, "ooh."
Heidi: Right. "Oh, more change. I don't like it."
Elisabeth: Right.
Adam: Yeah, I mean, I think that we've all experienced that, right? Where there's like, either a certain vendor or a certain product, you know, or even just a certain feature within a product where you try it the first time and you're just like, oh, that was horrible. Right? And then all of a sudden they're like, no, we've made a bunch of changes. You're like, yeah I'll just do it the way that I know that I can actually get through it without, you know, cursing.
Joel: I actually think there's something potentially even deeper there. Like, Elisabeth definitely mentioned the trust side of it, especially with their customers. In a lot of organizations, what happens is, not to keep on going back to the CREATE framework, but they actually lack adaptability.
And what I mean by that is the product was built a certain way 10 years ago, and the market has now changed and the way people want to use it has changed, but the architecture or the implementation became very rigid. And so now the company lacks adaptability. And now the adaptability becomes a trigger that now creates distrust in your users. And so you see kind of all these signals kind of like weaving into each other.
Adam: Yeah, there are multiple different ways that that can present itself. Right? Where there's the one where it's like, you feel like the workflow or something stagnates, and you're just like, oh, I wish that this was just taking advantage of new constructs or new paradigms in the way that we've found better user interfaces.
I've also seen the interfaces where they just keep adding, you know, more and more buttons, more and more dials, more and more levers and sliders. And then all of a sudden you've got like the cockpit of an airplane, and as a new user, you come in and you're just like, "whoa." Haha.
Heidi: Right? And as an experienced user, you're like, I use--
Adam: 10%!
Heidi: --10% of this, but it's spread across the spectrum of all the things. But you can't tell what 10% a new user is going to need necessarily.
Adam: Well, and interestingly, like, you think about it from a perspective of like, intent of the company, oftentimes the reason they added the new button or the new function was because they were actually trying to expand their market.
So it's entirely possible and likely that those new things are going after the new users, and those users will never even realize the value of what was there to begin with. So I am liking this CREATE framework. I think I agree with you. I think that y'all did a good job of taking this in another kind of dimension that, you know, we were touching on, that really resonates.
Joel: Thank you.
Elisabeth: Hey, I want to come back to the-- So at the very end of your production cycle, AI, sort of like, it wasn't new because AI is not new, but it was sort of. I don't know if it was right when ChatGPT had gained massive traction or--
Adam: Pretty much. So it was basically, you know what was it, version 3.5 came out and all of a sudden, you know, it was mass user adoption, right? Went from, you know, a few hundred thousand users to millions of users within a week. And there was this idea that, like, "oh, does this change the things for you?"
Because we built a framework you know, that we refer to as the four A's, where we were talking about abundance, autonomy, alignment and automation. And those were kind of the progressions of how do you start to think about the way in which you build, to be able to not only deliver you know, more quickly, but make sure that you're actually paying attention to that adoption cycle and feed it back into all the things that are going to improve the user outcomes.
And so automation in particular was something that kind of came up and it's just like, well, does that need to be replaced with AI? And I was like, no, turns out, you know--
We did realize that, you know, we did go back and we did add a little bit to kind of each section about where AI kind of fit in and how to think about it in the context of this kind of emerging technology. That was definitely adding value in a lot of ways, but we also made it clear that we needed to keep the focus on these principles that we were talking about of how do you actually impart change?
Because AI isn't going to do the work for you. So you need to really kind of shift how you're thinking about your organization and the way that you're optimizing for that, and how you also look for where are the bottlenecks? Because it turns out with AI, if anything, it's accelerated the bottlenecks, right? It's not only finding them, but shifting them.
I know that Elisabeth and I have talked about this idea of the shifting bottleneck. And where is all of a sudden the problem? Is it really the fact that now it's like code review is the problem, or is it that testing's the problem?
Or is it that we're suddenly kind of like doing this whack a mole thing where all of a sudden we're trying to chase after the visible problem as opposed to the point that you all are making is that notion of the underlying condition. And how do you actually start to address that?
Heidi: Which sort of sounds to me like value stream mapping, but like backwards and in high heels.
All: (laughter)
Elisabeth: It's the Ginger Rogers of value stream mapping. That's lovely. Keep going. I'm so sorry.
Heidi: So what you're saying is I have a stream that is currently producing value. How do I straighten it out? Or are there places that it should be a meander that it needs to take some time to get through this?
Like one of the things that I see people trying to hurry up with AI, and I think it's a terrible mistake, is user acceptance. Like you can't get an average adoption rate person to try something right when it comes out. They just will not.
So the people who will try something right when it comes out are not representative of your parents who are trying to do banking. And the value stream therefore does need to have a meander in it. It does need to have a slowdown and say, like, this is not a bottleneck. This is a valuable riparian environment.
Elisabeth: Yeah, I love that.
Joel: There's actually two things along those lines that we kind of even go into on our side a little bit, Heidi. We talk about this idea of "Pausation." So instead of causation, we're talking about Pausation. It's like, you know that you intentionally need to slow down and wait because you need time. And then we also get into latency and feedback loops.
And many times, you have to be aware of latency. You have to be aware of latency when it's a benefit. You also have to be aware of like when you create latency. Right? Because a lot of times in organizations, they create latency in their feedback loops through poorly designed process or multiple steps or segregated silos.
And so like, we talk a lot about time delays and about latency. And this idea of pausation and knowing the difference between the two when it's advantageous, when it's reactionary. I love it.
Adam: No, I mean, I think that, you know, having done a lot of like systems work, it's one of those things where sometimes latency is your best friend because race conditions are real.
Elisabeth: Okay, but don't put sleeps in your test.
All: (laughter)
Joel: Unless you're using Selenium. Then you have to. Right? I mean, isn't that how it works? Haha.
Elisabeth: No!
Joel: I'm kidding, I'm kidding.
Heidi: Arm wrestling.
Elisabeth: Haha!
Adam: Yeah. No. And I think that this is also something that's, you know, we're seeing a lot of things break where all of a sudden agent interactions start to look a lot like DDoS.
Elisabeth: Mhm. Yeah.
Adam: You know, because the systems weren't built for that level of, you know, kind of interactive computer to computer type communication.
Elisabeth: Well, and that gets to adaptability. Right? An organization that has kind of allowed their architecture to continue to just accrete stuff, as opposed to figuring out how to distill down to what is the core of this and making sure that it's well factored for adaptability. And just year over year they have the vendor du jour slapping new capabilities into it, and it's now extremely fragile.
Every time they try to do anything it is likely to fall apart. And all of a sudden now everybody is demanding to know, okay, where's the API? How does my agent talk to this thing? Do you have an MCP interface for it? And the organization has no ability whatsoever to respond to that request for change.
Adam: Yeah, and I think that this is where it's like also the reality of our world today for so many vendors and so many creators of software, is that you reach a tipping point where all of a sudden you're not talking about like one or two users who are all the same type of, you know, willing to adopt the newest thing. You know, those kind of super early adopters, you start to spread out along that kind of bell curve that we know of from Crossing the Chasm.
And all of a sudden you've got the laggards and the early adopters that are both interested in your software. And so how do you actually serve, whether it's a service or a product or feature, whatever, to both ends of that spectrum in a way that you don't alienate people and in a way that you don't actually break the user experience for anyone.
So, you know, continuing to deliver value to the most aggressive individuals and also continuing to deliver trust to the individuals that depend on your service is becoming a greater and greater challenge.
Elisabeth: "Foreshadowing is the hallmark of a quality production," I think was a Bloom County quote from way back when. We actually know what our next book is. Our next book is called Delineations. Subtitle: Drawing Lines is Hard.
And what you're pointing at is that I think that this is a question of how do you draw the lines in your architecture so that you can serve those constituents if in fact that is the business that you're in.
Like, if you're a financial services company who has a consumer facing web interface, that can't change constantly because your users are just not gonna-- They don't wake up every morning thinking, "ooh, I wonder what cool weird things they did to the UI so that I can--" Yeah, no, that's not, that's not.
Adam: Well, I was just gonna say you say that. But at the same time, you think about it from a perspective of the way that, you know, we've seen banking change just in the past decade and the fact that, like my kids, they don't know what to do with cash. They'll get cash and they're just like, "can you take care of this for me and convert it to electronic money?"
You know so similarly, they both have checking accounts, as known by the banking system, but have never in their life written a check. Never even had a check printed with their name on it.
Heidi: Nope.
Adam: You know, and so I think that, comparatively, they'll send money with like, whether it's Apple Pay or Google Pay or Venmo as well as, you know, how do you then transfer that to like an actual bank account? Or do you leave the money there, like all of these different constructs in the financial services space.
But then you talk to my parents and they're just like, "I don't know what any of that is and I don't want to know."
Elisabeth: "Can't I just write you a check?"
Adam: Yeah, yeah. Or send you cash. You know my parents still will occasionally send cash to my kids in the mail. And it's like, it's so cute. But at the same time my kids are just like, "can you guys take this and do something with it? Make it into something I can use."
Elisabeth: See, I actually though, I think that this is exactly the point and I think this is why your book is so important. Because as technology moves faster and faster, which it is. This is not like "technology moves faster," casually. It's a, "whoosh! Heading for the singularity."
And that means that now the uses that we have to be able to support if we want to bring everyone along are just, yes, exactly. You can see that the dots on the curve are getting farther and farther apart at an incredible rate. And if I were any good at physics, I would probably have a great metaphor right here, but I don't.
Heidi: We do. We call it jerk, which is what we wanted to call the book, but wiser minds talked us out of it. Haha.
Kim: Speaking of curse words.
Elisabeth: Wait, unpack this for me. Jerk?
Adam: Okay, so jerk is actually the rate of change of acceleration. So it's the third derivative of position.
Elisabeth: Oh, genius.
Heidi: So you know, you feel constant acceleration in a train, but when it changes, that's a jerk.
Elisabeth: Right.
Adam: And it turns out that if the rate of change of acceleration is too great, you will physically feel it. You know, like when an elevator moves too fast, or when you're in a car accident. And you know what we talk about in the book is the fact that you can feel that both physically in physical motion, but also from a technology perspective, when your technology changes faster than you can wrap your head around, t hat is the technological jerk that we all feel.
Elisabeth: That makes so much sense.
Adam: And the fact that it's a double entendre is not entirely unintentional.
Elisabeth: So what you're saying is that we should do another one of these for the after hours version where we talk about sweary systems and jerks. Haha
Kim: Yes, absolutely.
Elisabeth: So part of the reason that I was asking you back a few questions ago about AI in your book is because in our pitch we had to explain to IT Revolution, who is also our publisher, what our angle was on AI.
And I remember Joel and I kind of looking at each other over a Zoom going, "what do we tell them?" And so our book-- We don't even have as much reference to AI as y'all do, because we put it in the very beginning and at the very end. And everything else in the middle, it's--
Adam: That doesn't matter.
Elisabeth: Yeah, yeah, it doesn't. Because for the underlying principles, it turns out they still apply.
And what I find fascinating is despite the jerk increasing and everything that we've just talked about, the things that organizations are struggling with are the same as in 1968 at the NATO conference on software engineering, where the term "software engineering" got popularized and the term "the software crisis" was coined.
If you, I read-- I have a tendency to go deep down the rabbit hole on certain things and so I--
Adam: You're not alone on that one. Haha.
Elisabeth: Oh, I would do like, you know, 20 hours of research for one sentence in the book. Haha.
Adam: So Heidi and I were the individuals on our team that would maybe take that pattern to the nth degree as well. I think that there were a couple sections where Heidi and I both wrote like 20, 30 pages that got reduced down to like a paragraph.
Heidi: Mhm. Yeah.
Adam: Because we realized no one actually needed to know the--
Heidi: We kept them though. They're our babies.
Adam: Yes.
Elisabeth: Okay, sign me up for that newsletter. Haha.
Adam: Yeah, yeah. The history of the term bikeshedding is actually extraordinarily interesting to some audiences.
Heidi: Mhm.
Adam: But not necessarily all of them.
Elisabeth: You can't just leave it there. Can you give me at least the TLDR on the history?
Adam: So first off, you've heard the term bikeshedding.
Elisabeth: Of course. Yes.
Adam: And so now tell me, as you understand the history or you know, origin of that term, like what comes to mind for you?
Elisabeth: Well, I don't know anything about the origin of the term, but I will say what I think it means. It is the tendency of an organization or a committee to be totally fine approving millions of dollars, if not billions in spend with very little discussion. But as soon as you're talking about painting $100 bike shed, there's going to be subcommittees and meetings and approvals on the color of the bike shed. That's my understanding of what the term means.
Adam: Yeah. So close, very close. That was ours as well. It turns out that it was in relation to a nuclear facility that was going to be built in the UK and that when they were talking about it, it was more a function of not the color of the bike shed, but it was more a function of whether or not the bike shed should even be built. And more importantly, they actually got more wrapped around the cost of coffee, but they had no idea when they were asked to estimate--
So basically a budgeting committee came into a bunch of physicists and engineers and said, okay, well this is our estimate for the cost of like the nuclear facility of like, you know, millions of pounds. And they're like, yep, looks good. And then they came in and they were just like, and the cost of the bike shed is going to be 480 pounds. And they were like up in arms.
They were like, no, no. Half of them were like, that's way too expensive. The other half were like, no, that's not nearly enough. And then, you know, the next line item down was, it was like, they saw that there were, for these meetings, they were also estimating a cost of, like, 30 pounds for coffee. And they're just like, "that's ridiculous. What coffee costs 30 pounds?"
And so they got completely wrapped around this idea that the cost structure of things that were, you know, tangible or things that were approachable was something that they would argue ad infinitum about. And it was both sides. But it was the actual, like, the costs of those, to your point, the larger ticket items, that they're just like, "we can't even break that down. So. Sure, whatever you say."
Elisabeth: Oh, I love that. And this would have been how many years ago?
Adam: It was late, I think, late 50s, early 60s.
Heidi: Uh-huh. Yeah.
Elisabeth: So all of these things, like the 1968 NATO conference on Software Engineering, if you read the, I guess they're the proceedings. They're not exactly a transcript, but there are a lot of places where it is a literal transcript of the back and forth discussion.
So if you read that, it reads like the meeting minutes from something that happened last week in some organization. And what you're describing right here sounds exactly like the meeting minutes for some other group in some organization, like last week. These things are just. What is the word that I am looking for? It's, like, literally the opposite of ephemeral.
Kim: Timeless?
Elisabeth: Timeless, yes. Evergreen. Thank you.
Heidi: Time is a flat circle.
Elisabeth: Yeah.
Adam: This was the other thing. I think we actually have a couple of quotes in our book from Future Shock.
Heidi: Yeah. We read Future Shock, and we're like, "stop looking in my window."
Elisabeth: Haha.
Adam: Someone wrote our book in, like, you know, 1960.
To your point, we kept finding these reports about the same concept of technological jerk called different things. But the pace of change of technology in reports from the 50s, from the 60s, from the 70s, from the 80s, from the 90s, you know, it was just like, every decade, there was, like, some type of crisis moment of things are moving so much faster than they ever have before.
Heidi: "Will we be able to prepare the children for the future?"
Elisabeth: No.
Adam: Yeah.
Elisabeth: Sorry.
Adam: Turns out the children weren't the problem.
Elisabeth: So you say the word crisis. And software, the software crisis was kind of where our book starts. And Gerald Weinberg, I studied with him for a lot of years, and he had a phrase that I just loved, and that was that: "it looks like a crisis, but it's the end of an illusion."
Kim: Oh, I like that.
Elisabeth: I loved that. And it fits so well with the-- We start with three illusions that you can see in the 1968 proceedings from the Software Engineering Conference, but you can also see in your AI-generated meeting notes from your meeting last week. And it turns out that that crisis of whatever-- Pick your favorite crisis. Like, "what do you mean it's not gonna ship on time? You've been saying the status is green for months and all of a sudden now you're telling me it's not gonna ship on time. It's a crisis." No, it's the end of an illusion, buddy.
Heidi: Yeah, I love that. That's called a watermelon status.
Elisabeth: Yes.
Heidi: Green on the outside and red on the inside.
Elisabeth: Yes.
Adam: So I interrupted Joel earlier and you only got through one of the illusions. So do you wanna share quickly?
Joel: Oh yeah, sure. So we mentioned the illusion of progress. So just like we talked about, you know, building stuff and whether the customers want it-- even, I would offer up, inside organizations. The illusion of progress: everybody has their own repo, but you know, to integrate it, everybody has to work together.
So everybody's going super fast in the repo until you have to actually integrate it. Then it just kind of grinds to a halt. So you get the illusion of progress. We talk about the illusion of predictability, which I think is-- I love this one. I was with one group, beautiful, wonderful people, but the product owner said, my team needs to estimate better.
Now, I'm not here to kind of talk about estimates, but when we looked at their actual data of their cycle time, it was 12 days, plus or minus 12 days. And I go, "this is not an estimation problem. This is a systemic problem. The system's creating all this variability inside your work where sometimes it goes really fast and sometimes it takes 17 cycles to go through it."
And like, that's the problem to solve. So you get this illusion of predictability. And then quite possibly our favorite, the illusion of control. "We had an outage. We better have a risk board to get together to stop any other problems from happening" or "we're going to be late, we better get a meeting together, then we'll be back on time."
There are all of these kind of illusions of control that we think by reacting to something now we have the problem under control. So, yeah, those are the three illusions, progress, predictability and control that we start off with.
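Joel's "12 days, plus or minus 12 days" observation can be sketched in a few lines of Python. The cycle-time numbers below are hypothetical, made up for illustration; only the spread-as-big-as-the-mean shape comes from his story.

```python
from statistics import mean, stdev

# Hypothetical cycle times (in days) for one team's work items --
# illustrative data only, chosen so the spread rivals the average.
cycle_times = [1, 1, 2, 3, 4, 6, 8, 12, 18, 25, 30, 34]

avg = mean(cycle_times)   # ~12 days
sd = stdev(cycle_times)   # ~12 days
cv = sd / avg             # coefficient of variation

# When the coefficient of variation approaches 1, "estimate better" is
# the wrong lever: the system itself is producing the variability.
print(f"mean={avg:.1f}d stdev={sd:.1f}d cv={cv:.2f}")
```

With numbers like these, tightening estimates cannot help; only reducing the system's variability can.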
Adam: I mean, come on. Everybody should have five nines uptime on everything.
Elisabeth: Well, yes.
Joel: Exactly.
Adam: I mean, especially if you put it on your website.
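Adam's "five nines" quip has concrete arithmetic behind it: each extra nine cuts the permitted downtime by a factor of ten. A quick sketch:

```python
# How much downtime per year does each availability level actually allow?
minutes_per_year = 365.25 * 24 * 60  # 525,960 minutes

for nines in (2, 3, 4, 5):
    # e.g. five nines = 99.999% uptime = 10**-5 fraction of downtime
    downtime = minutes_per_year * 10 ** -nines
    print(f"{nines} nines -> {downtime:,.1f} minutes of downtime per year")
```

Five nines leaves roughly 5.3 minutes of downtime per year, which is why claiming it casually on a website is an illusion-of-control move.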
Kim: I think this is something Heidi talks a lot about: testing in production. We're all doing it. Failing gracefully.
Joel: Yeah.
Heidi: But I think the illusion of control is very interesting because we tend to think about it as technologists from the point of view of like we're making something. And so if something goes wrong, that's our control. And I think that that is once again failing to involve the user in what's going on.
Like you can make the most perfectly operating thing that nobody wants and you don't have control. Like this is the Liquid Glass example.
Elisabeth: Oh my word. Yes.
Heidi: You have never seen anybody as mad as a bunch of 70 year old quilters who cannot see what is going on on their iPhone. So mad.
Elisabeth: Justifiably so.
Heidi: Mhm. But it works perfectly. Like there's nothing wrong with it, quality-wise.
Elisabeth: Wait, wait, wait, wait, wait. That last word, are you sure that's the word you want?
Heidi: Well, I think in the illusion of control sense that it does exactly what the product designers intended it to do. They just forgot that whole acceptance part.
Joel: The "people have to use it" part of it.
Heidi: Yeah.
Elisabeth: Right.
Heidi: Well, and they're all under 40 as near as I can tell. It looks fine to them.
Elisabeth: Yeah. I'm not a 70 year old quilter, and I will tell you that, as a 50-something-year-old, I am not fond of glass at all.
Heidi: Mhm.
Elisabeth: But that "quality"word is really interesting and the "intention" word is really interesting.
So coming back to the discussion about the bottleneck, I would argue that the bottleneck hasn't shifted, but that the rise of AI has brought it into stark relief.
That because we can now generate code so much faster, the bottleneck was always at the end of, "but are we shipping a known quantity?" And you can do that with code reviews. You could do that with, I mean, a combination of code reviews and QA and audits. And all the way back in, you know, like 1999, when the first XP team-- I might have my dates a little bit wrong--
But Extreme Programming introduced this notion of test driven development so that we always have a known quantity. And this is one of the things that Ward Cunningham and I totally bonded over because he was saying, look, because Extreme Programming allows us to always know that we are doing what we intended to do, the question now becomes, what didn't we think about?
And at the time I was doing a lot more with exploratory testing. And so his point, and this is why he wrote the foreword to Explore It, my previous book, because his whole point was, look, if we work in this way, the software is always ready to explore and now we can get more information so that we can ship a known quantity.
So we did what we intended to do. How do we get feedback on whether or not our intentions are aligned? Right? You're reacting, right?
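The "known quantity" idea from Extreme Programming that Elisabeth describes can be sketched minimally: write the test first so the intent is executable, then write code that satisfies it. The `order_total` function here is a made-up example, not from either book.

```python
# Test first: an executable statement of what we *intend* the code to do.
def test_order_total():
    assert order_total([10.0, 5.0], tax_rate=0.1) == 16.5
    assert order_total([], tax_rate=0.1) == 0.0

# Implementation written to make the test pass -- a known quantity.
def order_total(prices, tax_rate):
    return round(sum(prices) * (1 + tax_rate), 2)

test_order_total()
# TDD answers "did we build what we intended?"; exploratory testing
# then asks "what didn't we think about?"
```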
Adam: Completely. I mean, yeah, this is the thing about, I think so many software organizations nowadays, they either forget or ignore, you know, that idea. And I think that it's one of the things where, having worked in both large organizations and startups, I see it, you know, something that, you know, when a startup forgets to do that, well, they just don't get the next round of funding and they go away, right?
But for like large organizations that have millions or tens of millions of users and they start to either forget or ignore doing that, you're doing real harm to people, you know, you're actually making their lives worse. And you know, that's something that I think we are all agreed on and I think you are too, that like we should be trying to avoid that.
We should be trying to say, hey, look, how can we be responsible and how can we actually make sure that we're continuing to make people's lives better or at least, you know, provide the status quo as a bare minimum.
Heidi: One of the interesting things about test driven development to me was always that it asked developers to use the imagination that we have not asked them to cultivate in order to write tests. And the good thing about a real QA person is that they're the kind of people who will order negative one beers.
Elisabeth: Haha.
Heidi: Like the ways they think to break things are diametrically opposed to the kind of mindset that it takes to build software. And we've really lost a lot of that in our "developers do everything" movement. Even though we're turning out things that pass all the tests, as you were saying, they're not prepared necessarily for the weird-ass things that people will do in production.
Adam: This is one of my favorite questions that I get all the time from my wife is: What did the developer want me to do here?
All: (laughter)
Heidi: I talk about when I'm struggling to learn software, I'm like, I need to understand this theory of mind so that I can understand what it's trying to get me to do. I've been trying to teach myself graphics editing software to do pattern design.
And I'm like, I don't understand vector drawing programs. That's not something I've ever been taught. And trying to teach it from the interface is not going well and I need somebody to explain what the computer thinks it's doing.
Elisabeth: So I do want to say though, in defense of developers, the very best developers that I ever worked with actually did think about all the things that could go wrong. They thought about this is what I'm trying to do. And then here are all the things that could go wrong. And so back, back when I was involved in like testing and QA, I loved--
I had this one developer that I remember working with who would write these specs because this was back in the day when you wrote massive specs and then you did a lot of development and then it got tossed over. I know that was last week for some organizations, but go with me here.
He would write these specifications and I loved reading his specifications because they were so educational about all the things that could go wrong that he had thought about and taken into account in his design. So I think that, you know, I hear you, Heidi, when you say that we aren't really cultivating those skills. I think that we need to do more to continue to cultivate those.
I think that some developers have stumbled just accidentally into this mindset and bring it to their work. And now we could do this a little bit more intentionally. And are going to have to because guess what?
Heidi: Yeah, but the system does not reward that behavior anymore. Like now they're a bottleneck. Like, why are you so slow?
Kim: Yeah.
Adam: And I think that this is the thing, is that when we got rid of that role of QA, and this is something that, you know, I know we've talked about is when we got rid of that role, we didn't get rid of the responsibility or the job to be done, but we put it on to people who may or may not have the experience or the kind of same level of passion for doing that work.
And so it became something that we made more of a labor or you know, kind of unwanted task to be completed. And if there was a way to avoid it, it gets avoided.
Elisabeth: I have so many thoughts.
Joel: Heidi's comment around like the systems aren't rewarding this type of behavior. You know, "you're going too slow." It's always interesting when you hear organizations say that because underneath that there's an assumption, right? If we went faster, we would have more money or something like that when there is no data to back up anything that they are-- Like the whole "slow" thing is just somebody's opinion.
Heidi: Yeah, that's just like, your opinion, man.
Joel: Yeah. Like, the whole idea is like, I just want more stuff, so I have more stuff. It's like a kid at Christmas. Like, you're not gonna play with the toys anyway. Having four more boxes of Legos doesn't change anything.
And it's an interesting paradigm, this rush for this sense of urgency with no reason.
Heidi: People ask me all the time how they could get developers to write more documentation, and I'm like, have you ever promoted someone for writing documentation? Have you ever fired someone for not writing documentation? No? Then you don't actually care about documentation.
Adam: Yeah.
Heidi: Like, until you set the rewards and incentives up to encourage the behavior you want, you're not actually doing the thing that will make it happen.
Elisabeth: 100%. Feed what you want to grow.
Adam: Well, with that, I know that we're, you know, coming up on time here. This was an awesome and totally fun conversation, you know, and looking forward to the release of your book in September and looking forward to reading it.
Kim: I have one last question. Who else should we talk to? Who do you think that we should invite and have a discussion with?
Joel: I would say Ruth Malan. Every time I've interacted with her, it's always been mind blowing. Yeah, it's just wonderful. And the way she explains things and processes things and the things she thinks about, it's just always amazing. So I love my interactions with her.
I've always been a great fan of Jeff Patton and the way he kind of thinks about products and just the way he interpolates ideas. I think he's got a lot more thoughts than sometimes he gets out. He gets kind of amped up at times.
And then I've always loved reading and hearing from Ward Cunningham, but I know Ward isn't doing as much these days, but every time I've seen him speak around things, it's always been amazing. So, yeah, Ward, Jeff, and Ruth.
Heidi: Excellent.
Adam: Awesome.
Kim: Thank you.