Jeff Gothelf
Lean Product Design

Jeff is the author of Lean UX and is a Principal at Neo. He's spent over a decade as a designer and user experience expert helping companies build better products.


Introduction

How many of you guys are runners? Joggers? Exercisers? Hold your hands up. There you go, nice. Okay, now keep your hands up for a second. How many of you, while you're out running, take photos while you're running and post them to social media? Significantly less. A fewer number. And the reason for that, and I apologize for this in advance, is because generally that is asinine.

You're out there. You're exercising. You're trying to get things done. The last thing you want to do is be on a photography nature walk, right? It's going to be beach season here in about three months. We're trying to get ready for that and get out there running.

Feedback-Based Development

Now, traditional efforts to decide whether or not we should build certain features rely on essentially the same type of mechanism that I just used to figure out whether it's worthwhile to enable this particular activity. We ask our customers what they think, or whether they'll do something in the future, and they speculate about it.

Of course, in the future everything is perfect and we all make all the right decisions. But as it turns out, if you're out exercising in the wild, you end up running past things, and in some cases really nice scenery.

In fact, one thing that was really interesting is that I was out running in Los Angeles last summer. It was early morning. I'd flown in for a client meeting and, you know, when you fly from the East Coast to the West Coast, you're up super early. So I was up super early, and I wanted to get out there and take a run. I'm out on Venice Beach. I run down Venice Beach and take a left onto Venice Pier, and about a half mile out into the ocean, I run out of land.

I use this app called Map My Run. That app, what it does is it maps your run. It's a pretty good name for an app. I got to the end of Venice Pier and I paused the tracking for just a second because I kind of ran out of land and this is what came up. As I paused it, it said, "Hey, do you want to add photos to your run and post them to your route to share with the people who are tracking where you are going, who are following you on this app?"

Now, again, if you would've asked me that outside of that context, I would've felt like that was a ridiculous idea. But in this case, actually, this was the scene. I took this photo right at the end of Venice Pier. The sun was rising. The birds were chirping. The surfers were out. The ocean was doing whatever oceans do very calmly in the mornings, and I had all of this adrenaline pumping. And it was beautiful.

I live in New Jersey. I don't have this kind of scene where I live. And so I immediately said, "Hell yeah!" I like the fact that it said "MVP" as well. I'm going to tap that button. And the app said, "That's terrific, why don't you give us $30 to do that?" So it was a bit of a "wonk wonk" moment there, because I really wanted to take that photo. I did end up taking the photo, just not posting it to my run.

User Experience as Opportunity

But the idea here, the ability of this app to learn my intent as a customer, to learn what was valuable to me, was impeccably designed and timed. The effort that went into learning this was trivial. The extent of this experience that Map My Run built here was a modal overlay and a sign-up screen.

In reality, none of the features on this screen ever have to exist until the company gets enough taps on the "Buy" button or the "Go MVP" button to see what gets people to pass through to the next screen and then what gets them to sign up.

There's an opportunity here to really understand and learn what customers do with our product or service: whether there's value in certain features before building any of those features, how to price those features, and how to get our customers to actually want to take action on these services and features that we put in front of them.

And again, the investment that they put here was trivial compared to the investment it would take to actually build out the photo posting feature or any of these other features.

There's a real opportunity here to build experiences that take advantage of the context of use, to understand what value we're providing to our customers, the people who consume our products and our services.

Defining Value

Value is one of those words that has taken on about the same amount of meaning, and therefore no meaning at all, as the word "innovation." It doesn't really mean much anymore. So I do want to break this down into a bit more explicit definition of "value." There's the common definition, right? The one that we're most familiar with. This is the literal, the financial meaning. It's a number that we've promised to someone or that has been promised to us that we're going to achieve.

We're going to sell this much stuff. We're going to make this much money. We're going to get this many users. That's the kind of value, and this is the kind of number, that ends up in spreadsheets and in business plan decks and that type of thing. But there's a different definition to the word value that I think is super important. It's the qualitative side of the word value itself, and it doesn't fit neatly into the tools that we use to measure the financial version of value.

Value is the relationship fostered between our product and our customer. We have to understand, as quickly as we can, what's valuable to our audience and how to learn it, all in service of reducing the risk of building things that people don't want.

For example, Map My Run never has to build any of those features if they don't get enough taps that say, "Yes, this is something that I want to do while I am actually running." So we have to understand where the real value lies in the context of use of the products and services that we're building.

The trick is that this is really hard because the relationships that customers develop with the services and the products that we build for them are emergent. You can't predetermine them. You can't decide exactly how your customers will use your product, what they'll do with it, if they'll come back, if they'll love it, if they'll tell their friends about it. You can't force this and they don't happen overnight.

The relationships happen through a continuous interaction with the product or service. If those interactions are positive, then customers come back and use our products over and over again, and they begin to differentiate us from the competition because they see value in those interactions.

Those repeated interactions form an experience, an experience that is explicitly designed to create these long-lasting relationships, and in turn, that premium value.

Building Relationships

These great relationships are realized through return visits and retention of the customer base. Getting them to come back, getting them to try it, getting them to come back and do it again and again and to tell their friends. And you see this story play out with the biggest successes.

Take the story of Instagram, right? When Instagram was purchased for only a billion dollars, they were one of 3,700 photo-sharing apps in the app store. What made them so valuable was the relationship that customers had with the app, the network that came with it, and that was explicitly designed.

WhatsApp, same story. Ironically, it was acquired by the same company, but for 22 billion dollars this time around. Again, because there were so many users who had built such intimate relationships with it, the product had extreme value for them.

And think about offline companies like Shake Shack and Disney World. These are companies that excel at creating great experiences that customers value and come back to, and customers start to build relationships with these companies, these products and these services.

The challenge is that there are an infinite number of ways to create great product experiences. It's literally infinite. You can pick from any combination of words, colors, steps, processes, requirements, interactions, right? Any. Where do we start? How do we know where to start? How do we know where to begin?

If you think about companies that have been successful in the past and are still successful today, take Groupon for example. Groupon really focused on quirky, colloquial copywriting to get people into the deals and to get them to buy. That worked really well for them. Google focuses on clarity, utility and efficiency to make sure that people get what they need out of the service. Netflix? Ubiquity, continuous streaming, keep people watching, right?

So how do we know where to start? How do we know which experiences are going to drive these great relationships? A lot of the time we focus on marketing, on advertising for customer acquisition, or on word of mouth, and we need to really rethink some of these definitions.

The Inhale Process

Nathan Shedroff, who runs the design MBA program down at California College of the Arts, talks about marketing as the "inhale process" of your product design and development process. It's how you learn. In other words, these should be activities that lead us to understand what customers are actually trying to do with our product, why they're trying to do it, and how we can best solve those problems. That plays really well into user experience, design and research: how do we build learning into our product design capabilities?

The interesting thing is that the way we've done this to date has been relatively successful in a non-digital world, but certainly not in the continuous world that we have today. The tools that we've been using are tools like surveys, like the post-photos question I asked at the beginning of the talk, or ad campaigns, or email marketing, or focus groups.

Those types of tools don't work as well anymore when we live in a world where you can push content, features, and new experiences out to market as fast as five times a minute. And that's a real statistic: Amazon pushes code to production every 11.6 seconds. Five times a minute, some customer somewhere who uses the Amazon service sees a change in the experience.

Now, they're not redesigning the entire checkout process or the whole product page. They're moving a button three pixels to the left or to the right. They're changing the call to action, or they're tweaking the personalization algorithm. But they have set up a system that allows them to deploy ideas into market very quickly.
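To make that concrete, here's a minimal sketch of deterministic experiment assignment, the kind of mechanism that makes shipping tiny variations safe. The experiment name and variants are made up for illustration; this is not any real Amazon system.

```python
# Minimal sketch: stable, deterministic A/B assignment (illustrative names).
import hashlib

EXPERIMENT = "checkout-button-position"  # hypothetical experiment
VARIANTS = ["control", "left-3px", "right-3px"]

def assign_variant(user_id: str, experiment: str = EXPERIMENT) -> str:
    """Hash user + experiment so each user always sees the same variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

print(assign_variant("user-42"))  # the same user always lands in the same bucket
```

Hashing on user and experiment, rather than randomizing per request, keeps the experience consistent for each customer while the team measures the difference between buckets.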

In addition to that, they've set themselves up to sense how those deployments affect customer behavior. Once we push this thing live, how does it change the way customers interact with our product or service? And then, equally importantly, they're set up to respond to that onslaught of new insight at the same pace.

This new ability shifts the way that we build products and it changes the way that we have a conversation with the market. The conversation we can now have is a continuous one.

We can push ideas live, we can see how they change customer behavior, and then we respond to those changes based on that. Maybe we continue down the same path or maybe we don't, but the faster we can get ideas into market, the faster we get that feedback into our process.

What this does is help us design our products. It really makes us think about product design in a lean way: lean product design. It's the actual product today, the experience of using that product, or an approximation of that product, that gives us the best ability to learn, to build this inhale function. And it increases the pace of learning beyond the traditional way of understanding what our customers want: we build, we learn, and then we adjust based on customer feedback.

Finding and Reaching Goals

Our goal is to get real-world evidence out of the market that tells us whether this is something that customers will actually participate in, or something they won't participate in or won't actually pay for.

There's a couple of components of lean product design that I want to go through and I'll share with you some case studies from our experience about how we've been practicing it, to learn how to build the products that we built.

The first and most important thing is to remember that any kind of lean approach to product design or development has an element of humility built into it.

What I mean by that is that you cannot determine the end state of a product. You don't know it. We're approaching the product development and the product design process with a heavy sense of skepticism. We can take a good guess.

We can take an educated guess based on what we know about our industry, our domain, our existing customers, perhaps, but at the end of the day, what that end state looks like is unknown. So approaching that means that we have to figure out how to mitigate the risk of building the wrong things.

Everything that we worked with was essentially an assumption, right? The idea of what we're building, who the customer is, what problem we're trying to solve, they're assumptions. They're guesses, and we need to understand very quickly whether they're the right guess or the wrong guess, and we do that through experimentation.

It's a tactical approach to testing our assumptions that we build in a cyclical manner into the product design and development process. It's not something we do at the beginning, it's not something we do at the end; it's simply a part of the process.

We run a series of experiments to understand whether or not customers actually want to do certain things, like the Map My Run example. Will customers actually want to add photos? Let's find out in the process, and look to build the least amount of effort into our experiments so that we can learn as quickly as we can whether these are the right things or not. As we do that, we begin to iterate. If we don't get it right the first time, then we try it a different way and see if that makes more sense.

Now look, the experiments that you run and the iterations that you go through will lead to what's called failures, but I don't even use that word anymore. I really want to focus on the learnings that come from these experiments, right?

Building that iterative capability to understand that we tried something and it didn't work, but we learned something good from that, and we can try again, allows the product to be nudged in the right direction every time that you run that experiment.

At the core of all of this is keeping the customer in mind. Whoever you're building a product for, you have to keep a user-centered point of view.

Human Data

The relationships that we're trying to build are about people, people trying to solve a problem within particular contexts. And our goal is to understand what those contexts are and how to best solve those problems.

The user-centered point of view applies to any type of audience. So if you're not building a B2C product, if you're building it for a techie audience, or for a B2B audience, those are still humans at the end of the day that have to use your product.

They have to achieve some kind of goal or task. Their success criteria may not be to upload a photo, share it with as many people as possible, and get as many likes. It might be to do their job better, to get a raise, to get home on time for their kid's soccer game. Whatever is important to them.

But we have to remember that no matter who we're building for, ultimately they're humans that are using our products and keep them at the core of the understanding of what we're building and why we're building it.

As we run these experiments, as we collect this information, we use data to make decisions about whether or not we're going the right way: quantitative data, an understanding of what people are actually doing. We want to measure quantitatively how the changes that we're deploying are changing customer behavior.

Do customers come back more often? Do they do more stuff? Are they more efficient with the processes and the tasks they need to complete? Do they tell their friends about it? Do they do whatever it is that we're trying to actually get them to do?
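As a sketch of how you might answer one of those questions from raw usage data, here's a minimal example that computes a return-visit rate from an event log. The event shape and the toy data are assumptions for illustration, not any particular analytics system.

```python
# Minimal sketch: "do customers come back?" from a raw event log (toy data).
from collections import defaultdict
from datetime import date

events = [
    {"user": "a", "day": date(2024, 5, 1)},
    {"user": "a", "day": date(2024, 5, 3)},  # user "a" came back
    {"user": "b", "day": date(2024, 5, 1)},
]

def return_rate(events) -> float:
    """Share of users seen on more than one distinct day."""
    days_by_user = defaultdict(set)
    for e in events:
        days_by_user[e["user"]].add(e["day"])
    returning = sum(1 for days in days_by_user.values() if len(days) > 1)
    return returning / len(days_by_user)

print(f"return rate: {return_rate(events):.0%}")  # 50% in this toy log
```

Tracking a number like this before and after each deployment is one way to see whether a change actually moved behavior.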

Once we have that quantitative data we want to build the qualitative side to understand as well. It's not enough to measure what people are doing, but we have to actually talk to them to understand why. If you only measure what people are doing, then figuring out how to change that behavior is a stab in the dark, because you don't really know what the motivation is for that behavior.

If you talk to customers, you can understand what is actually driving a particular behavior set, and that allows the next experiment to be that much more informed. Again, this is not something that happens once during the product development and design process.

This is the way that we make software today. It's simply part of the process. We talk to customers on a regular, ongoing basis. As you do this, you start to build a level of responsiveness into your company.

You start to build the sensitivity that says, "We're going to constantly be on our toes because we're just never too confident that we're headed in the right direction." There's always that nagging sense of skepticism that says, "We better get this right, we better get this right, and if it's not right, let's figure it out."

That allows us to build this responsiveness into the company: we've got all this data coming in, and it makes it easier to pivot, to change direction and to react to the information that we're actually collecting.

So if you take these components, you can start to really refine the products and services that you're building. You can start to build really interesting experiments and collect really interesting data.

Case Study: Cooking Light Magazine

I want to share with you three case studies from the work that we've been doing over the last couple of years at my company Neo. First is for a magazine called Cooking Light.

Cooking Light is a 30-year-old magazine. They're based in Alabama, I believe, and they are sitting on an archive of evergreen content that includes nothing but healthy eating recipes.

There is a huge demand today for healthy eating recipes, and no one is buying their paper magazine. They couldn't figure out how to monetize this archive of content. They're generating new material, and they've got 30 years of evergreen content.

We sat down with them and we brainstormed a bunch of different ideas about how we could potentially use this content to create a new revenue-generating subscription service, because that's the business model that they know how to operate.

We sat in a room with them for a while, we threw a bunch of Post-it notes on the wall, and we found an idea that we loved. We talked to a handful of customers, and that seemed to go well.

The first version of the product, the first experiment that we built, was a concierge MVP. It was a landing page that promised the Cooking Light Diet. There was a sign-up page attached that let people join the waiting list for the Cooking Light Diet service, and it had a price point in there.

It said that the service was full, and the price point was either 20 bucks a month or 40 bucks a month. We ran a split test between the two to understand what would actually get customers to sign up for the waiting list for a service they didn't yet have to pay for.
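A minimal sketch of how a split test like that could be wired up, assuming nothing more than a visitor ID and a sign-up flag; the two price points are from the talk, the bookkeeping is illustrative:

```python
# Minimal sketch: a two-arm price split test for a landing page (illustrative).
import hashlib

PRICES = [20, 40]  # dollars per month, the two arms from the talk

def price_for(visitor_id: str) -> int:
    """Deterministically bucket each visitor into one price arm."""
    bucket = int(hashlib.sha256(visitor_id.encode()).hexdigest(), 16) % 2
    return PRICES[bucket]

visitors = {20: 0, 40: 0}
signups = {20: 0, 40: 0}

def record_visit(visitor_id: str, signed_up: bool) -> None:
    price = price_for(visitor_id)
    visitors[price] += 1
    if signed_up:
        signups[price] += 1

record_visit("visitor-1", True)
record_visit("visitor-2", False)
for p in PRICES:
    if visitors[p]:
        print(f"${p}/mo: {signups[p] / visitors[p]:.0%} waiting-list conversion")
```

The point of the experiment is that nothing behind the sign-up button has to exist yet; the conversion rate per arm is the learning.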

Testing the Waters

What that did was bring dozens of early-adopter customers into our queue, and we put the initial 40 of those customers into the first alpha test of our concierge MVP.

Now, the concierge MVP was literally a Google form that we sent out to the first 40 people, and we asked them to fill out some information. It said, "Tell us about your family. Tell us about your eating habits, your exercise habits. What are your goals for your family? Do you have any calorie goals for the week?"

And then we took that information, and we put the subject matter experts from Cooking Light magazine on a task: dig through the archive of content to find recipes every week, which we would then drop into a pretty HTML email template and email out to these customers.

On Sunday night they would get an email from us that was the Cooking Light Diet, manually sourced by the editor of Cooking Light magazine. The customers didn't know it was manual, because it looked polished. They had signed up for a service, they filled out a form, and they got information back. They got the service.

At the end of the week, we would call these people and ask them about their experience with the service this week. "What'd you think about the email? Did you read it? Did you cook anything? Which ones did you cook? Which ones didn't you cook? Why or why not? What else would you like to see in the service?"

Through these continuous conversations on a weekly basis and the manual, "Wizard of Oz" nature of the service, we began to measure which features were working, what content was resonating and what wasn't.

As we began to build more people into the concierge MVP, we started to gain confidence about where we should start writing code. Through this process we began to automate the service.

Over the course of a year, we went from Post-it notes to a concierge MVP that was manually run by humans in New York. To give you a sense of what that meant: if any inquiry came in to the Cooking Light Diet service outside the hours of 9 a.m. to 5 p.m. East Coast time, we let people know that our servers didn't work during those hours, and we would respond the next day, because our folks had to go home.

Just to finish the story: a year and a half later, we're now done with the project. We started with a set of Post-it notes, went on to a concierge MVP, measured traction, usage, value and relationships, and shipped the features that actually delivered that value over time as we automated and expanded the service.

Today it's a million-dollar-a-year run-rate service. It's far from flawless. It has a lot of room to grow and, from a start-up perspective, that's a fairly big win. From Time Inc.'s perspective, I don't know if they're going to invest in it or not. It's going to be interesting to see what they actually do with it.

We managed to understand what to build and how to build it by taking these small risks and measuring the traction of the features before we committed to building any code.

And then we refined the design and the calls to action to get to a point where it was on brand with the existing Cooking Light experience.

Case Study: Taproot Foundation

That's one example of experimentation and learning. Let me share with you another one. This is a company called Taproot. They're based in New York and what they do is they help connect non-profit organizations and pro-bono service providers.

The business problem that they're trying to solve is they wanted to build a new, two-sided marketplace that connected non-profit organizations with pro-bono service providers, which is what they do. They came to us and they said "Look, we'd love to build this service." We went into this project with a series of very high-level assumptions. The biggest assumption that we made going in was that there was going to be a glut of demand and a dearth of supply.

In other words, there's going to be lots and lots of non-profit organizations seeking service providers and very few service providers willing to actually deliver free services to these organizations.

We built, again, a "Wizard of Oz" MVP that allowed both sides to enter the marketplace using simple forms: "Tell us what kind of business you are, what you're looking for, what services you need. What's the project like? What services do you provide? How often will you be available during the week?" and so on. And we matched people up manually to see how well the product worked and to understand where to automate the service.

Challenging Assumptions

The fascinating thing, the most fascinating thing that we learned about that ecosystem when we started to build this concierge MVP marketplace, was that, in fact, our assumptions were 180 degrees wrong.

There were tons of service providers willing to provide their service, and a lot fewer non-profit organizations that actually wanted this particular service. This fundamentally changed not only our design, but our features, as well as our success metrics.

The things that we were measuring had to fundamentally change because now we had to understand what was important to the suppliers much more so than the non-profit organizations, and we also had to understand how to acquire more non-profit organizations.

Had we gone ahead with our initial assumptions and simply built features, we would've built the wrong system.

We would have designed and optimized it for a market reality that wasn't actually true. So, by these incremental experiments, we actually built for the realities of the real marketplace.

Case Study: Real Simple Magazine

One final example: Real Simple Magazine. Real Simple Magazine wanted a new way to engage with their target audience. They knew that if they could bundle a subscription service with the print magazine service, they could retain customers for a lot longer and get a better sense of how long they're going to stick with the product.

We toyed around with a bunch of different ideas to understand what they were looking for. Their readers are usually working, urban women who are looking to kind of simplify their lives and focus on high-end luxury products.

What we ended up coming up with was this service called Two-Do, and essentially what this is, it's a service to nag your spouse or partner to do something for you.

Now again, to understand how the dynamic would work in building a digital version of this service, we faked the experience. We created the opportunity for couples to join the service.

We ran the entire experiment in a Trello board. Every couple had a column in the Trello board that tracked what each person in the relationship was asking the other one to do. These couples would email "the service," which was an inbox that we ran. We would then translate that request into a pretty HTML template and send it to the partner.
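To give a sense of how little tooling a Wizard of Oz service like this needs, here's a minimal sketch of that template step. The field names and styling are made up for illustration, standing in for the Real Simple branding:

```python
# Minimal sketch: wrapping a manually handled request in an on-brand HTML email.
from string import Template

EMAIL_TEMPLATE = Template("""\
<html>
  <body style="font-family: sans-serif;">
    <h1>Two-Do</h1>
    <p>Hi $partner, $requester asked you to:</p>
    <blockquote>$task</blockquote>
  </body>
</html>""")

def render_nudge(requester: str, partner: str, task: str) -> str:
    """A human reads the request off the Trello column, then renders the email."""
    return EMAIL_TEMPLATE.substitute(requester=requester, partner=partner, task=task)

print(render_nudge("Sam", "Alex", "pick up the dry cleaning"))
```

Everything else (matching, scheduling, follow-ups) stayed manual until the experiment earned the right to be automated.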

We used Real Simple branding, Real Simple language and Real Simple design elements to make the experiment feel as real as possible. Because that was important.

The more on-brand that you can be with your experiment, the better you can understand whether or not customers are actually giving good responses to that. Because brand loyalty adds legitimacy to your experiments.

But again, all we really did to test this idea out was build a couple of HTML templates that included some quirky design elements, some quirky copy, to test how well a service worked that allowed you to ask your wife or your partner or your husband to pick up the dry cleaning, change the oil in the car, and whether or not people would actually pay for a service like this.

Amazingly enough, it turns out that people don't want to pay for a service like this, and this one unfortunately died on the vine. But again, we learned that much, much faster, and with significantly less expense, than building out the entire system, automating it, and then sitting back and measuring the behavior.

Front Row to the User Pain

The other huge benefit here is that the people who run these experiments, the people who sit behind them, collect the data, call the customers and do the research, have a front-row seat to user pain.

They can really understand why we're making the product decisions that we're making, and they can inform the rest of the team as they move forward.

Now remember, our key measure of success in all of these experiments and in the way that we iterate our products and the way that we design them is changing customer behavior. Did we fundamentally change the way that customers interact with our service? And we can measure that.

That's the beauty of building a digital product. We can understand whether or not customers bought more stuff, came back more often, completed a task more efficiently, and that's how we measure success, not with the shipment of features. "Did we ship the feature?" is largely irrelevant if we didn't get customers to do something more beneficial for them and then ultimately for us.

As we think about this, we want to make sure we don't forget that these experiences, and the responses we make to these experiments, are deliberately designed.

We want to lean on the experience of opinionated designers. Designers who have studied the data, the quantitative data, the qualitative data. Designers who have talked to customers, who understand what problems they're trying to solve and in what context, and are deliberately designing the next experiment, the next idea to drive a change in customer behavior.

My favorite quote on this comes from Jared Spool who's one of the big luminaries in the UX world. He talks about design being the rendering of intent and our responsibilities to understand the intent of our customers as they come to use our products: "What are they trying to do?" And then, "How do we deliberately design an experience that allows them to do that most efficiently?"

At the end of the day what we're doing is we're building a culture of learning into our company, and the best opportunity to do this is when your company is small. Place value at the beginning of the company's life on learning first, and scaling-out second.

Let's figure out what to build before we launch it to everybody, and let's do that based on real-world insight, not an over-reliance on gut feel. Gut feel is okay as long as we're willing to be convinced that we were wrong, and through this lean product design, this research, this experimentation effort, we can understand very quickly whether or not we were wrong. This is what product development and product design look like in a continuous world.

Delivering a Fluid Product

That's the benefit of this continuous world: we can get ideas into market very quickly and we can get feedback very quickly, which means that we can adjust very quickly. At the end of the day, what we're trying to do is break down any silos that may exist.

Again, there's such an opportunity to do this at the start-up phase of a company, the early stages of a company: to make sure that product design, technology and marketing are all working together to understand the customer and the pain points, and to share that learning with each other so that everybody is working from the same data sets and understands why we're making decisions about certain features and certain aspects of our product.

At the end of the day, it's about getting to product faster. It's about understanding how to build the biggest successes by getting a sense of what our customers' intent is and what they'll do in the context of use. If you look at the products that are successful today, they do just that: they understand exactly what the customer's intent is, right?

They solve a problem for the customer. They entertain, they bring people closer together, they educate. And to discover what these intents are and how to build these experiences we have to get to product faster.

The sooner we can get ideas into market, the sooner we get feedback, and the sooner we can build better products and better-designed experiences. So thanks so much for listening, and I'd love to take your questions.

Q&A

Managing User Expectations

The question was: how do you manage user expectations and user frustration as you're running experiments like this, whether with feature fakes or concierge MVPs? First of all, don't put a lot of false features into the product, and don't run a lot of them at a time. Reserve the riskier experiments for the riskier features. That's the most important bit of advice.

Now, the nice thing is that in many cases, if you're building these experiences in the context of your existing workflow, you have a sense of which customers fall into your mousetrap, and so you can mitigate the damage very quickly.

If you build a feature, a fake feature, and someone clicks on it and then they give you some feedback that they're frustrated with it, you have the opportunity to back channel to them very very quickly and say, "Hey listen, you found our trap door, really sorry about that. We're just trying to figure out what to work on. What brings the most value to you, the customer? Here's a month's free subscription." Or, "Here's an Amazon gift card," or you know. You can make it right.

Remember we're not leaving features like this up for months at a time. We're looking for a level of indication from the market that this is working or it's not working. We're looking for 200 out of 10,000 users to actually give us an indication that it's the right thing to do. Whatever number feels right to you to justify the existence of that feature, and then you take it down. So, first of all, use it sparingly on the riskier stuff, and make sure that you back channel to the folks who fell into the trap to ensure that they're not completely pissed off, and you can always make it right.
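If you want a rough way to judge whether a number like 200 taps out of 10,000 users is a stable signal rather than noise, a simple confidence interval on the tap rate is one option. This sketch uses the figures from the answer above and the standard normal approximation; it's a sanity check, not the talk's prescribed method:

```python
# Minimal sketch: how confident can we be in an observed fake-door tap rate?
import math

def tap_rate_interval(taps: int, visitors: int, z: float = 1.96):
    """Approximate 95% confidence interval for the true tap rate."""
    p = taps / visitors
    margin = z * math.sqrt(p * (1 - p) / visitors)
    return p - margin, p + margin

low, high = tap_rate_interval(200, 10_000)
print(f"observed 2.00%, plausible range {low:.2%} to {high:.2%}")
```

With 10,000 visitors the interval is tight, which is why a threshold like this can justify taking the fake door down quickly.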

Look, if you do end up with a customer or two that are so furious that they clicked this thing and it wasn't there, they probably weren't that loyal of a customer anyway. If they're like, "I'm leaving your service, and I'm never coming back," right. They probably weren't that loyal of a customer anyway. And maybe that was a good opportunity to kind of weed them out of the system. Just some ways that you can mitigate the risk there.

Lean UX versus Lean Start-up

Is there a difference between lean UX, lean product design, and lean start-up? I think the lean start-up principles thrive in this environment. The components that were always missing for me in lean start-up were the design elements. The designers, where were they in this process? How do we build? How do we do great customer development and customer research?

Go talk to customers. Terrific, but there are people who've been doing that for 40 years, and there is a lot to learn from those folks. How do we build those disciplines into lean start-up, into the build-measure-learn loop, and how do the designers fit into it?

I'm hoping that some of the stuff I talked about addresses that, but again, it's about ensuring that there's cross-functional collaboration between product, technology and design, and that learning is shared across all those disciplines so that we can let the designers do their jobs more effectively.

Knowing Your Customers

So it's a great question, the question was, "Hey, you're an expert in this field. You've been doing it for years, you know the domain, you know the industry. I don't really want to be that humble about my ideas. I think I know how to fix things."

I'll share a recent story. My favorite story about this is the Amazon Fire phone.

The Amazon Fire phone was a debacle. It's a 170-million-dollar write-off for Amazon, and if you read any of the stories about what happened there, it was Jeff Bezos's baby.

He determined everything about that. He made all the demands. The customer that the teams were building for was Jeff Bezos. And when those teams couldn't deliver what he wanted, he fired them and he found people who could give him what he wanted.

When the people on those teams were interviewed, they were asked, "Why did you build it this way?" It's ridiculous, right? There are a lot of ridiculous aspects to the Fire phone. The price tag was supposed to compete with the iPhone, and there's all kinds of gimmicky stuff in there that was supposed to get you to buy it, but everybody saw through it.

And without fail, the people who were on those teams responded: because Jeff wanted it. Because Jeff said so. And Jeff's been right a lot, right?

To his credit, he's had many many many huge successes. He is where he is today because of a lot of big guesses, big risks and being right. And so they just assumed "Hey, you know, the man's been right so far, let's just keep going with it." And then it fell on its face.

And so I think that obviously if you know your industry, you know your domain, your guesses, your assumptions are perhaps less risky.

But as you think about it, I would urge you to consider the story of the Fire phone: just because you've been right for the last 20 years doesn't mean you'll be right again about the next thing. So hopefully that was helpful.

Delivery Expectations

The question was, "When you're working with a client," and I assume you're asking when I'm working with a client, "What do we set as the expectation about what we'll deliver at the end of the engagement?"

Our goal is to help them deliver an outcome. And an outcome is a measurable change in customer behavior.

For example, I'll share with you a proposal that came our way that unfortunately we didn't win the project for.

You can ask me why as a follow-up question, and I'll tell you the answer because it's fun. The New York State Senate came to us and said, "Hey we want you to build this digital government initiative to connect the members of the New York State Senate with their constituents in a digital way." And they had this grand vision of messaging and threads and communications.

We said, and it's fascinating, "There's really no other governmental body in the US that's doing anything like this." We really wanted this initiative, and we said, "Okay, what's the measure of success here?"

In this case, it was interesting because it wasn't necessarily changing the behavior of constituents, it was more changing the behavior of the senate.

They had a clear measure of success. The outcome they wanted to achieve was a reduction in the cost of postage. The New York State Senate spends six million dollars a year in postage sending out campaign propaganda that senators send out to get re-elected.

So they figure, look, if we can build this system that allows this digital communication with constituents, then we can reduce the cost of postage, right?

That's a meaningful, measurable outcome that is a reflection of a change in customer behavior and user behavior of this system. That's the kind of promise that we try to make with our projects: to say, "Look, you're going to buy our time and we're going to use that time to most effectively change the way that your customers behave in a way that's meaningful to you.

Let's set that out at the beginning and let's re-evaluate that over time to see if that's actually the kind of behavior that we definitely want to encourage, and move that forward."

Do you want to ask me why we didn't get that project? I'm glad you asked. Why didn't we get the project? Because government is incapable of procuring services without a defined output, especially when the person who's signing off on that check is an elected official. In this case, it was the attorney general of New York State, and he's going to sign off on this massive, high-six-figure, low-seven-figure project with us.

It was an election year, and the question is going to come up undoubtedly from his opponents that are going to say, "Hey you just signed off on some million dollar check to this tech company, what are we getting for it?" And he can't go, "I don't know," right? That's not a good answer and, frankly, we don't know.

I wouldn't know what to tell him. They gave us a list of 100 features. There's no way that we were ever going to build those 100 features, and we certainly didn't know which ones would actually drive those behaviors.

So while there was a genuine desire to work together, they simply couldn't procure the services from us in that way. We run into that a lot with large companies. Procurement is a real slog, especially in the enterprise space and certainly in government.

How Much Experimentation?

So that's a great question. The question was, "How much experimentation do you need to make yourself feel comfortable?" So here's the way that we like to look at it. There is this kind of curve of truth-iness that essentially is the life cycle of your product.

At the beginning of your product there's very little truth in your assumptions. There are a bunch of guesses, wherever you're starting from, and so the amount of work that you put in at that point should be very low.

It should be rough, right? Paper prototypes, customer conversations, whatever the lightest form of experimentation is that you can run to get some basis of evidence into the product development and design process. As you get more truth into your assumptions from the market, the fidelity of the thing that you're building increases.

You move from a prototype to maybe a live-data prototype, or from a single-feature service to a multiple-feature service. So I would argue that the experimentation never ends, but there is an increasing level of fidelity, complexity and comprehensiveness that comes from ongoing experimentation, and that is essentially the life cycle of your product.

The thing really to never lose touch with, and it kind of sucks, I'll be honest with you, is that sense of skepticism. We want to feel good about what we've built. We want to feel like it's a success, and to some extent, that we've achieved some kind of a milestone that maybe it's finished for now and we can move on to the next thing.

But the reality is that if we keep this position of humility in mind, and this constant state of skepticism, we're always thinking that we can make something better, that we can improve a situation, and all of that drives truth into your efforts. That's the way that I see it.

Changing Customer Behavior

The question was, "Changing customer behavior, how do we arrive at that as sort of the goal?" Why build software? Is it strictly for the pleasure of writing code and designing interfaces and shipping features?

It's the "so what." It's really the only reason that we do what we do. We're trying to change somebody's behavior and in a positive way usually, right? We're trying to get them to be more successful at their job, we're trying to bring them closer to a loved one, we're trying to get them to save some time in figuring out their finances.

Whatever it is that we're trying to do, we're actively trying to change the way that people do something with the software that we ship. It's the only reason to do it.

Sure, take mortgage application completion rates, or the time spent completing an online mortgage application. The cost reduction that comes with getting people to complete their applications online means we can re-purpose that time on something beneficial to the bank itself.

There's got to be a benefit to the customer as well as a benefit to the business, and any reduction in the time people spend on menial tasks that can be automated serves both. Now we're re-purposing our people's time to be more successful, more efficient and more productive at work, because they don't have to do the menial tasks that this particular service has automated for them.

Anything that changes the way that people either function in their private lives or in their professional lives is an outcome. It's the goal and the reason why we frame success in this way is because there are a thousand ways to achieve that, right?

So we take our guesses in the form of features. When we got that feature set from the New York State Senate, those were 100 guesses about what would actually get senators to communicate digitally with their constituents and vice versa. We don't know which ones will actually work.

So by using outcomes as the measure of success, that's an objective barometer. "Hey, we're testing this feature, and it turns out that it's actually driving people away from the service, and we can measure that. So let's not work on that anymore or let's figure out why and tweak it and fix it."

So that's the key here. It's the "so what" of this. Like why are we actually doing this? Again, it's not to build the app, it's not to ship the feature, it's to positively impact somebody's life.

Avoiding Local Maximum

Yeah, it's a great question. The question was, "How do you avoid the local maximum as you're optimizing?" It's difficult. It's really easy to get narrow-minded and focus on the experiment, optimizing and tweaking, like we said, three pixels to the left, three pixels to the right. I think that after a while you start to see that the changes in your measure of success are negligible.

So we move it three pixels to the right or three pixels to the left, we're not moving the needle in an effective way. We've tried five different things, and we squeezed as much as we can out of this idea, and that's really when you have to make the leap to something else.

This is a luxury that I had at a mid-level company a while back. One of the most valuable people that I've ever had on one of my teams is a curious analyst. Somebody who can look at customer data, the existing usage data and really analyze what's driving certain types of behavior, or at least what seems to drive certain types of behavior, which then gave us focus to say, "Okay, well, maybe we can get other people to behave like this." And then that fueled our next experiment.

Look at your data and analyze the qualitative information to get a sense of where to jump to. But if the changes that you're achieving with your experiments, four, five, six times in, are negligible, that's a good sign that you should step back and make a jump.
