October 10, 2014
BM: Thanks. We're going to talk about what our growth team at Intercom does, and what growth means to us today. I'm Ben McRedmond. I lead our growth team.
SO: I'm Stephen O'Brien. I'm the engineering manager for growth at Intercom. We both work at Intercom. What is Intercom? It is a single communications platform for the entire company to talk to its customers.
I guess this is in contrast to more traditionally having five or six tools that the company uses to talk to people. Maybe one for support, maybe one for marketing automation. Intercom says that that's actually one problem, and you should use one tool for all of it.
In terms of the company, some numbers. We have just over 6,000 paying customers now. We're growing really fast. We grew 5X in 2014. We're on course to do about 4X this year. In terms of people, we have 120 employees in total now. And again, to hint at the growth rate, 90 of those started in the last year and a half alone. So, growth, growth, growth.
BM: We're going to start by contrasting what our growth team is to growth hacking, and how a lot of people are thinking about growth currently. Our perspective on growth hacking is: growth hacking is bullshit.
Growth hacking, we think, is not a real thing. And there's lots of problems with it.
The first problem we talk about with growth hacking is, we think, growth hacking is focused on tactics, not strategy.
You see this a lot. Great example, go to the homepage of growthhackers.com, this community for growth hacking. And you'll see suggestions to use a referral program, or launch a referral program, because it worked for Dropbox.
Everyone knows the Dropbox referral program: refer friends, get free space. But that's being recommended without any consideration of why it works for Dropbox. It works for Dropbox because Dropbox was business to consumer. It was consumers who want to spend less money. In a lot of business-to-business companies, you are not spending your own money. You're spending someone else's money.
That tactic is being deployed without any consideration of the context in which it was invented. And we think people doing this are often not successful, even though they believe they are.
SO: Because of pseudoscience, basically, right? Like, if you measure badly, you'll think you're being successful.
The most classic version of this, I think, in growth hacking is an over-reliance on A/B testing really small changes in a funnel. Like at one point in the funnel. And so let's say that you're trying to make one step convert better, and you make a huge button, and you say that everyone clicks the button more, and we're doing better, and we have a success right here. You're possibly just disrupting the overall funnel. But that's much harder and slower to measure, and it's much harder to positively affect.
BM: Why do they do these things? Why do they make what we would consider these mistakes?
First we need to define what growth hacking is. We tried to find a definition of growth hacking, and Sean Ellis was actually the person who defined growth hacking a few years ago. He defined a growth hacker as someone whose true north is growth.
Now, we don't super know what that means, but we'll come back to it. I think of this in the context of a great essay Paul Graham once wrote, titled "Startup = Growth." And what he was trying to define is, why do we call Facebook, Instagram, all the companies here, "startups"? And why do we not call a new bar or a new coffee shop a "startup"?
The distinction he came up with is startups are a subset of new businesses. They are new businesses that are designed to grow fast, grow really fast. And when you take that perspective, growth is actually everyone's job.
So the idea of there being people who work on growth is slightly incorrect. You know it's engineering's job to grow the company. How does engineering grow the company? Engineering grows the company by applying science to build products, taking a scientific, methodical approach. It's marketing's job to grow our company, too. Marketing grows the company by communicating the value of a product, service, or brand, and that drives revenue, drives user growth, etc.
We come back to growth hacking. A growth hacker is someone whose true north is growth. How do growth hackers grow the company? Well, effectively, it sounds like growth hackers want to grow the company by growing the company, right? There's no process defined there. Growth hacking only defines an outcome. It doesn't define any process or framing of the problem.
Engineering, marketing, design, sales, all of these functions, all these traditional functions, have a framing of the problem. And our opinion would be that their framing of the problem is their tool for solving the problem. And that without that tool you end up making these mistakes, you end up sort of jumping from tactic to tactic without deeply considering why you're doing those things.
SO: OK, so we don't think that growth hacking informs a great growth process in a company or growth team. But we actually do work on a growth team. So how does our growth team work? First of all, it's interesting to think about who's on the team.
Our growth team consists of product managers, designers and engineers who all work together to support and shape that go-to-market strategy.
These are product people focused on business problems, I guess. Crucially, it's its own function. We're not in the marketing team. We're not in the sales team. We're not in the product team, either. And we report as a unit to the CEO. That's a really crucial detail that I think we'll come back to.
Finally, then, what's our conceptual sense of ourselves? We like to think of ourselves as building the software storefront and sales people for Intercom. A great offline analogy is when you buy a quality watch in a quality watch store, the product that you buy is really nice. And it's been worked on by people who really know how to build that product well.
But, actually, the experience of buying that product is brilliant, too, and it's been worked on by a different set of people. When we look at SaaS software and how it's sold online, we see often brilliant products made by brilliant people, but not sold in a very nice way. So that storefront metaphor is something that's important to us.
In terms of the scale of our team, we're at 10 people currently. We expect to grow to about 22 by end of year; that's our current plan. All of that is just to say that this is a really big investment for Intercom. Intercom is a relatively large company, I guess, compared to some of you guys, but not that big. And this is it putting its money where its mouth is.
It's not an experiment. It's a strategic bet. But what is it that we actually do day to day?
BM: We're going to talk about a few of our tactics, even though we think you shouldn't consider tactics without strategy. But we'll get back to that. One of the things we work on is signup.
Signup is the process of getting people in the door, exploring your product. And one perspective we take on this is: let's always be evaluating. What do successful versus unsuccessful accounts look like? You have to always be challenging assumptions you previously held.
This is what our signup flow looked like in early 2013. And this signup flow, I actually couldn't find a screenshot of this, and I had to find it from a Quora question which was, "How well is Intercom's crazy signup flow performing?" And it actually performed great, performed really well.
And this was very much targeted at early-stage founders who are technical, want to do everything themselves. They want to get set up quickly. It created a really rich initial experience, you know. You landed in the product with live data coming in, being able to send messages to your users. And it performed well for a long time.
But a year later, we wanted to check, "Do our assumptions still hold about the kinds of people who are coming to Intercom?" So we looked at this chart. A key metric we track (we're a product for talking to your customers, your users) is, "How many of your users have you loaded into Intercom?"
We broke this down into three buckets: people who installed and did not take a trial, so they did not put down a credit card to try paid features. And we can skip the middle one, but the third one is people who installed, took a trial, and became a paid customer. And you can see, for the people who don't trial, there was a huge spike at zero. These people were not getting their users into the product.
For the people who became paid customers, they were. So we asked, "Does our assumption still hold that we're mostly selling to technical founders at small companies?" And it turns out it hadn't, obviously. The product had developed sufficiently, we were far enough along, that we were actually selling to larger customers and non-technical teams. We're selling to marketing. We're selling into product, when before it was more early-stage.
We had big successes in making it easier to get data into Intercom. We created this tool called CSV Import, an alternative way of signing up: drop in a spreadsheet of emails, addresses, and names. And it increased signups 30% month over month, and that's held ever since.
That one project was a huge driver for our growth, today even. And that came from just continuously asking ourselves, "Do the assumptions we made about the people coming to us still hold?" That said, we do consider often the balance between upfront work and richness of experience.
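As a rough illustration of what a CSV-style import path involves (a hypothetical sketch, not Intercom's actual implementation; the column names and validation rules are assumptions), the core is just parsing a spreadsheet of users and validating each row before bulk-creating accounts:

```python
import csv
import io

def parse_user_csv(file_contents):
    """Parse an uploaded CSV of users into dicts ready for a bulk import.

    Assumes a header row. Only 'email' is required; a missing name is
    tolerated. Field names here are illustrative, not Intercom's schema.
    Returns (valid_users, errors), where errors carry the 1-based row
    number so the UI can point the customer at the bad line.
    """
    reader = csv.DictReader(io.StringIO(file_contents))
    users, errors = [], []
    for row_number, row in enumerate(reader, start=2):  # row 1 is the header
        email = (row.get("email") or "").strip()
        if "@" not in email:
            errors.append((row_number, "missing or invalid email"))
            continue
        users.append({
            "email": email,
            "name": (row.get("name") or "").strip() or None,
        })
    return users, errors
```

The key design point from the talk is that this trades richness for lower upfront effort: a non-technical marketer can get real user data in without deploying code.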
That initial signup flow, which required this crazy amount of work, where you had to take this piece of code, put it in your code, deploy your app, load your app, come back, and remember all those steps, performed incredibly well. And the reason it performed well, in a holistic view: maybe the signup rate from visiting our website to signing up wasn't as high as it could be.
It performed very well later down the funnel, because these users came into our product very, very well set up for success with a really rich experience. So we maybe degraded that a bit with the CSV import, but that's something we think about a lot is that balance.
SO: OK, a second area where we spend a lot of time thinking about and trying to act upon is onboarding. These new users after they've signed up, how are they getting on? And so I think a really good tactic for getting to the bottom of these types of problems is just to look at what people say to you.
I guess we're a product that encourages everyone to talk to their customers, so maybe we're biased. But analyzing the conversations you have with fairly new customers is just a great place to find out where they're having difficulties. It's pretty simple.
The important part, I guess, is that we're going to solve these through software, right? We don't want to just have to employ a lot more people to deal with them, more customer success reps. We want to do this in a rich, scalable way with software, which will stay efficient as we grow.
Here's a problem that we found we had when we spoke to customers. This is mid last year. The product had grown and become more complex. It was solving more problems. But for brand new users, it meant that they opened the door into this really complex new place, and they just literally didn't know where stuff was.
We'd see a lot of conversations saying things like, "Hey, how do I find my auto messages?" or "Hey, where's my user list at?" And it was clear that people just needed a hand to find their way around this UI.
This is a really common problem, I bet some of you have had this: "How do we help new people?" I think you'll all have seen the pattern where you just point out stuff on a page to the new user. You sign up, and you get a tour, and it says, "Oh, here's our navigation bar. Here's where you go to read your messages," all that kind of stuff.
The more we thought about that though, we felt it was kind of missing the point. And this quote is just so good, this is from Ryan Singer of Basecamp, and he's talking about onboarding flows. He says, "It's much easier to just explain your mechanisms, and that's why we all do it."
In our case, here is the user list. Here are the auto messages. But I guess the design of our team allowed us a bit more space and time to think this through and think "What really are the problems that people are having that they're trying to solve? They don't actually just want to know where the section is, right?"
We came up with something that looks a little similar to those usual solutions, but is actually pretty different. I'll talk through it briefly. You can see that there's like a circle there highlighting a part of the page, and that we're on step three of 10. Which so far looks like one of those point and show tutorial sessions, but actually what it's doing is enabling the customer to do a real task.
So in this case, this is like a tutorial for product research, and the user here is filtering their actual user list, for people who have had quite a few sessions, and they're going to end up sending them a message to ask them about the product and their use of it. So the person's actually getting a real message out of this.
Right? They're not just being shown where stuff is, they're being guided through a real task. This obviously demanded quite a lot of design and engineering resource. This had to fit with our entire product. We had to build a framework for designing these things. We had to go ahead and make them and test them, et cetera.
But our strategic design kind of enabled us to take that tactic. And it worked really well. We found, shortly after shipping a bunch of these, that the cohort of users that had gone through one of these tutorials turned out to be about three times as activated, on one of our key internal metrics, versus the cohort before that.
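That cohort comparison is simple to compute. A minimal sketch, assuming each cohort is just a list of activated/not-activated flags per account (the numbers below are illustrative, not Intercom's real data):

```python
def activation_lift(tutorial_cohort, baseline_cohort):
    """Compare the share of activated accounts in two cohorts.

    Each cohort is a list of booleans: did this account activate?
    The function name and the notion of 'activated' are illustrative
    stand-ins for whatever internal metric a team actually tracks.
    """
    def rate(cohort):
        return sum(cohort) / len(cohort) if cohort else 0.0

    before = rate(baseline_cohort)
    after = rate(tutorial_cohort)
    # Guard against a zero baseline rather than dividing by zero.
    return after / before if before else float("inf")

# e.g. 30% activated with tutorials vs 10% without is a 3x lift
lift = activation_lift([True] * 30 + [False] * 70,
                       [True] * 10 + [False] * 90)
```

In practice you would also want to check the cohorts are comparable (same time of year, similar traffic mix) before attributing the lift to the tutorials themselves.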
BM: The last thing we work on that we're going to talk about is pricing. It's slightly atypical, I guess, that a team of engineers, designers, and PMs would own pricing, rather than say, marketing. But it's worked for us, so we'll talk about a few of the thoughts we have around pricing.
We very much believe in value-led pricing. And what that is is focusing on the value you deliver to a customer and capturing as much of that as possible.
The biggest contrast to value-led pricing might be cost-led pricing. "It costs this much to make our product, now we're going to add on 20% on top of that." So we're focusing on, "What's the value we're delivering, and how do we capture the most of that?"
We've made mistakes in the past where we were not focused on our user's perspective of Intercom and the value they get from it. We would start with, "OK, here's our revenue plan. Here's our customer base, here's the type of customers we get, let's reverse-engineer a pricing model from that." We did that twice, and every time we did it, it just blew up in our face. It doesn't work because your users don't care about your revenue plan. They care about the value they're getting.
There's this great book, Pricing On Purpose, by this guy Ronald Baker, and he basically, for this entire book, talks about value-led pricing. Highly recommend it.
The next thing we talk about is how many iterations you need to get it right. Now, it sounds kind of obvious, right? I'm guessing most people in this room work on product in some way, and the status quo of how we believe product should be built is in lots of small iterations, not big jumps.
But pricing changes are sufficiently scary that I think a lot of us shy away from this. We definitely did initially. But once you get over the initial hump, how scary it is, it's fine.
As one example of that, we made seven new pricing models in 2014. Here they all are, the main classes for running those pricing models in our GitHub repo. We're running them all concurrently. There's a lot of work in supporting that, but it's really important. And when we look at the progress we've made from the first pricing change, from the second, we're plus 30 or 40 percent average revenue per customer from those points.
It's really scary to think where would Intercom be today if we hadn't kept iterating, kept changing prices. And it is hard because each of these models has their own logic. They charge off their own variables. They have their own UI, and they're all running at the same time. So it's a big commitment, but it has been so impactful to our business.
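One plausible way to run several pricing models concurrently, each with its own logic and its own variables, is to put each model behind a common interface and look up which one an account is pinned to. This is a hedged sketch with made-up model names, tiers, and rates, not Intercom's actual classes:

```python
from abc import ABC, abstractmethod

class PricingModel(ABC):
    """Base class: each concurrent model charges off its own variables."""

    @abstractmethod
    def monthly_price(self, account) -> float:
        ...

class FlatRate(PricingModel):
    """The early 'prove people will pay' model: one price for everyone."""
    def monthly_price(self, account):
        return 50.0

class ActiveUserTiers(PricingModel):
    """Classic small/medium/large SaaS plans keyed off usage."""
    # (max active users, monthly price); numbers are illustrative
    TIERS = [(250, 49.0), (2500, 149.0), (25000, 449.0)]

    def monthly_price(self, account):
        for limit, price in self.TIERS:
            if account["active_users"] <= limit:
                return price
        return self.TIERS[-1][1]  # cap at the largest plan

def bill(account, models):
    """An account is billed by whichever model it is pinned to."""
    return models[account["pricing_model"]].monthly_price(account)
```

The cost the talk mentions shows up here: every live model needs its own billing logic, UI, and tests, all maintained in parallel for as long as any account remains on it.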
But how do you practically make all these pricing changes? Well, what if you only change pricing for new customers until you know you have a model that works? We made this big mistake once where we rolled out this model (this was one of the models, by the way, which we reverse-engineered from a revenue plan, and it didn't work), and all new customers were going to be on this pricing model.
We mailed our entire customer base and said hey, we changed our pricing. Don't worry, it's not going to affect you for a year, but in a year you're going to be on our new pricing. And everyone freaked out because the model was crazy and wrong. And it wasn't even affecting them at the time.
We believe: grandfather always, and don't announce.
And maybe it sounds sleazy or sly, but it's not. Our perspective is, OK, you shouldn't be telling people you're going to move them to a pricing model which you don't know works yet. And because you don't know it works yet, and you don't know if you're going to move them, nothing's changed for them yet. So don't tell them because it just creates anxiety, and it's totally unnecessary.
After the time we made that mistake, we stopped announcing and just grandfathered everyone. A few people will be mad. They'll always be mad. There'll be these weird customers that check your pricing page every day. Don't worry about it. It'll be fine.
We grew 5X last year. A huge amount of that was driven by pricing, and it cost us maybe 30 or 40 angry customers. When you take a step back, you realize that's a very small percentage of your actual customers, most of whom are too busy getting their job done day to day to care.
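Mechanically, grandfathering can be as simple as pinning each account to whichever pricing model was current on its signup date, so only new signups ever see a new model. A minimal sketch, with invented model names and cutover dates:

```python
from datetime import date

# Each new model takes effect for signups on or after its cutover date.
# These names and dates are made up for illustration.
MODEL_CUTOVERS = [
    (date(2012, 6, 1), "flat_50"),
    (date(2013, 1, 1), "active_user_tiers"),
    (date(2014, 6, 1), "per_package"),
]

def pricing_model_for(signup_date):
    """Grandfather always: an account keeps the model that was current
    when it signed up; existing accounts never silently move."""
    current = MODEL_CUTOVERS[0][1]
    for cutover, model in MODEL_CUTOVERS:
        if signup_date >= cutover:
            current = model
    return current
```

Because the assignment is a pure function of signup date, rolling out model number eight never touches anyone already paying you, which is exactly what makes the iteration safe.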
SO: Right, so we've just done the exact thing that we said you shouldn't do. We've told you a load of tactics that we do, and previously we said that tactics don't work. And that is true, right?
You shouldn't just go ahead and implement all of this stuff exactly as we did it, because tactics themselves are useless in isolation. The reason we think these tactics worked for us is that they were informed by our strategy.
The really crucial part is that you need a strategy, and you need an org for growth to support the tactics that you came up with. And, over time, tactics can change significantly. Ben discussed in signup how a tactic, at one point in time, it was good and became bad at another point in time. But your overall strategy can remain the same.
And so I think we have a couple of things we can communicate, which hopefully are a bit more portable, related to strategy.
BM: Thinking about the people on the growth team, what are these people like? We are product people. We're engineers. We're designers. We're PMs. We're building product.
But I think it's a slightly different mindset than building product for your customers, which we are doing, but we're not solving our customers' problems. We're actually solving Intercom's problems. We're solving business problems. And we actually didn't realize this until we sat down to write this talk.
We're like, "OK, out of 10 people on the growth team, there are actually four former founders." Actually, four YC founders from different companies are on the growth team. Three are engineers and one is a PM. And that's the kind of person you need working on these problems. They have the business mindset. They care about the business problem. But they also have enough empathy and product mindset to translate that into great, human, emotive, simple experiences.
The second thing we talked about is how we think about growth at different scales. I describe us as design-led at the macro level, and metric-led at the micro level.
A lot of people these days talk about being data-driven. And I think that's too simple. The world is too complicated to wrap up into one metric which you want to inform or drive forward.
So we take a broader perspective. We think of being design-led, which is you incorporate data, you incorporate deep thought about the problem. You incorporate qualitative data. You incorporate all these things. You take a really broad view of the problem, and then once you have zoomed in on a little bit of the problem, you've refined the problem space a little bit, then you can be really data-driven. Then you can be metric-led, then you can say, "OK, the numbers really matter here. We want to affect this metric."
But we really think taking that broad perspective at the start is really important. One example we have for this is our pricing. We had all this trouble with, "OK, we're reverse-engineering from our revenue plan. It's not working." Took a step back, said "OK, why is it so hard to price Intercom?" The reason is, Stephen mentioned at the start, how we are trying to build one communications platform for your entire company. So there's actually five different communication use cases which people are using Intercom for.
Once we recognized that, we're like, "OK, actually, we could split Intercom into these packages. And these packages would contain these different communication workflows: customer support, customer marketing." Once we did that and priced around that, it became much simpler. But I don't know if we would have ever gotten there by being very data-driven at the high level.
Once we got there, then we drilled into the numbers, into the data, and from there what followed was a really effective pricing model.
Our third point is this: quality is really important. We started this talk with a contrast: "What is growth hacking? What do people currently talk about in the area of growth?" And I think it would be unfair to say that they would say quality is not important. They would never say that explicitly, but they might say it implicitly by valuing other things more than quality.
They might value number of changes made, or speed of changes made, over quality. And we always come back to this idea of the store. You know, you go and you buy a Rolex or a luxury watch or something, right? Not that I've ever bought one. But you go to the stores and they're really nice. They're beautiful, and great experiences, knowledgeable salespeople. That experience for buying your product is really important, and we think of ourselves as building that storefront.
Which of these stores would you rather buy a phone in? The one with the smashed window, or the beautifully manicured one where every detail has been considered? Because of that, we think quality is really important.
You have to be really careful when you're making changes quickly not to let bugs be introduced, not to let visual bugs be introduced. It should be polished, clean. It should always feel like this beautiful, well-considered experience.
SO: Cool, so I guess our one piece of really important advice, which we think all of you should listen to today, is to concentrate on building a strategy for sustainable and systematic growth. And you might think that Intercom is possibly a bigger company than a lot of you guys are at right now in your stage. But we'd just say it's definitely not too soon. Intercom's growth team started when we were 12 people, just post Series A.
When you take a strategic approach, you don't get immediate benefit. It takes time for you to get the strategy right and to build things.
When we look at our growth rate right now, and our absolute financial metrics right now, we can clearly chart a component of that growth and a component of those absolute numbers that are directly connected to the work we've done on the growth team. But that didn't happen right away. So it's kind of scary to think, "What if we had waited six more months, or 12 more months, or even longer?" We'd just be in a much worse situation now.
So get crackin'. Thanks guys.
BM: To tell you about our first 30 days, I'm going to be incredibly unhelpful, because we had absolutely no idea what we were doing. So I'll try and think what we should have done. We started out by this whole process, and the reason we're excited to give this talk is it took us so long to work out what a growth team is or should be.
Look at the core drivers, core levers of your growth. For most early-stage SaaS products, you might not have a sales team. You might not have a marketing team, but you definitely have ways that your product is sold and marketed. You have signup flows. You have payment flows. You have upgrade flows later on.
Ask which of those is the biggest lever you have, and it's usually pretty far up the funnel. Signup's usually a good place to start. And holistically evaluate: do I think this is a good experience? Does the data say this is a good experience? And if not, what would we change? What would you change from a product perspective? How do we create this great experience? How do we make this a great product? We always think of these things as product problems.
I would systematically work through each stage of my funnel, each lever for growth, until I find an area.
Honestly, when we started doing this, when we realized this is what we should be doing, it's kind of shocking because often times these areas of your product have never been owned by anyone. They were hacked together by your CTO when you built it because you're focusing on the core functionality.
Systematically work through that, "OK, is signup good? Do numbers say it's good? Do we think it's good?" And systematically work through that until you find areas that you feel like you can affect.
SO: We're doing a lot of recruiting right now. You saw that, trying to go from 10 to 22. Every time I talk to an engineer, I say, I kind of apologize in advance. I'm like, "I know growth team doesn't sound appealing, but let me explain." And, actually, we've talked a few times about, "What is the word? Growth team doesn't even feel quite right for what we do."
But I think holding an opinion like we do, and like we've tried to get across today, really helps. Because most engineers once they hear, "Oh, we think growth hacking is bullshit," are like, "Yeah!" Because they are. I don't know, they're a type who appreciate a bit of controversy, perhaps, and who want to work on product problems, and who want to do the right thing.
I think, actually, it's our mindset that enables us to do it. People come and they meet our team and they're like, "Oh, these are product people. These are my people." And then you get a subset of those people who are interested in business problems, who are a little bit entrepreneurial-minded, perhaps.
Although we're not always fixated on metrics and taking that quantitative approach, when we do good work, we see the MRR go up. And that's a deeply satisfying thing.
So I think it's a combination of having this kind of outlook, having actual interesting design and engineering problems to work on. And then communicating that.
BM: Yeah, I think the feedback loop is a really big thing. Once you've convinced them to join, to get them really sold on being on the growth team is, the feedback loop for the kind of problems we work on is so much shorter than working on product. If you go build a big new feature, it might take six months for you to see any revenue from that.
Whereas you make a change in a signup flow, in a pricing flow, and you immediately see either positive or negative results. It's a really motivating thing to be able to see the impact of your changes so quickly.
It's something we definitely still struggle with and are revisiting since we came up with this idea of packages, and that there are many different use cases for Intercom, and averaging that across our entire customer base is maybe incorrect.
But we do have a few things we look at. We look at the number of users you have on your account. We look at the number of messages sent, and the number of teammates you have. We took a very "dumb" approach to this initially, or not dumb, but simple, maybe: "What is the absolute value of each of these metrics for people who've become successful?"
There's a certain number of users, certain number of teammates, certain number of messages. And we define people who meet that level as being "activated." So, pretty simple.
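That "absolute threshold" definition of activation is easy to express in code. A sketch with illustrative thresholds (the real values, and the real metrics, would come from analyzing accounts that became successful customers, as described above):

```python
# Illustrative thresholds; the actual numbers are not from the talk.
ACTIVATION_THRESHOLDS = {
    "users_imported": 50,   # users loaded into the product
    "teammates": 2,         # colleagues added to the account
    "messages_sent": 1,     # at least one message sent
}

def is_activated(account):
    """An account counts as 'activated' once it clears every threshold.

    `account` is a dict of metric name to observed value; a missing
    metric is treated as zero.
    """
    return all(account.get(metric, 0) >= minimum
               for metric, minimum in ACTIVATION_THRESHOLDS.items())
```

As the talk notes, the precise numbers matter less than the framing: once "activated" is defined, every project can be evaluated by whether it moves accounts across these lines.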
Oftentimes, when you define any metric which you then work towards impacting, as long as it's not completely crazy, the biggest value you get from defining it is a framing of the problem. That may matter more than how accurate the metric itself is.
SO: To the second question, which was, I think, "How do we prioritize this over other work that we could have done?" I think our product-management philosophy is pretty similar to how product people do it, right? We definitely had some data to support the fact that this was a need of people, people were writing in with conversations to tell us that they had problems.
But then, as we explored that deeper and deeper, we just developed belief that there was this cool thing that we could build, that people would like it. We started building it. It felt cool. We had prototypes. That was like a really product-y experience to build.
So to sum up, we had a case for it. We had a data and business case for it. But we like building cool shit that people like to use, too, right? And yeah, we're really proud of what we produced there, just from a product point of view.
It's a really interesting boundary. We mentioned the fact that our team, or Ben on our team, reports directly to the CEO. So that begs the question, right? How do we know what to build?
We were really inspired by some other growth teams in the industry, like, I think, Facebook's early growth team, who were never limited to working on obviously growth-y things. They led, for example, the internationalization effort at Facebook, which could easily be regarded as a product problem.
So we just have buy-in from our CEO, but also from our product org. It's useful to have people who are fairly agile and free to work on things that can affect growth. Even if they might be inside the problem domain of another product team. Yeah.
BM: Yeah, so we talk a lot about this internally, and where we came to recently was, more specifically, product teams should never assume that growth teams can do anything. They should build what they think is the best product.
What we do is we're constantly evaluating all the data we have in front of us. All our opinions about what it's like to get started with Intercom, continually evaluating that, and say, "What is the biggest opportunity for growth right now? Where is the most impactful thing we could work on right now?" We go and work on that.
Which is why the product teams should never assume we'll do anything. They shouldn't assume we're going to build tutorials. We built tutorials because we were like, "That is the most impactful thing we could do to impact growth at this one point in time." And it would be totally fine if they built them, so the line is very blurry.
How we deal with it on a day-to-day basis is, often times we'll want to build or change product in an area owned by one of our other teams, one of our main product teams, and what I say to my team is just be respectful. Talk to the PM, talk to the engineering lead, agree on the changes, 90% of the time they're like, "Go do that."
Five percent of the time they're like, "Actually, we'd like to do that. That's a great idea, and, you know, we'd like to spend more time on it than you'll spend on it." And then maybe 1% of the time (that doesn't add up to 100, but fuck it), maybe 1% of the time, they'll be like, "Actually, don't do that. Here's the reasons why." And we'll dig into that. That's how we think about it, but it's definitely something we had to work through.
I think there's also a point for engineering there, right? In the same way that the ops team may have engineers on it, dev-ops people perhaps, and the product teams have engineers on them, they all have to coexist in a kind of dotted-line-boundary engineering organization.
I think growth engineers have to be in there, too. A lot of the stuff that helps parallel distributed engineering teams coexist just helps us.
BM: When Intercom started out, I joined in January 2012, the product was in a free, private beta. In June 2012, we started charging a flat $50 a month, just to prove that people would pay for it. Obviously a terrible pricing model, we don't recommend it; it doesn't price-discriminate at all.
So that lasted for about six months. We then created what was our pricing model for the next year, which we called "monthly active users." We had three simple plans with classic SaaS pricing: small, medium, large, differentiated mostly by usage, not features.
The usage metric was the number of active users. And the change from $50 a month to that? Hugely impactful for our business. When we look at our revenue charts, there are kinks in the graph, which are basically successful pricing changes, and there was a massive kink in the graph when we went to this model.
That brought us to about mid-2014, last year, when we said, "OK, the first time we changed prices it was unimaginably impactful on our business." And here's how I think about pricing, and why all these changes are worthwhile: pricing is so, so, so hard that we changed it seven times last year, and if you were to define some mythical perfect pricing model, we're maybe one percent of the way there.
These changes are always, always impactful. So we decided, "OK, let's start working on this again." And we came up with this model we called "pay per message," and this was the model: we looked at our yearly revenue plan, and we said, "OK, this yearly revenue plan calls for a certain average revenue per account." Whether it was $100, $200, I can't remember. And that's monthly.
From that we said, "OK, how do we get to an average number like that across all accounts? How do we reverse engineer that number from the distribution of the types of users we had?" We came up with this model, and it was just way too expensive.
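To make that reverse-engineering step concrete, the target average revenue per account is just a weighted average over the distribution of account types. The segment shares and prices below are made up for illustration, not Intercom's real numbers:

```python
# Hypothetical sketch: check a candidate price list against a target
# average revenue per account (ARPA), given the share of accounts in
# each segment. All figures are invented for illustration.

segments = {          # segment -> (share of accounts, monthly price in $)
    "small":  (0.70,  49),
    "medium": (0.25, 199),
    "large":  (0.05, 499),
}

target_arpa = 100.0

# Weighted average price across the account distribution.
arpa = sum(share * price for share, price in segments.values())
print(round(arpa, 2))          # 0.70*49 + 0.25*199 + 0.05*499 = 109.0
print(arpa >= target_arpa)     # this price list clears the $100 target
```

The danger the transcript describes is exactly this: you can always pick prices that make the weighted average hit the plan, without those prices mapping to the value any individual customer gets.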
It was this mythical model that didn't map at all to the value people got from Intercom. One of the interesting things we found: it was priced per admin, which is per teammate using Intercom, and per message. The per-teammate price is fine. But a lot of the ways people were using Intercom didn't necessarily mean they were sending a very large volume of messages.
They're doing product research. They're doing onboarding messages. It's not really equivalent to signing up for MailChimp and running a massive mailing list. The problem is, when you make yourself easily comparable to similar services, and you're much more expensive, people feel really ripped off, even if they'd be willing to pay that price when it's wrapped up and calculated in a different way, because they can easily look at someone else who seems much cheaper.
So that was a big problem we ran into. The per-message price was just too expensive. People were just turned off by it. We had to do a post-mortem. It was like, "Wow, what did we get wrong?"
We got too excited. We were too excited by the idea of seeing that kink in the graph again. We got carried away with ourselves. We weren't measured. We weren't responsible. And we didn't take the customer's perspective.
That's what being value-led is about. It's taking the customer's perspective. What is the value they get?
The value they got wasn't actually from sending lots of messages or having lots of teammates. It was this single-platform idea: being able to do all your communication in one place. And as we kept reflecting on this, we got to, "OK, let's split it up by the use cases you're using Intercom for. Let's split it up into these packages." And then, "OK, what's the best proxy for increased value when you're thinking about one platform?" And the best proxy for that was the number of users.
You're going to get more value from Intercom from having all your communication in one place the more users you have. From there we have what we call our "jobs to be done" model, which is a reference to the methodology we used for coming up with these packages: "What is the job Intercom is doing for our customers?"
From there we settled on a high-level strategy and started iterating on individual numbers. We ran one experiment where we increased prices 40%, changing nothing else about the model, to see how demand would react. Demand went down, but by less than we expected. From that we did another increase over the original prices which was less than 40%, I think about 12%, and that's the model we're currently on. It's been hugely, hugely successful.
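The arithmetic behind "demand went down by less than we expected" being good news is worth spelling out: revenue is price times conversions, so a price increase wins whenever the relative drop in demand is smaller than the relative price gain. The numbers below are hypothetical, not Intercom's:

```python
# Hypothetical price-elasticity check: raise price 40%, watch signups.
# Revenue rises as long as demand falls by less than 40/140 ~ 28.6%.

def revenue(price: float, signups: int) -> float:
    """Monthly revenue from new business at a given price point."""
    return price * signups

base   = revenue(100.0, 1000)  # before: $100/mo, 1000 signups -> 100,000
raised = revenue(140.0,  800)  # after: +40% price, -20% signups -> 112,000

print(raised > base)  # True: revenue is up 12% despite fewer signups
```

That is why the experiment was a success even though the raw signup count dropped.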
But yeah, the iterations were really important. It was a really important learning process for us. So little about pricing is widely applicable, because it's so unique to your particular product and company. Run lots of iterations, try lots of things, and try to understand what pricing means for you: that's the best advice I can give. And have a strategy, and an organization that supports it.
We were using Stripe subscriptions, and we actually built an entire replacement for Stripe's subscriptions product to better support making all these changes, and all the engineering behind them. We couldn't have done that if we were dependent on an external engineering team.
SO: Yeah, I think the most transferable part of that, because the actual pricing strategy isn't too transferable, is the idea of engineering a system that can accommodate multiple coexisting pricing models, and thinking about that early on. That's a really important idea. It's the kind of premature optimization that's good, I think.
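One common way to let pricing models coexist, sketched here under assumptions (the registry pattern, the model names, and all prices are hypothetical, not Intercom's implementation), is to pin each subscription to the model version it signed up under and dispatch price calculation through a registry:

```python
from dataclasses import dataclass
from typing import Callable, Dict

# A pricing model is a pure function from usage data to a monthly
# price in cents. Each model registers under a version name, and a
# subscription records which version it was sold under, so old and
# new models run side by side instead of forcing a migration.

PricingFn = Callable[[dict], int]
PRICING_MODELS: Dict[str, PricingFn] = {}

def pricing_model(name: str):
    def register(fn: PricingFn) -> PricingFn:
        PRICING_MODELS[name] = fn
        return fn
    return register

@pricing_model("flat-50")
def flat_50(usage: dict) -> int:
    return 50_00  # the early flat $50/month plan

@pricing_model("monthly-active-users")
def mau_tiered(usage: dict) -> int:
    # Illustrative tiers only.
    mau = usage["active_users"]
    if mau <= 250:
        return 49_00
    if mau <= 1000:
        return 99_00
    return 199_00

@dataclass
class Subscription:
    account_id: int
    model: str  # pinned at signup, never silently changed

    def monthly_price(self, usage: dict) -> int:
        return PRICING_MODELS[self.model](usage)

old = Subscription(1, "flat-50")
new = Subscription(2, "monthly-active-users")
print(old.monthly_price({}))                     # 5000 cents
print(new.monthly_price({"active_users": 400}))  # 9900 cents
```

The design choice this illustrates is the "good premature optimization" from the transcript: because the model version lives on the subscription rather than in the billing code, shipping a seventh pricing change doesn't touch accounts on the first six.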
BM: We're probably not going to open-source that.
SO: I think there are parts of it that we're planning on open-sourcing. Because we built this huge system, we had to solve all sorts of problems that we didn't really want to have to solve. Lots of them relate to records there should never be two of, where it's really hard to enforce that constraint in the database. We took some fairly ingenious approaches there that I'd love to open source, but I think the overall system is a little too coupled to our business logic.
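One illustration of the "never two of these" problem, a sketch under assumptions rather than Intercom's actual approach: an account may accumulate many historical subscription rows but must have at most one *active* one. A plain `UNIQUE(account_id)` would forbid the history, but a partial unique index (supported by SQLite and PostgreSQL; other stores need application-level tricks, which may be why it took ingenuity) enforces uniqueness only over the active rows:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE subscriptions (
        id         INTEGER PRIMARY KEY,
        account_id INTEGER NOT NULL,
        active     INTEGER NOT NULL DEFAULT 0
    )
""")
# Partial unique index: uniqueness applies only where active = 1,
# so cancelled historical rows for the same account are still allowed.
db.execute("""
    CREATE UNIQUE INDEX one_active_subscription
    ON subscriptions (account_id) WHERE active = 1
""")

db.execute("INSERT INTO subscriptions (account_id, active) VALUES (1, 0)")  # old, cancelled
db.execute("INSERT INTO subscriptions (account_id, active) VALUES (1, 1)")  # current plan

try:
    # A second active subscription for account 1 violates the index.
    db.execute("INSERT INTO subscriptions (account_id, active) VALUES (1, 1)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

Databases without partial indexes typically fall back to tricks like a nullable `active_marker` column that is unique and set only on the live row, which is the kind of workaround the transcript hints at.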
BM: If our CEO was here, I'm sure he'd say, "One day." We'd love to do that. We'd love to do all communication. It's probably a while out, though, unfortunately.
SO: Thank you.
BM: Thank you.