Brian Balfour
Building a Growth Machine

Brian Balfour is the VP of Growth at HubSpot. In the past he has worked as an EIR at Trinity Ventures and cofounded Boundless Learning (acq. by Valore) and Viximo (acq. by Tapjoy). He also publishes advice and insights regularly at Coelevate.com. His sweet spot is helping companies build their growth processes just as they've reached product market fit.

Introduction

What I want to do today is I'll start really high-level and then narrow in on the specific piece of building a growth machine and go through it step by step, tactic by tactic.

A little bit more background on myself: I've worked on both the B2C and B2B side, everything from building a user base of millions of daily active users to, obviously, HubSpot, which is more in the B2B SaaS space.

Building Growth Machines

There's really one mission, one goal that I think we're all after in this startup space, building companies. And that's really about authentic growth. Growing something that is authentic, that really provides value, that's long-lasting. Not the type of growth that explodes out of nowhere and then disappears the next day. But there are a few things we need to understand at a very high level, because growing in a world of software has massively changed.

The first big change that we've seen is that the lines between marketing and product are completely blurry. Really the question is where does marketing end and product begin in software? Because certainly, from a user perspective, they really see it as one cohesive experience.

They don't care how your organization is structured, whether you have marketing separate from engineering. They view it as one fluid experience, and that better be a great experience.

The second big thing is that what we've called marketing in the past has become way more technical and quantitative in the world of software. What I would call "Mad Men skills," the qualitative, soft side of branding, are becoming less important in this day and age of software. They're still important, but the quantitative and technical piece has become a much larger piece of the pie.

The third big, high-level change is that data is just becoming more accessible, both qualitative and quantitative. If we look back five or six years ago, tools that are commonplace now, like Mixpanel and the dozens of analytics companies out there, were either just getting started or didn't really exist. This is becoming easier and easier for us as we move forward.

Last but not least, the scale and speed of growth have really accelerated. The time from zero to a million users in the B2C world is shrinking, and the time from zero to $1 million in ARR in the SaaS space is shrinking. And so because of a lot of these changes, we need to realize a few things.

How to Approach Growth

Number one is that growth does not equal acquisition. The biggest mistake I see people make is thinking they can grow just by driving the top of the funnel. But if I increase activation rate, retention rate, revenue, and referral, our overall growth as a company will increase. We need to look at it as one cohesive funnel.

The second big thing is that growth efforts actually require mixing the skill sets of product, marketing, engineering, and data all together to work on these initiatives. You can't really think about them in silos, because you end up with a bunch of gaps in between all those layers of the funnel.

And last but not least, growth and how we've traditionally thought about product are actually pretty different. The methodologies, the way that we approach these problems, are different. And so at the fundamental layer, there are really three things we need to achieve: we need to build core value, we need to get the largest percentage of our target audience to experience that core value as quickly as possible, and then we need to get those users to experience that core value as often as possible.

Solving the first bucket is very different from solving the second and third buckets in the way that we approach, evaluate, and eventually solve those problems. And so one high-level way that we can think about growth as we move forward is that any good product, if you build really deep core value, something that really strikes home with people, should have a natural adoption curve. Customers will refer other customers, and you'll grow naturally.

But that doesn't necessarily represent the true growth potential of the product or company. Growth is really about how we take that from the gray line to the blue line, optimizing all of those layers of the funnel to make sure that we're reaching our true growth potential. At the end of the day, when I talk about growth, it's not necessarily the way everybody talks about growth.

Growth is more about a change in how we think about our team structures, people, methodologies, and processes. It's not so much about tactics or hacks, which is where I think most people spend their time.

The one piece that I really want to talk about is not tactics. Learning how to grow authentically does not start with tactics. The piece I really want to dive into is process.

Process Versus Tactics

When I say this to people, "This is where you should be starting," a lot of people kind of look at me wide-eyed and say, "Why the hell should I start with process? I'm a small startup. I don't need to think about process. I just need to move forward, just give me the latest tactics that work." But there's really four reasons you need to focus on process first and tactics second when it comes to growth.

The first is that what works for others is not going to work for you. At the end of the day, your audience is different. Your product is different. Your business model is different. Your customer journey is different. Your business is different, plain and simple, from one business to another. You can draw some analogies, but at the end of the day, what makes a really successful business is combining a unique set of variables.

You need a process that's going to uncover that unique set of variables, figure out the ones that work for you, figure out the combination that works for you, and not always rely on looking at others.

The second big thing is that growth is assembled from a lot of small parts. And so we see growth curves like this, often in TechCrunch or other press articles. We tend to really want to focus on this point. We're always like, "What is the one thing that they did that basically caused this explosive growth?" But what we should really be asking is, "What are all the little things that they did to get there? And what are all the things they did to keep it going afterwards?" Because, at the end of the day, silver bullets don't exist.

There are certainly things that you'll do that will be outliers. They will cause an order of magnitude more growth than some of the other things. But at the end of the day, it's never one thing that gets you on that growth trajectory.

We need basically a growth machine that's going to continually test all of these little inputs and learn from them over time. That will lead to the successful combination of things that will work for our business.

The third is that the rate of change is accelerating. This is the one that I think about most and actually worry about the most. Take acquisition channels, for example (we could look at retention loops and engagement loops as well). The list is already out of date, but over the past year there have been fundamental changes in every single one. Take Facebook: what works on Facebook today, either on ads or on the platform, is not what worked 90 days ago.

My team is executing a completely different set of tactics than they were 90 days ago, because all of these channels, all of this world, is accelerating at a massive rate. When we look at this on a macro level, there's a graph from James Currier, who ran Ooga Labs and has built I don't know how many companies to tens of millions of users. He put this together, and as you see on a macro level, as time has gone on in the software and Internet world, more and more channels appear over time.

More importantly, the cycle between a channel launching, hitting peak effectiveness, and depreciating is shrinking, meaning it's accelerating.

What we need is basically a process that's going to continually experiment and uncover the things that are working, and uncover the things that we thought worked in the past that no longer work any more.

The fourth and last reason for process first and tactics second, before we dive in deep, is that you need a machine. A growth machine means three things: it's scalable, predictable, and repeatable. We look for those three elements, because when you have them, that's when you know there's a great foundation, a great machine in place, and that you know what your inputs are and, for those inputs, what you're going to get on the outputs.

The analogy that I use is that it's the machine that produces the tactics, but the process is what makes the machine. So that's why we start here first with the process. Diving into this process, this is exactly what we use on my team at HubSpot and at previous companies. It starts with the goals. What are we optimizing for?

What to Optimize

We optimize for four different things. The first is learning, first and foremost. It's always about constant learning of your customer, product and channels, and feeding that back into the process to improve over time. A failure to us is not a failed experiment or a failed initiative.

Failure to us is that we did something and we didn't learn from it. At the end of the day, if we don't learn from it, it's pretty much useless.

The second thing is rhythm. Momentum is a very powerful thing. In the nature of a highly experimental process and building these growth machines, you're going to fail more than you're going to succeed. So to fight through those failures, establishing that cadence to fight through those failures to get to the successes, is really, really important.

The third and fourth are more from a team perspective. We really optimize for autonomy. Basically, individuals decide what they work on within a given set of guard rails. With autonomy, obviously, comes accountability. You don't have to be right all the time with this process and this team, but there's an expectation to improve over time.

If you improve, that means you're learning and you're applying those learnings back into our process and our ideas. At a high level, this is what, step by step, the process looks like. It looks very overwhelming at first, but we're going to walk through it step by step and it's actually very easy. What you see here at the top level, the first three stages, is what we call our "zoom out" phase.

We do this about every 60-90 days depending on what we're working on. Then the bottom cycle is what we run daily and weekly. The first part of the zoom-out phase is really about finding levers. And the question that we're really trying to answer is, basically, what is the highest impact area that we can focus on right now given the limited set of resources?

I used to be very naive when I started my first company, thinking it was going to be another three months and then we would have everything we needed to execute all of our initiatives. Then it's another six months; we just have to get over this mark. But I'm in a public company now worth a billion and a half dollars with over a thousand people.

I can tell you, no matter how big you get, you will always be limited.

Either by time, money, or people. You will always have limited resources, and so you have to get really, really freaking good, from day one, at how you answer this question. Because order of operations does matter.

The way that we find the answer to this question is we use what we call our growth model. It's basically just a giant Excel sheet. And the growth model helps us evaluate a few things that I'll talk about in a second. But what I want to first talk about is how we generate this on a business-by-business basis.

The growth model starts with identifying your top-level goal. For one of our products called Sidekick, our goal is weekly active users. It's built for professionals. It's an add-on to your email. Professionals are on email often, on at least a weekly basis, and so we chose this weekly active-user metric as our top-level goal. A lot more went into it, but I want to move on into the deeper pieces of this.

We start with this, our output. This is what we want to drive. This is how we monitor whether we're growing or not. But we don't sit there and try to come up with ideas like, "How do we move weekly active users?"

The big goal is, "How do we break this down into small enough pieces that it becomes actionable and we can evaluate all of those inputs?"

We break this down just like a math equation, into smaller and smaller pieces. Weekly active users basically equals the number of new people we've activated or acquired in a given time period, plus all the people who have been retained from previous time periods. But we don't stop there. We go deeper.

We can break down "new activated" into its sub-components. The number of people that might have registered via Facebook ads, times their activation rate, plus the number of people that have registered via viral, times their activation rate, and so on and so forth. And then we can break those components down even smaller.

The number of registered users via viral is really a function of the number of impressions I get on the invite page, times the conversion rate, times the invites per user, times the email click rate, and so on and so forth. And we go through this with all of our different variables. We do the same thing with retention, and at the end of it is that output, that model.
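
To make that decomposition concrete, here is a minimal sketch of this kind of growth model in Python. The channel names, rates, and numbers are hypothetical placeholders rather than Sidekick's actual inputs; the point is simply that the top-level output is a product-and-sum of small, individually actionable inputs.

```python
# Minimal sketch of a growth-model decomposition (all numbers are hypothetical).
# Weekly active users = newly activated users + users retained from prior weeks.

def viral_registrations(invite_page_impressions, invite_conversion_rate,
                        invites_per_user, email_click_rate, signup_rate):
    """Registrations via viral = impressions x conversion x invites x clicks x signups."""
    inviters = invite_page_impressions * invite_conversion_rate
    invite_clicks = inviters * invites_per_user * email_click_rate
    return invite_clicks * signup_rate

def new_activated(channels):
    """Sum over channels of (registrations x that channel's activation rate)."""
    return sum(registrations * activation_rate for registrations, activation_rate in channels)

viral_regs = viral_registrations(
    invite_page_impressions=20_000,
    invite_conversion_rate=0.15,  # share of impressions that lead to sending invites
    invites_per_user=4,
    email_click_rate=0.30,
    signup_rate=0.40,
)

channels = [
    (3_000, 0.35),        # registered via Facebook ads x activation rate
    (viral_regs, 0.50),   # registered via viral x activation rate
]
retained_from_prior_weeks = 40_000

weekly_active_users = new_activated(channels) + retained_from_prior_weeks
print(f"Weekly active users this period: {weekly_active_users:,.0f}")
```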

We build that model. We build that Excel spreadsheet to look at things a year to two years out. And we evaluate three things. We look at the baseline: where are we today on all of these different inputs and variables? What do we think the ceiling is? That's our educated guess based on where we are today.

Where do we think we can get that number to if we worked on it, if we focused on it? Then, looking at that ceiling, what is the impact? What is the sensitivity over time? We're not looking for things that are going to give us really short-term impact in the next week or 30 days. What we want to look for are things that have impact over time, the next six months to a year.

If you look at just the next 15 to 30 days, what you end up doing is missing a lot of core components.

Things like retention or virality that compound over time get hidden when you take a really short-term view.
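
As a rough illustration of why the time horizon matters, the sketch below (with invented numbers) projects two hypothetical changes forward: a one-off acquisition spike versus a small improvement in weekly retention. The spike looks better after a month, but the retention gain wins by month six because it compounds.

```python
# Hypothetical projection: one-off acquisition spike vs. a small compounding retention gain.
def project_wau(weeks, new_per_week, weekly_retention, wau_start=10_000, one_time_bump=0):
    wau, history = wau_start, []
    for week in range(weeks):
        extra = one_time_bump if week == 0 else 0   # the spike only lands in the first week
        wau = wau * weekly_retention + new_per_week + extra
        history.append(wau)
    return history

baseline   = project_wau(26, new_per_week=1_000, weekly_retention=0.80)
big_spike  = project_wau(26, new_per_week=1_000, weekly_retention=0.80, one_time_bump=3_000)
better_ret = project_wau(26, new_per_week=1_000, weekly_retention=0.83)  # +3 pts retention

for week in (4, 26):
    print(f"week {week:>2}: baseline={baseline[week-1]:,.0f}  "
          f"spike={big_spike[week-1]:,.0f}  better retention={better_ret[week-1]:,.0f}")
```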

So we look at these three things to, once again, go back and answer that question. What is the highest impact area that I can focus on right now given limited resources? After we identify that area, we set some goals.

Goals and Frameworks

We use a framework called OKRs, objectives and key results, that a lot of you are very familiar with. It's widely used at Google, Zynga, LinkedIn, and a bunch of other top Silicon Valley companies. The goal here is that we state our objective. This is the "why" behind the area that we've decided to focus on.

We set a timeframe no shorter than 30 days, no longer than 90 days. Anything shorter than 30 days is just too short for us to make a meaningful impact. Anything longer than 90 days means we're probably biting off more than we can chew at the moment. Then we set three quantitative metrics that basically tell us whether or not we're achieving that objective.

A good example of this is, at one point on one of our products, we wanted to make virality a meaningful channel to us. We had this metric which was the number of new users coming from viral divided by our weekly active users. It gave us a rough percentage of what our active user base was generating in terms of referrals on a week-to-week basis.

We set a timeframe of 90 days, and then we really focused in on trying to achieve these KRs. The one I really want to focus on is this first one. We always set OKRs on the inputs, not the outputs. Meaning, if our output, our top-level goal, is weekly active users, we're never going to set an OKR against weekly active users. We always want to focus on the inputs. Focusing on the outputs leaves things too broad: it's too tough to come up with ideas and too hard to see whether you're moving that number in a short enough time period.
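
For a rough sense of how that viral KR from above would be checked week to week, here's a tiny illustrative calculation; the numbers are made up.

```python
# Hypothetical weekly check of the viral KR: share of weekly actives generated by referrals.
new_users_from_viral = 1_200
weekly_active_users = 30_000

viral_ratio = new_users_from_viral / weekly_active_users
print(f"Viral ratio this week: {viral_ratio:.1%}")  # 4.0%
```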

And so once again we go back to that model, and we focus on one of those inputs that we believe will lead to the outputs. After we set some goals, we go through our third stage of the zoom-out process. This is just exploring the qualitative and quantitative data of the area that we chose to focus on.

How we draw insights is by combining three things: not just the quantitative, not just the qualitative, but also our own intuition.

A lot of people debate, "Is growth an art or a science?" The reality is it's kind of a mixture of both. A lot of times the quantitative data tells us what's going on, but it doesn't tell us why it's happening. And so we have to move to our qualitative data. And then, even if we get why it's happening, we're never going to get all of the information out of our users that tells us what the solution is.

Growth Insights

Our users don't really design the solution for us, so you have to combine your own intuition to really draw massive insight into growth. The way we explore that qualitative and quantitative data is through a number of different measures on the qualitative side, whether it's user surveys, one-on-one conversations, or support tickets.

On the quantitative side, we have a number of different tools that we use, whether it's event-based data or revenue data, or just raw data in our data store. But we explore it in a number of different ways depending on the area. A few different tools that we use in this process are what we call "mini models," basically a more narrow version of that big macro model. We go through the same process. We look at the baseline, ceiling, and sensitivity over time.

If we're really focused on that viral piece, we'll break that viral piece down into even smaller pieces. We'll look at which of the smaller pieces are the most impactful ones that we can work on and where we can start in this OKR period. To gather qualitative data, we also do a lot of one-to-one, open-ended email surveys.

If we want to know why a user didn't convert on an invite page, for example, we will actually email users who hit that page, but didn't invite anybody, and ask them why. Like, why did they decide not to do it? What we end up with is a bunch of qualitative-trend information telling us things like, they didn't know who to invite. The reward isn't high enough. They didn't want to spam friends.

The big purpose behind this is we can take this qualitative information, and now we can come up with a lot more actionable ideas.

We can look at one of these things and say, "Oh, they didn't want to spam their friends." Well, we can add trust badges. We can show them the email that we're going to send to their friends. We can add copy that establishes trust.

All of these experiment ideas come up in a very targeted direction. I should say, we use these techniques not just on acquisition channels like viral and referral in the B2C spectrum, but on all parts of the funnel, whether it's acquisition, retention, revenue, or engagement. And so once we complete that zoom-out period, which will take anywhere from a few days to a week at most, we launch into this weekly cycle.

What we start off with is basically a brainstorm which generates a backlog, and the big key here is that we brainstorm on the inputs, not the outputs. This is a common theme that you'll be hearing throughout this whole thing. If we look at something we want to improve like activation rate, the number of people who have converted from registering to activating in the product, we will break that down into its smaller pieces. And we will go through one by one on the smaller pieces and brainstorm against those individual pieces first. Then we will brainstorm against the whole.

Once again, the smaller you break down the pieces, the more actionable and easier it becomes to come up with ideas. There are four different tools that we use to come up with growth ideas. These are all stolen out of a great book called "The Innovator's DNA." I totally recommend reading it. There are just four different techniques, and the reason we design specific exercises is so that nobody on the team should be sitting there saying, "I need ideas."

They always have a set of tools to go back and generate more ideas with the team. This helps as you scale this team out over time. I'll highlight a couple of them. One is we do some observation of how others are doing it. So if we want to look at that referral mechanism, we will go and observe how a bunch of other companies are operating their referral mechanisms. But we'll specifically look at companies that are not in our competitive space, because it actually does a much better job of generating ideas.

We do another thing called "question storming." What that means is we get in a room, three or four people, and we do nothing but ask questions about our given focus area for 15 minutes. That might be questions like, "Out of users who are inviting, who are they inviting? What are the common elements? Out of the people who didn't invite, why didn't they invite? How many people are they inviting over time?"

All these questions start to lead and start to point out a bunch of places that we don't understand in our growth model and in our product.

Typically any great answer, any great solution, starts with a great question. And so this is a way to come up with a lot of questions that incite ideas.

The next step is that we'll go and prioritize. The team members can choose any ideas to work on, but they have to prioritize, and they have to talk about it in a very common language. That common language is about three things. The first is the probability of success, and we look at low, medium, and high. It's a very quick assessment.

Low-probability things are things that we've never done before; it's an area we've never focused on, and we know very little about it. High probability is something that's generated off of an experiment we ran and learned from very recently.

The second is impact. We'll actually make a prediction, which I'll talk about in a second. And the third is, of course, resources. How many resources is this really going to take to execute? But the impact, the prediction, is the most important part. And the way that we do that is we generate a hypothesis. It pretty much comes in this form: "If this experiment is successful, this variable will increase by this much because..." And then we list out our assumptions.

What this causes the team to do is actually think ahead about the ideas. It removes the gut feeling and emotion from the process, and they're really looking at things in a common language across the entire team. They're really taking impact into account. We don't expect people to be perfectly accurate with it, but it does two things. It gets people to think about their experiments in a more structured way, ahead of time, and, as we'll see in the last stage of the cycle, it actually helps us extract way more learnings out of the process than we would if we did not do this.

The way they come up with these assumptions is they look at the quantitative data, the qualitative data, or even secondary data, things that they might have read, ideas they might have gotten from other places. We use a tool. It's basically a Google doc. It's called an experiment document. They take about five minutes to write out this hypothesis and their assumptions. That way the team can access the reasoning behind all of the different experiments that people are running at any given time.
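
Here is one minimal sketch of what such an experiment document might capture, expressed as a data structure. The fields and the example values are our own illustration of the format described above, not HubSpot's actual template.

```python
# Hypothetical experiment-document structure: hypothesis, assumptions, and prioritization fields.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExperimentDoc:
    name: str
    hypothesis: str          # "If this experiment is successful, <variable> will increase by <X> because..."
    assumptions: List[str]   # quantitative, qualitative, or secondary-data reasoning
    probability: str         # "low" | "medium" | "high"
    predicted_lift: float    # predicted relative change in the target input
    resources: str           # rough cost to execute
    result: str = ""         # filled in during the analyze/learn step
    learnings: List[str] = field(default_factory=list)

doc = ExperimentDoc(
    name="Show a preview of the invite email",
    hypothesis="If this experiment is successful, invite-page conversion will increase by 15% "
               "because users who fear spamming friends can see exactly what will be sent.",
    assumptions=["Survey trend: 'didn't want to spam friends' was a top reason for not inviting"],
    probability="medium",
    predicted_lift=0.15,
    resources="1 engineer, 2 days",
)
```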

The experiment document also acts as a very quick knowledge store for us, so that we can build off the learnings, build off the successes over time, especially as the team grows and grows and grows. So, once we prioritize, we design what we call a minimum viable test, and that's really the minimum viable thing that we can do to understand that hypothesis that we produced.

There are two really big parts to this. One is efficiency: what is the least amount of resources that I need to gather this data? But the other is that it has to be reliable and valid data.

Sometimes we can come up with a super efficient hack or thing to test this idea or this experiment, but if we're not going to get valid data out of it by the end, we're not going to learn from it.

And like I mentioned before, if we don't learn from it, that's the true failure for us. In some cases we would actually put more work into it to just make sure that we get really valid data out of the experiment. We write out the experiment design in a bullet-pointed list. That way other people can understand the context of the design, and it also helps really reinforce that second piece, the valid piece.

It forces the team members to think through how they're going to run this experiment ahead of time, to make sure that we're going to get really valid learnings out of it. After we design the test, we'll go implement. This is super easy. Go get shit done. There's really nothing to explain there. And then we'll go into the most important step. This is about analysis and learning.

Extracting Learnings

After we run the experiment, it's all about how to extract the most learnings out of this as we possibly can. The first thing we'll look at is, "Was this a success or a failure? Did it improve or not improve the thing that we were targeting?" The second is impact. By how much, how close to our prediction were we? And then third, and most importantly, is, "Why?"

Why did this thing succeed, or why did it fail? Why were we way off on our prediction, or why not? Because this question of why it's happening really forces us to think about how our users, or our channels, might be reacting to the experiments that we're running. It helps us generate not only new learnings, but also new experiment ideas that we can then go and run.

They'll write out, in a really quick, bullet-pointed format, the "why," the results, and any action items into the experiment document.

And then, last but not least, we take the successes and we systemize them. There are two ways we can systemize. We try to systemize as much as we can with technology and automate things. Certain things we can't automate. Certain things in content marketing, for example, just require a human involved, and for those things we write playbooks.

The playbooks are basically there to make sure that we're standing on the shoulders of our successes, that we're not constantly repeating things and learning the same thing over and over again, and that the whole team can move forward.

We'll just continually repeat this over and over and over within a 90-day OKR period. What that looks like for a member of the team is, on Monday we have one meeting as a team. It's our growth meeting. We look at a few things, but we really focus on learnings. We don't really focus on what we did. It's all about the learnings that we extracted out of the process and sharing them across the team. Then the rest of the week, they're going through the other steps. Most of the time is probably spent on the implementation stage or the analysis stage.

A little bit more about this growth meeting. Once again, first and foremost, it's about learnings. That really hammers home the number one important thing about this process. But then, second, we'll go through our goals and talk about anything that might be blocking us from achieving those OKR goals through one of those cycles. We have a template for our weekly meeting. This forces everybody to follow the same format, speak the same language, and make sure we're extracting as many learnings as we can out of the entire process.

Then what we do on a quarterly basis, or even maybe semi-annual, is that we take a step back and we say, "Well, how can we optimize this, our whole system, from a macro level?" And so we look at a few things. The first is kind of our batting average. How many successes to failures did we have? Within an OKR period, what we should find is that that batting average should improve over time.

What most OKR periods look like is that we start off, and every experiment is a fail because we're learning. We don't know anything about that area. But as we learn and we feed that back into the cycle, instead of "fail, fail, fail," we'll start to hit a few. It's like, "fail, fail, succeed, fail, fail, succeed." And towards the end, where we really know and understand the area, we're like, "success, success, success, occasional fail at the end." But that's kind of what a typical OKR period feels like for us.

We also look at accuracy, whether our predictions are getting more accurate over time, and we look at throughput. How many experiments are we running in a given time period? Then we talk about three things we can use to optimize all of this.

First is the team. We know we can improve our current team's skills. We can add to the team, and we can remove from the team in some cases. The second is our infrastructure, whether that's our analytics, both on the quantitative and qualitative side, or even the experiment infrastructure we use to run a lot of these tests. And then the third is our process. We could blow up this whole process or refine pieces of it to make it more efficient, produce better quality ideas, or even get more learnings out of it.
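
To make those review metrics concrete, here is a rough sketch of computing batting average, prediction accuracy, and throughput over a hypothetical experiment log; the fields and numbers are assumptions for illustration.

```python
# Hypothetical quarterly review of the growth machine itself.
experiments = [
    # (succeeded, predicted_lift, observed_lift)
    (False, 0.10, -0.02),
    (False, 0.20,  0.01),
    (True,  0.15,  0.12),
    (True,  0.05,  0.09),
    (True,  0.25,  0.22),
]
weeks_in_period = 12

batting_average = sum(succeeded for succeeded, _, _ in experiments) / len(experiments)
mean_prediction_error = sum(abs(pred - obs) for _, pred, obs in experiments) / len(experiments)
throughput = len(experiments) / weeks_in_period  # experiments shipped per week

print(f"Batting average: {batting_average:.0%}")
print(f"Mean prediction error: {mean_prediction_error:.2f}")
print(f"Throughput: {throughput:.2f} experiments/week")
```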

Three quick final words on things that I often get asked. The first is that iteration does not equal incremental.

This is, I think, a common misunderstanding: even though this cycle is really based on taking learnings and iterating on those learnings, that doesn't mean we're always doing incremental things.

We might actually take a learning, and that learning may inform us to do something really, really big. Also, basically, the size of the project does not equal the impact of the project. A lot of times, very small things can have much larger impact than very large initiatives. The second is that it's never too early to start this process. I think a lot of you guys are probably very early-stage. You might not adopt all pieces of this process, but the more important things are, basically, the checkpoints, the questions that you ask along the way.

Thinking about the impact of your initiative, thinking about how you extract learnings out of everything you're doing, and basically feeding that back into your ideas and how you prioritize everything else. And so, once again, you might not adopt this word-for-word, but the core elements of this still hold true whether you're early-stage or late-stage.

Last but not least, a final final word is that there's nothing special about this process at all. It's basically just a combination of the scientific method, some lean startup principles, and some specifics of our own for the topic and initiative of growth.

Success in this really comes down to grit, focus, and persistence. Most people just don't have those three things to maintain this process. When they have a failed experiment, they won't sit there and analyze and understand why it failed. They just want to move on to the next shiny object. Or they don't want to prioritize based on impact, probability, and resources, and they just want to work on things that they feel like working on. It's those kinds of traps that actually derail people and get them off track from building a really solid foundation and growth machine. That's it. Thank you.

Q&A

Building a Process Around Content Marketing

I would actually say most content marketing is even more process-driven than things like referral and virality. And so long-term success in content marketing is around establishing a series of steps that a number of team members can walk through to produce a quality piece of content that you know with a very high probability is going to get a lot of traction with your audience. It's less about producing those one-off hits.

On one of our blogs, for example, for our product called Sidekick, we write some very long-form content specifically optimized for SEO. We developed a playbook over time that was about three or four pages long covering everything: how you start the research process, what to go after, and how to choose which one to go after. Then how you break down and build an outline for that page, how you take that outline and execute the research to fill it in, and so on and so forth, all the way through to promotion.

Those are things that we've experimented with over time and have been able to prove, with fairly regular success, that they produce a high probability of success. And so I think that's probably one of the bigger mistakes in content marketing: people think about pieces of content as these one-off things.

Content marketing is really about how you build a flywheel, a system, that continually attracts the target audience, captures them in a conversion funnel, and nurtures them toward some specific point.

A system is even more important, and playbooks are even more important for things that require more humans, like content marketing. Otherwise, you end up with all sorts of variability that can lead to a lot of initiatives that aren't really useful.

Content and Network Effects

If we're talking about content marketing specifically, I think one of the big secrets, and specifically what HubSpot talks about, is that marketers were actually the perfect audience for content marketing. Think about the loop of content marketing: you produce content and distribute it to a base, that base shares it out to draw in new people, and then you capture those people. There are a few pieces of that funnel that are a lot lower-friction for marketers than for other audiences, particularly that distribution piece.

A lot of marketers share a lot of content, right? And a lot of marketers are okay with handing over their information to download ebooks and all sorts of other stuff. Compare that to a developer audience. Developers tend to be a little bit more conservative and a little bit more cynical. So, if you put a pop-up form in front of them, they're going to tell you to go away or screw off. They're not going to come back. And so you have to treat audiences very differently, but that doesn't mean that you can't use content marketing for different audiences.

At HubSpot we've also built a great content presence around salespeople. Salespeople actually share even less than developers, which is kind of crazy. But if you look in the developer space, at things like New Relic or Docker, they've done an amazing job of building amazing content presences using a different set of tactics, techniques, and systems specifically designed for those audiences. That goes back to the first point: yes, they're both content marketing, but the underlying variables, the underlying inputs, are actually completely different when you break it down.

Variables That Make a Difference

There are a couple things, you know, like the distribution methods of the content. In HubSpot's case, the marketing case, to distribute they're going to get a bunch of marketers to share via Twitter or Facebook, because that's where they live and share things. Or they'll run a bunch of webinars and stuff, because marketers will get on webinars.

Docker, on the other hand, is like, "Well, you know what? Actually, the sharing rate for developers on Facebook is not that high. We're not really going to do that." What they actually did is reach into their user base: they got their customers to write up a bunch of different, unique ways that they were using Docker, and then Docker basically used the rest of their distribution base to drive a bunch of traffic towards the pieces those people wrote.

They got people to vote things up on Hacker News, right? Docker was on the front page of Hacker News every single week for, like, four months straight. It was kind of ridiculous for a while, because they identified Hacker News and these other places as "this is where this content is going to be distributed," which is totally different from HubSpot, which is going to be using a totally different set of distribution tactics for its audience. So that would be one difference.

If you want to talk about more details, at HubSpot you're going to have lightbox popups. At New Relic and Docker, you're not going to have that, because that's going to scare the developer audience away. You're going to have a bunch of webinars over here. Over there you're going to have more things like free trials and free setups, because developers are a little bit more autonomous, individual, and self-reliant.

There are all sorts of different, minute details that will be totally different, but at the end of the day, it's still content marketing. It still follows the same essential content marketing loop of creating compelling content, sharing it to your distribution base, getting that distribution base to bring in more people, capturing a piece of those new people and rinse and repeat.

Cost of Creation Vs. Distribution

It's a spectrum. At the beginning you want to spend more. Content marketing is one of these flywheels that has a very different trajectory than a lot of other channels. It takes a lot longer to get going, but once you get it going, it carries itself under its own weight and momentum and takes off.

A total 180 from content would be paid marketing, where you can do a bunch of experiments at a small scale, find something that works, turn the knobs up really quickly, and get a spike. Then you do a little bit more testing and experimenting, find something else that works, and scale it up. Content marketing has a much longer base; it takes a lot longer before it gets going, so the key to shortening that base and getting that ROI is actually the distribution element of it.

I would recommend, at the beginning, it's much more like 70/30. Focus on a low quantity of really high-quality pieces of content, and make sure they get distributed as broadly as you possibly can. And so one tactic I even use for my own blog is that, when I started, I wrote maybe once every month but focused on a really high-quality piece of content. I developed a list of 30 friends, and these 30 friends were all in the Boston tech community.

So there was some sort of density effect there, and so I would launch this quality piece of content. I would write all my 30 friends and say, "Hey, I spent a lot of time on this piece of content. I just need your help. I need a favor with these next few pieces of content to help get it going. Here's what I'm trying to achieve, do you mind sharing it here, here, and here?"

The key to that is maybe 15 out of the 25 would share it, but there was such a density effect that most people saw two or three people share it, and it created this weird psychological effect of, "Well, if they're sharing it, I should share it too." And it just kind of exploded. I actually went from zero to 1,000 subscribers on my blog in three posts. And it's very doable in content marketing on the B2B side, too, if you execute that distribution piece right up front from the beginning.

Then, over time, that spectrum changes. You'll shift more towards 50/50 for a while, content versus distribution. Now HubSpot's at a point where they've got a massive email newsletter list and a domain authority of 90-something. They can write a piece of content, and it's going to get distribution no matter what. For their assets, for where HubSpot's at, it actually makes more sense to spend more time on content production than on promotion. It depends where you're at on that spectrum.

How to Hire for a Growth Team

The concept of growth teams is still fairly new. Facebook was the first one. They instituted it about five years ago, and they've become more and more popular. Now pretty much all Silicon Valley companies, from LinkedIn to Google to Uber to Pinterest, all have some sort of concept of a growth team.

The likelihood that you'd still find somebody with one or two years of experience working on a very well-established and developed growth team is still pretty small. And so at the end of the day, you've got to look for the people that have the inputs that are going to generate the kind of output that you want.

The things that we look for on our team are a few things. This is actually on my site, coelevate.com/growth-team. But the common things that we look for, a few of them would be: we look for people who are motivated by impact over everything else. One way I evaluate this is, "Tell me about your favorite project that you've worked on and why." If they talk about it in the context of the impact it had on the business or the product or the output, I kind of know what they're motivated by. But if they talk about, "Well, I did this thing, and it was just pixel-perfect design," or, "It was just this crazy-hard engineering thing, and that's why I wanted to do it," I start to understand their motivations are a little bit different and they're probably not well suited for this. So that's one.

The second is voracious learners, people who are not okay with just understanding what happened; they want to know why it happened. I find a lot of people are just okay knowing that the numbers went up or the numbers went down. But the people you want are the ones constantly asking "Why, why, why?" Constantly learning, constantly learning. So that's the second.

The third is more of a hacker mentality than a craftsman mentality. People that look to solve things very efficiently, not look for the perfect long-term solution up-front. Those are probably the three most common, and the three most important, and then I think I have three or four others listed on our site.

Collecting Large Qualitative Samples

Most people think about qualitative feedback as just one-on-one user-feedback sessions. We actually spend more time automating a lot of our qualitative feedback. There are tools like Intercom that can help with this. We have something internal at HubSpot now that we built within our team.

This depends on the volume of your customers and users, too. For more B2C-type products with a higher volume of users, we can identify certain segments of users that have taken a certain set of actions, or not taken a certain set of actions, to trigger a one-on-one email to them that looks like it's coming from one of our team members, asking a very specific, open-ended question. We'll send out maybe a couple hundred of those. The response rates are typically 30% if you do it well and it's very well targeted.

Then we take all of those responses, throw them into a spreadsheet and somebody goes through those qualitative responses, categorizes them, and then we end up with that pie chart I showed earlier of directions of qualitative reasons. If we have a little bit more time, we'll follow up with a second step and we'll respond to each one of those emails. We'll ask, "Oh, that's interesting. Please tell me more. Why? Why did you say that?" Something like that.

We get more detail out of them, and through that email interaction, if you ask the right open-ended question and don't provide them the answer, we tend to get really good response rates. We get a lot faster feedback and a much higher quantity of qualitative feedback than we would through one-to-one user sessions.
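
A minimal sketch of that trigger-and-categorize loop, under our own simplifying assumptions: an in-memory event log stands in for the analytics store, the email send is left as a comment (a tool like Intercom or an internal system would handle it, and the helper name is hypothetical), and the reply categories are hand-labelled.

```python
# Hypothetical: find users who hit the invite page but never invited, then tally the
# hand-categorized reasons from their open-ended survey replies.
from collections import Counter

events = [
    {"user": "a", "viewed_invite_page": True, "sent_invite": False},
    {"user": "b", "viewed_invite_page": True, "sent_invite": True},
    {"user": "c", "viewed_invite_page": True, "sent_invite": False},
]

segment = [e["user"] for e in events
           if e["viewed_invite_page"] and not e["sent_invite"]]
# send_open_ended_email(segment, "I noticed you visited the invite page but didn't invite
# anyone -- would you mind telling me why?")  # hypothetical helper; ~30% reply when well targeted

# Replies are categorized by hand, then tallied into the "pie chart" of reasons.
categorized_replies = [
    "didn't know who to invite",
    "didn't want to spam friends",
    "reward not high enough",
    "didn't want to spam friends",
]
for reason, count in Counter(categorized_replies).most_common():
    print(f"{reason}: {count}")
```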

We also look at support tickets. We extract a lot of information out of there. Sometimes we've done live chat in certain areas where people are getting particularly stuck, where we feel like we need to talk to them in real time rather than triggering something after the event. Those are some of the methods that we use to collect this a little more efficiently.

What Keeps Him Up at Night?

Well, how many things do you want to go into? What keeps me up at night? I think the first thing that worries me is one of those first things that I was talking about: the rate of change. The faster things change, the higher the throughput you need in terms of experiments. And actually, as your team grows, it's harder to maintain that super-high throughput across all of the teams. Because if you don't constantly, constantly experiment, you're just going to be left behind, right?

This isn't the best example because it didn't produce the best companies, but during the social gaming boom on the Facebook platform, it was the companies that had early information about changes to the platform and executed on it fastest that won. That's how Zynga won. That was one of the main reasons they won.

And so that keeps me up at night as well, how are we constantly experimenting with things? I think the second big thing is, in terms of growth channels and platforms, we tend to go on a macro level through these waves over time where a couple platforms will launch, and they'll totally change the game and open up a crazy amount of opportunity. Obviously, iOS and the Facebook platform were that at one point.

It's cyclical, right? We go through those periods. And then we go through these periods where there aren't those new, massive launches and channels, and you just have to get really freaking good at competing in one of the more mature, competitive channels. That's much easier said than done.

I would say, things like content marketing, we're probably in that stage right now. There are some things you could point to, whether it's things like the Pinterest platform or Instagram, or you could say that there's even a little bit more on content marketing. But even content marketing's getting really, really crowded these days. We're in that period where it's really, really hard. It can be really hard to compete.

And then I think the third thing is that this stuff is going to continually shift to be even more technical than it is today. I think ultimately that means you've got to get more and more engineers involved in this. And there are two problems with that: first, there's a shortage of that type of talent, and second, you need to get engineers to be very interested in solving these problems.

There's a lot of negative historical bias towards it. But I think Facebook and Uber and a bunch of others have done a really good job of reframing the problem to engineers, showing how big of a problem it is and making these really exciting initiatives to work on. So, those are probably the three biggest things that would keep me up. Thank you.
