December 20, 2018
Hi everyone, my name is Colin Zima. I've been at Looker a little over six years. The quick life story is, after college I went into structured credit trading, trading all sorts of esoteric credit instruments that blew up in the 2007 crisis. I decided to leave finance to go try my hand at tech and spent four years as a statistician at Google in search, so I have a lot of background in search and measurement. I left Google and started a company which had mixed success, but after about a year and a half, we ended up selling it to a company called HotelTonight. At HotelTonight, I led data and data products, so I did all of the reporting analytics and also some of the work on search and the data-driven products that they have.
At HotelTonight, I was one of the first customers of Looker, so we started working together when it was about a three person company. I grew close with the team, loved the early product, and wanted the chance to talk to more data people, so I eventually decided to ask Looker if I could have a job. At Looker, I started leading Customer Success and Support along with the Analytics team and eventually took over Product and offloaded Customer Success and Support. Most recently, I've shifted to Strategy. I like to think of what I have done in life, as data-enabled anything. Sometimes it's product building, sometimes it's credit trading, sometimes it's Customer Success.
I think it helps to understand what we are at Looker for people that aren't familiar. We're an analytics platform that helps people get more value out of all the data that they're collecting. The company grew out of a pair of founders, one of whom actually built Looker as a data product inside a previous company that he was at. Eventually, he spun out and rebuilt the code base, and obsessively tried to make people successful with data. This is a picture of the whole company at the time. This was five or so years ago, everyone's wearing different customer t-shirts.
One of the things that made Looker most successful beyond the product, which is obviously amazing, was that we were completely obsessed with making customers successful. Every single person spent time on Support. When you started at Looker, you rotated through Support. We gained an empathy for our customer base and developed an obsession with making them successful. We like to talk about not really caring about how much money we were making from customers: if they loved us and were happy with the product, they'd be successful.
We hired our second-ish CSM to join Looker, and with them, I started trying to visit every single customer in person, and there were about 75 at the time. We called our team Engagement because we thought it sounded friendlier than Customer Success or Support. The reason that we did that is, we just wanted to understand what they liked and disliked, and to talk to them as peers. One of the reasons I joined Looker was to try to understand more use cases for data and see if I could draw inferences across our customers who were using data. The thread at the center was trying to learn what made people successful or unsuccessful, and eventually that led to this conversation.
Our CEO saw us being successful with customers, and customers loved Looker, the product. But we didn't have a system for making decisions about what we were doing. We obsessed about each and every customer but as we started to grow, we needed to create frameworks for doing it more effectively. If you're a consumer business, you can never do that. If you're on the enterprise side, do that as long as you possibly can, but eventually you're gonna get bigger and you're not gonna be able to do that. To that end, we wanted to create a more predictive and a more operational framework for understanding what customers did with our product.
When we look at 100 customers, why are some successful, why are some not successful? More importantly, can we begin to understand what we want people doing with the product? When you think about your product and how it gets used by your customer, there's an intuitive understanding about what you want people doing with the product.
There's a vision for how we want people using Looker and sometimes, the vision doesn't actually match up with what people are doing with the product.
You might have 10 features and two of them might be actually important to your customer. But strategically, there might be a whole variety of things that you think make a customer successful or that you want to make a customer successful with.
What we set out to do was create a framework for doing that evaluation. This was perfect for me, because at the time, I was managing the Customer Success and Support team and the Product team. From the Product side, I could say, "I want people to use these three features of the product." If people use the scheduler in Looker, they're going to be more successful with the product, and if they're not, we need to go talk to them about why they're not using it. We could still obsess over each and every customer and how they use the product.
So we sat down one afternoon and created a score to describe whether customers were being successful or not. We went through this process of defining what health is. This can seem daunting at first, because there are so many things that measure success, and you want to pull a data scientist in and figure out "what are the correlates for success and can I do real prediction?" If you spend a lot of time with data, you go through this cycle of thinking that you need more skills, and then realizing that just doing the work is probably the most important first step. So, instead I literally just sat down, wrote down the 10 metrics that I thought were important, and turned them into a score of 1 to 100.
We were already publishing usage hours to the company because we were small and we could move quickly. So I started collecting all of these metrics and I asked, "how many minutes is someone using the product? Are they growing over time? Do they have a super user that is 100% of their usage? Because that's bad. Are they using the scheduler or not? Are they using these 10 features?" I threw them all in a pot, I made up scores for each of them out of 100, and I gave every single customer a score. What was fascinating was, at first, it wasn't perfect. I think that's really important to understand. You're not going to create a perfect score. The goal is not really to predict churn, the goal is to create a framework to understand what people are doing.
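To make that concrete, a score like this is just a weighted sum of capped metrics. Here's a minimal sketch in Python; the metric names, weights, and thresholds are invented for illustration, not Looker's actual formula:

```python
# Hypothetical weights: how many of the 100 points each signal can earn.
WEIGHTS = {
    "weekly_minutes": 40,   # raw usage, capped at a ceiling
    "usage_growth": 20,     # is usage growing over time?
    "scheduler": 20,        # are they using the scheduler at all?
    "user_spread": 20,      # penalize one super user doing everything
}

def health_score(c):
    """Turn one customer's raw metrics into a 0-100 score."""
    score = 0.0
    # Usage: full marks at 600+ minutes/week, linear below that.
    score += WEIGHTS["weekly_minutes"] * min(c["weekly_minutes"] / 600, 1.0)
    # Growth: full marks if usage grew at all.
    score += WEIGHTS["usage_growth"] * (1.0 if c["growth_pct"] > 0 else 0.0)
    # Feature adoption: binary -- are they scheduling content?
    score += WEIGHTS["scheduler"] * (1.0 if c["uses_scheduler"] else 0.0)
    # Spread: share of usage NOT concentrated in the single top user.
    score += WEIGHTS["user_spread"] * (1.0 - c["top_user_share"])
    return round(score)

customer = {"weekly_minutes": 300, "growth_pct": 5,
            "uses_scheduler": True, "top_user_share": 0.5}
print(health_score(customer))  # 40*0.5 + 20 + 20 + 20*0.5 = 70
```

The point isn't the exact weights; it's that a first version can be this crude and still give every customer a comparable number.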
We took all the things that every single customer was doing with our product and we turned them into this single number that you see on the left hand side, which we called Customer Health. It gave us a common language to start talking about the customers that we were spending time with. For anyone that's building or working at a SaaS business, you know that the world has shifted from a model where you sell someone a box of software, and maybe they come back 12 years later or maybe they don't, to a world where we're trying to earn a customer and keep them with us forever. We're directly incentivized to continue to make them happy, make them come back and use our product. That's true whether you're a SaaS business, or whether you're a consumer product.
Recurring revenue is really important, and we'll get into some examples of that from the consumer side. Anyway, we saw our Customer Success team spending 1% of their time with each of the 100 customers. Certainly, the philosophy is that you want to grow your successful customers and turn them into advocates. But when you sit down and put the math in a spreadsheet, if you're churning out five out of those 100 customers a year, and you can go spend 10x as much time with those five customers and keep two of them, the leverage that you get from that is enormous.
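That spreadsheet math is worth making explicit. Here's the leverage calculation from the example, using the numbers as stated:

```python
# 100 customers, CS time spread evenly (1% each), 5 churning per year.
customers, churners, saved = 100, 5, 2

baseline_retention = (customers - churners) / customers          # 0.95
# Refocus 10x the time on the 5 at-risk accounts and save 2 of them.
focused_retention = (customers - churners + saved) / customers   # 0.97

# Annual churn drops from 5% to 3% with the same headcount.
print(baseline_retention, focused_retention)
```

Cutting churn from 5% to 3% is a 40% reduction in lost customers for no extra spend, just a reallocation of where the team's hours go.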
This started giving us a framework to be able to tell our Customer Success team where they were focusing their time. Were they focused on creating advocates at the high end, or trying to do kick saves at the low end? Could we actually look at a time series of this feature usage and understand more actively the ways in which customers are being successful, and the ways that they aren't? It gives you a quick heat map of how one customer compares to another customer. It's not perfect, but at least now we have a way of understanding who is using the product in different ways, and whether we can recommend behaviors.
When we think about doing this, and when I recommend that other companies go and do this, I would emphasize that this isn't a giant planning exercise that takes interviews with hundreds of people and an enormous business process. The most important part is really just defining what a successful customer is for your business. What I see, both as a Looker employee and as someone that is selling a data product to lots of different customers, is that the most important thing that any company needs to do is align on what matters to them. For most people, it's making money eventually. Most companies are not in the business of just doing things because they're the right thing to do. They're trying to build a business.
But I think even for products that are not directly sold, it's really important to understand how you want someone to use the product. If I'm Facebook or if I'm Twitter, logging into the product and doing nothing is probably not a great determinant of future success. There are actions that you want people to take that define how sticky they are in your product and whether they're using it the right way. If you're a social site, you want people creating content. If you're a media site, you want people to consume news on a recurring basis. It's really important that you actually declare what you want people doing with your product.
For Looker, we want people operationalizing data in their business. That means coming back to Looker every single day, it means scheduling content into their workflows, it means interacting with other users. Again, you can't let the perfect be the enemy of the good here. It could be a single metric that defines the success of your product. When Looker started, Lloyd, who's one of the founders, created a metric called Usage Minutes. Literally all it did was count how many five-minute blocks an individual had spent on a Looker webpage. If you were using Looker actively for an hour, you used it for 12 five-minute blocks, and he recorded you for an hour. The only metric that he paid attention to for the first two years was how many usage minutes every single customer had. That was his health score. Were usage minutes there, were they growing, and was an individual user being successful?
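Usage Minutes is easy to reproduce from raw event timestamps: bucket each event into a five-minute window and count distinct windows. A sketch of that counting (my reconstruction of the idea, not Looker's actual code):

```python
from datetime import datetime, timezone

def usage_minutes(event_times):
    """Count the distinct five-minute blocks a user's events touch,
    reported as minutes (each block counts for five)."""
    blocks = {int(t.timestamp()) // 300 for t in event_times}  # 300 s = 5 min
    return len(blocks) * 5

# Events at 9:00-9:02 share one block; 9:06 and 9:31 each start another.
events = [datetime(2018, 12, 20, 9, m, tzinfo=timezone.utc)
          for m in (0, 1, 2, 6, 31)]
print(usage_minutes(events))  # 15
```

One `GROUP BY` away from the same thing in SQL, which is why a single-metric health score is such a cheap place to start.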
You can truly start small but the idea is, you want to create a metric or a set of metrics that the entire company can rally around to understand, what makes our customer successful or what are we actually trying to accomplish as a business?
If that metric is just revenue, then you're missing the things that actually predict future revenue.
Again, for media companies, it might be articles consumed. For a SaaS business, it might be using these three features of the product or inviting users. But defining success is number one. Then you need to go track it. If you can track things perfectly, great. If you can't, then create metrics that are correlated to them.
There's an example that we always used at HotelTonight, which is, we couldn't see people that were unhappy with our product. But we knew if you opened the app and loaded a page that had no hotels on it, or searched for a city we didn't have, that you would have a really bad time. It's an easy metric to track for a bad user experience: the inverse of a health score. From there, it's about operationalizing it. Once you have these metrics about who's being successful and who's not, now you have a framework to define what your Customer Success team is doing. Do you want them to be spending time creating advocates out of your great users, or spending time trying to save revenue from customers that aren't being successful?
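That zero-results signal is cheap to compute from search logs. A sketch with a made-up log format:

```python
def bad_experience_rate(search_logs):
    """Share of searches that returned nothing bookable -- a proxy for
    the unhappiness you can't observe directly."""
    if not search_logs:
        return 0.0
    bad = sum(1 for s in search_logs if s["result_count"] == 0)
    return bad / len(search_logs)

logs = [{"city": "SF", "result_count": 12},
        {"city": "Fresno", "result_count": 0},
        {"city": "NYC", "result_count": 30},
        {"city": "Boise", "result_count": 0}]
print(bad_experience_rate(logs))  # 0.5
```

The metric is deliberately one step removed from what you actually care about (unhappy users), but it's observable, and correlated, which is the whole point of a proxy.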
For us, it was obsessively going after customers that for whatever reason weren't implementing the product, trying to understand what they could do better, what we could help them with, could we offer services? We wanted every single customer to be successful. That's changed over time. As you need to grow your pipeline, you'll want advocates and these things can shift. The key is to come back and adjust. Health scores don't need to be static. The one that I made up that afternoon, we did not change for three years. Since then, we've overhauled it four or five times to tune the different weights.
I want to emphasize how little thought can go into a metric that can really create a lot of value for the business. I literally just sat down and started thinking about stuff that I think makes people successful. So using the product, growing; again, maybe growth matters to you, maybe it doesn't. For me, inviting a new user every week was a pretty good sign that they're getting value. We collected a bunch of different features in our product. If you don't have features that are important, throw that out and just focus on usage. Or really double down on growth, or maybe only have one feature that really matters. If you go back and look at this table, I like this ensemble view because you can try to go pick out a reason to go have a conversation with a customer. And so, creating a wider table was actually a good thing, and some of these things were completely random and sort of floated in and out of the score. If we released a new feature that I wanted the Customer Success team to care about, an easy way to do that was to tuck it into the health score.
When everyone is rallied around a single metric, you're all talking the same language, and you're all focused on success in one way. By creating that metric, you've created organizational efficiency around doing and behaving the same way. From there, it was about pushing on the metric. Our focus was keeping the red customers down. You can see that the way that we looked at that was counting how many red, yellow, and green customers we had, and trying to kick that red customer number down as much as we possibly could. As we grew, we obsessed over those red customers.
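Tracking the red/yellow/green counts over time is just bucketing the scores. A sketch with arbitrary cutoffs and made-up customer names:

```python
from collections import Counter

def bucket(score, red_below=40, green_at=70):
    # Arbitrary cutoffs -- tune them to your score's actual distribution.
    if score < red_below:
        return "red"
    if score >= green_at:
        return "green"
    return "yellow"

scores = {"acme": 85, "globex": 35, "initech": 55, "umbrella": 72, "hooli": 22}
counts = Counter(bucket(s) for s in scores.values())
print(counts["red"], counts["yellow"], counts["green"])  # 2 1 2
```

Run this weekly or monthly and the red count becomes the single number the whole team pushes on.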
You could look at it in a different way; "how many customers do I have that are super advocates." There's a variety of ways you could read the data once you have it in place. Once you go through this small effort of actually defining what success means and people are talking the same language, you can build operational efficiency around it. This is an example of a tool that we use today, to understand the health of a customer. When they renew, we get a nice breakdown of how they're using the product. This customer looks like they're using our backend less. Great, that's a place that we can go and engage them in a conversation.
We like to break these things down a little bit, but by just seeing the 50 or 64 right off the bat, we have a good understanding of where this customer sits in terms of success. So if I'm going to go walk into a conversation with this customer, as a Customer Success manager, or as a marketer, or as a product person, I know exactly where this customer sits as a user of our product. Are they wildly successful? Are they sort of in the middle? Do they need attention in given areas? It doesn't take a lot to get started here and once you can operationalize it, the entire company can find value in it.
It's about more than the Customer Success team and scoring what they're doing.
If a sales rep for an enterprise company could know what makes a customer successful, they can be aligned to the Customer Success org around driving the type of usage that they want to see.
If the Product org has priorities, they can incorporate those priorities into the score and push those metrics forward. By creating this alignment, you can get lots of people rowing in the same direction.
The way that I would extend this is not to think about it exclusively through the lens of Customer Success. For Customer Success, you can get a more nuanced ensemble score that can do a whole variety of things to help define the business. The same concept rings true in consumer businesses as well. According to Facebook, having a certain number of friends is the point at which you actually start using their product. For Twitter, the timeline just doesn't work if you don't have 30 people. By creating this single way of understanding what a good user is, you create a way for the product to work and for the org to understand if what they're doing is working. For example, if you don't get more good users this week, then perhaps you aren't doing your job effectively.
Even for early-stage companies, you don't need numbers or a consultant, you just need to walk and talk to people. As soon as you start having even a handful of customers, creating definition around success is really important. This is an example from a talk that I've done for Looker in the past, but it's reflective of what we did at HotelTonight. The same way that the 30 follower inflection is important for Twitter, at HotelTonight, we-- you see this in retail e-commerce as well-- wanted to understand what the important inflection is for a user.
These are two very different products with two very different retention patterns. In the first product, you can see that after a person does one order, their likelihood of coming back is already 65%, and when they do two orders, it's 80%. So getting someone over the first purchase threshold is extremely important to this company. The product has to be so good, or the purchasing process so natural, that the first purchase is the key moment here. For the second business, you can see there's an enormous fall off between one, two and three. When you get a user to use this product once, they haven't been retained. It's about getting this person to three items. To use Twitter again, the first two followers don't matter, you're still not really using the product. It's the third follow that's an important inflection point.
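Curves like these come from conditional repeat rates: of the users who reached n orders, what fraction went on to order n+1? A sketch computed from per-user order counts (the data here is invented):

```python
def repeat_rates(order_counts, max_n=3):
    """P(user makes order n+1 | user made order n), for n = 1..max_n."""
    rates = {}
    for n in range(1, max_n + 1):
        reached = [c for c in order_counts if c >= n]
        if reached:
            rates[n] = sum(1 for c in reached if c >= n + 1) / len(reached)
    return rates

# Toy data: most users stop after one order; those who hit three stick.
counts = [1, 1, 1, 1, 2, 2, 3, 5, 8, 10]
print(repeat_rates(counts))  # {1: 0.6, 2: 0.6666666666666666, 3: 0.75}
```

Wherever the rate jumps is your inflection point; that's the threshold the whole funnel should be pushing users over.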
You want to understand the type of purchasing behavior and renewal behavior that your customer has, and revolve the entire business around serving the user and the market at those inflection points. If you have a sticky product, you need someone to use it once. The canonical example is PayPal giving people money to start using their product. When people received the money and started using PayPal, they kept using PayPal. For a Facebook or Twitter, finding friends is incredibly important, so they needed to build a process less focused on grabbing new users and more on getting existing users to use the product twice.
For Looker, if you buy the product and don't use it, we haven't been successful. We need you to start using it on a daily basis, we need to create value, we need to make a schedule, so that you're getting Looker in your inbox. These are the types of things that need to go into a health score. Start with one thing. Number of users with 30 followers on Twitter. Minutes of usage on Looker. I think in the back of Lloyd's mind, he had either a growth number or just an absolute number that reflected "this thing is providing value."
Don't be afraid to get started. Define what success means beyond just renewing or using; define the behaviors that drive usage or renewals. Make sure that you track them. You can start with one, then get everyone to start pushing on it. It's that simple to get started with a health score. You can always make them more complex over time, they can grow with your business. The company and your teammates will work better if you can create clarity around what successful use of the product looks like and how to act on it.
I would say it's the inverse actually. When you think about a fleet management product or an IoT product, certainly from a product development point of view, it needs to work in real-time. It needs to be functional for a driver, while they're driving, or for a monitoring service. I would argue that when you reflect on whether a customer is being successful or not with your product, there are probably not real-time drivers that indicate whether the customer's being successful. If you're a fleet management product, and more users are being onboarded over time, and they're using the product, that's probably a great measure of success.
People love looking at the Google Analytics real-time chart. I would emphasize, the faster that you can get a data pipeline and the more that you can make it move, the more you'll have people watch it. Using Twitter's example, getting users their 30th follow and putting them up on a board, gets people excited. But generally, you're going to want to reflect on the holistic version of success in your product. Those can be really slow moving metrics. Our health score was a monthly thing. It just could not move day-to-day.
When I reflect on someone using Looker, their success with our product doesn't move day-to-day so it doesn't make sense for health scores to move day-to-day. You want to be reflective of what your product, and what success is with your product.
I don't think I've ever put a health score in front of a customer. I would say that we're talking about the underlying drivers of these things a lot with customers. "Hey, you're using the product a little bit less. Tell us about what happened there." Or "hey, it looks like you have two users using the product and no one else but you have 100 users that you're paying us for. Could you tell us about that?" I'd say we're trying to take it down to the drivers a little bit more, than the health score itself.
That said, we did a business review of Slack and they shared a bunch of Looker's metrics for using Slack. They told us what percentile we were in terms of emoji usage and all sorts of things. If you can productize sharing some of the information, people love to know where they sit in terms of being successful with the product. A great example for enterprise software is, our buyer wants Looker to be implemented well so they want to know, "are my users more engaged than this other company that looks like me?" That's a great time to go share health.
You generally don't want to walk in and say, "hey you're using the product horribly, what's happening?" You want to take it to a tactical understanding of what the customer can do to be successful. Sometimes that's sharing components of it, and sometimes it might be sharing the score itself. You can tell them, "you're the best user of this feature." You can also use some of these things as drivers of warmth in your customer base as well. Both of those are actually applicable.
This is gonna vary by business. At Looker, we have one product so there's nothing that we're gonna go attach to Looker to cross sell. There are lots of companies and products that have a different model than we have. They might have nine or 10 different modules and they might want to look at the correlation between using one module and using two modules because there's going to be more nuance there. I would say the proxy for us is that, we do a lot of cross selling and using stories from one customer to market to other customers.
Ted mentioned lookalike companies. Scores are going to tell you two things. First is that you're going to know what a successful company looks like, and you can sell to companies like them directly. So if Uber is using our product, Lyft will want to use our product. It shows you the types of businesses that would be successful, or the types of users that are going to be successful. More importantly, it gives you a good understanding of who's gonna advocate for your product.
Looker was truly driven by people being successful with our product. We grew from customers, like me, who loved Looker and told their friends about it, and then their friends started using the product. Health scores give you a good way of targeting: who are the people that have a health score of 99 out of 100? Those people are using the product well. Go understand what they're doing and get other people to do it. Second, get them to talk about your product as much as they can. There's nothing better than true advocates for your product. This gives you a framework for finding those advocates and talking to them.
We recently put in a metric called Corporate Penetration, where we look at the percent of users in your company that are using Looker. If 10% of your company is using Looker, and they're using it really successfully, great, that's an opportunity to go talk to them about how to get the other 90% using the product. There's a whole variety of different extensions that you can take after you start thinking about what success is and what you do with success, but it's really about getting that framework to start.
They do evolve and I'd say, this is sort of the feedback loop that you have anytime that you create a metric. Ultimately the metric that we're trying to create is how happy are they and how likely are they to churn? It means a whole variety of things at the same time. One of the things we saw a couple of years in was that, it would completely miss companies that we thought were very happy, that churned out, and we'd wonder why this company fell out of the sky. What happened there? This is the time when you want to pair account teams with customers. One of the reasons we discovered was that, when the person that bought Looker turned over, a new person came in and sometimes they would have a completely different view of what analytics is. There's no way for our health score to know that just from looking at how the product is being used.
You end up with a decision whether to keep the metric. If you know its weakness, do you enhance the metric? You can make all sorts of decisions. I talked about that corporate penetration number. We've done adjustments for how much someone was paying us. If they have 100 users and they're paying us one thing, or they have 100 users and they're paying us another thing, are those two companies likely to be successful in the exact same way?
Can you capture those numbers, and what can you complement them with? Another example is that we had our CSMs fill out subjective supplements to the health scores, like "here's the temperature on the account." That's a great way to flag something like advocate turnover. Is that half your score? Is it none of your score? That gets into business process questions, like how often people can realistically do that. But it gives you a framework for thinking about how to make updates.
The first score I created was purely temperature based. It was "let's add these metrics together and figure out if we can predict churn." We did eventually pipe these in as predictive variables into a regression engine, and you end up with different weights on different things. This is where you want to think about health in a balanced way. We wanted to predict happiness with HotelTonight via our search results and optimize the queries that were getting returned. When we ran a regression, all we found was that people wanted the cheapest rooms. It was a pretty simple regression. It threw out every single other variable, and it told us to stack rank hotels in ascending order of price. Now we know that it optimizes conversion, while still thinking about the other variables that that regression doesn't capture. Does HotelTonight feel luxurious to the user? Maybe we're willing to sacrifice some perfection in our ability to click-optimize search results to make the app feel a different way, to provide a diversity of results.
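Before reaching for a full regression engine, a per-feature correlation against the churn flag gives the same first-order picture of which signals the model would lean on. A pure-Python sketch (feature names and data invented):

```python
def pearson(x, y):
    """Pearson correlation of two equal-length numeric lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def churn_correlations(rows):
    """Correlate each numeric feature with the churned flag (0/1)."""
    features = [k for k in rows[0] if k != "churned"]
    y = [r["churned"] for r in rows]
    return {f: pearson([r[f] for r in rows], y) for f in features}

rows = [{"minutes": 600, "scheduler": 1, "churned": 0},
        {"minutes": 500, "scheduler": 1, "churned": 0},
        {"minutes": 100, "scheduler": 0, "churned": 1},
        {"minutes": 50,  "scheduler": 0, "churned": 1}]
print(churn_correlations(rows))  # both features strongly negative
```

This is exactly the trap the HotelTonight regression illustrates: a single dominant variable can wash out everything else, which is why you keep the other signals in the score for behavioral reasons even when the statistics say to drop them.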
This is where you want to balance. Absolutely go take all these inputs and run a regression and see what spits out. If it tells you that the only thing that matters is usage growing, then great, you know that. But again, you want to think about this as more than a churn predictor but also a behavioral driver. If I, as a product leader, think that scheduling in Looker drives long-term retention, it might be difficult to perfectly tease out that variable into churn prediction, but I am capable of driving behavior of the organization by putting whether people are using these features into the product, into the health score.
You can think about some of these as a constrained optimization problem, where we want to predict churn but we also want it to be interpretable and a lever that we can use. It might just be that your highest usage customers are the ones that retain, but you want to turn that into a health score. Pairing what you know with a couple of complementary metrics may not be predictive in a statistical sense, but in a behavioral sense it can tell you a lot, and I say that as a data science person. You always have to take it with a grain of salt because inevitably, you're not capturing every single metric. It's about integrating, not replacing.
The first thing that you want to do is define what success is. The second thing is that you need to be capturing some data. I was fortunate that Looker was already so obsessive about capturing everything when I started. They were like, "we don't know how to use it yet, but we're data people and that's what data people do." So you need to have some tracking in place. But I'd say by far the biggest thing, before you invest a lot of time, is just create one or two or three little metrics that can describe what success is. Is this customer growing over time? Get started with something, see if it's working and then enhance it or change it. People tend to get stuck on trying to set up perfect tracking for things. Maybe they don't want to track usage, they want to track users. Or maybe they don't want to track users, they want to track page views or a certain type of page view. Lots of things are correlated in all of these different buckets. I could describe usage 15 different ways in Looker. They're all gonna do about the same thing, and they're all gonna get me started.
Once you have anything, use Looker, write SQL, do it in Excel, it doesn't really matter. It's more about just saying, "I'm going to quantify this, I'm going to track it over time, and then I'm going to see if it's working or not." The worst scenario is spending a lot of time doing something that's not valuable. It should be an MVP, just get something out the door, build some light process around it, see if it's working, adjust it.
If you have a freemium tier, I think you're just thinking about health in two bifurcated populations. You've got a free tier of users, so rather than churn, you're trying to predict what turns a user from a free user into a real user. The Twitter example is a great one, which is, "what took this person from a passive user of our product to someone that's actually going to return and use it on a regular basis." It's very likely that you want to create some way of understanding the users that you're reaching out to, or not. Sure, you can have a process where you spam every single user of your product and ask them to please come sign up. But you're probably going to find more success dividing your time towards focusing on the type of users that are actually gonna be successful and driving the behaviors that are actually gonna make those users successful.
Rather than thinking about blanket communication strategies, if you have a free product, what are the type of behaviors that correlate with going to paid? From there, you've got a two-part question: is that behavior what drives success, or are your successful customers predisposed to that behavior? You don't want to think purely as a data person. You need to know and think hard about what actually drives success in your product. The New York Times has their five free articles model. I have no idea if they just picked that number out of the air or whether that's enough to get someone excited about the product, but I think the idea is, the people that are hitting the five every single month are probably much more valuable to talk to than the people that are looking at one article a month. The people that go through that allocation in the first two days of the month are probably very different than people that might do it in one session.
The idea is, you're trying to define who you're talking to and who you're not talking to and what behaviors would make you want to go interact with that person, that's the definition that you want to create. Whether it's a free customer that you're trying to upgrade or a paid customer that you're trying to retain, it really is about just defining process with lightweight numbers, and then building structure around trying to push those numbers. I get to say this as a data person but take everything with a giant grain of salt. Don't fall into an analysis trap where you feel like you need to understand these things. Thanks everybody, good luck.