JUN 14, 2016 - 67 MIN

Beta Testing for Early Developer Teams


UX designer and former Treasure Data product manager Luca Candela discusses defining your feature tests, cohorts and research experiments. He also covers collecting your datasets and determining the nature of your feature releases to build products that keep developers coming back for more.

  • Introduction
  • Beta Testing for Early Developer Teams
    • Betas Gone Bad
    • From Alpha to Beta
    • Challenges in Beta
    • Running a Good Beta
    • Tools for Beta Testing
    • Systems and Docs in Place
    • Exit Conditions
  • Q&A
    • Team Beta Process


Thank you very much for coming tonight. My name is Luca. I'm a product manager. I used to be a designer, and I spent the last few years of my life building new products and bringing them to market. So I've done these in a few cycles and I learned quite a bit about how to beta test my products.

One of the things I learned is what a real beta test actually is, as opposed to the other things we call betas that aren't necessarily that. Defining things is pretty useful: you run a better version of something once you know what you're looking for and what you're doing.

Beta Testing for Early Developer Teams

First of all, as I said, let's try to define things a little better. You frame early testing by answering a few questions: Is this problem I'm solving a real problem? And is it worth solving?

Who here is at that stage of product development, answering that question? Nobody. You've all found your market, great. So: is how I'm solving this problem the right way to solve it? Is that the question you're trying to answer right now? None of you. One, perfect. Okay, we'll talk later.

Is my solution going to work in the real world? Is that the question you're trying to answer? If that's the question, then you are in a different phase. If you said yes to the first question, you're in discovery mode, the validated-learning phase, as the lean people call it.

If you're trying to figure out if you're solving the problem the right way, that's definitely what we call alpha testing. If you're trying to figure out, "Is my solution going to work in the real world?" That's beta testing. And there's a lot of confusion around these definitions.

I'm going to go through it and figure out what is what, because the stage you're at greatly influences the things you're supposed to do and the things you're not supposed to do.

Let's talk about the big one: what is beta testing? It's the most formal of the three phases. There's a lot of stuff I've heard called beta testing, and then there's actual beta testing, which is a pretty well-defined part of the software development life cycle.

So let's go and look at what beta testing actually is.

Beta testing is for the purpose of validating the readiness of a product or service for general availability. At that point, you know what problem you're solving. You know how you're solving it.

What you're really trying to do is to figure out if you, as a product organization, as a company, if you're ready to deliver your product or service to the public. So it's not just the testing of, does the product work? It's a testing of your support systems, of your feedback systems, a little bit of your marketing, too.

It's very important that you make sure the documentation, all the stuff around the product that enables it, is right too. And it goes without saying that since this is essentially a deployment-readiness program, beta testing should be led by a product manager, or whoever runs product management in your company. Could be the CEO, could be the COO. Doesn't matter.

I've seen a lot of these being done by people without the right kind of influence and authority inside the company, and that's usually one of the failure modes of beta testing. Having people in charge that cannot trigger the alarm and say, "Hey, things are not going well. We have to change something."

I call it a "generalized software development lifecycle," because if you trace a cycle around these three boxes in this diagram, you get your Scrum, your Kanban, all your Agile development life cycles. If you're doing software for a big enterprise or government, you might be in a waterfall model; it doesn't really matter. It's just a matter of how often you cycle and how many cycles you do.

There's actually an interesting sidebar here. Recently, I was reading the original paper that proposed the waterfall model. The thing that nobody realizes is that in the paper, the author, whose name I can't remember right now, actually suggests that the whole waterfall cycle should be run twice: once for learning and the second time for actually building the thing.

There's actually no formal software lifecycle model that proposes building the software just once. None of them. Isn't that amazing? Anyway, betas are conceptually super easy. There's no secret to it whatsoever. And I often see them over-complicated. Many of the reasons betas go awry is that groups try to innovate on how they run the beta, and that's really not something you want to do. Stop. Just do it the way you're supposed to, and you're going to be fine.

The first thing you need to do is you need to recruit a pool of candidates from your target market. And that's actually very specific. You cannot beta test without people that are supposed to be your customers and users. I've seen this done many, many times, and it's a source of terrible problems later on, so make sure that you recruit people from your target market.

You're also going to have limits on how many people you can put in this program, so you want to select the most qualified candidates as participants. That doesn't mean you necessarily need to pick the best of one type. It means you need to know what you're looking for and only let those people in. Otherwise, your program is going to become pretty meaningless. Failure to do proper recruiting and selection is the biggest failure mode I've seen in real life.

Betas Gone Bad

How do beta programs go bad? Why does what we learned during the beta program not apply to the real world? Failure in proper recruitment and qualification is reason number one. The third step is to get testers to sign an NDA. I'm actually not sure why we do that. Sometimes it's to protect us against competitors, or to make sure we don't spoil the marketing launch and don't have a ton of people blabbing about the new product out in the open.

Truth be told, I've had more problems getting people to talk about my products than stopping them from talking about my products, so I'm not sure that's very useful. Sometimes it's to remove anxiety from CEOs. I've worked with a lot of CEOs who ask me to get people to sign NDAs; fine, go ahead and do it. The nice side effect of the NDA is that it introduces one more hoop that people have to jump through, and that usually helps you select people that are already more motivated than others. But I don't find it a requirement.

Put it in place if you need it. Also make sure you tell people when the NDA expires; that's really important. The next step is to explain to the user how to install the software or onboard themselves, depending on whether it's a service or an installable product. And then you have to teach them the workflows you want them to try. Pretty simple.

You have to say, "This is how you get started, and this is how you do useful things: useful thing number one, two, and three." Easy. Once you have walked your beta testers through the product, you actually have to give it to them. So you have to figure out how you're going to give it to them.

That goes for testing your packaging, your distribution strategy. I never made apps, because that's not what I do. As developer products, mine are usually either installed or provided as a service. I bet that's true for most of you.

How many people here are doing products that get deployed on the cloud or on the private cloud of your own customers or users? None? One, two. How many people have a service, something that's deployed from your own cloud? You run the service and people just log in. One, two, three. Okay, the majority, so, SaaS wins. Awesome.

Then once you have people onboard in your product, you have to make sure that your users provide you feedback and you have to figure out how to collect it in a way that stays manageable. Once you have a few users, it becomes hard to keep up with. The rule of thumb is that one person can handle roughly 100 testers. More than that, it's going to be unmanageable.

For every hundred testers, you need at least one person full-time dedicated to collect feedback and to summarize, prioritize and then distribute information to the team. That's mostly for developer products. For consumer products, it's a little bit bigger. For infrastructure, it's actually a little bit less than that.

I've seen that with infrastructure products, when I was at Treasure Data, it would break down with more than 20 testers. And then, of course, you have to get bugs and feedback from users, prioritize them, and fix them. You want to ship a new release roughly once every two weeks. More often than that becomes a bit disruptive for testing; less often, and you give the impression of not having any momentum, and people will disengage and go away. That will happen anyway, by the way, so you need tricks to deal with it.
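As a rough sketch of the staffing arithmetic above — one full-time person per roughly 100 testers for a developer product, closer to 20 testers per person for infrastructure — you could size the feedback team like this. The consumer ratio and the function itself are illustrative assumptions, not numbers from the talk.

```python
# Rough staffing arithmetic from the rule of thumb above: one full-time person
# per ~100 testers for a developer product, and closer to 20 testers per person
# for infrastructure. The consumer ratio here is an assumption for illustration.
import math

TESTERS_PER_PERSON = {
    "developer": 100,       # rule of thumb from the talk
    "infrastructure": 20,   # breaks down past ~20 testers
    "consumer": 150,        # assumed: "a little bit bigger" than developer
}

def feedback_staff_needed(testers: int, product_type: str) -> int:
    """Full-time people needed to collect, summarize, and route feedback."""
    return math.ceil(testers / TESTERS_PER_PERSON[product_type])

print(feedback_staff_needed(250, "developer"))      # 3
print(feedback_staff_needed(60, "infrastructure"))  # 3
```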

From Alpha to Beta

Beta programs have tons of different names: customer validation, customer acceptance testing, user acceptance testing, friendly user testing, field trials, production testing, pre-release, early release. Call it what you want.

If you have a product that's well defined, and you're trying to figure out if it works in the field, that's a beta. If you have a product that's somewhat defined and you're trying to figure out if that's the right way to solve the problem, it's an alpha.

And if not, you're obviously still designing your product. The important point I'm trying to make is that alphas are not betas. They are run differently, and you have to be mindful of that fact. If you're running a beta, you cannot take feedback and say, "Oh, let's change the feature set." That's not the way it works. You're just going to have to throw your program away and start from scratch. Because if you change the feature set, then you need to change the way you support it, you need to change the documentation, you need to change everything.

At that point, what is your testing good for, if your product has changed? Don't do that; be disciplined going into the beta. Going into beta is also a big test of maturity for the whole organization. It tells you, "Am I ready to go into beta? Can I stick to the product I built? Is the product mature enough that now the only thing I need to do is figure out if I can support it in the wild?"

If you're not there, then don't even waste the time going to beta. You're not ready, and you have to do different things. Alpha testing, as I said, happens during development. You probably want to involve friendlies. By friends and family, we don't literally mean families, especially with developer products. What we mean is people that know you, probably trust you, and are going to have a little extra motivation to test the product. You also have extra motivation to listen to what they say, because you trust them. It's going to be a big, collaborative experience.

You're going to have incomplete features and tons of performance issues. Usually they last three-to-five times longer than a beta program. A beta program normally lasts between six and eight weeks. You cannot do it faster than that. It doesn't work, because the setup time dominates the testing time.

In less than eight weeks, you're just not going to be very thorough. And in more than that, you're just going to repeat the same thing over and over again, and it's not going to be very useful. Roughly, stay around eight weeks. Alpha testing, meanwhile, helps you identify conceptual problems in the product.

For example, let's say you're designing a developer product with a particularly complex API. During alpha testing you're going to figure out, "Okay, nobody can figure out how to use this particular endpoint." That's a good stopping point, a moment to ask, "Are we doing the right thing? Maybe we should reconsider." That's a good moment in time, when you can probably still make changes.

You don't have a lot of documentation. You don't have a lot of investment in the feature. There's nobody out in the field actually using it, so that's a good time to change things.

Later, it becomes more and more complicated. And also you're going to use different testing protocols. So at this point, it's actually common to do long-term followups with users. Instead of just surveys, one of the things I found very useful to do is something called a "diary study." Are you familiar with the term at all? I see somebody nodding.

For the people not familiar with it: a diary study is simply asking a few of my users to take notes daily while using the product we're building, and Google Docs is fantastic for that. Essentially, they put in a section title for the day, then jot a few notes: "I'm not sure I'm understanding this." Or, "I really don't like how this works. I'm a pretty big fan of how section 'X' works." And so on, so forth.

The nice thing about these diary studies is that you can see, first of all, how the user evolves in their ability to understand the product. And second, you get a pretty good idea of the learning curve of your product. And the other nice thing is that you can also intervene if there's a big problem. You can intervene before they lose their commitment to testing your product, then you can bring them back on the right path.

That's important: your alpha testing should be the time when you decide, "Okay, I'm ready to go into beta." Your alpha testing should lead you to say, "Okay, this product doesn't need any more changes. It has reached a reasonable V1 form." At that point, we need to figure out, "Are we ready as a company to deploy this product and make money?" Very important. You don't want to skip that.

I've seen a lot of companies launch straight from alpha and call it a "public beta." Always a bad idea. You're not Gmail. Google can do that; don't do that. Here's the important thing: alpha and beta are very useful, but I've seen so many teams try to discover a product by trying things. That's not the way it works.

Even at companies with a billion users, like Facebook and Google, there's a lot of forethought and a lot of planning that goes into every feature. The experiments are built to remove risk from the equation.

You cannot experiment your way into a successful product. I've never seen it happen. It doesn't happen.

Even if you stumble into something, you still have to be deliberate about growing it. So the point is that you need to know what you're going to build, at least from the point of view of, "What are the problems I'm going to solve? How am I going to solve them? What are the workflows we're going to provide?" Have a plan. Then you can say, "These are the questions I have; let's figure out how to remove those particular questions." That's the job of alpha.

When you get to the point where, "Okay, I think we've got it. We have pretty consistent and positive feedback. Let's figure out if we can support it." That's beta. But neither of those phases replaces proper product planning and management. You still have to do your homework.

Challenges in Beta

What's hard about beta programs? First of all, the resources to run them. It's amazing: when you start getting into these programs, everybody's busy. If you don't have anybody dedicated, nobody can write documentation. Nobody can reply to the tickets of your customers or prospective users. Nobody can take time to analyze the data coming from telemetry. Nobody can spend time making a survey to qualify your users. Nobody has time to prospect, to get users into the beta.

If you don't have a person dedicated to that, the temptation is going to be to cut corners. Funny fact: I've been consulting for a few months lately, and the most valuable thing I could do for those companies was to get paid to sit there and watch them do their homework. Just by virtue of my being present, without really teaching anything or imparting any wisdom, my clients have to do their homework and really be considerate about what they give me. Because I'm not immersed in their culture and I don't know what they do every day, they have to write it down.

The simple act of writing down what they're supposed to do brings an extreme amount of clarity. You don't need to wait and pay somebody $200 an hour to come to your company and help you do what you already know you have to do. The biggest factor in the success of a product development process is being organized. Honestly, that's the biggest difference I've seen between successful and unsuccessful products: somebody has spent the time to be organized and disciplined about what the product is supposed to be, what it's supposed to do, and how we're going to do it.

Have we identified our target market? Have we spent enough time talking to them? Do we have a plan? Is this plan written down? Does the plan make sense? Can a person that's not involved in the project read the plan and keep a straight face? Those are the incredibly simple things to do, and yet, not enough people do it. So I encourage you to do that.

Recruiting good beta testers: if you don't know your market, if you don't know who you're looking for, it's going to be very hard to do. That's why you need good planning. That's why you need to know who you're building for. Especially with developer products, you cannot say, "I'm building for everybody." That's not a thing. You're probably going to need to find people by the language your product supports, by their role, by where they work, or by the size of their company.

You need to do segmentation. That's easy. It's not that hard. Go on LinkedIn, find 10 people that could be your customers, and read their LinkedIn profile. That's a persona for you right there. Find what they have in common, and then try to become predictive. If you can create a decision tree that says this person is a good beta tester, you have your survey right there.
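That decision tree can be as simple as a checklist. Here's a minimal sketch in Python; the criteria names are hypothetical, stand-ins for whatever your segmentation work turns up.

```python
# A minimal sketch of the qualification decision tree as a checklist.
# The criteria names are hypothetical; substitute whatever your
# segmentation says good testers have in common.
REQUIRED_CRITERIA = [
    "uses_target_language",   # e.g. writes the language your product serves
    "matches_target_role",    # e.g. data engineer, backend developer
    "company_size_in_range",  # e.g. 50+ employees
]

def qualifies(survey_answers: dict) -> bool:
    """A candidate qualifies only if they check every required box."""
    return all(survey_answers.get(c, False) for c in REQUIRED_CRITERIA)

candidate = {
    "uses_target_language": True,
    "matches_target_role": True,
    "company_size_in_range": False,
}
print(qualifies(candidate))  # False: fails the company-size criterion
```

Each question in your screening survey maps to one boolean here, which is exactly why the decision tree and the survey are the same artifact.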

When you can say, "Okay, this will be a perfect person for my beta test." Why? Because they check this box, this box, and this box. Great, you've got it. It's not hard to do, but you have to do it. Then maintain user participation once you have them, since testers are so expensive to find, especially in time. If you advertise your beta program, it's also going to be expensive as far as money goes.

You want to make sure you don't lose them too fast. You have to keep them engaged. You have to keep them talking to you. Set expectations: feedback needs to be provided every week, every two weeks, or every month. You need a proper method of providing feedback. It needs to be consistent, and it needs to be as easy to do as possible, so people will do it. And it needs to have both consequences and incentives.

Collecting relevant feedback: you want to make sure that all this stuff they tell you actually goes somewhere, and that you can prioritize properly. Especially in beta, you might not want to act on it. You're not going to say, "Hey, sure, great, let me go change my product because you had a good idea." You don't do that. You use that kind of feedback for the next iteration of the product. But the thing is, you want to know what's coming next.

A good beta program can give you your roadmap for the next year and a half, if you run it right. Because it's the highest bi-directional engagement moment you're having in your product lifecycle.

Once your beta is over, most of the time you're going to hear from your customers only when they're angry or when you're selling to them. Two pretty adversarial moments. But happy customers are hard to reach. You're going to find the two or three that are super happy about your product; they're going to tweet about it. But the vast majority of the people that are happy, you're never going to hear from them.

That's actually very important, and those are the ones you care about. Because for the people that are happy with your product, you have to figure out, "Okay, how do I sell you more of this service? How do I make you grow?" And those are pretty hard to hunt down, because they're going to be busy. They have other things to do.

Organizing and distributing feedback is very hard when you have 100 people that, on average, share three to 10 pieces of feedback per week. How do you take all this information and channel it the right way? Ticketing systems are gold. Ticketing systems designed for interaction with the customer, not for interaction with the development team. By that I mean Freshdesk, Zendesk, Desk.com; they're all "desk."

They distinguish between a case and an incident. Being able to aggregate cases is very important, because at that point you can see things bubbling up.
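The aggregation step can be sketched very simply: tag each case with a topic, then promote a topic to an incident once enough cases share it, so recurring problems surface. The tags and threshold below are illustrative, not tied to any particular ticketing tool.

```python
# A minimal sketch of cases "bubbling up" into incidents: count cases per
# topic and promote any topic that crosses a threshold. Topic tags and the
# threshold are illustrative.
from collections import Counter

cases = [
    {"id": 1, "topic": "query-timeout"},
    {"id": 2, "topic": "login"},
    {"id": 3, "topic": "query-timeout"},
    {"id": 4, "topic": "query-timeout"},
    {"id": 5, "topic": "docs-missing"},
]

INCIDENT_THRESHOLD = 3  # promote a topic once 3 cases share it

counts = Counter(case["topic"] for case in cases)
incidents = [topic for topic, n in counts.items() if n >= INCIDENT_THRESHOLD]
print(incidents)  # ['query-timeout']
```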

Running a Good Beta

What makes a good beta program? We had our introduction. I think you understand, more or less, how betas work. Now let's figure out how we run a real one. And there's going to be examples from my past experiences.

First of all, every beta program, as I said, needs good testers. How are you going to go about finding them? There are many different ways to recruit testers. One of my favorites is to go to an online forum for people that are roughly in your target market.

I was consulting for a company that's building a product for doing, essentially, BI on top of Elasticsearch using SQL. The first place I planted a few links was the Elasticsearch discussion forum. It's a great forum, and I was looking for people there. I did a bunch of searches mixing BI tools and Elasticsearch.

I found a bunch of people trying to connect Tableau, a bunch trying to connect QlikView, some trying to connect Re:dash, and so on. On all of those threads, I started saying, "Hey, this is interesting. I would like to hear more. This is what I'm building," and I gave them a way to get back to me. Over time, that passive method of planting links brings a trickle of traffic, and you usually get highly motivated beta testers that way. That's one way.

The second way is friends or acquaintances, especially if you are very deep in the community you're trying to serve. Your personal connections, and the personal connections of the people you work with, are probably going to be the best. I got companies like Intuit to test the product I was talking about because one of our developers was married to a data scientist inside Intuit, and so we had Intuit's data science team come on board.

Personal connections are great, and a lot of people don't really exploit that enough. One of the tricks is to, again, go on LinkedIn, find the people you would like to be talking to, and see if you have second-degree connections through people you work with. Or go to the profiles of the people you work with, look at their connections, and see if any have titles at companies you would like to be able to talk to. Be strategic, and go stalk the right users. Professional events are always great, too.

Right now, if I were beta testing anything, I would actually let you know, and some of you might either be interested or know somebody who is. And of course, you're going to be doing some search engine marketing, and you're going to have your landing page. That's usually not the main way. It's a good way to collect sign-ups, but you cannot put a landing page out and then expect, "Okay, I'm done. This is all I'm going to do." You have to drive traffic to it.

All of that feeds into the most important step, which is qualification. If you don't qualify your beta testers enough, you're going to get what I call "professional beta testers," people that try everything.

Their feedback is not particularly relevant or useful, because the over-enthusiast type usually is not a good representation of the majority of your users. The majority of your users are going to be pretty hard to reach.

You want to make sure that the people involved in your beta test are going to have the right qualifications, the right situation. First of all, let's say you want to test a physical device. You're testing a new router for big houses. At that point you want to find people with big houses. Finding people in San Francisco in their small apartments is not going to help you. They might be very willing to help, but they don't have the means to help, so you have to qualify them.

Qualification, NDA, preparation, and onboarding: it's a process, and you need to have it down. The nice thing is that once you run your beta for a while, you're also going to smooth out your ingestion pipeline into the program.

Tools for Beta Testing

The other thing is tools. Prospecting is actually the biggest time sink of your beta program: figuring out how to get the right people inside. I don't know if you've ever heard of these, but there are a few tools for it: there's Clearbit, there's FullContact, and there's Rapportive.

Take Rapportive. Let's say you're trying to reach somebody inside a company that has a pretty well-defined naming structure for emails. Start stabbing in the dark: first name dot last name, last name dot first name, first letter of the first name plus the last name. As soon as you hit the right combination, Rapportive will show you the person's LinkedIn profile, so you know you have the right one. Then you can actually send the email. That's a pretty good way to go about it.
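The stabbing-in-the-dark step looks roughly like this: generate the common corporate address patterns for a name, then verify each candidate by hand in a tool like Rapportive until the right profile shows up. The name and domain below are examples.

```python
# Generate the common corporate email patterns for a person, to be checked
# one by one in a verification tool. Name and domain are examples.
def email_guesses(first: str, last: str, domain: str) -> list:
    first, last = first.lower(), last.lower()
    patterns = [
        f"{first}.{last}",    # jane.doe@
        f"{last}.{first}",    # doe.jane@
        f"{first}{last}",     # janedoe@
        f"{first[0]}{last}",  # jdoe@
        f"{first}",           # jane@
    ]
    return [f"{p}@{domain}" for p in patterns]

for guess in email_guesses("Jane", "Doe", "example.com"):
    print(guess)
```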

Prospecting is probably the most valuable skill that you have to learn if you're trying to get the right people, the people that will become your users and your customers.

You're not selling yet at that point. That's actually a very important distinction. You're not doing sales; you're creating your cohorts for beta testing. Very important: make people jump through hoops. Truth is, if it's too easy to get into your program, it's also not going to be very interesting. You're going to have very low engagement. You spend so much time preparing people and getting them on board; you have to make sure that the people who come through have the right motivations. Not just motivated, but the right motivations.

They really want to help you. They really have something to gain from your beta, and they will be involved in the process. The best way to make sure that's the case is to make them work for it. Make them jump through a few hoops and, as I said, use NDAs only if strictly necessary. They might be an obstacle. Take my case, the case of DataPad, for example.

Everybody wanted to sign an NDA because the CEO was Wes McKinney, the guy that made Pandas, and we were hot. Another company I consulted for, which I can't disclose, was not that hot. We didn't have any superstars on the team, so NDAs were an obstacle, especially because I was dealing with much bigger companies. In a bigger company, people don't necessarily have the leeway to sign an NDA. There's a trick for that.

If somebody comes back and says, "Hey, I don't know if I'm allowed to sign an NDA with you," say, "No problem. Does your company have a mutual NDA form I can use instead?" That usually removes the problem.

The next thing is to provide clear instructions. It seems weird that I have to say it, but I have to say it. Have an introductory email that explains the rules of engagement. You have to say, "You have to provide feedback every two weeks. If you don't provide feedback for, let's say, three consecutive weeks, you're not going to get whatever it is I promised you'd get. If you provide enough feedback by the end, you'll get this amount of free time on my platform, or this goodie, or this payment." It doesn't matter what your incentive is. There needs to be one, and there need to be rules.

Make sure there's a way for your users to talk to each other. Unless there's a big obstacle, like companies that might be in a competitive position, or some other reason you cannot do it, please set up some way for customers to talk to each other. There are things users will not say to you because they're going to feel dumb or inadequate, or they don't think they can share them with you. But they will share them with other people in the same program.

In the past I've used Google Plus; that was probably the best thing I've ever used for running private beta groups. I'm not sure Google had that in mind when they built it, but I was really sad when they changed it, and it doesn't really work for that anymore. Basecamp is also great for private beta testing groups, because it has documents you can essentially keep diaries in.

This is an example of one of the emails I used to send when I was with DataPad. As you see, I start with, essentially, "Hey, thank you for your interest. Before we can share the application with you, you have to sign the NDA." I used a Google Form at the beginning, and then it became a Typeform. I explain why I'm trying to keep this under wraps. I'm saying, "Hey, we're trying to make sure we don't spoil the marketing launch of the product, so help us out with that." I was really specific.

I give a few examples of the things we would like you not to do. And I'm not trying to say, "Hey, we're going to sue you." No, it's, "Do us a favor." We're trying to set up a good relationship, mutual trust. You have to strike the right tone, and it really depends on the kind of users you're attracting. Then I say, "Hey, in exchange for your help, you're going to get a full year of free service once we launch." And that worked pretty well.

You don't want to onboard all your beta users at the same time, because if there's a problem with your process, you want to catch it with only a few of them; otherwise you might lose them. Keep your most valuable beta users for cohort number two or three, and figure out how your process works first.

Systems and Docs in Place

One of the things I noticed that really, really helps is to have at least some sort of documentation by the time the first users arrive; otherwise you will be saying the same thing a hundred times, and you don't want to do that. You're trying to figure out if you're ready to launch.

Would you launch without documentation? No. Then you cannot run a beta without documentation.

It's the same thing. It doesn't have to be finished. It doesn't have to be perfect. But it has to be there. Because if it's not there, people can't make comments like, "Oh yeah, this was missing in the documentation." You at least want to know what your testers are missing, but they can't make that comment if you've said, "Oh, don't worry, we don't have documentation yet." There's nothing to comment on.

Provide a testing routine. That's one of the most highly leveraged pieces of work that you can do. And, by the way, it spawns so many useful things. Leverage your efforts; make sure that every single piece of work you do can be reused. It usually can be, because your qualification test, for example, can be reused for sales later on when you start selling your product.

Your testing routines can become sales demos, marketing demos, QA for new releases. There's a lot of work that you can save by just doing that work well, once. Then collecting feedback is very important. As I said, establish a cadence. I've been in so many beta groups where the cadence of feedback was not established. People would say, "Oh, welcome to the beta!" And then they would not say anything to me.

What am I doing here? You have to tell people what you expect from them: "I want to hear from you every two weeks, and if I don't hear from you every two weeks, you're not going to get the free copy of our product." That's easy. Or, "If I don't hear from you at least 50% of the time..." And feel free to ask for that, because otherwise, what are you running the beta for? You're just going to have a lot of people hanging around with really nothing to say. That's not useful.

You need some way to separate successful users from unsuccessful users. Again, good product management. I worked with companies that, three years into their life, did not know how to say, "These are the users we want, and these are users that are not really helping our product." They didn't really know how to tell them apart.

You should know what a good user looks like in your product. And you should try to understand why, because you want to find the bad ones and figure out why they're bad. But you're not going to be able to do that if you don't know what they look like.

Here's a good user: one that comes back every week and does five of something, or one of each of three different workflows. I don't know, it's up to you, but you have to figure out how you're going to tell good from bad, and then you have to systematically follow up with the bad ones. You want to understand why. Why didn't they do the three things that you asked them to do? Why did they not come back every week?
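The rule of thumb above can be sketched as a simple check over usage events. The thresholds and workflow names here are hypothetical placeholders, not DataPad's actual criteria; the point is only that "good user" becomes something you can compute from your analytics export.

```python
from datetime import date, timedelta

# Hypothetical criteria: a "good" user either performs five core actions
# in a week, or touches each of three key workflows at least once.
CORE_ACTIONS = 5
KEY_WORKFLOWS = {"import", "chart", "share"}  # placeholder workflow names

def is_good_user(events, week_start):
    """events: list of (date, workflow_name) tuples from an analytics export."""
    week = [w for d, w in events
            if week_start <= d < week_start + timedelta(days=7)]
    # Pass either on raw volume or on breadth across the key workflows.
    return len(week) >= CORE_ACTIONS or KEY_WORKFLOWS.issubset(week)
```

Segment your users weekly with something like this, then follow up with everyone the check flags as "bad" to ask why.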

Once you do that, especially when people report problems or don't come back and you have to follow up, don't apologize. I've seen a lot of people being fairly dismissive of their own product. It doesn't help. Don't do that. Don't be apologetic. Don't bash your product. Go straight to the point and say, "Hey, I see you're having problems. Let's solve it. Let's figure out what it is." Be constructive and be helpful with continuous feedback.

If you are in alpha, this is useful. This is a real screenshot from one of those diary studies I was talking about. And as you see, there are actually a couple of things that are helpful. From this mode of following the progress of a user, you can have a conversation. You can have a structured conversation around each of the notes. You can ask for clarification.

All of this feedback was later moved into Jira, Confluence, and ticketing systems, but at the time, that was perfect. It gives you an idea of how things progress over time. Google Docs has revision tracking, so you can see how things have changed over time.

The other thing is the feedback request. In retrospect, I might have done this a little bit differently. I noticed that sometimes people would have problems, not problems exactly, more like they were lazy, and they would not click through a form. But they would reply to an email. So what I did is I started copying the key questions from the form into the email. That actually raised the reply rate a little bit, but it was also very laborious to deal with. I don't know how I would go about making it better, but if you come up with something better, please let me know.

The point is that the questions were very simple, very straightforward. What were you trying to accomplish? Were you successful? As I said, you want to figure out a way to tell successful from unsuccessful users. In that case, I was asking them, "What were you trying to do? Were you successful in what you were trying to do?" And if they say no, I'm like, "Oh, what were you trying to do? Where should we concentrate our efforts next?" Not because I'm actually trying to use that information to define my roadmap, but because I'm trying to understand the perception of the product right now.

What's missing? Although I called it a beta in some places, this was actually from my alpha test. We knew that this was an incomplete product with a lot of holes. So the question is, are we thinking about the holes in the same way that users do? And most of the time, the answer was actually yes. It was very obvious what the need was and what we were working on, but it's still good to know that.

The other thing is, you want to understand the readiness of your product. At that point we were in alpha, and I had a pretty good idea of what the minimum feature set of the product had to be in order to go to production. But you want to second-guess yourself a little bit. So the last question was, and really, that's all you want to know, "Are we ready to be used for production purposes in your company?"

Some of the users actually surprised us; we were much readier than we thought we were. One of the customers that we closed before we got acquired by Cloudera was actually the Democratic National Committee. And for them, we were ready enough. Honestly, the biggest concerns were sharing subsets of data and price, because they don't have the budgets that an enterprise might have. And so for them, the fact that a few features were missing was not a problem. And this is also very important: have a re-engagement strategy.

What are you going to do for the people that fall out of your beta test, or alpha test, or whatever? How are you going to bring them back? The best method I found was actually bombproof, and I'm ashamed to say it, but it's shame.

True story. This is the email we were sending. I always feel bad about showing this email, but it worked like a charm. It had more than a 50% success rate at bringing people back. It would just say, "Hey, did we lose you?" Because these are people that have actually made a commitment to you. You have to make them super-swear that they're going to be diligent beta testers, that they're going to give you feedback at least 50% of the time. You have to remind them that they made a commitment to you.

You have made them sign documents. In my case, very literally, because I didn't have a HelloSign account, so they had to sign documents, scan them, and send them back. It was a huge hassle. And yet, some of them would fall out. How dare they fall out? This is the email I was sending, and it was very effective. I did not send it manually. There was a template set up in Mixpanel, and it was sent to people who had not logged back in for more than, I don't know, four to six weeks, something like that. But it was super effective, so do it.

Feed the beast. What do I mean by that? You want to stagger your beta cohorts. I said that before, but it's worth repeating. The first testers, of course, will experience more issues than the follow-up cohorts. They will hit more bugs. The documentation will suck. You will not know how to run the beta properly. Dissatisfaction might emerge in the beginning, and if you ask only those people, they're going to have a very negative opinion of your product.

Once you solve those problems, you want to see what the product looks like with those problems solved. You want to get some fresh eyes, also because at some point, people reach a natural plateau in finding problems.

I remember a study showing that, on average, a single tester will only ever find a maximum of 30 to 40 percent of the problems; I don't remember the exact figure. But the point is that one tester alone will never be able to find all the bugs in your system, even if they use the system systematically. You want to make sure that your coverage is high, and the same holds true for cohorts.
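The study being half-remembered here is likely in the spirit of the classic defect-discovery model, in which each tester independently finds some fixed fraction of the problems. A minimal sketch, assuming a per-tester hit rate of 0.3 (the low end of the 30 to 40 percent range mentioned above; the exact rate varies by product):

```python
# Proportion of all problems found by n testers, assuming each tester
# independently finds a fraction `lam` of them (an assumed rate, not
# a figure from the talk).
def coverage(n, lam=0.3):
    return 1 - (1 - lam) ** n

for n in (1, 3, 5, 10):
    print(f"{n:2d} testers -> {coverage(n):.0%} of problems found")
```

Under these assumptions one tester covers 30% of the problems while ten testers cover about 97%, which is exactly why fresh cohorts keep paying off.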

Once you hit a few bugs, you don't know what's behind them, because there is going to be a gating effect. Test your beta routine, and keep your high-value beta users for later. Don't put them in right away. You want to give them a good experience, especially the ones that you think are going to be your first landmark customers. You want to make sure that they have a good time and that they don't see the nasty bits.

Whoever has a good experience, even sometimes the people who've had a bad experience, ask them for referrals. Once you have people that understand the power of your product and like it, say, "Hey, do you know anybody else that could benefit from this product?"

Most of the referrals in my beta programs usually come that way. And so the last cohorts are usually people referred by the people who went first. That's something I haven't really seen done a lot, so that's another trick that you can use. Very important.

Exit Conditions

I've never worked anywhere or seen a beta program where the exit condition was well done, was explicit. I've seen exit conditions as, "The beta program is going to last eight weeks." That's not an exit condition; that's a schedule.

You need to have an exit condition. You need to know what "ready" needs to look like in your world. And it's surprising that so few do it.

For a good exit condition, you need a population, you need to know what the functional requirement is, and you need to know what the quality requirement is. So: x percent of my testers need to be able to do x, y, and z how many times, encountering a maximum of two low-priority bugs and no blocker bugs per week? That's a good exit condition. There might be more to it, but the point is that you need to know when you're ready.

You need to establish ahead of time what ready means to you. And then you might change it. You might realize that it was too high a mark, but you need to know where you want the mark to be.
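The population / functional / quality structure described above can be written down as an explicit check. The specific numbers below (80% of testers, three completions per week, two low-priority bugs, zero blockers) are hypothetical examples, not a recommended standard:

```python
# Hypothetical exit condition: 80% of active testers complete the core
# workflow at least 3 times a week, with at most 2 low-priority bugs
# and zero blocker bugs reported per week.
def beta_exit_ready(testers, low_prio_bugs_week, blocker_bugs_week):
    """testers: list of weekly core-workflow completion counts, one per tester."""
    passing = sum(1 for completions in testers if completions >= 3)
    functional_ok = passing / len(testers) >= 0.80   # the population + functional requirement
    quality_ok = low_prio_bugs_week <= 2 and blocker_bugs_week == 0  # the quality requirement
    return functional_ok and quality_ok
```

The value of writing it down like this is that "ready" stops being a feeling and becomes a number you can re-evaluate every week of the beta.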

The next one is how I set up my pipeline at DataPad. I found marketing on Twitter for developer products extremely effective. It's also very easy to target people very granularly. I was blogging on something called Ghost, which is essentially like WordPress but in Node.js. It's very lightweight, very fast, and supports Markdown natively.

And LinkedIn. As much as I'm not a fan of the product itself, everybody's there, and everybody there tells you what they do for a living, so it's a great way to recruit. And then I had a landing page with a few ways to optimize things like click-through rate, dwell time, and so on. Everybody that got through the subscription form would be sent to MailChimp.

MailChimp did two things. First of all, it confirmed the emails, so we were making sure they were real. Then it would send a follow-up message with the survey. At the beginning the survey was a Google form, then it became a Typeform. I strongly encourage you to use Typeform. It's the best survey tool that I've ever used. It works great on mobile, and it's cheap or free, so there's no reason not to use it.

The results would go to a Google spreadsheet, and there I would do my cohort design: figure out who signed the NDA, who gave these three yeses, who's using the tools that I'm trying to test for, and so on. Depending on that, I'd decide who works at companies that I want to engage sooner or later, or who's a student. Actually, I forgot to say that: use students in the first cohort.

They have tons of time, and they are very tolerant of bugs. If you lose them, you were never going to sell them anything anyway, so they're perfect. Then I would send them to the NDA and documentation in Google Docs, and there were links in there that would send them to the community on G Plus. From G Plus, there were links to the application.
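The spreadsheet triage described above amounts to a simple ordering rule. A sketch of that rule, where the field names and cohort numbers are hypothetical illustrations of the talk's heuristics rather than DataPad's actual process:

```python
# Hypothetical cohort assignment following the talk's heuristics:
# students and low-stakes users go first, qualified users next, and
# high-value prospective customers are held back for later cohorts.
def assign_cohort(signup):
    """signup: dict of hypothetical fields pulled from the qualification survey."""
    if not signup.get("nda_signed"):
        return None         # can't onboard anyone without a signed NDA
    if signup.get("high_value"):
        return 3            # landmark prospects should see a polished beta
    if signup.get("student"):
        return 1            # tolerant of bugs, plenty of time, nothing to lose
    return 2                # everyone else lands in the middle
```

Even a throwaway rule like this keeps you from accidentally burning your best prospects on a buggy first cohort.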

Once they were on board in the application, I would use Mixpanel to track what they were doing. I would use Hangouts and ScreenFlow to record interviews with them and get them to show me how they were using the product. Then I would make highlights and dig into the problems.

And then tickets and problems were going to both Freshdesk and Sentry. Honorable mention to Sentry. If you don't have some sort of exception-tracking in your software, it can be Raygun, Sentry, it can be Rollbar. Do it now. That changed the way we develop software, and that changed the way we're running betas.

You will realize that most of the bugs in your application will never be reported by users. Users will simply experience something bad, give up, and not tell you anything. The majority of bugs that we fixed in DataPad during the beta were actually reported by Sentry, not by users. So go ahead, install it; you will thank me.

It's a very hard case to make for teams that don't use that kind of software yet, because it's always seen as a big hassle, a low-ROI thing to do. But it's absolutely fundamental, because it exposes your product and QA people to things that usually only one engineer on your team sees, because they look at the logs and you don't have them.

It keeps statistics about which bugs are happening most often to which users, and which ones are coming back, the regressions. It's super useful. I cannot emphasize this enough: if you don't have exception tracking, install it now.
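The core idea behind tools like Sentry, Rollbar, or Raygun, grouping errors by a fingerprint and counting how often each one recurs, can be illustrated with a toy sketch. This is not any real SDK's API, just a few lines showing why recurring bugs surface even when no user reports them:

```python
import traceback
from collections import Counter

# Toy version of what exception trackers do under the hood: group
# errors by a fingerprint (exception type + code location) and count
# occurrences, so the most frequent bugs float to the top.
error_counts = Counter()

def capture_exception(exc):
    tb = exc.__traceback__
    frame = traceback.extract_tb(tb)[-1] if tb else None
    fingerprint = (type(exc).__name__,
                   frame.filename if frame else "?",
                   frame.lineno if frame else 0)
    error_counts[fingerprint] += 1
    return fingerprint

# Simulate an unreported failure somewhere in the product.
try:
    {}["missing_key"]
except KeyError as e:
    capture_exception(e)
```

A real service adds release tagging, per-user statistics, and regression detection on top of this, which is exactly the data the talk says your users will never volunteer.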

Don't use beta as free QA. It's tempting, but it's a very expensive way to get free QA. You're really not going to get a lot of value out of it.

If you expose users to too many bugs, they will walk away and you will not get any feedback.

No clear exit condition: If you don't know what you want to get out of your beta, the program essentially is going to be useless. It's just going to be time, you are just going to be there for eight weeks and say, "Okay, and now we can launch." How do you know you're ready? You don't know. Entering beta too early is a recipe to waste a lot of time and effort.

You're going to be recruiting people. It's going to take a lot of work. You're going to vet them, you're going to put them in an application that will make them run away in a few minutes. You don't want to do that, so you want to make sure that you get to beta when you have a reasonable expectation that people can actually do something useful with that.

If the application, by the time you enter beta, doesn't do anything useful, you have different problems, and there's no point for you to be in beta. Don't be in permanent beta, Gmail-style, for two reasons. It discourages your users from taking you seriously, and if you want them to report something, it's the best way to stop them from doing that. Because what you do is give them the impression that no matter what they do, it doesn't matter. You want to make sure that there's a clear beginning and a clear end, and your users should know them.

Not enough time allocated for the beta; I've seen that quite a lot. What happens is you spend all this time setting up your program, and then, even if something bad happens during the beta, there's no time to turn the ship around. This, by the way, is the cause of another problem, which is an irrelevant beta.

A good beta program requires at least six to eight weeks. And it's based on cycle time, so let's say that your application provides value in three months. Then you need to give time to your application to provide value. You cannot beta something without actually making it do the job it's supposed to do.

If you have an application that provides value in a year, then your beta will last a year. And you have to account for that in your business plan. If you change a core feature, the program needs to start from scratch, because at that point the feedback you received is irrelevant. Is this feedback still relevant to the application I have now? No. You changed documentation. You changed feature set. You changed the value proposition, so you have to retest.

The "failure is not an option" mode. I've been involved in one of those. And so the point was, "We're going to launch anyways, so what's the point of collecting feedback? We don't care. What's the point of recruiting people? We might as well just go ahead and start selling." In that case, if the only thing you are doing is QA, you might as well just do internal. It's a lot easier, and it's a lot faster to run.

Beta as a customer acquisition strategy, that's important. It's never going to work. The worst possible time to sell your product is during the beta. It's hard to do, so what you want to do is to make sure that you are ready to do that.

You possibly have a few great references that you can use for customer acquisition, but you don't want to do it too soon. Frankly, the people you're selling to, they don't even know if you're going to be alive for very long. So it's the most wasteful point in time for you to do sales. Don't try to use the beta program for selling. It's going to be a very long-term activity. Just go ahead and use the beta for what it needs to be. That's it.


Team Beta Process

Actually, DataPad was a collaboration tool. We had what I would call captains: people getting into the beta and bringing their own teams. There was always one person who would represent their team, so that doesn't really change much. It's unrealistic to expect that an entire team is going to provide feedback, but you can find a highly engaged member of that team, and that person will essentially be your channel to reach the other ones.

Then, for example, we would have "office hours." Every week, we would do a webinar. It was essentially a way to talk to people and get them to show up if they had questions for us on how to use the product.

We'd train them. What happens is that, usually, two or three people would show up. It was not like a huge success, but somebody would come and ask questions. Sometimes, our captain, our champion, would bring somebody from their own team to ask questions and to have discussions because they were starting to get into the product. They wanted to understand how to use it better.

All I've said essentially applies to teams. You treat them either as a collection of individuals, or one person signs an NDA for their own company, and at that point you deal with that person, mostly. Sometimes it's good to have a pre-engagement document that tries to capture: What is your situation? What are you going to try to do with our beta? What does success look like for you?

Essentially, it's the same kind of document that you would sign for a POC, except that you use it to figure out if the beta is successful. That applies to a very complicated, heavyweight, enterprise-level infrastructure product. If you're not in that situation, it's probably overkill, but I happen to have been involved in some of that.