November 4, 2016
My name's Chelsea. I'll be speaking today about making performance a design consideration, and particularly about how this applies to developer-focused products.
I'm a product designer and developer at Sourcegraph. We're building a better way to read and understand code. For the past seven months I've been focusing on designing user experiences for developers. And while we still have a long way to go, I wanted to share some of the things I've learned about considering performance as a part of design.
Why should designers care about performance? Isn't that something that mostly involves engineering? And what does performance actually have to do with design? To answer this question, we really need to ask ourselves what "user experience" means.
According to Don Norman and Jakob Nielsen, two of the foremost user researchers in the profession, user experience involves all aspects of the end user's interaction with the company, its services, and its products. As a designer at a developer-focused company, user experience design should really be at the forefront of my work.
When was the last time you used a slow or sluggish website or tool and thought, "Whoa, this is really great"?
"This is just the best. This is my favorite part of using the internet. I'm having the best day because I get to wait. Waiting's the best. Waiting is an awesome experience." I'm guessing the answer is never. This is where performance is the responsibility of both design and engineering. It has a direct impact on the user experience of a customer. And there's even data to back this up.
In 2008, the Aberdeen Group surveyed over 160 organizations about performance, customer satisfaction and revenue. They found that the average impact of a one-second delay was a seven percent drop in conversion. That was in 2008. So, in the eight years since, I doubt our customers have gotten any more patient.
Google discovered speed was really integral to UX as far back as the early 2000s. In a Web 2.0 talk in 2006, Marissa Mayer discussed an experiment Google ran to increase the number of results per search results page. The extra results added about 400 milliseconds of load time, and that led to a 20% drop in conversion and revenue.
There was some theorizing about whether users were paralyzed by choice, but once Google realized the page was taking another 400 milliseconds to load, it started experimenting with speed in all of its other products. Google Maps in particular saw a boost in usage and traffic after Google sped up page load time and reduced the page size.
Even at Amazon, they ran an experiment where they artificially delayed load time in hundred-millisecond increments, and they found that with each increment, sales decreased by about 1%. So this really spans different types of companies and different types of products.
The main point is that improving performance improves user experience, and this is something that designers as well as developers should be considering.
As users, we expect things to be fast and responsive. Slow load times give users time to think about whether or not this product is reliable: "If it's this slow, can I really trust this company to know what they're doing? Maybe not. Maybe I should just bail right now." And this holds especially true for developers.
Slow response times impede a user's ability to learn and to think. Effective learning is all about doing or exploring something and getting feedback on whether or not that was correct, immediately.
For a developer, every second spent waiting can mean minutes of lost productivity. If a wait breaks my flow or train of thought, each second of waiting can cost far more than a second, because I have to gather my wandering thoughts and stitch them back together. At Sourcegraph, we talk a lot about keeping developers in flow.
Being in flow means someone is fully immersed in an activity, and they find it engaging and challenging.
Continuous progress, feedback, and focus are key to keeping someone in flow. Earlier this year at Sourcegraph, we were using Asana for tracking bugs, projects, and other things we were doing at the company. Many of us found that we were spending a pretty significant part of our days looking at this loading screen. Has anyone seen this before? Does it make you feel really awesome?
This would roll for anywhere between five and maybe 30 seconds before Asana would load. I found myself wandering off to other tasks while it was loading and getting distracted. I'd do something else for 10 minutes and then remember I was trying to use Asana to focus on my most important tasks, but I couldn't see them until it loaded.
That's not 30 seconds of lost productivity; that's 10 minutes, sometimes more. Eventually, we broke. When the team was asked if they wanted to move off Asana, there was a resounding yes. Everybody piped up about how much they hated staring at this screen.
We were using a tool that was breaking our flow multiple times a day, so we moved off of Asana. The whole team agreed that this tool made us feel less productive. That built-up frustration convinced us that the experience of this productivity app was a net loss for our productivity.
If good performance is integral to design, what does good performance actually mean? What difference in performance can users actually perceive? At the end of the day, it's the perception of speed that matters the most when prioritizing what to optimize and what to change.
You might not see much change in behavior from a five-millisecond improvement, but a hundred milliseconds can make all of the difference.
In a 1993 book called Usability Engineering, researcher Jakob Nielsen identified three primary benchmarks for response times. To a user, a delay of less than a hundred milliseconds will feel pretty instant. They pretty much won't notice the gap. A delay of one second will be noticeably sluggish but kind of okay. And Nielsen thought that a delay of 10 seconds or more would cause the user's attention to drift. Remember, this is 1993.
A more recent article from Treehouse speculates that by today's internet benchmarks, this is probably more like five seconds. So it only takes five seconds for a user to go from engaging with your product to doing something else: looking around, maybe starting another task, maybe just leaving.
Remember, when we're talking about developers, that distraction time could mean hours of lost productivity. Getting back into flow once your focus is broken is not that easy. So yes, we should make things faster. But what if the user actually has to wait?
What do we do if the user really, really has to wait? First, I think we should ask why.
Before we start thinking of clever ways to make our users more patient, we should ask why. As designers, we're pretty often asked to design really specific things: "Make a loader for this, because it takes five seconds to load."
When performance is such an important part of user experience, we should ask "why" before we start on tasks that specific. Designers should design for less waiting, not design for waiting first. Work with engineering to brainstorm ways to improve actual performance before designing for wait times. Make speed part of your design specs. Waiting should be the exception, not the rule.
Okay, but what if a user really, really has to wait? Once you've done all that, and there are still response times that miss your desired benchmarks, remember that it's really the perception of speed that matters most.
In the absence of absolute speed, there are things we can do to make things appear fast, keep a user's attention, and manage their expectations.
As a rule, the longer a user should be expected to wait, the more information the application should provide. How many of you have ever had your flight delayed? Nobody? Okay, all right. I was like, airlines have really gotten way better than I thought.
So, if your flight's delayed by five minutes, what do you do? Usually, just wait, right? I'll wait. For up to 20 minutes, we probably still just wait, maybe get a little fidgety, look at our phones a lot. Thirty minutes, I kind of want to know more details about what's going on, why it's taking so long.
At an hour, I'm like, "Okay, really, what's happening? Why isn't anyone telling me what's happening?" At two hours, you start settling in: maybe you start sleeping, maybe you make friends with everybody at the airport, maybe you start plotting revenge against the airline.
Despite not being able to do anything about it, you want to know exactly what's going on.
It actually doesn't make a difference if they tell you what's happening. Nothing's going to change, you're not going to be able to get on your flight. The same principle applies to load times. The additional information makes people feel better.
So, what do we do with this? Remember, if a response time is less than a hundred milliseconds, it feels instant. Up to one second, it'll feel sluggish, but a loader will only be a distracting flash of information. It's really best to focus on going the extra mile to make these interactions seem instant if they're between a hundred milliseconds and a second.
A spinner, a loader, can keep a user's attention while the content loads. For short loads, simpler is better. The user won't have much time to read any text, so text explanations would probably just be a distraction. Something like this works really well for loading content or submitting requests and forms.
When we get to three-to-five seconds, the user will want some expectation of how much longer they will have to wait. They want to know when they're going to be done. A progress bar can give them a good idea of how much longer a task will take, or how much more the system has to do.
If the progress bar is really slow or stops for an extended period of time, it can cause their attention to drift. So keep the progress animation fluid. This is really good, again, for loading content that might take a little longer: downloading files, buffering, etc.
If something takes more than five seconds, the user will not only want to know how much longer it will take, but they want to know more about what's going on. So give as much detail as you can without writing paragraphs. Things like expected wait time, "progress so far," and what the system is actually doing, will ease the user's mind, even though they know they can't do much about it.
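These tiers can be summed up in a small sketch. The function name and return values here are illustrative, not a real API; the thresholds follow the benchmarks discussed above.

```javascript
// Map an expected wait to the feedback tier described above.
function loadingIndicatorFor(expectedMs) {
  if (expectedMs < 1000) return "none";          // aim to feel instant; a loader would just flash
  if (expectedMs < 3000) return "spinner";       // simple, holds attention briefly
  if (expectedMs < 5000) return "progress-bar";  // show how much longer it will take
  return "progress-bar-with-status";             // also say what the system is doing
}
```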
Finally, tell the truth. Nothing's more frustrating than being lied to. While progress bars can be really clever, make sure you never tell the user something will take five seconds when it actually lasts 30. You might get another couple of seconds of patience out of them, but generally they'll end up pretty angry at you.
If you give a user a time expectation, let's say one minute, but it actually takes five minutes, you're eroding their trust in the product. In this case, he said five minutes, but it really took 20 years. That would be pretty bad. The next time your application gives a time estimate, they might not trust it. The information that holds a user's attention should ease their mind, not make them more anxious later.
What's so bad about this loading screen? Well, it doesn't do a great job of managing my expectations. The checklists are cute, but they're pretty much nonsense.
Delight should really be a sugar on top of a really good experience, not cover a bad one.
After waiting for five seconds, I have no idea when the loading will finish or what it's doing. I really have no expectation of how long I'll be waiting or why. I just get really mad that I'm waiting. And after this happens five or 10 times, I get really, really mad that I'm waiting, and eventually get pretty frustrated.
Asana's not the only one with some questionable loading experiences, so let's take a look at Slack. These loading screens sometimes take a few seconds, but sometimes they can take more than 10 seconds.
While they're really amusing, and you can do fun things like add custom messages, which I think actually does a lot to ease a user's anxiety, they also don't give me a good expectation of how long I'll be waiting. I don't know how long I'm going to be here.
If it gets stuck, I might not know. So it doesn't manage my expectations very well, though it might distract a little bit. It's probably the one thing I've ever really been frustrated with Slack about. Because, for all of IRC's flaws, how many of you have ever spent a huge amount of time waiting for IRC to load? Probably not.
Let's not just look outward. Here's a screenshot from Sourcegraph's view-references action. The word "loading" isn't a focal point, so it doesn't really grab your attention as the place to look for information. But it's also not giving you much information in the first place.
This action can sometimes take five seconds or so. So while we're designing for better loading states like this one, we've also outlined a series of response time benchmarks for "hover," "jump to definitions," and other major interactions.
Every week we report on and track progress on these benchmarks, so we're simultaneously making things faster and attempting to design a better way to manage user expectations. This two-pronged approach really gives motivation on both sides, to create a good experience.
Now let's look at some good, or kind of different, examples of what people are doing to manage loading expectations. Medium progressively loads images as placeholders, so that the page render isn't blocked by large images. But the user has a good idea of what's there. The user can choose to wait for the whole image to load, or move on.
How do they do this? First, they load a tiny version of the image: a small, low-quality JPEG, maybe 30 pixels wide and one to two kilobytes at most. Then they scale it up, creating a blurred-image effect. The low resolution gives me enough information to decide whether or not I want to wait.
After the full image is loaded, it's faded in over the blurred image, creating a pretty smooth transition. The blurred image, while it doesn't have any text, gives me enough information to manage my expectations. I know that an image is going to be there, I know roughly what it's going to look like, and I know how big it is.
I can decide whether or not I want to sit and wait for it to load, or just keep reading the article. I think this is a really creative way to show a loader or manage loading expectations and increase the appearance of speed while still giving the user information about what's going to happen next.
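The blur-up approach above can be sketched roughly as follows. `blurUpMarkup` is a hypothetical helper, not Medium's actual code: it builds placeholder markup where the tiny preview is stretched to the final size and blurred with CSS, and the full image fades in over it once it finishes loading.

```javascript
// Build blur-up placeholder markup (illustrative sketch).
function blurUpMarkup(tinySrc, fullSrc, width, height) {
  return (
    `<div style="position:relative;width:${width}px;height:${height}px">` +
      // Tiny (~30px-wide, 1-2 KB) preview, scaled up and blurred.
      `<img src="${tinySrc}" style="width:100%;height:100%;filter:blur(20px)">` +
      // Full image starts transparent and fades in when it loads.
      `<img src="${fullSrc}" onload="this.style.opacity=1" ` +
        `style="position:absolute;top:0;left:0;width:100%;height:100%;` +
        `opacity:0;transition:opacity 0.3s">` +
    `</div>`
  );
}
```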
How about browser loading? It's not always clear up front how long a page will take to load. Safari and Chrome take two different approaches. Chrome has your typical spinner, while Safari has a progress bar. I find Safari's design a little bit better.
While the progress bar can sometimes stagnate, it gives me more information.
I can estimate how long a page will take to load, I know how much has loaded so far, and even if it freezes, it gives me more information than Chrome's spinner, which just spins until it's done spinning, so I don't always know what's going to happen.
So, a few takeaways to sum everything up. Fast response times mean a lot to users, especially to developers. Breaking a developer's flow can mean hours in lost time and productivity; they're much more likely to abandon a sluggish product because they try to use it every day.
If a user needs to wait, manage their expectations with useful information. Even if they can't do anything about it, it'll help them ease their mind. And, I think this is the most important thing, performance is both a design and an engineering concern, and it should be treated as such.
Try working response times into product and design specifications, collaborate with engineers on response time expectations, and make sure everybody knows where requests are being made in the product, both during design and at the end of the engineering process. Designers should be aware of where load times may cause lags. Thank you.
Yeah, so maybe a more accurate summation is, don't lie a lot. Something I think that can deteriorate trust over time is, if you see that progress bar stagnate at a similar spot, if you have a lot of progress bars in your app and they all stagnate at the same spot, eventually someone will notice and no longer trust.
Basically they're waiting for it to get to that spot, and then they're like, "Now I don't know what's going to happen," when they finally understand that it's not giving them as much information as they would like. I think this can differ between audiences, between customers, between products.
If you only ever have one progress bar, if they don't see it very often, people might not ever notice, and they'll still trust you.
The big danger is when you have a lot of loading indicators, and they all sort of lie in a way that's predictable.
I guess one way to lie, a less detectable way, might be to randomize where the bar stagnates. But I do think that if you give specific information like, "This will take four seconds," and it consistently takes 30 seconds, that will deteriorate trust much more quickly. Did that answer your question?
We have quite a few of those at Sourcegraph, and we're still sort of getting our loading times in place.
One of the things we've discussed is showing actual progress. A progress bar might not be an indication of time; if you don't know how long something will take, you can instead have some messaging around what is actually happening.
For us, sometimes we have this step where we set up a workspace. A workspace is sort of a repository, plus all the repositories connected to it and all the links between those repositories. We can say, "We're setting up links for 50 out of 100 repositories," and actually show the progress moving forward.
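That kind of count-based messaging can be sketched as a tiny helper. The wording and function name here are made up for illustration; the point is to report real units of work instead of guessing at time.

```javascript
// Report progress in units of work, not time (illustrative sketch).
function setupProgressMessage(done, total) {
  const pct = Math.round((done / total) * 100);
  return `Setting up links: ${done} of ${total} repositories (${pct}%)`;
}
```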
I think as long as there's real progress being made, you have actual information to show people. For most loading times, something is happening that you know about, right? Either there's a certain amount of data being loaded or a certain number of processes happening; you can give some feedback around what that is.
If you don't want to give specifics, if the specific information is too technical, really focus on what the user cares about.
For us, the thing we're delivering is this browsing experience with code, and what people care about is how connected the code is to other pieces of code. Those are the things they care about seeing loaded, and the things they might put more effort into waiting for, because it's what they want.
Messaging the progress you've made on giving the user what they want is a really good way of kind of managing that, even if you don't know the time. Really focusing on the value can be good. And then another thing that I've seen work really well in the past is just messaging things around how much work the system is doing.
There's an experiment that's become pretty popular now, that Dan Ariely did when he was working with a travel search company, maybe Kayak or something similar. They did an experiment where the search results were actually pretty much instant; they could have just shown them right away.
They actually slowed down the appearance of the search results but added a sort of indicator of, like, "Searching this site," "Searching that site," so it showed that the system was working through all these different travel sites to find results. And they actually found better conversion when they slowed it down but cycled through a bunch of different things.
The theory behind that is that when you see that a system is working for you, you trust it more.
You trust that something is working to find the right thing for you. So I think that works specifically when you have someone searching for something, and they want it to be very curated for them. They want to know that someone's working to find something just for them.
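The pacing in that experiment could be sketched like this: even if results are already available, reveal one status line per source, spaced a few hundred milliseconds apart, so the user sees the work. The site names, timing, and function name are illustrative, not from the actual experiment.

```javascript
// Build a "labor illusion" reveal schedule (illustrative sketch):
// one status line per source, to be shown at staggered times by the UI.
function laborIllusionSchedule(sites, stepMs = 400) {
  return sites.map((site, i) => ({
    showAt: i * stepMs,               // when the UI should display this line
    text: `Searching ${site}...`,
  }));
}
```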
That can be one way of showing value to users, if it really is taking a long time, that you are looking for something specifically for them, and you can message around that. I think for developers, developers can really see through those things. I think developers really value speed. When I'm developing, I really value speed, I value my time.
I think there's a limit to how much we can do those kinds of things when we're catering to developers, because developers manage their time carefully and value it highly, and they don't really want tricks. They want to get to what they're doing quickly.
Yeah, so, I remember when NPM updated their loader, no one would stop talking about it. Everyone was so excited about the NPM loader, and I was just like, yeah, even CLI people appreciate design.
I think there are a lot of things, like what NPM did, that we can do in a CLI. A lot of the same principles apply. There are certain animations we can do, but in a CLI, I actually think it's more about showing console messages that tell the developer what is actually happening, in the same way that an NPM install walks through each step as it runs.
I think that's super, super helpful. And the same thing applies, it's like, "An operation is taking a while. We're telling you exactly what's happening."
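That kind of step-by-step CLI output can be sketched as a small runner. The step names and function name here are made up: the idea is just to print one line per step so the developer sees exactly what's happening and has a log to trace later.

```javascript
// Print one line per step of a CLI operation (illustrative sketch).
function runSteps(steps, log = console.log) {
  steps.forEach((step, i) => {
    log(`[${i + 1}/${steps.length}] ${step.name}`);
    step.run();
  });
}
```

For example, `runSteps([{ name: "resolving dependencies", run: () => {} }])` would print `[1/1] resolving dependencies` before doing the work.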
One thing that I think is interesting about that is even if it floods your console, people seem to be cool with it, which is a little different than, I think, consumer products. If you were to flood them with that much information, they would really hate it. But with CLIs, I think people tend to err on the side of detail, and that, I think, goes really well for developers.
They want to know exactly what's happening, also because they'll have a log of what happened, so if something goes wrong, they can go back, debug, and really trace it. I think the same principles apply, except with even more information than you'd show in a consumer product. Does that make sense?