March 22, 2016
Bias Busting: It’s About Tracking Data and Being Deliberate
If you don't know who I am, I've been a long-time blogger at Coding Horror since 2004. And in 2008, we founded Stack Overflow. I haven't been at Stack Overflow since 2012, and my previous Heavybit talk was about Stack Overflow. So if you're interested in Stack Overflow, definitely watch the other talk. It was very detailed, and it came out really well, I thought. But today we're going to talk about a project I started in 2013, Discourse, and the official name of the company is, in fact, Civilized Discourse Construction Kit Incorporated. We chose this name for a couple of different reasons; one is that it's an aspirational name, which I'll get into.
But also because I was a huge admirer of the construction kit genre of games early on in the days of computers. There was a music construction kit, RPG construction kit. I always loved these games because you could create things with them. And that's what I wanted to do with Discourse.
So, Discourse is essentially forum software where we try to do a couple of other things. But that's an accurate description of what Discourse is and what it does. And the reason we chose the name Discourse was it's kind of a 10-dollar word, it's a very fancy word for discussion. But, again, it's aspirational.
I wanted to figure out how Discourse could actually help people talk to other people online without it degenerating into the "don't read the comments" experience everybody associates with online discussion. How can we actually build the software in such a way that we don't lose track of who we are as human beings?
And, to me, that's a really interesting problem, and it's very material. As we got deeper into the election cycle and what's happening in the world, I realized this is tapping into something that's actually happening to us.
We're becoming increasingly divided for reasons that I don't fully understand. As a society, there are these deep rifts. There are rich people and there are poor people, and that rift is getting hugely wide. There are people who believe X and people who believe Y, and that rift is getting really, really wide. And if you think about the history of software, of what we consider killer applications, the first killer app was VisiCalc and Lotus 1-2-3, and that's all about numbers, that's about data.
Computers are good at data and numbers. And this sold computers. This sold IBMs, this sold Apple II's. You would pay thousands of dollars for the computer just to get access to the software because it was so transformative in what it let you do with numbers. And I would posit that the next big killer app was really word processing, kind of in the Windows, the more GUI era. Whatever your favorite word processor was, Word I think was one of the bigger ones. And then as you get into the internet era, things start to change.
I think the web browser is really the next killer app, but the web browser doesn't do anything. It's just a container for other stuff so you could browse the web with it. And, of course, the web became an application in itself. And the interesting transition here to the next killer app is really Facebook, where you're not dealing with units of data anymore, you're dealing with people. You're dealing with software that controls how people interact with each other, which is such a transformative change in how software works.
It's not about the numbers, it's not about the data, it's about how the software makes people react to each other and what happens as a result of that. And then, along that same axis starting in 2005 to 2010 and today, we think of personal computing as everybody has a computer on their desk, or everybody has a computer in their home. But that wasn't really radical enough.
What was funny about the Microsoft mission statement "a computer in every home" was that it didn't dream big enough. What we have now is a computer in everybody's pocket, which is far more transformative than a computer on every desk.
That's really personal. If you think about personal computing, nothing's more personal than a device you carry with you. I have mine right now. I can't not have it because of my kids.
And that's the world that we live in, where personal computing is about 24/7 connectivity to every human being on the planet, all the time. That's a big, big change in society, and I think we haven't really absorbed that change yet; we're still working on this. This is one of my favorite Marc Andreessen quotes. He eventually deleted all his tweets. I think he got too controversial for Twitter, I don't understand why. So this doesn't exist anymore, but I swear that he did post this.
It's still one of my favorite things people have said on Twitter because I was like, "wow, the nerds won". We won so definitively. We won everything, so much so that we don't even know what to do with what we won. We've changed the world accidentally such that everybody is a computer geek and everybody's staring at their phone all day long. So, you know, the bad news is we're like the dog that caught the bus. What do you do now?
So that's the good news. The good news is that we're all unbelievably connected, and then that turns out to also be the bad news, that we're all like infinitely connected to each other, because we've never really had that before, and it changes the way you interact with people. So getting back to Discourse, one of the things I wanted to address was that forum software was thinking about rules of how you post on the forum, and I was thinking that's the least interesting thing that forum software does. That is, tell you how to make something bold.
That's the least interesting thing that software could tell you. Some interesting things it could tell you is like how to get along with other people, how not to get angry at this other person. How not to be angry in the first place with these other people that are talking to you. How do you get people to come and talk to you, right? These are interesting topics. These are interesting things that the software can help you with. But none of the forum software is dealing with that.
It was worried about mechanical software stuff that didn't even really matter. So, when we started Discourse, my central idea was, can software get people to get along with each other better than they did in the absence of the software? Or to say it another way can we make forums great again? So the first thing you would do when looking at this problem is say, "okay, all we need is a good set of rules, because software developers love rules". And that's what Stack Overflow was.
It was a giant set of rules, and a game that people would play within its rule set. It was like playing Risk, or Monopoly, or name a board game that's fairly complicated. This is actually my local McDonald's drive-thru. I took this picture, and they have rules. You go up and there are certain rules about how you can interact here. And these are not unexpected rules. These are not strange rules. Most people wouldn't have a problem with these rules.
And the other thing that I love about rules like this is that every one of these rules, you know, is there for a reason. If you see a "no elephants" sign, it's because somebody tried to bring an elephant. You don't just randomly think, "no elephants". So all these rules reflect things that are actually happening. And we did that. So if you look at Discourse, we spent a lot of time when we started the project in 2012 coming up with ways to really thoughtfully and briefly say, "here are some rules for civilized discourse, for getting along with other people in a discussion context."
And it's great. It works really well. It's very clear, but rules have their limits. First of all, how do you get people to actually read this? Nobody really wants to read all your BS when they go to your site, right? They just want to go and get something done. They're not really there to learn about the intricacies of the rules that you have and your social engagements, nobody cares. They just want to do what they want to do.
So here are some of the ways we did this. Here's a badge called, "Read the Guidelines". So if you actually go to that page, I can't guarantee that you read it, I don't track your eyes or anything, but if you scroll all the way down to the bottom, then, "hey, you get a badge". So we're encouraging you to read the guidelines because they're there for a reason and they're good guidelines.
They make sense, they're reasonable, nobody would disagree with these guidelines. You can refer to them later if you have an argument or things aren't working out. The next thing we did later was we put an alert on, this is, essentially, the hamburger menu that's sort of the pulldown menu of all the things you can do with the software that are sort of slightly off the beaten path.
We put little new indicators, like "new". If you read that, it's attracting attention to it; it's like, "hey, can you please read this?" We want you to read it. And that stuff worked. I mean, people go there and they read it. And this is from Airbnb. I recently had to re-sign up for Airbnb because I lost my account, I don't actually know what happened to it. But as I signed up again, I had to accept this EULA of, "Are you going to be racist? Are you going to be sexist? Are you going to be prejudiced on our site?"
And I had to say, "No, I'm not going to do that stuff," right? At step three of signing up, after you validate your email and all that stuff, you're confirming that you're not going to be a jerk to other human beings on their site. So, I think that's important and I think that works, but you have to frame it in the context of things you're going to refer to later, what I call aspirational guidelines.
So when you later find out that people were behaving in kind of racist ways on your platform, "Well, look, this isn't what we signed up for. This isn't what you actually agreed to do, and this isn't what we stand for as a site." And I think that's incredibly important.
You have to have those rules because if no one can say what it is you stand for, then you don't stand for anything. Anything is now fair game, technically.
So you need it for that reason alone. But there is a lot of social science that says if you forced this stuff on people it won't work. I'm not saying Airbnb forced that on me, in fact, I agree with it and I had no problem with it, but if you get to say mandatory diversity training or sort of singling out managers and saying "you're responsible for fixing whatever bias problem we have" people will knee-jerk, reflexively reject that and actually do the opposite just to prove to you that they're their own person.
This doesn't seem intuitive, "why would they do that?" Well, you're telling them what to do. And people don't like to be told what to do. They just really don't. So that stuff can backfire, it's really interesting. Even stuff like just reading a brochure on diversity. If you feel like you were forced to read the brochure, this is, again, science, they measured this. People who felt like they were forced had strengthened bias against black people. Which is crazy.
But if they felt like, "Oh, I chose to read this. I wanted to read this," then they had reduced bias. So you have to be really careful how you do these rule sets. You can't force them on people. They can exist and they're aspirational, I think they're great things, but you have to draw the line. You can lead the horse to water, but will the horse drink? So it's a subtle thing. You can't really force that on people. But they're important for aspirational reasons, so they still need to exist.
So what do you do in that case? If you can't force it on people. You can't say, you must do this, this is mandatory. If you don't do this, we'll hate you and we'll force you out of our site. There is this fantastic book by Dan Ariely called The Honest Truth About Dishonesty, and I recommend this book to everyone in this room, it's an amazing book, and one of the primary things you learn from this is that, first of all, that everybody lies about everything to some degree, which is, I think, normal.
Pure honesty is crazy at some level. Nobody actually wants you to be completely honest with them all the time.
There is a certain amount of normal social lubrication of lying that we do. And there are rules about how people do this and when they do this, and the only way to change that was through just-in-time nudges.
In other words, at the time people are tempted to engage in the problematic behavior, right then, you step in and say, "hey, look, try to remember to be your best self in this troubling scenario you're about to enter." And that really, really worked, in a way that ethics training years ago totally would not work. So that's the key lesson: the incredible power of knowing the right time to interject and say, "hey look, let's slow down, let's think about this, let's take this in the right direction, and let's be the people that we want to be rather than the people our emotions are letting us be."
And, you know, no illustration, I think, is better for this than the classic XKCD 386. I know the number by heart because this is a lot of people's experience on the internet, and, in fact, I was still finalizing this talk today and I was sitting there arguing with people on Hacker News. I was literally doing this today: "someone is wrong on the internet."
So it's a very powerful feeling, that you need to interject, you need to say the things you need to say to counteract something that's happening, whether it's some minor thing or some big thing; it's mostly just emotional. So what we do at Discourse is, at the moment you start typing, not during sign up, we prompt you to read the guidelines.
When you start typing and you say, "hey, I will reply to this post, I have pushed the reply button, and I have now started typing letters". At that point, we're going to say, "hey, you're new here, that's awesome. Just remember, we're here to argue about ideas, not people, right? And we're here to be kind to each other." That's the overarching rule of discussion, is this a kind thing to say?
It may be necessary, but is it kind? And, you know, are you actually improving conversation? Just real simple guidelines, you know, three bullet points, right? That's all we're saying, and then you can read the rest of the guidelines, which is that document I already showed you guys up front. The big FAQ of guidelines. So that's the goal, to interject at the time of temptation. The minute you start typing, that's when you want to have that interaction.
So there are many other ways this can work. One thing that I was very strongly motivated to implement, and this is in the current version of Discourse, we just actually did this, I found that I would reply to the same person over and over because I was basically arguing with that person. So, that person replied, then I would reply, then I would reply to that person, that person would reply to me, and we were just two people arguing.
So this is what I call the get-a-room reminder of letting you know that, "hey, it's a conversation with a lot of diverse voices, but you're only talking to one person in that conversation. Is that a good conversation?" You know, probably not. I want you to think about this. I want you to think, "Jeez, I'm replying to the same person over and over, that's weird." Now, this only happens once, we're not going to nag you all the time in the discussion.
But in that discussion, assuming there are enough people for this to matter, if it's just you and that person, it's weird that you're only responding to this one person. Are they pushing your buttons? Is there something not good about that interaction that you should think about?
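A get-a-room check like the one described could be sketched as follows, assuming we have the topic's replies as (author, replied-to) pairs; the thresholds here are made up for illustration, not Discourse's real numbers.

```python
def should_show_get_a_room(author: str,
                           replies: list[tuple[str, str]],
                           min_participants: int = 5,
                           streak: int = 3) -> bool:
    """replies: (author, replied_to) pairs in chronological order.

    Fires when `author` has directed their last `streak` replies in
    this topic at the same single person, but only in topics big
    enough that pairing off actually matters."""
    participants = {a for a, _ in replies} | {t for _, t in replies}
    if len(participants) < min_participants:
        return False  # a two-person topic is just a conversation
    targets = [t for a, t in replies if a == author]
    return len(targets) >= streak and len(set(targets[-streak:])) == 1
```

Per the talk, the caller would show this reminder only once per discussion rather than on every qualifying reply.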
These are the kind of nudges that I really like at Discourse. I think a really powerful design element in software is to think about when people come to these juncture points how can you help them make better decisions?
I heard a mom at aftercare saying to her son one day, as they were leaving, "Did you make good choices today?" And I thought, wow, that's really good advice, right? Did you make good choices today? Because sometimes you don't, sometimes you screw up, and that's okay, but you've got to be thinking, "Am I making a good choice right now with what I'm doing?"
So this nudge that I'm showing you on the screen is about short replies, because another thing we liked at Discourse was the idea that you're actually having these real conversations; it's not a chat system. You can post short replies, but we really would prefer if you have some more meaningful interaction with the other people on the site. So when you try to post something really, really short, we're like, "hey, that's too short, but have you thought about pushing the heart button?"
If that's all you need to say, is like "that's cool, that was awesome, or I feel you, dog", then press the heart button because that's really what that's for and it makes the discussion better. And you can always reply more later. Nobody's stopping you from doing that. So you can do this in a lot of different ways. We have a lot of these nudges that we come up with over time, and the way I come up with these is really I participate in the discussions and I think about when I do things wrong.
Because I think, for me, Discourse is about an understanding that you're your own worst enemy a lot of the time. That's true of programming. That's what Coding Horror is: you're the coding horror, you're the person making all these mistakes, right? So, Discourse is the discussion manifestation of that.
How am I screwing up today? And how can I not do that in the future? And how can I do better? So these are the kinds of things we want to remind people about at the time they happen. So to give you an example outside of Discourse, there is this system, Nextdoor. I don't know if you guys know it, it's basically Facebook for neighborhoods. So you join and it's people on your street, and you talk about stuff that's going on in your street, and my wife had joined this site and I heard her talking about it and she was kind of complaining about it.
She was thinking, "God, there is a lot of racism in our neighborhood." Even before all this other stuff came out about Nextdoor, it became a known issue that you kind of learned about. I talked about how we're all hyperconnected now because everybody carries a smartphone. How many interactions with my neighbors did I really have before? I mean, I would see them on their porch, I would see them in their cars, I would walk by them and say, "Hey, how's it going?"
But then on Nextdoor, you're finding out what your neighbors actually believe. Not always what you want to hear. So you're more connected, and that has its pros and cons. So, Nextdoor knew they had a problem with casual racism, because people were concerned about crime, but a lot of the crime reports boiled down to "person of a certain skin color doing something, I don't know what", which isn't a useful crime report and is also kind of racist.
So, what they did was awesome, it's exactly what I'm talking about. So when you go in and you say, "hey, look, I want to report crime". First of all lay some ground rules. "What were they actually doing that made you think this person is committing a crime? What are all the identifying features of this person, not necessarily just the race, but all the things that identify this person, can you list them?" That's setting the tone. And then this is the best part, when you get into listing, it forces you to be specific.
That's amazing. Because that's teaching you how to make a really good crime report. Not just, "maybe you could be a little less racist," but also this is actually what the police would probably say. Actually describe this person to me: what are they wearing? Give me their shoes, give me their shirt, give me their hair, give me their height, all that stuff. It's not going to be "was it a black person?" That's not going to be the first thing you say.
So, I love this, because this is all the nudging I'm talking about, at the time it happens. Okay, I want to report a crime. It's teaching you to be a better person, and it's not saying, "hey, by the way, stop being so racist"; it's showing you how to make a great crime report. So I think that's fantastic, and that's exactly the kind of thing I'm talking about. There is also a well-known book called Nudge, which covers all of the stuff that I'm talking about with nudges.
There is another way of doing this where you don't take away the thing you don't want people to do, but you make it harder to do. There is the thing you want people to do, and that becomes super easy to do, and there is the thing you don't want people to do, and that becomes kind of an onerous process of painful steps.
So a good example of this is putting the junk food up higher, for kids in particular. You put the junk food up high and put the healthy food at more of an eye level, easier to reach. Sort of the opposite of every checkout lane you've ever been through in your life, basically. So Airbnb also had a problem with this, a problem where people would be casually racist about whom they would rent their places to.
They would sort of notice that people were of a certain race or something they didn't like and just decided not to rent to them. So what made that possible is the fact that we're, again, hyperconnected. You're not renting from some organization that would be sued if they were racist, right? You're renting from a person that's not going to be sued if they're racist and, in fact, may not even realize that they're doing that, right?
It might be completely accidental, I don't know. But that's the downside of these user-to-user interactions: it opens up the door to tons and tons of bias that you may not have at the institutional level, because the institutions have already kind of thought about this and have policies for it.
So Airbnb is dealing with the same sets of problems that Nextdoor is, in similar ways. One thing Airbnb is doing is essentially anonymizing the process of who's going to stay at your place. The host gets an anonymized version of that person rather than a picture, and that's proven to work in a lot of different scenarios. So, another thing that we want to encourage on Discourse is keeping things at eye level to keep you reading.
Because I felt like a lot of the problems with discussion were that people were talking past each other. It's like that old scene in Fight Club where he says, "Are you really listening, or you're just waiting for your turn to talk?" And I think that's a huge problem in discussions. So I wanted to build in tools that said, look, read, just keep reading. I don't want anything to get in the way of the reading, and for me, pagination was already so dumb.
The idea is like you just keep scrolling, it keeps feeding you more, and since I actually did this, YouTube and a bunch of other sites have started, like after you read the news article, or the YouTube video, or whatever it is, they just serve the next one, they're like, here's more of this thing. And you can choose to keep reading, keep listening or not. So it's a powerful design mechanism to get people to do more of what you want them to do.
In this case, it's: I just want you to read the discussion. So Ars Technica did a really cool science experiment. I don't know what year this was, I want to say 2012. They published an article about a controversial topic, in this case gun control. And in the middle of the article, paragraph seven, actually, seven of 11, it says: "If you read this far, please mention bananas in your comment below. We're pretty sure 90% of the respondents to this story won't even read that far." And then the comments begin on the article. Now, remember, this is a gun control article, so it's already kind of a loaded topic. So, you can't see this, but this is just page one. Okay, here we are on page two. Still going. Page four. So, finally, this one person on page four mentions bananas.
So, you know, what can you do to really encourage people. If you're going to comment on something, did you even read it? I think that's so fundamental to discussions. Did you actually listen to the other person's point of view before you talked? Are you just rushing forward to your talking points? And there are a couple of different ways to do that, one is prevent them from commenting until they get to the bottom of the article.
We haven't done any of this, it's all kind of experimental. We do know in Discourse, actually, how much you've read. In other words, how long each post is on your screen in a semi-scrolled state. So I can tell if you actually read, how much you read in a given discussion topic. So what Steam does that's really cool is when you're discussing games, because Steam is a gaming platform, it mentions in your comment how many hours you played the game.
So if someone's saying, "This game totally sucks, I would not recommend anyone buy this." And then it says under them "one hour played", you wonder, did you even really play the game?
What kind of opinion can you have if you haven't even really played the game very much? That sounds like a knee-jerk reaction, versus somebody who's played it 300 hours telling you how much they love it. Well, obviously, I believe you, because you've spent 300 hours on this game; that's amazing, or sad, as the case may be, depending on the game. So I think these are really interesting. Another thing that people have proposed, that we haven't done, is that you could quiz people about the article before they can comment on it, to make sure they've read to a specific level in the discussion before they can comment.
They have to go all the way to the bottom of the discussion to read. There are a lot of things we're still thinking about. I mean, this is still stuff we're working on, because I don't know all the answers, but I do know that you want to get to a place where you're rewarding the behaviors that you want people to have. You want people to read the stuff that they're commenting on.
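The read tracking mentioned earlier, knowing how long each post has been on your screen, could be sketched like this; the tick-based model and the three-second threshold are assumptions for illustration, not Discourse's actual mechanism.

```python
from collections import defaultdict

READ_THRESHOLD_SECONDS = 3.0  # assumed minimum on-screen time per post

class ReadTracker:
    """Accumulates how long each post has been visible in the viewport."""

    def __init__(self) -> None:
        self.seconds_visible: dict[int, float] = defaultdict(float)

    def tick(self, visible_post_ids: list[int], dt: float) -> None:
        """Called periodically by the client; dt is seconds since last tick."""
        for post_id in visible_post_ids:
            self.seconds_visible[post_id] += dt

    def read_posts(self) -> set[int]:
        """Posts the user has plausibly actually read."""
        return {p for p, s in self.seconds_visible.items()
                if s >= READ_THRESHOLD_SECONDS}
```

With data like this, features such as the banana test or a read-before-you-comment gate become straightforward to build on top.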
There is this other platform, Civil Comments, that does this interesting thing: before they will let you comment, you have to rate other people's comments. So, you write your comment, you submit it, and it's like, okay, before you can actually post this comment on the webpage, you're going to read three other comments and tell us what you think: is this a civil comment?
Is this a good comment? I mean, does it add to the discussion, that sort of thing. And I think that's really interesting, too. It's another bar you put in front of people before they can comment, to prove that they have some stake in the conversation and will actually help. And then they rate themselves, too, which is actually quite amusing. At the end of that little exercise, it's like, "Okay, now rate yourself." You know, it's kind of a trick, actually. It's really cool, a really neat design that they have at Civil Comments.
So there are really two sides to this, there is rewarding the things that you want people to do, and then there is ignoring and suppressing the things that you don't want. And if this sounds a lot to you like dog training, it's because it kind of is at some level. It's the same set of behavioral mechanisms we use to deal with a lot of things in life, and there is a lot of research that shows that when you use negative reinforcement for dogs, they have more behavioral problems.
There is a lot of science behind this, actually. So you have to be careful with the negative stuff. I think of it more as suppression and redirection; I don't mean sort of berating people for doing the wrong thing. But let me give you an example on Discourse.
If there is a discussion that's kind of going really off the rails, you move it to some area of the site that most people can't see.
It's like, okay, I'm moving this to the protected area. You guys can argue here, but like this is going out of the main area of the site, for example, that would be a redirection. Because I don't think you want to initially ignore negative behavior. There is a lot of dog training that says if you see a negative behavior, you just ignore it completely. But I think human behaviors can be so negative that you actually can't ignore them, you have to do something about them.
So I call it suppress and redirect. Now, when it comes to rewarding the behaviors that you want, you have to be really, really careful here because there are so many sites that have gotten into trouble because they don't fully think through what they're rewarding. And they don't fully understand that once you start rewarding a behavior, people will do anything to get that reward, like stuff that doesn't even make sense.
Why would people be so motivated by this? But they are.
So you need to really consider whether you're rewarding the right set of behaviors. Think, "if I were an evil person and all I cared about was this reward and I would do anything to make it happen no matter how dumb that is or how harmful it is to other people, what would happen?" If you haven't thought that through, you need to revisit your approach.
Because you haven't considered what you're actually encouraging people to do. Taken to the extreme, what would that look like? And that's what you want to plan for. I do believe that some of these can be really simple like just showing up. Like I think, being a parent, one of the surprises about being a parent and having children, literally, 90% of the job of being a parent is just showing up every day, being there for your kids. Not doing the right thing, not saying the right thing, not even really planning what you're going to do, but actually being there and being present for your kids is 90% of the job.
And I think that's true of, I think, most communities, that you have to be there to have some stake in them, not all the time, but a reasonable amount. So a lot of the rewards that we give, for example, I think we did this first at Stack Overflow, we had a consecutive days award where, if you're on Stack Overflow for 10 days, you got a badge. If you're on Stack Overflow for 30 days, you get a badge, and then if you're on Stack Overflow for 300 days, you get some super badge.
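A badge check like that consecutive-days award might look like the following sketch. The tier names are invented for illustration; only the 10/30/300-day thresholds come from the talk.

```python
from datetime import date, timedelta

def longest_streak(visit_dates: set[date]) -> int:
    """Length of the longest run of consecutive visit days."""
    best = 0
    for d in visit_dates:
        if d - timedelta(days=1) in visit_dates:
            continue  # not the start of a run
        length = 1
        while d + timedelta(days=length) in visit_dates:
            length += 1
        best = max(best, length)
    return best

# Illustrative tier names; only the day counts come from the talk.
TIERS = [(300, "Super"), (30, "Dedicated"), (10, "Committed")]

def streak_badges(visit_dates: set[date]) -> list[str]:
    streak = longest_streak(visit_dates)
    return [name for days, name in TIERS if streak >= days]
```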
And these are actually kind of hard to do because people have holidays and other things to do in their lives. So you're kind of mapping the balance of being there without causing people to obsess over showing up on your site every day. But some of them can be pretty straightforward. Obviously, great content. I mean, if other people are responding well to the content, this would be like on Stack Overflow, things that get upvoted a lot.
On Discourse, things get likes. On Facebook, the thumbs up and the other positive emoji reactions. Again, very straightforward: you always want to reward positive content, primarily if it's coming from other people. I'm a big believer in peer acceptance of what's being posted. And one interesting thing that happens on Discourse that I hadn't actually anticipated is a kind of vacuum of likes, where it becomes the norm for most posts to get some likes. And then people start posting opinions that other people clearly are not down with, and it's just utter silence; the absence of likes becomes a void. Obviously, there is something wrong with this post, not to the point that you would delete it or anything like that, but it's interesting how those dynamics start to emerge, where you're not saying anything negative about the post.
But you're not saying anything positive about it either, right? You're sort of killing them with kindness at that point. I'm not going to say anything mean, but I'm just not going to say anything at all. And I still think that's a superior interaction; again, negative reinforcement is so dangerous, so that's an easy win there. Numbers and sorting are hugely powerful, too; just be really careful about what you're doing when you introduce numbers.
This is a Stack Overflow screenshot. We wanted to be very explicit about the wording: these are actually the weekly reputation scores. It's sorted by, again, peer acceptance of what you're posting. We want you to post things that other programmers react positively to. This is a good answer, this makes sense; there is no real downside here. The one thing we did have to do was break this apart later into daily, weekly, and quarterly views.
It became dominated by long-term participants, and you want new blood all the time, so you do need time ranges here. When it comes to Discourse, there is no concept of putting a number score next to someone's name at all.
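The time-range fix described above is straightforward to sketch: only count reputation events inside a window, so lifetime totals don't dominate. This is an illustrative example; the event shape and point values are made up, not Stack Overflow's actual schema.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def leaderboard(events, window, now):
    """Rank users by reputation earned within a time window.

    events: iterable of (user, points, timestamp) tuples (illustrative shape).
    Windowing lets newcomers compete instead of long-term participants
    always dominating the lifetime totals.
    """
    scores = defaultdict(int)
    for user, points, ts in events:
        if now - ts <= window:
            scores[user] += points
    # Sort descending by windowed score.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

now = datetime(2016, 3, 22)
events = [
    ("veteran", 500, now - timedelta(days=400)),  # old rep, outside the window
    ("veteran", 20, now - timedelta(days=2)),
    ("newcomer", 45, now - timedelta(days=3)),
]
weekly = leaderboard(events, timedelta(days=7), now)
```

With a weekly window, the newcomer's recent activity outranks the veteran's, even though the veteran's lifetime total is far higher.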
On Stack Overflow, everywhere you go, there is a number next to someone's name that represents essentially their peer acceptance score. We don't do that at Discourse, but if you go to the user directory, you'll notice something very subtle, which is ambient sorting.
We don't say, "hey, you should give out a lot of likes and receive a lot of likes". But it just so happens, if you go to user directory, that's the default sort. We're sending a message there, but it's not an overt message of you must do this, it's like, "oh, look, this just happens to be the default sort. You could change it to whatever sort you want." In fact, this is a really interesting page to look at on Discourse because you can see, you know, who's been there the most days, who's read the most.
We track reading time and all that stuff, who has actually visited the most topics. It's really interesting data, but it's default sorted by this concept of reciprocation: you gave likes and you received likes. It's a collaborative model, whereas Stack Overflow is a more competitive model; your number is higher than mine. So what you're showing off on this page is, "wow, look how much I gave." And that feels good.
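The ambient sort described here could be sketched as a default ordering by likes given plus likes received. The field names and records below are hypothetical, not Discourse's actual directory schema; the point is only that the default sort quietly rewards reciprocation without telling anyone what to do.

```python
# Hypothetical user records; the real directory tracks similar stats
# (likes given/received, days visited, topics viewed, read time).
users = [
    {"name": "ana", "likes_given": 120, "likes_received": 80},
    {"name": "ben", "likes_given": 10, "likes_received": 200},
    {"name": "kim", "likes_given": 90, "likes_received": 95},
]

def reciprocation_sort(users):
    """Ambient default sort: giving counts as much as receiving."""
    return sorted(
        users,
        key=lambda u: u["likes_given"] + u["likes_received"],
        reverse=True,
    )
```

Users can still re-sort by any column; the message is only in which sort they see first.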
And it's actually really cool to see people that are giving more than they're getting. It sends a really nice message about what the purpose of this stuff is: it's not to just absorb all the emotion in the room, but to reciprocate. That's what we're here to do. But we're pretty competitive at Stack Overflow: your answer needs to be better than the other answers. There can be more than one good answer, but you want your answer to be as good as possible to be featured.
So, really, these are the four things I'm talking about when I talk about designing software to get people to be their better selves. The first one is the aspirational guidelines. These are important for philosophical reasons, for building a statue that says, "This is what we stand for." And if you don't build that statue, that says something about you as well: you didn't build a statue, so I guess you don't stand for much.
So build the statue, but realize it's not the whole answer; in fact, it's not going to be as helpful as you think, because it's hard for people to do the right thing. Making a list of things for them to do doesn't actually solve their problems. What does is the just-in-time nudges. Think about the places in your software where people are doing X or Y and you find yourself thinking, "I really wish they wouldn't do X or Y. I wish they would do Z, and if they did Z, or even A or B, we would have a better community, we'd have better software."
Those are such powerful inflection points for you, as a person who builds products, to think about and plug into your software: these are the times I know are the critical inflection points, where they're going to do A or B, and I want them to do the right thing for them.
That's the time to step in and build those nudges, and that's what we continue to do in Discourse; we build more and more of those nudges in over time. Number three: there are things people do that aren't necessarily healthy or awesome but still need to exist in your product. You want to put barriers in front of those, making it easy to do the right things and more awkward to do the wrong things.
Or the paths that you don't want them to take. There are a lot of paths like that in software; there are always 10 different ways to do something, but you want to herd people down the good paths. So pave those paths with gold, and lubricate them so they're really slippery, so people just slide right down them.
And then number four is easy: reward the positive behaviors. I didn't put it on the slide, but when you see negative behaviors, again, I think your toolkit is really about redirection and suppression, more so than "you have done the wrong thing and now you will be penalized." Those are, I think, the valuable ways to think about your toolkit. And that's all I have for you.
You know, actually, I'm not super data-driven. Actually, the way I measure stuff and the way I did it on Stack Overflow, and I still kind of do this on Discourse, is I go to the site and I look at a page on the site and I think, how would I feel if I had created this page?
If I created all the content and all the replies, how do I feel about that? And that's kind of how I measure success. On Stack Overflow, it was more about: is this a good question? Are these good answers? Are these authoritative answers? Do these answers make sense? Can I read this? Is it formatted well? A lot of this stuff is really simple: does the page load fast, for example?
So, on Discourse we're dealing with so many different disparate communities that have such different value systems. A lot of times, it's about the nature of human interaction. How do I feel when I went on this page, like a book can make you feel, like a passage in a song can make you feel. So I'm not heavily data-driven in the way that I make these decisions. I do tend to look for what I call complaint-driven development. If I hear a lot of people complaining about something, then I know it's a problem. So I do like to sort of gather feedback that way, but I don't necessarily build in a lot of data into those decisions all the time.
I would say, first, any kind of social software, you first want to let the people do what they're going to do and observe. That's kind of the same answer I gave you. Observation is so important because you have no idea what people are going to do. I mean, you might deploy this and find out, well, nobody leaves comments anyways, so why did I spend any time thinking about that.
So the first step is get it out there in a minimum viable way, making sure that nothing terrible can happen. So build in the horrible things protection, like the one click to remove this button. But I would say get it out there first and sort of get the tone of what's out there.
And then I'm also a big believer in, say you do have a lot of people leaving comments: once they do, you'll have people leaving comments about leaving comments. They'll have opinions about how this should work, more so than you as a designer. Let the community tell you how it should work. So, yeah, I would say don't overthink it. Step zero, protect yourself from any truly crazy things happening, but, after that, just get it out there, see what happens, and act on the feedback.
I tend to think about the big companies that are affecting everybody's consciousness, like Twitter and Facebook.
I think a lot about some of the decisions they've made and how they do things and I honestly don't think there are a lot of companies doing great work in this space because it's viewed as a solved problem.
It's like, people will use Facebook, people use forums, they're good enough. And I was like, well, no, they're really not; that's the problem. I had the same reaction when Slack came around. I was so excited about Slack because I was thinking, "oh my god, chat has been terrible for so long." People were saying, "What are you talking about? Chat's great. You can just use IRC, you can use HipChat." And I'm thinking, but those are terrible. I have a different perspective on this stuff.
I feel like people think good enough is already out there, and there are not enough people pushing and saying, this isn't good enough; we need to go to a higher level. And there is a path of least resistance for some of this stuff, where it's easier to do something for free on Facebook than it is to, say, set up Discourse. So that's the real challenge for us.
It's not supposed to be; I would say it's much more collaborative, and software can be collaborative too. I think the way it works for software is that there is more than one way to do it: there are two highly upvoted answers and they're complementary, different approaches to the same problem. So they're competing, but not directly. It's not like, well, if you do this, then you're opposing everything I stand for in answer B. It doesn't really work like that.
You've chosen a different set of trade-offs, in memory, space, or time. Maybe one answer is much more verbose, for example. So when I say competition, it's not overt competition; it's more friendly, a "who can come up with the best solution" kind of competition. But it is true that, in Discourse, it's much more about not creating any perception of competition. "Who can be the most correct in their opinion" never goes to a good place. That's bad.
So you really want just people to say, "Hey, you know, I feel you. That's an interesting opinion stated well, it's really interesting that you told me some story about your life that's really had an impact on the way I think about things." And, I think, for Discourse, the way I explain this to people is you're not really trying to change anyone's mind, because I don't think that's really realistic.
I mean, can you guys in this room think about the last time someone really changed your mind about something? You held opinion A, some series of things happened, and you realized that opinion A was incorrect, or that you now hold opinion B, which is a slightly different opinion. Think about how that worked. I've never had that happen to me because someone talked to me and just completely changed my mind.
For me it's a process of months of hearing different things, feeling different things, and then slowly just one day I wake up and it's like, "wow, that makes a lot of sense now".
You know, I've heard so many varied opinions about this that I now have a more nuanced opinion. So, I think, the evolution from A to B is not necessarily A, then you flip a switch, and now B. You evolve your position on A, it becomes more and more nuanced: I learn more about this, it's not as simple as I thought, this is a scenario I hadn't considered, this is harder than I thought, and then B.
So it's this slow nuance that sort of creeps in. The reason we have these gun control debates is because first of all, we have a culture of guns in the United States, but also it's a hard, hard topic. It's just not easy to come up with something that's going to work for everybody.
That's the cool thing about Discourse: it's a general building block for community, and it could be for something super personal, or it could be completely business: we're here to do customer support. It could be very, very straightforward in what it's trying to do, where no one would get emotional about it, because it's customer support; how exciting could it possibly be?
You're not going to have relationships, you're not getting married to these people. So there is a whole continuum. So I think it really depends on the focus of the community. I think, over time, one of the cool things about Discourse is you do develop social relationships with these people at some level because it is a social platform. Stack Overflow is not a social platform.
If you go to Stack Overflow to date people, you're doing it super wrong. I'm just going to throw that out there. But that's not wrong on Discourse, because you're trying to get to know these people, right? You're trying to hear their stories, understand their stories, share your stories with them, and get a sense of who they are. For example, on Boing Boing, one of our earliest communities (we had three beta customers when we launched in 2013), we were trying to just figure out, "does this software even work?"
Does it work outside of my machine? And on Boing Boing, there were these huge communities of gender-queer people, and I had never interacted with people like that, right? Just hearing some of the stories they would tell, I'd be like, I had no idea the bathroom was such a scary place for people going through a gender transition. I had no idea; I just literally did not know. I mean, as a white guy, it had just never occurred to me that that was a scary place to be, because people will hurt you.
So that was really interesting for me to hear. And I love that interaction, I love learning about that because I felt like, again, I had a more nuanced opinion about gender after that because I just didn't know. So I think it depends on the audience, but, certainly, Discourse is built to host those kind of discussions where it's like there is no answer, it's just a bunch of stories that people are telling. But it's all about the focus of the community, like how you set it up, what it's for, who it's for, the audiences that you're attracting, that sort of thing.
Change is hard, I mean, for sure, if you have established communities that are used to software X, even if software X is like unbelievably terrible. The longer they've been in it, the more they'll be attached to it even though that makes no sense at all. So, I totally appreciate that there is social friction to change. I can tell you what doesn't work.
What doesn't work is "let's do this side by side", that is just doomed. If you have the old thing and the new thing side by side, unless the new thing is literally like sex, it's not going to work.
So don't do parallel efforts. Say, "we're going to commit to this for six months," and commit to it, one way or the other. That you have to do. The other thing I've seen work really well is converting the old content and giving them something that looks kind of like their old thing.
I kind of kick myself for not realizing this earlier, but that's really powerful. People really react well to it. If they see, "oh, this is my old content, it looks kind of like my old thing," they're willing to give it a lot more benefit of the doubt than if it's a brand new thing that doesn't look anything like the old thing and doesn't have any of their old data. Now I have to go here? Well, thanks.
So, I think if you want to position yourself for success, don't do that A/B thing, and try to convert the old content and the old look and feel into the new thing. I can't guarantee it's going to succeed, but I've seen it work dramatically. Even from a sales standpoint, if you're trying to sell software and you can take their old data, crawl it, and put it in your new system, that's hugely powerful, too. So it's not just for communities; it's actually a sales tool. We usually don't have time to do that at Discourse, which is why we don't do it, but the few times we have, it's been electrifyingly effective in getting people to switch and buy.
So we build Discourse using Discourse, and I've had those discussions about things I just did not agree with. I was thinking, "this is a bad idea, we're not going to do this, it's not on the roadmap, and I think you're a bad person for even thinking about this feature." It does happen. I don't usually feel that strongly, I'm joking. But there are things like that, where it's like, this is ridiculous, this is such a bad idea.
Particularly if it's popping up a lot, you've got to have a moderator come in and say, "Look, I appreciate your insight on this, but we can't go in this direction." Essentially close the discussion: "Look, I hear you, but we can't do that, for whatever reason." And then, if they can't let it go, you kind of have to let them go. If you have users that are all just going to push their pet issues, they can't really stick around.
So there is a certain amount of emotional labor that goes into any community, right? And if you can't do it, I can't give you software that can do that for you, unfortunately. What I would recommend in your case is to look at other members of your community that really get what you're doing and get you. They don't have to agree with you.
When I say get you, I don't mean they agree with me all the time; what I mean is they understand what you're trying to do. One of my greatest successes at Stack Overflow was that I could go to meta and there were 10 people who understood Stack Overflow better than I did. They understand what we're trying to do. They don't always agree with me, but they understand what our mission is, right?
What we're trying to accomplish in the world.
You need to look for users like that, and those users are the ones you want to recruit and say, "look, you're our buddy here. I want you to be a moderator."
When you see discussions like this that are just not really going in a good place, I want you to nip them in the bud and say, "look, I hear you, we can't do that right now, we'll consider it in the future," that sort of thing. And redirect to something else that's more positive.
So that's my main piece of advice: try to recruit people that are fans of your product. We do this all the time at Discourse, and I'm a big fan of just showering them with affection and swag: "I have this thing I want you to do for me." They're our buddies, right? And they get us, and when I get cranky, and I sometimes do get cranky, this probably won't surprise you, they step in and they're like, "No, Jeff, you're too cranky. Let me take this; step away."
So do that if you can, because you get advocates out of it, people that are going to help you build the product, not just help you manage your discussions, but really become some of your best advocates. But it's not free; you have to identify those people. I will say that, on that page I showed of the user list in Discourse, we did an exercise recently where we sent iPads. Well, this was a secret, but now you know.
We sent iPads to people in our community that are really awesome, and an iPad Pro, not the crappy iPad. So we did that, and off the top of my head I was thinking of three people. I love these people, right? And I went to the top user page, which is, again, sorted by likes, and they were near the top of the list, right after our team, who are required to be on the site because it's our job. But right under that were the people I was talking about.
This is awesome, this is what I'm talking about: these are people that are giving and receiving love in our community, and I really like these people. And the data, there you go, the data supported my decision. That was an off-the-cuff decision: I like A and B, and, holy shit, they're A and B, right under our team. So that was a great feeling, to have the data corroborate what I was thinking.
Identify those people so that, over time, assuming your community has a minimum viable level of activity, you can get that data out. These are your best users. Shower them with affection, let them become your advocates. We're an open source project, too, so this is kind of self-serving, because we need people to work on our project. Thanks very much.