Don't Make Me Code, Ep. #12: Security Ergonomics
about the episode

In this episode of Don’t Make Me Code, David and Steve are joined by Guy Podjarny, CEO of Snyk. The group discusses the current state of developer experience within security tools, the importance of non-engineers on your developer tool team, and the challenges of selling security to developers without resorting to FUD.

Guy Podjarny is the CEO of Snyk, a developer tool company that helps you deal with the security risk your open source dependencies introduce. He’s passionate about making a better web, and he does so through speaking, writing and building tools, mostly on the topics of Web Security, Web Performance and Responsive Web Design.

transcript

Steve Boak: We're calling this episode of Don't Make Me Code "Security Ergonomics," and we're here with Guy Podjarny, the CEO of Snyk.

Guy Podjarny: Thanks for having me.

Steve: Yeah, it's a pleasure. We always start with a little intro. Why don't you tell us a little about yourself and about Snyk?

Guy: So, my background, I'm Israeli. I worked in security for about a decade, or actually maybe about 14 years if you count the Israeli army, moved to Canada somewhere in that process and went through a bunch of acquisitions. Left security to found a web performance startup called Blaze, which got acquired by Akamai.

I was a CTO at Akamai for three and a half years or so, moved to London in the process. So that's where I am now. A year and a half ago or so, I got the itch to get back into startup land and left Akamai to found Snyk. Snyk is a developer tooling company that does security. That's really the core premise of the company.

Right now we focus on open source security, specifically in the Node.js world, helping you find and fix vulnerabilities in Node.js npm dependencies and making that as easy and seamless as possible, as I'm sure we'll talk about during the episode.

Steve: The title "Security Ergonomics" comes from an interesting place. We all, as makers of developer tools, have concerns in both security and, of course, developer experience design. In your world, those things collide. So I'm interested to hear more about how security and UX play together in the product.

Guy: We think that ergonomics, the ease of use, how the user consumes security, is key and critical in the adoption of security by developers. There's this balance when you talk about security. Everybody wants to build secure software, but it's hard.

When you look at the security-tooling landscape today, it tends to be in pretty bad shape in terms of usability. It tends to be expensive tools that are hard to get started with, and they don't tend to be self-serve.

There's a lot of good technology there, a lot of sophistication built in, but it's not always easy to get it out.

When you expect developers to embrace security, there are a lot of factors, a lot of ways you need to do this, a lot of incentive questions, but fundamentally there's a big aspect of how you get started. First of all, why should you get started, and why do you care? How do you even get going? Then subsequently, if you do want to do it, what's your first step?

As developers, we have a very short attention span in that sense. You've got a minimal amount of time to capture somebody and get them to actually try it out, so there's already a bit of a higher bar to get somebody even interested, to get a developer who oftentimes perceives it as "not their job" to still try a tool. And then it needs to sort of just work.

A lot of our premise, in fact, a lot of the thesis of how we get developers to adopt tools, is to make it super easy. You can imagine this scale where, on one hand, you have how much you care about security, and on the other side you have how much friction there is, or how hard it is to address it.

On one hand we try to educate, and everybody should care more about security, but on the other side it's just, "Let's make it dead easy, just make it as trivial as possible to get going." Then you can act, and you can take your first step. And then we have a conversation, and I can move you forward.

David Dollar: I think that's a really interesting way to think about it. It seems like almost every time you're thinking about security, you're almost, by nature, making things harder to use, almost by definition, I guess.

For example, one-time passwords. Like, great, now all my accounts are more secure. But I have to go set them up and have some way to make sure that all my employees have them set up and deal with management and all of that. It just adds all of this extra friction and work. I know that it's good, so I do it.

Guy: Right, and it ends up working. For most things, if it's easier, you do it more. And security is not only not an exception, it's probably even stronger in that sense. I think for many of these things, there's the ease of use and first action, and then there is some perception of what is right or what is minimal.

There are some things you just do, like lock the door at home. Well, maybe not everywhere in the world, but in most places, you would lock the door. You don't really sit there and do a calculated risk assessment of whether it's worth your time to put that key in the lock and turn it. It's just something you do. There are some social norms, some best practices that people accept.

I feel like some aspects of security today, like the one-time passwords you mention, have evolved and go hand in hand with simplicity. Having a different password for every single account would not have made sense at all before you had password managers, because you wouldn't remember them. You're just human, and you wouldn't really think about it.

Humans are lousy at thinking about what will be a hard password and what wouldn't be, so now you have auto-generated passwords easily stored in a 1Password or LastPass or whatever that you have in there. So suddenly it became easier.

Similarly, phones made two-factor auth easier. All of these things go hand in hand where, on one hand, it becomes easier. So people are more willing to adopt it, and then they see others using it, and it just becomes a bit of a norm.

In the techie world, for instance, today, things like a password manager have really become very popular. They haven't quite permeated to the general population.

I think there's this virtuous cycle around trying to establish something as a best practice, and it has to start from simplifying, simplifying, simplifying.

Steve: To me, it's almost like tools that introduce new sensory perceptions to us, that password managers give us the added ability to see and share passwords. I think about it, like in your cell phone address book, how we don't have to remember anyone's phone number anymore. They're just there. And we don't have to remember our passwords anymore because we have great password managers.

For vulnerabilities, it feels like you're introducing a new sensory perception. Now, I can see where these vulnerabilities live and I have a tool to fix them, and so now it both raises awareness of the issue and helps me get the power to do something about it.

Guy: Precisely. And this is great, in general, and kind of a good path to maybe try and model in the world of security. There is still a challenge where some security actions are security features and they're very visible. You might not think about the implications of them, but they're at least in your face. Authentication is a good example of that. Some others are a little bit more insurance and a little bit more vaccine.

One of the challenges, one of the goals, is to try and bring them up, just make you aware of the problem. A good example of this is HTTPS. I released this blog post that talked about how HTTPS adoption had doubled in the last year or so, between July of 2015 and July of 2016.

First of all, that's awesome, and it's nice, a sort of mini security celebration. We don't get many of those. The other aspect of it is to ask "Why?" and how we can replicate that elsewhere. When you look at it and try to unravel it, this is theory at this point, because you don't really know the causation and correlation, and all that stuff.

When you look at why that happened, there has been a combination of simplification, carrots, and sticks. On one hand, it became easier. Things like CloudFlare made it free and easy, just a checkbox. GitHub Pages has it on by default. Let's Encrypt made it both cheaper and more automation-friendly to get a certificate, so it's just been easier.

Another aspect is sticks, like Google telling you that HTTPS would rank you higher, or I don't know, would rank you lower if your site was not HTTPS. Or a browser saying they're going to show an indicator on HTTP sites to say that you are not secure if you are not HTTPS, as opposed to showing you a green indicator when you are secure. So that's sort of a big stick.

Then you've got all sorts of carrots, like HTTP/2 and Service Worker and a bunch of new web technologies that are only enabled over HTTPS. This combination of things incentivizes the behavior, both for individuals and for organizations. Once again, HTTPS is very visible. You browse websites and you can visibly see whether a site is HTTPS, and that there are just more HTTPS sites now.

Vulnerabilities are tricky. They're behind the scenes, and a lot of our challenge is to try and make you aware of them. It's this "find" stage, the ability to find those issues. We need to make it easier for you to find the issues that you have. Then you have the opportunity to take action.

Steve: That's an interesting point, that something like a browser plugin, or how Chrome will tell you if a website is not secure. It's in your face, and it's in a place where an end user can see it, but like you were saying, vulnerabilities are behind the scenes. They're a little bit harder to discover. What does that look like for a Snyk user? How do you expose that to them, and how do you give them that ability to see things more clearly?

Guy: It's a constant challenge, first of all, I will say that. When you talk about security, there is the initial, "Take action, understand your current status," and then there's the ongoing, "What happens when the new vulnerability gets disclosed?"

First we need to get you to use the product, so it was about lowering the bar to understanding all of those dependencies and vulnerabilities that you have, very, very quickly and easily. We actually had a good learning about this moving from, or sort of expanding from, our command-line interface, our CLI, to the GitHub integration.

What we observed is, we had what I think is a really easy-to-use command-line interface, where you could just do npm install -g snyk, then go to a folder and run snyk test, and it finds all the issues. It has good links and handy information around the title of each vulnerability, how it was found, and a severity indicator to help align priority.
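For reference, the flow Guy describes maps to commands roughly like this; the project folder name is made up, and exact output and flags vary by CLI version:

```
# Install the Snyk CLI globally through npm, as described above.
npm install -g snyk

# Inside a project, test its npm dependencies.
cd my-node-app        # hypothetical project folder
snyk test             # (depending on setup, `snyk auth` may be needed first,
                      #  as mentioned later in the episode)
# Each reported issue includes a title, how it was introduced,
# a link to more detail, and a severity to help prioritize.
```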

People liked it, and they used it on one or two projects, because beyond that it became a bit of a hassle to go through everything you had cloned. That was one of our learnings from the CLI. When we added the GitHub integration, we had a page with all your GitHub repositories, and a test button next to each one. You would hit test, then you would see the issues.

Lo and behold, users would press the test button on one or two or three of the repositories. They wouldn't go through them all, so we modified it and made it auto-run the test. We had to do some backend work to be able to withstand that, but basically, as soon as you open that page up, it runs a test, which is totally static and non-obtrusive, on all of your repositories.

Suddenly, users browse through all of those, and now they are, and we see this in the numbers, substantially more inclined to take action on the issues. They have a bunch of these repositories that may or may not have been their top priority, but you make them aware of it. That was the onboarding: just expose them to the range of the problem.

Then we have a whole set of other actions that have to do with proactive alerts and sending new issues when there's a new vulnerability that is applicable to you and that is relevant. How do we make you aware of it?

Even if there is one champion on the team that has integrated Snyk, many of these notifications would go into, for instance, a fix pull request. There's a new vulnerability, we send you a fix pull request with the changes to make it go away.

One advantage of that is that it's a fix pull request. What better way to find out about a problem than with a solution to it? The other is that, while there was one champion who integrated, everybody sees the pull request, and that exposes it. We have a bunch of other integrations beyond pull requests, all allowing the one champion to propagate that throughout the team.

David: I really like that model, that makes a lot of sense. Using GitHub as a distribution mechanism really aligns with some of the tools that we've been using, too. We just added Hound and Codecov and a couple of other things to one of our repos, and we did it because you could just push a button and get some really interesting information about your code without too much effort.

All it took was one person deciding to go try those tools and tie them into the repo, and eventually the whole team is using them. It's been successful for us.

Steve: I think that's been a recurring theme here, too:

we all set out to discover ways of introducing our tools into workflows that devs are already doing every day.

Because, like with your early CLI, if you require them to break their normal everyday flow and do something different, they're just not going to do it. And so the GitHub integration is awesome because they're probably going there every day anyway.

I feel like we've had a number of people talk about that exact discovery process. How can we take our thing, integrate it into a normal, everyday workflow that devs are already doing, to make that seamless? That's awesome that you've discovered that so quickly and made it seamless for people.

Guy: Thanks. I think we definitely see the benefits of it. It was also good that the champions actually appreciate this, so it's not just you helping Snyk propagate through the organization, it's you helping this champion who cares about security.

Once a team reaches a certain size, fortunately, there are typically some people in that category. You help them educate and communicate with the rest, so not only is it useful for us as a business to permeate the tool, it's actually helpful to them.

And those champions feel like we support them, because we do help them educate the world. The other aspect of integrating with GitHub is that, beyond exposing more users to it, you again lower the bar around ease of use. You come to them. They don't need to come to you.

If you introduce another view, which inevitably you will need to because there's a lot of information and there's only so much you can show in place, you still want the right hooks, the right pointers, to tell them about a problem, about a necessary action, or about a potential action,

if there's something that they could do, then it gives you the opportunity to provide value with a lower bar and then allow them to grow into deeper use of the tool.

Steve: At the last company I worked at, we attempted to do something similar with vulnerabilities, and we could detect them pretty well. Our biggest traffic days by far were when Heartbleed and Shellshock hit, and it was not great news for our end users. But those served as good marketing moments. We could go write a blog post about how we would automatically find and help people fix vulnerabilities.

Another thing we were talking about before was walking the line between good product marketing and scare tactics, because we don't want to seem like we're scaring people into using our product. But it's also really valuable information.

Guy: I think being constructive in security is tricky. At the end of the day, we're promoting using tools that are kind of insurance. You are doing something to reduce risk, and if that risk hasn't materialized, it's really hard to know whether that is useful for you or not. You bought this awesome new lock for the door, and then the next day nobody broke in. Does that mean it was smart to buy that lock for the door?

Actually, no, what bothers me a little bit more is that you didn't buy a brand new lock for the door, and then the next day nobody broke in. Is it smart to basically persist that behavior now? Sometimes it's only when the person actually gets hacked or gets broken into, that that triggers a behavior.

In other cases, it's other factors, whether a desire to protect themselves, or somebody else getting hacked scaring them a little bit into action.

It's really hard for us. We set out to make Snyk a helpful brand, which manifested in even the color scheme on the website, just in many different aspects of the company, and it's a constant challenge. It's really hard to sell risk reduction without hyping the risk, but we try, and we try to be helpful and focus on the fixes.

I think that it may be worth talking about Heartbleed or Shellshock and such. If you talk about a vulnerability just in the sense of, "Oh, the sky is falling, how terrible this is. We're all hosed," then that's one thing. But if you talk about it in the sense of, "Hey, there's this really big problem here, and here are some very handy things to help you fix it and protect yourself," then people are appreciative, and it's constructive. It's a builder's post, not a scarer's post.

David: It's pretty interesting thinking about tools like Snyk or just security in general, where it's like, if you're setting out to build a new application, this isn't one of the required steps between you and "hello world" on the web. How do you get into that required stack that people think about on the web?

Guy: It's not a simple feat, and hopefully it comes back to almost peer pressure, or these becoming a best practice, like how test-driven development started to get traction, arguably still nowhere near sufficient. Or even CI today: code coverage, continuous integration, these are things that pay you dividends, maybe not over terribly long periods of time, but they pay dividends.

I guess my thesis is that people don't do them because, again, they sat down and did some sort of thorough thinking around whether this is worth their while, and whether a falsely broken build is not too time consuming, etc., versus what they get. They do it because it's the best practice, because that's how it's done.

We try to help permeate this and make it easy, but also try to help those thought leaders and the open source projects and such that use Snyk to advertise it through badges, through conversations and highlights. We haven't quite figured out a referral or a sharing mechanism.

Coming back to the fact that everybody wants to be secure, we want to help the ones that do, those champions, and help them celebrate that success. There's no single recipe, and I'm always open to ideas and things that we can do, because it's a constant challenge.

Steve: The case for it seems to be making itself in the media, that every week there's another vulnerability and another hack, and that this seems like a core competency, like something that every company is going to have to do soon. We're going to have to find and fix vulnerabilities quickly because they're exploited as fast as they're found.

Guy: I think another aspect of trying to get this done, in the evolution of the software world as a whole, is about sharing what you do. And that's another thing: security is scary for people. Opsee wrote this great post around the container stack, how you'd choose a container orchestration system, and people really care about that. Because, again, you don't want to have to make this super, extremely knowledgeable decision from scratch every time.

It's okay to rely on some mass smarts and somebody that you see as a role model who has made a good decision and outlines what they did.

People are afraid to share their security practices. It feels like I've just outlined all the things that I do, so you can find things that I don't do, and now you can hack in. It's hard.

The DevOps revolution, or sort of evolution, that happened, a lot of it was based on these blameless postmortems, on people getting up on stage and talking about how they do deployments, how they handle failure, how they had this massive outage and what they did about it.

It's really, really important for us to try, as much as we can, to evolve some of these practices around security, and to see more posts around how you handle security and which security controls you have, and help each other learn.

Steve: We've been talking about how we approach design in security, and so now we're all building teams to do this well. Another recurring topic on the show is how the DevTool space is one where it still feels like design is underserved, and we need more of it. And it sounds like you've done a great job building a team to solve this problem with good design from the beginning.

Guy: It's something I like to think we did. It was intentional because, I think as I mentioned before, we were keen to make it easy and to lower the bar around getting started. My Head of Product, Johanna Kollmann, she's amazing, and she's a UX person. Here we are at this developer-tooling company for security, a heavily, heavily technical topic, and our Head of Product is not a developer.

She's the only non-developer on the team, which she pays for every day, but at the same time, she's an amazing UX person, and she forces us to think all the time, even when we're inclined to jump to our technical conversations, to think in the context of a user and how that flows.

We brought in a good designer up front. We actually had an interesting conversation around the color scheme, by the way, which reveals a little bit about the team, and which was a fascinating experience of being around designers. This was one of my learning experiences working with UX and designers. We wanted to build a brand that is all about being helpful, about helping fix issues.

The designer, Mark, when he started working with us, interviewed a bunch of people. He got that theme and came up with a suggested color scheme. It was all really good in terms of being welcoming and warm, and not at all alarming, including the colors for the high-severity vulnerabilities that you need to address, which should be a little bit alarming.

I mean, this is the point where fear should kick in just a little bit, and it was really interesting to try and balance that. We got to this magenta color for the high severity that sort of fit in. It was a really interesting exercise, and I'm really happy that we did it. Thanks to Johanna, and to the acceptance of the entire team, led by her and the culture we've built, we really do think about user flows all the time. We think about making it easier to fix.

When you think about security issues, finding security issues is pointless. It's actually not at all useful. What's useful is fixing security issues.

If there is already an easy thing to do to fix the problem, then sometimes all you need to do is help point the problem out, and somebody will take action. But in our world, oftentimes there isn't an easy way to fix it. So it was important for us to actually build the functionality that helps fix it. Otherwise, all we're doing is just causing noise for you.

On our team, I think right now, we've managed to permeate that. We're very tempted, whenever there's some big hack or something like that, to write some post about it. And now somebody else on the team always chimes in and says, "Hold on, how is this useful to people? How is it not just noise?"

I just got held back by Danny, our co-founder, when I wanted to write about the DNC hack and how the hacking happened there. Once again, it was, "How is that useful?" and we stopped there.

Steve: We had a really similar conversation about colors. It's one of those things in the monitoring world that is codified, and so we had to use red, yellow, green, and we ended up calling it salmon instead of red. In our marketing, you don't actually ever see red. I can't even give scientific reasons for any of this. It was just like, "Don't use red in monitoring marketing. Bad idea."

Guy: Yeah, there's definitely a lot to learn. I guess it comes down to design and UX being about every single detail in that flow.

David: It sounds like you had dedicated designers. Do you get them involved, even in things like CLI design? I saw the CLI demo on your website. It's beautiful. How do you integrate design at that level?

Guy: I think there's a lot of evolution going on there. First of all, we watched your talk, David, in the Heavybit library, which was very, very helpful on this notion. I love this premise of small, sharp tools.

It started there, maybe even with me just talking about the core functionalities. I worked with a very opinionated person on our team, Remy Sharp, who's really well known in the JavaScript world and has a lot of experience. He built JS Bin and nodemon, so he came in with that experience.

That one didn't come from the designers, per se. It actually came from a strong, opinionated view from the different people on the team, and from having the user perspective dominate. We designed three, or actually four, underlying core actions.

There's the authentication, which doesn't always kick in; snyk test to find the issues; snyk protect to apply patches when you need to; and snyk monitor to take a snapshot of your dependencies so we can alert you to new ones. Then we built an interactive overlay on top of those with the Snyk wizard, which just runs the test, figures out what the next actions should be from the test output, and walks you through doing it.
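As a minimal sketch, those core actions and the overlay correspond to CLI commands like the following; the descriptions paraphrase Guy's explanation above rather than official documentation, and exact behavior varies by version:

```
snyk auth       # authenticate; doesn't always need to kick in
snyk test       # find known vulnerabilities in the project's dependencies
snyk protect    # apply patches when an upgrade isn't an option
snyk monitor    # snapshot dependencies so new disclosures trigger alerts

# The interactive overlay: runs the test, works out the next actions
# from the test output, and walks you through them.
snyk wizard
```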

The wizard has its own logic, but you don't have to use it. You can get the full functionality, without the handholding from the product, if you use those four underlying actions. It manifested well there. I think the contribution of UX there was a little less pure visual design and a lot more information architecture: deciding which pieces of information really, really need to be shown in the CLI, versus which ones we can link off to the website.

That was a very big decision component. In our world, there's basically a different user flow between remediation and vulnerabilities. I probably don't want to go too deep into it here. You might have multiple instances of the same vulnerable dependency across your tree, in which case you have one vulnerability multiple times. We call those vulnerable paths.

Once again, it took multiple iterations to get to these names. So you have multiple vulnerable paths, and when we talk about vulnerabilities, you want to consolidate those together. But sometimes you have a single dependency that, because it brings in a tree of dependencies under it, actually introduces four different vulnerabilities, different types, different vulnerable components. In that case, it's multiple known vulnerabilities, but it's really just one action, one remediation action that you need to take.
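As a concrete, hypothetical illustration of vulnerable paths (the package names below are invented, and the output shape is only indicative):

```
# One vulnerable package reachable through two routes in the tree:
# a single known vulnerability, but two vulnerable paths.
$ npm ls leaky-parser
my-app@1.0.0
├─┬ web-framework@2.1.0
│ └── leaky-parser@0.3.0      # vulnerable path 1
└─┬ markdown-widget@1.4.0
  └── leaky-parser@0.3.0      # vulnerable path 2

# Conversely, one direct dependency can pull in several distinct
# vulnerabilities, where a single upgrade is the one remediation action.
```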

Logically, we actually have a remediation workflow and a vulnerabilities workflow. Naturally, you want them to mesh together, but they just don't. Some of that acknowledgment was hard. Right now, I would say our web UI is based on the vulnerability workflow, and our CLI is based on remediation, acknowledging that when you're in the CLI, you're in that mindset.

You're in the mindset of, "I'm right there, I'm really close to my issues. My goal is to make this go away." You're not in a mode to soak up a lot of information. While on the web, you're much more in a "report" mode, much more in consumption mode. And we have a fix button, which is awesome: you hit the fix button, and it just makes the problem go away. But the fix button is less handholdy and a little bit more just automatic.

Steve: That's a really interesting point. We had an almost identical conversation with Sean Lee, the Head of Product Design at Docker, about how they differentiate their tasks between the web UI and the desktop app.

He said something almost the same, which was that for visibility and discovery of what's going on, they rely on the web UI. It's a great tool for visualization, but people still go back to the command line to get things done. That's where they feel comfortable and fast.

That seems to be a common thread among DevTools companies: we, by nature, go back to the command line to get things done, but nice visual interfaces can be great for discovery.

Guy: Yeah, and I think for me it was a learning experience. My natural inclination coming in was basically to create two different interfaces with the same functionality. You can fix via the CLI if you're more terminal-biased, or you can fix via the web if that's your preference. I thought we would do it that way.

Unfortunately, we didn't actually set that as a requirement, and I don't know if that's by chance or by design, but because of that, we just designed the workflows for each of those and learned about it after the fact.

Then I kind of beat myself up about it a little bit, because I built a product called AppScan earlier in my career, which is a web-app security scanner. This was back when we had to educate people about what cross-site scripting and SQL injection were.

Back then, lo and behold, we would find issues that had multiple variants, and so on. We had a vulnerabilities view and a remediation view, so we ended up in the same spot, which made perfect sense to me after we got to the same conclusion at Snyk. But somehow it escaped my mind ahead of time, so it maybe cost us a little bit of time.

Steve: The team composition point is also really interesting, because one of the things I worry about the most, as a designer of DevTools, is that there are certain kinds of solutions that I just won't come up with as naturally as a developer. Like going to a CLI solution, I'm just not going to think about that right away.

Design really has to be a team sport in DevTools companies where developers and designers are working together towards solutions.

It seems like more and more product companies are moving this way, to have designers and developers embedded on product teams, but it just seems like table stakes for a DevTools company. Like, you can't have the designers working in isolation, because they just won't come up with the best solution in every case.

Guy: I think, for us, it also necessitated some process. Our dev team, our entire team, is split between London and Israel. That was a decision we made because we have awesome people from different backgrounds in those two locations. And it was a good decision, but it costs us.

We're a small company and there's a cost, a cultural cost. We try to fly people around and all that. We've got video calls, but still. One of the things that forced us to do, and also maybe something that came from the London mindset, and maybe from Johanna and myself, is to still instill a certain amount of process and documentation and make sure that things are outlined and defined.

I think that gives us an opportunity to talk about UX a little bit more. We can see that the cases where we fall down, the cases where we get it wrong, are typically cases where we hacked something together, or where we built it a little bit more in one of the locations versus the other and just got it done. That's great in many, many aspects, but it also means there wasn't really as much of an opportunity to have the conversation.

Just the sheer need to explain, from one party to another, what it is that you're doing and why you're doing it, is very valuable. And I think today, with design and UX, we've succeeded. That's one thing I feel is successful.

It permeated through the company, but sometimes when you are excited, you just want to get something done or you just want to have functionality out. That process gives you an opening for it. Process is generally a bad word in my mind, but it is helpful sometimes.

Steve: There was a really interesting talk here at Heavybit by a product manager, talking about his presence at early-stage companies, and how his greatest influence on the company is just by sitting in his chair. He doesn't have to do anything or ask anyone to do anything; his presence just makes them aware that they need to document the order of operations and how they're going to do things.

I feel like design can be similar, in that it forces everyone to think about how they're going to do something before it gets done, if only for the benefit of the designer who has to work on it. Getting that as part of your process is really valuable, especially for early-stage teams, which can be really chaotic if they're not careful.

Guy: I've got to say that this happens not just at dev-tooling companies, and not even just at small companies. When I was a CTO at Akamai, we would sit in some of these exec team meetings and talk about components. It's funny because there, I was always the anti-process person, and that sort of became a bit of a running sentiment.

When somebody said that we needed to set a process for this, they would look at me and semi-apologize for it. I've got to say that over the course of those conversations, and now maybe implementing some of those learnings, I've learned to appreciate process where process is due. But yeah, sometimes it's the influence of an individual who represents a concept, an ideal, or something important, that is useful to get you to the right decision.

Steve: Let's talk about the ergonomics of security. This one really hits home for me, because we had an issue at Opsee onboarding people into the product. We had, I think, the easiest way of doing things, which was just asking for some Amazon API keys to get started with the product, but people bounced.

They didn't like it, they called us out on it not being secure, and so we changed it. It was more difficult, but it improved our onboarding.

More people were willing to take part, and so that was a really weird case for us where something that was harder for the user ended up being the better choice.

You told us about this term, "security ergonomics," so I'm interested to hear more about your view on it.

Guy: I think usability and security are often enemies, and sometimes you need to build something to make it simple. We mentioned some of these, two-factor auth and other examples. This concept extends all the way from tooling, such as token management or things that are sort of heavy DevTools, all the way out to a user browsing a website.

I gave this talk with Rachel Ilan Simpson, a designer on the Chrome team in Munich, who's amazing. We talked a lot about HTTPS, and in general about Chrome and how they handle security. We ended up doing this talk called "Security Ergonomics," talking about why users make insecure decisions. And I feel like some of those lessons actually extend across pretty much that whole breadth.

We ended up narrowing it down to three things. One is different motivations. So, when a user comes in and wants to take action, they are looking to take action. They're browsing through Facebook, they want to see baby pictures. Anything that you put in the way, it's just in the way. They're just going to push through. They might want to see their bank account.

You have to remember that users have different motivations, and you have to make sure that you help them get to that motivation. Sometimes security is the motivation. You've probably seen that at Opsee. They need to feel secure as they're doing it, but you have to figure out the motivations of the user taking the action.

The second is lack of expertise. You can see a lot of that. If you clicked one of those yellow warnings around the lock in the Chrome browser, you would see that it's yellow because the SHA-1 algorithm was deprecated. I can count on one hand the number of people who understand that statement and would know what to do to act on it. I'm a security expert, and I'm not entirely sure what I'm supposed to do when I see that message.

You have to understand that users don't have security expertise, and try to help them make the right decision, even if you give them the alternate decision. Try and sort of guide them.

A great example in browsers is around disabling, or making it hard, all but impossible, to click through the security exception, the bad-certificate prompt. Browsers have really evolved how they handle that, and while it's still technically doable, they've managed to get drastic reductions in click-throughs.

Then the last bit was this notion of forgiveness as a continuum: when somebody does make an insecure decision, try to walk them through it. Can they start with something that's a little bit less secure, and can you evolve them from there? Is this something that can be temporary?

In Snyk, we do something around allowing you to ignore a vulnerability, but by default it ignores it for 30 days. We had to let you ignore a vulnerability, because sometimes there's just nothing you can do about it. If we wanted to break the build, or act as a gatekeeper around those vulnerabilities, we had to give you a way to get through.

But at the same time, we don't want to make it too easy to ignore a vulnerability, so we built in this option that forces you to give a reason. Just text, nothing special, for your own audit purposes, and then you can ignore it. And by default, we would ignore it for 30 days.

You can go into that file, edit it, and push that expiry far out, but hardly anybody does. In 30 days it'll bug you again, and hopefully by then there will be other remediation options, or maybe the third time it bugs you, you'll actually take action and swap that dependency for something else.
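A sketch of what such an ignore entry can look like on disk, assuming the policy file the CLI writes for you; the vulnerability ID, dependency names, and exact field layout here are illustrative rather than taken from a real project:

```
$ cat .snyk
ignore:
  'npm:leaky-parser:20160930':            # hypothetical vulnerability ID
    - web-framework > leaky-parser:
        reason: No fix available yet; input is not user-controlled
        expires: 2016-11-01T00:00:00.000Z  # defaults to roughly 30 days out
```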

I think the notion of forgiveness as a continuum, how you walk the user through a path of security, and that it's not all black and white, is very useful. Just different concepts for it. You can dig it up, "Security Ergonomics," Rachel Ilan Simpson. You'll find some good visuals and some learnings from Chrome about this.

David: I think it's really interesting. Also, the concept of trying to influence user behavior through design around security. You mentioned the HTTPS thing, changing from the green lock to the red unlock thing, just a very subtle, "We're now changing what is the default behavior."

It seems like your product does that in some ways, as well, as you're going through the test process. Actually being able to influence future behavior, for example, being able to ignore a vulnerability temporarily.

Guy: I think in most of these tools, we build in expertise. Like in Convox, you build in expertise about all the mess that happens behind the scenes, which I don't want to understand if I absolutely don't have to. But sometimes I have to make a decision.

It's useful for me, for you to tell me what the right decision is here, right now. If you can just make it for me, that's even better. But if you can't make it for me, then at least point out the trade-offs and put me on the default.

We try to build towards that, through severities that tell you which issues are more important than others, and through the default option that's selected in the wizard. Maybe you could patch this vulnerability, but there's an update available, so you should update; the default is to update. If you just hit "enter, enter, enter," you'll be updating, which is the right thing in our minds. All sorts of small decisions like that.

Steve: I remember the fairly famous case of ATMs, and how they used to give you your money before they gave you your card back, so people would just walk away without their cards. Now, thankfully, at every ATM I've been to lately, they give you your card back before the cash comes out, so you can't make that mistake.

Those forced behaviors and sensible defaults can go a really long way to correcting and influencing the behavior you want from people,

and especially for security, that seems incredibly important.

Guy: Exactly, and there's empathy for the user as well: try to put yourself in the user's shoes and understand why they're doing what they're doing. They're not idiots, though in security it's just so easy to treat them that way.

When you go to security conferences, in general, I would say the mindset is oftentimes less than helpful. You come back from a security conference, you kind of want to curl up in a corner and cry. Oftentimes, the conferences are all about all the things that are broken and how you can break it further.

Whereas when you look at DevOps conferences, they're helpful. They're about what you can do to make it better, and they're all about the blameless pieces. I think for security, as is the case with many of the other pieces, having some empathy for the user, understanding why the user is making this insecure decision and what you can do to reduce that likelihood or make it not happen, is very valuable.

Steve: All right, thanks again to our guest, Guy Podjarny, for coming by. It's been awesome talking about security and UX.

Guy: Thanks a lot for having me. This was a blast.

David: Thanks for coming.

Steve: If people want to get in touch with you online, where can they find you?

Guy: They can look for SnykSec on Twitter, or Snyk.io, and myself, I'm just Guy@Snyk.io.

Steve: All right, thanks.