The Secure Developer
38 MIN

Ep. #18, Collaborative Security with HackerOne’s Marten Mickos

about the episode

In episode 18 of The Secure Developer, Guy meets with Marten Mickos, CEO of HackerOne, a platform for vulnerability coordination and a bug bounty program that helps developers test and build more secure systems.

Marten Mickos is CEO of HackerOne, a global platform for hacker-based security that includes vulnerability coordination and a bug bounty program. Before HackerOne, Mickos served as CEO of both MySQL and Eucalyptus Systems, and as SVP at both Hewlett-Packard and Sun Microsystems.

transcript

Guy: Welcome, everybody. Thanks for tuning back in. Today we have an amazing guest, Marten Mickos from HackerOne. Thanks for joining us on the show today.

Marten: Thanks for the invitation, Guy.

Guy: Marten, you've done a lot in your career before HackerOne, so I have a lot of questions and topics I want to talk to you about. But before we dig in, give us a brief history of time. What were your key activities, and how did you get to being CEO of HackerOne?

Marten: Yes, I'll start with the present, meaning HackerOne. We hack for good. We organize bug bounty programs, vulnerability coordination programs, and crowdsourced pen testing. That's what we do. I've been CEO for nearly three years now.

Many people know me more from what I did before that, which was in open-source software. And most people know me from having been the CEO of MySQL, the first and the last and the only CEO, from 2001 to when the company was acquired by Sun in 2008. And then I stayed another year.

I come to the whole topic of security from the world of developing in a collaborative way, developing infrastructure software.

And I can admit that when I joined MySQL and even before that, I didn't care about security. I couldn't spell the word "security." It was not on my radar. When we now look around and find all this software that's not secure, I know where it came from. It came from people like me.

So now I'm here to fix that problem, repent and get everything in shape. And ask for forgiveness for whatever I've done wrong in my life.

Guy: It's always good to have empathy for your users, for the people coming in. You've been in that position; you probably will be again. Even among the most security-minded experts who talk to security companies, oftentimes you'd find that in the development process there still is not enough attention given to security practices.

Marten: Not nearly. I mean, it's terrible. It's completely terrible. But we don't care whether it's terrible or not. We know we can bring positive change. We'll do a little bit of change or a lot of change, whatever it takes, but time will cure every problem.

Guy: Yep, it starts with caring, right?

Marten: It does. But we must know, this is true both in open source and in security, that those people who care the most can also be the most difficult to work with.

The power of a collaborative model is that you can collaborate without agreeing.

You agree on the mission but you may disagree on a lot of details, yet you can collaborate. That is actually something that is a passion of mine. It's something that drives me to figure out, how do you get people to work together who don't really agree on anything?

Guy: A lot of what you do is in this world of bug bounties. Today with HackerOne, you deal with bringing in this community of people looking to find vulnerabilities, an action that's generally frowned upon on the legal side. And the world is evolving to accept finding vulnerabilities in someone's code and reporting them.

You're trying to make all of that a positive action. So I think a lot of this comes down to collaboration around a fairly finicky topic. It's almost like receiving feedback. Let's dig into that a bit. Can you tell us a little about what a bug bounty is, and then we can talk about these complexities?

Marten: Bug bounty is about paying somebody for finding the weaknesses in your own software. It's emotionally very hard to get there because you have to tell the world that you are not perfect and that you would like to hear the bad news.

You must have this mindset of saying, "Bad news is good news. Tell me about all the shit, and I'll fix it." It's not nice to say that. Many people don't go for their medical checkups for that reason. They don't want to know.

You have to have the readiness to want to know. But once you do that and you tell the world that you are interested in input, then you'll get it.

There are hundreds of thousands of white hat hackers in the world. If you tell them that you would like to know what's wrong with your system, they'll tell you.

Fortunately, we have a model here where we pay money to those who find it and we pay based on the severity of the find. So there's a baked-in business model that works beautifully so the best hackers get the best payments over time.

Not in every single instance, but it's a very fair system where I pay for you to tell me what's wrong with me. And the worse the problems are that you find, the more I pay you. But I pay nothing if you find nothing.

Guy: Right. Very much the success model there. Bug bounty is an interesting model and I'm a fan. We use it here at Snyk as well.

How do you find people's approach to it? When you have those first conversations with somebody that has not run a bug bounty program, what types of questions or objections come up?

Marten: The first question typically is, "Okay, how do I know that I'm not inviting criminals if I open a bug bounty program?" When we get that question, we stop, we look at our customer and we say, "How do you know that you don't have criminals attacking you right now?"

Then we discuss the notion that criminals don't wait for an invitation. They hack whatever they like. So what you do in a bug bounty program is you invite, additionally, the good guys to hack.

For many, it still feels awkward to say, "Please come and hack me." But then when you think about it rationally, you realize that yes, it can only bring a positive change.

It can only improve the situation. And it's true, the bad guys may be hacking me right now. So that's question number one. It's sort of philosophical, and even emotional.

Guy: It's a really good point that's not intuitive, which is like you're saying, "Come hack me." But really, again, the people that weren't waiting for the invitation would have been doing that already. So it's not really that much of a change.

Marten: And we don't give any benefits to the hackers other than if they find something. So a criminal would never sign up with us because they can't gain anything. On the contrary, we will know who they are. We'll know from where they hacked. We'll know their identity if we're paying them a bounty.

Guy: Is there a difference around the monitoring of it? If I were playing devil's advocate, I would say: when you're monitoring your system, you're looking for these types of attacks, and you try to block them, get alerted, and respond to them.

When you turn on a bug bounty, do you lower those defenses? Is there anything that makes you more susceptible?

Marten: Oh, that's a great question. Technically, yes, some companies do that, if they have very strict monitoring of their attack surface and see every attempt coming in.

When they run a bug bounty program, they may actually whitelist some IP addresses. They may ask the hackers to come through a VPN so they can see it happening. That's technically elaborate and it works. But what we must remember is that the security hacking we do works best when it's diverse and free.

You get the best hackers if you don't put such restrictions. Because the best hackers, they don't want to mess with the VPN.

So it turns out that the best programs are actually open to everybody and don't require a VPN. But that's up to our customer to decide.

Guy: How often do these programs run on the production sites versus running on some sample site or some side sites?

Marten: Nearly all of our programs run production code on production sites, and there's a good reason for that: we want to find the exact vulnerabilities that otherwise could have been used by a criminal.

There's no point in finding vulnerabilities that don't exist in production. So it's best to hack production.

But we also have exceptions here. For instance, one of our customers is Panasonic Avionics, the world's largest maker of in-flight entertainment systems. We hack them on a test device. Not in an aircraft, not while in flight. So yes, there are many situations--

Guy: Of safety concerns that trump those advantages.

Marten: Exactly. But most of it is live in production: websites, mobile apps, APIs.

Guy: Okay, cool. I cut you off a little bit. We talked about the first objections being that one.

Marten: The next one, which is a very valid one, is sort of sad but valid. We have companies who say, "Okay, maybe you can tell me about my vulnerabilities. But guess what? I already have 100 that I'm unable to fix. So why would I ask for more?" That's a more serious question in the sense that, you're right, if you can't fix them, why would you even bother to know about them?

The way we discuss it, we say: if you can fix only 100, or only 50, or whatever your capacity is, make sure you fix the most severe ones. So you should run a program, state in the program that you're focused only on the very severe vulnerabilities, and make sure you're not spending your capacity fixing low-severity stuff that doesn't matter.

That's one. Secondly,

if you are unable to fix the bugs in your code, then you have a bigger problem that you need to fix anyhow.

You need to deprecate the code. You need to get rid of your vendors, you need to hire more software engineers. I don't know what you have to do, but if you are unable to fix vulnerabilities and bugs in your code, you are not on a path to success.
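To make that prioritization concrete, here's a minimal sketch, with an invented report shape, CVSS scores, and thresholds, of picking only the most severe reports when fix capacity is limited:

```python
# Hypothetical illustration of severity-first triage under a fixed
# remediation budget; the Report shape and all numbers are invented.
from dataclasses import dataclass

@dataclass
class Report:
    title: str
    cvss: float  # CVSS base score, 0.0 (informational) to 10.0 (critical)

def pick_reports_to_fix(reports: list[Report], capacity: int,
                        min_cvss: float = 7.0) -> list[Report]:
    """Keep only high-severity reports, worst first, up to capacity."""
    severe = [r for r in reports if r.cvss >= min_cvss]
    severe.sort(key=lambda r: r.cvss, reverse=True)
    return severe[:capacity]

backlog = [
    Report("Reflected XSS in search", 6.1),
    Report("SQL injection in login", 9.8),
    Report("Open redirect", 4.7),
    Report("RCE via file upload", 10.0),
]
for r in pick_reports_to_fix(backlog, capacity=2):
    print(f"{r.cvss:4.1f}  {r.title}")  # prints the RCE, then the SQLi
```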

I believe that soon enough, governments will pass laws stipulating that every company must be capable of receiving vulnerability reports and capable of fixing them. Otherwise they will not be allowed to carry consumer information. We're not there yet, but I think we're heading there.

Guy: That's an interesting direction. First of all, fully agreed. You want to have the information and you want to know about all of the vulnerabilities so you can prioritize accordingly, even if it's a little bit harder to prioritize a long list versus a short list.

Maybe getting a little bit more into, indeed, that remediation process, who do you see engaged? Somebody comes along, gets a bug bounty report, a vulnerability has been reported, what kicks in in the organization? Does it go to the developer, does it go to some security triage, who receives it? What's the path to remediation that you see most often?

Marten: In the most beautiful case, a report comes into HackerOne, you click one button, and it moves over to Jira, if Jira is what you're using. It's passed over to software engineering with high priority, and they'll start fixing the bug.

When they fix it, they mark it as fixed, it comes back into HackerOne, and we know that the vulnerability has been removed. That's the most beautiful execution here.

There are many other aspects to it. You don't need to use Jira; you can use whatever tool you'd like. Once you've fixed it, you should collect information about your fixes, go back to the software design stage, and make sure you don't create the same sort of problems again.

If you can look back into your software architecture, your software design, your choice of libraries, your choice of frameworks, and then you say, "Okay, if we are getting these vulnerabilities all the time, let's change something in how we code." That's how an organization will evolve to a new level of security.
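As a rough sketch of that handoff, here's what the one-click-to-Jira step could look like as code. The incoming report payload is a hypothetical dict, and the instance URL, credentials, and project key are placeholders; the endpoint shown is Jira's standard v2 REST call for creating an issue, but verify the details against your own instance:

```python
# Sketch: turn a vulnerability report (hypothetical shape) into a
# high-priority Jira issue via Jira's REST API.
import requests

JIRA_URL = "https://jira.example.com"   # placeholder instance
JIRA_AUTH = ("bot-user", "api-token")   # placeholder credentials

def file_jira_issue(report: dict) -> str:
    """Create a Jira issue from a report and return its key."""
    payload = {
        "fields": {
            "project": {"key": "SEC"},        # assumed project key
            "issuetype": {"name": "Bug"},
            "summary": f"[vuln] {report['title']}",
            "description": report["details"],
            "priority": {"name": "Highest"},
        }
    }
    resp = requests.post(f"{JIRA_URL}/rest/api/2/issue",
                         json=payload, auth=JIRA_AUTH, timeout=10)
    resp.raise_for_status()
    return resp.json()["key"]  # e.g. "SEC-123"
```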

Guy: Do you see that learning in practice? In concept, for sure: issues get reported, they get remediated. Do you see a decreasing trend?

If somebody got a dozen reports about cross-site scripting vulnerabilities and fixed them, they're hopefully already in a better state than before. Do you see the frequency of these cross-site scripting type issues decreasing over time?

Marten: We do see a difference. It's a little bit like when you make popcorn in a microwave oven. In the beginning it pops a lot, then it's less and less and less, which is a good sign. You know that it all has popped.

It's a little bit the same with vulnerabilities, that you will have more in the beginning and it will slowly start shrinking. Of course, you push new code, so that brings them back. But still, with well-behaved programs, we do see it going down.

Some of our most software-conscious customers, who really pay attention to this, show a clear trend of cross-site scripting bugs going down, because they are systematically eradicating them from their frameworks or from their software development environment.

Guy: I guess the beauty of it is, if it's harder for the hackers participating in the bug bounty program to find a vulnerability, it will be similarly hard for an actual attacker to find it. So those bugs that remain are harder to find, and indeed harder for a real-world criminal to exploit.

Marten: And what do you do then, if you run a program? You start increasing your bounty values. You can start the program by saying, for the highest severity, we pay $5,000. Then after a while, no, $10,000, $15,000, $20,000, and you keep going up and up in price as the number of vulnerabilities goes down.

This is how you ensure attention from the hackers. Because they know that there are fewer possible bugs to find, but they also know the price is increasing.

So the bounty's increasing, they stick to you.
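To put toy numbers on that escalation, here's an illustrative sketch; the dollar amounts and thresholds are invented, not a real pricing model:

```python
# Toy model of the escalation idea: as fewer valid reports come in,
# raise the top bounty to keep hackers engaged. All numbers invented.
def next_top_bounty(current: int, valid_reports_this_quarter: int) -> int:
    if valid_reports_this_quarter == 0:
        return current * 2           # nothing found: double the prize
    if valid_reports_this_quarter < 5:
        return int(current * 1.5)    # findings drying up: raise it 50%
    return current                   # still plenty to find: hold steady

bounty = 5_000
for found in [12, 7, 4, 2, 0]:
    print(f"found {found:2d} valid reports -> top bounty ${bounty:,}")
    bounty = next_top_bounty(bounty, found)
```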

Guy: Crowd economics in action there.

Marten: Exactly. Now we take the next logical step. It means that when you look at bug bounty programs, the highest bounty they pay is a measurement of their security posture and their security hygiene. It's only the ones with good security that can afford to pay high bounties.

Guy: And then expect, at the end of the day, a small number of valid reports. Otherwise you're going to go bankrupt.

Marten: Exactly, exactly.

Guy: Cool, so this is bug bounties. As I said, I'm a fan. I think they're working well, so people are starting to open up to it. This concept came from the worlds of Google and Facebook, originally the top-tier companies, and companies like HackerOne make it accessible, allowing the non-giants, with a reasonable effort, to go off and open that up and expand it. I fully recommend everybody use it.

What if you're not a commercial entity? MySQL, even that one, had at least a financial entity behind it. What if you are an open source project, a not-for-profit? What do you do then?

Marten: Well, as the former CEO of MySQL, I have to say MySQL was a commercial entity and we figured out one of the most fantastic business models for open source back in the day, which we're very proud of. We managed to combine the good ethos with making money.

But of course, there are many open source projects that have no budget. Fortunately, the world is seeing the value of this now. So there are several initiatives in play now to support them in their bug bounty programs.

We have started, together with other companies, an initiative called IBB or the Internet Bug Bounty. It's a nonprofit. We collect donations from wealthy companies and institutions and then we turn around and use that money to sponsor bounties for open source projects who can't pay for the bounties themselves.

So that's a way of getting security hackers to focus on open source projects. But actually, the situation is even better.

Many hackers have such a strong belief in transparency, and therefore in open source software, that they'll do it for free.

Many of them say, "I don't need any bounty. If this is an open source product, I'll hack for free. I'll report my vulnerabilities for free. If you pay me a bounty, I'll give it back to you or I'll give it to charity." So there's a lot of good will in that space and IBB is just one. There are other similar initiatives.

The problem is not the funding. The problem is typically to find the project maintainers who will take the time and have the discipline to actually fix old technical debt.

Because we know that it's much more fun to develop new stuff than to fix the old. So having the discipline to go back and fix something that you created with good intent but it just wasn't perfect, that's the real bottleneck.

Guy: Fully agreed. We've seen this with our report, The State of Open Source Security, where we surveyed a bunch of maintainers. Generally speaking, most people just have no idea how to approach security. They don't have a disclosure policy on their project. They've never audited their code.

Most of them, when asked, and granted, there's probably a bit of a selection bias among those who even chose to answer the survey, claim that if an issue were reported, they would reply. I forget the stat, but a substantial amount of them talked about replying within a week or within a month.

But I think none of them really consider that at volume. They don't consider how they would approach it then.

Marten: That's where I would say the Linux Foundation is doing wonderful work. They have the CII, the Core Infrastructure Initiative, which is there to help open source projects do the maintenance of the code and have people on staff who are ready to fix things.

Because we should never make it unattractive to produce new open source code. If we put too many obligations on people, they'll not do it. So we have to let it be really happy and fun and positive.

But then, we must also have the discipline side and say, "Okay this library or this product is now so common and so important that we need to hire full-time maintainers who will take joy in maintaining the code," and it's doable. It's absolutely doable.

Guy: For those open source projects that don't address an issue, do you make those vulnerabilities public? Say a vulnerability has been found via a bug bounty program and responsibly disclosed to a maintainer, and they, for potentially good reasons, including lack of bandwidth, did not address it. Do those bugs make it to the public eye?

Marten: I think it's a question not about open source, but in general. Meaning, if a vulnerability has been found and the owner of the software does not take action, what should other people do? I used to think that you have no right to impose anything on anybody. You should just keep it secret. Now I have changed my own viewpoint.

I do believe that it is in society's interest that we publish it. And I would point to Google's Project Zero. They do security research. They find vulnerabilities, they report them to the owners of the systems. But if the owners don't do anything, they will go and publish it unilaterally.

So there you see a commercial entity like Google deciding to do that, because they think it is in the interest of our digital society. And I tend to agree today:

if you produce code and it's used by many people on the internet, then you have a responsibility for the collective. And if you're not ready to take that responsibility, then keep your code for yourself.

Guy: Yeah, I agree. I think, fundamentally, security through obscurity just doesn't work. If you're going to hide it, it's not going to stay hidden for long. The attackers, the bad guys, are well-incentivized and invest much more in finding those issues and exploiting them, versus the defenders, who oftentimes are looking for easy pickings.

So by making an issue known, it becomes something much more within reach for somebody to protect themselves against: ideally because there's a fix and you just need to adopt it, but sometimes by other means if it hasn't been fixed, or at least by assessing its impact on your systems.

Marten: You and I will agree that openness is great and collaboration is great and it's the only way to achieve security but the world doesn't agree with us yet. The cybersecurity market is $100 billion a year and the majority of those dollars go into products and services that are secret, not collaborative, not sharing anything, and not having any transparency in what they do.

The world is wasting a lot of money on old-school security practices and products that just don't cut it in today's digital world.

Guy: Let's talk a little bit about the world, this evolution of the world to accept, at first, bug bounties, and subsequently transparent security. The whole DevOps evolution/revolution came about aspiring to a transparent environment, where people have these blameless environments, these blameless cultures.

We talk about it, and yet in the security world, every time there's a breach, the first two lines are, "So-and-so got fired." It's almost like the knee-jerk reaction is to fire somebody. Then it's scary to stand up on stage and talk about a breach that has happened, or even one that has nearly happened, and share those results.

The bug bounty element of it, vulnerability disclosures, I feel, and maybe I'm a little bit biased, is on a positive trajectory. People are increasingly embracing it.

What drivers do you see pushing us forward in this momentum, getting this world of cyber to embrace openness, to embrace transparency? Do you see that coming about?

Marten: I think you said it. You said "blameless." It is so important and we can learn it from the airline industry. In airline safety, they have a blameless attitude. They will never blame anybody for any mistake and they'll share the information across all competitors. That is why flying is so safe today.

We, in our ignorance or stupidity in the software world, didn't apply the same rules. We should, and I'll give you a concrete example. It's not security but it's about being blameless.

GitLab, you know the company, they had an outage. More than an outage, they mistakenly deleted their production data. When that happened, they made a decision to go completely open about what was going on.

So they created, I forget what it was, maybe a Google doc that was live where they shared with the whole world how they were dealing with the issue. And they were completely blameless.

They talked about "developer number one" and "developer number two" and one of them had deleted the data. They never put blame on these people. They never said who it was. They just said this happened, these are great people.

It was terrifying to see the moments, minute by minute, hour by hour. But afterward when they got everything back and resurrected the site and people were back online, they had so much good will from their audience because they hid nothing and they blamed nobody.

If we can take that single example from GitLab and apply it to software security and other aspects of the software development lifecycle, we will be in much better shape.

Guy: So, challenging that a little bit: security is still different. They deleted the data, they didn't expose it to somebody else. Do you think this notion of trust through transparency would extend far enough to win points for an entity that has, not lost, but exposed our data to an attacker?

Marten: I actually think yes. Blame may be a natural instinct, and punishment is very typical; some countries punish more than others. I don't think it helps.

If you fire a security person every time something goes wrong, you'll have to fire a lot of people and you'll have to hire a lot of people. Do you really want to do that? And the ones you hired are people who got fired from some other job. You won't find a blameless person anywhere.

So if we could just settle and say, "Okay, guys and girls, we have all screwed up, we are all fallible, we are all vulnerable. Nobody's perfect. Let's not blame each other, let's do our best."

Of course you must have good intent. If somebody fails with bad intent or through sloppiness, I mean concrete, really bad sloppiness, then it's a different case. That's a case of negligence and we have to take action.

But if somebody makes an honest mistake, we have to know that that's how human beings work. If we don't want honest mistakes anywhere, we should employ robots and just have AI-produced software, and then we won't be needed.

Guy: That's a different story there. I agree with you. I think, fundamentally, trust is the only asset, the only currency we'll eventually have. Breaches will happen, and granted, you want them not to happen, or to happen rarely, and you don't want them to happen to you, but--

Marten: Maybe trust is the only asset we ever had. How do you know? Could be that trust was the number one thing 3,000 years ago and it still is.

Guy: I think today, as there are more of these breaches, with major, presumably trusted brands leaking their data or supposedly making security mistakes, because everybody's fallible, it's not necessarily about "they've done it." The way they've responded to it is massively important, and determines whether people will subsequently go back to them and trust them with their data again.

Because they dealt with it. You know that if they leaked your data, first, they will tell you; second, they had all the good intent to prevent that from happening in the first place, and had your interests in mind, not theirs; and lastly, they'll learn from it and do a better job.

Marten: Look at Equifax. It's sad to have to mention a particular company, but they had many vulnerabilities reported to them. They refused to take action. Then when they got breached, they refused to take responsibility. Then when they took responsibility, they said, "We've fired the people in charge." Then it turned out that it was even worse than what they originally said.

So there you see step after step after step of ignorance, arrogance, negligence, all these bad things. And you realize how much better it would be if, when bad things happen, you admitted it to everybody and said, "Okay, we've completely failed. Here's where we are. Please help us." And people will help.

We have 200,000 hackers signed up for HackerOne. They are ready to help if somebody asks them to help.

Guy: I think Uber and its previous CEO were probably good examples there as well, or bad examples, rather. Things were hidden for a year, and I think a lot of that mistrust is coming up. And as the new CEO states, that's part of their goal: yes, improve security, but fundamentally, improve trust in those elements.

What type of role do regulations play? GDPR and a lot of these new regulations are coming into play, and really, GDPR is the big sledgehammer. They're driving a lot of protection, some constraints around exposing data, and disclosure to customers when data has been leaked.

But it doesn't necessarily state exactly how you need to do it. Again, bug bounty is one of those means, some acknowledgement. Do you see legislation or governmental activity promoting these types of practices, being a bit more prescriptive? So that you need to have a bug bounty program to qualify or score at a certain level?

Marten: We're getting there. We're getting closer to it. It's not good if legislation micro-manages things, so it needs to give a broad enough mandate. But I do believe that governments need to say that any organization owning or holding consumer information must have the ability to protect it in an appropriate way.

One of those abilities is receiving vulnerability reports from the outside and then enacting software fixes. We should mandate it for everybody. Have governments done it today? No, not fully, but we're getting there.

In the U.S., the Department of Justice has published a framework for vulnerability disclosure programs. "Here's how you do it. If you're interested, do it like this." So they're saving a lot of time and money for customers.

NIST has published their security framework, which is excellent. The FTC is recommending this to every consumer-facing company. So we're getting there now.

In the U.S., they are passing laws now requiring the Department of Homeland Security to try out a bug bounty program. Not all of these laws are perfect in how they're written, but they all drive in the right direction.

I agree with you on GDPR. Although it's, in a way, a monster, and people were afraid of what would happen, it was the right mechanism. Here a bunch of governments, the EU states, are saying, "We're done with this thing where vendors don't take responsibility. You are responsible. You must notify if you have a breach. If you don't, we will take a percentage of your revenues."

It's harsh, but I think that's what we need in today's world. All I can say is that cybersecurity is in a really sorry state right now, but I do think the ship is already turning.

It will take time to fix everything but we can see how decision makers on the public and private side are agreeing that we must take resolute action.

Guy: I agree and I love that a lot of these elements fundamentally boil down to transparency.

Marten: Yes.

Guy: They boil down to, "Accept that you're imperfect, and be able to have people attack you and report issues." They boil down to: when there has been a breach, which is reasonably likely to happen, you have to own up to it, share it, and inform the people whose data has been leaked.

Marten: But Guy, those of us who drive transparency and promote it in the world, we know that we have to keep doing it forever. Transparency doesn't survive on its own. It has to be supported. You have to bring it to new worlds.

We had open source software which was a huge movement. Now we need to take transparency into security.

We need to take openness from open source into open APIs. We have to go to open data. All of this requires pioneers to drive it, demand it, rally people around it. Because if we stop, if we get complacent--

Guy: The natural instinct is, "Don't talk about it and just hide it."

Marten: So we must. At HackerOne, we define it as one of our company values: we say, default to disclosure. Meaning, unless there's a very good reason not to disclose, we will disclose. Whatever it is.

Not just security things, but anything in the company. We are driving a culture of openness, and I know that it takes daily discipline and commitment for it to stay.

Guy: We started from the practicalities of bug bounties and their components, and went a little bit high into the stratosphere to talk about how society changes to address this.

Coming back down a bit, let's take a moment to talk about the other side of this. We talked about the recipients of these reports, how companies evolve and own it. Who is on the other side? Who are the people that you see coming in and trying to hack?

Marten: The hackers.

Guy: The hackers who are participating in finding the vulnerabilities?

Marten: We now have 200,000 individuals signed up on our network saying, "I am ready to hack." And of course, not all of them will hack and some of them may be fake accounts or I don't know what. It still shows a huge interest in the world to be an active white hat hacker.

So we look at that group and say, "Who are they? Where do they come from?" because we have never really published a recruitment ad for this. We just say, "If you're hacking, sign up with us," and now we have 200,000 of them. Many of them are young. The youngest are 14 years old. They can be old as well, but half of our hackers are between 18 and 25.

They are all over the world where you have a good level of basic education, mathematical and STEM education, where they have a reasonable understanding of the English language. That's where we get them from, typically from big cities where young people don't have that much else to do, and they typically have security as an important part of their life.

They may be studying it, they may be working as a security person in a company, they may be a pen-tester somewhere, doing it as a day job and then additionally

they hack to maintain their skill and get the thrill of finding a bug and the social aspect of talking to other hackers.

Guy: Do you see bug bounties used as an education entity? Do you see developers wanting to get into security, register to HackerOne, to try out, to get some real life experience or even better, companies that build some sort of training program that revolves around finding vulnerabilities through these bug bounties?

Marten: That would be the best. We sometimes say that

some of the best hackers are also developers, so they understand how software is developed and vice versa. The best software developers also understand hacking.

So we would very much encourage and welcome software developers to try out hacking on our platform and hackers who are on our platform to learn about software development because it just increases their skill.

When it comes to education, we work with universities today. For instance, UC Berkeley has a course, Cyberwar, I think is the name of it, where every student, in order to graduate, must sign up with HackerOne and submit real vulnerability reports. Otherwise, you can't pass.

So we're seeing now a great advancement in the learning and the blending of the two. Because here, I'm sorry I'm getting philosophical again, but

HackerOne and bug bounty programs aren't so much about security as they are about the software development lifecycle.

In the ideal state when all of this works, a bug bounty program is just the logical last step of the software development lifecycle and it feeds back into the beginning.

When we get there, it will be beautiful. Nobody will be 100 percent secure at any point, but we will be much closer to 100 than we are today.

Guy: I think of bug bounties oftentimes as continuous monitoring. One of the problems with security is it doesn't have a natural feedback loop. There's no bar that shows your performance. With something like CPU cycles, you can see degradation.

You can see how things become worse over time, you can anticipate a problem, you can set some alerts, and it can go back. Security tends not to hurt until it hurts really badly. There's no natural element there.

In the world of DevOps and the concepts of continuous everything,

you want some ongoing monitoring that shows you whether you're getting better or getting worse. And I think an active bug bounty program is one indication of that.

Not as live as a CPU cycle, but an indication of how many reports you're getting in a certain period of time. Once you've established some status quo, it should be a red flag when you deteriorate. You need to be able to explain: I shipped new software, my CPU cycles went up, I understand it, I accept it.

But if you haven't shipped some major new functionality and you have an uptick in new vulnerabilities being discovered, maybe something's wrong. Maybe you need to go back and invest in security training or in security controls in your system.
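One way to picture that red flag: a minimal sketch, with invented weekly report counts, that baselines the report rate and flags weeks that spike well above it:

```python
# Sketch: treat bug bounty report volume as a monitoring signal.
# Baseline the last `window` weeks, flag weeks more than `threshold`
# standard deviations above it. The counts below are invented.
from statistics import mean, stdev

def flag_spikes(weekly_counts: list[int], window: int = 8,
                threshold: float = 2.0) -> list[int]:
    """Return indices of weeks that spike above the rolling baseline."""
    flagged = []
    for i in range(window, len(weekly_counts)):
        baseline = weekly_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and (weekly_counts[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

counts = [4, 5, 3, 6, 4, 5, 4, 5, 4, 14, 5]  # week 9 jumps
print(flag_spikes(counts))                   # -> [9]
```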

Marten: Very true. You said it. The natural feedback loop for security in software. That's absolutely true.

Guy: This was fascinating, and we could go on and on, but we're running out of time. Before I let you go, I'm going to ask you one last question that I like to ask every guest. If you had one pet peeve around security, or one word of advice that you would give a team looking to up-level, what would that be?

Marten: It's not one. I'll package many as one. But first of all, we engineers and nerds, we always think technology is everything, that technology is the solution. That's incorrect.

In security, humans are the solution, and not just the hackers who find things but humans who take security seriously.

I always tell people that I find there are two things that build security, and maybe only those two things. One is discipline. You must be disciplined about what you do. It's not about whether you did it once; it's that you do it every single time and never fail to do it.

The second thing is agility, doing things quickly. Because when shit happens in security, it's all about how fast you can respond. When you have those two principles, you don't need to worry about all the technology the vendors are trying to sell you, because you will be able to build a very strong security posture based on those practices, which are grounded in what human beings do.

That's the good news here. We don't need all that hardware to make ourselves secure. We just need human beings who are passionate and committed to it.

Guy: Marten, it has been great having you on. Thanks for coming on.

Marten: Thank you, Guy. This was wonderful.

Guy: Thanks, everybody, for tuning in and join us for the next one.