Marten Mickos
Building Securely for User Privacy

Marten Mickos is the CEO of HackerOne, the world's leading provider of hacker-powered security.

It’s no secret that demands on developers continue to grow; it’s not enough to know where and at what scale your software runs, as it now also falls on devs to make sure it’s running securely. With devs shipping code changes more often than ever before, the risk of revealing sensitive information has also increased, so it’s critical to have a clear understanding of your product’s security surface and how every change can potentially impact user privacy.

In this Speaker Series presentation, HackerOne CEO Marten Mickos discusses app sec strategies for building securely and ensuring user privacy from the earliest stages of your company. He also provides examples of different security models and processes at later-stage dev and enterprise software companies that you can integrate into your own company.

Thank you, everybody. Hello, I'm Marten Mickos, I'm the CEO of HackerOne. I come back to Heavybit Industries because I think it's one of the coolest places where serious startups can grow and become something. I'm trying to be useful here. The last time was, I think, seven years ago, which is amazing. I spoke about open source business models, and still today, if somebody asks me about open source business models, I point them to that presentation, because somehow we got everything right back then. I recommend it to anybody who is interested in that. Since then, I've moved on from open source to a similar model in security. I see it as similar in the sense that in open source, a lot of people get together, openly disagreeing, but contributing to something, and value is created and you can sell it to unsuspecting enterprises.

And we do the same at HackerOne in terms of security, in that we run bug bounty programs for companies and we have a community of 600,000 ethical hackers who are ready to hack you at a moment's notice for your benefit. The best way to avoid getting hacked is to try to get hacked. And when we hack you, you'll be thankful, because if you're going to be punched in the face it's better to be punched in the face by somebody you can trust. So that's the business of HackerOne.

User Privacy & AppSec for Developer and Enterprise Software Startups

I'm not here today to talk about HackerOne's business, but I'm happy to answer questions about it later. I joined four years ago as the CEO, so I'm a complete newbie in security. I don't know much about security. I will talk to you about AppSec and privacy in a way that I hope is relevant and concrete, but on a conceptual level. I won't be telling you the exact tools you should use or the exact commands you should apply here or there. I don't even know what they are, but I hope I can be useful to anybody who is building a serious business, who wants it to be safe and secure, who cares about digital trust, and who actually cares about our digital civilization. We have built a digital civilization, but we have only built a prototype, and it's up to us to build a real one that works, that cannot be tampered with, cannot be compromised, and cannot be broken into. Of course, everything can be tampered with, but we can improve it much from what it is today.

The Many Requirements of Software

I grew up before the internet, so I grew up in this happy time when software was just fun. You just did one thing with software, you tried to solve some algorithm, and then you were really happy. But today there are so many requirements on software that it's just crazy. You have to build features and functionality, and then you have to know what UI and UX are. Then it has to be performant and scalable, and it has to be resilient, and it must be easy to maintain, and damn you if you haven't built APIs into it.

Now we're saying, "It has to be secure and you must follow all privacy laws," so building software is completely different today from what it was just a few years ago, and it's just getting harder and harder while at the same time everybody is a software developer. So it is sort of becoming a difficult blue-collar job, or something. Everybody is a software engineer, and pardon me, software engineers, I have huge respect for you, but it has become a common profession in every company, and within every such profession there is much more to know today than there was just three, four, or five years ago.

The Growing Demand on Developers

The demands are growing, and if you look at it from an even broader scale, looking back at the PC era, like the 1980s, I was there at the time. All you had to do was get the PC to do some fun stuff and everything was good. Then the internet happened and people said, "OK, can you do that? But make it scale?" And then we spent 10 years learning how to scale software, and that was the time when I joined MySQL and I was CEO there.

At some point people said, "No. We can't just delegate the operational responsibility to the IT team, the developers are actually in charge of their code even when it goes into production." So suddenly we said, "Can you also release it continuously and be in charge of it?" That was like ten, fifteen years ago. Now people have invented a new term called DevSecOps, where they say, "OK developers. It's not enough to develop the code, it's not enough to put it into production, you also have to make sure it's secure and violates no privacy obligations." So we just keep piling on more and more work for software developers to make sure that their code is fit for every fight it will get into.

Software Changes Affect Security

The sad truth is that whenever software changes, security is affected. There's no way for you to make even a change in deployment parameters without affecting the security posture. Whatever you do, it changes it.

This leads to the unfortunate effect that most of the time when you optimize for security, you are compromising usability. Because you have to tell the users that they have to log in again, or there's a second factor to deal with, or they cannot do this thing here because they are logged in only for another thing there. Suddenly it's more difficult for the users, and we software developers have to take on the burden of telling them, "Sorry. It was a bug when it worked." Meaning, when the software was easy to use, it wasn't secure.

Top Access Vectors of Compromised Software

IBM just came out with a new report on breaches and data compromise. Many companies do these reports every year, but this is a very serious one. They analyzed 8.5 billion records that had been compromised, and then they asked, "What's the first way in which the criminal breaks in? What's the initial access vector?" And here are the results from that study. They said, "Of the six reasons we mentioned, what's the distribution?" Those all add up to 100%, and you see it's phishing, it's scan and exploit, and it's unauthorized use of credentials. So then, just be practical.

How do you avoid phishing? Phishing is about gullible, thoughtless people. To reduce phishing, you have to educate people. You have to make security a priority for everybody in the organization. It doesn't matter how good your software is, if somebody is subject to a phishing attempt or attack, all your good effort can go to waste. That's rule number one. Make sure that you somehow work on the mental resiliency of everybody in the company who has access to a computer, which is everybody.

Number two, scan and exploit. That's about the quality of the software. These are the ones who use vulnerabilities in software to break in. Again, it's nearly a third. So then you think, "OK. I must make sure my software has no vulnerabilities." That will not happen, but you can make sure it has fewer vulnerabilities today than it had yesterday. For every vulnerability you remove, you'll reduce this risk.

Then you remember Equifax, which suffered an enormous breach based on one single vulnerability in an Apache Struts deployment. That was even a vulnerability that was known and had been fixed, but they hadn't upgraded and patched their software.

It falls into the category of vulnerabilities that make your software vulnerable, and here it is like aviation safety, where unfortunately even the smallest deviations can lead to the biggest catastrophes. Sure, you should fix the high-severity vulnerabilities first, but you also have to look at the lower ones, because sometimes hackers chain them together, and many seemingly innocent holes become a big hole because they jump from one to another. So that's the second one.

The third one is unauthorized use of credentials, which is sort of a mix of gullible, thoughtless humans and software that doesn't work. We are so eager to get our work done that we leave credentials in many places in our software. In AWS S3 buckets and on GitHub, we have session cookies that are leaking out. Then you start thinking about how much internet traffic carries session authentication information, or some way to get access to a session. That's the third. Those are the three you have to focus on: make sure that all the people in the company get trained in being suspicious of weird e-mails coming in, and even normal-looking e-mails; make sure your software is in great shape; and then deal with this particular category of credentials that get abused, because they are lying around and somebody finds them somewhere and just takes them and uses them. There are other reasons as well, but let's solve the big problems first.
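
To make that third category concrete, here is a toy sketch of scanning files for credentials before they ship (not from the talk; real secret scanners, like the ones GitHub runs, are far more thorough, and the patterns here are rough illustrations):

```python
import pathlib
import re
import sys

# Rough patterns for secrets that tend to get committed by accident.
PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "hardcoded password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
}

def scan_file(path: str) -> int:
    """Print a warning for every suspected secret; return the number found."""
    text = pathlib.Path(path).read_text(errors="ignore")
    hits = 0
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            print(f"{path}: possible {label}: {match.group()[:24]}...")
            hits += 1
    return hits

if __name__ == "__main__":
    # Usage: python scan_secrets.py file1 file2 ...
    # Exit non-zero so a pre-commit hook can block the commit.
    total = sum(scan_file(p) for p in sys.argv[1:])
    sys.exit(1 if total else 0)
```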

Who Is Responsible for Security?

If you think you're not in charge: Snyk, this open source security company, did a survey and asked, "Who is responsible for security?" 81% say that it's the developers. You can see that the percentages here add up to more than 100, because people could give multiple answers. But it's obvious that the security team won't really come to your help, nor will operations, nor anybody else. Everybody believes that it's in the hands of developers. You may disagree and I may disagree, and we may think it should be a different way, but reality is what it is.

Security has become a duty of the software developer at the lowest level, at the most junior level. There is no software development anymore that doesn't involve security aspects. Then it is so weird. How many here have a CS degree? It's dark, so maybe six or seven. How many of you took a course in cyber security as part of your CS degree? One. Excellent, two. Most CS degrees today have no courses in cyber security at all. It's crazy. But of course, we can solve it. This is the San Francisco Bay Area, we don't wait. We don't need to wait for society to catch up with us, we can solve the problems ahead of them.

Vulnerabilities Vs. Bugs

When we talk about security, I believe it's important to know the difference between a bug and a vulnerability. Of course, we say there's a difference and yet we use them interchangeably. But if you really think about it, a software bug is when software doesn't do what it needs to do.

Most vulnerabilities come from situations where the software eagerly does more than it should be doing, and this is why it's so difficult to do machine-driven, software-driven testing for vulnerabilities. You can test functionality, whether it works or not, whether it's present or not, but it's very difficult to test for features that never were requested from the software. Like, what would you test? That's a reason why vulnerabilities will exist even in bug-free code. SQL injections, what are they? You send the database some string, and it is so eager to please you that it treats it as a command. We never asked it to treat it as a command, but it does it anyhow.
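
To make the SQL injection example concrete, here is a minimal sketch (the table and data are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

def find_user_unsafe(name):
    # Vulnerable: the input string is pasted straight into the query, so
    # the database eagerly treats attacker-supplied text as a command.
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name):
    # Parameterized: the driver passes the input as data, never as SQL.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()

# The classic injection: crafted input turns the WHERE clause into a tautology.
print(find_user_unsafe("' OR '1'='1"))  # returns every row in the table
print(find_user_safe("' OR '1'='1"))    # returns nothing
```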

Other similar vulnerabilities come from the fact that software is gullible, essentially. It thinks that a request is friendly and should be executed, when it shouldn't be. There is a difference, and that's the reason why vulnerabilities don't come just out of software, and this contradicts my previous statement that software developers are in charge of security. Vulnerabilities can come from business logic, from deployment options, from configuration options. They don't necessarily come out of the code. And of course, many times they emanate from third-party libraries that you are using.

Security at the Slack Corporation

How does a fantastic company deal with security? There are many companies who have their security posture in great order, and one is Slack. It's a San Francisco-based company that grew up selling to tech companies, and now they sell to enterprises. They've gone through the whole scale. This is word for word from their S-1 filing, meaning the filing when they went public. You can see here what they're saying. They're saying, "Our security program consists of three things." Organizational security, including personnel security, security and privacy training, and so on. This is the stuff that prevents phishing; they see security as an organizational topic and they highlight it as number one.

Number two, they say "secure by design principles." So when they develop software, they say it has to be secure by design, not by testing, not by mistake, not by anything else, but by design. Then the third thing they say is, to make sure we are doing everything right, we run a public bug bounty program and we receive input from hackers who find vulnerabilities, and then we fix them.

Three years ago, there was a hacker in Sweden, [inaudible], one of the best ethical hackers in the world, who chained eight or nine low-level vulnerabilities together in Slack and managed to create an RCE situation, a remote code execution situation, which is very severe. But he did it by jumping from one stone to the next on small vulnerabilities. He reported it to Slack on a Friday afternoon, and six hours later they had fixed it and rolled the fix out worldwide. That is how security works when it's beautiful. It's very difficult, and the average fixing time is much longer, but that was a great example. Slack has a very smart way of dealing with this.

Best Practices for Web Application Security

Another place, if you're looking for advice on what to do: there's a company called Offensive Security. They distribute the Kali Linux distribution. They do cyber security education and certification, and they have best practices for web application security. Of course, not all software is web applications, but here you get some practical tips on how to do this.

You can see the link there if you want to see it online. But they have very practical things like, "Never trust user input." We must get rid of this gullibility where we think that everything is fine. We have to validate. "Trust, but verify," as somebody said. Which actually is a Russian saying; in Russian they say [inaudible] or something like that. Do we have Russians in the audience? Was that correct? Yeah. Brutalized Russian, brutalized by a Finn [inaudible]. So, "trust but verify," and other very good, concrete advice here on how to make sure that your software is safer than somebody else's. Because let's remember, you can never make your application completely free of vulnerabilities, but it can be more free of vulnerabilities than your competitors'. It does matter, because criminals typically attack where it's easiest to attack. There are some exceptions to that, like nation states, but other than that, you are in better shape if you are in better shape than your competitors.
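
A small sketch of what "never trust user input" can look like in practice (the field names are hypothetical; the point is validating against an explicit allowlist at the boundary rather than trying to strip out "bad" characters):

```python
import re

USERNAME_RE = re.compile(r"^[a-z0-9_]{3,32}$")  # allowlist, not denylist

def parse_signup(form: dict) -> dict:
    # Validate at the boundary: reject anything that doesn't match the
    # explicit rules, instead of trusting the input and cleaning up later.
    username = form.get("username", "")
    if not USERNAME_RE.fullmatch(username):
        raise ValueError("invalid username")
    try:
        age = int(form.get("age", ""))
    except ValueError:
        raise ValueError("age must be an integer")
    if not 13 <= age <= 120:
        raise ValueError("age out of range")
    return {"username": username, "age": age}
```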

More Secure Software: Organizational Security

Some more tips for making software more secure. First, under the category of organizational security, there is this notion that security is everybody's responsibility. With a startup that comes naturally, but for traditional organizations it's very difficult to get there. Because they have built security teams where everybody has a lot of certifications, and you look at their LinkedIn profiles and they tell you what certifications they have, and they have clearance and all that. They have built a small practice of orthodoxy that does security, and now we know that it doesn't work unless we make it everybody's business.

So specifically, if you're a startup and you ask me, "Marten, when should we have a CISO, a chief information security officer, in the company?" My advice may be, counter-intuitively, "As late as possible." Because the moment you hire somebody into that role, you have everybody else saying, "OK, I'm done. I don't need to worry." That's not good.

It's better when everybody worries about it. Then you get good, agile security. An incident happens, everybody is ready to help, everybody is ready to work on it. And the security team, once you have it, is an enablement function. They are not the caretakers of software engineers; software engineers have responsibility for their own code. The security team will help them, but not take away the responsibility.

It's similar to an HR function, where you can get help from HR, but as the line manager you're in charge. You make the decisions, and it's the same here. There are other things that are useful in organizational security. Very importantly, celebrate the good stuff.

Cyber security has for too long been an area of negativity and pessimism and cynicism, where the world is going under. We should celebrate even the small advances and find a way to reward those who have done something. Also, I forgot that bullet point: it has to become blameless. It's very difficult. You have somebody who's in charge of security, and of course, that person is in charge. Yet when you explore and study something that happened, you must be blameless in your action. Otherwise you won't find out the truth. We can learn from aviation, the aviation industry, where they do this. If something goes wrong on an aircraft, they just say, "OK. What happened? What was wrong? What did you do, what didn't you do?" They never look for blame. It's much later that maybe the captain who's in charge may be charged with something, but they do the postmortem analysis blameless, which is very important for security.

More Secure Software: Software Principles

OK, software principles. You can come up with your own, but you need to state in your software design documents that security is a priority: "This is how we code. These are the methods. This is where we test, this is where we don't test. This is how we do peer reviews. These are the things we do and don't do." There's a ton of stuff under this topic for each organization to figure out.

More Secure Software: Agility and Collaboration

Then the final thing, agility and collaboration. If there's one thing, one way to improve security, it is by acting faster. You don't have to learn any new technology, or use any new tool or product. You can just speed up the cycle time of security things, whether it's an improvement, an incident, a breach, a compromise, whatever it is.

The faster you can move, the better off you are. Because in security, you have something threatening you, and the quicker you can put out the fire, the better shape you are in. Whatever you can do to increase that speed, reward people for it, and make it a measurable thing, the better you will be. You don't have to be Nobel Prize winners in cyber security technology, you just have to be very quick when stuff happens. That's why it's good to have drills where you practice emergencies, like we did at HackerOne.

One day I came to work and people were crying there and saying that a laptop had been infected and we had a crisis, and we must call our outside legal counsel, and we had an emergency room where people were planning and screaming at each other. It turned out it was our founders, who had just triggered a little exercise for us. Nothing bad had happened. They just wanted to see what happens. And it was very useful for us. Of course, afterwards you have to take a deep breath and say, "OK. Nothing bad happened. I did cry, I did swear, I did scream," whatever you did, and then you have to come to terms with it. Then the next time it happens, you're like, "OK. We've seen this." Whether it's a real situation or not, we can be calm under pressure, which is very important.

Facebook Defense in Depth

Another example from the industry: Facebook, which has one of the best security programs in the world, has a concept they call defense in depth. Again, you can go to their website and read about it, and you see this beautiful diagram here with lots of bugs at the top. But as they keep raining down, they disappear. At the end, there's still one left. Very honestly, they note that even after all those filters of bugs, they must always assume that there are vulnerabilities remaining in the code.

But here they've made a system for reducing the likelihood of vulnerabilities through those steps: secure frameworks, automated testing, and so on and so on; with every step they clean out more. I added that spiraling thing to the right to show that the way you use it is to take the knowledge from any of those tests and go back and say, "How can we change our behavior so it doesn't happen again? Or so that the likelihood of it happening again is lower?" In the very concrete world of HackerOne, when anybody runs bug bounty programs, and of course we run one against ourselves, we always ask, "What led to this vulnerability, and how can we eradicate the whole category of such problems in the future?" Of course, it's a futile exercise because we can never get complete coverage, but it always helps.

We had a situation where we had session cookies leaking out in a silly, stupid way, and we took four or five actions where each one alone would eradicate that problem in the future. But we took all of them, because we thought, "OK, something may happen and who knows how the world will change. Better to fix it five times than just fix it once."

What Is Privacy?

Then over to privacy. We've talked about security now, specifically AppSec or application security. There are of course other areas of security: network security, infrastructure security, physical security. I've skipped all those; this was just AppSec, application security.

Privacy, again, is a different beast and sort of unrelated to AppSec. It affects the same people, so we bring it up here under the same topic. Privacy is the ability and the right of an individual or a group to seclude themselves or information about themselves. Not every country respects privacy, but where it's respected, it means that citizens or groups of citizens can say, "This information is none of your business." So in those areas, it's consumers and people that have a right to refuse to be tracked.

You need to know, as software developers, that you as the holder of that data have a duty to keep it safe and secure. If you are storing private data about anybody, you must make sure nobody else gets access to it.

There are restrictions on how you may use it in your business, so the fact that you've collected something doesn't give you a free right to use it any which way you like. There are rules on how it can be used, and finally, as a new thing in the last few years, consumers typically have a right to be forgotten.

This has been very specific to the European Union, but we might as well assume it applies to the whole world, because I believe soon it will. When you collect something and you store information about a citizen or a consumer or a human being, they must be able to come to you and say, "What information do you have on me?" And you have to show it, and they have to have the right to say, "Delete it." That's a lot of software functionality to build, and it's not that easy.

Because if I come to you and say, "What information do you have about Marten Mickos?", how do you know I am Marten Mickos? How do you know I have the right to ask? You have to have validation and authentication of me, and then when I say, "Please delete everything," how do you know that you've deleted everything? How many backup copies do you have, and log files? It's an enormous task to be fully privacy compliant, and I don't think we've seen even the beginning of this yet. I'm not even the technical expert, and I can see the enormity of this task.
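
To picture the engineering task, here is a hypothetical sketch of an erasure-request handler (every `db` method here is made up; a real system would also have to chase backups, log files, and analytics copies, which is exactly the hard part described above):

```python
import uuid

def handle_erasure_request(db, email: str, token: str) -> str:
    # Step 1: verify the requester actually controls the account.
    # Otherwise "right to be forgotten" becomes a way to delete other
    # people's data, or to probe whether an email exists in your system.
    user = db.find_user_by_email(email)
    if user is None or not db.verify_erasure_token(user.id, token):
        raise PermissionError("could not verify the requester")

    # Step 2: delete or anonymize in every store that holds the data.
    db.delete_user_rows(user.id)
    db.anonymize_audit_entries(user.id)   # keep the trail, drop the identity
    db.enqueue_backup_purge(user.id)      # backups usually purge asynchronously

    # Step 3: keep a minimal tombstone so you can later prove the deletion
    # happened without retaining the personal data itself.
    receipt = str(uuid.uuid4())
    db.record_tombstone(user.id, receipt)
    return receipt
```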

But when we leave our startups and our business hat aside and just think about it as citizens of a digital civilization, we say, "Of course. That's obvious. Of course humans should have this right." But think about how much software work, engineering work, it causes for you, and then all these checks and balances: "Have we done it? Have we not done it? How do you prove that you've deleted my data?" Then if I come back and say, "Do you have any data on Marten Mickos?" they will say, "No. We have nothing." But how do you know? With all these sharded, distributed backups, multi-cloud databases, and data lakes that you run, and all that stuff. You have no idea.

Tips for User Privacy

Some tips for you on privacy. Only collect what you need, and only for the time you need it. I know it's difficult. We all like to just collect stuff and think that because it's just digital, there's no cost to keeping it. There is a cost to keeping it. Having this frugality about what you collect and what you store is actually smart, and then the job of securing it is smaller.

Other things: a security policy, proper authentication for all forms of data access, role-based access control. You may need to be able to track who has looked at data. Many of you have applications where people and things go in and search data through APIs, or what not. You may need to know every single query and who looked at what. We do that at HackerOne so that in our own bug bounty program, if somebody sees a vulnerability in our system, we can go in and say, "We have a vulnerability. Has anybody exploited it?" And then we go through all our logs of database access, by whom and when, to see who might have been affected. We have this full logging capability of what goes on in the database. That's a lot of software development work to make it function properly.
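
A minimal sketch of that kind of access audit trail (the schema and wrapper are made up; HackerOne hasn't published their implementation):

```python
import sqlite3
import time

audit = sqlite3.connect("audit.db")
audit.execute("""CREATE TABLE IF NOT EXISTS access_log
                 (ts REAL, actor TEXT, action TEXT, record_id TEXT)""")

def log_access(actor: str, action: str, record_id: str) -> None:
    # Append-only: every read gets recorded, so after an incident you can
    # answer "who might have seen this record, and when?"
    audit.execute("INSERT INTO access_log VALUES (?, ?, ?, ?)",
                  (time.time(), actor, action, record_id))
    audit.commit()

def fetch_report(db, actor: str, report_id: str):
    log_access(actor, "read", report_id)   # log before returning the data
    return db.get_report(report_id)        # hypothetical data-access call

# After discovering a vulnerability, replay the trail, e.g.:
#   SELECT actor, ts FROM access_log WHERE record_id = ? AND ts > ?
```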

More Tips for User Privacy

Another thing you can practice is isolation of data, keeping it isolated from other parts so that if something gets stolen, they can't steal all of it. Data residency: you may have to place the data in a certain country because of requirements from your customer, and then you have the sharding and distribution problem that comes with that.

Encryption should ideally be both at rest and in transit, so data is encrypted when in the database and encrypted when you send it over to the client. The auditability of data access and modifications, which I mentioned, so that you know who has looked at the data and when, and exactly what records they looked at. The user's right to be forgotten is stipulated in some legislation. And the last bullet point is sort of unrelated, but maybe related if you are in a certain financial services area: you have to know your customer.

There is a legal obligation that you know who you are dealing with: you need to know your customers, and to know your customers you need to collect data about them. So there you sit with a privacy problem about protecting data that you collected in order to fulfill a legal obligation to know your customer. It can get quite complicated.
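
Going back to the encryption bullet for a moment, here is a small sketch of field-level encryption at rest, assuming the third-party `cryptography` package (encryption in transit would be TLS at the connection layer; the `db.save`/`db.load` calls are hypothetical):

```python
from cryptography.fernet import Fernet

# In production the key would come from a KMS or secrets manager, never
# from source code or the same database that holds the ciphertext.
key = Fernet.generate_key()
fernet = Fernet(key)

def store_email(db, user_id: int, email: str) -> None:
    # Encrypt before the value ever reaches the database, so a leaked
    # backup or snapshot exposes only ciphertext.
    db.save(user_id, "email", fernet.encrypt(email.encode()))

def load_email(db, user_id: int) -> str:
    return fernet.decrypt(db.load(user_id, "email")).decode()
```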

Teamwork and Transparency in a Startup

Then let's leave privacy to the side and go back to seeing this audience as mostly startups, eager groups of people who are changing the world, revolutionizing everything, and disrupting everything. Most of you say, and I've said it word for word, "We are one team built on trust. We believe in transparency and we move fast." It sounds so cool when you say it on your about page. That means, of course, that you share all the credentials with everybody. Everybody has access to all the data, Unix root for everybody, AWS credentials for everybody. It's a great way of being a productive team, but then at what point do you change that, and how? I truly just have the question for you, I do not have the answer. Because it depends on what business you are in, what kind of customers you have, who you are, how you operate your company.

But you will have these uneasy stages where you say, "We can't share this anymore." And you go to your best friends and say, "Give your credentials back." And the friend goes, "Enough. I need it. I need this because I'm pushing code. We can't just depend on you." And then you say, "But how do we divide responsibility?" A very tricky organizational question out of the very simple, good ambition: first to be one group and operate quickly. Then suddenly you realize that there are all these risks, and then you have a board who says, "How do you know that the credentials are not being abused?" And you will say, "How dare you even question my colleagues and friends? I trust them completely." We all say that, and then you realize the board member doesn't know them, and the board member has fiduciary duty and all of that. They live in a different world, and they say, "Heck. We really have to tighten it up and make sure that we have clear rules for who can access what, and when, and how, and share it and divide it and split it."

That's a big thing to handle, so a lot of work comes out of very rational, practical needs. And as I showed you in the beginning, it's the software engineers who are in charge. There aren't many other people who will do it. I'm exaggerating, though. So pardon me, all the other people in IT and security, and many others who do their fantastic jobs. But we are short-staffed when it comes to both AppSec and privacy.

Actions to Protect Credentials

Then back to where I showed you the IBM chart and the three big things. The first was phishing: start with organizational security. The second related to vulnerabilities: fix your vulnerabilities and start to eradicate them. And the third was about abuse of credentials. I went to the founders of HackerOne and said, "What are your concrete pieces of advice?" This is what they produced.

It's too much for me to review here in the meeting, but the very clear thinking here is that in terms of credentials, some of your actions relate to your own employees, and some relate to your users and customers. There's a reference here to the bug we had in our own system, where a session cookie was put by mistake into a communication channel, and the other person receiving it said, "That's a vulnerability, isn't it? I can now log into the system as you." It was very true. When we have a vulnerability and we fix it, we always publish the whole thing. We do full disclosure, so if you go to the URL there, you can read the communication with the hacker and how we went back and forth. We then wrote a blog post about how we fixed it. There was a lot of work going on around it; we spent days on a thing that would have been very easy to fix if we had just fixed it initially.

But because there was potential abuse, we had to go in and look at the database to see who might have seen these records, and then we called the customers and said, "It's possible that your record was in the data that was visible," and then the customer said, "What's going on?" And then you have to explain to the customer what you did. We always stick to transparency, because we don't believe there's any other way to build trust than through transparency. But it's a lot of work; the whole company stood still for eight hours as we worked through all of this. Anyhow, there are a lot of suggestions there on how you can work to protect credentials, and you have to realize that credentials come in many forms.

These secrets are not just passwords. It's passwords, it's session cookies, and other things that may give access to something, and you have to think about where they might end up and how you stop them.

One of the remedies we applied was that even if a session cookie were to leak now, the system doesn't accept the same session cookie from a different IP address. Once you have access, it will continue to accept you from the same IP address only. That has the side effect that if you're using our system through the Tor browser, you will have problems, because the Tor browser will cycle through different IP addresses for its own reasons. So that's a use case that was harmed by this.
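
A rough sketch of that remedy (hypothetical; HackerOne hasn't published their implementation, and as noted above, binding sessions to an IP breaks clients like Tor that rotate addresses):

```python
sessions: dict[str, str] = {}  # session token -> first-seen client IP

def check_session(token: str, client_ip: str) -> None:
    # Bind each session to the IP address it was first used from, so a
    # stolen cookie replayed from somewhere else is rejected.
    first_ip = sessions.setdefault(token, client_ip)
    if first_ip != client_ip:
        raise PermissionError("session used from a new IP; please log in again")
```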

Again, a reminder of how difficult it is to fix things in security, because they always affect something else. But that's what we did, and we made sure that it couldn't happen again. We made sure our platform doesn't allow the pasting of a session cookie into the communication window anymore. Around a lot of those things we trained our people, and we didn't blame anybody for this. I still don't know who made the mistake, although it was sort of a human mistake. We don't care about who did it, we just care about the fix. You'll see that some of the other actions here are educational, like warning users about brute force attempts on their account, or automatically locking the account when it happens. There's a lot of stuff you can do in the background to protect your users and help them.
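
And a sketch of that last idea, locking an account after repeated failed logins (the thresholds are arbitrary and the `notify` callback is hypothetical):

```python
import time
from collections import defaultdict

MAX_FAILURES, WINDOW, LOCKOUT = 5, 300, 900   # 5 failures in 5 min -> 15 min lock
failures: defaultdict[str, list] = defaultdict(list)  # username -> failure times
locked_until: dict[str, float] = {}

def record_failed_login(username: str, notify) -> None:
    now = time.time()
    # Keep only the failures inside the sliding window.
    failures[username] = [t for t in failures[username] if now - t < WINDOW]
    failures[username].append(now)
    if len(failures[username]) >= MAX_FAILURES:
        locked_until[username] = now + LOCKOUT
        notify(username, "Your account was locked after repeated failed logins.")

def is_locked(username: str) -> bool:
    return time.time() < locked_until.get(username, 0)
```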

Online Resources and References

So with that, I'll wrap up the presentation here. I have a slide with some online resources that I used during the presentation, with advice specifically for startups on how to build a security practice. When do you, as a startup, set aside budget to hire somebody who is in charge of security, or appoint somebody to run it?

Because you know that the delivery of features will be slowed down. First of all, you're losing an engineer to security. Secondly, that security engineer will come back with requirements for everybody else and slow everybody else down. So there's no security that comes free of charge, and you have to know it. The good news is that in the startup world now, VCs are starting to know it, and if they're smart, they'll ask you and say, "Show me how you take care of security and privacy in your product. If you haven't budgeted for it, I can't invest." I hope that all VCs put it that way. So anyhow, those were the online resources.

Conclusion: Final Thoughts

Here's a sampling of our hackers in the HackerOne community. Like I said, we have 600,000 ethical hackers signed up on our platform. They are ready to hack you on a moment's notice. They're the friendliest, most wonderful people in the world. They have no patience, so be ready for that. But that's why they are so good. They have endless curiosity.

We are adding 300,000 hackers per year right now, so this community is growing very fast. They are united by this desire to figure out how systems work and make them better. In that way, there's a hacker in every one of us. When we were kids, there was something we took apart to understand it, or we took apart a friend mentally to understand the person. This act of hacking comes very naturally to us. When we acknowledge the hacker within, we can also work with the hacker on the outside.

Q&A

Thoughts on Browser Security

I love the fact that new browsers are being built, because we are often so lazy. We think that we have the browsers we need and we have the search engines we need, and there is no need to innovate there. That's so wrong. We need to keep rebuilding this world all the time, and browsers need to be the first stop of cyber security and privacy. I think Google has done a good job there with Chrome, and Mozilla does a great job there. But there's even more to be done, and there are already ways to stop cross-site scripting vulnerabilities from going through certain browsers. The thing we have to know, of course, is that the criminals will not use the secure browsers. The criminals will use their own browsers, so some of those actions may protect consumers, but others may not, because the bad guys are not using yours. But there's a lot to do there in making sure that the user doesn't do anything they shouldn't be doing, and applying kind but firm stewardship of the user's actions. I think that will be needed, and maybe then we think that we don't have freedoms in this world.

But it's the same in many other areas of life, that we govern consumers and we have street lights to tell them when to cross the street and when not to. Some still cross the street, but at their own risk.

I think a similar principle should be applied to browsers. Then, when you ask, "How do you show the world that you don't collect their data?" If you say, "We don't index you," how do you prove it? I think there's just one way to gain trust, and that is through transparency. And it is painful, because then you ask, "OK, what are all the things you have to show to prove that you are not collecting data?" Openness: you have to show the source code. You have to show how you deploy, so people can verify that you deployed the code you showed. You have to document how it works, and you have to have some way of declaring when there are anomalies. That's a painful level of openness, but I believe it works.

We see it with HackerOne, how people trust us. A different but similar example is GitLab. GitLab had this terrible mistake where they deleted the database because they thought they had backups, and the backups didn't work. They immediately opened up and told everybody exactly every step they took to fix it, and that's how they restored trust in their service. I do believe it is that simple, but it's hard. It's just transparency.
