  4. Ep. #5, InfoSec with Gartner’s Anton Chuvakin
about the episode

In episode 5 of O11ycast, Rachel and Charity speak with Anton Chuvakin, Research VP at Gartner for Technical Professionals, about the ways modern companies make, or don’t make, decisions around security.

Anton Chuvakin is Research Vice President at Gartner for Technical Professionals, where he specializes in security and compliance.

transcript

Charity Majors: How has InfoSec changed as distributed systems get bigger and more complicated?

Anton Chuvakin: It's funny, because in many cases we see people point to some innovation in technology, I don't know, a few years ago cloud, today maybe containers, and they say, "See? It changed." But here's something funny about InfoSec. In InfoSec, all the old stuff is still with us.

I don't want to joke about securing Windows NT 4.0 or securing Windows 3.1, but to be honest, it actually happens.

Charity: Nothing ever dies.

Anton: Nothing. Well, very little ever dies. It's almost like "How does X change security?" is really "How does adding X on top of the pile change security?" A few years ago I used to joke that more companies had Windows NT in their closets than cloud. Today, in 2018, there's more cloud, but there's still Windows NT here and there.

Charity: There's this joke about how cutting-edge security is whatever cutting-edge Ops was five years ago.

Anton: That's, yeah.

Charity: It's never fair. These jokes are never fair. They're always a little mean.

Anton: That's on the pro-sec side. Somebody told me that earlier today on a client call. They said, "Security is a mess." But guess what, ITOps was a mess in 2000.

Charity: True.

Anton: But here's my fear. ITOps has grown up; I'm not sure security is capable of growing up.

Charity: Boo.

Anton: I have a fear. Maybe it will, but I wouldn't be shocked if it doesn't.

Charity: This seems like a good time to introduce yourself.

Anton: OK. My name is Anton Chuvakin and my role at Gartner for Technical Professionals is a little bit funny. Here's why it's funny. People often associate big analyst firms with giving advice to business stakeholders, but most of our work is giving advice to technologists. So I work at Gartner, but I mostly advise implementers and people who architect, operate, deploy.

And not necessarily people who write checks. That's why it's a little peculiar. I deal with all sorts of monitoring tech, intelligence (threat intelligence in this case), incident response, vulnerabilities. I don't deal with software security, I don't deal with antivirus. But I deal with a lot of monitoring, detection, and response type stuff. Of course, I have my hands full with that.

Charity: Cool.

Rachel: You're among your people here. Not only in that we're obsessed with monitoring, but I'm a fellow analyst. High five. Analyst secret handshake. You retweeted that amazing piece by Brad Feld talking to vendors and early-stage founders about how to manage their analyst relationships, and it brought up a lot of traumatic memories for me. So I'm going to ask you a question I ask myself at 3:00 AM. On balance, do you think the analyst industry has been good for tech, or has it been the other thing?

Anton: It's funny that you use the same metaphor; I actually use the 3:00 AM question very often on client calls. I say, "Here's a 3:00 AM vendor shortlist. If I don't know anything about your company and you just ask me for the top three vendors for doing X at 3:00 AM, I will give you these three." Generally speaking, that's my off-the-top-of-my-head answer. And the 3:00 AM answer here is that it's mostly positive.

And here is why: many organizations do not have any rational strategy for making IT decisions. Their alternatives are to call an analyst or to screw up. You can tell me that enlightened people shouldn't need to do that, and I would agree with you. The enlightened people should not do it. But 90% of people aren't in the top 10th percentile.

Analyst firms serve quite a critical need and a very positive need for people who need some help making IT decisions.

Rachel: I totally agree on the need. And five years out of my last analyst gig, I've got enough perspective to say this: an awful lot of firms are not meeting that need as well as they should.

Anton: So your question was not so much about, "Is there a need?" But, "Is the current approach to serving this need working?"

Charity: There is a widespread perception, at least among vendors, that it's very pay-to-play.

Anton: Frankly, in my daily job there's a lot less of that. Because, as I mentioned before, I deal mostly with implementation-stage problems or architecture, and a bit less with purchasing. If you recall, my last blog post on my Gartner blog, or the one before it, was about why we value increased visibility. We typically give advice based on what people have actually done and what worked. We may miss something innovative, but not for corrupt reasons.

We miss it because we occasionally choose something that's been tested in the real world over something that merely looks promising. We do periodically hear, and I mean "we" broadly speaking, the IT industry, corruption stories about somebody buying influence. It's funny, because people who leave analyst firms very often say that it's actually not true. Obviously, current analysts are expected to say it's not true.

But there was a post on LinkedIn earlier today from somebody who just left Gartner, and he said, "I can tell you, I'm not bound by any policy anymore because I left the firm. And I can tell you that in many cases it's not pay-to-play. It's not about that; it's about other factors."

I would say that across analyst firms there's probably a spectrum. But in my seven years here at Gartner, I haven't noticed anything that smelled like pay-to-play.

Charity: It seems like security can be sold as a product, as a service, or as a feature, where providers bake security into their offerings for competitive advantage, or it could just be their area of expertise. How do you see the balance of these types of offerings in the marketplace changing over the coming years?

Anton: In this case I'll refer to yet another blog post from the last few weeks. The one thing I want to point out here is that I have a bit of a fear that security is shifting a lot more toward services than in the past. I do encounter organizations that simply don't have time: pick a critical security task that you think is absolutely essential, something you die if you don't do, and they say, "We don't have time for it."

Charity: Why is that a fear though?

Anton: How is it a fear?

Charity: It seems like specialization is a natural part of growth.

Rachel: Doesn't it tie back to the analyst question though? Where people are creating the fear even though it's not entirely rational, just in order to build their businesses? And the actual threat model gets lost in the purchase order discussion?

Charity: We can't ask everyone to be an expert in everything. We just can't.

Rachel: No, and I don't want to put words in your mouth, Anton, but I don't think that's what you're saying. You're saying that there's a desire on the part of consultants to persuade enterprises that even basic security stuff is hard.

Anton: No, I don't think that's it at all.

Charity: They're just checking a box.

Anton: No. I think it's the other way around. I would talk to a client, and they may say that. That's a particular story inspired by a real conversation, where they said, "We don't have time to do log management," and I said, "Log management doesn't take that much time. Perhaps some analytics effort requires an expert, but log management? A monkey can do it. It's very easy." And they said, "We just don't have time for it." This was eye-opening for me, because I'm like, "Actually, you guys are busy--"

Charity: But log management seems like a great example of something that you should outsource, because it's not key to delivering value to customers. There are lots of people out there who can do it more cost-effectively and more scalably. The thing is, it's not the logs themselves, it's all of the tens of thousands of little things that comprise building and maintaining an infrastructure, which together add up to an infinite amount of time.

You just have to make hard decisions about where your time goes.

Anton: That's true, but people who try to outsource more have occasionally had a bad time. In the monitoring and detection area we do see a lot of discussions with clients about managed security services, which are frankly a disaster.

It's funny, because in my rare depressed moments I see that outsourcing security has failed, doing security on your own has failed, bringing in consultants for security has also failed, and by the way, giving it to machines has also failed.

Charity: So who is doing it well, and what are they doing?

Anton: I would say that I see more successes down the path of being very tactical with these delegation and outsourcing decisions. Instead of saying, "Here's money. Give me security," which almost never goes well, I may tell you, "I don't have time to do logs, but I have time to profile my users, to profile access and see where they're doing something unusual.

So you deal with my logs, but you give me access to this part of the system so I can see which of my users are behaving badly. I would outsource very tactically; hopefully you do a good job of keeping my logs, and I do the job that only I can do, because I know my users and you don't." I've seen more of this type of mesh of services and products.

Charity: It's like the innovation tokens theory, that famous post by Dan where he says, "If you're a startup, imagine you have innovation tokens. Spend them wisely." You want to spend them on the things that are your core, how you provide value to users. Or in this sense, the things that could make or break you as a business. Those are the things you spend your security tokens on.

Anton: True. I like that. But there's a bit of a fear, because some of your questioning leans toward a position that I see as very risky. We do see some companies who say, "We're going to outsource all of it because security is not our core business."

Charity: But security is a human thing.

Anton: Yeah. But the thing is, you don't say, "Not dying in a fire is not our core business, so I won't install any sprinklers; I'll just hire somebody to magically protect me."

Charity: You can't pay for security because it involves educating your users and figuring out your personal risk profile, and getting everyone's buy-in into it.

Everyone in your org has to be bought into what you decide is your level of security.

Rachel: This ties in really well to this discussion that's been going on around Elon Musk and his submarine for the caves, which was this amazing collision between Silicon Valley's idea of engineering-led innovation, just parachuting a technocrat in there, versus the safety culture that was represented by the cave divers and that you see in emergency medical care and in aviation accident investigation.

You had these two very different ways of approaching the problem, and in this particular case the safety culture was clearly superior, because the boys got out before Elon even got his sub down there. I see InfoSec playing strongly on the Elon Musk side of the fence, and a lot of my knee-jerk reaction against InfoSec marketing is that it is all based on fear, and there's this idea of the superior technocrat. How do you see that safety culture vs. innovation discussion playing out?

Anton: I think you perfectly described exactly one half of security. Because the other half is very often about people who are very much mired in the 1980s type of rulemaking. Access control, rainbow series, mainframes.

Rachel: "I've hacked into the mainframe. I'm in."

Anton: It's almost like they're not about innovation. They're about the culture of "No," and "We've always done it this way," and "No, you don't get access, because we never gave access to such-and-such role."

So some of security does not feel like very innovation-led, silver-bullet stuff. Some of it feels like, "We're going to be compliant with--" let's pick something really old: NIST 800-53. That was the NIST guidance for FISMA, so it's early 2000s, maybe late '90s. They can just go by the book, and the book is friggin' 20 years old.

So I think security is a bizarre mix of slapdash innovation, which is something connected to the real world, and very old, 1980s-rules government stuff that is probably not relevant to most startups or most regular businesses today, but they still push it. I don't know. It's a hacked-together submarine and something like a steam submarine from 1903.

Rachel: A Civil War submarine.

Anton: It's a Civil War submarine. That's exactly right. Because it goes one way: only down. But that's a separate story.

Rachel: And that's totally fair. I accept and embrace your qualification, but how can we be better? We're living in a world where our infrastructure is vulnerable to massive state-backed attacks with hugely negative consequences. InfoSec really needs to step up, and it can't step up with this jury-rigged arrangement. What do we need to do? I'm throwing my hands in the air.

Anton: I'll probably do the same and throw my hands in the air, because I don't think we're going to answer this one. But I will offer one direction that may fly. Security innovation is certainly agile, but my fear is that we are agile about the wrong things. We may build something really cool and very rapidly, and it will be agile. But it would be agile not in the way the business, or the rest of IT, is agile. So we see more of, let's pick something:

how to analyze network traffic using machine learning, and less about how to secure containers. IT is moving in a certain direction, and security is very agile and innovative, just not in the same place. It's almost like, can we preserve some of these engineering marvels but aim them at more real problems? I don't think that's a problem we can solve in a one-hour conversation.

Charity: There are analogies too. It needs to become boring. It needs to be human-centered and it needs to become boring.

Anton: Not 1980s rules boring.

Charity: No. I'm talking, you use a key to open your door, boring. It's just expected. It's standardized. There's nothing flashy or cool about it, it's just how the world is supposed to work and you have instincts around it that serve you. And we haven't really done that in security.

Anton: Can it be boring if the landscape changes, though? I don't think that would fly. The door doesn't change.

Rachel: Right. We're dealing with a set of assumptions that are enormously in flux and we haven't developed habits that promote public health and hygiene around them.

Charity: I was talking with a friend earlier today about, "What if I could design a curriculum for fifth graders as part of home ec, just a month long, about security? How to protect yourself, best practices. What would that look like?" I guess that's my question for you. What would that look like? And what products do you think we would need to build or standardize in order to make it tractable? And then my friend, of course, cracks, "So that the fifth graders could teach their grandparents."

Anton: Yeah. But here's the thing. I would point back to the landscape change. People highlight the threat landscape changes, but frankly I would point at the IT landscape change, because a lot of the threats we deal with are pretty much 1990s threats. We didn't have ransomware in the '90s, fine. But we had malware, and ransomware is a type of malware. IT, though, has become quite different.

We deal a lot with SaaS, with cloud, with containers, with virtualization, with mobility. I almost have a fear that you would have to teach fifth graders a lot more of these extrapolation and framework skills, not the tools and technologies. Because when the fifth graders do go into operational roles in IT, assuming some of them do, they will see something very different.

Charity: They teach kids not to get into cars with strangers.

Rachel: It's the tip of a very deep question because it goes beyond technical skills when the threat landscape is really about social engineering.

Most of the severe attacks that we've looked at were based on appealing to people's better nature and then subverting it.

And you have to teach fifth graders a better theory of mind, and a better systems understanding of game theory. I hate to say it's time for some game theory, but it really is.

Charity: Fifth graders love game theory.

Anton: True, but it is also hard to distill some of today's realities. We also make a lot of repeat mistakes in security. I know the security guys often blame developers, but a lot of security coding mistakes get repeated. My example is always the very early-'90s attack called "Ping of Death," caused by an oversized packet overflowing a buffer.

But the point is that somebody told me that some of the mobile devices of the late 2000s had "Ping of Death" among their vulnerabilities. And now, with IoT in the late 2010s, some of the devices have "Ping of Death" among their vulnerabilities. It's a coding problem that has survived from 1993 to 2018.

Charity: This is a great point. Developers and operations get a lot of heat for not adequately securing their products and services. At the same time, a lot of InfoSec tooling and capabilities have a very distinctly "by security people, for security people" feel to them. They can be very similar to products in other spaces and yet they feel completely different.

It gives off these waves of, "Not for you. Not for you. There is a priesthood, and you're not in it." So how can devs and ops take more ownership of their security by integrating these security principles and capabilities into their day to day functions? How can we increase the ownership so that it's not such a priesthood?

Anton: I would still try to separate things and draw a line. On one side there would be a set of tools where we need to do that, and on the other a category of practices or tools where you don't have to.

Charity: I don't understand. What?

Anton: For example, my colleagues on the team deal with application security. That means the security of applications. They deal with developers and with things of that sort. Network security is about networking. In all these domains, what you said applies.

Now think about something like threat intelligence and understanding the attacker motivations. It doesn't have the IT brother. App sec has an IT brother, application development. Net sec has an IT brother, networking. System security has an IT brother. But threat intel does not have an IT brother.

Rachel: I'm going to disagree with you there. I think the IT brother is customer empathy.

Anton: Yeah, but it's a very tenuous connection.

Rachel: Let me explain.

One of the challenges that engineers have is that engineering is a fundamentally optimistic discipline, and engineers build platforms with the idea that the street will find its own uses for things.

That people will take what they build and do cool stuff with it. Security is about paranoia and fear. And that's what makes it difficult for engineers to think about misuses of their platform up front. But if we expand the idea of user empathy to include malicious actors, I think we can bring those two domains closer together.

Charity: There are a lot of similarities between the parallel you're drawing between dev and security and the one between dev and ops. Ops is always about, "No. We're scared to change the system." And developers are like, "Whee! All the changes!" I feel like part of what I hear you saying is that there will always need to be experts. Absolutely agreed. But for example, I've been working in engineering since I was 17.

I've always been in some sort of operations data role, and I've never worked with a dedicated security person. It was just expected that these are the basics, that every good engineer knows how to do a certain amount of stuff, and then you rely on experts periodically if something exceeds your zone.

These are just the basics. It's like unlocking your house with a key. Well, you use password SSH. All these best practices. But I feel like I'm learning that this is very unusual.

Anton: Some chunks of security have crossed the divide. For example, firewalls in many cases are managed by network engineering, and maybe antivirus is deployed by the desktop team. Some of the coding mistakes are prevented by developers.

To me there's certainly a train that leaves from security land for ITOps land or dev land, and that's fine. But certain pieces still stay. And to your comment about TI, threat intelligence: I think certain pieces of security haven't journeyed into IT yet.

Charity: Maybe what I'm hearing you say is that, instead of building the software for security users, they need to get better at embodying the users that they're building for. Building these tools to just feel and look and smell native to developers and other users.

Anton: I would say in most cases, but not all. To me there would still be certain domains where they don't really fit in the normal IT landscape.

Charity: Like what?

Anton: My example is threat intel, and I think you guys made a decent attempt to derail that example, I'll give you that. But there are a lot of these situations where you actually have to deal with attackers, even incident response. Incident response in security is very different from IT incident response. I almost always tell people that you should unlearn a lot of IT incident response when you go from "My PC is slow" to "I'm hacked by the Chinese."

Charity: Yeah. Like, "Assume good intent, and blameless postmortems," and everything. I've actually responded to some incidents where we found hackers in our system and we had to do everything. And I remember Googling and trying to learn it on the fly while doing it. It was terrifying. We got through it, it was fine. We had to reset all of our passwords at Second Life and e-mail our users telling them it had happened. No big deal.

Anton: IR is an example where a lot of the benefit comes from grounds outside of IT. It may be psychology, it may be law enforcement or the military. I don't know. But the point is, it's not IT. Some of it stays within the narrowly defined domain of InfoSec. But a bunch of stuff needs to get on that train and live in IT. You're right.

Charity: Relatedly, everyone who works in a security operations center is completely, totally, and sometimes hopelessly overwhelmed. A big part of that is lack of context for the alerts they're getting: not knowing what alerts they are getting, and what's falling through the cracks. It's not obvious from the data collection that's being done.

It's always easier to respond to the things that bark in the middle of the night, and security people, being the paranoid people that they are, spend a lot of time just worrying and fretting about what they aren't seeing that they should be.

Anton: Hence, that's what made threat hunting appear as a practice. People go and say, "We're going to look at the data, and we're not going to wait for an alert. We're not going to sleep badly. We're just going to go look at the data and figure out what we missed."

Charity: That's my question. What technologies or capabilities or approaches? This is a great one, but what else is out there that can help with the signal-to-noise ratio?

Anton: That's a problem that commonly comes up in client calls for us. I would say that some organizations really put energy into alert triage: they get more context to validate the alerts from different systems. If they see an alert from a login system, they're going to hit an endpoint and gather more data. So some of it is about gathering data, but some of it is also about following the playbook.

We look at the orchestration automation space where they look at a system that gets an alert and enriches it, adds more context, verifies certain things and then tells the human, "Hey human. This alert. We can close it because X, Y, and Z." And then eventually you can automate it and say, "I trust you, system. Just close it."

Charity: What are some examples of security alerts that would go off like that?

Anton: For example, say you have an alert that's ultimately about somebody failing to authenticate many times. It may happen for operational reasons, like the user forgot the password and tried it five times. Or, if it happens 100 times a minute, then probably somebody scripted it, or they are guessing the password.

It may be a security issue, but it may be something in between. There would be a set of steps that a human would go through to validate: "Is this a mistake, or is this malicious password guessing?" And you can automate some of those validation steps. Not everybody does; some people just give it to a human and say, "Hey human. Go figure it out."
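To make that playbook idea concrete, here is a minimal sketch in Python of the kind of triage logic described above: enrich a failed-authentication alert with context, auto-close the obvious operational noise, escalate likely password guessing, and hand everything in between to a human with the context attached. The alert fields, thresholds, and verdict strings are all hypothetical; they are not drawn from any particular SIEM or SOAR product.

```python
from dataclasses import dataclass
from typing import Set

@dataclass
class AuthAlert:
    user: str
    failures_per_minute: int   # hypothetical field from the login system
    eventual_success: bool     # did the same user log in shortly afterward?
    source_ips: Set[str]       # distinct source IPs seen for these attempts

def triage(alert: AuthAlert) -> str:
    """A toy playbook: auto-close operational noise, escalate likely
    scripted guessing, send the in-between cases to an analyst."""
    # A few failures followed by a success from a single IP looks like a
    # forgotten password: an operational issue, not an attack.
    if (alert.failures_per_minute <= 5
            and alert.eventual_success
            and len(alert.source_ips) == 1):
        return "auto-close: likely forgotten password"
    # High-rate failures or many source IPs suggest scripting or guessing.
    if alert.failures_per_minute >= 100 or len(alert.source_ips) > 10:
        return "escalate: possible password guessing"
    return "send to analyst with enrichment attached"

# Example: five failures, then a successful login from one IP.
print(triage(AuthAlert("alice", failures_per_minute=5,
                       eventual_success=True, source_ips={"10.0.0.8"})))
```

As Anton notes, the trust boundary can move over time: first the system proposes a verdict with its reasoning, and only later, once the humans trust it, is the auto-close path enabled.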

Charity: That's terrible.

Anton: It's terrible, yeah. But others try to say, "OK. So what happened with the same user in the past? What else happened in the same system? What else was around?"

Charity: Any time you automate it away, you introduce the possibility for someone to learn how you've automated it and get around it.

Anton: That's more often an irrational fear. Because to me the attackers are busy with other stuff and have other things to do.

Charity: It depends on how determined they are.

If an attacker is determined, they're getting in. That's pretty much it.

I wanted to draw an analogy between this and banking laws: how we created this massive corpus of very small, fine-grained rules to try to keep the financial system from collapsing again, instead of something like the Volcker Rule, where banks just can't hold more leverage than X percent, which kept us pretty safe for many decades.

Rachel: But it prevented them from growing too big to fail, Charity.

Charity: I know, I'm such a buzzkill. How do you feel about that tradeoff, between many small, complex rule sets, which you can look at more as an emergent, machine-learning-type path, and just "don't try a password reset more than twice in a minute"?

Anton: I'm almost jumping in my chair to answer this one, because I dealt a lot with PCI compliance and other compliance in the past. It's like the debate that happens when people who are involved with compliance get into a bar and have drinks. The question is broad, goal-based compliance, say, "Do a risk assessment and follow the results; do something about what you found," versus very granular rules, say, "Reset passwords every 37 minutes." Obviously I'm exaggerating, but the point is that we do see people try the former and the latter, and they have different failure modes.

Of course, with the detailed rules, people just start obsessing about tricking the rules and forget the goal. But with goal-based rules, they say, "Screw it, we accept all the risk. Risk assessment? Yeah, we've done it. There are some risks. We accept them. Bye." They don't do anything.

And in the other case, they don't do anything real because they just try to check the boxes. I'll give you an honest-to-goodness answer: I don't know. I lived a lot of my life having to defend finely granular rules like PCI, and I built a lot of good arguments for the detailed rules: change passwords every 30 days, patch every 15 days, respond to alerts within 24 hours.

I've spent a lot of years of my life defending that. But I've also seen it fail spectacularly. So I have a slight preference for detailed rules, because maybe I have a low amount of faith in humanity. I don't know. I've seen people who really advocate the goal-based rules, where they say, "Look at your risks rationally and then decide what to do about them and then do it." I've seen it work well. At this point I don't know. This is a painful question.

Charity: Nothing works yet is what you're saying.

Anton: Both roads have bad failure modes. I don't think nothing works. I think that's too drastic.

Charity: I'm oversimplifying, of course.

Anton: But maybe there's some sort of harmonious hybrid, but I haven't seen it.

Rachel: Then everyone gets a pony.

Charity: Everybody gets a pony with a unicorn horn on it. Well, you bring up compliance, and this is a big driver for companies to make investments in security: GDPR, SOC 2. What are the strengths and weaknesses of that movement, which is partly what you were just addressing?

Anton: The obvious positive to me is that it motivates people who would otherwise do nothing. If they blatantly just accept all risks, or prefer not to know their risks, a compliance auditor will come along, get a hammer, slam them over the head, and they'll say, "OK. We've got to do something." So to me this is a positive moment.

Now, if what they think is, "Slam. We've got to do something." "What should we do?" "We have to cheat the auditor," that does happen, and in that case this positive moment of motivating security improvements has been squandered.

The motivational power is there, so people who otherwise would do nothing have done something.

Charity: This brings us back to observability a little bit. I wonder how you think about the emerging nexus of security and observability. To me it seems like the biggest headwater issue is that mostly people don't want to know. Because if they know about a problem then they have to take action on it. What are your thoughts on that?

Anton: I would say that we have nearly departed from the prevention-heavy world. That was one of the themes I wanted to bring up, is that we have lived for a good number of years where people just wanted to build higher walls and hope for the best.

And if you spend years building a higher wall you tend to be committed to this approach and you don't remember about the detection, the visibility, the observability side at all because you're just full of hope that the higher wall will work. It's funny, security guys are supposed to be paranoid, but then they still engage in higher wall building.

So their paranoia wasn't, "Oh my God. Who dug under the wall?" It was like, "Oh my God. Our wall is not high enough." And I'm like, "What the hell?"

Charity: They dug a tunnel.

Anton: Your paranoia is just inadequate. And to me the visibility and getting more data about the environment is a big deal, and more companies are waking up.

Rachel: I have a question for both of you on this point. The Tesla hack in February, the one where they were using Tesla's servers to mine bitcoin. Wouldn't an observability driven approach have picked that up earlier than it was picked up?

Anton: Probably a preventative approach would have stopped it. To me that seems a fairly low-end problem. It's not exactly the Chinese stealing the electric car secrets. It's more about somebody mining bitcoin.

Charity: Which is why it's fun to speculate about. I think that it's a question of, observability is, "Can you ask any question?" And you still have to have the desire to go and ask the question. We see outliers all the time in our systems of various types.

I have no doubt that these servers generated all kinds of outliers. Network traffic was probably saturated, CPU was probably off the charts, resource consumption was certainly impacted. Often what we find is that even people who roll out very sophisticated observability don't have the time or the curiosity or the inclination to look.

And you can't go track down every single outlier you see. It has to be an outlier that in some way impacted your code running, or your execution of your daily activities. Because if I'm sitting there and I see some blips on the wall, I'm like, "Who knows." But if I'm investigating something because something's wrong and I see some blips, I'm going to go look at those. So I think the answer is "Maybe." It certainly has the ability to, but it needs the human element too.

Anton: I would say, on the part about anomalies, that's basically the hell I often give to the vendors who have all sorts of machine learning stuff, because they show anomalies that are mathematically anomalous but operationally not anomalous.

Charity: If you don't think you have any anomalies in your system, then your tools are just terrible.

Anton: But the point is many anomalies are benign. It's just anomalous. It's mathematically strange but operationally not strange at all. To me, that's why I have a bit of a fear that some people who want to rely too much on automation would end up being flooded by anomalies of the mathematical kind and not of the operational kind.

Charity: This is why everybody wants machine learning and AI to take care of their problems for them. But in fact, A, false positives are incredibly expensive. And B, like you said, your systems are flooded with anomalies at any given time.

If you're shipping code every day your baseline is changing too, and you can't train your corpus of data off of anyone else's production systems. It has to be local to you.
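That point about local baselines is concrete enough to sketch. Below is a minimal illustration, assuming a simple rolling window rather than any particular vendor's model: the baseline is computed only from this system's own recent data, so it keeps moving as the system's behavior moves. The window size, warm-up count, and threshold are made-up numbers for illustration only.

```python
from collections import deque
from statistics import mean, stdev

class RollingBaseline:
    """Flag values that deviate from a baseline built solely from this
    system's own recent history; all parameters are illustrative."""

    def __init__(self, window: int = 1440, threshold: float = 4.0):
        self.history = deque(maxlen=window)  # e.g. the last day of per-minute samples
        self.threshold = threshold

    def observe(self, value: float) -> bool:
        """Return True if the new value looks anomalous against the local baseline."""
        anomalous = False
        if len(self.history) >= 30:          # wait until there is enough local data
            mu, sigma = mean(self.history), stdev(self.history)
            anomalous = sigma > 0 and abs(value - mu) / sigma > self.threshold
        self.history.append(value)           # the baseline keeps moving with your system
        return anomalous
```

Because the window is bounded and always refilled with your own recent data, a deploy that shifts the baseline stops looking anomalous once enough new samples arrive, which is the property Charity is describing.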

Anton: That's correct. To a large extent that's exactly correct. That's why I'm a little bit afraid of the ML claims.

Charity: I'm not afraid. I think they're going to crash and burn and I'm pretty stoked about it.

Anton: I think there are narrow areas where it works somewhat well.

Charity: Absolutely. Are they going to put me out of a job? I don't think so.

Anton: No, that's probably not happening any time soon. Not within a short time.

Charity: It can happen when I'm dead, that's fine.

Rachel: And my rant number three is that people always want to talk about the algos like they're magically intellectually pure or divorced from context. But even the way we talk about this reflects our biases, calling unusual events black swans. Until I was 21, I had never seen any other kind of swan. Black swans were like pigeons where I grew up.

Charity: Cool. All right, I think this is our last question. If you were a hegemon for a day, what one security-related law would you impose on the whole world for, say, 10 years? No one can alter it or repeal it. And what one law would you repeal, just to try to make the world a better place for humans and security?

Anton: That's a tough one, because I just spent some time explaining how high-level goals maybe don't work, but detailed guidance tied to a specific check also doesn't work. That's why I am very tempted to chicken out of this question and say, in all honesty, I don't think laws are the answer. Part of my answer would be: can you really legislate security, even if you have a magic wand and omniscient--

Charity: Do you think that the security laws are absolutely pointless?

Anton: Short of motivating people to explore security and figure out what to do, there has been a decent balance of good and bad.

Charity: For me it would be something around user privacy. I would probably think hard about passing something that legislates the user's right to privacy, and possibly the expiration of their data.

Rachel: You're very legislation-happy today for a libertarian.

Charity: It's my question and I just want to answer it, God damn it.

Rachel: Can I pick some laws I'd like to see enforced? Because that's really of interest to me.

Anton: It's kind of funny, because this is a good way to maybe attack you a little bit, because we talked about observability. And to me, observability and transparency are much better values than privacy. Better spiritual goals.

I am a transparency and observability guy, and I'm not a privacy guy. I'd rather have the record. We can debate who should have access to the record, but I'd rather have it than not have it.

So privacy, I'm not sure. I've been leaning a little bit the other way, because I'm afraid of people who push totalitarian, privacy-driven regimes. To me, that's the opposite.

Charity: "Totalitarian privacy regimes."

Anton: You're right, maybe I'm strange. But the first time I read about the right to be forgotten, I remembered that in Stalin's years in the Soviet Union, Stalin's henchmen would actually edit people out of books. And I thought, "The right to be forgotten is Stalinism. This is people editing history. It's just horrible," and I freaked out.

Charity: But it has to involve agency.

The entire point of this has to be, that it is not imposed upon you. It enhances your personal agency to drive your own life narrative.

Anton: But who is the judge?

Charity: I am.

Anton: And I'm nervous about that. I would say I'm more afraid of that. Oh, you are?

Charity: I'm the one with the agency. Just kidding.

Rachel: My counterexample to the Stalinist approach of airbrushing history is Google Buzz. One of my friends had successfully eluded her abusive ex-husband, and when Google Buzz was introduced, because it connected your friends of friends to your friends through contact details, her abusive ex-husband found her home address.

People who are not privileged, people who come from any of the intersectional marginalized identities, have a lot more reason than, with respect, white men to want to protect their privacy. And those reasons sometimes boil down to life or death. I just wanted to throw that perspective in there.

Anton: Yes. I would go with that. When I wrote my anti-privacy rant, I got a couple of comments to that effect on the blog. Basically people said that, and I didn't get it then. Now I get it. On the blog I didn't quite get the whole "white man" angle to it, but I've seen the persecuted-groups angle for sure. If you have something to fear, then surely you shouldn't be publicly known here and there.

Charity: I guess my point about agency is just that nobody knows that but you. And I think that giving people the tools and the power to-- No?

Anton: No. Because people generally don't want bad stuff to be known for no good reason. Like, I committed a crime and I don't want you to know because I have agency? No. Transparency trumps it. Transparency trumps it.

Charity: OK. Now that's a completely different issue. If you have committed a crime you give up certain freedoms.

Rachel: So now I'm going to play devil's advocate and argue against my own interests, which is that Parabon has been doing this incredible work closing cold cases by getting DNA from rape kits that are 30 years old. Then the investigators put the profiles up on Ancestry.com and the other DNA sites, find close family members, and trace the family tree to find the killer.

This is how they found the Golden State Killer. This is how they found a cold case the other day. I feel really confused about that. Because on the one hand, "Justice!" And, "Lock them up!" On the other hand, my DNA is mine, and that's super creepy and weird. So I'm having a lot of emotions about it.

Anton: DNA, yes. But your name, your likeness? I'm not sold that it's yours. And I'm almost surprised about it, because in the same blog post I said this is a uniquely Western concept. In some other countries, like China, maybe Russia, maybe others, it's not how people think about it.

Charity: Yes, well the communitarian cultures continue clashing with our idea of "modern."

Anton: I don't think that's a democracy-versus-totalitarianism thing.

Charity: I didn't say that. I said communitarian, versus individualistic.

Anton: I will give you a "maybe" on this one, I don't know. To me that's a big interesting debate to have. My short answer to your question, as far as privacy, probably I would not do privacy laws. Because I'm afraid of them going wrong, and I'm still afraid of GDPR going wrong, by the way, too.

Rachel: Interesting. What's the worst-case scenario for GDPR?

Anton: Europe becomes a digital third world where technology beyond the typewriter is forbidden. That's the super-extrapolated vision where Europe is a technology-free zone and an IT-free zone.

Rachel: Well, with that nightmare vision of 1930s Paris, we've taken up so much of your time. Anton, it's been an absolute delight. We'd love to have you on again. We've barely scratched the surface of stuff that we can discuss here. Thank you so much.