
Ep. #43, Security Woven In with John Amaral
In episode 43 of Generationship, Rachel explores the frontier of AI-driven cybersecurity with John Amaral, co-founder of Root.io. Together they unpack the promise and challenges of agentic systems that detect, patch, and remediate vulnerabilities automatically. This installment offers a mix of technical deep dives, AI optimism, and a vision of security that works more like an immune system than a fire alarm.
John Amaral is the CTO and co-founder of Root.io, a cybersecurity platform pioneering agentic vulnerability remediation. A veteran of Cisco, CloudLock, and Trustwave, John has built and scaled multiple successful security businesses through acquisition and growth. Today, he focuses on harnessing AI agents to automate software security and transform how organizations defend their infrastructure.
Transcript
Rachel Chalmers: Today I'm so happy to have John Amaral on the show. John is CTO and co-founder of Root.io.
He's a veteran cybersecurity leader with a proven track record of scaling and exiting successful companies.
At Cisco, he led product for cloud security, its fastest-growing security SaaS business.
Before that he ran product and engineering at CloudLock through its acquisition by Cisco in 2016.
Even earlier than that, as SVP of product at Trustwave, John led its industry-leading security portfolio, culminating in a strategic acquisition by SingTel.
Today he's building Root.io, a next generation cybersecurity platform, pioneering agentic vulnerability remediation, AVR, to automate and eliminate software vulnerabilities at scale.
John, thank you so much for coming on the show.
John Amaral: Thank you so much for having me. It's exciting.
Rachel: You have had a front row seat to cybersecurity's evolution from traditional approaches at Trustwave to leading cloud security at Cisco and now building agentic systems.
How do you see AI changing the fundamental relationship between security and infra?
John: I think we all have a front row seat to watching AI and LLMs and agentic systems transforming pretty much everything, especially around software development.
And that's where it's an acute focus today for lots of the transformations.
Where I see the opportunity here is AI can really collapse the gap between discovering vulnerabilities and fixing them.
Security used to be like a spectator with regard to infrastructure. It watched, it logged things, it raised alerts.
But infrastructure was something you kind of protected after the fact. Security kind of told you what to look for and then humans went and did the work to change things.
Now with agentic systems, security can become literally part of the infrastructure itself.
Think of like an autonomic immune system that can actually do the work.
I have a belief that we're moving from the age of "software as a service" to "do it as a service," which is: I just want the agents to give me the outcome, not become a tool my people use to get the outcome.
And so instead of chasing infrastructure changes, with AI and agentic systems, software can move in lockstep with the changes, turning what used to be weeks of remediation effort into hours. And it's all about remediation now.
Security is actually something that can analyze, plan and take action in real time.
So rather than getting a list of vulnerabilities that affect you, why don't we just fix them automatically with agents? And that's what my company's trying to build.
Rachel: So many questions about that.
First up, I mean the prospect of remediation as a service is obviously incredibly attractive, but what about trust?
Like if I'm not just getting alerts from my security system, but I'm trusting agents to write patches and apply them, where, if anywhere, is the human in the loop?
John: Yeah, our current approach is to use an agentic fleet, a couple of different swarms.
Sort of think of it like a swarm of researchers who figure out what the vulnerabilities are all about.
And then a swarm of remediators who basically write patches, backported patches, for vulnerabilities.
And we're focused on open source software.
So think about the use case where I'm writing a software application and I'm using version 1.2 of a given library and that has vulnerabilities.
Maybe I can move to version 2 or version 1.5 of that library.
Maybe the maintainer did a good job of patching it, but they also probably added a bunch of new features which, when I upgrade, could leave me with security or functional debt.
I have to go and now make my software fit that new library. There's a lot of risk in upgrading, especially when the semantic versions are wide.
So our system will take security maintenance code from these later releases and move it back to the version you're on, so users can always count on having really good security fixes for whatever version of a library they're on. That gives them this kind of perfect balance between securing the software they use and keeping the decision to change to a later version a business decision, not a security-maintenance-driven thing they have to do.
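The upgrade-versus-backport decision John describes can be sketched in a few lines. This is a hypothetical illustration, not Root.io's actual logic; the function names and the rule of thumb (wide semantic-version gaps favor backporting) are assumptions:

```python
# Toy sketch of choosing the safest remediation path for a vulnerable
# library, based on how far the fixed release is from the version in use.
# All names and thresholds are illustrative.

def parse_semver(version: str) -> tuple:
    """Parse 'MAJOR.MINOR.PATCH'; missing parts default to 0."""
    parts = (version.split(".") + ["0", "0"])[:3]
    return tuple(int(p) for p in parts)

def choose_strategy(current: str, fixed_in: str) -> str:
    """Pick a remediation strategy for a library vulnerability."""
    cur, fix = parse_semver(current), parse_semver(fixed_in)
    if fix <= cur:
        return "already-fixed"
    if fix[0] != cur[0]:
        # Major version jump: breaking changes are likely, so port the
        # security fix back to the version the user is already on.
        return "backport-patch"
    if fix[1] != cur[1]:
        # Minor bump: new features ride along with the fix; upgrade only
        # if tests pass, otherwise fall back to a backport.
        return "upgrade-with-tests-or-backport"
    return "upgrade"  # patch-level bump: lowest-risk path

print(choose_strategy("1.2.0", "2.0.1"))  # backport-patch
print(choose_strategy("1.2.0", "1.2.9"))  # upgrade
```

The wider the version gap, the more a straight upgrade risks the functional debt John mentions, which is exactly when backporting pays off.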
Now back to your question. I give that background just so people understand kind of what we do.
We do that for you in our software factory, so to speak. And we offer a SaaS service that delivers these patches to you when you need them.
We have humans in the loop, so we have a research team and a bunch of really great security engineers who are really good at doing this.
Like they could do all the manual patches; there are just too many to do.
So we've built agents that turn them into kind of a cybernetic creature.
It's a lot of software factory with workflow and then agents that are looking to scale and automate the patch creation.
We've actually worked really hard at making it work really well. But every patch that goes into our flow and reaches our customers is reviewed, just like you'd do a code review if you're building software.
They look at the patches, they make sure the tests were good, they make sure the code looks rational.
Now the agents do a really good job of documenting everything they do.
So it's like, "why did you change that line of code?"
Well, the agents will tell you, "I changed it because I saw the fix over here and it looks like it fits in this code right here."
So really good evidence, really good procedural work to make humans be able to understand it quickly.
And of course we have, I'll call that a workbench that we've developed ourselves that lets users really interrogate and understand this.
But you can think of the patching agents creating a PR against a repo, and then the humans have to approve it, working in lockstep.
So we can take a process that has, say, a nominal throughput of X, and we can make that go 100X with the same set of humans.
So I think that's a pattern you'll see happening across the industry in many different ways.
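The review gate John describes, agents drafting patches with documented evidence and humans signing off like a PR approval, might look something like this minimal sketch. Every name here is invented for illustration:

```python
# Hypothetical human-in-the-loop gate: an agent-drafted patch ships only
# when it carries evidence, passes tests, and a human reviewer approves.
from dataclasses import dataclass, field

@dataclass
class AgentPatch:
    cve_id: str
    diff: str
    evidence: list = field(default_factory=list)  # agent's documented rationale
    tests_passed: bool = False
    human_approved: bool = False

def ready_to_ship(patch: AgentPatch) -> bool:
    """A patch reaches customers only when every gate is green."""
    return bool(patch.evidence) and patch.tests_passed and patch.human_approved

patch = AgentPatch(
    cve_id="CVE-2024-0000",  # placeholder identifier
    diff="- strcpy(buf, src)\n+ strncpy(buf, src, sizeof(buf) - 1)",
    evidence=["Fix mirrors the upstream commit for this CVE"],
    tests_passed=True,
)
print(ready_to_ship(patch))   # False: no human sign-off yet
patch.human_approved = True
print(ready_to_ship(patch))   # True
```

The point of the `evidence` field is the "why did you change that line" trail John mentions: the reviewer approves quickly because the agent documents its reasoning.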
Rachel: Yeah, it's definitely an emerging best practice that we've talked about here before on the show.
But my heart does go out to engineers who thought they were going to spend their lives coding, who are now like reviewing PRs all day, every day.
John: No engineer said, "I want to be fixing vulnerabilities that someone else wrote in their code."
So in this case, we're good at that. Our guys actually like doing the security work. In most organizations, engineers don't like doing that security work.
Rachel: I've often said security is the opposite of engineering. It's locking stuff down so people can't do anything.
John: I think that is a good perspective. I believe it too.
Yeah, I've just made my life out of fixing security holes and stuff.
Rachel: It sounds like you had a pretty sweet gig at CloudLock and then at Cisco.
What prompted you to take the leap and start fresh with Root.io?
John: Yeah, those were really two great businesses and we had this kind of privilege to be acquired by Cisco and become part of that organization.
And the Cisco organization I worked in for cloud security was like a really exciting and fast moving and growing thing.
So it was awesome and I loved every second I was there.
At Cisco I had the privilege of working on these cool security problems, cloud security and all that.
But I kept seeing the same treadmill, especially internally as we built our own products around vulnerability detection. I had to build FedRAMP solutions, solutions that were highly regulated.
We had a big business so we got to sell it everywhere.
And detection got better and better but the teams really lagged on remediation and sometimes that was a business problem. And I think that's what a lot of folks face today is they just can't deliver security fast enough to keep up with the rate of change and with the rate and volume of new vulnerabilities.
It's like, do I spend all of my engineering cycles fixing vulnerabilities? I don't want to do that.
I've got to ship and build the business. So teams were drowning in alerts and vulnerabilities couldn't keep pace.
That was kind of the spark for Root. It's like, what if we just stop pointing at problems and actually fix them at scale?
And that's the idea that spurred us to go off into this market. It was the dream of making the root problem just go away. Right?
The root cause, which is in the end we consume a lot of software from open source.
We really have no idea how it works, which is dangerous unto itself.
And we needed a system that could provably and in a trustworthy way make that software as secure as we need it to be to meet our business goals.
And of course here along comes the age of agentic AI and LLMs that are really pretty good at coding.
You know, they need help but they can do a pretty good job and we've been able to leverage that and a lot of expertise to kind of build something that can approximate that goal.
So it's very fun and there's lots more good stuff happening next.
Rachel: So let's dig a little bit deeper into agentic vulnerability remediation. What does this look like in practice?
I'm a CISO, I've got bot farms continually port scanning. I get a zero-day vuln. What happens?
John: Yeah, so you not only have zero-day vulns, but you have like hundreds or thousands of known vulnerabilities in your images.
And the vast majority of those live in the open source software you built your container images on.
So container images are the de facto package of software today that runs cloud native.
So I'll talk about container images and that's really where we've started to act.
They're the embodiment of all the open source that you need to build the apps you do.
And you know, 99% of your software is open source, starting with operating systems and moving up to things like, you know, I'm using an open source version of Envoy, or I'm using an open source version of Grafana.
That's what everybody does. You get it off Docker Hub, or I'm using Python, you know, or I'm using Node, or I'm using Rust or Go.
These are foundational libraries that we build our applications on and then just above that we might go off and get library XYZ from some maintainer and we build our dependencies and away we go writing our first party code.
But that third party code is a black box effectively to the user.
When you run your security scanners on that like everybody does, they get hundreds and thousands of vulnerabilities.
It's not unusual to have images that have several hundred or a thousand vulnerabilities in them, coming off open source.
You multiply that times hundreds or thousands of images that you use. Base images can number in the tens or hundreds and application images can measure in the thousands or multiple thousands in even a modest organization.
Let's pile on the fact that we can use coding agents like Cursor and Claude Code to multiply our ability to generate new code, and suddenly you've got vulnerability lists numbering in the thousands or tens of thousands, many of them high and critical.
And it's a lot of work to get those out of that open source software. And there's a lot of different strategies.
Historically that was all about, "hey, let's prioritize and then triage and only fix the ones that are the most important."
But those are often the ones that are hardest to fix. And so in today's day and age, we don't need to do it like that.
Like this is why Root was born. What if, when that open source came to you, you could just select to remediate all of those vulnerabilities automatically? And so that's where we focus.
You still can keep your same workflows, but instead of pulling images from open source or libraries from open source, you can pull them, ask us to act on them for you, and then we'll render a version of that that has no or very few vulnerabilities.
Most of the high and critical vulnerabilities are gone automatically, in three to five minutes on an average container image.
And behind the scenes, as I mentioned earlier, this agentic software factory we have is constantly looking for and creating new fixes, patches, upgrades, whatever it can find on your behalf.
So it's like you have a 24/7 security SRE team.
When we see a vulnerable package, we plan the safest strategy for you.
Like either an upgrade that will not break your software, or we'll create a new backported patch.
And the future is even brighter: we could do other kinds of fixes, maybe patches for zero days, or even finding zero days for you.
Our system will be able to do all these things in the future. And you can just get that library or that new software base layer and you can deploy it right away.
We test it, you can run your same CICD tests that you do today. And it doesn't just tell you that there's a fire, it helps you put that fire out effectively. And that's what agentic vulnerability remediation is in practice. It's a closed loop where detection and fixing happen together automatically across all of your third party open source software.
And it takes your vulnerability management costs and time and compresses them by like ten or a hundred x.
So we can eliminate, and we have done this, in a matter of days, something like 7,000 vulnerabilities that would have taken a team maybe forever to fix, at, you know, a million dollars' worth of cost.
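The closed loop John outlines, scan, remediate everything that has a known fix, and hand back only the short residual list as the new baseline, can be approximated in a toy sketch. The fix catalog, image names, and CVE identifiers here are all invented:

```python
# Hypothetical closed-loop remediation: given scan results per image and
# a catalog of fixes the factory has built, return only what remains.

def remediate(image_vulns: dict, fix_catalog: set) -> dict:
    """Return, per image, only the vulnerabilities with no available fix."""
    return {
        image: [cve for cve in cves if cve not in fix_catalog]
        for image, cves in image_vulns.items()
    }

scan = {
    "api-server:1.4": ["CVE-A", "CVE-B", "CVE-C"],
    "worker:2.0":     ["CVE-B", "CVE-D"],
}
fixes = {"CVE-A", "CVE-B", "CVE-C"}  # patches the factory has built

baseline = remediate(scan, fixes)
print(baseline)  # {'api-server:1.4': [], 'worker:2.0': ['CVE-D']}
```

The residual list ("CVE-D" here) is the handful of strategic items John says are left for humans: vulnerabilities that are too new, or untenable to patch.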
Rachel: So what am I actually looking at? Like every security team I know is inundated by alerts.
Are you having the agents look at all of those alerts, and then the humans just look at the output of the agents?
John: Yeah, when it comes down to this vulnerability management specifically, as I said, those scanners are producing long lists of vulnerabilities.
That's what you're used to seeing. That's the output you're normalized to.
You don't scan until after we act on them. And so we have a scanner built into our system, and we can also take scan results from someone else.
You can take that big list and point us at all your images, and we'll effectively just fix it all.
And then what you see is just the output of our system as your new baseline for vulnerabilities. And that's just a few.
And so now your job is "okay, I used to worry about prioritizing which vulnerabilities to fix, but now I have a short list and I can just go understand what those are."
And if we can't fix them, it's usually of the ilk, like, "hey, that's a pretty new vulnerability that just got reported and we're really not sure what it impacts yet."
So there's more research to do, maybe by the vulnerability community at large.
Or it's maybe an untenable thing to patch, meaning that like the version differences are so big that it becomes like a maintenance discussion internally.
Like how should we really approach that library now? Maybe it's woefully out of maintenance or something like that.
So it brings it down to a handful of important things that really become a strategic decision.
So stop. You know, don't have your engineers thinking about tactical list juggling. Let them start thinking about the strategic security problems, which are more along the lines of planning and architecture and which libraries should we be using, not which libraries are we using.
And how do I manage the thousands of noisy streams of stuff I normally dealt with?
It also brings a lot of effort back from the engineers who would have had to go off and figure out how to modify those libraries or do upgrades or whatever.
So it really takes you out of this kind of continually reactive and kind of frustrating cycle of remediation to "okay, let's think about the important things now."
Rachel: It kind of sounds like running Superhuman on your email.
It's bubbling up the things that you have to pay attention to and just handling all of the other stuff.
John: Yeah, exactly. It's taking the burden away from the effort and results in real security because your risk surface is greatly shrunk now that we've been able to security patch or maintain the libraries for you.
And usually it fits right in with your workflow, you're going to get your base image from somewhere, the images you use in your infrastructure and build from, so pass them through us.
And then we do this transformation into these more secure versions of those. Whatever software you have, whether you're a Debian or Ubuntu or Alpine or what have you, we can fix the stuff you've got so you don't actually have to switch to a new kind of operating system or some proprietary base image.
We fix what you have and just make that burden of remediation go away.
Rachel: So that's a perfect segue. Let's uplevel, let's talk about the strategy.
How should teams think about protecting infrastructure that's basically software?
What are some of the best practices you've seen and what are some of the risks that you see coming?
Because AI is also obviously bringing harms in its wake as well as benefits.
John: Yeah, well, in modern systems we have code of all sorts, right?
We've got software that we build our applications from, we've got code that creates our infrastructure. Andreessen was right, software did eat the world and everything is software. And I think organizations that have embraced that, the agility and the flexibility that doing everything as code can bring you, including infrastructure as code, has brought them a lot of gains. And the most mature have really good software development practices.
So they're mature in their CICD and their DevOps and DevSecOps.
These companies are really at a really good place in their life cycle to adopt this kind of agentic technology.
And I do think it's mostly now going to succeed in the hands of organizations that are already quite mature, but have these classes of problems that security brings, like too many blinking lights or too much to triage or too much to prioritize.
Well, because they're mature and they have the tests and they have everything as code, agents can plug in there and act on that because agents are good at looking at code.
And it can really give you that big lever to move this security debt out into the hands of agents.
So if, say, you're stuck in old models like patching servers one by one, it's harder, right?
Like the old times were more difficult to make change. Agents want to talk to things in agent speak and in agent interfaces.
So we can apply that when you have the interfaces, right? So everything is an interface, everything is an MCP server.
In the future, everything is agent to agent. We'll give the agents the ability to communicate and if you embrace this, and you probably embrace the idea of infrastructure as software, all this gets easier.
For instance, you can patch an image once and protect thousands of workloads downstream because you can pass that change into a system that knows how to just propagate it.
And that's the DevOps maturity coming into place where I can just knock down all my vulnerabilities in production very quickly.
At Root, we treat these kinds of fixes like code: versioned, reproducible, auditable. That mindset turns infrastructure as code into a huge advantage for security.
So all the work people did to automate, we need to pull agentic interfaces on that, let the agents talk to it, and then bring specialized systems like mine that know how to generate patches.
Now you can fleet this, you can build this into swarms that are just basically doing all the grunt work of DevSecOps toil, and they can do it automatically.
Again, leveling up people to be the strategic guides for this. And that's where human judgment is not going to be replaced anytime soon, honestly.
But we'd love to have these workers doing all the hard legwork for us.
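The "patch an image once and protect thousands of workloads downstream" idea John mentions amounts to a walk over the image dependency graph: fix a base image, then rebuild everything built on top of it. This sketch uses an invented registry mapping, not any real tooling:

```python
# Toy propagation of a base-image patch: a breadth-first walk over a
# hypothetical "built-from" graph yields every downstream image to rebuild.
from collections import deque

# image -> images built FROM it (invented example registry)
children = {
    "base-os:12":  ["python:3.12", "node:20"],
    "python:3.12": ["api-server:1.4", "batch-job:0.9"],
    "node:20":     ["web-frontend:5.1"],
}

def rebuild_set(patched_image: str) -> list:
    """All downstream images that must rebuild after a base-image patch."""
    seen, queue, order = set(), deque([patched_image]), []
    while queue:
        image = queue.popleft()
        for child in children.get(image, []):
            if child not in seen:
                seen.add(child)
                order.append(child)
                queue.append(child)
    return order

print(rebuild_set("base-os:12"))
# ['python:3.12', 'node:20', 'api-server:1.4', 'batch-job:0.9', 'web-frontend:5.1']
```

One patched base layer fans out to five images here; in a real estate of thousands of application images, that fan-out is where the leverage comes from.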
Rachel: Having been through multiple acquisitions, how do you think about building Root.io?
Are you designing with potential acquirers in mind or are you focused on the technology in the market?
John: Yeah.
I never think about designing anything for acquisitions. This is like a trailing indicator, right?
I've been through several acquisitions, I think it's five now. And one thing I've learned is you can't build for that outcome. It's just a distraction. It's all about building value for end users: solving real problems, creating a new way to do a job to be done that they hated to do, or that was hard or inefficient or too costly to do. So we're building a world where remediation is table stakes.
It's basically a remediation first kind of security mindset. And we can extend that to a whole bunch of different applications and use cases.
The ones I've described are our first. But you know, it's one where security teams expect AI agents to do work for them to fix vulnerabilities.
I think that's the future, I'm sure of it. Honestly, I think we all believe agents will be a big lever for us as we see it happening already with coding and all that.
So if we execute on that, the market opportunities, whether that's partnerships, IPO, acquisitions, will come naturally.
But mostly laser focused on outcomes for users and bringing that value we talk about.
Rachel: Perfect answer from an investor's point of view.
Does this feel like a natural evolution of the tool set or more like a step change in the cybersecurity landscape?
How do you think organizations should prepare for the future?
John: I think this kind of step change disruption is going to happen simultaneously almost in a vast set of, I'll call it traditional use cases that are going to upend the way that you did things.
In my case it's a thing called SCA, which is basically all this vulnerability management stuff, scanning.
That domain will switch from scanning to automated remediation, and all the tooling we needed to deal with large sets of vulnerabilities will be gone.
It's going to happen in 18 months, two years: the status quo will be that we just don't even have any vulnerabilities.
There'll be zero days, there'll be all that kind of thing that needs to happen next. But that's a more important problem in my opinion actually.
It's also happening in the SOC, in the security operations center. So I know in the same portfolio we have some great VCs, and they also are making huge bets on agentic AI for security in multiple domains.
But one area that's also being transformed right now is SOC analyst work. Triaging, like level one, two and three triage of vulnerabilities.
This can be, again, similar to what I do with my software factory building patches using humans and agents, where the agents do like 90% of the work and the humans are verifying, or maybe digging deeper on some important things.
That's happening in the SOC right now as well. So that's another area where, you know, having a thousand SOC workers in a giant data center doing a lot of toil and manual research and things, that's going to be gone in two years as well.
And this is going to keep getting repeated. So anywhere you see humans doing a lot of manual or maybe even like I'll call it web search kind of work, or having to understand code bases or having to deal with that, I'll say the first wave of agentic systems are going to just take that work away and it's going to be more accurate, it's going to be more secure, it's going to be more correct.
Humans are fallible when you give them a lot of tedious tasks over and over again; the machines are not, especially for a certain kind of, I'll call it, convergent work.
Humans are good at more creative work and thinking through things, we'll need them. But this can all be automated and it's going to keep happening over and over again.
So humans for oversight, humans for insight, agents for automation. And they're going to continue to expand and be better as the models become better and better at execution.
Rachel: So four-hour work week, margaritas on the beach?
John: I like a margarita as much as the next person. It's fun.
And a lot of people are going to spend their time, especially as you mentioned, it makes investors happy, they're going to be spending their time on how to apply this stuff to these kind of problems that have plagued us.
I think most of this toil work is really getting in the way of security. Honestly you can't see through the noise.
So if we can just get the agents to bulldoze the noise out of the way for us and give us a cleaner, more understandable baseline where a lot of the minutiae or even let's say procedural problems are done...
In my case, we're actually patching code that maintainers refuse to or can't maintain. You know, these are open source maintainers, and I love them.
They're doing a massive benefit to humanity on average.
But they don't want to backport their patches. They've already fixed it in a new version.
That's something that's really not interesting to them and I don't blame them.
But it's still a security problem for someone else.
So if you can just take that burden away from them and from the someone else, then I think you're bridging a gap that's going to keep those creative open source folks building new things, building new value, and that's what we want.
And so, it's a good problem. But I think this is the way of the future and just beyond that horizon, I'm super optimistic about where all this can take us.
I think we'll see in two to three years that not only will you have companies like us, and the future us will be doing some more cool stuff, but these companies that are helping you by deburdening your vulnerability management will actually be powering you up with fleets of agents you can run internally, more guided by the organization.
So these are more like, I'll call it, generic DevSecOps agents that just know how to do work, and you can tell them what you want them to do. I think that's a two to three year horizon, where you're almost building your own purpose-built immune system for your software or for your infrastructure, one that looks after your concerns and knows deeply about what you do.
And that's kind of an interesting and exciting horizon as well.
Rachel: Which AI tools, if any, are you using in your own workflow?
John: We use them all. So the core of our agentic system uses a combination of OpenAI's and Anthropic's LLMs at the heart, and then we're using systems around that for building agents.
We use LangChain, LangGraph, this kind of Lang set of tools. They have some nice tools that make agentic development pretty good.
And of course we're using Cursor and Claude Code and other tooling to help us write systems.
So we've got LLM systems helping us write agentic LLM-based systems. It's a nice recursion, and of course that's the way in this new game.
Rachel: The worm Ouroboros, eating its own tail.
John: That's right, that's right. And of course humans in every loop all the time, you know, guiding and directing and organizing and creating.
And our people are really creative in building these kinds of systems. We've gotten good at it now. Been working at it for almost a couple of years now, which is forever I guess, in agentic terms.
Rachel: What about your personal CEO stuff like email, outreach, fundraising?
John: We use it, myself, my team, my co-founders, everybody's using it all the time for everything.
You know, it's my perpetual other person in the box with me doing everything.
So preparing for this presentation, my personalized versions of LLMs know a lot about me and what I do and what value we create and, as we think about planning or preparing or strategizing, it's a constant sounding board, co-creator.
I think this is only going to perpetuate and be better. I think my productivity level is like amplified.
I can't even put the... I think it's a single-digit integer number in front of X. But it might be five or six or seven, and I'm just getting started really.
But I use it a lot, and we all do. Our company has been built in the AI epoch, right?
And we've really embraced it from the beginning. I think our eyes were wide open when we saw it happening and we were like, "Oh my God, this is the single most high-leverage knowledge work enhancement since maybe the Internet or the PC."
And it might be bigger than both of those, honestly. So exciting stuff.
Rachel: Do you worry about the energy usage, the water usage, the sort of physical footprint of the AI data centers?
John: Oh, yeah, I worry about our Earth all the time. I live near the ocean, in the northeast of the United States, and there's a massive wind farm going up.
And there's a lot of debate about that: ecology impact versus energy impact. And that's the debate we have in our world right now, as you've heard, I'm sure, and as everyone has. It looks like the key resource for realizing the power of LLMs and AI is just electricity. It's a massive system that converts raw electricity into tokens. And we're not that good at doing that in a way that doesn't break our world.
It's an existential concern for me. I hope we don't need that generation ship to take us away from the Earth after we destroyed it someday, but who the hell knows?
Rachel: The Earth is the first generation ship and my favorite. It's where I keep all my stuff.
John: I like it. It's been a really good run for me. I've enjoyed it very much. And let's hope we can keep it here.
Yeah, I do believe we need to do that. And what's exciting in a way is--
I'm a techno optimist and a futurist. So I build new companies, I try to imagine new things and make them happen.
And you've been in that game and the people who listen to this are probably in that game.
So you know what I'm hopeful for is that maybe we can take the power we can make now and put it into these cybernetic systems to augment humans. And if we have the right motivations and the right interests, as we're describing, we can use that technology to help us build better technology, so that we don't mess up the planet; we can actually fix it, or be a lot better at how we mess it up, do it a lot less.
And we've seen some examples of where the Earth is pretty resilient. When you stop messing with it, it comes back pretty fast. So I'm hopeful of that.
And that's something that makes me again optimistic that all this AI stuff we can do can make us just better at not ruining what we have and build a lot of good stuff for the future, for all of us.
Rachel: Margaritas on the beach. I'm there.
John: There you go. I'd like the agents to be able to build those for me.
Rachel: What are some of your favorite sources for learning about AI?
John: Well, you know, I think the thing that I'm challenged with is with just keeping up.
So there's so much. Right?
The rate of change in new things is just mind boggling. I think what would have taken three years is compressed into like six months or three months.
So I start with a lot of summarized newsletters; TLDR is one that I use all the time. It comes in and guides me to take a quick glance at what's happening, and then I dig deeper.
And usually it's about what piques my interest or what fits within a range of topics that I care deeply about right now.
When I want to go deeper, I literally use the research tools. I use Advanced Research. I take a topic from TLDR and I say, "Let's go deep. Where can I go to look for more of that?"
Or "Give me a nice deep read on that."
I spend a lot of time looking at what Anthropic's doing, what Cursor's doing, what OpenAI is doing, because I build systems with these things.
So a lot of what I need is deep expertise on how to use them and implement them in a way that's, I'll call it, production-ready. There's a wide gap between building something with vibe coding, like a proof of concept, a flashy thing, and actually getting the AI to do the job for you as reliably as you need it to.
Rachel: Our listeners didn't get to see me doing jazz hands when you said "vibe coding."
John: Vibe coding mostly is a nice quick way to build a prototype. And then the long tail of making a real system from it is still quite challenging.
It takes massive expertise and, you know, the same old engineering dedication to your craft as it did, but the outcomes can be spectacular, like things you never imagined you could do.
So I spend a lot of time there. I read a lot of code in GitHub because that's a good place to see the creative output of really smart people who figure stuff out and how to make real systems happen.
And then at a very high level, I try to listen to people like Lex Fridman or the All-In podcast, folks who are trying to see macroeconomic or industry-level trends that can guide me to where I need to spend more time thinking.
But it's diverse and it takes a lot of time to do it. But I enjoy it. So it's not so bad. Downtime thinking.
Rachel: If everything goes exactly how you'd like it to for the next five years, what would the future look like?
John: Specifically for my domain, I just want to see the realization of "do it as a service."
So, the traditional security domains of detection and alerting and prioritization, I just want to see that go away.
So in my future, remediation agents will be standard across the board for-- if it's infrastructure as code or vulnerability management or you know, open source remediation, they're just going to be there as part of a normal pipeline.
Security teams won't spend their days triaging tickets or patching by hand. They'll be focused on strategy and governance while the agents take care of these kind of, I'll call it, "awful large backlogs" that are just not good work.
It's that simple. Humans don't like to do it and it's hard and it can be done for you automatically. The treadmill will end and security shifts from janitorial work to strategic work.
That's what I hope for in the next five years and I think it's actually very, very possible and it will happen quickly. The leading edge of this is happening now.
Rachel: So good news, we've switched to renewables. Earth is recovering. We've built you a shiny new interstellar generation ship for Alpha Centauri. What are you going to call it?
John: I'm going to call it The Outshifter. At Root, we talk about moving beyond the old mantra of shift left to shifting out, which is don't push the work somewhere else in your organization.
Push it to the agents and move it out of your organization. Retool what you have.
So it's making it collective, it's making agents plus humans make the work effectively go away so we can put our time somewhere else.
So The Outshifter would be a shift that takes us beyond the old silos and into a future where security is woven into everything.
Plus if you're heading for the stars, you might as well shift out boldly. So it's an exciting thing.
Rachel: To boldly go. John, it's been an absolute delight to have you on the show.
Thank you so much for taking the time and good luck with everything.
John: I really appreciate the opportunity. It's been an exciting and interesting conversation.
I love talking about future stuff. And this is a podcast where you can do it.
Rachel: That's what it is.