Guy Podjarny: Hello everybody. Welcome back to The Secure Developer, thanks for joining us again. Today we have with us Zach Powers from One Medical. Zach, welcome to the show.
Zach Powers: Thank you for having me.
Guy: We have a whole bunch of topics to cover. But before we dig into that, can you tell us a little bit about yourself? Some of the background, how you got into security and what you do these days?
Zach: Absolutely. Like many people who have been in security for quite some time, that is not what my initial career was. I was studying materials engineering, and then I got into different types of technology. I fell into it out of a passion for it, and then more and more became the go-to guy. Looking back, where I really cut my teeth in security, on a global scale, was at Salesforce.com, where I was the vice president of enterprise security.
I managed a lot of the internal application security and the infrastructure security, but also mergers and acquisitions, and the vendor security programs, with app sec testing on 1,000+ vendors a year. A really big, meaty program. From there I've come to One Medical to try and take on improving security in healthcare, not just for One Medical but across the industry, and to influence some of the other groups in healthcare in the United States. It's a big mix of how I got here; it's a zigzag, like most security leaders.
Guy: Indeed. Those different backgrounds are what give you an opportunity to think about the problems in a different way, and hopefully do it a little bit better. Within One Medical, what was the context of the security team when you joined? What was the rough company size, and were you the first security hire? How was that?
Zach: Security when I first joined was mainly looked at as point solutions and some infrastructure security hardening. There were a couple of people doing security work, some on the product engineering team, some on the IT team, but there was no core security function, not like we have today. When I came in, some very good things had been done, and there were still a lot of things that needed to be done. We formed that core function and started to hire a lot of industry talent, pulling from some bigger tech companies that I believe have a much better angle or approach to security today: for example, treating infrastructure as code.
Rather than thinking about devices and servers that you plug in, think about cloud first and app first, automate everything.
Those are the type of organizations we're pulling security talent from.
Guy: Got it. You're coming in, and you're structuring this team. Just for context, because we've had a bunch of these conversations on the show, what was the rough company size when you joined?
Zach: When I joined it was around 1,300. It was a much smaller company, going from a global environment across 60 countries around the world to 1,300 staff in the United States, operating across nine cities.
Guy: So you have those people, and you come in, and you build it. We talked about this a little bit in the tee-up: you come in, and you're hiring these people who come from an infrastructure-as-code world. In general, when somebody listens to you talk about security, you often tout that relationship between an understanding of DevOps practices and security. How do you see that? How do you see the intersection, or the interaction, between security and those application or operations teams?
Zach: At many companies that I've had experience with, or that I've advised, there's an older style of security team member that really does understand infrastructure, the proprietary configurations of this vendor's gear or that one's, and point solutions. Those skills were very useful at a point in time, but I find that those security engineers have a hard time relating with and influencing software engineers.
Where I see a lot more camaraderie happen, and honestly a lot more collaboration and influence happen, is when the security engineers themselves were at one point software engineers, or they have their own shops and know how to develop. They're not just script kiddies. They actually have some solid coding skills, and that goes much further.
What I often see at companies is two camps. Does the security team have a lot of software engineering talent in and of itself? Usually there's tighter integration with the product engineering teams in that style of organization. If the security team is mainly hardware focused, with a bunch of layer 3, layer 4 firewall stuff from the early 2000s, I don't see that tight integration whatsoever. There's a lot of room for improvement there.
Guy: When you build up your teams, do you find a software engineering background to be equally important to security experience? How do you weigh them? Because unfortunately, in today's world, there's still only a small group of people who have both on their resume, both software engineering and security practice. How do you weigh those two?
Zach: Absolutely. I was having this conversation last night with a bunch of security leaders: how do you scale this? A common belief that a few of us share is, "You can take a really smart software engineer and teach them security, but it's hard to take an older-school security engineer who is mainly infra focused and teach them software engineering." Part of the way that I'm scaling is by hiring engineers who were interested in security, but are really good at automation.
Good at handling more of a DevOps lifestyle, more of a continuous delivery environment. Those are the type of individuals we're scaling with and succeeding with at One Medical. It's not that we don't have the tried-and-true security veterans; we do. But we're scaling the team by teaching security to engineers who had an interest and who understand technology. We find that to be much more important right now.
Guy: I fully relate to that. I would even amend that with the fact that software engineers, as they mature and gain experience, typically build a natural and better appreciation for security. Hopefully at least a subset of them, appreciating the role of security as part of the quality of software; to an extent it depends, of course, and different people vary.
In the world of security, oftentimes as a security person's career grows, they might even grow further away from the software side of things and more into the risk aspect of the business.
I think that trajectory is maybe a little bit different, too. And not that it's easy to hire engineers, but with the security talent shortage that we have right now, the opportunity to bring somebody in from the software side and train them up is a good path. It builds options.
Zach: Absolutely. At the end of the day, it comes down to, "Can someone code?" No matter what the position is on our security team, you've got to pass an in-person coding challenge that's more than just a Fibonacci series. It really comes down to critical thinking. You don't have to be in security to be able to perform an adequate threat model; you just have to think critically and weigh risk.
We evaluate really hard on how intelligent and creative the candidates are. If they have that, they can learn security. If they don't have that, and if they don't have the coding background, they're not going to be able to move at the speed of an organization like One Medical, or at the speed of the many tech companies out there that have moved to, or are moving to, a DevOps or continuous integration, continuous delivery environment.
Guy: Agreed. Within that context, that's an interesting and forward-looking model. You hire people into your security organization with some coding skills and maybe an engineering background. What does that do? How do you see the responsibility split between that team, which has some software engineering background, and the software engineers themselves building the application? How do you divvy up the responsibility or activity?
Zach: It varies a lot from company to company. The first thing I would say is that there is some degree of embedding that we do at One Medical, where the security team members take part and sit in design reviews; it varies by company whether this can scale or not.
Moving security up front as a discussion, and having that discussion take place with the software engineers. Not having someone in security look at the product after it's been designed and developed and find holes in it.
We take part from project initiation. Not for every single feature, but for larger-scale projects or sensitive sections of code, the security team sits right with the engineering team responsible for that project. At initiation, and if you think in terms of 20% and 80% reviews, they're all there. That's before anybody has started writing anything; that's just at the design stage. That's how we do it at One Medical, and we integrate that way. It goes very well, because the software engineers tend to know that the security people who are embedded are software engineers in their own right.
They understand each other, everybody has a common language, and there's a mutual respect there. We expect software engineers to learn security whether they're on the security team or not, and we expect them to provide valuable input and make decisions, so we need to be able to empower them. If they're not familiar with security, we provide custom training for them. If they want to understand threat modeling more, we go through custom training on that.
It's mutual respect, not a big-stick policy, and that works well at One Medical. At some other companies that doesn't scale as well, to be honest. What I see people do there is develop a questionnaire, even a real quick app, that engineering teams can go through to find out, "Should this go to a security review?" Not all sections of a product, or all sections of code, are actually sensitive enough to need that.
That works out well for other companies. There is some nuance there, and it is what's culturally appropriate for your company. But either way, I believe security's got to start at the very forefront of that, at project initiation, when you're talking design. It needs to be collaborative there, and it can't just be a series of requirements that are tossed over the fence without any context. You'll hear me mention this a lot, "Security within the context of your product, your application, your company, is very important to us."
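A minimal sketch of the kind of self-service triage questionnaire described here; the questions, weights, and threshold are hypothetical illustrations, not One Medical's actual criteria:

```python
# Hypothetical self-service triage: a team answers a few questions about a
# change, and the weighted score decides whether it needs a full security
# design review. Question names and weights are invented for illustration.
RISK_QUESTIONS = {
    "handles_phi": 5,               # touches patient/health data
    "new_external_endpoint": 3,     # exposes a new internet-facing surface
    "changes_authn_authz": 4,       # login, sessions, permissions
    "new_third_party_dependency": 2,
    "infra_as_code_change": 1,
}

REVIEW_THRESHOLD = 4  # assumed cut-off

def needs_security_review(answers: dict) -> bool:
    """Sum the weights of every 'yes' answer; gate on the threshold."""
    score = sum(w for q, w in RISK_QUESTIONS.items() if answers.get(q))
    return score >= REVIEW_THRESHOLD

# A small change with one new dependency stays in the normal flow...
print(needs_security_review({"new_third_party_dependency": True}))  # False
# ...while anything touching PHI goes straight to design review.
print(needs_security_review({"handles_phi": True}))                 # True
```

The point of a scheme like this is that most changes answer "no" to everything and skip the review entirely, so the embedded security engineers spend their time only on the genuinely sensitive work.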
Guy: It's great to embed the security team and engage, and I love the common language bit. I always enjoy drawing analogies to DevOps, and oftentimes in the world of DevOps, one of the things that helps break down the walls between a developer and ops is indeed some shared background. If you've carried a pager, even for a day or a week, you have a much higher appreciation for making sure that your system doesn't go down.
And if you know how to build code, you have an appreciation that it's not that simple to keep it from going down as you build the software.
That said, there's still a challenge around scaling security; you can't involve everybody in everything. In our conversations, you were talking about how software engineers should be empowered to make security decisions, and I'm quoting you a little bit literally here. How do you draw the line? Maybe you can give us some examples of what type of decisions you think should be made within software engineering, and how you draw them in.
Zach: A real classic example; I'm bringing this up in terms of how we integrate various tools at different stages, whether static code analysis, dependency analysis, or whatnot. Having talked to thousands of software companies over the last 10+ years, I can say that at many organizations, if they have a security team, the security team will scan some code after the pull request, way after the fact.
Once it's already in production, they'll find a bunch of problems with it and toss it back over the fence without any understanding of where in the product or where in the app those vulnerabilities or logic problems are, or what the context of that situation is. They have no firm understanding of the risk there. I'm a firm believer that if you just provide information like that up front to the software engineers who are responsible for that service or that section of code, they're going to understand the context, and they'll realize, "Wait a minute. This vulnerability, maybe it's not a false positive, but it's very low risk, and here's the contextual reason why."
Let them make the decision about how to treat that situation. Or they may see something else and say, "That is far more serious than your security scanner told me. We need to hit pause, have another commit, and go through another round of testing." Why I say this is that most software engineers, especially given some training and some partnership with the security team, can begin to do a lot of this on their own, given the right tooling and the right data up front. There is no way that my security team, or any that I'm aware of around the US or in other parts of the world, can review every line of code. It just won't scale.
Then we introduce automated tools, and there's the classic griping that goes on, that the automated tools don't understand the product. And it's like, "You removed the software engineer from the equation. Let's put the software engineer back into the equation and have them do their job." They can absolutely make risk-based decisions. They're going to know better than a security team, most times, how to remediate a given vulnerability, a risk, or a code-quality issue. I'd caveat that with appropriate training, because you're always going to have software engineers who might not know how to fix a classic vulnerability.
But given appropriate training, they will, and their contextual knowledge and their desire to produce quality code, maybe that's the optimist in me, will result in a better outcome.
They do need to be empowered to make those decisions and not feel like there's this big stick policy, where they spend their time and creative effort developing software, and some other person that they never talk to is going to bash holes in it and tell them it's not good enough. That doesn't work anymore. I don't know if it ever worked, but it certainly doesn't work today. It's not how faster software delivery happens.
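One way this developer-owned risk decision can be wired into a pipeline is a CI step that compares scanner findings against a list of accepted, context-justified risks kept in the repo. This is a hedged sketch: the finding fields, acceptance format, and expiry rule are all invented for illustration, not any particular scanner's schema.

```python
# Hypothetical CI gate: engineers record risk acceptances, with a written
# justification and an expiry date, alongside the code. The build blocks
# only on high-severity findings that lack a still-valid acceptance.
from datetime import date

def unhandled_findings(findings, accepted, today=None):
    """Return high-severity findings not covered by a valid acceptance."""
    today = today or date.today()
    valid_ids = {
        a["id"] for a in accepted
        if a.get("justification") and date.fromisoformat(a["expires"]) >= today
    }
    return [f for f in findings
            if f["severity"] == "high" and f["id"] not in valid_ids]

scan_results = [
    {"id": "CVE-2018-0001", "severity": "high"},
    {"id": "CVE-2018-0002", "severity": "low"},
]
risk_acceptances = [
    {"id": "CVE-2018-0001",
     "justification": "vulnerable code path not reachable from our product",
     "expires": "2099-01-01"},
]

blocking = unhandled_findings(scan_results, risk_acceptances)
print(blocking)  # [] : nothing blocks; the high finding was accepted with context
```

The expiry date matters: an acceptance is a decision made in a context that can change, so forcing it to be revisited keeps the list from silently rotting.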
Guy: I entirely agree. One of the challenges in this model: you come in, and you have your engineers, who are hopefully educated about it. There's always going to be variety; frankly, that happens within the security team as well. You entrust them, and you tell them, "You're allowed to make decisions here. Here's a set of criteria," going back to some of the things you said, "whether you've discussed it ahead of time or it's a questionnaire or whatever, for when you should seek professional help.
You should bring in somebody from the security team to help decide with you." What do you do about incentives? One of the challenges that oftentimes comes up is that developers are not incentivized for this in their daily work. They're there to build new functionality, and if they don't deliver a feature, somebody comes knocking. But if they build in a security flaw that gets discovered a few weeks later, it's the security team that gets thrown under the bus. Hopefully nobody gets thrown under the bus and it's all positive, but how do you incentivize or encourage the dev team to embrace this ownership, among the many others they have?
Zach: It's a good question. At most companies, to be honest, there is no positive incentive, only the finger-pointing. At Salesforce we definitely tried a range of positive incentives, and I've carried that on to One Medical. Part of it is simply high-fiving somebody for doing the right thing. Part of it might be swag; everybody loves swag, and if you want an awesome hoodie, security teams know all about awesome hoodies. For individuals who continually follow good security practices and make great decisions, we've had them do a rotation or work on what we call coalitions.
Have them work on a special project to step out of their day-to-day routine. Most engineers love doing that, because they don't like looking at the same section of code all day. In a coalition, we get a cross-functional group together and say, "We've got a really hard problem to tackle, and we want you to help us tackle it." Giving new opportunities is a good way to do that. We've done things as silly as teaching lockpicking classes. Things like that. Just finding something fun and memorable to positively recognize, in a public fashion, that this engineer is rocking it with security, and here's why.
Give examples, but then give them something fun and meaningful in return. It does go a long way. The security team at One Medical often invites software engineers to happy hours where we're not just having a drink; we grab a whiteboard and discuss things. When security is talked about or experienced in a more positive manner, I do believe it goes a long way. People sometimes call this a security champions program, and some of those do work for sure, but I would say this is more just publicly and positively recognizing when people exhibit good security behaviors.
A little bit of swag goes a long way. Some really nice socks, a coffee mug, things like that. But I don't see it happen at many companies, to be honest. At many companies, if you slow your work down to produce better code, you're penalized for that. That's definitely not the case here. You need some executive alignment to be this positive about it. That exists at One Medical, at companies like Salesforce, and at a bunch of others I could name here in the Bay Area.
We have a common philosophy that it is better to produce quality code than to have to go back and fix it later on, because that usually takes longer and usually involves some angry customers. It's way more thoughtful to do it up front.
Guy: I love pretty much everything about that model. You gave a whole bunch of examples, and none of them included bonuses or financial motivations, because I don't think that's really what sets the-- You've got to have these hoodie-driven security incentives, or swag.
Zach: It goes way better. At other companies, we've tried this experiment, and the cash bonuses don't really work that well.
Guy: They create almost a cognitive dissonance, where people think that they're doing it just for the cash. If you're giving them something fun, clearly they're not doing it for that, but they're still enjoying it, and it still has the positive association that comes with it.
Zach: It's key, though, to change it up, so you don't always give the same hoodie or the same sticker or the same T-shirt. Change it up. Because if people come to expect, "If I do this, I'm going to get this thing," it cheapens the experience. There should be somewhat of an unexpected surprise. They don't know when they're going to be rewarded, but they realize that there's a culture of recognition.
Every couple of weeks, the software engineering team at One Medical gets together for an all-hands. We will sit down with the security team and call out and publicly thank people for very specific actions. They're not asking us to do that, but it definitely goes a long way, and it builds a cultural momentum that these are good things to do and that it is OK to take the time to produce better-quality code. I am an evangelist about that.
Empowering software engineers and letting them make decisions, but also recognizing them for their good decisions and good work produces way better security than not.
Guy: Fully agreed. I love that. I also feel like the teams that have the best handle on this do indeed do this. I've had the PagerDuty security team on the show, and they were talking about awards that they give out; they're not monetary, they're just recognition. Sometimes, and I forget who mentioned this, somebody talked about giving explicit security training as part of it, like sending someone to a Certified Ethical Hacker (CEH) type of course, so that they have something formal to add to their resume: "You've invested in it. We can develop those skills."
Because at the end of the day, that helps your career in the long run as well, but fundamentally it's all about building that positive sentiment around it. The world of security uses the term "shame" a lot and uses the term "pride" very little, and we need more of that pride in it. We talked a lot about the software engineering background within the security team. You have the engineering team, and you train them up, and you give them this positive recognition, and hoodies, to drive the right behavior.
You tee it up, and you define the questionnaires, or practices, or processes, or whatever it is, to help them understand when to pull in these security experts to advise and add to the context the application developers have. How do you, on the other side, structure the security team? You talked about a software engineering background, but maybe you can share: what's the org structure, or the staffing, that you think is needed on the security side to help deliver on this?
Zach: It changes a bit as you scale a security team. There's a phrase we often toss around, "the rule of threes and tens": if you're a security team of three people, the way you do things is not going to work once you're at 10 people, and you need to change. And again at 30, and so on. From a broad level, the way we are structured today at One Medical is partly due to the size of the security team and the size of the company, and that's slightly different from how I'd structure multiple teams back at Salesforce, where the teams were well above 500 people by that point.
Part of it is the scale of the company and the security team itself. At One Medical today we have a software security team, an application security team, that handles all things code, whether it's our product or internally developed applications. We have a lot of different teams internal to One Medical that develop code, not just the product team, whether they're doing that for data analysis on the backend or for enhanced productivity within this business unit or that one.
So we have an application security team that works with software engineering, and with finance, with people in finance who are coding. It's really a tech company at heart. We do have a lot of doctors, but we're a tech company through and through. As a result, we need a group of people who can partner with all these different teams. Granted, the way we do that can vary from team to team.
We're much more embedded with software engineering, or product engineering if you will, than we are with some of the other internal business units, but we provide the same services to them. The application security team also helps focus on some of the infrastructure, because when you wholeheartedly believe in the philosophy that infrastructure is code, drawing the line between what is your product and what is your infrastructure gets a little blurry sometimes.
Guy: By definition, almost.
Zach: The team handles a broad set. A subset of that team also handles what I call vendor security, which is nothing but a game of risk analysis up front followed by classic application security activities. We have a gated process at One Medical, like many other security-conscious companies: whether you're an internal business unit or a software engineer, you can't bring new software into our environments or integrate it with us without us putting it through some form of testing, and you need app sec people for that.
This team handles a broad set of activities, the highest priority being partnership with product engineering. But like I said, wherever we develop code, we partner. I've got a finance partner who is awesome; he develops modules that are great, but we need to be able to partner with him as well, not just product engineering. The other big focus in the way we structure security is sec ops. Part of that is incident response: classic IR folks, analysts who know how to do forensics and whatnot, who have been through multiple breaches of varying scale and multiple incidents. They understand threat actors.
The other part of our sec ops team really is software engineers. There are a whole lot of guys and gals on this team who can build at scale, and they build the security engineering backend for us to consume and analyze data from a wide variety of sources and to automate security functions. Here's a good example: I don't want to pay highly talented security professionals to go out and manually quarantine a machine that downloaded commodity malware. That is a complete waste of money.
So, we automate as much as we can. We have a belief in security here at One Medical that,
if it can be automated, it must be automated.
Whether it's inbound e-mail analysis, file analysis, configuration analysis, or detection of events and first-stage triage. All of that we have automated or are aiming to automate. Part of the sec ops team is classic security IR professionals, but part of it is some tried-and-true, very senior DevOps guys and gals who know how to build a cloud, know how to build the apps, and know how to integrate things together.
In my opinion, a big part of security at scale today is data analysis as close to real time as possible, followed by automated actions. It allows you to keep your team smaller: scale the technology, not the team. I don't want to throw bodies at everything, so we don't. In the picture I described there is not, for example, a security team member whose job is to manage anti-virus. That doesn't exist on our team. We automate a lot of those things. Everybody on the team, except for security program managers, codes. It's part of the job.
You must know how to automate the mundane work. That's where we're at right now. Ask me again when we're five times this size, and I probably will need some analysts who don't code, and some DevOps or DevSecOps folks, however we want to refer to that, who focus solely on the security of the infra side of the house, even though it is code. We're the size of a team, and it's not that small, where we're primarily focused on two broader areas, and the teams handle a lot of cross-functional, multidisciplinary work together.
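The "quarantine a machine that downloaded commodity malware" case lends itself to a small illustration. This is a sketch only: the EDR client here is a stand-in stub, and the alert fields and malware-family labels are assumptions, since a real integration would depend on whatever endpoint product is in use.

```python
# Hypothetical first-stage triage: commodity malware is contained
# automatically; anything unusual is escalated to an on-call analyst.
class EdrClient:
    """Stand-in for a real EDR/endpoint API; records actions taken."""
    def __init__(self):
        self.actions = []

    def quarantine(self, host):
        self.actions.append(("quarantine", host))

    def page_analyst(self, host, reason):
        self.actions.append(("page", host, reason))

# Assumed classifier labels for well-understood, mass-market malware.
COMMODITY_FAMILIES = {"adware", "coinminer", "generic_trojan"}

def triage(alert, edr):
    """Auto-contain the boring cases; page a human for everything else."""
    if alert["family"] in COMMODITY_FAMILIES and not alert["lateral_movement"]:
        edr.quarantine(alert["host"])
        return "auto-contained"
    edr.page_analyst(alert["host"], alert["family"])
    return "escalated"

edr = EdrClient()
print(triage({"host": "laptop-42", "family": "coinminer",
              "lateral_movement": False}, edr))  # auto-contained
print(triage({"host": "srv-7", "family": "custom_implant",
              "lateral_movement": True}, edr))   # escalated
```

The design choice is the asymmetry: the automation's false positives are cheap (a quarantined laptop gets released), while the cases it declines to handle go to the expensive resource, the analyst.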
Guy: Hopefully that automation keeps you a little bit further away from having a 500-person security team, because you can scale with automation, as you said, more efficiently. It still doesn't preclude the need for some manual assessment; design reviews, for example, still need that. What are your key distinctions around building in-house versus using external solutions, software for these things? Is there any guideline for it?
Zach: Not a super good guideline. Colleagues of mine in the past have written their own static code analysis tools that did a far better job than some of those on the market, and that's great, but not every company is going to be able to do that. If you find a tool on the market that can do the job, by all means use that tool. Where I see building in-house make sense is usually when you're building something you can't go buy off the shelf.
For example, there is no turnkey security data platform or engineering backend that can handle and analyze large amounts of data at scale. You need to build that yourself, and you need people who are very adept, whether they're working in containerized microservices worlds, classic AWS, or whatnot. You can't just buy that off the shelf. But if you can buy it off the shelf, there are a lot of good security tools out there, a web application firewall being one example. Why build that yourself when there are a couple of really good ones on the market? I would rather use the talent on my team for something that's not easily solvable by someone else.
Guy: The security tooling on the market also needs to adapt, if it hasn't already, to this mindset by being more extensible. Sometimes you come across tools that subscribe to a certain discipline, and that discipline doesn't work for you, but the tool is not flexible enough to be part of your automation flow. It's "my way or the highway," in which case you choose the highway, go build your own car, and that's fine.
Zach: Most security tools out there, and this is where I'm not the optimist, aren't that good.
The way they try to sell them to us is with scare tactics, and that doesn't work. The good security tools I see today are coming, to be honest, out of smaller companies that are just a bunch of software engineers. They understand software development at today's more nimble companies, and they've often had experience on security teams reacting to real-world threats.
Not the marketing threats people talk about. The makers of some of the products we see today are not traditional vendors that have been around for 30+ years. You could call them startups, you could call them smaller companies, but they're people who really understand DevOps. They really understand where the tech stack is moving in those companies: app first, cloud first. They understand the types of languages people use. Those are the products we find a lot of good in, and they're also the type of companies that collaborate with us. They sit down with us and ask, "What do you need?" We'll give them feature requests, and they say, "Give us two months," or, "Give us six weeks," and they come back, and they've done it.
Guy: They actually implement it.
Zach: Exactly. With other security vendors, if I give them a feature request, it's, "Give me two years." They can go pound sand.
Guy: There's a whole bunch of questions I still have, but I'm looking at the clock, and I see we've been at it for a while, so we might save those for a future episode. Before I let you go, I do want to ask you what I ask most guests, or all guests, on the show. If you have one piece of advice, or one pet peeve, around security these days, some words of wisdom for a dev team or a security team looking to level up their security, what would that one bit of advice be?
Zach: The best advice I could give is,
if a security team is not engineering automation today, they will not scale, and they will not be able to play ball with the type of threats we face today. It cannot be done manually.
There are some things, some types of security testing, that still need to be done manually. But so much of security, especially in the world of sec ops, must be automated. Ask yourselves: is your team capable of automation, and is it prioritized? Are you setting time aside for them to engineer automation? If the answer is no, take a step back and think about that, because that is where most security teams are going today, at least at the companies that really understand the threats and are trying to respond to them.
Guy: Got it. Go out and get automating, if you're not doing that already. If somebody wants to ask you some further questions or pester you on the internet, how can they find you? How can they reach you?
Zach: The easiest way to reach out to me nowadays is on LinkedIn. I've slowly peeled myself off of most social networking over the years for good reason, and I get to spend more time with my daughter that way. Reach out to me on LinkedIn, and I'm happy to collaborate and meet up with security leaders around the country. And engineers, I'm happy to grab a cup of coffee.
Guy: Perfect. And if you're the right person, maybe apply for a job at One Medical. I'm sure there are some open roles.
Zach: We are always hiring.
Guy: Zach, this has been a pleasure and fascinating. I'm going to have to get you back on the show to talk about some other aspects in-depth, but thanks a lot for your time today.
Guy: Thanks everybody for tuning in, and join us for the next one.