The Secure Developer
40 MIN

Ep. #11, Keeping PagerDuty Secure

about the episode

In the latest episode of The Secure Developer, Guy is joined by Arup Chakrabarti, Kevin Babcock and Rich Adams from PagerDuty. They discuss how they put into practice their security vision of “making it easy to do the right thing”.

This involves picking the right tooling and designing a security experience that doesn't force people to do things, but rather provides insight into how vulnerabilities can be exposed. Giving people the opportunity to break things also creates a strong desire to protect those things.

Arup Chakrabarti is Director of Engineering at PagerDuty and Program Advisor at Heavybit. Arup is a technology leader with expertise in operational excellence, using data to solve ambiguous problems and increasing efficiency via automation. Prior to PagerDuty, Arup was an Engineering Manager at Netflix and Amazon.

Kevin Babcock is Principal Security Engineer at PagerDuty. Kevin is a business-oriented engineering leader with more than 15 years' experience leading software-as-a-service (SaaS) projects, managing mission-critical systems, and developing and maintaining production infrastructure.

Rich Adams is Senior Engineer of Security & Incident Response at PagerDuty. Prior to PagerDuty, he worked on a wide range of systems, from developing music applications for Gracenote and Sony to working on baggage systems for the airline industry. He insists that if your bags ever get lost, it's not (entirely) his fault.

transcript

Guy Podjarny: Hello everybody, welcome back to The Secure Developer. Today we have three great guests from the awesome PagerDuty company to talk to us about security and how it's handled at PagerDuty. So thanks for coming over, and can I ask you first to introduce yourselves? Maybe Arup, we'll start with you.

Arup Chakrabarti: Sure, so my name is Arup Chakrabarti, I head up our infrastructure engineering teams at PagerDuty which of course includes security. I've been at the company for about four plus years now and so I've been involved in security, whether I liked it or not, in one way or another over the last four plus years at PagerDuty.

Kevin Babcock: My name is Kevin Babcock, I'm Principal Security Engineer at PagerDuty and I like working to secure software-as-a-service systems, I think it's an exciting challenge. Before PagerDuty, I worked at Box and prior to that, I was at Symantec for quite some time building security products.

Rich Adams: Hi, my name's Rich Adams, I'm a senior engineer on the security team. Originally I have sort of an ops and a software developer background, and I got interested in security by playing CTFs, getting into breaking things, and realizing just how easy it was sometimes. That got me excited to work on the other end of it, trying to stop those things from happening.

Guy: Got it, cool. CTFs are always a fun part; we had an episode on CTFs alone, which is worth checking out. And how big a percentage of the PagerDuty security team are the people in the room right now?

Arup: So this is the entire PagerDuty security team. 100% of the PagerDuty security team is present right here in this room.

Guy: Okay, excellent.

Kevin: I do want to say, just because the security team is in the room, it doesn't mean security stops. An important aspect of our philosophy is that everyone ends up being involved in security, and we're going to talk more about that later.

Guy: Yeah, and Kevin, that's actually a great segue. We had a bit of a chat here about how you work, and a lot of the emphasis you pointed out was around collaboration and security. So Kevin, can I ask you, how do you see security, and what's your philosophy around security and how to handle it?

Kevin: I see the security team as the subject matter experts within the organization, that doesn't mean that that team is the only team that will work on it, in fact to be successful in security, you need to work with other people. That's why there's three of us here today. Security can't be solved by yourself, and if you try, you will fail.

And having that collaboration and that ability to work effectively with others outside your team from a security practitioner's perspective or others on a different part of an engineering team from a developer or dev ops practitioner perspective is very important. You really need to be able to approach the threats and risks of your business from a holistic perspective or you won't be able to defend against them effectively.

Guy: So I definitely subscribe to that perspective, but unfortunately oftentimes we hear the whole conversation about builders versus breakers and the different mindsets. When you talk to security people, how do you screen for, if you will, that different approach? How do you break through the concern or the mindset of, well, developers just don't understand security, or, these security guys are just naysayers? How do you connect the two?

Kevin: I feel it's an important part of my role to be a resource and someone who can educate and train other people. I'm here to help them make better decisions and if people don't feel they are able to do that, that means I'm not doing my job.

Rich:

A good security team doesn't just say no to everything. You figure out what it is people are trying to accomplish, work the goals around that, and find ways for them to do their job properly while at the same time keeping data secure.

We have a phrase we like on our security team which is, "we're here to make it easy to do the right thing".

If we build any tooling, the intent is not to hinder developers or hinder anyone from doing their job, it's to make it easy for them to just do the right thing naturally, without even thinking about it. One of the things we've done is our own training internally. At previous companies, I've always been frustrated by security training, because it was the two-hour unskippable video and then obtuse use cases that never really come up.

Some things are common sense and you don't pay attention, you kind of just skip it, you keep it in a background tab, keep it muted, and then answer some questions at the end where you get an unlimited number of chances. You just keep going until you get through, and it's usually to check some compliance checkbox somewhere. One of the things we've done at PagerDuty is our own internal security training, where we made it a bit more engaging, a bit more fun, and tried to teach people about real threats.

One example is passwords. People are generally pretty bad at choosing passwords and it's usually a hard sell to get people to use password managers across a company.

So rather than giving people a list of rules, we framed it differently by showing "here's what attackers do, here's how you break passwords", and demonstrated it with some fancy animations, and people were more engaged that way.

And then you find that they actually come to you after and say, hey, that was really interesting, I've actually started to use a password manager now. The idea is we've made it easy for them to do the right thing, and they've made the choice themselves; it's not something we've forced on them by saying, you must do this.
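The kind of demo Rich describes can be tiny. Here is a minimal, hypothetical sketch of a dictionary attack against unsalted MD5 hashes; the wordlist and "leaked" hashes are invented for illustration, and real attacks use far larger lists and dedicated cracking hardware:

```python
# Hypothetical "here's how you break passwords" demo: a dictionary attack
# against unsalted MD5 hashes. Everything here is invented for illustration.
import hashlib

wordlist = ["123456", "password", "letmein", "correct horse battery staple"]

# Pretend these hashes leaked from a breached database.
leaked = {
    "5f4dcc3b5aa765d61d8327deb882cf99",  # md5("password")
    "0d107d09f5bbe40cade3de5c71e9e9b7",  # md5("letmein")
}

for candidate in wordlist:
    digest = hashlib.md5(candidate.encode()).hexdigest()
    if digest in leaked:
        print(f"cracked: {candidate!r} -> {digest}")
```

The point of the demo is the speed: weak passwords fall to a plain loop over a wordlist, which makes the case for password managers better than any rule list.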

Kevin: My favorite part of this training, which Rich delivered wonderfully, is that afterwards we had someone come back and say, now that I understand how the attackers are working, I just spent three hours over the weekend changing all my passwords. And to me, that is real impact, because you're not only making people better and safer for the company, you're improving the security in their own lives, and really that's why we're here.

Guy: That's excellent. I always find that security, in the boring sense, is all about risk reduction, which is not a very exciting notion. The one advantage it has is that hacking is cool, so you can leverage that to your benefit. I use that a fair bit when I give talks: if you show a live exploit, if you let somebody do it, the educational value is dramatically higher than sitting down and talking about the bits and bytes.

Rich: And that's why CTFs are popular as well, I think that's what originally roped me into security, was seeing it happen.

Guy: Which is a form of training almost by itself.

Rich: Yeah definitely.

Guy: And I guess Arup, because you cover security but you also touch a bunch of different functions in your team, how do you see the division of responsibility for security between the different groups, between ops and security?

Arup: Yeah so I'm responsible for other teams as well at the company and I firmly do believe that security is becoming more of this operational problem as opposed to purely a security problem. And I look at a lot of the trends that we've seen in that kind of ops, dev ops space of the last 10, 15 years around automation, monitoring, metrics, learning, telemetry, all these wonderful things.

And from a security aspect, that's where we as a team keep investing a lot more; we invest a lot more in telemetry. Why? Because we want to be able to react quickly to problems when they come up. We invest a lot in automation and making sure we have the right tooling. It's very easy for us to figure out, "hey, do we have a set of servers that aren't subscribing to a certain rule set? If there are, well okay, run Chef again and it's going to get rid of that anomaly".

And that's really important, and so one thing that's been really interesting is to watch security engineers change their habits over the last couple of years. Just as I believe operations engineers had to change the way they worked, security engineers are now changing the way they have to work too, which is very fun.

Guy: Yeah very much so, to bring in the dev ops revolution or the learnings from this sort of evolution of the ops world.

Arup: Yeah I think it's the learning, I don't view these problems as the same problems, of course, they're very different.

Guy: Agreed.

Arup: Very different ways to approach them and everything but I do see that in the security industry, there's a lot of opportunity to look at what a lot of companies went through in their dev ops transformations and look at, "hey what can we take from that and apply towards security problems as well".

Guy: I entirely agree, and I think the learnings, you need to adapt them, but you also don't want to just stand still. Many security teams today are still very much gate-driven, still about stopping work at a gate, which doesn't work in a dev and ops world that tries not to stop, or tries to stop as little as possible, in how it works.

So fundamentally when you look at the activity of people, do you see engineers having explicit OKRs or goals that are security related or are those still central, how do you manage the ownership, if you will, of tackling a security problem. Who would have that ticket?

Rich: I think it ranges depending on the problem. We have some tickets that would be, let's say, company-wide, things that are far-reaching that would belong to the security team, and we would liaise with other teams and get things into their agile cycle to flesh things out.

Kevin: These tend to be broad-reaching projects that are more strategic where we're building tooling or other infrastructure that will be used by other teams and will be supporting that or providing a service but it's really something that everyone needs to be able to use and that will help us as an organization operate more effectively.

Guy: Which comes back to sort of making security easy, making it easy to do the secure thing, the right thing.

Kevin: Yes, that's right.

Rich:

And then at the other end of the scale, there are more narrow security changes that have a stronger focus, and in those cases, the team that's responsible for that particular area of the system would take ownership of it.

Sometimes depending on the type of change, they would perhaps come to us on the security team and request help, maybe we would embed ourselves with that team for a week or for their next sprint to help them through the door. But they would be ultimately responsible for owning the change so it ranges depending on the scale of the security problem or the change that we want to make.

Kevin: We do this as well for reactive and responsive security. For example, we have some tools that will be scanning for vulnerabilities in open source software and that will trigger a notification in PagerDuty that then will be dispatched to the on-call person for the appropriate team and this is a great way to expand the number of people working on security and caring about security in your organization.

If you're listening to us today, I know that you care about security but there's probably someone sitting next to you who doesn't care yet or doesn't know.

One way that we find you can get the entire team involved is by using this rotation and dispatch where, when a particular problem comes in, whoever's up on call is going to have to understand and take care of the problem.

And living through that experience is a great way to get people to start asking questions and learning more about why is this important? Why do we have to fix this quickly? What happens if I don't do this?
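As a hedged sketch of that dispatch pattern, here is what glue code between a hypothetical dependency scanner and PagerDuty's public Events API v2 could look like; the routing key and finding fields are placeholders, not PagerDuty's internal tooling:

```python
# Hypothetical glue between a vulnerability scanner and PagerDuty, sketching
# the dispatch pattern described above. The routing key and finding fields
# are placeholders; the endpoint is PagerDuty's documented Events API v2.
import requests

EVENTS_API = "https://events.pagerduty.com/v2/enqueue"
ROUTING_KEY = "YOUR_SERVICE_INTEGRATION_KEY"  # placeholder, per owning team

def page_on_call(finding: dict) -> dict:
    """Trigger a PagerDuty incident so the owning team's on-call gets paged."""
    event = {
        "routing_key": ROUTING_KEY,
        "event_action": "trigger",
        "dedup_key": finding["id"],  # repeated findings update one incident
        "payload": {
            "summary": f"Vulnerable open source dependency: {finding['package']}",
            "source": "oss-dependency-scanner",  # hypothetical scanner name
            "severity": finding.get("severity", "warning"),
            "custom_details": finding,
        },
    }
    resp = requests.post(EVENTS_API, json=event, timeout=10)
    resp.raise_for_status()
    return resp.json()

page_on_call({
    "id": "vuln-libexample-1.2.3",
    "package": "libexample 1.2.3",
    "severity": "critical",
})
```

Routing the event to the owning team's service, rather than to a central security queue, is what puts the page in front of that team's on-call engineer.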

Arup: One thing you talked about was, does everyone have OKRs or goals in security. One of the things that's unique about our security team is that we work very closely with our sales team, and we look at what we're doing from a security standpoint to support them, so we have goals that are jointly tied between our sales engineering team and our security team.

And the fun part there is that kind of gives you the sense of, wow, the security really does impact not just the engineering teams, it really does have an impact across the entire company, and I'm always torn on where security should sit, where should the goals lie and all those things, but I always err on the side of when in doubt, add another security goal for your non-security teams. I think that is a good habit to have because I think it encourages the right behaviors across the organization.

Guy: Yeah, excellent. One of the challenges with security is that it's quiet as long as it's working; you only hear about it when it goes bad. And I'm a big fan of trying to find opportunities to surface it when it helps, when it has a positive impact, and I think the not-always-fun security questionnaires are maybe a good example of that, where you can demonstrate how awesome you are.

At Snyk, we do this badge that shows the number of vulnerable dependencies on your repo, and it's been growing, there are hundreds of these on GitHub. I think a lot of the premise is to say, "hey, if you care about this problem and you've bothered checking whether you're using vulnerable dependencies and you bother maintaining that, you're awesome, why don't you show it off?"

You help show the world that you care and that they should care, and it's fine to win a point for that, because you've made the effort and you've moved forward. Unfortunately, there aren't a ton of those positive signals that are so easy to point to. So when an issue actually does come up, or when there's a problem, what's the process there? Who gets involved? You mentioned before a bit of an on-call page, but what happens after?

Rich: Sure, let's take an example of an external report. Some member of the public has emailed security at PagerDuty saying they found a bug in our system. That pages the security on-call engineer, so 24/7 they'll get paged if a vulnerability report comes in. The first thing you'll do is obviously read the report, see what it's about. If it's a known issue, something we've accepted the risk of, or something that is not an issue, we can kill it and move on.

If it looks legitimate, we will try to reproduce it in some test accounts. If we're able to reproduce the vulnerability and confirm it's real, we'll start engaging a response team. We'll pull in the on-calls from whichever teams are affected by this. Again, they'll get paged 24/7. If it's two in the morning, which has happened before, we'll page them: this security report's been raised, we've replicated it, it's valid, we need to fix it ASAP.

They'll work on it and deploy the fix as quickly as we can, and once it's deployed, we'll get back to the person who reported it, say it's fixed, can you confirm from your side as well?

Maybe there was some nuance to the way they'd done it, some edge case we've missed that they didn't let us know about, so we always find it important to ask them, can you confirm as well that it's fixed?

Sometimes they don't get back to us, sometimes they do. And then generally once that's fixed, we'll consider it closed, but we'll also kick off a sort of post-review task to see if there are potentially any other similar vulnerabilities elsewhere in our code base. Let's say it was a cross-site scripting on a particular field that got missed somewhere or wasn't covered by automation; we'd kick off a review process to say, "we need to scan everything and just make sure that this same bug didn't get introduced elsewhere in the system as well". But that's usually done business hours the next day, we wouldn't keep people up.

Guy: Yeah, might not be rushed. Just to confirm, like you mentioned, there were a bunch of we's in there like we do this or we do that, so the vulnerability report still goes to the security team to assess it?

Rich: Yeah.

Guy: Or does that go to the on-call ops person?

Rich: It's the on-call security person so the three of us are on a security on-call rotation.

We essentially triage all of the inbound security reports. If it is something that is operational based, or something where we don't know how to reproduce it ourselves, maybe we don't have the technical expertise, it's something very deep in a particular system, we'll page the on-call responsible for that system.

Kevin: It often ends up being a collaborative effort. Something may come in and I don't understand the other system well enough to know exactly what the impact is, but I've seen this class of vulnerability 10 times before and I know the ways it might manifest and what the actual impact to the organization would be. So I'll bring that knowledge, which is, here's how bad it could be, here's some other ways this might be exploited, and I'll share that with the system owner, who then will tell me here's how our system works, and oftentimes say, "oh, I can do that here, and I can also do it in these three other places, let's make sure they all get fixed."

Arup: And the important thing here is that the security team is not the one responsible for resolving the issue. We're responsible for triaging it and initially assessing it: what do we think, could this get worse, what's the attack, and all that. But then, as Kevin said, it's that collaborative piece that's super important to us, and we've been very fortunate; I can't think of a single instance in the last couple of years where one of our collaborating engineering teams said, no, you deal with it, I cannot.

Rich: I don't think that's ever come up, at least not while I've been here.

Arup: Yeah, I honestly can't remember a single time, and I think it's kind of one of those, maybe it's the shared misery piece, of well, "Rich, you're up at two a.m., fine, I'll be up also". But I do think it creates that shared ownership, which is really hard to do well, and it's something where we're constantly trying to find the right balance. For us, right now, the right balance is the security team triages and assesses the vulnerability, and then immediately starts dispatching and getting additional people involved.

Kevin: I firmly believe that collaboration comes from a conscious effort to be a teammate who can support your colleagues. For example, I have gone and embedded myself with an engineering team and worked with them for a number of sprints to help them ship their projects, because that allows me to have the right context for how that team works and understand the problems they're facing, and now I have knowledge that I can use to design better security tools that fit right into that team's workflows.

Similarly, they get a sense of me and how I'm working. I ask them questions about security and they start having a different perspective on some of the challenges the security team is looking at. Now I have relationships, and people will come to me with questions, and I can use that as a way to identify security problems I might never have known existed.

Rich: Yeah, and there's definitely an approach, or a feeling, that we're all in this together. I never feel bad about paging someone on another team, even at two in the morning, if I'm not sure how their system works and can't accurately determine whether a security threat is valid or not. And again, I have no qualms about paging these two either if I'm not convinced I've replicated something properly. We have a motto that I like, "never hesitate to escalate", so always hit the button if you're unsure, and I've never had anyone on any team complain about that.

Kevin: This goes both ways, I recall a time when an engineer started our security incident response process just because he found something suspicious. He wasn't sure how bad it was but he knew it looked suspicious and he wanted to make sure that it was covered and I was very happy that he made that decision and that we were paged and brought in to respond quickly so we could look at the issue and determine what we should do.

Guy: I love that approach. First of all, it's very much the, if you see something, say something, and it's almost better than being willing to be woken up in the middle of the night, because it means that, unsolicited, they've considered security, which I think is maybe an even bigger achievement. And I guess the way I would echo it back is, it's less about educating developers about security, it's about collaborating with development for security.

And that does imply learning on both sides. It's not something that comes down from security into dev, you have to absorb knowledge in the other side and sort of adapt your own knowledge into the context that they would include it. Let's sort of shift down maybe in the stack, because we've talked a lot about first the philosophy and then practices you do in the team, which seems super super useful. Let's talk tools, practically speaking, you run this, what are some notable tools you have in your security stack that you use?

Kevin: Rich, I'll hand this one off to you. You invented most of them.

Rich: Let's talk about two-factor authentication. It's a long-running project we've had going. The specific tool we use for our two-factor on SSH is Duo, Duo Security, using the pam_duo module, and that is specifically tied to YubiKeys, which are the nice little USB hardware tokens.

We went through a few different options on methods of two-factor, starting with the basic TOTP, the six-digit Google Authenticator style codes, and there was a lot of friction with that. If an engineer wants to log into a server to debug an issue, they've gotta pull out their phone, they've gotta type in the six-digit number, and it was quite a painful process.

Guy: These are, just to clarify, these are for two-factor authentications for internal systems?

Rich: Yeah, to access our own systems internally. We went to Duo Push, which is where they send a push notification to your phone and you have to approve it. Better, but not great. We worked with a few beta testers in our engineering teams, people who SSH a lot, to try to find the pain points and how they use it. There was a lot of negative feedback on using push or TOTP and things like that. We tried YubiKeys and that was a much smoother approach; everyone really liked that it's just a simple tap of a button.

Kevin: So what's a YubiKey, Rich?

Rich: I explained that. It's a USB hardware token that you stick in and press a button on, and it does stuff, it does magic that just works. Well, we had a lot more positive feedback once we started to roll out YubiKeys instead, so that's when we decided to just get YubiKeys for everyone and pre-enroll them. And we've had a lot of success with that now. All of our engineering organization is using this method: support engineers, sales folks, anyone that could possibly access our infrastructure in any way, whether they're jumping through a gateway host or anything, uses YubiKeys and two-factor authentication with Duo.

So that's been really good for us to strengthen access to our infrastructure in a way that doesn't have too negative an impact. Obviously, you've still gotta put in the YubiKey, which is an extra step you didn't have before, but I think everyone recognizes that we're getting a huge security benefit for not too much extra hassle.
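The production mechanism described here is pam_duo at the SSH layer, which is configuration rather than application code. Purely as an illustration of the same second-factor check, here is a hedged sketch using Duo's Auth API through their duo_client Python SDK; the keys, hostname, and username are all placeholders:

```python
# Illustrative only: the setup described in the episode uses pam_duo at the
# SSH layer, not application code. This sketches the same second-factor
# check via Duo's Auth API and their duo_client SDK; values are placeholders.
import duo_client

auth_api = duo_client.Auth(
    ikey="DIXXXXXXXXXXXXXXXXXX",          # placeholder integration key
    skey="YOUR_SECRET_KEY",               # placeholder secret key
    host="api-xxxxxxxx.duosecurity.com",  # placeholder API hostname
)

username = "some.engineer"  # placeholder

# Ask Duo whether this user needs to complete two-factor (e.g., is enrolled).
pre = auth_api.preauth(username=username)
if pre["result"] == "auth":
    # "auto" lets Duo pick the factor, such as a YubiKey passcode or a push.
    result = auth_api.auth("auto", username=username, device="auto")
    print("allowed" if result["result"] == "allow" else "denied")
```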

Guy: And fundamentally, security does imply introducing some extra work, but putting in the effort to make it as usable as it can be, to make it simple, as you pointed out earlier on, to make it easy to do the right thing, is a big deal. So, again, to help everybody understand and maybe mimic this in their org: this sounds like an initiative where all the exploration was done by the security team, driven to be enterprise-wide, but the application of the security control, if you will, which is the use of YubiKeys, is now company-wide, outside tech, including, as you pointed out, sales, support and the like.

Rich: The way we rolled it out, I think, was important as well, it wasn't everyone gets a UB key today and go through it. We trialed it with a few power users first and obviously we didn't go to them and say, you will use this from now on. We solicited volunteers who were excited about trying it out and they tried the painful methods first as well and that's how we got the feedback.

And it hasn't been an entirely painless process; there are some issues where certain tools don't work well with it, and we're having to find work-arounds for those. It's all been a learning process.

Rolling it out in stages with some key users first helped us in ironing out the kinks before getting to the non-engineering teams and people who perhaps don't know how to use an SSH tunnel work-around.

Guy: Got it, cool, and so this is great for two-factor auth and all that. Maybe some other tools that are used that people might care to consider themselves?

Arup: Going back to my point earlier about treating security problems as operational problems, we have that full suite as well that helps us there, things like Chef, Splunk, AWS tooling. Those kinds of tools you'd use for operational problems, we use for security challenges as well. So we have monitors in Splunk constantly running, looking for malicious behavior in our audit logs and in our access logs as well.
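As a hypothetical example of such a monitor, here is a sketch using the splunk-sdk (splunklib) package; the hostname, credentials, index name, and detection rule are invented, not PagerDuty's actual logic:

```python
# Hypothetical Splunk monitor in the spirit of what Arup describes.
# Assumes the splunk-sdk package (splunklib); all values are placeholders.
import splunklib.client as client
import splunklib.results as results

service = client.connect(
    host="splunk.internal.example.com",  # placeholder
    port=8089,
    username="security-monitor",
    password="REDACTED",
)

# Example rule only: flag hosts with repeated sudo failures in the last
# 15 minutes.
query = (
    "search index=audit action=sudo result=failure earliest=-15m "
    "| stats count by host, user | where count > 5"
)

stream = service.jobs.oneshot(query, output_mode="json")
for event in results.JSONResultsReader(stream):
    if isinstance(event, dict):
        print("suspicious:", event)  # in practice: trigger a PagerDuty event
```

In practice a search like this would run on a schedule, with hits routed into PagerDuty the same way as the scanner findings discussed earlier.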

Guy: So Chef is an interesting one, it's very much sort of an ops tool, how do you use Chef for a security purpose?

Kevin:

It's important for a security team to be able to react quickly and move quickly and automation like Chef or Puppet gives you that benefit. You already have it in place for your infrastructure to improve operations, why not take advantage of it to allow security to work faster and more effectively as well?

For example, if you want to roll out a patch across the entire infrastructure, you can configure Chef and push out that change and be confident that it gets everywhere and that it's been applied universally and it's not something you have to worry about anymore.
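As a sketch of what that push can look like with standard Chef tooling, assuming a configured knife workstation and SSH access to the nodes (the cookbook name is a placeholder):

```python
# Hypothetical helper: after patching a cookbook, upload it and force an
# immediate chef-client converge on every registered node via knife's SSH
# fan-out, instead of waiting for the regular converge interval.
import subprocess

def upload_and_converge(cookbook: str) -> None:
    # Upload the patched cookbook to the Chef server...
    subprocess.run(["knife", "cookbook", "upload", cookbook], check=True)
    # ...then trigger a run on all nodes matching the search query.
    subprocess.run(["knife", "ssh", "name:*", "sudo chef-client"], check=True)

upload_and_converge("openssl-patch")  # placeholder cookbook name
```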

Guy: Yeah, I think in general, in continuous deployment or fast-moving environments, a lot of the pushback from those claiming dev ops hurts security is that there's a lot of change, and that change introduces risk. But I think one of the best claims on the other side is that alongside faster change comes faster response, the ability to respond to issues quickly and across the entire system. So I like Chef.

Adam Jacob was on the show and we talked about InSpec and how there are some security capabilities built right into those tools. It's really good to see more security features coming into those tools as checkboxes and easy-to-use capabilities.

Arup: So one thing you just said around increase of the rate of change introducing more risk, I do agree with that but one thing that a lot of these tools do support is auditability, and so it's that ability to go back through and figure out, hey, on this day at this time, what changes were being made? And so while yes that risk is increasing over time or it's very hard to keep up sometimes, when you do have to respond quickly, when you do have to react, it's actually much easier if you have the automation in place that allowed you to move faster in the first place.

And I think a lot of security teams make the mistake where they insert friction and they'll reduce the amount of automation sometimes, again, with the wonderful intent of reducing risk but a lot of times they actually end up creating more risk in the long run because they've lost that auditability because they don't have that automation in place.

Guy: That's a really good point, and how do you in general see the delta or what's your view on prevention versus response? On putting something as a blocker as opposed to responding quickly to issues?

Arup: Let me just ask Kevin, what should I do here?

Kevin: Security fundamentally comes down to risk assessment, and in a corporation or an enterprise, you need to enable the business to make the right decisions for security. Say you're shutting down operations and you have no ability to change, because everything's locked down to the point where you're very confident you know the state of everything and that it's running correctly.

But you haven't shipped any new products, you haven't updated your product, your customers are complaining; the business is not going to be successful. Security has to be about understanding the context of the business and the risk it's willing to take on, and making the right decisions for where you put in place controls and protection to reduce your risk, making sure that you're always operating right at that brink of what you're willing to accept, but no higher.

Guy: That's an excellent statement,

I like to use the phrase that you can be secure all the way to bankruptcy.

Which is not very helpful as a business methodology, even though you might pass all the audits that come by. Cool, so we talked about a bunch of tools that you use. Maybe before we close that section off, talk a little bit about what would disqualify a tool for you. You talked about some of the good things; what type of properties have you seen in security tools where you said, yeah, if this tool behaves this way, or if I'm seeing this property, I'm not going to use it?

Rich: So we've had tools where it's been very difficult to integrate them because they might not play nice with other tools that we've already integrated. It might be bad luck on the part of that vendor, that we implemented the other tool first and then they both don't play nice with one another but

Generally if we can't figure out a way to get it integrated in our systems within a week, we pretty much just cut our losses and move on because it's not worth investing additional time there.

The other one, especially with security tools, is the false positive rate. If things are paging us saying you have a critical issue, and we find out we don't, a lot, that introduces a lot of on-call burnout, and it's something we try to avoid as much as possible. Any tool that is needlessly noisy, where maybe 90% or above is noise, is just not useful to us, because we can't filter out the noise in an easy way. Again, we've had tools in the past that were great, but there was too much noise and we couldn't find a way to filter it out properly. It reduces usefulness, and it goes from, when you see an alert from that tool, you think, oh great, I must get on this immediately, to, oh, it'll be that thing again, and you ignore it. At that point, especially for a security tool, it's lost all use.

Guy: Just the boy who cried wolf.

Rich: Yeah, once you lose trust in the tool then you have to move on.
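That rough 90% noise bar can be made concrete with a back-of-the-envelope check; the tools and counts below are invented for illustration:

```python
# Hypothetical noise-rate check for the bar Rich mentions: if ~90% or more
# of a tool's pages turn out to be false positives, it has lost the team's
# trust. Tool names and counts are invented.
from dataclasses import dataclass

@dataclass
class ToolStats:
    name: str
    true_positives: int
    false_positives: int

    @property
    def noise_rate(self) -> float:
        total = self.true_positives + self.false_positives
        return self.false_positives / total if total else 0.0

NOISE_CUTOFF = 0.90  # the rough bar discussed in the episode

tools = [
    ToolStats("dep-scanner", true_positives=12, false_positives=30),
    ToolStats("waf-alerts", true_positives=2, false_positives=95),
]

for tool in tools:
    verdict = "cut losses" if tool.noise_rate >= NOISE_CUTOFF else "keep"
    print(f"{tool.name}: {tool.noise_rate:.0%} noise -> {verdict}")
```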

Kevin: We've also encountered some challenges as an early adopter. There are some very good tools out there for dev ops type organizations, like you mentioned Twistlock earlier, Signal Sciences. We've also evaluated some other tools that were very early in the product cycle, and there's an advantage in looking at those, because you may get a new kind of protection that's not broadly available.

You're also taking on some risk because that company is still new, it's still establishing the product and in some cases, we definitely saw the potential and we wanted the functionality but as Rich was saying, the time to integrate was too high and we ended up pushing off and saying, well we're going to keep an eye on this technology and reevaluate six months down the road, a year down the road, but it's not something we can do today.

Guy: And it's just an ROI type of calculation; you anticipate that more investment will be necessary in it.

Rich: Yeah, and I think another important thing is the responsiveness of support as well. We've certainly had tools where we've hit a roadblock, the documentation isn't telling us what we need to do, it's not obvious how to continue on something, and we'll reach out to a support team and won't get a response for a week, and at that point, we've moved on.

And it might turn out the response is, oh, just flip this configuration setting, like, oh, that would have been easy, but once the week's gone, it's kind of like, well, we've moved on now to other things. There's definitely sometimes a missed opportunity there if the support isn't very responsive. That can affect whether we end up using the tool or not.

Guy: I guess I love that all of these definitions are bread and butter for any dev tooling or ops tools out there and unfortunately, not at all the default or the given for security tools that are out there so that's going to be another sort of evolution the ecosystem needs to go through.

Arup: Yeah, I think it's interesting, because we were talking earlier about accountability in our environment and how we have individual teams that are accountable for the code they ship,

A lot of security tools make the implicit assumption that you have an army of security analysts available.

They make that assumption, and I don't know, you can look at this room, there's not an army of us, unfortunately, and so it's always interesting where I'll see a tool out there and they'll make some bold claim and I look at it and I'm like, "oh this is fantastic, wait a second, you're expecting me to have an army of like 20 people watching these screens constantly, that doesn't work for our organization".

And one of the lessons I've learned the hard way, unfortunately multiple times, is when you don't look at which audience the tool is built for when you go to buy it. You end up buying it and then realize after the fact that you were not the audience the tool was built for, and so you end up, again, with integration challenges, whatever it is. That's something that, for me at least, I've tried to be more mindful of going forward: this tool that I'm buying, was it built for our audience? And that audience is different for each company.

Guy: Yeah.

Kevin:

There's a promising new set of tools out there that I think are very interesting that enable people who may not be full security specialists to do security work and these are the security orchestration products like Phantom or Xbeam.

I think there's a lot of promise for being able to implement these to get higher leverage out of your security organization by enabling people without a security background to effectively do security tasks.

Guy: Yeah, no I love hearing this, when I founded Snyk, the whole definition was to say Snyk is a dev tooling company that does security. No matter what it does, it needs to be operated on a daily basis by developers, by dev ops teams, and if it's being used by security, we've lost.

It needs the guidance, it needs the expertise, because you want developers to engage with security, but you can't expect developers to be the experts in security every time. And when we don't have expertise, we revert to tools; the tools should bundle in some of that expertise for us and make it accessible. So I love to hear that philosophy, and to hear it working in action. I think this was super useful. Before I let you go and continue securing PagerDuty... well, actually you don't need to, because the rest of the team is doing that already.

Rich: It's fully secure, there is nothing left to do.

Guy: So I'd like to ask my guests, before I let them go, for one tip. If you're talking to a dev team, or an ops team, that's trying to up-level their security caliber, their security fu, what's the one tip, the one pet peeve, that you would highlight right now? Maybe Rich, I'll start with you.

Rich: Sure, for a development team, I think it's key to get the team excited about security. If a team just sees it as a hindrance, something like "ugh, we have to do this security thing", it's never going to take off. I think these things work best when people take them on under their own initiative, and then they pitch the idea to other people who take it on, and it kind of grows that way.

So one of the things I always like pitching to teams is to work from the side of an attacker. I mentioned CTFs at the beginning: play a CTF, try to execute a buffer overflow vulnerability, see just how easy it is to do these things. Try some cross-site scripting if it's a web application, some cross-site request forgery, just to see how simple these things are to break.

And at least with engineers and development teams, I always think it's very exciting when you break that first thing. You're like, "oh wow, it was that easy, I just did this one little SQL injection, now I've got all that data. Maybe I should fix that". That journey gets people excited. I think, especially from movies and TV, there's this hacker mentality, and people want to do the cool thing, and

I think breaking things and seeing things being exploited always gets people excited and wanting to protect those things.
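That first "wow, it was that easy" moment fits in a dozen lines. Here is a self-contained toy demo of the SQL injection Rich mentions, against an invented in-memory sqlite3 database, alongside the easy right thing:

```python
# Toy demo of "I just did this one little SQL injection, now I've got all
# that data". The database and rows are invented for illustration.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 'alice-secret'), ('bob', 'bob-secret')")

def lookup_vulnerable(name: str):
    # String interpolation into SQL: the bug an attacker exploits.
    return db.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def lookup_safe(name: str):
    # Parameterized query: the easy right thing.
    return db.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(lookup_vulnerable("alice"))        # one row, as intended
print(lookup_vulnerable("' OR '1'='1"))  # the whole table leaks
print(lookup_safe("' OR '1'='1"))        # injection attempt returns nothing
```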

Guy: Excellent, how about yourself, Kevin?

Kevin: To do security well, you need to take it in context, you need to know what your valuable assets are and what's at risk. It's not enough to say we need to have strong passwords, we need to use encryption, we need two-factor authentication. Unless you understand why you are implementing those controls, you're missing the point. The reason we went and implemented two-factor authentication for SSH is because we're concerned about this very common attack vector where a phishing email comes in, someone deploys malware on a machine, and then there's lateral movement into the production network.

We know that all of our most sensitive data is inside that production network and so we're interested in putting additional controls in place so that if and when there's malware operating inside the corporate network, it's very very difficult to move laterally and get at the most valuable assets.

Arup:

There's an entire class of security problems that only get harder as your company gets bigger.

Your teams get bigger, and having seen multiple companies now go through these crazy growth stages and then bolt security on as an afterthought, you're signing up for an uphill battle there. Starting early doesn't mean dedicating 50% of your workforce, no. What that might look like early on is you have a single engineer who cares about this early in the company's history; let them spend part of their time on it, enable them, let them be successful, and it just pays dividends down the line.

If you really try to think of security like, oh, we're going to go out and buy a security product, we're going to go buy a security team, we're going to bolt this on after, it rarely works. So if you're starting to think about it, chances are you should have been doing it yesterday, so just do it today and keep investing in this stuff as best as you can.

Guy: So Arup, Kevin, Rich, thanks for joining us today, this has been super insightful. Before you disappear here, if somebody, one of our listeners has questions for you, wants to follow up, get some of your further advice out of band, how can they reach you?

Rich: So I am @r_adams on Twitter.

Arup: And I am @arupchak on Twitter.

Kevin: I'm not on Twitter but I would be happy to entertain conversations if you reach out to me on LinkedIn, you can find me under my name, Kevin Babcock, and just make a connection.

Guy: Perfect. Okay, well thanks a lot and for all those joining us online, I hope you enjoyed the episode. Thanks. That's all we have time for today.