JAN 28, 2020 - 25 MIN

Democratizing Security From The Top Down

  • Product
  • Security

As security trends away from a specialized art to a culture of ‘everyone does security’, identifying the threats that your company faces and communicating the importance of building securely to your team is critical to becoming a successful enterprise. Security leaders from Domo, Medallia and strongDM share how teams can approach democratizing security and what they should consider as they grow their security teams.


  • Moderator: Justin McCarthy, co-founder & CTO, strongDM
  • Harshil Parikh, Head of Security, Medallia
  • Niall Browne, CSO & SVP of Security & Trust, Domo

Justin McCarthy: We’re switching things up a little bit. The way we were going to talk was earlier stage, mid-stage and then the most developed maturity. We are going to preserve that rubric, but I’m going to fill in for the early stage. I’m going to be the moderator, but I’m also going to represent smaller team size, and we’re at about 20 right now. Next to me here we have Harshil, head of security at Medallia.

Actually, your story is interesting because the company’s been around for a long time but actually a lot of why you’re here today is the last few years. You would represent a middle stage, but then hyper growth through IPO.

Harshil Parikh: Right, yeah.

Justin: OK. Then down the end here we have Niall, and Niall is currently the chief security officer at Domo but also served the same role prior at Workday. One thing I like about your background is
essentially there’s no case where your customer isn’t the biggest possible enterprise, so everything about establishing trust at the maximum level is something that you’re fluent in and do every day. Is that about right?

Niall Browne: Unfortunately, yes.

Justin: Cool. All right, so now that we’ve got the intros out of the way I’d love to actually ask about one anecdote that just in your personal life, you feel like maybe if you hadn’t had your security exposure you wouldn’t find yourself doing it everyday. Something that you do every day that you could attribute to being steeped in this topic.

Harshil: You want me to go first?

Justin: Yeah, go first please.

Harshil: What I would say is every once in a while, when I get a call from someone trying to tell me that I owe taxes and the police are on the way and they need some information, I actually respond to those calls. I don’t just hang up. I respond, because I want to understand how they operate and what they’re after. If I weren’t in security, if I weren’t curious about these things, I don’t think I would respond. But every once in a while, I do.

Justin: Niall, can you think of one?

Niall: I think for the most part, basically it’s almost the sense of paranoia. The more you get involved in security on a day-to-day basis, the more you realize how many people — How easy it is in some way to compromise people, and not because people are stupid, because they’re not. It’s because everyone’s working on so many things every day.

You get an email, you get a call, you get a ticket, and you don’t have an hour to think about it. You’ve got milliseconds to think about it. So for me it’s mostly the model whereby, as you go through security, you get more paranoid in your business and then in your personal life as well.

Justin: I had this question because it actually came up for me earlier today. I was interviewing a candidate, and as part of the interview exercise we needed to grant this person access to a server. We e-mailed them a private SSH key, and that’s such a taboo, and the taboo is so deeply in my body, that as soon as I got on the call with the person, I said, “Obviously this is a throwaway server. Obviously this is a throwaway key.” I had to voice it, because every time I encounter those taboos, I have to react.

Security Team Sizes

Justin: So let’s see. Would you mind sharing a bit about team size again? Just so we can calibrate on scale. We’re about 20, what’s your team at in terms of total size as well as the technology team and then maybe the security team?

Harshil: My company is about, I want to say, 1,800 employees across the globe. A lot of that growth has happened recently in the past 4 years or so, 4-5 years. In engineering, we have about 500 engineers and my security team is about 14 people.

Niall: We’re at about a thousand people; engineering and platform, I’d say maybe 450-500. Then security, compliance and risk is about 25 people.

Justin: A lot of those sizes sound like perhaps a little further along than some members of the audience, so for a moment I’d like to actually go back to the beginning and talk about that first wave of “Don’t leave home without it” practices. I think the title of this talk is Democratizing Security, and part of that means finding a way for everyone to do security. If you think back to a team of 50 or a team of 20, what would absolutely be in place for you in 2019, 2020 and beyond for those earliest moments?

Niall: I think for the most part, when you look at security, as Oren pointed out earlier on, you have to do security because it’s the right thing. But you’ve also got to think about security as a way of making you money. In other words, you’re building a product and you’ve raised seed funding, and now you’re like, “Before we go for the next round, I need to have customers.

And if I don’t have customers, at least I have to have POCs.” So you want to do a POC, you go to 30 enterprise customers, and you get 5 POCs. You’re delighted with yourselves, and then suddenly the three-legged stool steps in: privacy, legal and compliance. They say, “By the way, who is this company, Acme? They’re doing a POC, they want access to our environment, they want access to our systems, they want access to our code, to our customer data? No way. We’re not doing it.” And the POC dies a death. I’ve seen that happen so many times.

Oftentimes, you’ve got four people in a room with a dog. They want to do a POC, they don’t have the security expertise, and they obviously don’t want to hire a CSO or anybody in security at that point. So how do they go about doing that? I think the number one thing to do is really building up the culture. If I define “culture,” it’s really “What would that person do if they were left to their own devices?” Then if you insert security: “What would that person do if they were left to their own devices, from a security perspective?” Building security doesn’t need to be expensive.

In most organizations it’s a case of: you get those 4-5 people, they set up a group, and they say, “Let’s call it the Security Council.” They build it as part of their core functionality. In other words, “We want to be a secure environment,” or “We want to have a secure system across the board.” Then as they go through their daily operations, they can tag security into that. One example: every company releases code, and you don’t want to just release code higgledy-piggledy; you want to have some sort of process for releasing code. Same thing when you want to shut down a server: you want to do change management. If you hire somebody, there’s generally some background check process. If you build a culture, you just get the mindset right: “Let’s do security. It’s everybody’s job.”

Then you look at, “What are we currently doing that aligns with security?” It’s actually very easy then to go back to that customer, or that POC, and say, “By the way, here’s what we’re doing in security. We’re an early-stage company, and here are the controls we’ve put in place.” Nine times out of ten they will look at you and say, “They don’t have all the controls, but they’re generally doing the right things. They believe in security; there’s a culture of security there.” So I would shift it away from a technical-controls perspective, because that’s a losing battle: “Do you have IPS? A firewall? Do you have this?” You’ll never have everything at that stage. If you simply go into the meeting with that culture of security, that’s number one.

Number two: you sit down in that POC and, instead of just talking about your product, you say, “This is our product. I know we have access to your sensitive data, and here’s how we’re going to handle it from a security perspective.” It means an awful lot. Certainly the most successful companies I’ve seen grow are the ones whereby, at the very early stage, they’ve come to democratize security. And the most important thing after you do it is being able to articulate it externally; otherwise you’re not going to get any credit for it, and you’re not making any money out of security.

Early Stage Security Gaps

Justin: So Harshil, let’s say you were part of a team and you’re at 30-40-50 people. You looked around and there’s something missing that must be present in that first wave. What are you most shocked at, and what gap are you going to close immediately?

Harshil: A lot of times what ends up happening is that people don’t take a proactive approach toward security, because a lot of times they don’t understand what it is. In my opinion, security should be treated at the same level as a scale issue or a performance issue or a quality issue. Engineers are very familiar with all of those other things, so it’s just natural behavior for them to take ownership of them. That’s not always the case with security. So at that level, when you are a smaller company, you just have to fill those gaps, whether it’s a knowledge gap or a comfort gap or whatever it is, and treat security at the same level as any other criterion for delivering a high-quality product.

Justin: All right. I’m going to speak on behalf of a currently very small company. Because we ourselves are a security product, one thing that I’ve enjoyed participating in is actually taking everyone in the company through basic hygiene. That means full disk encryption, password managers, the whole deal. Just getting everyone there, even if they haven’t been part of a security culture before, regardless of what role in the company they’re in. They understand that if their laptop gets lost at the coffee shop, it’s actually probably fine, because we took those steps in advance. Just exposing everyone to a little bit of that hygiene culture early hopefully leads into the training and awareness programs later. All right, so we did talk a little bit about this in one of your earlier responses, but I do want to just go back to the perception of security as a blocker.

We received a question on this, and it’s just the other side of that coin: security as part of how you create trust in the sales process, and how you move your business forward. I’d love to hear just one practice where that has been made manifest in a document or something concrete; it could be an attestation. What’s one version of being able to accelerate sales with an investment in trust?

Democratizing Security Comms: The Trust White Paper

Niall: I think for the most part, if you’ve got the culture, you’re generally doing the right thing. You’ve already got a hygiene process in relation to code, and for the most part you’re doing 70-80% of what you need to do. Done. Now the second thing is you need to get credit for it. The way I’ve always done that: security is what you do, and trust is how you articulate that externally. One example is when I started at Domo, and let’s even take Workday as an example. Day one was around the security team, and day two [inaudible] the trust program for Workday. How could we sell to the Morgan Stanleys, the Credit Suisses, the Bank of Americas? Because we had a lot of really good controls, but every time we came in front of the customer we fumbled the ball. We put in architects when we should have put in business leaders, and when we should have put in business leaders we put in architects, and everybody had the wrong answers.

The way I would always do it: you’ve got 4-5 people in a room, and you say, “Guys, let’s spend two hours on this and create a trust white paper.” Simple thing, that’s all you have to do. A trust white paper. How long is it going to be? You don’t want it to be a hundred pages; no one’s going to read a hundred pages. Maybe two pages, max.

If you’re going to do anything coming out of this room, what I would say is: “Create a trust white paper.”

It could be the Acme Trust White Paper, and the first paragraph writes itself: “At Acme, we take trust and security very seriously. We have a culture of security. We have a Security Council made up of our CEO, our CTO, our CFO,” or whoever you’ve got; in other words, there are only 3-5 people in the company. “This is how we manage our security environment, using industry best practices.” We already do change control, so let’s get credit for it.

We do SDLC, so let’s get credit for it. We already do background checks; that’s what we do. Then there’s Justin’s point, whereby in his case it’s product security functionality. You’re selling your product to the customer, and the customer wants a self-service model. They don’t want to be picking up the phone and saying, “By the way, we want you to change six different things along the way.” So you put in a section that says, “By using our product, here’s how you can manage your authentication, login and authorization.”

What can they do when they’re in there? Accounting? How do you tie into that? Even at the very start, you can say, “Data privacy is a huge component. We are building a platform to be compliant with GDPR requirements.” 4-5 people in a room, two hours; framework it and whiteboard it, put it up there, and then you go through a second draft and a third draft and a fourth draft. Four days later, you’ve got a trust white paper. You then take that into every single conversation: “Here’s our company and here’s what we do. Here’s our trust program, here’s our security model.” Done. Then as you grow, you start off with the CTO or CEO presenting it. Then you hire your first salesperson from that end.

Then you say, “Salesperson, here you are. Get comfortable; this is the trust white paper. You’ve got to articulate it.” Then suddenly you go from one salesperson to 10 salespeople to 100 salespeople, and in Workday’s case, thousands of salespeople. It’s the exact same model: the same trust white paper I created at Workday in the first week was the one we used when we went to thousands of salespeople. We simply articulated the vision and then trained people across the board. So I would say, if you’re going to do anything coming out of this, you want to close the POCs, and you’re doing the work already.

Put it in a trust white paper, two pages maximum, put in a couple of images, and train the team to talk to it and articulate it. You’ll go from zero, or maybe you’re going 40 miles an hour already, up to 70 miles an hour, closing those POCs and closing the customers. And even once you’ve closed a customer, upselling: you don’t want to be dealing with innocuous public data, you want the really sensitive PII data. So if your security model can handle PII and restricted data, then you can upsell vertically from there.

Justin: Harshil, any thoughts on what you might want? One manifestation of using the investments you’re already making to get credit for it and benefit from it in the sales process?

Harshil: I would totally agree with what Niall just mentioned. We actually did the same thing at Medallia as well. Even before I joined, there were a lot of security practices already in place, but the way we communicated them externally to our customers was not as polished.

Having this package together, whether it’s a trust white paper or compliance certificates or whatever it is, but having a single way of communicating externally, that was hugely helpful.

Even going beyond that: we were selling to the largest Fortune 500 companies, and as you know, they would send a ton of questionnaires and things like that. I think we have a talk about that topic later in the day today, but having a prepared package of pre-filled industry-standard questionnaires and handing it over to the salespeople is hugely helpful. It removes friction from a lot of sales processes and expansions. Even having the salespeople or sales engineers know how to respond to those questionnaires is very helpful. It actually enables the salespeople.

Engaging Engineers with Security Initiatives

Justin: All right. Back to the democratizing security theme, but now zooming in specifically on the product and engineering organization. An observation I’ve made with teams I’ve worked with is that even though security topics may not be top of mind in every conversation, there are ways to make them fun and interesting. Is there one practice that has stood out or been successful in your present or a previous environment for getting the product and engineering teams engaged in thinking through security topics? One trick, basically, to make it interesting.

Harshil: A lot of times we realized that most of the conversations we had, meaning the security team had with our engineers or other product engineering teams, were around security issues, secure coding practices, or stuff like that. It’s very defensive in nature, in terms of defensive security.

What we did over the past couple of years is run capture-the-flag (CTF) events within the company, open to all engineers. Basically, we invited the engineers to become hackers for the day: we provided them basic training on how to do certain things and sent links to videos and tutorials for that day. The engineers were super excited to wear the offensive security hat and try to get into the application we had built for that day’s CTF. About 40% of our engineers actually participated, and we had 400 engineers at that time. So that’s a lot of manpower that went behind this, or person power, and it was phenomenal, because since then we’ve seen a significant increase in the engagement we get from the engineers. Engineers were more understanding of why we were asking them to do certain things; there was this empathy that we were building with them. Just giving them visibility into a different aspect of security was hugely helpful.

Niall: I would look at it this way: security and engineering seem to have a somewhat contentious relationship. In other words, engineering always thinks security is slowing them down, and security from their side has the model of “Why should I do that?” “You should do it just because I said so.” That’s been the classical security model: “Do it because I said so. [inaudible] says you should do it, so you should do it. Security says you should do it, so you should do it.” It’s a hard model across the board. You’re trying to pull people with you rather than them coming with you. So if I think about it from an engineering-security perspective, there are two things you really want to think about. One is the “how.” Sorry, actually I’ll reverse it: the first one would be “why,” and the second one would be “how.” The first one is “Why should they care?”

The way I think about why they should care is very simple: classic examples of “If you do X, this is what happens in the wild.” One example, like Oren shared earlier on, is never putting keys in source code. Simple example: you sit down with engineering. I sent out a security newsletter to our company just last week. I was like, “If we ever have keys in code, here’s what’s going to happen. Step one, you put the keys in the code.

Step two, you upload it to GitHub accidentally, and step three, somebody from Russia or from China or Dublin logs in. Generally they scan GitHub, and within four minutes they’ve got the key, they’ve got the code, they’ve logged in and they’re running hundreds of scripts.” You send that out to all of engineering, and they’re like, “We do something as simple as put a key in code and accidentally put it up in GitHub, and four minutes later somebody from Dublin in Ireland has stolen all our data. Has that happened before?” Yes. Second part of the newsletter: AWS’ security incident team have said to me that basically 85% of the incidents they deal with come down specifically to people putting keys in code and accidentally putting those in GitHub. Now to engineering, that’s not Niall talking crap about keys in code with FUD stories; here’s a practical example and here’s a timeline, and then engineering gets it. That’s just one simple example.
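The leak-to-compromise timeline described above is why many teams back up awareness with an automated secret scan in their commit hooks. A minimal sketch of the idea in Python follows; the regex patterns here are illustrative assumptions only, not a complete rule set (production scanners such as gitleaks or truffleHog ship far larger ones):

```python
import re

# Illustrative patterns only; real scanners ship far more complete rule sets.
SECRET_PATTERNS = {
    # AWS access key IDs start with "AKIA" followed by 16 uppercase/digit chars.
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    # PEM-encoded private key headers.
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    # Generic "api_key = '...'" style assignments with a long literal value.
    "generic_api_key": re.compile(
        r"(?i)\b(?:api|secret)[_-]?key\s*[:=]\s*['\"][A-Za-z0-9/+=]{16,}['\"]"
    ),
}

def find_secrets(text: str) -> list[tuple[int, str]]:
    """Return (line_number, pattern_name) for every suspicious line."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, name))
    return hits
```

Run from a pre-commit hook or CI job, a non-empty result can fail the commit before a key ever reaches GitHub.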

Then the second thing is, “What can they do?” The second biggest issue with security is “You guys should have known that. I thought you knew that already,” or “I thought you knew not to put the keys in code,” or “I told you not to put the keys in code,” but there’s no tool to support that. So once engineering is aware of the issue and wants to fix it, the job is helping build the tools and the models to support them, and then, frankly, making it fun.

As Harshil pointed out: building up a security champions program, doing red team and blue team exercises. In our company, we have a model whereby if you find a vulnerability, we’ll give you X dollars, and if you fix a vulnerability, we’ll give you Y dollars. Certainly when it comes to those events, we have no shortage of people signing up. But if I look at this, really it’s the “why”: why it’s important and why they should care. Then the second part is how they can fix it, and security should never be coming up with the answer, saying “This is how I want you to fix it.” It should be engineering devising it themselves, because it’s their platform, their environment and their code. They can come up with a solution to fix that issue exponentially better than security can.

Learning from Security Failures

Justin: So my next question, again, while we’re still in the “everyone does security” mode: when a security failure happens, I’d love to hear some ways to treat those moments as learning opportunities. Actually, I’ll share a recent anecdote from our environment. We have an AWS account that’s of course isolated from everything else. It’s deliberately the sandbox environment, and it’s deliberately just for self-training: you go through training modules and you teach yourself how to use things. We had exactly that kind of key upload there, and it was really–

We caught it instantly, but it was great to see how fast it was and what the behavior of the adversary was, isolated from everything else; just that real experiential, tangible experience. It wasn’t a shaming moment for the individual involved, it was definitely a learning moment for the team.

Are there any other structures like that that you folks have dealt with, for almost celebrating the reporting that necessarily has to happen?

Niall: There’s a bug. Someone has a bug in engineering, or a feature just doesn’t work the way it’s supposed to. You click on the button and it doesn’t work. You’re not going to go to that person and say, “What happened? Why did you do that? Why would you ever think about making that mistake?” and then go to HR and get them to either fire the person or put a note in their file.

Engineering would never do that, and QA and engineering would never do that to each other. Yet security, down through the years, that’s been their model. It’s like, “By the way, there are keys in the code and you shipped it to GitHub,” or “You released this feature with a blind SQL injection.” That’s a terrible model. So if you want to build this proactive model, instead it should be security, engineering and QA linking together, really looking at it from the point of view of: “Let’s just take the example of keys in the code. What’s the issue there? The keys are in the code, first of all. If you’ve got keys in the code, what have you got to do?” You’ve got to rotate those keys, take the legacy keys out of there, and then build an automated system to catch this in GitHub itself.

Then we’ve got to do security awareness. The ones I find work the best follow the way engineering and QA treat each other: “Errors happen. We need to fix it. What are the three issues that are at fault? How do we automate that across the board?” Security needs to be part of the heart of that model, instead of coming in like an HR-type entity telling you what you did wrong, raising hackles and ruining relationships. Instead, it should really be the model of “We found an issue; what are the two or three things we fixed?” Make it a teachable moment for both parties and automate it. Then after that, you just track it and try to find out if it’s going to happen again. If a service goes down once, it’s going to go down a hundred times over time. But you can track it and find out, generally, whether you’re on the right track or not.

Harshil: Yeah, and we actually use those events as teachable moments for us as well, in the sense that if a bug got introduced or if something bad actually happened, then it’s not just some engineer’s fault or somebody’s fault; it’s really a gap on the security team as well, because that risk materialized in the production environment. So how can we learn to prevent it from happening again in the future?

We use standard retrospectives when an issue or an incident happens, and an RCA comes out of it with action items for everyone. But our perspective is, “How can we prevent this from happening again, as much as possible?” Using those things as learning moments for everyone involved, not pointing fingers, is the key.

Justin: There’s a concept, and we had a question on this, that probably doesn’t show up in that first wave, but in that second wave of maturity I’m eventually going to want to look at numbers of some kind. So if you’re sitting down to make your first dashboard, what’s the first quantifiable aspect that you would want to track as a metric on a weekly, monthly, or quarterly basis? What comes to mind as something you would definitely want to quantify?

Niall: If there’s a vulnerability and it’s internal, that’s fine. If the vulnerability is external, that’s worse. How many vulnerabilities do you have that can be found externally, using simple tools like Tenable or Rapid7, or internal tools like source code analysis? Really, the biggest one I always care about is: “What’s my external footprint? What are the external vulnerabilities out there?” If you can get that right, you’re in a pretty good position.

Harshil: I would say, from a non-technical perspective: if you can somehow track, especially if you’re in B2B, the security requirements set by your customers. How many gaps do you have compared to the contracts that you signed with your customers, or customer expectations? On your roadmap, what is that delta? You really should be tracking those things, because they can become contractual obligations, and if an incident happens, they will come under questioning. So you should really keep a close eye on those.

Justin: OK. Thank you very much.

Harshil: Thank you.

Niall: Thank you.