August 30, 2018
About the Guests
Dan Cornell heads the Denim Group's application security research team. He is also a sought-after speaker on the topics of web application security and building solutions with Microsoft .NET and J2EE technologies.
Guy Podjarny: Hello everybody. Welcome back to The Secure Developer. Thanks for joining us. Today we have a great guest with us, we have Dan Cornell from the Denim Group. Welcome, Dan.
Dan Cornell: Thanks for having me on.
Guy: It's great to have you join us, Dan, for many reasons. There are a lot of interesting topics I want us to talk about today. A big starting point for it is that you have been in this world of application security for a long stretch, and I'd love to pick your brain and hear a little bit about the evolution of it.
Before we dig into it, can you share a little bit about who you are, what you do, and a little bit of your background getting into today?
Dan: Yes. I'm Dan Cornell, I'm the CTO and one of the founders of Denim Group. I'm a software developer by background. Computer science degree, math minor, because I was really cool when I was in college.
I did a lot of server-side Java in the mid-to-late '90s, I did a lot of early server-side .NET stuff in the early 2000s. But what I've spent the majority of my time in my career doing over the last 15 years is working to bridge the gap between security teams and development teams, trying to help organizations understand the risks that they are exposed to because of the software that they're building.
To help them, from a systematic standpoint, create development lifecycles that let them produce secure software reliably. So I'm a software developer who came into the world of security, as opposed to someone with a more traditional security background, either in systems administration and network penetration testing, or as an IT auditor, or something like that.
Guy: That's definitely a very valuable perspective to come into it with. A lot of the conversations I've had the pleasure of having on the podcast show that the brightest minds in the world of security and AppSec today have had some good investment in dev in their background. During that time, tell us a little bit about your work in application security over the last 15 years, just to get some context of what you've seen.
Dan: I've always had an interest in security, even going through my university education. After that I followed the stuff that the L0pht folks were doing. Again, I was writing e-commerce websites at the time and not directly responsible for security, or so I thought.
Everyone has learned that everybody writing code is responsible for security.
That wasn't a primary concern of mine but it was certainly an interest of mine. My background was in custom software development, running consulting companies that did that, and about 15 years ago I was introduced to John Dickson who is now also one of the principals at Denim Group.
John's background is much more that of a traditional security guy. He came out of the Air Force, where he was an intelligence officer and information warfare officer. He had worked in information risk management at KPMG. His resume looks much more like the traditional security practitioner's.
We were introduced at a tech networking event and got to talking, and as we were talking I said, "Here's my background as a software developer. I'm interested in security stuff." And he said, "Everybody in the security space has a resume that looks like mine." But the really interesting problems, the really scary and challenging problems that are there right now, are all around applications.
It's not that the network and infrastructure layer has been solved, but everybody at this point has pretty much figured out that they need to put up a firewall. Then they poke holes in the firewall for ports 80 and 443.
So the real security challenges we're seeing right now are at this level, but none of the installed base of security professionals understand software development, understand IDEs, understand web programming. All the folks in security that have a background doing programming did COBOL or FORTRAN way back whenever.
At Denim Group, that was the genesis of our security practice: saying, "Let's look at what we're doing on the custom application development side of things." There were things we knew we had a responsibility to secure, like encrypting credit card data in transit and at rest. That was a lot, and that was the limit.
But let's look beyond that to start to look at, "What are the security implications of the code that organizations are writing?" And out of that set of conversations both me personally and professionally with what I was doing, as well as what we were doing at Denim Group, the consulting services that we were offering expanded when we started working with John based on that set of conversations.
Guy: Maybe let's dig a little bit into some examples. You come in, you work with companies to fix either existing explicit security flaws or how they handle security in their process. Can you give us some examples of common blockers? Why aren't these companies addressing these security risks in the first place, and what changes do you start making inside a company that isn't yet self-sufficient here?
Dan: There are a couple of different reasons why an organization isn't addressing security risk, and it goes along the maturity spectrum: at every level of maturity you have an excuse not to address security. At the base level, the entry level, a lot of organizations haven't thought of their applications as being a conduit for security attacks.
When people think security, they think it's a specialized subdivision of IT. "Those are the guys that do antivirus, they do firewalls, and they make us watch the training videos every year about not clicking on bad links. That's what security is." Those organizations, at an exceptionally low level of maturity, don't even know that it's a problem.
And when you ask them if their applications are secure they say, "Yes, we've got a firewall, and I bought the fancy SSL certificate from our provider." Those people don't even know they have a problem. In a lot of ways those are the most challenging organizations to work with, because it's all evangelism.
You've got to convince these people that the way they're viewing the world is not comprehensive enough. And that's a big uphill climb.
Then you get into organizations that are more sophisticated, where they know it's a problem, and probably they've done some assessments and they've identified some vulnerabilities. But the challenge there is that a lot of times the security team and development teams don't communicate well. They don't speak the same language, they don't use the same tools and in a lot of cases if you look at the short term, they have competing aims.
So the challenge we see with security in more mature organizations is they know they have a problem, but that problem is prioritized below other things that are being done. "We've got all this other work to do, and now you're adding this security stuff on top of it, stuff that I maybe knew I needed to do but didn't really recognize."
Like I said, at every stage of an organization's development, they can always find something else to do, because security isn't the only thing organizations need to care about. That's something a lot of people in the security industry need to wrap their heads around: what they're doing is important and valuable, and real risk management is a key component of what organizations need to do if they want to survive.
But in a lot of cases that is defense, or is certainly seen as defense in most organizations, not as an enabler. And security people need to understand there's a whole lot of other stuff going on that is much more directly generating business value. So figuring out how to incorporate security as a component, and ultimately make it more of an enabler, is where a lot of security teams need to place their focus if they want to stay part of the conversation.
Guy: Absolutely. I talk a lot about why humans make insecure decisions, and maybe specifically why developers or security people make insecure decisions. The first bullet in there is around motivations. It's around, "What is it that you're doing?" And the developers, fundamentally, they're building.
They're building functionality, they're building scale, and security is a necessity within that. But it is not their primary motivation. They don't set out to build something secure, they set out to build something that does something. And they need it to be secure in that process.
Dan: If we just think career path, think of your reputation on your team. If your reputation on the team is, "That guy writes really secure code, and he's never met a deadline that he signed up for," versus the guy where it's, "That guy's a little bit of a cowboy, but every time he delivers on time, on target." Which of those two individuals is going to experience greater career success?
From a competitive standpoint, the person that can never quite get it done is, in most cases, less valuable to teams than the person who finds a way to get the job done.
And obviously those are two extremes, but it's important to think of which characteristics, qualities, and accomplishments are given credit, recognized, appreciated, and rewarded, and to see the motivations and incentive factors that are in place.
Guy: Actually, let's double-click a little bit into this communication. So, you come into a team, let's say at that second level of maturity. The development team and security team aren't communicating well, and you mentioned they might even have competing aims. What are the different aims that you see, and what are a few types of practices you try to instill to help fix that?
Dan: From the security side, typically the aim should be, "Let's bring an appropriate level of risk management to what we're doing in the organization, so that we're supporting innovation and we're supporting progress, but that we're not putting the organization in a situation where it's too exposed."
There's different ways to look at that, but that's how I view most security teams. At the base level, "Let's make sure that we don't do anything that's going to get us breached in an unacceptable way. And then in a better way, let's look and see how by providing this risk management we can enable the organization to do even better things."
Like having a net under your tightrope act. On the development side, their goal is to provide new capabilities and innovation that will allow the organization to be successful in the marketplace and to provide value to its various stakeholders.
Again, in a lot of situations the development teams are saying, "We need to go-go-go, because the business told us go-go-go," and security is perceived as being the department of no. Like, "We want to do this." "Fill out this form. No, we're not going to let you do that, but fill out this form instead."
So it's not that either group is right or wrong. But looked at in the most basic way, one group is trying to move forward and one group is perceived as trying to slow that forward progress. And that creates a lot of problems.
Guy: How do we fix that? What are some pieces of advice that you give to these organizations, or that you work with them on applying, to get them to a better place?
Dan: The attitude in security can change. It's like improv comedy. In improv, if somebody says, "I'm a guy who forgot to take my umbrella in a rainstorm," the next person isn't supposed to take over and say, "No. Actually, we're going to do this other thing."
In improv comedy, whatever the other person says, you're supposed to say, "Yes, and--" So security teams need to do a little bit of improv comedy, where somebody comes up and says, "I've got this crazy idea where we're going to have a website where you don't have to log in, and it doesn't make you log in, but it's still going to give you access to your account."
It's not security's job to say, "No. That's the dumbest idea I've ever heard." It's their responsibility to say, "Yes. That's a very interesting idea, and here are some things that we could do to address some potential problems with it." That's a change in attitude away from, "My users are stupid. I wish they'd stop doing this. The developers are stupid. I need executives to help me shut them down."
A perspective of, "I need to stop these folks," is going to be pretty self-defeating for security people. Because in organizations, the value you derive from moving forward, and increasingly the value you create from being able to move forward and innovate quickly, in a lot of cases outstrips the short-term risks that you're exposed to.
And again, organizations are going to ask where their incentives are. Certain organizations may need to be the most secure, but even organizations that believe they need to be secure know, at the end of the day, that they really need to be serving customers, driving forward, innovating.
Those are the forces that are going to win in the marketplace. For security folks, it's really important to view themselves as enablers and ask, "How can we help the organization move more quickly, safely?" As opposed to the default answer to anything being, "No, we can't do that," or, "How can I slow this down or put some additional control in place?"
Guy: I'm always struck by the analogies to the DevOps movement, how the word "security" there could have been swapped for the word "ops" at some earlier point in time, a decade ago or not even. And that has changed.
The most successful businesses are indeed the ones where operations is a business enabler. "Look, we do ops so amazingly well that it allows us to move faster and still do it in a high-caliber ops environment." The goal would be to have security work in the same fashion.
I also love that a lot of times when you talk about developer security, about mobilizing a more secure, modern dev environment, a lot of the conversation revolves around what developers should do. But I very much relate to the fact that there are just as important changes, or sometimes more so, that the security team needs to make. That the security industry needs to change to get to that better place.
Do you see that acceptance changing? You've seen the AppSec world, the OWASP world, all this evolution of application security over this stretch. Do you feel like there is acceptance, or reluctance, to this right now? Is it becoming received wisdom that they need to change, that we as an application security industry need to change? Or are people still pushing back against it?
Dan: If you look at the people in the industry, they exist on a spectrum of that understanding. The same with organizations: they exist on a spectrum of that understanding. In my experience with OWASP and the type of people who are involved in OWASP, they very strongly have that understanding, that point of view, which I believe is right.
And we'll ultimately see if it's right. But a lot of the leaders in OWASP very much have that view, which is, per Steve Ballmer, all about the developers: "Developers, developers, developers." The way that I look at it, at the end of the day the developers are going to have to change their behavior and change their actions if we want to see more secure code.
Obviously there are external things that you can bolt onto or insert into the process. There's WAFs, there's RASP, and those certainly have a place in a program in a protection scheme.
At the end of the day, developers are going to need to change their behavior to stop introducing new vulnerabilities into code. Developers are going to need to change their actions to fix the vulnerabilities that are already out there.
And security certainly has a role in advising and building awareness, and providing direction, but at the end of the day if you want more secure applications in your organization the developers are going to have to do something different tomorrow than what they did today.
As a result, per my view and per folks in OWASP, a lot of the leading voices in the industry, the way that they view this is very much, "How do we support and enable developers with their training and education, with the way they set up their processes, with the tools they're using? How do we change these factors so that we get better security outcomes?"
Guy: That makes perfect sense. At the end of the day, it's about the entity that is taking on the responsibility. Or rather, there's work that needs to get done. The security actions, the security activities, need to happen, so you want to enable and empower developers, but they need to embrace that responsibility.
So let's double-click a little bit into how to make it easier around tools, and maybe it's a good opportunity to bring up ThreadFix. This is a product offering, I guess, from your space that I found really exciting as I was digging into it. Do you want to tell us a little bit about what ThreadFix is and what brought it about?
Dan: First, it's not a product. It's a platform.
Guy: Alright. I stand corrected.
Dan: I've told a lot of people this story, but I'll tell you the story of how ThreadFix came about. It came about from us watching the interactions between security teams and dev teams. We were working with a financial services organization, helping them set up their software security assurance program, their secure development lifecycle.
One of the security analysts needed to do testing on a web application, one of their important line of business web applications. So he took one of the commercial scanners, ran a scan, generated a 300-page PDF with a color graph on the front, and went and handed it to the development team and said, "I'm from security. I'm here to help. We did some testing of your application. We found a number of vulnerabilities, and because security is really important we need you to fix this stuff."
And the developer, playing along, says, "OK. This is a pretty big report. Which of these vulnerabilities do we have to fix?" And security says, "This is security. You have to fix all of them. This is the most important thing. There are hackers out there." And the developer says, "OK. How do I fix them? We didn't put these vulnerabilities in there intentionally. How do we fix the vulnerabilities?"
And the security person says, "I'm pretty sure there's some instructions in the report on how to do that." The security person wanders off, the developer takes the report and puts it in the bottom drawer of their desk and forgets about it.
A couple of months later they were doing some perimeter scanning. They had a different service that they turned on, pointed it at this application, ran another scan, and generated a 200-page PDF document with a different color graph on the front. And the security representative went to the dev team lead and said, "We did some more security testing. Here's a report of the additional stuff that you need to do. By the way, how did fixing all that other stuff turn out?"
And the dev team representative says, "How is this different than what you gave me before? Is this the same vulnerabilities, or different vulnerabilities? I don't understand." And the security representative says, "I don't know. There might be some overlap, there might be some same things, and different things. I'm not sure."
So the development team lead went up to their line of business management and said, "The security guy came around again and this time he's actively wasting my time, because he's got this report that's 300 pages, he's got this report that's 200 pages. He can't tell me where there's overlap, he can't tell me what I'm supposed to fix.
I've got these features that this hotshot VP promised to an important customer and I've got to get those out the door, we've got these performance bugs that are really aggravating some people, and we have non-security-related bugs that are making customers angry. How do I prioritize all of this stuff?"
So a rock got dropped from on high on the security team that said, "You can't speak to another developer until you can provide them a single list of what needs to be fixed, until you can provide a justification for why these vulnerabilities need to be fixed rather than us continuing to implement the features that we've promised to customers or doing the other things that we need to do, and until you can provide specific instructions on how to fix these vulnerabilities. Until you can meet those criteria you don't have the authorization to speak to another person on a development team."
So the security representative does what comes naturally: fires up Excel and starts cutting and pasting the results from the different reports in, trying to de-duplicate them. We watched this interaction and a couple of things struck us. Number one, no organization feels their security team is overstaffed or underworked, or at least I haven't met one that does. If there's a team out there like that and you have a job opening, please let me know. I would love to be your colleague.
So every security team has very limited resources, and this is obviously a misuse of those resources. But what I also realized and noticed was that neither of these groups was working in bad faith. Nobody was trying to be a jerk, nobody was playing crazy politics or anything like that. The security analyst was trying to test the website, find vulnerabilities, and hopefully get their risk reduced by getting those vulnerabilities resolved.
That was that person's job and they were doing the best that they knew how. Looking at the development team, it's not like they wanted to write code that had security vulnerabilities in it. It's not like they were maliciously saying, "We're going to show that security guy. Let's see if we can get 10 more SQL injections into our application."
But they were doing what they needed to do, which was to build features that allowed for innovation and made customers happy, and to address the most glaring issues that were degrading their customer's experience. Performance problems, non-security related bugs and whatnot.
Both groups were acting in what they thought were appropriate ways, but the communication pattern was so horrible that neither group was ever going to get anything done, and they were just destined to be in conflict with one another.
It was great to be a fly on the wall for those interactions, because they showed us a pattern that we saw over and over again in organizations: the security teams and the development teams are speaking different languages. At least in the short term, they have very different motivations and incentives.
And because they're using different tools, in a lot of cases they're talking past one another. So watching that interaction led us to build ThreadFix, to say, "How can we make it easier, on the left side of the equation, for security teams to manage all the different stuff that they're doing to identify vulnerabilities?"
Such as dynamic scanning, static scanning, component lifecycle management, open source vulnerabilities, things of that nature, along with all the manual stuff. Then on the right side of the equation, how do we turn these vulnerabilities that the security team cares about, how do we turn those in to software change requests or software defects that the development teams care about?
Because if you think about it, that's like going to a development team and saying, "90% of the time when you're doing your work, you manage your workload in Jira or Bugzilla, or whatever defect tracking system you're using. But 10% of the time, when you're doing magic security stuff, you work off this PDF that we've printed out and put sticky notes on."
If you take a step back and describe it that way, it's a crazy way to communicate with dev teams, but that's still the way that security communicates with these development teams. "We're going to do some testing, and we're going to shoot over a PDF." Or, "We're going to do some testing, we're going to shoot you an Excel spreadsheet, and we expect you to work down through that to address these issues that we've found."
So it's those data management challenges, but more importantly the communication challenges, that led us to put together ThreadFix.
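The de-duplication problem Dan describes, the same issue showing up in a 300-page report from one scanner and a 200-page report from another, can be sketched in a few lines. This is purely an illustrative sketch, not ThreadFix's actual algorithm; the field names (`cwe`, `path`, `parameter`, `scanner`) are hypothetical, not any real tool's export format. The idea is to key each finding on vulnerability type and location so duplicates collapse into one entry:

```python
def dedupe(findings):
    """Collapse findings that describe the same issue, recording which
    scanners reported each one."""
    merged = {}
    for f in findings:
        # Two findings are "the same" if they share a CWE, URL path,
        # and parameter. Real tools use fuzzier matching than this.
        key = (f["cwe"], f["path"], f.get("parameter"))
        entry = merged.setdefault(key, {**f, "scanners": set()})
        entry["scanners"].add(f["scanner"])
    return list(merged.values())

report_a = [
    {"cwe": 89, "path": "/login", "parameter": "user", "scanner": "scanner_a"},
    {"cwe": 79, "path": "/search", "parameter": "q", "scanner": "scanner_a"},
]
report_b = [
    {"cwe": 89, "path": "/login", "parameter": "user", "scanner": "scanner_b"},
]

unique = dedupe(report_a + report_b)
# The SQL injection on /login appears once, attributed to both scanners.
```

Even this toy version does what the Excel copy-and-paste exercise could not: it answers "is this the same vulnerability or a different one?" mechanically.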
Guy: That is an amazing cautionary tale. You're right, it resonates painfully when you hear it. It just happens so often. We see it all the time in the world of open source security, where you ask somebody how they handle open source vulnerabilities and you'll get a 10-minute answer about all the wondrous ways--
Well, hopefully. In some cases you get a fairly alarming, "No, we don't," answer. But if there's good progress you'd hear, "We're finding them here, and finding them there, and finding them here." And then you ask them, "So you're finding them, but how do you handle those vulnerabilities? What do you do next?"
And suddenly there's a disastrous story about multiple triage committees, findings falling into these buckets and those buckets, custom processes, a sub-security team under the top security team. At the end of the day the path to remediation is nowhere near as good or as concrete as it should be, when everybody agrees that's the end goal.
It's not just to find them. It's find them first, do the risk management, but then subsequently fix them and improve the risk posture. So ThreadFix is aimed at that communication channel. Do you see it as a tool that makes the developers smile, or the security people, or both? Who typically takes it in?
Dan: Our goal is that, if we're successful, the average developer doesn't even necessarily know that ThreadFix is being used in their organization. Because from a philosophical standpoint, ThreadFix is targeted at the application security team: find all the teams developing software in your organization, all the applications they're responsible for, and then load in the results of all the testing you're doing.
Static, dynamic, IAST, open source management, manual pen tests, code reviews, all that stuff. And let the security team manage the data inside of ThreadFix to determine which of the vulnerabilities we think are the most serious, which have compliance implications, service level agreement implications, whatever that might be.
But then what we want to do is we want to reach out to developers in the tools that they're already using. In this case, most specifically defect tracking tools. So let's bundle up these vulnerabilities that we consider to be sufficiently important to merit developer attention, bundle them up in a way that is going to make sense and create defects based on that.
Maybe that's grouping things by vulnerability type, maybe by where in the application they're located, whatever that might be. How do we bundle these things up and make that transition between vulnerabilities that the security teams care about, and bugs or backlog that the development teams care about?
For most developers, they don't need to log in to ThreadFix, they don't need to learn a new tool, they don't need to have a new login. You don't need to train them. They're just going to get bugs that show up and get assigned to them in their scrum meeting or whatever meeting tempo that the organization has. Those bugs are going to show up in their defect tracking system saying, "Here's the problem. Here's how to fix it. Let us know when you're done."
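The bundling step described above, turning a pile of security findings into defect-tracker tickets in the developers' own tool, might look something like the following sketch. The grouping key and the defect payload shape are assumptions for illustration; a real integration would call a tracker's API (Jira, Bugzilla, etc.) with its own issue format.

```python
from collections import defaultdict

def bundle_into_defects(vulns, group_by="type"):
    """Group vulnerabilities (here by type) and emit one defect payload
    per group, phrased for the development team."""
    groups = defaultdict(list)
    for v in vulns:
        groups[v[group_by]].append(v)

    defects = []
    for vuln_type, items in groups.items():
        locations = ", ".join(v["path"] for v in items)
        defects.append({
            "title": f"Fix {len(items)} {vuln_type} issue(s)",
            "description": (f"Affected endpoints: {locations}. "
                            "Remediation guidance attached per finding."),
        })
    return defects

vulns = [
    {"type": "SQL injection", "path": "/login"},
    {"type": "SQL injection", "path": "/account"},
    {"type": "XSS", "path": "/search"},
]
# Produces two defects: one covering both SQL injection findings, one for the XSS.
```

Grouping by vulnerability type is only one choice, as Dan notes; grouping by application area is equally valid, and the right granularity depends on how the team slices its backlog.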
That's what we strive to do: how do we make it as easy as possible for developers to get the information they need to fix these problems? How do we take friction out of that process? Because if you take friction out of the process, what we've found is that developers fix more bugs, faster. In one organization we worked with, the mean time to fix went down by 46%, which is great, just like you said before.
Finding vulnerabilities is an important part of the process, but getting them fixed is where the world actually gets better.
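The mean-time-to-fix metric behind that 46% figure is simple to compute once each vulnerability carries an opened and a closed date. A minimal sketch, with hypothetical field names, over only the vulnerabilities that have actually been remediated:

```python
from datetime import date

def mean_time_to_fix(vulns):
    """Average days between a vulnerability being found and being fixed,
    over the vulnerabilities that have been closed. Returns None if
    nothing has been fixed yet."""
    fixed = [v for v in vulns if v.get("closed")]
    if not fixed:
        return None
    total_days = sum((v["closed"] - v["opened"]).days for v in fixed)
    return total_days / len(fixed)

history = [
    {"opened": date(2018, 1, 1), "closed": date(2018, 1, 31)},   # 30 days
    {"opened": date(2018, 2, 1), "closed": date(2018, 2, 11)},   # 10 days
    {"opened": date(2018, 3, 1)},  # still open, excluded from the mean
]
# mean_time_to_fix(history) -> (30 + 10) / 2 = 20.0 days
```

Tracking this number over time, rather than the raw count of findings, measures the side of the process Dan argues actually matters: whether things get fixed.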
Again, I've been doing app security testing for 15 years or something like that. Finding vulnerabilities isn't the problem. That's never been the problem in any of the testing engagements, in any organization that's rolled out static analysis or dynamic analysis.
In all my experience doing testing and helping organizations set up testing programs, finding vulnerabilities isn't the issue. Actually, finding vulnerabilities in a lot of cases is the problem, because you stack up this mound of vulnerabilities that is just monotonically increasing in size because things aren't getting fixed.
Where organizations really get value, the win for organizations, is to figure out which of the actually important vulnerabilities that need to be addressed and to get those resolved and pushed into production by the development and the operations teams. And that is where organizations struggle.
We've seen so many static analysis rollouts where with each app you go through, you're just stacking up more and more vulnerabilities, especially with a lot of untuned static analysis engines that are just stacking up a bunch of info-level or low-level stuff. Same thing with dynamic analysis: any type of automated analysis is going to generate a lot of stuff. Some of it is false positives, some of it you maybe don't worry about. But again, you get this mound of vulnerabilities that just increases in size over time.
More attention needs to be paid to the other side of that process: of the vulnerabilities that we've identified, which are we actually going to fix, and how do we get them in front of the developers to fix them? It's a lot less sexy; you're not going to be speaking at DEF CON about your sweet remediation hacks. Although Black Hat has some more blue team stuff this year, which is great. But if you look at the industry, the InfoSec rock stars are not the ones that are fixing the most stuff.
Guy: No, it's those hacking the Jeeps and breaking into your brain.
Dan: Exactly. And it's not to say that that is not valuable, but it is something very discrete. "I went in, I did a test, I found this stuff." It's a challenging intellectual endeavor to do testing and find new things.
The real challenge comes on the other side, where you're fighting politics. I'm going to butcher the quote, but I think it goes, "Layer 7 of the OSI model is applications, and layer 8 is politics." I'm sure I'm butchering it, but that's where you see, "OK. We've done all the technical stuff we need to do to find the vulnerabilities, now we've got to hack humans and systems in order to get these vulnerabilities resolved."
For better or for worse that is not as cool as finding the stuff in the first place.
Guy: But no less, if not more, important. So it sounds awesome. If somebody wanted to check out ThreadFix and try it out, where do they go?
Dan: They can go to ThreadFix.it, where we've got a free trial download. The easiest way to get up and running is to submit the contact form; we've got an Amazon image we can share with you. Give us an account number and region and we can shoot that over to you. You just spin it up and it's already pre-configured.
Guy: Awesome. Dan, there's a whole bunch of other questions that I have, but we're running a little bit out of time. It's been a really fascinating conversation around the evolution of not just the dev-to-security channel but also the security-to-dev channel, including the start of the conversation and ThreadFix and what it represents. At the end of the day communication cannot be one way; it has to be both channels, and we need to adapt and create those channels.
So, awesome conversations, and thanks for sharing your experience. Before I let you disappear here, I have one question that I like to ask every guest on the show. Which is if you had one pet peeve or top advice that you had around security, to offer a team looking to level up their modern dev security posture, what would your one bit of advice be?
Dan: I don't know if I'd call it a pet peeve, but one of the things that has tremendous promise is the security champions model, where you certainly have a central security team providing certain functions, but you start to embed security knowledge into dev teams, so that every dev team has, "Here's the security person I can go talk to if I've got a question about a vulnerability, or about authentication or authorization."
The model of having this monolithic security group that does everything is destined to fail.
Again, it's not to say that central security groups don't have an important role, but the question is how we embed some security knowledge. Whether it's taking someone on a given team who has an interest in or aptitude for security and providing them with some training and development, or taking someone from the outside and embedding them in that team.
It's like the DevOps cultural transformation, which said that development and operations at the end of the day have the same goal: "How do we generate shareholder value, or stakeholder value, for this business?" Similarly, breaking those barriers down between security teams and development teams is critical for success.
How do we make security knowledge local so that every team has the ability to easily reach out for it? That security champions model of having embedded expertise and knowledge is one that I've found to have a tremendous amount of success and value.
Guy: Excellent. That's a great tip and bit of advice. Dan, thanks a lot for coming onto the podcast.
Dan: Thank you very much for having me. I had a great time.
Guy: Thanks everybody for tuning in, and join us for the next one.