The Secure Developer
34 MIN

Ep. #3, Security From The Start

about the episode

In episode 3 of The Secure Developer, Guy is joined by Sabin Thomas, VP of Engineering at Codiscope, where he creates tools that help developers build and deploy secure code faster. The two discuss the difficulties presented by the accelerating release of new tools and frameworks, the problem of too many sticks and not enough carrots, and the benefits of designing with security in mind from the start.

Sabin Thomas is VP of Engineering at Codiscope, a developer tools company hell-bent on ridding the world of easy-to-hack apps by teaching developers to code securely.

transcript

Guy Podjarny: Hi, everybody. Welcome to The Secure Developer. Thanks for listening in.

Today we have Sabin Thomas with us. I'll let him introduce himself in a moment. He has a lot of experience building and leading engineering teams in security companies, as well as companies that don't focus on security, but actually do security.

So he has a lot of knowledge for us that I'm eager to tap into and have you listen to as well. Sabin, thanks for coming on the show. If I can ask you to just introduce yourself a little bit, what's your history, maybe how you got into security?

Sabin Thomas: I think I've had more of a varied sort of path into the security space. I would say this is one of the more recent explorations for me.

My past spans over 15 years working in software, doing everything from enterprise HR and HCM systems, to financial institutions, then e-commerce and search advertising, and now developer tools with a very big security focus. So it's been a great journey. I think this is the most interesting of all of them.

Guy: Definitely not a small feat, getting developer tools into security. And I guess you work at Codiscope today? Do you want to just sort of catch us up a little bit on what Codiscope does?

Sabin: Absolutely. My role is VP of Engineering at Codiscope. Codiscope is a little more than a year and a half from its inception. We are based primarily in Boston. We are a developer tools company.

We do a number of things. One of those things is specifically security tools aimed at developers. The people we have in our company bring almost six to seven years of experience working on developer solutions for other stacks, Java, .NET, PHP, so we have a good amount of experience in that type of realm, that kind of field.

We also have a specific division of our company that's focused on e-learning content, educational materials specifically aimed at security-themed courses helping developers improve their security knowledge. And now, over the last year, we've been refocusing our efforts specifically on JavaScript, and a different model for understanding code and making developers code securely.

Guy: I actually have a whole bunch of questions I could ask. First of all, you mentioned developer tools and the focus on the developer; clearly we need security knowledge, right? There are a lot of problems out there. Why developers? Why developer tools, versus just the typical infosec and security tools?

Sabin: That sort of has to do more with the genesis of Codiscope, I would say. So, a little bit of background there. Codiscope, as a company, we were spun out of a company called Cigital, which has close to 20 years of experience in application security consulting. They've been doing this for a long time and they have a great enterprise footprint.

The reason for the spin-off was that as a product, we would have that much more impact if we were able to touch the developer. Traditionally, security, especially in enterprise companies and regulated industries, has sort of been mandated from the top down. Management, or a security team, or the CSO in these companies would certainly understand the need for it.

There's an obvious risk with writing insecure code, and we would turn that around and mandate it to the development team, to use the tooling, the processes and so on. We felt there was a different approach to that, where

we could go directly to the developers and have that experience be very native to their development environment and really make some impact there, to the point that they feel that that's part of their tool set.

The aim is still the same. We still want secure code. We want the developer who is at the forefront of this to go down that journey and make it more secure, and that's what we feel gets the most stickiness in our approach to it.

Guy: Yeah, it makes sense. And I think as the pace of development grows, the impact of an individual developer, the throughput, how much a single developer can achieve today in terms of new functionality, just becomes staggering. You can do a ton of things, which is awesome. It also implies that really anybody outside of development, I think, is just not able to keep up, let alone an understaffed security team or one facing a talent shortage.

So, to an extent, we really have no choice but to get developers involved. It becomes not really a question about whether developers should embrace security and start acting on and using, applying, some security best practices, but rather, how do we make that happen?

Clearly it's a necessity if we have any chance of winning, of achieving some form of security. So a lot of it is just about how to get developers engaged. We do the same thing at Snyk, right? We're focused on saying this is developer-tools-that-is-security, and I feel like just by that statement there's an indication of focus. Which, hopefully, I'm seeing a little bit more of, and which keeps growing in the space.

Sabin: Just to match the need, as well. The need for developers is at an all-time high. Everything everybody does always has a software component to it. That's one piece of it.

The other piece is that developers have a five-second attention span, so whatever framework they're working in, the next one is obviously cooler and better. And so with that kind of churn across frameworks and that kind of repetition,

I think developers are just prone to doing things incorrectly, not because they're malevolent from the start or they have intent that way, it's just the pace in keeping up with these things means that you just can't be an expert at everything.

Compare yourself to the Java developer who's been working on it for 20 years. They have basically covered every aspect of Java, the tool set. And then the developer who's been working on Node.js, Swift, Go, take your pick, for the past two years.

Guy: You may not know where the pitfalls are.

Sabin: Yes.

Guy: You may not have that experience.

Sabin: I think with the amount of time that you have to spend on this, you are not able to spend the time to really understand it, to get to know what you're coding, and make that secure. So tooling is even more important.

Guy: When you say it's developer tools, like, practically speaking, you've sort of seen at Cigital and all that, people building pretty much the same security tools, especially the education tools, right?

It might even be the same tools, but here at Codiscope you're trying to make this developer-tools-that-is-security. How do you see that manifest in the product? What makes a security product more compelling to a developer?

Sabin: I think a lot of that is tied into what we launched a few months ago, a product called Jacks, definitely try it out. The way we approached this product was very different from how we did our past products. The understanding was that we wanted to approach developers first, and again, not have something that was mandated onto developers.

In that model we really looked at GitHub and a lot of other tools that have had a good amount of developer traction, like New Relic, where these tools were just simple enough for developers to understand where they fit in their ecosystem. That model was pretty solid. So we took a look at that. We understood where we wanted to fit in.

We also understood, as part of our mission to have developers code securely, that the educational aspect of that was very important.

I think developers certainly want to know when they're doing a good job themselves, when they've been able to work with their framework correctly, when they've been able to do the right things in their code. What is sort of missing in the tool sets that are out there right now is the ability to track that.

There's no ability to say that, "Hey, you've actually become a pro programmer in Scala and you've done package incorporation correctly." So that type of analysis or that type of tracking is missing. And so we felt that combining the need to have a secure tooling experience along with the educational experience would establish that kind of environment that a developer would want.

So we approached that. That was very much part of our product design from the very start. We took a brand new, fresh look at it with Jacks. We spent a good amount of time understanding the user experience, what a developer would really want, what would accelerate them, how we would accelerate that dev cycle instead of being a deterrent or a new body of work.

Guy: Yeah, nothing engaged, right?

Sabin: Right, which is, I would say, sometimes the case with most security tools. There are findings, and you now have to spend time to work through them. We wanted to be an accelerant. And so we're still tinkering with it. There are things that we measure; we measure everything about Jacks. We track and try to understand whether the developer is faster now than he or she was before the product.

Guy: I had this really good chat with Camille Fournier who is a CTO in New York, and she had this sort of quote that said that, in the case of security tools, you pay to get more work. You just sort of pay out, and it's sort of not what you want, right? Like, if you're paying, you want to achieve better security. Maybe you are, but you want them to make it easier, not harder, and not send you down that path.

Sabin: In just looking at that as well, it's a tricky thing to do, to find out if a developer has really done this correctly. It takes a good amount of analysis, a good amount of understanding of the behavior, and we're still finessing that. I don't think there's necessarily a "right way" to do it.

There's certain nomenclature we try to stay away from, in terms of all our products, which is something that says, "You fixed something in response to a vulnerability that was triggered."

Unless we're absolutely sure, we don't want to say that a fix has happened. What we can tell is these packages look like you've done this correctly. This code you've written looks like you've done this correctly.

And so that's the path we're going down. We don't want to be too forward in saying that you've done this correctly without making sure that that's absolutely right. Now, I'm sure in the early versions of the product these are things that we're still figuring out, and we'll iterate on that. But that is certainly the mindset, the goal of where we want the product to be when it reaches maturity.

Guy: I think when you talk about developer tools, first of all there's the five-second attention span aspect, which is pretty much how much time you get, when you build a developer tool, to demonstrate value and to show ways of use. Historically, well, not just historically, at present even, with the vast majority of security tools today you need loads more time than that just to get them up and running, let alone start operating them on your system.

You have to have quick onboarding and, in general, just an understanding that even with the exact same core technology, be it security static analysis or authentication or vulnerabilities in dependencies, any of these components, the same core technology might be at play, but the product surrounding it, how easy it is to use and how easy it is to get going, is quite critical.

I feel like there's an interesting conversation to be had around education and tooling, and this virtuous cycle between them. How much education do you need to do upfront versus how much can tooling do for you and almost reduce the need for you to know? How can the two work in tandem to basically help, on one hand, reduce some of the effort from you, just do some of the stuff for you and ensure the tools are secure, and at the same time, surface things to you in line with the work that you're doing?

Instead of you going off, for instance, to do a three-day course and learn about secure coding,

how can you have some of that education around making mistakes be a natural part as you code?

You identify these things and you just repeat that enough that you know ahead of time not to make that mistake, allowing you to basically take it up to the next level.

I think an aspect of that is with automation tools and with static analysis and such. I don't know if it's with SecureAssist, where you have the in-line, in-the-IDE type of security education. Do you see that paying dividends? Do you see people, over time, making fewer of the same mistakes?

Sabin: Certainly with SecureAssist, which is one of the products in our portfolio, the intent is very much to be as native as possible to the developer experience. And the way it does that is by being an IDE plugin. Right now it supports IntelliJ, Visual Studio, Eclipse, some of the major IDEs.

As the developer is in that, as they're coding, those are the right opportunities where we can inject the right type of education to do that correctly, especially in response to something that's been done. Teaching when somebody's been able to see, for themselves, that they've made a mistake, or that something was incorrect, I think that is the right point at which it becomes sticky.

Sometimes it's a little trivial, in the way that people, or developers, write code, where you can almost attribute it to a syntax-type mistake. There are more complicated vulnerabilities or inaccuracies that can happen that may not work so well in that type of experience.

This is where we're finding that, even with languages, we're seeing that manifest. So if I'm talking about a Java code base, I can be sure all of that is typically in just one repo. In the analysis that SecureAssist does, we can find out from the way you've initialized your application routes, your web routes, to the way you've initialized your database, all of that is in the single repo. We can sort of figure that out.

In more modern frameworks, more dynamic, I would say even with JavaScript, that's split across 25 different repos, and making that connection can be a little complicated. And so to get that full, holistic understanding of the code you're writing, the corrections that we offer have to be in line with that.

I would say there's certainly still a need to have developers be educated at the point that we've noticed something amiss, and as you're coding is the right time to do it. But I'd be remiss if we didn't also do that holistic understanding. And so that's where a tool like Jacks is a little different, in that you invoke it primarily on your cloud-based repos.

I think we're still figuring out what the right model is. I think it's a mix of both. It's a little challenging for newer frameworks, because an IDE is almost a non-existent concept, I would think, for a Node.js developer. If they've started with Node.js as their first programming language, I would say that an IDE is probably the least of their concerns. It's usually a text editor or something like that.

Guy: Yeah, they're going to be in Atom or in Sublime or something.

Sabin: In Sublime, right. In that scenario, for a developer who's been using something like this, they don't really expect a lot of the IDE. They don't expect a lot of feedback out of the IDE, and so where they're expecting feedback and possibly educational material is a little later on in that build chain.

That's a little more native to them. Whereas if I talk to the Java developer, they want everything to happen in the IDE, the ability to debug, the ability to run debug builds and look at stack traces.

There's a different nature of developer, and we want to be the answer to both of them.

So this is where we'd like to work with the community and see if there are other things beyond the build chain, beyond the IDE, where we can inject that.

Guy: "Developer" is not one thing. Different developers, clearly, when for starters, it's just people, and people work differently. But also the norms of, or the best practices for how to develop software, it may be that in the world of JavaScript, it'll be typically a slightly more continuous processes.

There'll be a higher percentage of people that have some sort of CI, test automation, continuous deployment processes, maybe as compared to the average in Java. But you're right that, at the same time, the development environments themselves, the debugging tools, those are anywhere from less mature to non-existent, and still being built there.

I also like that you mentioned this notion that sometimes you can't identify an issue in a smaller context. Oftentimes the absence of a security control, the fact, for instance, that you did not validate input or that you did not encrypt some piece of data, that by itself is not a security flaw right there and then.

That's not a vulnerability, but if you did not have that security control throughout the flow of an action, suddenly your system becomes vulnerable. And there's some learnings there.

Sabin: Yeah, specific to instruction, though, there are many mechanisms that have been valuable to the developer. There's certainly the old-school method, where you have a three-day instructional seminar that's on site, that all your enterprise developers are sort of required to take.

There's e-learning courseware that they can take on their own time and sort of measure their progress that way.

The challenge with any instructional mechanism is that without the appropriate tracking and the appropriate metrics, it's very hard for a developer to know what use they've gotten out of it.

And so this is where we're looking to answer that with Jacks, which is, "Because of the way you've coded, and because of the way you've interacted with our courseware, we have an understanding to say that you may have a predisposition to coding these insecurities or these vulnerabilities in your new project, and we want to avoid that." So that's certainly the goal.

Guy: I like the notion that it's almost like continuous education, right?

Sabin: Yes.

Guy: As you do a part of that, right? It's not just that security is a moving target; in general, absorbing information at one time and then remembering it when you might not have had a chance to apply it, when you come across it a few months later, is hard. But if it's constantly there, constantly posing decisions and questions to you, you can evolve it.

Sabin: Just to close that out, one of the things we always ask in talking to our customers that have been using our products for a long time is, "What level of security training is happening currently?"

And then also in our developer outreach, when we're talking to devs or new graduates, the question we ask is, "What kind of security training have you received in college?" And the answer to all of them has been "none." We've actually gotten responses from customers asking if we could create security training for college-level, academia-type usage scenarios.

We're almost 40 years, I would say 50 years, into mainstream software education. And the fact that we still don't have that speaks volumes. So what's happening is a lot of people are learning this on the job.

They're learning this as a result of an incident that has happened, and at that point it's almost, I would say, fatal to have to learn it from that type of scenario. So there should be a better way to do it.

Guy: Precisely. You should be able to preempt them.

Switching to security education itself, you have these tools, you educate developers about security solutions, you see a lot of actual mistakes or a lot of interest in specific ones. Can you share some insight into some of the common, or the most common, mistakes you see or conversations you have?

Sabin: It sort of varies by the nature of the developer. What we find among our clients is they will engage us for a certain type of curriculum, and the curriculum could be varied depending on the client. A client would want introductory security training, defensive programming in Java, how to prevent buffer vulnerabilities in C++, basic stuff that is also a good reminder.

It sort of checks off certain boxes for the security team, to make sure that developers are doing this year after year, reminding them about it. But our focus is to make that material fresh and relevant and to make it contextual to what you're coding.

We find, from our experience in e-learning and the type of questions we're getting from our clients and the type of use cases that we want to serve, that e-learning still has a good way to go to become that much more relevant. And that's what we're taking to heart with our product development on Jacks.

Guy: This is very useful on the types of security education that people actually, explicitly ask for. On the flip side of that, if you look at mistakes through Jacks or through the IDE assist, which mistakes do you see most commonly? What do people not do, or do incorrectly, that exposes them to security problems?

Sabin: One of the things that we always find, especially with Node.js repos, whether they're using Mongoose or the native Mongo driver, is basic SQL injection. A SQL injection in MySQL or an injection in MongoDB still happens, and it's not so much a developer vulnerability that we find, it's just inaccurate use of the product, I would say.

This is part of what I was saying earlier: because these frameworks are coming out so fast, the ability to spend time becoming really good at this is sort of lacking. It's sometimes also a result of documentation just being missing, but if you use the find operator or the where operator in MongoDB and you don't filter input, that's a big scenario at that point. But I think frameworks are getting a little better.
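
For illustration, here's a minimal sketch of the unfiltered MongoDB query pattern described above, in a hypothetical Express route against the native Mongo driver; the route, field names, and connection string are invented for the example:

    // Hypothetical Express + MongoDB handlers illustrating unfiltered query input.
    const express = require('express');
    const { MongoClient } = require('mongodb');

    const app = express();
    const client = new MongoClient('mongodb://localhost:27017');
    const db = client.db('demo');

    // Vulnerable: user input is concatenated into a $where JavaScript clause,
    // so a request like  /users?name=' || '1'=='1  matches every document.
    app.get('/users', async (req, res) => {
      const users = await db.collection('users')
        .find({ $where: `this.name === '${req.query.name}'` })
        .toArray();
      res.json(users);
    });

    // Safer: coerce the input to a plain string and match it as a value,
    // never as query syntax or executable JavaScript.
    app.get('/users-safe', async (req, res) => {
      const users = await db.collection('users')
        .find({ name: { $eq: String(req.query.name) } })
        .toArray();
      res.json(users);
    });

    client.connect().then(() => app.listen(3000));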

So, if we're looking in Node.js at the people that use Express versus hapi.js, there's a big difference. People who were using hapi.js from the very outset, those developers' repos are pretty good to start with, because validation is a big part of hapi.js.
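
As a hedged illustration of that validation-first style, here's a minimal sketch using hapi's route-level validation with Joi; the route and fields are hypothetical:

    // Hypothetical hapi.js route: the payload schema is enforced before the
    // handler runs, so malformed input is rejected with a 400 automatically.
    const Hapi = require('@hapi/hapi');
    const Joi = require('joi');

    const server = Hapi.server({ port: 3000 });

    server.route({
      method: 'POST',
      path: '/signup',
      options: {
        validate: {
          payload: Joi.object({
            email: Joi.string().email().max(254).required(),
            name: Joi.string().max(100).required(),
          }),
        },
      },
      handler: (request) => ({ ok: true, email: request.payload.email }),
    });

    server.start();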

I think that sort of mindset is really good. We encourage developers to go that way. ReDoS is also a big thing, regex denial of service; that always happens, and people just don't know what to do with it. I'm surprised at the number of developers that come out of really strong programs, a lot of these schools, and still say, "Regex is a little complicated for me," so they need help.

Guy: Just to give a little bit of info for those who don't know it, regular expression denial of service, these ReDoS vulnerabilities, are the cases where executing a regular expression takes a very long time. A regular expression, with all its back references and the logic it needs to match, can take a very long time to run, even on a small string, definitely a non-linear amount of time as compared to the length of the string.

So it's fairly easy, if you don't restrict the length of an incoming string, which is something you often omit, to get to the point where that thread spins quite a bit. And especially in JavaScript, which is single threaded, well, not exactly single threaded, but it scales by events and not by threads, you can really fairly easily take down a system, a machine, and introduce denial of service.
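
To make that concrete, a small sketch of a ReDoS-prone pattern in Node.js; the pattern and input are illustrative only:

    // Classic catastrophic backtracking: a nested quantifier over the same
    // characters. The input is tiny, yet matching time grows exponentially
    // with the number of 'a' characters before the final '!'.
    const evilPattern = /^(a+)+$/;
    const input = 'a'.repeat(28) + '!';   // never matches, forces backtracking

    console.time('redos');
    evilPattern.test(input);               // can block the event loop for seconds
    console.timeEnd('redos');

    // Mitigations: cap input length before matching, avoid nested quantifiers,
    // or use a linear-time engine such as the re2 package.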

Sabin: Another thing that we also see is incorrect use of crypto libraries. I'm surprised by the amount of code where people use Math.random to seed a key algorithm or something like that. It's just the wrong thing to do. But they haven't been taught anything else.

Unless they get the type of education that should preempt them from using Math.random, they will continue to use it. So that's one of the popular things that we see as well.
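
A quick sketch of the difference, using Node's built-in crypto module; the token-generation use case here is hypothetical:

    const crypto = require('crypto');

    // Weak: Math.random() is predictable and was never designed for secrets.
    const weakToken = Math.random().toString(36).slice(2);

    // Better: draw from the operating system's CSPRNG via the crypto module.
    const strongToken = crypto.randomBytes(32).toString('hex');

    console.log({ weakToken, strongToken });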

Guy: I think education is an aspect, and tooling or better defaults is an aspect as well, right? In the case of the Mongoose usage, I feel like the big platforms like Angular and React, they come in and they put a lot of emphasis on security. While imperfect, they still have very secure defaults. This notion of secure defaults, of taking on responsibility rather than shrugging it off to say, "This is not my responsibility," just makes it harder for you to make a security mistake.

Many of these packages just don't do that, right? Sometimes they're just small. They might not have the capacity in terms of the amount of investment that's been put into them, might not have the security knowledge. Sometimes the authors perceive them differently; because they're not a platform but rather a component, somebody says, "Yeah, of course I don't filter MySQL injection."

So the relevant functions in Mongoose, it's not meant to be used that way. But that piece of information, about how it should and should not be used, just goes away. And really, I would hope that over time we evolve into some model that is a little bit more defense in depth.

To an extent, with the npm package, or the Maven package, or in general these open-source packages, as you use more and more libraries, you chain them together. Because these libraries are developed out of context with each other, they're just developed independently, there's no guarantee that the use of those packages is going to be done correctly from a security perspective.

The only way to help assure some of that security is to build it in, to actually have defense in depth, to have every one of these components enforce its own security restrictions. Even if it means that you've gone through seven of the same checks, you eventually have a shot at not letting an attack through.
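
As a rough sketch of that defense-in-depth idea at the package boundary, here's an entirely hypothetical library helper that enforces its own checks rather than trusting callers to have validated or encoded the input upstream:

    // Hypothetical helper: validates and encodes its own input, even if the
    // caller claims to have sanitized it already.
    function setDisplayName(user, name) {
      if (typeof name !== 'string' || name.length === 0 || name.length > 100) {
        throw new TypeError('name must be a non-empty string of at most 100 characters');
      }
      // Encode before storing, so downstream rendering stays safe even when an
      // upstream component skipped its own sanitization step.
      user.displayName = name.replace(/[<>&"']/g, (c) => `&#${c.charCodeAt(0)};`);
      return user;
    }

    // A repeated check like this is cheap compared to letting one attack through.
    console.log(setDisplayName({}, '<script>alert(1)</script>'));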

Sabin: I think it comes down to what is in the mindset of a developer. I, as a developer, if I'm coding something up, if I'm doing a blog app or a very simple website, I can understand the mindset there to say, "This is not something that's going to be used by a lot of people. Why would I need security?"

And I know the way that I would think about something like this 15 years ago has changed a lot from when I'm doing that now. So anything you write should be thought about.

You should expect that there are bad people out there that would want to expose things, or maybe an incorrect usage of your application completely, not because it was nefarious intent, but because it was just there.

So I think that mindset is changing.

That's a good thing for developers to understand, that anything you write, anything you put out there, anything you deploy, no matter what it is, from a simple non-profit charity website to something running payment transactions, the vectors are still pretty much the same and the threat models are still pretty much the same. So that is an important thing to keep in mind for fresh, young developers, I would say.

Guy: Yeah, and I think the other aspect of security would be transparency. It's about the fact that you should be declarative about what security aspects you are and are not taking on. This ties back both to the earlier conversation we had around how there are some security threats you can't address in a repository.

Then we continue this, I don't know if it's a different granularity or not, but not about a repository, rather about a package, each of these components. You know, it makes sense: sometimes your package might be 50 lines of code, and making it enforce security restrictions would reduce its functionality as well as make it 200 lines of code. That starts being a little bit cumbersome, but maybe the right way to do it would be to make these packages declare whether or not they enforce security.

Just by that sheer statement you can say, "Well, can I now ask a question to say, in this flow, am I calling a package that does or does not do security controls?" Or, you know, security control is probably also not a black and white flag, but just to be declarative about what security constraints you're proposing or you are taking on, and which security constraints you are not.

Sabin: To that extent, there may be a bit of a struggle. I know organizations like OWASP try to establish, across all applications, "These are the common attacks, or these are the common vectors or vulnerabilities." And I think that's good to understand, because we're still seeing that happen. The OWASP Top 10 from five or eight years back is still relevant today, even though it's a different environment for applications.

I wonder, though, if in setting up those common languages, a common declarative presence for your package, we spend so much time standardizing something that we forget about the different new cases for each. That's something that I just have to think through. I don't necessarily think it's an easy problem to solve, but I think it's relevant. I just don't know the path to get there.

Guy: When you talk about declaring something, then you want to declare it in a non-custom fashion. I want to say, "I do not allow invalid inputs through, I have input validation." I might have bugs, maybe I have vulnerabilities there, but at least I have made the attempt to sanitize input, and I use some form of standard to indicate that I did.

Then maybe later, a tool like Jacks could go on and say, "Well, you have this sort of chain of actions here that you've performed, none of which claims to be doing input sanitization." I mean, Jacks has the sophistication to try to deduce that itself, through the language assessment. But it would be nice to layer in some declaration over time.

A big part of what we struggle with a little bit around security in the development community is that it's all sticks and no carrots. There's no way to celebrate success.

If you could somehow stamp your package to say, "This one has good security controls," clearly you need to implement them, but assuming people don't abuse those flags, it's nice to say a good quality package has these, like, seven security things that it's doing. And you would aspire, as a developer, to have all of those flags. And if you did so, then you'd have some thumbs-up on your repo and on your package, and people can recommend or sort of applaud you for it. You could get a little bit of that value.
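
Purely as a thought experiment, and loudly hypothetical since no such standard exists, that kind of declaration might look like a package.json fragment along these lines; the field names are invented:

    {
      "name": "example-markdown-parser",
      "version": "1.0.0",
      "security": {
        "validatesInput": true,
        "encodesOutput": true,
        "expectsTrustedInputOnly": false
      }
    }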

Sabin: I think the rating business is good. It'll certainly help. It'll certainly help when you relate that to another package, because those are where those metrics really make the most sense.

If I want to choose this markdown parser or another one, if I look at those ratings, now my decision process is a lot clearer. I think that really, certainly helps. What's going to be challenging is defining the first few packages, I get it.

But I certainly think this is where the community can really help. Like if that is going to be a value, if the community certainly feels that, we should make that happen by all means.

We should all get together to figure out a common protocol so the community can understand what they should be using. And then we, in the business that we're doing, can make that work.

I think that's certainly of value.

Guy: Definitely. And I think we need to keep shouting that from the rooftops and keep getting people to engage and care about security, but try to find these actionable next steps that you'd consider doing.

Sabin: It's an interesting concept as well, crowdsourcing security. Because if that were easy to do, we wouldn't need to exist. And the challenge that we have is that security has always been an afterthought.

And for firms like us, in what we do, it's sort of our responsibility to take that on, like you said, shouting it from the rooftops, being that leader. So at the point that the community understands it, it becomes a more active engagement. But it's a struggle.

Guy: It is, always is, never going to end, but hopefully improves.

Sabin: Yes, certainly.

Guy: I think we're about out of time. Before we part, I'd like to ask you maybe one tidbit of a question, just from your experience and what you're seeing. If you're talking to a development team and you have one security aspect or one thing that you think they should apply to make their system more secure, what's your favorite? What's the current thing that you're championing the most?

Sabin: There are a lot of answers I can give to that. The official answer would be to use jacks.codiscope.com. It's a great tool that would help you do that. But there's something that I've sort of indoctrinated among the teams at Codiscope, which is building security in, and that touches different parts of your development process.

One where I've seen the most impact is at the design step. If you have a security-designated role, or somebody who can take that on and champion it and make it part of the design before any code gets written, that is already part of the mindset.

That is something that's sitting there when devs are coding. It's always in the back of their mind, but it sits there, and that has a great impact in the way that I've seen the developers on my teams code.

I would say, as part of your design, make a security review step an explicit part of that, and you will see gains from it. That's just coming from personal experience. I mean, there are certainly more answers to that, but it's something I have seen have more impact than any other thing I've seen before.

Guy: I fully agree, it'll be sort of a high-impact step to just acknowledge that security is a core component of some of that process, of your flow and of that design. Thanks a lot, Sabin, for coming.

For those who are listening, you definitely should go and check out Jacks. It's jacks.codiscope.com, and it would probably tell you a few things about your JavaScript code that you may not know and you definitely should, maybe about how you're using Mongoose, maybe something else. So thanks again, Sabin, for coming on board, and good luck.

Sabin: Absolutely, this is a great service you're doing. I think the more we can benefit the community, things like this will really help. So thanks a lot, Guy.