The Secure Developer
Ep. #26, Security Education with Jim Manico
about the episode

In episode 26 of The Secure Developer, Guy is joined by Jim Manico, founder of Manicode Security, to discuss insights from his long career as a security educator, and to explore the importance of developer training in application security.

Jim Manico is the founder of Manicode Security. As a seasoned educator in security, Jim teaches software developers how to write secure code, and has provided developer training for SANS and WhiteHat Security among others. He is also an OWASP volunteer.

transcript

Guy Podjarny: Hello, everybody. Welcome back to The Secure Developer. Today we have an awesome guest, one of the-- Maybe the most well-known figure in the world of application security, or definitely one of the more noise-making ones in it. This is Jim Manico. Welcome Jim, to the show.

Jim Manico: Thanks so much for having me on your show, Guy. It's great to be here.

Guy: Cool. We've got a lot of great things to talk about here, but for the three people in the audience who might not know who you are, Jim, can you tell us a little bit about who you are, what you do, and maybe a little bit about how you got into this world of security and AppSec?

Jim: Absolutely. I am a security educator. I travel around the world and teach software developers to write secure code with a team of different trainers as part of my company. I've been a developer since I was a kid, with 30 years of writing code. I started as a Commodore 64 assembly developer and I've been coding ever since. I was brought into security by Stephen Northcutt, who is a fellow resident on the island of Hawaii where I live. Stephen, I think 20 years ago, said, "Jim. Anyone could be a developer. You need to be a developer who studies security, and it will be a great benefit to your career." I listened to him, and I'm grateful that Stephen Northcutt dragged me into the security industry, and now I'm doing secure coding for a living and I love doing it.

Guy: That's quite insightful. Also 20 years ago, that was some forward thinking there--

Jim: Absolutely.

Guy: To see this coming. Today it's a slightly more well-known fact that security is key.

Jim: Absolutely.

Guy: You've lived through this and evolved with it, and you've been doing security education for a good while now. Has that always been the case? How long has it been security education versus development, and what was the tipping point going from doing the coding to doing the training?

Jim: I started my firm about four years ago and I've been doing 100% developer training for a little more than-- For almost five years. In other jobs I've had over the last 10 years, I used to work for WhiteHat, I did training for them. I used to work for SANS, I did training for them. It was always a part of my job. This is the first time in my life where training is 100% of my job. I was in that classroom over 100 days last year, and I love it.

It's not just teaching, it's research. It's studying, it's doing sample coding, it's reading other people's research. It's participating in the conversation of application security and trying to contribute something, it's working at OWASP as a volunteer helping with standards. It's a whole collection of activities around being able to be a good educator, and I just love doing it.

This is the greatest puzzle of software development and it's not just my job, it's my passion. I love doing application security. It is fun.

Guy: That's awesome. We need more of that knowledge disseminated, and the sooner the better as well. It's all very important gospel. Let's dig into that. We're talking security education, developer security, and whatever that narrative is of it being a good career choice these days. Where do you think we stand today? You go around, you talk to these developers, do people know security? What do you feel is the state of affairs around developers' knowledge and maybe adoption of security?

Jim: Let me take a step back. It's key that companies get how important this is.

When I was brought in to do training, say, 10 years ago it was a quirky thing. It was something to do on the side because they had some extra budget lying around, or because compliance told them to do it so they did it and moved on with their life.

Today, training is something that I have to take seriously. I've always taken it seriously, but ten years ago people took the training and didn't think about it again for a year--

Guy: They didn't care about the results, they wanted to check the box that they've done the training.

Jim: Today, 10 years later where every little slide you talk about is going to affect their policy, it's a whole different level of responsibility. I've had people call me and say "We just earmarked 30 million so we can turn our entire infrastructure to internal HTTPS and stronger transport security inside of our network." When I first heard that, that was about three years ago, my jaw hit the floor. Like, "We are entering a new era of: everything you say must be more precise and taken to a new level of rigor because of how much people care about this topic now."

This is the golden era of application security for me, because we have a mature toolset for assessments, we have good books and literature on assessment, and we have a plethora of intelligent people thinking about building securely. It's not a quirk anymore. It's now a core part of development, at least among the teams that I interact with.

Guy: That's awesome. I love those comments in general about adoption of security, specifically HTTPS is one of the big wins of the world of security.

Jim: I agree.

Guy: I know it is very much carrot and stick. On one hand, making your rank higher on Google and things like that, advocating for security on the outside, and making it easier with things like Let's Encrypt. Then alongside all of those, having the browser start marking you as not secure.

Jim: Exactly.

Guy: We've seen this shift a little bit, which I relate to, and maybe there are two trends. Maybe tell me if you agree with this, there are two trends driving this. On one hand, maybe once again the stick or the external one, which is breaches. Just more and more security problems that occur, and the implications of not doing it right, making it a big deal. On the flip side, what you have is this accelerated development. This "Everything is becoming software." Your clouds are becoming software, your servers, your network, your infrastructure. Everything is becoming code, and the people writing that code need to secure it. It's almost like the scope of "What is an application? What is application security?" has grown. Do you feel those? Do you see those two narratives, do they make sense to you at all?

Jim: It absolutely does. Let me start by saying this question is important: "What's motivating people to do AppSec?" It's not something I think about a lot. I'm usually worrying about, "How can I get this SDLC program rigorously adopted within the organization?" Bear with me on this: yes, the breaches come up and they're big. They're getting bigger, they're louder, they're more damaging to society.

Guy: Yep.

Jim: But all that breaches are to me is that momentary reminder, "Shit. Security matters a lot." Then it goes away, and we hear it in the news. It's important, those reminders are important. It helps me justify budget, it helps me justify time, it helps explain to the board and the C-level staff why application security matters. We often have to rethink everything about how we write software. That's helpful, but it doesn't change the culture. It doesn't actually fix the SDLC program. It's the stone in the pond to get the process started, or continued, or renewed in some organization. It's important, but it's a small piece of the puzzle.

The next one you were saying is we see everything moving to software, and again, this is critical in that all of infrastructure has changed. Suddenly developers or developer-like people are directly involved in infrastructure. I see in some people's top 10 lists that cloud configuration is one of their most difficult issues to get their hands around from a security point of view.

Guy: Right.

Jim: These two things are critical. But I'm going to say that again.

Breaches are the stone in the pond to give us a reminder that security matters at different parts of our company's evolution, and everything moving to software is yet another reminder of how important software is. I'm going to say something controversial that a lot of people don't agree with. I think the move to the cloud, in my world, is the same old same old. No one agrees with me here, I'm on my own here. It's more code.

My form that collects data that goes to a database, or my report or my advanced business logic financial flow, that code is all mostly the same. I have the same vulnerabilities to struggle with there, yes, as I attach this code up to the cloud I have all different kinds of ways to hook my code and application to cloud services. I agree with that. I've been slinging code for 30 years, it's all the same old code to me is what I'm saying, but it's the same old level of importance. It's the same old, "We have to take this seriously." It's the same old, "We need to train our workforce and have assessment tooling in place, and have C-level staff supporting these efforts, and take that product manager who doesn't drive security but has control of all of my requirements to take them to the roof and drop them off the roof."

Sorry about that, I'm Sicilian. We do that stuff. Get the product manager out of our way. All those, what I think of as core problems in AppSec delivery, Guy that's all the same. What I see as different is the acceleration of how much people are taking this seriously. That's hard.

Guy: The acceleration, maybe, is the key word. I'm right there with you, in that the fundamental problems are things like input validation. There are some core elements that we're still not doing, and some of the same premises. I do think we've enlarged the scope a little bit. It used to be that AppSec didn't cover your storage, but today maybe your cloud configuration is indeed an aspect of your code. And beyond that, maybe it doesn't impact security directly, but doesn't it impact the development pace? By doing so, it kills the notion of a gate, it kills the notion of an external audit.

Jim: That's the point that is critical here, is that the general attack surface of all software has grown dramatically in 10 years. Infrastructure in terms of how we host our software, infrastructure of the world, interconnectivity of everything. Absolutely. The need to get software security right has grown dramatically in an immeasurable way in the last five or 10 years. The fact that so much has moved to the cloud and our code is now infrastructure, that's a critical part of that mammoth growth of attack surface of software. The drumbeats getting louder, the importance of getting this right is getting louder. The frameworks and the kinds of ways to deploy to the cloud, the frameworks we're using, all that is expanding as well, which makes it a lot more difficult to get our hands around application security. But I get your point, and I'm with you, Guy. Attack surface growth.

Guy: I love the guidelines. The bottom line of it, merging the perspectives, is to say, "Software security has remained the same, but what software is and the pace of software have changed." We still need those same mechanisms, but the context of what's at stake and maybe the complexity of how to get it done is the thing that's changed, because software has grown.

Jim: I'm going to steal that quote from you and say it in conferences like I thought of it, so I can sound smart. That was awesome.

Guy: That's what we do. That's the purpose of podcasts. That's the whole intent. So, how do we do it? Maybe let's get down a little bit from the "OK fine, it matters. This is important, it's growing in scope, it's growing in importance." Budget-wise and all that, maybe organizations are getting it a little bit more than before, and they're willing to drop their $30 million on HTTPS initiatives. How do we get it embedded? You come in and talk to a shop, what would you say are the core premises that need to succeed for people to indeed get AppSec right?

Jim: From the beginning of the SDLC I like requirements and training. I like training not just because I do it for a living; I recommend training from my competitors as well, just in general. I want an educated workforce, even just on the basics of application security. That's number one. Number two, I want to have a clear definition of security requirements, so we're all on the same page about what this thing called application security really is. And number three is having a security champion. If you don't have an AppSec expert directly embedded with your team, your likelihood of having application security gets low really quick.

So, three things. Security requirements, number one, a trained workforce and knowing the basics of secure coding and application security, and having a security champion that developers have access to that can lead the intellectual part of what it means to write secure software. Those are usually three of the first things that I'm looking for, and I'll even give you a fourth. Alongside of that I usually want to ramp up basic assessment infrastructure early on if that's not there already. These days it usually is there in some way already, but there's my three and a half.

Guy: OK, cool. Let's pick them apart a little bit. The requirements bit, how do you handle that? What do good security requirements look like at a high level, and maybe more specifically, how do you see these security requirements manifesting in an agile world? You're at a place where it's not a PRD that you write and then a year down the road some software gets delivered. How do you do security requirements? Give us some nuggets and some best practices there.

Jim: From the open source world, I want to avoid doing something like using the OWASP Top 10 for my requirements. This is an awareness document, it's meant to do teaching and education to get people started in web security. For that matter I want to keep away from the OWASP Proactive Controls, which is another awareness list. The standard I start with is the ASVS. This is the OWASP Application Security Verification Standard, and the 4.0 release is coming out in a matter of weeks. This is 200-plus requirements. As an early stage, I want to take all the main stakeholders, the security pro, the product manager, some of the lead developers, and go through the ASVS and fork it completely for their company.

Sometimes that's where we stop; sometimes we fork the standard and nobody uses it, but we've gone through a training process to walk through every single requirement that we need to care about with the technical leads. That's the weakest maturity of rolling out requirements. At the high level, we have requirements rolled out and we've translated all those requirements into where they exist in the framework, where we need to do it ourselves manually, where we have third party tools to help us, or where you need to go talk to Bob to get help on that because we already have our own standards. When you're taking the requirements, breaking them apart, and getting into the details of how you as a company are going to address them in that forking process, that's when requirements are the most useful. But again, let me go back to the beginning. I've seen people roll out requirements, never read them, and still have that process be helpful. Because I was able to-- Or the tech leads were able to influence the lead developers for four or five hours just talking about what's important for security, and even that is a useful process. I dare say, no matter how you roll them out it's going to be helpful in some way.
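As a rough illustration of the forked-requirements mapping Jim describes, here is a minimal sketch in TypeScript. The requirement IDs, owners, and coverage values are hypothetical placeholders, not content from the ASVS itself; the point is recording, per requirement, whether the framework covers it, a tool covers it, or a named person owns it.

```typescript
// Minimal sketch of a forked requirements map, as discussed above.
// Requirement IDs, summaries, and owners are illustrative only.

type Coverage =
  | { kind: "framework"; detail: string }   // handled by the framework out of the box
  | { kind: "tool"; detail: string }        // handled by a third-party tool
  | { kind: "manual"; owner: string }       // we implement and review it ourselves
  | { kind: "ask"; owner: string };         // "go talk to Bob" style internal standard

interface Requirement {
  id: string;       // e.g. an ASVS section/item identifier (placeholder values below)
  summary: string;
  coverage: Coverage;
}

const requirements: Requirement[] = [
  { id: "V2.1.1", summary: "Password length and complexity rules",
    coverage: { kind: "framework", detail: "identity library defaults" } },
  { id: "V5.3.4", summary: "Parameterized queries for all database access",
    coverage: { kind: "tool", detail: "static analysis rule set" } },
  { id: "V6.2.1", summary: "Approved key storage for cryptographic material",
    coverage: { kind: "ask", owner: "Bob (platform security)" } },
];

// The gaps: anything not already covered by framework or tooling is where
// training and review effort should be focused.
const gaps = requirements.filter(r => r.coverage.kind === "manual" || r.coverage.kind === "ask");
console.log(`Requirements needing targeted attention: ${gaps.length}`);
for (const r of gaps) console.log(` - ${r.id}: ${r.summary}`);
```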

Guy: Challenging though, a little bit. I'm totally with you, by the way. You're raising awareness and bringing practicality to it, bringing some "What does it mean to be secure?" Those concrete elements are spot on, immediate value there. You talk to a lot of organizations, you train them and you see how they operate. What's an ideal scenario in an agile team? Like a team that works in sprint environments. How have you seen security requirements embed there? Is this an area that we've developed? Are there good examples? Because they're going to have the requirements for whatever the product is, but how does it get to the dev? Does it make it into the sprint feature plans? How do you see that happening?

Jim: Sure. Before you start sprinting, the thing about agile is that agile is a mix of discipline and fast movement. A lot of people will take the discipline part of agile and throw it out the window, and move real fast and change requirements as they go. In that scenario, you're screwed. You're not really doing agile, you're just moving fast in a clumsy way. The point is, let's take the pieces of agile that are disciplined and consider those for a moment. I want to have a workforce well trained on the framework and platform that we're using, and part of knowing that framework and platform is understanding the security controls of using that framework.

Before you start sprinting at high speed, if you've mapped all your requirements to your environment, made it clear which of those requirements are handled by the framework, and have a trained workforce who knows how to use the framework properly, now we're agile. Now I can move extremely fast, and I have basic static analysis or a basic assessment that's checking my work as I go, fast and furious on every check in. Now I'm starting to do AppSec. It's that pre-step of mapping your requirements to responsibilities. "Do you have those libraries available? Do you have the authentication and access control methodologies established already? Do you have your key storage for cryptography dialed in already?" If all these pieces are missing and you're in an agile world, I claim you're not agile. You're not AppSec agile. You're running fast, but all these basic components of secure coding are not in your environment yet, and I can't help you. You're at the beginning.

Suppose you do have an authentication service and you have access control best practices really well established. Your own homegrown permission-based system that's real detailed. It's great, and you've gone through basic training. You have subject matter experts. You have a great assessment DevOps pipeline dialed in. When people think DevOps, they're mostly thinking "I have a pipeline of tech. Great." If you have those pieces in play, then the requirements, the ones that you're not addressing, become helpful. I know these are the 20 requirements I'm not addressing, so I can target education just to the areas of ASVS where my framework doesn't handle it and where my subject matter experts are not into that. I can limit the focus of what more I need to do to spread the word on those requirements. Again, if you're actively using requirements in a mature DevOps agile world, using them as a core study for responsibilities, and you have those pieces of discipline in place, I think they are really useful. By themselves? To your point, a lot of times in agile environments they get ASVS into place, they read it, then they let it go and just do what they normally do. That's not uncommon, to your point, and that's not helpful.

Guy: That doesn't get you as far. I love the analogy of it. The picture that comes to my mind is that all that infrastructure we talked about is the same as having Kubernetes in place, or having a monitoring system, having your build system, and having chosen React or whatever JavaScript framework it is. All those decisions, they are requirements, and they are within the context of agile. Even the disciplined agile, but they're infrastructure. You need to build security infrastructure just like you've built your DevOps infrastructure. That includes the tooling, but it also includes the competencies, the technological requirements, the equivalents of uptime and time from build to production. The equivalents of those in security.

Even that ties in perfectly to that third element, before the three and a half, we'll get to the half in a sec. But the third element there, which is the AppSec competency. Because the other thing that you say in agile is the team is to be a full stack team. Like, we don't do a front end team and a back end team. If you're doing agile in a proper fashion you have product ownership and you have some scope of that and of the system that you control, and the AppSec competency needs to be within the agile team to get that done. Digging into maybe that third bit about the AppSec competency, which models do you see that work? Is it availability? Are those AppSec people part of a different group, and are they somehow affiliated and made available to the dev teams building it? What do you recommend, what do you think is right, around having that AppSec champion be a member of the development team? Is it a full time job, is it a competency, what do you see there that you think is a good model for success?

Jim: I'm going to answer this a little awkwardly. We had a lot of security problems on one of the dev teams that I was working with, and we solved the problem by removing two developers. Not by adding anything, but by taking the two beginners who were writing a huge number of vulns every day and moving them onto a different project; that solved our problem. That got us to a better place statistically in terms of risk, and the point is I want senior developers on my team. If I have a security critical product, the first thing I'm going to do is get beginners out of the way. This is a horrible thing to say.

Nobody wants to hear this, but you need a developer, in my opinion, who's been writing code for three to five years before we even start talking application security in most of the teams I work with.

Step one is having a senior team that already understands the rigors of process, engineers and not just code hackers. Once I have a team of engineers I can institute all the above. Process, requirements, infrastructure for testing, all the pieces we need for good application security. Training becomes easier. Training does not become this difficult thing. With developers who've been around the block for a while, I should be able to explain SQL injection once in five minutes and never bring it up again, and it's never an issue again. I shouldn't have to reteach that team SQL injection every six months. Something is wrong there.

Going through teaching, having a subject matter expert--you asked about this earlier. The closer that AppSec expert is to being embedded and integrated with the team, the better. Over at Google, some of my heroes are Michele Spagnuolo and Lukas Weichselbaum. These are experts in content security policy, but they're not standards body members tweaking the little variables in the standard. They're embedded inside of Google's teams implementing CSP. That's the kind of AppSec expert that is the most valuable. Absolutely.

Those are my two points, have an embedded AppSec expert and a senior team of developers, and be careful of the damage a novice developer can do to security on a project.

Guy: It's an interesting perspective. The conversation always goes to the max expertise, that embedded AppSec expert, like "There's base level expertise, who's your high level expertise?" You're saying, "The core of your problem is more about the minimum level of expertise. What's the minimum level of competency?" Which again, I love drawing analogies to general software practices. This goes straight to quality, and you can do things like, if you have novices on the team, which eventually you want to have. All those experts have started somewhere. If you have novices on the team, maybe you want to constrain them or have them focus on being framework users. Or basically keep them to areas where they have less ability to screw up from a security perspective, by working within some sandbox until they develop enough maturity to do secure coding.

Jim: Guy, you're a nice person and you're describing this from a nice person point of view. I'm not. I'm way more militant. I want novice developers not to be seen. I want them to go away, and I don't mean that cruelly. It's just that every single time--the point is when you're a novice developer joining a senior team, you should be paying me. You're lucky. Because you're getting more training and you're getting personal attention from senior folks that you're taking away from doing their work, and you're not producing for me that much. It's harsh, and I don't mean this in a cruel way.

I'd just rather have two very senior developers than 20 novices. That's what I'm trying to say.

I see a lot of my customers who've experimented with this, where they took training and they saw from three different teams that one person was a rock star. One of my customers took these three different rock stars, put them in a team together to do secure coding, and these folks were able to deliver a level of secure software that nobody else in the company could pull off. What they did was they built a large project, they had three medium findings after months of work, and this cracked open the mind of the whole company. Because everybody else was trained, everybody else has tooling in place, everyone else had the same resources. But the team of senior developers, three of them who got it, ended up delivering better secure software than everybody else.

It brings me back to a core principle of being cautious of novice developers. They tend to cost more than the benefits of having them on teams where rigorous security engineering is important. Guy, this is a horribly unpopular opinion, but I stand by it.

I've been rolling around the software industry for a long time, and every time I see a small but senior team, they're able to accomplish magic. Including application security magic that I haven't seen in other configurations. That's my answer to this. Senior, educated people who understand that we're not just slinging code, that we're doing engineering. That's where I think the win is.

Guy: You're entitled to the view. I think that we're not that different in that perspective, maybe I'm giving it a nice spin. I wouldn't be offended by being called a nice person, but those other three people who were building those secure coding platforms, they didn't let go of all the novice players. What they've done is have them build systems that use the libraries and the access system in the sandbox that those three have established for them. We might be going in circles here--

Jim: No, I'm with you.

Guy: Point well taken. Maturity of the engineers, that high quality, and I dare say that also works well as a quality analogy. A lot of those choices, you wouldn't take your novice developer to choose your JavaScript framework, or to build it.

Jim: I agree. For a novice developer you do want to box him into a specific set of duties that we can monitor and control and review. But I'm with you.

Guy: Give them a chance, give some of them a chance to become--

Jim: You're a nice guy.

Guy: The few that survive. Give a light at the end of the tunnel.

Jim: I want you to give them a chance and once you've grown them into mature engineers, I want to steal them all from you.

Guy: Let's jump from the people bits to maybe that infrastructure, that tooling. We're painting a really good picture here of "What do you need to do to get this done?" What is the tooling stack that you need today to do this stuff successfully? What do you see in that infrastructure?

Jim: Static analysis, dynamic analysis and third party library analysis. Those are the three critical assessment tools that every single team should be running as frequently as possible. I see third party analysis as almost a form of static analysis, it's specialized, and I don't think the static analysis tools that we traditionally look at do third party analysis as well as they could. The industry has split those up pretty well. We have dedicated static analysis running every single time a developer checks in code, or builds code, or deploys code, as frequently as we can. That's awkward, because it's rare that static analysis will run quickly, so we have to play with tuning within a DevOps environment. Usually that means tuning static analysis to run fast in the pipeline, and then running it outside of the automation environment, slow, on a regular basis. That's almost a solved issue now, we've done a good job with rolling those products out across the industry.
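As a rough illustration of the two-speed model Jim describes, here is a minimal sketch in TypeScript; the `sast-scan` command line tool and its flags are placeholders for whichever static analysis product a team actually uses, not a real CLI.

```typescript
// Sketch of the two-speed static analysis model described above.
// "sast-scan" is a placeholder CLI, not a real tool; its flags are illustrative.
import { execSync } from "node:child_process";

const mode = process.env.SCAN_MODE ?? "incremental"; // set by the CI job

if (mode === "incremental") {
  // Per check-in: scan only the files changed in this commit, tuned for speed.
  const changed = execSync("git diff --name-only HEAD~1 HEAD", { encoding: "utf8" })
    .split("\n")
    .filter(f => /\.(ts|js|java)$/.test(f));
  if (changed.length > 0) {
    execSync(`sast-scan --fast --files ${changed.join(" ")}`, { stdio: "inherit" });
  }
} else {
  // Out of band (e.g. nightly): full-depth scan of the whole code base.
  execSync("sast-scan --full --project .", { stdio: "inherit" });
}
```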

Guy: But on that first practice, the highlight for SAST? If you do static analysis, SAST, then you do one variant of that which runs per check in, incremental, lower comprehensiveness, if you will--

Jim: Yep.

Guy: Scan, but that is able to scan with sufficient accuracy and speed to be reasonable for the dev flow to use it. Then you do an out of band static analysis that's more comprehensive. That would be the model?

Jim: That's the standard model I see in most places that are taking this seriously, where assessment matters to them for their program. That's usually the dual way that I see static analysis rolled out in the modern world, and to miss either of those is bad. If you only do it once a month in full mode, you lose the daily checking you get when you use it every day. If you're only using it daily, you're missing the depth of the tool, because it often takes more than 10 minutes for a good static analysis tool to run across a complex code base. You need both. But this is solved, I don't think it's interesting anymore. It's solved. If you're not doing static analysis, go do it both those ways or you're barely beginning. Let's look at the other categories. Dynamic analysis next, right?

Guy: Yep.

Jim: In my world the best tools in dynamic are the cheapest ones. I tend to use ZAP from the open source world. It's got a lot of decent capabilities. I can go to the developers and say, "I want this feature," and depending on my ability to support them I can get it. Tools like Burp have become popular as well. Their dynamic scanning engine will go head to head against any of the big players. The work is getting these dialed into automation so I can run them not once a month with a consultant, but every single day in some fashion. The work of getting dynamic added into your automation environment is a little bit trickier because of the nature of how DAST works. For static analysis, it's going to touch code and I'm done. For dynamic I need a whole dev infrastructure, it's more complicated and more work to get dynamic rolled out, but it's part of the core.
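A minimal sketch of wiring a dynamic scan into automation, assuming Docker and the OWASP ZAP baseline scan image are available; the image name, tag, and staging URL here are assumptions for illustration, and a real pipeline would likely tune which findings fail the build.

```typescript
// Sketch: run a ZAP baseline scan against a deployed environment from CI.
// The image name/tag and the staging URL are assumptions for illustration.
import { execSync } from "node:child_process";

const target = process.env.STAGING_URL ?? "https://staging.example.internal";

try {
  // The baseline scan spiders the target and reports passively detected issues.
  execSync(
    `docker run --rm -t ghcr.io/zaproxy/zaproxy:stable zap-baseline.py -t ${target}`,
    { stdio: "inherit" }
  );
} catch {
  // A non-zero exit code means findings (or a scan failure); fail the pipeline.
  console.error("Dynamic scan reported findings; failing the build.");
  process.exit(1);
}
```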

Where dynamic is screwing up though, even though the tooling of dynamic is getting better, the effectiveness is going down dramatically. Because it's getting harder to look at all the JavaScript frameworks and it's getting harder to look at APIs and the MEAN stack, and that's the core of development. The web app has migrated to these thick JavaScript applications, React, Angular and Vue, that talk to backend APIs. These are two things that traditionally DAST has not done a great job at. There is an area for innovation when it comes to JavaScript static analysis, even to this day.

Guy, we should start another company on the side. Get a few good researchers and do a dedicated JavaScript security company--JavaScript static analysis, JavaScript framework analysis, JavaScript education. That's a gap in the industry right now. But that's my opinion on DAST in general.

Guy: OK. Perfect. Great practices. I don't think there's anything for me to repeat there. That was well articulated. SAST, DAST, and then the third bit, open source analysis.

Jim: Guy, I'm not saying this because I'm on your podcast, but I believe that third party security analysis in my world is the number one issue. More important than SQL injection now, because of how problematic it is to deal with this and because of how poor the ecosystem is. In particular, in my world Maven and NPM are the big code repositories of the Java and JavaScript ecosystem. These are not run with security in mind at all. In my opinion, it is a complete bleep-show how bad it is and we desperately need tooling that can go deep into the .jar world and deep into obscure things. Like, "I need to know if this third party React module is decent for me to use, and I need third party security library tooling, and I don't think there's any option here. This needs to run as frequently as possible."

What I like about third party analysis is I can run it in under a minute, or within a couple of minutes. It's built from the ground up to work in a DevOps environment. If you're not using something to do third party security analysis 100 times a day, you're negligent at this point. The hard part about this is when people are given awareness about third party libraries and they see, "Oh no, these 170 libraries are out of date from a security perspective," the initial hit to get up to date can sometimes take years. Anyways, those are the three kinds of tooling that every development shop should be using as a bare minimum to assess the security of their applications during development.
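As a minimal example of running third party analysis on every build, here is a sketch using npm's built-in audit as a stand-in; a dedicated product (Snyk, OWASP Dependency-Check, and so on) would slot into the same place, and the severity threshold is an arbitrary choice for illustration.

```typescript
// Sketch: run third-party dependency analysis on every build.
// Uses npm's built-in audit as a stand-in for a dedicated tool.
import { execSync } from "node:child_process";

try {
  // Exits non-zero if any dependency has an advisory at "high" severity or above.
  execSync("npm audit --audit-level=high", { stdio: "inherit" });
} catch {
  console.error("Vulnerable third-party libraries detected; failing the build.");
  process.exit(1);
}
```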

Guy: Awesome. Preaching to the choir, clearly, about that third bit. I do think there's one good practice to share on the backlog, as you take the blinders off and see the disturbing number of issues that you have there. Clearly one aspect of it is to invest in remediation, but the other bit that we've embraced from the performance budget world is to draw a line to not get any worse. You can do whatever, we do some of this at Snyk, but if you wanted you could roll your own. Which is to take a snapshot of where you are, and for starters say, "Put something in the build that would only fail--" or in your pull requests or whatever, "That would only fail if you were introducing another new library that has a vulnerable component, and alongside that introduce something that tracks your dependencies and alerts you on new vulnerable libraries."

The notion being, you roll something out, again the shameless plug here for Snyk could be it, but this concept you can roll out with whatever you want. You want to roll something out that draws the line that says "I shall not get any worse." It's a concept, as I said, that we pulled from the world of performance. In performance, you say, "I want to--" Tim Kadlec popularized that, saying "I want to improve my webpage speed. But for starters, I need to stop making it worse. Why don't I take a measurement, measure your page 10 times, see how long it takes to run, and then put whatever that range is in the build. Every time, run the test, and if it takes more than that amount, fail the build. Don't allow that through." Or say, "I'm only allowed-- How many JavaScript libraries do I have on my page? 17? Keep it to 17. If I add an 18th, you have to take one away." Things like that, to establish from a security perspective and say "I might not be where I want to be, but I'm not going to get any worse." Then you eat away at your security debt over time.
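A minimal sketch of that "don't get any worse" line, assuming the JSON output of `npm audit` and a checked-in baseline file; the file name and structure are made up for illustration. The build fails only when a vulnerability appears that is not already in the baseline, so the existing backlog can be paid down over time.

```typescript
// Sketch of a "don't get worse" security baseline, as described above.
// Assumes `npm audit --json` output; the baseline file name and shape are illustrative.
import { execSync } from "node:child_process";
import { readFileSync } from "node:fs";

// Names of packages with known issues at the time the line was drawn.
const baseline: string[] = JSON.parse(readFileSync("security-baseline.json", "utf8"));

// `npm audit` exits non-zero when it finds issues, so capture its output either way.
let raw = "";
try {
  raw = execSync("npm audit --json", { encoding: "utf8" });
} catch (err: any) {
  raw = err.stdout?.toString() ?? "{}";
}

const report = JSON.parse(raw);
const current: string[] = Object.keys(report.vulnerabilities ?? {});
const newIssues = current.filter(name => !baseline.includes(name));

if (newIssues.length > 0) {
  console.error(`New vulnerable dependencies introduced: ${newIssues.join(", ")}`);
  process.exit(1); // fail the build only when things get worse
}
console.log(`No new issues; ${baseline.length} known items remain in the backlog.`);
```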

Jim: Guy, the best way I can flatter you is to say "I'm going to go and give a talk at a conference or give a talk in training, I'm going to cite that exact concept of how to properly roll out an interstitial process where I can stop the bleeding and at least detect when a new third party library that's insecure is introduced, and still let the old one survive for now, during this interstitial process of rolling out third party tools. This fills a gap in my mind. I love it. I promise you I'm going to talk about it in the future and give you no credit." I'm sorry, I don't have time for that. I'm going to act intelligent, and it's the best way I could flatter you.

Guy: And use Snyk at the end of there, we'll be tied.

Jim: I'll slap some Snyk in there. Guy, I want to say it again, my claim that third party analysis security is the number one issue, I didn't change that to make you smile. That's my honest opinion. Because it's big, it covers every possible vulnerability out there, and it's something that a lot of people are flat out frightened to deal with.

Think of the culture of software development. Once you get everything working, "Don't touch it! It works!" That's been the mentality for decades, and that mentality is destructive when it comes to third party library security. This is why we're talking. This is why we're here.

Guy: Agreed. This has been awesome. Before I let you disappear here into your training black hole, I like to ask every guest that comes on the show if you have one pet peeve or word of advice, or one thing you want to tell a team that is looking to level up their security posture, what's the one thing they should do or they should not do that you want to convey?

Jim: That's a hard one, because I want to answer the question, "What 200 things should you do?" But let's go with one. This is a boring answer, but it's good. Several of my customers have software that's so big, like 400-plus microservices, that they're having trouble getting their hands around it. It's too big. Even with lots of smart architects and all the right resources, nobody understands what's going on because the software is too complex. This is the bane of application security development. The way I've seen people handle that is to put mammoth focus on one easy thing: logging. They log like mad people, they log every single thing across every single service. They get really good visibility into what's happening, so when things go wrong and they see anomalies they can take immediate action.

This is the 10th item in the OWASP top 10, "Do good logging." I tend to be, "Go ahead and log, now let's move on with our lives and talk about more important things." This has become more important in big software. Big software is hard to lock down because of the nature of its complexity. Adding in lots and lots of logging is a relatively simple engineering task, I dare say. It gives us mammoth visibility. I'm here to tell you take your logs seriously. That visibility is crucial to runtime security analysis, and understanding when problems are happening in the real world. Let's log, Guy. Let's log like crazy. Let's log everything. Let's go log wild, that's what I'm saying.
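A small sketch of the kind of structured, correlated logging being advocated here; the field names and the correlation-id convention are illustrative choices, not a prescribed schema.

```typescript
// Sketch of structured, correlated logging across services, as discussed above.
// Field names and the correlation-id convention are illustrative choices.
import { randomUUID } from "node:crypto";

interface LogEntry {
  timestamp: string;
  service: string;
  correlationId: string;  // passed between services so events can be stitched together
  event: string;
  details?: Record<string, unknown>;
}

function log(service: string, correlationId: string, event: string,
             details?: Record<string, unknown>): void {
  const entry: LogEntry = {
    timestamp: new Date().toISOString(),
    service,
    correlationId,
    event,
    details,
  };
  // One JSON object per line keeps logs easy to ship and query centrally.
  console.log(JSON.stringify(entry));
}

// Example: the same request logged by two services under one correlation id.
const correlationId = randomUUID();
log("checkout-api", correlationId, "payment.authorize.requested", { amountCents: 1299 });
log("payment-service", correlationId, "payment.authorize.denied", { reason: "card_declined" });
```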

Guy: Here, you've got a quote. Now I can steal something off you and not give you any credit for it.

Jim: Log wild, baby.

Guy: That's awesome advice. Jim, this has been a pleasure. Thanks a lot for coming on the show.

Jim: Guy, seeing all the success of your firm, it's well deserved. I'm happy for you. Thank you for working so hard to solve such a critical and difficult problem in AppSec. Keep on rocking.

Guy: Cool. Much appreciated. Thanks everybody for tuning in, and I hope you join us for the next one.