The Secure Developer
28 MIN

Ep. #35, Secure Coding in C/C++ with Robert C. Seacord of NCC Group

about the episode

In episode 35 of The Secure Developer, Guy is joined by Robert C. Seacord of NCC Group, who champions the continued practice of secure coding in C and C++, and offers a practical perspective on the advantages of various programming languages in the Agile era.

Robert C. Seacord is the Technical Director at NCC Group. He is a published author and teaches secure coding in C, C++, Java, and C#. He is also a Linux Advisory Board member and expert on the ISO/IEC C Standards Committee.

transcript

Guy Podjarny: Hello, everybody. Welcome back to The Secure Developer, thanks for joining in for another episode. Today we have a great security trainer with us, Robert Seacord. Welcome to the show, Robert.

Robert Seacord: Thanks for having me.

Guy: Robert, before we dig in-- we're going to go a little bit more bare metal here, maybe a little bit more C and C++ programming security and the likes, later in the show. Can you give us a little bit of context about yourself? Who you are, how you got into security, what you do these days?

Robert: Sure. These days, I'm a technical director at NCC Group. I split my time between doing secure coding training, developing secure coding training, research and customer work, doing a lot of security code analysis for various customers, reviewing source code and the like.

How I got into security, I started as a developer for IBM back in '84 and I had a startup company in '91.

I worked for a company called SecureWare down in Atlanta, Georgia and did not do any security work for them whatsoever.

Continuing down my career, I went back to the SEI in 1996, and then in 2003 I just changed tracks completely. I went from working in component-based software engineering and I moved over into the CERT team, originally on the vulnerability handling team.

You can find one or two vulns that I actually handled, then I didn't get a lot of direction while I was there so I wandered off and started writing some books.

I wound up writing Secure Coding in C and C++, and really liked the security field because it gave me the opportunity to get very lost in the weeds of things and not just have to deliver functionality on a schedule and move on to the next project.

Guy: More about security as quality. Would you identify first and foremost as a developer, or more as a security person?

Robert: These days I'm right at the intersection of those two things. That's my sweet spot because as a security person, I'm not the best.

As far as people who are experts at C language, I'm not the best. Most rooms I walk into I am, but when you walk into the C standards meeting I'm the dumb guy.

Guy: The joy of being broad in your specialties, you have to internalize a lot of things, you can't just focus in on one thing.

Let's dig into the meat of your training curriculum these days, you've written a lot and spoken a lot about secure C coding and the likes.

At the risk of condensing a world of knowledge into a few highlights, what would you say are the primary emphases you give developers today, when you come in to try to teach them the core principles of secure development?

And maybe how much, if at all, do you feel like that has changed over time?

Robert: I think that the devil tends to be in the details. Rather than perform a superficial treatment of a variety of topics, I tend to dive deep.

Most notably, the second day of my secure coding in C and C++ training, I tend to talk about integers. Six hours seems like a long time to talk about integers, but it turns out they're very misunderstood.

The reality of C programming and C++ programming is buffer overflows are the biggest issue, both writing outside the bounds of objects and reading outside the bounds of objects.

The way you do that is you add a pointer to an integer and then start dereferencing memory at that address.

If you don't know what value is stored in that integer, you really don't know what that eventual pointer is referencing. You can't have any assurance or confidence that's not an out of bounds read or write.
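
(A minimal sketch of the pattern being described here, for illustration; the buffer size, names, and error handling are assumptions, not from the episode. The unchecked index is exactly the integer whose value you can't vouch for.)

```c
/* Sketch of the out-of-bounds pattern described above. */
#include <stdio.h>

#define BUF_SIZE 16
static char buf[BUF_SIZE];

char read_unchecked(int index) {
    return buf[index];   /* if index < 0 or index >= BUF_SIZE,
                            this is an out-of-bounds read */
}

char read_checked(int index) {
    if (index < 0 || index >= BUF_SIZE) {
        return '\0';     /* reject: no assurance otherwise */
    }
    return buf[index];
}

int main(void) {
    printf("%d\n", read_checked(1000));  /* safely rejected */
    return 0;
}
```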

Guy: How--? That statement would have been true 20 years ago, 30 years ago as well. This has always been the classic mistake.

Do you see that the systems have slightly changed? If nothing else, a lot of systems that used to be written in C or C++ are now written in other languages.

They might be written in Java or C#, or higher level languages where they don't deal with those pointers.

Do you feel that, or am I just being a little bit biased, in that everybody lives in their own world?

Robert: I'm not sure there's a lot less code being written in C.

There was a point at which Java was being explored for the desktop, and eventually I think it was abandoned as not having adequate performance for desktop applications.

Now it feels like it's largely relegated to running server-side software. Plus there's just a world of embedded software. Cars, all sorts of transportation are all written in C.

I've followed the TIOBE index over the years, which tracks language popularity, and it used to be that Java, C, and C++ were all there with 20% plus of the market, but what's happened is that usage has balkanized.

There are more and more different languages, each with a smaller percentage of the market. But at the top, it's still Java and C. And C++ has actually dropped off a bit now.

Last time I looked, I think it was in fourth position.

Guy: I think there was actually some reason to that, because from a programming perspective it might be that C++ was on a path towards more structure, maybe a little bit less low-level control.

But basically that is-- now some of that space is being taken up by these other languages.

Robert: Yeah, I agree with that. I think to a certain extent some C++ people went to Java.

The people who wanted the abstractions and the people who were keen on performance and small footprint, all that stuff, moved more to C and vacated the C++ space a bit.

Guy: This is an interesting observation. It's important indeed to remember that the volume of developers as a whole has also increased.

The number of C and C++ developers probably continued to grow as well, and indeed there are all those brave new embedded worlds.

How much are you feeling this in the context of security, and maybe dealing with Agile development?

Do you feel like in those contexts the world of C development is a little bit less agile, or less driven by these biweekly shipments? Or is it getting the same type of pressures?

Robert: I don't see C being driven so much by Agile development as maybe website development and projects like that.

Agile projects I've been involved with tend to have a lot of problems with security. It doesn't typically seem to fit into the model of quick release cycles.

There's always this short-term push to get functionality out and deployed, and secure coding a lot of times is the antithesis of that. It's a focus on gaining assurance in the code and the functionality you're about to deploy.

Some people I've seen even have trouble expressing security in terms of a backlog that they can even address as part of their release cycle.

There's probably things people are doing to make it more appropriate for security, but to a certain extent, I feel it's not really built into the model.

But then again, who knows? When you go out and you look at real companies and what they're doing in terms of security processes, it's just always alarmingly worse than you can imagine.

You'll see companies who don't have configuration management in place. Real basic things like that.

Guy: Yeah, I understand. I think it's basically-- it's just these different worlds.

In the website development environment, you might be pushed for faster iteration, languages might be higher level and maybe a bit more Agile.

I think we're discussing a slightly different world, which is, like you mentioned, embedded systems. Like in connected cars, quality assurance is much more important.

You can't iterate quite as much, you can't ship a new car every two weeks, and even the update mechanisms are a little bit more controlled there.

But also they're written in languages where more damage can be caused, so we have maybe a slightly higher responsibility to invest in that assurance for everyone.

Robert: When I was at CERT, we kept track of vulnerabilities but we didn't deal with all of them.

We focused on the more critical ones that would affect things like critical infrastructure, and as a result of that, two-thirds of the vulnerabilities we found in the CERT database were related to C and C++ code.

Again, that's because we focused on critical infrastructure. We didn't focus on mom-and-pop websites, in which case it would have been all PHP and cross-site scripting vulnerabilities.

Guy: Yes. It makes sense again, just different super high gravity type surroundings.

Maybe ones where that balance between agility and safety can be taken a little bit differently.

You do a lot of this assessment and reviewing now, so maybe share some bits of what works well for you. Can I start by asking about your favorite reviewing tools when you do some analysis of a C code base?

Robert: The most surprising thing to me is people tend not to use the tools in front of them.

I would say starting with the compiler. We'll talk to organizations that are talking about buying Coverity or buying Fortify or some other high-end analysis tool.

But they haven't set their warning level on their compiler, or they're disabling warnings so they're not seeing critical problems.

My favorite warning is the signed-to-unsigned conversion warning that developers like to turn off, and it turns out that's a really bad idea.

Many of those warnings are identifying real problems and potential vulnerabilities in the code, so I would say just start by using your compilers better.
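
(As an illustration of why that particular warning matters, here is a sketch of the classic defect behind it; the function and its bug are hypothetical, not from the episode. Compiling with something like gcc -Wall -Wextra -Wsign-conversion flags the comparison in the first version.)

```c
/* The signed/unsigned mix-up the warning catches. */
#include <stdio.h>
#include <string.h>

void print_all_but_last(const char *s) {
    /* strlen returns size_t (unsigned). If s is empty,
     * strlen(s) - 1 wraps to SIZE_MAX; i is then converted to
     * unsigned for the comparison, and the loop reads far out
     * of bounds. */
    for (int i = 0; i < strlen(s) - 1; i++) {
        putchar(s[i]);
    }
}

void print_all_but_last_fixed(const char *s) {
    size_t len = strlen(s);
    if (len == 0) {
        return;              /* handle the wrap case explicitly */
    }
    for (size_t i = 0; i < len - 1; i++) {
        putchar(s[i]);
    }
}
```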

Clang and GCC now have a bunch of dynamic analysis tools integrated with the compilers, so there's AddressSanitizer and MemorySanitizer, UndefinedBehaviorSanitizer, and ThreadSanitizer for analyzing parallel execution.

With all those tools, along with a static analysis capability, the compiler is very effective.
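
(A tiny example of the kind of defect these sanitizers catch; the build line is one plausible invocation, not from the episode. AddressSanitizer aborts at the out-of-bounds write with a heap-buffer-overflow report.)

```c
/* Build with, e.g.:  gcc -g -fsanitize=address oob.c && ./a.out
 * AddressSanitizer reports a heap-buffer-overflow at the store
 * below; -fsanitize=undefined and -fsanitize=thread enable the
 * other sanitizers mentioned above. */
#include <stdlib.h>

int main(void) {
    int *a = malloc(8 * sizeof *a);   /* room for indices 0..7 */
    if (a == NULL) {
        return 1;
    }
    a[8] = 42;                        /* one past the end */
    free(a);
    return 0;
}
```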

Guy: How do you know when to be happy with your results? You come across, you look at these-- These are complicated beasts, and you run the review.

How do you feel like you've explored the unknown sufficiently to feel like it's ready to go?

Robert: It's almost about defect detection rates. How many defects are you finding per day or per hour of analysis?

Once that rate declines to a certain point, you get past the point where you're getting a good return from that.

Usually at that point it's a good idea to change strategies, because a lot of times once you go to a different tool, a different approach, suddenly you start to find new classes of defects that you weren't finding with the old approach.

Typically you do what you can until you run out of time.

Guy: Until the reality hits you.

Robert: Yeah. But usually an indicator that you're getting there is when you're not finding things as quickly, when you get to a diminishing return point.

Guy: I assume in some of these systems, you have some form of continuous build, right? You have some form of automated builds that happen.

How do you like to set standards, to know you haven't slipped? Almost the regression test aspect of security.

Do you feel like there are good tools around that? Does it come back again to the compiler warnings and disallowing them?

Robert: For regression tests?

Guy: Basically, to know you're done. You've sat down, somebody hired the top talent of Robert Seacord.

You've come through, you've done analysis that helped them get to a point of higher comfort, and now they don't want to slip.

They don't want to regress, so regression tests in terms of the security quality. Do you set baselines and thresholds?

Robert: For example, NCC Group would do a security analysis of a system including analyzing the source code, and we'll write a report and we'll identify the defects and we'll explain what the problem is, what possible mitigations are and so forth.

That gets moved around, but a lot of times it makes sense to follow that up with some onsite training where we'll come in and talk to the developers and give them the training course, and maybe supplement that with some examples from their own system.

Actual mistakes that they made, and try to up their game as a whole. Because I think what you don't want to do is always rely on the pen testers to find the problems in the code, because it's not necessarily the best approach.

You really want to not code the errors to begin with, because the most effective time to code correctly and securely is while you're writing the code.

Any time you come back to something, a lot of times you're looking at someone else's code. You have to learn that person's code, and sometimes you're looking at your own code, but often you have to relearn your own code because enough time has passed that you're not really familiar with it.

Fixing defects later in the development cycle, it's more likely that you'll fix them incorrectly or introduce additional defects while you're repairing an existing problem.

Again, there are a lot of reasons to try to code securely to begin with.

Guy: I'm going to switch gears a little bit here and maybe talk about people, indeed talk about these dev teams from a team composition perspective. We talked tech, let's talk a little bit about the teams that you teach.

So when you're coming in to do more of a training or when you're interacting with teams to share the results, do you feel like there's any change over the last while?

You've been doing this for a good many years. Do you feel like approaches are different?

Like, is there a higher awareness or appreciation for security? Is it about the same? Do you get push back around "It's not my job" from dev teams?

How do you see the state of the industry amongst some of the customer base you work with?

Robert: I would say that there are some changes. We don't really get the same type of arguments that we got into years ago where we'd call a vendor and say "You have a vulnerability in your code."

They would say, "Prove it." Nowadays they're more willing to accept that at face value. From a teaching perspective, I mentioned this in the intro, but I started out as a developer so I've always maintained a developer focus.

I've had security people come and try to train me, unsuccessfully, because they would say really stupid things that were just impractical, things that we would never do, could never do.

A lot of security people tend to be very dogmatic in their approach without having any firm basis for it.

When I do teach, I'll tell students that security is a quality that you have to achieve.

People sometimes ask me, "Why do people pick C as a programming language?"

One of my answers is that typically security might be fourth or fifth on your list of reasons you pick a language.

The first reason would be "We've got existing software that is already developed for this platform and it's in C or it's in C++ or what have you."

There's an advantage to keeping your code base in the same language.

I've had conversations recently about Frankenstein systems, where they started in C and then someone switched to Java, and then C# and then Rust and then Go, and then you have 12 or 15 or 20 different languages.

Those systems become very brittle and very difficult to maintain. That's the first reason. The second reason might be that's where your expertise is.

If you have a group of expert C developers and you tell them to build the next system in Java, I can guarantee that system will be less secure than the system those developers would have built in the C language.

Then you get to things like performance and eventually security might be fourth or fifth in the list of reasons you would pick a given language.

Guy: I think you're right. I think fundamentally the choice of language is attuned to what you're trying to do. If you built an embedded system in Java, maybe there are some explicit cases where that may make sense.

But more often than not, that's just not as performant, or it's too resource-consumption-heavy to fly, and security has to cope.

You have to build security despite the choice of language, whether that's a helpful or negative thing for you.

Robert: A lot of security advice tends to be overly dogmatic. Just saying something as simple as "Always check bounds" is overly prescriptive, because you can look at many loops in C and C++ code and just prove that they don't have out of bounds read or write.

Why waste cycles securing something where there's no possibility of a defect or error, when you can use those cycles elsewhere to provide real security?

Performance and security always tends to be a trade off to some extent.
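
(A sketch of the kind of loop being described two paragraphs up; the names and size are illustrative. By inspection, i stays in [0, N), so a per-iteration bounds check would spend cycles without adding assurance.)

```c
/* A loop whose bounds can be proven by inspection. */
#include <stddef.h>

#define N 64

/* "int v[static N]" (C99) documents that the caller must pass
 * at least N elements. */
void scale(int v[static N], int factor) {
    for (size_t i = 0; i < N; i++) {
        v[i] *= factor;   /* always in bounds: 0 <= i < N */
    }
}
```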

Guy: I think we're describing an interesting profile here, which is a little bit different than a bunch of the example systems that we oftentimes have on the show.

We're talking about these embedded systems, low-level systems, where performance is material, some of very high importance, maybe oftentimes a physical thing that gets shipped, or something that doesn't get delivered or deployed quite as often.

Therefore, between the sensitivity and maybe just the form factor, they end up doing security in a slightly more thorough fashion.

How do you see the engagement? You come in, people contract NCC Group or they work with you.

Do you feel like before they would bring you in at the end of the process now they bring you in more mid-way? Has there been a change around when that type of security process is done?

I understand that the deployment is less Agile, but do you find it still waterfally? Do you come in after six months of code have been written, or is it more collaborative?

Robert: It's a bit hard to say. We certainly get engaged at all points in the lifecycle.

A good time to engage us is to engage us early with the training, and a lot of times during the beginning of a project you have a small group of architects and designers who are coming up with the initial architecture and design.

You have a lot of developers who aren't fully engaged at that point in the process, because they're more novice programmers and not fully engaged in the design process.

So that's a good time to deliver some secure coding training and get those folks up to speed when they're not necessarily fully engaged yet in the development process.

We do get asked to do architecture design reviews, those are always worthwhile endeavors.

But there are still a lot of companies that bring us in for pen testing, expect the pen testing to go great, are shocked and dismayed by the results of the pen test, and only then decide to really bring us in.

I would say even more commonly is just some big exploit gets discovered or published and then the alarm bells go off in an organization, and they decide "We've got to more proactively address security."

Guy: Yeah, I think that's true at all levels of seniority and all levels of the stack.

Security is invisible, so when something big happens and that feedback loop hits, it mobilizes people to action.

I think this was a really interesting conversation, because I feel on one hand, in the reality you're describing, a lot of these principles are the same as they would be in any language.

This notion that people early in their career need more of the secure training and education element of it, while people that are further along might be looking to you a bit more for subsequent verification, but not quite as they go.

A lot of the commentary about teaching-- the specific examples change, whether it's out-of-bounds vs. cross-site scripting, or knowing to sanitize inputs and outputs.

Fundamentally, the idea of teaching a principle vs. teaching a specific holds as well. Maybe the biggest change over here is the tradeoffs and the specifics, as well as the pace at which this world works.

This risk tolerance around the likelihood of a problem, and maybe the tolerance for slightly slower paces.

Robert: When I think about the training-- I've been delivering secure coding in C and C++ training since 2005.

That's some time now, and the problems don't change very much. They remain there in the languages, and particularly in C, there's a very strong reluctance to change the language.

The first rule in the standards committee is "Don't break existing code." They're OK breaking compiler implementations, but they don't want to break existing source code that is out there because the saying is, "The world runs on C and we don't want to break the world."

You'll find that there are code bases out there where there really aren't maintainers left for that code.

They'll update the compiler and rebuild the code with the latest version of the compiler, and if something goes wrong, they're out of luck now.

They don't know how to repair that code anymore. I think the thing that probably changes the most are the solutions.

There's different and better tooling that comes along, different and better processes, and sometimes there are newer libraries which are introduced which are potentially more usable and more secure.

Guy: I think that's a good thing to hear. I think the ecosystem, the surroundings, evolve while indeed the cornerstone of software development that is C remains a little bit unchanged, for historical reasons but also because it's pretty darn powerful.

It's not-- it allows you to do a lot of things, including shoot yourself in the foot. It lets you do a lot of good things as well.

Robert: Yeah. The time I've been involved in C standardization, I would say that it's really still driven by performance more than security.

We have these undefined behaviors in the language, and the less specified a language is, the more room there is to optimize it.

The simple view of that is you have to go from point A to point C, but you have to stop at point B on the way. When you try to optimize your route, that constraint of stopping at point B is going to limit your optimizations and your ability to optimize the route. But if you can eliminate the necessity to stop at point B, you can come up with much faster routes to your final destination.

One of the things that's been going on in the evolution of C is that compiler writers are taking advantage of these undefined behaviors to do greater and greater optimizations.
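
(A well-known concrete instance of this dynamic, sketched here for illustration: because signed overflow is undefined behavior, a compiler may assume x + 1 never wraps, and at optimization levels like -O2 both GCC and Clang can reduce the first function to "return 0", silently removing the overflow check.)

```c
/* Undefined behavior enabling optimization. */
#include <limits.h>

int will_overflow(int x) {
    return x + 1 < x;      /* UB if x == INT_MAX; the check may be
                              optimized away entirely */
}

int will_overflow_fixed(int x) {
    return x == INT_MAX;   /* no UB: test before the addition */
}
```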

There's this weird push back from the C community. Representatives of the C community will physically show up at standards meetings.

And they'll say "We've had it up to here with these optimizations. We've written this code, this code has always worked. You know what it means, you've always known what this code meant, but now you're doing these optimizations and our code is broken now. Cut it out."

The compiler writers would say, "OK. You can do without these optimizations." Then the C developers will say, "No. We want those optimizations."

Then the compiler writers will throw their arms up in potentially mock disbelief, but there's this desire to want it all and it's not necessarily feasible.

But when push comes to shove, performance has been winning out over security in terms of the decisions that are being made.

Guy: Yeah, it's an aspect of the functionality. You can be secure all the way to bankruptcy.

At the end of the day, business value is what dominates, and security is invisible. It's something that you have to work to make visible.

Robert, thanks for some of the good guidance here and all this sharing your experiences as we went through the show.

Before I let you go, I like to ask every guest on the show for one tip or one bit of advice, for when you have a team of C developers looking to level up their security knowledge.

What's the one small piece of advice or pet peeve that you get annoyed with people repeatedly getting wrong, that you would give that team to get better at security?

Robert: I suspect that every time I get asked this question, I give a different answer based on what's most on my mind at the time. But this time I think I'll say, take a look at--

Write some C code. Imagine what assembly code is going to be generated, then take a look to see what assembly code actually gets generated.

And then when your expectation doesn't match the reality, read the standard again and repeat until you can predict what the code you're writing actually does.

Because people are increasingly surprised by the semantics of the language and what the compilers are doing these days.
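
(One way to practice that advice, with a case where prediction and reality often diverge; the scenario and file name are illustrative, not from the episode. Compile with something like gcc -O2 -S scrub.c and read the generated .s file: because nothing reads key after the memset, the compiler is allowed to eliminate it as a dead store, leaving the secret in memory. memset_s from C11 Annex K, or explicit_bzero where available, is the usual remedy.)

```c
/* Predict the assembly, then check with: gcc -O2 -S scrub.c */
#include <string.h>

void use_key(char *key, size_t n);   /* defined elsewhere */

void handle_secret(void) {
    char key[32];
    use_key(key, sizeof key);
    memset(key, 0, sizeof key);   /* dead store: may vanish at -O2 */
}
```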

Guy: Cool. Yes, that's good advice. Once again, thanks for coming on the show, Robert.

Robert: Thanks again for having me.

Guy: Thanks to everybody for tuning in, and I hope you join us for the next one.