Marc Campbell: Hi again and welcome to another episode of The Kubelist Podcast. Today we're here with John Amaral and Kyle Quest, two co-founders of Slim.AI, to talk about their popular project and service. Welcome, John and Kyle.
John Amaral: Hey. Thanks, Marc.
Marc: And of course Benjie is here. Hey, Benjie.
Benjie DeGroot: I'm here. I'm excited about KubeCon. We're recording this right before KubeCon. I'm getting excited, Marc.
Marc: All right. Well, let's dig in. Normally we like to get started with just some background. John, do you mind getting us started by telling us your background that led up to creating Slim.AI?
John: Sure. My role is Slim's CEO, and I come from a 30-year background in developing products and building technology. These were software products, of course, mostly SaaS throughout those years; of course, 30 years ago there wasn't any SaaS. But I've been building software products mostly in the cyber security and networking space, and going back in time, most recently, prior to Slim, I was the head of product at Cisco Cloud Security, which is a really big cloud security division of Cisco.
Before that I was at a company called CloudLock, which was acquired by Cisco. We were another cyber security SaaS company in an area called CASB, Cloud Access Security Broker. Then going back in time, similar roles as either CTO or Head of Engineering or Head of Product, or all of those depending on the company, and a bunch of them have been successful.
I've been through several acquisitions and built some really big companies, notably Trustwave, which is a managed security services provider. I was SVP of Product there, and we sold that to Singapore Telecom. So I've been pretty involved in building cyber security tech for a long time. I have an undergraduate background in software and computer engineering, and I got an MBA from MIT. So I've been mostly on the business side and product side for several stints now, and that's also true at Slim.
Marc: Great, yeah. And I think we're going to dig into a little bit, this is a super interesting background of the business side but pretty technical too, so we'll put that over here for a minute and come back and chat about it. Kyle, the same question, would you mind sharing with us your background that led up to creating Slim?
Kyle Quest: Sure. I'd say I'm a builder and a breaker, so I've been building cloud native apps since the early days of cloud native or early days of AWS and all of that. That's when it all started. On the other side, I've been involved in security in different capacities for quite a while, so I like breaking stuff and I like doing the opposite. So it's kind of a combination of those two things that led to the origins of DockerSlim and the tech behind it, and what I have observed building applications for the cloud and containerized applications served as an input for the initial idea.
Marc: That's awesome, let's jump in and chat about the product. Generally, the folks listening to the podcast are pretty technical so don't be afraid to dig into the lowest level that you want to or whatever, but what is Slim? What does the project do right now?
John: I can start, and then I'll hand over to Kyle. One thing we didn't say about our backgrounds is that we've been working together for roughly 20 years, on and off, and then mostly on. I'd estimate that for something like 15 of the last 20 years, Kyle and I have been working together at companies where we build cyber security products or deep tech around SaaS and cloud native.
So we didn't just meet on the street and start this company, we've known each other a while and been building really great stuff together in our own capacities. At a high level, Slim is all about giving developers the ability to understand, evaluate and quickly build secure containerized applications, and we have an open source project that Kyle will tell you all about and give you his perspective on.
We have a SaaS platform that's available and free today for developers, that allows them to really explore, understand and secure containers, whether those are containers they build themselves or containers they consume from public registries. The high-level value proposition, or the family of use cases we're trying to attack, is inherently software supply chain security. It's a real problem, and understanding and securing containers using container best practices is a key factor in being able to limit risk in deployed applications.
Reducing attack surface of containers, hardening them, removing vulnerabilities, really understanding how they run so you can secure them appropriately. It's pretty hard work and it takes a lot of expertise if you don't have good tools to do that, and we've created some really awesome tools that can do a lot of that stuff for you automatically and give the power to the developers so they can build secure apps from the start, right at the point where they turn it into something that will run in the cloud.
Right there they can make their containers more secure. We think that beats securing them after the fact or later, or having poorly composed containers, which is a way harder proposition to fix after the fact. So that's, at a high level, what we're all about. I mentioned we have this open source project, Kyle's the creator of that. I'll let him dig in there and tell you all about it, and then tell you about our tech as much as you'd like.
Kyle: Sure. It's a standard way to start a project, solving your own problems. I had a problem, and technically it's a general industry problem, where with the new cloud native applications there was a new way to build and deliver them, with DevOps. There are many definitions of DevOps, but one of my favorites is, "You build it, you run it." But what does it mean?
There's a lot of potential, and it empowers engineers to own the application full cycle, but it also means that the app developers are now expected to be infrastructure experts, and that's just not realistic.
So there was this disconnect between the theory and reality, and we're technically still in this phase where we are sort of in the Stone Age of the cloud native era where you have to know a lot of low level stuff, you have to do a lot of manual work to get the outcomes that you want. It's super overwhelming.
You look at the CNCF landscape, so many different technologies and tools and all of that. Just trying to know Kubernetes is a Mission: Impossible, so there's a lot of low level knowledge you need to have, and the level of skill that you need is really high. That was the origin, where I was trying to build containerized applications but I didn't want to do it the hard way. Kelsey Hightower has this awesome guide, Kubernetes The Hard Way.
I think there's a lot of that in the cloud native space where you're doing a lot of stuff the hard way, and I wanted to find this easy way or straightforward way where I could get the outcomes that I wanted. That's production ready containers and secure, production ready containers without doing a lot of manual work. And so I ended up leveraging my security background in order to automate a number of activities that had to happen, creating hardened and production ready containers.
That's how DockerSlim was created, and it was created during one of the Global Docker Hackathons that they hosted at the end of 2015. Then the project got a lot of traction because this was a problem that others experienced as well, and it just kept growing and growing from there.
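For context, the core DockerSlim workflow is a single CLI command against an existing image. The commands below are an illustrative sketch, not exact usage: the image name is a placeholder and flags vary between versions, so check the project docs.

```shell
# Analyze an image's layers and contents without changing it.
docker-slim xray my-org/my-node-app:latest

# Run the image, observe what it actually uses (here with an HTTP
# probe to exercise the app), and emit a minimal ".slim" variant.
docker-slim build --http-probe my-org/my-node-app:latest
```

The `build` command is where the slimming happens: the resulting image keeps only what the observed application actually touched.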
Marc: I remember those early Docker Hackathons, yeah. I don't remember if it was 2015 or which year it was, but it was obviously pre-Kubernetes, Docker 0.whatever days. It was early and it was fun. Lots of interesting and fun stuff was being built then, like now too, but it was still very, very early in the whole ecosystem.
Kyle: Yeah, exactly. I would say we're still early on, it's still not mature and this is the exciting part because there's a lot more to do and a lot of opportunities.
Benjie: Yeah, I'm excited to talk about those opportunities. One thing I wanted to ask, as I'm a newb when it comes to Slim, but I've been following DockerSlim for some time, and I always thought of it as a way to make my containers a lot more efficient. Obviously when you reduce the surface area of the libraries and all that stuff, there's a huge security component to that, and you guys have both been talking about the security angle, which totally makes sense.
But I just thought for maybe a second, can you tell us a little bit about some of the efficiencies you might get just as an operator, if I were to run my images through DockerSlim? It seems like there's also some speed gains to be had, is that accurate or is that just a misassumption by me?
John: There are a few, and I'll parse out the various value propositions. The first is that DockerSlim and Slim are Swiss army knives of container evaluation, analysis and optimization. We also use the term hardening for what we do to secure them.
It's not uncommon and, in fact, I think it's generally common that folks who build containers don't efficiently or effectively scrutinize the code they have inside the containers.
They have to know what it takes to get their container to run, meaning that they add the parts that are necessary to run, and that's normal when you build code. You can't miss something that makes your app run.
But what they generally don't know is the parts that they don't need, and it's pretty common when you build from a base image, like a base OS image such as Debian, and you add Node on top.
You know developers want to go fast, so they want to build with images that have convenient tooling or robust package sets, et cetera. Our systems have scanned literally millions of containers, and we see that on average there are way more files than necessary to run the app, with all the risk and baggage that comes along with that, including inefficiencies in speed and startup times, and of course vulnerabilities and risk surface.
So container best practices state three high level things, right? These are maybe the three core ones. You should know what you're shipping to production, you should minimize what you're shipping to production to only those things you need for your app to run, and you should remove as many vulnerabilities as possible. There are all sorts of operational and security ramifications of those three things.
On the size part, what we've seen is that as you reduce container size, you get operational efficiencies and even, sometimes, cost efficiencies. Certainly in CI/CD pipeline times. Scanners work faster on smaller containers, things flow through your networks faster, they pull faster, they push faster. And if you're using something like serverless containers, or Fargate or one of these container services on popular IaaS platforms, you're charged when the service asks for the container to load and start up. What we see for ephemeral workloads is that if a container has a long startup and load time but a relatively short working lifetime, your bill can actually reflect more cost for the startup and load than for the actual lifespan of the working container.
So there are these efficiency, speed and performance gains to be had, depending upon the context. Then, if you're storing very large images everywhere, it builds up over time and causes drag on your operation, on your developers' desktops, in your CI/CD pipeline, et cetera. Storage cost isn't free either, and depending upon the system you use to store your containers, that can be more or less impactful.
On the security side I think the effects are even more profound. With software supply chain risks today, there's a lot of scrutiny on vulnerability counts, vulnerability scanning is really prominent, and folks who create containers are getting a lot of pushback from container users. Think of that as, "Hey, I shipped the containers to my customers as the software I expect them to run in their SaaS," or, "I'm running security for a firm's SaaS, I'm downstream from the developers, and I have to reach a certain security profile for the running systems."
The scrutiny caused by Log4j, and even the other prominent, large-scale supply chain attacks like SolarWinds, has caused a heightened sense of urgency to make vulnerabilities go to zero. That's pushing back on developer teams and causing a lot of churn and a lot of work. If you're a company that ships containers broadly across the internet, for instance, think of any foundational infrastructure container that you can find on Docker Hub, those publishers are getting a lot of interaction, a lot of feedback and a lot of pressure from users to have vulnerability-free containers.
And it's not easy to make containers vulnerability free, and so that dynamic is causing a lot of interesting use of our tools which help you remove vulnerabilities and unnecessary space, and adhere to those three best practices I talked about. Getting those to happen as you build, but automatically through DevOps is a really big value proposition. So size and security come along together as you make these optimizations. Kyle, do you want to add to that?
Kyle: Yes. I'll add with security. So one of the interesting things in security, one of the concepts in security that's super desired is proactive security. You want to be proactive as opposed to reactive, and if you look at the traditional vulnerability management, this is a reactive security control. You have vulnerabilities discovered, your scanners will have checks for them, and then they will find those vulnerabilities in your containerized applications.
So that's great to do, it's important to have the reactive controls, but it's also desired as much as possible and realistic to be proactive. One of the fundamental approaches with the DockerSlim and Slim is about proactive security, so you throw away the stuff that you don't need and that's proactively eliminating problems, reducing the surface, and this is one of those things that's hard to do manually, traditionally. The idea of proactive security and the concept of least privilege existed for decades, but it's the implementation side that's been tricky, and trying to automate that is really a game changer, and that's what we're trying to do.
I think the same is true for the size and other hardening activities, it's super low level, requires a lot of domain knowledge, and it's time consuming, optimizing the size so your images push faster and they startup faster, and all of that. Being able to automate that is super powerful, because what you need in production is different from what you need when you're developing. When you're in production, you want to have as little as possible in your container, but when you're developing you need developer friendly containers so you can develop easier, debug easier.
If you look at those kinds of developer friendly container images, Ubuntu and Ubuntu based images are pretty much it. I'd say Ubuntu, and to some degree Debian, has been the operating system of the cloud. If you look at the last 10, 15 or more years, they dominated the DevOps scripting space; all those cookbooks and recipes, a lot of it was built for Ubuntu or Debian, so there's this critical mass that developers or DevOps engineers are used to leveraging to build their environments, either containerized or non containerized.
There was critical mass and gravity around that, and that was another driving force behind the original project, because I was using Ubuntu images to build containers at the time. I didn't want to change the config scripts and all of that, throwing it all away to go with another operating system. It was a big ask; just going from one version of the same OS to another is a significant effort. So being able to use what I had was super important, and that's what I was able to achieve.
I could still use Ubuntu based images and then get the outcomes that were necessary to have production ready containers. So this is the developer experience side of that, it's super important as well. It's not just about pushing the images faster, it's also about developing faster as well, and you develop faster in an environment that's more developer friendly.
Benjie: Yeah, definitely. Quick question for you, just give me a yes or no. But back in the day when I was doing a little bit more developing, I'm not really anymore. But Alpine, that was the answer, use Alpine. Super low surface area. I feel like Ubuntu Slim is kind of the new hotness now. If you were a listener and you're like, "I want to get started, I'm going to get DockerSlim integrated soon, but I'm just starting today," what's the image that you, just as obviously an expert in this, Kyle, what would be the image that you think you'd start with now? Ubuntu Slim? What do you think?
Kyle: I'd still go with whatever standard images they have. If you look at the language-specific images, for example for Go or Node or Python, the main ones are Debian based. Because of that, there's enough critical mass around them that that's the best thing to go with.
Benjie: So don't use Alpine anymore? I used to tell everyone, "Just use Alpine." But I'm pretty sure Alpine's kind of-
Kyle: So here's the deal with Alpine. I think it's an awesome project, but there are certain things people are not aware of. There are two types of Alpine users. The experts who know the ins and outs of Alpine, and they understand the trade offs, and the newbies who heard something about it and they try to use it and it seems like it works for them. It's like this for a while, and maybe if you're lucky, you'll be fine. But there's a lot more to know about that, than just that.
For example, the package management ecosystem. In the commercial application development space, you don't always control the operating system and the package manager, because you work with third party vendors and they give you RPMs. Or you need FIPS encryption because you're going after a compliance mandate, either a government mandate or some other commercial mandate, and you get FIPS encryption with Red Hat. You don't get it with Ubuntu, and definitely not with Alpine, so you end up with the operating system you have to use, and that's a big deal.
So sometimes maybe you can use it, but there are a lot of other interesting things to be aware of. One thing about Alpine is that it's not a standard Linux distribution, it's not a GNU/Linux distribution, which is probably the number one thing people are not aware of. So what does that mean? It means it's not compatible with your Ubuntu, Debian, Red Hat operating systems, and a big part of that is the libc package that's the foundation for a lot of applications.
Alpine has its own musl libc implementation and it's not binary compatible with glibc, and there are a lot of interesting things around that. People who hear about Alpine see that the image is small and just go use it. Again, maybe your use cases will be just fine, but there are a lot of unknowns that you need to be aware of.
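To make the libc point concrete, here is a minimal sketch of the classic failure mode: a dynamically linked binary built on a glibc distribution and copied into Alpine. The image tags and program are illustrative.

```dockerfile
# Build stage: compiles and links against glibc (Debian's libc).
FROM debian:bullseye AS build
COPY hello.c .
RUN apt-get update && apt-get install -y gcc \
 && gcc -o /hello hello.c

# Run stage: Alpine ships musl, not glibc, so the binary's glibc
# dynamic loader is missing and the container typically fails at
# startup with a confusing error like "/hello: not found".
FROM alpine:3.16
COPY --from=build /hello /hello
CMD ["/hello"]
```

Statically linking the binary, or rebuilding it against musl, avoids this, which is exactly the kind of trade-off an Alpine expert knows to manage.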
Benjie: Right. So there's no magic bullet, is the truth. Back in the day I felt like Alpine was a little magic bullety, but of course for all the things, for a lot of the reasons, and it's really helpful to lay them out like you have, of all the things that you need to consider. One other thing I wanted to just poke on a little bit, and then we're going to switch gears a little bit, is that I think it's really interesting you brought up the point of development being a very different environment than production.
I think most of our listeners understand that, but a really cool piece of technology that's coming out, I think it was out of beta for 1.24, is ephemeral containers that you can attach. So for example, to be explicit: you want some debugging tools, or a higher verbosity log level, you don't want that in your production containers, but you do want them locally, and you want to understand what packages are where and have the ability to actually access those things in production.
So I think that's really cool and it's a really good point. One thing that we wanted to touch on a little bit, and maybe John can speak to this a bit, so you've started this cool project, DockerSlim, and you guys have worked together for some time. What made you guys decide to start Slim.ai itself? How did that get going? What's the story there?
John: So as Kyle mentioned and as we alluded to earlier, there are a lot of these challenges around shipping secure applications to prod, right? And if Kubernetes is the organism, containers are the cells. Of course, Kyle and I have been building very large scale, very important cloud native security applications, the kind of cloud native applications that secure other cloud native applications, and secure very large sets of them at the last company we were at before this.
Our cloud native application that we were involved with had more than 100 million users being protected by it worldwide, very large scale. We were building all containerized, we had a very large team, a very wide team, separate, working on that DevOps kind of mindset, you build it, you own it. We were really engaged, in the thick of this problem to maintain consistency across all these teams and ensure that from the developer's desk looking out, we were building the secure and most well composed applications as we could.
Of course we had every tool available, but we ran into conditions there where, regardless of every resource in the world, every tool in the world, we were stuck with bloated containers. Sometimes they were so large we couldn't even run them on the latest platform. Tons and tons of vulnerabilities, and lots of software lifecycle scrutiny on vulnerability reduction, so we had to do the really hard work to get these things secure.
We were always lamenting, "We wish we had higher velocity in building secure apps natively," and we'd been in security, building security products and companies, for a long time. So Kyle and I were together, and he mentioned this project, DockerSlim, to me while we were meeting one time, reminded me it exists and said, "Hey, looks like it's catching on, it's pretty cool." Then we had a number of conversations, I took a look at it, and we started talking to some of the users.
As we started to brainstorm and imagine, first of all, why users were adopting DockerSlim: Kyle had this great base of users who were ardent fans of it, and adoption and the number of stars were rising. As we dug into that more, started to think about the implications of why, and talked to users, there was a pretty clear signal coming out of that.
It's that, of course, lots of folks struggle with this combination of lack of expertise, managing complexity, and dealing with security of containerized applications. It's kind of, "We can't secure it because it's hard, and I don't have the expertise." Kyle mentioned this earlier, hardening is an expert task. The other part is that doing the work to secure containers as containers is kind of a job that doesn't fit right. It's like, "I'm an engineer, I write code. I want to go fast at writing code. I don't want to prematurely optimize things in my environment, but I would love to ship secure containers if it's easy."
And lots of engineers don't have the knowledge or capacity or time to do it well, and so usually this stuff gets kicked down the road, somebody in SecOps or somebody downstream figures out they need to secure it. Then you try to wrap security around this later, or worse, there's this delayed signal that comes back to the engineering team that says, "You got to fix that." But it's always inconvenient.
So we thought, "Wow. We see users of DockerSlim trying to tackle those problems and be more proactive," like Kyle was mentioning. We could completely empathize with the problem space, given our own experience building cloud native security applications for a long time. And we're entrepreneurial, we've been doing this a while, building companies and such, so we started to explore the idea that the vision and the reasons Kyle built this point to a pretty market-wide problem space, and that there really wasn't technology aimed at this proactive, automated approach that starts with developers and moves out.
And so, we put together our ideas, and we got together with some other friends of ours who had built companies with us before. This was around 2020, and we decided to go do it, left what we were doing, started this company and haven't looked back since. Kyle, do you want to add to that, or embellish that commentary?
Kyle: A couple of things to add.
One of the challenges that I've observed over and over again is that one of the problems that prevents developers and engineers, the engineering teams, from getting to the production ready containerized applications and cloud native applications in general is actually understanding the application. Truly understanding the application.
Benjie: So you mean from like an architecture perspective? Or what do you mean?
Kyle: In terms of what it needs at the infrastructure level, in terms of the resources it needs, the interactions it has.
Benjie: So from like a network perspective, "Who do I need to talk to?" Obviously there's firewall rules, or you're maybe using a mesh, or how many replica sets do I need to have?
John: Or even more simpler, like what parts of the operating system does it need to run? That's an important hardening question.
Kyle: And also for example, what kind of resources the application needs? For example, it needs a lot of memory or disk space or CPU. Any kind of way you slice it, there's not enough knowledge about it. There might be some kind of high level knowledge about the application the developers themselves have, but the applications are like icebergs, you see only the tip of the iceberg. In order to have the application running in production, you need to understand what's below the waterline, and that's a manual process.
Benjie: Right, so it's really the DevOps perspective that you're taking when you're looking at applications, especially the containers as a whole. That totally makes sense. I actually have a slide in my deck that is an iceberg, I don't know if you saw that or you came up with that on your own. But I really appreciate the iceberg analogy, that production is just that little bit of tip on the top and the rest of it is, "You don't even want to know." And that goes for the software development lifecycle, let alone the various components and containers that you're using.
One other question I did really want to understand about Slim is just a little bit on the technical side. Just walk me through, high level, well, kind of low level I guess. So I'm over at Shipyard, we use... Maybe I shouldn't say this because it's actually an attack vector... I don't even know what image we use, I'm pretty sure it's Debian flavored, and we do some of our own Python stuff and we've got some Go binaries in other places in our stack. Would I actually just go and run DockerSlim as part of my CI process?
And then if I did do that, how does it validate that you're not actually taking something out that I actually need? That, to me, is really interesting. How do I give you a Dockerfile and then you guys give me back a Dockerfile that is more efficient, slimmed down? Slim, good name. Slimmed down and more secure? What's the technology workflow there?
John: We have two parts, we have DockerSlim as our open source project, and we have Slim.ai, Portal.Slim.Dev, which is our SaaS. We have a bunch of free tools there. And so, depending upon the scale and scope of the implementation, either DockerSlim or Slim.ai might be more appropriate. For the first part that you said, if I want to plug this into my CI/CD, into my DevOps workflows, then that's more appropriate to do with Slim.ai.
What Slim.ai does, it's got natural integrations built on it, it's a SaaS platform application and it connects into any registry, public and private registries. It also has APIs and other kinds of integration points where you can easily connect it to, say, GitHub Actions or whatever you like. Through those APIs and registry interactions, users of that platform can provide access to where their containers live or where their containers are being built, et cetera.
In that case, if you were trying to do this more proactive, automated evaluation, and this is the way it's being used quite a bit, you'd create a workflow through your CI/CD and make a connection to your private registries, where your private containers are, maybe even a public space. You would trigger, every time something is built or something changes, for us to pull that container into our pipeline, evaluate it and then act on it, harden it, optimize it, vulnerability scan it. You name it, we can do all these things.
Benjie: So you guys pull in my images, not my Docker file? Is that what it is? So you would connect to my registry?
John: Yeah, connect to registries. Right, exactly. DockerSlim is different, I'll get into the minor differences there. But anyway, for an integration at scale into pipelines, et cetera, Portal.Slim.Dev is what you need. That allows fast integration and a lot of assistive tooling and vulnerability scanning and you name it. It gives a very refined and complete environment so that you can not only scan and learn, but you can do things like understand changes between two containers, vulnerability differences between two containers, optimize the container, show the differences between the hardened and regular, pre-Slimmed container, et cetera.
So it gives you a lot of evaluative tooling that developers can use to see what changes, and that was part of your question. Then you can automate this at scale and go fast. You asked, like, "How do I know you're not taking out stuff I do need?" And that's the magic in this, Kyle was talking about application intelligence kind of loosely. We watch your containers run while they're being stimulated, and then we observe the resources they need, and when we create a new container we only remove the parts that aren't necessary for successful operation. So we emit a new container that is a functional replica of the original, but doesn't have any of the parts you don't need.
You have the ability to tune that and optimize the profile around that, but mostly that knowledge comes from watching them run. We have a mode of the tool where we can actually receive a container that has instrumentation in it from us through that automation process, and then go run it in your test environments, wherever you'd like. Then we can interact with the results from where you've run that, so there's lots of tools and techniques we have that are made to build a confidently hardened container that can be automated and proactive.
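As a rough illustration of the CI/CD integration described here, a pipeline step can build the image, run the slimming pass, and push the result. This GitHub Actions fragment is a hypothetical sketch using the open source `docker-slim` CLI rather than Slim's actual SaaS API; the image names, tags, and flags are placeholders.

```yaml
jobs:
  build-and-slim:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build the image
        run: docker build -t my-org/my-app:${{ github.sha }} .
      - name: Slim the image (illustrative flags)
        run: |
          docker-slim build --http-probe \
            --tag my-org/my-app:${{ github.sha }}-slim \
            my-org/my-app:${{ github.sha }}
      - name: Push the slimmed image
        run: docker push my-org/my-app:${{ github.sha }}-slim
```

In a registry-connected setup like the one John describes, the push to the registry itself would be the trigger, so the slimming step would move out of the build job entirely.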
Marc: So I'd run it in my test environment as a way to exercise all of the functionality of the application so that you can observe that. I assume that's the value of me actually running it, right?
Kyle: Yeah, there's a little bit more to it, and I'd like to go back to the company name, Slim.AI. The AI actually stands for Application Intelligence, like John mentioned. That's really the main difference from a lot of tooling in the cloud native space that's more infrastructure centric. Focusing on the application itself and building that application intelligence really allows us to automate a lot of activities that you would otherwise do manually.
The application intelligence is collected in many different ways, and that includes static analysis of the application and the container, and also dynamic analysis of the application and the container while that application is running. And on the interaction side of things, we obviously benefit from using the tests that you have, the integration tests, the end-to-end tests, but we also have built-in automated probing capabilities that make it possible for us to see what the application is doing and what it needs, and all of that.
Benjie: Is that eBPF? Because I love talking about supply chain and I love talking about eBPF on every episode, whenever possible. Are you guys using eBPF for those probes?
Kyle: It's different kinds of probes, those are more application interaction probes. eBPF fits better on the sensor side. We have a sensor that is responsible for collecting telemetry. The current sensor we have is a container level sensor, and we'll be expanding to a model where we have a system level sensor. The difference there is that the container level sensor is embedded in the container image itself, and the system level sensor lives outside of the containers that you have.
There are a lot of interesting opportunities, but also trade-offs and constraints. For example, with eBPF you have kernel dependencies in terms of what can be done, based on the kernel version and all of that. So it's a trade-off based on the application environment that you have.
Benjie: Of course. Yeah, I don't know how much you listen to The Kubelist Podcast, but I love to talk about eBPF and supply chain, and I also love to talk about Wasm because I'm a CNCF hipster over here.
Kyle: I had a presentation at the last KubeCon about the eBPF libraries, and building applications with eBPF actually. So I'm happy to talk more about that as well.
Marc: Oh yeah, I think we definitely should. eBPF is definitely pretty popular in this ecosystem. But before we do, I want to go back, right? You talked about this hard problem to solve. John, I think you actually described it in your intro as, "We're going to explore, understand and secure containers," and then you said that you're doing that for both first and third party container images. Obviously this is a hard problem to solve.
We talked a little bit about the transition to the cloud native ecosystem and the separation of SREs and app devs owning the infrastructure, creating this need. There are other folks in this space though, right? Slim is doing a great job, but there's Snyk and Sysdig and Chainguard and Anchore and Aqua and Clair that do similar things, all as open source. We've had a lot of them on the podcast too. What would actually help me a little bit is understanding the differences. What does Slim do that's unique and differentiating in this space, that sets your solution apart from the rest of the competition right now?
John: I think we're one of the only ones that's focusing on this kind of proactive and automated hardening solution. We're trying to build tooling that makes that usable, repeatable, and automatable for our users and customers. We're expanding the concepts of that and making use of this application intelligence Kyle talked about, to really have our system automatically understand the functional operation of the container. And not just the container; our system can also work on groups of containers, networks of containers. Think Docker Compose files.
So we can make observations about that, and from that observation, do a lot of these securing and optimizing tasks. So we're going to really go deep with that concept, make sure that developers have access to all of that.
I think this is another part of it that's really important, is that we've spent a lot of energy and time and development on making sure that developers understand this. I said there were three parts to best practices, right? Know the software you're shipping, only ship to production what's needed to run your application, and remove as many vulnerabilities as possible.
But our goal is to make that something very friendly to developers and to the DevOps folks who live in that create-and-build space of the DevOps lifecycle. And so, as part of our commitment to developer-friendly tooling, for instance, if you were to go on our Slim site, our portal, you'd find there what I think is the only functionally useful container differ, one that can show you exactly what changes between two containers. This is designed to demystify change for developers.
If you go there we can do the same with our vulnerability scanning where you can easily target two containers and say, "Show me the vulnerability changes and the compositional changes between those two containers." It really helps folks very quickly answer the question, "What changed between stable and latest?" For instance, "Am I getting more vulnerabilities or less? What did the author of that container change in that? And now how can I set up a profile that optimizes a rendition of that that will run optimally in my production?"
Marc: So John, on that, you're checking that at CI time, right? I think that's ridiculously valuable, what you just described. Are we increasing the number of CVEs? Have we decreased the number of CVEs? Because there are dependencies in the image that aren't in the code that I wrote, to your point with the three things. I'm not even aware of these dependencies in the image, and all of a sudden there's a new, patchable or unpatchable vulnerability.
The way you just described that really is... I'm guessing there's more to the solution and that's what I'm trying to get to here. You talk about, "At release time, at CI, at build time there were this many mediums or high vulnerabilities, and now in this next version there's this many mediums and highs, and here's the diffs of the container image." But do you also get into a vulnerability that was disclosed a week later after I ship that image by continuously scanning them?
John: Yeah, our system can do that as well. Every time that container builds, we can do what you just described. But then you can also automate our system through the APIs. Remember I mentioned that we can have connectors going into your registries and such, so our system can be triggered to rescan everything, and we'll be building more automations there that proactively do that.
But yeah, one dimension of our platform is this vulnerability scanning capability that is super simple to attach to your registries and gives you all sorts of power to understand and diff vulnerability change, even ongoing vulnerability change. The other cool part about our vulnerability system is it's multi-engine, and we have ambitions to add pretty much any engine to it, so you not only can scan, but you can get scan results and perspectives from more than one scanner, which is what we find people really want to do because not all the scanners are equal. They have differences; they don't have a full intersection of their findings.
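John's point that scanners "don't have a full intersection of their findings" is the motivation for merging multi-engine results. A minimal sketch of that merge, with hypothetical scanner names and CVE ids, grouping each finding by which scanners reported it:

```python
# Sketch of merging findings from multiple vulnerability scanners.
# Scanner names and CVE ids are hypothetical; real multi-engine
# systems also reconcile severity and package metadata.

def merge_scans(scans):
    """scans: {scanner name: set of CVE ids found}.
    Returns each CVE mapped to the scanners that reported it."""
    merged = {}
    for scanner, cves in scans.items():
        for cve in cves:
            merged.setdefault(cve, []).append(scanner)
    return merged

scans = {
    "scanner_a": {"CVE-2023-0001", "CVE-2023-0002"},
    "scanner_b": {"CVE-2023-0002", "CVE-2023-0003"},
}
merged = merge_scans(scans)

# Findings every scanner agrees on vs. findings unique to one engine.
agreed = [c for c, s in merged.items() if len(s) == len(scans)]
only_one = [c for c, s in merged.items() if len(s) == 1]
print(sorted(agreed))
print(sorted(only_one))
```

The single-scanner findings are exactly the asymmetry Marc and John go on to discuss: a consumer running a different engine will surface CVEs the producer's engine never reported.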
Marc: Yeah. At the end of the day, if you're comparing one scanner against another, you want to find things that the other scanner doesn't. That's how you differentiate the quality of a scanner. For us, we ship software into third party environments, and we played that game for a little while where it was like, "Well, this scanner reports this and this scanner reports that."
And it kind of turns into they're all valid findings, but really difficult. So we just said, "You know what? This is a scanner, we're going to publish the scanner that we use and we're going to commit to any high vulnerabilities that this scanner detects." And multi scanner is definitely interesting, because it gives somebody who's shipping software the ability to say, "It doesn't really matter which of the scanners you're going to use. We have scanned it against that."
John: Yeah. Think of the consumer-side effect here, right? For any company that ships containers to their customers who use them in their SaaS, for instance, and I think you might fall into that same class, the producer of that container is subject to scrutiny based on whichever scanner the consumer decided to use. There are quite a few of these scanners out there, so now you've got to answer questions about all the scanners because you're at the whim of the person who's observing your scans.
I think that increasingly there's pressure on publishers to respond, so we saw that as an information asymmetry or a coordination problem. People producing containers very often want to or need to be able to make observations across a set of scanners. And, by the way, those consumers can use our platform as well to see results across more than one scanner so it kind of speeds up the process of coordination on scan results.
Benjie: Interesting. Okay, I want to ask before we run out of time because we're coming up here, I want to ask a little bit about the open source project itself, in regards to who's contributing here? Is it mostly the Slim.ai folks? How's the community doing? How do you determine roadmap? Maybe a little bit what's on the roadmap? Then also if I am super interested or I'm a person listening to the podcast, how can I start getting involved? What's the right place to go to start being a part of this?
Kyle: I have a few things there in terms of where things are with the open source project. Obviously it's not Kubernetes; not a lot of projects out there are at that level. Only a few, and Kubernetes is an outlier when it comes to the standard projects that are out there. But relative to other projects, we get a lot of attention, and we've had a lot of contributions as well. Some of them are more ongoing than others. So it's not just Slim employees; we've had people from other companies, or just open source contributors and developers, contributing.
For example, ARM64 support was added by somebody who had a need on ARM64 machines, so we got that kind of contribution from an external user. In general, we have quite a few open issues on GitHub, and there's a growing number of good first issues that will be good for new contributors, and a number of feature requests. We don't have a formal roadmap mapped out many years ahead, and a lot of it is really driven by community asks. It's very dynamic from that perspective. People ask for what they want.
Benjie: Is the community communicating on GitHub issues particular to the DockerSlim project? Or are there other places I should go to check out more of what's happening?
Kyle: So GitHub issues, GitHub discussions, the usual forums, and Discord. That's probably the most active one. Then we also have a Slim.ai community Discord server where there's a lot more going on in terms of use cases, discussions related to the podcast, and all of that. So if you want to focus more on that side of things, that's also a great resource.
Benjie: So the good Discord, I go to Slim.ai and at the bottom there I click on the Discord. That's the place to find it?
Kyle: Yeah, that's a good entry point.
Benjie: Awesome. The other question around all of that is, early on, you started this project. Was it just you, Kyle?
Kyle: So during the hackathon, the Docker Hackathon, there was a small team that worked together, about two and a half people, and then after the hackathon was done it was mostly me for a while. Then as people discovered the project, they filed issues, made suggestions, and contributed. So over time it grew organically.
Benjie: So a really nice open source story, if you will, of all that stuff. Now, I believe it's written in Go, is that correct?
Kyle: Yes, it's Go.
Benjie: Super cool. Then do you wish that it was written in Rust? Just kidding, I joke. Yeah, I'm just really looking forward to trying some of this stuff out. I know that when I took a look the other day, I was in an examples repo, I want to say, and I believe that there was a bunch of stats there that were talking about how much you can minify these base images to these smaller images, and that really caught my eye.
I obviously talked about it at the beginning of the podcast, but I think that's really cool too. I just want to point out to the listeners, I think it's worth... We deal with a lot of containers and those registries get really big, really quickly, so I'm super interested in just that, let alone the security component of all these things. Do you guys have any community meetings in particular? Or is that just more coming up?
John: We do pretty regular live streams on Twitch, and we also have a pretty vibrant program to interact with our community through Discord, et cetera. We travel around a lot, to KubeCon, we have booths there and such, we go to all these places. Lots of times developers join up with us there. In fact, Kyle and one of our other developers led a Docker Hackathon literally right at one of these events. So yeah, there's lots of ways to interact.
Kyle: Yeah, at the last KubeCon. KubeCon EU.
John: Yeah. So we're trying to help, teach and inform, and welcome. It's a pretty fun and open community, lots of stuff going on there. We have a community leader, he's full time trying to help make sure that our community is being nurtured and we try to build a lot of new content. We write a lot of articles and do a lot of live streams, et cetera. If you raise your hand or you come into our Discord, we're going to interact with you and we're going to give you as much love as you'll accept, and so, yeah, we're all about making people successful. We love interacting with people, we learn a ton all the time and it's fun.
Benjie: Cool. So just before we wrap up, I think it's always interesting to understand. You guys have raised some money, I believe. That's correct? You're venture backed, I believe.
John: We are, yeah. It's public information, we've raised about $41 million in financing in two rounds.
Benjie: Wow. Yeah, I didn't mean the exact numbers, but that's impressive, that's awesome. So right now you mentioned earlier that you're not monetizing, so as a company you guys are kind of just heads down, building up this community, building up this product and getting it as good as can be. Are there monetization plans on the horizon? And also if that's not appropriate, feel free to say, "Hey, we'll talk about that in six months or a year." But I'm curious to ask you.
John: I can give you some high level concepts there. I won't get into too many details. But as we've talked about, we started really investing a lot in the core technology and the open source, and that's really taken off and grown. It's got, I don't know, like 15,200 stars now. The project has just been really successful, tons and tons of downloads, I think more than a half a million downloads, et cetera.
So DockerSlim has certainly taken off and been a great project for us to really explore and learn, interact with great developers and just, I think, produce some really good stuff for the community. We then switched phases into building out the nucleus of our SaaS capabilities, and it's always been our intention to make that a companion to the needs of folks who've been doing DockerSlim in helping them go from a local tool that's mostly command line, that they can use to work on images in their local environments, to something that allows them to scale and interact with containers in a much more user friendly and integrated way.
We brought a lot of the analytics and tooling and parts of the core engine of DockerSlim in there, so you can start to perform workflows and use cases that let you build these automated, proactive security assessment, hardening and optimization flows. That free tooling, which you can go sign up for right now if you want on Portal.Slim.Dev, is widely adopted.
Thousands and thousands of users come there and it's growing faster every month. We're getting a ton of interest and adoption. In the Slim Discord we've got a mix of users from our free tier, our SaaS, and open source; it's kind of a group hug there across the board. We are of course working with, and you'll see it on our website, "Join and become our design partner," quite a few design partners who are working with us to evolve what I'll call our pre-commercial solution.
This is a more enterprise-ready, organization-ready solution, built from the SaaS that we've had out there for about a year or so. We're in that process right now, really working through and making the solution awesome, doubling down on a bunch of core value propositions and workflows, and really just experiencing and interacting with users who use it every day, plug it into these workflows, and trying to reach some of the objectives we talked about.
So moving into a more commercial period, or a commercial period, is our next phase. We've got some work ahead of us yet to continue to refine and optimize and expand our solution, and of course we want to build awesome products. So yeah, it's coming, I can't say when. But we are definitely moving in that direction.
Kyle: Yeah. We're trying to monetize the supply chain value for the enterprises, because there's a lot of demand there. There's one more thing I wanted to add, we have a Docker desktop extension which is also great for developers. It's a great way to experience Slim and get value locally, and also get some of the cloud value as well.
Benjie: Yeah, I know. That makes a whole lot of sense. The supply chain stuff's been on all of our minds for some time, especially with SolarWinds and all these other things that have been going on. But I think that's a really burgeoning space to be looking at. All right, guys. Well, it was really great having you on, and really appreciate you guys taking the time, and look forward to meeting you face to face, and watching where Slim and DockerSlim and Slim.AI go.
John: Thank you very much.
Kyle: Thanks a lot.