Guy Podjarny: Hello, everybody. Welcome back to The Secure Developer. Today I have a great guest with me. Simon Bennett, who is VP product at Bitnami. Welcome, Simon.
Simon Bennett: Thanks Guy, it's great to be here.
Guy: Simon, can you tell us a little bit about your background? A little bit about what Bitnami is? Maybe if we start from there, what is it that you guys do, but then also how you got into this world in the first place.
Simon: Yes. For folks who are not aware, Bitnami partners with the big cloud providers to make it easy to build solutions on top of those platforms. We work closely with some of the big players in that space and we provide prepackaged solutions, usually based on open source projects. They could be infrastructure, language runtimes, or complete applications around use cases like content management or continuous integration.
We help people solve business problems on top of those cloud platforms, so most people are familiar with Bitnami from the offerings that we have in the places like the AWS marketplace and the Azure marketplace.
We tend to offer solutions wherever people want to work. There's still plenty of people who are doing local development on a Windows laptop or a Mac, so we provide VMs and native packages for them. We also have a fairly significant selection of solutions targeting the Docker ecosystem, including Kubernetes. We tend to work across a lot of different technologies and platforms with the goal of helping developers and other types of users solve their business problems quickly and easily.
Guy: The cloud journey is not always as smooth or as easy as the brochure said. So, you try to make that a little bit smoother, a little bit easier.
Simon: Yeah. You asked about my background a little bit.
The rise of cloud platforms is the biggest revolution I've seen in my 20-plus year career in this industry.
I started out on the software development side of the house. I went to university in the UK, and the state of the art of software delivery when I entered the workforce was "How do we get something on the web and make it work?" The big Sun E-10K, E-15K boxes and app servers. A tremendously exciting time, but it was truly differentiating to get a company connected to the web and maybe a store front, which was how I got started.
Over the last five years in particular, the rise of cloud platforms and this new deployment and development model, focused on moving quickly, is absolutely a game changer. My work at Bitnami, which certainly involves working with the cloud providers and some of this more advanced technology, is a fantastic place to be.
Guy: There's a lot of conversation around how technology grows at an exponential pace. It has these elements where we think about the linear amount of what we can create, and every now and then there are these things that really jump up that exponent and make the curve bend that much faster. Cloud is definitely one of those, with that easy access to compute.
Simon: Absolutely. The adoption of these things is definitely not linear. Simon Wardley's comments around punctuated equilibrium definitely ring true to me, and the cloud platforms themselves have gotten sophisticated in terms of not just the IaaS layer, but the managed services and the security models. The things that you can do are truly amazing.
But you now need to take that raw technical capability and apply it to solving a problem, whether it's content distribution or scanning your software more quickly, or rolling out a new back end for a mobile service. That requires both deep knowledge of the cloud platform and of the pieces that you're building on top of from an open source point-of-view. Those are two areas where, by working closely with those cloud providers, we think Bitnami's expertise can help people be successful across all of those platforms.
Guy: Let's dig into that. At this point some listeners might be wondering if they're still on the security podcast or not. I love that we have this conversation; it's one of my favorite conversations to have with people who are not as explicitly in the security industry, but rather build the tooling for developers and companies to do the work successfully, yet have a big security emphasis.
What we're digging into in this conversation is this whole notion of images. Bitnami as a whole, and even product aside, from an expertise perspective, deals a lot with packaging software: bundling it together, making it accessible and available to users. This is clearly a practice that happens today as we assemble software. We assemble it in various ways, and within that world and within those packages lie serious security concerns. Let's unravel that a little bit. Maybe we'll start with just this notion of images. This is the conversation around golden images and base images. Can you give us a little bit of a rundown of what this image is in the world of cloud?
Simon: Yes. There's a couple of ways that I look at golden images, and in the bad old days it was a dirty word. There's a golden image and it's a combination of an out-of-date operating system, some drivers, and some other bits and pieces that came from my hardware manufacturer and it never changed. It was the lowest common denominator that was needed from a software stack point-of-view to make the thing work.
In the modern world where folks are targeting multiple platforms, they are definitely polyglot shops. They have a huge selection of application languages to choose from; it's no longer a Java and .NET world. I'm still learning about all of these new approaches to developing software. The definition of what a golden image is has changed dramatically. It's no longer the ISO you go burn onto a physical piece of hardware. It's "what's the standardized and blessed runtime that we want to build on top of?"
It could be delivered as an OVA, it could be delivered as a Docker image, it could be delivered as an AMI. Layering is a big innovation around golden images: layering each incremental improvement, one on top of the other. The other big change from the V1 of golden images, from my point-of-view, is the application of automation. They don't go out of date. The bad old days, where the change control committee would update the golden image once a quarter, just don't work. That's the opposite of what we need to both ship software and react to evolving security threats.
Automation is absolutely at the heart of making golden images work. You have to know what went into that golden image, who's responsible for it, and in a matter of a few minutes be able to create a new version, and get that into test. That's the baseline of what you need to do to deliver a secure and usable environment for your developers and your ops team.
Guy: Unraveling this a little bit: images remain the starting point. We're talking about images and we haven't introduced that yet--images are the starting point for a machine, or the starting point for an application. That dates back, I hadn't even thought about that, all the way to those CD images that we burned. I got stuck in this cloud world.
So, when you started out from those elements they were still golden images, they were just minimalist. But we're talking about two evolutions here. Layering, which we should chat about separately. But then also the depth and the proliferation, maybe, of the golden images. It goes from one golden image that is the lowest common denominator, to multiple--I don't know, are they more golden or are they less golden? They're shiny in some capacity or another--images that contain more content, and as such offer a stronger starting point for you to come along. But because they contain more, maintaining them well becomes a more paramount concern that you have to address on a regular basis. That's an interesting take; I see the historical context for those. Should we talk about layering then?
Simon: Yeah. Let's dig into the layering topic a little bit.
Guy: What is this layering of images?
Simon: The big innovation, and Docker certainly gets a lot of deserved credit for popularizing this notion although it's existed in other forms for a lot longer, is layers are a great way to separate different concerns and different areas of responsibility within your golden image.
At the root of security, from my point of view, is what goes into your image is a statement of who you trust.
There are folks contributing pieces to that golden image who are on staff; your operations team has opinions around hardening and which agents to use in production. But there's a whole host of other folks who you're implicitly trusting in rolling out an application: the wider open source ecosystem, where there's tremendous innovation going on around things like message queues and databases, and a lot of innovation in the Linux kernel. There's a lot of trust involved there.
Folks would have traditionally gone to an enterprise focused vendor like Red Hat for that, but there are more choices these days. Especially in a cloud world. Things like Amazon Linux 2, Alpine, there's a huge plethora of different ways to solve the problem at different layers in the software stack. Packaging all of those dependencies up into a golden image is a way to tame that tremendous complexity.
Instead of dictating "our golden runtime is going to be this specific version of Java," you can be much more flexible. We have a preference for an ABC language runtime--Java, .NET, Go. You can go outside of those boundaries, but here's where you would slot in your layer if you wanted to go and experiment with Node, or Scala, or Rust or something like that.
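The separation of concerns Simon describes maps naturally onto a Dockerfile, where each area of responsibility becomes its own set of layers. This is a minimal illustrative sketch; the base image, runtime package, and team ownership noted in the comments are assumptions for illustration, not a specific Bitnami recommendation.

```dockerfile
# Hypothetical layered golden image -- names and versions are illustrative.

# Layer 1: the blessed OS base, owned by the platform/ops team.
FROM alpine:3.18

# Layer 2: the preferred, pinned language runtime (Java here; a team
# experimenting with Node or Go would slot its runtime in at this point).
RUN apk add --no-cache openjdk17-jre

# Layer 3: ops-team opinions, e.g. running as a non-root user so that
# everything built on top inherits the hardened default.
RUN addgroup -S app && adduser -S app -G app
USER app
```

Because each `FROM`/`RUN` step produces a distinct layer, the teams responsible for the OS base, the runtime, and the hardening can each own and update their piece independently.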
Guy: It's trust and constraints that are rolled into that, as well as enabling some flexibility. The layering allows you to declare who you trust, but you'd also maybe demand of someone that they create this base image that includes a lot. So maybe this is the evolution: you used to be at that lowest common denominator, where you got next to nothing. You had a template that was very minimalist and you had to install everything on top of it; but if you went all the way to something very deep, then you got constrained environments that forced Java, whatever--x, y, z.
Now with layering you're able to, in theory, get the best of both worlds. You can use multiple lowest common denominators and layer the different runtime capabilities on top of them. You're right that Docker popularized it, but fundamentally golden images are a layer in their own right. In a machine that is running, there is the layer of the golden image and whatever it is that you've installed on top of it.
Simon: From a consumption point of view, the layers are not always visible. If you're using a Docker-based environment, the layers are explicit and you can do interesting things from a security and introspection point of view. But even if you're working with OVAs or AMIs or other sorts of virtual machine images, there's a lot of value in thinking about layering, even though it's not necessarily manifest in the final consumed artifact.
At the end of the day, the great thing about golden images is the API is super simple. You just use it. Start it up, whatever that means in your environment, whether that's a docker run launch. The friction that you are introducing for a developer or analyst is really minimal--that's the big payoff of adopting a golden image based approach.
Guy: Once you have layers, what is the difference between a golden image and a layer? If I had five layers or whatever--Alpine, on top of that Java of a certain version, and on top of that some application server--each of those things is a golden image in its own right, no? Is there a difference between--whatever, a golden layer? I don't know if such a thing exists. Maybe you should coin this: a golden layer versus a golden image?
Simon: I'm not quite sure. I'm not sure if golden layer should be a term--let's make it a term of art right now. The way I would think about it is that the layers further up are taking implicit dependencies on the layers below. A layer that's putting some CIS-inspired hardening in place is going to be somewhat dependent on the layers that go below it. Good engineering practice says "minimize those couplings," but in most cases there is a dependency between how I manipulate how this particular daemon is going to operate in production and where that config file lives, to take a really simple example.
In theory I can swap out my RHEL base image, or base layer, and swap Alpine in. In practice, life is a little bit harder than that. It's an interesting idea, especially for things like static analysis: you can take one approach to a layer in your stack and ask how it compares to another one. But at runtime, it's the whole image--the combination of all the layers and everything that's needed to make a running deployment--that is really the unit of consumption.
The other thing that is interesting from my point of view is, how does this intersect with some of the more modern function-oriented approaches? If you look at something like AWS Lambda through the lens of a layered golden image, what they're doing is cutting a couple of the bottom-most layers off. Where you were implicitly trusting Amazon if you were building on top of Amazon Linux 2, or maybe the Debian maintainers if you were building on top of Debian.
Something like Lambda takes those bottom most layers and says "You were trusting us already. How about you just consume this directly from us via an API?" That's a very interesting consumption model. Because layer sprawl is a thing. These things get much more complex over time, and as we know complexity is the enemy of security.
Guy: Yes, if we can't understand it--all the more reason to coin the golden layer term. It's a great perspective for thinking about containers and Lambda layers. Really all they've done is cut a couple of layers off from below, and maybe added some wrapping or packaging on top. So, we have these images.
We've been dancing around security for a bit. We talked about trust and how we have these golden images on which we build our applications or that serve as the starting point for our application. We've composed them in this fashion or another based on how much trust we put into each of those. We have these two paths for security to talk about. One is about how do you keep the images themselves secure?
The second is, what are the security implications of those images? How do they help, and how do they hurt, when it comes to keeping your application secure? Let's start with the former. We have these images, we build applications on top of them, and we trust the images to be secure. How do we keep these images safe? What's involved in doing that? Is it easy, is it hard?
Simon: It's pretty significant in that it's a continuously moving target. The first benefit of combining all of these pieces into a golden image is you know what's running. This is a pragmatic solution to the problem called "How do I understand what the dependencies of my dependencies are?" And keep going down the chain. It's dependencies all the way down. But if you have a golden image which has been pre-configured and pre-built, you can fairly accurately answer that question in a definitive way.
This is a huge benefit because you now know from whom you need to be receiving that update information. When a security vulnerability is found, how do I know that I need to take action on it, even at the most basic level? Are we paying attention to the right mailing lists from an open source point-of-view? Where I'm pulling a component from a team that's inside the company, is there a process in place? Are they providing internally focused release notes? Do they include CVE information? If not, should they? Do they know how to reach me on a Sunday night?
Because we all know that these things happen at the least opportune moment. The other big benefit of golden images, given this "what's in the box?" manifest, is it helps you prioritize the most impactful things that we need to be paying attention to. When these golden images are running, they're running in the context of a production environment, and not all of the things that go into an image are equally important from a security point-of-view. It makes it possible to say things like "Let's focus initially on components that are remotely accessible over the network. We've done our due diligence around perimeter security and segmenting our infrastructure, but based on our threat model this is a logical place to start." Maybe a lot of the other things we can defer, and focus on the most valuable things first.
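The prioritization Simon describes can be sketched in a few lines of code. This is a minimal illustrative sketch: the manifest structure, component names, and the `listens_on_network` field are assumptions for illustration, not a real SBOM format or Bitnami tooling.

```python
# Illustrative sketch: given a manifest of what went into a golden image,
# triage network-exposed components first, per the threat model described.
# The manifest shape and field names here are hypothetical.

manifest = [
    {"name": "nginx", "version": "1.24.0", "listens_on_network": True},
    {"name": "openssl", "version": "3.1.2", "listens_on_network": False},
    {"name": "redis", "version": "7.2.0", "listens_on_network": True},
]

def prioritize(components):
    """Return remotely accessible components first, deferrable ones after."""
    exposed = [c for c in components if c["listens_on_network"]]
    internal = [c for c in components if not c["listens_on_network"]]
    return exposed + internal

for c in prioritize(manifest):
    action = "review first" if c["listens_on_network"] else "defer"
    print(f'{c["name"]} {c["version"]}: {action}')
```

The point is not the ordering logic itself, which is trivial, but that having a definitive manifest is what makes this kind of triage possible at all.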
Guy: This is from the perspective of how using golden images helps your security stance. Golden images offer constraints, and it comes back always to this notion of "the bigger the entity, the less flexible it is." Once you've crowned an image golden, or whatever you've called it, you've entitled it to some special attention.
But it also means that you gave it some special privileges within your system, because you can say, "OK. For this golden image, because it is only allowed maybe to use these components in this fashion or because I know all these extra things around it, that I can further refine how do I prioritize security concerns around it? How do I think about securing its perimeter? Securing around it, etc."
That's always the balance. Golden images in that respect serve as a control mechanism for maybe the security side of the fence within our organization, to say, "Go on and embrace all these different layers and all the mess, and compose them. But I want you to reduce it down to this manageable number of golden images. Then those I can cope with. Those I can help you secure."
Simon: The other thing that doesn't get talked about enough is that the concept of a golden image implies a binary-ness. Either it's golden or it's not, and certainly for the companies that we've been working with who are targeting some of these platforms, life is not binary. It's more common to have a staged graduation process in place for all of their functionality, and this is not just from a security point-of-view.
The bar for getting deployed into production is pretty high, and gold is the required standard. If you would like to deploy into a staging environment, there's a silver image. The level of review isn't exactly the same, but that reflects the fact that the risk to the business is different. Maybe you have access to some privileged systems, but what's running in the staging environment has been purged of customer PII. The bar can be lower, to strike a better balance between control and letting a thousand innovative flowers bloom.
Guy: To an extent it's also about the security constraint, but the silver image can also be sufficiently similar to the gold to believe that functionally, whatever works on silver would work on gold in the vast majority of cases. But it allows for debugging, or all those components that you don't want available in production quite as much. So we have these gold, and graduating-up-to-gold, type images--building all sorts of taxonomy here: silver images, bronze images, bronze layers. Using them is one aspect, one advantage from a security perspective.
The other thing--have you seen cases where security controls are built into the images? Can you have layers of security whose job is hardening? Or do you typically see those as more trimmed-down layers?
Simon: How people consume that definitely varies by technology. The folks that we are talking to are either consuming our marketplace images or working with tools like Bitnami Stacksmith. They're definitely looking to layer in additional customizations.
It's pretty unusual for somebody to take an existing image and consume it as-is. There are always considerations driven by security needs or operational needs; things like monitoring come up pretty often.
Guy: Those are layered on top. That's one aspect: you start with whatever, an Alpine or Ubuntu, and then you layer on monitoring. Do you see it also work for hardening? Would you add a layer to tweak or diminish a previous layer?
Simon: Yes, we certainly do see that. It tends to be more configuration fine-tuning for hardening than wholesale replacement of pieces, I would say more often than not. The other aspect of customization that we're seeing a fair amount of is integrating into existing systems. Whether that's plugging into an IDS or plugging into an identity management system, these are pretty common things that just need to work everywhere within the organization from an operations point-of-view.
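A hedged sketch of what such a fine-tuning and integration layer might look like, building on a hypothetical internal golden base image. The registry name, file paths, and config files are all assumptions for illustration.

```dockerfile
# Hypothetical customization layer on top of an internal golden base image.
FROM internal-registry.example.com/golden/nginx-base:1.4.2

# Configuration fine-tuning rather than wholesale replacement:
# overlay a hardened config (e.g. server tokens off, strict TLS settings).
COPY hardened-nginx.conf /etc/nginx/nginx.conf

# Integration with existing systems: monitoring and log shipping agents
# that need to work the same way everywhere in the organization.
COPY monitoring-agent.conf /etc/monitoring/agent.conf
COPY log-forwarder.conf /etc/log-forwarder/forwarder.conf
```

This mirrors the pattern Simon describes: the base layer stays intact, and the organization's security and operational opinions ride on top as a thin, reviewable layer.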
Guy: That's pretty cool. We talked about two security advantages of using these golden images, one is the control and the second is that you can actually use them as a mechanism to introduce integrations into security systems or other forms of hardening or introduction of security capabilities on top of them. Those are two pretty significant capabilities.
What about securing the images themselves? Those are advantages to why you should use them, and we'll talk about challenges maybe in a sec, but how hard is it to keep them secure? If you were to enumerate a handful of best practices around keeping the images themselves secure, maybe ones you practice yourselves at Bitnami, what would they be?
Simon: The first thing we've learned is important in adopting golden images I already mentioned in passing, which is automation. Golden images themselves, depending on which ones you're consuming, are not generally small things. They're pretty significant in that they're an entire runtime and all of the things that depend on it. So automation from the get-go, even if it's with a simple point solution, has disproportionate benefits down the road. In any environment where you're looking to deploy software of this complexity, doing it once manually to get a feel for it is about the limit of the number of times that you should be going through that process manually.
It's just good hygiene: what are the steps that it takes to build an image? Is the environment in which that image gets built known good? Is it clean? One of the biggest challenges in putting golden images into practice at scale is making sure you have good change control over the environment, the context the images get built in. If you put all the same ingredients in but you're using a different kitchen each time, you're going to get a slightly different result, to use a terrible analogy.
The other big area that Bitnami has invested in internally, which is an extension of automation, is the testing part. Automatically applying testing, both to make sure that best practices are applied and to ensure that the golden image is fit for purpose--end-to-end type testing is a huge time saver. We have a widely, globally distributed team, and the fact that somebody can kick off a speculative change to a golden image and not just get it built and delivered to the right environment, but wake up the next morning to a set of results which shows "these functional tests passed, and we may or may not have a couple of regressions down here," or hand that across to a team in the next time zone, is hugely valuable.
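The build-test-deliver loop described above could be wired up in any CI system; this is an illustrative GitHub Actions-style sketch, not Bitnami's actual pipeline. The repository layout, image names, registry, and test script are all hypothetical.

```yaml
# Hypothetical CI sketch: every change to the golden image definition
# triggers an automated build plus end-to-end tests. Names are illustrative.
name: golden-image-pipeline
on:
  push:
    paths: ["images/base/**"]
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build candidate image from a known-good, clean environment
        run: docker build -t registry.example.com/silver/golden-base:candidate images/base
      - name: Run end-to-end functional tests against the candidate
        run: ./tests/e2e.sh registry.example.com/silver/golden-base:candidate
      - name: Publish to the staging ("silver") registry on success
        run: docker push registry.example.com/silver/golden-base:candidate
```

The point is the shape of the loop: a speculative change produces a built, tested, delivered candidate with no manual steps, so results are waiting for whoever picks it up next.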
Guy: That's a little bit more specific also to your environment.
Bitnami, in one form or another, can provide you with a whole horde of different golden images or equivalents. But I think inside an organization, you're likely to see a little bit less--smaller numbers, if you will. Although per the previous conversation, the golden images are not quite as small or as few as they used to be.
Simon: That's definitely possible. Bitnami is a little bit unique in that we're working across many different public cloud providers, but in few of the conversations I'm having with large companies are they working with only one or even two platforms. It's very much "we have a selection of different platforms we want to target for business reasons, some around cost and some around existing partner relationships." Their IT infrastructure is getting more complex from a vendor management point-of-view. The delta is maybe a little bit less than you might think.
Guy: A different way of saying it is that these golden images, you're going to rely on them and they're going to go forth and multiply because all of the different apps are going to build on them. The advantage is you need to secure fewer things. But the demand or the requirement is "you better secure things well, because they will propagate."
I love that analogy. Think about OpenSSL and Heartbleed, for instance. Heartbleed was a bad vulnerability--hardly the worst vulnerability that we've seen; we've seen far worse. But the reason it was such a big deal was that OpenSSL was everywhere.
This vulnerability naturally existed in many places. In that sense, when you're packaging software, whether you're packaging it through a library that you consume or packaging it from underneath by building on a base image, in both cases the value in those base images multiplies. But whatever security problem--if you manage to get a malicious component in there, or a vulnerability in any of those components--multiplies and expands its reach as well.
Simon: The pragmatic view here is, whether it's something like OpenSSL or somebody pulling in a package that had not been appropriately security vetted, these problems are going to happen. There's no amount of automation or upfront thinking that can protect you from that. But think about your software delivery pipeline as having a series of checks and balances that are independent. Even though you've hardened your build environment and you're only pulling from trusted places, there's still a tremendous amount of value in doing static analysis and in having a runtime security module as part of your infrastructure, because it helps you answer "what should the right remediation step be for my business?"
The answer is not always "rebuild the image and deploy it everywhere." Have an engineer in the loop to say, "Yes, on paper this is a critical vulnerability; yes, the OpenSSL package is present on our golden image, but it's not listening on any interfaces. We're using it for certificate generation, or something else." Let somebody, preferably the application developer because they have the domain expertise, step in and say: we have the ability to update this package via a golden image and we can do it quickly, but in this case, we don't need to.
Guy: Because we have the knowledge.
Simon: Yes. We can make a decision quickly and react appropriately. The challenge for any business is "Are we overreacting?" An overreaction is good in that it's solving the problem as reported, but is it resulting in good decision making and good resource allocation overall?
Guy: Yes, and sometimes it's urgency.
Simon: Severity and urgency are different.
Guy: Let's talk a little bit indeed about the ownership, and maybe this is a good way to advance this. We have these images, they're good and they're built and you've hopefully done a decent job securing them, and you're doing that in a continuous fashion. But then there's these organizational boundaries. One team is building or providing those images, and maintaining them--some infrastructure team, some central team. Then you have developers within different applications that are building on those images. Now a vulnerability comes along and you find out that app X is using golden image Y, and it has this vulnerability in it. What happens next? What do you see happening inside organizations? Who responds? Who gets the alert? Who needs to be woken up? Who has that conversation you described? Who's responsible, if you will, for fixing that?
Simon: From the discussions that I've had, it's very contextual to the organization, both in terms of what the business drivers are and what their org chart looks like. It's hard, and frankly one of the things I enjoy about my job at Bitnami is mapping that landscape out a little bit. There are different types of companies depending on how they're looking at the cloud opportunity broadly. There are definitely some places for whom developers are an extremely scarce resource, and they have organized around maximizing developer productivity. Those tend to be the folks that are much more sophisticated from an automation point-of-view.
For folks who maybe don't have a deep bench in application developers, they tend to be interested in solutions that are consumable by existing teams that are not necessarily aligned to the organization. Whether the security team and the operations team is one team or two can be foundational here. Who's running the infrastructure? There's an important angle here that we haven't talked about which is in any large company, that infrastructure is operated by a lot of different groups, and some of those groups are not employees, but they're service providers or outsourced service providers. That adds a whole new layer of complexity. It's another organization to trust in delivering that functionality.
Returning to your original question, it's hard to answer in a general way, because it's so specific to how each company is run and how it's organized. The thing that is common across the folks that I've been working with is that they are all looking for ways to move faster. In many cases adoption of the public cloud, at least early on, wasn't necessarily a central mandate, but it helped people understand that concepts like self-service, API-driven services, and time to instant gratification are a powerful way to influence the behavior of their internal stakeholders. That's the big opportunity, whether it's a platform layer, a golden image, or a managed service on top. The person who can deliver it in 45 seconds or less is going to see a huge amount of adoption.
Guy: Fundamentally, I take your point about the complexity, and how it changes per organization and who you have there. But maybe a valid point when you think about the security of these images is that we often talk about how we get going: we create that golden image, we secure it, and we roll out our app on it.
But when we talk about a response, we don't always talk about the maintainability of the app that builds on top of it. Maybe the takeaway from a security perspective is to say that each organization is its own unique snowflake, but you have to think through the steps that say "OK. Vulnerability is going to come along. You built on this golden image. Who do you expect to respond?" Or, "I rolled up a new version of that golden image. It fixed a bunch of vulnerabilities. What do I expect my developers to do?"
I don't know if you see a change, but that last statement, for instance, plays out differently between cloud images--AMIs, if you will--and containers. Because in the context of an AMI, you can change the AMI, and a central team can choose to reboot an instance and it will launch on the new AMI. While in containers, you've pulled in from a base image; if you shut a container down and run it again, the same thing is going to run with whatever vulnerable components are in there. There are some interesting concepts in the Google-managed layers, or maybe Lambda layers: they're taking a portion of those things out of your control, and in exchange they can manage it for you. But fundamentally you have to think through how those things work.
Have you seen, to cap it off, we're going a little long here. But to focus specifically on that last question, have you seen a difference in how those indeed golden images get managed in production when they're a Docker layer vs, shall we say, an AMI or some other cloud VM?
Simon: Yes. The tooling around the container pieces is quite different. Down at the lower level the lifecycle of these components is different.
In a container-based world, we're not talking about managing hundreds or thousands of running instances of an application; it's tens of thousands, hundreds of thousands, millions. We added a comma to the number of things, and that brings a whole bunch of complexity.
This upcoming set of tooling, particularly in the Kubernetes ecosystem, is interesting because it's been designed to deal with that scale problem from the get-go. A simple canary, blue-green, or rolling update where there's a human in the loop just doesn't work super well at container scale. Making use of automation on the deployment side, and some of the tight-loop principles, is still early, but it shows great promise.
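The automated rollout Simon alludes to is what Kubernetes Deployments provide natively: the controller replaces pods gradually with no human in the loop, gated by health checks. A minimal sketch (the names, image tag, and numbers are illustrative):

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 50
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 10%   # keep at least 90% of replicas serving
      maxSurge: 10%         # allow up to 10% extra pods during the rollout
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:2.0.1  # bumping this tag triggers a rollout
          readinessProbe:                        # gate traffic on health
            httpGet:
              path: /healthz
              port: 8080
```

Updating the image tag, whether by hand or from a CI pipeline, causes the controller to roll pods over in batches, and the readiness probe keeps unhealthy pods out of rotation. The same declarative spec works whether there are fifty replicas or fifty thousand, which is the scale property Simon is describing.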
Guy: Yeah. Containers are our brave new world in many respects--sometimes maybe leapfrogged by Lambda. But automation, as we grow from the scale of the CD to the scale of the VM to the scale of cloud images. Containers are the next step, the next level of scale. In all of those, automation is key. Understanding the organizational boundaries and who's going to do what, and when, is something you have to figure out in your surroundings.
Simon: From a prioritization point of view, if you're moving from thousands to millions of things, automation moves from the "nice to have" to the "not optional" category. As a product person, that's a key distinction.
Guy: Of course. This has been fun, and we've been going long because of that, so it's a good conversation. Before I let you be on your way back to the day job, I like to ask every guest that comes along: do you have one piece of security advice, or some pet peeve of a security thing that's bugging you? Just one piece of advice for somebody looking to level up their security caliber. What would that recommendation be?
Simon: I don't know that I can dispense security advice. I'm definitely not a subject matter expert, but I've learned a lot from your podcast. Thank you for putting that together.
My key takeaway is that software, at the end of the day, whether we're talking about it from a security angle or a delivering-features point of view, is a team sport. It's all about the people. As leaders within our organizations, it's about making sure that our people have the best tools and are empowered to deliver at a high level, especially when it's in an evolving space.
Like containers or Lambda. We tend to talk a lot more about the tools and platforms and the technical stuff, but the constant across all of these generations of tooling is the people. The more we're able to engage and empower those folks, the better the outcomes we'll be able to achieve.
Guy: That's awesome advice. Simon, this has been great. Thanks a lot for coming onto the show.
Simon: My pleasure. Thanks for having me.
Guy: Thanks everybody for joining, and I hope you join us for the next one.