
Ep. #50, Building Sandboxes for AI Agents with Ivan Burazin
On episode 50 of The Kubelist Podcast, Marc Campbell and Benjie De Groot sit down with Ivan Burazin to explore the rise of sandbox environments for AI agents, how Daytona enables instant, stateful compute, and why traditional infrastructure models fall short. Ivan also shares lessons from building early cloud IDEs and finding product-market fit in the AI era.
Ivan Burazin is the co-founder and CEO of Daytona, a company building runtime infrastructure for AI agents. Before Daytona, he co-founded Codeanywhere, one of the first browser-based IDEs, and has spent over a decade innovating in developer tools and infrastructure. Ivan is a frequent voice in open source and AI engineering conversations, advocating for accessible and secure computing for both humans and agents.
transcript
Marc Campbell: All right, welcome back to The Kubelist Podcast. Today we have Ivan Burazin, co-founder and CEO of Daytona, to talk about sandbox environments, which are really growing in popularity. Really excited to talk, Ivan.
Ivan Burazin: Likewise. Great to be here.
Marc: One of the ways we'd love just to get started is to hear your background. Like, think back to the early days when you started using computers. Start there and we'll build the story.
Ivan: Sure. Video games are always the start of things. And so mine was an Atari, actually. That's sort of aging myself here. So that's where I started. I always wanted to build video games, started getting into computers, and then, you know, what's the thing-- You start with video games and you end up doing B2B software after that, directionally.
Not exactly what we-- I say we because me and the co-founding team of Daytona have been together for quite, quite a long time. And so we actually started a bit different. We did do programming, we did websites. We created our own CMSs back in the day, late 90s, early 2000s. But where we actually created our first company was actually stacking data centers and server rooms.
And so that's the origins of where we started. Basically there was an opportunity in the market, we did that, and we did that for a few years, ended up selling that company. And the reason we did is my co-founder stumbled onto a new idea which is browser based IDE. Right? And so we created the, I believe the very first browser based IDE in late 2008. So a very, very, very long time ago.
I understand that the co-founders of Heroku were probably the first, first ones to do it, but then they, you know, pivoted to Heroku. But we were the only ones after that that ended up doing it for a long, long time. And so we sold our first business to focus on that. That was super early. But the interesting thing was the first company we did was actually physical servers, screwing them together, networking, virtualization, all these things.
And the second company we did, Codeanywhere, was-- There was no VS Code, so you had to create your own editor. There was no Kubernetes, so you had to create your own orchestrator. The isolation layer, I think Docker had just come out, but we were using something called OpenVZ at that time, so we had to build a lot of the things ourselves. All of these things sort of pile into what we're building on today.
Benjie De Groot: So, Ivan, Ivan, I want to go back for one second. Where did you build this? The data center stuff, where did this start? Was this in Croatia or was this--
Ivan: This was in Croatia. So as we talked in the beginning, I was born and raised in the suburbs of Toronto and my parents are Croatian, hence I'm Croatian. In Croatia I went to high school, did university, and started my first job and my first business. So stacking these servers was in Croatia and Codeanywhere was in Croatia as well.
So even Daytona was basically founded while we were here. In the meantime, we had lived in the States, flew a lot back and forth. A long time in Boston, then a while in New York, and now ultimately San Francisco again, just because that is where, you know, everyone building AI agents is. And so it just seemed like a large opportunity cost not to be there. So yeah, it was in Croatia, to answer your question.
Benjie: So you were racking and stacking. So this is like early data center stuff in, I guess that's Eastern Europe? And I know that just from a little historical standpoint, obviously there was the war in the late 90s and so Croatia kind of had this renaissance after that. And so I know that there was a big animation, like outsourced animation thing going on in Zagreb at the time I think.
So who was using your data centers and how big were they? I just think it's really interesting, you know, where we have terawatt data centers coming online versus you know, you were probably racking, stacking yourself--
Marc: Data centers on the moon.
Ivan: Yeah, it was much smaller, it was much more humble than what we're looking at right now. Right? And so these were for the most part like CPU boxes, which, interestingly enough, is what sandboxes are for the most part. And so again, like Dell boxes and spinning up VMware and whatnot. The consumers of this were usually like banks and large telcos and things like that.
To be fair, just for the Croatian history here a bit, right now there's about three companies worth over a billion in Croatia. One of them is Infobip, which I worked for for a while. They acquired one of my companies which is a direct competitor to Twilio. So they do a little over 2 billion a year right now. They're based out of this village in Croatia, which is interesting.
Then you have, for the car lovers here, Bugatti is now a Croatian company, in case no one actually knew this.
Benjie: I thought it was French. Isn't Bugatti French?
Ivan: It was until a Croatian company acquired it. So one of my friends, Mate Rimac, who makes one of the fastest hypercars in the world-- His company Rimac Automobili actually acquired or merged with it, I don't know the technical part of it, but he is the CEO of both companies right now. So if you see, you know, people driving a Bugatti, that is now under the umbrella of a Croatian company. So that's an interesting one.
He has a second company which is called Verne, which is a self driving car company which is also, I think, valued over a billion. And then recently, I think it was like two years ago, Google acquired one of Croatia's startups called Photomath. The public numbers didn't come out, but it was on the order of half a billion dollars or something like that. So there's a bit of things happening in a country that is, for people that don't know, 3.8 million people. So it's actually quite a small country.
Benjie: I love Croatia, I've been there a few times. One of my favorite spots. But I think the interesting thing there is like you said, it's 3.8 million people and you have your racking, stacking data centers. Obviously you said banks and stuff like that. So I would assume a lot of that has to do with, you know, PII and, and EU regulations. But this is early. This is 2005, 2006, 2007?
Ivan: Yeah, you're right. Actually it started I think 2004, 2005, something like that. Yeah.
Benjie: Wow, that's really cool. Okay, so how long were you doing that? How long were you building out data centers for?
Ivan: I think it was about four or five years.
Benjie: And then you went straight into a cloud IDE in 2008.
Ivan: Yep.
Benjie: Was it web based or was it open source?
Ivan: Web based. Yeah, a web based IDE. So think like Replit before Replit was what it is today, or Codespaces, before those existed. So we built that in the late 2000s and went full on. I think in 2013 we got an investment from Techstars in Boston and then we moved there for a while, and that's sort of where we kicked off the Silicon Valley learning, about what investors are, how to grow companies and things like that.
Marc: So yeah, and building that had to have been like super different, right? Like somebody doing it today has the advantage of VS Code and Monaco. And it's like just drop an NPM component into a website, you kind of get it, right? So you're building this the hard way.
Ivan: We built the whole thing, yeah. And the interesting part, it's like, oh, you're competing against, I don't know, Sublime Text or whatever it was, and VS Code did not exist. And then it's like, oh, we need plugins. It's like, oh yeah, how do we create a plugin ecosystem like that? Like just solving syntax highlighting and connecting to all these different things, a remote sort of box to get this up and running.
So yeah, we built it all internally. So it was quite different. And also the speed of things was different. You could mess around for like three, four, five months and try to figure something out. Today you don't have that opportunity. Like today, if you don't do it, someone else will. And there was no AI of course, so you had to do it by hand.
Benjie: I mean this was JavaScript. That predates Node, I think, some of that--
Ivan: Some of that does. So that we started using Node at some point. But yeah, it was very new at that time.
Benjie: Wow. So 2013 you moved to Boston for Techstars for this web based IDE. You said that you kind of got introduced to VC life, North America, how to grow a company, all that stuff. What happened to the code editor company? What did you learn? How did it become the next thing?
Ivan: Sure, I mean we learned a lot of things, like what is a user, what is a buyer? By the way, to be clear, dev tools as an investable market really wasn't a thing that much at that point in time. After that, I think it was 2015 or whatever it was, when GitHub got acquired by Microsoft for 7 billion? What was it?
Benjie: 9.5? I think.
Ivan: 9.5. There you go. I was in the ballpark, right. I forgot the number. But anyway, after that it's like investors were like, oh, you can actually make money from dev tools. So that was quite early. The other thing is we sold directly to the end developer, which is not how you make money. You have to sell to businesses and developers have to bring it in. So we didn't have that.
We didn't have an on prem solution, which, for that time in the world, you had to have. So we learned a lot of things on that. Like the company itself made revenue and whatnot. We raised a bit of money, but we didn't know how to tell a story of vision, connections. We didn't understand how human network effects actually make a difference when you're raising capital.
And so we learned a lot of that in a very, I'd say, hard way. We, for better or worse, were quite stubborn. So we pushed that on for quite a long time. It just wouldn't die. I mean, it still exists. Someone else runs it, but it still exists. It's still alive somewhere on the Internet. So it's interesting, a company like that.
The interesting thing that you also learn is that even though something doesn't seem actually dead, it actually, you know, it might be dead and probably better off to move to the next thing. And so all of that stuff was applied in Daytona when we did it. Like we actually pivoted at one point in time. We were doing pretty well and we're like, "no, this is not the thing. Kill it. Like literally kill it. Move on to the next thing."
Benjie: What year did you guys sell that company?
Ivan: I mean, we sold it fairly recently, like a few years ago, it wasn't that much. We kept it for quite a while. We ended up paying back our investors. Techstars had a handful of angels that put in money. There was like a million dollars invested in Codeanywhere and so we paid all of that back, and then it generated a bit of revenue. And so it just stayed for a while until we decided to move it to the next thing.
Benjie: Tell us who else is on the team, because it sounds like you guys have been together for a while.
Ivan: Yeah, so my co-founder and CTO Vedran has been with me-- Like, we met while I was still doing the data center and server room stuff. And then we created the Codeanywhere company together. And then our first engineer, Goran, who's now our chief architect and third co-founder of Daytona. And then after Codeanywhere, I ended up creating this developer conference and that got acquired and sold. And then we came back together, the three of us, to do Daytona.
So me and Vedran have been together since, like, officially working together since 2008, but known each other for a longer time. So we're talking like two decades there. And then Goran is probably a few years less with us. And then the entire team of Daytona right now, there's a few new ones, but the vast majority have worked with me or them in our previous ventures.
So the average tenure of working together in Daytona, or I should say the median, is probably like six years, something like that. So it's like a high context, high throughput organization, because we've all known each other for a very long time. So there's a lot of, you know, f-yous and whatnot, but in, like, the loving and hating way. So that's great. I think that lets you move faster.
Marc: Yeah, it definitely does. And you're all distributed and remote?
Ivan: We have three offices, actually. So we have an office in Zagreb, an office in Split, an office in San Francisco. So people are all in office, but in one of those three pods.
Marc: What year did you start Daytona?
Ivan: Three years ago now.
Marc: Three years ago. Okay. I'd love to hear a quick explanation of what Daytona is, then go back to the inspiration, the origin, like, why. Why you decided this was a problem you needed to solve three years ago. It seems kind of-- Everybody's doing it now, but, like, three years is a long time, a lot of insight and foresight into building it.
Ivan: Sure. We didn't start like this three years ago, so we'll get into that.
But basically, Daytona the product now is what the market calls a sandbox provider. Essentially, what we are trying to serve or enable is composable computers for agents.
And why I say composable computers-- First, I think of it as a computer in general. Like as humans use computers, agents will use computers. It's not just for code execution, it's for, like, installing applications, for doing all these things. We can get into use cases, but essentially, if a human needs a computer to do something, there's a high probability that an agent will need to do that as well. It's not exactly one to one, but it's a high probability that it will be.
And then I say composable because as humans you have different computers for different use cases, right? So you might have like a MacBook Air for like answering emails and traveling. You might have this big GPU enabled Windows machine to do 3D rendering or whatever it might be. Like you have different use case, different computers for different use cases.
And so we essentially enable an agent, or the system for the agent, to say, oh, for this use case I now need, you know, four cores, 16 gigs of RAM and a Windows operating system. Or I need a Linux operating system and a GPU inside of it, or whatever I might need. And Daytona will spin up that configuration-- It depends on the operating system, but for Linux it's like 60 milliseconds, which is half the time it takes you to blink.
A Windows operating system takes about one second to get up and running. And so there's different speeds depending on what you need. But essentially enabling an agent to do the task that is at hand.
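For a concrete picture, here is a rough sketch in Python of the kind of spec-driven request Ivan is describing. The names are purely illustrative, not Daytona's actual SDK, and the boot-time numbers are just the rough figures he quotes (about 60 milliseconds for Linux, about a second for Windows).

```python
from dataclasses import dataclass

# Illustrative only: these names are NOT Daytona's real API. They just
# model the "composable computer" request described in the conversation.

@dataclass
class SandboxSpec:
    cpu_cores: int = 2
    ram_gb: int = 4
    os: str = "linux"   # "linux" or "windows"
    gpu: bool = False

def estimated_boot_ms(spec: SandboxSpec) -> int:
    """Rough spin-up expectations from the conversation, per operating system."""
    return 60 if spec.os == "linux" else 1000

# An agent asking for a bigger Windows box for one specific task.
agent_box = SandboxSpec(cpu_cores=4, ram_gb=16, os="windows")
print(estimated_boot_ms(agent_box))  # 1000
```

The point of the sketch is only that the caller names a configuration and the provider picks how to satisfy it; the actual SDK surface will differ.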
Benjie: Does that include GPUs in this world or are you guys mostly CPU?
Ivan: Yeah, yeah, so we're mostly CPU based. We're introducing GPUs now as well. But it's a different use case. So it's not GPU for inference. It's like the reason why you have a GPU on your computer. So is it like 3D rendering? Is it playing a video game? Is it like, whatever you have a GPU for on your computer?
And again, there's some computers that just have an onboard graphics processor used for whatever, but then you have like a big one for something else. So if you're using Blender for 3D animations, you probably need a GPU to get that done, right? And so you need a sandbox or a computer with that enabled inside of it.
Marc: So it's really like the compute that the agent wants to be able to do something on. And it turns out the agent already has an inference provider because that's how the agent is running.
Ivan: Exactly. The inference provider is essentially the brain of the agent or whatnot, or the agent itself. It's not the agent, but the brain of the agent. And if the agent has to do a job, then it spins it up. So it's equally-- Like the three of us are now in front of our own computers. It's the equivalent of that. It is literally just that. Right?
And so I digress right now, but what I'm doing is trying to spin things up, and I now have a sandbox with a Mac in front of me right now, spun up, and I have a phone and I have a SIM card in it, and I'm actually connecting all that for an agent. It has its own email, has its own account. Like at Daytona, we now have agents in the org, as equals.
Like equal in the sense they have their own privileges what they can and cannot do. But you can essentially text them, message them, slack them, email them, whatever, and they go off and do the thing. And so as we start thinking about agents as digital humans or digital knowledge workers, it becomes more and more clear to people why we need these, as we call them, sandboxes.
Marc: And agents might be relatively long lived. You might have an agent that's like monitoring, whatever, doing some infrastructure task. It might be monitoring signups on the website. But the compute that the sandbox environments Daytona's providing is very ephemeral, designed for relatively short lived tasks. Or are they longer?
Ivan: No, they're actually the opposite. So actually, when we were building out Daytona, we can talk about the story as well, ephemeral sandboxes came as a second order thought-- Like, there's a flag for it to be ephemeral. But our idea was it has to be like a human computer, meaning it has to be stateful and long running.
So meaning it should run as long as the agent wants. Like it can run forever, there's no stopping it, right? Unless someone kills it, turns it off, the agent kills it, whatever it might be, it has to be stateful. So like we open and close the lid of our laptop, it's the same state. You want that same thing. And it has to be super fast. And so those three things together was a foundation of what we thought an agent needs and the foundation of what we built at Daytona.
That's why for the orchestrator of the sandboxes, we don't use Kubernetes or anything off the shelf. We rewrote our own to be able to do this. We do use Kubernetes for the deployment of the control plane and things like that, but for the orchestrator itself, we don't actually use it, because we thought it was insufficient for having those three things working at the same time.
Benjie: So we're going to dive in. I want to dive into that in a second, but before we do, you said you started three years ago. I think three years ago, sandboxes for AI agents were probably maybe a twinkle in a few people's eyes. But I assume that--
Ivan: No one. No one.
Benjie: Yeah. I don't think so. What was the original idea behind Daytona? Obviously, you've evolved to here. You guys have gotten a lot of traction. But talk us through. I mean, you made a good point earlier where you're talking about, like, identifying when something's not working and moving on from it, even if you think it's sort of working. So talk to us about how it started.
Ivan: Sure. So we started Daytona as a spiritual successor to Codeanywhere. We ended up pulling out what we had learned on the orchestration layer that we did for the cloud IDE. We had found that a lot of the tech companies had built something similar internally for their engineers. So if you work at, like, Netflix or Google or Meta or whatever, you have this automation to set up your dev environment. Right?
So it's like, it's very, you know, essentially one click. You don't have to, like, spend two weeks setting up your dev environment, which you have to do in all the other Fortune 500 companies. And building on what we built and learned at Codeanywhere, we said, oh, this is an opportunity for us. There's a story of how we actually got involved in that by accident. It ended up being that we had specific domain knowledge in spinning up, or orchestrating, these VMs.
And then we started selling this product to some, like, Fortune 500 companies. And so the original product, again, is just for human developers to automate their dev environment, not locally but remotely, and securely. And we ran that for about a year and a half, pretty well and growing pretty well. But what happened was end of 2024-- So end of 2024, people started talking about AI agents. Devin was available in, like, private beta.
There was this competitor called OpenDevin or OpenHands, and you had to install it locally on your machine. And I'm like, you know what? I was talking to my co-founder. I'm like, this actually doesn't make sense that you're running this locally. This makes zero sense to me because you can fire off multiple tasks, but you're constrained by your machine with the compute and CPU you have. Plus, if you close your laptop, it doesn't work. It doesn't make sense.
I have the same analogy today when I see people running Claude Code locally. I mean, it's fine, but you can't, you know, run as many as you want. If you close the lid, it stops working. And we thought, maybe we can use our thing, and we sort of bashed together our dev environment manager, what we called the original Daytona at that point, and OpenDevin, or now it's called OpenHands.
And we got so much inbound from people building agents. They're like, oh, we need to set up our environment for our agent. And this was like, before it was cool. I believe that the following February, so like six months after that, at a conference in New York, both OpenAI and Anthropic had announced the new specifications or new definition of agents, including a runtime or a dev environment.
So the following February, the model companies publicly said this was a thing. And so we were all ready. So to your point, like, three years ago, no one was talking about this. You know, two years ago, still, people weren't talking about this. And then as people started reaching out to us, we gave them what we built and they said basically, we can't use this.
And there we discovered what the differences were between how you set up a VM or a runtime or dev environment for a human and what agents need, right? So you need an SDK. It has to be super fast, has to be stateful. All these things-- Some of them were assumptions, some we sort of, like, grokked from the people we were talking to.
And what we did is we made a prototype of that over the holiday season, me and my co-founder, and we booked 22 calls with the people that were reaching out to us, you know, the few months earlier.
Benjie: Right. This is 2025 holiday?
Ivan: 2024 onto 2025. And so January of 2025, we have 21, 22 calls. Every call goes long. Every call, people are literally emailing us afterwards, like, texting us, "give me the goddamn API key."
Like, I'm not lying. This was insane. I never felt this in my life. And I'm like, to my co-founder, you know what? This is it. And if it actually seems true that the number of agents in the world will be the number of humans to the power of N, which people weren't sure about at that point in time. It's like, this is sick and no one owns this market. Like, we have to go.
We literally at that point in time put all hands in the company on it, said the old thing's dead. We helped all our customers migrate to competitors of our old thing and we just plowed on this thing. Our GTM team, as most were in Croatia, went to San Francisco, started creating awareness around this product. The product is basically an MVP. It runs on like one VM, right?
The engineering team goes to build it and so we're building momentum the next three months, the go to market team, while the engineering team is actually building this product. And then we launched this product three months later with the audience already sort of ready for that.
And in that meantime, from that January-- I think it was end of January to end of April when we launched, it was when, like, OpenAI and Anthropic announced that this is a thing, people started announcing it was a thing. And you know, it was a timing thing.
Benjie: Yeah. Claude Code Beta, I think was April of 2025, I think so. Right? Claude Code Beta?
Marc: I think it was like February.
Benjie: February, sorry, February 2025. So timing is everything.
Ivan: Timing is everything.
Benjie: That's spectacular, obviously. That feeling of product market fit is-- I've read about it a lot. Haha. Marc and I have read about that. We talked to a lot of people that know how that feels. We're still working on it.
But wait, so I just want to understand. Let's talk a little tech turkey for a second. You had this orchestration layer and that was for VMs. Was that Firecracker with the original product or was that not Firecracker yet?
Ivan: Yeah, we used Firecracker for that product. And we learned a lot of things. Like, the way we attached storage to these things was interesting, and it was very slow. We had to use Longhorn to attach storage, and at scale, that's really hard. And so when we decided to do these sandboxes, we wanted to make them very fast.
So this is what we do, and our thing is open sourced so people can see this. Basically we're like, okay, we want, you know, sub-100 millisecond spin up times. How do we do this? And so the Daytona sandbox, unlike any other VMs and any other sandboxes, actually uses the CPU, RAM and hard disk of the underlying node. So we don't attach external storage; you're using the storage from the node. Most people use the RAM and CPU of the node and then attach external storage.
And just the attaching of storage takes time, and the throughput is also slower. Right? And so we're like, we're going to do that together. And so the way Daytona does it really fast is, if you imagine, like, stacks of-- We run this on bare metal servers, basically, but we can run on anything. It could run on a VM, but for a lot of reasons, we run it on bare metal.
And so if you imagine, you know, all the Daytona bare metal servers, basically what we do is the snapshot, or the history, or the point in time, or the template, whatever you want to call it, to create a sandbox is actually preloaded on the SSD disk of the node. And then when you fire off, "I want a sandbox with that snapshot," it actually goes to the node where it is, and from that SSD on the same node turns on a sandbox. Right?
So there's no network latency connection. Everything's actually on that node. Now, of course, we can't fit every snapshot on every single node. And so you have an algorithm that sorts them out onto different nodes, and then you have layering of snapshots as well, to compress them. So there's a bunch of things that we do there. But that was actually quite exciting and interesting to build out. And sorry, I went into the technical part of the conversation already, but these are the things that we do.
Marc: Let's kind of go back to the product. I think just some definitions would make sense. I've used Daytona a little bit.
Ivan: Sure.
Marc: Like, you're right. It's almost magical. You create a sandbox and by the time you get your bash window back, like, the sandbox is running and you can SSH into it. There's never, like a latency loop or anything there, but you mentioned sandboxes and snapshots as two terms, right?
Ivan: Yep.
Marc: Can you define what those are in the product?
Ivan: Sure. So a sandbox you essentially think of as, like, a container, a VM, a micro VM, whatever you want to call it. The industry calls them, and we call them, sandboxes. And so I'll tell you what I think of as a sandbox. There's no set definition right now, because today people offer sandboxes that are just, like, app layer isolation, that are just bash, that are like a full VM. They're different things. None of these tools are bad, it's just that there's different definitions.
But what we look at as a sandbox at Daytona, we look at on two axes, and one is the primitive axis. So it is the actual, let's call it VM container sandbox itself. So how fast can it spin up? Can you pause and resume it? Can you fork it? Can you put multiple operating systems on it? Can you dynamically change the CPU and RAM configurations? Can you add a GPU? Those are things that we look at at the primitive level.
And the other is the tooling axis. And so the tooling is one-- We started Daytona thinking that agents should have all their tools headless. So we come pre-baked with coding tools; coding agents were our original customers. So you have, you know, an FS tool, completely headless. You have a terminal that's completely headless. You have, you know, a Git client. You have all these things already baked in there.
But you also have things like a firewall, so you can define where the agent can go outside the sandbox or not. And then upcoming is, like, a secrets manager and then a process firewall. And so we also add a layer of tools that on one side help the agent get the job done faster, but on the other also add guardrails, potentially, for the agent.
And so that together we call a sandbox. And the definition of a snapshot is basically it's either a memory snapshot or a file system snapshot. It is, like, the state of the hard disk, and/or the state of the memory and the hard disk, of the sandbox.
Marc: So a snapshot is like you could use it to pre-bake a bunch of stuff in so that you spin up a sandbox and it kind of comes up quicker, so you don't have to do a bunch of runtime installation, boot time installation?
Ivan: Exactly. So the whole point is like, so when you spin up a default Daytona sandbox, it comes with a bunch of tools. But ideally you probably have other tools that you want. So then you as a customer can create your own snapshot. You can just pull in a Docker image or something like that and say, oh, this is what I want. And then we preload that snapshot into our system and then it becomes super fast, just like ours does.
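As a sketch of that workflow: the customer bakes a custom snapshot on top of the defaults once, and every sandbox created from it starts with the tools already installed. All names here are hypothetical, not Daytona's real API; snapshots are modeled as nothing more than tool sets to keep the idea visible.

```python
# Default tooling that, per the conversation, ships in every Daytona sandbox.
DEFAULT_TOOLS = {"git", "terminal", "fs-tool"}

def build_snapshot(custom_tools: set[str]) -> frozenset[str]:
    """Pre-bake once: defaults plus the customer's own tools, frozen at build time."""
    return frozenset(DEFAULT_TOOLS | custom_tools)

def create_sandbox(snapshot: frozenset[str]) -> set[str]:
    """Spin up from the snapshot: no boot-time installs, everything is already there."""
    return set(snapshot)

snap = build_snapshot({"poetry", "postgres-client"})  # done once, e.g. from a Docker image
box = create_sandbox(snap)                            # fast path, repeated per sandbox
print(sorted(box))
```

The split is the whole point: the slow work (pulling an image, installing tools) happens once at snapshot build time, and every subsequent sandbox creation is just instantiation.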
Marc: So Benjie asked a little bit ago if you were using Firecracker, and that's interesting because you said, "we were."
Ivan: Yeah.
Marc: So, like, you're not now? Does that imply you're no longer using Firecracker?
Ivan: Yeah, we did and we will again. And so we actually started Daytona using Docker hardened with Sysbox. And so when I say Docker, people are like, "oh, it's a container, it's super unsafe and whatever." So Docker has this product called Sysbox that also gives you isolation equal to a container, more or less. Like, we've done multiple, like, security audits and whatever.
Benjie: Equal to VM?
Ivan: Yeah, equal to VMs. Yeah, it's equal to VMs. Yeah. And so Docker helped us do a lot of things really, really well and really, really fast. Now, that being said, we also have Firecracker, we also have Cloud Hypervisor, and we also have QEMU. We're not using it at all anymore, but we have it up and running. So basically, depending on what you want and what type of sandbox you spin off, you'll actually get a different isolation layer underneath.
The customer doesn't know this for the most part, and actually shouldn't know this, because they don't really care. And so to answer your question, for the Linux one we started off with Docker because it let us do all these things very, very fast and move fast. Again, I know there's some competitors that started after us, but from the perspective of the incumbents, if you can call startups incumbents in the space, we are the last ones that came.
So it's like, this was the ultimate way to move fast. And it ended up being a really, really good tool for a large set of our use cases or customers, which actually run RL workloads. And so for them, and we'll get into customers in a minute. But basically the input and output is a Docker container. And so having that super, like, natively and fast enabled them to get a lot of things done.
Benjie: So basically an OCI image is kind of the input and output?
Ivan: Exactly. I mean, you can use that for the others as well, but then we have to sort of bake that into a VM image in the background, so you don't notice that. But this was very, like, native. Right?
Benjie: Okay. So it started with using the Docker stuff, and I know that secure stuff I looked in a while ago, that stuff's really cool. But now you said there's multiple layers here. How do you determine who gets what? Or is it just based on OS?
Ivan: No, it's based on spec. And so it's like, CPU, RAM, and disk, plus GPU and operating system, that's the spec for us. And so depending on what you fire off, like, the first three don't matter because it's all the same across them. But you can flag it on create. And so the multiple operating systems and GPUs aren't GA, but we have, like, customers already using them.
So basically, if you fire off a request to create with that in it, then we know which one to create in the background, we know which isolation layer to use. Right? GPUs don't work on all of them. Like, on Firecracker you can't use a GPU, or on Docker you can't use Windows or a GPU. Like, it doesn't work at all. And so for now there's some issues.
You can run Windows in Cloud Hypervisor, but there's no graphics driver. And so you can't use Remote Desktop, I believe, or VNC, one of the two doesn't actually work if you use Cloud Hypervisor. But on QEMU, it actually does have, like, a quote unquote display, and they can render that display as well. So there's, like, technical reasons why we have one type of them or the other.
There's also drawbacks, like some of them are faster or slower, or the way they load from hibernation or pause to resume. It's super fast on Firecracker; on QEMU it's really slow because you have to load the whole memory back into RAM to turn it on. Whereas on Firecracker there's a sort of lazy-load type thing where it can kick it off very, very fast.
So there's different trade offs to each. But we're trying to give the customer the ability to have all these computers together.
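The routing Ivan describes (spec in, isolation layer out) can be sketched as a simple dispatch. The rules below are a simplified reading of what he says in this conversation, Docker plus Sysbox for plain Linux, a full VM when Windows or a GPU is requested, QEMU when an emulated display is needed, and are illustrative, not Daytona's actual scheduler.

```python
# Simplified sketch of routing a sandbox spec to an isolation layer,
# following the constraints described in this conversation. These
# rules are illustrative, not Daytona's actual scheduler.

def pick_isolation_layer(os="linux", gpu=False, needs_display=False):
    if os == "windows":
        # Windows needs a full VM; QEMU exposes an emulated display,
        # Cloud Hypervisor does not.
        return "qemu" if needs_display else "cloud-hypervisor"
    if gpu:
        # GPU passthrough isn't available under Docker or Firecracker
        # in this sketch, so fall back to a full VM.
        return "qemu"
    # Default Linux path: container isolation hardened with Sysbox.
    return "docker+sysbox"

layer_linux = pick_isolation_layer()
layer_windows_gui = pick_isolation_layer(os="windows", needs_display=True)
layer_gpu = pick_isolation_layer(gpu=True)
```

As in the conversation, the customer only states the spec; which layer backs the sandbox stays an internal detail.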
Benjie: Yeah, you mentioned that it takes about a second for a Windows VM to turn on.
Ivan: Yep.
Benjie: Just going to say I don't believe you. No, I'm just kidding. But how in the world are you doing that?
Ivan: Oh yeah, so that Windows VM. By the way, a Windows VM takes like two and a half minutes to boot and then another 30 seconds to get up and running on the clouds. Like it's very slow.
Benjie: I feel like that's generous. I've seen it more like, in my experience, more like five minutes for those things.
Ivan: Yeah, it's very slow and so we did a lot of work on this and so we have a hypervisor that's not Windows since we run it on a Linux hypervisor. And so we do a lot of like dark magic in there to get that up and running. And so yeah it is just about at a second right now spin up times.
So I like we're super proud of that and customers are like insanely excited about that. By the way, Windows for the most part we can look back at this maybe in six months. But people still think we're idiots for doing that. Like why the hell would you need a Windows sandbox? But stay tuned. I have insights on this.
Benjie: Well I will share my insights on that and you have some stuff coming so we'll leave that. But I think it's pretty obvious that you know, for Office and for all these agents that are running that are coming, especially when you start talking about training and like real world stuff and like Back Office and all this other stuff, obviously the Windows thing makes a whole lot of sense.
That's why the one second thing is mind blowing to me. Can you share a little bit of technical details on how you do the one-second Windows thing? So you're saying there's a Linux hypervisor that loads memory of Windows? Like is this Windows N or what? I don't even know what.
Ivan: No, it's actually-- So the Windows we're using right now is Datacenter Edition, because from a licensing perspective that works, and we can downgrade it to a Windows 11 or whichever the Windows is right now. But it actually costs quite a bit more, so we'll leave that up to the users if they actually do need that. But the Windows Datacenter Edition has the ability to look and feel exactly like any other Windows.
So it's essentially Windows, it's the same. There's just, like, more things inside of that, so that shouldn't bother the agent. Right? And so when you include, you know, memory snapshots and the way we manage these machines, which is also similar, RAM, CPU, disk, all on the same underlying node and things like that. And we've done a lot of new things with expanding the memory we used to have--
I digress slightly. We currently on our product have a constraint on size of hard disk because you're actually constrained by how much hard disk you can have in a physical machine. Right. That's why most people use an external, like EBS or something.
And so we've now actually sort of like conjured a solution to have like unlimited hard disk while still being at the speed and throughput of it sort of being local. And so that will help as well. I can't give you all my secrets, but yeah, that's there.
Marc: But you don't have to be constrained just to that disk. Right? You all do support like fuse mounts and external mounts on those.
Ivan: Yeah, but fuse mounts are slow as hell. Right? So fuse mounts, we have our own product called Daytona Volumes, which is essentially an S3 fuse mount. And so the cool part of that is that all the credentials are inside of Daytona itself. So your agent can see that. And it's part of the SDK, so you can like fire that off quite easily.
But the problem is, a fuse mount like that is just slow, it's super slow. And you only use a fuse mount if you want to read, you know, large data, and then that's fine. So if you have, like, a terabyte of data that you want to sort of load in there, great, but you're not going to put your workspace directory inside of a fuse mount like that. That doesn't work.
Benjie: So that's where like the RL workloads might come in for folks leveraging Daytona. Okay, so you guys are using these SSDs. So do you guys have your own data centers yet or using Hetzner or Vultr? What are you guys using?
Ivan: So not, not Hetzner and Vultr. I mean, somewhere you can see Hetzner in our subprocessors, just because we have a few of them running to test stuff out. But basically all this, again, is public in our DPA. You can see that. But we're using mostly colocation providers where we can order servers per specification, and then they give us those specs and give us access to them. And so yeah, it's like we have our own data center, but we don't have the complexity of that.
Marc: But you actually own the hardware that you're putting in those colos.
Ivan: We pay per month on this, just like from a financial standpoint it doesn't make sense to invest the long term capital into that. But we do have contracts to keep them up and running for, you know, an amount of time.
We could buy them out for how much we're paying over time. But working as a startup it just feels more optimal to rent them out and then figure out where we sort of expand from this. And so do we end up building our data centers? I believe so, but we'll see.
Marc: And hopefully you have ways to mitigate against RAM prices 10xing and stuff like this.
Ivan: No, CPU prices, guys. CPU prices. So SemiAnalysis-- So Dylan Patel from SemiAnalysis does all these analyses on prices. You know, GPUs. Like, GPU was the constraint. CPUs are the constraint now. He was also at our conference.
We just had a conference recently at the Chase Center in San Francisco called Compute and he was one of our guests and he was talking about CPUs are the new constraint because of people like us and sandboxes and RL environments and all these things. And by October there will be no more CPUs. Like it's done for this year.
Marc: So buy what you need now.
Ivan: Exactly, exactly.
Benjie: I mean that makes sense. We need space hardened CPU soon apparently, which may or may not be a smart idea. Who knows. I don't know how to cool those things, but yes.
Okay, so you guys have colocations there. Did you start off co-located? I mean I'm just interested in this because your background of racking and stacking. I feel like this is a big advantage for you guys.
Ivan: Yeah, for us it made sense. Like, our thought process is the following. I don't even think AWS is a competitor, they certainly don't care about us at this point in time. But if you want to build out a large cloud company, I find it very hard to believe you're going to do that on top of a large cloud company. Like, that sort of defeats the purpose of those things.
Plus, the only reason that we have a right to exist at all is that the software stack that we had built out is our own. And so we're not offloading anything to AWS's or Azure's or anyone else's software stack. Because if you can do that, it becomes quite trivial to actually build this product, hence you don't need the product.
Right? And so the way I think about it is like, okay, we have to be able to do that from day one. And so if we're going to go into that, we might as well just do these colocation providers, because we're going to run on our own servers anyway. There's no difference. Whether it's our own data center versus this, a server is a server. Like, there's no software stack on top of that that you become dependent on.
On top of that, there's the gross profit margin part, which is much better than running this on clouds. But also performance. It's not huge, but you do get like a 10% performance hit if you run this on top of the software stacks of the clouds. So, you know, performance is better--
Benjie: For the AWS's but Hetzner is not like-- Hetzner is bare metal.
Ivan: It's bare metal, you could do that as well. But there's other things. So now you have the DevOps overhead, right? And so, like, you now have to spin up-- We have just shy of a thousand physical servers now running in Daytona, right? Just shy of a thousand. And so we spin up a lot of these servers, and if there are different configurations, different sizes, different days, different whatever, then the setup time is just like a pain in the ass.
And my DevOps people will bitch, like, this just takes time to do. Whereas the benefit of doing something in the cloud, like the only benefit of using bare metal in the cloud, is it's usually always the same machine. Which means that you, one, have an API to spin up the bare metal, and two, it's the same machine. So your scripts will work right out of the box and get it up and running right away, right?
If you use something like Hetzner, you don't know what you're getting, right? It's like different machines, different days, different everything. Like, some of them are older, some of them are newer. It's all over the place. So, like, we'll spin up something like that sometimes to do tests or something that we're trying to POC or whatever, just because it's quick and easy to get them up and running. But the standardization of them is not ideal. And so that's why you want a colocation provider that can give you standardization and speed of deployment of these boxes.
Marc: And it even goes deeper than that. At Replicated, you know, we, we built something on top of Firecracker to create micro VMs for our customers. It's not a competitor to Daytona or anything. It's a very purpose built thing and we actually did it on top of Hetzner. The challenge is Hetzner only offers bare metal in Europe, not in the US and so if I can spin up a VM in 60 milliseconds and then the latency is like over a second for keystrokes on ssh, it's like, unusable still.
Ivan: No, no, that. Absolutely. So that's why we have regions as well. We have the providers that have multiple regions. And I didn't actually-- You're probably correct. Although I thought they had it in the US, but I hadn't looked at that. But there's other things. But yeah, I would agree as well. You want it to be where the consumer is.
Marc: So you have data centers around the world.
Ivan: We have five locations now. Yeah.
Benjie: So in North America, Europe.
Ivan: Yeah. East Coast, West Coast, Europe, Asia, and one more in Europe. So we have two in Europe, two in the US and one in Asia right now.
Marc: And in the product, do I specify the data center or do you just, like, geolocate it?
Ivan: Yeah, there's regions in the product right now. So we haven't-- Because we're still migrating some things, you basically have an EU and US like, flag right now and Asia. So you have those three flags, which you can do, but I think in about two weeks you'll have, like, east coast and west coast, and you'll be able to split between.
Right now you're like randomly between east coast and west coast, which is not ideal. I totally get that. So that's not ideal. And so we'll segregate the two of them and you'll be able to choose which one you want.
Benjie: So capacity planning is something that I'm going to-- Like you said you went from zero to a thousand. Literally, in this case, zero to a thousand servers. But how in the world do you know if you need 2,000 servers next month?
Ivan: You don't. Capacity planning, like, now that we've been around for a while-- Daytona's been live, this product, less than a year. Like, end of April. So, you know, we're coming up on 11 months now. And now there is some data and figuring that out. But right now it has been-- We obviously have measurements, utilization of where we are.
But there's a couple of things you have to think about for capacity planning. One is, what is your growth rate of your company?
And so if you look at your growth rate. Let's just pick a number. This is not our growth rate. Let's pick a number, 20% month over month, whatever it is, right? And if you're like saying you're growing 20% month over month and you want to keep the same utilization of your servers across, then you'll keep stacking 20% more each month, right? Like kind of trivial way to figure that out. And that's, that's fine.
Then, when you add on top of that the spikes of usage. Because the way we work, because we're an infra company, we're more like a Twilio or a Stripe than other types of companies. Where someone will integrate you, and then they ramp up their product, and then if their product's successful, they just spike, right? Then they just go vertical. And if they go vertical, you go vertical. So you have to be prepared for these jumps in customers.
And basically if you look at our growth trajectories, it's like linear growth, then like 5x, then it's linear, then it's like 5x, then it's linear, then it's like 5x. And like, so you get these big customers that sort of make that go up. And you have to like always have in the back of your mind, okay, I have to make sure that I can have capacity for these. And then it's more like, oh, what does our pipeline look like? Who are we talking to? Who's like signing a contract right now? And then you can sort of like feel that out.
We've been wrong, we've, like, overbought sometimes and underbought, but, like, ballpark, we sort of feel that out. But the other part that's also very, very hard to manage is RL workloads. So if you look at background agent workloads, background agent workloads do the day, night, weekend thing. So like in North America, during the day it goes up, during the evening it goes down. And then on the weekends it goes down.
You know, Monday and Friday are the least, Wednesday is the highest, right? And that's your peak. And then you tilt that if that company is growing, and sort of how that grows. So they're not spiky workloads. You can sort of calculate what they have. But if you look at the workloads of RL, it's like a square: it's flat up at 100% of whatever utilization they have, they use that for five hours, and then down.
And so they, like, come in and they spin up 50,000 concurrent sandboxes. Like, 50,000 concurrent. That means you have 50,000 CPU cores somewhere. Like, that is not a trivial number of CPU cores to have, right? To have somewhere up and running. And so you have to have that for them. So your capacity management is like, what is my growth rate? What is my growth rate for these types of customers? And then what do I have to do to serve these super spiky customers that come in and use it for a while and then they're gone, use it again and they're gone?
So it is a non trivial problem to solve.
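The two demand shapes Ivan describes, steady compounding growth plus flat "square" RL bursts, can be put into a back-of-the-envelope model. All numbers below are illustrative, not Daytona's.

```python
# Back-of-the-envelope capacity model for the two demand shapes
# described above: steady compounding growth plus flat "square" RL
# bursts. All numbers here are illustrative, not Daytona's.

def servers_needed(base, monthly_growth, months):
    """Fleet size that keeps utilization constant under compounding growth."""
    return base * (1 + monthly_growth) ** months

def burst_servers(concurrent_sandboxes, cores_per_sandbox, cores_per_server):
    """Extra machines needed to absorb a flat RL burst."""
    total_cores = concurrent_sandboxes * cores_per_sandbox
    # Round up: a partially used server is still a whole server.
    return -(-total_cores // cores_per_server)

steady = servers_needed(1000, 0.20, 6)   # 20%/month growth over 6 months
burst = burst_servers(50_000, 1, 64)     # a 50,000-core burst on 64-core boxes
```

The compounding term handles the "keep stacking 20% more each month" part; the burst term is what makes the square workloads hard, since it arrives all at once on top of the baseline.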
Benjie: So what do you do? How do you handle the square?
Ivan: You over buy for a lot of things.
Benjie: Yeah.
Ivan: And so it's also, like, how we talk to our customers, especially RL customers. So the way RL customers have their need is, they usually have GPU allocation for a while, and for their GPU allocation, when they're training models, they actually need CPUs at that same time, and they want to make sure that those CPUs are very, very fast. Because you basically want your GPU running. If you use a car analogy, you want it, like, in the red RPMs. Like, you want your GPUs always running in the red, and then your CPUs as they spin up and down, because you can spin up, you know, hundreds of thousands through these runs--
When you kill one, you want the other one to come back as fast as you can. And so that's why the speed thing becomes really interesting. And you want to be able to serve these customers. Luckily they at least know, like, a day before they need that, which is not a lot of time, but it's not live live. So you have some time for capacity management and say, okay, you know, customer X, here you go. And then if customer Y calls up, okay, there's still a lot of hand planning of, like, enabling limits for this. But the thing that we're really proud of is, for RL workloads, our competitors will take, for let's say 50,000 sandboxes, anywhere up to, like, 30 minutes to get them up and running.
Daytona takes 75 seconds to get that up and running. So 50,000 takes 75 seconds. And the way we do that is obviously the throughput that we have. But because we, quote unquote, own the metal, we don't have to provision VMs when you're asking. We have them idle and then we spin them up.
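One way to picture why owning the metal makes this fast: allocating from a pool of already-provisioned machines is a queue pop, not a provisioning call. This is a toy model of that idea, not Daytona's implementation.

```python
# Toy model of a warm pool: when the metal is provisioned ahead of
# time, handing out 50,000 sandboxes is a queue pop per sandbox, not a
# cloud provisioning call. Not Daytona's actual implementation.
from collections import deque

class WarmPool:
    def __init__(self, size):
        # Machines are set up while nothing is waiting on them.
        self._idle = deque(f"vm-{i}" for i in range(size))

    def allocate(self, n):
        if n > len(self._idle):
            raise RuntimeError("pool exhausted; provision more hardware")
        # No boot or provisioning latency on this path.
        return [self._idle.popleft() for _ in range(n)]

    def release(self, vms):
        self._idle.extend(vms)

pool = WarmPool(60_000)
batch = pool.allocate(50_000)
pool.release(batch)
```

The trade-off is exactly the one discussed next: the pool only works if you keep idle hardware around, which is a utilization and cost question, not a software one.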
Now there's a whole conversation of, like, how do you get maximum utilization where that works? And if you want the VC answer, I have all these answers for that. But, like, that's basically how we solve that right now.
Benjie: Oh, boy. So you have a lot of computers sitting around, is the answer.
Ivan: Yes.
Benjie: So in October, when I can't get a CPU, I've got to come to Daytona.
Ivan: Exactly. That's the point.
Benjie: Yeah. Well, I look forward to having conversations with you in six months and seeing what that thousand server number is. The RL workload stuff is super interesting. So people are leveraging you guys. So there's the consumer side of this, or I don't know if it's consumer, but people using Claude Code and Codex inside of these things. And now you have these large model folks, I guess, using you for, or whoever, using you for RL workloads as well.
So one thing that I think is super interesting is your SDK and your CLI, and we're running low on time here, but I do want to touch on those. Just talk to us about, like, you know, you go to Daytona's GitHub and it's 70,000 stars and it's got, you know, a pip package to install. Tell us about the SDK. Tell us about the CLI. I've heard very good things.
Ivan: Oh, yeah, I can. I can, sure. I can talk about that. First of all, we have to clarify: most people think our GitHub stars are just for the CLI, because that's on the readme, but it's all of Daytona. The whole thing is there. So hence, like, if you want to spin up your own Daytona, it's an AGPL-3.0 license, so there are some restrictions, but the whole thing is there.
Someone said we're too open source. I've heard a competitor say we're too open source. So a lot of things there.
So it's not just the CLI on its own, or sorry, the SDK or the CLI. But that's what we put on the readme, because that's what most people use. And so we will redo that so people can actually see the whole thing is in there. And so that's why I wanted to cover, before I get into the SDK and CLI, the reason why we have so many stars, if that was the question.
Like, the SDK on its own, I feel-- And you guys are on the outside looking in, so you probably have a better vibe of this. We think it's actually really good. The CLI still needs a bit of polish, but the SDK, we feel, like, we're really happy with. We've had a lot of positive comments on that, and we try to make that as good as we can for humans and agents actually integrating it. So I don't know if that was your question, directionally.
Benjie: No, not at all. The stars, I think it's cool. It was just more, I've heard great things about the CLI and the SDK. How do folks use it? What's the way? But now I didn't realize, like, backing up for two seconds here. What is Daytona? Is it Go? Is it Rust? Like, obviously, we can go check out the GitHub, but, like, what is it?
Ivan: There's a lot of Go and a lot of TypeScript in there. There's some other things, but, like, that's basically what it is. So the orchestration stuff is mostly in Go, because we, as a team, have always sort of done that and used that, and, you know, TypeScript for the app layer stuff. And so that's what most of the code is.
Marc: I feel like, you know, I spent a lot of time writing Go code. Most of Replicated is written in Go. And y'all have a Go SDK, an official Go SDK, which is unusual in, like, the AI world. Everything is JavaScript and Python. And I'm like, oh, this is nice.
Ivan: There's a couple of things that we think about. And so I'm sure that all competitors will be listening to this. So this will change.
When you think about why people use your product, basically, there's three ways, right? And so why would someone use you? One is awareness. Do they know you exist? Two is preference. So preference can be anything. Pricing, design. The person you met that works there. Does it have a Go SDK? Right? It can be whatever. The preference is very, very wide. And then the third is if there's some deterministic factor that you have that others don't.
So, like, is it a government contract and you have FedRAMP and they can only use you, right? So those are three things that's there. And so I think we're probably the best in, in our market and wider market of go to market. Like we've been on this for the last 10, 11 months and it's just been insanely growing. And then when we look at, okay, so what can we do from a go to market, not even technical perspective, like what can we do from a go to market perspective that someone will prefer us versus someone else?
And even though, like, agents do a lot of the coding today, there's still humans doing a lot of these integrations. And so to your point, Marc, if you're a Go developer and we have Go and our competitors don't, if all other things are equal, you're probably going to pick us, right? I'm not saying all things are equal, but like, if they were, then you pick us. And so what is our time investment to get that done? And for me that made sense.
And so we also have Java coming up and we have .NET coming up and whatever. And we want to make sure that everyone sort of is supported. Because to your point, everyone's either just Python or just TypeScript, or those two and nothing else, basically.
Marc: Yeah, that's cool. I want to move on. Early on you said you wanted to talk about some use cases. I'd love to talk about use cases. I think that you described it as, you know, this is computer use for agents. Right? Agents can spin up computers that can perform all these tasks. The startup time isn't just a vanity metric, "we're faster than everybody else."
It's super important for an agent to be able to do this without having to do some awkward polling or async callback. Is this machine running or failure rates? So the agentic flows or the agent use cases. Let's talk about some of the use cases you see out there.
Ivan: Sure. We break it down into two use cases, and then within those there's multiple ways the agent uses it. So the two basic use cases are what we call background agents and what we call reinforcement learning. So background agents is essentially, you know-- Ramp launched this product, you probably saw that, like the internal product of their agent, you know, doing coding in the background. They don't use us, they use a competitor. But in that sense.
Or you have-- Mintify is a customer of ours. So Mintify now generates docs on the fly. Their agent does that, and to do that, it spins up its own sandbox. So we're ingrained in that product. But it's an agent that does something: it spins up the machine, the sandbox, it does what it needs to do, it kills the sandbox, and then it submits the code.
So for the audience here, you know, if they've ever used like Claude Code in a browser or ChatGPT, whatever, and you say, oh, go, you know, analyze this data or search the web, or not search the web, but it opens the agent tab, it actually spins up a sandbox to do that. And so that's like the background agent use case. And the other use case is RL, reinforcement learning, which we talked about. So I won't do that again.
But in both of those use cases, the agent can do what we call code and command execution, and computer and browser use. So there's, like, two modes the agent works in. The sort of headless way is firing off, like, a command in the terminal, using a CLI, running code, executing a script, whatever it may be, or cloning a whole GitHub repository. That's the code and command execution. And the other, the computer and browser use, is like, does that agent actually need to use this because there's legacy software that it needs to use, or it's doing QA, or whatever it may be? And then it goes and uses that. And so that's how we think about the two use cases, broken down into two modes of work.
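The background-agent lifecycle Ivan outlines, spin up an ephemeral sandbox, do the work, always tear it down, can be sketched like this. `Sandbox` here is a toy stand-in, not the Daytona SDK.

```python
# Sketch of the background-agent lifecycle described above: spin up an
# ephemeral sandbox, do the work, always tear it down. Sandbox is a
# toy stand-in, not the Daytona SDK.

class Sandbox:
    def __init__(self):
        self.alive = True

    def run(self, cmd):
        # A real sandbox would execute this in isolation; the toy echoes.
        return f"ran: {cmd}"

    def destroy(self):
        self.alive = False

def background_agent_task(cmd):
    sandbox = Sandbox()              # 1. spin up
    try:
        return sandbox.run(cmd)      # 2. do the work (code exec, browser use...)
    finally:
        sandbox.destroy()            # 3. always kill the sandbox

result = background_agent_task("generate-docs")
```

The `try`/`finally` is the essential part of the shape: the sandbox is disposable, so teardown happens whether the work succeeds or fails.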
Marc: Got it.
Benjie: So, Ivan, we're running low on time, but I really want to talk about the company building side of this for a minute because obviously you, like you said, you went from zero to a thousand overnight. We keep talking in these ridiculously small timeframes, like, "oh, I've been doing this for 11 months." And it's like, what are we talking, like three years ago? That was like, okay, yeah, you just started a company in today's day and time, like you're the gray beard of sandboxes right now.
So you obviously were a VC funded company and then you found this product market fit, you went bonkers with it. I believe this is public information, you guys raised a funding round recently. Just tell us what happened, that's appropriate to share. And then what? It's like juggling customer demand, raising money. You have actual CapEx that you need to deal with because of the model that you explained where you're buying--
Ivan: Renting, but yes.
Benjie: Sorry, renting.
Ivan: Yeah, yeah. So first, what we did: we raised a pre-seed on the old product, 2 million. It was mostly just, like, founders. And so we have a bunch of cool founders in there. And the old product was closed source, and then we decided to open source that product, or a version of that product, for the single developer. And then we were able to-- We had revenue even on the pre-seed, and then revenue kept growing after that, but we decided to open source it in March of, you know, whatever it was, 2024, that product.
And that went, like, bonkers. That went, you know, 4,000 stars in 24 hours. And that actually kicked off our seed round, because our internal investors, which were mostly angels, there were some funds that put in small angel checks, were like, oh, we want to give you money. Basically, that was the conversation. And that stuff does happen. They text you and they're like, oh, yeah, yeah, we'll do that and we'll kick off a round.
The funny thing is, some people say that and they don't actually mean it. And now they've thrown you into this, like, fundraising mode. Anyway, we ended up closing a round in, like, three weeks. So we ended up doing a 5 million seed round, which is great. Upfront Ventures led that round. And then we kept the story. Now, you know what happened, of, like, you know, pivoting and changing and whatnot.
And so then we launched this new product and we go from like zero to a million run rate in like 60 days. It was like insanely fast. I mean, there's faster today, but it was like pretty fast for us. Pretty shocking.
Benjie: In the old days, 11 months ago. Haha.
Ivan: In the old days, yeah,
Benjie: Zero to a million ARR was huge then. Now it's kind of--
Ivan: Now it's whatever. Yeah, exactly. And then we went to 3 million in like 45 days after that. And so basically what ended up happening, and this isn't public, is our internal investors basically doubled down and gave us, like, uncapped SAFEs to put in more money inside of Daytona, which, on an uncapped SAFE, we absolutely did take. We're like, thank you very much. We love you guys so much. We will take this. Right?
And so that was great. That gave us more money in the bank, because we had spent a bit. We were running for like a year and something at that point. So that made the whole thing better. I did tweet about this, but I don't know if it really resonated with people. After we got that money from internals, we said, "oh, we're awesome now. Our internal investors gave us what was, like, $7 million or whatnot. And like, oh, we'll just close an A round now because we're freaking awesome. Like, these guys gave it to us."
And we go out to try to raise an A. We talk to everybody. We got one term sheet, which was not very good, and we decided not to take it. And then we went back to work for three months. And then, actually, and my younger self probably wouldn't have done this, and this is for anyone listening: I called back, like, all the VCs and said we didn't raise a round.
And so obviously, whatever you say, they think you just suck. Right? Because, like, you didn't, we weren't able to. So there's a lot of ego here, where you're like, you know what? Here I am again. And then we show them the numbers three months later. And then, you know, you get four term sheets, which ends up being, like, there's a whole process. It's stressful. It's not that easy.
But in the sense of, like, we did a pre-seed, which was, like, kind of okay to do. We did a seed, it went fast, but there were points where we thought it was going to, like, fail. Then we had this little round in between that was part of the A. And then the A, we failed the first time. Like, flat-out failed. Like, you know, you guys suck, go back home.
Benjie: And that was when you went from zero to, you went out for the first time, 0 to 3 million, and you failed?
Ivan: No, it was right after the 0 to 1. It was between the 1 and the 3.
Benjie: Oh, okay.
Ivan: And so, yeah, it was between the 1 and the 3. And then by the time we got to like, after it was three, and then we did some work another month or so. Then we went back and said, like, let's go do this. And so we didn't do the whole thing. We called maybe, I don't know, 10, 15 people on that one. And that one closed pretty fast with a bunch of term sheets.
It sounds easy now, but it was hard. You have to literally go home beaten. They beat you. You did not come home with the term sheets.
Benjie: Yeah. What I've learned is dealing with rejection is basically what being a CEO actually is.
Ivan: That's it. Yeah.
Benjie: Just like, thank you so much. Will you buy my product? Can I hire you? Can you give me money? It's like, no, no, no, no, no. Until they say yes. So, okay, so you raised-- If it's public, what did you guys raise to?
Ivan: Right, so 24 million total was the A round, five was the seed, and two was a pre-seed.
Benjie: When did you close the A?
Ivan: We signed the term sheet in December. Closed mid-January, somewhere.
Benjie: Okay, so like two months ago you closed? Oh, congratulations. Well, it's been two months. So have you done the B yet or what's going on here? What are you doing?
Ivan: Maybe, Maybe we've got some-- We'll see. We'll see. Haha.
Benjie: Yeah. You need money for CPUs. Come on, get in front of that.
Ivan: Yeah, yeah, yeah.
Benjie: So juggling the fundraising with the growth, I'm just going to assume that that was not a fun experience.
Ivan: It's not fun even now. We're a very flat organization. It's 25 people in the company as of today or yesterday. And so I'm very much not pro-hiring. I'm not the one that says, oh, as a vanity metric, I want to have like a hundred people. The fewer people, the better. There's less complexity between the team members. We all still like each other. You know, all the things every small company has. But, like, we literally at this point in time--
And since we're talking about investors on this: we haven't even hired any sort of sales org. And we've closed Fortune 500, like very, very, very large companies, without this. Like, enormous companies without this. The amount of inbound that we have now cannot be handled anymore without hiring. Like, I'm just falling apart. Like, I honestly feel--
Before this podcast, I was like, can I say no to this? I just have so much work. But I wanted to do it, right? So it's just insane. And so managing the board members, the capacity management, all the things. I'm not complaining, it's just like--
Benjie: No, it's a crazy time.
Ivan: It's great. Like, I love it, but just something like it'd be great to have two hours of sleep and that's it. So yeah.
Benjie: Yeah, well, Ivan, I think the story of Daytona-- It sounds like we're just at the end of act one right now and gearing up for act two for Daytona. The RL workload stuff, I didn't realize that was your next step, but it makes a whole lot of sense. So that's really exciting.
So there's an open source project. Historically, with The Kubelist Podcast, we like to promote open source. Are there contribution opportunities there? How does that work?
Ivan: Absolutely, there are. The repository is https://github.com/daytonaio/daytona, and contributions can be made there. Also security: we do bounties on security bugs, so if people find any, they can submit them there. As with any open source project, they're more than welcome to contribute. Obviously, the product itself is directed by our customers and the company, but if anyone would like to, has reason to, or would enjoy contributing, we're more than happy to have them.
Marc: And you said it was AGPL?
Ivan: It's AGPL. Yeah. Yeah.
Marc: Cool.
Benjie: Ivan, this was amazing. I'm so looking forward to the rest of the year for you guys. Thank you so much for coming on, we really enjoyed the conversation.
Ivan: Thank you for having me, guys.