Open Source Ready
55 MIN

Ep. #32, Rewriting SQLite for the AI Era with Glauber Costa

about the episode

On episode 32 of Open Source Ready, Brian Douglas and John McBride sit down with Glauber Costa to explore Turso, a Rust-based rewrite of SQLite built for the AI era. They discuss database reliability, open source licensing, and why embedded databases are becoming critical infrastructure for modern agents and applications. The conversation also dives into AI-assisted development and the future of software engineering.

Glauber Costa is a systems engineer and co-founder behind Turso, a modern rewrite of SQLite designed for distributed and AI-driven applications. With a background contributing to the Linux kernel and building large-scale database systems, he focuses on reliability, performance, and open source innovation. His work bridges deep infrastructure engineering with emerging developer workflows.

transcript

Brian Douglas: Welcome to another installment of Open Source Ready. John, are you ready?

John McBride: Hey, I am ready. You know, I gotta say though, Brian, we didn't quite make it for the Golden Globes best podcast award.

Brian: Yeah maybe next year.

John: Maybe next year. Yeah, it was only the first one, so they had to give it to Amy Poehler, right?

Brian: Yeah, I thought it was Amy Poehler, but yeah, good for her. Great that she got some notoriety for her podcast and everything she's talking about.

But the reality is that we're talking about something way cooler than like inside baseball and actors and stuff like that. Actually, hopefully we'll talk a bit about Rust today because we do have a guest today, Glauber Costa. Glauber, you're calling in from Texas today, right? Is that correct?

Glauber Costa: I am, Brian and John. Nice to meet you. Well I met you before, Brian. John, it's the first time that I get to meet you. I'm very excited to be here.

I am from Canada, but I have moved to Texas in June 2025. So I've been enjoying a little bit of this incredibly warm weather here in the middle of the winter in Texas.

Brian: Yeah, I mean, it's better than having it freeze over.

Glauber: I've been enjoying, I don't know, it's a great place and my family's having a lot of fun here.

Brian: Yeah, cool. So, speaking of fun, you've been working on this project and company called Turso. You want to catch the audience up? What is Turso? What problem are you trying to solve?

Glauber: So Turso is a rewrite of SQLite. And this is, again, the easiest way to explain it, the most direct way to explain it. But then we can start going in layers into what all of that means.

So it's a rewrite of SQLite in Rust, first of all. So this is something that attracts a lot of attention. There is a trend these days of, hey, what would programs look like if they were written in Rust? But it's important.

I always like to make it clear that when you rewrite something, you don't just take whatever was there and port it one to one to the new language. I suppose some people may do this, but that's usually not how it goes. I've been a part of several rewrites in my career, most of them successful. And the best ones are not doing just that.

What happens is that you have a reason to rewrite a software project. And obviously when you rewrite a software project, what you end up with is a new software project. So why do you call it a rewrite? You call it a rewrite because you are maintaining a certain level of familiarity and compatibility with the one that you are rewriting from.

So in our case, for example, we can read and write SQLite files. Our SQL syntax is fully SQLite compatible. You change the driver in your source code, and instead of talking to SQLite, you are now talking to Turso, and nothing else changes. So it is a rewrite of SQLite.

But, you know, you do this hopefully for some reason. Not to judge the people who are doing it just for fun--fun is not what drove us, but it is a valid reason. We had a couple of reasons to do this, and enumerating them--

We love SQLite. We've been using SQLite for a long time. I think the first time that I remember touching SQLite was in 2003. A lot of the audience might not even have been programming, or in tech, at the time. I mean, it's a long time ago. And I will never forget the feeling of, wait a minute, is this a database and it's just here? I didn't have to do anything. I didn't have to--

Especially back then, firing up a server was a lot more complicated than it is today. You would spend minutes, if not hours, just tweaking configuration files for the server to even start. That was the reality of infrastructure back then, in the early 2000s. And all of a sudden comes SQLite, which is this thing where you don't have to do anything.

There is a file, and then you type sqlite3 and you're querying a file. What a beautiful thing. And I've been using SQLite one way or another since then. So it's been, again, more than 20 years. And what my co-founder and I noticed is that SQLite moves very, very slowly, which is good and bad, like most things in life.

Brian: You mean like project, like open source and stuff like that? What do you mean by slow?

Glauber: Project wise, yeah.

And SQLite is known to be a very reliable database. It's a database that people trust for mission-critical workloads. It's a database that people will put on airplanes and whatnot. So I mean, it's truly everywhere. But the project moves very slowly. And one of the key reasons for that is that, by their own admission, which is written on their website, SQLite is open source but not open contribution.

So I suppose it's technically possible to contribute work to SQLite. I've seen cases of people who successfully did it. But the authors of SQLite are saying very explicitly that they're not interested in your contribution. And beyond just the technical details of whether or not you can do this, there's clearly not a community of developers for SQLite. There is a very small group of individuals that develop SQLite, and that's it.

The code is open, you can use the code, but you can't be a part of what this project is. And my co-founder and I, we met in the Linux kernel. We spent almost 10 years writing code for the Linux kernel, which, again, was also a very successful open source project with a completely different philosophy. The philosophy of the Linux kernel is: come and build, get a seat at the table, and help us push this project in directions that we were not even considering.

As a reminder, Linux started as a toy operating system for x86 processors, and look at what it is now. So we felt that SQLite could be a lot more if it had that dynamic. And 25 years, give or take, after SQLite started being this massive database, it was easy for us to see some very obvious points on which you could improve a lot. So that is part of the motivation.

And just to give you a couple of examples: SQLite does not have a CDC mechanism. For the people who don't know what that is, CDC stands for Change Data Capture, and it's just a way to inspect what changes happen in the database, so you can pipe that to an ETL process or do whatever you want with it. SQLite also does not accept more than one concurrent writer.

So if there is one process writing, or one thread in a more modern sense, writing to SQLite, that is the only writer. It takes a lock on the entire database, so your throughput for writes is very limited. SQLite doesn't really have a vector type, which is something a lot of people are looking for today. It doesn't have an extension system. I mean, you can have extensions, but there is no way to do what Postgres does and have something that becomes an ecosystem.

So when you look at all of those things, we see that, hey, it would be great to rewrite this project. And then there's the final component, which is, okay, I suppose many things in the world could be better, so why spend our time making this thing better? And the reason is that when we started this, which was in late 2024--we announced it in December 2024 and went all in on doing this in January 2025--agents were starting to become a thing.

People were starting to understand that AGI, at the end of the day, is perhaps just a dumb French dude called Claude with access to a shell to do things. So I mean, the French dude doesn't have to be smart, but once you give that guy Bash, it becomes superhuman to some extent. And people were reaching for SQLite in a lot of use cases, even in the early days of LLMs. I mean, SQLite was everywhere as a cache.

SQLite was there as part of the context window, as part of the agentic memory, or even the vibe-coded applications. You vibe code an application, you don't want to spin up an entire database for that. That application may have a lifespan of minutes, perhaps even less. But it started to become clear to us that the form factor of SQLite was perfect for the world that we were heading towards. The feature set was not.

So it had to be rewritten. And then again, if I am going through the trouble of rewriting something, then I'm going to pick the language that I like the most and that's Rust. So you see, Rust is the least interesting part of the story I talked about. It tends to be the part that people obsess about the most because there is this trend of rewrite the world in Rust. But for me it's a footnote.

It's really like why we did it, why now? Why do you think it's a good idea? Why does that need to exist? Obviously SQLite is a database, so I'm not going to use JavaScript to do it. There is a restricted subset of languages that you can use for this, but after everything is said and done and the reasoning is there, the language becomes the least interesting part of it all. But that's the whole story.

John: Yeah, very interesting. I've been a part of a few of these types of rewrites before, both for fun and for business, I guess. Also in Rust, also as part of deep infra stuff when I was working on Amazon Linux 2 and Bottlerocket at Amazon. It was always so striking to me: you take the pieces that worked best from the thing you're rewriting, the ideas that make it most compatible with the other systems you're going to integrate with, and then the best ideas from the technology you're going to use.

So I'm very curious about your thoughts on what the best things or ideas from Rust are that have really enhanced the whole product.

Glauber: Well, about the other choices of language that we had--I would not have started this in C. There's no part of me that would have done that. I mean, we could. As a reminder, a rewrite is done to achieve certain goals; it's not done to do something in a particular language. That's not how I view the world. So ultimately we could have done it in C, there's nothing wrong with that. We could have done it in C++, we could have done it in Zig.

Go is a little bit of a stretch. There are some systems written in Go, but I think for what SQLite is, Go has too much runtime already, because of the whole thing with goroutines and how the Go ecosystem works. So it would be possible, but very hard to do. So those are the languages that we had to choose from, right? C, C++, Rust, Zig. We have a lot of very successful projects now written in Zig--there is TigerBeetle, there is Bun, of course--and we could have done it.

I think that the reason we did it in Rust is that SQLite is known to be the most reliable database in, perhaps, the universe. You can say the planet, but it runs in the Mars Rover. So you have stuff in space that runs SQLite.

So if you allow yourself to be a little bit expansionist, perhaps you can say that. And when we started talking about rewriting SQLite, a lot of people came with that objection, which is an objection that I would find valid if it wasn't for the fact that obviously we are aware of it. People were pointing it out as if we were not aware of it. So I agree with the concern, but it's just not an objection for us.

The thing I like the most about SQLite is not the feature set. It's really the fact that I can just fire and forget and trust and it's reliable and it never crashes and it has no bugs because it has 100% test coverage.

And by the way, the magical SQLite test suite that everybody talks about is proprietary. It's not open source. It's not like we can just build Turso and then run it against the SQLite suite. That cannot be done, because SQLite has a variety of test suites. One of them is open source, which is the very basic one; the one that transforms SQLite into the behemoth that it is, is not.

And reliability was incredibly important for us. Being able to write a project that is guaranteed to be reliable, guaranteed to match and perhaps even surpass what SQLite gives you, was non-negotiable. It was not something that we could negotiate. The way we do this--again, it is possible to do this in Zig, and TigerBeetle has done it--is deterministic simulation testing, the same way that they do.

But we kind of felt that for something like SQLite, every piece matters; everything that you can put to your advantage matters to some extent. And Rust being a memory safe language, we felt like that already gives us some guardrails that can help us. And again, it's a very hard battle, matching the reliability of SQLite. So everything helps, and the memory safety was very appealing to us.

So if I had to summarize why Rust vs. Zig, I would say the memory safety. But it's important to put it in this context, right? It's not that it would be impossible to do it in Zig; I think we would have to spend a little bit more time on verification for memory errors, and in Rust we just don't have to do this. So it's a plus. But that's the thing that we use from Rust the most.

Brian: So you guys have been working on Turso for a bit, and I know the Rust rewrite is fairly recent. Also, I noticed a beta flag, or at least the mention, inside your repo. So I'm curious what it's been like adoption wise. With the extension of some of the features you can't get in SQLite, are folks picking this up? Is it validated right now, I guess, in the open source market?

Glauber: So Turso is not that old. I mean, the company is older than that, but the company was just doing something else. We actually had a fork of SQLite, and during that period is when it became very obvious to us that it had to be rewritten. Right? But then what we decided to do is just give everything the same name, which is confusing in the short term, but in the long term at least I don't have four things with four different names.

Running a project or a company or anything--again, it's the same thing as with the memory safety: every advantage you can get helps, because it's already hard to build a brand. I don't want to build three brands. We just named everything Turso. The things that we had before we just called Turso Cloud, which is a service over the wire running SQLite. At some point this is gonna run Turso. So Turso is the rewrite of SQLite.

Of course, sometimes people still call the old thing by the name Turso, and I understand it's hard to adapt, especially if you used it before, but Turso is squarely the rewrite of SQLite. The first people to adopt Turso have been us. So, as you mentioned, it is in beta, and we implement around 85% of the surface area of SQLite right now. We do add new features, so there is vector search, there are concurrent writes, there is CDC, as I mentioned before.

We're just working now on full text search. Again, SQLite has full text search as well, but it's quite cumbersome to use, so we're trying to improve the DX and the performance. There are some performance improvements, and there are places in which SQLite is faster than us. It depends on what you're doing.

And as I mentioned, we have a product that is still the main activity of our company, which we call today Turso Cloud. I don't think it's worth going too deep into what that is, but it's a database as a service. And guess which database we've been using to build that service? Well, SQLite. So we have tons of SQLite databases: for the API, when you create a user, the user table is in SQLite; when you create a database, there is a table with all the databases, and that is in SQLite.

When you read or write, we log that into a SQLite database. So everything we build is built with SQLite. And some of those databases that are internal to our service are running with Turso now. So I mean, it is already at the point where it can run some production, but it's not yet at the point where we want to bet our reputation on telling people you can run this in production, and then you do it and something goes wrong.

So the beta label will stay until we migrate all of our databases to Turso and get very close to 100% coverage of what SQLite does. It won't be perfect, but when we hit the 1.0 mark, we want to hit it exactly at that point. If you use a very obscure feature of SQLite, maybe it won't be there, but the coverage should be over 90, 95%, and we as a company should be comfortable telling people you can use this for production, because we are doing that.

So that will be the point at which we'll remove the beta label. But Turso, again, has existed for around a year. AI is accelerating our timelines there a lot, and it's the beginning of a beautiful journey. The adoption is coming--I think we are seeing downloads growing week after week--but it's not yet at the point where it's going to reach escape velocity. That won't happen yet.

We don't have any illusion that this would happen this early, especially for something as crucial as SQLite. While that beta label is there, it gates adoption, and quite frankly, this is by design. Right?

John: Yeah. I did have a question around the licensing. You know, this is something that frequently comes up with our listeners who are business owners and open source practitioners. Turso looks to be MIT licensed, which was a little surprising to me for a new database, where a lot of these seem to be going more, I guess, source available or copyleft, like AGPL.

Does it worry you that a cloud service provider like AWS or somebody would start offering SQLite, or Turso I guess, as a service? Does that concern you? What are your thoughts on that license being permissive?

Glauber: It does not concern me in any way, shape or form. In fact I would welcome this truly because SQLite is a very different database than other databases.

SQLite is very special. And what makes SQLite special is the fact that it can run everywhere, that it can be embedded into everything. There are databases that themselves run SQLite.

So two examples come to my mind. FoundationDB uses SQLite for its storage layer. Chroma uses SQLite. Convex actually uses some SQLite, at least in the things that they put in open source to do local simulation. So the way we view this is as a constraint of the problem: you can't have a component that gets to that level of pervasiveness if the license is not incredibly permissive.

SQLite itself is actually public domain. It has no license; you can do whatever you want with it. Well, I'm not a lawyer, and I don't want to start a legal discussion, because there are restrictions of course on what public domain means depending on the country. Here in the United States you can pretty much do whatever you want with it.

And for us it's just one of the constraints of the problem. It has to be like that. It would make absolutely no sense otherwise for a database that is replacing a database that is used, again, in the Mars Rover, in your browser. Turso runs natively in the browser, by the way. We have a WASM build; if you go to shell.turso.tech, there is an example there running with persistent storage in the browser, and it will become a part of other projects.

Our goal is to have open source projects like Mastra make Turso a part of their project, and we have to work within those constraints. So for us it was never even a question whether we adopt a permissive license or not. Now, there is also something special about SQLite, which is the fact that SQLite itself usually doesn't do much. So if Amazon or Google or anybody else were to go offer a service based on Turso, Turso would by definition be only a part of that puzzle. Because, what is SQLite? What is Turso? It's just a library.

So you have to have all the networking parts, you have to have the server--anything that you offer to the public needs a lot more infrastructure that is not part of your database. And in our company, all of that is proprietary. It's not even copyleft. I mean, it's very proprietary.

So what I like about the SQLite model is exactly that: it allows you to have a very clear dividing line. Elasticsearch cannot have that dividing line. MongoDB cannot have that dividing line. The moment you're using MongoDB, you are using the whole thing or not at all. But with a library database like SQLite, the part that is open source is very powerful--it is the core of the operation--but it doesn't give you enough to offer a service to anybody. You have to do something with it. And that allows us to have a very clear dividing line between what is open source and what is not.

John: Yeah, that's fascinating. I remember when Grafana released Loki, which is a Prometheus-like system for logs, and it really is the whole thing--it's AGPL. So basically the whole horizontally scalable system they run is what's released.

Glauber: If you had a library to parse logs, for example, that library could be very permissive, because to make that service useful you still need to build something around that library. And the situation with SQLite is similar, right? SQLite at the end of the day is just a library that will parse a file, read and write from that file, and allow you to execute SQL against it. But anything you do with that from a business point of view has to go beyond that. And this is great.

This is absolutely great for us because it makes our mission very clear. It makes the expectations that we put on open source very clear. So everything that is local is open source, and it is very easy for people to understand that we don't have an incentive to make it not open source, despite all the license rug-pulls and all of that.

But at the same time, there is a very broad understanding today in our industry that if you're running a service, you have the right to keep the code for that service. Nobody really expects that your SaaS will be open source. Some are. But databases, if they're not open source, people won't touch them. Cal.com is open source, and if they weren't, I think some people would not use them because of that. You can make it work to your advantage, but for a service that you run, I don't think there is the expectation that it is open source. I think it works pretty well for us.

John: Yeah, you had mentioned that you and your co-founder had worked on the Linux kernel at one point. And I do see that same philosophy kind of mirrored there, where there's enterprise Red Hat Linux, for example, which is licensed and is an enterprise product and all that.

Glauber: Linux is a little bit of a miracle, because at the time, I think, this whole thing was new, and people have since developed a very strong reaction to the GPL. Linux is one of the only examples remaining today of something that is GPL and people are fine with it. And I think people are fine with it only because it is the Linux kernel and it's so pervasive--otherwise, a lot of people just hate the GPL.

Another thing that we rewrote, my co-founder and I--we were employee numbers 3 and 4 at a company that rewrote a database called Cassandra in C++. It's a project called Scylla. Scylla was AGPL, and they just changed the license recently. And AGPL is tough, because it's not open source enough, so you don't reap the value of being open source, but it's also not proprietary enough. So it's a pretty bad place to be, in my opinion. I think you should either go proprietary or fully open source.

And AI is changing, I think, what is going to be acceptable as open source or not. And I think everybody that makes a prediction here is almost certainly wrong just because we don't have enough information yet. But it's easy to see how this will have an effect on open source, but my intuition is that it will kill the middle really. And you're either open source or you're not. It's pretty hard to be in the middle.

Brian: Yeah. And how does it affect Turso today, specifically AI? Like, I have a large project that I'm working on at Continue, and it is an AI project, so we get a bunch of AI-generated contributions.

Glauber: Mhm.

Brian: I wouldn't say we explicitly encourage it, but if I get an AI contribution, I always respond with AI. Like, literally just, "hey, if you're generating this, use this prompt instead."

Glauber: Yeah.

Brian: I actually find it quite fun, to see if anybody bites; otherwise we just close the PR. But I was curious how Turso is handling it right now.

Glauber: Yeah, so first of all, for Turso, the motivation for us to even start writing the database was AI. And just to expand on that a little bit more, it's becoming very clear that isolation is one of the key components of AI, because the models are completely dumb, and you want them to do powerful things, but at the same time you don't want them touching shared state, because they will go wild.

So you have this idea that every small application that you run with a prompt will end up having its own database. You have this idea that every coding agent can be storing state in its own database. Take OpenCode: on X yesterday I was having this back and forth with the founder of OpenCode. They're moving some of the stuff that they have from flat files to a SQLite database. We are obviously talking to them to see if Turso will be an option, which they seem open to, and they see the advantage in some of the places where Turso would fare better than SQLite.

So this idea that we're going to have a billion agents, some of them on the enterprise, some of them coding agents, some of them on your Mac and some of them on sandboxes and they're all going to benefit from an isolated, incredibly cheap database that might not even have a network connection at all, it's just for you to store state--That idea is a very powerful idea.

So that's our first relationship with AI, and it would be very hypocritical of us to nag and say, let's not look into this tech, into how this technology can help us build the database. Now, for a systems project like ours, I don't think we are at the point yet--we may never be, but certainly not now--where you can vibe code, so to speak, quote unquote, because I don't necessarily like the term, but it is the term that people use. You can't just vibe code a piece of code for Turso and commit it to the repository.

You have to read and understand what's being generated. So we don't have any opposition to people using AI-generated code; we neither discourage nor encourage it. We just don't do what Mitchell is doing with Ghostty, where he's requiring people to disclose if they're using AI. We don't. And the reason we don't is, why would you? I mean, we assume you are, and you should be.

So we don't require that disclosure, because we think the obvious thing is that you will be using AI. But we expect you to have read the code before you submit it, we expect you to have done the manual fixes before you submit it, and you're still responsible for what you submit.

And then of course our maintainers will read the code as well. We're going to have a discussion and accept it or not. We have 185 contributors today, which is a large community--again, especially in comparison to SQLite--so wildly successful in that regard. So we use AI a lot. What we are starting to experiment with is: can AI start fixing bugs automatically?

And we have had some success with that. Just one example: if you're familiar with Rust, Rust has the unwrap method. We don't use it a lot, but there are always places in which you end up with some unwrap. So we had a loop running--one of our engineers did this--in which you ask Claude to scan the code base. You ask Claude to enumerate all the uses of unwrap. Then you ask Claude to look at the code around that unwrap and try to come up with a SQL query that would trigger it.

And then, if successful, that becomes a bug report on GitHub, and then you can ask another instance of Claude to fix it. So you can have this level of-- And then maybe three days later you wake up to a PR, and then you're going to look at the PR. Right. So I think it's still very important to us that we're going to be reading that code, and it's very important that we understand the code and can veto the code that's going to go into the database.

But we are very bullish on how AI can accelerate that, and we're doing it a lot. And as I said, to the point that we don't require disclosure, we in fact believe that you would be foolish not to be using AI. You should disclose if you're not using AI, because then perhaps we'll take you a little bit less seriously, or ask AI to review your code first for the obvious things that you may have missed.

Brian: Yeah, that's fair enough. I've been working with some cloud agents, and what I like to do is, we basically set up our Sentry. Sentry has this great tool called Seer, which is their AI assistant inside their product, but it actually has an SDK and an API that's connected. So instead of combing through the dashboard, we can now do one-click Sentry fixes. And then what we've been experimenting with is, can you fix an issue through the dashboard?

So far what we have is that we can actually devise a plan or a response to the issue that's opened, which is a bit more valuable than just going one shot and the thing's closed. Because the person who opened it can also validate: oh no, this is either the wrong approach or the right approach. So I've been experimenting with this, generating comments, and then, as if I was writing a blog post, I go edit it manually as a human, and then respond with that comment based on the context of the code base.

Glauber: Yeah, but keep in mind, again, that the example that I gave you is a lot simpler than that. Right? Because there's no plan. It's just like, hey, you found something that crashed the database. We found one or two that were real, because a lot of those unwraps are just there, and they're there because it's truly impossible to hit them.

So, I mean, I'm a very huge fan of assertions, and assertions should be used in databases. I will be very surprised if anybody in the database world would disagree with that statement. I understand that it's a strong statement--the other day, Cloudflare had this famous unwrap that brought down the Internet. Again, I don't see it that way.

But in the database space, I think the calculus is a lot more in favor of asserting every single thing. At some point, I joked that I would tie the compensation of my engineers to the amount of asserts that they write. Obviously a joke, because they should just be doing the best that they can.

John: This is very funny to me because I once got chewed out by a principal engineer at Amazon for writing asserts.

Glauber: Yeah, probably wasn't a database guy then.

Because the thing with a database is that people think that a program crashing is the end of the world, that the worst thing that can happen to your program is a crash. So instead of crashing, you should return an error. Now, don't get me wrong, of course, if you can return an error, you should, right? But in a database, there are things that are much worse than crashing. Destroying your data, writing incorrect data, corrupting your data. All of those things are much, much worse than crashing.
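Glauber's argument can be sketched in a few lines of Rust. This is a hypothetical example, not Turso's actual on-disk format: `Record` and the length-prefix layout are invented for illustration. The point is that an assertion in the write path crashes loudly on a header/payload mismatch instead of persisting corrupt bytes that would only surface, much more painfully, at read time.

```rust
// Hypothetical record type: a length-prefixed payload. Not Turso's format.
struct Record {
    len: u32,
    payload: Vec<u8>,
}

// Encode a record for writing. The assertion fires BEFORE anything
// reaches disk: crashing here is recoverable, while silently writing a
// header that disagrees with its payload corrupts the file forever.
fn encode(rec: &Record) -> Vec<u8> {
    assert_eq!(
        rec.len as usize,
        rec.payload.len(),
        "record header/payload mismatch; refusing to write corrupt data"
    );
    let mut out = rec.len.to_le_bytes().to_vec(); // 4-byte little-endian header
    out.extend_from_slice(&rec.payload);
    out
}

fn main() {
    let rec = Record { len: 3, payload: b"abc".to_vec() };
    let bytes = encode(&rec);
    // 4 header bytes + 3 payload bytes
    println!("{}", bytes.len());
}
```

If `len` and the payload ever disagree, the process aborts at the write site with a message pointing at the exact invariant that broke, which is precisely the debugging position Glauber prefers over a corrupted table.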

And I remember this from my Scylla days, and again in the Linux kernel, I've seen a lot of those stories. I had a lot less personal exposure to them just by virtue of the things that I was working on. But at Scylla, I remember one day one of our customers just came back with a corrupted table. They sent us the file and said, this file is corrupted, okay? And then you read the file, and the code to read the file will do something bad, because it expected 10 bytes for that field, and that field doesn't end. Right?

But the real problem is how you got to that state. And it is very hard, if not impossible, to figure out why that was written incorrectly. You will never reproduce this again. On the read side, you can parse the file, but it took us two or three days to even figure out what was wrong, because it was a big file. So you run the database and try to make it crash, actually, because what was happening is that the data was wrong.

So, I mean, you try to find the exact place in the file where the data is wrong. When you find it, you crash it, so you can inspect exactly what's going on. But okay, now what? How did this get to be wrong? You truly don't know. So especially in the write path of a file system or a database, you are much better off crashing than writing something incorrect. If you write something incorrect, that's it, it's game over. The only tool that you have at that point is your brain.

You come up with a theory and it's very hard and it sometimes takes days to even come up with a theory. And then you want to test the theory. So it's one of the worst positions that you can be at. Not to mention, hopefully, people have valuable data in your database. Otherwise, what are you doing? And the data is gone. And then you have 100 other customers and you don't know are they hitting this bug? Are they susceptible to this bug? So nobody knows.

So it is much, much better to crash than to write incorrect data. In a database, that's the calculation. I can see why some people would feel differently. I think I would disagree because of my background. I didn't grow up understanding crashes as a bad thing. You crash, Kubernetes restarts you. So what? It's not the best, but it's also not the worst.

But I think in a database the calculation is very clear towards don't crash if you can avoid crashing. But if the choice is crash or write bogus data, I'll take crashing any day of the week.

So, but again, even in our project, we don't want to crash where we're not supposed to crash. So you can ask Claude to go and analyze each one of those. And the solution is usually simple. It's just like instead of crashing, propagate this error. So it's something that is very easy to automate and doesn't need a complex plan. So we had a lot of success with that. And we want to experiment more and more with how we can use AI to speed up the gaining of trust that we need for this database to see production. So it only comes to everybody's advantage, I think.
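The mechanical fix Glauber describes, propagate instead of panic, looks like this in Rust. A minimal sketch with an invented function (`parse_port` is not Turso code): the `?` operator hands the error back to the caller, where `unwrap` would have aborted the whole process.

```rust
use std::num::ParseIntError;

// Before: `s.parse::<u32>().unwrap()` would panic the whole process on
// bad input. After: `?` propagates the error so the caller decides what
// to do with it.
fn parse_port(s: &str) -> Result<u32, ParseIntError> {
    let port = s.parse::<u32>()?; // was: s.parse::<u32>().unwrap()
    Ok(port)
}

fn main() {
    // Good input succeeds; bad input becomes a recoverable Err, not a crash.
    assert_eq!(parse_port("8080"), Ok(8080));
    assert!(parse_port("not-a-port").is_err());
    println!("ok");
}
```

Because the transformation is this local and this regular, it is exactly the kind of change that can be automated across dozens of flagged `unwrap` sites without a complex plan.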

Brian: Yeah. So speaking of advantage, you're doing the Rust thing. I wanted to quickly speak on your team member, who I randomly heard about while listening to an NPR podcast. Can we speak on that one? We don't have to spend a lot of time on it, but I'd love to have a footnote about that.

Glauber: Well, we spent a lot of time in person. I suppose our audience doesn't know. So again we are a distributed team and we have one of our employees in Mountain View. Now, as he would tell you, it's not the Mountain View you are thinking about. It's the Mountain View Correctional Facility in Maine. So we have a person working on Turso from prison. It's a very beautiful story.

As you mentioned, I think there are podcasts, there are hour long podcasts where he talks about it, I talk about it, we both talk about it. And so the whole story is quite interesting. But the summary of the story is that when we again when we released Turso, it was an open source project, there was this person in Maine that had access to the Internet in a contained fashion because it was a part of a program to reduce recidivism. So they want you to get out of jail and hopefully don't come back.

So one of the things that they're trying to do is, if you learn a skill, if you do something useful with your life, you're less likely to commit another crime and come back. It's not a program for everybody. I do not understand all the qualifications that you need to be selected for this program, but Preston was one of them. And again, he had access to the Internet with restrictions, and he saw Turso. He fell in love with the idea of rewriting SQLite and he started contributing to it, and we had no idea he was in prison.

He quickly became the fourth top contributor to the project. The joke that we're always telling is that he was locked in, right? So he could do a lot of work. I mean, it's not like you have a lot of things to do in prison. So he could pour a lot of time into the project and quickly became the fourth top contributor. And only then did we find out that he was in prison.

So we kept talking, we became good friends and we kept talking for a couple of months and around May last year we had an opening in the team. And then everybody felt that perhaps if we can make it work with the prison system, Preston will be a good addition to the team. So we've now hired him officially.

Brian: That's amazing. I do encourage folks to listen to the hour-long podcast. I think it came through my feed. I don't even know what podcast it was, but it came through my feed and I messaged you right away. Literally two days after we had met in person, the podcast came on.

Glauber: If people are interested in databases in particular, I would recommend the Aaron Francis podcast because he had an hour long episode in which they talked for around half an hour about his situation and the rest of the time is really just databases. And of course there's a sprinkle of both in both parts.

It was a very good interview, but there are many others if you just look for them. We have an article, I gave an interview to NBC, there are things published on TechCrunch. It's a story that really made the rounds, and it's easy to see why. It's a beautiful story. It's a story that we want to believe in. I mean, it's a person that's turning his life around, and he really did. So we're very happy to have Preston on board.

Brian: Excellent. Yeah. Well, speaking on board, I appreciate you having a conversation about Turso with us. I'm actually looking forward to actually deploying the stuff I've been sort of kicking around. So I'm not fearful of the beta flag. I don't mind going to production with cool things.

Glauber: Let's go.

Brian: Feedback incoming.

Glauber: There are many situations in which it's totally acceptable. So let's say you validated it and you're not hitting any of the unimplemented features, because that's one of the reasons it's beta. What do you have to fear?

What you have to fear is that the database will crash and the database will eat your data. So if it is your source of truth, that matters a lot. If it's not your source of truth, that doesn't matter. That's why it's important to release early and also be explicit about the status because everybody can make their own calculation about whether or not they can use it at this stage.

Brian: Excellent. Well, yeah, thanks so much. And I do want to transition us to read. So, Glauber, are you ready to read?

Glauber: Let's go read.

Brian: Excellent. So I've got a read, which is a tweet from Jarred Sumner of Bun. We mentioned Bun a bit earlier when we were chatting about Zig and Rust and Go. Actually, Bun got acquired into Anthropic, which is a whole other story. Everyone can catch up on that.

But he made this claim, which, I'll butcher it if I don't actually read it out loud. So let me quote him: "So I think open source repos almost entirely maintained by LLMs will be a thing this year."

And there's a whole thread where he sort of explains how GitHub issues could be prompts. This is something I think I'm actually down for. And I think there was another person who was maintaining a Claude bot and recommended this week: instead of opening a PR, just give me your prompt, and then I'll run it and we'll see if it works. But I'm actually curious, both you and John, what are your thoughts on those two statements?

John: I think Jarred is wildly out of touch, unfortunately. The reason I say that is because, yes, there are repos and things that can be, from the technical perspective and the code and even the issues and the pull requests using GitHub or GitLabs, APIs and things, sure, it can manage all that. It can write the code, it can handle opening and closing and merging PRs.

I think what people often forget about open source, and I really want your takes on this as well, is that it's such a personal thing, at least in my experience. The things that are really hard in open source usually are not the technical bits. It's usually managing expectations from all the people: all the contributors, all the people who use your project as a dependency, handling, you know, rolling releases, or when things go very, very, very wrong and you're on the front page of Hacker News and people are calling you a huge pile of poo or whatever, and you have to personally deal with that.

I don't know of an AI system that can take that off of my shoulders.

Glauber: I would love to just have an AI system that goes and reads Hacker News on my behalf, though, because that is the least fun part of the job.

John: There you go.

Brian: Yeah, summarize it and then give you all the compliments and ignore the rest.

Glauber: Yeah, ignore the rest and table it. And look, I think the interesting debate here is the semantic debate on the meaning of words. What do you mean by "almost entirely," right? Not to get too Jordan Peterson-y, like, what do you mean by "what," what do you mean by "do," what do you mean by "you," but-- What do you mean by saying that an open source repository will be almost entirely maintained by AI? Because clearly some of it will.

And I just gave you guys an example. We are using AI at Turso to file issues. We opened the other day almost 30 issues in that unwrap experiment. And some of them we closed because we looked at the code and then you put a comment, realizing that, yeah, this assert has to be here because the alternative is your data is gone. Right? So some of them were real and then you tried to use Claude itself to fix it.

So I think it depends a lot on what the project is. I would not have AI writing APIs for me, but I don't have to in a lot of ways because again, Turso is a very interesting example because it is a systems tool, it's a database, it's something that has to be very high quality. And yet there's a bunch of AI. So where would I put AI? Where I would not put AI? Another thing that we're experimenting with now is like, okay, we're implementing around 85% of SQLite. Can AI help us get that to around 90%?

So some features will be too complex, but some of them are like, I'll give you an example. In SQL you have the returning clause. That one I actually think we closed, but it was open until a couple of weeks ago. You can insert, delete or update an element and then you can put a returning clause. What that does is that it gives you the result back in a way that you can consume in your application.

We had the returning clause implemented for updates and inserts, but not deletes. The API is not the issue here. There is not a lot of creativity, there is not a lot of taste that you need from the LLM. All you need is, why is that unimplemented? What is the best way to implement it? Can you implement it? Things that you can easily verify are things that LLMs are going to become really good at.
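For listeners who haven't used it, the RETURNING clause Glauber describes looks like this in standard SQLite syntax (the table is illustrative, not from Turso's test suite):

```sql
-- Illustrative schema.
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);

-- INSERT with RETURNING: the new row's id comes back to the application
-- in the same statement, with no follow-up SELECT.
INSERT INTO users (name) VALUES ('ada') RETURNING id;

-- The previously missing case: DELETE with RETURNING hands the removed
-- rows back as the statement's result set.
DELETE FROM users WHERE name = 'ada' RETURNING id, name;
```

Because the contract is fully specified by SQLite's existing behavior, an LLM's implementation can be checked mechanically against the reference: run the same statements on both engines and compare the result sets.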

Things that depend on taste are much harder. But things where you can run a program and say, is this right or is this wrong? Why would you not spend the tokens trying to get the LLM to get it right? And I think a lot of bugs are like this, because a lot of bugs are: I know the state that I desire, I know what working means. I can define this very easily and let the LLM explore why it's not working.

So I think for a lot of bug fixing, it's a great tool. For feature requests, it's not. Right? Because for the feature request you have to make the decision. But for the bug fix you already made the decision that the feature is in. You already decided that the feature has to have a specific shape. You told the users, through a contract, what that feature should do, but then you find out that it's not doing it.

Why would you not use an LLM to fix it? So I think there's a lot of ways in which you're going to have a lot of LLM activity in many open source repositories.

So "almost entirely" might be too big a statement, but I don't think it's wrong.

Brian: Yeah, fair enough. I like the take of focusing on the bugs as well, because I think what we're seeing in the last six months is a lot of folks who were skeptical are now coming on board and approaching AI agents and coding generally. We had the vibe coders, the sort of JavaScript nerds. They were all super excited with the Bolts and Lovables, and now we're at a point where we're not quite going to start from a blank page, REPL it, and then ship a full-on product. But when it comes to, like, if you have--

Actually, I'm just double-clicking on what you were saying about having the contract. If you already designed the API, you designed the interface for the user, then one hundred percent, let's just keep that on par with what the expectations are. Because I think what AI is actually unlocking is for us to go do the creative part.

Glauber: Yeah.

Brian: So if you think about the system design, you could spend more time on that and have that as your backbone, as what we're trying to accomplish. It would be really wild for Claude to basically manage all feature development and your roadmap as well, because, yeah, I don't know if Claude, well, today at least, it doesn't have the capability to go and reason over user feedback and journeys and studies and stuff like that.

I was watching a podcast and someone said that product managers used to be joked about, and now they're kind of leading the edge when it comes to having all the context of where the roadmap is going and sort of dictating what we should be focused on, one hundred percent.

Glauber: I was always a fan of Lovable, Bolt, v0, et cetera. In fact, I met Guillermo Rauch right after the v0 launch and I told him that I loved the launch. And the feature that I think they nailed best was the name. Because it clearly means that you build the v0 of the product, and the name is a feature to some extent. So I think that is the best feature of v0: the name.

And what a lot of people don't understand is that those are-- If you go look at Lovable, most of their customers are not programmers, they're product managers. So I think this discussion of whether or not this can produce competent code is a tangent. It's not the interesting discussion. The discussion really is like, look, I'm a product person, I have no idea how to code and I could code something.

Now, that doesn't mean that that code is going to be the code that runs your production for a million users, but it could run your production for a thousand users to make sure that your idea is validated. Right? It could be the working prototype that you show to people. So there are many ways in which you enhance the capabilities of those people that before could not fully act on their own.

And all you need to do to appreciate that is remove this frame of reference that if I am writing code, that means I am writing code with the intent of making something reliable. Right? And just think about it. How many times have you seen in your career a product person come to you and say, I want this feature? And what he truly wants is that feature done in a day, if you can. But then you pass this to the engineer, and the engineer starts going: what framework should I use? What testing methodology? None of that shit matters, not at that stage anyway.

But again, it was defensible to do this, because the idea is, well, software has a cost to be written and it's going to take me two weeks anyway. And after that, if it works, realistically, as much as we say we would throw the prototype out, we won't. So you defend yourself in case it doesn't work. So I'm not completely blaming the programmers, although we are known to exaggerate things and overcomplicate solutions.

But it's not necessarily that the coders were wrong, it's just the incentives are set that way. Right? So the product person wants this done in a day. You perhaps could do it in a day, but you know that if you do it in a day and that works, you're setting yourself up for a lifetime of failures. But the economic incentives now are very different.

You come up with this thing you generated in two hours. And because it was so easy to generate, throwing it away is easy, right? You don't have emotional attachment to that. I've thrown away a lot more code now than ever before. Because you don't have any emotional attachment to the code.

You go, you put Claude on a specific thread. If it generates code that you don't think is good, you throw it away. I don't know if you noticed this, but you throw away things with a lot more ease than you did before. So it's all about setting the expectations and understanding the incentives, I think.

John: It's a very good take. I think that emotional attachment to code is a really key part. And there's a lot of the discourse I've been seeing online about AI, this kind of turn we're seeing since the holidays: a lot of people, I think, finally getting to try Claude and see how good it's gotten, maybe getting the breathing room to do it and seeing some of the things they can finally do with it.

My read is this piece called "Don't Fall into the Anti-AI Hype," which really is just, hey, look, these things have gotten so good, and the tooling is now there, and you can whip out features, just like you said, really quickly. In one case, this person makes a C library to convert some models, which is a little slower than something like PyTorch. But they make it in like five minutes, something that maybe would have taken them days instead. It's a very wild time to be alive, right?

Glauber: It is. And as I said, I think the key thing is that it is very hard to know. We're all trying to find out because we want to be early, right? We want to find out and we want to position ourselves to be, you know, where is this going? But I think realistically it is very hard to understand where this is going. I think it's useful to try to pin yourself to principles, right? Like reusability will still be a principle. You see what I mean?

I mean, it's hard to think of a world in which the principle of reusability would not matter, right? But we're changing the cost of variables that we did not even know were variables up until perhaps six months ago, things that we thought that were perhaps constant and given and part of the structure of the world of software. And now, you saw that it was just a cost function that had a very high cost.

And the moment you remove that cost, you change the equations completely. So I think it will be hard for us to make any intelligent predictions, just because things are changing so quickly and we haven't yet developed a good mental model for the costs and the intuition behind those things. Some people are on the forefront and they're seeing it firsthand. And of course they're going to have better information than anybody else, and they'll be able to make slightly better predictions. But there's just so much in flux that it's hard to fathom how the world will be in two years.

John: Yeah.

Brian: And I think that's a good stopping point for us to wind this down. So, Glauber, thanks again for the conversation.

I'm super excited about 2026. There's a lot happening and I think we've seen so many launches and ships and blog posts and ideas in a matter of two weeks. Who knows what's going to happen next? Either that or the bubble pops. So we'll be here for either.

Glauber: We'll ride whatever comes.

Brian: All right, listeners, stay ready.