Jamstack Radio
36 MIN

Ep. #136, Serverless Postgres with Nikita Shamgunov of Neon

about the episode

In episode 136 of Jamstack Radio, Brian speaks with Nikita Shamgunov of Neon. This conversation explores how Serverless Postgres is revolutionizing the database game, making it easier, more scalable, and more cost-effective than ever before. Together they explore topics like orchestrating micro VMs, utilizing databases as a service (DaaS), building a scalable cloud service for Postgres databases, and getting started with Neon.

Nikita Shamgunov is CEO at Neon and a partner at Khosla Ventures. He was previously CSO at SingleStore.


Brian Douglas: Welcome to another installment of Jamstack Radio. On the line we've got Nikita Shamgunov. How are you doing?

Nikita Shamgunov: Great. Very happy to be here.

Brian: Excellent. Yeah. So I'm actually happy to have you here. I've been following Neon for a bit and looking forward to digging into serverless PostgreSQL, but do you want to quickly explain who you are and what you do at Neon?

Nikita: Absolutely. So I'm CEO at Neon. Neon is a company that is two years old and change, maybe two years old and a couple of months, and we're building serverless PostgreSQL. Now, I've spent my whole career in the database space. After my PhD I started in 2005, working on the SQL Server project at Microsoft, where I spent six years on the core engine, mostly on the compute side of things.

Databases typically have storage and compute, so I worked on the compute side of things. I started a company called SingleStore. SingleStore became a unicorn company as of last year; it raised at, I believe, a $1.2 or $1.4 billion valuation. That was in 2022, already past the crazy 2021 years, so the valuation is more justified. It's a scalable database.

I spent a decade building SingleStore from the ground up, and through that journey we built a scalable database which turned out to be an enterprise-first offering. Basically, because it's scalable and can support very, very large workloads, you find those workloads in the enterprise. And the sales motion for this company turned out to be a top-down enterprise sales motion.

After that I joined Khosla Ventures as a partner, and walking into Khosla Ventures, I spoke to Vinod Khosla, who is the founder of Khosla Ventures, and said, "As I was building SingleStore, focusing on the top of the market, I had this idea of how to become the default cloud database and go and figure out bottom-up adoption."

That was the idea of Neon. Vinod said on the spot, "Here's $5 million, why don't you figure that out?" And we were going to incubate the company. We started working on that; we started payroll on March 1st, 2021. Thinking about when you start companies, typically it's a group of friends that are iterating on the idea, figuring it out. In this particular case, this was a company built in the lab.

The idea was there: the idea was to create a dominant PostgreSQL offering. We observed that PostgreSQL is becoming more and more popular as a developer choice, and nobody owns PostgreSQL, which at least from first principles allows you to come up with a company idea: provide the best PostgreSQL service.

Obviously owning PostgreSQL is impossible, that cat's out of the bag. It's like owning Linux. But you can build a company that provides the best PostgreSQL service in the cloud. That's where the most valuable workloads are living anyway. And so, despite the fact that it was not very clear why Neon would win over RDS or AWS Aurora or Azure SQL or Cloud SQL at Google, we felt that the opportunity was large enough to go and try.

We started with building a team, because the company you build is the team you build, and so I started calling all the PostgreSQL committers. I had a bit of a brand already, having built a database company which is a unicorn company, and I was pitching that. As I was pitching this, I was refining what we were actually going to do because people asked questions.

Very quickly we navigated to at least one important idea: if you look around and see who is making money in the PostgreSQL ecosystem, mostly that's Amazon. That's probably number one, and Amazon is making money on two products. One is RDS and the other is AWS Aurora. Because I'm a database nerd, I intimately understood the architecture of AWS Aurora.

In the database world everybody knows that architecture; there's a paper, I believe from 2017, presented at a lot of the major database conferences. So it wasn't a secret that this was at least a very exciting architecture, and a bunch of time had passed since 2017, so people were also clear that, hey, you can improve this architecture: call Aurora's architecture V1, then there was a V2 and potentially a V3.

We actually managed to come up with a V3 of that architecture.

In databases and in systems in general, architecture is super valuable because if you set the architecture, everything layers on top and all the concepts you introduce in the architecture, they stay with you forever so you've got to get this one right. We had a lot of conviction that if you want to build a modern best in class cloud service, this will be the architecture and the key in that architecture was storage.

At SQL Server I worked on compute, at Single Store we built the whole thing, storage, compute, cloud service. Here the kernel of this is storage, but database storage, not file system storage. So that was the first mega insight for us that the key architectural advantage will be that storage, and we integrated that storage with S3 for cost reasons. Then we made a decision to open source that storage.

We said, "If we're going to be the dominant, default cloud service for PostgreSQL, the underlying technology needs to basically be endorsed and blessed by PostgreSQL itself. It would be impossible, especially for a startup, Amazon maybe have a different type situation because it's a gigantic company. But for a startup, we got to be open source and we claimed that real estate of open source alternative to Aurora.

We even positioned our storage initially in the GitHub repo as an alternative to AWS Aurora. We also decided to build it in Rust as opposed to C++, even though I grew up with C++, just because it's a modern systems language and it allows us to attract modern systems developers. We actually arrived at that relatively quickly, through debate, and then we started writing code.

In parallel I'd been calling PostgreSQL committers, and got blessed with two co-founders, Stas and Heikki, who are absolutely phenomenal. Heikki is a PostgreSQL committer, Stas is a PostgreSQL contributor. The difference is Heikki can author commits, and Stas needs to find a committer before his commits go in.

We built our initial team around PostgreSQL hackers and systems engineers, and then we made another breakthrough discovery which became our positioning, and that is that serverless is a big deal. We realized that serverless is a big deal and we also realized that it's hella hard to build. Specifically, I'm very proud of the hella part because I'm an immigrant and I learned it from South Park.

Brian: Well, you're in the Bay Area, so yeah, you're catching up quickly.

Nikita: Yeah, catching up quickly. It's only been 10 years for me in the United States. So we discovered that serverless is important, we discovered it because early on I called all the ecosystem, future ecosystem partners, including Guillermo at Vercel. I was introduced to him by our common friend, and that's when Guillermo told me that serverless is important. I also knew that serverless really took off at Aurora. Then the third thing is nobody else has serverless.

There are literally two serverless offerings for PostgreSQL. One is Aurora and the other one is Neon. Nobody else can do this. Now that we've sunk multiple person-years of R&D into this, it's pretty clear to me why.

Because it truly is quite hard, and serverless for something that was not designed to be serverless initially requires a particular technical approach which in our case is basically orchestrating micro VMs. So we run PostgreSQL in micro VMs, we're inflating and deflating them and we're live migrating them between hosts and live migration doesn't even break the TCP connection to the VM.

That kind of scheduling machinery, operating on a fleet of now over 100,000 databases, is like a piece of art, and there aren't papers you can read that describe how to do that. It's a very hairy project that lives in the world between systems work and just plumbing things together, leveraging micro VM orchestration in Kubernetes. But long story short, we discovered that serverless is mega important.

In general, the fewer knobs your system has, the easier it is to scale and the easier it is to adopt. So the vision is that every time you spin up a PostgreSQL database on Neon, all it is is a URL. Once you have the URL, which is a connection string, then your storage and your compute are scaled for you and you are only charged for what you use: the amount of storage you consume, and the amount of compute you consume on a per-second basis.
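That "the database is just a URL" idea means everything a client needs is packed into one standard PostgreSQL connection string. As a rough sketch (the credentials and hostname below are made up for illustration, not a real Neon endpoint), Python's standard library can pull one apart:

```python
from urllib.parse import urlparse

# Hypothetical Neon-style connection string; the host and credentials are invented.
conn_str = "postgres://alice:s3cret@ep-example-123456.us-east-2.aws.neon.tech/neondb"

url = urlparse(conn_str)
print(url.scheme)            # protocol: postgres
print(url.username)          # role to authenticate as
print(url.hostname)          # the endpoint Neon routes to a compute VM
print(url.path.lstrip("/"))  # database name
```

Any PostgreSQL client library accepts a string like this directly; the point is that there is no separate sizing or capacity configuration to pass alongside it.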

Brian: Yeah. You mentioned Guillermo in passing and Vercel, that's what sparked me to reach out to you, which is the launch of the, what is it? Vercel PostgreSQL, which you're one of the launch partners with them. Can you explain how you're able to launch alongside of Vercel and how that integration works?

Nikita: Yeah. I think serverless is the keys to the kingdom here. I'll speak about two things, and for one of them we are actually getting a good amount of criticism, which is scaling to zero; I'll explain what the roadmap on that is. Basically, in order to integrate Neon into any other platform, all you need to do is create a Neon account and then use an API to spin up and spin down databases.

Now, in a world in which we're not serverless, we would have to expose a parameter which is the size of the database, or probably two parameters, the size of compute and the size of storage. As we expose those two parameters, every launch partner would have to introduce their own menu of sizes, call it T-shirt sizes, for your environments. Introducing those sizes would inevitably lock us into being very careful about how we change them, because let's say you have 20 partners, and every partner has a bunch of sizes, and now you're like, "Oh, those sizes are wrong."

So now instead of small, medium and large, I want to change that to medium, large and extra large, whatever that means. Now you go to 20 partners and you have a conversation, and then they have their customers and they have to explain this to their customers, to their users. So that makes it very, very tricky to operate. The other thing is Vercel has edge functions, serverless functions, and they kind of require the rest of the infrastructure to be serverless as well.

Within serverless there is a unique feature of Neon, which is scaling to zero. What that means from the technology standpoint is we actually shut down the VM. So we separate storage; storage is one global, low latency storage layer that lives in the cloud. Then every time you go and create a database, we launch compute in a VM of some size X, but we know that size and you don't, and attach it to that storage.

As you drive the workload, we change that to Y or Z or whatever, we adjust the size. Now, if you go away and you don't touch the database for five minutes, we shut down the VM. Now, the question is what happens when you come back and start querying the database? Well, we bring the compute back up but that takes time, and generally all serverless architectures fall into two categories.

The ones that are truly serverless shut it down to zero, but then the trade off is you have a cold start problem. Then the rest of the systems are not truly serverless so they slow burn compute even if you're not using it, so they actually burn money, and then they have a choice of passing it down to the user or not. The difference in cost that we're seeing by doing some modeling is five times.

On our 100,000 database fleet, if we don't shut down to zero versus we do shut down to zero, the size of the AWS bill is a 5X difference. Now, our cold starts are currently about three to five seconds, and we're getting a lot of heat from the competition. They're tweeting about us saying, "Cold starts, they shut down to zero. Cold starts are terrible," and stuff like that.

Just next week we're going to roll out an update, and our cold starts will become around a second, and from there we'll drive them down to half a second for sure, with some probability to 250 milliseconds, with some lower probability to 150 milliseconds. At that point it becomes unnoticeable, and so hopefully we'll be able to live in the best of both worlds, where we drive the economics, we pass those economics to the users, and we have pretty much invisible cold starts, getting closer to that vision that the database is just a URL.

It's always available for you and it scales up and down, and it costs you zero if you're not using it. But to answer your question, yes. It's that serverless promise that got us to engage with Vercel and engage with Guillermo, and Guillermo is also an angel investor into the company and we're working very, very hard to become the best possible partner for Vercel.

Brian: Excellent. Yeah. I mean, it's very opportune as well, because I don't think serverless has even hit its inflection point. We've had a lot of people trying the serverless realm with tools and strategy, and I think it's extremely ambitious to get PostgreSQL on serverless. I agree, 100%. This is what made me really interested in the product offering.

I did want to touch base on the open source angle too, because you had mentioned it, but what struck me is that you went to committers in PostgreSQL and that's how you fielded your initial team. But what's been the impact of being open source first with this sort of approach?

Nikita: Yeah. We had a certain amount of debate, internally not that much, but people ask all the time, "What's your license?" Well, it's Apache. "Well, what's your protection from somebody else, big scary Amazon, or replace Amazon with any other hyperscaler, taking your IP and rolling their own offering on your IP and the code that you spent a lot of time building?"

The best answer came when, because intuitively I felt that's not a problem and we should just be Apache, I was talking to the CEO of Nutanix. Well, he's the CEO of DevRev right now, but he's the founder and ex-CEO of Nutanix, and he looked me in the eye and said, "Nikita, the license is a rich man's question, and you're not. So make it Apache and forget about it. Just do excellent work, and if you do excellent work and the traffic does come and the threat you are worried about does materialize, you can make that decision down the line. If you need to, you can relicense."

Which I don't want to do, but it's still an option. You can also advance your cloud service ahead of where the open source bits are, which I think is the Databricks strategy, or you can have additional services that are very useful to your users in addition to the crown jewel of the technology, the core of the technology, which is storage. That resonated super well, and so we just parked it.

But the more I think about it, the more I think it's a feature, not a bug, to not only be open source but to be open source with the most open license, which is Apache. Even more so, we are considering internally not just open sourcing the storage, but open sourcing the control plane, which is absolutely unheard of in the world of database as a service.

So if you look at any of our competitors, you will find that occasionally they are built on top of an open source backbone, but they never open source their control plane, because a lot of the IP is actually in the orchestration of your cloud service. But I think we might open source the control plane as well.

Brian: Okay, and can you explain? So the orchestration is the control plane for someone who's not building database tools?

Nikita: Yeah. So that's actually Amazon's terminology, data plane, control plane. Data plane is your storage and data plane in a way is the database engine itself which is PostgreSQL. Then control plane is when you go, when you log into the dashboard, into the console. Well, there's some amount of code for frontend and backend of that console, think about AWS dashboard. But then as you interact with the systems, you basically give the system tasks of, "Okay, go provision me a database here."

And the system needs to go and spin this database up. In the serverless world, all the decisions of, I want to increase the size of the database, decrease the size of the database, increase the size of compute, decrease the size of compute, move things around; that scheduler, a distributed scheduler that runs on top of Kubernetes, all of that is control plane.

If you open source the control plane, then theoretically, with enough technical chops, you will be able to take an arbitrary Kubernetes cluster, deploy all the Neon machinery into that cluster, and run the service yourself. That's what we'll be exposing ourselves to, but I think in return we're actually gaining stuff. We're gaining the trust of our users, trust that if something happens to Neon, the bits are out there, and we potentially will sign up on-prem partners down the line, companies like VMware. We already signed up Percona, and we're not closing the door to partners like this.

Brian: Cool. Yeah, this is something that hits close to home; the way we structured the product I work on day to day raised the same question. We did go Apache as well for our license, and then kept moving forward. I didn't get the same advice, but I watched a bunch of other projects take very similar approaches, so I was like, let's just not overthink it. Yeah. But the orchestration layer, the control plane, I'm enticed.

I'm very excited to try Neon, at first starting with some side projects. I definitely want to start shipping some stuff and get comfortable with it, because the underlying technology being PostgreSQL makes it very approachable to me, and I imagine it's just as approachable to everyone listening who's also familiar with PostgreSQL. The question to you is, how do folks get started? What's your recommendation?

Nikita: Look, it's trivial to get started. You go there, you authenticate with GitHub, and I assume you have a GitHub account. From there you push a button that says Create Project, you name the project, press okay, and three seconds later you have PostgreSQL and you have a connection string. So now just use PostgreSQL through that connection string and build your app.

There are a couple of things that are new in addition to that. Basically, the first five minute experience is what I just described. You push a button, you can do it on this call, and you will have PostgreSQL; hopefully it will take you less than a minute to get onboarded to Neon. There are a couple more things which are different. In addition to the PostgreSQL connection string that you can consume from, I don't know, Python or Ruby on Rails over TCP, we offer a serverless driver that you can consume over a REST API.

So if you go and Google "Neon serverless driver" it will have instructions, so you can query Neon from your browser, from a Vercel edge function, or from a Vercel serverless function. Then the other thing that's different is you can create branches. So if you need a dev environment for your production environment, at any point in time you can go and say, "Hey, create a branch."

And that will create a full copy of your data and give you a separate URL, separate end point for you to query the database and that's how we create an isolated environment for your project.
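Conceptually, a branch behaves like a cheap copy: the new endpoint's data starts as a snapshot of the parent, and writes diverge from there. Here is a toy copy-on-write model of that idea; nothing in it is Neon's actual implementation, just a sketch of the behavior:

```python
class ToyBranch:
    """Toy copy-on-write branch: reads fall through to the parent
    until a key is written locally. Not Neon's real mechanism."""
    def __init__(self, parent=None):
        self.parent = parent
        self.data = {}

    def get(self, key):
        if key in self.data:
            return self.data[key]
        return self.parent.get(key) if self.parent else None

    def set(self, key, value):
        self.data[key] = value  # writes never touch the parent

main = ToyBranch()
main.set("user:1", "alice")

dev = ToyBranch(parent=main)   # "create branch": instant, no data copied
dev.set("user:1", "alice-test")

print(main.get("user:1"))  # alice, production is untouched
print(dev.get("user:1"))   # alice-test, the dev branch sees its own write
```

The design point is that creating the branch costs almost nothing up front; divergence is paid for only as the branch actually writes.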

Brian: Okay. Cool, yeah. Thanks for sharing how folks can get started. Sounds pretty straightforward. With that said, I'd love to get us over to picks. These are jam picks, things that keep us going, that we're jamming on. Could be tech related, that keeps you up at night. Actually, hopefully the good kind of keep you up at night, but also music and food related as well. If you don't mind, I'll go first.

I mentioned this in our prep call, I've been messing around with LangChain. I actually sat in a workshop; it was the On Deck fellows, a bunch of founders who have an organization, and it was one of their workshops, specifically on how to get started with LangChain. Once I saw someone implement it I was like, "Oh, I get it. I totally understand what we're doing here."

I know Neon is going to be shipping something pretty soon, but this world of doing semantic search and building your own ChatGPT, not even training the model but keeping track of searches, and the results of searches built on top of better searches. I'm completely blown away, and I've got so many nifty ideas on what I want to do with it, so I won't share what my cool side project will be.

But I've got a really cool side project around the data we have at Open Sauce, contributor data, and how to leverage LangChain. I haven't seen anybody else pitch the same idea, so I'm actually looking forward to ship this in the next week or so.

Nikita: Very cool.

Brian: Yeah. If we're lucky enough it'll be on Hacker News, if not you'll have to come to Reddit and see where the post got missed. But yeah, LangChain. Check it out. I'll pause and ask if you have any picks for us.

Nikita: Let's see. The one space I've been looking at lately is auth, authentication. I'm a huge fan of Clerk, so do invite them to your podcast. I think they're doing very important work that is delivering on the promise of Auth0, but delivering it for the JavaScript and React world much better than Auth0 did.

So that would probably be the first thing that comes to mind. There are a couple more smaller things, one of which is dark mode; every developer tool should have a dark mode, and Neon is going to have a dark mode. I think that would be the highlight.

Brian: Wow. I'm envious. We have actually not shipped our dark mode yet. We shipped a feature that actually is presented in dark mode that we'll ship next week, which is a different way of how we present our data. But we haven't done dark mode because it's one of those things that it's either do it at the beginning or when you have the time and we just never had the time.

Nikita: Yeah. We are in that mode now where we're catching up on all the corners, what I call the dark corners, but they're actually light corners that we need to go and paint into dark mode.

Brian: Yeah. You've got a very exciting launch happening right after the recording of this podcast, for folks to take a look at. Why don't you explain that?

Nikita: Yeah. So we have a launch coming next week. We're very excited to introduce a few things to Neon. Now, Neon is a two year old company, but we launched it last year at about the same time, so the company is one year old on the market.

Brian: Actually, I didn't even know that. I was assuming this had been around since like 2020 or something.

Nikita: No.

Brian: Absolutely amazing.

Nikita: In 2020 it wasn't even an idea. We started in '21; in March '21 it was three guys and a dog. Actually, there was no dog, it was just three guys and a slide deck. We started working on that, and obviously you need to build a service and you need to build the storage, and storage is a pretty big project. Last year in July, at about this same time, we were behind a wait list.

We had a few thousand people on the wait list, and we started to onboard them hundred by hundred, learning how immature the system was, and we just kept maturing it. We officially invited everybody on December 6th last year. To give you some of our stats: on January 1st we had just under 9,000 databases on the platform, and as of today, over 100,000 databases on the platform.

So since January we drove the number of databases very, very high. We launch every quarter or so, and this time around there were a number of things that I'm super excited about. I'm actually looking at the plan; a lot of them are just reliability and performance. We're going to improve our cold starts, as we talked about. I say cold starts way too much; I don't even know which I say more often, AI or cold starts.

Brian: Well, and serverless, if anybody touches serverless you know cold start is something you can probably talk about a lot.

Nikita: Correct, yeah. So we're going to improve our cold starts from three to five seconds, specifically the P90s, to about a second. That's just the first step towards improving the cold starts. I think cold starts right now are just annoying. Once they're in the second range, they are still annoying but a lot less. Then once they're in the low hundred-millisecond range, they hopefully will become unnoticeable.
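P90 here means the 90th-percentile latency: the value that 90% of cold starts come in under, which is a more honest target than the average because it captures the slow tail. A minimal way to compute it from a list of samples (the numbers below are invented for illustration):

```python
def percentile(samples, p):
    """Return the p-th percentile (0-100) using the nearest-rank method."""
    ordered = sorted(samples)
    # Nearest-rank: ceil(p/100 * n) gives the 1-based rank of the percentile.
    rank = max(1, -(-len(ordered) * p // 100))  # ceiling division
    return ordered[rank - 1]

# Hypothetical cold-start samples in seconds.
cold_starts = [0.9, 1.1, 3.2, 0.8, 1.0, 4.7, 1.2, 0.9, 1.3, 5.0]
print(percentile(cold_starts, 90))  # 4.7
print(percentile(cold_starts, 50))  # 1.1
```

With samples like these, the median already looks fine while the P90 is still painful, which is why the P90 is the number worth driving down.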

The first thing that we're going to do is we're going to have improvements on the cold starts. That's kind of unique to Neon. Classically, databases offer you a TCP connection and the TCP connection is how you interact with the database. You go authenticate and then you establish a TCP connection to the database, start sending queries.

Now, this works less well in the web scale world, in the world of, okay, well, I have my app written mostly in JavaScript and it has frontend, backend, database. But that backend starts to disappear, if you know what I mean. We're starting to live increasingly in the world of declarative backends, and a lot of code is written in the browser.

Specifically, this declarative backend doesn't necessarily run in a VM; it can run on serverless infrastructure, Cloudflare edge functions, Vercel edge functions, Vercel serverless functions. It's hard for that backend to maintain a TCP connection anywhere. That's why we introduced this thing we call the Neon serverless driver, and that serverless driver gives you single...

Well, we have that today, but starting next week that serverless driver will give you single digit millisecond latency between an edge location and Neon, at least in the same region. So that's another thing we'll be launching next week. We are also launching our PG Embedding extension. So if you're familiar with what PG Vector is, it's a plugin to PostgreSQL that allows you to do vector search.

People use it kind of like an alternative to dedicated vector databases, to build memory for their large language models. Now, if you actually zoom in and look at the implementation of PG Vector, it uses an indexing and search algorithm called IVFFlat. All these algorithms run a process called ANN, Approximate Nearest Neighbor: basically, given a vector, give me the top however-many vectors that are closest to that input vector.

The reason you want to do that is, well, because you want to do semantic search. So you get embeddings, you get embeddings out of OpenAI; OpenAI has a very cheap API for embeddings. Then you index your content on a per-document or sometimes per-paragraph, per-snippet basis. You create those embeddings, put them in a vector database, and with PG Vector, PostgreSQL becomes that vector database that can store vectors and run approximate nearest neighbor.
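The workflow he describes, embed snippets, store the vectors, then find the ones closest to a query vector, can be sketched with a brute-force (exact, not approximate) nearest-neighbor search. The toy 3-dimensional vectors below are invented; real embeddings from an embedding API have hundreds of dimensions:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" for three snippets; real ones come from an embedding API.
store = {
    "snippet_a": [1.0, 0.0, 0.0],
    "snippet_b": [0.9, 0.1, 0.0],
    "snippet_c": [0.0, 1.0, 0.0],
}

def top_k(query, k=2):
    """Exact nearest neighbors: score every stored vector against the query."""
    ranked = sorted(store, key=lambda name: cosine_similarity(store[name], query),
                    reverse=True)
    return ranked[:k]

print(top_k([1.0, 0.05, 0.0]))  # ['snippet_a', 'snippet_b']
```

This exact scan is O(n) per query, which is why indexes like IVFFlat and HNSW exist: they trade a little accuracy for sublinear search.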

In PG Vector you're using the IVFFlat algorithm. Now, the complexity of that algorithm is square root of n, and there are other algorithms, specifically HNSW, which is an alternative; the complexity of HNSW is log(n). So as you start adding more and more vectors, log(n) becomes much smaller than square root of n, and modern ANN implementations from Google and Facebook, various libraries, tend to use HNSW and not IVFFlat. We're going to launch a plugin called PG Embedding.
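As a back-of-the-envelope check on those complexities, here is the arithmetic at a million vectors. This ignores constant factors, which matter a lot in practice, so it only shows why the gap is plausible, not the exact speedup:

```python
import math

n = 1_000_000  # number of stored vectors

ivfflat_cost = math.sqrt(n)  # ~O(sqrt(n)) candidates scanned by IVFFlat
hnsw_cost = math.log2(n)     # ~O(log n) hops through an HNSW graph

print(round(ivfflat_cost))              # 1000
print(round(hnsw_cost, 1))              # 19.9
print(round(ivfflat_cost / hnsw_cost))  # 50, same order as the quoted 40x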

Brian: A PostgreSQL plugin specifically or this in Neon?

Nikita: PostgreSQL plugin, yeah. It's already out there, it's an open source repo, and of course the license will be either Apache or the PostgreSQL license; basically a super permissive license. It's an alternative to PG Vector. On our basic benchmarks, and of course it's benchmark specific, we put about a million documents into the thing and compared performance, and it gives you about 40 times faster performance on vector lookups, on this Approximate Nearest Neighbor.

The other thing is what they call better recall. Recall is basically how accurate it is, because it's Approximate Nearest Neighbor, not exact Nearest Neighbor; precise nearest neighbor is too expensive because you'd basically need to scan the whole data set. But with an index you can compute Approximate Nearest Neighbor, and ANN algorithms always have this recall parameter, and the higher the recall, the slower they are.

I just want to be precise here with the definition of recall. Recall is a number between zero and 1; when recall is 1, that means the result has to be absolutely precise, 100% precise.

If you request 100% recall, that's what makes algorithms slow. But if your recall is reasonable, they are much faster, and you as a user are okay with certain imperfections and imprecision. Anyway, long story short: better recall, 40 times faster, implements HNSW. Basically we're wrapping LibHNSW into PG Embedding.
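Recall, as defined here, is easy to compute concretely: given the k true nearest neighbors and the k results the approximate index actually returned, it's the fraction the index got right. The document IDs below are made up for illustration:

```python
def recall(true_neighbors, retrieved):
    """Fraction of the true nearest neighbors that the ANN index returned."""
    return len(set(true_neighbors) & set(retrieved)) / len(true_neighbors)

true_top5 = [101, 202, 303, 404, 505]    # exact nearest neighbors
approx_top5 = [101, 202, 303, 404, 999]  # ANN result: 4 of 5 correct

print(recall(true_top5, approx_top5))  # 0.8
```

A recall of 1.0 means the approximate index found every true neighbor, which is exactly the regime where ANN search stops being cheap.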

We'll introduce it to the world and offer it to folks at a hackathon about LangChain, where we've seen the rise of PG Vector. But basically, just to repeat myself real quick: PG Vector implements IVFFlat, which is an algorithm with a complexity of square root of n; we are launching PG Embedding, same idea, does similarity search, open sourced under the Apache license.

But the complexity is log(n), so it's a much more performant algorithm, and if you have a million embeddings, you should expect about a 40X improvement in speed compared to PG Vector.

Brian: Very cool. Well, by the time this podcast comes out this stuff will all be launched, so folks, check it out, check out Neon as well. Nikita, thanks for chatting with me and sharing updates on serverless PostgreSQL and what you guys have accomplished. I'm extremely impressed.

Nikita: Yeah. Excited to be here. We should talk again next year, same time and then you will see how much we will have done.

Brian: For sure. And listeners, keep spreading the jam.