High Leverage
43 MIN

Ep. #6, Async Runtime For Rust with Carl Lerche of Tokio

about the episode

On episode 6 of High Leverage, Joe Ruscio sits down with Carl Lerche, Principal Engineer at AWS and creator of Tokio. Carl shares his journey from Ruby and Rails into Rust, and explains why memory safety, fearless concurrency, and async runtimes matter for modern infrastructure. The conversation dives deep into the origins of Tokio, lessons from building foundational open source software, and how Rust’s guarantees are shaping the future of systems engineering.

Carl Lerche is a Principal Engineer at AWS and the creator of Tokio, the most widely used async runtime in the Rust ecosystem. He has been a long-time open source contributor, previously working on Ruby on Rails, Bundler, and Merb. Carl specializes in high-performance systems, networking, and building durable software infrastructure.

transcript

Joe Ruscio: Welcome back to the show. I'm joined today by a super interesting guest. Carl Lerche is a principal engineer at AWS and the creator and maintainer of Tokio, the widely used asynchronous runtime for Rust that powers a large share of modern high performance network systems.

He's best known in the Rust ecosystem and maybe some others for his open source work on Tokio and related libraries like Mio, Hyper, Tower, and Bytes, which have become foundational building blocks for production Rust services at many, many companies and services you've heard of. Welcome to the show, Carl.

Carl Lerche: Yeah, thanks for having me. Super excited.

Joe: Yeah, yeah, super excited. I assume many of our listeners actually are familiar with your work already, but for some of those who aren't, could you just give a brief background, like how did you get involved in Rust? How did you come to start the Tokio project? What's your origin story?

Carl: Yeah, my origin story. Well, you might have previously heard of me from the Ruby community many, many years ago. That's actually how I got into open source, via the whole Ruby and Ruby on Rails world. If you were building apps back then, you might have heard of Merb. That's actually where I first got involved in open source.

Joe: I was going to say even prior to Rails, right? Merb.

Carl: Yeah. I ended up getting some opportunities to work on Ruby on Rails itself. I was on the core team for a little bit, part of the Rails 2.0 to Rails 3.0 era.

Joe: Right, right.

Carl: And I co-created Bundler, which was a fun time.

Joe: And Bundler is now, since you created it, the standard package management system for Ruby, right?

Carl: Yeah, it is. And that was fun. I don't know if people remember what adding packages to your apps was like back then, but you'd add a dependency, you'd try to get someone else to run it, and it never worked. Right? Like if there was an update, it would automatically get pulled in.

And we ended up adding the whole .lock file concept, which I did not know about before, but it seems to be used now throughout other-- Like JavaScript's got an equivalent kind of .lock file concept. So I don't know what the history is. There were probably other things that had it. But I like to believe maybe that I had a hand in inventing something like that to make people's lives easier.

Joe: Certainly got there from first principles, it sounds like.

Carl: Yeah. And from there, I guess, because now we're talking about Rust and I've been writing Rust for about--

Joe: Yeah. You know, Ruby and Rust, despite the alliteration, in some ways are quite far apart. So how did that transition happen?

Carl: Yeah, about 11 years ago I was working at a company where we were building a product called Skylight, which was kind of a performance monitoring tool for Rails applications. Part of what we were doing there was-- We had kind of an agent in your application, and it's collecting data. And we were trying to collect a good amount of data. It was written in Ruby, but I think it was taking up more than 5% of CPU cycles just collecting that data and sending it. And we were trying to get closer to 1% or below.

So we were looking at a native extension. And personally I had written some C and C++ before, but that's not really my comfort zone. And I was working at a company with Yehuda Katz as well. And Yehuda had a lot of contacts with the Mozilla people; Dave Herman was there. So they were talking, and Dave was at Mozilla at that point, if you know Dave Herman.

Joe: Yeah. Mozilla were really early boosters of Rust, I remember.

Carl: Yeah, like they were investing in Rust as part of Servo. This was a while ago. Servo was a kind of experimental browser that they were exploring, trying to see if they could make browser rendering faster by leveraging concurrency.

And again, C and C++ are not great languages for doing more aggressive concurrency optimizations in a context where security matters.

Right? Because it's hard to do. You do it wrong, you expose memory unsafety, et cetera.

Joe: Yeah, there's direct memory management, so there are buffer overflows and all these kinds of challenges that a closer-to-the-metal programming language has.

Carl: Right. So Rust was a R&D project back then to see can we come up with a language that removes those problems, those security vulnerability vectors, while maintaining the same level of performance as C or C++, so we can build a better browser.

Joe: Yeah, because if I remember correctly, the promise of Rust is that if it compiles, it's memory safe, right?

Carl: Yes. If it compiles and you're not using the unsafe keyword, which you should not be doing for the most part.

Joe: Yeah, Pro Tip users.

Carl: Yeah, yeah, if it compiles, it is memory safe. So you will not have that category of security vulnerabilities, which, you know, I think Google and Microsoft have both done research on their bug histories and security vulnerabilities. I think it's something like 70% of all high severity security vulnerabilities are related to memory unsafety.

So if you use Rust, you don't have that problem. I mean, you could also say if you use Java or like other memory safe languages, you also don't have that problem. But the advantage of Rust is you get the same level of performance as C or C++ because all of those memory safety checks are done at compile time and not at runtime.

Joe: Ah, yes.
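
To make that compile-time guarantee concrete, here is a minimal, hypothetical sketch (not from the episode) of the kind of bug rustc rejects before the program ever runs:

```rust
fn main() {
    let s = String::from("hello");
    let r = &s; // borrow `s`
    drop(s); // compile error: cannot move out of `s` because it is borrowed
    println!("{}", r); // the borrow is still alive here
}
```

A C or C++ equivalent would compile and read freed memory at runtime; Rust turns the use-after-free into a build failure, with no garbage collector involved.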

Carl: Yeah. So Yehuda comes back to me and says, hey, how about this Rust language? It was well before 1.0, but we should probably do it because we don't know how to write safe C++. And I'd like to say I was brilliant and was like, "oh yes, I immediately saw the benefits of this."

But no, what actually happened was a big argument about, no, we should not use this completely crazy, unproven language in our product. I think it was, again, well before 1.0, no production users.

Joe: Yeah. I mean--

One reason why early stage startups are often adopters of cutting edge technology that really actually has no business being anywhere near production is, as an early stage startup, you have nothing to lose. Right? So if you adopt the technology and it enables something that you couldn't do before, then you win and if it doesn't work, then it probably wasn't going to work anyways.

Carl: Yep. So he was saying we should do this. He was like, we don't know how to write safe C++. And of course he was saying "I don't know how." And I, as many engineers do, vastly overestimated my ability. I was like, no, I can totally write C++. Come on, what are you talking about?

Joe: How hard can it be?

Carl: Yeah, how hard can it be? He won that argument and we ended up using Rust and it worked actually surprisingly well. But we definitely had some compiler bugs going through it. I mean, the Rust team was great. They were supporting us throughout that process and I mean it ended up working quite well.

We got the performance we needed. And I think we had extremely few bugs on the Rust side, because clearly there were no memory safety issues, which we almost certainly would have had if I had won the argument and written terrible C++, because I don't know how to do it.

But just having that strong type system ends up giving you a lot of nice properties. If it compiles, there are just fewer bugs that end up slipping through, when you have a strong type system like Rust has with enums.
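
As a small, hypothetical illustration of the enums point (the names here are made up, not from the episode): a `match` over a Rust enum must cover every variant, so a whole class of forgotten-case bugs becomes a compile error rather than a runtime surprise.

```rust
// An enum forces every caller to handle each case; the compiler
// rejects a `match` that misses one.
enum ConnState {
    Connecting,
    Open { bytes_sent: u64 },
    Closed,
}

fn describe(state: &ConnState) -> String {
    match state {
        ConnState::Connecting => "still connecting".to_string(),
        ConnState::Open { bytes_sent } => format!("open, {bytes_sent} bytes sent"),
        ConnState::Closed => "closed".to_string(),
        // Deleting any arm above is a compile error, not a latent bug.
    }
}

fn main() {
    println!("{}", describe(&ConnState::Open { bytes_sent: 42 }));
}
```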

And also Rust has this. They call it "fearless concurrency," because the same system that ensures memory safety also ensures you're data race free. So it guarantees that you can avoid data races in your code.

Joe: So no deadlocks, no data races.

Carl: Deadlocks are not a data race-- I think, if I got my terms right, deadlocks you can still have, but you can't have two threads mutating the same data at the same time. Right? Or those kinds of issues.

Joe: Which is still a very big problem in highly concurrent applications.

Carl: Yeah. So while Java prevents segfaults, it does allow you to have data races where you concurrently have two threads smashing the same set of variables. Now it won't crash, but it will do unexpected things.

Joe: Unpredictable results.

Carl: Yeah.
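
A minimal sketch of what that guarantee looks like in practice (illustrative, not from the episode): sharing a plain mutable counter across threads simply will not compile, so you reach for `Arc<Mutex<T>>`, and exclusive access is enforced by the type system.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Sharing a bare `&mut u64` across threads is a compile error in Rust,
    // so the shared counter goes behind Arc (shared ownership) and
    // Mutex (exclusive access).
    let counter = Arc::new(Mutex::new(0u64));

    let handles: Vec<_> = (0..2)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..100_000 {
                    *counter.lock().unwrap() += 1; // lock enforces one writer at a time
                }
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }
    // Always prints 200000; the type system rules out the lost or torn
    // updates you could get from unsynchronized access in Java.
    println!("{}", *counter.lock().unwrap());
}
```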

Joe: Okay, so now obviously you're a Rust convert. You're like, hey, this works. This kind of feels like a free lunch or a silver bullet almost, which are not supposed to exist. So what leads to the creation of Tokio? I guess first, what is Tokio?

Carl: The way to think about it is, it is, like you said, an async runtime. You're probably familiar with Node.js, where JavaScript is the language; Tokio would be kind of the everything else that Node.js provides, in an async context. So it gives you the async runtime, async networking APIs, that kind of thing.
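
For a concrete feel, here is a minimal sketch of a Tokio program (assuming Tokio 1.x; the echo server itself is a hypothetical example, not something discussed in the episode):

```rust
// Cargo.toml (hypothetical): tokio = { version = "1", features = ["full"] }
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::TcpListener;

#[tokio::main] // starts the Tokio runtime, roughly the Node.js event loop equivalent
async fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:8080").await?;
    loop {
        let (mut socket, _addr) = listener.accept().await?;
        // Each connection becomes a cheap task, not a dedicated OS thread.
        tokio::spawn(async move {
            let mut buf = [0u8; 1024];
            while let Ok(n) = socket.read(&mut buf).await {
                if n == 0 {
                    break; // peer closed the connection
                }
                if socket.write_all(&buf[..n]).await.is_err() {
                    break; // write failed, drop the connection
                }
            }
        });
    }
}
```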

Joe: Yeah. And again, I think many of the listeners will be familiar with async. But in case people aren't, when you say async, what is that? What's the 50,000 foot explanation?

Carl: Yeah, at a high level: in a synchronous context, you make a method call or function call, whatever your language calls these things, and the thread that you're on blocks and waits for the response. So if you're making a network request somewhere--

Joe: And it's usually on some kind of I/O that the thread blocks, right?

Carl: Yeah, it's usually on I/O. I think for the most part that's where you're going to get most of your benefits.

Joe: It's a common case. Right? I know there's others, but yeah, common case is particularly in Internet connected services. Right?

Carl: Yep. If you're building an Internet-connected service, you're making HTTP requests somewhere, and if it's synchronous, you have to block the thread and wait for the response.

Now, historically, when threads first came about, they were somewhat heavy. There was this whole idea, the C10K problem: how do you get 10,000 concurrent connections? Back then, threads were not really able to handle it.

Joe: Right.

Carl: And async was a way to get many concurrent requests on the same thread. These days threads have a lot lower overhead, but still, if you're really pushing high levels of concurrency on one server, it can be useful to not tie up threads.

Joe: Yeah, I mean, I think what you're saying is that while threads are much more performant than they were a decade or two ago, ultimately async I/O is still the most performant pattern. Right? And so at the highest levels of scalability, that difference matters.

Carl: Yeah. And you can do additional things once you go async. For example, something as simple as an HTTP request with a timeout is relatively hard to do if you're using fully synchronous I/O, because you have to register the timeout with the kernel for something like an HTTP-- I mean, it gets complicated, but you need thread interrupts and a bunch of other shenanigans just to get a timeout going. With async I/O, things like that tend to be a lot easier to implement.
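
A sketch of how easy that timeout becomes with Tokio (assuming Tokio 1.x; the two-second deadline and target host are made up for illustration): wrapping any future in a deadline is one line.

```rust
use std::time::Duration;
use tokio::net::TcpStream;
use tokio::time::timeout;

#[tokio::main]
async fn main() {
    // With blocking sockets you'd juggle socket options, signals, or
    // thread interrupts to get the same behavior; here the deadline
    // wraps the connect future directly.
    match timeout(Duration::from_secs(2), TcpStream::connect("example.com:80")).await {
        Ok(Ok(_stream)) => println!("connected"),
        Ok(Err(e)) => println!("connect failed: {e}"),
        Err(_elapsed) => println!("timed out after 2 seconds"),
    }
}
```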

Joe: Okay, and so at the time, did you have a problem you were trying to solve where you realized there wasn't good async I/O? Had you used async I/O before?

Carl: I mean, I've used async I/O throughout my career, whenever you're trying to build some networking service. I've used Netty, I did a lot with Netty. I think even in the Ruby sphere there was EventMachine. If you're building network services, you tend to eventually get to some async I/O kind of-- There's Go as well, I mean obviously Golang and whatever.

Joe: Yeah, even if you're doing systems programming in Linux, there's async I/O in glibc and-- the standard library, rather.

Carl: Yeah. So this whole kind of getting involved like starting to use Rust also happened around the time when my first child was born and I took some amount of time off and I ended up-- not being sleep deprived. You know, in the early days of having your first kid, you're up a lot but you're not sleep deprived yet? So yeah, I was like, all right, I got a whole bunch of brain cycles.

What am I going to do? I'm awake, I'm going to code something. So what do I want to build? How about I try to build a distributed database? How hard can it be?

Joe: Right.

Carl: That was going to be my hobby project. And I was like, all right, what language should I use? You know, I'm kind of familiar with the JVM and JVM-based languages. But at this job I had been managing a Cassandra cluster, and I distinctly remember a lot of unpleasant time spent getting Cassandra to work well, because of the garbage collector.

Joe: Yeah, at Librato we spent a lot of time on Cassandra, and garbage collection pauses are very challenging in highly loaded backend services.

Carl: Yes. So at that point I was like, all right, I don't want to use a JVM-based language. I could use C or C++. Or, now that I've actually dabbled with Rust, let me just try to build it with Rust. But because again it was well before the 1.0 days, there was no ecosystem at all for networking libraries. I think there was only the standard library, which only had your regular blocking sockets. So there was pretty much nothing there. So, yak-shave time.

Joe: I was about to say, this is starting to sound like a yak shave.

Carl: Yeah. And I'm still trying to pop the stack on that yak shave, I guess, 11 years later. But yeah, it was like, all right, well, there's no async I/O ecosystem. Let me look at that. Oh, there's not even anything binding-- Like, there are no bindings to epoll even.

Joe: Right.

Carl: So let's start there. And that was Mio. You mentioned it; that was one of the first bigger projects that I took on in Rust, and it's what I ended up building while I was on paternity leave. Let's start by just binding epoll and coming up with a very basic non-blocking async library. What would it look like?
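
For flavor, here is a minimal readiness loop in the style Mio exposes (written against the modern mio 0.8 API, which postdates what Carl first built; the address and token are arbitrary):

```rust
// Cargo.toml (hypothetical): mio = { version = "0.8", features = ["os-poll", "net"] }
use mio::net::TcpListener;
use mio::{Events, Interest, Poll, Token};
use std::io::ErrorKind;

const SERVER: Token = Token(0);

fn main() -> std::io::Result<()> {
    // Poll wraps epoll on Linux (kqueue on macOS, and so on).
    let mut poll = Poll::new()?;
    let mut events = Events::with_capacity(128);

    let mut listener = TcpListener::bind("127.0.0.1:8080".parse().unwrap())?;
    poll.registry()
        .register(&mut listener, SERVER, Interest::READABLE)?;

    loop {
        // Block until the OS reports readiness, then react to each event.
        poll.poll(&mut events, None)?;
        for event in events.iter() {
            if event.token() == SERVER {
                // Accept until the socket would block; Mio sockets are non-blocking.
                loop {
                    match listener.accept() {
                        Ok((_conn, addr)) => println!("accepted {addr}"),
                        Err(ref e) if e.kind() == ErrorKind::WouldBlock => break,
                        Err(e) => return Err(e),
                    }
                }
            }
        }
    }
}
```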

And it's interesting, because going into it I was still relatively new to, I guess, what you'd call systems-level languages. My initial gut was: let me just build this with closures, because Rust actually does have closures.

But it turns out closures don't work as well for async in a language like Rust, because you have to allocate a lot for those closures. And when you're working at the Rust level, a lot of things, like allocating for a closure, which is transparent in a higher-level language-- you now realize, oh, I'm doing this.

A lot of these things that end up costing you cycles are made a bit more explicit when you use Rust.

Joe: When you say "costs you cycles," you mean when the code is actually running?

Carl: Yeah, yeah. It costs you those CPU cycles.

Joe: Yeah, this higher level programming language construct actually has a real world cost when it runs.

Carl: Yep. So anyway, it started with Mio, and from there, 11 years later, you eventually got the Tokio ecosystem the way it is now.

Joe: Okay, so then came Tokio.

Carl: Yep.

Joe: Let's fast forward to today and the Tokio community. It's much bigger than you now, maybe sleep deprived, maybe not sleep deprived. Where are things with Tokio today?

Carl: 100%. I mean, when I first got involved with Rust, you know, I like to say I was clairvoyant about this: obviously Rust is going to be this game-changing programming language. It enables almost anybody. If you can write code, you can now write this extremely high performance code without the risk that used to come with it.

And that is game changing. But I didn't really see that because I wasn't like exactly clairvoyant. It's what I noticed over time is it enabled me to do a lot more.

But anyway, as the ecosystem started to be built, as more libraries like Tokio came about, it's now actually quite easy to build these high performance networking services with Rust.

It's almost become the default choice that companies make when they need to build something high performance. Right? So now, like you mentioned, I'm at Amazon.

Amazon uses Rust and Tokio heavily in their most performance sensitive services. I know for a fact it's in S3 and like DynamoDB and EC2.

Joe: "Tiny services" like S3.

Carl: Yeah, tiny services. Companies like Cloudflare and Fastly, too. But there are more and more places where it's popping up. There's been a whole bunch of new database products built over the past 10 years, and when I go and peek at the more recent ones, almost all of them are built with Rust, which is not something I expected.

It seems to have gotten to the point where Rust is the default you pick for a new project when performance is a concern.

Joe: Yeah, it's fascinating, because I was still a programmer when-- Golang and Rust both came out around the same time. And there are some similarities, certainly in terms of, you know, they compile to a static binary. Right? So you get a lot more portability.

I think what's interesting, you know, you hit on Cassandra and garbage collection. Rust is really still the only language I'm aware of with any real adoption where you get the guarantees of memory safety, which, you know, Golang effectively gives you too. But they do it with a garbage-collected runtime that manages memory much like Java does.

And so, you know, that is great and fine for many, many kinds of services. But if you're trying to build a peak, high-throughput backend data management service, managing your own memory is the only way to do that. And the only way to manage your own memory and not be, maybe not riddled with, but certainly not eventually hit memory management security CVEs, is through what Rust does.

And so yeah, it's always, always fascinating but not surprising, I think, to see that. Are you aware of any database or any of the up-and-coming projects using Tokio under the hood or?

Carl: A lot of them are. I've not kept extensive lists, but a good amount of them-- When I peek at them, they are, for the networking layer.

Joe: Yeah. How big is the Rust-- I mean, there are some open source projects, like SQLite, where it's still effectively one person. There are some open source projects with massive numbers of contributors and community. Where on that spectrum does the Tokio community fall? You're still the primary maintainer, right?

Carl: Yeah, I am the BDFL, but I've stepped away from most of the day to day. Alice is the main day-to-day person who maintains Tokio. I mean, I watch and I pull in. I get involved for the harder-- not harder, but when you need more opinions on design, I jump in and participate.

But we have a bunch of maintainers now and it's nice to not be the person that's a bus factor of one. If I step away, Tokio and the Tokio project are going to be fine.

And the way it's structured, like you mentioned with Mio, Hyper and all that, the Tokio project is almost like this group of maintainers. Hyper is pretty independent, built and maintained by Sean McArthur. But we all collaborate in the same Discord rooms and whatnot.

And if he steps away, I could jump in on some PR reviews. So it's nice to have a bigger group. It's not tiny; it's not massive like the Rust project itself.

Joe: Yeah. So if I could take just a quick detour. We're recording this right before the holidays, but certainly in 2025, and it will be the case in 2026, whenever you're listening to this: I don't think we're actually allowed to post this if we don't talk about AI. I'm pretty sure I got some new rules last week.

Obviously you started down this path, I think over a decade ago. I know we both probably don't want to count years but you know, let's say about that. And you know this is, like you said, now underpinning some of the largest services on the Internet, you know, in terms of traffic and you know, I/O. I don't even want to know what the total I/O to S3 is. It's big. It's a lot.

But AI the last two, three years now, it's started to very much feel like we're undergoing a pretty big revolution in software engineering. And so where does this all fit in there?

Carl: Yeah. So, interesting lessons for companies, I think. I'm pretty sure Anthropic and OpenAI both use Tokio for a lot of the data shuffling work they have to do, like all these big AI companies need to do.

But I have to say I am pretty bullish on the whole AI code generation thing and I use Claude.

Joe: Okay, you're an adopter.

Carl: Yeah, yeah, yeah, I'm an adopter. Like all these new tools, I like trying out new tools and seeing how they work, and this is definitely one that is embedded in my workflow now, though we all still have to figure out the best ways to apply it.

I mean I never thought I would see something like this in my lifetime and I find it super exciting. I think the code generation and AI thing, I mean who knows how much it's going to improve but I think it's going to be pretty transformational. I don't think it's going to take anyone's jobs. I think it's going to change everyone's job.

Joe: Right. Yeah.

Carl: And figuring out how to use it and increase your productivity with these tools is pretty key.

Joe: Yeah.

Carl: And what I find most exciting, being a Rust developer and using these tools, is-- well, I'm sure anyone who has tried them has noticed that sometimes they go off the rails and generate incorrect code.

Joe: Quite confidently though. Very confident in the incorrect code.

Carl: Yeah.

What I really like with these tools and Rust combined is that Rust's stronger type system, its culture of building misuse-resistant APIs, all that correctness aspect of the Rust language, translates to guardrails when you use these AI tools.

Joe: Yeah. Well, it's a strong eval capability. Right? Because Claude or other agentic tools can run a compile step. And what your compile step produces is much stronger guarantees than, I guess, the compile step in another language, if that language even has a compile step. We're talking about compiled languages; obviously JavaScript does not really have a compiler. So with Claude, is that part of your workflow, where as it's writing Rust code it will compile and make sure it compiles?

Carl: Yeah, I do that. It compiles it and you can see it kind of like trying things that are wrong then iterating on it. I would say I'd be terrified to use Claude with C++, but I'm sure people are doing it.

Joe: I think kind of famously one of the, if there's like not a ding against Rust, but even just kind of a meme about Rust, it's that the error messages or warnings generated by the compiler when it finds a problem can sometimes be-- "inscrutable" maybe is a strong word but a little more challenging than other compilers. For tools like Claude, is it easier for them or how well does--

Carl: I would say that there's two aspects of that. So the compiler error messages that Rust produces, I would say compared to any other language that I've ever used are phenomenal. Like the amount of help and the way they're crafted are really, really good.

What you are probably alluding to is that Rust does have some harder features. All those capabilities that enable really low-level memory work-- having control of your memory while ensuring memory safety at compile time-- all of the language features that enable that can be hard and tricky to use when you start taking them to the extreme. Right?

And I see this a lot with new Rust developers, especially those coming from dynamic languages: you almost don't know how to architect the code, especially when you have more complicated data structures, to be friendly to a manually memory-managed language.

So you have to think about how to structure your data in ways that are friendly to the borrow checker, which is Rust's memory safety checker. And if you know how to do it, and you have 11 years of experience, you don't tend to get into the really complicated issues there, where you basically have data organization problems and the compiler says no, this is incorrect.

Right? But you don't know how to fix it, because you structured it as you would with a garbage collector. With a garbage collector you have dangling pointers everywhere, and the garbage collector deals with them at runtime. With Rust you have to make sure you don't have those dangling pointers, that the lifetime of your data is well understood at compile time.

But if you don't know how to structure your data to be cognizant of that, you can get into problems that are hard to solve because the compiler says, no, this doesn't check. But you don't really know. It's like a data architecture problem.
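
A hypothetical sketch of the kind of restructuring Carl is describing (the tree and its names are invented for illustration): a GC-style design where each child holds a reference back to its parent fights the borrow checker, so a common Rust idiom is to own all nodes in one place and link them by index instead of by pointer.

```rust
// A parent link as a plain reference would entangle lifetimes and get
// rejected in the naive version; indices into a single owned Vec keep
// the data's lifetime trivially clear at compile time.
struct Tree {
    nodes: Vec<Node>,
}

struct Node {
    value: i32,
    parent: Option<usize>, // index into `nodes`, not a pointer
}

impl Tree {
    fn add(&mut self, value: i32, parent: Option<usize>) -> usize {
        self.nodes.push(Node { value, parent });
        self.nodes.len() - 1
    }

    fn parent_value(&self, id: usize) -> Option<i32> {
        self.nodes[id].parent.map(|p| self.nodes[p].value)
    }
}

fn main() {
    let mut tree = Tree { nodes: Vec::new() };
    let root = tree.add(1, None);
    let child = tree.add(2, Some(root));
    assert_eq!(tree.parent_value(child), Some(1));
}
```

Reference counting (`Rc`/`Weak`) is the other common way out; the point is that the data organization, not the algorithm, is what changes.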

Joe: Got it. Got it. Okay, so the memes are overstated. Going back to your first part, you know, it sounds like they'll probably be pretty good at being able to use those compiler errors.

Carl: Yeah, yeah. What I'm finding is, as they are today, when I hit a lifetime issue, which tends to be like that category of problem, 8 out of 10 times, 4 out of 5 times, if I can reduce my fraction, it solves it the right way, which is surprising.

I don't know how it works. It does a great job. Now there's still that kind of like 20% time where it doesn't know what to do. And it can get into the cycle of getting worse.

It's the same kind of cycle that real humans get into where they don't know how to fix it, but they keep hammering on it. Sometimes the AI does that. So it's not perfect. But even if you don't have AI write your code for you, what it is really good at is speeding up learning.

I've seen some new developers learn Rust from nothing but with AI tools, and it goes a lot better than it used to, because when you hit a problem, even if the AI doesn't fix it, you can ask it, "hey, explain this problem to me." And yeah, sometimes it hallucinates.

Joe: Yeah. So if you're a new developer kind of pairing with the AI and the agent, you can kind of accelerate because you can ask it questions and you can ask it to elaborate.

Carl: Yeah.

Joe: Are there any, like, AI specific features being built into Tokio or new, like testing harnesses? Or is that the kind of thing where there are primitives you could put in place to make it easier for AIs to use Tokio? Or is that still early days?

Carl: I mean, I think everyone's figuring this out. I have not seen anything AI-specific.

It seems like basically what is good for a human is also good for AI.

Joe: Yeah, I guess a naive way to ask that question would be like, does Tokio need an MCP server? That seems to be what people are slapping on everything these days. And I think the answer is no.

Carl: I think the answer is no.

And I also think, generally, that we're moving away from MCP servers in a large part. That is the trend, it seems like.

I mean, Claude introduced Skills, for example. MCP servers as they're built now, my understanding, because this is also not my area of expertise, is that they take up a lot of context. Obviously there are going to be ways to reduce the context cost of MCP. You could do that.

But what seems to work really well is just CLI tools that have good help, and some sort of index readme doc that says, hey, when you need to do this, use this CLI tool. That's all it needs to do.

So why build an MCP server then when you could just have a readme and good CLI tools that are also good for humans?

Joe: Yeah, well, I guess it leads to questions like: should open source maintainers, in many cases, with your readmes or whatever-- In the case of Tokio, I'm sure there are always ongoing tweaks, but is it something where you should go back to your readmes and think from first principles, like, oh, if I was writing this doc for an AI, how would I change it? How would I make it better for the AI?

Where most of what's good for humans is good for the AI, but are there-- Yeah, I'm curious. Exercise left for the listener, if there are--

Carl: Yeah and this is for everyone to figure out.

This is groundbreaking technology that changes everything, and we're all still figuring it out. I think what I notice is: the more examples, the better.

Joe: Yep. And you mean actual code examples.

Carl: Yes, code examples. The more code examples, the better the AI tool's able to figure it out.

Joe: Interesting. I guess that makes sense.

Carl: Yeah. Coming back to how you would change documentation: if I had to make a guess, the only thing would just be how you organize it. For a human, who's maybe more able to skim and generalize, I don't think you need as many examples, and they would just take up space. But some sort of examples directory and a table of contents is probably good enough.

Joe: Yep. You know, there's just been so much Tokio code written over the last decade. We're kind of fortunate that there's a lot of examples already in the weights of the foundation models themselves.

Carl: Yeah, Tokio's been around for like 10 years or so, and there's lots of open source code built with it, full applications and examples. And like I said, I'd be shocked if Claude and the OpenAI models are not trained on it directly, because I notice, and I bet lots of people notice this too:

If you're building something with a popular open source library and it's pretty straightforward, like there's convention over configuration, AI works really, really well. And if you're using an unknown tool or a newer tool or newer libraries, things it's not trained on, it definitely has to guess a lot more.

And I would expect that to improve over time especially because I think a big part of what these AI tools are improving on is specifically how to manage that context. So if you have these big directories of examples, my guess is over time Claude is going to learn how to navigate and load context and learn tools on the fly.

Joe: Yeah, I mean, I think that's fascinating and probably exceeds the scope of our conversation today, because I do want to get to one more thing. But on that note, I've been thinking more and more about how, historically, languages like Rust, runtimes like Node.js, or frameworks like Tokio or React or Rails come out and get adopted. It is kind of interesting to think that through for the future we're barreling towards.

I mean, there's obviously always bias towards incumbents, legacy incumbents, but now those legacy incumbents, and Tokio is one, are literally embedded in the weights of the models. So how do you get the mind share? It's going to be a new hurdle for the next generation of framework authors to overcome, and I'm sure we'll figure it all out together. But yeah, I think it's not a problem Tokio has.

Carl: No, not as of now. But now we can't change the API. Haha! No, I feel like that's another problem: if you make a version 2.0 that changes something significantly, it's going to really want to generate what it knows.

Joe: Yeah, yeah, that's a good point.

Carl: So that's an interesting problem for library authors. How do you feed that information to the AI? Like, you know, this is wrong now.

Joe: Yeah, so coming back from the machines, and I'm sure we should have you on a future episode once you've figured all this out, to the humans, there's big news for humans in Tokio coming up, right?

Carl: Yeah. So next year, at the end of April 2026, we're having our very first TokioConf. It's going to be in Portland, Oregon. It's going to be a two day conference. You can go to Tokioconf.com to get more details. I mean, there are already a whole bunch of good Rust conferences out there.

But I thought, given the kind of growth that we've seen with Rust, especially in that infrastructure space, it'd be really important to have somewhere we can get together and really talk about: how do you build these high performance networking systems with Tokio? Where are we going to go from here?

Because I'm excited about this AI thing. I think it's going to be a game changer for Rust and Tokio. I think it's going to accelerate the growth of Rust and of building networking services. Networking services are not going anywhere. We're getting more and more connected. We're going to have more and more people coming in, trying to figure it out. Let's have a place where we can share lessons learned.

Like how do you monitor a Rust app? How do you debug a Rust app? Like an Async Rust app. Those are all tricky things. How do you introduce Rust to your organization? Like how do you introduce Tokio to your organization?

Historically, Rust is harder to learn. Like how do you approach that if you're introducing it? Like if you listen to me, you might think, you know, you should use Rust for everything. And maybe I do believe that.

But pragmatically, you know, what are good times to use it? When should we pick other things? So I think it's a good time to have that space, to have those conversations. You know, it's used heavily at all these big companies. Let's come together and figure these things out.

Joe: Yeah, no, it sounds amazing. And, you know, one of the things I think is interesting about this moment we're in with AI is these moments when the frontier is moving so quickly. It really, at least in my career, reminds me very much of the early days of cloud, where the whole underpinning is new. Everything's moving so quickly.

There are so many different ways things can be taken, and so much opportunity. It's really when getting together in person, face to face, and having those really high bandwidth interactions matters. And yeah, I think it's a great time to be putting the-- I mean, this is the very first TokioConf, right?

Carl: Very first one. Yeah, it's the first time, and I'm heavily organizing it. I got help from Tiffany, who's actually been doing most of the heavy lifting. So I'm really just trying to figure out the program, which is easy enough, but I've seen conferences organized.

I've been in the orbit of conferences like EmberConf and the early RustConf organization. So I've seen how it's done, but taking the full responsibility the first time, I'm not gonna lie, it's a little scary. You've got all these people that are gonna hope to show up and have something to show up to. You know, what if the hotel cancels the contract last minute? Anyway. Stressful.

Joe: Yeah. Okay. Well, yeah, beautiful Portland, Oregon.

Carl: Yeah. So it's late April. It's either gonna be great weather or rainy weather, which is the same.

Joe: Yeah. But you know, Portland's a town built for rainy weather. So it'll be fantastic either way. But how large is that? You know, it'll be two days. But how many people do you think will come?

Carl: Yep. Two days? I mean, you know, it's the first conference. Who knows? We're hoping to have 300, 350 people.

Joe: What's your ideal size?

Carl: Ideally, I think 350.

Joe: Okay.

Carl: It's our first one. For a first conference, I think that's a good size to be. It can still be intimate. You can still have a lot of the contributors and maintainers of all these different libraries come together and talk and interact without it being too big.

Joe: You still have a great hallway track at that size.

Carl: The hallway track is really where it's at, though. At time of recording, I'm just looking at the proposals, so by the time this goes out, the program should be out. But we got way more good proposals than I was expecting. It's going to be hard to trim it down.

Joe: Yeah, I actually was going to ask, is the CFP already closed?

Carl: Yeah, it is closed now.

Joe: Okay. But you've got a stack of great talks to look at.

Carl: Yeah, the talks are really good.

Joe: All right, amazing. So the CFP is already a success, because sometimes with a first conference, that's actually an open question.

Carl: Yes. It is probably easier to organize a conference when you have an established community. We're not starting from a void. And I think there are people that want it; people want the space to talk about these topics.

Joe: So the last thing I'd ask: often when you have an optimally sized conference like this for an open source project, a large percentage of the maintainers will be at the conference. Are you expecting that?

I know it's hard for different communities, and sometimes travel distances are too long, but do you think a lot of the other maintainers will be there?

Carl: Yeah, we're going to get a good amount of maintainers, especially the US based ones. But I know some are coming internationally as well.

Joe: Great. Yeah, I mean, at every conference for practitioners there's always the sharing of practitioner knowledge, interacting in the hallway track. But with open source there's often this additional benefit that you can actually co-mingle with the people steering the project and share your viewpoints, and that's really valuable if you're a practitioner whose company is betting big on the technology.

Are you-- I assume you're maybe looking for commercial sponsors who are looking to spend time with Rust practitioners?

Carl: Oh yeah, always. If you want to sponsor, if you're out there and you're like, wow, I want to reach out to like a group of developers building networking services, especially with Rust, or just in general, I think we're going to have a good group. Feel free to reach out. I think the sponsorship info is on Tokioconf.com so there's always opportunity there.

Joe: Yep. And we'll put all this stuff in the show notes too.

Carl: And the goal is just, I'm not doing this to become rich, you know, just make enough money to do it again next year.

Joe: Okay, Carl. Well, this sounds fun. I'm looking forward to being there. Thanks so much for coming by today and taking the time to share the history of the Tokio project and some thoughts on where maybe it's going. Last thing, if there's any contact information, how do people reach out to you?

Carl: Oh yeah. Best way is email; I'm old fashioned now. I'm not on the social medias anymore. Or you can find me on the Tokio Discord, that's the easiest way. We have our Discord channel.

Joe: That sounds like the best bet.

Carl: Best bet. That's where I'm at. So thanks for having me. This was fun.

Joe: Yeah, it was great. We'll do it again.

Carl: Yep. Cool.