Open Source Ready
54 MIN

Ep. #16, Building Tools That Spark Joy with Mitchell Hashimoto

about the episode

On episode 16 of Open Source Ready, Brian and John sit down with Mitchell Hashimoto, co-founder of HashiCorp, to discuss his journey after leaving the company and his latest passion project, the open source terminal emulator Ghostty. Mitchell shares the accidental origins of Ghostty, his pragmatic approach to technology, and his thoughts on the current state of open source business models. Lastly, they explore the complexities of AI development and the trade-offs of foundation governance.

Mitchell Hashimoto is a software engineer and the co-founder of HashiCorp, where he helped create industry-standard tools like Terraform, Vagrant, and Vault. Today, he’s focused on building Ghostty, a modern open source terminal emulator written in Zig, and exploring new models for sustainable open source development. He’s a passionate advocate for developer experience and pragmatic tooling.

transcript

John McBride: Welcome back everybody to another episode of Open Source Ready.

I'm here again with Brian. How are you doing?

Brian Douglas: I'm doing great. We've got some sun today in Oakland, so I'm not wearing a sweater inside, and I'm going to try to go touch grass later.

John: Yeah, that's good. We're here today for a very, very exciting episode.

We have Mitchell Hashimoto, the legend himself. How are you doing?

Mitchell Hashimoto: I'm good, I'm good. I think that's an overstatement, but I'm good, thanks.

John: Mitchell, why don't you give us a little introduction to who you are, what you've been working on?

Mitchell: Sure, so I'm a developer by background.

I'm sort of best known for founding a company called HashiCorp, which created Terraform, Vault, Vagrant, sort of a slew of other tools in the DevOps cloud space.

But besides that, I've always sort of done various types of dev work. I did a stint of macOS native development and got paid for it.

I was a full-time JavaScript person for a few years, Flash back in the day, and then nowadays, I left HashiCorp about, when we're recording this, almost two years ago, and I've been sort of working part-time since then, being more of a dad, less of an employee.

And my main focus in my work has been an open source terminal emulator called Ghostty.

It's been a work of passion, and I think it's pretty cool and it's doing pretty well, and that's sort of where I'm at right now.

John: Yeah, you've sort of taken the terminal, you know, UI/UX space by storm.

I mean, in my eyes, it almost seems ubiquitous for, you know, the terminal of choice, not only on Mac, but now it seems almost on Linux as well.

What was that journey like? Like how did you, you know, wake up one day and say, hey, I'm going to build Ghostty, in Zig no less? Like what was that experience like?

Mitchell: Yeah, I mean it was mostly accidental.

You know, after HashiCorp, I wanted to work on something, honestly, the complete opposite. I literally looked at different dimensions of what I did for 12 to 15 years in the DevOps space and said, what's the opposite of that?

And so I was like, okay, I did a bunch of server-side stuff, let's do something desktop. I did sort of higher-level stuff, not as high level as, say, front end, but higher level, and I was like, I want to get back to counting CPU cycles.

I never touched the GPU while I was at HashiCorp, so I was like, let's do GPU programming. And I sort of had all these properties and I was like, what can I work on that fits these?

So I actually like had the technologies I wanted to use first, which isn't usually the best way to make a product. But I had that first and I was like, what could I build?

And I was just thinking that all the software that I helped create and work on at HashiCorp was very like CLI focused, and I had a fairly--

I felt comfortable with the terminal ecosystem, but I realized that I didn't really know the details of how those things work. And so I was like a terminal emulator, let's do that, and it was just an educational project.

And as I was working on it, I sort of felt like there was sort of a trade-off choice that would lead to a terminal emulator that didn't really exist and that I wanted, and so that sort of organically led to Ghostty.

But definitely I didn't like go into it thinking like, oh, terminals need to be worked on. I thought they were done, along with I think a lot of people.

John: Yeah, it was a pleasant surprise for me. Having gone from iTerm2 to, I don't know, Alacritty at some point, and tried Kitty, Ghostty is the first one that feels to me like, oh, this is truly a macOS-native experience, and then even on Linux having that kind of cross-platform native experience that sort of breaks down on some of the other terminals.

So my personal thanks to you.

Mitchell: Thanks, thanks, that's what I love to hear. A lot of people say, "Oh, this is a Mac-only app," and I have to correct them and say no, but that's a feeling of success to me, because it means we did the Mac app so well that people think it was only built for Mac.

And the history is actually funny because the Mac app didn't come till a year after we started. It was Linux first.

And then the reverse also happens, which is that it looks so much like GNOME Console or something that a lot of people feel right at home, but then they're surprised to learn it's also a Mac app.

John: Yeah, that's great. I have to ask you also just on like, you know, your workflows and like developer tooling that you've been building.

I've taken great inspiration from your Nix configs recently. One of my friends, I asked him, I was like, "How the heck do you even get into Nix?"

I recently started a new job where they're using a bunch of Nix machines for like developer machines and reproducibility.

How did you get into that ecosystem of like, you know, deep Nix Linux configs and stuff?

Mitchell: I mean, I think it fits my personality really well, just looking at what I worked on with Terraform and stuff. I wanted a reliable, consistent "as code" way to rebuild my desktops initially, and the focus of my Nix usage is really still desktops, though it's sort of bled out to other stuff since then.

But that's what I was looking for, like it's the age old classic problem of I just got a brand new laptop, and I think that's literally what motivated it.

I was like, I had an old MacBook Pro and I had a new MacBook Pro, and I was like looking at 'em and I was like, how do I make this better? Like, how do I make this, I don't want to spend the next few hours setting it up, so how do I make this better?

And so I put in the work, did the Nix stuff, and now it manages my Mac, it manages Linux machines. It does CI for me, like it's grown a lot since then.

But I know that a lot of people get really tired of sort of the Nix like hoo-rah-rah that happens everywhere.

John: Yeah.

Mitchell: I try to stay relatively quiet about it.

When someone's excited, I get excited and talk about it, but otherwise I try not to talk too much about it because I'm not trying to be like a preacher about it.

But it is still, I think, one of the most daily impactful pieces of technology I've adopted, though I have no illusions about it: it's a very difficult piece of technology to adopt.

John: Yeah. Yeah, plus one to that. Brian, have you gotten a chance to try Nix yet?

Brian: No, no, and we sat with the Flox folks a couple of podcasts ago, and I come from way more the JavaScript space, the Ruby on Rails space, so like I haven't had a need to mess around with any sort of Nix at the moment.

But I'm sort of filling my plate as I'm now doing a bunch of Kubernetes things and trying to catch up over there.

And actually, I have a machine over here, I'm running Linux, but other than that, like I haven't had a reason to do it.

But I wanted to ask a question around the Zig choice.

Mitchell: Sure.

Brian: 'Cause it sounds like, you mentioned Ghostty being a product. This is purely open source. It's like, no, there's no paid tier or anything like that for it?

Mitchell: Nah.

Brian: Excellent, yeah. So the choice around Zig, like do you feel like you're sort of trailblazing within that community?

'Cause I only know a handful of like companies that are using Zig. I don't know a lot of projects that are like fully using Zig in production.

Mitchell: Yeah, it's definitely trailblazing, you know.

It feels a lot like when I adopted Go. The only company I knew of using Go besides like Google maybe was Heroku at the time.

You know, Docker didn't exist yet, like these obvious Go users didn't exist yet.

So there were a number of us coincidentally adopting Go, and it sort of feels like that with Zig, maybe a little bit earlier though.

And so definitely like I've had to sort of rebuild a bunch of libraries that I don't think you would rebuild in a mature ecosystem. It was the same with Go. I don't know.

I think that if someone listening to this was a Go developer anywhere from like 2012 to 2016, like you couldn't download a Go project without getting like Mitchell H or HashiCorp dependency 'cause like, it's not like we wanted to build those dependencies, it's 'cause they didn't exist, so we sort of built a huge ecosystem.

That's sort of matured out at this point. And Zig's sort of the same way. I've had to build a ton of stuff.

I've had a lot of fun with the language. I think it's great. I still think it's probably a little early for like a company to adopt it. Like there's no language stability and stuff like that.

And so I think we're in the phase where we're probably going to see like community growth but not quite professional growth at the same scale.

I still think that's probably a few years out.

John: Yeah, we had a few of those dependencies in Cloud Foundry way back in the day when Pivotal was still a company, and I think they were pretty early adopters of Go as well, just transforming that whole thing from, you know, a giant Ruby thing to Go stuff.

What do you look for in new technologies? Like there's so many choices to make out there, not only languages but like platforms or things that you might like be bringing into like a bigger stack of platforms of things.

Like what do you look for when you're evaluating technologies?

Mitchell: Well, I feel like now with like a kid and where I'm at, I'm just looking to be productive and get stuff done.

So, before, I think I would choose technologies much more philosophically, you know, like, this just is a beautiful design or something. Now it's a lot more pragmatic: this will make me more productive, and it'll sort of prevent me from blowing my own foot off, and so on. And I think the most important personal thing on top of that is that I need to have fun writing it or using the technology.

So I'm also just like, if I'm at a point where I'm using technology, and it's like not sparking joy, you know, if I'm not being paid to do it, like I'm not going to use that technology.

So that's always been a really important part. I mean, I think that's a pretty core part of my personal ethos of like when we were building HashiCorp tooling too.

I felt like I always tried, and whether I succeeded or not is very arguable, but I always strived, with that initial user experience, download, run, to find some sort of happiness in a thing that you wouldn't really expect to spark happiness.

John: Man, yeah. I mean, I was just thinking about how a lot of the HashiCorp tools I remember using, even Terraform today, are very delightful, being able to just go and do a thing, and it kind of has that Unix/Linux ecosystem feel of doing one thing very well.

That's going to like spark a lot of joy in that. Was that part of the design principles with tools like Vagrant and stuff?

Mitchell: 100%.

I felt like, especially when I started working in sort of the DevOps cloud native space, this is like 15 years ago now, but I felt like all these tools that existed were really made for what you would call like a classic system administrator at best, and just for machines at worst.

Like there was the config formats, like the CLI, if they had one. The way you invoked them, like it was all so difficult for a human, and it wasn't very fun.

And I wanted to sort of, I think this is because I came from an environment where I was sort of building like consumer web applications as well as desktop applications.

I think that was my upbringing. Like I think I came to DevOps with this mentality of like, I want to build I guess like consumer-grade DevOps tooling, which isn't, you know, of course you can't give a consumer Terraform.

But like, you know, what I was striving for was to make a standard web developer think they could do this, like they could download Terraform and be like, "I don't need a sysadmin, I could do this."

Of course you need those people, but like that's what I wanted them to feel.

John: Yeah. Well, speaking of Terraform and HashiCorp and everything, I mean, one of the things that has come up a few times has been the license change that happened and the acquisition.

I'm curious if you'd be willing to kind of walk us through what that experience was like relicensing some of the projects, and just, you know, maybe where your head was at with some of that.

Brian: And for clarity, you're not like on the board or anything like that for HashiCorp, are you?

Mitchell: No, no, no, no.

Brian: Completely separate now at this point?

Mitchell: I'm totally separate, and, honestly, with the acquisition closed as of like a month before this recording, I'm at a point where I could really kind of talk about anything.

That being said, I don't think I have too much exciting to say.

You know, the license change happened when I was already like not an executive, leaving the company.

I think I got like a two-week notice that the license change was happening, like I wasn't part of that whole discussion, and the acquisition fully happened after I left.

I had no idea if HashiCorp was talking to IBM. So I don't have a ton to add except like sort of what I felt, I guess, at the time.

And the way I describe the feeling with the license change is that I really, really love open source. So of course it's disappointing to choose a non-open license, or to feel that that had to happen. But, at the same time, I was an executive for so long that I also understand the challenges and empathize with what that corporate leadership was going through that probably motivated that change.

Again, I didn't see the full RFCs that led to it, but like the way I've talked with people, the way when I talked about it with Armon and the CEO and stuff like that, it was always like I had these two sort of battling sides inside of me, and I'm not fully mad or fully happy about anything.

So that's sort of all I could say, I mean, it'd be better if there was a reality we lived in right now where that didn't happen.

You can't just magically say they couldn't have done it though, 'cause I don't know the full details.

Brian: Yeah, 'cause HashiCorp came up like in a special time in open source, and you guys had a lot of success, and had the IPO and et cetera.

We had Adam Jacobs come on board. I'm not sure if you've seen any of his recent conversations and podcasts around doing open source first and open source-ready type companies.

Mitchell: Yeah.

Brian: Like do you feel like we've kind of passed a movement where like you can't just go MIT license early days, and then do the VC route, and then hopefully never have to, like there's a relicensing happening eventually?

Mitchell: Yeah, I think, you know, because of HashiCorp but also because of like a dozen other companies, I mean like, there's no singular person I think you could blame here, but I think the open-source community is highly skeptical of venture-backed, open-source businesses and totally understandable.

Again, this is one of these difficult things because I always wonder like, if HashiCorp didn't get venture funding, like if we didn't spend all that money like developing this software, what would exist?

Like where would we be? I just don't know.

Brian: Yeah.

Mitchell: And I think that again, my open-source side wants to live in this world where we could build this bazaar that people collaborate and build incredible software.

The practical thing I've seen, both in the open source and as an executive, is like that's not what happens. It happens with the biggest, most successful projects.

I would say, like, Kubernetes is in that camp. The Linux kernel's in that camp.

But like even if you're looking at fairly large databases, like you go through the committer list, and the top 10, 15 are paid by one entity to do it, and there's this long tail of awesome contributors like that definitely commit bug fixes, ecosystem help, small features, things like that.

But it's like majority just run by one entity. And I have this hard time reconciling like how do you fund that? How do you, at the end of all that, when you've reached success, when you've reached this point where the community probably would benefit from sort of more open governance, how do you make up for that huge investment that that company made in order to reach that point?

And it probably sounds right now like I'm, I don't know, for these business license changes or something, and I'm not trying to do that.

I'm trying to just show like the inner conflict that I have of how do you get success as open source, but how do you make it sustainable, and, you know, getting donations or sponsorships even less than like a million dollars a year, which is 99% of all of them, doesn't sustain the software.

Like the salaries that, I just know from us, the salaries that we were paying to support each of our open-source projects were dramatically more than a million dollars a year, and, yeah, it's hard.

I recently did like an exercise because I'm looking into a bunch of nonprofit stuff since HashiCorp, and I've donated to the Zig Software Foundation and things like that.

And I recently did an exercise with some accountants where we found all the open source 501(c)(3)s. So that excluded the Linux Foundation and such, 'cause they're a 501(c)(6).

We listed all these 501(c)(3)s and we bucketed them by size, and I think there were only single-digit numbers that had over a million dollars a year in donations, and there were a lot that were way under.

And it's like you cannot build a Terraform with that budget. It doesn't happen.

It's done now, so it seems possible now 'cause you could be more in a maintenance mode, but getting to this point doesn't happen in the timeline it did without much more investment.

John: Yeah, I think that's honestly incredible insight. We're almost seeing something similar playing out right now with this project called NATS.io and the CNCF.

Mitchell: Yep.

John: I mean, I 100% have empathy for that corporate entity 'cause it's like, you know, and this coming from probably one of the most staunch, you know, free and open-source software guys in the world, you know, we all get it.

But like, I mean, we don't have eyes into the books of that company, and they're, you know, contributing like 97% of commits or something, they said.

I've even seen this play out in the Kubernetes ecosystem. Like I agree it's a very well established project and very like stable, but I was also at VMware when it got, you know, kind of gutted by Broadcom, and I was working on upstream Kubernetes, and then all those people just got scattered to the wind.

And they're not going to go and spend their free time, you know, contributing to Kubernetes without really getting paid for it, I guess.

And today it's, I would argue, probably just a handful of people that are keeping that core technology and core community going.

It's all wild, yeah. I mean, what is your takeaway on the NATS situation?

Mitchell: It's the same conflict that I have.

I think on one hand, I totally get that if you, you know, signed to the ink that you're donating a project and whatever came with that, then trying to find a way out of that feels bad. Like it feels like a rug pull.

On the other hand, I read the CNCF blog posts, and I recognize that I'm with a couple of people that are associated with the CNCF here, but I got to the section where it's like what the CNCF has done, and I was like, so you basically maybe spent 90,000 to 100,000 dollars on audits and trademark legal.

Like, compared to the millions of dollars that led to the stability of NATS, this is a huge imbalance in what happened here. So it's difficult, again. And I know Derek really well.

I never used NATS, actually, but I've only heard good things about it, and I can only guess what he's going through.

But, again, like when I put on the founder hat, a lot of empathy. When I put on the open-source hat, I'm a little bit upset. It's difficult.

Brian: Yeah, for listeners, I don't think I even mentioned I switched over from the LF to the CNCF running ecosystems there.

But I got to sit in on some conversations, and I think it's just a really awkward situation that everyone's tried to, you know, be open source and public about, because it's very clear the company that Derek represents was doing the lion's share of all the work, and there just wasn't alignment with the CNCF.

And I think historically, everyone wants to figure, like, with a foundation it's going to work out, we're going to get more people adopting it. But it never played out that way, so they said, no, we've got to figure out what to do next.

Mitchell: Yeah, I really think that it only works for very, very large established projects. That's my pessimistic point of view.

You know, Kubernetes makes a ton of sense. Like Apache Web Server and the Apache Software Foundation makes a ton of sense.

Kernel, of course. OpenOffice, LibreOffice, stuff like that makes sense.

But like I feel like when you're trying to, you're still trying to find product market fit, or you're trying to like reach that level of success, I don't know if the foundation model like is best.

I'm not sure. I obviously have a huge bias here.

I'll share one story too. At various points in HashiCorp's life, we of course talked to different foundations, including the CNCF, about possibly donating a project.

I was always open to talking about that. And the question I always had when I came into these meetings, and there were multiple, was always: what's the trade here?

We give you this, but what do you give us? Like, this is critical IP that we've raised money on, that I'm trying to build a livelihood on.

Like what's the trade off here? And for us, in our position, I never felt like I got an answer that made sense.

You know, it was like, you'll get speaking opportunities, and I'm like, we already speak at a few dozen conferences a year.

It's like, you'll get marketing. It's like, we got marketing covered. You'll get more users, I'm like, we get millions of downloads a year.

Like what is actually happening here? And so, at one of them, I asked another foundation, not CNCF, I asked them, I was like, "If you'll split the development cost with us, then I'll call it good."

And, at the time, we were tiny, and even at the time, I was like, "We're spending about $1.2 million a year in salary, so if you give us $600,000 a year, then let's talk."

And that was a no. For their largest project, and this was a very large foundation, they were like, we only give 200-something thousand a year in development funding. I'm like, that's not even close to what we need.

So this is the issue I come up against. And like John said as well, once it reaches a certain point and enters a maintenance mode, where you don't want to break too many things 'cause so much depends on this critical infrastructure, then the cost of maintenance comes way down, and then you want the guarantees of open governance and stability and things like that.

It's difficult, but that's sort of my opinion on it.

John: Yeah. One of the projects that I maintain, which I think was even used at HashiCorp, is spf13's Cobra, and it's one of those just wildly ubiquitous libraries that I won't change.

Mitchell: Yeah.

John: And I've accidentally broken it in the past, and it's like, I don't want to go, you know, breaking a bunch of these CLIs in the ecosystem. It's reached that level of maturity where I would call even the bugs kind of features, just part of the shape of the API.

Mitchell: Hyrum's law.

John: Yeah, exactly. It's like we're not trying to find that, I guess, quote unquote, market fit. It's an open-source library.

But we're not, you know, trying to gain a bunch of users. We're not trying to like, you know, iterate really quickly on like what the shape of it looks like.

It's just community management basically, which I think some of these ecosystems can do, or some of these foundations can really help with.

Mitchell: Yeah.

John: Brian, what are your thoughts on all this? I mean, you have kind of unique position still within the ecosystem.

I recently left the Linux Foundation, so I guess I can't comment anymore.

Brian: Yeah, I mean, you probably can comment way more than I can.

But I think it really comes down to, honestly, original misalignment. And with the NATS project, I think there's a lot of good that both parties, or all parties, can probably do now, but I think it's one of those situations where you just don't know what's going to happen around the corner.

Like maybe we would've predicted that Terraform's the most popular way to ship your infrastructure.

It's easy to say that now, but, like, would we have thought NATS would've been as successful back in the day?

So I think what it comes down to is the CNCF is still, what? It just hit 10 years. So they're still figuring out, what's the next phase of this organization?

Has it continued to do the best for like all parties in the ecosystem?

So I'm appreciative of some of the responses that folks representing the CNCF have been giving, which kind of help elevate that, but also, I'm biased 'cause they pay my salary today.

So, yeah, I think there's opportunity.

But like, are you in the better-case scenario today with Ghostty, where you don't have to think about, you know, whatever product market fit is for this thing, and you can really just kind of scratch the itch and build what you want to see in the world?

Mitchell: Yeah, I think like the fundamental tension in everything we've talked about is that, on the side of the project, there's a group of people that need to earn a living that they feel is fair in order to build this project, and Ghostty doesn't have that tension.

Like I explicitly do not want to make money from Ghostty. The extent that I want Ghostty to be sustainable is that I want contributors to be able to be compensated to a certain amount.

So I am sort of moving in the direction of forming, like literally this was a meeting I had this morning, I'm talking to lawyers about forming a nonprofit so that money could be donated, and I could donate money myself, and send that, not to me, but to contributors and try to figure out a system that works there.

And I think that sends a very clear message, you know. Once you donate IP to a tax-exempt nonprofit, there's really kind of no clawing that back, and so that'll send the right message about what I'm trying to do there.

But I think that doesn't work for everyone, like you need to get to a certain point first, and so I want there to be a blueprint for that, and I don't know what it is yet.

John: Yeah, I'm excited. I think that sounds amazing honestly for Ghostty.

What's next for Ghostty? Any big feature adds that you're looking at now?

I mean, it hit a big GA release, and, you know, sort of went out of the beta that you had with the Discord and everything. Yeah, what's next?

Mitchell: Yeah, there's a few different categories of things we're working on. One is just, we're still missing a few like obvious terminal features.

I think the one that has like a thousand upvotes is search, which is kind of embarrassing that we don't have.

But, yeah, we're missing like Control-F, Command-F search, scrollback search. So there are the basic features we're missing. There's a few of those.

Then there's sort of the mature desktop app stuff that we're working on. So like the next version will be localized in like 20+ different languages.

And, you know, it's not super exciting but it is exciting for some of those people, right?

And it's just something that you have to do for a sufficiently mature application to get widespread usage.

And then the third is the exciting stuff. Some of the stuff we're working on that I think, at least, is exciting: broadly speaking, I like to look at the GUI framework ecosystem out there, like Qt, GTK, AppKit on macOS, you know, Win32 and all the other frameworks that Microsoft has for Windows.

And I like to ask, like, what do they do that terminals don't have? Which is like everything.

But what do they have that terminals don't that makes sense, and bring it back? 'Cause a terminal is an application development platform, a text application development platform, but an application development platform nonetheless.

So what higher-level APIs are missing? Some of the stuff we're working on right now: we brought a sort of VS Code-style command palette to the terminal, and we're working on a new protocol for applications to be able to put all their actions in this native thing.

So imagine opening up Vim, and instead of hitting colon and tab to figure out everything you could do, you just hit, you know, Shift-Command-P like you would in VS Code, get a native widget pop-up, and get a full, mouse-scrollable native list of everything Vim could do, with a description, and just click it to do it.

We're working on stuff like that in order to, I think, make TUI applications more attractive for people and also make it easier for developers to implement this stuff.

So, you know, that's like one step on the way we're going. There's so much more that I want to do.

John: Yeah, I mean, this really has me excited. I'm a heavy user of Neovim and all these different TUIs.

One of my favorites is this thing called K9s, you know, which opens up this like big thing for Kubernetes.

And it was immediately obvious to me that one of the big innovations was basically the library. Is it libghostty? Is that what I'm thinking of?

Mitchell: Yeah.

John: Okay, yeah, and like what were sort of your first principles thinking around like, okay, this is just not going to be like only a GUI but like libraries that actually can be extensible and built on top of?

Mitchell: Yeah, on one hand, it was sort of a necessity, because we were being cross-platform, and I wanted, as we talked about, the native apps to feel really native, and that meant not using something like GTK on macOS.

And so I needed this separation between native GUI code, the macOS app for Ghostty is written in Swift, and then the core terminal emulation.

And then the second thing is that I looked at the ecosystem: Alacritty, Kitty, WezTerm, iTerm, Terminal.app, Windows Terminal. If you look at any of them, almost all of them re-implement core terminal emulation, and it leads to a huge functional inconsistency across all of them, and it's just a waste of time.

It's not the interesting part of a terminal. It's like the part that just has to work. It's not the part where I think you do a ton of innovation.

And the Linux ecosystem has something like this, which is pretty good, which is called libvte. There's about 50 terminals built on libvte, but it's purely, I think, GNOME-focused, and it works, and I'm like, I want to build a cross-platform libvte, and probably like higher performance and things like that.

So that was sort of the idea behind it. I also just think there's a lot more applications now that aren't terminal applications that want to have a terminal in them.

And I think VS Code is a good example, like editors in general all have terminals in them. I think that makes a ton of sense.

But I think that getting, you know, VS Code using Xterm.js is, as good of a library as it is, honestly it's really good, it's just a little bit sad that you have to do that.

And so I think like using a native hyper performant, fully-featured library makes a lot more sense for those use cases.

And this all circles back to why I think that Ghostty, as its own thing, needs to live in a nonprofit, needs to be like totally unencumbered and stuff like that, because if I'm trying to say that every application could have confidence embedding this thing, you know, you need to have this strong mission statement of what you're trying to do here.

John: Totally. Man, well, again, I commend you. You're doing me good, 'cause this is exactly the tools that I needed.

I wanted to ask quickly about a tweet that you had the other day.

Mitchell: I'm scared.

John: I know, I know.

Mitchell: I'm scared. Post HashiCorp, my tweets have gotten a little bit more unhinged as I'm not like held back by PR fears, okay.

John: Nice, nice. That's great.

So you said, "Two and a half years into the AI craze and I continue to firmly believe that if your company wasn't already interesting succeeding without AI, then doing whatever plus AI," end quote, isn't going to save you.

I just thought this was such a good take given the number of, like, bolt-on AI things we see in the market today.

So, two questions, what in your eyes makes like a, quote unquote, good AI product? And why aren't you building AI products right now, I guess?

Mitchell: Man, there's so many ways to go into that, and we don't have enough time, like there's not going to be enough detail here, and people are going to take some stuff out of context, I think.

But, anyway, I just think, like for example, there's a lot of companies that are just, yeah, doing like mail client but with AI, but it's a bad mail client.

The AI integration actually is pretty good, but the mail client sucks, and it's like you still, at the end of the day, have to interact with the mail client.

So that's just a concrete example of like, I just want a really good mail client, and I also want AI features, but I need the whole package to be good basically.

And I think there's a lot of funding happening right now, and a lot of new products being shipped that are just like X plus AI, and the X is not solid, the foundation isn't solid.

And so that was sort of my core take. And I think that I remember that tweet, and I mentioned in that same tweet that I think like editors like Cursor don't have a strong moat, and I think Cursor's AI is really good. I've used it.

I'm a Neovim user, but in the interest of open-mindedness and learning stuff, I've used it for a few things, and their AI integration's really good.

But when I look at it I'm like, okay, but what's stopping, like is there actually a moat here? Like what's stopping Microsoft from doing this with VS Code, or Zed from doing it in their editor really well, and they own all their IP?

That's my feeling on it.

John: I am curious if you've looked at avante.nvim recently?

Mitchell: Yeah, oh, man, all that stuff's changing so fast. I used Avante for a while. Hats off to the person who made that.

And then I switched, I'm currently using CodeCompanion.

John: Oh, I haven't tried that one.

Mitchell: I feel like it's better. I don't want to piss anyone off, but like I've had better success with it.

John: Sure. I mean, you said it, like these things are moving just so fast and-

Mitchell: Even with CodeCompanion, you can't like follow the tags. You have to use the main branch because it just moves so quickly that like if you want to use the latest models and stuff, like you just, you can't wait for a release, so, yeah.

John: Yeah, it's moving so fast, and that moat just seems to get smaller and smaller with some of these things, 'cause you know, if somebody can do this in a couple of hundred lines of Lua or whatever, it's scary for anybody shipping a product that, you know, competes with that, right?

Mitchell: Yeah, and I'm not in the AI space, I'm just a consumer, so I don't want to talk with too much confidence.

But, yeah, I think that practically speaking, like the example of how I switched from Avante to CodeCompanion, and I use Cursor, and I feel like I'm getting a lot out of these Neovim, you know, totally open plugins that are developed by one person.

I think it all shows that even if there is a moat, it's smaller than you probably think.

John: Yeah, yeah. Well, I could probably keep chewing your ear off about Neovim and stuff, but Mitchell, are you ready to read?

Mitchell: I know how to read, so.

John: Okay, so for today's reads, Brian, why don't you kick us off?

Brian: Yeah, I had a read that's actually kind of a segue from what we just talked about, a blog post from continue.dev.

Company's called Continue, but their website's continue.dev, about collecting data on how you build software.

It's really asking the question around all these prompts you're adding in these chats. That data is like extremely valuable.

So like, you have a team of engineers working on Ghostty, and they're asking like, hey, can you prompt this to like get education around this part of the code base to then write this test, et cetera.

Like there's a lot of really valuable commentary that can be sourced into, hey, there's some documentation edge cases and et cetera.

So this actually came out in the summer of 2023, so it was like pre the sort of Cursor wave, and even like whatever Copilot's doing today.

I thought it was really interesting because I got to thinking about this other tweet I saw from Swyx about this feature from OpenAI where you could just like opt in to share your information, and then you get a million free tokens a day, which, going back to like the Copilots and all these other tools, like how valuable this information is, and how they'd be able to advance their product.

Perhaps all this data is like, we're just leaving it on the table. So it's got me thinking about like how I interact. I've been using a lot of local model stuff like the Llama Coder and Qwen, which I know you'll talk about that later, John.

And trying to sort of build my own local code assistant that doesn't have to depend on, you know, a Claude subscription, et cetera. So I thought those were interesting things to share and happy to hear any commentary on that.

John: So it's kind of the idea that, you know, you can not just look at the code but also like, you know, maybe methodologies, things that kind of go beyond docs.

I'm almost thinking like tribal knowledge, if somehow you could like put that down into a model and then it could kind of understand almost that like senior developer intuition or something.

Brian: I mean, that's actually the best way to frame it.

So Mitchell, I don't know if you know my background, but I worked at GitHub for five years, and I was on GitHub Actions, like I was the DevRel side of that, so got to do a lot of storytelling and building examples.

And one of the easiest things I did in my job was go through the discussions in the forum of like what are the most popular questions being asked? And what's upvoted the most?

And then create content around that. So blog post, video, whatever, conference talks, anything that got upvoted the most.

And like Stack Overflow, same deal. And that's like my success in doing DevRel: just go look for answered questions.

So then when you look at answered questions inside the prompt, it'd be nice to get that sort of data, which again, no one's really doing anything with this right now.

I don't think there's one feature or one product that actually lets you look at what chats are happening, or what you've looked at previously.

Like even with Copilot, your history only lasts the session. I don't think you could go look at what you asked last week.

So I think there's a lot being left on the table on us being able to, one, leverage AI for the benefit of programming, but two, like how do we learn, okay, I keep struggling with this thing where it can't fix bugs 'cause I'm like stuck in agent mode and this like weird debugging loop.

Mitchell: Yeah. I like the analogy too of making it a senior engineer. That's definitely something that I'd find useful.

There's this like website called DeepWiki I think. I think it's like fairly mediocre. Again, don't want to piss anyone off, but I think it's just fairly basic.

But if you go to DeepWiki, I think you can append any GitHub project, or maybe it's curated, but it clearly runs the whole code base agentically, over and over, through an AI and builds architectural docs for various projects.

I only found out about it 'cause Ghostty just popped up on it, and I was reading it and there's a lot of crap, but also at the same time I was like, yeah, I mean, this isn't written down and this is close.

It's pretty interesting. You could just like pull up PostgreSQL and just like see how like the backing stores work and stuff like that.

And again, I don't know how much of it is true, 'cause based on Ghostty's page, I have a lot of doubts, but the idea is really cool.

John: I love it. I think it's like one of those things that will just continue to be like the next step, the next step of building this like, I don't know, maybe infinite web of knowledge and things just to continue to feed into agents and models and all these things.

Mitchell: Well.

I think the most annoying thing about AI right now is managing context. If you could solve that, however it's solved, I don't know, 'cause again, I'm not the expert. But as a consumer, if I don't have to manage context, then I think we're going to be in a much better place.

I feel like most of my success learning how to use these tools, like Avante, CodeCompanion, Cursor, has been figuring out how do I open the right buffers and share the right thing, so that the answer's going to be higher quality.

And it's this like really silly thing that I feel like I'm not going to need the skill in a few years. But yeah, that's sort of where I'm at now.

John: Yeah, that's so fascinating, 'cause as somebody who's built agents and like some frameworks and libraries and stuff for AI, usually the most arbitrary thing is just like, okay, take the whole history, slap it back into the model, and then it'll regenerate the thing, and then it just kind of keeps going and going and going, until that context window gets too big.

Mitchell: Yeah.

John: And there's so many tools that are so good for search indexing.

I mean, I yearn for the day when somebody plugs in, I don't know, OpenSearch into, you know, whatever LangChain's graph thing looks like, and then it can like more intelligently look back into its previous history versus just like, okay, here's the whole context window, right?

Which really I think just shifts it to the consumer to you to manage that and kind of be like, hey, please don't regenerate the entire file.

Just tell me how this one, you know, function works or something, right?

Mitchell: Yeah, I don't use LSPs, and I use Copilot for like auto-complete across my project, which like people sometimes have an allergic reaction to, and I'm like, it's pretty good enough.

But like I've realized, as people have challenged me on that, I've looked at my workflow and thought about it, and one thing I noticed that I do is like, when I want to complete an API call that I know is in a certain file, I'll open that file in like a split, and then I'll go back and get the auto-complete and it usually gets it right.

Because like I just naturally, organically learned that. Copilot's kind of like totally opaque in how it works.

It's not open source, but like I've learned that it must be sending other buffers or most recently accessed buffers as context.

And so I learned like, oh, if I open it, my cursor goes there, my cursor comes back, I hit insert mode, the right thing pops up this time.

And now I just do it so fast that like it's become automatic, and then when I watch someone struggle and say Copilot is terrible, and like they struggle, and I'm like, it's 'cause you're not doing this thing.

But then I realized the thing is a little bit irrational. So it's funny that way.

John: Yeah, fascinating.

Brian: I love this workflow, and I was going to say that, I'll mention one comment, I'll let you, John, you can move on to the next thing.

But I started writing tests, I was writing a bunch of Ruby code last weekend, and I come from like the test driven development Ruby world.

So it's like, oh, in my brain, like, write the test and the code will be generated through agent mode, like perfect, and that was 100% true.

Like, test-driven development works with agents in the context of what you can kind of limit it to: I want it to work in this way and hit like these methods and functions and stuff like that.

I was actually shocked at how performant it was for me.
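The loop Brian describes can be sketched in miniature. Here's a minimal Python sketch (the `slugify` function and its test are made up for illustration, not from any real project): the human writes the failing test first as the spec, and in the agent workflow the model keeps rewriting the implementation until the test passes.

```python
import re

# Step 1: the human writes the spec as a test first. This is what
# constrains the agent: "work in this way, hit these functions."
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces   everywhere ") == "spaces-everywhere"

# Step 2: the implementation. In the agent workflow, this is the part
# the model generates and regenerates until the test above passes.
def slugify(text: str) -> str:
    # Lowercase, collapse runs of non-alphanumerics into single hyphens,
    # and trim stray hyphens from both ends.
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

# Step 3: run the test; any failure is fed back to the agent as context
# for the next iteration of step 2.
test_slugify()
print("tests pass")
```

The key design point is step 3: the test result gives the loop a concrete pass/fail signal, which is what makes "let the agent spin" work at all.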

Mitchell: Yeah.

I think it's a little bit manual, but getting agents to at least execute if not write tests in addition to writing code and looping on that is like unreasonably uncomfortably effective for me as a developer.

And yeah, I mean I think the zero-shot "write this function and have it work" is, in the year 2025, kind of a pipe dream for most things, unless it's like really hyper-common JavaScript scenarios or something, or Python.

But the idea of like write a test, write some code, re-execute and just like let that agent spin and probably cost you like 10 bucks, let that agent spin for like the next half hour, it gets pretty scary close to success every time.

And yeah, that's the only, like, that's the thing that really has me like thinking about what does this look like a few years from now?

John: Yeah. We're down the rabbit hole now. I mean, I'm curious how it is for Zig.

Mitchell: Pretty bad.

John: Pretty bad, okay, okay. I remember years ago, it was so bad for Rust when I was writing a ton of Rust, and I almost would just, I guess this was like GPT-3.5 days, but I would just give up usually.

Mitchell: It's gotten a lot better. Like the Copilot complete-a-line thing is all pretty solid at this point. But the bigger stuff, like writing multiple lines, doing stuff like that, it's pretty bad.

Usually my trick is I have it write C, and then I as a human just convert that to Zig. If you ask it to convert it to Zig, it'll hallucinate a ton of syntax that doesn't exist, so.

But C is such a widespread language and there's so many code bases that have been trained that I find that it's really good at that.

So if I'm actually working on something, I'll just have it iterate through and write a bunch of C.

John: Man, I'm going to have to adopt some of those workflows.

Mitchell: But again, stupid tricks, like I do that so naturally and I get success, and then when I talk about how I'm using AI in a successful way and people are dumbfounded by how that's possible, I have to remember that I'm doing weird stuff that like I've adapted to.

John: Yeah. I think I was a fairly early adopter of some of this AI stuff, and I think I've actually gotten worse at like using it.

I think early days I was like, "you will write a test, you will," you know, and I was very specific on what I expected, and now it's almost like, and you know, maybe this is like the AI brain rot, I don't know, but like now I'm just like, "do a thing" and I don't always give it enough context.

And I realized that, you know, that's probably just like a separate skill I need to continue to-

Brian: I think it's like, I started using 4.1 from OpenAI last weekend on like a small little toy project, and I started at 3.5 for cost reasons, and then once I got comfortable that the code was going to work, that the product works and I don't mind people using it publicly, I switched to 4.1, and like, it can one-shot. The product is like, I can take a pull request and generate a blog post, and this is more of like scratching an itch for myself.

But it's worlds better with 4.1, and I can just, you know, be okay with like a one-shot experience. But I imagine it's going to degrade, it's going to get worse.

OpenAI is a little black box, and like whatever they ship every three to six months will definitely break everything.

But at the end of the day, it's like we have our weird tricks up our sleeves and we kind of figure out how to like hold it right or hold it wrong, whichever one you want to say.

Mitchell: Yep, yeah.

John: But eventually that will change.

Mitchell: Totally.

John: Wild. We need to write a book on like how to use AI.

Mitchell: It'll be invalid in like three months.

Brian: Yeah, "AI for Dummies."

John: Well, speaking of AI, I'll move on to my reads.

One was this post that I wrote actually on opensourceready.substack.com. Listeners, go check it out.

Brian: I don't think we mentioned that we have a Substack on the podcast yet, so.

John: Oh.

Brian: Yeah, worth mentioning we have a Substack.

John: We have a Substack, yeah. This was a post I wrote, sort of, I don't even know what to call it.

It was almost a rant, almost like an open sourcey manifesto, but it was titled, maybe hyperbolically, "There is no open source AI."

And I think it came from a place of kind of going crazy hearing some of these companies call their AI, quote unquote, open source when it's really not.

And you know, I dove deep into the, you know, Stallmany definitions of some of these things. I don't know, I'm curious, Brian, your thoughts on this.

I got some like good conversation from the OSI, who has their own definition of what open source AI looks like. What did you think about it, Brian?

Brian: Yeah, and we actually had Avi Press on here to talk about the "open source AI" definition in episode three, I guess, I don't, I probably should have got that number right.

But yeah, I think it's just one of those deals where everyone's kind of running towards like a land grab right now.

So like what we got from OpenAI was almost open source AI, and then they clawed that back a bit when they got GPT-3, or was it 3.5, out there.

So now we're all kind of making it up as we go along when it comes to like open weights, and like what's open, and Llama's definition and whatever that is, if you have X amount of users.

So I think what's helpful is, ironically, DeepSeek, like, shipping what they did, which kind of exposes what the cost actually really is.

And like we have a window of opportunity of folks saying, okay, and maybe it's like similar to what you're doing with Ghostty, Mitchell, but like maybe someone is going to benefit from doing proper open source AI and setting the tone, or sort of the pace, for what we can do, and not this is my walled garden and that's theirs, et cetera.

So I love the post and I encourage everyone to read it and like gather their own opinions as well.

John: Yeah, Mitchell, how do you typically think about like open source in this kind of non-traditional software sense with like AIs that they claim are open?

Mitchell: Yeah, and I read your post, and there were two things that like popped out at me.

One was like, I think, not even a question, like--

One thing I was really nodding at was the issue with copyrighted material. Like I said before, even as a consumer of AI, I want that to go to court, just so I could have an answer. Like I just want the precedent out there to know what the hell we're supposed to be thinking about this.

Like I had this case happen with Ghostty where Ghostty is MIT licensed, and a lot of terminals out there are GPL, and so we can't look at them. I can't look at them and-

John: Oh, wow.

Mitchell: We had an issue where someone said, I want to see this feature, here's how iTerm implements it, and like pasted it in there, and I quickly edited it, deleted it, responded, said, "Hey, we can't paste iTerm source code."

I didn't read it, but I can't trust you to contribute this anymore, so we need someone else to contribute it.

John: Yeah.

Mitchell: And then he responded saying, "Okay, well, I asked ChatGPT and it generated this and it works."

And I was like, you know, is that safe? Did that like launder it through the system to where it's now legal?

I don't want to think about that. I want a court to tell me no or yes. I don't care either way, I just want to know the answer.

So that stood out to me as something like I really want to know, and it also like totally makes sense as an impediment to true open source.

The second thing that sort of popped out was like, it made me wonder, and this is totally a question.

I would love to see like data on this. It made me wonder: does all innovative new technology have to go through a fairly long, years-long cycle of proprietary-first in order to figure it out, before the commodity open source versions, the ones that become very good and sometimes exceed them in capability, can become successful?

And I was just like thinking back in terms of like, have there been examples of like new database paradigms that were successful open source first, or was it like Oracle and, you know, Windows Server and stuff like that, that like had to be way advanced first, and now we're at the point where it's such a commodity that all these other databases are very, very, very good.

I thought about VMware with virtualization, like they hit the hypervisor first. KVM was a joke for a long time, all these things, now KVM is like super, super good, but like it had to lag behind that proprietary timeline.

Like are we in that same phase with AI here where we're in the proprietary, a bunch of people are going to make a ton of money, whatever, but like we're in this phase where it's too new that you need all this like boatload of funding that's happening to make it work? And then we're going to look back 10 years from now and all the true, true open source versions are going to be king because it's like a commodity at that point?

I don't know. I would love to know actually if that pattern has existed.

John: I think what worries me though is that this whole thing feels like, and I think I said this in the post, it feels like this big race to the bottom, where that bottom is either some technological barrier with the transformer models or whatever current AI technology we have, and we're just never going to get past that, or we have to invent something new.

Or we hit some crazy basilisk super AI god, and there's no like closing Pandora's box after that.

So I think, I dunno how to answer your question, because it feels like a whole new class of power and like technology that I almost don't even know how to classify. It's so many things.

It's got copyrighted material in it. You know, nobody has a cluster of H100s sitting in their house even to reproduce it if they did have the copyrighted material and the entire contents of the internet.

So like we as the consumers are almost just expected to like, "no, it's okay. It's open weights. You can, you know, download the 14 billion parameter one and you'll be fine with that, but you're going to buy our service because you want the like good thing."

Mitchell: Yeah, that's what I was wondering when I was reading your post too is like if you had a truly open source version, like no one can afford to reproduce this.

Like even the cheapest, and there's a lot of controversy about how accurate it is, but even if you take at face value what DeepSeek was able to do, it was still astronomically expensive from like an individual standpoint.

John: Yes.

Mitchell: It's not like the Linux kernel, where you could use a 10-year-old laptop, download the kernel and actually build it. You know, like you can't do that, even in theory.

John: Yeah. I've been slowly chugging through this book, "Free Software, Free Society," which is like the essays of Richard Stallman, and it is crazy because the beginning days of GNU and his time at MIT, like he was sort of stealing time on these like very early days, you know, mainframey computers that, you know, you had to go like actually schedule time to like run your programs on and stuff.

It sort of feels like that where like, it would almost take like a, I don't know, like a new age Richard Stallman to get on a big cluster of H100s to like truly, you know, jail break it into the open, right?

Mitchell: Yeah. And I think we're in this weird place where like, I want nothing more than to run local models, and Brian, you talked about you're like experimenting with that.

Every time I've tried that it's just been too slow, too expensive, too low quality, but like I really want to get there.

And so I'm like, I'm hoping, you know, even I'll just be generous I guess, like as a consumer, I feel like five years is probably safe, but like, you know, Apple's already making laptops.

I think they're making laptops that have like, well, I know my laptop has 128 gigs of RAM, which is absolutely absurd.

And like, I think if you get to the point where you could get 256 in a laptop form factor, I think the Mac Studio could do that now, but like if you get to that in a mobile form factor, then like it starts getting a lot more realistic, and like that becomes really interesting if you could dedicate 200 gigabytes of your RAM to running a fairly large model.

John: Yeah.

Brian: Yeah, the only way I'm able to do it is run it on my PC that has a GPU in it and use Tailscale to then point at Ollama.

So like that's the closest to self-hosting, but it's like, not on the laptop itself, 'cause I've got a regular MacBook Pro with I think like 32 gigs, so like no way I'm running a full Ollama model there.
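For listeners curious about Brian's setup: Ollama exposes a plain HTTP API, so the usual pattern is to have the GPU machine listen on its network interfaces and point the laptop's client at it over Tailscale. A rough sketch under those assumptions; "gpu-box" is a placeholder for your machine's Tailscale name, and `OLLAMA_HOST` with port 11434 are Ollama's defaults (check the Ollama docs for your version):

```shell
# On the GPU desktop: bind Ollama to all interfaces so the
# Tailscale address is reachable (11434 is Ollama's default port).
OLLAMA_HOST=0.0.0.0:11434 ollama serve

# On the laptop: point the Ollama CLI at the desktop over Tailscale.
# "gpu-box" is a placeholder for the machine's Tailscale MagicDNS name.
export OLLAMA_HOST=http://gpu-box:11434
ollama run qwen2.5-coder "explain what this function does"
```

Because Tailscale puts both machines on a private mesh network, nothing here is exposed to the public internet, which is what makes this a reasonable stand-in for true self-hosting.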

John: Yeah. Well, this has been an amazing discussion. Mitchell, thank you so much for coming on the podcast, and yeah, giving us your candid thoughts on open source and AI and everything. We really appreciate it.

Mitchell: Yeah, thanks for having me on. I hope people don't take too much offense to some of the stuff said here.

I'm still trying to figure it out myself. We're always growing, we're always learning. I'm trying to take it all in and figure it all out.

John: Yeah, as we all are. Well, listeners, remember, stay ready.