1. Library
  2. Podcasts
  3. Open Source Ready
  4. Ep. #28, 2025: Year In Review
Open Source Ready
33 MIN

Ep. #28, 2025: Year In Review

light mode
about the episode

On episode 28 of Open Source Ready, Brian Douglas and John McBride reflect on the biggest themes that shaped open source and AI in 2025. From sustainability and security to MCPs, agents, and infrastructure, they revisit key conversations with guests and unpack how the industry evolved over the year. The episode closes with bold predictions for what 2026 may bring for developers, maintainers, and open source communities.

transcript

Brian Douglas: Welcome to another installment of Open Source Ready, it's the end of the year recap. John, how are you doing?

John McBride: It's the recap. I'm doing good. How are you, Brian?

Brian: Good, good. Yeah. So we crossed our year but also we're ending 2025 with this episode as well.

So I thought it would be fun, actually, no, you thought it would be fun to do this. It was your idea originally to basically go through the last year and talk about all the episodes we have done in the last year and to relive the moments.

John: Yeah.

Brian: So John, are you ready to jump in?

John: I'm ready to jump in. Let's go.

Brian: Cool. So we started actually in 2024, November of 2024. And the first probably six episodes out the gate were all themed around open source sustainability, but also money in open source.

And we had a bunch of really cool conversations with our guests. But I'd say the ones that actually stood out for me included Chad Whitacre and the Open Source Pledge. That kind of hit home because we had a bunch of billboards specifically in Oakland and San Francisco. Funny enough, I was at AWS re:Invent and saw some billboards, Sentry billboards. I guess they have the same designer.

John: Oh nice.

Brian: So they're still at it.

John: Yeah.

Brian: But what was really cool was we got to really learn about what Sentry thinks about open source sustainability, and the Open Source Pledge initiative. So it's a operations inside of Sentry.

But then we also talked about sustainability with Tobie Langel as well and what he's doing. It was an interesting time because it was like right before the sort of advent of agents and coding agents.

John: Right.

Brian: We barely had Cursor agent at that point. But the entire industry has shifted where sustainability is less about can we support this but more about the money now at this point. And I think it's been interesting to view outside but also on the inside of like--

There's a lot of money around, but I'm not sure if there's a lot of money still for open source.

So I'd be curious, what are your thoughts on since we had all these conversations?

John: Yeah, yeah. I think that conversation with Tobie really stood out for me as well. Just hearing from him about some of these wild vulnerabilities that have gone through the JavaScript ecosystem that he's a part of as well as just like throughout the industry in the last five years.

I think it speaks to just the amount of software that's been created, that is being created, but then also how deeply a lot of that integrates with our day to day, I think the CrowdStrike thing, you know, where airports shut down and everything. That was, that was in 2024, right?

Brian: Yeah.

John: But still like that rings true for so many people around how like deeply these things integrate. I'd be curious what Tobie's thoughts would be now with the advent of all these AI agents shipping even more code, possibly more vulnerable code and like how we can sustain that now that some of the software development life cycle has changed a little bit.

Brian: Yeah, yeah. Somebody who I've been wanting to get on the podcast and hopefully we'll get them on in 2026 is Feross Aboukhadijeh from Socket Security. Feross's experiences, He built npm-fund.

John: Oh yeah.

Brian: And specifically, basically when you npm install, it had like a little banner that says, "you could sponsor this project." A lot of people had a lot of reaction to that.

John: Yeah.

Brian: Which I think we brought it up in Chad's episode as well.

John: Yeah.

Brian: But Feross is now working on a very adjacent problem, which is security. And so you have a bunch of AI generated code. They're finding vulnerabilities of folks who are trying to find backdoors into things like Claude Code.

John: Right.

Brian: Which we saw a few months ago only, like a bunch of npm vulnerabilities because now you don't have to be an expert on the project. You could actually use projects and make legitimate contributions with code.

John: Yeah.

Brian: But then like you could also slip in a back door. It's pretty sophisticated what people are doing nowadays.

John: Yeah. I mean, not to bury the lead too much, but one of my predictions for the next year would be that exactly. That the scale and type and proliferation of these vulnerabilities will just continue and continue.

One of the biggest ones that I don't think we've mentioned on the podcast recently is this Shai-Hulud vulnerability that was basically a worm, which we haven't seen a major worm type of vulnerability since the 90s or something, when everybody was just email forwarding stuff all over the place to each other and just opening attachments. Right?

But this thing would just go and replicate itself, open up new repositories in your org, steal secrets, and then go and do the same in other packages that were within your org. And that all was basically executed through some prompting inside of a string to get Claude Code and to get some of these agents to go and do a thing. Which is very scary. That makes detecting these things semantically in code much more difficult. That type of vulnerability, I think, will be seen more and more as we go forward.

Brian: Yeah, and this is something that I really enjoyed one of our guests we had actually in the 2025, so first half of 2025 we had Mitchell Hashimoto come on and talk about Ghostty. And the thing actually happened like right after he joined us as he shared a PR, which is basically as part of your contributing guide for contributing Ghostty is you have to disclose that you're using AI to generate code.

You can still do it, but you need to disclose that because you're going to have a different review experience if you know folks that, maybe since Ghostty is in Zig, like, maybe you don't know Zig as well. Maybe you sort of generate a bunch of stuff to kickstart what you're trying to accomplish.

John: Right.

Brian: But at least gives the human in the loop a proper leg up in the review process. Which I thought was, one, fascinating having a conversation with Mitchell, but also seeing that come out shortly after we had chatted with him.

John: Right, yeah, that was a great conversation. I think that was probably one of the highlights for me of the year, just hearing about his thought process, hearing about how he uses AI agents, obviously getting to discuss the open source side of Hashicorp and that exit and where some of those things from our favorite topic, open source licenses, went one way or another.

Yeah, it was a great conversation.

Brian: Yeah. Yeah. And then the whole first half of 2025 is really around agents. Well, actually it wasn't even about agents, it was about MCP. So we ended up having Steve Manuel talk about MCP Run.

I've since talked to Steve offline. He's working on Turbo MCP now, which is an enterprise grade solution for having basically one OAuth experience to manage all your MCPs. Because what I've actually found since then is a lot of folks are building their own MCPS internal for the enterprise because it is around the trust part.

So if I'm going to use, I think up until maybe a couple weeks ago Slack, or actually still today at the time of this recording, Slack doesn't have an official MCP. So you have to use a community version of the Slack MCP which just wraps a bunch of API endpoints and then adds AI to it.

Which there could be a backdoor in one of those things. And I think the fear of a lot of enterprises is we need to sanitize all these interactions especially if the LLM is going to be touching our code and even, God forbid, our databases or even our Slack database. There's a bunch of threads and information that is in there.

So for that reason, I'm glad that Slack has a proper MCP coming out soon. But it's interesting. Probably the first interaction I saw around MCP was from Steve Manuel but then we also talked to the Goose folks. Adewale "Ace" Abati from Block.

I think you had mentioned Goose. We got Goose on but I had not even used Goose. I wasn't even aware of the actual application of it and I've only just seen Goose like non-stop since then.

John: Yeah and that that conversation was great too because that was a tee off into how a lot of different companies and products and things think about the types of models and those inference provider APIs that they're using.

Because I think Goose was very early on this kind of train of like oh yeah, you can configure OpenAI or you could configure Anthropic or even local Ollama or any of these other ways of doing it. I think that will be continuing to be very important as token costs probably shift going into the future. Different providers have better, different models.

Another shout out to a conversation we had with Quinn Slack, now at the Amp company, f ormally Sourcegraph. They do this frequently where they will switch around the model that they use for Amp based on some of their internal evaluations, probably test suites and real evals that they run. Trying to always get the best experience for the cost that you're paying.

Brian: Yeah, yeah and this is something that, it's funny because we talked to Quinn, we talked about the sort of like switching models on demand and like the routing. But one of the things actually, to be quite frank, I didn't really like Cody back in the day . And I didn't like even the current version of Copilot because you have to constantly switch between models to get your best experience.

And for someone who was brand new to using coding agents in its current form, I don't want to choose as soon as I log in. I've got to like find the best model. What if I could tell you what I'm trying to accomplish and you pick the best model for me?

And I think what Amp was doing with the ads within Amp Free, which we spent a good amount of time talking about that on that episode with Quinn. I think we're going to have a bit more of that moving forward.

Like in our next episode with Cameron, which actually will come out after this, we talk about ads in our reads, ads in LLMs.

But I think there's going to be a shift and somebody's going to have to be covering the cost of this inference and compute.

And it might be in the way of like we will just provide you the recommended "I'm feeling lucky" type experience rather than constantly always trying to, for you to do evals and choose whatever the best model is for the job.

John: Yeah, totally. This makes me think of a conversation that we had as well this year with Josh Rosso from Reddit where we did go more into the technical deep dive on running huge Kubernetes clusters at scale for Reddit, how they go into some of that, but then also just like the raw engineering resources, the cost to go and do that and how that plays into real product that then becomes a part of I guess Reddit in this sense. And Reddit Ads probably feeds the true cost of infrastructure.

I think that there's been a lot of subsidizing in AI, a lot of subsidizing for tokens. Clearly there's going to have to be something that changes or shifts.

Yeah, we'll see where that goes. We had a lot of conversations also, Brian, this year about open source AI. Where is your head at with that? What's your reflection on where open source AI is?

Brian: Yeah, my reflection is like, man, I really wanted to get the GPT-OSS team from OpenAI on. I have a DM with them and we're pretty friendly. But the timing didn't quite work out. I thought towards the end of the summer of 2025 we got the advent of a bunch of really good open source models. Obviously we got Deepseek in January of this year.

John: Also the Kiwi ones recently.

Brian: Yeah.

So we're getting good models but I think what the models are doing is really just giving us more unfiltered access to LLMs to help build what it's actually enabling, which is the new wave of infrastructure and AI.

So because you can have access to models, you don't have to go through the gate or worry about inference and if you have-- Something I don't think we spent any time talking about on the podcast, but Nvidia shipped out the Spark boxes, so now you could have GPUs and a Mac Mini style container.

So if you want to do AI coding, you can get all open models working for like about less than $2,000 in a very portable box. So take it to your work in your backpack and plug it in. It's funny, right outside the booth we got two or three of these things that we've been testing and we were able to talk about in a blog post.

John: Yeah, yeah.

Brian: But I think it's going to be probably the opportunity area is we'll have LLMs that we can do in consumer grade hardware.

John: Yes.

Brian: And that's very approachable. But I still holding out. we spent some time talking to Rachel-Lee about this, SLMs. I'm holding out for the opportunity to do more stuff on my iPhone. Like to be able to source through all my text messages and look for something specific and have that knowledge, which even today Siri and Apple Intelligence doesn't quite give me that yet.

I would love to spend a weekend with SLM on my phone, that's not gonna overheat the thing, and just index all my text messages and photos into a model and give me the experience that I expect Apple to give me. And maybe in 2026, Apple will give us that experience by maybe WWDC.

Open models are gonna help open source continue to make a stab at the footprint of the industry.

The one thing that we didn't spend a ton of time talking to is any sort of the Vector DBs because I feel like what pgvector did to Vector DBs, providing a open source plugin for all Postgres, makes that extremely approachable.

So when I build something, I'm doing pgvector inside of Supabase. It's a non-question. I've definitely grown out of Supabase and pgvector for one of my projects and I'm using Chroma for that now.

But I think what we're seeing at LLMs, we're going to see way more tooling in open source that are being driven through some of these open models. So expect to see more Chinese models come out in the coming weeks and coming months and we might just have like a twice annual cycle of like some of the more premier foundational open source models coming out.

We'll have a great time with it, but then we'll get another Claude or GPT model that gets us to forget about them.

John: Yeah, absolutely. It is interesting you bring up Vector databases because something I've observed over the last year is this hilarious, full circle moment that we're having with some of this stuff where like very early on you know, there was GPT3.5, the context window was tiny.

Everybody was building RAG and talking about vectorizing stuff and doing this retrieval augmented generation in order to lighten the load on the small context windows with some of these models.

Then context windows just got huge and there's like millions of tokens that some of these models can support. So people kind of stopped doing that and just started just shoving all kinds of stuff into the context windows.

Then we got MCP to kind of standardize how you could do tool calling. But now we're going full circle back around to vector databases. And I do think that vector databases will continue to be an important technology in this space because we're seeing people build these things.

Like recently Anthropic had a blog post about their Tool Search Tool that goes and searches all the tools within an MCP sort of uber thing because too many tools just pollutes the context window too fast where like 10%, 20% of your a hundred thousand tokens just immediately gets eaten up by the initialization flow for MCP.

So then that makes me think like, oh, we're just going to go all the way back around to RAG if we're trying to search for tools. I think the circular nature of some of this stuff is going to possibly continue.

Brian: Speaking of which, I'm curious. We kind of ran through a bunch of the highlights of the episodes we had.

Towards the last four or five months we've really been talking to a bunch of folks in the infrastructure space and I love the conversation with Davanum "Dims" Srinivas and learning about what he's doing really for the Kubernetes space, but also now at Nvidia, recently AWS, but now Nvidia.

I feel like the entire world of infrastructure, it had really been sitting on the sidelines for a lot of this AI stuff.

So we had a lot of MLOps and we had Anyscale has Ray and they just donated their project to the PyTorch Foundation. And VLM got donated to the foundation and we have a bunch of other open source stuff that's actually-- Mike over there who's leading the Pytorch foundation, who we should definitely have on as a guest on the podcast.

I think there's going to be more and more projects that we're seeing that we're depending on like pgvector that probably belong in a foundation to help build a standard, to give us some confidence for large enterprises to ship AI and LLMs to production, but in a safe way that's consistent, so we're not constantly always looking for the next SaaS product to unblock us, but we can now ship some stuff confidently.

And ideally these SaaS products also shift over to like what the standards are. Hey, Future Brian here. And it turns out right after we recorded this conversation the Agentic AI Foundation (AAIF) was announced. So just wanted to drop a note about that because obviously I'm predicting something that happens when literally two days after we recorded this. At the time that we release this, the Agentic AI Foundation's been out for two weeks.

John: Yeah, absolutely. I think the Kubernetes ecosystem and what Dims was telling us about dynamic resource allocation and just enabling GPU workloads to be so much more ergonomic and better because it's very painful right now. That alone I think will just be a huge unlock for infrastructure teams going and building some of these massive thousand plus node clusters with big pools of GPUs.

That software I think goes pretty along the lines with the big data center and actual raw GPU infrastructure buildouts that are happening. So yeah, it's wild and exciting times.

Brian: Yeah. And this is something that going back to full circle, we started talking about Adam and I was DMing you about System Initiative because I'm very excited about the product that they unveiled in the last few months, I think in September of this year and shared their agentic workflow and platform .

Previously System Initiative, like Terramate they're all like to help enable Terraform, Open Tofu.

It feels as if like we're moving into a new ecosystem. Agents are here to stay.

So as we figure out how do we work around these and how we're worth them, it feels like there's a world that we're getting a new primitives for deploying stuff at scale and for maintaining software at large enterprises. So I'll get your take on that but I do want to shift over to your future outlook on 2026.

John: Yeah, I think with System Initiative and just where some of those paradigms are, I'll be very curious to see how SRE teams integrate those or I guess more, to say another way, is how they adopt it or if they adopt it.

Because if there's a group that's probably more cautious or risk averse to adopting this kind of technology, it would be SREs whose whole job is basically to not get paged, like I did today at 4:00am you know, they want the determinism of like I can look at a dashboard, understand those graphs, understand like how some of these things go.

But a lot of this is becoming the mental mapping of like what an LLM can go and understand. I was playing around today with the the Playwright MCP server for Microsoft. It's very, very good and it's so good because it kind of shifts that paradigm mindset, that mapping of like me looking at a browser or looking at the dev tools and instead the MCP server can show those things in text to the agent so that it can go and click and play around with things.

And I could see what it's doing as it's clicking around and then building my test suite for me instead of me being like, "no, I see the button, the button is labeled this. Please click the button so that you can do the test."

That mental mapping is different for an agent. And I think we'll see tooling that continues to conform to that mental mapping for agents, which really is like text language, essentially.

Brian: Yeah. I want to shift gears. So we're not going to do any reads today, but we're going to do some predictions.

John: Brian, are you ready to predict?

Brian: Oh, I am, John. Yeah. So I want to look forward to 2026 because this will help me think through like what guests are we bringing on? Where do we think the industry is going, especially in open source? And I think AI has always had a strong grapple on a lot of our conversations as well.

John: Yeah.

Brian: So I'll go first. I'll share my prediction for 2026, which I'm calling this The Rise of The Solo Contributor. And I want to be very distinct about this. There are definitely maintainers, but I wanted to be distinctive that these are contributors as well.

We're starting to see folks who are building ideas, building projects, and not waiting for validation or adoption or consensus. They're just building and they're throwing stuff out there. The speed-to-tech-debt is at an all time high, but also the speed-to-validation is even higher.

So if you have an idea and you're like, "hey, I want to--" Actually I was talking to Jason Lengstorf about this and Jason would be a good guest for this cause he's seeing a lot of this in what he's doing with his YouTube channel, which is, in hackathons, everyone has ambition but to get to presentation form, you'd probably have like 15 to 20% success rate.

And when I was talking to MLH just a couple weeks ago in New York during the AI Engineering Code Summit, they're seeing 90% success rate in hackathons. So when you think about open source it's like almost the same spin where folks have ideas, they do a thing. They might not have all the sort of wherewithal of how to maintain a project or structure it, but they can get something into someone's hands pretty quickly.

So we're going to see, one, s lot of open source disruption. So you asked about the LLMs. We're going to see a bunch of LLMs be successful for short spurts, but it's going to fuel the testing of all this open source infrastructure and projects around, specifically I think as of today--

I think we saw Tanner, who, we need to have Tanner Linsley on this podcast and we should chat with him. He just shipped an AI SDK that's very competitive to Vercel and it's called Tanstack. And Tanner is very popular. He lives off sponsorship. So he's been fueling the sort of offshoot of the React community which a bunch of tools everyone loves and enjoys.

At OpenSauced, we use the Tanstack table pretty heavily and we'll talk to that maintainer on a regular basis. And so we're now seeing AI SDKs and infrastructure now being not necessarily disrupted yet but providing a pathway into like if you want to have no vendor lock in, you have an opportunity to leverage some tools that people are building.

So I also expect a large amount of consolidation from this open source drive. So what we're seeing right now, in the last week we saw Bun get acquired by Anthropic. Which is absolutely amazing. Like a great, great get for Anthropic. Bun is powering a bunch of next wave, front end ecosystem and technology.

So the opportunity to go build something with no corporate overlords and get stuff out to market and get some adoption, I think we'll see a lot more of that.

And then more than likely big cloud, large foundational models will basically pick and choose for talent purposely. So we might not see all these projects live on but, but we'll see some of these folks get absorbed into some of the bigger clouds and bigger models to help hopefully fuel ourselves away from a recession. I'm not sure. That's a whole other prediction but I'll stop there.

John: Yeah, I honestly couldn't agree more. I look at it almost more from like a operational standpoint, again maybe I'm putting on like the SRE hat here where some of these things just get so much easier to operate and I think teams will feel more enabled to go and adopt technologies and things that maybe five or 10 years ago would have just been like huge lifts, but now are maybe easier to get started with or ramp up onto or understand error states that things might get into, just because the unlock is LLMs, essentially on that.

May be scary as well. Like maybe you could spin that and think like "oh this is the year that tech debt just completely balloons and exponentially explodes." And that's one of my predictions which really aligns with what you're saying about you know, people starting to build more and more software.

I think this is the year of just more exponential software being created. There's just going to be so much more software in the world, like full stop. There's just going to be so much software out there and we're going to need people to maintain it. That is not something we can get around.

Like unfortunately these things aren't to the point yet that they can full stop be autonomous, but with more software, there's going to be the need for more human operators, be it people who are enabled, who maybe aren't in the traditional software path of things.

But one of the things that I think validates this for me is MinIO recently is kind of abandoning their open source efforts and instead is making really the full lean into the business. One of the alternatives people are talking about is RustFS, which is a S3 compatible file system that you can deploy and run distributed like MinIO or I guess S3, really.

It's so interesting because that's part of that consolidation that you're talking about is it's like really a lot of time, energy and effort to maintain and run an open source project, especially a huge one like MinIO. So maybe we'll just see people be like, "well, oh that's okay, we can adopt RustFS because we can go and understand it faster than maybe we would if we needed like an army of people to go and get involved in the community and understand how some of this stuff worked."

Like maybe it's more akin to like smaller teams being able to adopt these things.

Brian: Yeah.

John: Does that make sense?

Brian: Yeah, it does make sense. And you were talking about this like this overrun with software. I moved to San Francisco 10 years ago and it was at a point where you couldn't find a place to live because like the rents were so high. The demand for being in San Francisco was pretty much at its peak at that moment, like 2015, 2016.

And then we saw the pandemic where it was almost as if it was a ghost town. So you had all this infrastructure but no one using it, particularly in housing. But we're seeing an uptick where I think this is probably, if you look historically, cities work in this fashion. Like everyone's moving to cities, everyone's moving out of cities. You're having new waves of folks embedding themselves in the community.

And I think it was the same with software. I think we also had like two years ago, it felt like AI was taking over in infrastructure and like dev tools didn't really have a time in the sunlight. And it's completely shifted 180 in the last year where you're seeing a lot more dev tools and infrastructure, taking on funding and scaling up teams .

Like one of my favorite companies I've been a part of is Supabase. And Supabase is now just had a big Talk at AWS re:Invent and they announced a big round and open source first opportunity that now is giving a lot of light to some of these serverless database experiences, in a way that we saw with Mongo like what 15 or 12 years ago.

I'm bullish. I'm bullish on all this stuff. Obviously we have a looming recession that could definitely hit, but we also have a lot of energy and interest in fueling the next wave of this technology.

So I think there will always be a need for us to rebuild and restack our stuff that's going to now be AI friendly. I think thanks to the advent of what Dims is doing at the Kubernetes side and that we have a lot of these AI projects in CNCF now graduating, there's going to be a lot more efforts to start deploying and get sustainability within infrastructure of AI.

There's a large social network that was-- I talked at CNCF at one of the side chats and they were, back in Kubecon, London, they were very shy to deploy AI to production because they didn't have the same tooling you would have for CPUs as you do with GPUs.

And we're seeing now a company I'm very excited about called ZML, which I'd love to have on the podcast, is doing auto scaling of GPUs. We had Run:AI who's also got acquired by Nvidia that's also leading efforts to acquire more open source projects to help solve this specific problem.

So I know VCs are happy investing in AI. I know folks are happy to now contribute and distribute AI. So now we could sort of sit back and just be proud developers that can have cool tooling.

John: Yeah, that's very true. Speaking of tooling, I think one of my predictions is on code. We were talking about this a little bit earlier, but I'm calling this the AI Code Flywheel. I think that we could see, maybe this is a little bit more out there--

I think we could see agentic flows and the way that people are deploying these workflows or these autonomous agents or whatever you want to call them, align much more closely with raw code.

One of the things I would point to is that same blog post Anthropic was talking about Tool Search Tool, they also introduced this capability of Claude Skills, I think is what they're calling it or Code Skills or something like that. And it's basically a way for cloud code to just run some Python.

This isn't really like a new novel idea. Like I think you could do this way back with basically tool calls in LangChain way back in the day and a lot of these older frameworks. But I wouldn't be surprised if it got to the point where these agents can write their own skills, in code, which they're already very good at, and then they can sort of provide those as capabilities to themselves to then just spin that flywheel on like, "well I need to make this thing a little bit better because I need to do this other thing. Oh, it'll work on that little code piece."

And then it'll have that as a skill back to itself and then maybe it'll even grain more of those capabilities as these things just get better and better and better. I think that's kind of the dream that a lot of AI companies, ML engineers, AI researchers want, is that flywheel of like continual self improvement.

I wouldn't be surprised if we saw these things getting more and more aligned with just code because they're very good at writing code.

Brian: Yeah, yeah. Andrej Karpathy gave a talk around the self healing infrastructure and I think I've already seen Sentry and Vercel with billboards or like blog posts around this and I think this is a thing that I think will, the dream will come true and become like alive.

And again there's a bunch of tools in the space that I think are going to be attacking this. The one thing I've learned, especially because at Continue, we're doing like the continuous AI thing and that's kind of built on the self helping infrastructure thing. What we found is the natural language is one of the best ways to--

Like for folks who are zero to tech debt and like maybe solo contributors, in natural language you could say, "hey, I don't want this thing to go down," or "It went down. Let's think through how to fix this."

As long as the sort of endpoints and the connections like with MCP, ACP, whatever, as long as those are there, the natural language can get you there. That's what the biggest unlock is, is what I've discovered, especially doing a bunch of workshops around AI coding for folks in all different walks of life, is like you're as good as the knowledge you have in your brain.

So if you read a book about designing data intensive infrastructure, you'll know what to prompt because you read the book. That's the kind of shift when I think about this AI coding in like these entry points is like, if you have the concept and a context of like, "what are the boxes you're trying to derive and where you're trying to put the boxes," you'll have a much better job than like, "build me full stack app, no bugs."

John: Right, Right.

Brian: Like, that's never gonna work.

John: And it's like early on with ChatGPT, people were like, "Generate an idea that'll make me $1 billion. Go."

Brian: Yeah.

John: And obviously it's not going to do that because, yeah, it's the mass sum of all human knowledge at this point and it's a bit too broad, I guess.

Brian: Excellent. Well, I'm going to wind down the conversation. I appreciate you, John, for taking some time at the end of the year. Also, thank you for being the co-host with me these last 12 months.

It has been a ride, but honestly it's been a pretty smooth ride. I'm looking forward to 2026.

John: Yeah, looking forward to 2026. I'm ready to stay ready, Brian.

Brian: Yeah. Stay ready, listeners.