
Ep. #25, Death of the Web Browser with Rachel-Lee Nabors
In episode 25 of Open Source Ready, Brian and John sit down with Rachel-Lee Nabors. They explore how AI agents are reshaping the web, from the decline of traditional browsers to the rise of agentic experiences powered by small language models and MCPs. Rachel-Lee explains why advertising models are collapsing and why the next web may depend on direct payments and open source innovation.
Rachel-Lee Nabors is a developer, speaker, and writer exploring the intersection of the web and artificial intelligence. They’ve worked on the React Core Team at Meta and on AWS Amplify, and are known for their talks and writing on web animation, AI browsers, and the “agentic web.” Based in London, Rachel-Lee shares their experiments and ideas at nearestnabors.com.
- Rachel-Lee Nabors on LinkedIn
- nearestnabors.com (Rachel-Lee Nabors’ personal site)
- agenticweb.nearestnabors.com (Rachel-Lee’s Substack on the Agentic Web)
- Death of the Browser (CascadiaJS talk by Rachel-Lee Nabors)
- web.dev
- Perplexity.ai
- Claude (Anthropic)
- ChatGPT (OpenAI)
- Goose
- Gauge
- Coinbase x402
- OpenEdison (Edison-Watch) GitHub
- AWS Amplify
- React.dev
- TinyFish
- The opensource code that powers Claude's computer… (Blog post by Rachel-Lee Nabors)
Transcript
Brian Douglas: Welcome to another installment of Open Source Ready. John is on the line. How are you doing?
John McBride: I'm doing well. Brian, I wanted to congratulate you, and us, on the one-year anniversary of the Open Source Ready podcast. We did it.
Brian: We did, yeah. Who would have thought? I think I randomly DM'ed you and was like "hey, do you want to be a guest on a podcast?"
Actually this was still during the OpenSauced days so I was like, "hey, we should do a podcast together."
And I was glad you said yes because I didn't have any backups for co-host.
John: It's good to be the first and only pick on the team.
Brian: Excellent. Speaking of picks, we actually picked up a cool guest. Rachel-Lee is calling in all the way from across the pond. I want to welcome you, Rachel-Lee, and would love you to introduce yourself and tell us what you've been up to.
Rachel-Lee Nabors: Oh, hey Brian. So good to be here. Been up to crossing paths with you a lot ever since moving to London.
Last time I was here I was working on the React Core team on react.dev, and after the pandemic I went back to the United States briefly to do some things with AWS and the Amplify team. But then I just couldn't stay away from London and ended up coming back on a Global Talent visa.
And now I'm really getting into the intersection of web and AI. I have a lot of crackpot theories about what's going to happen next for the web. Some people love them, some people don't love them, some people think maybe no, their startup's going to do it better or differently.
But I love peeking around the corners and I've been experimenting a lot lately with building alternative experiences for the web using MCP and local small language models. SLMs.
Brian: Yeah, so I want to get into that, but first I want to take a quick detour into the Global Talent visa. I'm actually really curious what that entails, and what did it take for you to work in London?
Rachel-Lee: My gosh, that is a great question. And I'm surprised I don't get asked it more often. But I suppose these days with the way things are in the USA, people are wondering, "if I didn't want to live here, what are my options?"
The first thing you need to know about the UK is that they're very gender-aware here. One of the shortcomings. So right now, if you're trans, or you love someone who's trans, or you're non-binary like me, this is a hotspot. There are a lot of conversations happening, and there have been some recent repeals of rights, so it's not perfect everywhere.
The NHS still works though, so they won't let you die of cancer. Which I consider a real life hack for the rest of my life. It's like yes, unlimited health points hack. I can always come back to the shop and buy that 200 HP hyper potion.
So that aside, the other thing to know is they're actually super autism aware in the UK as far as kids and adults go. So they say that different societies evolve at different rates and in this way, the UK is ahead of the US on some points, but behind in others. Now how do you get a Global Talent visa?
I always feel like I have to put this little disclaimer there because we have this tendency in the United States to either hero worship or demonize other countries. And it's more complex than that. I've lived in Amsterdam as well, so I've got opinions. To get the Global Talent visa, one must display that you are a notable talent, either an up and coming, which is under a certain age or an established talent.
And I can only describe the process of applying for one as making spreadsheets of spreadsheets, explaining, "I keynoted at this conference, I published this article." If you have patents, if you've written books. And most of this has to be before a certain period of time. You can't be referencing stuff you did 20 years ago. It has to be within the last seven.
But if you had hundreds of thousands of views on YouTube that can count in your favor. So the criteria is pretty broad. But I must point out, read the criteria now, because there might be things that you want to do with your career that you would do differently after seeing the criteria.
For instance, mentoring. Mentoring counts for a lot if you're part of a mentorship program that is not in your company. So if you're like, "yeah, I mentored everybody at Microsoft." No, that doesn't count. You have to go be part of, for instance, Women Who Code or a local university's mentorship program. It's a technicality, but it's important.
And the other great thing to do is consider getting yourself a visa lawyer. It costs extra, but they will basically show you where to dot the i's and cross the t's to make sure that your proposal just sails right through.
I know a lot of people don't make it in the first time, and they're not using a lawyer. The second time they tend to get it, either with a lawyer or with a lot of feedback from friends who've done it. So if you're thinking about it, get you a lawyer. Don't let the first rejection get you down, and do consult your friends.
Brian: Okay, that's great. I honestly have no context of trying to live outside or abroad, but I'm super intrigued.
So I know you from your web comics, I know you from the React Core team as well, and we've been on podcasts before in the past. But you've recently been talking about the agentic web, and I'd love for you to set the stage for this conversation about, what is this? But also, before I hit record, I did mention your keynote at CascadiaJS and I would love to catch the listeners up to that because I think it's definitely worth a listen and to watch that video for sure.
Rachel-Lee: Yeah. So if you haven't gone and watched the video on YouTube, the CascadiaJS edition of Death of the Browser, you can. It's officially its last incarnation. I've given variations on this talk and it's evolved because the area is moving fast. I think this is version 2.02, so it's the last one. Go ahead and watch it, if you want.
Basically, what I was getting at is this: with the way MCPs and agents are evolving (personal agents, that is to say AI browsers and things like Claude and ChatGPT), plus the user trends we're seeing out there, for instance the dip in Google searches on Safari that came out in the FTC hearings, I think it was in May, and overall the fact that when people are using agents, they stop using the web and just rely on the agent to go out and aggregate things for them, we can anticipate that this trend continues.
Unless it has some sudden reason for stopping, like the technology being made illegal, or turning out to be a horrible vector for malware and disease. It looks like those problems will be solved, so that's not likely either. Then--
The way we use the web is going to change and we're probably not going to use it through something we call a browser. My prediction, which makes people uncomfortable, is that in five years we won't really be building websites anymore. We'll be building feeds and endpoints, for better or for worse.
Brian: Yeah. John, I always love your takes because you're always forward thinking but also sometimes you can be pessimistic as well. Obviously no offense on that one.
John: Me? No.
Brian: But yeah. I mean, a year ago it seemed like everybody had an AI app or an AI-funded startup that was going to be the AI for lawyers or the AI for real estate. And we've seen the pendulum swing completely back the other way towards developer tools. And this is where the current investment's going now, like, "okay, we need the infrastructure to be set up."
But then you have folks like Perplexity, who are building AI search for the web, and everyone's got a new AI web browser. Which is also the counterpoint, because I feel like with OpenAI having a web browser, are we going to go back to web browsers? Are we just going to talk to them, or move or wink our eyes at them?
John: Yeah, I do think about this a lot actually. The first thing I thought of, Rachel-Lee, when you were mentioning that was my therapist. Shout out therapy.
Rachel-Lee: Woo. Therapy.
John: I know, right? But sometimes when I complain to him about like, "yeah, the world of tech, it's like eating itself, man. Like it's crazy."
He'll kind of ask me to slow down and think about how things outside of tech still move very slowly. And I think it's a good state of mind to put myself back into when some of these things come up.
So, Rachel-Lee, I'm curious to ask you then, where do some of these slow businesses, from your eyes, land in the broader agentic web? Like Brian, you mentioned a lot of dev tools are going agentic. There's obviously the web browser space, a lot of web things like protocols, MCP. I recently discovered a new one that Coinbase is doing, x402, to get agents to pay a little bit of money to go and do a thing on the web.
But there's all these slow businesses that have not yet adopted that. Where does this leave all of them, like the rest of the world that's not tech?
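(For listeners who haven't seen it: x402 revives the long-reserved HTTP 402 Payment Required status, so a server can quote a price and an agent can pay and retry. The sketch below is a toy, self-contained simulation of that loop; the dict shapes and the string "receipt" are illustrative stand-ins, not the actual protocol, which carries signed stablecoin payments in an HTTP header.)

```python
# Toy sketch of an x402-style pay-per-request exchange.
# Payload shapes here are illustrative, not the real spec.

def fetch(url, payment=None):
    """Stand-in for an HTTP server that charges for content."""
    if payment is None:
        # 402 Payment Required: quote the price the agent must pay
        return 402, {"price": "0.001", "currency": "USDC"}
    return 200, {"content": "the page the agent wanted"}

def agent_fetch(url, max_price=0.01):
    """Agent loop: request, pay if asked and affordable, then retry."""
    status, body = fetch(url)
    if status == 402:
        if float(body["price"]) > max_price:
            raise RuntimeError("quoted price exceeds the agent's budget")
        receipt = f"paid {body['price']} {body['currency']}"  # fake receipt
        status, body = fetch(url, payment=receipt)
    return status, body

status, body = agent_fetch("https://example.com/article")
print(status, body["content"])
```

The interesting design point is the budget check: the agent, not the human, decides in the moment whether a piece of content is worth the quoted price.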
Rachel-Lee: So I love the way you're coming at this. And one of the reasons why I like living in London is that, don't get me wrong, every time I go to San Francisco, I have incredibly intense FOMO. I worked there for nine months working on a web agent for TinyFish. And they got their fancy funding, et cetera, and they stopped building a dev tool. So I moved on.
But the point was, every time I was there in Palo Alto, I felt like I was in the eye of the storm. But then I would come back to London and I'd go to an AI meetup and you'd have the machine learning people sharing really cool cutting edge science and you'd wonder, "well, where's that going to go?"
And then you'd have people just being like, "hmm, how can I shave some time off of these support queues for a non-technical product?"
I think engineers are seeing most of the benefit because it's really well set up for processing code. The rest of the world does not process languages as much as we do. We're processing computer languages. The rest of the world is not communally writing volumes of text that run things. They do not do that.
That is not a use case for the rest of the world and it's hard for product managers. This is a trend I'm seeing out in the field. It's hard for product managers to find like where the real gains of this technology are. For us engineers it's like, "oh, obviously this is going to reduce the need for QA so hard. You know, you're going to be like a real QA engineer building these cycles."
Of course, yeah. But tell me, how does that help the Times report better, more often, and deal with shrinking revenue flow? It doesn't. It doesn't really, because you can't generate very good news headlines from pointing an MCP at the universe. You still need to come through humans at some point.
I think I forgot what the original thought was there. I just wanted to say I love coming and going from a place that's not an epicenter for tech, because I can go get hyped and then I can come into the real world and be like, well, that's not going to work.
John: Yeah.
Rachel-Lee: Obviously everyone's hyped about that technology because a bunch of investors just dumped $20 million into it and they're giving away free champagne. Who wouldn't be hyped? But then you come to London and there's no free champagne here because engineers aren't top dog in this city. You know, financial bros are.
John: Yeah, it's so funny. I actually have an opposite experience where I go to San Francisco and I feel like I'm going insane.
Rachel-Lee: How are you going insane when you're there? What's driving you nuts?
John: It feels like this giant echo chamber where nobody knows what they're talking about, and yet they all have this crazy shared language that doesn't make any sense outside of that sphere of Silicon Valley.
And I understand that that's how Silicon Valley has been for ages, where crypto was a whole thing and everybody was talking about crypto and doing crypto and building crypto and, you know, the Dot Com Bubble was probably the same way. Now it feels like that with AI and agents.
And I'm in this space and I'm building in this and building MCPs and all that, and I still feel like I'm going crazy whenever I'm out there.
Rachel-Lee: I think that's a good sign.
Brian: True.
Rachel-Lee: Oftentimes I feel like, "am I the only one seeing this?" When I'll look at a couple of broken tools or pieces that I found lying on the floor outside one of these SF meetups, I'll be like, "am I the only one who sees that? You, like, if you put these together, this will happen? No, just me. Okay. All right. I'll take that knowledge and do something with it."
I think a lot gets lost. And you're right, it becomes a bit of a clique and an in-speak area. The trick is, when you're there, to try to look for the people who really are seeing how the pieces connect and seeing what's coming two years away and aren't just there for the champagne, so to speak.
Brian: Yeah, there's a lot of hype. I specifically joined the company I work at today, Continue, because I wanted to work in the AI space. I wanted to catch up and find out how the world is changing and moving. And I live in Oakland, so I don't actually live in San Francisco within a bubble. And there's a very different distinction just across the bridge.
But I say that because I also saw Kimi K2 Instruct, their model, launch today. Meanwhile, most of my friends and most of the parents at my kid's school, and ironically one of the parents works at OpenAI, but outside of that one guy, everyone else is like, "I don't know about these models. I don't care about what research is coming out of Berkeley," which is just a mile north of my kid's school, where I live.
Like there's the bubble but then there's also reality. And I think when you can kind of look through the lens of reality where I love talking to my dad and my brothers about how they're interacting with AI because they do have a very different take on how they're engaging with these tools.
And they don't know about MCP the way that we would know about MCP, but they know they have extensions or plugins or things you get. You can connect your Google Drive and it can do stuff for you.
Rachel-Lee: Exactly. I once had an opportunity to work at a generative media company, and I was really tempted. But then I went to an anime convention. I used to make award-winning cartoons and comics, and I'd go to comic conventions and sell my books.
And I still like to go and visit Artist Alley and see what the kids are creating these days. One of the things I did to stress test whether or not to take that role was to ask the artists, "how do you feel about AI and generative media, et cetera?" And I was surprised at the AI literacy of the artists: being frustrated that they're being accused of using AI, how their work is being threatened with LoRAs, and how they're trying to poison pill their own artwork.
It's like, "wow, this is like life or death for young folks expressing themselves." And that helped me make some decisions about what I wanted to do with my career, and I decided if there wasn't a clear line to advocate for human creativity, then maybe it wasn't the right opportunity for me.
But what I mean is--
Getting outside the bubble and just talking with people who aren't investors, in some capacity or another, with their time or with their money, I think can help you see the real value.
Another thing, it's great to talk with people who are actually scientists. In Germany, I have a friend who's a PhD and PhDs are rather common in Europe because it's affordable to get that kind of education.
So you can have really in-depth conversations about the technology and where it's going to be in two years with someone who's just running a little agency and trying to put their kid in a better place in life in a way that you couldn't really have that conversation, I think, in the thick of it.
So it's a different conversation over here. I love talking with academic folks, but it's nice to not have to swim through venture capital and Anthropic and NDAs just to have that conversation.
Brian: So this is a question I have. John, I just looked up this Coinbase x402, which is like the Brave browser again. People are always trying to get stablecoin payments to interact and validate things on chain and blah, blah, blah.
Rachel-Lee: It's like Fetch.
John: Yeah, exactly.
Brian: Still trying to make it happen. "Get to the car, losers. It's Web3 again." But what I'm getting at is, so let's take a zoom out and go back to first principles. I write a blog post. My blog post has been generated with AI. Well, the majority of it has. I might go fine tune and update it to my voice.
But are we moving to a world where we can't trust written content anymore? Or do we trust written content but we just know that everyone now has their own LLM that's going to digest this anyway, so we just want the highlights?
Rachel-Lee: Well, a couple of questions there. One, define trust. And number two, I don't actually read your blog.
One of the things that LLMs are great at doing is linking some thoughts together. And no insult to you, Brian, but if you didn't take time to write it, why should I take time to read it? I can usually tell when someone's put something through an LLM to do the core generation, because it's like reading a comic written by a teenager.
Brian: Yeah.
Rachel-Lee: You know, you go in and you're halfway through and then suddenly the premise shifts. "Wait, wait, wait. Are you retconning the previous half of your arc? I guess they are."
So it's kind of like a vibe. I don't know how to describe it. Even with my newsletter, I hand write most of it. I might rubber duck the premise and the outline a little bit for my talks and my newsletter with Claude just to make it stronger.
But at the end of the day, I'm the one doing all the legwork. And I think that shows. And we have evidence that LLMs like Perplexity, they prefer human generated content, not because they're filtering out LLM generated content. They'll totally take it if an LLM wrote it.
But because the human generated content has that better coherence of value proposition, income and outcome, it still can't be beat. So it's interesting. Like I said, all valuable content still originates with humans at some point or another, otherwise it's just a remix.
Brian: Yeah. So I'm giving a talk in Brooklyn for the AI Native Summit. And my abstract I wrote organically by hand because I'm pretty passionate about the topic I'm gonna talk about, but I felt like I had to be snarkier to prove that I was a human writing my abstract.
So there's a lot of B Dougie-isms that, I don't know if I'd call it sarcasm, but basically there's quirks in how I speak and interact in the world that I wanted to lay on pretty thick. Needless to say, I am giving this talk. I was accepted. So I will be giving this talk in a couple of weeks.
Rachel-Lee: Well done.
Brian: But in the same vein, I'm like, do I optimize for the LLM to discover my blog posts, or do I optimize to convince Rachel-Lee to quote-tweet this thing or put this in a Bluesky post? And honestly, I'm conflicted in both different directions, where I don't know who's going to win out on this. Like, are the humans going to raise hell and basically fight against the LLMs?
Rachel-Lee: Well, I would argue actually that Rachel-Lee and the LLM like the same thing.
Seriously, I've had people schedule meetings with me because of the content that I put on my site getting picked up by Perplexity. The click through rates are low, but the conversion rates are higher than ads.
John: Yeah.
Rachel-Lee: So you might have like 75,000 people seeing that you're referenced, but that one that clicks through schedules time with you. It's tough if you're just generating content to try to generate leads. Like that doesn't work the way it used to.
You can't just, you know, post a bajillion blog posts of mediocre content and improve your domain authority. It doesn't quite work that way anymore. We could get into the algorithms of search. It's not exactly my specialty, but anyway, the point being I've had really good results.
Like if I put that time and effort into creating something that's new, some thought that's new that only I've had and I put it out there. Oh my God. The LLMs are like, "Yum, yum, yum, new ideas. This is awesome."
There are not a lot of people working in the AI browser space, but most of them find their way to me because their personal agent was like this is the person you need to talk to.
Brian: Yeah.
Rachel-Lee: And it's always super impressed by my work that I've done with browsers in the past and it should be, but it's pretty cool.
Brian: Yeah. I mean this is fascinating. How much of the world of GEO have you dabbled into?
Rachel-Lee: A bit.
Brian: Yeah. Would you care to?
Rachel-Lee: I hate GEO.
Brian: Oh you do. Explain like I would love to.
Rachel-Lee: The title is stupid, Generative Engine Optimization.
John: I've heard AEO: AI Engine Optimization.
Rachel-Lee: E-I-E-I-O. It sounds like a BBNO$ song. I think AI optimization is probably the closest we're going to get. I actually have been writing about this for web.dev. You'll see the articles probably in a month or two, about ways to attract and accommodate agentic browsers and LLMs crawling for agentic search results, as well as ways to punish people for not respecting your robots.txt when you tell them not to crawl.
Which, by the way, pain is a teacher. If the punishment does not meet or exceed the benefit, people keep doing it. So there are people who don't want to be crawled and don't want to be ingested. And there are people who do, like yourself, B Dougie and John.
Maybe you want people to find you as well and for your art and thoughts to become part of the canon of OpenAI history. So yeah, I do keep up with that. What about you guys? Do you have thoughts on GEO and how it works?
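(Rachel-Lee's robots.txt point can be made concrete. The user-agent tokens below are the documented ones for several major AI crawlers; whether a given crawler honors them is, as she notes, exactly where enforcement comes in. A sketch of a robots.txt that opts out of AI training crawls while staying open to everything else:)

```text
# robots.txt: opt out of AI crawling/training, stay open otherwise
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: PerplexityBot
Disallow: /

User-agent: *
Allow: /
```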
Brian: So I do have thoughts, because I've been tapping into it for the work I do at Continue. And I'm a bit conflicted, because I know exactly all the steps you should do. So here's some free stuff for anybody who's listening: a resources page. The resources page is not your blog or your docs; it's an LLM-friendly place to land that your sitemap will basically point to.
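(A resources page like the one Brian describes is close in spirit to the emerging llms.txt convention: a plain markdown file at the site root giving crawlers a one-line summary and a curated list of links. A minimal sketch; every name and path here is a placeholder, not Continue's actual page:)

```markdown
# Example Product

> One-sentence summary an LLM can quote verbatim.

## Resources

- [Docs](https://example.com/docs): how the product works
- [Comparisons](https://example.com/compare): honest X-versus-Y pages
- [Blog](https://example.com/blog): longer-form, human-written posts
```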
And the way people are using it today is mostly just generative, medium-quality content. And that's the part where I'm sort of conflicted, because I don't want to put garbage out there. I don't mind generating stuff to get past writer's block.
But I created a "Continue versus Goose" article and that was only because I was doing the test. And Continue and Goose are not comparative, but Goose has a lot of click throughs when it comes to OpenAI or ChatGPT.
So it's like, "Continue is a dev tool, it's focused on dev. Goose is an MCP platform to give yourself an open client for LLM-friendly conversations." And so I wrote the article, which focuses on that, and I generated it, filling in all the blanks. And it does get conversions for us.
Rachel-Lee: Nice.
Brian: But I'm like, I don't know if I want to keep doing that because it's not really presenting the product in a way that I care to present it because I'd rather go compare ourselves to like an OpenCode or something else that's like more on the nose.
But I also don't want OpenCode to reach out to me. Like, "hey, why'd you write this article that's like all AI slop and generative." Like, "ah, sorry about that. Let me delete it."
So I haven't done that part, but I did the test and it is working.
Rachel-Lee: It sounds like you've invested in a PSEO strategy, programmatic search engine optimization, where you're just using the LLM to fill in the gaps for all the content that will catch people doing searches on X versus Y, which is a popular thing.
Brian: Yep.
Rachel-Lee: You know that works with traditional search engines. I'm curious, when you say it's working, do you mean it's working with search engines? Are you actually tracking how it's showing up in people's agents?
Brian: So we are tracking using a tool called Gauge. So with gauge.com, it's a YC company, they've been around for about a year, maybe a little over a year. But what I'm getting at is like they actually have the platform to do the tracking to show you how often you show up in Gemini or ChatGPT or Claude. Super useful.
But also I feel like it's almost as if, similar to the bubble thing when we were talking about San Francisco, it can create a bubble for you where you're just like hyper fixated and focused on, "I am showing up in these situations."
But what we found about 30 days ago is that OpenAI devalued Reddit within their platform, and I think actually only for about 30 days. So basically from October 2nd to October 28th, you'll see a huge hit to all numbers. And it's 100% affected by OpenAI's changes, which I think were pretty public. People are writing about that on Hacker News.
Rachel-Lee: Yeah, this is kind of frustrating when people are changing the algorithm on you left and right. And that is sort of across the board. Claude twiddles with theirs, Perplexity twiddles with theirs, ChatGPT. They're all twiddling. It was nice when we only had to optimize for Google.
Brian: But can we trust them then if that's the case?
Rachel-Lee: Can we trust them?
John: Yeah. This is my main criticism: SEO was complicated enough when Google had this, you know, very opaque box for the way things propagated into the search engine.
Now it's like a thousand times more complicated, where there are these neural networks with these peaks and valleys that you can kind of try to optimize for. And we've had some success in Zuplo's business trying to, you know, optimize some of the MCP offerings we have, because it's very new, so it pops up and propagates into AI agentic search results.
But I think really what I personally want, and I don't think this really aligns with the broader tech industry or business or how normal people use the web, but I really just want like a bibliography of the web where I can just have like a thousand different ways I can optimize my search to then go and find a specific thing.
Like if I want a resource written from a blog about some GoLang thing since 2025, there's probably like four things on the Internet that actually are related to that. But then I get like a hundred results for just like some stuff that somebody was tweeting about and it's like very useless to me.
Rachel-Lee: Man, I've had such good results with Perplexity. Like when I'm writing a talk, I'll be like, "I've forgotten, it was in an article. It was from five years ago. It was written by a lady and I don't remember much about it except that it proved this." And I'm like, "Perplexity, fetch."
And Perplexity will usually find the damn thing for me. I love that. It used to be like, if it was outside my memory, it was gone. Perplexity has been like the Axiom memory stash of my life. And that has actually supercharged my speaking, because it used to be I was limited by how much I could pull out of my garbage-dump brain, which was impressive, but it has its limits.
And now I'm just like, yeah, I remember if there was a tweet about someone who prompt injected Atlas the day it was released. Where's that tweet? Get it for me. Go. Go Perplexity. Go find it and it will find it.
Brian: The tweet searching is actually, I think, pretty baller for Perplexity. And that's probably the only thing I use it for: fine-tuning a search for a thing I know existed. Like, preparing talks, I know there's a metric that would enhance this. And if I use the foundational models, like ChatGPT and Claude, they definitely still hallucinate a ton of things.
And when I try to call them on it, it is like, oh, well, never mind, let me remove that metric. But Perplexity will at least give me a link to the thing that I can go read the article and go deduce, like, what is a number I can put on this slide?
John: Maybe I'm putting on the pessimist hat again, but I am very curious to get the take on how this relates to the broader economics of the web, especially as it relates to advertising and surfacing some preferred content versus another. I'll give an example where this makes me, like, really, really nervous.
There's a service called Opennote, which is kind of like the ChatGPT for nurse practitioners, physicians, PAs, and really it's designed to do all this stuff we're talking about with like the agentic web, but for like medical journals. And your doctor is going to be using this into the future to look up, like, I don't know, "does this person have the flu right now" or something and get some latest research on that.
And what makes me so scared about that prospect is it's very powerful. Yes. And it can consume and probably serve to you more medical journal content than you'd ever, as a single physician, be able to like go and actually read.
But what happens when some pharmaceutical company gets their investors' grubby mitts in there and starts saying, "no, we need to pump up some of the stuff from this study to, you know, say, 'go use this drug, physicians, for this thing,' and we'll make beaucoup bucks"?
This is a thing that I think has happened, and it's very well documented. One of the scariest examples: for years the American Heart Association had recommendations that have basically been debunked at this point. A lot of them were around alcohol, like the one-glass-of-red-wine-a-day thing. It's been made known that that was big alcohol getting the American Heart Association to do a thing.
Rachel-Lee: And that low-fat food is healthy for you.
John: Stuff like that. Exactly. So like that's a very niche case in like all this. But my question there is, how does the broader web of agents in the future kind of work in that economic way when it's like gotta make money somewhere, right?
Rachel-Lee: Well--
The nice thing is that agents completely break the advertising model. It is broken. It is broken right now and it's hemorrhaging. And as agentic use increases, the advertising model gets worse. If I were a platform owner that depended on advertising right now, the Alphabets and the Metas of the world, I'd be pooping my pants.
And of course I do not have access to those inner circles. I do do some consulting with Google but it's always a one way flow of information. You know, me talking and not getting much in return or writing articles about using new APIs, working with the team that builds standards.
But I will say that the advertising model is borked. To the 402 conversation, I think RSL is this kind of last-ditch heroic effort where a bunch of publishers have banded together to create a protocol that says, "yes, you can crawl our stuff in return for paying X amount." And they're working with Fastly to create an enforcement method for that.
Which I love. I love it when people just don't wait for adoption and they just build the enforcement mechanism like right on, do that. Don't wait to be asked to sit at the table of people who are writing the rules. Write your own damn rules.
You know how valuable your content is. Lock them out. So I feel you on that.
People have been gaming these systems even before AI. That's why I asked you to define trustworthy. Because it turns out quite a bit of what we see isn't trustworthy.
In Oakland, for instance, there's a Black Panther museum. And I didn't get to go to the museum but I went to the Museum of California and I got to hear part of the story there. It was very different from the story I learned growing up on the East Coast. Very different.
And how can I trust what people say in the media when this is a very important part of American history that nobody remembers? They don't even teach that there was an internment of Japanese American citizens during World War II in most high schools when they're covering World War II.
If you read the webtoon George Takei wrote on the Webtoon app, it is full of comments from teenagers asking, "why did no one ever teach me about this in school?"
So we can't trust anything. And it's about to get very spicy in that regard. I'm sorry, I hate to say, "well, the problem is you could never trust anything to begin with. It's all been rigged."
I think the thing is you have to build systems of trustworthy people that don't provide benefits for being shady. There should be strong punishments for doing things like getting doctors to recommend opioids that people don't need and making lots of money off of a legalized addiction. You should have all of your money taken from you and you should go to jail for that. It should exceed the benefit or else people are gonna still try to game the system.
Brian: Yeah, it's a very somber way to wind down this conversation. But I did want to go back to the original premise of this episode, which is your keynote, the Death of the Web Browser.
So if you could round this up: looking forward from this current stage, what do we see in the next few years, and also 10 to 20 years from now? Web browsers, are they out the door? Or are they evolving into whatever the next thing is?
Rachel-Lee:
I don't think that the future of the web is going to be foundational models and clients like Claude and ChatGPT. I think they're trying, but they're really bad at user experiences and their recommendation engines suck compared to companies like Meta and Alphabet that have made their recommendation engines absolute beasts. That is the competitive edge that's lacking right now, and that's what gives those companies a better chance of building whatever comes next. It's simply because they'll be able to give you what you're looking for.
That also means they have the ability to show you stuff that people paid them to show you. So there's a reason to not trust that. We have a very brief window where it's going to take the bigger companies a minute to figure things out. It's going to take the little companies a minute to go fight a bunch of lawsuits.
Currently, Perplexity is in a lawsuit with Amazon because Amazon is upset that they're ruining the customer experience of shopping on Amazon by automating it with the AI browser. How that goes will pretty much dictate how much you can do with a browser in the future.
Notice they're picking this fight with Perplexity and not with Google, whose lawyers could actually bring some mettle to the fight. So it's like, "yeah, we see what you're doing there, Amazon. Nice."
So all of this is being sorted out. It's a big dust-up. This is the time to come in and do fun things with open source. This is the time to see if you can build an alternative to a browser using something like Goose.
What does it look like if we start serving MCP servers of our content as opposed to just web pages? This is actually something I'm experimenting with: taking my old comics archive and serving it as an MCP server with MCP UI gallery components, plus a paywall so people can cough up some money to read all the back issues. I want to see how realistic that is and what it could look like with a personal agent.
We need to experiment with these things more because in five years there's a good chance that publishing a blog or publishing a video could just be as simple as posting something through a headless UI. And that use case hasn't been defined yet.
We still don't have the monetization portion sorted out, but it will involve direct payments. It's impossible for the web to move forward without direct payments because advertising is just not going to get strapped on in time.
We're going to finally get micropayments because it's do or die for so many outlets right now. I love that. I love this forcing function. So I'm hoping this means a more egalitarian web where individual creators don't have to go through middlemen who take such huge cuts of their work, so you can subscribe to your favorite folks.
This is the future I dream of. We don't get there if we don't start prototyping, experimenting and building things out in the open today. And just remember everything, you open source can't be patented.
Brian: That is very true. You mentioned middlemen, and I actually wanted to get your take real quick on the Atlas browser and prompt injection. I think that was the number one concern for a lot of folks when it shipped: prompt injection is, I guess, free-form now within this browser?
Rachel-Lee: Well, I mean, it was always there, it was always possible. But now that they've gone and released an actual browser, it's sort of like waving a red flag in front of a bull: "hey hackers, there are some high-value people here. All you gotta do is publish a website with prompt injection."
And I was literally just watching the MCP Dev Summit live stream on this. There's a trifecta: access to personal, private content; content that comes from an outside source; and write access. When you have all three of those in a cycle, it needs to be flagged for a human in the loop before execution.
There's already an open source solution for this which I'm really excited about, even though I forgot what it's called right in front of everyone. Let me just go look it up because I think it's important that the audience hear this. You all, carry the conversation while I dig it up.
Brian: Yeah, so I mean that's a good segue for us to ask the question. John, are you ready to read?
John: I am ready to read.
Brian: Cool. So while you're pulling up Rachel-Lee, John, could you share one of the reads you've got?
John: Yeah, I think I'll share the one about Apple, which I think is relevant to this conversation since it kind of goes into some of the usability of the web and Safari and iOS and your phone and these things are so like personally useful. You know, everybody's carrying around their phone all the time.
But the title of the article is What Happened to Apple's Legendary Attention to Detail? And this really resonated with me because I was force-upgraded to iOS 26. I was trying my best to hold it off as long as possible, and it was very frustrating because I already have poor vision, and trying to look at the Liquid Glass thing, I was like, "what is going on?"
Like, I can't even tell the contrast between some of the buttons and stuff. There are just a lot of little things that seem to have very clearly slipped past QA. Or, I don't even know what Apple's doing these days for iOS, but it just seems like such a steady decline in attention to detail.
I don't know what to make of it. You can pontificate a lot about it: "oh, there are mass layoffs in tech; maybe they're not bringing fresh talent into iOS," or there's a lot of cruft and technical debt, because this is actually a pretty old operating system that gets pretty frequent upgrades and updates.
It's a lot of skeletons in the closet, so who knows? But are you all on iOS and have you noticed the lack of attention to detail?
Brian: I'll go. I am on whatever the latest Tahoe is, and I do have the Liquid Glass stuff. It's interesting. It feels like back when the skeuomorphism thing happened, what, 15 years ago, and everything had wood grain. I think they're trying to do something as shocking as that, but the execution was pretty bad.
But as far as the quality goes, I can only speculate. Twelve years ago, when I moved to the Bay Area, I met a guy who worked at Apple, specifically as an Objective-C programmer on the desktop Mac apps. He told me a story about how there was, not a brain drain, but an absorption of all the Objective-C talent moving to Cupertino and working at the headquarters (at the time there was no spaceship campus). He was saying they had an issue where there just weren't enough people to hire.
And around that time is when Swift shipped, and the dream was that Swift would be this revitalization of the platform, that more people would want to build macOS and iOS software. But it feels like everything's been commoditized to the point where you have nothing left to do but outsource a lot of this talent and skill.
And obviously with Jony Ive walking out the door and working at OpenAI now, perhaps, again, only speculation, but I imagine shareholder value is the only thing that matters at the moment, along with continuing to hopefully compete in this AI race. So I don't know if it's curtains for Apple at this point, but I could imagine it, especially with the web browser in its current state.
Maybe we don't need a Mac to write code all the time. Maybe I can just use my phone as a walkie-talkie to prep some GitHub PRs, and I only jump into my Linux machine when I need to actually dig into some Go code. So I don't know. Again, this is my soapbox, but I'll stop there.
Rachel-Lee: I found the answer.
Brian: Oh, please share.
Rachel-Lee: It's Edison-Watch, also known as OpenEdison. You can find the GitHub project if you do a search for OpenEdison, or you can visit Edison.Watch to find the newsletter. It's pretty cool. Basically, it wraps all of your MCP integrations. You run it locally in your client, and it double-checks that they're not invoking the lethal trifecta and gives you a shoulder tap to take a look if that's ever happening.
So it should reduce the number of false continues, you know, the continue fatigue of constantly being like, "yes, you can read the .config file. Oh my God."
This should allow you to continue a lot further and only have to be required to intervene if it's like, "hey, lethal trifecta has been invoked. Did you intend to send your bank information to this guy in some place called Lamorka?" Anyway, check it out. This is what I mean when I say resolving these problems.
Brian: This is crazy. I've not heard of this. And it's got like 225 stars, so it seems like a majority of folks have not heard of this. I did talk with the Goose team not too long ago, and they were talking about how folks are interacting with their MCPs. Well, Goose being an LLM platform that leverages MCPs, they chose at Block to basically rebuild all MCPs internally.
And that's actually not too unlike what other people are doing at the enterprise level for this exact concern. So it'd be really interesting to see more people adopt this.
Rachel-Lee: There is this dream, one that I've heard some folks who work on browsers express, that we'll live in a future where, as we navigate the web, our agents will pick up various tools and resources based on our habits.
Like, say you visit my site, nearestnabors.com, that's N-A-B-O-R-S, and you see there's a button: "pay Rachel-Lee 50 bucks to talk about anything with them for half an hour."
But people probably aren't thinking in the moment, "Yeah, I should get that incredible deal and talk to them about my idea for a startup or my issues with building developer tooling experiences."
Your agent can remember it for you and invoke that tool as necessary. The problem is you run into limits with that. It's a really beautiful idea that your agent just remembers all the things you're interested in and adapts its behavior and toolset to accommodate you. You run into the cap issue of tool selection. How does it know from thousands of tools that it's picked up as it's gone which one to use? And also curation and validation.
To get into an app store, even an extension store, you have to go through an auditing process and the web will not be audited. Maybe you can apply for certifications, or maybe people will only install tools that have been vetted through their particular agent's vendor process.
But the point is, you'll always have to go through a middleman, at least in the current way we're thinking. There's always going to be: either you build it yourself, you use what your agent came with, or you use a store where people have to meet a certain security bar. Unless we decide to go the NPM route. I don't know, though. That could go really badly.
John: Sounds scary.
Brian: To be continued. Well, speaking of which, I want to thank you, Rachel-Lee, for coming on, chatting with us about the death of the web browser.
Folks, check out nearestnabors.com. Also, we didn't really touch on the Agentic Web Substack that you've also been running.
Rachel-Lee: Agenticweb.nearestnabors.com.
Brian: Perfect. That's the URL. And then everything else we mentioned will be in the show notes. With that, stay ready.