
Ep. #15, Codename Goose and the Future of AI Agents with Adewale Abati
In episode 15 of Open Source Ready, Brian and John chat with Adewale "Ace" Abati from Block about Codename Goose, an open-source AI agent, and the underlying Model Context Protocol (MCP). They explore how AI agents are revolutionizing developer workflows, the concept of "vibe coding" for rapid prototyping, and the future of AI in productivity and accessibility.
Adewale “Ace” Abati is a Staff Developer Advocate at Block and a passionate advocate for open-source technology. With a background in decentralized web projects, he now focuses on Codename Goose, an open-source AI agent platform helping developers build smarter, faster workflows. Known online as @AceKyd, he’s at the forefront of AI tooling innovation.
- Codename Goose
- Block's new open-source AI agent 'goose' lets you change direction mid-air
- Zapier MCP (Tweet)
- Glama
- TBD
- Block scales back TIDAL investment and shutters TBD in favor of Bitcoin mining
- The Burnout Machine (Hacker News article)
- State of Productivity & AI | Superhuman
- Lenny’s Podcast - Superhuman's secret to success
- Wise Animals by Tom Chatfield
Transcript
Brian Douglas: Welcome to another installment of Open Source Ready with my cohost, John McBride. How you doing?
John McBride: Hey, I'm doing fine. I busted up my knee in Crested Butte skiing last weekend, so.
Brian: Oh, wow.
John: I'm all right. I'm getting around okay, I'm just hobbling around while I do.
Brian: No standing desk today.
John: No standing desk. Well, I don't use my standing desk much anyways, so.
Brian: All right, I won't tell your-
John: Don't tell my doctor.
Brian: I won't tell your insurance provider that you're sitting all day at a desk.
John: Exactly, exactly. How are you doing, Brian?
Brian: I'm good. I've been vibing, this whole vibe coding thing. I've been doing that all weekend.
And I sent you a note over the weekend, but yeah, just prototyping ideas and then getting to production-ready products in less than a day is kind of unreal.
I tend to noodle around too much about the non-important details. And I think having a little chat AI in my editor unblocks me more fast-- faster is the word I'm trying to say.
Anyway, wanted to introduce our guest, Adewale Abati. He goes by Ace on the internet. Ace, welcome. How you doing?
Adewale Abati: I'm doing good, I'm doing good. Thank you for having me. I was just going to follow up with John 'cause I actually had a fracture over the weekend playing football. I still have like a-
John: Oh, no.
Brian: Oh, wow. This is an audio-only podcast, but yeah, I mean, that looks like an intense cast.
Adewale: Yeah, it was just the elbow. I think I had a radial head fracture from the fall.
John: Ooh.
Adewale: But I'm good. It should get better in like a week or so.
Brian: All right, can you still type? I mean, you still got mobility in the fingers?
Adewale: Yeah, I still have mobility in the finger, but it kind of hurts a bit. So my left hand has actually been surprising me.
I did not realize I had so much experience typing with just one hand, so it's doing a good job. And I'm trying to just get used to moving the mouse with it as well.
Brian: Yeah, I mean, luckily, we've got AI to where you can just like, talk to the editor at this point.
Adewale: Yeah. We'll probably get into that a little bit down the line.
John: That's what I was going to say, is vibe coding and AI-assisted editors is actually incredible for accessibility.
Adewale: 100%. Last week, we did a livestream of our work using a speech MCP, where I could just ask it to do stuff, combined with AI agents like Goose itself.
It's just like, thank you. Thank you for inventing this.
John: That's great.
Brian: Excellent. Well, we actually want to talk about Goose. So you're at Block. I've known you at Block as doing TBD, which we could talk about briefly, like what was TBD and where it's at now.
But we did want to get to talking about Goose and what you all are doing with AI runtimes and agents.
Adewale: Yeah. Hello, everyone on the podcast. My name is Adewale. Everyone calls me Ace.
Currently, I'm at Block as a staff developer advocate, and I've been there for almost two years now, actually.
Like Brian mentioned, I did start with TBD. TBD was focused on like open standards for decentralization, mostly finance.
So we were working on technologies that would push that kind of decentralization, stuff like DWNs, which are Decentralized Web Nodes, Web5 itself, and all of those things.
Right now, there's been a shift in focus away from TBD, so all the projects and technologies that were created at that time have been handed over to other open-source foundations to keep them moving.
So I also switched teams, and I've been working on Goose, which is also an open-source AI agent.
I'm always super excited to talk about it 'cause I see the way different people use it in different areas and different professions, and it's like, oh, wow, that's a good idea.
So I'm super excited to just talk about that, and also to find out how people are vibe coding or trying to create stuff with these new tools that are getting better every other weekend, you know?
Brian: Yeah, so could you explain what Goose is real quick for the listeners?
Adewale: Yeah. So for any of the listeners that don't know what an AI agent is right now, think of it like a super personal assistant.
The brain is the LLM. So you have an LLM, like a, what is it called? GPT-4, 4o, 4.5, or Gemini as the case may be, as the brain behind these tools.
But then they also have additional abilities to take your instructions and go act on them.
So a typical example I would give is if you're speaking with ChatGPT, for example, "Hey, I want to do this," it kind of gives you the code back or gives you suggestions, but with an AI agent, 'cause it can act based on your instruction, it can take what it should have given you as response and actually go work on it and go do something about it.
And I think that kind of just transcends even beyond just coding, because thanks to the continuous advancements, now you can manage activities on your computer. You can use it to connect to your existing applications, like Google Drive, Google Calendar, all of those tools that just make it like, "Oh, okay, just go do this thing for me. I'll be here watching."
So AI agents, for me, are AI-powered assistants that listen to you and can also be entirely under your control, basically.
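To make that chat-versus-agent distinction concrete, here is a minimal sketch of the loop an agent runs. The `call_llm` stub and the tool registry are hypothetical stand-ins, not Goose's actual implementation; a real agent wires this up to a live model and to MCP servers, but the control flow is roughly the same.

```python
# Minimal agent loop sketch (illustrative only).
# `call_llm` is a stand-in for a real model call (GPT-4o, Gemini, ...).
import json
import os

def call_llm(messages):
    """Pretend model: answers directly once a tool result is available."""
    if any(m["role"] == "tool" for m in messages):
        return {"content": "Here's what I found: " + messages[-1]["content"]}
    # Otherwise, the model decides it needs to act and requests a tool call.
    return {"tool": "list_files", "args": {"path": "."}}

TOOLS = {
    "list_files": lambda path: json.dumps(os.listdir(path)),
}

def run_agent(user_request, max_steps=5):
    messages = [{"role": "user", "content": user_request}]
    for _ in range(max_steps):
        decision = call_llm(messages)
        if "tool" not in decision:
            # Chat behavior: the model just answers.
            return decision["content"]
        # Agent behavior: actually execute the requested tool, feed the result back.
        result = TOOLS[decision["tool"]](**decision["args"])
        messages.append({"role": "tool", "content": result})
    return "Stopped after max_steps without a final answer."

print(run_agent("What files are in this directory?"))
```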
John: Yeah. One of the things that I noticed immediately about Goose that really stood out to me and I'm very excited about it is its deep integration with MCP.
It's almost like a universal MCP client where your agent with Goose can go and use a bunch of MCP servers. So maybe we could tee people up.
What is MCP? How do you think about it? How does the Goose team think about it?
Adewale: Yeah, I mean, everyone is talking about MCPs today. And to take it back a bit as well, let's think about when API was first shared with everyone. Like, "Hey, you can use APIs to connect with applications. You can build your own front end and whatever on top of any API. It kind of opens up the world to things that, Oh, I wish I could do this."
And then we moved into the era of AI tools where people are able to ask questions with LLMs. And now we want to also be able to bring the intelligence that these LLM tools have into the tools that we already use every day, like Slack, like my Gmail, my calendar, all of these things.
MCP is providing the protocol that just kind of standardizes the way all these AI agents and AI tools can take advantage of this opportunity, or access your data, without having to reinvent the wheel every single time.
With the API example, if three different websites want to consume a specific API, libraries came into play to make it like standardized in a way.
But you still have to write your business logic all around that. It's still very custom. But with MCPs, it's very, very consumer-focused: the AI agent takes this MCP, you don't have to reinvent any wheel, the standard is already there, and you, as an end user, can use pretty much any agent that supports the MCP standard to work with any of these tools right out of the box.
So MCP, as a protocol itself, just opens up the world, where there's an ecosystem that you can continue to build more things upon. And with infrastructure like that, the possibilities are quite endless, 'cause it's not just about what we've created. It's about you being able to create so many things on top of what has been created, and I think that's what makes it amazing.
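For a feel of what that "plug and play" standard looks like from the client side, here is a small sketch using the official MCP Python SDK (assuming its `ClientSession`/`stdio_client` interface and the reference filesystem server's tool names; the directory path is a placeholder). Goose or any other MCP client performs this same list-tools/call-tool exchange on your behalf.

```python
# Sketch: any MCP client can launch any MCP server and use its tools.
# Assumes the official `mcp` Python SDK and the reference filesystem server;
# the /tmp/demo path is a placeholder.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(
    command="npx",
    args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp/demo"],
)

async def main():
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Ask the server what tools it exposes -- no custom integration code.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])
            # Call one of those tools through the same standard interface.
            result = await session.call_tool("list_directory", {"path": "/tmp/demo"})
            print(result)

asyncio.run(main())
```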
Brian: Yeah, so can we talk about the origin of Goose as well? So like Block is known for its Square tablet reader, and payments, and a bunch of other stuff.
This is a dev tool. So I was just curious, at what point is this making sense for Block?
Adewale: Right, so actually, Goose started as an internal tool. We wanted to find a way to bring more productivity with AI tools within our organization.
I'd just seen how it started to improve productivity, and I was like, "Oh, we can actually expand on this and make it open source so that more users can use it, more people can build on top of it as well."
And that's part of open source, right? 'Cause when you open things up like this, different people across different areas of expertise are able to contribute stuff into the technology and make it even grow faster.
So ever since the launch, January 28th, it's grown exponentially in the sense that we've gotten contributions from different organizations and even individuals.
That speaks to the collaboration around MCPs as well. Goose was initially built in Python, so we had things we called toolkits.
It was kind of like its own ecosystem, but the collaboration that open source brings made it possible to work on something like MCPs together and just kind of redesign Goose around MCPs themselves.
So it feels like it's an MCP client today. I mean, it is, but even the built-in tools for Goose are MCP servers. So the things that make it work beyond just a chat tool are built in a way that supports the MCP setup and all of those things.
So that kind of just opens up the accessibility of Goose, and I think that's one of the best things about Goose. Number one is open source, number two is the accessibility.
There's no caged, what's it called, environment, like, "Oh, you have to use this LLM tool," or, "You are limited to these sets of extensions," as the case may be.
You can make Goose be what you want it to be, like very much customized to yourself based on the MCP servers that you choose to install.
John: One of the sort of classic problems that, I guess, if this was APIs, like you had mentioned, that's kind of the old world, would be like service discovery.
So if this started as an internal tool, I'm curious how you and the Goose team are thinking about discoverability of MCP servers.
'Cause obviously, I can Goose add, or Goose config, or whatever for an MCP server. But yeah, that seems like that will start to get hard to manage eventually.
Or just being like, "I kind of don't care how you figure this out. I just want you to go access a thing on my Google Drive or my Notion," and then it can go and discover that.
Is that discoverability problem something that has come up already or that you all are thinking about?
Adewale: I think so. It's a very good question 'cause everyone is approaching it together and also differently.
As of the last time I checked, there are like over 2,000 MCP servers already live. And these are servers that were built for the common applications that we use.
So you think Google Calendar, think Slack, you think Figma, you think some of these tools. And we've had a couple of websites as well, like PostMCP, like glama.ai, that kind of list these MCP servers.
It's also the way you find multiple MCP servers for the same platform, because people build their own versions and all those things, but it should always still work 'cause they're following the standard, right? And you can plug and play into MCP clients like Goose.
So in terms of discoverability, as an open-source project as well, with Goose we have a curated list of MCP servers that we've tried and tested out, kind of like, "Hey, you can install this directly from our extensions site."
But the best part also is that you can literally install any MCP server. I think I came across one MCP server where I don't think the developer knew anything about Goose, so it was configured just like, "Hey, this is the command to install this MCP server on your own."
You can install those kinds of MCP servers on Goose as well and use them directly with the AI agent because, like I said, the standard is there, so it kind of solves the problem where you have to manually integrate, "Oh, this is how this works. Let me configure it specially."
That's where MCPs kind of like shine. It just eliminates the entire complexity. I still think there's a little bit of technicality required in the sense of, as an end user, you need to know what MCPs are to go look for, "Oh, I need an MCP for," maybe a particular platform, right?
Which is why, with Goose, we kind of call them extensions, and we're trying to abstract away as much as we can so you just install an extension versus thinking, "Oh, I'm looking for a Model Context Protocol server," which sounds very technical.
But the more we get into that space, I think today you'll find people that are not developers at all that would mention APIs, like, "Hey, I know these guys have an API."
So it might get to a point where the general public will also be like, "Oh, these guys have an MCP," and stuff like that. So fingers crossed.
Brian: You mentioned in passing it was originally written in Python, and I'm seeing like 70% of the code's in Rust? What was that story?
Adewale: I think it also came down to accessibility, being able to put it into something that would be plug and play.
I'm not 100% sure, but I think the MCP protocol that we worked on to put like the new version of the Goose client also had like a Rust SDK.
So I think that kind of just made it easy to do plug and play and transition into a Rust environment and keep them moving going forward as a stable release into the different platforms that would possibly use Goose.
Brian: So is the goal to like... 'Cause you mentioned a couple examples, like with Slack and other sort of MCP servers.
And I think the context I have around MCP is like, "Yeah, it'd be cool if GitHub did create their own MCP servers so that you can run an agent that has knowledge about GitHub."
And it sounds like GitHub actually has this with like Copilot Extensions, sort of. And I've actually been messing around with those all weekend and they're kind of underwhelming, to be quite honest.
So I'm curious, what's the incentive for all these companies to leverage MCP? But also, is Goose kind of like laying the egg that everyone can now proceed with building MCP servers?
What's the adoption look like?
Adewale: I think the adoption has been geared mostly towards using MCP servers versus you individually building them. We do have a tutorial extension that teaches you how to build an MCP server, 'cause we want you to build your own extensions.
And that kind of comes back to the question that you asked about the companies building on MCP servers.
I think at this point in time almost every large platform has an API, and you can actually use those APIs to build your own kind of MCP server, right? Zapier launched an MCP server I think this past week and it's exciting. I've not tried it out yet, but I was looking at what the functionalities were, and just through Zapier alone, it gives you the ability to connect your AI agent to all of the platforms that you already currently automate. So just installing that one single MCP server opens up your world to tons of automation, right, from your AI client, being able to tell it to go do something and it does it for you, from Slack to Twitter to anywhere else.
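As a rough sketch of that "wrap an existing API in an MCP server" idea, here's roughly what a minimal server can look like with the official MCP Python SDK's FastMCP helper. The weather endpoint and field names are made up for illustration, and real servers like Zapier's are far more involved.

```python
# Sketch of exposing an existing HTTP API as an MCP server.
# Assumes the official `mcp` Python SDK; the upstream API URL is hypothetical.
import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather")

@mcp.tool()
def get_forecast(city: str) -> str:
    """Return a short forecast for a city (hypothetical upstream API)."""
    resp = httpx.get("https://api.example.com/forecast", params={"city": city})
    resp.raise_for_status()
    data = resp.json()
    return f"{city}: {data.get('summary', 'no summary available')}"

if __name__ == "__main__":
    # Serve over stdio so any MCP client (Goose, Claude Desktop, ...) can launch it.
    mcp.run()
```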
So as of today, there are multiple versions of, say, a GitHub MCP server, because some people build MCP servers out of their REST clients or their GraphQL clients, as the case may be.
But when it comes to the integration, the best thing I like about it is how they work together.
So for example, I could use a Tavily MCP server. Tavily does like deep web research; it can go online and find updates or information about something.
And I can say, "Go online using this Tavily extension, MCP server. Get some of the latest information around sentiment," even something that's not just directly like a blog post or whatever.
The AI tool is able to come in to analyze sentiment, and then create a summary for me in Google Drive. Those are things that are very possible today thanks to AI agents.
It's more than just... I think it's slightly more than just automation. There's a thought process in there, where you're able to plug and play into multiple platforms all at once. A lot of the time, regular automation is pre-defined in the first place; this is more like uniquely customized automation for each case.
'Cause tomorrow, it could be, "Oh, do this research, then improve my GitHub PR based on the best practices for SDK addition," or something like that, right?
With the GitHub MCP that I tried earlier today, I mean, I use it every day, but I did something today where I merged a PR.
My manager should not hear this, but I merged a PR, and I think there was an oversight where there were like some CSS issues.
So I basically took the PR link to Goose. I was like, "Hey, using the MCP server, the GitHub MCP server..." And actually, you wouldn't even need to spell it out.
'Cause if I'm sending a GitHub link, it's able to tell that this is GitHub-related. So if I'm like, "Hey, look at this PR." I share the link.
"Something is broken related to this. Can you check out how it's best to do this?" So I was like, "Check out best to do this."
So it checked out the PR using the GitHub extension, then it went to online using another MCP server to find out the best way to solve what I was dealing with. And it came back to me and gave me a summary, right? Just in my editor there while I was grabbing a cup of coffee or something like that. And I think that those things open up an endless world of innovation. And the best part about it is we don't also have to wait on these organizations, like GitHub specifically or Figma specifically, for them to create these MCP servers before people can create them and use it in their applications.
Brian: I dropped in the show notes the link to the, I don't know, someone at Zapier did an announcement of the MCP thing, so that announcement tweet. Just like scrolled through and like, "Wow, this is amazing."
'Cause only two weeks ago, there was another tweet of like, "Oh, MCP? That's just kind of like Zaps."
So I feel like this world is moving crazy fast and it's like hats off to the Block team and Codename Goose to be so early on this wave and helping to build more structure with this protocol.
But the other thing I was thinking about is like, man, again, I opened up this podcast with how I was vibe coding and I've been leveraging AI to unblock me in a way that I haven't had since I was 10 years younger and had so much energy and coffee that I could code throughout the entire night.
A night code session for me in my 20s is now literally a couple of hours with some AI-enabled coding assistant. And that's mainly 'cause I know how to write code in the same way I know how to use an API.
Adewale: Mm-hmm.
Brian: So when I get handed over an MCP server, I'm like, "Okay, well, say less." I can now put together a real product.
And that's where I sort of swim is like prioritizing ideas rather than trying to build out the guts and the framework and protocols.
So just kind of just making a statement of like, man, this is exciting times. And folks, you got to be paying attention to this stuff.
Adewale: Yeah. Talking about that, I was going to ask a follow-up question.
How did it feel for you going from like the zero to whatever point you got to vibe coding?
'Cause it's nice when you have the idea you want to go into in your head, and then being able to vibe through it. You don't have to be the entire team, but that process, just watching the agent do its thing, how was that for you?
Brian: Yeah, yeah, way to turn the mic around. But actually, I...
On Bluesky just before this chat, I Bluesky'd. I don't know what do you call those, tweets posted?
John: Skeet.
Brian: Skeeted?
John: Yeah.
Brian: I skeeted all over Bluesky this morning, and it was basically, it was like, I posted how I felt, which is I felt like Michael Jordan in the Wizard era.
So like Michael Jordan came back to basketball a third time and played for the Wizards in the 2000s.
And the first half of the season, they won like 23 games, and then he got hurt, and then he ended up not having a great end of that year.
But I feel like that year of like, man, I'm like 38 years old, Michael Jordan around that same age, and I'm just like crushing through this idea.
And I sat on this idea for the longest time where I wanted to basically take recipes from Instagram and make an agent to generate some recipes from that and save them for later.
Sunday morning, I started. And basically, in the first couple hours of the morning, I had a full-on prototype.
And then by that night when I got back to it, I had a full working production-ready application. And that's a pace that I don't normally keep when it comes to...
Like I'm trying to look at docs, remember where I got started. Like, how do I run Supabase migrations from the command line?
How do I connect a local database with Postgres? All this stuff, I'm just cruising through and leveraging the chat and AI tools to get unlocked.
Adewale: 100%.
I think the best use of these tools is for things that require you to like manually do stuff. You know what you want to do, but it takes a couple of steps to do it.
You don't have to go through those steps. Just skip through it. I mean, not skip entirely, but like the AI agent does the steps for you, and then you just pick up with the next step. You think through stuff, you have it go do that thing, and you think through stuff and have it go do that thing.
I think it's the fine balance between letting the AI completely take over versus maximizing most of the benefits that the AI tool provides for you.
Brian: Yeah, and so I mentioned Supabase in passing. I don't know if they have an MCP server anywhere or if they built this, but-
Adewale: Yeah, I think they do.
Brian: Do they? Okay, I would be surprised 'cause they move at a super fast pace over there.
But just maintaining and managing my database and connections, and now I'm adding like vectorized columns in there, which is like concepts that I know about but I've never done from scratch.
I'm able just to leverage the tool at hand and I've been using AI to basically generate my migrations and stuff like that.
But in a perfect world, I'd use MCP servers to be like, "Hey, my data's changing in this way based on these features. Here's a screenshot of what I designed. Go update the database to be ready for it."
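For reference, the "vectorized columns" Brian mentions usually mean a pgvector column on Postgres, which Supabase supports. A minimal sketch of that kind of migration, with made-up table, column, and connection details, looks something like this:

```python
# Sketch: add a pgvector embedding column to an existing Postgres/Supabase table
# and run a similarity query. Table name, dimensions, and DSN are placeholders.
import psycopg  # psycopg 3

DSN = "postgresql://postgres:postgres@localhost:54322/postgres"

with psycopg.connect(DSN) as conn:
    conn.execute("CREATE EXTENSION IF NOT EXISTS vector")
    conn.execute("ALTER TABLE recipes ADD COLUMN IF NOT EXISTS embedding vector(1536)")

    # Query for the rows closest to a given embedding (a real one would come from
    # an embeddings model; this all-zeros vector is just for illustration).
    fake_embedding = "[" + ",".join("0.0" for _ in range(1536)) + "]"
    rows = conn.execute(
        "SELECT id FROM recipes ORDER BY embedding <-> %s::vector LIMIT 5",
        (fake_embedding,),
    ).fetchall()
    print(rows)
```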
Adewale: Nice. Most of the vibe coding tweets I've seen from people that have built out like MVPs and the likes have been a combination of whatever framework they were using, plugged into a database through the Supabase MCP, and then deploying to different services under MCP, whether it's Vercel, or GitHub, or whatever the case is.
And you basically completed the cycle just vibing, which also poses the question that people are like, "How secure is this?" Or, "Why should you be doing this? Is this production-ready?"
And maybe for someone that is probably not aware of the steps it'll take normally to build a production-ready app, then you would make the same mistake even if you were learning how to code and you built something that wasn't production-ready and you launch it pretending to be production-ready.
It's just that you get to that finish line quicker, then you face your walls, like, "Oh, I should have made this more secure."
So when I see people blame the AI tool for that, I'm like, "It's not the AI tool. It's more like the person using the AI."
'Cause if you sat down and you asked the AI tool, like, "Okay, we're done building this. Is there any security thing I should be worried about?"
It's going to give you a review based on what it thinks, like, "Oh, we did this. We did not do this."
So I think it falls back down to the human to make sure you go through all the checklist or at least ask the questions that matter so that the AI can best serve you in a way that you do not regret in the future.
Brian: Yeah. John, a couple weeks ago, we were talking about Go channels and like-
John: Yeah.
Brian: Some dragons around not deferring properly and stuff like that.
I actually opened up a PR with one of my Go projects, and my Copilot, you can now review code in a PR in browser, actually caught some of that stuff.
So it asked the right questions and then pointed out like, "Hey, you should perform," and I'm going to use the terms all incorrectly because I literally vibe coded this stuff, but it's like, "The mutex here, blah blah blah, close this, defer here."
All that stuff is stuff that it takes me a long time to wrap my brain around the server side code-
Adewale: Mm-hmm.
Brian: To be able to make sure that it works properly. But I wanted to take advantage of performance, so I ended up asking the chat, "Hey, I actually want to improve performance based on how many cores are available."
Adewale: Mm-hmm.
Brian: "Write that code." And it's like I'm able to trust it with... Probably should write some more tests to be quite honest.
But trusting it just by deploying it and seeing what happens.
Adewale: Yeah, that's smart.
John: Yeah. Go is also one of those languages that I think is just simple enough and powerful enough that it works pretty well for whatever this is, vibe coding, prompting, et cetera, et cetera.
Adewale: Mm-hmm.
John: I've had sort of opposite experiences, actually, with languages like Rust or OCaml, where the semantics and the syntax are such that it just can't do Rust very well, at least right now.
And that sort of makes sense just given it's kind of verbose in text. Maybe there's just not enough open-source Rust out there for it to have slurped up and used, but these things also are getting better every day, so who knows?
There's probably a good-enough MCP server for Rust these days that I should be using.
Adewale: It is insane how good these tools are getting every single day.
I get surprised. I mean, working on Goose puts me in a position where I'm like checking out these AI tools, I'm trying to stay up to date as much as possible, and I get blown away every other day.
With the Zapier MCP, for example, I was like, "Oh, wow, these are all the applications that you're giving me access to."
With the concept of MCPs at first, as of January or December last year, which is like three months ago, I didn't know anything about MCPs 'cause they were still building them and launching.
I was like, "Okay, now, yeah, LLM tools can tap into any application that you already have."
And we hear about the new AI tools, the new models every other week, and you're like, "How fast is this thing going to get?"
In one of the sessions I had last week, we were talking about how close we were to having something like JARVIS, the whole entire Iron Man experience, where you would want it to do something and be like, "Hey, JARVIS, what's the weather report?"
I mean, weather reports is simple, but I'm thinking about "Iron Man 3" right now where there was like an explosion.
It was like, "You know what, map out all the explosion locations for me in the map. Let me see it."
That is something that we can actually do today. I can use this speech MCP on Goose and say, "Hey, based on the recent bombings, or shootings, or whatever the event I want to track, like, Get the data for me on the internet. Plot a chart with the locations and show me what it looks like."
I can use my voice to tell Goose to do that today and I'm going to get a chart or a map or something popping up on my screen in less than five minutes, and I did not write any code.
So it's not as fancy, obviously, as the whole visuals and all of those things with JARVIS, but we are getting that kind of functionality and interactivity, and it's like, what would this look like in six months or a year? That's insane.
John: Yeah. It's funny you mentioned JARVIS. I actually was going to bring this up, partly because I've been building with a lot of these things.
I've been building a smolagent framework in Go, actually. And something I've been thinking about is this idea of anthropomorphizing all these things, really humanifying them or making them seem as if, like, it has a personality, or maybe, like JARVIS, has kind of a humor-
Adewale: Right.
John: Or has like a funny British accent or something.
I think I'm actually starting to kind of swing back in the direction that making these things more and more human-like is just maybe not, is almost going to restrict them in their capabilities or in their...
'Cause the amount of time we can spend in these frameworks and these things trying to make them funny or have a personality or be it more and more anthropomorphized, it's just like not exactly what this technology is.
But I'm curious if Goose or the team has thought about this at all.
Adewale: Yeah, I mean, starting with my team, the DevRel team, Michelle gave a talk on anthropomorphism, and that was probably the first time I came across the term.
I was like, "Oh, that's true." 'Cause I used to tell people like, "You know what, just think of Goose as, or your AI tool as, what's it called, your guy, if that makes sense."
So maybe it's at the point where we need to probably take a step back from that and think of them as tools as what they are and also build them in that way so that humans still are able to keep what makes us human. But at the same time, we're able to maximize the benefits that these tools bring to us. 'Cause we are using them in multiple ways that go beyond just, oh, this sounds funny, or this sounds cool, or whatever the case is, and we can take back our own time.
So I think it's a very interesting topic. I'm sure different people would have different perspectives on stuff like that.
But right now, I'm in agreement with you to take a step back from trying to humanize these tools or build them in a human fashion so that, oh, they feel human, 'cause they're not at the end of the day, at least right now. They are tools that we can continue to improve upon.
I was reading this book, "Wise Animals." Can't remember who the author was. But it's about how our technology works and evolves over the years.
And it starts with, the first paragraph was like, "Technology is stuff that we don't understand yet."
And that's exciting 'cause now we use things that we don't even think twice about, just like an everyday thing, maybe our smartphones and the like. But at some point, they were new. I can remember when the iPhone launched.
And then the chapter I'm currently on is also saying things like, the time it took us to get from one invention, let's say engine cars or oil cars, to the next, like smart cars or whatever the case is, and the time it's going to take us to do the next invention, is exponentially smaller, because the tools continue to grow at a pace that is just different from our time as humans.
'Cause first of all, we've accumulated some of this information. And again, we've built these tools so that they're able to even create more tools by themselves.
And if it took us 1,000 years to get to point A, there's a very high chance it's going to take us only 10 years to get to point B. And then, eventually, these things would outlive us 'cause they're not technically restricted by our time. So we have to observe the pace at which the innovation goes and try to see that we make the most of it while we're still here.
John: Would you consider yourself an AI accelerationist?
Adewale: I don't know. I don't know, I'm kind of like...
John: Maybe an unfair question, but.
Adewale: Yeah, I don't know if I want it to move ridiculously fast, but at the same time, I recognize the fact that it is moving fast.
As long as we kind of have this ethical conversation around it, like this is things that we're trying to build. I don't think it's going to solve all the problems.
There's still conversations around security, even with MCPs, even with agents. Like, how do we continue to enforce security and privacy, all of those things? And it's an ongoing conversation.
But I don't think we can put a cap on innovation. Things will continue to get built, whether we want it or not. It's just about how we manage it across our spaces and also how we continue to make use of them to the best of our abilities.
John: Yeah, yeah, it makes sense.
Brian: Cool. Well, honestly, Ace, appreciate you having the conversation about Goose, about what Block's working on. It's fascinating.
Accelerationist? I don't know, I feel like I'm now on a conveyor belt that's moving quite fast, so if it goes any faster, I couldn't even imagine trying to keep up with this stuff.
So I need my AI agent, and my voice recorder on my necklace, and my car to drive me places to keep up to date.
But with that said, I do want to move us to reads. So Ace, are you ready to read?
Adewale: Am I ready to read? Yes.
Brian: Excellent, let's get into it. John, you had a read that was going to be spicy so you want to go first?
John: Yeah, I can do my spicy read. So this was something that I pulled off of Hacker News, was kind of going viral there, and it's titled "The Burnout Machine."
And I've read these little kind of, I don't know, quips before that inevitably lead people to say like, "We should unionize as tech workers. We should organize as builders."
And I found this one particularly gut-wrenching or just it really resonated with me in a lot of ways.
The TLDR is just about how you're sort of expected to show up to these giant tech conglomerates.
There's a bunch of fun, sexy stuff they're going to do for you, like meals, and pizza parties, and massage chairs, and whatever.
But then inevitably, two, three, four years later, you're just completely burned out and just the pace of work and expectations seems to be kind of unreal.
And I sort of feel like that's only gotten worse as return to office continues.
Yeah, I don't know what that's going to look like with the future of tech work, because even just a few years ago, it was like showing up to do some code and cut some features.
And it honestly wouldn't surprise me if, in the future, every tech worker was sort of a manager of their own AI fleet, trying to get that complexity to make sense, and then still shipping things to other people.
And just the graph of how this could look in the future could be very interesting. But it's a spicy read 'cause I feel like anytime you bring up unionization or organizing within any job sector, there's a lot of opinions.
So I don't really know how to feel about it, I guess, is my read of this.
Brian: Yeah, I mean, everyone's got an opinion around this, and I feel like when you bring in the union sides, so I'm looking at, and this might ruffle feathers and get us canceled, but this is my opinion, but I look at Starbucks.
Starbucks had a unionization a couple years ago. I don't remember the last time I went into Starbucks.
I'm happy that there were jobs and people can have Starbuckses in their hometowns. And I guess there's only like one or two unions in existence for Starbucks so far.
But it does like set the stage of like, okay, well, it's unionized, does it really matter or are we going to move slower?
If you try to build a house or get any sort of renovations here, specifically here in Oakland, California, the time to get permits is like ridiculous.
And it does come down to a lot of red tape that happens. So in the event that like, "Okay, let's figure out how we can get either paid more or get established and standards," we might move the cheese pretty quickly, especially how you mentioned the, do we manage a fleet of agents?
The other thing was, I know with the airline pilots, which I know this might be like an overused analogy, but like, "Is AI going to take my job?"
With airline pilots, you take off and you use autopilot. So you are just watching the bleeps and bloops on the screen.
I imagine there's a little bit more to it than that. I've never flown a plane.
John: I've actually asked my pilot buddy about this exactly, 'cause I'm like, this seems like something ripe for... Or even like maybe you'll be sitting in a cube one day to man three or four of these different planes from your drone-operated thing or something and just remote into it.
But what he tells me, though, is that while autopilot is very good, and I think this actually rings true for a lot of how AI works today, autopilot's very good at every typical use case, but man, the thing just sucks in a storm or with unexpected turbulence.
It can't really take off or land because you're literally manning the yoke and you feel, you feel the give and pull of the thing coming down.
And flying these planes the way that he describes it to me, especially these really big planes, is they have a certain feel. And that's like really difficult to code into any kind of autopilot or AI system.
I don't really know if I want to get on a plane that doesn't have a person in it, I guess I would also say.
Adewale: I mean, talking about that, it's not exactly the same thing, 'cause I mean, flying a plane is probably infinitely more difficult.
But it's also one of the things I try to tell software developers, like it's more than just writing the code or doing everything as in documentation.
It's how do you use this code to solve a problem?
Eventually, even if you have like a fleet of AI tools, it's going to come down to, how do you use them to solve whatever problems are in front of you? And that would probably be what differentiates developers or managers as the case may be in that scenario
'Cause, oh, everybody has access to these tools, but what are you able to create? Are you able to approach certain problems even with the same set of tools? So it's just like everyone has access to, or everybody can now build an application, but what kind of application can you build?
Or everyone can now use AI tools, but what can you do with them? The autopilot is there. You're flying in the storm, but can you handle the storm?
It's like, yeah, I think it's something that is just very key for everyone regardless of that field to just know that the tools will always be there, but it's like, what can you do with them basically?
Brian: Yeah, and going back to the original reason for the article, like the burnout question.
I think that employers are always going to try to get the most bang for the buck where they can.
So like if it's, "Hey, you're going to start coming back to the office 'cause we have lunch now." Like, "Okay, lunch."
"Oh, by the way, we have dinner. And we have these events. And we're running a hackathon on the weekend that we need to have people hands-on for. And we have a launch on Monday."
So there are always going to be these little carrots dangled in front of you.
I think what I learned early in my career, when I was much younger and had more energy, is those opportunities where maybe you do put in some extra time or go travel for whatever reason, those help set you up for 20 years later, when you're like, "Oh, you know what? Actually, I think I can just get my job done a couple hours a day, and then write design docs or do more people management."
All that, you set the stage early on to get to the point where, "Okay, yes, I can afford to just have the decision made that I can make more money and et cetera."
And it's like the age-old engineering answer, which is, "It depends." And it's like everyone's situation is going to be different.
And I think we're better off without standards, but I do see the reason for the need for standards, so I'm here for it.
John: Yeah, one of the frameworks that was helpful thinking through this, and I think, 'cause my initial read of this was like, hell yeah. I've been burned out before. We should do something about that.
But then I started to think through some of these frameworks of how this has happened in the past, where the steel mill is a great example: the literal means of production for the steel mill was, yes, that workforce, but also the actual place and the actual machinery and stuff owned by the mill.
So those people could organize to get better working conditions, more safety, to then be able to take back some more of that power for the actual means of production.
Where in tech, it's definitely different where I could just go and the means of production is basically just like in my brain, right?
Where I can go and output some code and do some thing. And in theory, I could go ship a bunch of stuff that could be a competitor to Amazon, and Cloudflare, and Block, and X, and whoever.
But that paradigm starts to get so much muddier for me when I start to think about the future of what work looks like with these fleets of AI agents and stuff that I'm managing.
Because I don't have a fleet of H100s in my house. That means of production is not within my means.
And especially years and years down the road where it's like what happens to that future generation of engineers who don't have so much of that means of production in their minds where they can just go and execute on something?
They're so reliant on vibe coding and so reliant on the steel mill of AI agents, I guess, to really pull on this analogy. Does that kind of make sense?
That's kind of where I land, so in the middle with this from just like, oh, I don't know, right?
Adewale: I think time is always going to change 'cause I wasn't there, but I'm imagining, for example, when we started with the early days of the computer and it was a very manual process to do the ones and zeros, the punching, the assembly, and all those things.
John: Yeah.
Adewale: And then the smarter languages came. I'm sure some of those engineers felt, how would you understand how your computer works if you just tell it to print?
I mean, there's going to be that feeling there, but then we continuously keep moving forward.
So maybe in a century's time, vibe coding might just be the Assembly language of whatever pro coding we have at that point in time.
And it becomes irrelevant to even know how the, what's it called, the ones and zeros work.
I'm not saying that's what the case will be, but I'm just saying things are just going to continuously get built on what is working, and what isn't working is going to get left behind eventually, so.
John: Yeah, and I think there will always be opportunity for folks to go back and learn the old-school way.
Adewale: Yeah.
John: In high school, I had a car that I had to maintain because it was, like, it was a piece of, it was garbage, basically.
So I had to change my oil. I always had to like check the radiator for leaking fluid. I changed my own brakes. I haven't changed my own brakes in years.
That's not a thing I want to do. Never again in my life do I want to go jack a car up in my driveway and drain the oil.
But I know how it works and I'm like one YouTube video away from learning how that works in whatever code base.
So I don't know, again, it depends, but it'll be interesting moving forward.
Adewale: Yeah.
John: Actually, I want to switch over to... I got two picks. Actually, I got one pick, really. I was listening to a podcast this morning with the CEO of Superhuman.
And I've been a Superhuman user for a couple years now, actually. And I've been aware of Superhuman since before it was... His other company was Rapportive.
'Cause I used to do sales at a large enterprise and I used Rapportive to understand my LinkedIn connections through my Gmail, and that was his original idea.
But what I'm getting at is they just shared a "State of Productivity & AI" report, and there were some pretty cool nuggets in there.
It's a nice, little PDF that we'll link in the show notes. But yeah, 82% of professionals leverage AI features in email, and almost 70% use AI messaging and calendar tools, respectively.
And I love Superhuman's sort of report. Read the report, because productivity and booking calendars... like, Ace, when you booked this time with us, we used Cal.com.
It just happened. It checks our calendars. With Superhuman, I went through a bunch of emails and just responded with like, "No, thanks."
And I just used AI to basically say, "No, thanks," and it writes in my own words, "Oh, yeah, blah, blah, blah, blah." Like, "We don't want whatever you're selling."
But that's just like so tedious of going through all that stuff and being like... I don't want to respond to all this stuff. But also, if I don't respond, it's going to be in this weird loop.
So how do I... And the other thing is unsubscribing. There's an unsubscribe link. You just click the one in Superhuman and it unsubscribes and deletes all the emails.
So yeah, I think they're doing AI really well. And for a company that's been over 10 years old, they had a huge resurgence in the last couple years.
But yeah, I feel like I'm selling Superhuman, but I try not to.
Adewale: I mean, I got sold. I'm looking at the website right now, I was like, "Okay." So yeah.
John: Listeners will know that I've been on a journey with Apple and Mac's default apps.
At one point, I was talking about Automator, and I've been using the Mail app recently, which actually has been very surprising and delightful.
Partially because Apple Intelligence will just surface the most important things, like things from my wife, over something that it knows I probably won't actually look at.
So yeah, I think email and calendar, I mean, it's just such a good use case for some of that stuff, especially in the summarization realm, which I think is probably the best thing that LLMs can do today, just summarize a bunch of text.
Adewale: Yeah.
John: Yeah, I'm a huge fan.
Brian: Yeah, I'm a big fan as well. You using AI for your productivity, Ace?
Adewale: Yeah, I mean, to be honest, Goose has been a daily driver for me, clearly for documentation.
There's this MCP server I really like, the knowledge graph one. Basically, it takes any block of detailed information, documents, and just kind of does a knowledge graph of them, and you can ask questions based off of it.
I think that's helped me out, actually, working on documentation and stuff. But yeah, Goose has been a daily driver for me, if I'm being honest. I'm not trying to sell.
'Cause I mean, I was working on a task that I was like, "Just go over this pull request. I'm not sure I understand what's going on here. Can you go through this?"
But I'm trying to think of any other AI tools outside of that. I'll probably say video editing, using like CapCut or Canva. That's also been one very nice point of AI coming into my day-to-day life, basically.
John: I saw a really crazy video recently of somebody that built an MCP server for Ableton, which is an audio workstation for like DJs and music producers and stuff.
And it basically was just like, "Hey, make me a song at 130 BPM with a kick drum on four-four, and it sounds like this, and it sounds like this."
And it just like, boof, and the whole Ableton project just appeared.
Which to me, I mean, I've played around with some of these things before, like playing guitar and a little bit of MIDI, and you come up with kind of a terrible-sounding thing.
But that was pretty crazy to me where I was like, "Wow, there it is, just the whole song," you know?
Adewale: It is insane.
John: It's wild.
Brian: Yeah, that is wild. Man, but this has not been a wild conversation.
It's actually been pretty awesome to kind of catch up with what you've been doing, Ace, and what Block's been doing.
And folks, it is time to touch the Goose and check out Codename Goose on GitHub and on the website as well.
Adewale: Thanks for having me, Brian. This is an amazing pleasure. Nice to meet you, John.
John: Yeah, you too.
Brian: Excellent. Listeners, stay ready.