Generationship
45 MIN

Ep. #50, Defense Against Deepfakes with Joshua McKenty

about the episode

In episode 50 of Generationship, Rachel Chalmers sits down with Joshua McKenty to unpack how AI-driven scams, deepfakes, and identity fraud are already reshaping our digital lives. From scam factories and nation-state actors to broken trust infrastructure, Josh explains why authenticity is the core problem, and what it will take to fix it. The conversation spans cybersecurity, public policy, and the future of human trust in an AI-saturated world.

Joshua McKenty is the founder and CEO of Polyguard, a cybersecurity company focused on real-time, privacy-preserving identity verification. Josh is a frequent advisor, investor, and speaker on authentication, privacy, and the societal impact of emerging technology.

transcript

Rachel Chalmers: Today I'm very happy to have Josh McKenty on the show. Josh is the founder and CEO of Polyguard, a cybersecurity company delivering real time identity verification to stop AI-driven fraud.

Josh is a veteran technology entrepreneur and executive, best known as the co-founder of OpenStack and former chief cloud architect at NASA. He also co-founded Piston, which was acquired by Cisco, and served as a senior executive at Pivotal Software.

Earlier in his career, Josh contributed to the development of the Netscape browser, helping shape the early web. With deep expertise in cloud infrastructure, security and digital trust, Josh is a frequent advisor, investor and speaker on the future of authentication and privacy.

Josh, at my last job, the junior admin staff would semi-regularly get urgent emails from the company founder asking them to transfer large sums of money. These emails were, of course, scams, as are the panicked voicemail messages grandparents are getting from deepfakes of their grandkids. How will the advent of artificial intelligence make all of these problems much, much worse?

Joshua "Josh" McKenty: Oh, it's already happening. So we can just talk about how has it already? And how is it accelerating? There are three things that are happening that makes it much worse. The first one is the tools are so easy to use now, the number of folks involved in running these scams has grown enormously.

So it used to be you were getting attacked by a small group of sophisticated attackers and it was relatively expensive for them to run an attack. So they only targeted the folks where they thought they would make a lot of money. Now this is so perfectly democratized that everyone is getting scammed all the time, every company of every size.

So that's impact one: more people involved in running the scams. Impact two: the scams are better, because the AI is better at simulating the impersonated person, right? So instead of getting a call from a random-sounding voice that is robotically claiming to be someone on the help desk team, you get a call from someone who you could swear is the CISO saying, "Dude, your laptop's compromised. I need you to immediately download this scanning tool. Run it real quick. Here's the link. I'm texting you the link now."

The failure rate of folks who have just finished training on deepfake defense is still like 50%.

Rachel: Oh my God.

Josh: It's crazy. So that's impact two: they're just better. And impact three is this terrible intersection between the AI that's generating the content and the AI that is scouring the data. So anything that's been released in a data breach about you, ever, is now at the proverbial virtual fingertips of these AI systems.

So everything is tailored. The message comes from the right phone number, it references a conversation or an email chain or a purchase you've made before. So everything feels more and more plausible.

If you are a grandparent getting a voice call from your grandchild, they're referencing their vacation in Mexico, where they actually are.

So the intersection of the stolen data with the synthetic content has become this weaponized attack that people are way more vulnerable to than they are willing to admit.

Rachel: Who is running these scams? I'm always curious, is it like people in the poor world with few other options for actually feeding their families? Or is it state actors or is it just regular American bad guys?

Josh: The worst part of this is where the largest number come from. There was a documentary called Scam Incorporated. There are these massive scam factory compounds in the border area between Myanmar and Thailand.

Rachel: Oh no.

Josh: And the folks running the scams are actually essentially human trafficked slaves. So they have themselves been scammed to travel there for the promise of a job. They've had their passport taken away and they're locked up. And they're basically told if you don't scam people successfully, you won't eat and you are never getting out of here.

So we're talking about hundreds of thousands of people. There have been military operations to free various groups that have, you know, released thousands to tens of thousands of people in single operations. There's some good evidence that this is actually run by Chinese organized crime, but it's at a scale that is hard to imagine.

So when folks want to kind of get back at the scammer, they're assuming the scammer is the one benefiting and not realizing often the scammer has no choice.

Rachel: Yeah.

Josh: They are themselves the victim of a scam.

Rachel: Yeah.

Josh: So that's one category. The second category is nation state actors. Right? So when we look at the DPRK IT scheme, that's North Koreans impersonating Americans to get IT jobs. Remote IT jobs.

Rachel: Yeah.

Josh: That's like 30,000 North Koreans and it's run by the government. It's a financial program to fund the North Korean regime. And then also, anytime you have cybercrime, you have the classic pseudo state actors, especially in Russia and all over the world, really. But the Nigerian prince scam has been run by tech savvy entrepreneurs of a certain variety in places in the world where stealing from Americans is sort of considered an honorable form of entrepreneurship.

Rachel: Well, you know, Robin Hood.

Josh: Yeah, exactly.

Rachel: I can see their point, but I would prefer not to be stolen from.

Josh: Yes.

Rachel: Luckily for us, the White House has announced an AI Action Plan. It covers everything. Right? And it's going to address all of the risks that you see. We don't have video, but Josh is shaking his head sadly.

Josh: It is an interesting policy document. And from the standpoint of American industry, it has some policy objectives that are clear. It's about making sure that Made in America products benefit America economically. It has some footnotes on page 12 around some of the risks of synthetic content to the legal process in terms of evidence. Right?

So making sure that, for evidence in court, we have a way of telling whether what's being presented is a deepfake or not. It does not at all address scamming of the general public as a policy problem. It doesn't address the use of deepfakes in misinfo and disinfo campaigns, either domestically or abroad.

It's really focused on other aspects of AI. There are actually some provisions in there that will make this problem worse: in the name of free speech, there's a lot of enforced censorship of the training of AI models in the AI Action Plan.

And as we know from history, when you start a path of censorship, the path doesn't end well. If you inject professionals into a tech company with the goal of censoring what goes into the model, that censorship escalates. The idea that you are now in charge of deciding what the model is trained on becomes a thing you do forever. And it tends to scale up rather than scale down.

So putting your finger on the scale of how these models work and what they're allowed to be used for: we're not doing that as a policy effort on the outcomes of those models. We've literally done it in this action plan by saying, "No, we're going to embed people in Apple and in Meta and say the model must not ever contain these things. Censor it in these ways that please us."

Rachel: Can you imagine a policy framework that would go further towards addressing this specific issue of the deepfake scams?

Josh: Yes, absolutely. And there are other countries that are working on it. So we passed the Take It Down Act, which was narrowly focused on intimate imagery, and it's a really important piece of legislation. I'm very much a fan. It also establishes kind of a legal definition of what a deepfake is, what synthetic media is, and it pioneered the idea that threatening to release a deepfake is also a crime. Especially in relation to children.

Rachel: Yep.

Josh: Super important. What it doesn't do is address that attack in any other context. So if I'm a professional and someone deepfakes me and posts it to social media with the intent of either harming my reputation or benefiting themselves commercially, I don't have any recourse under the Take It Down Act.

So we need to extend the policy framework that it created to all of the other harms that deepfakes are used for, at least in the personal frame: myself as a professional, myself as a politician, myself as an advocate for a particular cause. I don't think we have a framing yet to address the misinfo and disinfo problems.

Rachel: Yeah.

Josh: But we never have. We don't have a legal basis for that globally, so.

Rachel: And it's hard to persuade an administration to fight misinformation and disinformation when such information benefits it.

Josh: Yes. And all administrations want the right to use that tool when it suits them.

Rachel: Yeah, yeah. It's a bipartisan sphere of corruption.

Josh: Yeah.

Rachel: Absent a policy framework, can each of us individually protect ourselves from these more and more plausible deepfakes?

Josh: Yes. And there are two sides to this protection. One is limit our own vulnerability to being duped in these kinds of ways, particularly for financial losses. And the other is to protect the use of our own image and likeness in attacks on others.

And this is very similar to cybersecurity problems in general. Protect yourself and protect your loved ones. The former is a lot easier, weirdly, today than the latter.

So this is one of the weirdest, hardest things to train yourself to do, but it's really the only truly effective approach if you don't want to be scammed by someone calling and pretending to be a loved one: if you get a call and it is a loved one in crisis, the first and only safe thing to do is to hang up on them, which is counterintuitive.

You need to hang up and call them back. If it's your spouse saying, "Oh my God, I've just been in a car accident," you say, "Honey, I'll call you back," hang up, and call them back. Because spoofing their phone number and their voice is trivial, and once your amygdala is triggered there is no way, during that call, to adequately verify whether or not they're a fake.

Rachel: Right.

Josh: The problem with code words is people forget them in crisis. Or you use them every day, and if you use them every day, then they become easy to discover. And once fight or flight kicks in and someone's like, "I've kidnapped your child," again, the first thing you need to do is hang up and call your child back.

And if your child doesn't answer, maybe you listen to the call the second time the attacker calls. Most likely your child answers immediately, like, "What are you talking about? I'm not kidnapped." But if you stay on the phone, you don't have a way to verify.

Rachel: Mhm.

Josh: Which is really, really, really hard to train yourself to do.

Rachel: If you have two phones, can you call them back from the other phone?

Josh: Yes. If you have two phones, or you're sitting in front of your laptop and can dial them on a second line, or there's someone else in the room you can pass a note to, sure. But you are generally better at protecting yourself if you hang up anyway, because psychologically the most important thing to do is take a pause, which you can do best by getting off the line.

Rachel: Yeah.

Josh: And what the attacker wants to do is not give you a chance to take a pause and not let you get off the line. And so the first thing they're going to do after they give you the initial shock is say something like, "if you hang up, I'll kill your child."

Rachel: Oh God.

Josh: You need to hang up before they say that. Because if you hang up before they have a chance to say that, they're not going to kill your child; they want to get something from you, so they will call back.

Rachel: Right.

Josh: If it's an actual kidnapping, you got to hang up before they make a threat. And if it's a synthetic kidnapping, hanging up gives you a chance to prove that it's synthetic.

Rachel: Yeah, my amygdala is triggered. I'm feeling glad that I sent my kids to many years of martial arts class.

Josh: Yeah, the obvious other ones, I mean, it's hard to think about training your parents and your loved ones to, you know, look at a domain name. But look, I put this Instagram post out two years ago. I was like, you should sit down with your family members at Thanksgiving and talk about deepfakes, and you should do it every year.

And you talk about, what are you seeing, what's coming in text messages, what's on WhatsApp? You know, what the state of the art is, because--

The reason people are so vulnerable is because they believe they're not. They believe that only idiots get scammed.

Rachel: Right. But the barrier for "idiot" is going up and up and up and will eventually swallow us all up.

Josh: It already has. Everyone is a situational idiot.

Rachel: Yes.

Josh: My daughter's classmate got scammed and had his bank account emptied.

Rachel: Oh, God.

Josh: And it was right before we started the company. You know, he's not a stupid person. It was nine o'clock at night. He'd been up since 4am. He was a little tired. He was a little distracted. And the attackers were claiming to be the bank. And they didn't raise any red flags.

They didn't ask him for personal information. They didn't ask him to transfer them money. They said very plausible things, in the controlled tone the bank uses. And people are vulnerable to that attack, particularly because banks already lie to us.

Rachel: Yes.

Josh: They will already email us and say, we will never call you, and then they will call you as soon as your credit card is overdrafted.

Rachel: Yep.

Josh: They just don't follow their own policies. And we all know this, so we're used to them calling us, even though they said they wouldn't.

Rachel: Yeah.

Josh: Their telemarketers will call us to sell us a new feature, even though they will claim they never call us, but then they actually do. They just outsource it to another company. But it's actually legitimate. But you can't really tell.

Rachel: So there's a certain amount we can do to protect ourselves individually. But the problem is systemic. What can we do at the company or state level to try and address some of the reasons that people scam?

Josh: Yes.

The reason scams are possible is because a lot of our social infrastructure, by which I mean the ways that we relate and transact with humans, is based on trust, which is good. That's a healthy societal trait. But the trust is based on assumptions about verification that are broken now.

Rachel: Yeah. It's a social contract that is out of date.

Josh: Yes. And we made a decision in the very early days of the Internet. I love to take advantage of the fact that I'm very old.

Rachel: It's a superpower. We were there.

Josh: We decided that anonymity was more important than authenticity.

Rachel: For very good reasons.

Josh: Great reasons. It was very important. But we didn't know the trade-off we were making; it's Pandora's box in a certain sense. And we said we're going to bias towards anonymity over authenticity.

A MAC address is not registered to a human when you buy it, and an IP address is not permanently registered. And the logs of all these things will be secret and protected in certain ways for very good reasons. Which means you never really know who someone is on the Internet.

Rachel: On the Internet, nobody knows you're a dog.

Josh: Yeah. No one knows you're a North Korean. And okay, that was always true. And people sort of knew that was true about the Internet. They didn't realize it was also true about phone calls.

Rachel: Right.

Josh: And text messages. And they didn't realize that these two things had become one over the last 10, 15, 20 years, that most phone calls go over the Internet and are partly Internet traffic for a period of time.

So the shift that has to happen is we have to verify who people are again, in the same way that we would if we met them in person. And we have legal procedures for this. We just use them very rarely. Notaries are the thing where we say, hey, I'm buying or selling a house. That's a lot of money, maybe we should be really sure.

I'm opening a new bank account. That's potentially really risky for either me or the bank. Let's be sure. We can bring that same sense of "let's be sure about who this person is" to bear on our phone calls, our video calls, our Zelle transfers, our emails, really everything at an API layer. The same thing we did with SSL certificates for websites, we can do with SSL certificates for humans. That's kind of what we're working on at Polyguard.

Rachel: What about Real ID here in the States? Does that go any way to addressing this problem?

Josh: Yes and no. There is this really important role that the state has, the nation state has, which is it is the authority that says this person is this person and only they are this person. And we don't have another construct in society that holds that responsibility.

Rachel: This is why we talk about our government names.

Josh: Yes, government name, legal name. It's a fully qualified domain name for a person. But the problem with the government then being the arbiter of identity in all cases and in all transactions is you lose that anonymity piece. You're like, "oh, now we have authenticity but no anonymity. And do I want the government to know every single thing I've ever done with anyone, every phone call I ever had, every text I ever sent?"

Rachel: It depends on the government, Josh.

Josh: Right. But sooner or later, there's a government you don't want to have that information.

Rachel: Yes.

Josh: And so the Polyguard approach has been to build the technology for authentication, but to keep the identity on the mobile device and to make the transaction decentralized. So it goes from your phone to my phone without the government, or even Polyguard, in between that transaction.

So if you need to verify who you are to me and I need to verify who I am to you, we should do that together, presenting credentials that the government has certified, but not with the government being in the middle of that verification. And Real ID puts the government in the middle.

And by the way, most of the other solutions in the space put someone in the middle, whether that's Apple or Google or Meta or Sam Altman through Worldcoin, you're saying, hey, we're going to trust this arbitrary third party to know everything about everything we do.

Rachel: And the last person trustworthy enough for that was Jon Postel.

Josh: Yes. Yeah.

Rachel: So tell us how Polyguard does it. Tell us what you're building.

Josh: Yeah. So we took advantage of a few innovations in technology that are really meaningful and kind of delightful, one of which is your phone is a really powerful device now. It has a really secure hardware enclave. So you can keep sensitive data on your phone encrypted in a way that people can't get it out, even if they crack your phone open.

And I say people. I mean, if the Russian government stole your phone and was willing to put the money into it, call it one to five million dollars, they definitely could. So it's not entirely safe, but the bar is really high.

We also have really good cameras on modern phones that have 3D data. So we have LiDAR scans, we have infrared scans, we've got time of flight cameras, we've got stereographic cameras. So actually, your phone is now one of the best biometric verification devices in the world.

It's better than most of those cool door locks you go through at a secure facility.

Rachel: Those door locks are not cool.

Josh: The airlock ones where they lock you in the little glass box for a minute. Mantraps. Don't like those.

So we take advantage of that in a way that folks haven't before. And we help you collect your credential data from your government, you know, the signed biometric data that's on your e-passport, and load it into your phone.

Kind of like a smart wallet. A little different at the protocol layer. And then we give you a way to present that, but not the actual credential. So the last thing we want people to do is start showing their passport to everyone.

Rachel: Right, Right.

Josh: And that's actually starting to become the standard in a lot of places. Like, please, no. That's terrible. You're just giving folks data to steal from you and impersonate you with.

Rachel: Yeah. It needs to be a hash.

Josh: Yes. It's a signed JWT that says, "Josh has these details that he's sharing with you. This is his full legal name. This is his age, but not his birth date. This is where he currently is, but only at the state and country level, not his fine-grained latitude and longitude address. This is where he's allowed to work, but not his citizenship details or whatever else."

So we present signed proofs, but not the underlying credentials. The credentials are yours, and we broker the infrastructure so that the recipient of that proof can verify it if they need to. They can say, "Hey, I've got a thing that says this is Josh's full legal name. Was this actually signed by Josh? And was the proof of it signed by the US Government? And was the proof of his phone number signed by his telco?" So we build these chains of trust.
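
To make that concrete, here is a minimal sketch of what a selectively disclosed, signed claim could look like, assuming a JWT-style token and an ES256 key pair standing in for one held in the phone's secure enclave. The field names and identifiers are illustrative assumptions, not Polyguard's actual format.

```python
# Minimal sketch: a selectively disclosed, signed identity claim as a JWT.
# Assumptions: JWT-style token, ES256 keys, illustrative field names.
# Requires: pip install pyjwt cryptography
import jwt
from cryptography.hazmat.primitives.asymmetric import ec

# Hypothetical key pair standing in for one kept in the phone's secure enclave.
holder_key = ec.generate_private_key(ec.SECP256R1())
holder_pub = holder_key.public_key()

# Only derived claims are shared: an age threshold instead of a birth date,
# state/country instead of a street address, never the raw passport data.
claims = {
    "iss": "did:example:josh",      # hypothetical holder identifier
    "aud": "did:example:verifier",  # hypothetical recipient
    "full_legal_name": "Joshua McKenty",
    "age_over_18": True,
    "location": {"state": "NY", "country": "US"},
    "work_authorized": "US",
}

token = jwt.encode(claims, holder_key, algorithm="ES256")

# The recipient checks the signature against the holder's public key.
# Verifying that key up the chain (government, telco) is the separate
# "chain of trust" step Josh describes.
verified = jwt.decode(token, holder_pub, algorithms=["ES256"],
                      audience="did:example:verifier")
print(verified["full_legal_name"])
```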

Rachel: So it sounds a lot like public and private key infrastructure. Is that the model on which it's based?

Josh: Yes. And it is actually PKI under the hood. It's, you know, certificate signing requests and moving the right key to the right place and keeping it on your phone carefully. We had to solve some fun technical problems in the process.

Like how do you prove the phone is in the room with the person? How do you prove the person and the phone are the ones using that web app and they're not on the other end of a remote desktop connection or a VPN connection?

So we've added some other patent-pending technology in some spots. But the nuts and bolts of what we're doing is "hey, I want to have a way to prove my identity, but I want to carry that around in my pocket and not have it depend on trusting Google to do it for me."
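
As a minimal sketch of that PKI plumbing, assuming the pyca/cryptography library: generate a key pair (standing in for one held in the device's secure enclave) and build a certificate signing request for a signing authority to certify. The subject fields are illustrative, not Polyguard's actual flow.

```python
# Minimal PKI sketch: device key pair plus a certificate signing request.
# Assumes pyca/cryptography; subject fields are illustrative only.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

# On a real device this key would be generated inside the hardware enclave
# and never leave it; here it is an ordinary in-memory key for illustration.
device_key = ec.generate_private_key(ec.SECP256R1())

# The CSR binds the public key to a subject; a certificate authority
# (the anchor of the chain of trust) would sign it and return a certificate.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "Joshua McKenty"),
        x509.NameAttribute(NameOID.COUNTRY_NAME, "US"),
    ]))
    .sign(device_key, hashes.SHA256())
)

# PEM-encoded CSR, ready to send to the signing authority.
print(csr.public_bytes(serialization.Encoding.PEM).decode())
```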

Rachel: And I feel really weird and William Gibson-y asking this, but the hash algorithms that you're using, are they going to be vulnerable to quantum attack?

Josh: I had a happy hour last week with a quantum computing expert to choose the new set of algorithms we're moving to.

Rachel: Wow.

Josh: Because the elliptic curve algorithms we were using are now scheduled by NIST for deprecation in 2030.

Rachel: That's amazing.

Josh: Yeah. We have five years to make a relatively small patch and we'll obviously keep doing that every year forever. So the relationship people end up having with Polyguard is that you're using our mobile apps.

And then in the commercial sense, the company who wants to interview candidates and say, "hey, prove that you're not a North Korean, prove who you are for this interview." They're our customers. The end user who has the mobile app is always a free end user. We don't want to charge individual people to use Polyguard.

Rachel: Right. Despite everything. Do you use any AI tools in your own workflow?

Josh: Oh yes.

Rachel: Which ones?

Josh: Cursor, Claude, Firefly. We have to be somewhat thoughtful about anything that touches data because we have customer contracts that say we won't do that. I was really surprised at how quickly our customers have said, "we have an AI policy that is part of both the NDA and the contract and the MSA. And it says you will guarantee that nothing covered by this NDA ever touches an AI agent."

Rachel: Interesting. So the customers are fiercely protective of their own data and don't want it ingested into the models.

Josh: Yes. So we can't, for instance, connect Google Drive to an MCP server that anything talks to, because we might have other things in that Google Drive instance that are not ours and are covered under NDA. So right now it stays in the "what are we doing on marketing in the public web" bucket. So, you know, use ChatGPT and write a bunch of content.

So we definitely use "help me write a white paper." And this is part of what we were talking about before we started: the nature of the funding environment today is that we're a very lean company, because we don't look like an AI company. And that means we do more with fewer people than we would have, say, in the ZIRP period.

So the expectation is our marketing department is less than one full time person and yet we should be on all the social medias, we should be in the press, we should have a great website. So definitely AI plays a role there.

And on the coding side I would say we are not on the very cutting edge. We're probably a quarter behind the cutting edge, because I know where the cutting edge is. We have friends in the portfolio who are on that cutting edge: here's the newest, coolest MCP server mechanism, and here are the next-gen models.

But we do human-in-the-loop PR reviews of everything that goes into the products, either on the mobile side or on the back-end services, the verification chains and so forth. And we definitely have an old-school security team that's monitoring our infrastructure. We have fairly old-school practices around firewall rules and the actual infrastructure.

Rachel: Yeah, as your customer I would hope. There is an interesting follow on question to those draconian NDAs though. We investors love to talk about data gravity and how you can accumulate enough data to start to build products that are more self-learning.

Does the prohibition on AI preclude that or can you use more traditional data science techniques to analyze the data that you're collecting just in the course of doing business?

Josh: Well, so the data we're collecting is this very interesting subset of, like, as little as possible. From an investor standpoint, we have made what they would consider a terrible blunder of not building a data business. I do not want to know what my users are doing. That is not my job. It is specifically our value prop to not know.

So what we have is log exhaust from our own infrastructure which is interesting in the like, "oh, we should learn about how to scale our infrastructure in clever ways or how are we being attacked." But we don't have usage data in the way that we would think about, how can we build a bunch of value out of this?

One of the trickier areas for us, from a policy standpoint: we do have customers who are using our product for anti-fraud, and in particular, in their use cases they're using Polyguard when they're already pretty sure that the person being checked is a bad actor.

And so they would like us to give them more and more signal, and we have this dilemma of saying that just because you're sure they're a bad actor doesn't mean we can treat them any differently from any other user. And the consent model says they have to proactively agree to consent to sharing data with you and/or with us.

And the consent is explicit: they're sharing this piece of data with you. We do not then have the right to combine that data in a community setting.

So a lot of fraud analytics and behavioral biometrics has been done with this sort of community data perspective that says, "oh, I see that Josh is using the same device to pretend to be 10 different people and talk to 20 different endpoints. I should flag that device as bad because it's being used in all these bad ways."

But in order to do that, we would have to agree that sharing data across a bunch of different customers was allowable. And Josh did not give consent for that.

Rachel: Right.

Josh: And so the moment at which we could do it is only for us to share that insight with law enforcement, if we had the insight at all. And that would really only be because they're failing a verification multiple times. And that's some interesting signal that we do have because we're part of the verification step.

Rachel: Yeah. And producing any false positives on that is going to have a very business limiting effect for you.

Josh: Yes, absolutely. And again, we're not in the business of proactively taking things to law enforcement, unless that is in the context of "here are actual harms being done." And then our customers would have to have their own relationship with law enforcement, such that law enforcement was taking something we had shared with them back to that customer. Right?

So there are those cases in financial contexts, where a bank might ask, and I can say, "Look, I can't tell you this about your users. They did not give me consent to give you that data, and I'm not going to. But if I have an obligation to share it with law enforcement, and I do, and law enforcement shares it with you because you're an impacted party, that's the right way for this information to flow."

The reverse problem is the subpoena issue. When law enforcement says, "We believe we have the right to know this thing about your users," I say, "Well, if you have a subpoena, I have a duty to inform them, if I'm allowed to." And if I have the data, I have to comply with the subpoena, whether it is ethical or not, or lawful or not in whichever jurisdiction.

And we are definitely in a moment right now where I do not want to have the data because I don't want to be able to comply with the subpoena that puts me in that position. So we just don't have that data.

Rachel: Yeah, yeah. You may have to bootstrap this thing.

Josh: Yes. Yeah.

Not building a data business means we're less investible, but it means we have a product that people feel safe using.

Rachel: Yeah, yeah. It's an ethical decision.

Josh: I do think the international market is enormous for that too, by the way. I think there's so many countries where, because we built it this way, we can do business where at this moment in time, other American companies cannot.

Rachel: Yeah. And like the focus on venture backed startups has kind of distorted the incentives in all kinds of infrastructure building. I love bootstrapped companies. I think that they are a really viable alternative to the growth at all costs mindset.

Josh: Yeah, I like venture, I've raised venture. But I do think there's a middle ground of like you can raise venture from visionary investors who understand that you're building a business that you want to feel good about operating. It doesn't have to be enshittification all the way down.

Rachel: Yeah, yeah. So as a big user of Claude and Firefly, are you worried that LLMs are going to replace engineers? What advice are you giving to college graduates these days?

Josh: So my daughters are 22 and 18 and I am legitimately worried that LLMs are going to replace 60% of the workforce and that our answer to that has to be UBI because otherwise we have a giant revolution coming where nobody has a job that feels meaningful.

Rachel: Yeah, but eight guys have all the money, so like they're doing great.

Josh: Yeah, sure. I have a lot of billionaire friends too, so I'm the weirdest mix of capitalist and socialist, you know, and I embrace all of the paradoxes of that position. But yeah, I think the structural problem we have is that the ideal dev team right now is senior developers and a bunch of LLMs.

Rachel: Mhm.

Josh: And we can't have senior developers without having junior developers. So how does that become sustainable? And I'm old enough and grouchy enough to feel that I don't think dev teams with zero human developers are ever a good idea. I think I have already worked on the largest dev teams I will ever be a part of.

Rachel: Wow, we've reached peak dev.

Josh: I mean, to be fair, OpenStack was 30,000 contributors. Right. So it's hard to imagine a larger community than that needing to exist.

Rachel: But we still need junior devs because you don't get senior devs unless you have junior devs. Is it going to look more like medicine, where we just have internships?

Josh: No, I think medicine is very fundamentally broken and I don't want to copy that model.

Rachel: Yeah, absolutely.

Josh: I think the way that we collaborate with AI is not quite right. And there's been some interesting studies on this. It's helping people turn their brain off.

Rachel: Yes.

Josh: It's not what we want. And if you look at what we did at Pivotal for years with pair programming, or what we did with mob programming before that, it was about a collaborative environment that turned everyone's brain on more.

And I think there are experiments happening. I've been coaching a friend of mine through his early startup phase. I think actually you met Francois, working on Decode, which is a really radical rethinking of like, what does it mean to feel like you're collaborating with AI around building an application. I don't think they've got it quite right yet either, but the experiments are going in the right direction. Where I can imagine a collaborative development environment where junior developers can become senior really fast.

Rachel: Yeah.

Josh: Does that mean we need them there or they need to be there? I don't like the model of medicine that says we're just going to abuse interns for seven or eight years.

Rachel: Right, right. The guy who designed that system was high on cocaine 24/7 and it shows.

Josh: Yes.

Rachel: But I do think all of this conversation is exposing a fissure in how we talk and think about work and jobs. Like we have tied the fundamental human desire to play and create to the human necessity to keep our meat sacks alive. And if we separate those two things out, if there's universal basic income and it's like a Scandinavian country and you can just live a good life, then you want to work on things that matter. You want to start a punk rock band or make indie films or write open source software.

Josh: Yeah.

Rachel: And that kind of restructures everything. Like, you know, it's really fantastic to hit the point of your career where you don't have to work anymore. You know, the kids' college is paid for and the house is paid for and you can start a venture company to invest in the founders that you believe in and not have to persuade other people to invest in them anymore.

Josh: Yeah.

Rachel: I want that for everybody. I don't want to gatekeep freedom.

Josh: That's where it gets so interesting.

What people do when they have freedom is so much more interesting than what they do under desperation.

Rachel: Yes. Yeah. And I think that strikes to the core of the problem, like with who these scammers are and these awful internment camps on the borders. You know, UBI in America isn't going to be enough.

Josh: Yeah.

Rachel: Doing something to reduce income inequality all over the world is the only path forward I can see. And right now so many of these tools are dragging us in the opposite direction and exacerbating income inequality.

Josh: Yes.

Rachel: Anyway, rant over.

Josh: No, geographic arbitrage is a huge part of the problem and UBI only in one place makes that problem worse and not better.

Rachel: Yeah, yeah. As it does in Scandinavia. I mean, that's part of why immigration has become such a contentious issue.

Josh: Yes.

Rachel: What are some of your favorite sources for learning about AI?

Josh: Wow. I'm terrible to ask questions about learning because I'm an incredibly self taught person. I didn't go to university. I dropped out of grade school to join the circus, haha. I skipped half of high school. I don't like being taught things. So I've always been like, I will figure it out. So yeah, my learning around AI is mostly about staying aware of what's happening so that I have a chance to be curious about it.

Rachel: Yeah.

Josh: So I'm on a bunch of like very old school daily newsletter things. Wing VC was a good one.

Rachel: Ben's Bites? TLDR?

Josh: Yeah, Wing VC's got one. I chat with journalists because they call me for commentary. So I always hear about what they're hearing about. Because I was very early into social media, I end up getting invited to early social platforms. So I had like a Sora 2 account in the first few minutes.

And so then I'm always trying stuff. I was like, "oh, I'll brew install Claude and see how it's doing this week." And so then my awareness is pretty high and then I like to try stuff. So, you know, shortly after the MCP spec came out, I ran some servers and then wrote some servers and I was like, "oh, okay, here's where this protocol is. I've got a basic understanding."

It's really the step functions that don't look like step functions that surprise me. I don't know how to put that better. But like, what is the difference between dial-up and broadband? It's just speed. But at a certain point "just speed" becomes a categorically different universe we now live in.

Rachel: Yeah, yeah, yeah.

Josh: What's the difference between GPT-5 and GPT-4? It's like speed and accuracy and context window size? But it isn't really. It's like what questions does it get right most of the time, and how quickly?

Rachel: Yeah, yeah. It's a qualitative change that comes at certain exponential thresholds.

Josh: Yes. So I guess the example I have to give. So we had built an iOS version of our app initially, and then we're like, ah, we've got to do Android really quick. I wonder how quick it could really be. And one dev plus Cursor built the Android clone of our iOS app in under a week.

Rachel: Oh, my God.

Josh: Like Rev 1. And we had it done and polished and in the app store in under three weeks.

Rachel: That's astounding.

Josh: And that's not simple. That's like, oh yeah, we're going to run TensorFlow on the phone and store encrypted embeddings of your biometrics and compare them in real time, with a bunch of image pipeline processing stuff. So that's the kind of workload we're talking about.

And now I see people saying, hey, we just wrote a spec doc, turned Claude loose for 30 hours and we ended up with a fully functioning clone of product X or product Y. Yeah. I think a lot of it is still in these sort of communities of who's trying what. So my communities are mostly other founders.

Rachel: Yeah.

Josh: They understand what it takes to be a super early adopter.

Rachel: That's why I like hanging out with founders.

Josh: Yeah. Yeah.

Rachel: Josh, if everything goes exactly the way you'd like it to for the next five years, what would the world look like?

Josh: Okay. Well, there's two or three big lumps. UBI at a global level. I was talking to my sister last night about like, how does that need to work? Who needs to administer it and how is it paid out? And I have been a die hard cryptocurrency hater since the day cryptocurrency came out. And UBI is the first thing that's making me rethink that position.

Rachel: Because it's not a fiat currency.

Josh: Yeah. If we need everyone in the world to receive money regardless of where they are and where their citizenship is. Right? So I did some work for the Saudi government and I think a lot about the Bangladeshi residents of the kingdom.

Rachel: Speaking of having your passport taken away.

Josh: Right, exactly. Bangladesh is not going to send them money under UBI, and the Saudis certainly are not going to pay them money under UBI, but they need it probably more than everyone else.

Rachel: Yep.

Josh: So, okay. Cryptocurrency.

Rachel: Damn. That's the first good application for crypto anyone's ever pitched me.

Josh: I am publicly recanting my 100% opposed to all cryptocurrency forever position to say, if we get to use it for UBI for every human in the world, I'm in.

Rachel: Can it be stablecoin though, rather than one of the speculative ones?

Josh: Sure. I haven't even paid attention to what the differences between the various flavors of evil are. So, yes, UBI via crypto to everyone in the world. Polyguard for every human in the world. And I'm not an AGI maximalist, so I don't actually think that ends up being the solution to everyone's problems.

But I think if we end up with a thoughtful social embrace of the AI transition period, which is the next hundred years, it's going to be four or five full generations to digest what humanity needs to look like in the post-AI era compared to the pre-AI era.

Rachel: We'll make good pets.

Josh: I don't even know, even forgetting about pets. Like, let's just assume it's not AGI, it's just the AI we have today taken to its logical extreme, which is that labor is obsolete. But we still have people and they still need purpose. And our purpose can be art and music and love and research and science and learning to understand each other better.

Rachel: I mean, Iain M. Banks wrote about this back in the 80s, and all of the billionaires have read his books, but they haven't understood them.

Josh: I know. The fact is, Star Wars, the Star Wars universe, what do most people do? They're the mechanics that service the robots that do the work.

Rachel: Yeah, no, I'm a Trekkie. I'm all for fully automated luxury gay space communism yesterday.

Josh: Yes. And I don't think we're going to end up with one grand unifying purpose that humanity gets behind. I think people need to have the freedom to come up with their own one grand purpose. I think a lot of folks will get on the let's colonize other planets work stream.

Rachel: Those planets are made of poison. I urge people to reconsider.

Josh: I 100% agree. I'm not saying I'm signing up for that one. Some folks will say, "let's clean up the one we have" and be dedicated to fixing the mess we've got here. That's a noble, multiple generations' worth of work. And I think there are mysteries of the cosmos to be unpacked. And I think if we feel that we actually have a lifetime of not needing to scrabble in the dirt--

Rachel: I would just be looking at James Webb images all day, every day.

Josh: Oh, I would spend 10 years working on getting myself into orbit so I could just camp out, like in a pod attached to James Webb. You know, James Webb, but personal. I want a joystick. I want to pan around. "Oh, zoom back in. Go back over there."

Rachel: That is the best segue I've ever had, to my last question. Favorite question. Your platform is so good. I've made you President of the Galaxy. And with this office comes your complimentary generation ship. A starship that takes more than a hundred years to get to its destination. What are you going to name her?

Josh: Dosadi.

Rachel: Dosadi.

Josh: Have you read The Dosadi Experiment?

Rachel: I have not.

Josh: Frank Herbert is really well known for Dune.

Rachel: Frank Herbert is actually a really good writer.

Josh: He's a great writer, and he did this weird collab where he wrote with a different sci-fi author. They did a few fascinating books together. And in the same way that Dune was mostly about politics and power and not really about sci-fi, The Dosadi Experiment and The Jesus Incident are mostly about human interaction and not so much about the sci-fi framing.

So the generation ship has to be named after the big problem that we think is solved during the trip or struggled with during the trip, and not what do we do when we arrive.

Rachel: Yeah, that's a good one. I mean, I named Generationship because the Earth is our generation ship. And those are exactly the wicked problems we're trying to solve.

Josh: Yeah, it's how we get along while we're on board.

Rachel: Josh, what a joy to have you on the show. I hope you'll come back. Thank you so much for your time.

Josh: Really a pleasure. Thank you.