
Ep. #8, The AI Preceptorship Model with Scott Hanselman
On episode 8 of High Leverage, Joe Ruscio sits down with Scott Hanselman for a conversation that goes far beyond prompts and productivity gains. They talk about craftsmanship, learning by doing, the long-term talent pipeline for engineering teams, and how AI could either free people to do more meaningful work or simply accelerate existing inefficiencies. From pair programming to Star Trek economics, this episode examines the bigger human questions behind the current AI boom.
Scott Hanselman is a programmer, teacher, speaker, technologist, podcaster, and writer who works at Microsoft/GitHub as a Vice President and Member of Technical Staff. He has been coding for more than 30 years, blogging and podcasting for more than 20 years, and is the host of Hanselminutes.
Transcript
Joe Ruscio: All right, thanks everyone. Welcome back to another episode of High Leverage. I'm super, super excited today. I'm joined by Scott Hanselman, who's a VP and member of technical staff at Microsoft. Thanks for joining us today, Scott.
Scott Hanselman: How's it going, sir?
Joe: Good, yeah, really excited. We've been talking a lot, with a lot of guests on the show, about obviously everything going on with agentic AI and programming and coding.
But you and some of your colleagues recently put out a really fascinating article in the Communications of the ACM that talked about a side of this that I think some people have been paying ever so slight attention to, but that's really not been a focus with all the excitement about what agents are doing now, which is: what's the future of software engineering going to look like in this world?
Scott: Yeah, I live here in Portland and I worked at Portland Community College and I went to Portland Community College and then I went to OIT. I taught at both. So even though I've been in tech for money for 35 plus years, I keep coming back to being a professor.
Joe: Right, right. Yeah. You have a background. I mean, that's a good-- I don't know your origin story. I was going to say--
Scott: A little bit of framing. Right? So I'm the first one in my family to really go to college. And my dad is a Portland firefighter, my mom worked at the Oregon Zoo. So, you know, we're craftspeople working with our hands. But then I got into computers because back in like 91ish, Portland Community College kicked off a software engineering degree, not a computer science degree.
Pure engineering technology, that is the science of shipping, which is different than computer science, which a lot of the folks on the call may have done, which is compiler theory and OS theory, which is all fine but has nothing to do with how you ship. And this was right when test-driven development and The Pragmatic Programmer and Steve McConnell's Code Complete-- like, this is the kind of inflection point in the early 90s, and then, you know, in the late 90s we started getting CI/CD and stuff like that. They just called them build servers back in the day.
But I propose that the act and practice of shipping software is different from computer science. And my degree is not in computer science, it's in software engineering. When you bring someone into a new job, are they or are they not qualified engineers? And software engineering is not the same as being a plumber or being an electrician, where there's boards and there's tests and there's formal apprenticeships and all that kind of stuff.
But then my wife is a nurse. She went to be a nurse, then a charge nurse and then the nurse of nurses, the house supervisor that runs the whole hospital. And there's this concept they have in nursing called a preceptorship. And a preceptorship acknowledges that the nurse arrives at the hospital after having passed their boards. It's called the NCLEX. They pass the boards and they are a fully qualified nurse.
Day one, hello, I've graduated, I passed the test, I'm a nurse. But they don't have any context. They're a baby nurse, but they are fully qualified to practice medicine as a nurse. We don't bring them in and then say, okay, you're an intern, you're an apprentice, you're minus one. And then you have to dig yourself out.
And when you name someone an apprentice or an intern, you're basically saying, you know, dig yourself out of this hole. And if you, if you wash out, you're like, well, they were an apprentice, they were an intern. What nursing does is they have a nurse preceptor, which is a nurse that is trained to make the baby nurses into full fledged nurses.
And it is their job and they are metricized and they are measured on: "Did you make a nurse today? Did you like successfully turn a new fresh nurse into a senior nurse?"
So we are proposing that in a world of AI-augmented software development, where early-in-career people are being chewed up because they think, "what's the point? I just learned about compilers in school and now it's all just vibes," or older people who are like, "why did I spend the last three or five years honing my craft? It's all vibes."
So you got people on both ends of the bell curve, kind of like bummed. How do we pair up seniors, not as a mentorship, not as an apprenticeship or an internship, but rather as a, "come on in here, join us. We're going to put you in the front lines, we're going to do this stuff together and we're going to give you all of those great experiences that I got to have by virtue of the time of my birth," putting them in the front lines like a nurse.
You know, "stab a guy in the neck with a pen. Oh, you saved his life. You cleared his airway. Oh, that was awesome. I'm going to be a nurse." I want them to have those experiences, you know, like when was the last time you dropped production? And you had that like, "oh crap," you know that, I want the young people to have that experience.
Joe: Yeah, yeah, there's a whole lot there. I got a degree in computer science in the late 90s and you just reminded me at the time there was almost a-- I think in a lot of computer science departments, almost kind of a perverse pleasure in being like, hey, we're not here to talk to you about like how software gets measured or deployed. This is purity of like computer science and algorithms and Big O notation.
And I remember like our department had this one weird professor who had a software engineering class and you know, which I took because I was actually very interested in like well, how is this, how do you actually make this practical? Right? Like how do you, in the real world?
And you know, I was fortunate to have some internships. I worked at Motorola on telecommunication software and I was fascinated. They were SEI CMMI. I don't know if you remember that, like the software maturity model.
There was a lot of rigorous thinking or attempt at rigorous thinking, I think around how to actually codify software engineering. And yeah, I think, well, first and foremost-- I'll come back to that because I had one follow on question there.
But how do you think, just in general-- before you get into how to make more of them, and there's a little bit of foreshadowing there, because you are concerned about how to make more of them-- how do you think the role of the human software engineer fits long term into the creation and management of software moving forward?
Scott: So I thought in the past, as I've been developing my perspective on this with a historical context in mind, that every era panics. When I started, I was told by the old neckbeards, non-gender-specific neckbeards, "hey, you're using C, it's going to rot your brain."
And then we got color. I remember when I went to Nike in the 90s and we had syntax highlighting, and they're like, "oh, it's amazing, it's going to rot your brain."
And then we got IntelliSense and Visual Basic and start typing and it autocompletes and "oh, that's going to rot your brain." And then Stack Overflow rotted our brains.
The difference is though, now we have actual Harvard studies and MIT studies that show that it rots your brain. Not the early stuff, but the AI does. So it is the biggest abstraction layer ever.
Joe: Right.
Scott: You just spackle on top of 50 years, 60 years of software engineering with the PowerPoint compiler that we always used to joke about.
Joe: Yeah, yeah. So I think, I mean you're referring to, and I do like this, that if you look at the history of IT, it's a history of just stacking yet another abstraction layer on top.
Scott: Right.
Joe: And reducing the toil software engineers have to do in their day-to-day work, allowing them to focus on higher level primitives. But I think to your point, this is the single biggest leap ever in one new abstraction layer. Right?
Scott: Right. And as such though, there's system one thinking and system two thinking. You're familiar with this concept?
Joe: Yep.
Scott: System one is fast, automatic, intuitive. It's just, I know this because I know this. You go from A to C instantly. And system two is deliberate, it's analytical, it's working together. System one thinking is your unconscious automatic thinking. And it's effortless, intuition.
There's an argument that we are outsourcing our thinking to system three thinking. We're applying that metaphor and retconning it. System three thinking is the externalization of judgment that is problematic. That is literally: I'm not going to think.
And when you vibe an essay, this is all facts. I don't believe I'm saying anything that's controversial. If you vibe an essay and get a B, they have shown that if you are asked questions about the essay a couple of days later, you don't remember because you never wrote it.
Joe: Mhm.
Scott: So where did the deep work happen? Where did the deep thought happen? And I can say that I can feel that, I can feel like, I've got four monitors here and that one's running an agent. I just closed three issues and I'm vibrating with the serotonin of running multiple agents.
Joe: Right, right.
Scott: I couldn't tell you if I really fixed those or not. So there's a deliberateness to it. Like, you know, we're all supposed to go and work out, Joseph. We're all supposed to go to the gym and work out and lift weights.
Joe: Right.
Scott: But now you're sending a forklift to the gym on your behalf-- I'll stay here-- and it lifts the weights for you. And then you're like, "I don't know, my muscles aren't getting bigger. I don't know what's going on. But the weights got lifted, so that's good."
Joe: Yeah. And like you, I think I've kind of been constantly open minded and trying to adjust my thinking as the capabilities improve. Yeah, I think that system one and system two thinking is interesting. Right? Like as we're building, you know, a lot of people have been referring to them as software factories. Like the role of someone whose primary focus, I mean even going back to like the difference between software engineering and computer science. Right?
Like historically, the role of someone whose primary focus is like, okay, how do I take the kind of messy requirements and constraints of the business, and how do I translate that ultimately into working software that meets those needs? Right.
And the abstraction, to your point, is undergoing a massive shift. But I do believe there's still a role, you know, for accountability purposes, for kind of final fitting of the horizontal technology to the specific problem domain. But it is interesting to see, you know, as the capabilities come out, how that's shifting almost in real time. If you'd asked me what that role was going to look like six months ago, I'd probably have a different answer. I'd certainly have a different answer than I have today.
Scott: Right. And I'm, I mean I'm trying to be open minded and I also have to make-- You know, you and I have to look at this and acknowledge that we're looking at this through the lens of people of a certain age. I think I got a couple years on you. But the point stands. You're not 25.
As such, we keep using the word "toil," which I've, I use a lot to refer to the work that we don't want to have to do. But then we are also realizing that early-in-career people, our kids need to toil. To be a human is to toil. Otherwise we're just being-- You know, it's like the guys at the end of Wall-E, they're just floating around and getting tacos thrown into their faces and they don't have to do anything.
Joe: Well, and I'm not sure, too, to that extent, because-- I think what you're talking about, your preceptorship and the nursing, I think it's always fascinating to me to see when you can bring kind of practices from, you know, one domain. You know, it's a lot around continuous delivery, that comes from, like, a firefighting background, which, you know, sounds like your family has that as well.
What's interesting to me, especially when this stuff really started breaking out, some people, you know, who are probably of our general cohort, and like you said, there'd be some concerns, like, oh, have I been doing this for 20 years for nothing? And my perspective has always been, like, at some level, I think if you've been writing code in industry and learning all this, that you're almost like pre-atomic steel, in a sense. And then I'm like, I don't know where the industry--
To your point, there are certain things you can learn in a book or be exposed to theoretically. But there are certain things, I think, and maybe that goes to your system one thinking, you can really only learn by doing the toil. Right? Like, you can only learn it by doing the work.
And it's a fascinating question of, like, if the cost of doing the work is so high relative to sending the forklift in to lift the weights, like, will people do that? Like, what kind of motivation, or--
One of the most fascinating parts about your article, maybe we could go in here for a bit.
Scott: Sure.
Joe: You raised the idea of these tools, like Claude Code or something, actually having a learning mode where you would literally flip a little switch and you'd still be using Claude, but you would be forced to do some of the work. Could you dig into that a bit?
Scott: Yeah. Well, so the idea is that-- And I love analogies, and people who follow me online who may stumble upon this interview will probably be sick of this one.
But if you drive stick shift, you have a fundamentally different relationship with the vehicle, and I don't think that's controversial. If you are a person for whom only Uber has ever existed, like you grew up in a world where the cloud existed, meaning that the business environment and business problem pre-cloud didn't exist, therefore you don't have any context. Like it was always the cloud. The iPad was always a thing. So there was no pre-touch world. Wi-Fi was always a thing. So you never had to go from one Wi-Fi hotspot to another Wi-Fi hotspot because you grew up in the global north.
All of these things change your relationship with technology. So if you don't ever have to manage your own memory, managing your own memory is archaic. It's crazy, right? If you never have to even see code, that changes your relationship with the computer.
So then the question is, are we, am I, is the paper advocating for an old man who shakes fist at cloud, "oh, these young people gotta drive stick shift."
I think the point is that at this point in our relationship with computers, the abstractions leak a lot still. And until they don't leak at all, we need to make sure that a reasonable number of us still know how to drive stick.
Joe: Yeah, I agree with that, and I also think, probably like you-- I mean, my first professional work, for close to the first decade of programming professionally, was doing systems programming in C. And that, much like driving a stick, to use that analogy, requires you to have a level of mechanical sympathy that working in a higher level primitive like .NET does not.
You know, working in something like .NET gives you kind of superpowers and the ability to work in higher level primitives than something like C does. But one of the things that's interesting to me is, in IT, all these prior abstractions, like moving from a stick to an automatic-- or moving from manual memory management to managed memory, or even something like Rust, where it's like, no, not only is it memory managed, but it's memory safe. Right?
That's deterministic in that it's like, okay, because we've made this one choice of a language to use or a primitive to use, we deterministically gain these additional primitives.
Scott: Right. It's unambiguous and it's solvable. It's provable.
Joe: Provable. Right. LLMs are pretty famously not deterministic. Right? And one of the things that fascinates me about them is, I've actually been struggling-- maybe you have a good one-- but I've been struggling to think of another technology advancement like this that has this level of non-determinism, this probabilistic nature. You know, I can't think of one.
Scott: You're gonna laugh. But, you know, it's pretty straightforward. It's the Internet.
Joe: Oh, okay. Interesting.
Scott: You Googling for something and me Googling for something is not deterministic because you get a different thing.
Joe: Right.
Scott: They've changed the algorithm. When page rank was a thing, we would all Google for stuff.
Joe: Yeah.
Scott: Like, I grew up in a time when you Google for Scott, you find me. Now you Google for Scott, you find Scott toilet paper and Scott bicycles and all these other Scotts that have SEO'ed their way up. Additionally, if you use Scott bicycles and Scott toilet paper, the algorithm knows that, so you're going to get those results, and I'm on page 19.
So the Internet itself is not deterministic. You go to a Stack Overflow question on Tuesday, you get one answer, that answer has been updated. You go back five weeks later, it's different. So the Internet itself is a giant abstraction across all human knowledge that is not deterministic. And now AI has scraped all of that, added a little bit of randomness, and now it's a big giant game of Family Feud.
Joe: Right, right. Well, coming back to software engineering, heading towards the other end, because this is what I was thinking, I've been wondering if-- Because again, getting back to what the role of the human could be, I do wonder if we're heading to a place--
So, when I was studying computer science, it was at an engineering school, Illinois Institute of Technology. And a lot of my peers were in civil engineering or mechanical engineering, electrical engineering. And they always used to roast me a little bit. And they're like, hey, that's not real engineering. Like, when I graduate, I'm going to eventually have to take a exam and I'll be a professional engineer.
And you mentioned this. There's, like, boards, and as part of my work, I will sign my name as a professional engineer to a bridge being built or an HVAC system that's installed. And if something goes wrong, I'll be, like, liable. Right? And that's quote, unquote, real engineering. There's always been this kind of conversation.
And some part of me wonders if-- And there's always been this conversation about, like, oh, should software engineers be professionally licensed? And the pushback, correctly, I think, has always been like, hey, when you're building a bridge or an HVAC system or whatever, there are best practices in a fairly narrow solution space, although you can still be innovative.
Whereas software is just like infinitely fractally complex. I wonder now with these new like LLMs and the power and the non determinism if part of the role of the human is going to be like, oh, you're the professional engineer who's responsible for the software factory and you have to sign your name to the output it produces.
Scott: Yeah, yeah, that's a good point. You know, in my day job I work at Microsoft, and they are going out of their way to make sure that an AI never does anything where there's no human attached to it. So, like, if you do a Git commit, it'll say "on behalf of," you know, or if an issue gets closed-- everything's "on behalf of," that you're sending your agent, just the same way that your lawyer has power of attorney, but they didn't do it themselves, they did it on your behalf.
You're the person that gets in trouble if you check that thing in. So then human judgment is the only thing that matters. And there's that old IBM thing, I think it's in the manual. Like, a computer can never be held responsible.
Joe: A computer can never be held accountable, therefore a computer should never make management decisions.
Scott: Yeah. So that matters more than ever. But then I retire, you retire. Everybody comes in, they've never seen a computer of this time. They've been assuming a lot of things. The abstraction starts to leak. Who comes in and fixes it? That's where the preceptorship comes in. We've got to pass the knowledge on.
I make this joke about how there's not a lot of software engineers on The Walking Dead. You know? Everybody on The Walking Dead is like a mechanic, or they chop wood, or they always know how to rewire a nuclear-- They always come upon a nuclear power plant and rewire it and bring civilization back. But it's people who know how to, like, twist wires, not reboot routers.
I think there's a lot of value in trying to remember as much of the stack as possible. And the other problem though that we're not talking about is the hyper optimization culture driven by unfettered capitalism.
Because it used to be we're going to try to lower costs by 3%. The problem is we've been trying to lower costs by 3% every month for the last hundred years and now we're trying to squeeze blood from a turnip and drive costs to zero. Which makes us wonder why are we doing this?
Joe: Yeah, I definitely wanted to come to that, because I think you raised a point in your article about this, and I've been kind of having this same conversation with people. Again, I try to be a student of it, and in part that's just that with advancing age I happen to have lived through an increasing percentage of it.
But if you look historically, when I talk to people about what's weird about this or what's different, if you look historically the vast majority of like major advances in IT history, innovation cycles, they get adopted first at like the fringe, like the early adopters and that tends to often be startups, especially the last several waves. Whether you're talking about virtualization or cloud computing or cloud native.
Because as an early stage startup, you by definition have nothing to lose except maybe some money that you've raised, and that was really from your investors. So what do you care? And everything to gain if you find some edge. Right? And this strikes me as one of the first, and potentially one of the few, where basically it's almost like it's not coming directly from the tech--
Like the C-suite. The business executives are basically holding the metaphorical gun to the head and saying, "hey, you will adopt this, you will use this." And even if you're a very forward thinking CTO, you're maybe a little hesitant. Because historically you're like, oh, this seems really great, and we should do the same adoption cycle where we kind of test it in the small, and you turn around and the CEO is like, no, everything, tomorrow. Right?
Scott: Yeah. There's a classic Babylon 5 quote. "The avalanche has begun. It's too late for the pebbles to vote."
Joe: Oh, okay. Yeah.
Scott: Isn't that good? You think about, like, build servers and CI/CD. That didn't come from top-down. That was very much bottom-up. Right?
Joe: Yep.
Scott: And your point about memory management and memory safety, like there's no CEO who's like "hey, memory safety is super important."
So yeah, it does feel like this particular snowball has gotten bigger than we would expect faster because of the promise of 10x and 20x productivity.
Joe: Yeah, well and you make a good point because you know, I did spend the last several years of my career before coming over to this side running a semi large engineering org. And historically actually what would happen is you'd go to the C suite and say, "hey, we're going to deliver like not nearly as many features this quarter as we usually deliver because we have to peel a bunch of people off because we're going to invest in this like infrastructure technology. But don't, don't worry, it's going to let us move faster in the future."
And the response was like, "well, that sounds like some nerd stuff. And I really want you to deliver the same amount of features this quarter. What are you talking about? We're not going to deliver the same--."
I mean, some of them were forward thinking, but it's definitely a flip now, where people are like, oh, you want to do some AI stuff? Like, the budget's wide open, you know, don't worry about it, just move fast. But I do think some of that comes from-- I've been telling people, historically, I think the business viewed the engineering department as this black box that is made up of really expensive, annoying people who complain about the free soda and generate software.
And if you zoom way out, AI and LLMs feel like, oh, from a token perspective, maybe still kind of expensive, but, like, they'll not complain about soda and they'll spit out software, and we won't need these annoying engineers anymore.
Scott: Yeah. But this, this is the whole thing though, like, this is getting a little, maybe overly philosophical for your show, but like, what are we even doing, man?
Joe: Haha.
Scott: Right? So that's why I keep coming back to nursing and trying to remind ourselves that if we're not doing this to make people happier and healthier, then what are we doing?
Joe: Yes. Well, just coming back to your point, I think you and your co-authors are, like, a million percent correct in that there's going to be a need for future generations of human software engineers. And who knows, maybe there'll be on average many fewer on any given software project.
And one of the things I come back to is like, okay, great, even if we reduce the amount of toil, right? Like, say an engineer's job is 80% toil, and that 80% goes away. It's not like, "oh, cool, now I need 80% fewer people in that role." Well, there's probably some percent fewer. But, like, what can that human do now that 80% of their time has been freed up? Right? Like, this trained professional human.
Scott: Yeah, this is the thing, right? The billionaires will tell you that we're not going to need to work anymore, but they never explain how this is going to work. They don't support universal basic income. They just go hand wavy. Right?
So I keep coming back to Star Trek, and there's a really great book called Trekonomics that I recommend people check out, which gets at: how do you get to that level of, "I don't think about this kind of stuff anymore"?
The reason that I point that out is that I think about like hard driving career, Starfleet person, which would be like Jean-Luc Picard. Right? He wants to be an Admiral. He wants to do great things. He's thinking about his career, but he's also thinking about helping humans. So he's kind of like the best of all worlds.
Then there's like an episode where he's like talking to a potter and you're thinking to yourself, wow, that's amazing. Like they're just hanging out, talking about pottery, but they're in a world where they could say, hey, "I need a pot." And it would pop out of the thing. Right?
Joe: From the Replicator. Right?
Scott: "Tea. Earl Grey. Hot." Right?
Joe: Yeah.
Scott: No one ever says, "hey, did 'Tea. Earl Grey. Hot.' put the tea maker out of business?" Maybe it did, maybe it didn't. But the point is that the only way that you can have a hard driving, high level, "I want to be the Admiral and help Starfleet galactically" person hang out with and connect as a human with a dude who's a potter on an alien planet, and they're vibing, is that neither of them is worried about the rent.
They don't have anything other than a trunk of stuff, a bunch of little tchotchkes that they've been lugging around the universe. One of them chose to be a big career person and fly around, and one of them is like, no, I love hanging out here in my village on this planet. And they're both connecting about craftsmanship.
So I thought AI was going to give me Fridays off. I'm working harder than ever. How are you doing?
Joe: Hasn't happened. Well, I'm making software for the first time in, like, 10 years again.
Scott: There you go. Right. But the question for both of us to ask ourselves is, "why?" Are we making babies' lives better? Are we making it so people can do pottery? Are we making it so I can do fun stuff? You know what I mean? Are we doing it because it's interesting?
Joe: Yeah. Well, along those lines, one of the things I think is fascinating about this is, understandably, most of the dialogue ultimately-- you know, it's not explicitly about this, but if you look at the intent, really a lot of what people are talking about is, like, you know, the legions of humans employed writing enterprise software. These are highly paid positions. It's a specific kind of software.
But, you know, I believe there's this, like, spectrum of software, right? And if you think of it as kind of like a-- I tend to think of it as like a long tail distribution where the front, very big valuable part is what most of this dialogue's about, which is like, the kind of software that takes like hundreds to thousands of humans to like, build and operate and maintain. What does that look like in the future?
But if you go down that tail, I'm kind of fascinated by the idea that there's this entire part of that spectrum, down the long tail, where up until today it's been mostly theoretical software, because of the economic cost of creating even the most minimal piece of software. For someone like you or myself, there's been any number of times in the past, say, 10 years where I'll see a problem and I'm like, "well, if I had, like, two weeks, I could bang out a piece of software that would handle this, and it would be a lot better," but if I accurately value my time, the expense of doing that just does not make sense. Right?
Scott: Yeah. Well, so I have a podcast. You have a podcast. I didn't have an administrative backend to my podcast for 20 years. I edited a text file sitting on an FTP server and then it moved from an FTP server to Azure Storage. And I edit that text file every week. And I've been doing that for 20 years. And then one day I spent 43 minutes and I used Opus and Claude with GitHub Copilot CLI, and I wrote an administrative back end, and now I have one.
Joe: Yeah, great. Yeah. I mean, for me, I have, I don't know, tens or dozens of these spreadsheets that I've been maintaining for, like you said, 20 years or something, that have something to do with, whether it's my car maintenance records or, if I'm going skiing with the kids, what's everything I need to pack? And, yeah, one by one, I've kind of just been knocking these out, where I'm like, okay, now there's a piece of software that I will only ever be the single user of.
Scott: Right. Personal software. Exactly.
Joe: Right. Yeah.
Scott: And then the question is, then what was the environmental impact of that? Was it a net positive? Was the 43 minutes of my time worth the 75,000 tokens? Will that be cheaper and easier? Will personal software, just in time software even, be a thing where I just want it done? Which brings me back to Star Trek and the interaction that they have with the computer.
It's like, "computer, analyze my expenses over the last X thing, you know, are there any receipts left open?" Like, right now on this other machine over here in the corner, I've got a Claw that is automating a browser to look for expenses and fill out the most baroque expense-reporting system that's ever been created, from the PDF scans that I made while I was off doing stuff.
You could say that that's a miracle and I've solved a problem. But the larger issue is, why doesn't our expense reporting system have a REST API, and why am I uploading scans of a receipt? Like, there's so much wrong with that. And then we've just spackled over it with computer vision and reading PDFs and OCR and machine learning and computer aided--
So it's like, yeah, I could do a whole presentation about how cool it is that a Claw can fill out my expense report. But no one's actually talking about how the system, the very system of expense reporting is broken. So I worry sometimes that these personal pieces of software, like your spreadsheets, should it be a spreadsheet, should it be a markdown file, should it just be in the ether somewhere, some global Joseph Redis of like all of your context?
Joe: Yeah, well, I'm definitely in a scenario we've been exploring, you know, what are these white spaces? Or what are the areas of missing software? And to your point, how's it look to solve that?
Scott: Right. And more importantly, does doing that free you up to do other interesting stuff?
So if I don't have to spend 20 minutes doing this expense report, am I doing cool pottery, am I reading a book, am I working out? Or does that just free me up to do more expense reports?
Joe: So one of the things that struck me, with you being totally right, is even historically, even before LLMs, there was always a version of this. Because I think early stage startups, understandably, historically are like, look, we're only going to hire senior, experienced people because we don't have time to--
And I used to tell people, because occasionally you meet someone who's like, "oh, we're going to hire someone really junior." And I'd be like, "okay, that's great. But understand that you are either going to just get lucky, or most likely you're going to need to dedicate real mentorship bandwidth. One of your senior people has to have a chunk of their time carved out to mentor this person. So make sure you're thinking through that."
And I do think historically some very large-- If you look back, again going back 30, 40 years, to the IBMs. Or even, we had a version of this, like I said, when I was an intern at Motorola. They had a Motorola University, they called it. Historically, I think some very large corporations had some amount of education built in.
And then, to what you were talking about earlier, I think over the last several decades of "gotta be 2% more efficient next quarter," a lot of that has gone away. I'm sure there's still some, but for Fortune 500 companies, the preceptorship is a good start. It seems like there's going to have to be a very formal kind of investment in this.
Scott: Yeah. So we're talking to universities as well, because there's a gap where you leave university and then you're just kind of yeeted out into the world, and then it's on you to figure out where your job is. I got an internship directly out of school, but a guaranteed internship would be nice. Right?
A direct relationship between a university where people are on the way out of university and people who are hiring would be nice. You know, those kind of things need to be called out. And I think that there's no paved path to get from out of school into industry, into doing stuff.
Joe: Yeah. Well, maybe another way to put it, like the gap, to your point, between like a fresh out of undergraduate and the work-- Like there's always been a gap there, but arguably the gap now is going to be much bigger than it ever was before.
And I think historically most companies got by where they were like, well, we'll hire 200 and, I don't know, half or two thirds will wash out, whatever, and the ones who make it, we'll have enough of them. And that strategy seems like it's not going to work anymore.
Scott: Right.
Additionally, if you as a big company don't train early-in-career people, then where are you going to get senior people? You will get them from your competitor and you'll pay more and you'll fight with your competitor and you'll be poaching people. And that's not sustainable.
So either you invest in the future and go and work out and lift weights, or you're going to have to just look for seniors at other companies where they were trained and cared about and loved and appreciated by those other companies. The problem is, as with working out, humans are not particularly good at planning, doing the hard thing.
Joe: Yeah.
Scott: And that's a problem. And then also, capitalism is, like, getting kind of out of control. So, you know, unsustainable growth. Infinite growth on a finite system is a cancer. That's, like, the literal definition of cancer. It's unconstrained, unstoppable growth in a finite system.
So we've got to figure out a way to come up with Trekonomics such that AI allows people to do the things they want to do with humans in the loop so that tedium is done for them at least as much as they would like it to be, which may or may not be tantamount to toil, because tedium and toil are kind of different things.
Joe: Yeah. Double click on that. What do you mean by that?
Scott: So if I'm toiling in the mines, I could think about, why am I in the mines? Why are there mines? Who's the boss of the mines? What am I learning? What am I doing? How am I improving my body here in the mines?
But tedium, using the example of the expense reports, that's not toil. That's not mine work. I'm not getting muscles. I'm not doing anything. I'm literally wasting my time.
Joe: Right, Right.
Scott: No value exists in me doing expense reports.
Joe: Okay. So it's sort of like, are you learning or not? So if you're toiling through, like--
Scott: No. There's arguably zero value. Like taxes. I just did my taxes. Have you done your taxes?
Joe: In process. Haha. It's not the 15th yet.
Scott: Right, exactly. Right. And it's a thousand tiny paper cuts. And it's like, either you're doing them yourself or your guy or gal is emailing you, "hey, I need this form. Hey, you gave me this form last year. I didn't have that form."
We could go, "oh, we're going to solve it with AI. I've got a great system. It's going to solve taxes with AI." Why are we filing taxes?
Joe: Right.
Scott: They know how much I owe.
Joe: Yeah.
Scott: Solve the problem. It's the same kind of thing, right?
Joe: Yeah. Yeah. Well, I think that's a really good point. It's to separate whether the toil is something where it's like, okay, well, I'm writing this "for" loop and I'm debugging this thing, but I am learning while I'm doing it. Even if theoretically the AI could do it.
Scott: Right. So I'll go out and I'll say I'm an expert at .NET. Like, I don't need to write for loops in .NET anymore. I get it. So let the thing do the "for" loop. But if I don't know Rust, I'm not an expert in Rust, maybe the learning, training AI knows that, and it's the senior until I'm the senior.
Joe: Yeah.
Scott: So it said, "we're going to write a 'for' loop in Rust. I'm going to walk you through it and I'd like you to actually lift the weights before we bring the forklift in."
Joe: Yeah, well, it strikes me, coming back to that, is you could almost envision-- You were talking earlier about the different advances, you know, like CI/CD and-- But, you know, pair programming was a pretty popular concept. I don't think it ever got, like, massive.
But this notion-- I don't know if you've ever-- I'm sure you've probably had the experience of pair programming. But it had this idea that in pair programming you would take turns, but someone would be, I think they called it, the driver. And that was, like, you were the hands on keyboard.
Scott: Yeah, it's who's on the keyboard.
Joe: Yeah. It struck me there could be a mode where you could be working with Claude or something and saying, "okay, well I'm gonna drive for a bit. And so like, we can talk and whatever. But like--"
I think you mentioned too. Cause one of my kids last summer was using Khan Academy to learn Python. And I was kind of fascinated because they, I'm sure using some of this tech, like, they give you a little exercise and you write the Python code and then it reviews it for you and it says, "oh, hey, this looks good. Except this or except that." And it gives you kind of like feedback on the code that you wrote and where it could be better. Right?
Scott: And that's because Salman Khan and the folks that worked on Khanmigo decided as organizational guidance like, that it's organizational willpower, "this is what we're going to do."
But a Claude Code might add a feature like Autopilot, where I just take my hands off the wheel and like, oh do the thing. They acknowledge that lifting weights matters, while other tools acknowledge that, "Well, we'll just send the forklift to do the weights."
So I think that the right thing is a balance, depending on where you are, depending on the problem that you're trying to solve. I think a Tesla self-driving car might be science fiction and fun, but I don't want to put a 15 year old in that. I'd like them to know how to drive before they get that. So then the question, popping off the stack from the beginning of the talk, is, am I, are we just old man who shakes fist at Claude? Oh, you don't get--
Haha. Oh, I just said old man who shakes fist at Claude instead of Cloud. You don't get memory management, Joseph. You haven't learned, you haven't mastered memory management. You don't get just in time compilation. You have to compile main-- Like, that's gatekeeping. But at the same time, do I want everyone making software? You know, there's a lot to unpack here. I don't discount the value, though.
Joe: Yeah, it brings up, there was a meme that made its way around, you know, the timelines a few months ago that really struck me. It talked about people using AI and if you think historically, any large scale engineering team, you have your great engineers and 10x engineers and whether or not it's actually 10x, that's a whole other discussion.
At the end of the day, there's some middling engineers, probably some bad engineers, or engineers for whom it's not a good fit, whatever. But now with AI, they distinguish between slop cannons and turbo brains. Right? And the idea being that AI is leverage and it makes good things better and bad things worse.
And by the way, this isn't just restricted to software engineering, because I think it was actually brought up in terms of, like you're saying, people who will just buy the report, and it's really not high quality, the first pass coming out of the AI, and then they just slap it on their executive's desk without even looking at it. Right?
Scott: Well, I mean, I'm not sure about Orion and Artemis-- By the time we've published this, they will have landed, hopefully without any drama.
Joe: Yeah.
Scott: But like, I don't think you could vibe your way to the moon.
Joe: No.
Scott: And it would be silly to even think that.
Joe: Yeah.
Scott: So you have to ask yourself what matters and what doesn't matter.
Joe: Yeah. Well, I was thinking in terms of incentives for businesses to invest in things like the preceptorship and like ongoing training is: Do you want barely trained people, like slop cannons or, do you want turbo brains? And I think the only way you reliably get to the latter is--
Scott: I think that's a false binary. I feel like it's not just, oh, you're a 10x developer or you're a vibe coder, or you're a slop cannon or you're whatever.
I think it's just I want a world where I can occasionally buy Ikea furniture, and I want a world where I appreciate Japanese craftsmanship of carpentry. I want "Tea. Earl Gray. Hot," and I want a happy potter on an alien planet to be able to do his thing without fear of the rent pushing him out of his little studio. And if AI can enable that, then I'm pro that.
Joe: All right, well, this has been a mind expanding conversation. I really appreciate you taking the time and joining us today. Where's the best place for listeners-- I mean, I know you've got, you've got a podcast, you're online. Like, where can people get more of this from you?
Scott: Just go to hanselman.com. I've got a YouTube. I've got a podcast I've been doing for 25 years with over a thousand episodes. It's kind of like Science Friday. So if you like NPR, you'll like the show. It's a tight 30 minutes.
I've got a show I do with Mark Russinovich, the CTO of Azure, called "Mark and Scott Learn To." You can find that anywhere that you get your podcasts. And every week we learn something new. So you check me out there and subscribe to my YouTube.
Joe: Awesome. Well, thanks again, Scott. This has been great.
Scott: Thanks for having me.