Ep. #85, AI/LLM in Software Teams: What’s Working and What’s Next with Dr. Cat Hicks
51 MIN

about the episode

In episode 85 of o11ycast, Dr. Cat Hicks unpacks AI's impact on software teams from a psychological and social-science perspective. Along with Ken, Jess, and Austin, she explores how AI magnifies long-standing tensions between solitary and collaborative models of development, and how fears about AI often reflect deeper issues like undervaluing collaboration or having unrealistic productivity expectations. The conversation also covers empathy, theory of mind, and pluralistic ignorance, highlighting why developers may prepare more for AI than for each other.

Dr. Cat Hicks is a psychologist and research architect who helps software teams work better, feel better, and talk better. As founder of Catharsis Consulting, she brings empirical psychology to technical organizations, focusing on learning culture, collaboration, and evidence-based practices. Her work bridges the gap between human behavior and software engineering.

transcript

Dr. Cat Hicks: These days, I'm listening a lot.

I feel that people say a lot of things to software developers, and I'm not sure that we listen to software developers enough.

So everyone's bringing me certain conversations, and saying, "But there's kind of black holes at the center of these conversations."

So questions like, "Okay, if a tool like AI is going to radically change how developers work, why are developers so afraid that they themselves as workers, as a labor unit, you know, even, will not accrue the benefits of that efficiency gain?"

That's kind of a cold-hearted and economic way to think about it, but a lot of times when changes come to our jobs, we can imagine accruing benefit from it, but there's a tremendous amount of fear that individual developers will not get that benefit.

So why is that? That's one of the questions that I've had.

Another side of my work is understanding developers kind of end to end as people, understanding that software development and innovation in it comes from our minds, comes from what's happening inside of our heads, and that is also where our feelings live, and that is where our experiences of threat live and our experiences of community live and all of these other things.

So how is that brought up by AI? That's been some work that I've focused on, work that I called the AI Skill Threat Project.

But I'm really, really interested too in the structures that get us good quality information about these developers and technology teams in general.

So that's kind of the research architect part of what I like to think about.

What are developers experiencing that we haven't really created a place for them to even articulate that or have the language for?

You know, I know that this crew, you all have sent me some questions that are kind of like, "Why are we seeing such mixed different findings about AI?"

And I tend to think the answer to that sort of thing is, "Well, we haven't gotten specific enough about our questions, or we're using the wrong language or format for how we ask about it."

And so I'm interested in all parts of that process, like how do we actually create a science for developers that takes them seriously as people, takes their learning seriously, their creativity seriously, and then kind of demands a higher quality evidence really about these people, I think, and the way they work.

Ken Rimple: I think part of what it seems to be that might be giving people fear is it's a little more monastic working with an AI agent.

Cat: Hmm.

Ken: You're kind of having this conversation with a thing that's not real, but it acts like it's got conversational skills.

And so if you're working a lot with AI, and you're vibe coding, what have you, you're getting things done that you wouldn't be able to get done by yourself, but you're getting it done through the agent, right?

Cat: Hmm.

Ken: And then you have less of a, and maybe I'm making broad generalizations here, but it's not as social as pairing with somebody else.

So I mean, what is a team now when you're dealing with AI?

Jessica "Jess" Kerr: Yeah, and how does this change our sociotechnical system where now it's a triangle?

Cat: Hmm.

Jess: There's the software, the code that we run and write, there's us as developers, and then there's this third in-between thing.

Cat: Hmm. That's a provocative way to describe it.

One of the things that comes up for me is I really see this tension in software development and in really how we approach technology on a grand scale, like as a society, which is, you know, we have kind of two very opposing sides, and one is the side that says technological people work the best, and our society gets the most out of them if we just isolate them, and they're alone, and we want to look for these lone geniuses.

And these are like the stereotypes, right, that a lot of people hold about what software developers are like and what that work should look like, and also we can probably think about times we've had painful moments of seeing that this is a belief held by a big leader in the industry or something like that.

There is this whole raft of beliefs about this, and they're very strong and powerful.

Like I studied beliefs people have about software engineering in a number of different ways, and in one of our projects, we studied what we call belief that software engineering has a contest culture, and this idea that in order to succeed we have to be competitive and at each other's throats and really think of ourselves like isolated individuals.

That's a worldview, and it exists in tech, you know.

And then there's this completely different side, right, which is software development traverses the globe.

It requires you to collaborate with other people in this intimate way where you're sharing problem-solving, and you're trying to understand another mind, and we have all sorts of beautiful communal collaborative practices like code reviews and the kind of collective understanding and memory that technical communities hash out together.

You know, these incredible things that you see in the history of software development of people deciding, "We learned how to do this. Let's share it. Let's, you know, bring juniors into it."

So, sometimes when I talk to teams, I talk about, "You know, you probably feel this tension between a really isolated model and a really collective model, and that tension is there. It's kind of a war all the time."

And I think that AI just sort of dials up that tension. I don't think it introduces that tension as a brand new thing.

I think it kind of pulls the curtain back on the fact that we don't know how to protect the really collaborative parts of software development.

We often force people to work that stuff out in the nights and weekends, you know, or feel like it's not visible or valued.

And what are we going to do with that? And what are we going to do when our stereotypes about what makes work good actually come into conflict with what truly factually does make work good?

I don't believe at all that AI is going to remove the need for people to collaborate or understand each other's intent or work as teams, or set goals together. If anything, it's sort of challenging us to become really explicit about that stuff.

But I think it's also almost like a bomb is going off inside of this legacy that already existed of us not quite knowing how to reward those people who are doing that work or see that kind of skillset as technical, you know, the skillset of collaboration.

Austin Parker: You know, one thing I've noticed that's really interesting to this exact point, is I've been trying to kind of create an ethnography of AI haters and AI lovers as it were, mostly for my own personal sanity.

But in the sphere of technologists, I've definitely noticed, it's kind of hard to say where people come down on this because there are people that I know just professionally, personally, whatever, who have come out as, you know, very anti-AI, right, as a cultural phenomenon.

Cat: Yeah.

Austin: And then there are people who have come out as very strongly pro-AI as a cultural phenomenon and specifically in the narrow lane of, you know, just to be clear, like in the very narrow lane of, you know, AI as a coding assistant, as part of the software development lifecycle, as part of software teams.

The real discriminant I found is that the people on one side of this tend to view coding and programming as highly creative as an expression of some sort, right?

It is a craft, it is an art, it is, you know, sculpture, and there is an inherent beauty in creating these technical systems all bound up in social systems.

And then, on the flip side, there are people that see it from a much more utilitarian point of view where the code is not, you know, code was never the most interesting thing anyway.

You know, the code is boring. The code is the annoying thing you have to do in order to translate your desires to the computer, and what you actually care about is the end result.

You care about the product. You care about dah, dah, dah, dah, dah, dah, dah.

I'm wondering is this kind of just a mindset thing where it's the product people versus the sort of programmers as artists, or am I seeing something that is maybe more banal?

Jess: Austin, I'm guessing that the people who see programming as very creative are against AI, and the people who just want to get the product to work are for AI. Is that what you're saying?

Austin: In a lot of ways, yes. Like people that tend to see the programming, the actual code itself, as having inherently less value maybe?

I don't know. I don't want to like prescribe motives, but definitely that does seem to be the thing, right?

Is the code, is the actual code itself, is the programming of the code, is that something that has value?

And depending on where you come down on this, like that does seem to influence greatly how you feel about AI as part of the software development lifecycle.

Cat: Yeah, you know, I think it's really reasonable to think that any time a big change might come to people's work, it's going to be painful, and we could decide that pain is worth it or we could not, you know?

And so I think, a lot of the time when people talk about craft, you know, and they sort of write blogs about programming as craft, there's something very true and beautiful there, but I also think that there is a way in which that's just the argument that they hope is going to work, you know, and it's not an argument that answers all dilemmas.

You know, if we have a bunch of people who otherwise would never work with code at all but who would get big economic value out of being able to incorporate a little bit of programming, and they're really not going to be motivated to program as a craft, is that always a bad thing?

I mean, I think that it's a little unkind to say, you know, everyone should have to work at my level of technical depth, you know, and we've faced that sort of challenge a lot when we see work just become more abstract and more automated.

On the other hand, you know, I think that when we have this conversation, we're having a lot of conversations at once.

We might be having a conversation about the climate demands of computing and where data centers are being built and how these models are being trained.

And I have written about this a little bit, but as someone who's worked with data my whole life and, you know, often really thinks about the ways that translating the world into data can go wrong, you know, that piece is super painful for me.

And, again, I see AI as kind of, it's really turned the dial up or it's put its finger on, it's pulling the curtain back on these dilemmas we already had about how are we representing the world and making, you know, predictions and using models and using statistics.

It's hard for me to have one thing to say about why do people fall on one side or another side in thinking this technology's going to work for them, but those are some big dynamics that I see that aren't just about the design of the tool.

Austin: Yeah. It's hard to tease it apart, right?

Like, I feel like we saw this in recent tech bubbles, recent tech hype cycles around things like blockchain and cryptocurrency and even before that, like you had echoes of it in containerization and IaaS, in like changing ways of literally running software.

Like it's maybe a little more poignant because it's very difficult to tease out the distinction, right?

Like, it would be one thing if we were only talking about like AI coding agents, but in the year of our Lord 2025, it's very difficult to separate, you know, it's the same core technology, right?

If you're a visual artist seeing your pipeline completely evaporate because now things that you would get paid for before can now be done more cheaply by generative AI, then yeah, like I know visual designers.

Like, yeah, it sucks.

Cat: Yeah, I do too. I know a lot of people who write, you know, I see the pain in those creative communities.

I also see that software was already involved in taking value away from those communities.

And so there's an interesting, very big challenge I think we all have, whatever our jobs are, but, you know, I think about throwing this back to developers sometimes and saying, "You know, where has software already redistributed power and resources and credit, you know, and how is this just, you know, how do we set up the preconditions for AI to do this, right?"

And I just think, if we want a solution to those social problems, we're going to need a very strategic big social answer, you know, that's going to include questioning how software operates to create exploitation, you know, and that's not just AI.

It's kind of the question of why is AI like this, you know?

Why this version of AI, and what version of these kind of models do we like, could we trust, would feel more ethical to us?

You know, are there examples of that out there in the world?

I mean, I think, you know, certainly when I think about my friends who do, you know, machine learning to watch how plants grow and their roots, you know, grow, that's a very powerful use of technology that's enabled us to create more food to feed people, and in another context, the same technology is surveillance, you know, of human behavior essentially.

So answering that question is never going to be easy, but I do think that that's kind of the real question, you know.

Austin: So, on this, because of the complexity there, and the social questions, like we are seeing like this intersection now of AI tools, AI features, coming into the workplace in ways that are not specifically, you know, just code generation or AI agents for like operational stuff, right?

So you're seeing AI-powered analysis of developer productivity, right? It's looking at, like, pull request comments or velocity, et cetera.

You know, another thing, like I see a lot of this actually, is people using LLMs and AI to like help do reviews and peer feedback and OKRs, and there's a lot of like really interesting questions to ask, but I think one is how should people think about, this is sort of a mixed company, right?

Like, if everyone is going to have different levels of comfort with AI, and we want to bring in these things that may increase productivity or at least help measure productivity, right, 'cause like you said at the start, we often just don't know, right, like, we're not listening enough, we don't know sometimes what problems we're solving, and there's this race to quantify everything.

Like how can we introduce these sort of tools and workflows in a way that is empathetic and helps to bridge that divide to give us understanding while also making sure people don't like flip out.

Cat: Yeah. This is a huge question.

So I think it's really interesting that you frame this as developer productivity because I think it's really an interesting question whether it is about developer productivity just yet.

And I believe in asking questions like, you know, why are we discussing AI in terms of developers' abilities rather than in terms of the structure of the code bases we have to work on, and whether that's set up to make AI good or work well, or in terms of the company into which we're deploying this tool.

I do understand it because it is a lot easier to stick with individualistic kind of explanations.

And it's a lot easier for very time-strapped teams that have somebody putting pressure on them to measure developer experience in one month or whatever they have, and they're not usually people with a social science kind of background, so maybe someone who cares deeply, but the only language or format they've ever been given to do this is kind of pre and post individual.

"Did one developer get faster? Did one developer feel more satisfied?"

And those sometimes simple ways of approaching this can actually be very misleading, you know, and very contentious, and I think that always tying it back to developers' innate ability is almost the hardest possible question here.

And I think it's a very threatening and very misleading way to frame this, rather than asking: we have a responsibility to try to assess the impact of an intervention in our company, so how do we do this collectively?

How do we become an organization that wants to understand itself?

So that can look like saying, well we're all going to commit to a plan together, maybe even co-designing with our developers what they think should be measured, commit to measuring it over time, commit to saying if we find outcomes in this direction, we're going to do this.

And it's all a lot more mundane and a lot more roll your sleeves up than a magic AI intervention.

That's the kind of approach I try to guide teams to take, and I tend to find that even people who are deeply against one outcome or are on opposite sides of what outcome they think is going to result there, can come together and actually completely agree this is a good approach to study this, you know.

And that I think is the kind of buy-in we need to get.

We need to have room for people to have very different hypotheses going in to this assessment, and also to ask ourselves what edge cases are true at the same time.

You know, because it could very well be true we haven't specified our question precisely enough where AI works as an approach for a certain kind of work, and it really doesn't work for another kind of work, and if all we're doing is taking averages and mashing that stuff together, no one's going to be happy.

And I think that's the state of a lot of it right now.

Ken: And it's so new that you only have a few data points to work with anyway. So like, the longer you've done these things, the better you really understand what it does for you, I guess.

Jess: And we don't know the right questions to ask.

I just published an article about qualities of your code base that influence how much AI can help you with it. And that's just one aspect.

Developer skills is an aspect, but what are the other aspects of a sociotechnical system, the whole company or the whole team, that will make it amenable to AI assistance?

Cat: I love that, and I'm so glad that you're out there doing that and saying that, you know, because I think that that pulls us away from this hyper focus and this very threatening focus on individual developers and all these other things that leaders might want to use this story to tell a good story about their layoff or whatever.

You know, there's a lot of stuff going on here that's not actually about assessing the tools, in my opinion.

There is this paper I really like called "Behavioral Science is Unlikely to Change the World without a Heterogeneity Revolution," which is a really nerdy title. It just means-

Austin: That's really good though. And they say social scientists don't have a sense of humor.

Cat: You know what? Sometimes we have these papers come out that are just like: "y'all, what are we doing?"

And this was one of these. And it says like, yeah, sometimes things work in one context and they don't work in another.

Jess: Right!

Cat: News flash, you know? Yeah, but we have to challenge ourselves to ask that.

Jess: It's the classic consultant answer: "It depends."

Cat: Yeah.

Austin: That's also the senior software engineer answer though.

Cat: And it's all of our answers because it's true, you all, and my question as a social scientist in this space is can I create good science for the "it depends," an "it-depends" science, you know, and sometimes those will be generalizable findings, and sometimes they won't. Sometimes we will say, gosh, we're at a point in investigating this where we just need to even know what's happening.

Just observe it. We don't have good observation of the developer experience in my opinion, and I think that that is part of what creates all these mixed effects that probably would make a lot more sense if we had just observed certain other variables about the situation.

Ken: Can we ask you to tell people who you are?

Jess: Oh, right, yeah! Good idea.

Cat: I'm Cat Hicks, and I am a psychologist for software teams.

I like to call myself a research architect. I'm someone who builds research teams.

I like to go into areas where you kind of have to start from zero and try to get to a research agenda.

I've done a lot of work where I try to bring in empirical psychology models to help technical teams feel better, work better, talk better, and honestly also just validate what I think software developers are experiencing in the world.

I think there's tremendous wisdom and problem-solving and innovation happening in this community of people.

I kind of realized there weren't a lot of psychologists working with this population, and I love working with this population.

So that's what I do. I'm kind of a psychologist at large right now. I have a consultancy called Catharsis Consulting where I work with engineering orgs.

For the last three years, I've published an open science agenda on research for software teams.

So I have a number of pieces of empirical work out there that I hope help teams to access more evidence about good ways of working in this world.

Ken: So I had another one for you.

It's around the fact that these are not really deterministic systems we're working with, right?

You've got the way you ask a question, when you ask a question, whether the moon is full, tends to sometimes adjust how things come back.

You might ask for it twice and get a different result.

Cat: Yeah.

Ken: So I know that there's education of the team, at least getting the team up to speed, being comfortable with each other working in this kind of world, but then there's also kind of informing the management layer of what they should expect.

You know, this whole concept of like the 10x developer or whatever, thinking then that's what's going to happen when you slap Claude Code in front of somebody. There are some good things you can get done relatively quickly. I ported an application from the web to Android to iOS, and I was like, "Wow!" in a couple of weeks.

Jess: Yeah, that's one of those things that you wouldn't have even done.

Ken: No.

Jess: You couldn't even attempt that. And now it's a few days' work. And is it perfect? No!

But that's the beauty of DevRel. It's a demo app, right? It's an experiment, and we can learn something from it.

Ken: Yeah, and I guess my point there is that you'll get these kind of modes, and then you'll find some edge case that's bizarre and takes a while even though you've got the agent at your fingertips because maybe that data wasn't trained into the systems enough yet or it's a novel problem, right?

What are some of the things you end up telling teams and the teams' management on what to expect when they're going into an AI type of assembly of a team?

Cat: Yeah, gosh, I mean--

One of the things that's been the most powerful piece of, you know, evidence across multiple of my studies is that teams that have a strong learning culture win out.

And I mean that shows up when we ask individual developers about their last month of productivity. So we do tie it back to at least people's experience of productivity.

It also shows up when we ask them to rate how effective they think their team is, okay, which is not the same as how productive they think they are.

It also shows up inside of the psychological measures that I use in my research.

Now, it can be really easy to just hear that, okay, have a learning culture, whatever. We all do, you know.

I find, when I talk to teams about this, sometimes they can be really dismissive about it or just take it for granted, and everybody likes to think that their team or their org is great at celebrating learning.

Absolutely not the case. Celebrating and committing to learning, having a culture about it, is actually pretty difficult.

It's difficult for us to sustain it in our lives. It has powerful benefits for us, but it requires doing a few things.

It requires us to commit to making mistakes, which is super hard.

It requires us to have a social environment that won't absolutely ream us and punish us if we let go of performance and, instead, we experiment. That's hard, you know, and it requires us to kind of have it as a shared collective thing. And a lot of teams in software engineering seem very alienated to me, lonely, isolated.

Jess: Like the people in the teams, or the teams within the orgs?

Cat: Both, honestly. I think it shows up on both levels, you know.

And I think that it's a dangerous place for software development to be where we have so much load being put on these people to adjust to this new technology but so little care for their psychological needs.

You know, and so little ability for the team to sort of say from the bottom up to their leader, "Hey, your expectation for what this tool change is going to do is really inaccurate."

You know, that misalignment is very problematic.

And if you have a leader with a mental model that is just foolish, you know, that there's going to be an upward and to the right forever effect of this, instead of being in line with the real world, which says, well, we've got this very complex set of effects that are happening here where sometimes it's allowing me to go in a problem-solving direction I never would've gone in before, right?

Like AI's fundamentally changing what I decide I even can do. On the other hand, there are times when the benefit I'm getting from it seems very unpredictable.

I thought this thing would work, and I wasted a hell of a lot of time before I realized it wouldn't work.

We have this kind of possibly paying off a lot, possibly unpredictable complex effect, and you might start to ask, okay, who are the kinds of people who were getting this effect?

You know, is that uneven in my organization?

Can I start to figure out certain kinds of people doing certain kinds of work in a certain kind of situation?

Now we're getting that complex system, right?

Jess: Yeah, the situation makes a big difference.

Cat: Yeah, for sure. And then, as a leader, I think it's kind of under your responsibility to start to understand the contextual interactions of this and ask yourself, "Can I port that to everybody else?"

But, instead, you often get leaders who just have this overly simplistic "everybody do the same thing, everybody got the same effect" view, you know, instead of a willingness to look at that as an ecosystem that we're growing.

Ken: It's so early too because like the tools are just emerging and starting to pop, and people are like, "Oh, I love X," so someone might, you know, love Claude Code, and someone might love, you know, whatever else, and they start getting their tools and their favorite techniques.

And I think part of it, the things that are really important in my opinion, is like sharing all that with each other. "Hey, this worked for me. Try this in your workflow."

Like coming together with some good techniques people can use as kind of a menu of ideas to work off of.

Cat: Yeah, we did this really fun thing that we called an AI pre-mortem, and, you know, you all are probably familiar with a pre-mortem.

And we had a group of folks, it was at a conference I was at where I was talking about some of my research about people's feelings and you know, about AI and the AI skill threat they feel and how learning culture has made that better, you know.

And then I said, "All right, you know, let's actually walk through an exercise, all of us together. Let's just pretend we're at the same company, and I want everybody to take a post-it note and write down the... We've implemented AI on this team. Let's come up with the worst possible outcome. Six months from now, what's the worst thing you think could happen? What's your biggest fear? You know, just get explicit about it, actually share it."

And I mean, it's a room full of men in suits. You know, I look like I look. Like, it's really, it was a fun little moment. Everybody kind of looking at me like, "Cat, are you going to make me share my feelings right now?"

And I said, "Yes, we're going there."

And, you know, it was just magical, right, because people went from being uptight, business land, OKR language, whatever, but as we moved through this exercise and we forced them to be specific, like, "Tell me what you really think is the worst thing that would happen, and let's all put them, all of these examples, out on the table. Let's just sit with it. Let's just read them all. Let's all read them all."

And we did that, and suddenly people were just in this different realm of talking about all kinds of things that were not the AI: the structure of the code base, the way the teams are talking, the way we failed to ever onboard juniors, and, gosh, that always bothered me, but I never really stood up for it.

And the way that, you know, we want to have an infrastructure team, but we don't.

All of these organizational features, right, suddenly became these places of agency and discussion, and, you know, folks broke out into little small groups and kind of shared about their joint possible fears here.

Just from forcing people to get into that zone, people came out with action plans about what they were going to do for the immediate six months in how their org was facing AI, and it really was much more about validating the problems they already knew about their organization than suddenly becoming like a completely different AI leader or something like that.

But that is something I always remember, and I remember how people just started in this place of, "I'm probably the only person who's afraid of AI in this way," and then we just ended with this like loud, happy room of people talking about all of it.

Jess: Nice, and this is another way that you're using AI to bring existing properties of the system into visibility.

Cat: I think that's a great way to talk about it, yeah. That makes a lot of sense to me.

Austin: I think you brought up something interesting that I've noticed sort of colloquially talking to people about AI is this idea that when we think about AI what we're really thinking about a lot of times is we're thinking about process, and we're thinking about like our existing processes and how we like to work and how we make work work for us because so much, especially at larger organizations, is really about interpretability.

It's about, you know, even at a company the size of Honeycomb, which is, you know, what, not quite 300 people or whatever, but, you know, anything with more than 40 or 50 people, there is this real desire, like--

The reason that we have these systems is to make the work output interpretable to other people in the org 'cause you need that for a variety of fun capitalism reasons.

But then you start thinking about these 1,000 person, 30,000 person, 300,000 person global organizations and how much of it is just like, "Oh, these processes only exist so that people actually understand what we are all doing at any given moment in a big picture."

And I had a really funny interaction where someone was like, "Well, yeah, we tried an AI chatbot to do this sort of process, and it kept getting it wrong."

And I was like, "Well how often do you actually audit the human process? Like, how sure are you that the human process is giving you the right results?"

And they're like, "Well, we don't. Like, the people obviously are going to do it."

It's like, "Well, how do you know you, right?"

It's causing people to have fresh eyes on like all the things that we think we know about how we do work and about how our work works in ways that maybe we-

Jess: Makes a lot of things explicit.

Austin: Yeah, making the implicit explicit or vice versa.

Cat: Yeah, it's really true, and I see this happen, in my case, with teams that come to me and say, "I would really love to evaluate the impact that AI is having on our delivery velocity, you know, on X outcome."

And then you ask, "Okay, well do you know how to look at your delivery velocity now? Have you ever looked at it before? Do you know what the baseline is, right?"

And that literacy for just measuring change inside of our technical orgs, it's not always present, and I mean, I understand why it's not present.

I understand that these are people doing their best a lot of the time, and I really feel deep compassion for the engineers who kind of step up and, you know, out of their passion, become like process-oriented people.

But then they are themselves not always supported to really understand how difficult the evidence gathering process needs to be here, and so what we end up with is this Schrödinger's cat thing all the time where, you know, one person says it worked, and another person says it didn't work, and it starts to become more about organizational politics than actually having a clear evidence base that gathers over time, that is self-correcting.

And you know, again, I have my own personal point of view on this, but it's that we need to become more scientific about it.

Jess: And that was the part about how do we become an organization that wants to understand itself.

Cat: Mm-hmm.

Jess: Yeah, which, which by the way, totally ties into observability.

Cat: I think so, yeah.

Jess: And there's an interesting point there of people will do things for AI that they won't do for each other.

Cat: Hmm.

Jess: They'll check the outcomes. They'll make things explicit. They'll document things.

They'll add tests now that the AI makes that easier. But I wanted to ask you about theory of mind.

Cat: Mmm.

Jess: I noticed, and Fred Hebert, also at Honeycomb, has a great article about, we'll do all this stuff for AI, we'll put the code in a better shape, we'll explicitly define things, and all of these things help people, but we weren't willing to help the people.

It's almost like we have more empathy for that robot than for each other.

Cat: That could be true. That could be true.

Austin: We're more worried about the robot.

Ken: More concerned it's going to mess it up without that stuff.

Cat: Less baggage with the robot. I mean...

Jess: Oh, yeah, my theory is that we can imagine that the AI really doesn't know this stuff and really needs this information, and, at some level, we can't actually imagine a human that doesn't know what we do.

Cat: Yeah.

Jess: "If you aren't putting this much effort into your software craft, it's because you're lazy!"

No, it's because I'm actually an accountant, and I just need this to work.

Cat: Empathy is really complicated, and it's incredibly powerful, and we can dampen it, and we do dampen it a lot.

And this is a fascinating topic to me. You know, there's a researcher named Mina Cikara who I quote all the time, who works on what she calls coalitional cognition and the idea that our coalitional cognition is something we need to activate, and we can turn it up, and we can turn it down.

And this is part of the whole, you know, all of the theory that we use to try to understand why groups of people do horrific things to each other.

How is that possible that you could love your family and also do terrible things to other people?

And I mean, that's not hopefully the realm that we're in with our tech companies all the time, but, you know, our psychology is capable of community, and it's capable of conflict, you know, and I think that sometimes I think about the... Do you know the show, "Community?"

And there's a bit in it where one of the characters holds up a pencil and he says, "This pencil's name is Jeff, you know, and he has a family."

And everybody says, "Wow!" And then he snaps the pencil, and everyone goes, "Ah! Oh my God!"

Austin: Or it's, "This paper will be sad if you don't read it."

Cat: Exactly. We have incredible social cognition, and it's not perfect, you know.

And I find it somewhat distressing actually, when I see the social media commentary that's like, "Everyone who gets suckered in by AI is such an idiot."

You know, that kind of really aggressive... Like, if you have theory of mind failure, that is a very typical human experience actually.

Jess: And by "theory of mind failure," you mean, "I can't imagine how someone could believe this?"

Cat: I mean it on either side. Seeing a mind where there isn't a mind, and also failing to take the perspective of another mind, you know, or mixing up your own mind's perspective with someone else's perspective.

This is something that everybody does, and people fall on various, you know, kinds of spectra about this, you know, but it is actually not just something that we're perfectly fluent in all the time.

It's something that can get pushed around. And it's like that because we are a social species that has this incredible ability to identify with each other, take on other perspectives, you know, have empathy.

Even in our neuroscience, you know, not to dive too deep into this, but there's some phenomenal work on this from Frith & Frith, an amazing couple that does neuroscience around social cognition, and they look at things like, you know, you literally think differently about members of your own group.

It affects what you pay attention to. It affects, you know, the assumptions that you make.

I love Fred's question here, which is like, well, you know, my question to "why are we doing this for the AI?" would be, well, why aren't we doing it for other people? Because there probably are usually good reasons, and it might be because of existing cultures, existing histories.

There's a concept I love in psychology that's called pluralistic ignorance which is when everybody in a group thinks that no one else in the group holds the opinion they hold, and actually the majority of the group would think like them or does think like them if only everyone could just talk to each other about it.

And so you can say, "Gosh, I would love it if we actually had a kinder code review process, but no one else wants that in this group, so I'm not going to do it.

"I'm not going to be the one person who adds a lot of context to my code reviews because probably everyone else is just over there on the other side thinking, 'What a dumb dumb Cat is. You know, she is not a technical person if she does this.'"

And it can actually be the case that almost every, probably the majority of people in a group all want this change to happen, and none of them make it happen because they are kind of like the Spider-Man meme, everybody pointing at each other, holding each other hostage to these cultures.

I am very interested in moments where people take the first step to change the culture, and that was what my dissertation was on, was actually moments of disclosure and moments when you decide to break the social norm and just say, "You know what? I don't know this thing. I do wish it would be different. I am different."

And I like to tell this story because it happened to me in grad school that I was in this statistics class, and I didn't understand the equations that the professor was writing on the whiteboard.

And I was in a course that was, you know, mixed. It had a bunch of math folks in it and a bunch of psych folks in it.

And so you feel like all this identity threat. You're always like, "Oh, God, maybe I'm not as mathy as these math people," you know, and especially if you come from a liberal arts school like I did.

This is a very classic kind of get into quantitative social science, face the hazing ritual of the stats class.

And finally I just said, "You know what? I'm going to freakin'... I'm going to fail this class if I don't just... I'm going to take the humiliation on the chin."

And I raised my hand, and I told the professor I did not understand the Greek symbols. I just didn't know what they were.

And he looked at the class and he said, "Anybody else?"

And a bunch of other people raised their hands.

Ken: Awesome.

Jess: Dumb questions are the best questions.

Cat: Yeah, which was a beautiful moment.

Austin: I love this because I feel like this is my radical AI-centrist party that I am trying to develop now.

Cat: "Something for everybody" is the slogan.

Austin: Well, yes, but also like, I think there's a couple different things here, right?

Like we've talked about like, "Wow, yeah, it is weird that we will do this stuff for the AI even though it really helps people. Why wouldn't we do it for the people?"

And there's so many reasons for that, but I think, as a technologist, as someone that cares deeply about this shit, you know, to be blunt, we could make a choice of how we, you know, interpret technology and interpret the advance of technology.

And we can say, we can sit on the sidelines and say, "Oh, this is immoral, I cannot burden my soul with the weight of the token prediction machine."

Or we can step in the way, and we can normalize it, right? Maybe not normalize. Maybe that's the wrong word.

But we can be adaptive in how we think about these things.

We can help adapt other people's thinking about these things, and we can sit astride this and say like, "Look, the people over there selling you golden shovels, like maybe don't pay a bunch of attention to them."

Maybe take whatever the people who have a vested economic interest in hyping this up say, and treat it with the grain of salt that it takes.

But also don't look at the people that are like selling the two-minute hate, and say, "Oh, I'm going to throw in with them," right?

Like, people can feel what they feel, but I think our responsibility as technologists, people that care about these things, is if there is an opportunity for us to say, "Like, oh hey, we can use this as a little bit of a reset moment to like ask these questions and put these questions into the ether about like, 'Well hey, why aren't we doing these sort of things, right?'"

Like, we're using AI as a lever to go reevaluate our processes.

Jess: To make our code bases more humane.

Cat: Yeah.

Austin: Right, like maybe we should do like, "Okay, yeah, that's a good outcome!"

Cat: Yeah. If we learn all our training data is bad, you know, then we have a training data problem, right?

Austin: Right.

Cat: And we can question what we use it for.

Austin: But it's very difficult to do that if you sit outside of it, if you set yourself apart.

Jess: You have to participate.

Austin: Yes, and to what we were just saying, like, I do genuinely believe that there are people that feel this way, right, other than me.

And I think one of the challenges right now is that it feels really hard to express that opinion, to express the like, "Well, hey, like, yeah, there's good stuff here, and there's bad stuff here, and let's figure out the good stuff, and, like, let's minimize the bad stuff."

Jess: Austin, all that nuance! Who wants it?

Austin: Well, in the year of our Lord 2025, very few people, I've discovered, especially people on social media.

Ken: I mean, I think this is the same thing as like, you know, no one was thrilled except the people who learned to use it when desktop publishing came out, right?

But the people who used it were like, "Wow, I don't have to go to a printing press, and I can do this myself?"

It's like making things easier, right? No one wanted to write assembly language once they could write in a higher programming language, unless they need to move the bits around.

It's another one of these levels of abstraction. You're abstracting coding away a little bit to the point where you don't have to do the heavy lifting.

Jess: But you do have to do some other stuff.

Ken: But you have to do all the stuff around it, like someone-

Jess: Especially the learning.

Ken: Right, you have to do the scientific, the observing of it. You know, "Here's what I want you to do. Here's how you're going to find out how you do it."

Cat: You have to check its work, which you didn't do with people 'cause you didn't want to know if they messed it up.

Ken: In a way, I'm a lot better at all the other things I wasn't great at before AI entered my life.

I'm like, I'm better at specifying what the hell I want, and saying, "Please write tests, and these are the tests you need to do, and you want to run 'em every time, and, by the way, when you do your commits, make them really helpful based on the last changes and, you know, then make sure you add this observability, and I want you to check to make sure this is observable, and I get the right answer."

And well now if I don't get a good trace, I'm going to go to an MCP, and say, "Hey, buddy. Here's the trace. Here's my code. Correlate the two, find out where this bug is, and I think it's in here."

You have this ability to collaborate with a non-real thing, but force yourself to be better organized.

Cat: Yeah, I think that, you know, the violent power in the world is going to be violent power.

Like the bad actors are going to continue to find ways to use anything, everything. They will build the bad version of this.

And, I mean, whatever you consider to be the bad guy and I consider to be the bad guy will probably take a long time to unpack and all of that, but, you know, sort of setting that aside, you know, I think we don't have the good version figured out yet, but who's going to figure it out?

Like the only way to have a voice is to keep trying to have a voice, and if you just sort of shut up about it, you know, sit on the sidelines about it...

I do not extend this critique to people who are genuinely trying to take care of themselves and are like, "I'm overwhelmed."

You know, we all have different roles to play in the world, right?

Jess: Or have other dangers to speaking up, yeah.

Cat: Exactly. Or, you know, we all have, you know, maybe there's certain forms of work you can do in your life.

However, when I think about software engineering as a class, you know, a profession, an organization, it has so much weight in the world, and it has so much power, so much financial power, that teachers don't have, that artists don't have. You know, but at the same time, I worked with teachers for a long time, you know, and like, harsh truths, if I was going to go try to pick a profession that was engaged and activated about social change, I wouldn't start with software engineers. Like I would start with teachers.

And they're underpaid, and they're in the inner cities, and they're heroic, you know. I'm not trying to valorize.

They're also normal people, but like they are capable of extraordinary things, and they don't get a choice about the kids in their classrooms using AI.

So they are grappling with it.

And I think that sometimes my hope and dream for software developers is that everyone in this field can feel a little less isolated and a little bit more part of the bigger human community and say, "Well maybe the teachers have figured some stuff out."

Austin: It's class consciousness.

Cat: Yeah, and I think it's a little bit funny to me, Austin, that you described this as the AI-centrist party, 'cause I think it's actually like the party of having some moral ambition about this.

Jess: "Moral ambition."

Austin: You know, there are probably a great many...

There are at least a few people in the world that will get a kick out of the idea that I have moral ambition, but I'm not one of them because I would agree that I have a significant amount of moral ambition.

But I would argue that, to coda this, right, like I think in, especially in the tech industry today, the tech industry right now, we are at a bit of a pivotal moment.

There are crucial conversations that need to be had in this industry about what is it that we do, and I am not going to tell anyone, you know, I am not here to live someone else's life for them, and I'm not here to tell people how to...

You know, everyone has their own struggles, and they're all fighting battles I don't know about, but I do think that it is incumbent upon us to grapple with, you know, the reality we're faced with, and the reality we're faced with is that, holy crap, we have the universal function approximator finally, and there is no putting that toothpaste back in the tube.

So we can figure out how to build empathetic systems of people and technology that are, you know, humanistic in nature, or we can let the people whose moral compass orients slightly towards their bank account make those decisions, and I know which side of it I'm on.

Jess: Great. Cat, is there anything you'd like to leave our listeners with?

Cat: You know, I would like to say that I think it's easy to feel really overwhelmed at this moment and lose sight of the power that you do have as a technologist.

As a person, you know, maybe you don't have tool budget decision-making power at your org, but you probably have certain forms of social capital, you know.

Particularly if you're in a technical role, you might have a form of, you know, credibility or weight that your opinion and your assessment gives, and I feel that we are not surfacing good evidence about technical teams and about what they need and about the cost of our decisions for them.

And I really want technical stuff to work in the future, you know, because it is in all of our lives, and it's in our healthcare, and it's in every industry.

And so the more you can find, I think, those sparks of bravery to keep trying to ask yourself, "What is working, and what's not working, and how can I be clear about that, and how can I talk about that, whether it's to my peer, to my manager, perhaps it's to a large org if I have a big platform in that way."

You know, but I do think that we all have some way we can make the quality of the conversation better here and stop being thrown around, you know, passively inside of it, but actually become part of it. I think that would be helpful for folks.

Ken: Well, thank you so much for joining us today on this great talk about AI and all the things it brings.

Jess: Yeah.

Ken: Really appreciate you.