
Ep. #45, Perspective Density with Allegra Guinan of Lumiera
In episode 45 of Generationship, Rachel Chalmers speaks with Allegra Guinan of Lumiera about the trust dynamics and design ethics of voice-based AI. Together they explore how human tendencies to anthropomorphize voice systems can both build and erode trust, underscoring the need for responsible design and diverse perspectives.
Allegra Guinan is the co-founder and CTO of Lumiera, a boutique advisory firm helping senior leaders build responsible AI strategies. With a background in data and enterprise engineering, she bridges technical and business worlds to guide organizations through ethical and effective AI transformation. Her work centers on building systems—and leadership mindsets—that are resilient, transparent, and human-centered.
Transcript
Rachel Chalmers: I am absolutely thrilled to welcome Allegra Guinan to the show. Allegra is a technical leader with a background in managing data and enterprise engineering portfolios.
Having built her career bridging technical teams and business stakeholders, she's seen the ins and outs of how decisions are made across organizations.
She combines her understanding of data value chains, passion for responsible technology, and practical experience guiding teams through complex implementations into her role as co-founder and CTO of Lumiera, a boutique advisory firm focused on responsible AI strategies for senior leaders.
Allegra, thank you so much for coming on the show.
Allegra Guinan: Thank you so much. It's a pleasure to be here.
Rachel: Let's just jump straight into it. Why do we trust voice agents more than we trust text agents?
Allegra: Yeah, I love that we're starting with this because I really think voice will be such a prominent modality moving forward.
Text has been a huge part of the scene for a while, but voice is really stepping it up lately. And the answer to why we trust them more is we really have this tendency as humans to naturally anthropomorphize AI systems. And maybe you're starting to see this as well.
And when voice agents exhibit human-like linguistic traits, like they have certain tones or phrasing, or they are exhibiting positive behavioral traits, like they're very polite or they're helpful, that amplifies this tendency that we already have to attribute human characteristics to AI.
And well-designed voice agents are meant to feel less robotic and more like the human experience we're used to when interacting through voice, and those well-designed agents then play into this trick our brains play on us: that we can trust this system more because it's like us, it's human.
And text doesn't have that same ability to convey emotion and carry nuance like voice does, so it doesn't evoke that same trust right at the beginning. But of course we are seeing now, with companion AI tools out there, that text can be very powerful for creating bonds and evoking trust and emotion in users. So I don't want to discount that either.
Rachel: Yeah, and we're starting to see the dark side of that as well, aren't we? Where people are getting drawn into these long conversations with ChatGPT that are reinforcing their delusions or encouraging misguided thinking.
The humanness and trust inherent in voice has the potential to be even more damaging, don't you think?
Allegra: Yes, I do think that. And this ties into a larger epidemic of loneliness that a lot of people in tech also discuss as it relates to social media and everything. So they are very closely linked. And I think if not designed responsibly and with that in mind, then voice can have a very similar experience, if not amplified.
Rachel: How do you tackle it? And this may be a particular Silicon Valley problem. I don't know if you're encountering it. Have you run across strategic leaders who want to lean into the dark side, who want to create addiction loops with these tools? And if you do, what do you say to them?
Allegra: Well, I don't think it's quite phrased that way, and people don't necessarily think about it that way when it's happening. But there is definitely still a large focus on engagement, and on how we get people using AI as much as possible.
I think this can come from the push for adoption as well. They're not the same thing by default, but they can become coupled together very quickly.
And I think for leaders, business leaders, those that are trying to make a profit, pushing engagement and using people's emotions to get them to take certain actions and stay on a product, I think that does happen quite frequently, and it's a slippery slope when you want to get your product out there.
And I would say that there is still a prominent focus on profit and engagement over responsible practices at this point. Although I don't think that thinking responsibly, or having regulation or governance in place, stifles innovation or moving quickly or engagement when done well. So I want to be clear on that: it's not one or the other. They're not mutually exclusive.
Rachel: That's definitely common sense in Europe. It's, I think, much more contested territory here in the United States.
Allegra: Yes.
Rachel: Let's dig a little bit deeper into the differences between voice and text agents. Isn't voice just text agents reading aloud?
Allegra: Yeah, this is a common misconception, but no, they are architecturally different. But before getting more into the technical structure, very simply, you can think about the difference of how you write versus how you speak, the pace of articulation, filler words that you use, an accent that you might have.
Even you and I, we won't say the same sentence the same way, even if we write it exactly the same way. And when you're having a conversation, there are interruptions, there are sounds to express that we understand one another or that we're in agreement or not, and so on.
And of course, you can feed some text to a text-to-speech model, and you'll get an audio output, but it will be a bit static and somewhat flat. And to level set a bit, because we're talking about agents here, agentic AI is a term that gets thrown out a lot and has a lot of varying definitions depending on who you're speaking to.
But if we're moving with the definition that agentic systems can access tools and that they can take action with some degree of autonomy, that separates them from a simple text agent reading something aloud.
And so voice agents are engaging in a conversation and they're able to adjust to the environmental variables like the interruptions or expressions of frustration and so on. And there are a lot of variables in real life.
So let's say that you're talking with a customer support voice agent, and you're on your way to work, and there's the sound of the city and there's poor connection, maybe because you enter the subway, there's some fumbling around, you're trying to get certain information that they're asking you for, you have your own unique intonations like we talked about.
And the voice agent has to figure out how to handle all of those different variables and still return something that's valuable without cutting you off and making you feel like they interrupted you, or without misinterpreting what you're saying.
And then getting back to how we actually build these systems: one is straightforward, like I mentioned. Text-to-speech just gives you an audio version of your text file. But voice agents need to first transform speech to text, then likely use an LLM to analyze the text, and likely make some function calls using other tools.
Maybe they need to access a CRM or some database, then generate an output, and then they have to convert all of that back to speech with a text-to-speech model. And we are seeing varying architectures here, where now we have speech-to-speech models that reduce that flow, but what I just walked through is still quite a common one.
And so you can see we've added a lot more steps in the process there and that introduces significantly more latency as well as design decisions.
So at each of those inflection points, somebody has to make a decision about how that's being interpreted, which call gets made, and how to build and think along that entire process, which is significantly more complicated than just reading aloud something from text.
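[Editor's note: to make the pipeline described above concrete, here is a minimal Python sketch of one conversational turn. All function names (transcribe, plan_response, lookup_crm, synthesize_speech) are hypothetical placeholders rather than any specific vendor's API, and a real voice agent would stream audio and handle interruptions rather than processing a full utterance at each stage.]

```python
# A rough sketch of one turn through a common voice-agent pipeline:
# speech-to-text -> LLM analysis -> optional tool call -> text-to-speech.
# Every stage below is a stub so the flow is visible end to end.

def transcribe(audio_chunk: bytes) -> str:
    """Speech-to-text stage (stubbed)."""
    return "what's my account balance"

def lookup_crm(customer_id: str) -> dict:
    """Example tool/function call, e.g. a CRM or database lookup (stubbed)."""
    return {"customer_id": customer_id, "balance": "42.00"}

def plan_response(transcript: str) -> dict:
    """LLM stage: interpret the text and decide whether a tool call is needed (stubbed)."""
    if "balance" in transcript:
        return {"tool": "crm_lookup", "customer_id": "12345"}
    return {"reply": "Could you say that again?"}

def synthesize_speech(text: str) -> bytes:
    """Text-to-speech stage (stubbed)."""
    return text.encode("utf-8")

def handle_turn(audio_chunk: bytes) -> bytes:
    transcript = transcribe(audio_chunk)          # 1. speech -> text
    plan = plan_response(transcript)              # 2. LLM analysis and routing
    if plan.get("tool") == "crm_lookup":          # 3. optional function call
        result = lookup_crm(plan["customer_id"])
        reply = f"Your balance is {result['balance']}."
    else:
        reply = plan["reply"]
    return synthesize_speech(reply)               # 4. text -> speech

if __name__ == "__main__":
    print(handle_turn(b"<caller audio>"))
```

Each hop in this chain adds latency, which is one reason the speech-to-speech models mentioned above collapse several of these stages.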
Rachel: So given all of those constraints, given that this is the problem domain we're working in, and given this concern that at least you and I share about trust and responsible and ethical use of these agents, how should we design them to take these differences into account?
Allegra: Yeah, this is where something called perspective density is so key. This is one of Lumiera's core values. It's essentially the idea of getting as many varying perspectives into a given space as possible to have the most robust and resilient result.
So back to the commuting to work example, if you drive to work instead of taking the subway, you might not think about poor connection when you're in the tunnel. And so when you're building this voice agent, that might not be one of the variables that comes into your mind.
Or if you've never been a delivery person who gets around on a bike, you might not think about the traffic sounds or the touch-free commands that you'd need, and you might not design those in otherwise.
If you're a native English speaker from the US, which a lot of the people building these systems right now are, you might not take into account the accents of a non-native English speaker or someone with a distinct accent.
And all of these contribute to building more resilient systems that actually work in production and that are responsible. And so we need to bring multiple perspectives into the room. We can't just move forward with one path in mind. And then technically, we need to think about all of the steps I just mentioned in that architecture.
So how do we reduce latency to also instill trust? Because if you are having a conversation with a voice agent and it suddenly stops interacting and it's just dead silent, that doesn't make you feel super confident.
And so how do we break up the system so that you have multiple sub-agents that are maybe handling specific tasks that take longer, or only running when necessary, and break away from this idea of a monolithic prompt that might get confused or get lost as a single system?
And if there are things that take longer, design processes around that so the user always has some confirmation and feedback. You might want to design a follow-up that confirms that some action was taken.
Not everything has to happen in the context of the conversation. So really prioritize the natural flow and experience, and what the user will feel in that conversation, rather than trying to build something that seems super efficient but where you were only thinking about the happy path.
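[Editor's note: one way to read that design advice in code is the asyncio sketch below, which shows a slow backend task handed off to a background sub-task while the agent gives the user immediate acknowledgment and a follow-up confirmation once the work completes. The speak() and process_refund() functions are hypothetical placeholders, not part of any real voice framework.]

```python
import asyncio

async def speak(text: str) -> None:
    """Stand-in for sending synthesized speech back to the caller."""
    print(f"[agent says] {text}")

async def process_refund(order_id: str) -> str:
    """Stand-in for a slow backend workflow handled by a sub-agent."""
    await asyncio.sleep(3)  # simulates a long-running tool call
    return f"Refund for order {order_id} has been submitted."

async def handle_request(order_id: str) -> None:
    # Kick off the slow task without blocking the conversation.
    task = asyncio.create_task(process_refund(order_id))

    # Immediate feedback so the user is never left in dead silence.
    await speak("I'm processing that refund now; it may take a moment.")

    # Explicit follow-up confirmation once the action has actually happened.
    result = await task
    await speak(result)

asyncio.run(handle_request("A-1001"))
```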
Rachel: There's so much there that I want to dig into. I love the idea of perspective density.
It reminds me of Google's Project Aristotle, where they found that diverse teams had higher psychological safety, although they struggled with it initially just because people were bringing more different points of view to the table, and that they saw higher success rates because their products were more resilient in precisely this way.
And Google had been dealing with things like image recognition that identified Black people as gorillas, or divulging the personal details of women who were leaving abusive relationships. These diverse teams were able to help fend off some of those specific failure modes.
Allegra: Yeah.
Rachel: Was that research part of what informed your thinking about perspective density?
Allegra: I mean for sure, examples like that are a huge reason we think about it. We refer to it as the opposite of blind spots.
Rachel: Yes. Or group think.
Allegra: Yeah. We are aware of our own limitations.
We only know what we know. We've only had our experiences. And it would be irresponsible to think that just our experience is enough to build something that then impacts others. We need to have other views in the room, other perspectives. And so it's something that we really value.
We talk about it constantly. It's really built into our work because we see so many examples like you just mentioned.
Rachel: And then the agents themselves, if they have this ability to understand different perspectives, they're going to afford a much more welcoming interface to the people interacting with them.
They're going to be much more forgiving, they're going to be much more fault tolerant and resilient. Is that something that you see coming out in practice?
Allegra: I think we're not necessarily there in practice yet. There are a lot of assumptions being made right now for designing voice agents.
A really common one is that female voices are preferred for certain kinds of voice assistants, like Siri, Alexa, things like that. There's a lot of research out there on why female voices are chosen for such roles, which are rooted in a lot of bias and discrimination in the past.
And we need to break free of those assumptions, first of all, and we can't always decide on behalf of the user that they probably like this voice because they are this kind of person. I think we need to give a lot more autonomy to users: when they want to engage with voice, what they want that to look like, and, because it is so unique to that person, how they speak, how they prefer to be spoken to, how the conversation works.
All of that needs to be built up over time through understanding the natural flow of conversation with that unique individual. And so I do think there needs to be a lot more participation from users as we're starting to design these and scale them up.
I think right now we're seeing a lot more assumptions being made because it is also quite nascent, the stage that we're in right now, in terms of advanced voice and conversational agents. So I think a lot of people are just trying to get it right and to work in production and are not thinking that far ahead into the responsible elements of the design.
Rachel: Voice is an incredibly interesting and dynamic scene right now, but obviously it's just one piece of the larger picture. How many of these design principles apply across AI more generally, and how might they inform an organization's strategy?
Allegra: Yeah, all of them apply. Everything that we've spoken about so far applies to any AI system, maybe to any system that you're putting together. And so when an organization is crafting its strategy, instead of just thinking, "okay, for this product we're putting these principles into practice," ask:
What are the principles of your entire organization? What are the things that you stand behind? This is also something that we talk about a lot with leaders.
Like you as an individual: what are the leadership principles that you will stand behind when you're talking about AI? When you're in a room and you have to make really hard decisions, what are you standing on to say, "no, we have to be security first," or, "we are focused really strongly on resilience or robust systems, which means we have to make these kinds of decisions"?
It's so critical to have that North Star, to have those leadership values in place and at the organizational level to know then which principles you're putting into practice throughout all of your AI systems. So it shouldn't be this ad hoc scattered approach to designing.
There should be this very cohesive sense of what kind of organization you want to be with AI. And then you can go into, of course, various specifics for different technical architectures or whatever you happen to be building because there are so many different decisions across whatever you're building. But the core principles, I think, really need to be present across the board.
Rachel: And this is non-trivial, because they're like the engineering vectors: fast, cheap, and secure. They're somewhat orthogonal. You optimize one at the expense of another. And I think the prevailing view is that any kind of responsible AI is orthogonal to profitability.
I guess it's the same question restated. How do you persuade people to take these larger ethical concerns into account when everybody's still just trying to make money off this?
Allegra: Yeah, well, I think there needs to be a shift in how we're talking about returns on AI to begin with.
There is this sense that AI will give some immediate, massive profit. There are a lot of stories out there, on LinkedIn maybe, that people see that make it seem like AI is immediately profitable. And that's just not the case. In reality, when we actually look at the numbers of how many at-scale, in-production AI systems are out there in organizations, built well and lasting, there are still not that many. This is not super common.
AI maturity is very low still, even though the investment is super high, and it does take time. To answer the first part of the question, about convincing people on the responsible side: it can be a competitive advantage to be a leader in this space and to show that you have principles.
A lot of companies are starting to do this already and have been for the last couple of years, to show that they're at the forefront. "We are thinking about this already. We have these principles."
And then actually putting them into practice, being able to share publicly, through research papers, that you're investing in talent and in how you're progressing forward.
That can be a competitive advantage and it can be rooted in transparency and building systems that are responsible, security-first, all these things.
And then just on the profit side, if you haven't already invested in the infrastructure, in the talent and research, if you're not putting time aside, if you don't already have a data governance program in place that's been running that is super clean, you're not going to see that immediate profit that some people expect, or a lot of people expect.
And the ones that are starting to publish real numbers in the space that are doing super well, they were investing in their talent and infrastructure years ago and now they're starting to see numbers come out. So there does, in my mind, have to be a shift in how we're talking about it that is a bit more rooted in reality rather than the storytelling that's going on out there.
Rachel: Yeah, I want to dig in a little bit more, because like you, I've seen tons of hype and very few actual institutional-grade AI deployments demonstrating a return on investment.
I would say they're almost entirely confined to agents in fintech and retail. That's where I see it, and I struggle to see it anywhere else. There's this prevailing joke format where I use AI to generate a slide deck and I send it to somebody and they use AI to distill it back to five bullet points, and no human ever actually interacts with the slide deck.
And there's a sense, particularly coming from the creative side, that what AI is for is to just get rid of a layer of middle management or creative staff who are seen as cost centers rather than profit centers. How do you see all of that changing over time?
Do you see real applications for conversational AI beyond those specific customer service segments? And if so, do they augment real humans or do they just replace all of our jobs?
Allegra: Yeah, I mean, I'm sure you know that I won't say it'll replace all of our jobs. That's definitely not the future that I want to see.
Rachel: I mean, I wouldn't mind having my job replaced if it meant I could sit in my garden with a margarita for all but four hours a week.
Allegra: That could be a job in itself. Yeah, I think there's a lot of room for voice. I know healthcare is also an area where this is used a lot for transcription in doctor's offices, for example, places where you really need to be present and hands on and you can't necessarily have text as an interface.
And it can be supplementary to a very human experience, but not at all replacing the core of what you're offering. I think also for how people learn.
So another thing that we talk about a lot at Lumiera, and something that we really try to do through our own executive education program, is multi-sensory and multimodal learning, because not everybody ingests information the same way. Some people are audio learners, some people are more visual.
I think there's a lot of opportunity there as well for changing how people interact with data and with the information around them in their lives, that just fits more to how they prefer to have information enter their brains, essentially.
So I think that's another potential area and I do think it can help with efficiency with things like customer service and so on. I'm sure there's a lot that could be done there that would alleviate teams rather than completely removing the whole function, because I think most of the time customer service is overwhelmed.
But I would push organizations to think more about the areas where they really can't have text or it's not possible or efficient. And how they can help the human in the room by using the advanced voice technology that we have, rather than trying to shove it into areas where it doesn't necessarily make sense and they're just trying to push voice as the modality.
Rachel: Tell us a little bit more about what you're doing with Lumiera. What does the program look like?
Allegra: Yeah, so we have an eight-week executive education program, and we have a cohort of a maximum of 15 people because we like to be really hands-on with the participants. And we take these senior leaders through three foundations.
The first one is building confidence so they can be informed leaders when it comes to AI. The next one is focused on taking action, and we focus a lot on risk there as well: understanding the risk landscape and being able to navigate it as it relates to AI, as well as understanding more about the industry.
So what is the current state of things? How do I fit into this in my organization? And what is my personal AI vision and my AI leadership principles?
And then the last one is shifting this idea about results. So we talk about how to measure return on AI investments, what are realistic ways to think about it, and what systems can you put into place to measure progress over time.
And as we take them through it, we really encourage peer learning as well. So these 15 people are coming from different organizations, they're all of similar seniority and all going through an AI transformation process, but they can learn from one another. And then they have that community after the program finishes.
And the idea here is that they can walk away feeling that they can go into a room with their executive peers or with the board and have a conversation about AI that is rooted in reality, that reflects the organization that they're in, reflects their own principles, and they don't feel a need to be reactive. They don't need to give into the hype. They're not overwhelmed by noise. They don't feel this pressure of being left behind.
They can have a very grounded stance on AI now as an individual to operate more responsibly and make better decisions in their organization.
Rachel: What kinds of people should consider signing up?
Allegra: Yeah, if you are a senior leader that is thinking about AI in your organization, maybe a C-suite executive or a senior VP or an investor or a board member that really has AI as top of mind and you're not sure how to navigate it, and you've had a reality check of, "okay, I need to do something. I can't be paralyzed anymore, but I need guidance. And I need to have a more curated and interactive experience to figure out how to do this."
Those are the ones that should be a part of this. It's not a purely academic program. It's not something that will go into every technical detail of AI. It is really leadership development for the age of AI.
Rachel: What's one thing that you hope strategic leaders will walk away with from the accelerator?
Allegra: Yeah, I think it is that confidence to not be reactive, to really feel that they can set a strategic direction, that they can work with their team and build the trust of their team, rather than making decisions that are simply based on fear or pressure or just not knowing, and they feel that they have to display something.
Instead, it will be rooted in actual knowledge and confidence.
Rachel: How do you personally stay up with everything that's going on in AI? What are some of your favorite sources for learning about it?
Allegra: Yeah, I'm a sucker for podcasts. I mean, my dream is coming true now, where I've been invited onto some of the podcasts that I really respect, this one included.
There are a few out there that are really top of mind. One is the MLOps Community podcast. It is more technical, but there is just so much great content there, and there are virtual conferences as well, with really talented people participating from around the world.
Practical AI is another one that is technical and very applicable. Hidden Brain for psychology.
Rachel: That's a new one for me.
Allegra: Yeah, it is not a technology or AI podcast, but understanding psychology and the way our brains work I think is really critical for building AI responsibly. Everything we just talked about around voice and everything, that could be something you might hear on a Hidden Brain podcast. It comes originally, I think, from NPR in the US.
And there's also one called AI By Hand. Not a podcast, but very challenging for the brain, where it's somebody that goes through actual AI systems and how they're built, but by hand. So doing the math by hand, which is so interesting to watch. I've tried to do a very small amount of that and it's actually mind blowing that it's possible.
And then Hugging Face. I think for research on open source and ethical standards as well, they're really at the front of the pack for that. So tons of things. Honestly I drown myself in AI content all day. Maybe not everybody's like that, but I'm deep into it now.
Rachel: I'm absolutely like that. Those are a couple of new ones on me. We'll put all of those in the show notes, listeners.
Allegra, you're so great. I'm going to make you president of the solar system. Everything is going to go exactly how you think it should for the next five years. What does the future look like?
Allegra: Yeah, so I don't want to see a repeat of how social media impacted our society and the youth, where we looked up one day and we were like, "oh no, what did we do with this technology?"
So the future would be a lot more human interaction and a lot less digital dependence. I know that might seem counterintuitive to me being in this field, but if things are going the way that I prefer, it's actually a significant shift away from hyper optimization and technology centered experiences.
I'm not against technology being used to get us there or even fueling it in some regards, but I don't want our primary interfaces to be digital.
Rachel: Last question. Favorite question. If you had your own interstellar starship, a generation ship that takes longer than a human generation to get where it's going, what would you name it?
Allegra: Yeah, I'm going to borrow something from physics today and call my ship Diffraction.
Rachel: Oh, that's beautiful.
Allegra: Yeah, it's a really nice word actually. It's the spreading out of waves as they pass around obstacles or pass through openings. And I really like this idea of deviating from a single path by being flexible to changes in energy.
Rachel: And it also makes me think of the double slit experiment and those beautiful diffraction patterns that come through the gratings. That's really gorgeous. I love that one.
Allegra, it's been a delight to have you on the show. Thank you so much.
Allegra: Thank you.