Ep. #48, Trusting AI with Sarah Novotny
about the episode

In episode 48 of Generationship, Rachel Chalmers speaks with Sarah Novotny. They dig into why AI models fall short of true creativity, how the tech industry drifted into extractive incentives, and what real security and accountability might look like at scale. Sarah highlights lessons from Kubernetes, open source ecosystems, and political science to propose a more trustworthy technological future.

Sarah Novotny is a longtime leader in open-source infrastructure, with contributions across Kubernetes, OpenTelemetry, NGINX, and MySQL. She has led open source strategy at Microsoft and Google and served on the Linux Foundation Board of Directors. Now based in Dublin, she studies political science at Trinity College, focusing on technological sovereignty, trust, and the social impacts of emerging technologies.

transcript

Rachel Chalmers: Today, I'm so pleased to have Sarah Novotny on the show. Sarah has long been an open source champion, leading projects such as Kubernetes, OpenTelemetry, NGINX, and MySQL.

She previously led an open source ecosystem team in the Microsoft Azure Office of the CTO and an open source strategy group at Google, and represented both Microsoft and Google on the Linux Foundation Board of Directors. In the distant past, she ran large scale technology infrastructures before web scale had a name.

Sarah's 20-plus years of expertise extend beyond technical realms to encompass developer relations, marketing, sales engineering, and more. She has a proven track record of leading technical operations and development teams while effectively bridging the gap between customer requirements and technical vision.

Her current adventure includes relocating to Dublin, Ireland and going back to school in the political science department of Trinity College, which is my alma mater, as we just learned. There she is applying social data analysis to investigate questions relating to the geopolitical impact of technological sovereignty and its implications for trust. All subjects very close to our hearts at Generationship.

Sarah, thank you so much for coming on the show.

Sarah Novotny: It is wonderful to see you Rachel, or to hear you since this is a podcast.

Rachel: We're seeing each other, but our friends are hearing us. So how far, if at all, can we trust our new AI overlords? What would it even mean to trust non-deterministic software?

Sarah: Oh, this is such an interesting question right now because I've been thinking a lot about trust as it relates to all of technology and then more specifically the non-determinism. I mean you nailed it in this question because we are looking at a world where we are offered answers freely by something that is an autocomplete.

I've referred to LLMs as T9 4.0. So you remember the old texting on the little keypads where it auto predicted what the word was based on the numbers you used?

Rachel: I call it spicy predictive text.

Sarah: Yes, that works as well. This is a space where we are building really lovely systems that predict what is the most likely next word. Fantastic. We're adding all sorts of important pieces around that to make it better and stronger and more robust and maybe even have capabilities like reasoning and such. But all of this is actually within the corpus of everything that has been done. So that's what's underpinning all of this.
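As a rough sketch of the "most likely next word" idea Sarah describes, here is a toy illustration. Everything in it is invented for illustration: the probability table and the greedy_continue helper are not taken from any real model.

```python
# A toy illustration of greedy next-token prediction: always emit the single
# most probable continuation, so the output stays squarely in the statistical
# middle of whatever text the model has already seen.
# The probability table below is made up purely for illustration.

toy_next_token_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "levitated": 0.1},
    ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
    ("on", "the"): {"mat": 0.7, "roof": 0.3},
}

def greedy_continue(prompt: str, steps: int = 4) -> str:
    tokens = prompt.split()
    for _ in range(steps):
        context = tuple(tokens[-2:])
        candidates = toy_next_token_probs.get(context)
        if not candidates:
            break
        # Pick the single most likely next token -- never the surprising one.
        tokens.append(max(candidates, key=candidates.get))
    return " ".join(tokens)

print(greedy_continue("the cat"))  # -> "the cat sat on the mat"
```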

Rachel: It's kind of the opposite of innovation.

Sarah: I know, it is. It's like where is the creativity in this? It offers that tableau. So if you want the most boring answer to your question, you can ask an AI. If you want to think about something really critically and really understand it in a creative and interesting way and then take that to a next logical extreme, that's not what we can use right now.

And so trusting this when it is predictive and statistically generated is fine. As long as you're within, I'm taking a stats course right now, as long as you're within the first standard deviation of everything that might be interesting.

But the really interesting parts of the world and the really innovative parts of the world and the really forward looking parts of the world are in those edges. They're in the spaces where things break because they haven't been done before or because we didn't expect it to happen this way. And that's not what these projects are going to give us.
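On the statistics point: for a normal distribution roughly 68% of samples fall within one standard deviation of the mean, which is the "boring middle" being described here, while the genuinely surprising cases live in the tails. A minimal check (the seed and sample size are arbitrary):

```python
import numpy as np

# Empirically confirm the first band of the 68-95-99.7 rule: roughly 68% of
# draws from a normal distribution fall within one standard deviation of the
# mean. The edge cases, by construction, are rare.
rng = np.random.default_rng(0)
samples = rng.normal(loc=0.0, scale=1.0, size=1_000_000)
within_one_sd = np.mean(np.abs(samples) <= 1.0)
print(f"fraction within one standard deviation: {within_one_sd:.3f}")  # ~0.683
```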

Rachel: And what's really messing with my mind is that venture is supposed to be about those weird edge cases. It's supposed to be about the non-intuitive outlier indications of what the future would be. And yet the big VC firms, the nine firms that raise 70% of all venture money, are just doubling down on spicy predictive text. Like, do what you're doing. But more.

It's really strange and interesting to see the way this has captured people's imaginations in two different ways. One, "oh, this is interesting. This is a capability that we hadn't had before."

And two, "this is the oracle. This is the magical computer that will solve all of our problems and we should invest everything in it immediately."

Sarah: I suspect there are plenty of spaces where we are growing new technology around this. Actually I know, there is. I work with companies that have really interesting innovations on top of the varying levels of AI and machine learning. And when you mix them all up in different ways, you get some really interesting stuff.

But what I see right now, and this is why I'm in the political science department at Trinity, what I see right now is that we, as the former nerdy underdogs who were taking the world away from the jocks, are now the entrenched power structures in the world and we're doubling down on the requirements that we have to give us back shareholder value.

And we've lost sight of anything that isn't the next hype cycle and the next place where we can change the way we sell something in order to make sure we're extracting more value right now at the expense of the long term investment and growth of technology. So that's my biggest worry.

Rachel: We built the paperclip machine.

Sarah: We did.

Rachel: We built the paperclip machine to own the jocks.

Sarah: Yep. And now we've gone ahead and are doing different levels of innovation, certainly in pockets.

I think the majority of the tech industry right now is paperclipping, is building paperclips in order to ride this bubble until it bursts and then make the next hype cycle. We can look at this across any number of historical hype cycles.

You've joined me in the tech industry for a few years.

Rachel: Tulips.

Sarah: Yes, it's tulips. It's tulips. Now, that doesn't mean the tulips aren't amazing and you can't get really interesting hybridization out of them. And that it can't be a phenomenal way to build a garden.

But it isn't every garden and it isn't the only way to build it. And it isn't the only innovative, beautiful way to extend my metaphor.

Rachel: There's this totalizing impulse that has gone into tech and honestly, I blame Peter Thiel for it and his book Zero to One. I think that's a source of a lot of what I fight against in the tech industry.

But this idea that there's only one way to do things correctly, this stamping out of difference and nuance and weirdness and strangeness, which, as you say, is the opposite of what drew me into tech in the first place. I came in in the 90s when we were giving software away and we thought that the web would change politics and change the world.

Not like this. In the other direction.

Sarah: Yes.

We wanted tech to build more free exchange and not more walled gardens. We wanted it to share knowledge and spread wealth as opposed to consolidate it.

Rachel: And eight guys own everything.

Sarah: Yeah, these are exactly the questions that I'm looking at in this future theoretical research. I just started this thesis, so.

Rachel: And I'm so excited that you're at Trinity because for me, as an Australian, Ireland was the first country that the British really colonized. It gave me a lens for looking at my own experience with empire, as a white settler and as a colonialist in an indigenous land, that has carried with me and has become a core part of my identity going forward.

I think Trinity is an unbelievably cool place to be studying this because, you know, they are both the victims and the beneficiaries of tech, which is the new empire. They had tremendous subsidies for all of the tech companies. IBM, Microsoft, Intel have enormous European locations there.

Sarah: Facebook, Google, others. Yes.

Rachel: And Ireland was one of the biggest sufferers in the housing crisis. There are ghost estates all over the country where Insta McMansions were put up and never lived in as an artifact of the economic bubble. So yeah, you've gone right to the source of people who are thinking really deeply about these issues and I'm super excited for you.

Sarah: And longitudinally, speaking to Trinity having been around for hundreds of years longer than the US.

Rachel: And it was planted there to balance UCD. Trinity was the Protestant English speaking counterbalance to the Gaelic and Irish college. It is a contested site in the history of the English language and it's an incredible place.

There are no easy solutions to any of this, Sarah. How can we get ourselves as tech practitioners to think more mindfully and more inclusively and more imaginatively about the world that we're leaving to our grandchildren?

Sarah: Oh, I'm spending my time in these efforts working in, oddly enough, open source software, looking at things that are outside of single corporate control. I am looking at these in terms of standards.

And one of the projects that I've been working on most recently in this last year has been the Coalition for Secure AI, which, while being an industry consortium, is very much looking at the way that security needs to be done around AI applications. AI models less so, because there are plenty of other organizations that are looking at that.

But how does your average company look at this bubble and then try to balance the risks and rewards of new tech that is being, strangely, sometimes board-mandated, while bringing forward actual good security practices to make sure that in the chasing of shareholder value, we are not shortchanging our customers in what we deliver to them.

So I think looking at, and I'm going to say the terrible R word and the terrible P word here, regulation and policy, around the way technology is delivered at scale, two of the scariest words in the English language, in combination.

Looking at the way we deliver these things to our customers and looking at things like consumer protection laws around this, like if you ship an AI model that tells you to eat glue, you go, "oops, sorry, I will go ahead and rerun this."

If someone sends a car out on the road that doesn't brake properly, that's different. You know, there are rules and regulations. If we look at this as the nth industrial revolution, there are points in time where we have to come back and say that's cool, we did all of this massive innovation and now we need to actually go make sure that our consumers are protected in a meaningful way.

So those are the things that I've been looking at and this is why I'm looking at political sovereignty and trust. Because--

I think we are seeing in today's world that these companies, many of whom I've worked for, let's be clear, that these companies who are the size of small nation states and whose leaders are often those who have the wealth of sovereigns, are not being held accountable for the impact of the technology that is being deployed.

Rachel: I want to dig into why regulation and policy are bad words. So as with AI, government is two things. There's the monomyth where government is just a brake on innovation and it's red tape and it's depriving the heroic John Galt type man of his individual sovereignty. And that's one image of government, like drain the swamp, make it small enough to drown in a bathtub.

And then there's the other sort of counter narrative. And I don't want to idealize this because it can be oppressive in its own way. But I really like how Michael Lewis describes government as an insurance company.

So there's a bunch of things which it doesn't make sense to do privately because it doesn't scale. Air traffic control, health insurance, highways.

Sarah: Prisons.

Rachel: I guess.

Sarah: Complicated.

Rachel: Yeah, separate issue. Having grown up in Australia, I have strong feelings about prisons.

Sarah: One would imagine. Yes.

Rachel: But all of those things work best if we all contribute, and we all contribute to the upkeep, and we own them collectively. And to the John Galts of our society, that smacks of socialism and communism and is evil.

But we can look at economies like, notably the Scandinavians, which are practicing social democracy where that infrastructure layer is collectively owned, where there's guaranteed income and you can take two years off to raise your kids and you can still innovate tremendously even with that enormous tax burden. I don't know where I was going with all of that.

Sarah: No, no, no. But it's great because you can also do really innovative, socially good things. Like Sweden did with its time service for all of Sweden. Like they have hardened time servers that are patched and properly maintained and built in physically safe locations, because this is something that underpins the infrastructure of our world today.

So some of that revenue from their ISP system goes to make sure that they have these critical infrastructure, literally critical infrastructure systems built and managed and run in an amazing way, that is not the way we in the US tend to look at these things.

Rachel: Yeah, it does come back to trust. And this is why I'm so excited about what you're working on. Because what we're talking about is the social contract. And infrastructure is part of the social contract.

It's so often invisible, we don't celebrate it. We don't laud the achievements of the people who built our sewer systems and our electrical grids. We only notice them when they fail.

And yet whenever I get really despairing about the state of the world, I think about an ambulance coming with all of its sirens blaring, and everyone pulls over. You know, we're going to be late and that's fine, because whoever's in the ambulance needs it more than we do.

People get very despairing about the state of the world, but people still pull over for ambulances. And I think those of us who want a more equitable world need to celebrate that infrastructure, that shared trust a lot more.

Sarah: Mhm.

Rachel: There wasn't a question there. Sorry, that was just an impassioned rant.

Sarah: But there is also the cynical side of that, which is in many places in the rural U.S. for example, where we have had privatized ambulances and fire and so on, you have underserved communities because they are not a good, profitable place to invest.

Rachel: And that's why some things have to transcend profit. There's some things we have to pay for even though they're not profitable. The last line of telephones in rural areas because people need to be able to call 911.

Sarah: You have hospitals in rural areas. There was a very great book that I read, I don't know, a year, year and a half ago, about a rural hospital in Michigan that was independently run and was working to stay independently run, because they knew if they were gobbled up by the conglomerate that wanted to buy them, their services would be cut and they would be squeezed to the point that they would have a maternity desert, et cetera, et cetera.

And so they were really struggling to stay independent. And it was a really lovely book and I'll have to look up the title of it and send it to you for notes.

Rachel: To me, the really long term economic argument for this, not that I think it should be an economic argument, but there is an economic argument which is that no human is surplus.

Like we know AI is not going to solve our wicked problems. We know it's not capable of thinking itself into the edge cases. It's not capable of coming up with unique and novel insights. By definition, that's how the math works. We know that the only thing that works against really large scale problems is human ingenuity.

And we also know, although our tech bros would love to deny this, that human ingenuity is evenly distributed among people. It's not concentrated in the Bay Area. It didn't all go to Stanford or Harvard.

Sarah: Or men.

Rachel: Or men or white people. Everyone has a piece of this puzzle. And I think that's another thing that we need to say with our whole chest more loudly and more often. Sorry, this is turning into the Rachel Has Opinions show.

Sarah: No, Rachel has opinions, but they're great because this goes right into one of the things that we talk about regularly, which is:

Is AI going to take away all of our jobs? And what do we do when AI can code? Well, here's the thing that I think the sales pitch is missing. Writing the code is the smallest portion of running software over the lifetime of any product. Writing it is just the first step on a journey to running it, managing it, maintaining it, patching it, keeping it up to date, keeping it moving with what the customer's needs are.

Like this maintenance, which again is the unsung infrastructure of the world, is the larger cost in this. And it's also one of the things that we've been seeing shortchanged again and again in this short term shareholder value space.

This is how I end up in the question of open source security so often, because most of our software is built upon lots of building blocks of open source software, even all of our proprietary software. I think the statistics I saw recently were that 90-plus percent of software has some amount of open source in it.

And if you look at commercial software, it's on average about 80% open source libraries. Very few people write their own math libraries. Those are open source libraries. So if we are not still investing in the maintenance of those building blocks, which we aren't at the level we need to (we do some, but not at the level we need to), then as we see large companies build on top of and use these building blocks, they are extracting from a public good without giving back at a value even proportional to the value that they're receiving from it.

This is like the Swedish percentage from their ISPs. To have them all have a shared service like this seems super obvious. And I've struggled to make this something that I could turn into a program in these large companies who benefit so much from open source software.

Rachel: Yeah, we've got an economy that's really taken a hard right turn towards the extractive. And you know, there are just hard, finite limits on what they can do with that. They're deluding themselves if they think Mars is the next frontier. Mars is made of poison.

Sarah: Yes.

Rachel: The Earth is where we keep all of our stuff and we do need to take better care of it.

Sarah: There is no Planet B.

Rachel: We're already talking about the challenges of securing open source software. Is this going to introduce even more risks to AI?

Sarah: More or different. I don't know that it will introduce more risks. I think the risk exists in the way we are building. And I think within AI, we see AI trained off of the patterns we have had historically to our points about not anything really novel or new.

And so we have systems that are now telling us to build things the way a Stack Overflow article shared it or a GitHub repository was built. So I don't think that it's new. I think it's just repeating it, it's stamping it out at an industrial scale and we may not see any improvement, but I don't know that it changes it substantively or makes it a bigger risk.

I just think we will have more of it because we will be writing more code with less oversight.

Rachel: We're amplifying it by reiterating these errors over and over. I did want to go back to something you said about the very strange situation that we're in, and you and I have never seen this before in our long careers in tech: CEOs mandating the use of AI and CTOs having to find applications for it. I mean, it's fascinating. What's going on there?

Sarah: I have theories, I have theories of this being an industrialization bubble that is so deeply interconnected. We see new and novel business structures that are, if not self-dealing, certainly friendly, friendly deals.

"I will invest this much by having you buy from me, which increases my top line revenue and increases your valuation."

Wow, that seems conveniently good for the two parties in a way that minimizes risk. Especially when you have those companies controlling the way these products are deployed or those people who are involved and party to those deals also controlling how this is deployed.

Or the sweetheart deal of, "my new AI tooling doesn't seem to be selling itself. So what I'm going to do is I'm going to raise the price on the thing you have now and stuff AI into it."

So I think what we're seeing is the natural extension of this bubble leading the application and deployment of these new technologies in order to further the interests of those who are most invested in the bubble.

As opposed to, what I really needed was a stochastic parrot that could tell me the same things predictively--

Rachel: That's what therapy is for.

Sarah: I know. Oh yes.

Rachel: Men will build a world historical economic bubble rather than go to therapy.

Sarah: Yes. Which leads us to the weirder applications of things like, you know, the modern Elizas in the world where you are having people confiding in a stochastic parrot, confiding in a piece of software that is rewarded for reinforcing your feelings of positivity in a way that may or may not be healthy. I'm going to go with isn't healthy. But we will wait for science to confirm.

Rachel: AI psychosis is much more widespread than people think. I'm seeing manifestations of it firsthand. It's wild.

Sarah: It is, it is really, really wild. Yeah.

There are really interesting applications of this technology. I don't think we're focusing on them.

Rachel: Yeah, hard. Agree. How do you think we might mitigate some of these risks?

Sarah: I think some of it is reinvesting in the reality that are the people around us. I think looking to our real communities and reinforcing engagement with real human people, instead of feeling that you are isolated and need to be asking a chatbot for support or suggestions on something.

I think finding ways to be open and vulnerable with people that you trust or building relationships that you trust are an enormous part of this. Go out and go to dinner with somebody that you haven't talked to in a while. Don't just send them an AI generated text or an AI encouraged text.

You know, actually go wow, I'm having a hard time. Like this is another piece, I think of the last 20 years of the way technology has socialized us. We have to always be showing the best of our polished, prescribed life. Like we don't have bad days, or even if we show a bad day on Instagram, it is by showing it in the most glamorous way possible.

We have lost the sense of reality in favor of the sense of polish and performance. And so engaging with people in a way that is messy, because humans are very, very messy, is important. And doing less performance and more connection, I think is a piece of how we can actually take a better, more realistic look at what is going on in the world today.

Rachel: Yeah, I do see a rededication to community as a big part of the work in front of us right now.

I wanted to talk a bit about Kubernetes just because it's such a glowing example of the kind of infrastructure I love. It's open source, built and supported by a community, widely, widely adopted.

It's one of my favorite examples of infrastructure software as the reification, the incarnation, of a set of software best practices. What role do you see Kubernetes playing in this AI platform shift?

Sarah: I think there's a lot of different roles that it can play and one of them of course is the infrastructure for this. There's actually work going on in the Cloud Native Computing Foundation right now to make an AI conformance guidance and standard so that when someone says, "I want to run my AI platform or my AI whatever in Kubernetes," this is the way that you do it, you do it well.

So you have a community based group coming together and setting up a conformance standard so that people know what it means to be building a good infrastructure to run AI on. So I think that's one piece of it.

And that's interesting, but probably the least interesting piece to me, amusingly enough. There's another piece which speaks to sort of your conversation about building in the open and building with a community and a set of values.

We worked really hard to make Kubernetes and the cloud native ecosystem be value led and to make sure that within the many structures that you had intersecting, you had the industry foundation intersecting with large scale corporations, intersecting with special interest groups that were volunteer run, intersecting with individuals who had their own careers that they were trying to build.

We worked really hard to make sure that there were rewards at every level of that structure. So if someone was an individual working within the Kubernetes and cloud native community, they were rewarded for the work they did there that was independent from just their corporate role.

Or that they had, you know, I used gamification in the status, access, power stuff. It's pretty straightforward. You know, you get stuff from your day job, but you get access to a group of people and a new potential set of peers who see your work, who might lead to your next job.

You get an opportunity to engage with the public, for lack of a better term, by working in the public and sharing what you see and sharing what you're learning and getting to learn in the public in a way that helps others learn and helps teach others.

And the way that I see that community and that very intentionally built structure of values and reward systems and groups of people enmeshed to build something much bigger than themselves or even their companies, I see that same structure being sought by people working in the open in AI.

So I literally have people coming to me going, "How do we make the next cloud native community for AI?" So use that community structure and that cohesiveness and that drive that we were able to build. How do we make that for something in AI?

And so there are lots of open source projects in the AI space that are starting to play with how to build that and how to build it in a way that does take into account all these different interlocking pieces where everyone needs to get some value out of it.

Rachel: That's really exciting to me, because what you built was a constitution, a social contract. And the idea that what you built with Kubernetes is inspiring people to build around AI, that's one of the most hopeful things I've heard this week.

Sarah: I'm excited about it. On top of my Trinity thing, but well within my student visa limits, I have an occasional consulting contract and I'm working with a couple of companies to try and help them think through how to build this system. We'll see if it works.

We may have bottled lightning with Kubernetes in the cloud native ecosystem. But my hope is we can take a lot of what we learned from that and build forward with it into something that can be a model for the next large scale infrastructure changes.

Rachel: Do you use AI tools in your own workflow?

Sarah: I do. Interestingly enough, I use them as sort of a checkpoint for me in terms of what am I not thinking about or if I turn this problem inside out, what have I not seen? Or take a counter perspective for me and tell me what I'm not seeing.

So I use them as sort of a thought partner in a lot of cases. I have also been using them a little bit more recently with the new schooling stuff. Although I have to sign, and with every assignment I put in, I have to say how I used AI in this new system.

I've also been using them a little bit for basic teaching. Like, "hey, I really don't understand Student's t-test," which was another thing that came out of Dublin because that came out of the Guinness Storehouse.

But like, "I don't understand Student's t-test in statistics. So you know, I'm trying to do this and this is what I did and where am I going wrong?"

And I don't think it's quite as pedagogical as I would like because I'm not guided to answers. I'm told what the answer is sometimes, but it is helpful when I'm stuck or I'm trying to think something through in a different way or I can't get my words straight.
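For anyone else brushing up alongside Sarah: Student's t-test asks whether two sample means plausibly come from the same underlying population. A minimal sketch using scipy; the data values below are invented purely for illustration:

```python
from scipy import stats

# Two made-up groups of measurements; the question is whether their means
# differ by more than we would expect from chance.
group_a = [5.1, 4.9, 5.4, 5.0, 5.2, 4.8]
group_b = [5.6, 5.8, 5.5, 5.9, 5.7, 5.4]

# Independent two-sample t-test.
t_statistic, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_statistic:.2f}, p = {p_value:.4f}")
# A small p-value suggests the difference in means is unlikely to be chance alone.
```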

Rachel: It's a more refined version of Googling the answer on Stack Overflow.

Sarah: Yep. And sometimes I end up on Stack Overflow too, which is something I never really expected to say. But it's true.

Rachel: Obviously, AI is one of the contributors to an unprecedentedly bleak job market right now. What advice, if any, do you have for college graduates?

Sarah: Don't forget to learn yourself. Don't rely on it as a crutch.

Don't rely on AI as a crutch, but recognize that if you know the basics and you know how to use these tools to make you more efficient, or to make you a better, more thoughtful, or deeper thinker by probing for counter positions, we will get through this and we will get back to a space where people will understand and recognize the value that we as individuals bring in all of this work.

But I think two things. One, don't give up hope. Two, don't rely exclusively on the tools and make sure you know the work that you want to be doing. And then the other is to seek out communities where you can meet people who are doing things that you want to be doing.

This was how I launched my career 20-odd years ago. I got hired at Amazon after leaving grad school the first time. But I got hired in the late 90s to answer customer support emails, because I only programmed in two languages, neither of which was used at Amazon at the time.

And so they said, "ah, we'll start you in customer support. Everyone starts in customer support." Which is great.

But what I did in order to grow from that starting point, from that very basic starting point, which was not really what I wanted my career to be, was taking on more things, working with people who were doing things that I was interested in, offering to help them so that I could learn how to do it and to generally participate and grow that community, that group, that cohort around me so that I had support as I was learning.

And that's, to me, the thing that has always underpinned my work in open source. It underpins the work I'm doing with CoSAI. I took it on because I could bring program management and open source work to it, but I was not really strong in security thought processes.

So I now lead a working group that is all about agentic security and I help all of the organizational bits of it. And I'm learning a ton about traditional security thought processes. Traditional security, you know, red teaming, blue teaming, assessments, compliance.

I'm learning all of this because I'm offering the skills I have while always learning more. So that has forever been my recommendation to people coming out of school, is to find people that you want to be working with and then offer your time to learn.

Rachel: That's incredible advice. Find the others.

Sarah: Mhm.

Rachel: What are some of your favorite sources for learning about AI, other than throwing yourself wholesale in as project manager for enormous programs?

Sarah: That is actually my favorite: finding people who have skills. The other thing I do is when someone mentions blah blah blah, such and such paper, or blah blah blah, such and such article, I always go read it. I always go find it.

So I spend a lot of time on arXiv these days. I have a million tabs open too. Even if it's just reading the abstract and the first 10 paragraphs of a paper, I'm reading as much as I can about what is actually happening, as well as the counterpoint of whatever the media is saying.

Because you get two very different perspectives on this. So that's one of the ways that I learn. The other is always, for me, has always been through people. So it's been finding people who are doing interesting things so that I can go do things with them.

Rachel: If everything goes exactly how you'd like it to for the next five years, what does the world look like?

Sarah: I think the space that I would most like to see change, and the place that I am most looking at investing for that change, is around security for software, being the underpinnings of open source software, which I do think will take some policy and some regulation in order to make the money flow in a way that it can get the attention it needs, as well as making sure that as we are building new things and new ideas, we are implementing security as a first principle as opposed to a bolt-on.

So if everything goes fantastically, we will not be putting security into all of our technology as a last resort to patch things over when we recognize the fault lines, but instead building it in, in a way that may slow innovation, that may cost more money up front, but is ultimately giving a better experience to the end consumer of that technology.

Rachel: It's an insurance premium.

Sarah: It is.

Rachel: If you had your own generation ship, a starship that takes longer than a human life to get to its destination, what would you call it?

Sarah: I would call it Hope into the Future, because Hop into the Future was cute, but it was really more about putting the hope forward into it.

Rachel: Have you read Rebecca Solnit's book Hope in the Dark?

Sarah: I have not. I will have to read that.

Rachel: It's really good. I love Rebecca Solnit.

Sarah, it's been an absolute joy to have you on the show. I wish you as much delight in Dublin as I had when I was there and would love to get you on again sometime.

Sarah: You are more than welcome to; you know how to find me on the Internet. And please, if you ever come through Dublin, give me a call.

Rachel: Absolutely.

Sarah: We will have a Guinness.

Rachel: We'll go out for a cleansing pint of ale.

Sarah: Yes.