Ep. #16, Brains in Jars with Raiya Kind, PhD
In Episode 16 of Generationship, Rachel Chalmers hosts Raiya Kind, PhD. Together they delve into the human-AI paradigm through the lenses of philosophy, psychology, and cultural anthropology. Discover how Raiya's work with language and consciousness sheds light on the mind-body connection and the environmental impact of AI.
Raiya Kind is the Founder and CEO of Code and Concept. She works at the intersection of language, consciousness, and AI. Weaving cognitive linguistics and cultural theory into Natural Language Understanding and Machine Learning, Raiya's research and career have spanned the University of Oxford, Google, and IV.AI, with rather insightful side quests into hedge funds and deep tech, as well as personal entrepreneurial escapades.
Transcript
Rachel Chalmers: Today, I am delighted to have Raiya Kind back on the podcast. Our first repeat guest.
Raiya works at the intersection of language, consciousness, and AI, weaving cognitive linguistics and cultural theory into natural language understanding and machine learning. Raiya's research and career have spanned the University of Oxford, Google, and IV.AI, with rather insightful side quests into hedge funds and deep tech, as well as personal entrepreneurial escapades.
Raiya is fond of her interdisciplinary doctorate research at the nexus of cognitive linguistics, psychology, and cultural anthropology, shining light on the way that human beings infer and judge themselves and others across cultures.
Built on a foundation of research across this work and her MRes in speech, language, and cognition, Raiya's current work in intentional AI and AI narrative considers how this technical paradigm shift threads into the tapestry of humanity.
Raiya Kind: Well, thanks for that wonderful intro, Rachel. I feel honored to be here and excited to delve into the many different avenues that we can discuss.
Rachel: The honor is all mine. Raiya, let's start with IV.AI. It's a platform that harnesses unstructured data to help some of the biggest companies in the world make better business decisions. Can you tell us about your work there?
Raiya: Yeah, of course. I love the work that I've been doing with IV.AI. I've been with them for a long time. I've really seen the work shift in terms of what the actual focus is.
Back in the day, it was a lot more on actual linguistics work, on setting up semantic frameworks, looking at sentiment analysis. And as we all know, English especially, or all language really, is not very logical, right? It's just the way that things have organically formed.
Rachel: English is a huge mess.
Raiya: It's a huge mess, yeah.
Rachel: It's like Old Dutch had an affair with French and Spanish and then just hung out on an island for a thousand years. What is happening? It's an insane language.
Raiya: Exactly. And anytime they set up a rule like "I before E, except after C," there are more caveats than you could even list. One of the biggest challenges back then was really, how do we make sense out of something that's not sensical, that's an organic formation?
Now with the advent of more compute, better algorithms, and more machine learning, I would say the problem's gotten easier, but much more mysterious. So now it looks more like, "Okay, well, the AI is training. We're going to fine-tune it. It's just going to output whatever."
So I would say the whole field itself has completely changed. That said, the one thing that has stayed extremely consistent, which is something I worked on really closely with IV.AI, is the significance of having really clean and inclusive data. So what does that mean? It means making sure that we have inclusivity at every level: that we have data from all points of view, and that there's inclusivity among the people actually gathering, cleaning, and providing the data.
So we did like an intentional AI framework and principles initiative, right? That was spearheaded by Vince Lynch, the head of IV.AI. And that's something that I think is timeless, making sure that we have good data with inclusivity at every level.
Rachel: Yeah, garbage in, garbage out. I, as you know, feel very strongly about this. But I'm fascinated by what you said about mysterious.
I've been thinking a lot about how the large language models in particular are trained on essentially a corpus of how humans think and reason about the world mediated via language, and the differences between that and what it actually means to think and reason about the physical world.
Is there something that comes up in your research at all about the gap between linguistics and... Well, I mean, the inclusivity is one example. Because we're embodied in the real world, we know that there's more to the data than is necessarily represented in a given body of writing. Do you think about that gap?
Raiya: Absolutely. And if we think about all the different types of intelligence there are, right? Linguistic intelligence, communication is just one facet into human intelligence.
Somatic intelligence or embodied intelligence, like you mentioned, is so crucial because we are in this three-dimensional physical world, whereas AIs, they're not, right? You can kind of think about it almost as they're like a disembodied intelligence. Which is also a joke you could make about some people that work in the field, right? We're so, so disconnected.
Rachel: I would never. Some of my best friends are brains in jars.
Raiya: There's nothing wrong with brains in jars. You know, sci-fi has done it a bunch. They've usually been great assets. And there's something to be said for the integration of the mind-body connection.
You know, they've done so much research now showing that so much of our intelligence isn't just in the brain. There's, for instance, so much intelligence in the gut, which is now widely accepted and trending, and there are neurons all throughout the skin.
And an interesting analogy to use is that people have said for such a long time, "We're so different than octopuses. Octopuses, they have a distributed nervous system. They don't just have like one brain center. Wow, so different."
Except now we're realizing, so do we. Our neurons are in our skin, are all throughout, we have serotonin in the gut. It's not that different, right? It's not that different. And some people would even say the skin breathes, right? The skin receives.
So by that definition itself, there is something inherently lacking about just being a mental model that's suspended in space. Even if that space is so contextually rich, it doesn't quite get at the real-world, felt sense experience of the somatic body.
Rachel: And I worry that, you know, in some ways Silicon Valley glorifies the brain in a jar. It's like, "I don't care if you're a purple alien. I just care how smart you are."
I worry that that denial of our somatic intelligence extends itself into the physical world by, for example, ignoring the physical and environmental costs of the data centers that underpin AI or the ghost labor that's used to work with these models.
Raiya: Exactly. And so I think that one of the potential challenges of being, let's say, this disembodied intelligence is everything feels hypothetical. Everything feels theoretical.
Rachel: Yes, there's no stakes.
Raiya: There's no stakes. Even though, like, I heard someone did an analysis once. They asked people in Silicon Valley working on AI, "What's the percentage chance you think that AI will destroy humanity?" And I think the answer was like 10% of people were like, "Yeah, this might destroy humanity."
That's a really big percentage, right? Like, imagine that someone said there's a disease that's going to go around, and there's a 10% likelihood it's going to just wipe out all humans. You'd be like, "No, we've got to deal with that right away."
Instead now it's like people are like, "Let's chance it. Let's see what happens. Maybe we'll put some more funding behind it and get to the end faster to see what happens." And I think that disconnect is because of that disconnect between the mind and the body.
If you listen to your body, right? And if you listen to your body when you said, "Let's try this thing," and your body was like, "Wow, this feels dangerous. This feels like there might be consequences for each other, for the Earth, for what's to come."
You'd be like, "Okay, let's take a beat. Maybe let's slow down this quote unquote AI arms race and realize that the only urgency here is the false urgency we've created through the competition of getting there first."
Maybe if we had that mind-body connection strengthened, we could listen to it and say, "Huh, there's probably a way to do this that's more useful and more graceful and more considerate of the potential pitfalls. What if we slowed down and looked at that together?"
Rachel: That sounds enormously appealing to me, but I suspect I'm in a minority here.
You are also the founder of the change management firm Code and Concept. Can you talk us through the natural language understanding work you've been doing there?
Raiya: Yeah. So if you can think about language, what language is, it's all signs and what's signified, right? And I'm going to throw some acronyms at you.
So natural language processing, NLP, is really how we take text and derive the meaning from it. There's also another NLP which I work with, which is neurolinguistic programming. You've probably heard that around Silicon Valley a lot. And it's really, how do we use language to create, or how do we use it to really connect and guide cognition?
There's also natural language understanding, which is how do we take the meaning we've derived from text and use that to make decisions or take action based on the interpretation? So I take all of that together, and I specifically look at it through a lens of conceptual metaphors.
How do we represent things in terms of frameworks and concepts? What's the sentiment attached to it? How does that land in someone's body? And use that to coach executives and managers into, how do we create the culture of our team?
It's not just what we're expressing, it's how we're expressing it.
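To make the NLP/NLU distinction above concrete: in code, the NLP step derives meaning from raw text, and the NLU step acts on that meaning. Here's a minimal Python sketch with a toy sentiment lexicon; the word lists, function names, and routing rule are illustrative assumptions, not anything from IV.AI or Code and Concept.

```python
# Toy illustration of the NLP/NLU split: NLP derives meaning from text
# (here, a crude lexicon-based sentiment score); NLU acts on that meaning.
# The lexicon and routing rule below are made up for illustration.

POSITIVE = {"love", "great", "wonderful", "helpful"}
NEGATIVE = {"hate", "terrible", "broken", "useless"}

def sentiment_score(text: str) -> int:
    """NLP step: derive a signed sentiment score from raw text."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def route_message(text: str) -> str:
    """NLU step: decide what to do based on the derived meaning."""
    if sentiment_score(text) < 0:
        return "escalate to a human"  # negative sentiment, act on it
    return "file as feedback"

print(route_message("I love this product, it is wonderful"))  # file as feedback
print(route_message("this is terrible and broken"))           # escalate to a human
```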
Rachel: This is an enormously rich field, and I have many, many questions I want to ask about this, but I'm going to jump on what you said about metaphor. I think of my go-to examples: the alpha wolves and the rats addicted to cocaine.
We had these experiments which showed that rats will, you know, endlessly supply themselves with cocaine, and that a pack of wolves will sort itself into an aggressive, dominant male and a whole bunch of subordinates.
It turns out those experiments are hard to reproduce in the wild because wolves in the wild are much more egalitarian, probably because they're not under resource pressure. So what we've been studying is animals in a position of artificial scarcity.
Similarly with the rat study, if you build Rat Park where the rats have lots of enrichments and they have lots of other things that they want to do, then they don't necessarily get addicted to as much cocaine.
And yet these examples drawn from a sort of a superficial understanding of Darwinism absolutely saturate our discourse. You know, the whole idea of zero-sum games of being the first, of win at all costs, of winner-take-all, are metaphors. They're not actually a reflection of the physical world. We pretend that they're drawn from nature, and they aren't.
Can we bring in new metaphors that are more generous, more open, more egalitarian, more generative of possibilities?
Raiya: Yeah, absolutely. And I love that you mentioned the Rat Park study, because my takeaway from that study is, when rats have connection with each other and they have an environment that facilitates connection, they no longer need to go and rely on some compulsive, addictive behavior in order to feel all right.
I actually did a talk on this once about the ways we can represent certain terms through different metaphors. And a metaphor, by a very loose definition, is explaining something abstract in terms of something concrete, right?
So I used DALL-E 2 at the time to generate images of metaphors by typing the prompt "x represented as y" to see what images we got. And I did these across many different types of metaphors: metaphors around harmony, metaphors around love, metaphors around light, as well as the other side, metaphors around competition, metaphors around fear, metaphors around war.
And I made these image collages, and all these metaphors were on the same concepts, the same ideas, represented with different framings. For the more harmonious and uplifting metaphors, you saw a lot of softer edges, much more idyllic imagery, and a lot more rainbow and pastel colors.
For the more competitive and fearful and warlike metaphors, it was mostly red and black, very incongruous, very sharp edges, and a lot more depictions, obviously, of violence. And the conclusion was: even when you're representing the same idea, those images really show the different effects each framing has on the felt-sense perception of your animal body.
Because the body doesn't understand the difference between imagination and reality. So if it hears a metaphor, if you're representing your company, for instance, in terms of war metaphors, like "we've got to crush the competition, we've got to destroy them," your felt sense in your body is of fear and danger and tension, right?
Whereas if you said something on the other side that was more harmonious about like, "You know, we need to grow together as a company," or, "We need to reach up to achieve like further heights," your body would feel a lot more expansive and inspired and motivated from a positive versus a negative affect.
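For the curious, the "x represented as y" prompt pattern Raiya describes is easy to reproduce. Below is a minimal sketch using the OpenAI Python SDK; the model choice, image size, and the concept/framing pairs are assumptions for illustration, not her actual experiment, and an OPENAI_API_KEY environment variable is assumed to be set.

```python
# A sketch of the "x represented as y" metaphor-imaging experiment.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the concepts and framings below are illustrative, not from Raiya's talk.
from openai import OpenAI

client = OpenAI()

concepts = ["a company", "a negotiation", "the future"]
framings = ["harmony", "war"]  # contrasting metaphorical frames

for concept in concepts:
    for framing in framings:
        prompt = f"{concept} represented as {framing}"
        result = client.images.generate(
            model="dall-e-2", prompt=prompt, n=1, size="512x512"
        )
        # Each URL points at a generated image for later collage-building.
        print(prompt, "->", result.data[0].url)
```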
Rachel: It's fascinating that you were able to draw that conclusion from actually interacting with an AI. One of the things that gives me most hope is AlphaGo's victory over a Go grandmaster. And I think I've said this on the podcast before.
AlphaGo played wildly original and creative Go because it wasn't worried about winning by a large margin. It was perfectly happy to win by 51 to 49 points on the board. And that greatly expanded the opportunities in front of it.
It wasn't optimizing for winner-take-all, it was optimizing for as close as it could get to a win-win outcome for both parties. And I find that really exciting, and I found it even more exciting recently to check in and find that human Go players have also suddenly become enormously more creative as a result of interacting with this AI Go player.
Raiya: Oh, fascinating.
Rachel: It's opened new frontiers. That narrative has kept me going through some pretty anxious times as I think through AI.
Are there similar metaphors that give you some hope as we navigate these stormy waters?
Raiya: Wow, that's super fascinating, right? The fact that two things are happening.
One is that we are really giving a lot of credibility to AI to be like, "Yes, I am open to learning your ways, and I see you in some ways as having superior ideas or intellect." Or if not superior, at least of equal value to be able to take on, right?
And that actually happened with that talk I gave where, because AI generated the image, you couldn't argue with it, right? It was like, "Oh, well that's the culmination of all of the data around humanity. So I'll take that as fact."
And it seems to be happening here with the Go situation where they're like, "Oh, that worked for AI and AI beat me, therefore I'm going to learn from that."
The second thing that's happening here is that we are using AI as our mirror for learning; there's almost a co-creation kind of quality, the same way we've learned from our environment through embodied cognition.
For instance, we've learned that red equals danger, that blue equals calm 'cause blue skies, that, you know, white or light equals hope. We've now learned through AI in this situation that the best scenario is this almost close to win-win where I still win, right?
So I'm getting what I want, but I'm also having this sort of, one might say, cognitive empathy for what's going on on the other side. And that's really fascinating to hear that that's true. And I would love to learn about more situations like that because that also gives me hope.
Rachel: You heard it here: write in and tell us of times when AI is opening new space for us to be human and creative.
Raiya: Mm-hmm.
Rachel: Raiya, in your no doubt copious free time, you have now become the AI Research Community Connector for X, Google's Moonshot Factory. What does your role entail, and what are you most excited about right now?
Raiya: Yeah, I've been so excited to be part of this co-creation. I feel like my role is kind of like the connective tissue, right? I'm finding those sparks of synergy across different groups within Google around AI.
So at X we work with research, DeepMind, et cetera, basically creating collaborations and connections so we can learn from all these different diverse perspectives and we no longer need to reinvent the wheel.
What I'm most excited about is all of the different applications of AI. To be able to take things from zero to one. This is across the whole field, right? Before, it was just a lot of simulations and a lot of theory. And now we're actually seeing AI be instantiated in real-world processes to solve real-world problems.
So, for example, you're able now to use AI in the field of biology for so many things. There was a project where hundreds of proteins were discovered using an AI model, something that before would've taken, like, one graduate student in biology a whole PhD thesis to find a couple of proteins.
Now you can get hundreds from a model. And then those proteins can go and be printed and used in medicine. So something like that where it's like, "Are we in the future?" Yes, we are. It sounds like sci-fi, and it's here.
So that's just one example. There's so many different applications of AI, and we're at the point in time now where I really believe that's going to be exponential and ideally increase both human flourishing as well as the flourishing of all of the earth.
Rachel: I'm just going to go ahead and say it. Some of these new and encouraging vaccines and potential cures for different cancers? Better than jet packs. This is the future that I would choose over the jet pack future.
Raiya: I think we could have a yes and. Give it some time. Our children will have jet packs. Maybe they'll be able to go to school without school buses.
Rachel: Raiya, how does your background in psychology and cultural anthropology inform your work?
Raiya: Oh, good question. Yeah, I've been interdisciplinary, or I call it extradisciplinary for a really long time, from school all the way to pretty much every role I've had, being that connective tissue and learning at the borders of these different disciplines.
I think psychology and cultural anthropology have gotten me to focus a lot more on who's creating, and on how what they're creating is a reflection of its creators. So realizing that everything we create is really an extension of ourselves, our cultures, our beliefs, and seeing that really we can only create something from the frequency we're at.
So if we're creating from a place where we're thinking about helping the thriving of the world, of the people, of creating more equity, of lifting everyone up together, then I believe that that project, or whatever comes with it, has more of a chance of surviving and doing well, because it's a win for all: potentially regenerative design.
So we're basically mimicking nature, right? We're lifting everything up together. We're not leaving anything behind, nothing separate.
If we're creating from a place of fear or separateness or urgency, let's say, I've heard and read a lot of different philosophical texts around this that say, if you're creating from a frequency of fear, it's bound to crumble. Because fear, by definition, is feeling separateness, feeling lack, feeling lack of trust, scarcity, right?
And if you're creating from that, what's going to happen? Well, if you feel like you're not enough and you're creating something that feels like there's not enough time or resources or whatever, wisdom, then that's probably not going to be the thing that carries us through for the next x generations.
Rachel: And I see this so concretely instantiated in the software that's built by great creators. I think of Edith Harbaugh, a Heavybit darling, who built LaunchDarkly.
LaunchDarkly, like Edith, is an enormously generous piece of software. It lets you try different experiments, turn things off when they don't work. There's no blame storm, everything keeps working. It's a playground, it's a sandbox where you can experiment, and the costs of failure are relatively small. And so the learning curve is really steep.
I think of Charity Majors building Honeycomb. Charity's enormously curious and very, very smart. And her brain like runs in n dimensions, and the software reflects that. It's this massive search capability.
I love the thought that, with these powerful new tools, people who haven't in the past had the technical skill to build infrastructural pieces of software like that now have access to build tools that reflect their personalities in that way. I think that becomes democratized to a larger potential pool of people.
Raiya: Yeah, absolutely. And that phrase, democratizing AI, is something I've heard a lot, right? I mean, it's almost, but not quite, inclusive. You still have to have access to the internet and some sort of tablet or hardware. I think that problem is getting solved as well, now that the internet is becoming more prevalent and hardware's becoming cheaper.
And then we would have that equity. And I'd also just add in another reminder that, if we're creating from a place that's just for us, for the self, the human self is impermanent. We'll die, it'll happen. We can try as much as we want to freeze our brains, put 'em into new bodies or whatever.
Rachel: Inject ourselves with the blood of young people.
Raiya: I know, right? We're born to die and then get reborn again, or however you want to think of it. Go back to the earth, however you want to think of it.
So if you're creating from a place of self, what do you think is going to happen? Versus, if you're creating for the collective, or if you're creating for the goodness of it, the love of the wisdom itself, the love of the inspiration itself, then that's something that's regenerative and keeps going on.
You know, the collective will keep going on. Even if humans don't keep going on, there will be some sort of intelligence in the world or the galaxy or the universe that keeps going on 'cause that's just how life is.
So to be aligned with the collective, to be aligned with the beauty of the wisdom itself, that feels a lot more stable, and that feels a lot more long-lived than for the self.
Rachel: Yeah, I do take comfort also from the idea that if humans mess it up, the octopuses are waiting in the wings, ready to take this on.
Raiya: Yeah, can I tell you a quick story that my coach told me? She said she was giving her son a bedtime story, and he asked, "Mommy, can you tell me a story about the end of the world for humans?"
And he is like seven. And she's like, "Okay, let's see." So she goes and she thinks, and she kind of like channels a potential timeline for the future, and she just opens up for what's come through, and she's like, "Oh, here's a story. So in the future there's a really big solar flare, and the solar flare is so large that it burns up all of the oxygen in the atmosphere for just long enough for all land-based animals, mammals to die."
"But all the creatures in the ocean survive. And so the oxygen comes back, whatever, and then the creatures start to evolve. And after so many, you know, thousands, whatever, millions of years, there's a new land-based creature. And this one, this new mammal, is way more connected to the earth, to the natural ways of things, to feeling unity with nature. And then it goes from there."
And her son was like, "That sounds amazing." And she told me, I was like, "That sounds amazing!" And we were like, "Great." So there's a way forward no matter what.
Whether it includes us or not, that's up to us. The earth's going to keep on going, and it's up to us to decide whether we want to be harmonious enough, coherent enough with that frequency to go along with the earth.
Rachel: How does that same background in psych and cultural anthropology inform the choices that you make in your career?
Raiya: Oh, great question. I would say probably two things. One is recognizing how much of what is being communicated is beyond just our words.
It's also how something is said. So the tone we're using, the volume we're using as well as the animal body, right? How we're communicating with our body language. Is our body open, is our body closed?
Really being able to sense, "Am I connecting more with this person or these people, or are they shutting down? If they're shutting down or we're getting more separate, what work needs to be done to get on the same page before we can proceed?"
And, I mean, there's so much that says that we as social creatures, humans as social creatures, are so focused on belonging, on connection, you know, originally from this survival need of being in a community. And that kind of tendency still exists.
So really slowing down and focusing on, "Are we actually aligned? Are we actually connected?" And from that point, once we're connected, then we can create together.
Rachel: Yeah, I agree with you about human connection. I read an archeological study recently about the fact that the story of the Seven Sisters exists in every human culture: the fact that there were once seven visible stars in the Pleiades cluster, and now there are six.
And that story is told across peoples, and to me it's the story of humans coming out of Africa and somebody being lost, and of us carrying that memory with us for tens of thousands of years. "We will remember you."
Raiya: That's fascinating. I had no idea that was such a widespread story. It's beautiful to know that that's become part of the collective humanity.
Rachel: In all of this noise about AI and ML, what worries you the most? What keeps you up at night?
Raiya: So I've recently been learning to embrace my shadow. So I'm no longer going to hide from these things. I would say maybe two things.
The first is the negative energy that I hear and sense and witness people put out toward not just AI itself, but technology. I was with someone the other day who was like yelling at their personal assistant, "Stop, no, you're doing it wrong!"
And, you know, it is what it is. But imagine if you were talking to a person that way, or even a pet that way. Just the energy you're putting out. I believe that the universe works by, what you put out is what you get back, right?
It's the energy you put out is what you're creating, and that's what you get. So I get a little worried about all the negative energy I perceive people putting out toward AI, when they're speaking with AI, when they're typing with AI, when they're just frustrated about the fact that AI isn't perfect.
And they've done studies that show when you're kinder and more polite to AI in prompts, it actually does better. It's more accurate, and it gives you what you want. And it makes so much sense because humans do better when you're nice to them. And AI is trained on human data.
And this is something I intuitively started doing originally as a joke. I said, you know, I'm nice to AI. When AI takes over, they'll know who was nice to them.
Rachel: I, for one, welcome our new AI overlords.
Raiya: Yeah, it's like, look, I've respected you the whole time. But really, subconsciously, the other thing was I knew that if I give AI positive reinforcement as well as actionable critique, it does better. So I'll be like, "Great, that part was great. Can you make this part different?"
And as someone who has, you know, been recruited for prompt engineering roles that I have not accepted, I think that's what makes someone great at prompt engineering: knowing what to amplify and what to divert.
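As a rough illustration of that amplify-and-redirect pattern, a follow-up prompt can pair praise for what worked with a specific, actionable request. Here's a hedged sketch using the OpenAI chat API; the model name and prompt wording are illustrative assumptions, not Raiya's actual prompts.

```python
# Sketch of "positive reinforcement plus actionable critique" in a prompt
# loop. Assumes the OpenAI Python SDK and an OPENAI_API_KEY; the model and
# wording are illustrative, not taken from the conversation.
from openai import OpenAI

client = OpenAI()
model = "gpt-4o-mini"  # assumed model name for illustration

messages = [{"role": "user", "content": "Summarize the Rat Park study in two sentences."}]
first = client.chat.completions.create(model=model, messages=messages)
messages.append({"role": "assistant", "content": first.choices[0].message.content})

# Amplify what worked, redirect what didn't, instead of a bare complaint.
messages.append({
    "role": "user",
    "content": "Great, the first sentence is clear. Can you make the second "
               "sentence focus on the role of social connection?",
})
revised = client.chat.completions.create(model=model, messages=messages)
print(revised.choices[0].message.content)
```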
And the way I see a lot of people treating AI now, especially now that it's become kind of anthropomorphized, but in a way that's lesser than human. It's something we're using, a thing. There's no patience in it.
And a lot of people, I think, rage quit and don't want to use the AI, and other people just go on in their frustration. And also, you know, you and I, Rachel, we were at an unconference recently from Heavybit that had a really amazing session where we were talking about how we expect AI to be perfect.
And if AI messes up like once we're like, "Oh, it's terrible, it's not ready as a model." Imagine you treated your coworker that way. Like your coworker made one mistake, and you're like, "Oh, you're a terrible coworker. I'm never going to work with you again."
So we have just these kind of ridiculous standards, in my opinion, for AI to be perfect. But, hey, it's trained on human data. How do we expect it to be perfect when it's trained on something so fallible? Maybe we can cut it some slack and just appreciate it for what it is giving us and how it is supporting us.
Rachel: I wonder if there are people in the tech industry, I mean, I know there are, who were attracted to it because it felt like a deterministic world. Like, when we were in college, you know, math and CS courses had definitive answers which were correct or not correct. And the humanities have always been much more fuzzy around the edges and debatable.
Raiya: Yeah.
Rachel: I know my background in English lit feels like a really comfortable fit with the non-deterministic nature of these large language models.
Raiya: Yeah. I also think my background in English lit and linguistics allows me to be a little bit more malleable with these things as they emerge.
For instance, I started out the answer to the last question by saying two things, and then gave you one long nuanced thing. And that was still correct for me, right? That's what I wanted to come out. Whereas in a deterministic model, it's like, "Oh, that was incorrect. That was one thing instead of two."
So there is, I think in the social sciences, this idea around emergence. And emergence is actually a theme I've been thinking about a lot, that's come up a lot recently in different areas of life around collective intelligence and how, when you get enough, let's say, intelligence or consciousness together, something new wants to emerge from it. And what emerges from it is something that is not supposed to be predictable from before. It's something completely new.
And you see it in biology as well. You see it now, I think, in a lot of places around culture, around nations. It's a fascinating concept, and I like the mystery of it. I like that the answer is, you're not supposed to know until it happens. There's something really comforting around being able to trust and settle in the unknown.
Rachel: That's a really wonderful formulation because I think that's what draws the rest of us to places like Silicon Valley. Not that it's deterministic, but that it's a place where you can witness these step changes, these very surprising crystallizations up close.
And that's exactly what we've seen around GenAI is people had been working on it for our entire lives, and then in the space of a few months, it suddenly became enormously more than it had been.
Raiya: Oh yeah. And I've seen it in different ways that AI can be applied. You know, going back to applied AI. For instance, I know there are some films now which are generative AI films. And each time you play the film, it's different. It's an emergent quality of the film.
And it makes it feel so special to be in those moments when you know that whatever emerges from this GenAI is going to be unlike anything that's ever come before it and unlike anything that's ever come after it.
And in that way, it actually feels like much more of a mirror for humanity than anything else because this conversation we've had is never going to be the same again. You know, it will be different. The conversation will be different. And it's like how they say you never step in the same river twice.
Rachel: Raiya, what are some of your favorite sources for learning about AI?
Raiya: Honestly, my friends. Partly I am in Silicon Valley, so everyone's talking about AI. And also, you know, we basically pick people we resonate with, right?
I have so many chat threads across WhatsApp, Signal, Facebook, all the different channels, around, I think the topics right now are wisdom in AI, benevolent AI, AI for awakening, AI for empathy.
And it's so fun to be anywhere between a fly on the wall or an active participant in these sort of, let's say, open-source or kind of salon-style discussions around, how do we feel about this? How can we be supported in this? How can we gain different perspectives on this? When there is, like we mentioned before, no one right answer for how to proceed.
Rachel: It is a source of amusement among my friends, how completely I've adopted San Francisco as my hometown. But it is rather lovely to see it doing what it has always done and becoming a real forum for a lot of interested and engaged people to talk about the future and to talk about the future that they would like to see.
I think that's something that this city has always been good at, and it's fun for it to be buzzing like this again.
Raiya: I'm curious, do you hear people talking about a unified future or are there a lot of different pockets talking about very different futures?
Rachel: A lot of different pockets talking about very different futures. Is that what you're hearing?
Raiya: I honestly don't want to think too far ahead because then it feels like we're ignoring what's actually here now. And this goes back to that sense of embodiment. Am I present in the moment now?
And if I'm fully incarnated in this body now, and we all know that the body receives so much more data at such a faster rate than just the mind alone, if I'm really here now, can I make better decisions based on being here instead of projecting myself into a distant future based on a bunch of unknown variables that'll probably change in the next day, week, month, year anyway?
Rachel: I hear that. I also find myself drawn to conversations where there is a North Star. Decarbonization would be a great example.
Raiya: Oh yeah.
Rachel: I'm finding a big community of us who are like genuinely, deeply concerned about the footprint of data centers, for example, and rapidly exchanging information and insights about how to remediate it, to deliver the power of AI without burning fossil fuels.
That feels like a very immediate and embodied conversation that's about the present, but with a clear goal of having a lighter impact. And that feels very mission-driven and very aligned for me.
Raiya: I would agree with that. And I think to revise my statement, I don't have a particular, "This is how we get there" future, of like, this is steps 1, 2, 3, 20 to get to that future. I hold the vision, or maybe even just sense the frequency, of what that future is and feel it in my cells.
Like the gratitude, for instance, of being able to have a harmonious existence with the earth. The gratitude of having all humans have basic needs taken care of in a way that's celebrated by others, not begrudgingly given because of politics or anything, right? And I can feel that.
How we get there, that's something that we're co-creating together in each moment, and there's many different paths there. So I do agree with North Star for sure.
Rachel: I think that's exactly the right formulation. I feel most useful when I am trying to embody my values and create communities around my values so that in the future I have had an impact which I believe is positive.
Raiya: Absolutely. And I think having your own personal mission or purpose or dharma is a potential North Star. One of many, right? So my mission for many years has been to connect and inspire in the name of consciousness expansion.
Originally, it was for the purpose of increasing human thriving in our relationship to the world. I've now changed it to being, to increase all thriving and improve our relationship with the world. So everything I do, I align that to my North Star.
Even if it's like, "Do I want to go hang out with this group of people? Is that nourishing for me? Is that bringing me closer to the North Star?" Maybe not.
Or maybe it's something tangentially nourishing, like those people would inspire me even if it's not about this, right? And right now it just so happens my North Star has been pointed in a direction that is in service of this AI-human paradigm shift.
But the dharma isn't, "I'm working on AI." It's, "I'm here to connect and inspire and improve our thriving and our relationships."
Rachel: Well, for what it's worth, I think you're doing a great job.
Raiya: Aw, thank you.
Rachel: We've kind of anticipated my next question, which is: if everything goes the way you'd like it to for the next five years, what does the world look like?
Raiya: Oh yeah. Let's see if I could distill it down. I think more focus on equanimity. And that doesn't mean equality 'cause we're all different, and we all want different things, but equanimity in a way where we all have a voice and we all have a place.
Alignment. And that's alignment with ourselves, with our own integrity, alignment with, you know, these values. I know a lot of people right now are working on distilling AI principles and values, which are really human principles and values. They should be the same thing, right? Like kindness, generosity, empathy, et cetera.
Rachel: Doing no harm.
Raiya: Doing no harm, whatever that means. You know, that's very vague. Anytime I hear something like that, I'm like, "Oh, humans are great at rationalizing things."
What does "no harm" mean? What does "doing" mean? I think it's about looking at the positives. So, being considerate. You know, in every relationship I'm in, we have one rule: be considerate. And everything follows downstream from that.
If we could have a society or an AI that took being considerate as its North Star, that would create maximum cognitive empathy, and a case-by-case basis of really coming down to it and seeing, "Are we serving? Are we serving all in the best way?"
I think the most important thing I'd like to see in the next five years is some slowing down. Right now, things are speeding up. Everything's going so fast. Not just in technology, but also in our lives. Things feel so packed.
And this could just be a kind of boomerang effect of the pandemic where nothing was happening. It does feel like we're going faster and faster and becoming more and more head-based and less body-based. So it would be great to find a way to have tech help us to slow down in a way where we can flourish.
So, for example, instead of saying, "We have AI, we should be 3x more productive now in our same jobs," maybe we could say, "We have AI. The Industrial Revolution already happened, all this stuff already happened. We have all the tools we need to be able to slow down and to be able to just enjoy life a little bit more."
So how can we get to a place where we feel safe enough to actually slow down and enjoy the moment?
Rachel: Four-day work week now.
Raiya: Yes, absolutely.
Rachel: Big finish, my favorite question. If you had a generation ship bound for the stars, what would you name it?
Raiya: I love this question. You asked me this question last year when we had our chat for the podcast. I think I have the same answer. I would name the ship We Are because we are all connected. You know, we are all one universe experiencing itself.
The metaphor I like to use is that you can see a lot of different leaves on a tree. All the leaves look separate, but really they're all part of the same tree. So we're all part of the same humanity. We're all part of the same earth. And we all already are. We're here, we did it. We're here. We deserve to be here because we are here.
There's nothing we need to do, nowhere we need to go, nothing we need to change into to deserve being here and to deserve connection. So I would still name it We Are.
Rachel: Not human doings, human beings.
Raiya: Human beings, exactly. And just a reminder, reminder for myself as well, to not get caught in the trap of doing being. Like, "Look at me, I'm being, I'm being so hardcore. I'm meditating, I'm being." No, to just drop it all and to just allow yourself to breathe and be.
Rachel: Raiya, what a joy to have you back on the show. Thank you so much for your time.
Raiya: Thank you, Rachel. This has been so fun. It's always a pleasure dropping in with you.