Unintended Consequences
38 MIN

Ep. #1, Defending Our Thesis with Dr. Aleks Krotoski of BBC Radio 4

about the episode

In this inaugural episode of Unintended Consequences, Kim Harrison and Yoz Grahame speak with the award-winning journalist and social psychologist Dr. Aleks Krotoski of BBC Radio 4 about the unintended consequences of technological growth.

Dr. Aleks Krotoski is an award-winning journalist, broadcaster, social psychologist, and roller coaster enthusiast. She has a PhD in the social psychology of relationships in online communities and has worked extensively with BBC Radio 4 and The Guardian.

transcript

Yoz Grahame: Hello Aleks, thank you for joining us today.

Dr. Aleks Krotoski: Thank you, Yoz.

Yoz: It's so good to have you here.

Dr. Aleks Krotoski is a writer, podcaster, academic, TV presenter. What am I missing?

Aleks: Rollercoaster obsessive. I like to bake.

I'm obsessed with the sensorium of interfaces.

Yoz: Which is hugely applicable to this stuff.

And all of your podcasts, I love how your podcasts are, as Kim was saying, about the human side of things.

Aleks: Oh, always. Well, I'm a psychologist, right?

So I'm a social psychologist.

So that's the thing, right? That's the thing that cuts across all of it.

It used to be technology was the thing that cut across all of it, which I actually found frustrating because I was like, no, I'm not just technology.

I like learning about people through the lens of where technology doesn't quite work or where technology makes us uncomfortable, but I'm also interested in looking at who we are through the lens of why we like to go to theme parks and escapism or why we have particular marriage, birth, and death rituals.

Yoz: And that's exactly the kind of thing we want to look at here, except that usually you're focused on technology that affects us all, and actually so are we.

Aleks: Technology affects everybody regardless, like, even if it's just a router system or if it's just a, you know, a bunch of pipes.

It still affects us because we use it to communicate with one another.

You know, communication technology is really the kind of the crux of what it is.

Yoz: And we see that scaling up massively over the past 30, 40 years with the internet and the effects of it, and that's part of our thesis and what we wanted to ask you about today.

As we are starting this podcast, we wanted to come to you and say, okay, this kind of thing, is this the right sort of thing to talk about?

What should we be talking about? Who should we be talking about it with?

Because what happened is that Kim and I started talking about scale.

We're fascinated by scale, and it's the kind of thing that in our industry, we're in the Bay Area and surrounded by startups of every kind.

Everything is a tech startup here.

Every corner shop is actually a tech startup in waiting in the same way that every American is a temporarily embarrassed millionaire.

Kim Harrison: All the conversations at happy hours and cafes, I mean, we're just infused with it. It's how we think about things.

Aleks: All of the ads everywhere, like anytime I go over to the Bay Area, it's the first thing I notice is the ads are very different.

I lived in LA for a while, and all the ads there were about film and entertainment.

I moved to New York and you see more ads about fashion, but you also see ads about politics.

You see ads about different types of things. And in the Bay Area, it's only about technology.

Yoz: It's everywhere and it affects everything, and so all the conversations are around, you know, unicorns, and hockey sticks, and other terrible analogies, but everybody is dreaming of success.

And the thing that they've only recently started to think about is the problems of success, especially social media, which is an obvious poster child, or poster villain, here, where the scale gets such that it starts to infect international politics.

And when we have, you know, obviously Facebook and Twitter affecting elections and being giant players in international statecraft and diplomacy, then something may have gone horribly wrong.

Aleks: Well, I think that, I mean, it depends.

Like, I think that there are many ways to look at it, but I want to pull out two ways.

The first is the expectation that a single person or a small group of people has the audacity to think that they can create something that will fulfill the needs, the social and psychological needs, of all of the people of the world, when, in fact, they are operating from within an experience frame of themselves and the people who are like them.

And that's, you know, one villainous element.

And then the other side of this villainous element is the fact that people look to these technologies, audiences, consumers look to these technologies, as magic and just accept what it is that's given to them without being critical thinkers and believing that the technology is doing something to them.

So I think the reality is, as always, somewhere in the middle and neither of these poles of this spectrum are entirely accurate, but I think that there is fault at both ends, shall we say.

Kim: Sometimes when groups are creating these technologies, do you think they even know where it's going to end up?

Aleks: Of course not.

Kim: Were they even aware?

And so it's easy to look back in hindsight and say, oh, they're evil, they created this thing, versus they created a thing and then it took off. It ran away from them.

Aleks: Well, they create, exactly, they created a thing that had a shortcut in there that they needed to put in there because that was how they were operationalizing something.

Right, they were like, oh, I just need a solution to that.

Somebody has done this before. "I'll pull in this library of code and I'll plop that in because it does the thing that I need it to do because I've got a VC meeting on Thursday at three o'clock and it's currently Thursday at two o'clock, and it works, it's fine." They don't think about what assumptions are being made in that system that then becomes hard-baked into the system because the VC has just funded that thing that they did on the fly going, "Oh my God, we need to shove this in."
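A minimal sketch of the kind of shortcut Aleks is describing, assuming a hypothetical signup form; the field names and formats below are invented for illustration, not from any real product. Each "sensible default" is an assumption about users that quietly gets hard-baked into the system:

```python
# A hypothetical signup validator, thrown together an hour before the
# VC meeting. Every default below is a baked-in assumption about users:
# that names split into first/last, that phone numbers are US-formatted,
# and that everyone has a five-digit postal code. None hold globally.
import re

def validate_signup(form: dict) -> list[str]:
    errors = []
    if not form.get("first_name") or not form.get("last_name"):
        errors.append("first and last name required")        # assumption 1
    if not re.fullmatch(r"\d{3}-\d{3}-\d{4}", form.get("phone", "")):
        errors.append("phone must look like 555-123-4567")   # assumption 2
    if not re.fullmatch(r"\d{5}", form.get("zip", "")):
        errors.append("ZIP must be 5 digits")                # assumption 3
    return errors

# A perfectly real user this code turns away:
print(validate_signup({"first_name": "Björk", "phone": "+354 555 1234"}))
```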

And so that's another element of scale: if you were so academically minded that you reflected on every single decision that you made at every single point, within your group, within your individual part in the cog of that system, you would never get anything out the door, and that's not the Bay Area's modus operandi.

It's about getting it out, getting it fast, getting it to people, getting them to feed it back, and then it's gone.

And I think that may be, Kim, the thing that is problematic with scale is because people just didn't take the time because they didn't have it, 'cause that's not what the culture is.

Kim: Yeah, but maybe this is a moment to pause and think about it, 'cause, to Yoz's point, we're now at a point in time where things are far bigger than they have been and we're moving way faster than we ever did before.

So you're starting to see new tools, new platforms, new products in ways that I don't think we dreamt about even five years ago.

And so how are people thinking about if they build something now what's the implication of that even six months from it? Like TikTok.

Aleks: There is no way, because there are so many different factors.

Several years ago, I invented something called the Serendipity Engine.

I was looking for something to do immediately after my PhD, because there was a gap in my life that I could have filled with lying on a beach and drinking Mai Tais but instead I decided to embark on an independent, preposterous, philosophical, artistic expression experiment.

It was great. It was so good, and it was brilliant.

It basically was inspired by the fact that I had just come off the back of filming a BBC Two documentary series which was all about the internet.

I had submitted my PhD thesis, which was looking at how information spreads around social networks online, specifically looking at the virtual world of Second Life because at the time that was the hotness.

And when I started, there were 3000 people, and by the end of it when I was attempting to do social network analysis of it, there were 15 million accounts, and that was a very painful moment in my life.

But it was super interesting, and as I wrote this up, I was making, you know, a lot of assumptions about what it was that theorists and people had said.

And the benefit of going off and doing an interview series about the internet and its implications was that I was able to go out and speak with all of the people that I'd referenced in my PhD and ask them, is this what you meant, and, you know, really dig into it, and then come back and sit my viva and have my examiners ask me why I thought that the statement that I made there was correct, and I was able to say, because I asked them.

But in addition to getting confirmation from all of these people that my ideas that I had researched were at least in line with what they were talking about, the thing that they were all talking about at that moment in time was serendipity.

It was such a big thing around 2009, 2010. It was when Eric Schmidt had said we want Google to be a serendipity engine, and he defined that as providing the answers to questions that you didn't even know you already had.

And these are some of the solutions that people are attempting to create, not fully understanding what it is that these things that they are trying to define actually are.

So within the last five years, we didn't realize TikTok would take off because we, you know, the developers, whomever, didn't think about all the reasons why people would go into this space.

We hoped that we would understand that if we got it out to influencers, it would attract this and that, but, you know, if we put it in this way that we had no idea.

We literally had no idea how that would happen.

And I think that perhaps, you know, giving ourselves a little bit of a break about unintended consequences would at least, you know, take the pressure off developers who feel like they have to define the indefinable, go about it in a way that is ham-fisted and perhaps not as thorough or as ethical in terms of like, you know, what are the human beings going to be doing with it on the other side and focus on the fact that what we're doing right now is something that we need to iterate even with scale--

That you cannot do that, and perhaps that would allow the humbleness of technology to saturate the rest of the world who thinks that technology is infallible and needs to answer these questions before we even ask them.

Yoz: I love that approach. You know, Silicon Valley and the startup scene is entirely based on the idea that there is a 99% chance that your idea will fail before more than 10 people have installed it, so you just throw everything in because the chances are that you're not going to succeed.

And if you do succeed, then the saying that I've heard from so many people is, well, that's a good problem to have.

What if we become so big that we are like a nation state unto ourselves? Well, that's a good problem to have.

You know, to me a good problem to have is, you know, what flavor milkshake do I want this morning, what color Learjet do I want to climb into, et cetera.

But I love what you said there, which is say, no, look, you've still got to have the boldness, you've still got to be able to experiment and innovate without too much fear, but be ready.

You know, the moment that you see that something is going wrong, have some tools or some way of dealing with it so that you're not just sticking your fingers in your ears, because there are some other problems that we did see coming in advance.

There was an interview with Jack Dorsey a few months ago that had a whole bunch of people I know up in arms, you know, Dorsey saying again that nobody could have foreseen the kinds of problems that we are having.

And a bunch of us who've worked in social media for many years, we're going, no, we were yelling at you, we told you this was exactly what was going to happen.

And of course we didn't know this is exactly what was going to happen.

Aleks: We probably sounded like conspiracy theorists.

Yoz: Oh, trouble there, yeah.

Aleks: That's the problem.

Yoz: All the same critical voices you know, the kinds of naysayers from the outside who tend to be blocked out anyway.

You got to let the haters hate, as it were.

So like what kinds of things would you like to see innovators and entrepreneurs take on for this?

Aleks: Well, first of all, I mean, I do think being humble is not something that's rewarded necessarily.

As you say, it's hockey sticks and unicorns that are rewarded. So that requires a bit of a culture shift.

You have to be bold if you're going to go out there and ask for money or somebody to have faith in this crazy thing that you've spent all of your time over the last however long creating, but humble enough to recognize that you do not have the answers. Especially if the things that you are trying to solve for are human, humans are very complicated.

And I'm saying this as a psychologist, and this is, you know, this is a field that I even find problematic at times because we do also try and put people into boxes.

The number of instruments we, you know, that we call, like questionnaires and stuff, we call them instruments--

The number of instruments that exist to tackle emotion, right, there are like 500 different ways that you can ask somebody if they are happy or sad.

And each of them has been more or less successful to the degree that they have been published, and are being used, and people can buy them.

Recognizing that even people who have been studying this their entire lives at times have to stop and turn around and say, nope, we really don't know how to measure this.

Recognizing that sticking a one and a zero onto it is actually not going to be a solution.

So, first of all, I think that that's really important.

And that's the sort of, I'm begging, I'm genuinely begging, because I've spent years talking with people about that.

I'm begging people to be a little bit more humble and recognize that human beings are not machines and we are not things that can be adjusted and re-manufactured.

Heck, you know, just reconstruct a Frankenstein.

Yeah, I mean, go back, you know, mold something out of some mud and realize what's going on.

I recognize this is hubris. So it's something that is fundamentally humble.

Secondly, what I would not recommend, but I also kind of, you know, would, is get somebody in to ask those difficult questions.

Like, why is it that you've decided to do something to put that code in in that way?

I live with somebody.

My darling partner is coding at the moment and he's loving it and he's creating something that's absolutely wonderful.

And I keep interrupting him and saying, well, why did you put that bit of code, why did you set boundaries around that bit there?

And he's internally frustrated with me 'cause he's like, I just want to get the damn thing done, but at the same time, there's my persistent sort of badgering about why did you set the boundaries around that, why have you decided that that is your definition of a table, or that your definition of a user is this.

Why have you done that?

Because my persistent badgering is just simply, I'm not asking him to change anything, just to think about it, right? Oh, okay, that's right, there are ways to think differently about this.

And so I wouldn't hire somebody like that because I know that it would drive people crazy, but I also would hire somebody like that because I recognize that that's actually a really important position within these types of development environments, especially if these are the types of questions that you're trying to solve for.

Yoz: Yeah, and it's the kind of thing that engineers take a somewhat bitter joy, as it were, in realizing that there's a whole series of blog posts by different people that use the title template, "Falsehoods programmers believe about X."

Aleks: Absolutely. I love them.

Yoz: Yeah.

Aleks: Because it's like, wow, this for me, it's like a, it's a view into minds that I don't know.

I'm not a developer, I'm not an engineer, I'm not a programmer, but I talk a lot about their output all the time.

And for me, it's really nice to see people reflecting and saying, oh, I have thought in this way, now I think a little bit more differently.

Yoz: One of the first ones was, like, falsehoods programmers believe about names: that human names don't always have a first name and a last name.

They don't, you know, sometimes the family name comes first.

Sometimes there's only one name. Sometimes somebody might not have a given name until they are at least five years old, et cetera, et cetera.

And this giant unending list, which if you are writing software to deal with names, just makes you want to throw everything up in the air and go off and be a farmer, right?
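As a concrete illustration of the point Yoz is making, here is a minimal sketch of a name model that sidesteps a few of those falsehoods; the class and field names are hypothetical, not from any real codebase:

```python
# A name model with no required first/last split, no assumption that a
# name exists yet, and no assumed ordering of name parts.
from dataclasses import dataclass, field

@dataclass
class PersonName:
    full_name: str | None = None    # may be absent, e.g. a newborn
    name_parts: list[str] = field(default_factory=list)  # 0, 1, or many
    sort_key: str | None = None     # family-name-first cultures differ

    def display(self) -> str:
        # Prefer whatever the person gave as their whole name.
        if self.full_name:
            return self.full_name
        if self.name_parts:
            return " ".join(self.name_parts)
        return "(no name recorded)"

print(PersonName(full_name="毛泽东", sort_key="毛").display())  # family name first
print(PersonName(name_parts=["Cher"]).display())                # a single name
print(PersonName().display())                                   # no name yet
```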

Aleks: Exactly.

It interrupts the flow because you're like, I just want it to be simple, but actually, if you want to scale, if you want to make something that is applicable to a global audience--

Somebody who was-- Ethan Zuckerman, many years ago, we were talking about this, and he was saying, you know, the idea, and this is such an old-school criticism, but I guess that this is probably the seed that started me off on this path.

He said, why on earth should a virtual, a representation of a desk with a blotter, and files, and pens, and those folders on it be relevant to, you know, a Swahili farmer who has never sat at a desk? Why are you asking these people to represent the world or adopt a representation of the world that's forcing them to learn a brand new language while they're doing it? Don't.

Yoz: Yeah.

Aleks: Do something else.

Yoz: It's fascinating. These are exactly the kind of analogies.

Unfortunately, we don't realize that the languages we have grown up with are actual languages, right, and that there may be differences, right?

We think about, oh, look, we represent save as a floppy disk icon. That's all we do.

And then, you know, obviously now, we have people who are coming into the workforce, who are going, what is that thing?

Aleks: Totally. Some of the larger organizations have been very forthcoming or forthright in trying to incorporate these types of thinkers.

So Genevieve Bell is a great example. Genevieve was at Intel.

She was their chief researcher for a hundred years.

And she's not a developer, she's not a programmer, she's a sociologist, and she's an ethnographic sociologist as well.

She's like the first person that told me about the fact that, you know, what was interesting about an iPhone was, you know, not that it was a technological marvel.

It is a technological marvel, but it's also how people think of it, right?

So in China, this is a decade and a half ago now, people valued the device so much that they were creating little paper icons of it so that they could burn them at loved ones' funerals, right?

And she did research through Intel where she asked people to take all of the objects out of their cars, all of the objects out of their cars, and talk about each of these things.

And in that way, she discovered that, again, in Chinese cars, people always carry around a red envelope just in case they need to pop into somebody's house and give an offering,

you know, some kind of money offering that is given in a red envelope.

So there's always, in Chinese people's cars, there's a red envelope.

So how can you build that into a technology?

And then she said you can't do that with an iPhone.

You cannot ask people to open up these technological devices in the same way that you would with a car and sort of pull out the folders of pictures and pull out the folders of contacts and talk through each of them.

We have to think about these things in a different way.

Now, that opened up Intel to thinking about lots and lots of different things that, you know, may not have resulted in a single product but permeated through how they thought about those products.

Which allowed them to go globally in a way that, you know, perhaps one aspect of the technology that was developed didn't speak to us and we didn't even know it was in there, but the fact that it was incorporated into that piece of technology meant that somewhere, somebody else adopted that technology.

And, sure, they had the money, they had the resources, they had the time, they had the ability to have that department as well, but bringing in somebody like Genevieve was, I think, you know, such a smart move on the part of that large organization.

Microsoft has done it as well.

They've got entire research departments in Cambridge in the UK, you know, where the technologies aren't necessarily developed, or the ideas that are developed within those research units aren't necessarily going to come in to the day-to-day technologies, but the thinking is.

Even the fact that that thinking is happening within the organization, it might be extremely irritating when you've got to get a product out, but just even to be made aware of that kind of thinking is essential.

Yoz: So it sounds like this is a great answer to one of the questions that we have for you today, which is who should we be talking to?

Both Genevieve Bell specifically, but also the sociologists and ethnographers who work at these companies and who look at the ways that humans use technology, and that it sounds like there's very much a kind of refusing to fit into boxes thing there, right?

Aleks: Well, it depends on which box you're talking about.

Yoz: Or mutating to fit into the box, right?

There's no junk drawer in an iPhone, right?

You've got to, everything you save has to be a photo, or it has to be a text message, or it has to fit into very specific document types.

There's no glove compartment where you just throw stuff in.

And so what happens is, I mean, I've seen loads of people do it, I've done it myself, you take photographs as aides-mémoires or other kinds of things.

It's the fastest way to either remember something or communicate something with somebody else.

And I'm guessing that this is exactly the kind of thing that, well, some of the kinds of things that they research.

Aleks: Yeah. I mean, another great researcher who actually was a collaborator on the Serendipity Engine with me is a woman named Kat Jungnickel, and I love Kat.

Kat is just an amazing human being.

She changed my life in countless ways.

And it's funny, I never actually thought about that, but she is probably the one person in my life who has changed things in lots and lots of ways.

I met her when we were both doing our PhDs, and she was doing it in sociology and I was doing it in psychology, and we both sat in on a lecture series by a cyber ethnographer named Christine Hine who was also at our university.

And Christine is great because she's done all kinds of methodological things about cyber ethnography.

She was really the first person to define the idea of cyber ethnography, and what that is to literally go in and participate, and observe, and extract from a sociological or an anthropological point of view what people are doing, how they are doing it, and how you as a researcher should examine this.

So Christine's also another good person, but Kat has done some really fantastic work.

She worked with Genevieve on a project called Home is Where the Hub is, and it was at Intel, and it was very, I mean, early is relative, it was 2010, I guess, 2008 probably, when she was working on this.

But it was a big research project speaking not with people at Intel, not even necessarily speaking with people who had Intel chips in their computers, but looking rather at how people were fitting the technologies into their homes.

So when you have a laptop, what does that mean for the home?

What does that mean for the meaning of the kitchen?

What does that mean for the meaning of the kitchen between this hour and this hour?

What does that mean for the relationships between the people who are either using the technology and not using the technology?

What does that do?

What are those open questions that you might be able to, if you just add a little tweak in your system, recognize that this is how people are actually using your systems, rather than having the audacity to think that you have decided how people are going to use these systems? Because what we've found again and again and a million times again is that people do not use the systems in the way that they were originally intended to be used.

And so Kat is a wonderful person to talk about those types of things as well. She's great.

Kim: There's some meme that I think makes the rounds with designers and product managers where you've got a picture of a gate that's locked so everybody just walks around it.

And so in the dirt path, the path just goes around the gate, and the whole point is, oh, you're supposed to use the gate and lock the gate and people just don't want to bother so they just walk around the gate.

Like, they're never going to use it the way you intended.

They're going to do what they're going to do, and you can recognize that or not.

Aleks: That's so funny because I used to describe exactly that about gamers and how great gamers were because gamers are trained basically to come up with--

You find a locked gate, how are you going to get through the gate?

Well, you obviously can't go through it, so you're going to figure out and you're going to have the tenacity to figure that out rather than be stopped by the gate.

So it's nice. We've reached peak gate, Kim. We have achieved it.

But yeah, I mean, I wonder.

That's a mentality that I think that we have to recognize of the people who are making the technologies, is that, you know, many people, not all, but many people came from a similar background, similar system, and yet we just have to be reflective about that.

So in the good old days, I used to have a conversation with a friend of mine which would always end in like screaming fights.

Like, there was no middle ground.

We would start this conversation thinking this time it'll be better, but no, it always ended up in screaming fights.

And it was about the nature of artificial intelligence, which increasingly is obviously a very interesting area to be talking about in the technology world.

And ultimately, the reason these things ended in screaming fights was 'cause we were positing the idea of--

Will technology create an artificial intelligence in the next 50 years that will pass the Turing test beyond like just here I am, I'm talking to a machine, am I talking to a machine type of thing?

And it was interesting because he always took the opinion, he was an engineer--

He took the opinion that, yes, absolutely, and I took the opinion, as a psychologist, I was like, absolutely not.

And then it came down to the fact that we both had a difference of opinion, which I didn't even realize, in this notion of a divine spark of humanity, that he was like, no, you could break people down into this, and you iterate, and you iterate, and you iterate, and you iterate, and you get closer and closer and closer until people just don't realize.

And I was like, no, there are things that you cannot recreate because we simply do not know what they are.

We don't know what's inside the black box of human beings, whether it's belief or whether it's the idea of happiness or whatever it is.

We just don't know how to define that, and so I was always pushing against this idea.

But in these conversations, we would also, in order to try not to end in screaming fights, we tried different ways of like how can we talk about this in a way that will be productive rather than destructive?

And we came up with a panel of people that we would want on our development team of an artificial intelligence.

Obviously you need to have somebody who is skilled in taking the words that we are speaking and putting them into the technology, into the binary system of ones and zeros.

So you need to have somebody to do that.

But we also kind of, we were like, you know who we want?

We want a magician, right, and we want an actor, and we want a thief, and we want a pathological liar.

I think a pathological liar's really, really important in having an artificial intelligence.

And it was these people that you want to talk to if you're thinking about scaling, because these are the people who are thinking very differently about how to mess with your technologies and how to break them.

They have insights into human capacities that those of us who are not in those categories don't.

Either they're trying to recreate, or they're trying to bamboozle, or they're trying to swindle, or they're trying to get around, and so they understand different elements of humanity that perhaps we would not if we're just looking at humans straight on and trying to solve for it.

Yoz: That's brilliant.

Kim: You know what this makes me think of?

At LaunchDarkly in what we do, we talk a lot about chaos engineering, and it's exactly this but in a very technical sense.

And so this is the human side of chaos engineering.

How is that data being used? How is that information being shared? Is it secure?

Are people clicking the pathways the way you intend them to do?

Are they, I mean, this is like human chaos engineering.
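For readers unfamiliar with the technical sense Kim mentions, chaos engineering deliberately injects failures to see whether a system copes. A toy sketch, with made-up function names, rates, and delays:

```python
# Wrap a function so calls sometimes slow down or fail, then observe
# whether the surrounding code survives the injected chaos.
import random
import time

def chaotic(fn, failure_rate=0.2, max_delay_s=0.5):
    def wrapper(*args, **kwargs):
        time.sleep(random.uniform(0, max_delay_s))  # injected latency
        if random.random() < failure_rate:
            raise RuntimeError("injected fault")    # injected failure
        return fn(*args, **kwargs)
    return wrapper

@chaotic
def fetch_profile(user_id):
    # Stand-in for a real service call.
    return {"id": user_id}

# Exercise the wrapped call and count how often the caller survives.
ok = 0
for i in range(20):
    try:
        fetch_profile(i)
        ok += 1
    except RuntimeError:
        pass
print(f"{ok}/20 calls succeeded under injected chaos")
```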

Yoz: It's also about, as you say in your analogy with the gate, in that sometimes we don't even think of which questions to ask.

You know, we might look continually at, well, are people clicking the answers to this the right way, are they clicking on the forms the right way, without asking, what if the question the form is asking is completely wrong?

Aleks: Absolutely.

Yoz: You know, what if you're not actually putting up a form?

What if you're gathering the information in a way that is far less direct?

You know, thinking in that adversarial way, right, of saying, look, the best way to get the most honest answer is not to ask the question.

It's to watch it, to do things another way that gives you something, because when people have to think about answering a question, they're already, in some ways, being dishonest.

Aleks: And this is also about how you slice data, right?

'Cause ultimately what you're talking about is you're talking about data points, capturing individual data points from individuals, but there are different ways to do that, right?

There are different interfaces, and this is where I come to my interface obsession.

Somebody who I love is a composer in the UK named Nick Ryan, and I was very close with Nick several years ago and he was creating the most fantastic, just, ideas.

He worked on the binaural sound design for the iPhone game Papa Sangre, which was a game that was played completely in darkness and that you could only navigate using headphones, just sound and your thumb.

And Nick also worked with some people, he's done all kinds of crazy things, where he's created instruments out of the covers of magazines, like, you know, just simply the thickness of the print, right?

There are different ways, there are different inputs where you can get information.

So for example, in scientific inquiry, I'm sure many people have heard about the idea of like sonic data, but using it in physics, right, getting the sound of Cassini, you know, what is the sound of Cassini coming back?

Okay, right, great. Now we can hear that.

Because we can hear those data points, we can start to pick up patterns in our aural, A-U-R-A-L, aural sort of input mechanism, which we may not already see, you know, through our eyes.
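A minimal sketch of the data sonification Aleks describes, mapping an arbitrary series of data points to pitches so that patterns might be heard rather than seen; the data, scale, and mapping here are invented choices, using only Python's standard library:

```python
# Turn a data series into a sequence of tones and write it to a WAV file.
import math
import struct
import wave

data = [0.1, 0.3, 0.2, 0.8, 0.7, 0.9, 0.4, 0.2]  # any normalized series
rate = 44100       # samples per second
tone_s = 0.25      # seconds of sound per data point

with wave.open("sonified.wav", "wb") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)        # 16-bit samples
    wav.setframerate(rate)
    for point in data:
        freq = 220 + 660 * point  # map value 0..1 onto 220..880 Hz
        for n in range(int(rate * tone_s)):
            sample = int(32767 * 0.5 * math.sin(2 * math.pi * freq * n / rate))
            wav.writeframes(struct.pack("<h", sample))
```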

Right, there are different ways that we interact with one another across the sensorium, whether that's scent, or whether that's sound, or whether that's taste, or whether that's hearing.

If you are expert in one thing, then that means that you're able to navigate things in a particular way. And if you're not an expert, then you navigate things in another way. But we do that through various things that we don't even recognize that we're attending to. And I think that that's another area of absolutely fascinating opportunity just for insight in terms of the products that you're building.

I had such a great time.

I gave the keynote at CHI, the Computer-Human Interaction conference, a couple of years ago.

It's one of the highlights of my speaking experience because I gave, apart from the fact that I really like to talk, I also gave everybody business cards that were scratch-and-sniff business cards.

Yoz: Oh, wow.

Aleks: Yeah, it was awesome.

And I still have like two and a half thousand of them and they stink, but the idea was, you know, it was basically, right, okay, at this point, everybody has got my business card.

This is halfway through the talk.

I said take it out and give it a scratch and have a sniff, and everybody did.

3000 people in the room go, "What's this," right?

And then I said what do you think it is? I didn't give them any input as to what they thought it might be.

So what do you think it is?

And some people were like, oh, it's caramel, and other people were like, oh, it's mango, and other people were like, oh, it's chocolate, or other people were like, oh, it's, you know, this thing that my grandmother used to make when I was a child.

And I'm not going to tell you what it is because that would prime you for what it is that you will smell.

But through that experience and through other research that I've done with some scent designers, I discovered that scent is a nostalgia machine, right?

How can you take that knowledge of the fact that scent, which is extremely difficult to create precision on, right, just like human beings--

How can you take the knowledge that scent brings in all of these different memories, and qualities, and elements to hook onto a single moment of an individual's past that then allows them to make sense of the data that you were giving to them, the scent that you're giving to them?

How can you translate that into digital technology development process?

Not necessarily sticking a scent on your machine, but how can you recognize that nostalgia is something that you need to think about when you are creating a technology?

This is about thinking through what it is that you are outputting for people to input, right?

Because as the receiver of your invention, of your creation, right, you are outputting that creation and I'm inputting it into my system.

And then subsequently the effect is that I then output X, Y, and Z, and you hope that my X, Y, and Z maps onto your X, Y, and Z.

But when it turns out to be P, Q, and R that I've outputted, you're like, "What? Nah."

You know, instead of just like hacking and slashing and, you know, getting rid of this, not the other because it doesn't work, try and understand why that P, Q, and R happened and what that might mean for how you develop later.

And that may help to consider unintended consequences.

Yoz: When you say P, Q, and R, are you talking about the system just outputting something unexpected, or are you talking about--?

Aleks: No, no, me, me.

What I do with your system, you expect X, Y, and Z.

Yoz: Oh, right.

Aleks: And what I do with your system is P, Q, and R.

I don't know if I just used a term of art there.

Yoz: That's great. So, one thing I've mentioned is that you and I have known each other for many years in different places.

Aleks: For many years.

Yoz: In the UK new media scene of the late '90s and then we ended up together at Linden Lab for a short while.

Aleks: That's right, that's right. That's right.

Yoz: Fabulous surprise.

Aleks: I was there for a couple of months, but I was there to do the data collection because we went from 3000 people to 15 million, and I was like, Jesus, Corey, how can I, I need to, I need your data.

How can I do this? Please! Oh, my God. I completely forgot about that. I remember that now.

Yoz: I think you told me one of my favorite bits of computer ethnography that you'd seen in Second Life, which was to do with how people who live in different world nations react to each other in virtual worlds, especially like personal distance, interpersonal distance.

So people who live in different societies, you see their avatars keeping different kinds of distances, and what we found was like, some people from certain countries seem to have no concept of personal distance and will just go straight up to like millimeters away from somebody else to talk to them.

At which point, the person they're talking to, who is from a different society, you know, stands off immediately, reactively, takes a few steps back and then the other person steps forward, and they kind of just move horizontally across the whole landscape, you know, during a conversation.

And those are the ways that the physical or the real world intervenes in these totally unexpected ways.

Aleks: Absolutely.

That was the reason I went back to university and did my PhD.

That was exactly the reason. It wasn't that dance. It was about real money transfer.

It was reading Julian Dibbell's "A Rape in Cyberspace" from The Village Voice in 1993, and it was the fact that at the State of Play Conference in 2001--

Which was at the New York Law School; Beth Noveck was one of the conveners of that, and she ended up sort of, you know, doing a lot of work with the US government on creating the US digital plan.

That was all about the fact that you had people doing stuff in a virtual environment, and this doesn't need to be in Second Life but in a virtual environment in the online space.

There was nothing particularly at that time that was tying them to the offline space.

It was nothing, nothing on the internet.

You could be a dog, you could be a, you know, as Sherry Turkle described, you could be a swarm of bees. It didn't matter.

It did not matter.

It was a place of exploration, and yet we insisted on creating judiciary systems, and penal systems, and all kinds of other economic systems that were exactly what we knew already.

We imported those things.

Literally, we picked 'em up and we plopped 'em down and we imported them wholeheartedly in their entirety, and we didn't even play around with that.

And I was like, what is this about human beings?

This is so interesting, that whole nature of how human beings are not different people online, we are extensions of ourselves online as we are offline.

But I mean, that's why I'm obsessed with the fact that you have to think about the human being, right?

You can't just create solutions in a vacuum.

You have to recognize that the people that you're dealing with are really messy, are messy, messy, messy, messy humans.