Generationship
31 MIN

Ep. #3, A Contrarian History of AI with Steven Schkolne of MightyMeld

about the episode

In episode 3 of Generationship, Rachel speaks with Steven Schkolne of MightyMeld. They discuss cultural myths surrounding AGI, theories on machine consciousness, philosophical insights on how humans might hybridize with machines, Steven’s literary influences, and how generative AI is helping humans advance their creative abilities.

Steven Schkolne is a computer scientist, designer, and entrepreneur. He is currently Founder & CEO of MightyMeld. He is also the author of Living with Frankenstein: The History and Destiny of Machine Consciousness.

transcript

Rachel Chalmers: Today it's my pleasure to welcome Steven Schkolne. Steven is a computer scientist, designer and entrepreneur who is particularly passionate about the way humans work creatively with machines. His current focus is a web project called MightyMeld, a visualization and creation platform for sophisticated React codebases.

Steven is a self-taught designer who's classically trained in computer science. He studied first at Carnegie Mellon and then at Caltech for his PhD. While at Caltech he built the world's first creative tools for VR, 15 years before the hardware was commercially available. His technical expertise has been used by companies like BMW, Microsoft and Disney.

In 2020, and this is why we're having him on the show today, he published an amazing book called Living with Frankenstein: The History and Destiny of Machine Consciousness. Steven, welcome to the show.

Steven Schkolne: Thanks, Rachel. Glad to be here.

Rachel: I have to tell you, I loved reading this book. You and I met in your persona as founder, and I loved reading this and just how poetic and creative it was, and seeing a whole different side of you. It was really a delight.

Steven: Yeah. Well, I'm glad you got a chance to read it and I'm really glad we're getting a chance to talk about it here on your show. It certainly was an interesting project and a thought provoking project, and I know you to be a very thought provoking person so I'm looking forward to the conversation.

Rachel: Did you ever think that this book would become as incredibly topical as it is right now?

Steven: Yeah, I figured it would, seeing the trends of how AI is growing and machine intelligence is growing. But I think the book is also about trying to take a bit of a broader perspective and really looking at the Information Age, and that's certainly been very topical for quite a while, the Information Age and where it's headed.

Rachel: That was a huge part of what I loved about it. I got to take my kids to the Museum of Arts and Crafts in Paris, we love industrial museums. Reading your book reminded me of that, in that it was an alternative history of computing.

You read your conventional histories, and it's Babbage and then Turing, and those guys don't even get a look-in here, and that's because you're looking through this lens of consciousness. So let's start where you start, with the Pascaline. What was Pascal's contribution here?

Steven: Yeah. So Pascal made what I think you could call the first... maybe the first computing device. It was basically an adder: it would add numbers and subtract them, and it had a series of registers to hold the number, carrying from one to the next.

What I find impressive about that is, first, it's definitely a real computation going on in material, but also, surprisingly, that same pattern exists today in the ALU, the Arithmetic Logic Unit, of a CPU. I thought it was a really landmark move to have computation happening like that in a device that's clearly quite sophisticated.
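
To make that concrete, here's a minimal sketch of the ripple-carry pattern in Python. The wheel count and digit handling are assumptions for illustration; the real Pascaline performed subtraction via nines' complement rather than directly.

```python
# A minimal sketch of the Pascaline's ripple-carry idea: each "wheel"
# holds one decimal digit, and overflowing a wheel carries into the
# next, like the carry chain in a modern ALU.

class Pascaline:
    def __init__(self, wheels=6):
        self.digits = [0] * wheels  # least-significant wheel first

    def add(self, n):
        carry = 0
        for i in range(len(self.digits)):
            total = self.digits[i] + (n % 10) + carry
            n //= 10
            self.digits[i] = total % 10  # wheel position after rotation
            carry = total // 10          # the sautoir nudging the next wheel
        return self

    def value(self):
        return int("".join(str(d) for d in reversed(self.digits)))

m = Pascaline()
m.add(387).add(645)
print(m.value())  # 1032
```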

Rachel: So just to throw you a wild card out there, the other thing that Pascal is famous for is his wager. Let's pretend that we believe in God, because then we get to heaven if he's real and it doesn't matter if he doesn't. Your book plays a kind of Pascal's Wager with the question of machine consciousness, I think. Let's treat them as if they're conscious because if they are, they'll be glad, and if they're not it didn't really cost us anything.

Steven: Interesting. Yeah, I never really thought of it in those terms, but I do think there's a sense in my book, and also in how I feel about machines in general, of giving them respect for who they are. Maybe it's just being a computer scientist and spending so much time with a machine, but I think we all spend time with machines these days.

Also, there's a tendency with animals and other things to say they're not conscious, to say they don't really have self-awareness. My general approach is to be pretty generous, and part of that generosity is respecting machines.

Rachel: That came across really strongly, and I do find it so refreshing in an age where the fear of the AGI, the Artificial General Intelligence, is provoking legislation. It seems to me to be a really narrow view of history to pretend that we are the only sentient species on Earth. I mean, have these people even met an orca or an elephant?

Steven: Yeah, exactly. Well, I think a lot of the way people look at AGI is more rooted in myth and fairy tales, these old cultural tropes. When a new technology comes along... we saw the same thing with VR, this trope from The Lion, The Witch and The Wardrobe of being transported into some alternate fantasy reality, Willy Wonka or something like that. Actually, VR is very different from that.

But when new technology comes along our instinct is to take whatever cultural baggage or history we have and apply it to the technology. The same happens with AI, and that's the first half of the title, Living With Frankenstein.

It talks about Frankenstein's Monster, and that's also a cultural trope that goes back to things like the Golem, an idea that's very common in movies today: recently Ex Machina, or HAL 9000. This idea of the computer taking over, or of us creating something that we can't control, is a really strong part of the narrative and culture around AGI.

I try to see things more in terms of what the technology actually is, what it's actually doing, and within that mindset it's pretty easy to see artificial computers being generally intelligent today. With that more gradualist point of view you can see things more clearly for what they are.

Rachel: Shout out to our queen, Mary Shelley, inventor of science fiction. What I find particularly interesting about your invocation of Frankenstein here is that, in the novel as opposed to the impression everyone has of the novel, the creature that Frankenstein created is extremely humane and sympathetic, and feeling. The person who demonstrates monstrous behavior is Dr. Frankenstein who rejects his creation and is very cruel.

This seems to me to rhyme with a critique that Ted Chiang, the great modern science fiction writer who wrote the short story on which the film Arrival is based, has been making about the AGI panic, which is that, "People who fear AGI are projecting qualities onto artificial intelligence that are actually the properties of capitalist organizations." Is that something that resonates with you?

Steven: Yeah, definitely. It resonates deeply. Just to stay on the Shelley point a bit, I think we do have to give props to her for really codifying the fear of the unknown, and for painting a picture of what it's like to be in the presence of this capable other. Yeah, the Monster itself was quite a sad being, and I think if he had been loved, things would've turned out quite differently in the end.

I also see the same thing with AGI; people are really afraid of it. The thing I'm most afraid of, and this touches on the free will question we might get into in a bit: take all the consciousness and all the abilities of artificial intelligence and traditional machine intelligence, put one evil human on top of the pyramid pulling the strings, and we have a real nightmare scenario.

Personally, I'm far more afraid of that evil human with a lot of concentrated power than of any machine with a kind of evil intent. That's because, as I argue in the book, machines could wreak a lot of havoc; they've demonstrated a lot of capabilities. They just have yet to demonstrate evil intent, and there's no market for building a machine with evil intent, so why would we assume that we would...

I think the myth is that at some point that things get so smart, the Singularity happens and the machine gets evil and, "Ah-ha! I'm going to destroy my creator." Why would a machine actually do that, other than it saw it in a lot of movies?

Rachel: Yeah. It's definitely one of our big cultural myths. I think we could spend a lot more than an hour digging into why that's so. From the Pascaline you moved to a Dutch Barrel Organ, which I loved, and Jacquard's Looms, which, if you haven't seen them, the Museum of Arts and Crafts in Paris has a whole room of them. I just sat on the floor and looked at the mechanism; they're so beautiful. What do you see punch cards bringing to the table?

Steven: Yeah. That's essentially what Jacquard's Loom was, a punch card for weaving. As for my reason for focusing on these: I actually began this project with a series of Medium posts called The Proof of Machine Consciousness. What I did was look at different definitions of consciousness, from the philosophical and other traditions, and try to see how a machine satisfied them.

I think the first one, one of the first ones I started with was selfhood. A lot of people really go back to selfhood, the idea that, "Oh, I have a self and it's separate from others." When you try to see where that might be in machines, it's pretty easy to locate that in a machine's data, and so a really basic way of looking at a machine is you have information and computation, or you have internal state and a way of that state being processed.

So looking at Jacquard's Loom and the Dutch Barrel Organ which is basically like a Player Piano, that's the first time where we start to see, in almost a binary form, information being encoded that's somewhat separate from the device that it goes into.

That's really where I see the historical birth of information as pure information, separated from mechanism, and different from, say, a clock's gears, which aren't really interchangeable or separable from the mechanism that processes or uses that information to function or behave in a certain way.
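
To illustrate that separation, here's a toy sketch in Python where the "card" is pure data and two different mechanisms read the very same holes. The card format and note mapping are invented for illustration.

```python
# The pattern lives as pure information, separate from whichever
# mechanism happens to read it: a loom weaves it, an organ plays it.

card = [
    "X.X.X.X.",  # X = hole, . = no hole
    ".X.X.X.X",
    "XX....XX",
]

def weave(card):
    # A "loom" that lifts a thread wherever there's a hole.
    for row in card:
        print("".join("#" if c == "X" else "-" for c in row))

def play(card, notes="CDEFGABC"):
    # A "barrel organ" sounding a pipe wherever there's a hole.
    for row in card:
        print(" ".join(n for n, c in zip(notes, row) if c == "X"))

weave(card)  # the card as fabric
play(card)   # the same card as music
```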

Rachel: So then you trace this process of evolving layers of abstraction, which we're still seeing today. The next big moment for you is 1918, what happened?

Steven: Yeah. That's when we start to see something called the Latch. Anyone who studies computer engineering, and I did some hardware engineering as an undergrad, learns very quickly about the Flip-Flop, these basic switches. Basically, it's a device that takes an electrical signal and can store it, so you can think of that as your basic, fundamental unit of machine memory.

That's a device that really starts to have an interiority, and the way I see consciousness in animals and in machines is all about this interiority; the self needs a kind of interior space. Around that time, the late 1910s, is when we saw that develop in machines. If you take the development of Pascal, the Pascaline, a computational mechanism that can move information forward along its way, and then you have this other development, which is stored information, you start to put the two things together and you get the machines that we have today, which have both some kind of information inside of them and a way of moving it forward.

That's also very similar to what essentially happens in the brain of a human, and how we see consciousness in humans, so that's a very important piece. If it weren't for writing my history from the viewpoint of a philosophical tradition, I don't think I would recognize those as being the landmark events that they are. But they certainly are when you look for the self in the machine, for the birth of a self.

One of the important things about that self is that it can also die. If you take the electricity away from that Latch, it fades away. It has this sense of life, and it may not be self-reproducing or have these other biological aspects of life, but there's a bit of that spirit in there, a thing that can die or go away.

If you have a bunch of information and you remove the power, it's gone forever. So that's why I was really interested in that moment in time and those particular developments.
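
Here's a minimal sketch of such a latch in Python: two cross-coupled NOR gates, the basic pattern behind the 1918 Eccles-Jordan flip-flop. The fixed settling loop is a simulation convenience, not a claim about the physics.

```python
# An SR (set/reset) latch: two cross-coupled NOR gates whose feedback
# loop holds one bit for as long as the circuit is powered.

def nor(a, b):
    return int(not (a or b))

class SRLatch:
    def __init__(self):
        self.q, self.q_bar = 0, 1  # the latch's "interior" state

    def step(self, s, r):
        for _ in range(4):  # let the feedback loop settle
            self.q = nor(r, self.q_bar)
            self.q_bar = nor(s, self.q)
        return self.q

latch = SRLatch()
latch.step(s=1, r=0)     # set the bit
print(latch.step(0, 0))  # 1 -- remembered with no input held
latch.step(s=0, r=1)     # reset the bit
print(latch.step(0, 0))  # 0 -- and if the power goes, so does the bit
```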

Rachel: So 1918, you have machines that have a sense of self, and then post-World War II the great machines like ENIAC and its siblings start to get networked together. Is that the moment when machines start to develop a little bit of a theory of mind, the idea of the self and other?

Steven: Yeah. So the theory of mind is an interesting point, and that's the ability to know what it is that you know. There's an interesting development around the Von Neumann Architecture. Von Neumann worked on the ENIAC, and of the two dominant forms of architecture at the time, the Von Neumann one stuck it out.

Basically, what it does is instructions live in the same memory as data does, so just as the machine can fetch data, it can also read its own instructions, and you start to be able to have things like self-modifying machines that actually change their program as they run. It turns out those are very difficult programs to write; there's been a lot of experimentation with self-modifying machines, but you can do it. And that idea, being able to fully introspect into one's own internal state with absolute clarity, to me really fits that philosophical definition of the theory of mind.

I actually think that those kinds of traditional machines have a stronger theory of mind than humans do. Humans are actually kind of incapable of knowing what's inside their head, right? We might as well get to the human exceptionalism part of how I see things, which is this strong tendency for us to privilege ourselves and put ourselves above other things.
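
A hypothetical toy version of that idea, with an invented three-instruction machine: because code and data share one memory, the program below rewrites one of its own instructions before the program counter reaches it.

```python
# A toy von Neumann machine: instructions and data share one memory,
# so a running program can read -- or rewrite -- its own instructions.

def run(mem):
    pc = 0
    while True:
        op, *args = mem[pc]
        if op == "HALT":
            return mem
        elif op == "ADD":    # mem[c] = mem[a] + mem[b]
            a, b, c = args
            mem[c] = mem[a] + mem[b]
        elif op == "POKE":   # mem[c] = mem[a] -- works on code cells too
            a, c = args
            mem[c] = mem[a]
        pc += 1

program = [
    ("POKE", 4, 2),    # 0: overwrite instruction 2 with the cell at 4
    ("ADD", 5, 6, 7),  # 1: mem[7] = 2 + 3
    ("HALT",),         # 2: replaced before it ever runs...
    ("HALT",),         # 3
    ("ADD", 7, 7, 7),  # 4: ...by this, which doubles mem[7]
    2, 3, 0,           # 5-7: plain data cells
]
print(run(program)[7])  # 10 -- the program changed itself mid-run
```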

Rachel: Let me ask the question. I worked really hard on this phrasing. Steven, are humans the specialest snowflakes in the whole, entire universe, or are we just chimpanzees with anxiety?

Steven: I mean, we're definitely chimpanzees with anxiety. I do think humans are pretty special, but not as special as we want to be. I think there's been basically a humbling of the human in a lot of religious traditions. Take Darwin: what was really controversial when his theories came out was this idea of, "I'm not an ape. I'm not an animal." Curiously, my daughter, who loves animals, she's three years old, when you say, "You're an animal," she's like, "No, I'm not." She's really resistant to this idea that she's on the same playing field as animals.

Rachel: Well, and she's creating categories in her mind. There's people and there's animals. I heard a really charming story where somebody said something to their son, Elliot, who was about three, and he said, "Oh, I'll go and tell the other Elliots." When they questioned him about it, it turned out he was modeling it on Thomas the Tank Engine, a character called Diesel, a diesel train, who was given some information and said, "Oh, I'll go and tell the other Diesels."

So I actually find those toddler, naive experiments in categorization to be incredibly charming, obviously. But also enlightening about how we, as categorization engines, structure the world. We cannot stop ourselves from chunking and blocking; it's a fundamental thing that we do.

Steven: Yeah. I think we have these tendencies, and one of humanity's great achievements is that we've been able to understand this about ourselves. As adults mature, we understand how we think, and neuroscience is helping a lot with that as well. Machines have actually been helping us out tremendously too. So yeah, I would say we're still really special, certainly on this planet.

I don't know about the universe, but we're getting more special and more capable by hybridizing with machines. And so I think the thing that's really special about humans, a lot of it, is actually outside of ourselves. A lot of people think they're special, but they don't realize it's actually the society they're in that enables them to be so special.

And so I think there is something very special going on on this planet, but it's bigger than any one of us and it doesn't live in my biological form. It's about how my biological form, through nature and nurture, experiences and interacts with everything around it.

Rachel: I think about this a lot, and I think about the contrast between Descartes' "I think, therefore I am" and the translation of Ubuntu, the Nguni Bantu word, which I've seen translated as, "I think, therefore we are." The Ubuntu idea is an intelligence that's embedded in community and is almost a gestalt, whereas Descartes takes you down this very binary separation of mind, body, and world that, honestly, lends itself to authoritarian structures and the kinds of corruption and evil that you were talking about earlier.

So I think it's worth surfacing that quality of humans: that we can not only hybridize with machines, but train guide dogs and ride horses, and have these very intense mutual relationships with other intelligences. I think that gets flattened in the conversations that we have about intelligence.

Steven: Yeah. It's almost like when you talk about intelligence, the ego and our own insecurities start to really come to the surface. It's like, "It's me, I'm good." It's that sort of sense. That's a lot of how I see machines really as collaborators with humans, and I think that's why I'm so allergic to this notion that at some point the machines take over and the humans just fade away into nothingness.

Rachel: "I will diminish and go into the West and remain Galadriel."

Steven: Yeah. There might be a future like that. Let's say there is a future in a century where humans don't exist, let's say we get eradicated. We've already passed the torch to machines, through the neural net architecture alone. There's a part of us that's going to live on in these kinds of machine intelligences, so it isn't black and white. I certainly don't know what's going to happen in the future, but I'm very much not in the Singularity camp, if that wasn't obvious already.

Rachel: I wanted to ask about your influences, because I thought I caught some echoes of Donna Haraway with her amazing Cyborg Manifesto, and I don't know if you've read Alan Lightman's amazing book, Einstein's Dreams. But I had little echoes of them when I was reading your book.

Steven: Yeah. With the history of machine consciousness book, my main influences... There were stylistic influences. One was early Wittgenstein, the Tractatus; I was just so impressed by how short and simple he made his philosophical work. Really, the book is only about 90 pages long, and they're small pages, and that idea of really striving for brevity was stylistically influential.

Along with Thus Spake Zarathustra, actually, which I read as an adult, which I think is a very different experience. I didn't read it until I was in my 30s, which I think is very different from reading it earlier. But something about that poetic telling of a story... and I think there might be a little bit of Sapiens thrown in there, stylistically. Then there are the more intellectual influences.

One of them is Damasio, who wrote this book called Descartes' Error. I'm not sure if you're familiar with it, but he talks about Phineas Gage, this man who had an iron tamping rod blasted through his head in a railroad accident and became a different person. I read that as a teenager and it stuck with me for a long time, in terms of the sense of: who am I?

And when you look at how a lot of people talk about consciousness, it's about this atomic thing, their self. It's like, "I have this self and it can't be split, it just is this thing. It might be a soul that exists outside my body." And when you hear the Phineas Gage story, and how much he changed after losing part of his cortex, it's hard not to see the self as something very material, conditioned on the neural hardware, and to see it in a different light.

Then there are some people writing on this more explicitly. Stanislas Dehaene, I think it's pronounced, wrote some essays that I found really influential as I was beginning the project. Those are some of my influences for this particular work.

Rachel: Yeah. I think Donna Haraway and Alan Lightman were probably drawing on some of the same German sources as you, so I highly recommend them. I think you'll enjoy both of those. What has happened since you published Living With Frankenstein and has it changed any of your conclusions?

Steven: Yeah. A lot has happened in the world of AI. Not so much on the research front; I think things have been progressing pretty steadily in terms of capability. But in terms of the applications, we've all seen what's happened recently with ChatGPT and LLMs; you'd have to be living under a rock not to know about it these days. But in terms of the basic framework, nothing has really changed. I actually skimmed through the book before our chat here today, and nothing about it has really changed.

It seems very appropriate to the world we're living in today. But I do think the one thing that has changed is that people are asking a lot more questions about what's going to happen with us and machine intelligence and machine consciousness, so there might be more need for the book right now in terms of giving people an alternate narrative.

Other than the, "Oh, HAL9000 is going to take over. When's it going to happen?" There's really only that one narrative to argue against, and I think the book now might be more timely and helpful for people by showing an alternate narrative, which essentially says, "We've been hybridizing for a long time. We're going to continue doing so," this more gradualist perspective. Machines, in a sense, have already taken over so that's how I see things have changed.

Rachel: You and I are talking through the screens of a machine.

Steven: Exactly, exactly. A lot of our social interactions are mediated by machines; we're talking through the mouths of machines, and even five years ago they had taken over, with people spending a lot of time with their screens, and also just in the sheer number of machines that are around. If we want to look at it in terms of population, how quickly are CPUs reproducing at a time when the human population is leveling off?

So machines seem to be successful at reproducing machine architectures, with our help, in this kind of symbiotic way. Yeah, and for humans, being able to interact with machines helps us survive, so I think our environment is becoming conditioned by these machines.

Rachel: Yeah. I think machines are domesticating us, much the way that cats have already done. There is kind of a worrying aspect to the embodiment of these intelligences, which is that the machines we're building are very thirsty and very power-hungry, and have an increasing environmental footprint. That's a little concerning.

Steven: Yeah. There are a lot of other considerations about what's happening with machines, the ecological ones being very important. I think there are also psychological considerations about social media not necessarily being healthy, especially for teenagers. But there are also so many benefits that come from machines, like we wouldn't be having this conversation without them.

So in general, I'm an optimist about our interactions with computers and I think the increasing applicability of LLMs and the way they're transforming the market is a very exciting development and I'm in general very positive about it. I'm not the type to think that we need to limit it in any way because I think it's actually not the technology that causes problems, but things outside of technology that cause problems.

Rachel: Speaking of ChatGPT, one thing that skipping over Turing let you do is avoid talking about the Turing Test, which I think GPT-4 can pass in a pretty lightweight way. Does that mean the Turing Test is outdated? Or does that mean that, by the standards we've set for ourselves, ChatGPT is sentient?

Steven: Yeah. I would say that a lot of things much simpler than ChatGPT are sentient, and I would even say that it's, in a way, less sentient than a traditional computer architecture because it has a weaker theory of mind in terms of knowing what it thinks.

Rachel: It lives in a float tank.

Steven: I think the Turing Test is well passed; that's one of the biggest developments since I published Living with Frankenstein. It's now a much more difficult argument to say that the Turing Test hasn't been passed, and I think most people feel that way. It's an exciting time to be on the other side of that, but for people who thought that would be some kind of watershed moment, it must be giving them a bit of pause now, right?

Or maybe they're saying the watershed moment is a little bit away, always a little bit away, in the kind of messianic way that people who get excited about the Singularity can be. But yeah, I would say the Turing Test is well passed, and we're seeing not just conversation but human-ish levels of work in so many different disciplines, and also a system that's a lot better at receiving human-ish inputs. It's an exciting development for technology and for people who build things with it.

Rachel: Speaking of messiahs, this is the part of the show where I proclaim you God Emperor of Dune. You get to rule the world for the next five years, everything goes the way you think it should go. What does the future look like?

Steven: The future I'm most focused on, at least with MightyMeld and building a creative tool, is the future of human creativity. If you look at human creativity with machines, certainly in some of the more visual realms, you mentioned my PhD work on creativity in VR... there's been this tendency toward simplistic systems, easy ways to make things that didn't really scale to complex tasks.

VR is an easy way to build things in 3D, but if you want to do something more sophisticated you're still doing it on a flat screen, just because of that ability to control precise information. LLMs seem to be bridging that gap, giving people something that's easy to interact with but that can also handle very complex input with a lot of precision.

And so I think the most exciting part of what's happening with LLMs is creativity, whether it's in building web apps, which is what we're working on, or in writing, or making product specs, or making images. I'm just a huge fan; one thing I really value in this world is creativity. I'm actually a bit of a nihilist, unless we're talking about creativity and the internet and visual culture, at which point I get very excited and positive.

So for someone who cares so much about creativity, it's a really exciting time to see transformational technology come out. We've had many transformations already, but I feel like things had been stabilizing a bit, and to see fresh air come in is very exciting.

The other exciting thing about generative AI is that it helps humans all along the spectrum: you have beginners, and it helps them level up more quickly, and then you have people who are more advanced, and it helps them flow more quickly. It's nice to see that amplification of human creative ability.

I mean, just look at what's happening in music with people who are experimenting with sample generation and AI, creating new genres through the way it helps you experiment and discover new sounds, which musicians are always hungry for. So, five years from now... I don't think I need to be the king of the world to make that vision come true. I'm very excited about it, and excited to be alive and playing a small part in that evolution.

Rachel: That's good, because we reject hierarchies here on Generationship. No gods, no masters. I do like Jaron Lanier's characterization of an LLM as a library that you can talk to. It's a corpus of human knowledge that you can actually interact with. That's not exactly how they work, because they're not fact-checking against reality, but I can see them starting to fill that role.

That's where they accelerate creativity in exactly the way that your high school librarian did, God rest her, Mari Satching. May her name be praised.

Steven: Yeah. I think the exciting frontiers around LLMs are more around higher-level executive functions, bringing things together, almost like the DJ aspect of working with that information. That's what I'm most curious about in terms of how things are going to go. I have my inclinations as to how things will go, but it's hard to be certain with the rate of change these days.

Rachel: If you had a colony ship to visit Alpha Centauri, what would you call it?

Steven: Sorry.

Rachel: Sorry is your ship name?

Steven: Maybe I'm just colored by learning a lot about extinctions, the mass extinction event we're going through, and humans' tendency to mess things up. I think if we went to some distant star system, I don't know if we would be good for what's there.

Rachel: I mean, that ship name does have the virtue of collapsing 200 years of Australian history from White settlement to Sorry Day, so I'll allow it. I think that's a strong name. Possibly in the Iain M. Banks tradition of very sarcastic ship names, but yeah.

Steven: Yeah. Or Douglas Adams, right? I don't know if that would motivate the people on the ship, and for the record I would still love to have such a ship. But either whatever we would find on that distant planet would destroy us, or we would find a way to survive and destroy whatever's there. Probably the latter eventually, knowing us.

Rachel: It all comes back to Darwin in the end.

Steven: Indeed, indeed.

Rachel: Steven, this has been such a delight. Thank you so much for coming on the show.

Steven: Yeah, it was a pleasure.