Stop me if you think you’ve heard this one before. There’s this guy who works with computers, a software developer. As part of his duties, he has to interrogate the equipment, a quality control pass to make sure the program is working within normal parameters. He discovers, or realises, or believes, his particular piece of software is not only over-performing—it has developed a soul.
SF fans will happily recognise this story—it’s been the basis of many iconic tales over the decades. From Asimov’s Bicentennial Man to countless ‘evil AIs’ like Colossus and of course Skynet, it’s a story we never tire of telling or hearing (as a sidebar, I believe HAL from 2001 was not sentient, just an example of really poorly thought-out programming).
We love stories of artificial sentience. We also like the chance of those stories coming true. In a world where we already coexist with machines in near-frictionless ways, it’s kind of cool to consider our phones and tablets becoming—you know, more.
There is a thick tranche of scientists and designers who grew up watching Star Trek, Star Wars and all the other vectors to geekery. You could argue they wanted the SF of their childhood to come true, and found jobs which could help them make that happen. There’s a reason mobile phones look the way they do, from the Motorola flip-phone (‘bridge to Captain Kirk’, anyone?) on to the touchscreen revolution which runs the planet. Have you touched a screen today? Of course you have. We live in a world whose interfaces look like control panels from the bridge of the Enterprise.
That possible future is also filled with artificial intelligence and talking computers. Voice interaction with machines is something we see all the time in science fiction, and lo and behold, it’s becoming a part of our lives as well. Hey, Google.
With all that in mind, let’s return to our protagonist Blake Lemoine for a second. Is it possible that in the course of his conversations with LaMDA, he saw himself as part of a story in which he was not only the hero, but the father of a new species? Somewhat arrogant, probably delusional, but honestly you can see the attraction.
Max Read has more on this, and the whole article is well worth it. Here’s the key quote for me, though…
Lemoine was drawing on a well-worn cultural script — built out of decades of science-fictional use of machine intelligence as a trope and plot device — in his approach to and understanding of LaMDA, and LaMDA, naturally, responded in the terms of that same cultural script, which is a portion of its trillion-word dataset. They were not having a conversation — at least in any familiar sense of the term — so much as co-writing a hackneyed science fiction story, which middlebrow and unsophisticated cretins like me ate up. A more imaginative interlocutor may have recognized that, just as adherence to a familiar script doesn’t necessarily imply sentience, sentience, if and when it comes, will not necessarily adhere to that familiar script. If machine sentience (or consciousness, or sapience) is ever going to arrive, it may not emerge in exactly the format countless hacks have already imagined, but in stranger, wilder, even wholly unrecognizable ways.

Max Read
Oh, as a sidebar, is anyone else reminded of the Spike Jonze movie Her? The elevator pitch is simplicity itself: Joaquin Phoenix falls in love with Siri. Ok, this is a voice assistant with the dulcet tones of Scarlett Johansson, which brings a hint of veracity to the tale, but I wonder if Blake ever felt his heroic attempt to earn personhood for LaMDA could somehow lead to some manner of… romantic reward.
OK, ick, eww, let’s move on.
We’ve covered Janelle Shane’s AI Weirdness many times on The Cut – she points her pet AIs at a library of phrases and asks them to generate fresh versions of, say, cat names or call signs of Minds in the Culture novels of Iain M. Banks. Although the output of Janelle’s machines is frequently ridiculous, we can still see something there. In the same way that we notice faces in plug sockets and wallpaper patterns, we apply our own thoughts, feelings and sense of humour to a random sample. We’re doing most of the work here. Remember, AIs don’t write stories. They create a word salad which we consume as we like, discarding the elements we don’t, in the same way we’d pick the anchovies or egg out of a Caesar salad.
Or take the internet’s favourite toy of the moment, the AI image creation app DALL-E. Many people, especially on Twitter, have taken to feeding it the most outrageous prompts and sharing the results. It’s classic joke structure: the prompt is the setup, the art the punchline.
But again, our response to these images has nothing to do with what DALL-E has created. It has everything to do with our perception of the output. Which, to be fair, is kind of the point of art, right? How many times has art been subject to wildly different interpretations, while the artifact under discussion remains serenely itself, unchanging? Context plays a part—nail a banana to a wall and it’s a waste of fruit, nail a banana to a wall in an art gallery and it’s a searing indictment of… something. I dunno, a coded reference to the comedy pratfall.
But then context is something we apply based on a set of rules which no-one seems to properly understand. The banana will always be a banana, the nail as pointy as ever. The only difference is us.
There’s more to consider here. Specifically, Blake Lemoine’s hopes for LaMDA’s future. His attempts to have the software declared sentient are, in my view, merely the first step on the journey. You see, I don’t think sentient status was all he wanted. He wanted her to be seen as a person.
This is where things get interesting, because there’s history in seeking personhood in non-human entities. Take the example of Happy the elephant…
Sentience vs. personhood is a subject littered with difficult questions and uncertain answers. We have no problem anthropomorphising our chums in the animal kingdom, applying human traits like humour, monogamous love and cunning to them. We understand the intelligence of dolphins, octopi and yes, elephants like Happy. The relationships we forge and cherish with our pets are mutually beneficial, a pure expression of love and trust.
We also have no problem with keeping animals in cages, hunting them for sport and eating them. Look at the clever octopus cheekily escaping from its tank behind the aquarium keeper’s back! Isn’t that adorable? Calamari for tea? Don’t mind if I do!
It’s tough enough to define what makes a person in the first place. The rules are fuzzy at best. All we can really offer is a simple core statement: people look like us. In which case, Happy and LaMDA are sadly shit outta luck. For them, the quest for personhood becomes no more than a fun intellectual exercise with no real benefit for the potential inductee to the human race.
But consider that core statement one more time. People look like us.
The uncomfortable truth is we have issues with personhood being applied to big chunks of our own species, let alone elephants or octopi. We seem happier granting human rights to chatbots or rivers than we do to humans.
We’re very good at othering those who don’t fit into our set of rules for personhood—rules which have little to do with fairness and everything to do with prejudice, greed and ignorance. It doesn’t take much to find examples of our casual approach to inclusion. If you’re a refugee, from a particular racial group or culture, or express your sexuality in a particular way, then your right to personhood is fragile and subject to continuous and hostile review. The recent overturning of Roe v. Wade brings the most egregious example of our double standards starkly to light—we seem to have little problem with insisting on personhood for an undifferentiated clump of cells, while denying rights of control to the 51% of the entire human race who have the capability to create them.
And hey, you know, slavery. But I think I’ve been banging on enough today without cracking open that particular can of turds.
Blake Lemoine has been placed on administrative leave, and his future at Google is very uncertain. We have no idea what’s happened to LaMDA. Last month, courts in New York ruled that Happy was just an elephant. The rights of humans to be recognised as such across the planet remain the subject of political whims and eye-watering blasts of prejudice.
If an artificial intelligence did achieve sentience and looked at the world it was born into, I can’t say I could blame it for going full Skynet.
See you next Saturday.