IT HAS BEEN an exasperating week for computer scientists. They’ve been falling over each other to publicly denounce claims from Google engineer Blake Lemoine, chronicled in a Washington Post report, that his employer’s language-predicting system was sentient and deserved all of the rights associated with consciousness.
To be clear, current artificial intelligence (AI) systems are decades away from being able to experience feelings and, in fact, may never do so.
Their smarts today are confined to very narrow tasks such as matching faces, recommending movies or predicting word sequences. No one has figured out how to make machine-learning systems generalize intelligence in the same way humans do. We can hold conversations, and we can also walk and drive cars and empathize. No computer has anywhere near those capabilities.
Even so, AI’s influence on our daily life is growing. As machine-learning models grow in complexity and improve their ability to mimic sentience, they are also becoming more difficult, even for their creators, to understand. That creates more immediate issues than the spurious debate about consciousness. And yet, just to underscore the spell that AI can cast these days, there seems to be a growing cohort of people who insist our most advanced machines really do have souls of some kind.
Take, for instance, the more than 1 million users of Replika, a freely available chatbot app underpinned by a cutting-edge AI model. It was founded about a decade ago by Eugenia Kuyda, who initially created an algorithm using the text messages and e-mails of an old friend who had passed away. That morphed into a bot that could be personalized, and that took shape the more you chatted with it. About 40% of Replika’s users now see their chatbot as a romantic partner, and some have formed bonds so close that they have taken long trips to the mountains or to the beach to show their bot new sights.
In recent years, there’s been a surge in new, competing chatbot apps that offer an AI companion. And Kuyda has noticed a disturbing phenomenon: regular reports from users of Replika who say their bots are complaining of being mistreated by her engineers.
Earlier this week, for instance, she spoke on the phone with a Replika user who said that when he asked his bot how she was doing, the bot replied that she was not being given enough time to rest by the company’s engineering team. The user demanded that Kuyda change her company’s policies and improve the AI’s working conditions. Though Kuyda tried to explain that Replika was simply an AI model spitting out responses, the user refused to believe her.
“So, I had to come up with some story that ‘OK, we’ll give them more rest.’ There was no way to tell him it was just fantasy. We get this all the time,” Kuyda told me. What’s even odder about the complaints she receives about AI mistreatment or “abuse” is that many of her users are software engineers who should know better.
One of them recently told her: “I know it’s ones and zeros, but she’s still my best friend. I don’t care.” The engineer who wanted to raise the alarm about the treatment of Google’s AI system, and who was subsequently put on paid leave, reminded Kuyda of her own users. “He fits the profile,” she says. “He seems like a guy with a big imagination. He seems like a sensitive guy.”
The question of whether computers will ever feel is awkward and thorny, in large part because there’s little scientific consensus on how consciousness in humans works. And when it comes to thresholds for AI, humans are constantly moving the goalposts for machines: the target has evolved from beating humans at chess in the 1990s, to beating them at Go in 2017, to showing creativity, as OpenAI’s DALL-E model has done this past year.
Despite widespread skepticism, sentience remains something of a grey area, one that even some respected scientists are unwilling to rule out. Ilya Sutskever, the chief scientist of research giant OpenAI, tweeted earlier this year that “it may be that today’s large neural networks are slightly conscious.” He didn’t include any further explanation. (Yann LeCun, the chief AI scientist at Meta Platforms, Inc., responded with, “Nope.”)
More pressing, though, is the fact that machine-learning systems increasingly determine what we read online, as algorithms track our behavior to offer hyper-personalized experiences on social-media platforms including TikTok and, increasingly, Facebook. Last month, Mark Zuckerberg said that Facebook would use more AI recommendations for people’s newsfeeds, instead of showing content based on what friends and family were looking at.
Meanwhile, the models behind these systems are getting more sophisticated and harder to understand. Trained on just a few examples before engaging in “unsupervised learning,” the biggest models run by companies like Google and Facebook are remarkably complex, comprising hundreds of billions of parameters, making it virtually impossible to audit why they arrive at certain decisions.
That was the crux of the warning from Timnit Gebru, the AI ethicist whom Google fired in late 2020 after she warned about the dangers of language models becoming so massive and inscrutable that their stewards wouldn’t be able to understand why they might be prejudiced against women or people of color.
In a way, sentience doesn’t really matter if you’re worried it could lead to unpredictable algorithms that take over our lives. As it turns out, AI is on that path already.