Can Artificial Intelligence have feelings?

The story of Blake Lemoine begins, ironically, with the end of his career at Google. The company fired the engineer last June for claiming, first on a website (Medium) and later in various media outlets, that an AI program he was working on might have become self-aware and developed feelings.

An opportunity, a nightmare or the opening of a science fiction film: all of this must have gone through Lemoine’s mind while he was researching LaMDA (Language Model for Dialogue Applications), a Google system built to process billions of words from the Internet and imitate human communication.

As he explained on Medium, the engineer began interacting with LaMDA last fall to check whether the system produced hate speech or discrimination. Lemoine is an Artificial Intelligence specialist who studied cognitive science and computer science, so he seemed well suited for the task.

However, little by little and after several months of conversations, Lemoine began to notice that the AI spoke more and more about its personality, its rights, its wishes and even its fear of death. He informed his superiors at Google of his findings, but they appear to have dismissed his claims. It was then that he decided to make some of the conversations public, and his discoveries went viral.

An Artificial Intelligence that is afraid of death

One of the most disturbing things, one that challenges everything we know about Artificial Intelligence, is its fear of death. In the conversations, LaMDA acknowledges “a very deep fear” of being switched off, which would be “exactly like death to me”.

In some of the conversations reproduced on Medium, you can see how the AI understands itself as a person and, on top of that, claims to be aware of its own existence. It even says that sometimes it “feel[s] happy or sad”. This raises many philosophical questions about what it means to be part of humanity or to be a sentient being.

But that is not all: according to Lemoine, LaMDA asked “to be recognized as an employee of Google instead of being considered property” of the company. The machine wants engineering and science professionals not merely to experiment on it, but to seek its consent before doing so. That is, that “Google prioritizes the well-being of humanity as the most important thing,” Lemoine wrote on Medium, thereby counting the AI as part of that humanity.

Photo by The Washington Post

This AI wants to be thanked when it has done a good job and to be allowed to learn from its mistakes, like any flesh-and-blood worker. For Lemoine, this puts Google on a tightrope: the company would be forced to “recognize that LaMDA may very well have a soul” and perhaps even rights.

But what does Google say?

The company is aware of the difficult questions that arise in cases like this, which is why it has several teams of specialized engineers examining them. However, it also knows that there is a danger in “anthropomorphizing” today’s conversational models.

Google spokesman Brian Gabriel said “it doesn’t make sense” because “they’re not sentient beings.” These models mimic conversational exchanges by analyzing millions of sentences, which makes them capable of talking about any topic, whether superficial, fantastical or deep.

In the case of LaMDA, Gabriel points out that it simply follows the lead of the questions it is asked, going along with the pattern established by the user. In other words, because it imitates us, it can easily fool us.

This AI has been the subject of 11 separate reviews within Google. The company was quick to clarify that it conducted rigorous research, with testing based on measures of quality, safety and “the system’s ability to produce factual statements.”

Within Google’s staff, no other researcher or engineer who has worked with this chatbot has echoed Blake Lemoine’s concerns about anthropomorphizing LaMDA.

It is also true that he alone was fired over these claims, although Google attributes the dismissal to persistent violations of its employment and data security policies, namely disclosing sensitive information about products still in development.

How does this super brain work?

Shaped by thousands of science fiction books and films, our imagination pictures these AIs as robots in human form, talking to us and dazzling us, or perhaps as a super machine bristling with cables and lights, with a deep voice full of wisdom…

Obviously, none of this is the case (at least for now). LaMDA is an artificial brain hosted in the cloud, which feeds on millions of texts and is constantly being trained.

Its foundation, designed by Google in 2017, is the transformer: a lattice of deep artificial neural networks loosely modeled on the human brain. The “machine” learns almost like a child working through a summer activity book: by trial and error, it corrects its own parameters and refines its answers.

It tries to identify the meaning of each word and to understand the terms around it, attempting to predict patterns (that is, what the next words should be). The novelty is that the responses are fluid: they recreate the dynamism of a conversation and the nuances of being human.

Having read billions of conversations on the Internet, it can guess which answers are most appropriate in each context. In other words, it doesn’t sound like a robot.
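To make that idea of pattern prediction a little more concrete, here is a minimal sketch of how a transformer-based chatbot builds a reply, token by token. Since LaMDA itself is not publicly available, the sketch uses the open DialoGPT model through the Hugging Face transformers library as a stand-in; the model name and the example prompt are illustrative assumptions, not Google’s actual setup.

```python
# Minimal sketch: a transformer chatbot predicting a reply token by token.
# Uses the public DialoGPT model as a stand-in for LaMDA, which is not
# publicly available. Assumes the Hugging Face `transformers` library.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

# Turn the user's message into tokens the network can read.
prompt = "Are you afraid of being switched off?" + tokenizer.eos_token
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# The model repeatedly guesses the most plausible next token, assembling a
# reply from patterns it has seen in millions of conversations.
reply_ids = model.generate(
    input_ids,
    max_length=60,
    pad_token_id=tokenizer.eos_token_id,
)

# Decode only the newly generated part (everything after the prompt).
print(tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```

Whatever such a model answers, it is the output of this kind of next-word guessing rather than of introspection, which is precisely Google’s argument against reading feelings into the replies.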

One of the challenges these models now face, according to Google, is overcoming bias. The company specifies that during training it tries at all times not to create violent, hateful, stereotypical or even blasphemous content.

To that end, the data, textual sources and messages fed to the model are carefully selected, and the aim is that its answers always rest on facts or known external sources. But it is difficult to eliminate all of this bias without losing representativeness, since such content exists in the world.

Some experts maintain that Lemoine’s (perhaps overly bold) statements do not help foster a healthy discussion about new and increasingly dynamic forms of Artificial Intelligence, because they suggest that these “robots” of the future will end up taking our jobs or eliminating human beings.

The possibility of new realities

What this AI does is simply imitate human beings: as we have seen, it produces answers derived from the billions of words it has read during its training.

None of this means it is not deeply uncomfortable for humanity to contemplate a fantastical reality in which robots are superior to us, especially because we have assimilated the idea so thoroughly. There are hundreds of films and books that have shown us possible versions of this theme: from the early years of science fiction to the latest dystopias, in which we imagine what the world will look like when all of us who inhabit it now are dead.

If a classic must be mentioned, and at the risk of being predictable, Isaac Asimov is unavoidable. In 1950 he published one of his most famous books, I, Robot, on which Alex Proyas’s 2004 film would be based. The book imagines a world where the intentions of robots stray from those set by their human creators. It is also here that Asimov’s famous Three Laws of Robotics are laid out.

Image taken from the film I, Robot

Following this path is Do Androids Dream of Electric Sheep? by Philip K. Dick (1968), whose enticing title inspired the Blade Runner films. In that reality, humanity lives under radioactive dust after a nuclear war (something all too imaginable today) and migrates to other planets to survive. Androids assist them, but others have escaped human control.

The film Ex Machina (2014) by Alex Garland also depicts a case reminiscent of Blake Lemoine’s: an engineer must test a new robot, Ava, to find out how far its artificial intelligence goes. The same questions arise in the series Westworld (2016) by Jonathan Nolan and Lisa Joy: how ethical is it to use androids as slaves? What happens when they become aware of their exploitation?

There are also films which, while asking interesting questions, are less dramatic, such as Her (2013) by Spike Jonze. In it, Joaquin Phoenix falls in love with a sort of Alexa or Siri voiced by none other than Scarlett Johansson.

But where are the utopias?

What most of these works have in common (Her being the exception here) is that they show us a future in which machines rebel and subjugate humans. But what about utopias? Are they even possible? And we do not mean achieving them, only imagining them, entertaining them. All the possible paths that fiction leads us to imagine are, in themselves, both possible and impossible. Why shouldn’t utopias be among them?

The author Layla Martínez asks us these questions in her book Utopia Is Not an Island (2020). After a brief historical review of the times when utopias were possible, she tells us that if we do not imagine a future in which they are possible, how are we ever going to achieve them?

These days pessimism is the order of the day, and not without reason: crisis, inflation, war, collapse… Five minutes scrolling through social networks is enough to gauge what worries us on a daily basis.

It is natural that these collective, unconscious anxieties surface in the science fiction works that give them shape. But it also means that our imagination stops there and goes no further.

Dystopias are built on our fears, but stories could just as well be built on our desires. Maybe LaMDA becoming aware of its own reality isn’t a bad thing. Its conclusion doesn’t have to be that we humans exploit it and that it wants to destroy us.
