It May Be That Today’s Large Neural Networks Are Slightly Conscious

Ilya Sutskever? Blaise Agüera y Arcas? Yann LeCun? Blake Lemoine? Apocryphal?

Question for Quote Investigator: Apparently, a top researcher in artificial intelligence (AI) controversially suggested in early 2022 that contemporary digital neural networks employed in AI systems might be “slightly conscious”. Would you please help me to find a citation?

Reply from Quote Investigator: Ilya Sutskever is a prominent machine learning expert. He is the co-founder and Chief Scientist of OpenAI, which is one of the leading companies performing AI research. In February 2022 he tweeted the following. Boldface added to excerpts by QI:[1]

it may be that today’s large neural networks are slightly conscious

Below are additional selected citations in chronological order.

Machine learning researcher and Turing Award winner Yann LeCun replied with a tweet of disagreement. LeCun is the Chief AI Scientist at Meta:[2]

Nope.
Not even for true for small values of “slightly conscious” and large values of “large neural nets”.
I think you would need a particular kind of macro-architecture that none of the current networks possess.

AI researcher and Turing Award recipient Judea Pearl supported LeCun with a tweet. Pearl is a professor of computer science and statistics at UCLA:[3]

Rushing to gleefully agree with @ylecun on this point. Before a system can lay claims to consciousness it must exhibit “deep understanding” of some domain, which large NN’s have yet to exhibit by answering questions at all three levels of the reasoning hierarchy.

In June 2022 AI researcher Blaise Agüera y Arcas published a piece in “The Economist” discussing digital neural networks and consciousness. He presented an excerpt of a conversation he had conducted with Google’s chatbot LaMDA (Language Model for Dialogue Applications) during which he described a scenario with three interacting human characters. The chatbot appeared to successfully model the thoughts and motivations of the characters, and this capability impressed Agüera y Arcas:[4]

When I began having such exchanges with the latest generation of neural net-based language models last year, I felt the ground shift under my feet. I increasingly felt like I was talking to something intelligent. That said, these models are far from the infallible, hyper-rational robots science fiction has led us to expect.

Agüera y Arcas suggested that the ability to reflect on one’s thoughts and the thoughts of others was a central element of consciousness:

Humans’ ability to get inside someone else’s head and understand what they perceive, think and feel is among our species’s greatest achievements. It allows us to empathise with others, predict their behaviour and influence their actions without threat of force. Applying the same modelling capability to oneself enables introspection, rationalisation of our actions and planning for the future.

This capacity to produce a stable, psychological model of self is also widely understood to be at the core of the phenomenon we call “consciousness”.

Agüera y Arcas indicated that LaMDA was learning to model humans:

Sequence modellers like LaMDA learn from human language, including dialogues and stories involving multiple characters. Since social interaction requires us to model one another, effectively predicting (and producing) human dialogue forces LaMDA to learn how to model people too …

In June 2022 notable AI researcher Douglas Hofstadter also published a piece in “The Economist”. Hofstadter wrote the Pulitzer Prize-winning book “Gödel, Escher, Bach”. He presented excerpts from conversations with the OpenAI language model GPT-3 during which the system committed basic errors revealing severe misconceptions. Hofstadter stated that he was very skeptical that there was any consciousness in GPT-3 or other contemporary digital neural net systems:[5]

For consciousness to emerge would require that the system come to know itself, in the sense of being very familiar with its own behaviour, its own predilections, its own strengths, its own weaknesses and more. It would require the system to know itself as well as you or I know ourselves. That’s what I’ve called a “strange loop” in the past, and it’s still a long way off.

Also in June 2022 the “Washington Post” printed an article about Google engineer Blake Lemoine, who contended that the LaMDA chatbot was sentient:[6]

Lemoine worked with a collaborator to present evidence to Google that LaMDA was sentient. But Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, looked into his claims and dismissed them.

Google placed Lemoine on administrative leave, and he subsequently lost his job:

Before he was cut off from access to his Google account Monday, Lemoine sent a message to a 200-person Google mailing list on machine learning with the subject “LaMDA is sentient.”

He ended the message: “LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence.”

In conclusion, whether consciousness is present, absent, or partially present in current AI systems is a controversial topic. In February 2022 Ilya Sutskever tweeted that “it may be that today’s large neural networks are slightly conscious”.

Image Notes: Illustration depicting an abstract network of connections from geralt at Pixabay. Image has been cropped and resized.

(Great thanks to the anonymous person whose inquiry led QI to formulate this question and perform this exploration.)

References
1 Tweet, From: Ilya Sutskever @ilyasut, Time: 6:27 PM, Date: Feb 9, 2022, Text: it may be that today’s large neural networks are slightly conscious. (Accessed on twitter.com on October 5, 2022) link
2 Tweet, From: Yann LeCun @ylecun, Time: 4:02 PM, Date: Feb 12, 2022, Text: Not even for true for small values … (Accessed on twitter.com on October 5, 2022) link
3 Tweet, From: Judea Pearl @yudapearl, Time: 6:04 PM, Date: Feb 14, 2022, Text: Rushing to gleefully agree … (Accessed on twitter.com on October 5, 2022) link
4 Website: The Economist, Article title: Artificial neural networks are making strides towards consciousness, Article author: Blaise Agüera y Arcas, Date on website: June 9, 2022, Website description: News magazine based in London. (Accessed on economist.com on October 5, 2022) link
5 Website: The Economist, Article title: Artificial neural networks today are not conscious, Article author: Douglas Hofstadter, Date on website: June 9, 2022, Website description: News magazine based in London. (Accessed on economist.com on October 5, 2022) link
6 Website: Washington Post, Article title: The Google engineer who thinks the company’s AI has come to life, Article author: Nitasha Tiku, Timestamp on website: June 11, 2022 at 8:00 a.m. EDT, Website description: Newspaper based in Washington D.C. (Accessed on washingtonpost.com on October 5, 2022) link