Andrew Ng? Nick Bostrom? Apocryphal?
Dear Quote Investigator: A top artificial intelligence (AI) researcher was asked whether he feared the possibility of malevolent superintelligent robots wreaking havoc in the near future, and he answered “No”.
He illustrated his answer with the following analogy. Worrying about human overpopulation on Mars is fruitless. It is a distant and speculative possibility, and there are currently no constructive actions one can take to prevent it. Worrying about the danger of futuristic evil killer robots is similarly pointless.
Do you know the name of the AI researcher? Would you please help me to find a citation?
Quote Investigator: In March 2015 a conference focused on GPU (graphics processing unit) technology was held in San Jose, California. The keynote was delivered by computer scientist Andrew Ng, former Director of the Stanford University AI Lab and co-founder of the Google Brain project. Ng contended that discussion of “evil killer robots” was an “unnecessary distraction”. The following excerpt has been transcribed from a YouTube video of the address:[1]
I don’t see a realistic path for our AI, for our neural networks, to become sentient and turn evil. I think we’re building more and more intelligent software. That’s a great thing. . . . But there’s a big difference between intelligence and sentience, and I think our machines are getting more and more intelligent. I don’t see them getting sentient.
Ng downplayed the danger of autonomous malevolent AI systems by employing an analogy referring to the futuristic possibility of overpopulation on Mars. Boldface added to excerpts by QI:
I don’t work on preventing AI from turning evil today, for the same reason, because I don’t think we can productively make progress on that. So I don’t work on preventing AI from turning evil for the same reason that I don’t work on the problem of overpopulation on the planet Mars.
Below are additional selected citations in chronological order.
Technology news website “The Register” published an article about Ng’s speech on the day it was delivered. The piece included a quotation from Ng:[2]
“There’s a big difference between intelligence and sentience. There could be a race of killer robots in the far future, but I don’t work on not turning AI evil today for the same reason I don’t worry about the problem of overpopulation on the planet Mars.”
In May 2015 the website “Wired” published an interview with Ng conducted via Skype, and he revisited the topic of dangerous AI:[3]
I think that hundreds of years from now if people invent a technology that we haven’t heard of yet, maybe a computer could turn evil. But the future is so uncertain. I don’t know what’s going to happen five years from now. The reason I say that I don’t worry about AI turning evil is the same reason I don’t worry about overpopulation on Mars. Hundreds of years from now I hope we’ve colonized Mars. But we’ve never set foot on the planet so how can we productively worry about this problem now?
In November 2015 “The New Yorker” website published a profile of University of Oxford philosopher Nick Bostrom, who does fear the emergence of superintelligent AI. The article included a streamlined instance of the quotation without attribution:[4]
Last summer, Oren Etzioni, the C.E.O. of the Allen Institute for Artificial Intelligence, in Seattle, referred to the fear of machine intelligence as a “Frankenstein complex.” Another leading researcher declared, “I don’t worry about that for the same reason I don’t worry about overpopulation on Mars.”
In conclusion, Andrew Ng deserves credit for the remark under examination. QI suggests using the version spoken during the YouTube video or the version in the “Wired” interview.
Image Notes: Two robots standing on a planet with a moon backdrop from kellepics at Pixabay.
References
↑1 YouTube video, Title: GPU Technology Conference 2015 day 3: What’s Next in Deep Learning, Uploaded on November 20, 2015, Uploaded by: Tech Events, (Quotation starts at 63:16 of 67:39) Description: Speech delivered by Andrew Ng at the GPU Technology Conference held at the San Jose Convention Center in California from March 17 to March 20, 2015; Andrew Ng spoke on the third day, which was March 19, 2015. (Accessed on youtube.com on October 3, 2020)

↑2 Website: The Register, Article title: AI guru Ng: Fearing a rise of killer robots is like worrying about overpopulation on Mars, Article author: Chris Williams (Editor in Chief), Date on website: March 19, 2015, Website description: News and views for the tech community. (Accessed theregister.com on October 1, 2020)

↑3 Website: Wired, Article title: Andrew Ng: Why ‘Deep Learning’ Is a Mandate for Humans, Not Just Machines, Article author: Caleb Garling, Date on website: May 5, 2015, Website description: Technology news. Published by Condé Nast. (Accessed wired.com on October 1, 2020)

↑4 Website: The New Yorker, Article title: A Reporter at Large: The Doomsday Invention, Article author: Raffi Khatchadourian, Date on website: November 23, 2015, Website description: Reporting, political and cultural commentary, fiction, poetry, and humor. (Accessed newyorker.com on October 1, 2020)