Andrew Ng? Nick Bostrom? Apocryphal?

Question for Quote Investigator: A top artificial intelligence (AI) researcher was asked whether he feared the possibility of malevolent superintelligent robots wreaking havoc in the near future, and he answered “No”.
He illustrated his answer with the following analogy: worrying about human overpopulation on Mars is fruitless because it is a distant and speculative possibility, and there are no constructive actions one can take today to prevent it. Worrying about the danger of futuristic evil killer robots is similarly pointless.
Do you know the name of the AI researcher? Would you please help me to find a citation?
Reply from Quote Investigator: In March 2015 a conference focused on GPU (graphics processing unit) technology was held in San Jose, California. The keynote was delivered by computer scientist Andrew Ng, former Director of the Stanford University AI Lab and co-founder of the Google Brain project. Ng contended that discussion of “evil killer robots” was an “unnecessary distraction”. The following excerpt has been transcribed from a YouTube video of the address:1
I don’t see a realistic path for our AI, for our neural networks, to become sentient and turn evil. I think we’re building more and more intelligent software. That’s a great thing. . . . But there’s a big difference between intelligence and sentience, and I think our machines are getting more and more intelligent. I don’t see them getting sentient.
Ng downplayed the danger of autonomous malevolent AI systems by employing an analogy referring to the futuristic possibility of overpopulation on Mars. Boldface added to excerpts by QI:
I don’t work on preventing AI from turning evil today, for the same reason, because I don’t think we can productively make progress on that. So I don’t work on preventing AI from turning evil for the same reason that I don’t work on the problem of overpopulation on the planet Mars.
Below are additional selected citations in chronological order.