Some claim that Mr Musk’s real worry is market concentration—a Facebook or Google monopoly in AI, say—though he dismisses such concerns as “petty”.
Fears about AIs going rogue are not widely shared by people at the cutting edge of AI research. “A lot of the alarmism comes from people not working directly at the coal face, so they think a lot about more science-fiction scenarios,” says Demis Hassabis of DeepMind.
“I don’t think it’s helpful when you use very emotive terms, because it creates hysteria.” Mr Hassabis considers the paperclip scenario to be “unrealistic”, but thinks Mr Bostrom is right to highlight the question of AI motivation. How to specify the right goals and values for AIs, and how to ensure they remain stable over time, are interesting research questions, he says. (DeepMind has just published a paper with Mr Bostrom’s Future of Humanity Institute about adding “off switches” to AI systems.)
A meeting of AI experts held in 2009 in Asilomar, California, also concluded that AI safety was a matter for research, but not an immediate concern.
AI scares people, says Marc Andreessen, because it combines two deep-seated fears: the Luddite worry that machines will take all the jobs, and the Frankenstein scenario that AIs will “wake up” and do unintended things.
The idea that machines will “one day wake up and change their minds about what they will do” is just not realistic, says Francesca Rossi, who works on the ethics of AI at IBM.
An “intelligence explosion” is also considered unlikely, because it would require an AI to build each new version of itself in less time than the previous one, even as its intelligence grows. Yet most computing problems, even much simpler ones than designing an AI, take much longer to solve as they are scaled up.
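The arithmetic behind that objection is easy to illustrate. Below is a minimal, hypothetical sketch (not from the article; the function name and growth factors are invented purely for illustration): if each new version takes a fixed fraction of the previous build time, the total time stays bounded and an explosion is at least conceivable; if each version is a harder design problem and takes longer than the last, the total grows without limit.

```python
# Illustrative sketch only: two toy assumptions about how long each round of
# recursive self-improvement takes. The growth factors (0.5 and 1.2) are made up.

def total_build_time(rounds, first_time, next_time):
    """Sum the time spent over successive self-improvement rounds."""
    t, total = first_time, 0.0
    for _ in range(rounds):
        total += t
        t = next_time(t)  # how long the next version takes to design
    return total

# "Explosion" assumption: every new version takes half as long as the last,
# so the series converges (here, towards twice the first build time).
print(total_build_time(50, 1.0, lambda t: 0.5 * t))   # roughly 2.0

# "Scaling-up" assumption (the objection in the text): each version is a harder
# problem and takes 20% longer, so the total balloons instead of converging.
print(total_build_time(50, 1.0, lambda t: 1.2 * t))   # roughly 45,000
```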