A fictional story goes that scientists turn on an artificially intelligent machine and the first question they ask is: “Is there a god?” The machine responds: “There is now.”
So what is to stop this new god, if and when it is created, from smiting us mere mortals?
On Tuesday Channel 4 News interviews Alex Garland, writer and director of Ex Machina – a film about artificial intelligence.
In the interview, he says: “There is so much latent power involved in the world of AIs that you need a system of checks and balances – it’s crucial”.
His film comes against a backdrop of concern over advances in artificial intelligence.
Earlier this month an open letter signed by AI experts and leading thinkers, including Professor Stephen Hawking, called for greater research into how to make AI systems “robust and beneficial”.
“Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls,” the letter read.
A study by Stanford University last year went further, warning that we could one day “lose control of AI systems via the rise of superintelligences that do not act in accordance with human wishes” and that “such powerful systems would threaten humanity”.
Such is the concern in the scientific community that Elon Musk, the billionaire founder of SpaceX and Tesla, has recently donated $10m to the Future of Life Institute to “run a global research program aimed at keeping AI beneficial to humanity”.
Mr Musk, speaking after his donation, said: “It’s not clear whether we would be able to recover from some of these negative outcomes (of AI).
“You can construct scenarios where recovery of human civilisation does not occur. And when the risk is that severe it seems you should be proactive, not reactive.”
Nick Bostrom, another signatory of the open letter and an expert in superintelligence at the University of Oxford, argues that part of the problem is how people perceive the risks.
He says people focus on plot lines from Hollywood movies, such as Terminator and Blade Runner, instead of the realities of AI development.
“If we think about how humans maintain their dominant position on earth, it is not because we are stronger, or faster,” he told Channel 4 News.
“It is because our ancestors were slightly more intelligent than other apes. If we give a machine greater intelligence, then it will potentially be in a very powerful position to shape our future according to its preferences.”
He says that if the aim is to create machines that are as intelligent as humans, if not more intelligent, “safety has to be integral”, and that work is “underway to control the intelligence so that it would be on our side”.
The formidable challenge is programming morality into a machine, he said.
“Programming a computer to calculate Pi to so many decimal places is fine,” he said. “But to maximise love, or justice, or liberty – these are complex concepts that we would have to try and convert into computer language.”
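Bostrom’s distinction is between goals a programmer can state precisely and goals nobody knows how to formalise. As an illustration (a sketch written for this piece, not from the article), a well-defined objective such as “Pi to 50 decimal places” fits in a few lines of Python, here using Machin’s standard formula, whereas no comparable recipe exists for “maximise love”:

    # Illustrative only: Pi to a requested number of decimal places,
    # via Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239).
    from decimal import Decimal, getcontext

    def arctan_recip(x, eps):
        # arctan(1/x) as a Taylor series: sum of (-1)^k / ((2k+1) * x^(2k+1))
        x2 = Decimal(x) * x
        power = Decimal(1) / x          # holds x^-(2k+1), starting at k = 0
        total = Decimal(0)
        k = 0
        while power > eps:
            term = power / (2 * k + 1)
            total += term if k % 2 == 0 else -term
            power /= x2
            k += 1
        return total

    def pi_to(places):
        getcontext().prec = places + 10              # extra guard digits
        eps = Decimal(10) ** -(places + 5)
        pi = 16 * arctan_recip(5, eps) - 4 * arctan_recip(239, eps)
        return pi.quantize(Decimal(10) ** -places)   # round to the target

    print(pi_to(50))   # 3.14159265358979323846264338327950288419716939937511

The entire goal is captured in a dozen unambiguous lines – which is exactly what concepts like love, justice or liberty resist.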
And the other moral problem is how we treat artificially intelligent machines.
“At what point does it achieve a degree of moral status?” he asked. “Animals have a degree of moral status – that is why we know it is wrong to kick an animal, because it suffers.
“With AI we should start to think about whether we have any moral responsibility to what is going on inside a computer.”
Bostrom is among a small number of researchers working on such questions and the wider issues around AI safety, as businesses take a growing interest in the field.
Last year Google bought DeepMind – a firm specialising in artificial intelligence – for £242m.
The interest stems from the fact that AI technology is already at work in the world economy – from Google’s search engine predicting what you want to see, to Amazon predicting what you want to buy. For now, DeepMind’s work is understood to be focused on improving intelligent algorithms, such as those used in Google’s search engine.
The key to artificial intelligence is teaching computers how to learn – and as these learning algorithms become more advanced, the intelligence of computers will increase.
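To make that concrete, here is a deliberately tiny sketch of a learning algorithm (an illustration, not DeepMind’s code): it fits a straight line to four data points by gradient descent, repeatedly nudging its parameters to shrink its prediction error – the same basic loop, scaled up enormously, that underlies modern AI systems:

    # Toy "learning": fit y = w*x + b to example data by gradient descent.
    data = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2), (4.0, 8.8)]   # (x, y) pairs

    w, b = 0.0, 0.0    # start knowing nothing
    lr = 0.02          # learning rate: the size of each correction

    for step in range(5000):
        # Gradients of the mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
        grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
        w -= lr * grad_w   # adjust each parameter to reduce the error
        b -= lr * grad_b

    print(f"learned y = {w:.2f}*x + {b:.2f}")   # about y = 1.94*x + 1.15

The program is never told the answer; it discovers the line that best explains its examples, and more data plus more adjustable parameters is, in caricature, how the intelligence of such systems grows.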
Read more: Science fact and science fiction - a timeline of AI
And though there are fears about what superintelligence could mean for humanity, both Musk and Bostrom argue there could be great potential advantages.
Musk predicts an end to “drudgery”, as intelligent machines carry out mundane tasks.
“The greater benefits from AI would probably be in eliminating drudgery, or tasks that are mentally boring, not interesting,” he said.
And Bostrom, talking about machines that are more intelligent than humans, predicts a sudden technological acceleration in fields such as space colonisation.
“If you have super intelligence it could invent all the other technologies we think we could need,” he said.
There have also been suggestions that autonomy could feature in military developments. In 2013 Air Chief Marshal Sir Andrew Pulford, head of the RAF, told Channel 4 News that “the world of Terminator 2 is coming”, in response to a question about autonomous drones, saying there will be a time when drones “make decisions for themselves”.