Professor Peter McOwan, from the School of Electronic Engineering and Computer Science, discusses whether artificial intelligences would actually be able to take over the world, whether they’d want to, and how we'd know if they did.
Avengers: Age of Ultron is the latest film about robots or artificial intelligence (AI) trying to take over the world. It’s not a new conceit, with the likes of The Terminator, War Games and The Matrix coming before it, but perhaps it’s a theme that resonates more strongly with us these days as intelligent software becomes more widespread.
Perhaps this explains the nagging fears about the potential impact on humanity of artificial super-intelligences – such as Ultron in this film, an AI accidentally created by the Avengers. But what relation do the evil AIs of science fiction bear to scientific reality? Could AI take over the world? How would it do so, and why would it bother?
We need to consider the staples of motive and opportunity for our movie villain. For the motive, few would say intelligence in itself unswervingly leads to a desire to rule the world. As depicted in films, AI is often driven by self-preservation: the realisation that fearful humans might shut it down. It’s what drives HAL 9000 to kill the crew in 2001: A Space Odyssey, and it’s why Ava in Ex Machina plots against her creator.
It seems unlikely we’d ever give our current intelligent software tools cause to feel threatened: they benefit us, and there seems little point in striving to create self-awareness in, for example, software that searches the web for the nearest Italian restaurant.
Another popular motive for the evilness of evil AI is its zealous application of logic. In the Avengers film, Ultron believes that he can only protect the Earth by wiping out humanity. This death-by-logic is reminiscent of the notion that a computer would select a stopped clock over one that is two seconds slow, since the stopped clock is at least right twice a day. Ultron’s motivation, based on brittle logic and an indifference to life, seems at odds with today’s AI systems, which can already deal with uncertainty using mathematical probability and are built to provide productive services for us.
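To make that contrast concrete, here is a minimal sketch of the kind of probabilistic update modern AI systems use to weigh uncertain evidence rather than follow rigid rules; the hypothesis and all the numbers are invented purely for illustration.

```python
# A single Bayesian update: revising a belief in light of new, uncertain
# evidence. All probabilities here are invented for illustration.

def bayes_update(prior: float, likelihood: float, false_alarm: float) -> float:
    """Posterior probability of a hypothesis after one piece of evidence.

    prior       -- P(hypothesis) before seeing the evidence
    likelihood  -- P(evidence | hypothesis)
    false_alarm -- P(evidence | not hypothesis)
    """
    evidence = likelihood * prior + false_alarm * (1.0 - prior)
    return (likelihood * prior) / evidence

# A system 30% sure of a hypothesis sees evidence that is four times
# more likely if the hypothesis is true than if it is false:
posterior = bayes_update(prior=0.30, likelihood=0.80, false_alarm=0.20)
print(f"belief revised from 0.30 to {posterior:.2f}")  # 0.63
```

An agent that reasons this way shades its beliefs by degrees, which is precisely what Ultron’s all-or-nothing logic lacks.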
When we consider the opportunity for an AI to rule the world, we reach somewhat firmer ground. The famous Turing Test of machine intelligence was set up to measure one particular definition of intelligence: the ability to conduct a believable human conversation. If you can’t tell the difference between AI and human renditions of the same skill, the argument goes, the AI has demonstrated human-like qualities.
So what would a Turing Test for the skill of world domination look like? Compare the antisocial behaviours of AI with the attributes we would expect of a human would-be world dominator. Such megalomaniacs need to control important parts of our lives, such as access to money or the ability to travel freely. AI does that already: lending decisions are frequently made by machine intelligence that sifts through mountains of information to decide your creditworthiness. AIs even trade on the stock market. The intelligence and security services use the same information-gathering and processing to pick out suspects for travel watch lists.
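For a flavour of how such a lending decision can be automated, here is a deliberately toy credit-scoring sketch; real models are trained on far richer data, and every feature, weight and threshold below is invented for illustration.

```python
# A hypothetical, greatly simplified credit-scoring rule of the kind the
# article alludes to. Nothing here reflects any real lender's model.

from dataclasses import dataclass

@dataclass
class Applicant:
    income: float          # annual income
    debt: float            # outstanding debt
    missed_payments: int   # payments missed in the last year

def credit_score(a: Applicant) -> float:
    """Weighted combination of features, clamped to the range 0-1."""
    debt_ratio = a.debt / max(a.income, 1.0)
    score = 0.9 - 0.5 * debt_ratio - 0.1 * a.missed_payments
    return max(0.0, min(1.0, score))

def approve_loan(a: Applicant, threshold: float = 0.6) -> bool:
    """The machine says yes or no; no human in the loop."""
    return credit_score(a) >= threshold

print(approve_loan(Applicant(income=40_000, debt=8_000, missed_payments=0)))  # True
```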
An overlord would give orders and expect them to be followed; anyone who has stood helpless while a self-service till in a shop makes repeated bagging-related demands already knows what it feels like to be bossed about by AI.
Prof McOwan speaks about AI at a special screening of Avengers: Age of Ultron at Genesis Cinema
Finally, no megalomaniac Hollywood robot would be complete without at least some desire to kill us. Today’s military robots can identify targets without human intervention. At present a human controller gives permission to attack, but it’s not a stretch to say that the potential to kill automatically already exists within these AIs, even if their code would require a rewrite to allow it.
These examples arguably show AI in control of limited but significant parts of life on Earth, but to truly dominate the world in the way they do in movies, these individual AIs would need to start working together to create a synchronised AI army. At that point the bossy self-service till talks to your health monitor and denies you beer, then combines with a credit-scoring system to offer you credit only if you buy a pair of trainers with a built-in GPS tracker to detect their use, while your smart fridge allows you only kale until the fitness tracker records the required five-mile run as completed.
Engineers around the world are developing the internet of things, in which all manner of devices are networked together to offer new services and ways to interact. These are the billions of pieces of a jigsaw that would need to communicate and act together in order to bring about total world domination.
If this all sounds worrying, I feel it’s unlikely – about as likely as the inexplicable cross-platform compatibility of an Apple Mac and an alien spaceship in Independence Day.
Our earthly AI and computer systems are written in a range of computer languages, hold different data in different ways, and use different and incompatible rule sets and learning techniques. Unless we design them to be compatible, there is no reason why two systems, developed by separate companies for separate purposes, would spontaneously communicate and share capabilities towards some greater common goal – at least not without a lot more help from us.
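To see why, consider a toy example: two hypothetical smart devices (both invented for this sketch) that report the very same fact in formats the other cannot read, until a human writes the adapter.

```python
# Two hypothetical devices describing the same temperature in
# incompatible formats. Device names and formats are made up.

import json

# The smart fridge speaks JSON and measures in Celsius:
fridge_message = json.dumps({"temp_c": 4.0})

# The thermostat expects XML and measures in Fahrenheit, e.g.
# <reading><tempF>39.2</tempF></reading>

def fridge_to_thermostat(msg: str) -> str:
    """The adapter a human engineer must write before the two can talk."""
    temp_c = json.loads(msg)["temp_c"]
    temp_f = temp_c * 9 / 5 + 32
    return f"<reading><tempF>{temp_f}</tempF></reading>"

print(fridge_to_thermostat(fridge_message))  # <reading><tempF>39.2</tempF></reading>
```

Until someone writes that adapter, the fridge and the thermostat simply talk past each other – which is roughly where today’s would-be robot overlords stand.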