Keeping an eye on artificial intelligence
In June last year a fascinating aerial battle took place. It didn’t take place in the actual sky but rather in the virtual one, which was appropriate considering it was a battle of man against machine.
The man in question was no ordinary pilot: Gene Lee is a retired US Air Force pilot, a graduate of the US Fighter Weapons School with combat experience in Iraq. The machine he was battling was a simulated aircraft controlled by an artificial intelligence (AI).
What was surprising about the outcome was that the AI emerged as the victor. More surprising still, the computer running the software wasn’t a multimillion-dollar supercomputer but one that used about $35 worth of computing power.
Welcome to the fast-moving world of AI.
It’s an area that has attracted significant media focus, and justifiably so. Experts in the field see the deployment of AI as the dawn of a new age. Andrew Ng, chief scientist at Baidu Research, is one of the gurus in the field.
“AI is the new electricity,” he says. “Just as 100 years ago electricity transformed industry after industry, AI will now do the same.”
Most of the current applications of AI focus on recognising patterns. Software is "trained" with vast amounts of information, usually with help from people who have manually tagged the data. In this way, an AI may start with images that have been labelled as cars, then, through trial and error guided by programmers, eventually recognise images of cars without any intervention.
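The trial-and-error training loop described above can be sketched in miniature. The code below is a toy illustration only, not DeepMind’s or anyone else’s actual method: a single perceptron learns to separate two classes of hand-labelled points, adjusting its weights whenever it makes a mistake. The data points are invented for the example.

```python
# Toy illustration of "training with labelled data": a single perceptron
# learns to separate two classes of 2-D points by trial and error.
# (Invented data; real image recognition uses deep networks trained on
# millions of labelled examples.)

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # weights, one per input feature
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred  # non-zero only when the guess was wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(model, point):
    (w, b), (x1, x2) = model, point
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# A tiny hand-labelled "training set": class 1 vs class 0
samples = [(2.0, 3.0), (3.0, 3.5), (0.5, 0.5), (1.0, 0.2)]
labels = [1, 1, 0, 0]
model = train_perceptron(samples, labels)
print(predict(model, (2.5, 3.2)))  # classify a point it has never seen
```

The key idea matches the article’s description: the programmer supplies the labels, and the software adjusts itself through repeated errors until it can classify unseen inputs without intervention.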
This simple explanation belies the extraordinary breakthroughs achieved with the approach, as an experiment conducted by a British company called DeepMind illustrates.
In 2015, DeepMind revealed that its AI had learned how to play 1980s-era computer games without any instruction. Once it had learned the games, it could outperform any human player by astonishing margins.
This feat stands in stark contrast to the battle waged two decades earlier, when an IBM computer beat Russian grandmaster Garry Kasparov at chess. To beat him, the computer relied on a virtual encyclopaedia of pre-programmed information about known moves. At no point did the machine learn how to play chess.
Winning simple computer games clearly wasn’t enough to prove the abilities of DeepMind, so a more challenging option was found in the game called Go. It’s an incredibly complex Asian board game with more possible board configurations than there are atoms in the visible universe.
To learn Go, the AI played itself more than a million times. To put this in perspective, if a person played 10 games a day every day for 60 years, they would only manage to play around 220,000 games.
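Both of these claims survive a back-of-the-envelope check. The snippet below assumes 365-day years for the human player, and uses 3^361 (each of the 361 points on a 19×19 board being empty, black or white) as a simple upper bound on board configurations, against the commonly cited estimate of roughly 10^80 atoms in the visible universe.

```python
# Back-of-the-envelope checks for the two claims above.

# A dedicated human playing 10 games a day for 60 years:
human_games = 10 * 365 * 60
print(human_games)  # 219,000 games -- far short of a million

# Go's state space versus atoms in the visible universe:
# 3^361 is an upper bound on 19x19 board configurations (each point
# empty, black or white); atoms are estimated at roughly 10^80.
board_configurations = 3 ** 361
atoms_estimate = 10 ** 80
print(board_configurations > atoms_estimate)  # True
```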
Despite the bold predictions of expert Go players, when the match ended in March 2016, it was the DeepMind AI that had beaten Lee Sedol, one of the world’s best players.
This ability to "learn" transfers readily to the real world. While gaming applications may excite hard-core geeks, DeepMind’s power was unleashed on a more useful challenge last year – increasing energy efficiency in data centres.
By analysing data about power consumption – such as temperature, server demand and cooling pump speeds – the AI cut the energy used for cooling at a Google data centre by an astonishing 40%. This may seem esoteric, but data centres around the world already use about as much electricity as the entire UK.
Once you start to consider the power of AI, the feeling of astonishment gives way to unease about the potential implications. At the end of last year, for example, a Japanese insurance company announced plans to replace a third of one of its departments with an IBM AI. Only 34 people were made redundant in this case, but the trend is likely to accelerate.
At this stage, it’s useful to put this development in context and consider what jobs might be replaced by AI. Andrew Ng has a useful rule of thumb – “If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future.”
What’s important about this quote is the phrase “near future.” Extend the timeline further and researchers theorise that the implications of AI for the workforce are profound. One study published in 2015 estimated that, across the OECD, an average of 57% of jobs were at risk from automation.
This number has been heavily disputed since it was published, but the exact percentage matters less than the underlying point: AI will change the nature of jobs forever, and it’s highly likely that work in the future will feature people working alongside machines. This will result in a more efficient workforce, which in turn is likely to lead to job losses.
However, it’s not just the workforce that could change. The potential for this technology dwarfs anything humans have ever invented, and, just like the splitting of the atom, the jury is out on how things will develop.
One of the world’s experts on existential threats to humanity – Nick Bostrom of Oxford University – surveyed the top 100 AI researchers. He asked them about the potential threat that AI poses to humanity, and the responses were startling. More than half believed there is a substantial chance that the development of an artificial intelligence matching the human mind won’t end well for one of the groups involved. You don’t need to work alongside an AI to figure out which group.
The thesis is simple – in the biological world, Darwinian competition leads to the dominance of one species over another. If humans create a machine intelligence, probably the first thing it would do is re-programme itself to become smarter. In the blink of an evolutionary eye, people could become subservient to machines with intelligence levels that are impossible to comprehend.
The exact timeframe for this scenario is hotly debated, but the same experts polled by Bostrom thought that there was a high chance of machines having human-level intelligence this century – perhaps as early as 2050.
To paraphrase a well-worn cliché, we will live in interesting times.