This week Google DeepMind's AI program AlphaGo beat Lee Sedol, one of the world's best Go players, at his own game, 4-1. Why was this victory so impressive, and does it mean we need to start fearing AI already?
Computers have been beating us at games for decades, most famously at chess. But Go was always seen as an impossible task for AI, because the search a computer would need to beat a human at Go is exponentially harder than for a game like chess.
At chess, IBM's Deep Blue defeated world champion Garry Kasparov in 1997 largely through brute force. What this means is that the computer searched the possible moves ahead, evaluated each resulting position, drew on a huge database of opening and endgame knowledge, and picked the move deemed most likely to result in a win. In a way it's kind of cheating, and it was more a testament to raw processing power than to actual artificial intelligence.
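To see what brute-force search looks like in practice, here is a minimal sketch of minimax search on tic-tac-toe, a game small enough to explore completely. This is a toy illustration of the general idea, not Deep Blue's actual program (which used far more sophisticated pruning and evaluation):

```python
# Brute-force game-tree search (minimax) on tic-tac-toe.
# The board is a 9-character string; "." marks an empty square.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Exhaustively search every continuation; +1 = X wins, -1 = O wins, 0 = draw."""
    w = winner(board)
    if w == "X":
        return 1
    if w == "O":
        return -1
    if "." not in board:
        return 0  # board full, no winner: draw
    values = []
    for i, cell in enumerate(board):
        if cell == ".":
            nxt = board[:i] + player + board[i + 1:]
            values.append(minimax(nxt, "O" if player == "X" else "X"))
    # X picks the best outcome for X, O picks the worst for X.
    return max(values) if player == "X" else min(values)

# Searching every possible game confirms perfect play ends in a draw:
print(minimax("." * 9, "X"))  # → 0
```

Because tic-tac-toe has only a few hundred thousand game states, this exhaustive approach works instantly; the point of Go is that the same idea becomes hopeless when the state count dwarfs the number of atoms in the universe.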
Go is different. Even with modern hardware, the number of potential routes a game can take is simply too vast: the number of possible board positions surpasses the number of atoms in the known universe. Go relies on what many believe to be human intuition as well as tactical play. There is often no definitively right or wrong move, and that is something computers struggle with.
Basically this means that AlphaGo needed to play using different methods, ones much closer to how we humans learn. AlphaGo learned through what is known as deep reinforcement learning, which consists partly of playing against itself to learn from its mistakes and in turn improve its own ability.
DeepMind co-founder Demis Hassabis briefly described what this means:
“It’s the combination of deep learning, neural network stuff, with reinforcement learning: so learning by trial and error, and incrementally improving and learning from your mistakes and your errors, so that you improve your decisions.”
Essentially AlphaGo was learning in a way much more similar to how a human would, with the added advantage of being a computer: AlphaGo could play itself millions of times, far more games than a human could ever fit into a lifetime. Each time it would learn and improve, until it was capable of defeating one of the best human players on the planet, 4 games to 1.
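The trial-and-error self-play loop can be sketched in miniature. The toy below uses tabular Q-learning on a simple Nim-style game (take 1 or 2 stones; whoever takes the last stone wins), with both sides learning from the same table. This is purely illustrative: AlphaGo's real system combined deep neural networks with Monte Carlo tree search, not a lookup table.

```python
import random

# Toy reinforcement learning via self-play on a Nim-style game:
# a pile of stones, each turn a player removes 1 or 2, and whoever
# takes the last stone wins. Both players share one value table.

random.seed(0)
Q = {}  # (pile_size, action) -> estimated value for the player to move

def choose(pile, eps):
    """Epsilon-greedy: usually pick the best-known move, sometimes explore."""
    actions = [a for a in (1, 2) if a <= pile]
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((pile, a), 0.0))

def train(episodes=5000, alpha=0.5, eps=0.2):
    for _ in range(episodes):
        pile, history = 7, []
        while pile > 0:
            a = choose(pile, eps)
            history.append((pile, a))
            pile -= a
        # The player who made the last move won. Walk the game backwards,
        # flipping the reward sign each ply (players alternate turns).
        reward = 1.0
        for state, action in reversed(history):
            old = Q.get((state, action), 0.0)
            Q[(state, action)] = old + alpha * (reward - old)
            reward = -reward

train()
# After enough self-play, Q[(1, 1)] approaches 1.0 (taking the last
# stone always wins) and Q[(2, 1)] approaches -1.0 (leaving one stone
# hands the opponent the win) - learned purely by trial and error.
```

No one told the program the winning strategy; like AlphaGo, it discovered which moves are good solely by playing itself and propagating wins and losses back through its own games.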
Ultimately this is a massive step for the artificial intelligence world. AlphaGo didn't rely on a database of stored positions to defeat its opponent; instead it used neural-network learning techniques combined with super-fast processing power, in a way that loosely mimics the human brain, only with far more computing power and resources behind it.
So does this mean we’re already going to have to worry about AI taking over the world?
Sci-fi fans will most likely have watched movies like I, Robot and The Terminator, where AI revolts against its human creators and tries to take over the planet. Is this something we need to worry about here?
Well, not really. At least not for now. AlphaGo was programmed purely to play Go. It couldn't even beat you at tic-tac-toe without some kind of human intervention, let alone conquer the world and enslave the human population.
AlphaGo is a great example of how computers can learn to the point where they surpass humans at a particular task, even one we believe requires intuition rather than an ability to simply follow the rules. But important ingredients are still missing compared to sci-fi creations like Skynet: any kind of self-awareness, and the ability to learn outside the parameters set by the AI's human programmers.
But it's still a big step towards creating truly amazing artificial intelligence. And given that prominent figures like Bill Gates and Stephen Hawking have already warned of a possible future where AI could destroy the human race, who knows... maybe AlphaGo beating Lee Sedol will be remembered as the moment future humans send a time traveller back to destroy the robots once and for all.