Google’s computer wins at ancient game of Go

By Elizabeth Hopper

Artificial intelligence has won another victory, quite literally. Almost twenty years after a computer first beat the world chess champion, a program called AlphaGo has beaten the European champion of the board game Go, Fan Hui, five games to nil.

Go is an ancient Chinese board game. Two players take turns to place black and white counters on a 19 x 19 grid, trying to surround the opponent's pieces to 'capture' them, and to form boundaries which enclose the most territory. A game of Go has some 10^171 possible board positions, compared with roughly 10^47 in chess, and typically around 200 possible moves per turn against chess's meagre 20. Go is considered an intuitive game, and players say that short of educated guesswork there is no way to tell who is winning.

AlphaGo was created by the British company DeepMind, which Google bought two years ago. After learning the rules of the game, the program studied 30 million moves from games between expert human players, watching for repeated patterns. It then constantly improved its strategy by playing against itself millions of times and learning from its own mistakes, a process called reinforcement learning.
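The self-play loop described above can be sketched in miniature. The toy below is an illustration, not DeepMind's code: it teaches a table of position values to play Nim (players alternately take one or two stones, and whoever takes the last stone wins) purely by playing against itself and nudging the value of each visited position towards the game's result — the same reinforcement-learning idea on a vastly smaller scale.

```python
import random

# A toy self-play learner (an illustration, not DeepMind's method).
# Two copies of the same value table play Nim against each other; after
# each game, every position a player faced is nudged towards that
# player's result (1.0 for a win, 0.0 for a loss).

def train(episodes=5000, pile=9, alpha=0.1, eps=0.2, seed=0):
    rng = random.Random(seed)
    value = {}  # value[n]: estimated chance of winning with n stones left, you to move
    for _ in range(episodes):
        n, turn = pile, 0
        seen = {0: [], 1: []}       # positions each player faced this game
        while n > 0:
            seen[turn].append(n)
            moves = [m for m in (1, 2) if m <= n]
            if rng.random() < eps:  # explore occasionally, like trying new tactics
                move = rng.choice(moves)
            else:                   # otherwise leave the opponent the worst position
                move = min(moves, key=lambda m: value.get(n - m, 0.5))
            n -= move
            if n == 0:
                winner = turn
            turn = 1 - turn
        # reinforcement step: wins pull visited positions up, losses pull them down
        for player in (0, 1):
            reward = 1.0 if player == winner else 0.0
            for pos in seen[player]:
                v = value.get(pos, 0.5)
                value[pos] = v + alpha * (reward - v)
    return value
```

After training, the table rediscovers Nim's known theory without ever being told it: positions where the stones left are a multiple of three score poorly for the player to move, while the others score well.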

At the start of a turn the program suggests possible moves based on tactics it has observed in human games. A second part of the system then sorts through these candidates to decide which are most likely to lead to success. This is AlphaGo's planning stage, in which it applies an almost intuitive sense of which positions are good and bad, so that it can make long-term plans.
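The two-stage selection described above can be caricatured in a few lines. Everything here is hypothetical scaffolding rather than AlphaGo's actual interfaces: `policy` stands in for the component that rates candidate moves, and `value` for the one that judges the positions they lead to.

```python
# A sketch of two-stage move selection (illustrative, not AlphaGo's API).
# Stage 1: a policy function shortlists the most promising-looking moves.
# Stage 2: a value function scores where each shortlisted move leads.

def choose_move(position, legal_moves, policy, value, play, top_k=3):
    shortlist = sorted(legal_moves,
                       key=lambda m: policy(position, m),
                       reverse=True)[:top_k]
    return max(shortlist, key=lambda m: value(play(position, m)))

# Tiny demo: positions are numbers, a move adds to the position, and the
# value function likes positions near 4. The policy naively prefers big
# moves, but the value stage overrules it within the shortlist.
best = choose_move(
    position=0,
    legal_moves=range(1, 7),
    policy=lambda pos, m: m,          # prior: bigger moves look better
    value=lambda pos: -abs(pos - 4),  # evaluation: closer to 4 is better
    play=lambda pos, m: pos + m,
)
```

In the real system these two stages feed a deeper tree search rather than a single step of lookahead, but the division of labour is the same idea: the first stage narrows the options, the second judges them.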

Victory has come five to ten years earlier than expected, as the size of the board, the sheer number of possible moves and the intuitive aspect of the game make a winning strategy difficult to capture in algorithms. The real test will come in March, when AlphaGo is due to play the world Go champion, Lee Sedol, for a stake of one million dollars. Sedol is confident he will win, but when Google's system was tested against rival programs, including Facebook's, it won 499 out of 500 matches. Beating Sedol would be considered a major breakthrough.

Teams from around the world have been working on this problem for years, and this first success is a landmark in artificial intelligence. The ability of computers to teach themselves, called 'machine learning', is already used in facial recognition, speech recognition and language translation. Now computers are better poised than ever before to start thinking for themselves.

The same pattern recognition and forward planning abilities are used in 'intelligent personal assistants' on smartphones, such as Siri and Cortana. Now reinforcement learning, in which computers improve their own behaviour to achieve goals, can be applied to decision-making programs: creating medical treatment plans from scan images and other patient data, drawing up business plans, or playing 3D computer games and simulations much more like the real world.

The next big challenge for AI will be games without 'complete information', like poker. In Go the program can see all the pieces on the board and knows what the opposition can do, but it is much harder to prepare a computer for situations in which it does not know what its opponent might do next. The added complication of the psychology involved makes it a huge challenge which, if met, may eventually lead to computers with a more human-like intelligence.

Are computers about to take over? No: they still can't apply common sense or recognise basic human concepts like emotions and funny faces. But they can learn for themselves, and that's definitely one step closer.

Photograph: HermanHiddema via Wikimedia Commons

© Palatinate 2010-2017