If you like old and complex things as much as I do, Chinese culture has much to offer. The I Ching is said by some to be the world’s oldest book, dating back perhaps 5,000 years. And the game of Go is the oldest board game still played, though only half the age of the I Ching. So complex is Go that it has been a target for people interested in developing artificial intelligence in computers. Could we create a computer smart enough to take on the world’s best players?
The challenge may not seem overwhelming at first, considering that it was way back in 1996 that IBM’s Deep Blue computer played Garry Kasparov, who at that time was world chess champion, and won the opening game. Kasparov, it has to be said, went on to win three and draw two of the remaining five games he played, so the victory wasn’t total. But the notion that a computer could beat a human master at a game had taken hold. It was only reinforced by IBM’s Watson. Playing without an Internet connection, the machine defeated top Jeopardy! players in 2011.
Now we learn that Google has developed a program called AlphaGo that took on European Go champion Fan Hui in a five-game match. AlphaGo pretty much cleaned Fan Hui’s clock, sweeping all five games, the first time a professional Go player has lost to a computer. This is significant because in terms of complexity, Go makes chess look like Parcheesi.
Consider: Go is played on a 19-by-19 grid with black and white stones, each player trying to envelop more of the board than the opponent. The rules are fairly simple. What’s not simple is the number of moves available at each turn, roughly ten times as many as in chess. Work out the number of possible board positions and you get a figure larger than the number of atoms in the universe, according to the team that created AlphaGo. Moreover, Go requires an instinct for the game that is all but preternatural, making it hard for spectators to look at a board and know who is winning.
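For the curious, a back-of-the-envelope calculation (my own illustration, using a common rough estimate of 10^80 atoms in the observable universe, not a figure from the AlphaGo team) shows why the claim is plausible: each of the board’s 361 intersections can be empty, black, or white, which bounds the number of configurations at 3 to the 361st power.

```python
# Rough upper bound on Go board configurations (illustrative estimate only):
# each of the 361 intersections on a 19x19 board is empty, black, or white.
import math

points = 19 * 19                   # 361 intersections
upper_bound = 3 ** points          # not every configuration is legal, but this bounds the count
atoms_estimate = 10 ** 80          # common rough estimate for atoms in the observable universe

print(round(math.log10(upper_bound)))      # about 172 digits
print(upper_bound > atoms_estimate)        # True: dwarfs the atom estimate
```

Even after discarding illegal positions, the legal count has been computed to be on the order of 10^170, which is why brute-force search alone can’t crack the game.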
Now that AlphaGo has waxed Fan Hui, DeepMind, the Google company behind AlphaGo, has set its sights on a challenge match with the world’s highest-ranked Go player, a man named Lee Sedol, to be played in Seoul in March for a $1 million prize. Sedol seems relaxed about the challenge while acknowledging that AlphaGo is a strong player. It’s hard to disagree.
The larger context here is that AlphaGo – and artificial intelligence in general – is what in Google parlance is a “moon shot.” When it reorganized its corporate structure to create the entity called Alphabet, Google tucked nine companies inside it, from its famous search engine business to smart device company Nest, from its X division, which works on self-driving cars, to Verily, which focuses on life sciences. Google is into high-speed Internet delivery and Wi-Fi by balloon, and who can forget its Google Glass foray into truly ugly smart eyewear?
In the midst of all this, we learn that John Giannandrea, who has been heading up Google’s artificial intelligence effort, is now going to take over the company’s search operations. There could be no clearer signal that while the company investigates the frontiers of tech in every which direction, its attention on basic search is more robust than ever. If AlphaGo can beat champions at Go, then Google can take search functions to the next level.
Exactly where that leads can be hard to fathom, but you can turn to the smart assistant inside today’s smartphones to get an idea. Highly developed artificial intelligence should make search predictive, able to anticipate what you need. In primitive ways, Google Now, Siri and Microsoft’s Cortana can all do that now. Expanding these powers exponentially will integrate our daily lives and our technology in ways that science fiction writers have only dreamed about.
Paul A. Gilster is the author of several books on technology. Reach him at firstname.lastname@example.org.