• September 1, 2016

Will AI Beat Humans at the Game of Being Human?

AI has achieved a win experts once thought impossible.

It’s a race to reverse-engineer the human brain.

Harvard University was recently awarded a $28 million grant to discover why human brains are so much better at learning and pattern recognition than artificial intelligence (AI). Dispensed by the Intelligence Advanced Research Projects Activity (IARPA), the funding will fuel a quest to make AI systems faster and smarter, so that they match or outperform human neural networks.

The steepest challenge in this quest is the enormous complexity of the human brain, with its billions of neurons and trillions of synaptic interconnections carrying electrochemical signals. The other challenge: there is no accepted theory of mind that describes what thought, the essence of intelligence, actually is.

But these challenges didn't prevent Google DeepMind, an artificial intelligence company, from developing an algorithm that recently beat one of the world's best Go players by mimicking the neural networks and learning abilities of the human brain, long thought inimitable.

AI’s Winding, Bumpy Road

The long drive to create artificial intelligence has been marked mostly by lofty predictions and disappointing results. Sure, there was IBM's Deep Blue, the supercomputer that toppled world chess champion Garry Kasparov in 1997. But Deep Blue's victory was the result of fixed rules, a limited set of possible moves, and brute computing force: the machine searched hundreds of millions of chess positions per second rather than reasoning about the game. Not exactly artificial intelligence.

AI has another problem: raw computing power is slamming against the limits of physics. Transistors are made of atoms, and we're fast approaching the point where they cannot be made any smaller and remain viable. Dense microprocessors already struggle with power consumption and heat dissipation. Add to this the speed of light, which governs just about every digital interconnection and puts a hard limit on how much information can flow into and out of computer chips for processing.

Game of Go

Google DeepMind has taken a different approach. It developed an algorithm called AlphaGo that leverages deep neural networks: computer programs that solve problems by mimicking the way neurons in the brain connect. These networks consist of layers of interconnected artificial neurons.
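The layered structure can be sketched in a few lines of code. This is a minimal toy illustration of a feedforward network, not AlphaGo's actual architecture; the layer sizes and random weights are invented for demonstration.

```python
import numpy as np

def relu(x):
    # A common activation function: pass positive signals, zero out the rest.
    return np.maximum(0.0, x)

def forward(features, weights):
    # Push an input through successive layers of artificial "neurons":
    # each layer combines the previous layer's outputs via its weights.
    activation = features
    for w in weights:
        activation = relu(activation @ w)
    return activation

rng = np.random.default_rng(0)
# Toy weights for three layers: 4 inputs -> 8 neurons -> 8 neurons -> 2 outputs.
weights = [rng.normal(size=(4, 8)),
           rng.normal(size=(8, 8)),
           rng.normal(size=(8, 2))]
out = forward(np.ones(4), weights)
print(out.shape)  # (2,)
```

In a real system the weights are not random; they are adjusted during training so that the network's outputs become useful, such as scoring candidate moves on a Go board.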

Result? In March AlphaGo beat Lee Sedol at the game of Go. Sedol is recognized as one of the world's best players of the ancient Chinese board game, in which the aim is to surround more territory than one's opponent. Though deceptively simple, Go is computationally complex: it is played on a 19-by-19 grid of points, and the number of possible games is astronomical. Worse, for any particular arrangement of stones, or game pieces, it's difficult to estimate which player has the advantage.
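The scale is easy to verify with simple arithmetic. Each of the board's 361 points can be empty, black, or white, so 3 to the power 361 is a loose upper bound on the number of board configurations, a figure with well over a hundred digits:

```python
# 19 x 19 = 361 points, each empty, black, or white.
# 3**361 is a loose upper bound on board configurations
# (many of these are not legal positions, but it shows the scale).
upper_bound = 3 ** 361
print(len(str(upper_bound)))  # 173 decimal digits
```

By comparison, the number of atoms in the observable universe is usually estimated at around 80 digits, which is why exhaustively calculating Go the way Deep Blue calculated chess is out of the question.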

Those networks allow AlphaGo to learn by playing against itself over and over again, and from that experience it learns to distinguish better moves from poorer ones. The match against Sedol marks one of the first times a computer program has successfully adapted to the situation in front of it, accomplishing this through machine learning rather than brute computing force.
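The core idea of learning from repeated play can be shown with a toy example. This is a much-simplified stand-in for AlphaGo's self-play reinforcement learning: a one-move game with invented hidden win probabilities, where the program discovers the strongest move purely by trying moves and tallying outcomes.

```python
import random

# Hypothetical one-move game: each move has a hidden win probability.
# These numbers are invented for illustration.
WIN_PROB = {"a": 0.2, "b": 0.5, "c": 0.8}

wins = {m: 0 for m in WIN_PROB}
plays = {m: 0 for m in WIN_PROB}

random.seed(0)
for _ in range(5000):
    if random.random() < 0.2 or min(plays.values()) == 0:
        move = random.choice(list(WIN_PROB))          # explore new options
    else:
        move = max(WIN_PROB, key=lambda m: wins[m] / plays[m])  # exploit the best so far
    plays[move] += 1
    wins[move] += random.random() < WIN_PROB[move]    # simulate the game's outcome

# After thousands of games, the estimated win rates separate the
# strong move from the weak ones.
best = max(WIN_PROB, key=lambda m: wins[m] / plays[m])
print(best)
```

AlphaGo's actual training is far more elaborate, using deep networks to evaluate full board positions, but the principle is the same: play, observe outcomes, and shift toward the moves that win.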

It remains to be seen whether AI systems can match or exceed the human brain at pattern recognition and learning. Yet sophisticated AI systems built on neural networks might open the door to breakthrough computer architectures not previously considered, rendering today's AI hurdles irrelevant.