Flaws in AI seen despite AlphaGo victory

Source: http://asia.nikkei.com/Tech-Scie ... haGo-victory?page=2

.... [page 1 skipped]...

But the man versus machine Go match in Seoul also highlighted two potential flaws of deep learning that could hamper its practical applications.

     One of them concerns misguided decisions made by AI, and the fact that it is extremely difficult for humans to pinpoint the factors that led to such mistakes.

     During game four of the series, AlphaGo lost to Lee after making a string of clearly bad moves. But even the members of the DeepMind team could not identify the cause of these errors.

     With an ordinary computer program, experts can find and resolve bugs by checking the code. But deep learning does not involve logic written in code that humans can read. The software holds only parameters indicating the strength of the connections between artificial neurons, leaving the algorithm a black box to us.
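
     To make the contrast concrete, here is a toy sketch in Python (not DeepMind's actual code): everything such a network "knows" is stored as matrices of connection weights, and inspecting them reveals only raw numbers, not readable rules.

import numpy as np

# Toy stand-in for a trained Go-playing network (illustrative only).
# Its entire learned "knowledge" is these two weight matrices.
rng = np.random.default_rng(0)
w1 = rng.standard_normal((19 * 19, 128))   # board points -> hidden units
w2 = rng.standard_normal((128, 19 * 19))   # hidden units -> move scores

def score_moves(board_vector):
    # Forward pass: matrix multiplications and a nonlinearity, nothing else.
    hidden = np.tanh(board_vector @ w1)
    return hidden @ w2

board = np.zeros(19 * 19)                  # an empty board, flattened
print(score_moves(board).shape)            # (361,) -- one score per move
print(w1[:2, :3])                          # "debugging" shows only numbers

     There is no line of code to step through that explains why a particular move scored highly; changing the program's behavior means retraining the weights, not editing logic.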

     The other flaw is that a highly trained AI can take actions that humans cannot understand, even when those actions produce good results.

     In game two, AlphaGo made an unusual move that flummoxed a commentator, himself a professional Go player. Later, the commentator repeatedly said that he could not understand why the baffling move had led to the machine's victory.

     This could be a big problem in an environment where humans and AI work together, or where decisions made by AI are a matter of life and death for people.

     If, for instance, the AI in a self-driving car operates the vehicle in a way that baffles other drivers, it could cause serious accidents.

Enhancing advantages

The Go match also showed that companies that have tremendous data and computing power, like Google, have overwhelming advantages in AI research and development.

     Traditional deep learning technology does not lend itself well to distributed computing, in which tasks are carried out simultaneously across multiple servers. But Google has enhanced the distributed computing capabilities of its TensorFlow machine learning library, and the distributed version of AlphaGo was the one used for the Go match.
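
     For illustration, here is a minimal sketch of a TensorFlow 1.x-style distributed setup (the hostnames, ports and toy variable are placeholders, not AlphaGo's configuration): variables live on a parameter server while workers on other machines run the computation in parallel.

import tensorflow as tf

# Placeholder cluster definition: one parameter server, two workers.
cluster = tf.train.ClusterSpec({
    "ps":     ["ps0.example.com:2222"],
    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"],
})

# Each machine runs one of these processes for its role in the cluster.
server = tf.train.Server(cluster, job_name="worker", task_index=0)

# Variables are placed on the parameter server; ops run on this worker.
with tf.device(tf.train.replica_device_setter(
        worker_device="/job:worker/task:0", cluster=cluster)):
    weights = tf.Variable(tf.zeros([19 * 19]), name="weights")  # toy state
    update = weights.assign_add(tf.ones([19 * 19]))

with tf.Session(server.target) as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(update)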

     When AlphaGo faced Fan Hui, Europe's reigning Go champion, in a machine-versus-man contest in October, the DeepMind researchers used a large network of computers spanning 176 graphics processing units and 1,202 central processing units.

     In AlphaGo's reinforcement learning process, the researchers took advantage of the vast computing resources of the company's infrastructure-as-a-service offering, Google Cloud Platform. The Go match also served as a major public relations event for Google's AI research.
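
     As a rough illustration of why that reinforcement learning stage demands so much computing power, the sketch below runs a generic policy-gradient loop on a trivial stand-in game (not DeepMind's pipeline): each iteration plays a batch of games and nudges the policy's weights toward moves that won, a cycle AlphaGo repeated over a huge number of self-play games.

import numpy as np

rng = np.random.default_rng(0)
N = 9                    # tiny stand-in "board" with 9 possible moves
theta = np.zeros(N)      # policy weights, one per move

def play_game():
    # Sample one move from the softmax policy; reward 1 if it hits a
    # fixed "good" move, 0 otherwise (a stand-in for winning the game).
    probs = np.exp(theta) / np.exp(theta).sum()
    move = rng.choice(N, p=probs)
    reward = 1.0 if move == 4 else 0.0
    return move, reward, probs

for _ in range(200):
    grads = np.zeros(N)
    for _ in range(32):                  # batch of self-play games
        move, reward, probs = play_game()
        grad_log = -probs
        grad_log[move] += 1.0            # gradient of log pi(move)
        grads += reward * grad_log       # REINFORCE update direction
    theta += 0.1 * grads / 32

print(np.argmax(theta))                  # the policy now prefers move 4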

     In 1997, IBM's Deep Blue beat the reigning world chess champion, and in 2011 the company's Watson supercomputer defeated two former champions on the "Jeopardy!" quiz show.

     In AlphaGo's showdown with the Go grandmaster, Google took a page out of IBM's book and pulled off a spectacular PR feat.

     Clearly, this latest AI milestone has further strengthened Google's ability to attract deep learning and other AI talent, building on its astronomical stores of data and computing resources.