The British company DeepMind, bought by Google last year for £400m, has created the first self-learning computer program capable of playing retro computer games and devising its own winning strategies. The result is regarded by many as the most significant step yet towards true artificial intelligence (AI).
The agent, as its developers call it, is given only minimal background information about a game, yet it learns to play by accumulating its own experience, much as a human brain does.
When the agent begins to play, it looks at the frames of the game and presses random buttons to see what happens. It uses a “deep learning” method to turn the raw visual input into meaningful concepts, and it is programmed to recognise that losing points is bad and scoring them is good. It takes the program about 600 rounds of training to work out what a game is about. The study shows that the agent learnt 49 games, ranging from side-scrolling shooters to boxing and 3D car racing, and performed at 75% of a professional games tester's level.
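The trial-and-error loop described above can be illustrated with a much simpler cousin of DeepMind's system: tabular Q-learning on a toy “game”. This is a sketch, not the paper's deep Q-network; the corridor game, the parameter values and the function name are all illustrative assumptions. The agent starts knowing nothing, presses buttons partly at random, and gradually learns which action earns points.

```python
import random

def train_q_learning(n_states=5, n_actions=2, episodes=500,
                     alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Toy corridor game (illustrative, not DeepMind's setup):
    the agent starts in state 0; action 1 moves right, action 0
    moves left; reaching the last state scores +1 and ends the round."""
    rng = random.Random(seed)
    # One value per (state, action) pair, all unknown at the start.
    q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Press a random button with probability epsilon (exploration),
            # otherwise pick the action that looked best so far.
            if rng.random() < epsilon:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda i: q[s][i])
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Scoring is good, losing is bad: nudge the action's value
            # towards the reward just observed plus future prospects.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train_q_learning()
# The learnt strategy: the preferred action in each non-terminal state.
policy = [max(range(2), key=lambda a: q[s][a]) for s in range(4)]
```

After a few hundred rounds the agent's own experience, not any built-in rule, tells it that moving right is the winning strategy in every state.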
In the near future such machine skill may be warmly welcomed in many fields, from powering self-driving cars to conducting scientific research. The company's founder Demis Hassabis describes the scope of the project: “This is the first significant rung of the ladder towards proving a general learning system can work. It can work on a challenging task that even humans find difficult. It’s the very first baby step towards that grander goal... but an important one.”