DeepMind wants to teach robots to play board games
By Kyle Wiggers
September 15, 2020
Summary
Mastering physical systems with abstract goals is an unsolved challenge in AI. To encourage the development of techniques that might overcome it, researchers at DeepMind created custom scenarios for the physics engine MuJoCo that task an AI agent with coordinating perception, reasoning, and motor control over time.
Recent work in machine learning has led to algorithms capable of mastering board games such as Go, chess, and shogi.
These algorithms observe a game's state and manipulate it directly through their actions, unlike humans, who must not only reason about their moves but also look at the board and physically move the pieces with their fingers.
The team's answer to this gap is a set of challenges that embed board-game tasks in environments where an agent must control a physical body to execute its moves.
To place a single tic-tac-toe piece, an agent has to reach the board with a 9-degree-of-freedom arm and touch the corresponding place on that board.
Learning to play tic-tac-toe and executing a reaching movement are well within the capabilities of current AI approaches, but most agents struggle when they're faced with both problems at once.
The last game, MuJoGo, is a 7-by-7 Go board designed to be solved in roughly 50 moves.
In experiments, the researchers designed example agents to complete various game tasks.
The agents employed a planner module that maps ground-truth game states to target states and plots the actions needed to reach them.
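The two-level structure described above can be sketched as follows. This is an assumed design for illustration, not DeepMind's implementation: a toy planner reads the ground-truth board, chooses a move (here just the first empty cell, standing in for a real game solver), and expands it into a target state plus motor subgoals.

```python
# Hedged sketch of a planner module: ground-truth game state in,
# target state and motor subgoals out. All names are hypothetical.

from dataclasses import dataclass
from typing import Optional

Board = list  # 3x3 list of lists holding ' ', 'X', or 'O'

@dataclass
class Plan:
    target_state: Board   # board after the chosen move
    subgoals: list        # motor subgoals needed to realise the move

def plan_move(board: Board, player: str = "X") -> Optional[Plan]:
    """Pick the first empty cell (a stand-in for a real game solver)
    and expand the move into reach/place subgoals."""
    for r, row in enumerate(board):
        for c, cell in enumerate(row):
            if cell == " ":
                target = [list(rw) for rw in board]
                target[r][c] = player
                subgoals = [
                    f"reach cell ({r}, {c})",
                    f"place piece '{player}'",
                    "retract arm",
                ]
                return Plan(target, subgoals)
    return None  # board is full; no move available

plan = plan_move([["X", "O", " "], [" ", " ", " "], [" ", " ", " "]])
```

Separating move selection (discrete, game-level) from motor execution (continuous, body-level) is what lets the planner reason in the abstract game while the low-level controller handles the physics.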
The simplest agent required around a million games before it could play MuJoXo "convincingly," and it showed no sign of progress in MuJoGo even after billions of steps of training.
Reference
Wiggers, K. (2020, September 15). DeepMind wants to teach robots to play board games. Retrieved September 17, 2020, from https://venturebeat.com/2020/09/15/deepmind-wants-to-teach-robots-to-play-board-games/