Building Interactive Agents In Video Game Worlds
By The Interactive Agent team
November 23, 2022
Summary
To explore learning-based approaches and quickly build agents that can make sense of human instructions and safely perform actions in open-ended conditions, we created a research framework within a video game environment.
First, we built a simple video game world based on the concept of a child's "Playhouse." This environment provided a safe setting for humans and agents to interact and made it easy to rapidly collect large volumes of human-agent interaction data.
Human participants set the contexts for the interactions by navigating through the world, setting goals, and posing questions to agents.
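To make the notion of interaction data concrete, the sketch below defines one hypothetical layout for a single logged episode; the field names and structure are our own assumptions for illustration, not the format actually used by the team.

```python
from dataclasses import dataclass, field
from typing import List

import numpy as np


@dataclass
class Step:
    """One timestep of a logged human-agent interaction (hypothetical layout)."""
    observation: np.ndarray   # first-person RGB frame from the Playhouse
    setter_utterance: str     # goal or question posed by the human "setter"
    solver_utterance: str     # language emitted by the agent (or human solver)
    action: np.ndarray        # movement / manipulation action taken at this step


@dataclass
class Episode:
    """A full interaction episode collected for imitation learning."""
    episode_id: str
    steps: List[Step] = field(default_factory=list)
```

Records of roughly this shape could then serve as supervised targets for imitating both the movement and the language of human players.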
This phase was covered in two of our earlier papers, Imitating Interactive Intelligence and Creating Multimodal Interactive Agents with Imitation and Self-Supervised Learning, both of which explored building imitation-based agents.
We used a variety of independent mechanisms to evaluate our agents, ranging from hand-scripted tests to a new mechanism for offline human scoring of open-ended tasks created by people, which we developed in our previous work, Evaluating Multimodal Interactive Agents.
Once an agent was trained via RL, we asked people to interact with this new agent, annotate its behaviour, update our reward model, and then perform another iteration of RL. This approach produced increasingly competent agents.
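The outer loop of this process can be sketched as follows. The function names (collect_human_feedback, update_reward_model, run_rl) are hypothetical placeholders standing in for the stages described above, not the team's actual training pipeline.

```python
from typing import Any, Callable


def improve_agent(
    agent: Any,
    reward_model: Any,
    collect_human_feedback: Callable[[Any], Any],
    update_reward_model: Callable[[Any, Any], Any],
    run_rl: Callable[[Any, Any], Any],
    num_iterations: int,
) -> Any:
    """Hypothetical human-in-the-loop improvement cycle (all stage names are placeholders)."""
    for _ in range(num_iterations):
        # 1. People interact with the current agent and annotate its behaviour.
        annotations = collect_human_feedback(agent)
        # 2. The annotations are folded into the learned reward model.
        reward_model = update_reward_model(reward_model, annotations)
        # 3. The agent is optimised against the updated reward model with RL.
        agent = run_rl(agent, reward_model)
    return agent
```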
In Deep reinforcement learning from human preferences, researchers pioneered recent approaches to aligning neural-network-based agents with human preferences.
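In that formulation, a reward model is fitted to pairwise comparisons: the probability that one trajectory segment is preferred over another is a logistic function of the difference in summed predicted rewards, and the model is trained with a cross-entropy loss against the human's choice. Below is a minimal NumPy sketch of that loss for a single comparison; in practice the per-step rewards would come from a neural network and gradients would flow back through it.

```python
import numpy as np


def preference_loss(rewards_a: np.ndarray, rewards_b: np.ndarray,
                    human_prefers_a: float) -> float:
    """Cross-entropy loss for one pairwise preference, in the style of
    Deep reinforcement learning from human preferences.

    rewards_a, rewards_b: per-step rewards predicted by the reward model for
    two trajectory segments; human_prefers_a: 1.0 if the human preferred
    segment A, 0.0 if they preferred segment B.
    """
    # Bradley-Terry model over summed predicted rewards:
    # P(A preferred) = sigmoid(sum(r_A) - sum(r_B)).
    logit = np.sum(rewards_a) - np.sum(rewards_b)
    prob_a_preferred = 1.0 / (1.0 + np.exp(-logit))
    # Binary cross-entropy against the human's annotated choice.
    return float(-(human_prefers_a * np.log(prob_a_preferred)
                   + (1.0 - human_prefers_a) * np.log(1.0 - prob_a_preferred)))
```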