Navigates Like Me: Understanding How People Evaluate Human-Like AI in Video Games
Stephanie Milani, Arthur Juliani, Ida Momennejad, Raluca Georgescu, Jaroslaw Rzepecki, Alison Shaw, Gavin Costello, Fei Fang, Sam Devlin, Katja Hofmann
Abstract
"We aim to understand how people assess human likeness in navigation produced by people and artificially intelligent (AI) agents in a video game. To this end, we propose a novel AI agent with the goal of generating more human-like behavior. We collect hundreds of crowd-sourced assessments comparing the human-likeness of navigation behavior generated by our agent and baseline AI agents with human-generated behavior. Our proposed agent passes a Turing Test, while the baseline agents do not. By passing a Turing Test, we mean that human judges could not quantitatively distinguish between videos of a person and an AI agent navigating. To understand what people believe constitutes human-like navigation, we extensively analyze the justifications of these assessments. This work provides insights into the characteristics that people consider human-like in the context of goal-directed video game navigation, which is a key step for further improving human interactions with AI agents."
Reference
Milani, S., Juliani, A., Momennejad, I., Georgescu, R., Rzepecki, J., Shaw, A., ... & Hofmann, K. (2023, April). Navigates like me: Understanding how people evaluate human-like AI in video games. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (pp. 1-18). https://arxiv.org/abs/2303.02160
Keywords
Human likeness, Navigation, Artificially intelligent agents