Too many cooks: Bayesian inference for coordinating multi-agent collaboration
Collaboration requires agents to coordinate their behavior on the fly,
sometimes cooperating to solve a single task together and other times dividing
it up into sub-tasks to work on in parallel. Underlying the human ability to
collaborate is theory-of-mind, the ability to infer the hidden mental states
that drive others to act. Here, we develop Bayesian Delegation, a decentralized
multi-agent learning mechanism with these abilities. Bayesian Delegation
enables agents to rapidly infer the hidden intentions of others by inverse
planning. We test Bayesian Delegation in a suite of multi-agent Markov decision
processes inspired by cooking problems. On these tasks, agents with Bayesian
Delegation coordinate both their high-level plans (e.g., what sub-task they
should work on) and their low-level actions (e.g., avoiding getting in each
other's way). In a self-play evaluation, Bayesian Delegation outperforms
alternative algorithms. Bayesian Delegation is also a capable ad hoc
collaborator and successfully coordinates with other agent types even in the
absence of prior experience. Finally, in a behavioral experiment, we show that
Bayesian Delegation makes inferences similar to human observers about the
intent of others. Together, these results demonstrate the power of Bayesian
Delegation for decentralized multi-agent collaboration.
Authors
Rose E. Wang, Sarah A. Wu, James A. Evans, Joshua B. Tenenbaum, David C. Parkes, Max Kleiman-Weiner
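The abstract describes the core inference loop at a high level: each agent maintains beliefs over how sub-tasks might be allocated among agents and updates those beliefs by inverse planning, i.e., by scoring the actions it observes under each hypothesized allocation. Below is a minimal illustrative sketch of that Bayesian update under stated assumptions; it is not the authors' implementation, and the environment, the Q-value function `q_fn`, the action names, and helpers such as `softmax_likelihood` are hypothetical placeholders introduced here for illustration.

```python
"""Minimal sketch: Bayesian posterior over sub-task allocations, updated from
observed actions via an inverse-planning (soft-max / Boltzmann) likelihood.
All names and values are illustrative assumptions, not the paper's code."""

import itertools
import math
from typing import Dict


def enumerate_allocations(agents, subtasks):
    """All ways of assigning one sub-task to each agent (agents may share one)."""
    for combo in itertools.product(subtasks, repeat=len(agents)):
        yield tuple(zip(agents, combo))


def softmax_likelihood(q_values: Dict[str, float], action: str, beta: float = 2.0) -> float:
    """P(action | sub-task): Boltzmann model of approximately rational action choice."""
    z = sum(math.exp(beta * q) for q in q_values.values())
    return math.exp(beta * q_values[action]) / z


def update_posterior(prior, observed_actions, q_fn, beta=2.0):
    """One Bayesian update: P(allocation | actions) ∝ P(actions | allocation) P(allocation)."""
    unnormalized = {}
    for alloc, p in prior.items():
        likelihood = 1.0
        for agent, subtask in alloc:
            q_values = q_fn(agent, subtask)  # hypothetical per-agent, per-sub-task action values
            likelihood *= softmax_likelihood(q_values, observed_actions[agent], beta)
        unnormalized[alloc] = p * likelihood
    total = sum(unnormalized.values())
    return {a: v / total for a, v in unnormalized.items()}


if __name__ == "__main__":
    agents = ["alice", "bob"]
    subtasks = ["chop_lettuce", "cook_soup"]

    # Uniform prior over all allocations of sub-tasks to agents.
    allocations = list(enumerate_allocations(agents, subtasks))
    prior = {a: 1.0 / len(allocations) for a in allocations}

    # Toy action values: moving "left" is useful for chopping, "right" for cooking.
    def q_fn(agent, subtask):
        return {"left": 1.0, "right": 0.0} if subtask == "chop_lettuce" else {"left": 0.0, "right": 1.0}

    observed = {"alice": "left", "bob": "right"}
    posterior = update_posterior(prior, observed, q_fn)
    for alloc, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
        print(alloc, round(p, 3))
```

In this toy run the posterior concentrates on the allocation where alice chops and bob cooks, since those sub-tasks best explain their observed actions. The Boltzmann likelihood is a standard choice for inverse planning; the paper's actual likelihood model and planners may differ.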