Automated Reinforcement Learning (AutoRL): A Survey and Open Problems
The combination of Reinforcement Learning (RL) with deep learning has led to
a series of impressive feats, with many believing (deep) RL provides a path
towards generally capable agents. However, the success of RL agents is often
highly sensitive to design choices in the training process, which may require
tedious and error-prone manual tuning. This makes it challenging to apply RL to
new problems while also limiting its full potential. In many other areas of
machine learning, AutoML has shown it is possible to automate such design
choices and has also yielded promising initial results when applied to RL.
However, Automated Reinforcement Learning (AutoRL) involves not only standard
applications of AutoML but also additional challenges unique to RL, which
naturally produce a different set of methods. As such, AutoRL has been
emerging as an important area of research in RL, showing promise in a variety
of applications from RNA design to playing games such as Go. Given the
diversity of methods and environments considered in RL, much of the research
has been conducted in distinct subfields, ranging from meta-learning to
evolution. In this survey we seek to unify the field of AutoRL: we provide a
common taxonomy, discuss each area in detail, and pose open problems of
interest to researchers going forward.
Authors
Jack Parker-Holder, Raghu Rajan, Xingyou Song, André Biedenkapp, Yingjie Miao, Theresa Eimer, Baohe Zhang, Vu Nguyen, Roberto Calandra, Aleksandra Faust, Frank Hutter, Marius Lindauer