OPAL: Offline Primitive Discovery for Accelerating Offline Reinforcement Learning
Reinforcement learning (RL) has achieved impressive performance in a variety
of online settings in which an agent's ability to query the environment for
transitions and rewards is effectively unlimited. However, in many practical
applications, the situation is reversed: an agent may have access to large
amounts of undirected offline experience data, while access to the online
environment is severely limited. In this work, we focus on this offline
setting. Our main insight is that, when presented with offline data composed of
a variety of behaviors, an effective way to leverage this data is to extract a
continuous space of recurring and temporally extended primitive behaviors
before using these primitives for downstream task learning. Primitives
extracted in this way serve two purposes: they delineate the behaviors that are
supported by the data from those that are not, making them useful for avoiding
distributional shift in offline RL; and they provide a degree of temporal
abstraction, which reduces the effective horizon, yielding better learning in
theory and improved offline RL in practice. In addition to benefiting offline
policy optimization, we show that performing offline primitive learning in this
way can also be leveraged for improving few-shot imitation learning as well as
exploration and transfer in online RL on a variety of benchmark domains.
Visualizations are available at this https URL
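To make the primitive-extraction idea concrete, below is a minimal sketch of learning a continuous space of temporally extended primitives from offline sub-trajectories with a sequence autoencoder: an encoder maps each c-step sub-trajectory to a latent z, and a latent-conditioned policy (decoder) reconstructs its actions, regularized toward a standard-normal prior. This is an illustrative assumption about the architecture (PyTorch, GRU encoder, MLP decoder, arbitrary dimensions), not the authors' exact implementation; all class and function names here are hypothetical.

```python
import torch
import torch.nn as nn

class TrajectoryEncoder(nn.Module):
    """Encodes a c-step sub-trajectory (states, actions) into a latent primitive z."""
    def __init__(self, state_dim, action_dim, latent_dim, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(state_dim + action_dim, hidden, batch_first=True)
        self.mu = nn.Linear(hidden, latent_dim)
        self.log_std = nn.Linear(hidden, latent_dim)

    def forward(self, states, actions):
        x = torch.cat([states, actions], dim=-1)   # (B, c, state_dim + action_dim)
        _, h = self.rnn(x)                         # h: (1, B, hidden)
        h = h.squeeze(0)
        return self.mu(h), self.log_std(h).clamp(-5, 2)

class LatentConditionedPolicy(nn.Module):
    """Decoder pi(a | s, z): predicts actions given the state and the latent primitive."""
    def __init__(self, state_dim, action_dim, latent_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, states, z):
        z_rep = z.unsqueeze(1).expand(-1, states.shape[1], -1)  # broadcast z over time
        return self.net(torch.cat([states, z_rep], dim=-1))

def primitive_loss(encoder, policy, states, actions, beta=0.1):
    """Action-reconstruction loss plus a KL term toward N(0, I) over sub-trajectories."""
    mu, log_std = encoder(states, actions)
    std = log_std.exp()
    z = mu + std * torch.randn_like(std)           # reparameterization trick
    recon = policy(states, z)
    recon_loss = ((recon - actions) ** 2).mean()
    kl = (-0.5 * (1 + 2 * log_std - mu ** 2 - std ** 2)).sum(-1).mean()
    return recon_loss + beta * kl

# Example on random placeholder data: 32 sub-trajectories of length c = 10
# (state_dim = 17, action_dim = 6, latent_dim = 8 are arbitrary choices).
enc = TrajectoryEncoder(state_dim=17, action_dim=6, latent_dim=8)
pol = LatentConditionedPolicy(state_dim=17, action_dim=6, latent_dim=8)
s = torch.randn(32, 10, 17)
a = torch.randn(32, 10, 6)
loss = primitive_loss(enc, pol, s, a)
loss.backward()
```

Under this kind of setup, a downstream offline RL agent would act in the learned latent space, choosing a primitive z every c steps and letting the frozen latent-conditioned policy execute it, which is one concrete way the temporal abstraction described above shortens the effective decision horizon while keeping behaviors close to those supported by the data.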