Accelerating Robotic Reinforcement Learning via Parameterized Action Primitives
Murtaza Dalal, Deepak Pathak, Ruslan Salakhutdinov
Despite the potential of reinforcement learning (RL) for building
general-purpose robotic systems, training RL agents to solve robotics tasks
remains challenging due to the difficulty of exploration in purely
continuous action spaces. Addressing this problem is an active area of research
with most efforts focused on improving RL methods via better optimization or
more efficient exploration. An alternative but important component to improve
is the interface between the RL algorithm and the robot. In this work, we
manually specify a library of robot action primitives (RAPS), parameterized
with arguments that are learned by an RL policy. These parameterized primitives
are expressive and simple to implement, enable efficient exploration, and can
be transferred across robots, tasks, and environments. We perform a thorough
empirical study across challenging tasks in three distinct domains with image
input and a sparse terminal reward. We find that our simple change to the
action interface substantially improves both the learning efficiency and task
performance irrespective of the underlying RL algorithm, significantly
outperforming prior methods that learn skills from offline expert data. Code
and videos are available at this https URL.
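To make the idea of a parameterized-primitive action interface concrete, the following is a minimal, hypothetical sketch in Python. It is not taken from the RAPS codebase: the primitive names (reach, grasp, lift), argument counts, and the flat decoding scheme (one score per primitive followed by a shared argument vector) are illustrative assumptions; the actual library and low-level controllers in the paper differ.

# Hypothetical sketch of a parameterized-primitive action interface.
# The RL policy emits one score per primitive plus a fixed-size argument
# vector; the highest-scoring primitive is executed with the arguments
# it needs. All names and controllers here are placeholders.
import numpy as np


def reach(args):
    # Placeholder: move the end effector by an (x, y, z) offset.
    return {"delta_xyz": np.asarray(args[:3])}


def grasp(args):
    # Placeholder: close the gripper with a commanded closure amount.
    return {"gripper": -abs(args[0])}


def lift(args):
    # Placeholder: move the end effector up by args[0] meters.
    return {"delta_xyz": np.array([0.0, 0.0, abs(args[0])])}


# Primitive library: (callable, number of continuous arguments it consumes).
PRIMITIVES = [(reach, 3), (grasp, 1), (lift, 1)]
MAX_ARGS = max(n for _, n in PRIMITIVES)


def execute_primitive(policy_action):
    """Decode a flat policy output into (primitive index, arguments) and run it."""
    scores = policy_action[: len(PRIMITIVES)]
    args = policy_action[len(PRIMITIVES):]
    idx = int(np.argmax(scores))
    fn, n_args = PRIMITIVES[idx]
    return fn(args[:n_args])


if __name__ == "__main__":
    # A stand-in for one policy output: primitive scores + argument vector.
    action = np.random.uniform(-1.0, 1.0, size=len(PRIMITIVES) + MAX_ARGS)
    print(execute_primitive(action))

In such a setup, the underlying RL algorithm is unchanged; only the action space it samples from is redefined, which is why the abstract describes the approach as agnostic to the choice of RL method.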