Motion Policy Networks for Collision-Free Motion Generation
Collision-free motion generation in unknown environments is a core building block for robot manipulation.
We present an end-to-end neural model called Motion Policy Networks (MπNets) to generate collision-free, smooth motion from just a single depth camera observation.
MπNets are trained on over 3 million motion planning problems in over 500,000 environments, and they are 46% better than prior neural planners and more robust than local control policies.
Our experiments show that MπNets are significantly faster than global planners while exhibiting the reactivity needed to deal with dynamic scenes.
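To make the closed-loop setting concrete, here is a minimal rollout sketch: the policy repeatedly consumes a fresh depth-camera point cloud together with the current joint configuration and a target pose, and outputs the next configuration to execute. All function and variable names below are illustrative stand-ins of our own, not the released MπNets API.

```python
import numpy as np

# Illustrative stubs only -- not the released MπNets interfaces.
def get_depth_point_cloud() -> np.ndarray:
    """Return an (N, 3) point cloud from a single depth camera (stubbed)."""
    return np.random.rand(4096, 3)

def policy_step(point_cloud: np.ndarray, q: np.ndarray, target_pose: np.ndarray) -> np.ndarray:
    """Stand-in for the learned policy: predict the next joint configuration."""
    return q  # a trained network would produce a small collision-free step here

def reached_target(q: np.ndarray, target_pose: np.ndarray) -> bool:
    """Stubbed goal check (would use forward kinematics in practice)."""
    return False

q = np.zeros(7)          # current configuration of a 7-DoF arm
target_pose = np.eye(4)  # desired end-effector pose

# Closed-loop rollout: re-observing the point cloud at every step is what
# gives the policy its reactivity in dynamic scenes.
for _ in range(300):
    cloud = get_depth_point_cloud()
    q = policy_step(cloud, q, target_pose)
    if reached_target(q, target_pose):
        break
```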
Authors
Adam Fishman, Adithyavairavan Murali, Clemens Eppner, Bryan Peele, Byron Boots, Dieter Fox
Neural motion planning is a promising approach for generating fast and legible motions for a robotic manipulator in unknown environments that require real-time control.
We train a reactive, end-to-end neural policy that operates on point clouds of the environment and moves to task space targets while avoiding obstacles.
Our policy is significantly faster than other baseline configuration space planners and succeeds more often than local task space controllers.
Prior neural planners, by contrast, draw samples from a straight-line path in configuration space, which may not generalize to challenging environments beyond a tabletop.
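One plausible way to realize such a point-cloud-conditioned policy is sketched below in PyTorch: a PointNet-style per-point encoder with max pooling, fused with the current joint configuration and a task-space goal, predicting a small joint-space displacement. The layer sizes, the pooling encoder, and the displacement output are our assumptions for illustration, not the exact MπNets architecture.

```python
import torch
import torch.nn as nn

class PointCloudPolicy(nn.Module):
    """Reactive policy sketch: point cloud + robot state -> next joint configuration.

    Layer sizes, the max-pooled per-point encoder, and the 7-number goal encoding
    (position + quaternion) are illustrative assumptions, not the paper's exact design.
    """

    def __init__(self, dof: int = 7, feat_dim: int = 256):
        super().__init__()
        # Per-point MLP followed by a symmetric max-pool (PointNet-style encoder).
        self.point_encoder = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, feat_dim),
        )
        # Head that fuses the scene feature with the current configuration and the goal.
        self.head = nn.Sequential(
            nn.Linear(feat_dim + dof + 7, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, dof),  # small joint-space displacement
        )

    def forward(self, points, q, goal):
        # points: (B, N, 3), q: (B, dof), goal: (B, 7)
        per_point = self.point_encoder(points)      # (B, N, feat_dim)
        scene_feat = per_point.max(dim=1).values    # order-invariant pooling over points
        delta_q = self.head(torch.cat([scene_feat, q, goal], dim=-1))
        return q + delta_q                          # next configuration target


policy = PointCloudPolicy()
next_q = policy(torch.rand(1, 4096, 3), torch.zeros(1, 7), torch.zeros(1, 7))
```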
Results
We present a class of end-to-end neural policies that learn to navigate to pose targets in task space while avoiding obstacles.
Our experiments show that when applied to appropriate problems, these policies are significantly faster than a global motion planner and more capable than prior neural planners and manually designed local control policies.
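Because these policies are learned from large-scale expert planning data (over 3 million problems), the core of training is imitation. Below is a simplified behavior-cloning step that reuses the policy sketched above; the tensor shapes are assumed, and the paper's actual objective may differ, so treat this only as a hedged sketch.

```python
import torch
import torch.nn.functional as F

def training_step(policy, optimizer, batch):
    """One simplified behavior-cloning update on expert planner data.

    batch = (points, q, goal, expert_next_q); assumed shapes:
    points (B, N, 3); q, goal, expert_next_q (B, dof) / (B, 7).
    """
    points, q, goal, expert_next_q = batch
    pred_next_q = policy(points, q, goal)         # policy from the sketch above
    loss = F.l1_loss(pred_next_q, expert_next_q)  # imitate the expert's next configuration
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
dummy_batch = (torch.rand(8, 4096, 3), torch.zeros(8, 7),
               torch.zeros(8, 7), torch.zeros(8, 7))
training_step(policy, optimizer, dummy_batch)
```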