Model-based algorithms, which learn a dynamics model from logged experience
and perform pessimistic planning under the learned model, have
emerged as a promising paradigm for offline reinforcement learning (offline
RL). However, practical variants of such model-based algorithms rely on
explicit uncertainty quantification for incorporating pessimism. Uncertainty
estimation with complex models, such as deep neural networks, can be difficult
and unreliable. We overcome this limitation by developing a new model-based
offline RL algorithm, COMBO, that regularizes the value function on
out-of-support state-action tuples generated via rollouts under the learned
model. This results in a conservative estimate of the value function for
out-of-support state-action tuples, without requiring explicit uncertainty
estimation. We theoretically show that our method optimizes a lower bound on
the true policy value, that this bound is tighter than that of prior methods,
and that our approach satisfies a policy improvement guarantee in the offline
setting. Through experiments, we find that COMBO consistently performs as well as
or better than prior offline model-free and model-based methods on
widely studied offline RL benchmarks, including image-based tasks.
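As a rough illustration of the value regularization described above, the following is a minimal PyTorch-style sketch (not the authors' released implementation) of a conservative critic loss: Q-values on state-action tuples from model rollouts are pushed down, Q-values on dataset tuples are pushed up, and a standard Bellman error term is added over a mixture of real and model-generated transitions. The names q_net, target_q, policy, beta, and gamma are illustrative assumptions.

    # Sketch only: assumes q_net(obs, act), target_q(obs, act), and policy(obs) are
    # callables returning tensors of shape (batch,); names and shapes are illustrative.
    import torch
    import torch.nn.functional as F

    def conservative_critic_loss(q_net, target_q, policy,
                                 batch_data, batch_model, beta=1.0, gamma=0.99):
        """Penalize Q on model-rollout tuples, reward Q on dataset tuples,
        plus a Bellman error over a mix of real and model-generated transitions."""
        # Q-values on state-action tuples generated via rollouts under the learned model.
        q_model = q_net(batch_model["obs"], batch_model["act"])
        # Q-values on state-action tuples from the offline dataset.
        q_data = q_net(batch_data["obs"], batch_data["act"])
        # Conservative regularizer: push down rollout Q-values, push up dataset Q-values.
        conservative_term = q_model.mean() - q_data.mean()

        # Standard Bellman backup on the combined batch of transitions.
        obs = torch.cat([batch_data["obs"], batch_model["obs"]])
        act = torch.cat([batch_data["act"], batch_model["act"]])
        rew = torch.cat([batch_data["rew"], batch_model["rew"]])
        next_obs = torch.cat([batch_data["next_obs"], batch_model["next_obs"]])
        with torch.no_grad():
            next_act = policy(next_obs)
            target = rew + gamma * target_q(next_obs, next_act)
        bellman_error = F.mse_loss(q_net(obs, act), target)

        return beta * conservative_term + bellman_error

Minimizing this loss drives the value estimate at out-of-support, model-generated tuples below the data-supported values, which is the source of the conservatism that removes the need for explicit uncertainty estimation.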
Authors
Tianhe Yu, Aviral Kumar, Rafael Rafailov, Aravind Rajeswaran, Sergey Levine, Chelsea Finn