Accelerating Offline Reinforcement Learning Application in Real-Time Bidding and Recommendation: Potential Use of Simulation
Haruka Kiyohara, Kosuke Kawakami, Yuta Saito
In recommender systems (RecSys) and real-time bidding (RTB) for online
advertisements, we often aim to optimize sequential decision making using
bandit and reinforcement learning (RL) techniques. In these applications,
offline reinforcement learning (offline RL) and off-policy evaluation (OPE) are
beneficial because they enable safe policy optimization using only logged data
without any risky online interaction. In this position paper, we explore the
potential of simulation to accelerate practical research on offline RL and
OPE, particularly in RecSys and RTB. Specifically, we argue that simulations
should be used effectively in empirical studies of offline RL and OPE. To
refute the counterclaim that
experiments using only real-world data are preferable, we first point out the
underlying risks and reproducibility issues in real-world experiments. Then,
we describe how these issues can be addressed by using simulations. Moreover,
we show how to combine the benefits of real-world and simulation-based
experiments to defend our position. Finally, we present an open challenge
regarding public simulation platforms that must be addressed to further
facilitate practical research on offline RL and OPE in RecSys and RTB. As a
possible solution, we describe our ongoing open source project and its
potential use cases.
We believe that building and utilizing simulation-based evaluation platforms
for offline RL and OPE will be of great interest and relevance to the RecSys
and RTB communities.
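
To make the role of OPE concrete, the following is a minimal sketch of the inverse propensity scoring (IPS) estimator, a standard OPE estimator that estimates the value of a new policy from logged bandit feedback alone. The function name `ips_estimate` and the array-based data layout are illustrative assumptions for exposition, not the implementation used in this paper or its accompanying open source project.

```python
import numpy as np

def ips_estimate(rewards, behavior_probs, evaluation_probs):
    """Inverse propensity scoring (IPS) estimate of an evaluation
    policy's value from logged bandit feedback.

    rewards: observed rewards r_i for the logged actions
    behavior_probs: pi_b(a_i | x_i), probability that the logging
        (behavior) policy chose each logged action
    evaluation_probs: pi_e(a_i | x_i), probability that the
        evaluation policy would choose the same logged action
    """
    # Reweight each logged reward by how much more (or less) likely
    # the evaluation policy is to take the logged action.
    weights = evaluation_probs / behavior_probs
    return float(np.mean(weights * rewards))

# Toy logged data: four interactions collected by the behavior policy.
rewards = np.array([1.0, 0.0, 1.0, 0.0])
behavior_probs = np.array([0.5, 0.25, 0.5, 0.25])
evaluation_probs = np.array([0.8, 0.1, 0.8, 0.1])

print(ips_estimate(rewards, behavior_probs, evaluation_probs))
```

Because the estimate uses only logged data, no risky online interaction is required; this is precisely the property that makes OPE attractive in RecSys and RTB, and that simulation-based platforms can help evaluate.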