Discrete Latent Space World Models for Reinforcement Learning
Sample efficiency remains a fundamental challenge in reinforcement learning.
Model-based algorithms aim to make better use of the available data by
simulating the environment with a learned model. We propose a new neural
network architecture for
world models based on a vector-quantized variational autoencoder (VQ-VAE) to
encode observations and a convolutional LSTM to predict the next embedding
indices. A model-free PPO agent is trained purely on simulated experience from
the world model. We adopt the setup introduced by Kaiser et al. (2020), which
allows only 100K interactions with the real environment, and show that our
approach outperforms their SimPLe algorithm in five out of six randomly
selected Atari environments, while our model is significantly smaller.
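
To make the two components concrete, the sketch below encodes observation
features into discrete codebook indices and rolls a convolutional LSTM over
the index maps to predict the next frame's indices. This is a minimal,
illustrative PyTorch sketch, not the paper's exact configuration: all class
names, layer sizes, and hyperparameters are assumptions, and the decoder,
straight-through gradient estimator, action conditioning, and reward head
are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    """Maps encoder features to the indices of their nearest codebook vectors."""
    def __init__(self, num_codes=512, code_dim=64):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)

    def forward(self, z):                               # z: (B, D, H, W) encoder features
        flat = z.permute(0, 2, 3, 1).reshape(-1, z.shape[1])
        dist = torch.cdist(flat, self.codebook.weight)  # distance to every code
        idx = dist.argmin(dim=1)                        # nearest code per spatial cell
        return idx.view(z.shape[0], z.shape[2], z.shape[3])   # (B, H, W) indices

class ConvLSTMCell(nn.Module):
    """Minimal convolutional LSTM cell operating on spatial feature maps."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)

class WorldModel(nn.Module):
    """Predicts the next frame's embedding indices from the current ones."""
    def __init__(self, num_codes=512, emb_dim=64, hid_ch=128):
        super().__init__()
        self.embed = nn.Embedding(num_codes, emb_dim)
        self.cell = ConvLSTMCell(emb_dim, hid_ch)
        self.head = nn.Conv2d(hid_ch, num_codes, 1)     # per-cell logits over codes

    def forward(self, idx, state):                      # idx: (B, H, W) integer indices
        x = self.embed(idx).permute(0, 3, 1, 2)         # (B, emb_dim, H, W)
        h, state = self.cell(x, state)
        return self.head(h), state                      # logits: (B, num_codes, H, W)

# Teacher-forced training step on a pair of consecutive frames
# (random tensors stand in for real encoder features).
B, H, W, hid_ch = 8, 8, 8, 128
vq, model = VectorQuantizer(), WorldModel()
state = (torch.zeros(B, hid_ch, H, W), torch.zeros(B, hid_ch, H, W))
idx_t = vq(torch.randn(B, 64, H, W))    # code indices for frame t
idx_t1 = vq(torch.randn(B, 64, H, W))   # code indices for frame t+1
logits, state = model(idx_t, state)
loss = F.cross_entropy(logits, idx_t1)  # next-index classification loss
```

Under this formulation, predicting the next observation reduces to a
per-cell classification problem over codebook indices in the discrete
latent space, rather than regression in pixel space.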