Wang, S., Si, N., Blanchet, J., & Zhou, Z. (2023). Sample complexity of variance-reduced distributionally robust Q-learning. arXiv. https://arxiv.org/abs/2305.18420
Abstract
Dynamic decision-making under distributional shifts is of fundamental interest to the theory and applications of reinforcement learning: the distribution of the environment on which the data is collected can differ from that of the environment on which the model is deployed. This paper presents two novel model-free algorithms, namely distributionally robust Q-learning and its variance-reduced counterpart, that can effectively learn a robust policy despite distributional shifts. These algorithms are designed to efficiently approximate the $q$-function of an infinite-horizon $\gamma$-discounted robust Markov decision process with a Kullback-Leibler uncertainty set to an entry-wise $\epsilon$-degree of precision. Further, the variance-reduced distributionally robust Q-learning combines synchronous Q-learning with variance-reduction techniques to enhance its performance. Consequently, we establish that it attains a minimax sample complexity upper bound of $\widetilde{O}\big(|S||A|(1-\gamma)^{-4}\epsilon^{-2}\big)$, where $S$ and $A$ denote the state and action spaces. This is the first complexity result that is independent of the uncertainty size $\delta$, thereby providing new complexity-theoretic insights. Additionally, a series of numerical experiments confirms the theoretical findings and the efficiency of the algorithms in handling distributional shifts.
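For context, the robust Markov decision process referenced in the abstract optimizes against the worst-case transition kernel in a Kullback-Leibler ball around the nominal model. The following is a minimal sketch of the standard robust Bellman fixed-point equation and its KL dual; the notation ($P_{s,a}$ for the nominal kernel, $Q_{s,a}$ for a perturbed kernel, $\delta$ for the ball radius, $q^{*}$ for the robust value) is assumed here for illustration and is not taken verbatim from the paper.

\[
q^{*}(s,a) \;=\; r(s,a) \;+\; \gamma \inf_{\,Q_{s,a}:\, D_{\mathrm{KL}}(Q_{s,a}\,\|\,P_{s,a}) \le \delta}\;
\mathbb{E}_{s' \sim Q_{s,a}}\Big[\max_{a'} q^{*}(s',a')\Big],
\]
\[
\inf_{\,Q_{s,a}:\, D_{\mathrm{KL}}(Q_{s,a}\,\|\,P_{s,a}) \le \delta}
\mathbb{E}_{Q_{s,a}}\big[v(s')\big]
\;=\;
\sup_{\alpha \ge 0}\Big\{ -\alpha \log \mathbb{E}_{s' \sim P_{s,a}}\big[e^{-v(s')/\alpha}\big] \;-\; \alpha\,\delta \Big\}.
\]

Because the dual on the right involves only an expectation under the nominal kernel $P_{s,a}$, it can be estimated from samples drawn from the nominal environment, which is what makes a model-free Q-learning scheme of this kind feasible.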
Authors
Shengbo Wang, Nian Si, Jose Blanchet, Zhengyuan Zhou
Publication date
2023/5/28
Journal
arXiv preprint arXiv:2305.18420