Ilai Bistritz, Zhengyuan Zhou, Xi Chen, Nicholas Bambos, and Jose Blanchet. 2019. Online EXP3 learning in adversarial bandits with delayed feedback. Proceedings of the 33rd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, Article 1018, 11349–11358.
Abstract
Consider a player that in each of $T$ rounds chooses one of $K$ arms. An adversary chooses the cost of each arm in a bounded interval, and a sequence of feedback delays $\left\{d_t\right\}$ that are unknown to the player. After picking arm $a_t$ at round $t$, the player receives the cost of playing this arm $d_t$ rounds later. In cases where $t + d_t > T$, this feedback is simply missing. We prove that the EXP3 algorithm (that uses the delayed feedback upon its arrival) achieves a regret of $O\left(\sqrt{\ln K\left(KT + \sum_{t=1}^{T} d_t\right)}\right)$. For the case where $\sum_{t=1}^{T} d_t$ and $T$ are unknown, we propose a novel doubling trick for online learning with delays and prove that this adaptive EXP3 achieves a regret of $O\left(\sqrt{\ln K\left(K^2 T + \sum_{t=1}^{T} d_t\right)}\right)$. We then consider a two-player zero-sum game where players experience asynchronous delays. We show that even when the delays are large enough that players no longer enjoy the "no-regret property" (e.g., where $d_t = O\left(t \log t\right)$), the ergodic average of the strategy profile still converges to the set of Nash equilibria of the game. The result is made possible by choosing an adaptive step size $\eta_t$ that is not summable but is square summable, and proving a "weighted regret bound" for this general case.
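The setting described above can be illustrated with a minimal sketch of EXP3 under delayed feedback. This is an illustrative assumption-laden toy, not the authors' exact algorithm or step-size schedule: the cost of the arm played at round $t$ is queued and only used to update the weights when it arrives $d_t$ rounds later, with importance weighting by the probability the arm had when it was played; feedback that would arrive after round $T$ is simply dropped.

```python
import math
import random
from collections import defaultdict

def exp3_delayed(costs, delays, K, eta, seed=0):
    """Sketch of EXP3 with delayed feedback (hypothetical helper, not from the paper).

    costs[t][a] in [0, 1] is the adversary's cost of arm a at round t;
    delays[t] >= 0 is the feedback delay d_t; returns the arms played.
    """
    rng = random.Random(seed)
    log_w = [0.0] * K               # log-weights of the K arms
    pending = defaultdict(list)     # arrival round -> [(play round, arm, prob)]
    played = []
    T = len(costs)
    for t in range(T):
        # Sampling distribution from current log-weights (numerically stable softmax).
        m = max(log_w)
        w = [math.exp(lw - m) for lw in log_w]
        total = sum(w)
        p = [x / total for x in w]
        a = rng.choices(range(K), weights=p)[0]
        played.append(a)
        # Feedback for the arm played at round t arrives at round t + d_t;
        # anything scheduled past round T-1 is never processed (missing feedback).
        pending[t + delays[t]].append((t, a, p[a]))
        # Apply all feedback arriving now, importance-weighted by the
        # probability the arm had at the round it was played.
        for (s, arm, prob) in pending.pop(t, []):
            log_w[arm] -= eta * costs[s][arm] / prob
    return played
```

Under a fixed adversary that always charges arm 1 and never arm 0, the weight of arm 1 decays as its (delayed) costs arrive, so the player shifts toward arm 0 despite acting on stale feedback.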
Authors
Ilai Bistritz, Zhengyuan Zhou, Xi Chen, Nicholas Bambos, Jose Blanchet
Publication date
2019
Journal
Advances in Neural Information Processing Systems
Volume
32