Bistritz, I., Zhou, Z., Chen, X., Bambos, N., & Blanchet, J. (2021). No Discounted-Regret Learning in Adversarial Bandits with Delays. arXiv preprint arXiv:2103.04550. https://arxiv.org/abs/2103.04550


Abstract

Consider a player that in each round $t$ out of $T$ rounds chooses an action and observes the incurred cost after a delay of $d_t$ rounds. The cost functions and the delay sequence are chosen by an adversary. We show that even if the players' algorithms lose their "no-regret" property due to too large delays, the expected discounted ergodic distribution of play converges to the set of coarse correlated equilibria (CCE) if the algorithms have "no discounted-regret". For a zero-sum game, we show that no discounted-regret is sufficient for the discounted ergodic average of play to converge to the set of Nash equilibria. We prove that the FKM algorithm with $n$ dimensions achieves a regret of $O\left(nT^{3/4}+\sqrt{n}\,T^{1/3}D^{1/3}\right)$ and the EXP3 algorithm with $K$ arms achieves a regret of $O\left(\sqrt{\ln K\,(KT+D)}\right)$, even when $D=\sum_{t=1}^{T}d_t$ and $T$ are unknown. These bounds use a novel doubling trick that provably retains the regret bound for the case where $D$ and $T$ are known. Using these bounds, we show that EXP3 and FKM have no discounted-regret even for $d_t = O(t\log t)$. Therefore, the CCE of a finite or convex unknown game can be approximated via simulation, even when only delayed bandit feedback is available.
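For context on the kind of algorithm the bounds apply to, the following is a minimal sketch of an EXP3-style learner that only applies a loss estimate once its delay has elapsed. It is illustrative only: `cost_fn`, `delay_fn`, and the step size `eta` are hypothetical placeholders, and the sketch omits the paper's doubling trick and discounting.

```python
import numpy as np

def exp3_delayed(K, T, eta, cost_fn, delay_fn, rng=None):
    """EXP3-style sketch with delayed bandit feedback (illustrative only).

    cost_fn(t, arm): cost in [0, 1] incurred by playing `arm` at round t.
    delay_fn(t): nonnegative integer delay d_t; the cost of round t is
    observed only at the end of round t + d_t.
    """
    rng = rng or np.random.default_rng()
    log_w = np.zeros(K)          # log-weights, kept in log space for stability
    pending = {}                 # arrival round -> list of (prob played, arm, cost)
    actions = []

    for t in range(T):
        p = np.exp(log_w - log_w.max())
        p /= p.sum()
        arm = int(rng.choice(K, p=p))
        actions.append(arm)

        cost = cost_fn(t, arm)               # incurred now, revealed later
        pending.setdefault(t + delay_fn(t), []).append((p[arm], arm, cost))

        # apply every loss estimate whose delay expires at round t
        for p_played, a, c in pending.pop(t, []):
            log_w[a] -= eta * c / p_played   # importance-weighted update
    return actions
```

For example, `exp3_delayed(K=5, T=10_000, eta=0.01, cost_fn=lambda t, a: np.random.rand(), delay_fn=lambda t: t % 50)` runs the sketch against random costs with a cyclic delay sequence; keying the buffer by arrival round means each iteration only touches the feedback that actually arrives in that round.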

Authors
Ilai Bistritz, Zhengyuan Zhou, Xi Chen, Nicholas Bambos, Jose Blanchet
Publication date
2021/3
Journal
arXiv preprint arXiv:2103.04550