Abstract

We propose a novel Frank-Wolfe (FW) procedure for the optimization of infinite-dimensional functionals of probability measures, a task which arises naturally in a wide range of areas including statistical learning (e.g., variational inference) and artificial intelligence (e.g., generative adversarial networks). Our FW procedure takes advantage of Wasserstein gradient flows and strong duality results recently developed in Distributionally Robust Optimization, so that gradient steps (in the Wasserstein space) can be efficiently computed using finite-dimensional, convex optimization methods. We show how to choose the step sizes to guarantee exponentially fast convergence of the iterates, under mild assumptions on the functional being optimized. We apply our algorithm to a range of functionals arising from applications in nonparametric estimation.
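
For orientation, the generic Frank-Wolfe update, lifted from Euclidean space to the space of probability measures, can be sketched as follows. This is the standard template only, not the paper's exact modified procedure; the feasible set \mathcal{U}(\mu_k) and step size \gamma_k are notation introduced here for illustration:

\mu_{k+1} = (1 - \gamma_k)\, \mu_k + \gamma_k\, \nu_k, \qquad \nu_k \in \arg\min_{\nu \in \mathcal{U}(\mu_k)} \int \frac{\delta F}{\delta \mu}(\mu_k)(x) \, \mathrm{d}\nu(x),

where F is the functional being minimized and \delta F / \delta \mu denotes its first variation. Taking \mathcal{U}(\mu_k) to be a Wasserstein ball around \mu_k (an assumption consistent with the abstract) turns the inner linear minimization into a distributionally robust optimization problem, whose strong dual is a finite-dimensional convex program.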

Authors
Carson Kent, Jiajin Li, Jose Blanchet, Peter W. Glynn
Publication date
2021/12/6
Journal
Advances in Neural Information Processing Systems
Volume
34
Pages
14448-14462