Blanchet, J., Cartis, C., Menickelly, M., & Scheinberg, K. (2016). Convergence Rate Analysis of a Stochastic Trust Region Method via Submartingales. arXiv preprint arXiv:1609.07428. https://arxiv.org/abs/1609.07428
Abstract
We propose a novel framework for analyzing convergence rates of stochastic optimization algorithms with adaptive step sizes. This framework is based on analyzing properties of an underlying generic stochastic process, in particular by deriving a bound on the expected stopping time of this process. We utilize this framework to analyze the bounds on expected global convergence rates of a stochastic variant of a traditional trust region method, introduced in Chen, Menickelly, and Scheinberg (2014). While traditional trust region methods rely on exact computations of the gradient, Hessian, and values of the objective function, this method assumes that these values are available only up to some dynamically adjusted accuracy. Moreover, this accuracy is assumed to hold only with some sufficiently large, but fixed, probability, without any additional restrictions on the variance of the errors. This setting applies, for example, to standard stochastic optimization and machine learning formulations. Improving upon the analysis in Chen, Menickelly, and Scheinberg (2014), we show that the stochastic process defined by the algorithm satisfies the assumptions of our proposed general framework, with the stopping time defined as reaching accuracy $\|\nabla f(x)\| \leq \epsilon$. The resulting bound for this stopping time is $O(\epsilon^{-2})$, under the assumption of sufficiently accurate stochastic gradients, and is the first global complexity bound for a stochastic trust-region method. Finally, we apply the same framework to derive a second-order complexity bound under some additional assumptions.
Authors
Jose Blanchet, Coralia Cartis, Matt Menickelly, Katya Scheinberg
Publication date
2016/9/23
Journal
arXiv preprint arXiv:1609.07428
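
To make the setting described in the abstract concrete, below is a minimal Python sketch of a generic stochastic trust-region iteration of the kind the paper analyzes: function and gradient estimates are noisy, a Cauchy-type step is taken within the current trust region, and the radius is expanded or shrunk based on a ratio test on the estimated decrease. Everything here (the quadratic test objective, the Gaussian noise model as a simplified stand-in for probabilistically accurate estimates, and the names noisy_value, noisy_grad, stochastic_trust_region with their parameters) is an illustrative assumption, not the authors' algorithm or code. The loop's exit test roughly plays the role of the stopping time whose expectation the paper bounds.

    # Minimal illustrative sketch (assumptions, not the authors' code): a stochastic
    # trust-region loop with noisy function and gradient estimates and a radius
    # update driven by a ratio test. Gaussian noise is a simplified stand-in for the
    # paper's probabilistically accurate estimates.
    import numpy as np

    rng = np.random.default_rng(0)

    def f(x):
        # True objective: a simple quadratic, used only for this illustration.
        return 0.5 * np.dot(x, x)

    def noisy_value(x, sigma=0.1):
        # Hypothetical function-value estimate (exact value plus Gaussian noise).
        return f(x) + sigma * rng.normal()

    def noisy_grad(x, sigma=0.1):
        # Hypothetical gradient estimate (exact gradient of f plus Gaussian noise).
        return x + sigma * rng.normal(size=x.shape)

    def stochastic_trust_region(x0, delta0=1.0, eta=0.1, gamma=2.0,
                                eps=1e-3, max_iter=10000):
        x, delta = np.asarray(x0, dtype=float), delta0
        for k in range(max_iter):
            g = noisy_grad(x)
            if np.linalg.norm(g) <= eps:
                # Estimated first-order accuracy reached; this iteration count plays
                # the role of the stopping time whose expectation the paper bounds.
                return x, k
            # Cauchy-type step: steepest descent on the noisy gradient, scaled to
            # the trust-region boundary.
            s = -delta * g / np.linalg.norm(g)
            pred = -np.dot(g, s)                        # model-predicted decrease (> 0)
            ared = noisy_value(x) - noisy_value(x + s)  # estimated actual decrease
            if ared / pred >= eta:
                # Successful step: accept the point and enlarge the radius.
                x, delta = x + s, gamma * delta
            else:
                # Unsuccessful step: reject the point and shrink the radius.
                delta = delta / gamma
        return x, max_iter

    x_final, iterations = stochastic_trust_region(np.array([3.0, -4.0]))
    print(iterations, x_final)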