U. V. Shanbhag and J. H. Blanchet, “Budget-constrained stochastic approximation,” 2015 Winter Simulation Conference (WSC), Huntington Beach, CA, USA, 2015, pp. 368-379, doi: 10.1109/WSC.2015.7408179.

Abstract

Traditional stochastic approximation (SA) schemes for stochastic optimization employ a single gradient or a fixed batch of noisy gradients in computing a new iterate. We consider the development of SA schemes in which N_k gradient samples are utilized at step k and the total computational budget is M, so that ∑_{k=1}^{K} N_k ≤ M, where K denotes the terminal step. This paper makes the following contributions: (i) we conduct an error analysis for constant batches (N_k = N) in both the constant and diminishing steplength regimes and show linear convergence in terms of the expected optimal value; (ii) we extend the two schemes in (i), and the corresponding linear convergence rates, to the setting of increasing sample sizes (increasing N_k), assuming constant or diminishing steplengths; (iii) finally, when steplengths are constant, we obtain the optimal number of projection steps that minimizes the bound on the mean-squared error.
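The scheme described in the abstract can be illustrated with a minimal sketch, which is not the authors' implementation: SA iterates x_{k+1} = x_k − γ_k ḡ_k, where ḡ_k averages N_k noisy gradient samples, and the loop stops once the total sample count would exceed the budget M. The function names (budget_constrained_sa, batch_size, steplength) and the quadratic test problem are illustrative assumptions; the projection step for constrained problems is omitted here.

# A minimal sketch (not the authors' code) of budget-constrained SA on a
# strongly convex quadratic, assuming a noisy-gradient oracle
# grad(x, rng) = ∇f(x) + noise. Batch sizes N_k and steplengths γ_k are
# illustrative; the paper analyzes constant and increasing N_k under
# constant and diminishing steplengths.
import numpy as np

def budget_constrained_sa(grad, x0, M, batch_size, steplength, rng):
    """Run SA until the total sample budget M is exhausted.

    grad(x, rng)  -- one noisy gradient sample at x (hypothetical interface)
    batch_size(k) -- N_k, number of samples drawn at step k
    steplength(k) -- gamma_k, steplength at step k
    """
    x, used, k = np.asarray(x0, dtype=float), 0, 0
    while True:
        n = batch_size(k)
        if used + n > M:            # enforce sum_{k=1}^K N_k <= M
            break
        g = np.mean([grad(x, rng) for _ in range(n)], axis=0)
        x = x - steplength(k) * g   # projection omitted for unconstrained case
        used += n
        k += 1
    return x

# Example: f(x) = 0.5 * ||x||^2 with additive Gaussian gradient noise.
rng = np.random.default_rng(0)
noisy_grad = lambda x, rng: x + rng.normal(0.0, 1.0, size=x.shape)

# Regime (i): constant batch N_k = N with constant steplength.
x_const = budget_constrained_sa(noisy_grad, [5.0, -3.0], M=10_000,
                                batch_size=lambda k: 10,
                                steplength=lambda k: 0.1, rng=rng)

# Regime (ii): geometrically increasing batches with constant steplength.
x_incr = budget_constrained_sa(noisy_grad, [5.0, -3.0], M=10_000,
                               batch_size=lambda k: int(np.ceil(1.1 ** k)),
                               steplength=lambda k: 0.1, rng=rng)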

Authors
Uday V. Shanbhag, José Blanchet