Blanchet, J. & Kang, Y. (2017). Distributionally Robust Groupwise Regularization Estimator. Proceedings of the Ninth Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 77:97-112. Available from https://proceedings.mlr.press/v77/blanchet17a.html.


Abstract

Regularized estimators in the context of group variables have been applied successfully in model and feature selection to preserve interpretability. We formulate a Distributionally Robust Optimization (DRO) problem which recovers popular estimators, such as the Group Square Root Lasso (GSRL). Our DRO formulation allows us to interpret GSRL as a game, in which we learn a regression parameter while an adversary chooses a perturbation of the data. We wish to pick the parameter to minimize the expected loss under any plausible model chosen by the adversary, who, on the other hand, wishes to increase the expected loss. The regularization parameter turns out to be precisely determined by the amount of perturbation of the training data allowed to the adversary. In this paper, we introduce a data-driven (statistical) criterion for the optimal choice of regularization, which we evaluate asymptotically, in closed form, as the size of the training set increases. Our easy-to-evaluate regularization formula is compared against cross-validation, showing comparable performance.
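The DRO-to-GSRL connection described in the abstract can be sketched as follows. This is a minimal reconstruction from the abstract alone, not the authors' exact statement: the notation P_n (empirical measure), D_c (optimal-transport discrepancy with cost c), and delta (the adversary's perturbation budget), as well as the precise group weights, are assumptions here; the paper specifies the exact cost and induced norms. The game is

\[
\min_{\beta} \; \sup_{P \,:\, \mathcal{D}_c(P,\, P_n) \le \delta} \; \mathbb{E}_P\big[(Y - X^\top \beta)^2\big],
\]

where \(P_n\) is the empirical distribution of the \(n\) training pairs \((X_i, Y_i)\) and the adversary may move mass within a ball of radius \(\delta\). Since minimizing the worst-case expected loss is equivalent to minimizing its square root, and for a suitably chosen group-norm cost the inner supremum admits a closed form, the problem reduces (up to the exact group weights \(w_g\) induced by the cost) to

\[
\min_{\beta} \; \sqrt{\frac{1}{n}\sum_{i=1}^{n} \big(Y_i - X_i^\top \beta\big)^2} \;+\; \sqrt{\delta}\, \sum_{g} w_g \,\big\|\beta_g\big\|_2,
\]

i.e. GSRL with regularization parameter \(\lambda = \sqrt{\delta}\), where \(\beta_g\) denotes the coordinates of \(\beta\) in group \(g\). This is the sense in which the regularization parameter is determined by the perturbation budget allowed to the adversary, and why a statistical criterion for choosing \(\delta\) yields a criterion for choosing \(\lambda\).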

Authors
Jose Blanchet, Yang Kang
Publication date
2017/11/11
Conference
Asian Conference on Machine Learning
Pages
97-112
Publisher
PMLR