Jose Blanchet is a faculty member in the Management Science and Engineering Department at Stanford University, where he earned his Ph.D. in 2004. Prior to joining the Stanford faculty, Jose was a professor in the IEOR and Statistics Departments at Columbia University (2008-2017), and before that he was a faculty member in the Statistics Department at Harvard University (2004-2008). Jose is a recipient of the 2009 Best Publication Award given by the INFORMS Applied Probability Society and of the 2010 Erlang Prize. He also received an NSF PECASE award in 2010. He worked as an analyst at Protego Financial Advisors, a leading investment bank in Mexico. His research interests are in applied probability and Monte Carlo methods. He serves on the editorial boards of ALEA, Advances in Applied Probability, Extremes, Insurance: Mathematics and Economics, Journal of Applied Probability, Mathematics of Operations Research, and Stochastic Systems.
Presentations
Tutorial: Optimal Transport Methods in Operations Research and Statistics, APS INFORMS, 2017, Northwestern University.
Exact Simulation of Multidimensional Diffusions, Newton Institute, 2017, Cambridge.
Optimal Transport in Risk Analysis, EVA 2017, TU Delft.
Robust Risk Analysis, IWAP 2016, Toronto.
Monte Carlo Methods for Spatial Extremes, MCQMC 2016, Stanford University.
Exact Simulation of Objects Depending on Infinite Future Information with Applications to Optimal Exact Simulation of Max-Stable Processes, EVA 2015, Portugal; SIAM Conference, 2015.
Multiscale Analysis of Limit Order Books, University of Chicago, Conference on High Frequency Trading at the Stevanovich Center, 2015.
Tolerance Enforced Simulation for Stochastic Differential Equations via Rough Path Analysis, Newton Institute, 2013; Conference in Monte Carlo, Warwick, 2014.
Exact Sampling of Multidimensional Reflected Brownian Motion, Tata Institute of Fundamental Research, India, 2014; Brown University, 2015; and MCQMC 2014 (expanded version).
Publications
Statistical Learning of Distributionally Robust Stochastic Control in Continuous State Spaces
Wang, S., Si, N., Blanchet, J., & Zhou, Z. (2024). Statistical Learning of Distributionally Robust Stochastic Control in Continuous State Spaces. arXiv. https://arxiv.org/abs/2406.11281
Abstract
We explore the control of stochastic systems with potentially continuous state and action spaces, characterized by the state dynamics s_{t+1} = f(s_t, a_t, ξ_{t+1}). Here, s_t, a_t, and ξ_t represent the state, action, and exogenous random noise processes, respectively, with f denoting a known function that describes state transitions. Traditionally, the noise process {ξ_t} is assumed to be independent and identically distributed, with a distribution that is either fully known or can be consistently estimated. However, the occurrence of distributional shifts, typical in engineering settings, necessitates the consideration of the robustness of the policy. This paper introduces a distributionally robust stochastic control paradigm that accommodates possibly adaptive adversarial perturbation to the noise distribution within a prescribed ambiguity set. We examine two adversary models: current-action-aware and current-action-unaware, leading to different dynamic programming equations. Furthermore, we characterize the optimal finite sample minimax rates for achieving uniform learning of the robust value function across continuum states under both adversary types, considering ambiguity sets defined by f-divergence and Wasserstein distance. Finally, we demonstrate the applicability of our framework across various real-world settings.
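The core computational object in this line of work is a worst-case expectation over an ambiguity set of noise distributions. The sketch below is not the paper's algorithm; it illustrates, under assumed standard notation, the well-known dual formula for a KL-divergence (a particular f-divergence) ball around an empirical distribution: sup over Q with KL(Q||P) ≤ δ of E_Q[V] equals min over λ > 0 of λ log E_P[exp(V/λ)] + λδ. The function name and grid-search minimization are illustrative choices, not from the paper.

```python
import numpy as np

def kl_robust_expectation(values, delta, lam_grid=None):
    """Worst-case expectation sup_{KL(Q||P) <= delta} E_Q[V], where P is
    the empirical distribution of `values`, computed via the dual
        min_{lam > 0} lam * log E_P[exp(V / lam)] + lam * delta,
    here minimized by a simple grid search over the dual variable lam."""
    if lam_grid is None:
        lam_grid = np.logspace(-2, 2, 400)
    v = np.asarray(values, dtype=float)
    m = v.max()  # shift for a numerically stable log-mean-exp
    duals = []
    for lam in lam_grid:
        log_mean_exp = m + lam * np.log(np.mean(np.exp((v - m) / lam)))
        duals.append(log_mean_exp + lam * delta)
    return float(min(duals))

# Illustrative robust Bellman-style comparison: for sampled next-state
# costs of one action, the robust value dominates the nominal mean.
costs = [0.0, 1.0, 2.0, 3.0]
nominal = float(np.mean(costs))          # plain expectation
robust = kl_robust_expectation(costs, 0.5)  # worst case in the KL ball
```

Repeating this evaluation inside a backward dynamic-programming recursion (one robust expectation per state-action pair) is the generic pattern behind robust value iteration; the adversary's information structure (action-aware vs. action-unaware) determines where in the recursion the supremum is taken.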
Authors
Shengbo Wang, Nian Si, Jose Blanchet, Zhengyuan Zhou
Publication date
2024/6/17
Journal
arXiv preprint arXiv:2406.11281
Modeling shortest paths in polymeric networks using spatial branching processes
Zhang, Z., Mohanty, S., Blanchet, J., & Cai, W. (2024). Modeling shortest paths in polymeric networks using spatial branching processes. Journal of the Mechanics and Physics of Solids, 187, 105636. https://doi.org/10.1016/j.jmps.2024.105636
Abstract
Recent studies have established a connection between the macroscopic mechanical response of polymeric materials and the statistics of the shortest path (SP) length between distant nodes in the polymer network. Since these statistics can be costly to compute and difficult to study theoretically, we introduce a branching random walk (BRW) model to describe the SP statistics from the coarse-grained molecular dynamics (CGMD) simulations of polymer networks. We postulate that the first passage time (FPT) of the BRW to a given termination site can be used to approximate the statistics of the SP between distant nodes in the polymer network. We develop a theoretical framework for studying the FPT of spatial branching processes and obtain an analytical expression for estimating the FPT distribution as a function of the cross-link density. We demonstrate by extensive numerical calculations that the distribution of the …
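The first-passage-time quantity described above can be illustrated with a minimal Monte Carlo sketch. This is not the paper's CGMD pipeline or its analytical FPT framework; it is a toy one-dimensional branching random walk, with illustrative parameter names, showing what "first generation at which some particle reaches a distant site" means operationally.

```python
import numpy as np

def brw_first_passage(L, branch_mean=2, step_sigma=1.0, max_gen=100,
                      max_pop=10_000, rng=None):
    """First passage generation of a 1-D branching random walk: each
    particle produces Poisson(branch_mean) offspring, each displaced by
    an independent N(0, step_sigma^2) step. Returns the first generation
    in which some particle reaches position >= L, or None if the process
    goes extinct or max_gen is exceeded."""
    rng = np.random.default_rng(rng)
    pos = np.zeros(1)  # generation 0: a single particle at the origin
    for gen in range(1, max_gen + 1):
        counts = rng.poisson(branch_mean, size=pos.size)
        parents = np.repeat(pos, counts)
        if parents.size == 0:
            return None  # extinction before reaching L
        pos = parents + rng.normal(0.0, step_sigma, size=parents.size)
        if pos.max() >= L:
            return gen
        if pos.size > max_pop:
            pos = np.sort(pos)[-max_pop:]  # keep only rightmost particles
    return None
```

Averaging the returned generation over many independent runs (discarding extinctions) gives a Monte Carlo estimate of the FPT distribution as a function of the branching rate, which plays the role of the cross-link density in the network analogy.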
Authors
Zhenyuan Zhang, Shaswat Mohanty, Jose Blanchet, Wei Cai
Publication date
2024/6/1
Journal
Journal of the Mechanics and Physics of Solids
Volume
187
Pages
105636
Publisher
Pergamon
Deep Learning for Computing Convergence Rates of Markov Chains
Qu, Y., Blanchet, J., & Glynn, P. (2024). Deep Learning for Computing Convergence Rates of Markov Chains. arXiv. https://arxiv.org/abs/2405.20435
Abstract
Convergence rate analysis for general state-space Markov chains is fundamentally important in areas such as Markov chain Monte Carlo and algorithmic analysis (for computing explicit convergence bounds). This problem, however, is notoriously difficult because traditional analytical methods often do not generate practically useful convergence bounds for realistic Markov chains. We propose the Deep Contractive Drift Calculator (DCDC), the first general-purpose sample-based algorithm for bounding the convergence of Markov chains to stationarity in Wasserstein distance. The DCDC has two components. First, inspired by the new convergence analysis framework in (Qu et al., 2023), we introduce the Contractive Drift Equation (CDE), the solution of which leads to an explicit convergence bound. Second, we develop an efficient neural-network-based CDE solver. Equipped with these two components, DCDC solves the CDE and converts the solution into a convergence bound. We analyze the sample complexity of the algorithm and further demonstrate the effectiveness of the DCDC by generating convergence bounds for realistic Markov chains arising from stochastic processing networks as well as constant step-size stochastic optimization.
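The principle behind drift-based Wasserstein bounds can be seen in a chain simple enough to analyze by hand. The sketch below is not the DCDC algorithm (no CDE, no neural solver); it demonstrates, for an AR(1) chain X_{n+1} = a·X_n + ε_n, how a synchronous coupling (two copies driven by the same noise) exhibits the geometric contraction W1(law(X_n), π) ≤ |a|^n · W1(law(X_0), π) that a contractive drift condition certifies. Function and parameter names are illustrative.

```python
import numpy as np

def coupled_contraction_rate(a=0.5, n_steps=20, n_paths=1000, seed=0):
    """Estimate the per-step Wasserstein contraction rate of the AR(1)
    chain X_{n+1} = a*X_n + eps_n by running two copies driven by the
    *same* Gaussian noise (a synchronous coupling) from different starts.
    For this chain the coupled distance contracts by exactly |a| per
    step, which yields the explicit geometric convergence bound."""
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, 5.0)   # copy started away from stationarity
    y = np.zeros(n_paths)       # copy started at the origin
    gaps = [np.mean(np.abs(x - y))]
    for _ in range(n_steps):
        eps = rng.normal(size=n_paths)
        x = a * x + eps
        y = a * y + eps  # identical noise: only the gap dynamics remain
        gaps.append(np.mean(np.abs(x - y)))
    rates = [gaps[i + 1] / gaps[i] for i in range(n_steps)]
    return gaps, rates
```

For realistic chains the contraction rate is not constant across the state space, which is precisely the setting where a learned solution of a drift equation (rather than a hand-derived Lyapunov function) becomes valuable.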
Authors
Yanlin Qu, Jose Blanchet, Peter Glynn
Publication date
2024/5/30
Journal
arXiv preprint arXiv:2405.20435