Beyond Exact Sparsity: Minimax Learning Under Approximate Models
Classical high-dimensional statistics often assumes that models are exactly sparse: only a handful of coefficients matter, and the rest vanish. But what happens when reality is messier? In many modern problems, from nonparametric regression with splines to kernel methods, linear structures fit the data only approximately. In this talk, we’ll explore how moving from exact to approximate sparsity reshapes what is statistically possible.
I’ll introduce a new framework that extends minimax semiparametric theory to this setting, revealing surprising insights: when root-n estimation is still achievable, when “double robustness” breaks down, and how the rules differ depending on whether regressors are ordered or unordered. Along the way, we’ll see how these results challenge long-standing intuitions about sparsity, efficiency, and optimality.
Bio: Before joining Cornell University, Jelena Bradic held positions at the University of California, San Diego, in both the Department of Mathematics and the Halıcıoğlu Data Science Institute. She earned her Ph.D. in statistics from the Department of Operations Research and Financial Engineering at Princeton University, where she studied under Professor Jianqing Fan. Prior to that, she completed her B.S. and M.S. degrees in mathematics at the University of Belgrade in Serbia.