Mathematical Sciences - Colloquium - "Some Priors for Nonparametric Shrinkage and Bayesian Sparsity Inference" by Minsuk Shin (Harvard University)

Monday, December 10, 2018
11:00 am to 12:00 pm



Speaker: Minsuk Shin, Harvard University
Title: Some Priors for Nonparametric Shrinkage and Bayesian Sparsity Inference

Abstract: In this talk, I introduce two novel classes of shrinkage priors for different purposes: the functional HorseShoe (fHS) prior for nonparametric subspace shrinkage, and neuronized priors for general sparsity inference.
In function estimation problems, the fHS prior encourages shrinkage toward parametric classes of functions. Unlike other shrinkage priors for parametric models, the fHS prior acts on the shape of the function rather than inducing sparsity on model parameters. I study desirable theoretical properties, including an optimal posterior concentration property for the function and model selection consistency. I apply the fHS prior to nonparametric additive models on simulated and real data sets, and the results show that the proposed procedure outperforms state-of-the-art methods in both estimation and model selection.
For general sparsity inference, I also propose neuronized priors, which unify and extend existing shrinkage priors, including one-group continuous shrinkage priors, continuous spike-and-slab priors, and discrete spike-and-slab priors with point-mass mixtures. A neuronized prior is formulated as the product of a weight variable and a transformed scale variable passed through an activation function. By altering the activation function, practitioners can easily implement a large class of Bayesian variable selection procedures. Compared with classic spike-and-slab priors, neuronized priors achieve the same explicit variable selection without employing any latent indicator variable, which results in more efficient MCMC algorithms and more effective posterior modal estimates. I also show that these new formulations can be applied to more general and computationally challenging sparsity inference problems, such as structured sparsity and spatially correlated sparsity.
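To make the product construction concrete, here is a minimal sketch of the idea described above: each coefficient is a weight variable multiplied by a scale variable passed through an activation function. The variable names (alpha, w, tau, alpha0), the Gaussian choices, and the specific activations below are illustrative assumptions, not the talk's exact specification.

```python
import numpy as np

rng = np.random.default_rng(0)

def neuronized_draws(activation, n=100_000, tau=1.0, alpha0=0.0):
    """Draw theta = activation(alpha - alpha0) * w (illustrative construction)."""
    alpha = rng.standard_normal(n)    # scale variable (assumed N(0, 1))
    w = tau * rng.standard_normal(n)  # weight variable (assumed N(0, tau^2))
    return activation(alpha - alpha0) * w

relu = lambda a: np.maximum(a, 0.0)  # produces exact zeros: spike-and-slab-like
identity = lambda a: a               # never exactly zero: continuous shrinkage

theta_ss = neuronized_draws(relu, alpha0=1.0)
theta_cont = neuronized_draws(identity)

# With a ReLU activation, a positive offset alpha0 places positive probability
# mass exactly at zero, mimicking a discrete spike-and-slab prior; the
# identity activation instead yields a continuous shrinkage prior.
print("P(theta == 0), ReLU:    ", np.mean(theta_ss == 0.0))
print("P(theta == 0), identity:", np.mean(theta_cont == 0.0))
```

Swapping only the activation function switches between these prior families, which is the sense in which one construction can unify continuous shrinkage and spike-and-slab priors.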
Olin Hall 126