Regularization for sparsity in statistical analysis and machine learning
- Vidaurre Henche, Diego
- Concha Bielza Lozoya (Director)
- Pedro Larrañaga Múgica (Director)
Defended at: Universidad Politécnica de Madrid
Date of defense: 18 July 2012
- Serafín Moral Callejón (Chair)
- Rubén Armañanzas Arnedillo (Secretary)
- Iñaki Inza Cano (Member)
- Juan Antonio Fernández del Pozo de Salamanca (Member)
- Robert Castelo Valdueza (Member)
- Antonio Salmerón Cerdán (Member)
- Vicente Gómez Cerdà (Member)
Type: Thesis
Abstract
Pragmatism is the leading motivation of regularization. Regularization can be understood as a modification of the maximum-likelihood estimator so that a reasonable answer can be given in an unstable or ill-posed situation. In this dissertation, I focus on applications of regularization for obtaining sparse or parsimonious representations, in which only a subset of the inputs is used. A particular form of regularization, L1-regularization, plays a key role in achieving sparsity. Most of the contributions presented revolve around L1-regularization, although other forms of regularization, also pursuing sparsity in some sense, are explored. In addition to presenting a compact review of L1-regularization and its applications in statistics and machine learning, I devise methodology for regression, supervised classification and structure induction of graphical models. Within the regression paradigm, I focus on kernel smoothing learning, proposing techniques for kernel design that are suitable for high-dimensional settings and sparse regression functions. I also present an application of regularized regression techniques to modeling the response of biological neurons. The advances in supervised classification deal, on the one hand, with the application of regularization to obtain a naive Bayes classifier and, on the other hand, with a novel algorithm for brain-computer interface design that uses group regularization in an efficient manner.
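For concreteness, the following is a minimal sketch of the estimator described above, written in a standard textbook formulation rather than in notation taken from the thesis itself. L1-regularization replaces the maximum-likelihood problem with a penalized one,

\hat{\beta} \in \arg\min_{\beta \in \mathbb{R}^p} \left\{ -\ell(\beta) + \lambda \sum_{j=1}^{p} |\beta_j| \right\}, \qquad \lambda \ge 0,

where \ell(\beta) denotes the log-likelihood of the data and \lambda controls the strength of the penalty. Because the L1 norm is non-differentiable at zero, a sufficiently large \lambda drives some coefficients exactly to zero, which is what produces the sparse, parsimonious representations the dissertation pursues.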