Regularization for sparsity in statistical analysis and machine learning

Author:
  1. Vidaurre Henche, Diego

Supervised by:
  1. Concha Bielza Lozoya (Supervisor)
  2. Pedro Larrañaga Múgica (Supervisor)

Defended at: Universidad Politécnica de Madrid

Date of defense: 18 July 2012

Examination committee:
  1. Serafín Moral Callejón (Chair)
  2. Rubén Armañanzas Arnedillo (Secretary)
  3. Iñaki Inza Cano (Member)
  4. Juan Antonio Fernández del Pozo de Salamanca (Member)
  5. Robert Castelo Valdueza (Member)
  6. Antonio Salmerón Cerdán (Member)
  7. Vicente Gómez Cerdà (Member)

Type: Doctoral thesis

Abstract

Pragmatism is the leading motivation of regularization. Regularization can be understood as a modification of the maximum-likelihood estimator so that a reasonable answer can be given in an unstable or ill-posed situation. In this dissertation, I focus on the applications of regularization for obtaining sparse or parsimonious representations, where only a subset of the inputs is used. A particular form of regularization, L1-regularization, plays a key role in reaching sparsity. Most of the contributions presented revolve around L1-regularization, although other forms of regularization are explored (also pursuing sparsity in some sense). In addition to presenting a compact review of L1-regularization and its applications in statistics and machine learning, I devise methodology for regression, supervised classification, and structure induction of graphical models. Within the regression paradigm, I focus on kernel smoothing learning, proposing techniques for kernel design that are suitable for high-dimensional settings and sparse regression functions. I also present an application of regularized regression techniques to modeling the response of biological neurons. The advances in supervised classification deal, on the one hand, with the application of regularization for obtaining a naive Bayes classifier and, on the other hand, with a novel algorithm for brain-computer interface design that uses group regularization in an efficient manner.
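To illustrate the sparsity-inducing effect of L1-regularization mentioned above, here is a minimal sketch (not taken from the thesis) of L1-regularized least squares, i.e. the lasso, solved with plain iterative soft-thresholding (ISTA) in NumPy; the data, the penalty value, and the solver choice are all illustrative assumptions:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the L1 norm: shrink each entry toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=2000):
    """Minimize (1/2n)||y - Xb||^2 + lam*||b||_1 via iterative soft-thresholding."""
    n, p = X.shape
    b = np.zeros(p)
    # Step size from the Lipschitz constant of the smooth (squared-loss) part.
    L = np.linalg.norm(X, 2) ** 2 / n
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n
        b = soft_threshold(b - grad / L, lam / L)
    return b

# Synthetic example: 20 inputs, only 3 of which are truly relevant.
rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.standard_normal((n, p))
true_b = np.zeros(p)
true_b[:3] = [3.0, -2.0, 1.5]
y = X @ true_b + 0.1 * rng.standard_normal(n)

b_hat = lasso_ista(X, y, lam=0.1)
# A large enough penalty drives many coefficients exactly to zero,
# so the fitted model uses only a subset of the inputs.
print(np.count_nonzero(np.abs(b_hat) > 1e-8))
```

The L1 penalty is non-differentiable at zero, which is precisely what lets the soft-thresholding step set coefficients exactly to zero rather than merely shrinking them, as an L2 (ridge) penalty would.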