Sparse embedding: A framework for sparsity promoting dimensionality reduction

Hien V. Nguyen, Nasser M. Nasrabadi, Rama Chellappa

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

42 Scopus citations

Abstract

We introduce a novel framework, called sparse embedding (SE), for simultaneous dimensionality reduction and dictionary learning. We formulate an optimization problem for learning a transformation from the original signal domain to a lower-dimensional one in a way that preserves the sparse structure of the data. We propose an efficient optimization algorithm and present its non-linear extension based on kernel methods. A key feature of our method is its computational efficiency: learning is done in the lower-dimensional space, and the transformation discards the irrelevant part of the signal that would otherwise derail the dictionary learning process. Various experiments show that our method captures the meaningful structure of the data and can perform significantly better than many competitive algorithms on signal recovery and object classification tasks.
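
For readers skimming this record, a hedged sketch of the kind of joint objective the abstract describes may help. The notation below (data matrix Y, orthonormal projection P, dictionary D, sparse codes X, sparsity level T_0) is our assumption for illustration and need not match the paper's exact formulation:

    \min_{P, D, X} \; \| P Y - D X \|_F^2 + \lambda \, \| Y - P^{\top} P Y \|_F^2
    \quad \text{s.t.} \quad P P^{\top} = I, \quad \| x_i \|_0 \le T_0 \;\; \forall i

The first term fits a dictionary to the projected data, the second penalizes the signal energy lost by the projection, and the orthonormality constraint keeps P well conditioned. A generic solver for such an objective alternates three steps, as in the minimal NumPy sketch below; the function names and update rules (OMP sparse coding, a MOD-style dictionary update, and a gradient step on P with a QR retraction) are illustrative assumptions, not the paper's algorithm:

    import numpy as np

    def omp(D, z, sparsity):
        """Orthogonal matching pursuit: greedy sparse code of z over D."""
        idx, resid = [], z.copy()
        x = np.zeros(D.shape[1])
        for _ in range(sparsity):
            # Pick the atom most correlated with the current residual.
            idx.append(int(np.argmax(np.abs(D.T @ resid))))
            sub = D[:, idx]
            coef, *_ = np.linalg.lstsq(sub, z, rcond=None)
            resid = z - sub @ coef
        x[idx] = coef
        return x

    def sparse_embedding_sketch(Y, d, n_atoms, sparsity, lam=0.5,
                                n_iter=30, step=1e-3, seed=0):
        """Illustrative alternating scheme, not the paper's algorithm."""
        rng = np.random.default_rng(seed)
        m, n = Y.shape
        # Orthonormal projection P (d x m): QR of a random Gaussian matrix.
        P = np.linalg.qr(rng.standard_normal((m, d)))[0].T
        # Dictionary D (d x n_atoms) with unit-norm atoms.
        D = rng.standard_normal((d, n_atoms))
        D /= np.linalg.norm(D, axis=0, keepdims=True)
        for _ in range(n_iter):
            Z = P @ Y  # project the data to the reduced space
            # Sparse coding of each projected signal.
            X = np.column_stack([omp(D, Z[:, i], sparsity) for i in range(n)])
            # MOD-style dictionary update in the reduced space.
            D = Z @ np.linalg.pinv(X)
            D /= np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)
            # Euclidean gradient of the objective (simplified using
            # P P^T = I), then a QR retraction to restore orthonormal rows.
            G = 2.0 * (P @ Y - D @ X) @ Y.T - 2.0 * lam * (P @ Y) @ Y.T
            P = np.linalg.qr((P - step * G).T)[0].T
        return P, D, X

The kernel extension mentioned in the abstract would replace inner products of the data with a kernel Gram matrix; we omit that here.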

Original language: American English
Title of host publication: Computer Vision, ECCV 2012 - 12th European Conference on Computer Vision, Proceedings
Pages: 414-427
Number of pages: 14
Edition: PART 6
DOIs
State: Published - 2012
Externally published: Yes
Event: 12th European Conference on Computer Vision, ECCV 2012 - Florence, Italy
Duration: Oct 7, 2012 – Oct 13, 2012

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Number: PART 6
Volume: 7577 LNCS

Conference

Conference: 12th European Conference on Computer Vision, ECCV 2012
Country/Territory: Italy
City: Florence
Period: 10/7/12 – 10/13/12

ASJC Scopus subject areas

  • Theoretical Computer Science
  • General Computer Science
