Local-learning-based feature selection for high-dimensional data analysis

Yijun Sun, Sinisa Todorovic, Steven Goodison

Research output: Contribution to journal › Article

229 Citations (Scopus)

Abstract

This paper considers feature selection for data classification in the presence of a huge number of irrelevant features. We propose a new feature-selection algorithm that addresses several major issues with prior work, including problems with algorithm implementation, computational complexity, and solution accuracy. The key idea is to decompose an arbitrarily complex nonlinear problem into a set of locally linear ones through local learning, and then learn feature relevance globally within the large margin framework. The proposed algorithm is based on well-established machine learning and numerical analysis techniques, without making any assumptions about the underlying data distribution. It is capable of processing many thousands of features within minutes on a personal computer while maintaining a very high accuracy that is nearly insensitive to a growing number of irrelevant features. Theoretical analyses of the algorithm's sample complexity suggest that the algorithm has a logarithmic sample complexity with respect to the number of features. Experiments on 11 synthetic and real-world data sets demonstrate the viability of our formulation of the feature-selection problem for supervised learning and the effectiveness of our algorithm.
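The abstract describes decomposing a nonlinear problem into locally linear ones and learning feature relevance globally in a large-margin framework with ℓ1 regularization. A minimal Relief-style sketch of that idea is shown below; the function name, hyperparameters, and the plain projected-subgradient update are illustrative assumptions, not the paper's actual algorithm:

```python
import numpy as np

def local_learning_weights(X, y, n_iter=30, lr=0.05, lam=0.05):
    """Illustrative local-learning feature weighting (a Relief-style
    sketch of the abstract's idea, not the paper's exact method).

    For each sample, find its nearest same-class neighbor ("hit") and
    nearest other-class neighbor ("miss") under the current feature
    weights; their per-feature distance difference defines a local
    margin, maximized globally through a logistic loss with an l1
    penalty that drives irrelevant feature weights toward zero.
    """
    n, d = X.shape
    w = np.ones(d)                                # nonnegative feature weights
    for _ in range(n_iter):
        grad = np.zeros(d)
        for i in range(n):
            diff = np.abs(X - X[i])               # (n, d) per-feature distances
            dist = diff @ w                       # weighted L1 distances
            dist[i] = np.inf                      # exclude the point itself
            same = y == y[i]
            hit = np.argmin(np.where(same, dist, np.inf))
            miss = np.argmin(np.where(~same, dist, np.inf))
            z = diff[miss] - diff[hit]            # local margin vector
            m = np.clip(w @ z, -50.0, 50.0)       # scalar margin
            grad -= z / (1.0 + np.exp(m))         # logistic loss gradient
        grad = grad / n + lam * np.sign(w)        # l1 subgradient
        w = np.maximum(w - lr * grad, 0.0)        # project onto w >= 0
    return w
```

On synthetic data where only one feature separates the classes, that feature's weight should come to dominate, echoing the abstract's claim of insensitivity to irrelevant features.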

Original language: English (US)
Article number: 5342431
Pages (from-to): 1610-1626
Number of pages: 17
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 32
Issue number: 9
DOIs: 10.1109/TPAMI.2009.190
State: Published - Aug 13 2010
Externally published: Yes

Keywords

  • ℓ1 regularization
  • Feature selection
  • local learning
  • logistic regression
  • sample complexity

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
  • Computational Theory and Mathematics
  • Artificial Intelligence
  • Applied Mathematics

Cite this

Local-learning-based feature selection for high-dimensional data analysis. / Sun, Yijun; Todorovic, Sinisa; Goodison, Steven.

In: IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 32, No. 9, 5342431, 13.08.2010, p. 1610-1626.

Research output: Contribution to journal › Article

@article{eaa9bc06d28e4b5c8680f939a0f7cf10,
title = "Local-learning-based feature selection for high-dimensional data analysis",
abstract = "This paper considers feature selection for data classification in the presence of a huge number of irrelevant features. We propose a new feature-selection algorithm that addresses several major issues with prior work, including problems with algorithm implementation, computational complexity, and solution accuracy. The key idea is to decompose an arbitrarily complex nonlinear problem into a set of locally linear ones through local learning, and then learn feature relevance globally within the large margin framework. The proposed algorithm is based on well-established machine learning and numerical analysis techniques, without making any assumptions about the underlying data distribution. It is capable of processing many thousands of features within minutes on a personal computer while maintaining a very high accuracy that is nearly insensitive to a growing number of irrelevant features. Theoretical analyses of the algorithm's sample complexity suggest that the algorithm has a logarithmic sample complexity with respect to the number of features. Experiments on 11 synthetic and real-world data sets demonstrate the viability of our formulation of the feature-selection problem for supervised learning and the effectiveness of our algorithm.",
keywords = "ℓ1 regularization, Feature selection, local learning, logistic regression, sample complexity",
author = "Yijun Sun and Sinisa Todorovic and Steven Goodison",
year = "2010",
month = "8",
day = "13",
doi = "10.1109/TPAMI.2009.190",
language = "English (US)",
volume = "32",
pages = "1610--1626",
journal = "IEEE Transactions on Pattern Analysis and Machine Intelligence",
issn = "0162-8828",
publisher = "IEEE Computer Society",
number = "9",

}

TY - JOUR

T1 - Local-learning-based feature selection for high-dimensional data analysis

AU - Sun, Yijun

AU - Todorovic, Sinisa

AU - Goodison, Steven

PY - 2010/8/13

Y1 - 2010/8/13

N2 - This paper considers feature selection for data classification in the presence of a huge number of irrelevant features. We propose a new feature-selection algorithm that addresses several major issues with prior work, including problems with algorithm implementation, computational complexity, and solution accuracy. The key idea is to decompose an arbitrarily complex nonlinear problem into a set of locally linear ones through local learning, and then learn feature relevance globally within the large margin framework. The proposed algorithm is based on well-established machine learning and numerical analysis techniques, without making any assumptions about the underlying data distribution. It is capable of processing many thousands of features within minutes on a personal computer while maintaining a very high accuracy that is nearly insensitive to a growing number of irrelevant features. Theoretical analyses of the algorithm's sample complexity suggest that the algorithm has a logarithmic sample complexity with respect to the number of features. Experiments on 11 synthetic and real-world data sets demonstrate the viability of our formulation of the feature-selection problem for supervised learning and the effectiveness of our algorithm.

AB - This paper considers feature selection for data classification in the presence of a huge number of irrelevant features. We propose a new feature-selection algorithm that addresses several major issues with prior work, including problems with algorithm implementation, computational complexity, and solution accuracy. The key idea is to decompose an arbitrarily complex nonlinear problem into a set of locally linear ones through local learning, and then learn feature relevance globally within the large margin framework. The proposed algorithm is based on well-established machine learning and numerical analysis techniques, without making any assumptions about the underlying data distribution. It is capable of processing many thousands of features within minutes on a personal computer while maintaining a very high accuracy that is nearly insensitive to a growing number of irrelevant features. Theoretical analyses of the algorithm's sample complexity suggest that the algorithm has a logarithmic sample complexity with respect to the number of features. Experiments on 11 synthetic and real-world data sets demonstrate the viability of our formulation of the feature-selection problem for supervised learning and the effectiveness of our algorithm.

KW - ℓ1 regularization

KW - Feature selection

KW - local learning

KW - logistic regression

KW - sample complexity

UR - http://www.scopus.com/inward/record.url?scp=77955397866&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=77955397866&partnerID=8YFLogxK

U2 - 10.1109/TPAMI.2009.190

DO - 10.1109/TPAMI.2009.190

M3 - Article

VL - 32

SP - 1610

EP - 1626

JO - IEEE Transactions on Pattern Analysis and Machine Intelligence

JF - IEEE Transactions on Pattern Analysis and Machine Intelligence

SN - 0162-8828

IS - 9

M1 - 5342431

ER -