Fast implementation of ℓ1 regularized learning algorithms using gradient descent methods

Yunpeng Cai, Yijun Sun, Yubo Cheng, Jian Li, Steve Goodison

Research output: Contribution to conference › Paper › peer-review

9 Scopus citations


With the advent of high-throughput technologies, ℓ1 regularized learning algorithms have recently attracted much attention. Dozens of algorithms have been proposed for fast implementation, using various advanced optimization techniques. In this paper, we demonstrate that ℓ1 regularized learning problems can be easily solved using gradient-descent techniques. The basic idea is to transform a convex optimization problem with a non-differentiable objective function into an unconstrained non-convex problem on which gradient descent is guaranteed to reach a globally optimal solution. We present a detailed implementation of the algorithm, using ℓ1 regularized logistic regression as a particular application. We conduct large-scale experiments comparing the new approach with other state-of-the-art algorithms on eight medium- and large-scale problems, and demonstrate that our algorithm, though simple, performs comparably to or even better than more advanced algorithms in terms of computational efficiency and memory usage.
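The transformation described in the abstract can be illustrated with a common smoothing trick: reparameterize the weights as w = u∘u − v∘v, so that the ℓ1 penalty ‖w‖₁ is replaced by the differentiable surrogate Σ(uⱼ² + vⱼ²), and plain gradient descent applies. The sketch below uses this splitting for ℓ1 regularized logistic regression; it is an illustrative assumption of one standard reparameterization, not a reproduction of the paper's exact transformation, and the function name `l1_logreg_gd` and step sizes are hypothetical.

```python
import numpy as np

def l1_logreg_gd(X, y, lam=0.1, lr=0.1, n_iter=2000):
    """Gradient descent for l1-regularized logistic regression via the
    smooth reparameterization w = u*u - v*v (illustrative sketch; the
    paper's exact transformation may differ).

    Minimizes mean logistic loss + lam * ||w||_1, with the l1 term
    relaxed to lam * sum(u**2 + v**2), which is differentiable.
    """
    n, d = X.shape
    # Start slightly away from zero: (u, v) = (0, 0) is a saddle point
    # of the reparameterized objective.
    u = np.full(d, 1e-3)
    v = np.full(d, 1e-3)
    for _ in range(n_iter):
        w = u * u - v * v
        p = 1.0 / (1.0 + np.exp(-(X @ w)))   # sigmoid predictions
        g = X.T @ (p - y) / n                # grad of logistic loss w.r.t. w
        # Chain rule through w = u^2 - v^2, plus the smooth penalty term.
        gu = 2.0 * u * g + 2.0 * lam * u
        gv = -2.0 * v * g + 2.0 * lam * v
        u -= lr * gu
        v -= lr * gv
    return u * u - v * v
```

Because small uⱼ, vⱼ receive proportionally small gradient updates, irrelevant coordinates stay near zero, which is how the reparameterization preserves the sparsity-inducing behavior of the original ℓ1 penalty.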

Original language: English (US)
Number of pages: 10
State: Published - 2010
Event: 10th SIAM International Conference on Data Mining, SDM 2010 - Columbus, OH, United States
Duration: Apr 29, 2010 – May 1, 2010


Other: 10th SIAM International Conference on Data Mining, SDM 2010
Country/Territory: United States
City: Columbus, OH


Keywords

  • Feature selection
  • Gradient descent
  • Sparse solution

ASJC Scopus subject areas

  • Software

