Description

This paper introduces the concept of a "backward" step, in contrast to forward-style algorithms such as Boosting and Forward Stagewise Fitting. Like classical elimination methods, this "backward" step works by shrinking the model complexity of an ensemble learner. Through a step-by-step analysis, we show that this additional step is necessary for minimizing the $L_1$-penalized loss (Lasso loss). We also propose the BLasso algorithm, which combines backward and forward steps and is able to produce the complete regularization path for Lasso problems. Moreover, BLasso can be generalized to problems with a general convex loss and a general convex penalty.
