Boosting🔗

In this class, we introduce the principle of boosting, an incremental way to build a "strong" classifier out of "weak" classifiers. As such, this technique is an ensemble method. More specifically, the "weak" classifiers are added sequentially, so that each new model compensates for the flaws of the ensemble composed of the previous models. We then introduce gradient boosting as a generalization of boosting, in which each new model is fitted to the gradient of the loss in order to drive the incremental addition of models.
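The sequential idea above can be sketched with least-squares gradient boosting: with the squared loss, the negative gradient at each step is simply the residual, so each new weak learner (here, a hypothetical regression stump) is fitted to the residuals of the current ensemble. This is an illustrative toy implementation, not the notebook's code.

```python
import numpy as np

def fit_stump(X, y):
    # Weak learner: exhaustively search threshold splits on each feature,
    # predicting the mean of y on each side (a squared-loss regression stump).
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left = X[:, j] <= t
            if left.all() or (~left).all():
                continue
            pl, pr = y[left].mean(), y[~left].mean()
            err = ((y[left] - pl) ** 2).sum() + ((y[~left] - pr) ** 2).sum()
            if best is None or err < best[0]:
                best = (err, j, t, pl, pr)
    _, j, t, pl, pr = best
    return lambda Z: np.where(Z[:, j] <= t, pl, pr)

def gradient_boost(X, y, n_rounds=50, lr=0.1):
    # With the squared loss, the negative gradient is the residual y - F(x),
    # so "fitting the gradient" means fitting each stump to the residuals.
    base = y.mean()
    F = np.full(len(y), base)
    stumps = []
    for _ in range(n_rounds):
        h = fit_stump(X, y - F)   # new weak learner compensates the ensemble's errors
        F += lr * h(X)            # add it to the ensemble, shrunk by the learning rate
        stumps.append(h)
    return lambda Z: base + lr * sum(h(Z) for h in stumps)

# Toy 1-D regression: boosting drives the training error well below
# that of the constant (mean) predictor.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0])
model = gradient_boost(X, y)
mse = np.mean((model(X) - y) ** 2)
```

The learning rate `lr` shrinks each stump's contribution, trading more rounds for better generalization, which is the standard regularization lever in gradient boosting.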

Notebook

References🔗

R. E. Schapire. *The Boosting Approach to Machine Learning: An Overview*. MSRI Workshop on Nonlinear Estimation and Classification, 2002.