
Explainability of ML algorithms

This class covers the theoretical concepts of explainability in machine learning, as well as their practical implementation. It is divided into two parts:

  1. An overview of the concepts behind explainability of machine learning algorithms: the origins of explainability, its use across sectors by different stakeholders, the trade-off between performance and explainability, the difference between local and global explainability, and illustrative real-world use cases.
  2. A deep dive into explainability methods (based on a notebook), where the LIME and SHAP methods are introduced and illustrated through three examples (see the code sketches after this list):
    • LIME for text, based on a news-article classification algorithm
    • LIME for images, based on an InceptionV3 model
    • SHAP for tabular data, based on an XGBoost model used to predict the winning team in a League of Legends game
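
To give a flavour of the first example, here is a minimal LIME-for-text sketch. It assumes the `lime` and `scikit-learn` packages are installed; the toy corpus and the sport/finance labels below are illustrative stand-ins, not the news dataset used in the notebook.

```python
# Minimal LIME-for-text sketch (toy data, not the course's news dataset).
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny toy corpus standing in for the news-article classification task.
texts = [
    "the team won the match",
    "stocks fell on the market",
    "the striker scored a goal",
    "investors sold their shares",
]
labels = [0, 1, 0, 1]  # 0 = sport, 1 = finance
class_names = ["sport", "finance"]

# Black-box text classifier: LIME only needs its predict_proba function.
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(texts, labels)

explainer = LimeTextExplainer(class_names=class_names)
explanation = explainer.explain_instance(
    "the team scored on the market",  # instance to explain
    pipeline.predict_proba,           # black-box prediction function
    num_features=4,                   # number of top words to report
)
# Local explanation: (word, weight) pairs showing each word's contribution.
print(explanation.as_list())
```

LIME treats the classifier as a black box: it perturbs the input text (by removing words), queries the model on the perturbed samples, and fits a simple local surrogate whose weights are reported as the explanation.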
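For the tabular example, a minimal SHAP-with-XGBoost sketch is shown below. It assumes the `shap`, `xgboost` and `scikit-learn` packages are installed; the synthetic dataset stands in for the League of Legends match data used in the notebook.

```python
# Minimal SHAP-for-tabular sketch (synthetic data, not the LoL dataset).
import shap
import xgboost
from sklearn.datasets import make_classification

# Synthetic binary classification problem (e.g. "which team wins").
X, y = make_classification(n_samples=500, n_features=8, random_state=0)

model = xgboost.XGBClassifier(n_estimators=50, max_depth=3)
model.fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Local view: per-feature contributions to one individual prediction.
print(shap_values[0])

# Global view: mean |SHAP value| per feature ranks overall feature importance.
shap.summary_plot(shap_values, X)
```

SHAP values decompose each individual prediction into additive feature contributions, so the same quantities support both local explanations (one row) and global ones (aggregated over the dataset).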

Notebook (colab)

Notebook with solutions (colab)