Group Importances
In this notebook we show how to compute and interpret Overall Importances shown in InterpretML’s Global Explanations for EBMs. We also show how to compute importances of a group of features or terms.
Throughout the notebook we use the word term to denote both single features and feature interactions (pairs).
This notebook can be found in our examples folder on GitHub.
# install interpret if not already installed
try:
    import interpret
except ModuleNotFoundError:
    !pip install --quiet interpret pandas scikit-learn
Train an Explainable Boosting Machine (EBM) for a regression task
Let’s use the Diabetes dataset as a reference and train an EBM.
import numpy as np
import pandas as pd
from sklearn.datasets import load_diabetes
from interpret.glassbox import ExplainableBoostingRegressor
from interpret import set_visualize_provider
from interpret.provider import InlineProvider
set_visualize_provider(InlineProvider())
X, y = load_diabetes(return_X_y=True, as_frame=True)
ebm = ExplainableBoostingRegressor()
ebm.fit(X, y)
ExplainableBoostingRegressor()
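Recall that a term can be either a single feature or a learned interaction pair. To see exactly which terms the fitted model contains, here is a minimal sketch, assuming the term_names_ attribute exposed by recent versions of interpret:
# List the model's terms: single features (e.g. "bmi") plus any
# learned interaction pairs (shown as "feature_1 & feature_2").
print(ebm.term_names_)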
Explain the Model
EBMs provide two different kinds of explanations: global explanations about the overall model behavior and local explanations about individual predictions from the model.
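As a brief aside before focusing on global explanations, the sketch below shows how a local explanation for a few individual predictions could be requested with explain_local; the row slice and name argument are illustrative only.
from interpret import show
# Local explanations break each prediction down into per-term contributions;
# here we explain the first five rows of the training data.
show(ebm.explain_local(X[:5], y[:5], name='EBM local'))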
Global Explanation
Global Explanations are useful for understanding what a model finds important, as well as identifying potential flaws in its decision making or the training data. Let’s start by computing and displaying a global explanation:
from interpret import show
show(ebm.explain_global(name='EBM'))
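The Summary view of this global explanation ranks terms by their overall importance, which by default is the mean absolute contribution (score) of each term across the training samples. To retrieve the same numbers programmatically, a minimal sketch assuming the term_importances() helper available in recent interpret releases:
# Pair each term name with its overall importance
# (by default, the mean absolute score of the term over the training set).
for name, importance in zip(ebm.term_names_, ebm.term_importances()):
    print(f"{name}: {importance:.3f}")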