Getting Started

Installation

Interpret is supported across Windows, Mac, and Linux on Python 3.5+.

# pip
pip install interpret

# conda (conda-forge channel)
conda install -c conda-forge interpret

# from source
git clone https://github.com/interpretml/interpret.git && cd interpret/scripts && make install
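After installing, a quick import from a Python session is a simple way to confirm the package is available (this is just a sanity check, nothing interpret-specific):

import interpret  # should succeed with no errors if the installation worked
print("interpret imported successfully")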

InterpretML supports training interpretable models (glassbox), as well as explaining existing ML pipelines (blackbox). Let’s walk through an example of each using the UCI adult income classification dataset.

Download and Prepare Data

First, we will load the data into a standard pandas DataFrame or NumPy array, and create a train/test split. There’s no special preprocessing necessary to use your data with InterpretML.

import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

from interpret import set_visualize_provider
from interpret.provider import InlineProvider
set_visualize_provider(InlineProvider())

# Load the UCI adult income data directly from the UCI repository.
df = pd.read_csv(
    "https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data",
    header=None)
df.columns = [
    "Age", "WorkClass", "fnlwgt", "Education", "EducationNum",
    "MaritalStatus", "Occupation", "Relationship", "Race", "Gender",
    "CapitalGain", "CapitalLoss", "HoursPerWeek", "NativeCountry", "Income"
]

# Features are every column except the last; the target is a binary
# indicator for income above $50K (the raw labels carry a leading space).
X = df.iloc[:, :-1]
y = (df.iloc[:, -1] == " >50K").astype(int)

# Hold out 20% of the rows for testing.
seed = 42
np.random.seed(seed)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=seed)
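As an optional sanity check (assuming the cells above have been run), you can verify the split sizes and that the positive-class rate is similar in both splits:

print(X_train.shape, X_test.shape)    # expect roughly an 80/20 row split
print(y_train.mean(), y_test.mean())  # fraction of >50K rows in each split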

Train a Glassbox Model

Glassbox models are designed to be completely interpretable, and often provide similar accuracy to state-of-the-art methods.

InterpretML lets you train many of the latest glassbox models with the familiar scikit-learn interface.

from interpret.glassbox import ExplainableBoostingClassifier
ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)
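The fitted ebm follows the standard scikit-learn estimator interface, so the usual prediction methods work on the held-out split from above, for example:

# Standard scikit-learn style prediction on the held-out data
pred_labels = ebm.predict(X_test)        # hard class predictions (0 or 1)
pred_probs = ebm.predict_proba(X_test)   # predicted probability for each class
print(pred_labels[:5])
print(pred_probs[:5])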

Explain the Glassbox

Glassbox models can provide explanations at both a global (overall behavior) and a local (individual predictions) level.

Global explanations are useful for understanding what a model finds important, as well as for identifying potential flaws in its decision making (e.g. racial bias).

The inline visualizations embedded here are exactly what gets produced in the notebook.

For this global explanation, the initial summary page shows the most important features overall. You can use the dropdown to search, filter, and select individual features to examine in more detail.

Try looking at the “Age” feature to see how the probability of high income varies with Age, or the “Race” or “Gender” features to observe potential bias the model may have learned.

from interpret import show
show(ebm.explain_global())
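Local explanations break down individual predictions in the same way. As a minimal sketch using the first few rows of the held-out split from above:

# Explain individual predictions on a handful of test rows
show(ebm.explain_local(X_test[:5], y_test[:5]))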