Visualizations

The visualizations that power interpret use different renderers depending on the environment they run in. In most cases, the package detects the kind of environment it is in and uses the appropriate renderer. There are times, though, when you want to explicitly select one.

Dash Renderer

The Dash renderer is used for local environments, such as running a Jupyter notebook on your laptop. It runs a Dash server, backed by Flask, in a separate process the first time it is called by interpret.

This provides access to both embedded visualizations within notebooks and the full dashboard. However, because it requires a live Flask server, it cannot render in an offline notebook.

See the source code here for an understanding of its configuration.

from interpret import set_visualize_provider
from interpret.provider import DashProvider
set_visualize_provider(DashProvider.from_address(('127.0.0.1', 7001)))
class interpret.provider.DashProvider(app_runner)

Provides rendering via Plotly’s Dash.

This works in environments that can expose HTTP(S) ports.

Initializes class.

This requires an instantiated AppRunner; call .from_address instead to initialize both.

Parameters:

app_runner – An AppRunner instance.

classmethod from_address(addr=None, base_url=None, use_relative_links=False)

Initialize a new AppRunner along with the provider.

Parameters:
  • addr – A tuple that is (ip_addr, port).

  • base_url – The base URL; this is useful when running behind a proxy.

  • use_relative_links – Use relative links for rendered pages instead of full URIs.
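
For instance, when the notebook server sits behind a reverse proxy, the address and URL handling can be configured together. This is a minimal sketch; the host, port, and proxy path below are hypothetical and should be adjusted to your deployment:

from interpret import set_visualize_provider
from interpret.provider import DashProvider

# Hypothetical deployment: Dash server on port 7001 behind a reverse proxy
# that forwards requests from the /interpret/ path prefix.
set_visualize_provider(
    DashProvider.from_address(
        ('127.0.0.1', 7001),        # (ip_addr, port)
        base_url='/interpret/',     # path prefix added by the proxy
        use_relative_links=True,    # keep rendered links relative so they survive the proxy
    )
)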

Inline Renderer

The inline renderer is used for cloud environments, where access to Flask servers is not always available. In most configurations, it injects JavaScript into each notebook cell, including the interpret-inline bundle.

This does not allow for full dashboards, but it does provide offline support.

See the source code here for an understanding of its configuration.

from interpret import set_visualize_provider
from interpret.provider import InlineProvider
set_visualize_provider(InlineProvider())
class interpret.provider.InlineProvider(detected_envs=None, js_url=None)

Provides rendering via JavaScript that is invoked within Jupyter cells.

Initializes class.

Parameters:
  • detected_envs – Environments targeted, as defined in interpret.utils.environment.

  • js_url – If defined, will load the JavaScript bundle for interpret-inline from the given URL.
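
For fully offline or air-gapped notebooks, a sketch of overriding the bundle location follows; the URL is hypothetical and should point at a copy of the interpret-inline bundle you host yourself:

from interpret import set_visualize_provider
from interpret.provider import InlineProvider

# Hypothetical URL: load the interpret-inline JavaScript bundle from your
# own server instead of the default location.
set_visualize_provider(InlineProvider(js_url='https://example.com/static/interpret-inline.js'))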

Interactivity

The visualizations consume the Interpret API, which is responsible for both displaying explanations and the underlying rendering infrastructure.

Visualizing with the show method

Interpret exposes a top-level method, show, which acts as the surface for rendering explanation visualizations. This can produce either a dropdown widget or a dashboard depending on what is provided.

Show a single explanation

For basic use cases, it is best to show one explanation at a time. The rendered widget will provide a dropdown to select between visualizations. For example, for a global explanation, it will provide an overview, along with graphs for each feature, as shown with the code below:

from interpret import set_visualize_provider
from interpret.provider import InlineProvider
set_visualize_provider(InlineProvider())
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

df = pd.read_csv(
    "https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data",
    header=None)
df.columns = [
    "Age", "WorkClass", "fnlwgt", "Education", "EducationNum",
    "MaritalStatus", "Occupation", "Relationship", "Race", "Gender",
    "CapitalGain", "CapitalLoss", "HoursPerWeek", "NativeCountry", "Income"
]
X = df.iloc[:, :-1]
y = (df.iloc[:, -1] == " >50K").astype(int)

seed = 42
np.random.seed(seed)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=seed)

ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)
ebm_global = ebm.explain_global()
show(ebm_global)




Show a specific visualization within an explanation

If you are after one specific visualization within an explanation, you can specify it with a key as the subsequent function argument.

show(ebm_global, "Age")




Show multiple explanations for comparison

If you are running in a local environment (such as Python on your laptop), show can expose a dashboard for comparison, which can be invoked in the following way (provide a list of explanations as the first argument):

from interpret.glassbox import LogisticRegression

# We have to transform categorical variables to use Logistic Regression
X_train = pd.get_dummies(X_train, prefix_sep='.').astype(float)

lr = LogisticRegression(random_state=seed, penalty='l1', solver='liblinear')
lr.fit(X_train, y_train)

lr_global = lr.explain_global()
show([ebm_global, lr_global])

Interpret API

The API is responsible for standardizing ML interpretability explainers and explanations, providing a consistent interface for both users and developers. To support this, it also provides foundational top-level methods that support visualization and data access.

Explainers are glassbox or blackbox algorithms that will produce an explanation, an artifact that is ready for visualizations or further data processing.

Explainer

An explainer will produce an explanation from its .explain_* method. These explanations normally provide an understanding of global model behavior or local individual predictions (.explain_global and .explain_local respectively).
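
For example, reusing the EBM and train/test split from the earlier sections, both calls follow the same pattern:

ebm_global = ebm.explain_global()                      # overall model behavior
ebm_local = ebm.explain_local(X_test[:5], y_test[:5])  # five individual predictions
show(ebm_local)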

class interpret.api.base.ExplainerMixin
An object that computes explanations.

This is a contract required for InterpretML.

Variables:
  • available_explanations – A list of strings that is a subset of the following: “perf”, “data”, “local”, “global”.

  • explainer_type – A string that is one of the following: “blackbox”, “model”, “specific”, “data”, “perf”.
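
To make the contract concrete, here is a minimal sketch of a custom explainer; both ConstantExplainer and the ConstantExplanation it returns (defined in the sketch under Explanation below) are hypothetical illustrations, not part of interpret:

from interpret.api.base import ExplainerMixin

class ConstantExplainer(ExplainerMixin):
    """Hypothetical explainer that assigns a fixed score to every feature."""
    available_explanations = ["global"]
    explainer_type = "blackbox"

    def __init__(self, model, feature_names):
        self.model = model
        self.feature_names = feature_names

    def explain_global(self, name=None):
        # A real explainer would interrogate self.model here; this sketch
        # hands fixed scores to the explanation object.
        scores = [1.0] * len(self.feature_names)
        return ConstantExplanation(self.feature_names, scores, name=name)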

Explanation

An explanation is a self-contained object that helps in understanding either its target model’s behavior or a set of individual predictions. The explanation should provide access to visualizations through .visualize, and to data processing through the .data method. Both .visualize and .data should share the same function signature in terms of arguments.

class interpret.api.base.ExplanationMixin
The result of calling explain_* from an Explainer. Responsible for providing data and/or visualization.

This is a contract required for InterpretML.

Variables:
  • explanation_type – A string that is one of the explainer’s available explanations. Should be one of “perf”, “data”, “local”, “global”.

  • name – A string that denotes the name of the explanation for display purposes.

  • selector – An optional dataframe that describes the data. Each row of the dataframe corresponds with a respective data item.

abstract data(key=None)

Provides specific explanation data.

Parameters:

key – A number/string that references a specific data item.

Returns:

A serializable dictionary.

abstract visualize(key=None)

Provides interactive visualizations.

Parameters:

key – Either a scalar or list that indexes the internal object for sub-plotting. If an overall visualization is requested, pass None.

Returns:

A Plotly figure, HTML as a string, or a Dash component.
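
Completing the sketch begun under Explainer above, a hypothetical explanation class could satisfy this contract as follows; the visualization is a plain Plotly bar chart:

import plotly.graph_objects as go
from interpret.api.base import ExplanationMixin

class ConstantExplanation(ExplanationMixin):
    """Hypothetical global explanation holding one score per feature."""
    explanation_type = "global"

    def __init__(self, feature_names, scores, name=None):
        self.feature_names = feature_names
        self.scores = scores
        self.name = name or "Constant Explanation"
        self.selector = None  # no per-item selector dataframe in this sketch

    def data(self, key=None):
        # Return a serializable dictionary for the whole explanation,
        # or for a single item when a key is given.
        if key is None:
            return {"names": self.feature_names, "scores": self.scores}
        return {"names": [self.feature_names[key]], "scores": [self.scores[key]]}

    def visualize(self, key=None):
        # Mirror the signature of .data, as the contract requires.
        d = self.data(key)
        return go.Figure(go.Bar(x=d["scores"], y=d["names"], orientation="h"))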

Show

The show method is used as a universal function that provides visualizations for whatever explanation(s) are provided as its arguments. Implementation-wise, it will provide some visualization platform (i.e. a dashboard or widget) and expose the explanation(s)’ visualizations as given by the .visualize call.
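
Reusing the explanations from the earlier sections, the call patterns demonstrated above summarize as:

show(ebm_global)                  # widget with a dropdown over visualizations
show(ebm_global, "Age")           # one specific visualization, selected by key
show([ebm_global, lr_global])     # dashboard comparing multiple explanations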