ClassificationTree#

Algorithm description: Decision Tree

class interpret.glassbox.ClassificationTree(feature_names=None, feature_types=None, max_depth=3, **kwargs)#

Classification tree with shallow depth.

Initializes the tree with a low maximum depth to keep it interpretable.

Parameters:
  • feature_names – List of feature names.

  • feature_types – List of feature types.

  • max_depth – Max depth of tree.

  • **kwargs – Additional keyword arguments passed to the underlying tree's __init__() method.

explain_global(name=None)#

Provides a global explanation of the model.

Parameters:

name – User-defined explanation name.

Returns:

An explanation object, visualizing feature-value pairs as a horizontal bar chart.

explain_local(X, y=None, name=None)#

Provides local explanations for the provided instances.

Parameters:
  • X – Numpy array of instances to explain.

  • y – Numpy vector of labels for the instances to explain.

  • name – User-defined explanation name.

Returns:

An explanation object.

fit(X, y)#

Fits the model to the provided instances.

Parameters:
  • X – Numpy array of training instances.

  • y – Numpy array of training labels.

Returns:

Itself.

predict(X)#

Predicts class labels for the provided instances.

Parameters:

X – Numpy array for instances.

Returns:

Predicted class label per instance.

predict_proba(X)#

Probability estimates for the provided instances.

Parameters:

X – Numpy array for instances.

Returns:

Per-class probability estimates for each instance.

score(X, y, sample_weight=None)#

Return the mean accuracy on the given test data and labels.

In multi-label classification, this is the subset accuracy, which is a harsh metric since it requires that each label set be correctly predicted for each sample.

Parameters:
  • X (array-like of shape (n_samples, n_features)) – Test samples.

  • y (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True labels for X.

  • sample_weight (array-like of shape (n_samples,), default=None) – Sample weights.

Returns:

score – Mean accuracy of self.predict(X) w.r.t. y.

Return type:

float