{
"cells": [
{

"cell_type": "markdown", "metadata": {}, "source": [

"## Generate feasible counterfactual explanations using a VAE"

]

}, {

"cell_type": "markdown", "metadata": {}, "source": [

"This notebook presents a variational-inference-based approach for generating feasible counterfactuals: we first train an encoder-decoder (VAE) framework and then use it to generate counterfactuals. More details about our framework can be found here: https://arxiv.org/abs/1912.03277"

]

}, {

"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [

"# import DiCE\n", "import dice_ml\n", "from dice_ml.utils import helpers # helper functions\n", "\n", "%load_ext autoreload\n", "%autoreload 2"

]

}, {

"cell_type": "markdown", "metadata": {}, "source": [

"DiCE requires two inputs: a training dataset and a pre-trained ML model. It can also work without access to the full dataset (see this [notebook](DiCE_with_private_data.ipynb) for advanced examples)."

]
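
}, {

"cell_type": "markdown", "metadata": {}, "source": [

"As a quick illustration of the no-data mode mentioned above, the sketch below constructs a DiCE data object from feature metadata only (ranges for continuous features, category lists for discrete ones). The feature ranges and category values shown are illustrative assumptions, and the object is not used in the rest of this notebook; see the linked notebook for a complete example."

]

}, {

"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [

"# Minimal sketch (not used later): a Data object built from feature metadata only,\n", "# for settings where the full training dataframe is unavailable.\n", "# The ranges/values below are illustrative assumptions, not the exact dataset statistics.\n", "d_private = dice_ml.Data(features={'age': [17, 90],\n", "                                   'hours_per_week': [1, 99],\n", "                                   'education': ['HS-grad', 'Bachelors', 'Masters', 'Doctorate'],\n", "                                   'gender': ['Female', 'Male']},\n", "                         outcome_name='income')"

]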

}, {

"cell_type": "markdown", "metadata": {}, "source": [

"### Loading dataset"

]

}, {

"cell_type": "markdown", "metadata": {}, "source": [

"We use the \"adult\" income dataset from the UCI Machine Learning Repository (https://archive.ics.uci.edu/ml/datasets/adult). For demonstration purposes, we transform the data as described in the dice_ml.utils.helpers module."

]

}, {

"cell_type": "code", "execution_count": null, "metadata": {

"scrolled": true

}, "outputs": [], "source": [

"dataset = helpers.load_adult_income_dataset()"

]

}, {

"cell_type": "markdown", "metadata": {}, "source": [

"This dataset has 8 features. The outcome is income, which is binarized to 0 (low income, <=50K) or 1 (high income, >50K)."

]

}, {

"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [

"dataset.head()"

]
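
}, {

"cell_type": "markdown", "metadata": {}, "source": [

"As a quick sanity check of the description above, we can confirm the feature columns and that the outcome column takes only the two binarized values. This assumes only that `dataset` is a pandas DataFrame with an `income` column, as loaded by the helper."

]

}, {

"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [

"# quick sanity check: 8 feature columns plus the binary 'income' outcome\n", "print('columns:', list(dataset.columns))\n", "print('outcome values:', sorted(dataset['income'].unique()))"

]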

}, {

"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [

"# description of transformed features\n", "adult_info = helpers.get_adult_data_info()\n", "adult_info"

]

}, {

"cell_type": "markdown", "metadata": {}, "source": [

"Given this dataset, we construct a data object for DiCE. Since continuous and discrete features are perturbed differently, we need to specify the names of the continuous features. DiCE also requires the name of the output variable that the ML model will predict."

]

}, {

"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [

"d = dice_ml.Data(dataframe=dataset, continuous_features=['age', 'hours_per_week'],\n", "                 outcome_name='income', data_name='adult', test_size=0.1)"

]

}, {

"cell_type": "markdown", "metadata": {}, "source": [

"### Loading the ML model"

]

}, {

"cell_type": "markdown", "metadata": {}, "source": [

"Below, we use a pre-trained ML model whose accuracy is comparable to other baselines. For convenience, a sample trained model is included with the DiCE package.\n", "\n", "Note that we need to specify the explainer in the model backend, because the model and the explainer must use the same backend library (PyTorch or TensorFlow)."

]

}, {

"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [

"backend = {'model': 'pytorch_model.PyTorchModel',\n", "           'explainer': 'feasible_base_vae.FeasibleBaseVAE'}\n", "ML_modelpath = helpers.get_adult_income_modelpath(backend='PYT')\n", "ML_modelpath = ML_modelpath[:-4] + '_2nodes.pth'\n", "m = dice_ml.Model(model_path=ML_modelpath, backend=backend)\n", "m.load_model()\n", "print('ML Model', m.model)"

]

}, {

"cell_type": "markdown", "metadata": {}, "source": [

"### Generate counterfactuals using a VAE model"

]

}, {

"cell_type": "markdown", "metadata": {}, "source": [

"Based on the data object d and the model object m, we can now instantiate the DiCE class for generating explanations.\n", "We use the variational-inference-based approach, where we first train an encoder-decoder framework and then use it to generate counterfactuals.\n", "\n", "The FeasibleBaseVAE class has a method train(), which trains the variational encoder-decoder framework on the input dataframe. Its pre_trained argument controls this behavior: if set to 0, the framework is re-trained every time counterfactuals are generated; if set to 1, the latest fitted VAE model is loaded instead, avoiding repeated training."

]

}, {

"cell_type": "code", "execution_count": null, "metadata": {

"scrolled": true

}, "outputs": [], "source": [

"# initiate DiCE\n", "exp = dice_ml.Dice(d, m, encoded_size=10, lr=1e-2,\n", "                   batch_size=2048, validity_reg=42.0, margin=0.165, epochs=25,\n", "                   wm1=1e-2, wm2=1e-2, wm3=1e-2)\n", "exp.train(pre_trained=1)"

]

}, {

"cell_type": "markdown", "metadata": {}, "source": [

"DiCE is a form of local explanation and requires a query input whose outcome needs to be explained. Below, we provide a sample input whose outcome is 0 (low income) according to the ML model object m."

]

}, {

"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [

"# query instance in the form of a dictionary; keys: feature name, values: feature value\n", "query_instance = {'age': 41,\n", "                  'workclass': 'Private',\n", "                  'education': 'HS-grad',\n", "                  'marital_status': 'Single',\n", "                  'occupation': 'Service',\n", "                  'race': 'White',\n", "                  'gender': 'Female',\n", "                  'hours_per_week': 45}"

]

}, {

"cell_type": "markdown", "metadata": {}, "source": [

"Given the query input, we can now generate counterfactual explanations: perturbed versions of the original input for which the ML model outputs class 1 (high income)."

]

}, {

"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [

"# generate counterfactuals\n", "dice_exp = exp.generate_counterfactuals(query_instance, total_CFs=5, desired_class=\"opposite\")\n", "# visualize the results\n", "dice_exp.visualize_as_dataframe(show_only_changes=True)"

]

}, {

"cell_type": "markdown", "metadata": {}, "source": [

"That's it! You can try generating counterfactual explanations for other examples using the same code. You can also compare the running time of this VAE-based method to DiCE's default method: the VAE-based method is much faster!\n", "\n", "## Adding feasibility constraints\n", "However, you might notice that for some examples, the above method can still return infeasible counterfactuals. This requires our base framework to be adapted for producing feasible counterfactuals. A detailed description of how we adapt the method under different assumptions is provided in [this paper](https://arxiv.org/pdf/1912.03277).\n", "\n", "In the section below, we show an adaptation of our base approach for preserving the Age-Ed constraint: Age and Education can never decrease, and increasing Education implies an increase in Age. This approach is called ModelApprox, where we adapt our base approach for simple unary and binary constraints."

]

}, {

"cell_type": "markdown", "metadata": {}, "source": [

"### ModelApprox"

]

}, {

"cell_type": "markdown", "metadata": {}, "source": [

"Similar to the FeasibleBaseVAE class above, the FeasibleModelApprox class has a method train() with the argument pre_trained, which determines whether to train the framework again or load the latest fitted model. However, there are additional arguments to the train() method:\n", "\n", "1. The first argument determines whether the constraint to be preserved is unary or monotonic.\n", "2. The second argument provides the list of constraint variable names: [[Effect, Cause_1, .., Cause_n]]. For a unary constraint, there are no causes, only a single constrained variable.\n", "3. The third argument provides the intended direction of change for the constrained variables: a value of 1 means that the constrained variable is only allowed to increase when moving from the data point to its counterfactual, and vice versa.\n", "4. The fourth argument is the penalty weight for infeasibility under the given constraint."

]

}, {

"cell_type": "markdown", "metadata": {}, "source": [

"### Initialize the Model and Explainer for FeasibleModelApprox"

]

}, {

"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [

"backend = {'model': 'pytorch_model.PyTorchModel',\n", "           'explainer': 'feasible_model_approx.FeasibleModelApprox'}\n", "ML_modelpath = helpers.get_adult_income_modelpath(backend='PYT')\n", "ML_modelpath = ML_modelpath[:-4] + '_2nodes.pth'\n", "m = dice_ml.Model(model_path=ML_modelpath, backend=backend)\n", "m.load_model()\n", "print('ML Model', m.model)"

]

}, {

"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [

"# initiate DiCE\n", "exp = dice_ml.Dice(d, m, encoded_size=10, lr=1e-2, batch_size=2048,\n", "                   validity_reg=76.0, margin=0.344, epochs=25,\n", "                   wm1=1e-2, wm2=1e-2, wm3=1e-2)\n", "# train with the constraint arguments described above (constraint type, constraint variables,\n", "# direction of change, penalty weight), loading the pre-trained VAE if available\n", "exp.train(1, [[0]], 1, 87, pre_trained=1)"

]

}, {

"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [

"# generate counterfactuals\n", "dice_exp = exp.generate_counterfactuals(query_instance, total_CFs=5, desired_class=\"opposite\")\n", "# visualize the results\n", "dice_exp.visualize_as_dataframe(show_only_changes=True)"

]
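
}, {

"cell_type": "markdown", "metadata": {}, "source": [

"As an optional, illustrative check of the Age-Ed constraint, the sketch below verifies that neither Age nor Education decreases in the generated counterfactuals relative to the query instance. It assumes the counterfactuals are accessible as a pandas DataFrame (here via a `final_cfs_df` attribute, which may be named differently in your dice_ml version) and uses an assumed ranking of the education categories. You can run the same check on the BaseVAE counterfactuals from the earlier section for comparison."

]

}, {

"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [

"# Illustrative Age-Ed feasibility check (not part of the DiCE API).\n", "# Assumptions: counterfactuals are available as a pandas DataFrame with the same feature\n", "# columns as the query, and education levels can be ranked as below (assumed ordering;\n", "# categories missing from the map default to rank 0).\n", "edu_rank = {'School': 0, 'HS-grad': 1, 'Some-college': 2, 'Assoc': 3,\n", "            'Bachelors': 4, 'Masters': 5, 'Prof-school': 6, 'Doctorate': 7}\n", "\n", "def violates_age_ed(query, cf_row):\n", "    \"\"\"Return True if the counterfactual decreases age or education relative to the query.\"\"\"\n", "    age_decreased = float(cf_row['age']) < float(query['age'])\n", "    edu_decreased = edu_rank.get(cf_row['education'], 0) < edu_rank.get(query['education'], 0)\n", "    return age_decreased or edu_decreased\n", "\n", "cfs_df = dice_exp.final_cfs_df  # assumed attribute name; adjust for your dice_ml version\n", "for _, row in cfs_df.iterrows():\n", "    print('violates Age-Ed constraint:', violates_age_ed(query_instance, row))"

]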

}, {

"cell_type": "markdown", "metadata": {}, "source": [

"The results for ModelApprox show that, unlike with the BaseVAE method, Age also increases whenever Education increases in the counterfactual explanations. You can experiment with ModelApprox to preserve unary and monotonic constraints for other datasets too. Examples for even more advanced approaches like **SCMGenCF** and **OracleGenCF**, where we learn to generate feasible counterfactuals for complex feasibility constraints, will be added to this repository soon. More details can be found in [our paper](https://arxiv.org/pdf/1912.03277)."

]

}

], "metadata": {

"nbsphinx": {

"execute": "never"

}, "kernelspec": {

"display_name": "Python 3", "language": "python", "name": "python3"

}, "language_info": {

"codemirror_mode": {

"name": "ipython", "version": 3

}, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.12"

}, "toc": {

"base_numbering": 1, "nav_menu": {}, "number_sections": true, "sideBar": true, "skip_h1_title": false, "title_cell": "Table of Contents", "title_sidebar": "Contents", "toc_cell": false, "toc_position": {}, "toc_section_display": true, "toc_window_display": true

}, "varInspector": {

"cols": {

"lenName": 16, "lenType": 16, "lenVar": 40

}, "kernels_config": {

"python": {

"delete_cmd_postfix": "", "delete_cmd_prefix": "del ", "library": "var_list.py", "varRefreshCmd": "print(var_dic_list())"

}, "r": {

"delete_cmd_postfix": ") ", "delete_cmd_prefix": "rm(", "library": "var_list.r", "varRefreshCmd": "cat(var_dic_list()) "

}

}, "types_to_exclude": [

"module", "function", "builtin_function_or_method", "instance", "_Feature"

], "window_display": false

}

}, "nbformat": 4, "nbformat_minor": 4

}