What-If Tool 1.5 released - attributions mode!

James Wexler

Nov 21, 2019, 11:45:24 AM
to What-If Tool
The witwidget 1.5 pip package is now out.

The main addition is full support for models that return feature-wise attributions. If your model's prediction function can return floating point numbers representing attributions for each feature value for each example, then WIT will use that information in the visualization.
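To make the expected shape concrete, here is a minimal sketch of a prediction function that returns per-feature attributions alongside its predictions. The dict-with-"predictions"-and-"attributions" return shape follows my reading of the WIT custom prediction function documentation; the linear "model" and its weights are toy stand-ins, not anything from the release.

```python
def predict_with_attributions(examples):
    """examples: list of dicts mapping feature name -> float value.

    Returns a dict with parallel "predictions" and "attributions" lists,
    one entry per input example (the shape witwidget 1.5 can consume,
    per the WIT docs; hedged -- check the docs for your version).
    """
    # Toy linear model: the prediction is a weighted sum of feature values.
    weights = {"age": 0.04, "hours_per_week": 0.02}
    predictions = []
    attributions = []
    for ex in examples:
        # For a linear model, weight * value is an exact attribution:
        # it is precisely that feature's contribution to the score.
        per_feature = {name: weights.get(name, 0.0) * value
                       for name, value in ex.items()}
        predictions.append(sum(per_feature.values()))
        attributions.append(per_feature)
    return {"predictions": predictions, "attributions": attributions}
```

For a real model you would replace the weighted sum with your model's inference call and the per-feature dict with attributions from a method such as SHAP or integrated gradients.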

When datapoints are selected, their features are sorted and colored by attribution value (rather than alphabetically; a dropdown lets you change the sort order, including sorting by absolute attribution), with each attribution value displayed right next to the feature value. Additionally, those attribution values can be used just like any other column from your dataset. They can serve as an axis of a scatter plot or histogram (e.g. a scatter plot of attributions of the "age" feature versus the prediction from the model). They can also be used as a dimension to slice your results by in the Performance tab, so you could compare aggregate model performance on datapoints with low attribution to the "age" feature versus those with high attribution to the "age" feature.
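The two uses described above can be sketched on plain Python data: sorting one datapoint's features by absolute attribution, and splitting a dataset into low/high-attribution slices for a feature. This is only an illustration of the logic WIT applies, not WIT's internals; the function names and data shapes are mine.

```python
def sort_features_by_attribution(attributions):
    """attributions: dict of feature name -> attribution for one datapoint.
    Returns (feature, attribution) pairs, largest absolute value first,
    mirroring WIT's "sort by absolute attribution" option."""
    return sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)


def slice_by_attribution(examples, feature, threshold):
    """examples: list of dicts, each with an "attributions" sub-dict.
    Splits the examples into (low, high) groups by comparing the named
    feature's attribution against the threshold, as you would to compare
    aggregate performance across the two slices."""
    low = [ex for ex in examples if ex["attributions"][feature] < threshold]
    high = [ex for ex in examples if ex["attributions"][feature] >= threshold]
    return low, high
```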

Google Cloud also announced their new Explainable AI features today (goo.gle/2qAd6v5). The What-If Tool works out of the box with any model deployed to Cloud using the explainability feature. Just as you could already use WIT with a cloud-deployed model through a single line of Python configuration, if that model has explainability enabled, WIT will now show its attributions with no additional code or setup needed.
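As a configuration sketch (not runnable without a deployed model), the cloud setup looks roughly like this; `WitConfigBuilder` and `set_ai_platform_model` come from the witwidget package as I understand its API, while the project, model, and version names below are placeholders, and `test_examples` is assumed to be a list of tf.Example protos.

```python
# Hedged sketch: point WIT at a model deployed on Google Cloud AI Platform.
# If the deployed model has explainability enabled, attributions appear in
# WIT with no extra code, per the announcement above.
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

config = (WitConfigBuilder(test_examples)  # test_examples: list of tf.Examples
          .set_ai_platform_model('my-gcp-project',   # placeholder project
                                 'my-model',         # placeholder model name
                                 'v1'))              # placeholder version
WitWidget(config)
```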

Example colabs:
https://colab.research.google.com/github/PAIR-code/what-if-tool/blob/master/WIT_COMPAS_with_SHAP.ipynb - Train a COMPAS binary classifier, use the SHAP library to get attributions, and display the results in WIT.
https://colab.research.google.com/github/GoogleCloudPlatform/ml-on-gcp/blob/master/tutorials/explanations/ai-explanations-tabular.ipynb - Train a bike trip estimation regression model, enable it to return attributions through integrated gradients with the new Explainable AI feature from Google Cloud, deploy it to Cloud, and then use WIT to query the model and display the results including attributions.

The WIT age prediction regression web demo (https://pair-code.github.io/what-if-tool/age.html) has also been updated to show attributions (calculated through vanilla gradients). The screenshot below shows this demo with a single datapoint selected. You can see its feature values colored and sorted by how each value affected the model's prediction, and the scatter plot showing the attribution of the "education" feature plotted against the predicted regression value.

[Screenshot: fullscreen-attr.png]


