🔥 LIT v0.4: explore tabular & image models, use new interpretability techniques, see LIT in more surfaces
Image and tabular models now supported, with new features for interpretability and analysis, including new counterfactual methods.
- TCAV for concept analysis, directly from the LIT UI.
- Notebook support for Google Cloud Vertex AI Workbench.
LIT can be used for image and tabular data with a range of new features:
- Partial dependence plots
- Visualization of datapoint-specific feature attribution scores, for models that return feature attributions
- Analysis of how fairness constraints, such as demographic parity or equal opportunity, change the optimal classification thresholds for different slices of a dataset
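As a sketch of what a partial dependence plot computes (illustrative code, not LIT's API): for a chosen feature, force every example in the dataset to each candidate value, and average the model's predictions at each value. The model and feature names below are toy assumptions.

```python
import statistics

def partial_dependence(model_fn, examples, feature, grid):
    """Average model output when `feature` is forced to each value in `grid`."""
    curve = {}
    for value in grid:
        preds = []
        for ex in examples:
            modified = dict(ex, **{feature: value})  # override the one feature
            preds.append(model_fn(modified))
        curve[value] = statistics.mean(preds)
    return curve

# Toy classifier: predicted probability rises with flipper length.
model = lambda ex: min(1.0, ex["flipper_length_mm"] / 250)
data = [{"flipper_length_mm": f, "bill_depth_mm": d}
        for f, d in [(180, 18), (200, 17), (220, 15)]]
pd_curve = partial_dependence(model, data, "flipper_length_mm", [175, 200, 225])
```

Plotting `pd_curve` (value on the x-axis, averaged prediction on the y-axis) gives the partial dependence plot for that feature.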
Setting up LIT for these data types is similar to the setup for text; see our documentation.
Check out our new demos!
- Analyze an image classifier on the Imagenette dataset: image demo
- Analyze a tabular data classifier that determines the species of a penguin: penguin demo
Testing with Concept Activation Vectors (TCAV) is an ML interpretability method that shows the importance of high-level concepts (e.g., color, gender, race) to a model, even if those concepts are not labeled in the training or evaluation data.
Explore how to use TCAV in our step-by-step tutorial and in our documentation.
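The core idea behind TCAV can be sketched in a few lines. The real method trains a linear classifier to separate activations of concept examples from random examples; the difference-of-means direction below is a simplified stand-in for that classifier, and the per-example gradients are assumed to be given.

```python
def tcav_score(concept_acts, random_acts, gradients):
    """Fraction of examples whose class logit increases along the concept direction.

    The CAV here is a difference-of-means proxy for the linear classifier
    trained in the actual TCAV method.
    """
    dim = len(concept_acts[0])
    mean = lambda vecs, i: sum(v[i] for v in vecs) / len(vecs)
    cav = [mean(concept_acts, i) - mean(random_acts, i) for i in range(dim)]
    # Directional derivative of the logit w.r.t. activations, along the CAV.
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    sensitivities = [dot(g, cav) for g in gradients]
    return sum(s > 0 for s in sensitivities) / len(sensitivities)

# Toy setup: concept ("stripes") vs. random activations in a 2-d space,
# plus d(logit)/d(activation) for three evaluation examples.
concept = [[1.0, 0.1], [0.9, 0.0]]
random_ = [[0.0, 0.2], [0.1, 0.1]]
grads = [[0.5, 0.0], [0.2, 0.1], [-0.3, 0.0]]
score = tcav_score(concept, random_, grads)
```

A score near 1 means the concept consistently pushes predictions toward the class; near 0, away from it.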
Creating what-if scenarios is now easier in LIT. For text models, use either token ablation or HotFlip to find nearby counterfactual examples.
For tabular data, find minimal perturbations to feature values, based on nearby examples from the data. Test for directional behavior (does the prediction change when it should?) as well as invariance (does it stay the same when it should?).
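One way the tabular case can be sketched (this is an illustrative example-based search, not LIT's implementation): among dataset examples whose prediction differs from the query's, pick the one that changes the fewest feature values.

```python
def nearest_counterfactual(model_fn, example, dataset):
    """Return the dataset example with a different prediction that differs
    from `example` in the fewest features."""
    base_pred = model_fn(example)
    n_changed = lambda other: sum(example[k] != other[k] for k in example)
    candidates = [ex for ex in dataset if model_fn(ex) != base_pred]
    return min(candidates, key=n_changed, default=None)

# Toy penguin-style classifier on two features.
model = lambda ex: "Gentoo" if ex["flipper_length_mm"] > 210 else "Adelie"
query = {"flipper_length_mm": 190, "bill_length_mm": 39}
data = [
    {"flipper_length_mm": 215, "bill_length_mm": 39},  # one feature changed
    {"flipper_length_mm": 220, "bill_length_mm": 47},  # two features changed
    {"flipper_length_mm": 195, "bill_length_mm": 41},  # same predicted class
]
cf = nearest_counterfactual(model, query, data)
```

Here the first candidate wins: it flips the prediction to "Gentoo" while changing only the flipper length.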
As an alternative to running a LIT server and connecting to it through a web browser, LIT can be used directly inside Python notebook environments, now including Cloud Vertex AI Workbench.
More in the documentation.
Please reach out to us through GitHub issues if you have any questions. Want to subscribe to our updates? Join the lit-announcements group.
- Made with 🔥 by the LIT team