Dear Colleagues,
Since I see a lot of traffic here about explainability in AI, I thought I would mention two tools for explainable AI that my group has developed. These tools learn a symbolic model expressed as a default theory, i.e., a stratified answer set program (ASP is closely related to constraint programming). They are industrial-strength tools that produce an interpretable model in which every prediction can be explained. They are competitive with state-of-the-art traditional machine learning tools such as XGBoost or multilayer perceptrons (MLPs); in addition, however, our systems generate models that are interpretable/explainable. They are freely available on GitHub and have been adopted by Atos, the French software giant, as part of their XAI toolchain. A great feature of these tools is that complex data transformations such as one-hot encoding are not needed; high-school students with a little knowledge of Python have used them successfully. The tools are also highly efficient (it takes just a few seconds to output a model).
FOLD-R++ performs binary classification; FOLD-RM performs multi-category classification.
Enhanced versions of these tools will be available soon. They produce even fewer rules, produce largely the same rules regardless of how the data is split between training and testing, and are more efficient.
As an example, for the well-known Titanic dataset on Kaggle, just two rules generated by FOLD-R++ suffice to achieve 98.6% accuracy:
status(X,perished) :- not sex(X,'female').
status(X,perished) :- class(X,'3'), sex(X,'female'), fare(X,N1), not(N1=<23.25).
and obviously
status(X,survived) :- not status(X,perished).
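For readers less familiar with ASP notation, the two rules read as executable decision logic. Below is a sketch that transliterates them into plain Python (the parameter names `sex`, `pclass`, and `fare` are illustrative; this is not the tools' API, just a hand translation of the rules above):

```python
def status(sex, pclass, fare):
    """Hand transliteration of the two FOLD-R++ Titanic rules (illustrative).

    Rule 1: a passenger perished if not female.
    Rule 2: a female passenger in 3rd class perished if fare > 23.25.
    Default: otherwise, the passenger survived
             (status(X,survived) :- not status(X,perished)).
    """
    if sex != 'female':                 # not sex(X,'female')
        return 'perished'
    if pclass == '3' and fare > 23.25:  # class(X,'3'), not(N1=<23.25)
        return 'perished'
    return 'survived'
```

For instance, `status('female', '3', 10.0)` falls through both rules and returns `'survived'`, and the chain of rule firings (or non-firings) is itself the explanation of the prediction.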
If you achieve something of significance with these tools, please write to me.
Limitation of the tools: they work only with tabular data containing numerical and categorical attributes (no images).
Enjoy
Gopal Gupta