Model Explanations
Interactive ML Explainability Demo
This demo uses SHAP (SHapley Additive exPlanations) to explain the predictions of a text classification model. Enter text below to see which words contribute most to the model's prediction.
Words highlighted in green contribute toward the positive/neutral class, while words in red contribute toward the negative/toxic class.
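For the curious, the core of an explanation like this can be produced in a few lines of Python with the `shap` library. The sketch below is illustrative rather than this demo's actual serverless implementation: it assumes a Hugging Face `transformers` sentiment pipeline (the default checkpoint, a DistilBERT model fine-tuned on SST-2) and follows SHAP's documented text-explanation pattern.

```python
import shap
import transformers

# Sentiment pipeline returning a score for every class, as SHAP's
# text explainer expects (assumes the default SST-2 checkpoint).
classifier = transformers.pipeline(
    "sentiment-analysis",
    return_all_scores=True,
)

# SHAP auto-selects a text masker for transformers pipelines and
# attributes the prediction to individual tokens.
explainer = shap.Explainer(classifier)

shap_values = explainer(["This demo is delightful and easy to use."])

# In a notebook, this renders token highlighting similar to the
# red/green coloring shown on this page.
shap.plots.text(shap_values)
```

Each token's SHAP value estimates how much that token shifts the model's output toward or away from a class, which is what drives the color intensity above.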
This page is a spin-off of my Text Counterfactuals project. I have separated out the serverless explanations for your amusement!