Shapley Additive Explanations (SHAP)
Bias Goes Undercover: Adversarial attacks can fool explainable AI techniques.
As black-box algorithms like neural networks find their way into high-stakes fields such as transportation, healthcare, and finance, researchers have developed techniques such as SHAP to help explain models’ decisions. New findings show that some of these methods can be fooled.
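To make the heading concrete, below is a minimal sketch of the idea behind SHAP-style explanations: attributing a single prediction to individual input features using Monte Carlo-estimated Shapley values. The helper `shapley_attribution`, the synthetic dataset, and the random-forest model are illustrative assumptions, not part of the original article, and this is not the `shap` library's own (far more efficient) implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

def shapley_attribution(predict, x, background, n_samples=200, seed=0):
    """Monte Carlo estimate of per-feature Shapley values for one instance.

    predict    : callable mapping a 2-D array of rows to P(class = 1)
    x          : 1-D array, the instance to explain
    background : 2-D array of reference rows used to stand in for "absent" features
    """
    rng = np.random.default_rng(seed)
    n_features = x.shape[0]
    phi = np.zeros(n_features)
    for _ in range(n_samples):
        order = rng.permutation(n_features)                     # random feature ordering
        current = background[rng.integers(len(background))].copy()  # random baseline row
        prev_pred = predict(current[None, :])[0]
        for j in order:
            current[j] = x[j]                                   # add feature j to the coalition
            new_pred = predict(current[None, :])[0]
            phi[j] += new_pred - prev_pred                      # marginal contribution of feature j
            prev_pred = new_pred
    return phi / n_samples

# Usage on a synthetic binary-classification task.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
predict = lambda rows: model.predict_proba(rows)[:, 1]

phi = shapley_attribution(predict, X[0], X[:100], n_samples=100)
print("per-feature attributions:", np.round(phi, 3))
print("sum of attributions     :", round(phi.sum(), 3))  # ~ f(x) minus average baseline prediction
```

By construction, the attributions sum (approximately) to the gap between the model's prediction for the explained instance and its average prediction over the background rows; it is this dependence on background samples that the adversarial attacks described above exploit.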