AI Infrastructure, Safety & Ethics

SHAP Values

Definition

SHAP (SHapley Additive exPlanations) values, developed by Lundberg and Lee (2017), apply the Shapley value concept from cooperative game theory to ML model explanations. For each prediction, SHAP computes the marginal contribution of each feature by averaging over all possible orderings in which features could be introduced. Formally, a feature's SHAP value is the expected change in the prediction when that feature is included versus excluded, averaged over all feature subsets. SHAP satisfies three key axioms: local accuracy (the base value plus the SHAP values sums to the prediction), missingness (absent features get zero contribution), and consistency (if a model changes so that a feature contributes more, its SHAP value never decreases). Shapley values are the unique attribution scheme satisfying these axioms, which makes SHAP distinctively principled among feature attribution methods.
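The subset-averaging in the definition can be made concrete with a brute-force computation. This is an illustrative sketch, not the shap library's algorithm: the weights and baseline are made up, the model is a toy linear scorer, and "absent" features are simulated by substituting baseline values.

```python
from itertools import combinations
from math import factorial

# Toy linear scorer over three hypothetical credit features.
WEIGHTS = {"credit_score": 0.5, "debt_to_income": -0.3, "employment_years": 0.2}
BASELINE = {"credit_score": 0.6, "debt_to_income": 0.4, "employment_years": 0.5}

def predict(x):
    return sum(WEIGHTS[f] * x[f] for f in WEIGHTS)

def coalition_value(coalition, x):
    # Features outside the coalition are "absent": replaced by baseline values.
    merged = {f: (x[f] if f in coalition else BASELINE[f]) for f in WEIGHTS}
    return predict(merged)

def shapley(x):
    features = list(WEIGHTS)
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                # Shapley weight for a coalition of size k, times the
                # marginal contribution of adding feature f to it.
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (coalition_value(set(subset) | {f}, x)
                              - coalition_value(set(subset), x))
        phi[f] = total
    return phi

x = {"credit_score": 0.9, "debt_to_income": 0.7, "employment_years": 0.5}
phi = shapley(x)
# Local accuracy: baseline prediction + sum of SHAP values == prediction for x.
assert abs(predict(BASELINE) + sum(phi.values()) - predict(x)) < 1e-9
```

For a linear model, each exact Shapley value collapses to the feature's weight times its deviation from the baseline, which makes the brute-force result easy to check by hand.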

Why It Matters

SHAP values have become the standard for feature-level model explanations in production AI systems. They provide one of the most rigorous answers available to 'why did the model predict X?' for black-box models. For regulated industries, SHAP values provide the mathematically principled explanations required for adverse action notices (credit denial reasons), clinical AI justifications, and algorithm audit documentation. For model debugging, SHAP summary plots reveal which features drive predictions globally—and per-prediction SHAP values immediately identify when a model has relied on a spurious feature for a specific prediction. The shap package (Python), DALEX (R), and InterpretML all implement SHAP.

How It Works

SHAP computation methods vary by model type: TreeSHAP is an exact polynomial-time algorithm for tree-based models (random forests, XGBoost, LightGBM); KernelSHAP uses Shapley-weighted linear regression over sampled feature coalitions and works for any model; DeepSHAP is a fast approximation for neural networks based on DeepLIFT backpropagation. For a credit decision, TreeSHAP might output: base_value=0.3 (average approval probability), credit_score=+0.25, debt_to_income=-0.18, employment_years=+0.08, final_prediction=0.45. Positive values push toward approval; negative values push toward denial. The customer receives: 'Your application was primarily affected by your credit score (positive) and debt-to-income ratio (negative).'
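The arithmetic behind that credit example is simply SHAP's local-accuracy (additivity) property: the base value plus the per-feature contributions reproduces the model output exactly. The numbers below are copied from the example above.

```python
base_value = 0.30  # average approval probability over the background data
contributions = {
    "credit_score": +0.25,
    "debt_to_income": -0.18,
    "employment_years": +0.08,
}

# Local accuracy: base value + sum of SHAP values = final prediction.
prediction = base_value + sum(contributions.values())
print(round(prediction, 2))  # 0.45, matching final_prediction in the example
```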

SHAP Values — Churn Prediction Feature Impact

  • hours_since_last_login: +0.41 (increases churn risk)
  • ticket_count_30d: +0.28 (increases churn risk)
  • plan_tier: -0.15 (reduces churn risk)
  • nps_score: -0.12 (reduces churn risk)
  • support_resolved_rate: -0.09 (reduces churn risk)

Positive values increase predicted churn risk; negative values reduce it.
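A ranked impact list like the one above can be produced by sorting a prediction's SHAP values by absolute magnitude and labeling each by sign. The feature names and numbers here are the illustrative churn figures from the chart, not output of a real model.

```python
# Per-prediction SHAP values for one customer (illustrative numbers).
shap_values = {
    "hours_since_last_login": 0.41,
    "ticket_count_30d": 0.28,
    "plan_tier": -0.15,
    "nps_score": -0.12,
    "support_resolved_rate": -0.09,
}

# Rank by impact magnitude, then describe the direction of each effect.
ranked = sorted(shap_values.items(), key=lambda kv: abs(kv[1]), reverse=True)
for feature, phi in ranked:
    direction = "increases churn risk" if phi > 0 else "reduces churn risk"
    print(f"{feature:24s} {phi:+.2f}  {direction}")
```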

Real-World Example

A P2P lending platform used SHAP values to comply with the Equal Credit Opportunity Act requirement to provide adverse action notices to denied applicants. For each denial, the system computed TreeSHAP values and extracted the top 3 negative contributors. Automated adverse action notices stated: 'Your application was declined. The primary reasons were: (1) Debt-to-income ratio above threshold, (2) Insufficient credit history, (3) Recent missed payments.' The SHAP-powered explanation system replaced a previous system that provided generic boilerplate reasons—customer complaint rates about adverse action notices dropped 71%, and regulatory examination scores for the lending platform improved significantly.
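The reason-extraction step described above can be sketched as follows. The feature names, reason phrasings, and SHAP numbers are hypothetical stand-ins, not the platform's actual code or data.

```python
# Hypothetical mapping from model features to adverse-action reason language.
REASON_PHRASES = {
    "debt_to_income": "Debt-to-income ratio above threshold",
    "credit_history_months": "Insufficient credit history",
    "recent_missed_payments": "Recent missed payments",
    "credit_score": "Credit score below threshold",
}

def adverse_action_reasons(shap_values, top_n=3):
    """Return the top-N features pushing toward denial (most negative SHAP)."""
    negative = [(f, v) for f, v in shap_values.items() if v < 0]
    negative.sort(key=lambda kv: kv[1])  # most negative first
    return [REASON_PHRASES.get(f, f) for f, _ in negative[:top_n]]

# SHAP values for one denied applicant (illustrative numbers).
denied = {
    "debt_to_income": -0.21,
    "credit_history_months": -0.14,
    "recent_missed_payments": -0.09,
    "credit_score": 0.05,
}
reasons = adverse_action_reasons(denied)
print(reasons)
```

Filtering to negative contributions before ranking matters: a large positive SHAP value is not a denial reason even if its magnitude exceeds the negative ones.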

Common Mistakes

  • Using SHAP explanations as ground truth about causality—SHAP values explain feature contributions to predictions, not causal relationships
  • Computing SHAP values on test sets that differ from production input distributions—SHAP values are only meaningful relative to the baseline distribution used for computation
  • Presenting raw SHAP values to end users without translation—numerical SHAP values require business-language translation to be meaningful to non-technical users
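The second pitfall, baseline dependence, is easy to demonstrate. For a linear model, the exact SHAP value of a feature is its weight times the feature's deviation from the baseline, so swapping the background distribution changes the attribution even though the model and the input are unchanged. A minimal single-feature illustration (the weights and values are hypothetical):

```python
def attribution(x, baseline, w=2.0):
    # Exact SHAP value for a one-feature linear model f(x) = w * x.
    return w * (x - baseline)

# Same model, same input, different backgrounds -> different attributions.
print(attribution(1.0, baseline=0.0))  # 2.0
print(attribution(1.0, baseline=0.8))  # much smaller, relative to a production-like baseline
```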
