Customer Support & Experience

Customer Satisfaction Score

Definition

Customer Satisfaction Score (CSAT) is a widely used customer experience metric collected through brief surveys sent immediately after a customer interaction — a support ticket resolution, a chat session, use of a product feature, or a purchase. The standard CSAT question is "How satisfied were you with [this experience]?", answered on a 1-5 or 1-10 scale. The CSAT score is calculated as the percentage of respondents giving a positive rating (typically 4-5 on a 5-point scale). CSAT measures satisfaction with a specific, recent interaction, making it valuable for assessing support quality, product feature satisfaction, and individual agent performance.
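The calculation above can be sketched in a few lines of Python. This is a minimal illustration, assuming a 1-5 scale where 4 and 5 count as "satisfied"; the function name and threshold parameter are our own, not from any particular help desk tool.

```python
def csat_score(ratings, threshold=4):
    """Percentage of respondents rating at or above the threshold.

    Assumes a 1-5 scale; ratings of 4-5 count as "satisfied".
    """
    if not ratings:
        return 0.0
    satisfied = sum(1 for r in ratings if r >= threshold)
    return round(satisfied / len(ratings) * 100, 1)

# 20 responses: 13 fives, 4 fours, 2 threes, 1 one
ratings = [5] * 13 + [4] * 4 + [3] * 2 + [1]
print(csat_score(ratings))  # 17 satisfied out of 20 -> 85.0
```

Note that on a 1-10 scale the threshold would typically shift (often 9-10, or 7-10, depending on the methodology), so the cutoff should be a deliberate choice, not a default.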

Why It Matters

CSAT provides immediate, actionable feedback on support quality. Unlike NPS (which measures overall loyalty) or CES (which measures effort), CSAT is interaction-specific and close-to-the-moment — customers can accurately recall their satisfaction with a conversation that just ended. High CSAT correlates with customer retention; low CSAT is a leading indicator of churn risk. For AI chatbot deployments, CSAT scores on chatbot-handled interactions versus human-handled interactions reveal whether the AI is meeting customer expectations, guiding optimization priorities.

How It Works

CSAT surveys are triggered automatically after defined interaction types — ticket closure, chat session end, call completion. The survey is typically sent via email or shown in-app within minutes of the interaction. Response rates for in-channel surveys (shown immediately in the chat) are much higher than email follow-ups. Once collected, CSAT scores are aggregated in the help desk analytics dashboard and can be segmented by channel, agent, issue category, customer segment, or time period. Individual low-scoring interactions (typically below 3/5) are flagged for quality review and potential customer recovery follow-up.
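The aggregation and flagging described above can be sketched as follows. The record layout, channel names, and flag threshold here are hypothetical placeholders; a real help desk would pull these fields from its own data model.

```python
from collections import defaultdict

FLAG_THRESHOLD = 3  # interactions scored below 3/5 go to quality review

# Hypothetical survey records: (channel, agent, score on a 1-5 scale)
responses = [
    ("chat", "bot", 5), ("chat", "bot", 2),
    ("email", "alice", 4), ("chat", "alice", 5),
    ("email", "bot", 3), ("chat", "alice", 1),
]

by_channel = defaultdict(list)
flagged = []
for channel, agent, score in responses:
    by_channel[channel].append(score)
    if score < FLAG_THRESHOLD:
        flagged.append((channel, agent, score))

# Segment CSAT by channel (4-5 counts as satisfied)
for channel, scores in sorted(by_channel.items()):
    satisfied = sum(1 for s in scores if s >= 4)
    print(f"{channel}: CSAT {satisfied / len(scores) * 100:.0f}%")

print("Flagged for review:", flagged)
```

The same grouping pattern works for any segmentation dimension mentioned above (agent, issue category, customer segment, time period) by changing the dictionary key.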

CSAT Measurement Cycle and Trend

Cycle

1. Interaction ends
2. Survey sent (1–5 scale)
3. Response collected
4. Score calculated: Satisfied / Total × 100 (e.g., 87%)

6-Month Trend (Oct–Mar)

Your score: 87%, above the industry average of 82% and below the 90% goal.

Real-World Example

A 99helpers customer tracks CSAT separately for AI chatbot-handled conversations and human agent-handled conversations. AI chatbot CSAT averages 3.8/5 while human agent CSAT averages 4.6/5. Analysis of low-scoring chatbot interactions reveals that they cluster around two issue types: billing disputes and complex technical errors. The customer reconfigures their chatbot to immediately escalate these issue types to human agents rather than attempting self-service. Overall blended CSAT improves from 4.1 to 4.5 within 60 days.
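The blended score in this example is a volume-weighted average of the two segments. The sketch below illustrates the arithmetic; the conversation volumes are hypothetical (the source gives only the per-segment averages of 3.8 and 4.6), chosen so the weighted result lands near the stated 4.1.

```python
def blended_average(segments):
    """Volume-weighted average rating across segments.

    segments: list of (volume, avg_rating) pairs.
    """
    total = sum(volume for volume, _ in segments)
    return sum(volume * rating for volume, rating in segments) / total

# Hypothetical volumes: 1,500 chatbot-handled and 1,000 human-handled
before = [(1500, 3.8), (1000, 4.6)]
print(round(blended_average(before), 1))  # 4.1
```

With these assumed volumes, the blend works out to 4.1, matching the "before" figure in the example; the 60-day improvement to 4.5 then reflects the chatbot segment's scores rising once billing disputes and complex technical errors escalate straight to humans.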

Common Mistakes

  • Using CSAT as the only customer satisfaction metric — CSAT measures interaction satisfaction, not overall loyalty; combine with NPS and CES for a complete picture
  • Sending CSAT surveys too long after the interaction — memory fades quickly; surveys sent more than 24 hours after an interaction have lower response rates and less accurate ratings
  • Not following up on low CSAT scores — a low-rated interaction is a recovery opportunity; proactive follow-up can rescue relationships that would otherwise churn
