Quality Assurance

Definition

Quality assurance in customer support is the operational practice of reviewing a sample of customer interactions — chats, calls, emails, tickets — against a defined quality rubric to evaluate agent performance, identify systemic issues, and drive continuous improvement. A QA framework defines the criteria for a quality interaction: greeting and tone, problem understanding, accuracy of information provided, adherence to process, empathy, and resolution completeness. QA reviewers (dedicated QA analysts or team leads) score interactions and provide feedback to agents. Aggregate QA scores reveal training needs, process gaps, and the correlation between quality metrics and customer satisfaction.
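To make the rubric concrete, here is a minimal sketch, in Python, of how the criteria above might be represented; the Criterion class, names, and point values are illustrative assumptions rather than a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    max_points: int

# Illustrative rubric; criteria mirror those described above.
RUBRIC = [
    Criterion("Greeting and tone", 10),
    Criterion("Problem understanding", 10),
    Criterion("Accuracy of information", 10),
    Criterion("Adherence to process", 10),
    Criterion("Empathy", 10),
    Criterion("Resolution completeness", 10),
]

def max_score(rubric: list[Criterion]) -> int:
    """Total points available across the rubric."""
    return sum(c.max_points for c in rubric)
```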

Why It Matters

QA is the mechanism by which support teams maintain and improve the human side of the support experience — something that metrics alone cannot capture. A ticket can have a fast response time and a closed status while still delivering a poor customer experience through unhelpful responses, incorrect information, or cold tone. QA catches these issues before they become patterns. For AI chatbot quality, automated QA tools evaluate every bot conversation (not just a sample), flagging low-confidence responses, off-topic answers, and conversations that escalated after the bot failed — enabling continuous AI improvement at scale.
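As a hedged illustration of that automated flagging, the sketch below passes every bot conversation through the three checks named above; the field names (confidence, on_topic, escalated_after_bot) are hypothetical stand-ins for whatever signals a given platform actually exposes.

```python
from typing import Iterable

def flag_for_review(conversations: Iterable[dict],
                    min_confidence: float = 0.7) -> list[dict]:
    """Return every bot conversation that trips a quality flag.

    Unlike human QA, which samples, automated QA can evaluate 100%
    of bot conversations. Field names here are hypothetical.
    """
    flagged = []
    for conv in conversations:
        reasons = []
        if conv.get("confidence", 1.0) < min_confidence:
            reasons.append("low-confidence response")
        if not conv.get("on_topic", True):
            reasons.append("off-topic answer")
        if conv.get("escalated_after_bot", False):
            reasons.append("escalated after bot failed")
        if reasons:
            flagged.append({**conv, "flag_reasons": reasons})
    return flagged
```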

How It Works

QA programs operate through a sampling and scoring cycle: interactions are sampled (randomly or targeted to specific agents, channels, or issue types), reviewed against a scoring rubric by a QA analyst, scores and feedback are shared with the agent and their manager, patterns across multiple reviews inform training priorities, and improvement is tracked over subsequent review cycles. QA scores are typically separate from CSAT — QA measures process adherence and quality from the company's perspective, while CSAT measures experience quality from the customer's. Both are needed for a complete picture.
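As a minimal sketch of the sampling step in Python (the function name and ticket shape are illustrative), random selection gives every agent's interactions an equal chance of review; targeted sampling would simply filter the ticket list first.

```python
import random

def sample_for_review(tickets: list[dict], rate: float = 0.10,
                      seed: int | None = None) -> list[dict]:
    """Randomly sample a fraction of tickets for QA review.

    Every agent's interactions get an equal chance of selection,
    which avoids the selection bias noted under Common Mistakes.
    """
    if not tickets:
        return []
    rng = random.Random(seed)  # seed only to make a review run reproducible
    k = max(1, round(len(tickets) * rate))
    return rng.sample(tickets, k)
```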

Quality Assurance — Scorecard & Review Workflow

QA Scorecard

  • Greeting: 10 / 10
  • Issue Understanding: 8 / 10
  • Resolution Quality: 9 / 10
  • Tone & Empathy: 7 / 10
  • Closure: 10 / 10

Total: 44 / 50 (88%)

Score bands: 90–100% Excellent; 80–89% Good; below 80% Needs work
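The arithmetic behind the scorecard is simple enough to sketch; the snippet below reproduces the example above (44 / 50 = 88%, which lands in the 'Good' band) and maps percentages to the score bands. It is a minimal illustration, not a 99helpers API.

```python
def total_percentage(scores: dict[str, tuple[int, int]]) -> float:
    """Sum (earned, possible) pairs and return the percentage."""
    earned = sum(e for e, _ in scores.values())
    possible = sum(p for _, p in scores.values())
    return 100 * earned / possible

def band(pct: float) -> str:
    """Map a percentage to the score bands shown above."""
    if pct >= 90:
        return "Excellent"
    if pct >= 80:
        return "Good"
    return "Needs work"

scorecard = {
    "Greeting": (10, 10),
    "Issue Understanding": (8, 10),
    "Resolution Quality": (9, 10),
    "Tone & Empathy": (7, 10),
    "Closure": (10, 10),
}
pct = total_percentage(scorecard)   # 44 / 50 -> 88.0
print(f"{pct:.0f}% -> {band(pct)}")  # 88% -> Good
```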

QA Review Workflow

  • Random sampling: 10% of tickets reviewed
  • Score: QA scorecard applied
  • Coach: 1:1 feedback session
  • Improve: track delta week-over-week
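One simple way to "track delta week-over-week" is to diff each agent's average QA score between review cycles; a sketch under the assumption that scores are keyed by agent, with invented example data.

```python
def week_over_week_delta(last_week: dict[str, float],
                         this_week: dict[str, float]) -> dict[str, float]:
    """Per-agent change in average QA score between two review cycles.

    Agents with no prior score show a delta of 0.0.
    """
    return {
        agent: round(score - last_week.get(agent, score), 1)
        for agent, score in this_week.items()
    }

# Invented example data: positive deltas suggest coaching is landing.
print(week_over_week_delta({"agent_a": 82.0, "agent_b": 74.0},
                           {"agent_a": 85.5, "agent_b": 79.0}))
# {'agent_a': 3.5, 'agent_b': 5.0}
```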

Real-World Example

A 99helpers customer implements a bi-weekly QA review program in which team leads review five chat interactions per agent per two-week cycle. They create a 10-point rubric covering accurate information, appropriate tone, complete resolution, proper process adherence, and effective use of available tools. After three months, aggregate QA scores reveal that new agents consistently underperform on the 'complete resolution' criterion: they close tickets after addressing the presenting issue without checking for related problems. Targeted coaching on resolution completeness improves both QA scores and first contact resolution (FCR).
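The pattern-finding step in this example reduces to averaging scores per rubric criterion within an agent cohort; a sketch assuming a hypothetical review shape that records each agent's cohort and per-criterion scores.

```python
from collections import defaultdict
from statistics import mean

def avg_by_criterion(reviews: list[dict], cohort: str) -> dict[str, float]:
    """Average score per rubric criterion for one agent cohort.

    Assumes (hypothetically) each review looks like:
    {"cohort": "new", "scores": {"complete resolution": 6, ...}}
    """
    buckets: dict[str, list[float]] = defaultdict(list)
    for review in reviews:
        if review["cohort"] != cohort:
            continue
        for criterion, score in review["scores"].items():
            buckets[criterion].append(score)
    return {c: round(mean(s), 1) for c, s in buckets.items()}
```

Comparing avg_by_criterion(reviews, "new") against avg_by_criterion(reviews, "tenured") would surface exactly the gap described above: a low average on 'complete resolution' for new hires.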

Common Mistakes

  • Scoring against a rigid rubric without considering context — QA criteria should allow for judgment calls when following the script would produce a worse outcome
  • Sharing QA feedback only as criticism without acknowledging strengths — effective QA coaching includes positive reinforcement of excellent behaviors
  • Sampling only problematic agents or interactions — QA should be systematic and sample all agents to provide comparative data and prevent selection bias
