Natural Language Processing (NLP)

Intent Detection

Definition

Intent detection (also called intent classification or intent recognition) is the NLP task of identifying what a user is trying to accomplish from a natural language input. For a customer support chatbot, intent categories might include: 'check_order_status', 'request_refund', 'report_technical_issue', 'upgrade_plan', 'cancel_subscription'. Each user message is classified into one (or multiple, for multi-intent detection) of these categories. Historically, intent detection was implemented with classical machine-learning classifiers (SVMs, logistic regression) trained on labeled examples; modern approaches use transformer-based models, few-shot learning with LLMs, or embedding-based nearest-neighbor classification. Returning confidence scores alongside predictions enables graceful handling of low-confidence or out-of-scope queries.
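To make the few-shot LLM approach concrete, here is a minimal sketch of building a classification prompt and parsing the model's JSON reply. The intent names come from the example above; the example utterances, the prompt wording, and the JSON response format are illustrative assumptions, not a specific provider's API, and the actual LLM call is left out.

```python
import json

# Intent set from the customer-support example; 'out_of_scope' is the fallback.
INTENTS = ["check_order_status", "request_refund", "report_technical_issue",
           "upgrade_plan", "cancel_subscription", "out_of_scope"]

# Hypothetical few-shot examples (2-3 per intent in a real prompt).
FEW_SHOT_EXAMPLES = [
    ("Where is my package?", "check_order_status"),
    ("I was charged twice, I want my money back", "request_refund"),
    ("The app crashes when I log in", "report_technical_issue"),
]

def build_prompt(message: str) -> str:
    """Assemble a few-shot classification prompt to send to an LLM."""
    lines = [f"Classify this message as one of: {', '.join(INTENTS)}.",
             'Respond with JSON: {"intent": ..., "confidence": ...}.',
             "Examples:"]
    for text, intent in FEW_SHOT_EXAMPLES:
        lines.append(f'  "{text}" -> {intent}')
    lines.append(f'Message: "{message}"')
    return "\n".join(lines)

def parse_response(raw: str) -> tuple[str, float]:
    """Parse the model's JSON reply; fall back to out_of_scope on any problem."""
    try:
        data = json.loads(raw)
        intent = data["intent"] if data["intent"] in INTENTS else "out_of_scope"
        return intent, float(data.get("confidence", 0.0))
    except (ValueError, KeyError, TypeError):
        return "out_of_scope", 0.0
```

The defensive parsing matters in practice: a reply that is not valid JSON, or that names an intent outside the allowed set, degrades to 'out_of_scope' instead of crashing the routing layer.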

Why It Matters

Intent detection is the routing backbone of any rule-augmented chatbot. Without it, every query goes through the same generic pipeline; with it, queries are directed to specialized knowledge bases, workflows, and response templates optimized for each intent. For 99helpers customers, accurate intent detection improves answer quality (billing questions go to billing knowledge, technical questions go to technical documentation), enables workflow automation (a 'cancel_subscription' intent can trigger the cancellation flow), and provides analytics (which intents are most common, which have the worst resolution rates). Intent detection accuracy directly caps chatbot effectiveness—if the bot misroutes 20% of queries, those queries will likely receive poor answers regardless of how good the retrieval or generation is.

How It Works

Intent detection can be implemented in four common ways:

  • Trained classifier: collect labeled (text, intent) pairs and fine-tune BERT or a similar encoder model. High accuracy, but requires training data.
  • Few-shot LLM: provide 2-3 examples per intent in a prompt ('Classify this message as one of: [intent list]. Examples: ...') and have the model return JSON {intent, confidence}. Works well with fewer than 20 intents; no training data required.
  • Embedding-based similarity: embed each intent definition and the user query, then classify by cosine similarity to the intent embeddings. Flexible; handles new intents without retraining.
  • Keyword rules: simple pattern matching for high-confidence cases, falling through to ML for ambiguous ones. Fast and predictable for common patterns.

Multi-intent detection (one message expressing two intents) requires multi-label classification rather than softmax.
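The embedding-based similarity approach can be sketched in a few lines. A real system would use a sentence-embedding model for `embed`; here a toy token-count vector stands in so the example is self-contained, and the intent-definition texts are illustrative assumptions.

```python
import math
from collections import Counter

# Intent definitions: in production these would be embedded once with a
# sentence-embedding model; the phrasing here is a hypothetical example.
INTENT_DEFINITIONS = {
    "cancel_subscription": "cancel close end terminate my subscription account plan",
    "request_refund": "refund money back charge reimburse payment",
    "check_order_status": "where is my order package delivery shipping status",
}

def embed(text: str) -> Counter:
    """Toy 'embedding': lowercase token counts (stand-in for a real model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Precompute one vector per intent; new intents only need a new entry here.
INTENT_VECTORS = {name: embed(text) for name, text in INTENT_DEFINITIONS.items()}

def classify(message: str) -> tuple[str, float]:
    """Return the nearest intent and its cosine-similarity score."""
    vec = embed(message)
    scores = {name: cosine(vec, ivec) for name, ivec in INTENT_VECTORS.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]
```

For example, `classify("please cancel my subscription")` lands on 'cancel_subscription'. The key design property is the one named above: adding an intent means adding one definition string, with no retraining step.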

Intent Detection: Utterance to Intent Classification

Example: a multi-label intent classifier scores the user utterance "I want to cancel my subscription and get a refund" against each intent independently, producing these confidence scores:

  • Cancel Subscription: 87%
  • Request Refund: 74%
  • Account Management: 31%
  • Billing Inquiry: 18%
  • Product Complaint: 12%

Detected intent: Cancel Subscription, at 87% confidence.

→ Routes to: Cancellation flow with refund sub-intent flagged
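The routing decision above can be sketched as a simple threshold policy: accept the top-scoring intent if it clears a primary threshold, flag any other intent above a secondary threshold as a sub-intent, and escalate to a human otherwise. The threshold values (0.6 and 0.5) are illustrative assumptions, not tuned recommendations.

```python
def route(scores: dict[str, float],
          primary_threshold: float = 0.6,    # assumed value, tune per deployment
          secondary_threshold: float = 0.5) -> dict:
    """Pick a primary intent and flag sub-intents from per-intent scores."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    top_intent, top_score = ranked[0]
    if top_score < primary_threshold:
        # Low confidence: treat as out-of-scope and hand off to a human agent.
        return {"intent": "out_of_scope", "sub_intents": [], "escalate": True}
    sub_intents = [name for name, s in ranked[1:] if s >= secondary_threshold]
    return {"intent": top_intent, "sub_intents": sub_intents, "escalate": False}

# Scores from the example utterance above.
scores = {"cancel_subscription": 0.87, "request_refund": 0.74,
          "account_management": 0.31, "billing_inquiry": 0.18}
result = route(scores)
# primary intent: cancel_subscription, sub-intent flagged: request_refund
```

With these scores the policy selects 'cancel_subscription' and flags 'request_refund' as a sub-intent, matching the routing shown in the example; a message where no intent clears the primary threshold escalates instead.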

Real-World Example

A 99helpers customer's chatbot handles 8 distinct intent categories. Initially it uses keyword rules and correctly handles 72% of queries, but it struggles with paraphrase variation: 'halt my account' (cancel_subscription) triggers no rule because only 'cancel' and 'close' are mapped. After switching to a fine-tuned intent classifier trained on 500 labeled examples per intent, accuracy jumps to 91%. Adding an 'out_of_scope' intent category routes the 9% of queries about topics outside the chatbot's domain to human agents rather than attempting unhelpful responses. Intent distribution analytics reveal that 34% of queries are 'technical_support', signaling that the knowledge base for that area deserves further investment.

Common Mistakes

  • Creating too many fine-grained intent categories without sufficient training data per category—classifiers need at least 50-100 examples per intent for reliable performance.
  • Not including an 'out_of_scope' or 'other' intent—without it, every query is forced into one of the defined categories, even irrelevant ones.
  • Treating intent detection as one-size-fits-all—different input channels (chat, email, voice transcript) have different language patterns; intent models may need channel-specific training or adaptation.

Related Terms

Natural Language Understanding (NLU)

Natural Language Understanding (NLU) is the AI capability that interprets the meaning behind human text or speech — identifying what the user wants (intent) and extracting key details (entities). NLU is the 'comprehension' layer of a chatbot, translating raw input into structured information the system can act on.

Named Entity Recognition (NER)

Named Entity Recognition (NER) is an NLP task that identifies and classifies named entities in text—people, organizations, locations, dates, product names, and other specific items—enabling structured extraction from unstructured text.

Text Classification

Text classification automatically assigns predefined labels to text documents—such as topic, urgency, language, or intent—enabling large-scale categorization of unstructured content without manual review.

Slot Filling

Slot filling is the dialogue management process of collecting all the required pieces of information (slots) needed to complete a task. The chatbot systematically asks for any missing slots — like date, time, or account number — until it has everything needed to fulfill the user's request.

Dialogue Management

Dialogue management is the component of a conversational AI system that tracks conversation state and decides what the bot should do next — ask a follow-up question, retrieve information, take an action, or hand off to a human. It is the 'brain' that orchestrates a coherent, goal-directed conversation across multiple turns.

Ready to build your AI chatbot?

Put these concepts into practice with 99helpers — no code required.

Start free trial →