Measuring Customer Satisfaction: CSAT, NPS, and CES Compared
Three metrics, three goals. Which one fits your support team — and how do you use them without causing survey fatigue?
Your team closed 200 tickets last week. Average handle time: 4 minutes. First response time: under 2 minutes. Those numbers look great on a dashboard. But here's the question nobody's asking: were those customers actually happy? Speed isn't satisfaction. Closed tickets aren't proof of happy customers. Without measuring, you're flying blind.
Customer satisfaction measurement sounds straightforward. Send a survey, count the scores, done. In practice, you'll quickly run into three acronyms that everyone uses interchangeably: CSAT, NPS, and CES. Each measures something different, each has distinct strengths and weaknesses, and picking the wrong one gives you data you can't act on. Let's put them side by side — with concrete guidance on which metric fits your situation.
The three metrics explained
The basics first. CSAT (Customer Satisfaction Score) measures satisfaction at the transaction level: "How satisfied were you with this interaction?" NPS (Net Promoter Score) looks at the relationship as a whole: "How likely are you to recommend us to a friend?" CES (Customer Effort Score) focuses on ease: "How much effort did it take to resolve your issue?"
Three questions, three perspectives. CSAT is a snapshot. NPS is a thermometer for your entire brand. CES predicts loyalty better than both, yet it's the least commonly used. The goal isn't to pick the "best" one; it's to know when to deploy each.
Many teams start with NPS because it's trendy. That's like training for a marathon when you haven't run a mile. NPS measures something broad and diffuse. If your support team is three weeks old, the direct feedback from CSAT serves you far better.
CSAT: the quick pulse check
CSAT is the most direct metric you can use. Ask the question within 24 hours of a support interaction and you get a score for exactly how the customer experienced that single contact. No noise from previous experiences, no vague brand perception: purely the interaction itself.
The standard scale is 1-5 or 1-7. Calculate your CSAT percentage by dividing the number of positive scores (4 and 5 on a 5-point scale) by total responses. For ecommerce support, a score between 75% and 85% is good. Above 90% is excellent. Below 70%, you have a problem that no prettier survey can fix.
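If you want to sanity-check your helpdesk's math, the calculation fits in a few lines of Python. A minimal sketch, assuming a 1-5 scale where 4 and 5 count as positive (the function name and the sample scores are ours, made up for illustration):

```python
def csat_percentage(scores: list[int], threshold: int = 4) -> float:
    """Share of responses at or above the positive threshold (4 and 5 on a 1-5 scale)."""
    if not scores:
        return 0.0
    positive = sum(1 for s in scores if s >= threshold)
    return 100 * positive / len(scores)

# Hypothetical week of survey responses on a 1-5 scale.
week_scores = [5, 4, 5, 3, 2, 5, 4, 4, 1, 5]
print(f"CSAT: {csat_percentage(week_scores):.0f}%")  # CSAT: 70%
```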
Where most teams stumble: they send a survey after every single conversation. After the third one in a week, your customer stops responding. Or worse — they give a frustrated 1-star because the survey itself is annoying. Selective sampling works better. Send surveys to 25-30% of your conversations. You'll get fewer but more reliable data points.
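One straightforward way to implement that sampling is a random draw per closed conversation, with a guard so nobody gets surveyed twice in the same week. A sketch under those assumptions (the 30% rate follows the guideline above; the names are ours):

```python
import random

SAMPLE_RATE = 0.30  # survey roughly 30% of closed conversations

def should_survey(days_since_last_survey: int | None) -> bool:
    """Random sampling with a guard: never re-survey a customer within 7 days."""
    if days_since_last_survey is not None and days_since_last_survey < 7:
        return False
    return random.random() < SAMPLE_RATE
```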
The real value of CSAT is in trends, not individual scores. An agent who scores below the team average for three straight weeks probably needs coaching. A product category that consistently generates lower CSAT points to a product issue — not a support issue. Connect CSAT to sentiment analysis and you'll spot patterns that isolated scores miss entirely.
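That three-weeks-below-average rule is easy to automate. A sketch, assuming you already aggregate weekly CSAT per agent and for the team (all numbers below are hypothetical):

```python
def needs_coaching(agent_weekly: list[float], team_weekly: list[float], weeks: int = 3) -> bool:
    """True if the agent's CSAT sat below the team average for the last `weeks` weeks."""
    recent = list(zip(agent_weekly, team_weekly))[-weeks:]
    return len(recent) == weeks and all(agent < team for agent, team in recent)

# Hypothetical weekly CSAT percentages, oldest week first.
agent = [82, 74, 71, 69]
team = [80, 79, 81, 80]
print(needs_coaching(agent, team))  # True: three straight weeks below average
```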
One more trap: response bias. Customers who are extremely satisfied or extremely dissatisfied fill out surveys more often. You miss the gray middle. That's not a reason to dismiss CSAT, but it is a reason to take scores with a grain of salt. Always pair CSAT with qualitative feedback — an open text field after the score sometimes yields more insight than the number itself.
NPS: the big picture
NPS is the most popular customer satisfaction survey in the world — and simultaneously the most misunderstood. The question is simple: "How likely are you to recommend us to a friend or colleague?" Score from 0 to 10. Respondents fall into three groups: detractors (0-6), passives (7-8), and promoters (9-10). Your NPS equals the percentage of promoters minus the percentage of detractors.
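In code, the calculation looks like this. A minimal sketch using the standard 0-10 buckets from above (the sample responses are invented):

```python
def nps(scores: list[int]) -> int:
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    if not scores:
        return 0
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical quarterly survey responses on the 0-10 scale.
print(nps([10, 9, 9, 8, 7, 7, 6, 4, 10, 3]))  # 10, i.e. an NPS of +10
```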
An NPS of +30 is solid for ecommerce. Above +50 is excellent. Negative NPS means your detractors outnumber your promoters: more customers would warn friends away than would vouch for you.
But here's the critical point: NPS doesn't measure your support team. It measures your entire company. If your product is mediocre, your shipping slow, and your return policy rigid, your support team can perform brilliantly without moving NPS a single point. That makes NPS valuable for executives but frustrating for support leads trying to improve their team's performance.
Use NPS as a quarterly brand-level check, not as a primary support metric. Send the NPS survey independently from support interactions — for example, once per quarter to your customer base. That way you're measuring the relationship, not the last conversation.
The most actionable NPS data comes from the follow-up question: "Why did you give this score?" Those open responses tell you what customers actually care about. Detractors name concrete pain points. Promoters tell you what to keep doing. Those insights are gold — the number itself is secondary.
CES: measure how easy you make it
CES is the underdog of customer satisfaction metrics. Less well-known than CSAT and NPS, but according to Gartner research, the strongest predictor of future buying behavior. The logic is straightforward: customers don't become loyal because you deliver "wow" experiences. They become loyal because you make things easy.
The CES question: "How much effort did it take to resolve your issue?" Scale from 1 (very low effort) to 7 (very high effort). Anything below 3 is good. Above 5 means your customer had to jump through hoops.
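The score itself is just an average over responses. A minimal sketch, assuming the 1-7 scale above (the sample data is made up):

```python
def ces(scores: list[int]) -> float:
    """Average effort on the 1 (very low) to 7 (very high) scale; lower is better."""
    return sum(scores) / len(scores) if scores else 0.0

# Hypothetical effort scores for one question type, e.g. return requests.
returns_scores = [2, 3, 5, 6, 4, 5]
print(f"CES for returns: {ces(returns_scores):.1f}")  # 4.2 -- above 3, needs work
```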
CES is particularly suited for optimizing self-service. How easily does a customer find the answer in your help center? How smooth is the returns process? How many times did someone need to reach out before their issue was resolved? CES answers these questions better than CSAT or NPS.
A high effort score on a specific question type is a direct action item. If return requests consistently score high on effort, something is wrong with your return process — not with your agents. Maybe customers have to email three times, or repeat information you should already have. Making order context available directly in your unified inbox reduces effort for both the customer and the agent.
The limitation of CES: it only measures contact moments. Customers who never reach out — positive or negative — fall outside the picture. Combine CES with NPS for the complete view.
Which metric should you choose? A decision framework
Stop thinking you need to pick just one. The real question is which to implement first and how to layer them. Here's a practical framework.
Team of 1-3 agents, just starting to measure: Start with CSAT. It's the simplest to set up, gives immediately actionable feedback, and your team understands the scores intuitively. Configure a simple 5-point scale after each conversation (sample 30% of interactions).
Team of 4-10 agents, CSAT running for a few months: Add CES for your top 3 question types. Specifically measure how much effort returns, delivery questions, and technical issues require. Use the data to simplify your processes.
Team of 10+ agents, multiple channels: Now NPS becomes relevant. Send a quarterly survey to your customer base. Combine results with your CSAT trends to see whether operational improvements are also shifting brand perception.
Critical rule: never measure everything simultaneously on the same customer. That's the fastest route to survey fatigue. Choose per contact moment which metric you measure. After a support conversation: CSAT or CES (not both). Once per quarter: NPS. This keeps your response rate healthy — above 20% — and gives you data you can actually use.
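That routing rule is simple enough to express directly. A sketch where the effort-sensitive question types mirror the examples from the framework above (the list and function name are ours, not a prescribed API):

```python
# Effort-sensitive question types tracked with CES (an example list, not prescriptive).
CES_QUESTION_TYPES = {"returns", "delivery", "technical"}

def pick_survey(question_type: str) -> str:
    """After a support conversation, send exactly one survey: CES or CSAT, never both.
    NPS goes out quarterly to the whole customer base, independent of conversations."""
    return "CES" if question_type in CES_QUESTION_TYPES else "CSAT"

print(pick_survey("returns"))  # CES
print(pick_survey("sizing"))   # CSAT
```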
The tooling doesn't need to be complex. Most helpdesks support post-conversation CSAT surveys out of the box. For CES and NPS, you can start with something as simple as Typeform or your helpdesk's built-in surveys. It's not about the tool. It's about the discipline to look at the data every week and do something with it.
Measuring is just the beginning
The most common mistake in measuring customer satisfaction is thinking the number is the goal. "Our CSAT is 82% — done." But what are you doing about the 18% who weren't satisfied? What patterns hide in the low scores? Which channel, which question type, which time of day?
The value of satisfaction metrics lives in the action that follows. A CSAT drop after launching a new return policy tells you immediately that something's off. Rising CES on technical questions means your knowledge base is falling short. Declining NPS while CSAT stays stable points to a problem outside support — product, pricing, or logistics.
Start today with the simplest step: enable CSAT for 30% of your conversations. Look at it next week. See a pattern? Act on it. That cycle of measuring, analyzing, and adjusting is what separates teams that think they're doing well from teams that know they are. Try SamDesk free and discover how sentiment analysis and conversation summaries help you improve customer satisfaction systematically.
Ready to improve your customer service?
Start free with SamDesk and experience how AI empowers your support team.
Try SamDesk free