Customer Satisfaction Score (CSAT)
Why is it useful?
This survey measures the Customer Satisfaction Score (CSAT) of your product or service. It helps identify areas where users are satisfied or dissatisfied. By understanding CSAT, product managers can improve the overall user experience.
How to get started:
Once you have set up the Formbricks Widget, you have two ways to pre-segment your user base: based on events or based on attributes. Soon, you will also be able to import cohorts from PostHog with just a few clicks.
CSAT measures how satisfied a customer is with a specific experience. Unlike NPS, which captures overall loyalty, CSAT zeroes in on individual touchpoints: a support interaction, a feature, a purchase, an onboarding flow. That specificity is what makes it useful.
When 62% of customers say brands need to care more about their experience, measuring satisfaction is not optional. It is the minimum. CSAT gives you the data to know exactly where you are delivering and where you are falling short.
How CSAT works
The standard CSAT question is:
"How satisfied are you with [specific experience]?"
Respondents answer on a scale. The most common options are a 1-to-5 Likert scale (Very Dissatisfied to Very Satisfied) or a 1-to-7 scale for more granularity. Some teams use emoji scales or simple thumbs up/thumbs down for in-app micro-surveys.
The formula: CSAT = (Number of satisfied responses / Total responses) x 100
"Satisfied" typically means respondents who selected 4 or 5 on a 5-point scale. This gives you a percentage that is easy to track and compare over time.
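The formula can be sketched in a few lines of Python (the function name and the threshold default are illustrative, not part of any standard):

```python
def csat_score(responses, satisfied_threshold=4):
    """Return the percentage of responses at or above the threshold.

    On a 5-point scale, "satisfied" conventionally means a 4 or 5.
    """
    satisfied = sum(1 for r in responses if r >= satisfied_threshold)
    return round(satisfied / len(responses) * 100, 1)

scores = [5, 4, 3, 5, 2, 4, 5, 1, 4, 5]
print(csat_score(scores))  # 7 of 10 responses are 4 or 5 -> 70.0
```

Because the threshold is a parameter, the same function works for a 7-point scale (pass `satisfied_threshold=6`, for example).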
When to deploy a CSAT survey
CSAT works best when tied to specific moments in the customer journey.
Post-support interaction. Send immediately after a support ticket is resolved. This is the most common CSAT use case. It tells you whether your support team is actually helping people or just closing tickets.
Post-purchase. Capture sentiment right after a transaction completes. Checkout friction, delivery experience, and product quality all surface here.
Post-onboarding. Measure satisfaction after a user completes their initial setup or reaches a key activation milestone. Low CSAT here predicts churn more reliably than most other signals.
Post-feature release. Deploy a CSAT survey after users interact with a new feature. This gives your product team direct signal on whether the feature meets expectations.
Periodic relational check. A quarterly CSAT survey across your entire user base tracks overall satisfaction trends and catches emerging issues before they compound.
CSAT survey questions
Start with the core CSAT question, then layer in context questions that help you understand what is driving the score.
- How satisfied are you with [specific experience]? | 1-5 scale | Required
- What did you like most about this experience? | Open text | Optional
- What could we improve? | Open text | Optional
- How would you rate the ease of [specific task]? | 1-5 scale | Optional
- Is there anything else you would like us to know? | Open text | Optional
For in-app micro-surveys, a single CSAT question with one follow-up is often enough. For email-based surveys, you can go up to seven questions without a significant drop in completion rates.
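The question list above can be represented as structured data, which makes it easy to enforce rules like "only the core CSAT question is required." This is a hypothetical structure for illustration, not the Formbricks survey schema:

```python
# Hypothetical question definitions mirroring the list above
# (field names are illustrative, not the Formbricks schema).
csat_questions = [
    {"text": "How satisfied are you with [specific experience]?", "type": "scale_1_5", "required": True},
    {"text": "What did you like most about this experience?", "type": "open_text", "required": False},
    {"text": "What could we improve?", "type": "open_text", "required": False},
    {"text": "How would you rate the ease of [specific task]?", "type": "scale_1_5", "required": False},
    {"text": "Is there anything else you would like us to know?", "type": "open_text", "required": False},
]

required = [q["text"] for q in csat_questions if q["required"]]
print(len(required))  # 1 -- only the core CSAT question is mandatory
```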
Acting on CSAT data
CSAT data is only valuable when it drives decisions.
Identify patterns, not outliers. A single low score is noise. A cluster of low scores on the same feature, page, or interaction is a signal. Look for recurring themes in open-ended responses before jumping to conclusions.
Set thresholds for action. Define what CSAT score triggers an investigation. For example, any touchpoint scoring below 70% CSAT gets flagged for review in the next sprint planning session.
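A threshold rule like this is trivial to automate. A minimal sketch, assuming hypothetical touchpoint names and scores:

```python
# Hypothetical per-touchpoint CSAT scores; flag anything under the
# review threshold (70% here, matching the example above).
touchpoint_csat = {
    "support": 86.0,
    "onboarding": 64.5,
    "checkout": 71.2,
}

THRESHOLD = 70.0
flagged = [name for name, score in touchpoint_csat.items() if score < THRESHOLD]
print(flagged)  # ['onboarding'] -> goes into the next sprint review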
Close the feedback loop. When customers report dissatisfaction, follow up. A personal response to a dissatisfied customer can shift their perception significantly. One bad experience makes customers reduce their spending, but a genuine recovery effort can reverse that.
Track trends, not snapshots. A single CSAT measurement is a data point. Monthly or quarterly CSAT tracking gives you a trend line that reveals whether your product experience is improving or degrading.
CSAT vs. NPS vs. CES
These three metrics measure different things, and the best teams use all of them strategically.
- CSAT answers: "How satisfied was this customer with this specific experience?" Use it for touchpoint-level feedback.
- NPS answers: "How likely is this customer to recommend us?" Use it for overall loyalty and growth prediction.
- CES answers: "How much effort did this customer have to exert?" Use it for friction identification in support and self-service flows.
CSAT is the most versatile of the three because you can attach it to any interaction. Running all three across different touchpoints gives you the most complete picture of your customer experience.
Common mistakes
Asking about everything at once. CSAT should measure one thing per survey. "How satisfied are you with our product, support, and pricing?" is three questions crammed into one. Split them up.
Double-barreled questions. "How satisfied are you with the speed and quality of our response?" Speed and quality are different dimensions. A customer might be happy with one and frustrated with the other.
Using internal jargon. Your customers do not think in terms of "ticket resolution SLA" or "feature parity." Write questions in their language, not yours.
Ignoring response bias. Satisfied customers are less likely to respond than dissatisfied ones. Account for this by looking at response rates alongside scores, and by making surveys easy enough that even mildly satisfied customers complete them.
CSAT benchmarks
B2B SaaS companies typically target 75% or higher. E-commerce and consumer products often see 80% or higher. Support-specific CSAT should aim for 85% or higher, since customers reaching out to support already have a problem that needs solving.
The most important benchmark is your own. If your CSAT is 72% today and 78% next quarter, that improvement matters more than an industry average.
Set up this survey in Formbricks
Formbricks lets you trigger CSAT surveys at exactly the right moment. Set up in-app surveys that appear after a user completes a specific action, or send email-based surveys after support interactions.
The CSAT template includes configurable scales, conditional follow-up questions based on the score, and automatic tagging so you can filter responses by user segment, plan type, or any custom attribute you track.
Results update in real time, so your team can spot satisfaction drops the day they happen rather than waiting for a quarterly report.