70+ Customer Satisfaction Survey Questions (CSAT Guide + Template)
Johannes
CEO & Co-Founder
15 Minutes
March 25th, 2026
Companies that measure customer satisfaction outperform those that do not by 85% in sales growth (Microsoft). But measuring satisfaction badly is worse than not measuring it at all because you act on bad data. You make changes that do not matter, ignore problems that do, and lose trust with customers who feel surveyed but not heard.
This guide gives you 70+ customer satisfaction survey questions organized by touchpoint, with the three core metrics (CSAT, NPS, CES) explained, scoring methods, and benchmark data. You also get best practices, psychological biases to watch for, and a free template you can deploy in minutes.
What you will find in this guide:
- CSAT, NPS, and CES explained with formulas and benchmarks
- 70+ customer satisfaction survey questions organized by 8 touchpoints
- Question type and effectiveness rating for every question
- Best practices for timing, length, and metric selection
- Psychological biases that silently skew satisfaction data
- How to calculate and benchmark your CSAT score
- A free customer satisfaction survey template
CSAT, NPS, and CES Explained
Three metrics dominate customer satisfaction measurement. Each one answers a different question, and using the wrong metric at the wrong touchpoint gives you misleading data.
CSAT (Customer Satisfaction Score)
What it measures: Satisfaction with a specific interaction or experience.
How to calculate: Ask customers to rate their satisfaction on a 1-5 scale. CSAT = (Number of respondents who selected 4 or 5 / Total respondents) x 100.
When to use: After specific touchpoints like a purchase, support interaction, onboarding, or feature use. CSAT works best for transactional feedback where you want to evaluate a single experience.
Benchmark: A CSAT score above 75% is good. Above 80% is excellent. Below 60% signals serious problems. Industry averages range from 65% (telecom) to 82% (ecommerce) according to the American Customer Satisfaction Index. Start measuring with the CSAT survey template.
NPS (Net Promoter Score)
What it measures: Overall loyalty and likelihood to recommend your brand.
How to calculate: Ask "How likely are you to recommend us to a friend or colleague?" on a 0-10 scale. Segment responses into Promoters (9-10), Passives (7-8), and Detractors (0-6). NPS = % Promoters - % Detractors.
When to use: Quarterly relationship surveys. NPS captures the big picture of brand loyalty rather than satisfaction with any single interaction. For question variations, see our guide on NPS question examples or try the NPS survey template.
Benchmark: +30 is good, +50 is strong, +70 is excellent. The global average NPS is +32 to +42 (Retently).
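The segmentation and subtraction above can be sketched in a few lines of Python (an illustrative helper, not part of any survey tool's API):

```python
def nps(scores):
    """Net Promoter Score from 0-10 responses.

    Promoters score 9-10, Passives 7-8, Detractors 0-6.
    NPS = % Promoters - % Detractors, so it ranges from -100 to +100.
    """
    total = len(scores)
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round((promoters - detractors) / total * 100)

# 10 responses: 4 promoters, 3 passives, 3 detractors
# 40% promoters - 30% detractors = NPS of 10
print(nps([10, 9, 9, 10, 8, 7, 7, 6, 5, 3]))  # → 10
```

Note that Passives count toward the total but not toward either percentage, which is why a survey full of 7s and 8s yields an NPS of 0.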
CES (Customer Effort Score)
What it measures: How much effort a customer had to exert to complete a task, resolve an issue, or accomplish a goal.
How to calculate: Ask "How easy was it to [specific action]?" on a 1-7 scale (1 = very difficult, 7 = very easy). CES = average of all responses.
When to use: After support interactions, after task completion, after onboarding. CES is especially powerful for identifying friction points that cause churn.
Why it matters: Research from Gartner found that effort predicts customer loyalty better than satisfaction. Reducing effort is the fastest path to reducing churn. Try the CES survey template to start measuring effort at key touchpoints.
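Because CES is a simple mean rather than a percentage, the calculation is even shorter. A minimal sketch (the function name and sample scores are illustrative):

```python
def ces(scores):
    """Customer Effort Score: mean of 1-7 ease ratings (7 = very easy)."""
    return round(sum(scores) / len(scores), 1)

# Six post-support responses on the 1-7 ease scale
print(ces([7, 6, 5, 7, 4, 6]))  # → 5.8, just above the 5.5 "good" threshold
```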
CSAT vs NPS vs CES Comparison
| Metric | What It Measures | Scale | Formula | Best Used After | Good Score |
|---|---|---|---|---|---|
| CSAT | Specific interaction satisfaction | 1-5 | (4+5 responses / total) x 100 | Purchases, support, onboarding | >75% |
| NPS | Overall loyalty | 0-10 | % Promoters - % Detractors | Quarterly relationship check | +30 or higher |
| CES | Effort required | 1-7 | Average score | Support, task completion | 5.5+ (out of 7) |
The most effective satisfaction programs use all three metrics at different points in the customer journey. CSAT tells you how a specific interaction went. NPS tells you where the overall relationship stands. CES tells you where friction is hiding.
70+ Customer Satisfaction Survey Questions by Touchpoint
Each question below includes a recommended type and an effectiveness rating: Essential (include in every survey), Recommended (include when relevant), or Nice-to-have (include if survey length allows).
Overall Satisfaction (Questions 1-10)
Start here. These questions establish a baseline for how customers feel about your brand as a whole. They are the most trackable metrics over time and the best candidates for benchmarking.
1. How satisfied are you with your overall experience with [company/product]?
- Type: Likert (1-5, CSAT) | Essential
- Your primary benchmark metric. Track this quarterly to measure whether changes move the needle. This is the single most important question in any satisfaction survey.
2. How likely are you to recommend [company/product] to a friend or colleague?
- Type: Rating (0-10, NPS) | Essential
- Net Promoter Score tracks loyalty and predicts organic growth. Segment responses into Promoters (9-10), Passives (7-8), and Detractors (0-6) for deeper analysis.
3. How well does [company/product] meet your expectations?
- Type: Scale (Exceeds / Meets / Falls short) | Recommended
- Identifies expectation gaps. Consistent "falls short" responses point to a disconnect between what marketing promises and what the product delivers.
4. How would you rate the overall quality of [product/service]?
- Type: Likert (1-5) | Essential
- Quality perception drives willingness to pay and retention. Low quality scores combined with high satisfaction scores suggest customers tolerate issues because of price or lack of alternatives.
5. How would you rate the value you receive relative to what you pay?
- Type: Likert (1-5) | Recommended
- Value perception is distinct from quality. A product can be high quality but feel overpriced, or average quality but feel like a great deal. This question separates the two.
6. How likely are you to continue using [product/service] over the next 12 months?
- Type: Likert (1-5) | Essential
- Direct retention predictor. Low scores are an early warning system for churn. Cross-reference with satisfaction scores to identify at-risk segments before they leave.
7. How easy is it to do business with us overall?
- Type: Likert (1-7, CES) | Recommended
- An overall effort score that captures friction across the entire journey. High-effort experiences drive churn even when customers are otherwise satisfied.
8. Compared to alternatives you have tried, how does [company/product] perform?
- Type: Scale (Much worse / Worse / About the same / Better / Much better) | Nice-to-have
- Competitive positioning data from the people who matter most: your customers. "About the same" responses are a retention risk because switching costs are low.
9. In one word, how would you describe your experience with [company]?
- Type: Open-ended | Nice-to-have
- Captures raw emotional response without the overhead of a longer open-ended question. Aggregate responses into word clouds for quick visual analysis across quarters.
10. What overall score (1-10) would you give your experience, and why?
- Type: Rating + Open-ended | Essential
- The "why" is where the real value lives. The number gives you a trend line. The explanation tells you what to fix. Always pair a score with an open-ended follow-up.
Purchase and Checkout Experience (Questions 11-20)
The purchase experience shapes first impressions and determines whether customers come back. These questions catch friction in the buying process before it silently kills conversion and repeat purchase rates.
11. How easy was it to complete your purchase?
- Type: Likert (1-7, CES) | Essential
- Checkout friction is one of the top drivers of cart abandonment. Even small effort reductions can significantly improve conversion.
12. How satisfied are you with the checkout process?
- Type: Likert (1-5, CSAT) | Recommended
- Captures overall checkout sentiment. Low scores warrant a deeper dive into specific steps (account creation, payment, confirmation).
13. Were your preferred payment options available?
- Type: Binary (Yes/No) + conditional follow-up | Recommended
- Missing payment options cause silent abandonment. If "No," follow up with "Which payment method were you looking for?" to prioritize payment integrations.
14. How clear and helpful was your order confirmation?
- Type: Likert (1-5) | Nice-to-have
- Confirmation anxiety drives unnecessary support tickets. A clear confirmation reduces "Where is my order?" contacts and builds post-purchase confidence.
15. How satisfied are you with the delivery of your order?
- Type: Likert (1-5, CSAT) | Essential
- Delivery is the final touchpoint before the customer evaluates the product itself. Late or damaged deliveries color the entire experience regardless of product quality.
16. How would you rate the packaging and presentation of your order?
- Type: Likert (1-5) | Nice-to-have
- Unboxing experience matters more than most companies realize, especially for premium products. This question is most valuable for direct-to-consumer brands.
17. How clear is our return and refund policy?
- Type: Likert (1-5) | Recommended
- Unclear return policies create purchase hesitation and post-purchase anxiety. Customers who understand the return process are more confident buyers.
18. How would you describe your first impression after receiving [product/service]?
- Type: Open-ended | Recommended
- First impressions are powerful and lasting. This captures the moment of truth when expectation meets reality for the first time.
19. Do you feel confident that you made the right purchase decision?
- Type: Likert (1-5) | Nice-to-have
- Post-purchase cognitive dissonance is real. Low confidence scores suggest your onboarding or welcome communication needs to reinforce the value of their decision.
20. Based on your purchase experience, how likely are you to recommend us to someone else?
- Type: Rating (0-10, NPS) | Recommended
- A purchase-specific NPS. Compare this to your overall NPS to see whether the buying experience helps or hurts your brand perception.
Product and Service Quality (Questions 21-32)
Product quality is the foundation of customer satisfaction. These questions diagnose whether your core offering delivers on its promise and where gaps exist between what customers expect and what they experience.
21. How would you rate the overall quality of [product/service]?
- Type: Likert (1-5, CSAT) | Essential
- The headline quality metric. Track over time and segment by customer type, use case, and tenure to find patterns.
22. How reliable is [product/service] in your day-to-day use?
- Type: Likert (1-5) | Essential
- Reliability trumps flashy features. A consistently reliable 4/5 experience beats an inconsistent mix of 5s and 2s. Low reliability scores are a churn accelerator.
23. How satisfied are you with the features available in [product/service]?
- Type: Likert (1-5) | Recommended
- Feature satisfaction captures whether your roadmap aligns with customer needs. Low scores paired with high quality scores suggest missing features rather than broken ones.
24. How does [product/service] perform compared to what you expected?
- Type: Scale (Much worse / Worse / As expected / Better / Much better) | Recommended
- Expectation alignment predicts long-term retention. "As expected" is good. "Better" is excellent. "Worse" needs immediate investigation into what was promised.
25. How easy is [product/service] to use?
- Type: Likert (1-7, CES) | Essential
- Usability is a non-negotiable baseline. Products that are powerful but hard to use lose to simpler competitors. Cross-reference with tenure to see if usability improves over time.
26. How would you rate the design and visual appeal of [product/service]?
- Type: Likert (1-5) | Nice-to-have
- Design satisfaction correlates with perceived quality and willingness to recommend. This is most valuable for consumer-facing products where aesthetics influence purchase decisions.
27. How would you describe your experience setting up [product/service] for the first time?
- Type: Scale (Very difficult / Difficult / Neutral / Easy / Very easy) | Recommended
- Setup friction is a leading cause of early churn. If customers struggle in the first 48 hours, many will never reach the "aha" moment that drives retention.
28. How satisfied are you with the documentation and help resources available?
- Type: Likert (1-5) | Recommended
- Good documentation reduces support load and increases product adoption. Low scores here often correlate with high support ticket volume for the same topics.
29. How satisfied are you with the frequency and quality of product updates?
- Type: Likert (1-5) | Nice-to-have
- Too many updates create change fatigue. Too few signal stagnation. This question helps you calibrate your release cadence to customer expectations.
30. Are there features you need that [product/service] does not currently offer?
- Type: Binary (Yes/No) + conditional open-ended | Recommended
- Gate question. If "Yes," follow up with "Which features would be most valuable to you?" This feeds directly into your roadmap prioritization.
31. How would you rate the quality of [product/service] relative to what you pay?
- Type: Likert (1-5) | Essential
- Price-quality alignment determines perceived value. Low scores signal a pricing problem, a positioning problem, or both. High scores are an opportunity for upselling.
32. If you could improve one thing about [product/service], what would it be?
- Type: Open-ended | Essential
- The "one thing" constraint forces prioritization. More actionable than "what would you improve?" which yields scattered wish lists. Group responses by theme for your roadmap.
Customer Support Satisfaction (Questions 33-42)
Support interactions have an outsized impact on overall satisfaction. A single bad support experience can undo months of positive product experiences. These questions diagnose whether your support team is meeting, exceeding, or falling short of customer expectations.
33. How satisfied are you with the support you received?
- Type: Likert (1-5, CSAT) | Essential
- Your headline support metric. Send within 1-2 hours of ticket resolution for the most accurate data. Benchmark against your overall CSAT to see if support helps or hurts.
34. How knowledgeable was the support agent who assisted you?
- Type: Likert (1-5) | Recommended
- Agent knowledge is the top driver of support satisfaction. Low scores point to training gaps or knowledge base issues rather than attitude problems.
35. Was your issue fully resolved?
- Type: Scale (Yes / Partially / No) | Essential
- Resolution effectiveness matters more than speed. "Partially" responses often indicate systemic issues where agents lack the authority or tools to fully resolve problems.
36. How satisfied are you with the time it took to resolve your issue?
- Type: Likert (1-5) | Recommended
- Response time expectations vary by channel: chat (under 5 minutes), email (under 4 hours), phone (under 2 minutes). Measure against channel-specific benchmarks.
37. How much effort did you have to put in to get your issue resolved?
- Type: Likert (1-7, CES) | Essential
- Customer effort in support interactions is one of the strongest predictors of loyalty. High-effort resolutions (multiple contacts, repeated explanations, channel switching) destroy goodwill.
38. How satisfied are you with the support channel you used (email, chat, phone)?
- Type: Likert (1-5) | Nice-to-have
- Channel satisfaction varies by issue type. Simple questions work well on chat. Complex problems often need phone or screen sharing. Low scores may mean customers are using the wrong channel for their issue type.
39. Did the support team follow up with you after resolving your issue?
- Type: Binary (Yes/No) | Nice-to-have
- Follow-up is a loyalty multiplier. Customers who receive follow-up communication are significantly more likely to report high satisfaction even when the initial resolution was imperfect.
40. If your issue was escalated, how would you rate the escalation experience?
- Type: Likert (1-5) | Nice-to-have
- Escalation is a high-stakes moment. The customer is already frustrated. Smooth escalations (no repeated explanations, faster resolution) can salvage the relationship. Rough ones accelerate churn.
41. Based on this support experience, how likely are you to recommend [company] to others?
- Type: Rating (0-10, NPS) | Recommended
- Support-specific NPS. Compare against your overall NPS to quantify whether support is a brand asset or liability.
42. What one thing could we do to improve our support?
- Type: Open-ended | Essential
- Support improvement suggestions are often the most specific and actionable feedback you will receive. Customers know exactly what went wrong and what would fix it.
Onboarding and First Experience (Questions 43-50)
The first experience shapes everything that follows. Research shows that customers form lasting opinions within the first 90 days. These questions identify friction that prevents customers from reaching the value they signed up for.
43. How easy was the onboarding process?
- Type: Likert (1-7, CES) | Essential
- Onboarding effort predicts early churn better than almost any other metric. If customers struggle to get started, they leave before discovering the value of your product.
44. How quickly did you achieve your first meaningful result with [product/service]?
- Type: Multiple choice (Same day / Within a week / Within a month / Still working on it / I have not yet) | Essential
- Time to value is the most important onboarding metric. "Still working on it" and "I have not yet" responses are red flags that require immediate intervention.
45. How helpful was the setup documentation and onboarding guidance?
- Type: Likert (1-5) | Recommended
- Good onboarding documentation reduces support tickets and accelerates adoption. Low scores here often correlate with high early-stage support volume.
46. How would you rate the welcome communication you received?
- Type: Likert (1-5) | Nice-to-have
- Welcome emails and in-app messages set expectations and guide first steps. Low scores suggest your welcome sequence is generic, overwhelming, or missing entirely.
47. How effective was the training or onboarding support you received?
- Type: Likert (1-5) | Recommended
- For products with dedicated onboarding, this measures whether the investment in training actually helps customers succeed. Low scores justify revamping your onboarding program.
48. Did [product/service] meet your initial expectations in the first [week/month]?
- Type: Scale (Exceeded / Met / Fell short) | Essential
- Early expectation alignment predicts long-term satisfaction. "Fell short" responses in the first month often become churned customers by month three.
49. What was your first impression of [product/service]?
- Type: Open-ended | Recommended
- First impressions capture the raw, unfiltered reaction before rationalization kicks in. These responses often surface UX issues and confusion points that quantitative questions miss.
50. What was the biggest challenge you faced in getting started?
- Type: Open-ended | Essential
- Directly identifies the friction points that block adoption. Group responses by theme to prioritize onboarding improvements that help the most customers.
Website and Digital Experience (Questions 51-58)
Your website is often the first and most frequent touchpoint customers have with your brand. These questions evaluate whether your digital experience helps customers accomplish their goals or creates unnecessary friction. For deeper analytics on digital touchpoints, see our guide on customer experience analytics.
51. How easy is it to find what you need on our website?
- Type: Likert (1-7, CES) | Essential
- Navigation effort drives bounce rates and support tickets. If customers cannot find information on your website, they either leave or create support requests for answers that should be self-service.
52. How satisfied are you with your experience on our website using a mobile device?
- Type: Likert (1-5) | Recommended
- Over 50% of web traffic is mobile. A poor mobile experience affects more than half your visitors. Test satisfaction separately for mobile and desktop to catch device-specific issues.
53. How satisfied are you with the speed and performance of our website?
- Type: Likert (1-5) | Recommended
- Page load speed directly impacts conversion and satisfaction. Google found that 53% of mobile site visits are abandoned if pages take longer than 3 seconds to load.
54. How easy is it to find the information you need about our products or services?
- Type: Likert (1-5) | Essential
- Information findability determines whether customers can self-serve or need to contact support. Low scores mean your content architecture or search functionality needs work.
55. How appealing and professional is the design of our website?
- Type: Likert (1-5) | Nice-to-have
- Design credibility affects trust and perceived quality. Studies show that 94% of first impressions of a website are design-related.
56. How easy is it to complete a purchase or sign up on our website?
- Type: Likert (1-7, CES) | Recommended
- Conversion-path effort is directly tied to revenue. Every additional click or form field in the signup or purchase flow costs you customers.
57. How easy is it to manage your account on our website or app?
- Type: Likert (1-7, CES) | Nice-to-have
- Account management friction (updating payment info, changing settings, viewing history) drives avoidable support contacts. Low scores identify self-service gaps.
58. Do you prefer interacting with us digitally or in person?
- Type: Multiple choice (Strongly prefer digital / Slightly prefer digital / No preference / Slightly prefer in-person / Strongly prefer in-person) | Nice-to-have
- Channel preference data shapes your investment strategy. If most customers prefer digital but your best experience is in-person, you have a misalignment to address.
Loyalty and Retention (Questions 59-66)
These forward-looking questions predict future behavior and identify at-risk customers before you lose them. They are critical for retention analysis and building a voice of the customer program that reduces churn.
59. How likely are you to renew your subscription or make another purchase?
- Type: Likert (1-5) | Essential
- The most direct retention predictor. Scores of 1-2 are urgent. Scores of 3 are at-risk. Combine with other signals and use churn reduction strategies to intervene before it is too late.
60. How likely are you to actively recommend [company/product] to someone in your network?
- Type: Likert (1-5) | Essential
- Distinct from NPS, this measures active advocacy intent rather than passive willingness. Customers who score 5 here are your best candidates for referral programs, case studies, and review prompts.
61. Have you considered switching to a competitor in the past 6 months?
- Type: Binary (Yes/No) + conditional follow-up | Essential
- A "Yes" paired with "Why?" reveals your competitive vulnerabilities. Even among satisfied customers, competitive curiosity is normal, but understanding the trigger helps you defend.
62. How satisfied are you with any loyalty rewards or perks you receive?
- Type: Likert (1-5) | Nice-to-have
- Only ask this if you have a loyalty program. Low scores do not necessarily mean the program is bad; they may mean customers are unaware of the benefits or do not understand them.
63. How has your overall satisfaction changed over the past 6-12 months?
- Type: Scale (Decreased significantly / Decreased somewhat / Stayed the same / Increased somewhat / Increased significantly) | Recommended
- Satisfaction trajectory matters more than a snapshot. "Stayed the same" is stable. "Decreased somewhat" is an early warning that requires investigation before it becomes "decreased significantly."
64. How does [company/product] compare to other solutions you have used?
- Type: Scale (Much worse / Worse / About the same / Better / Much better) | Recommended
- Competitive positioning from customers who have direct comparison experience. "About the same" is a retention risk because it means switching costs are the only thing keeping them.
65. How satisfied are you with the length of your relationship with [company]?
- Type: Likert (1-5) | Nice-to-have
- An unusual question that surfaces whether tenure feels positive or stagnant. Long-tenured customers with low scores may feel taken for granted.
66. If you previously left and came back, what brought you back?
- Type: Open-ended | Nice-to-have
- Win-back intelligence. Understanding why customers return reveals your true competitive advantages and the improvements that actually matter to churned users.
Open-Ended and Discovery (Questions 67-72)
Open-ended questions surface insights that structured questions miss entirely. They are your discovery engine for problems and opportunities you did not know existed. Limit open-ended questions to 2-4 per survey, since they carry an 18% nonresponse rate compared to 1-2% for closed-ended questions (Pew Research).
67. What do you like most about [company/product]?
- Type: Open-ended | Essential
- Identifies your strengths from the customer's perspective. These are the things to protect, amplify, and reference in your marketing. Patterns across responses reveal your true differentiators.
68. What do you like least about [company/product]?
- Type: Open-ended | Essential
- The counterpart to question 67. Together they give you a complete picture of peaks and valleys. Low-frequency complaints are noise. High-frequency complaints are your improvement roadmap.
69. If you could change one thing about your experience with us, what would it be?
- Type: Open-ended | Essential
- The "one thing" constraint forces customers to prioritize their top issue. This question consistently produces more actionable responses than "What would you improve?" which yields scattered wish lists.
70. How would you describe [company/product] to a friend?
- Type: Open-ended | Recommended
- Reveals your brand perception in the customer's own language. Use these responses to refine messaging, identify positioning gaps, and discover what customers actually value (often different from what you think).
71. Was there a moment when you almost stopped using [product/service]? What happened?
- Type: Open-ended | Recommended
- Near-miss churn triggers are gold. These are the issues that did not cause departure this time but will next time if not addressed. They reveal the breaking points in your experience.
72. Is there anything else you would like to share about your experience?
- Type: Open-ended | Recommended
- The catch-all. Some of the most valuable feedback comes from questions you did not think to ask. Always include this as your final question. Closing the feedback loop on these responses builds trust and increases future participation.
Customer Satisfaction Survey Best Practices
Writing good questions is half the work. How you structure, time, and follow up on your survey determines whether you get data you can act on or noise you ignore.
Match the metric to the touchpoint. Use CSAT after transactions (purchases, support interactions). Use NPS for quarterly relationship checks. Use CES after any process where effort matters (onboarding, support, checkout). Using the wrong metric gives you a number without meaning.
Time surveys to the interaction. Send transactional surveys within 24-48 hours while the experience is fresh. Waiting longer introduces recall bias and reduces response rates. For more on timing and channel strategy, see our guide on survey distribution methods.
Keep transactional surveys to 3-5 questions. Relationship surveys can be 8-12 questions. Surveys with 1-3 questions see 83% completion rates; 15+ questions drop to 42%. If you need more data, send multiple shorter surveys over time. With granular targeting, you can show different surveys to different segments without overwhelming anyone.
Always pair a score with an open-ended "why." The number tells you where you stand. The explanation tells you what to do about it. A CSAT score of 3.2 is meaningless without knowing what is driving it down.
Benchmark internally first. Your own historical scores are your most meaningful benchmark. Compare this quarter to last quarter, this cohort to last cohort. External benchmarks provide useful context but your internal trend line is what drives action.
Close the loop. Tell customers what changed because of their feedback. "You told us checkout was confusing, so we simplified it to three steps." This single practice increases future response rates and builds trust that feedback actually matters.
Psychological Biases in Satisfaction Surveys
Even well-designed satisfaction surveys produce misleading data if you are not aware of the biases that influence how people respond. These effects are well-documented in survey methodology research from Pew Research Center and affect every survey you send.
Social desirability bias. Customers over-report satisfaction because they want to be agreeable or avoid conflict. This inflates CSAT scores by an estimated 10-15%. Mitigation: use anonymous surveys, avoid personalizing questions ("How did YOUR experience go?"), and normalize negative responses ("Many customers find this step challenging...").
Recency bias. The last interaction colors the entire judgment. A customer with 11 great experiences and 1 recent bad one will rate their overall satisfaction low. Mitigation: ask about overall satisfaction first before drilling into specific interactions. Recognize that "overall satisfaction" scores often reflect the most recent touchpoint.
Anchoring effect. The first question sets a reference point that influences all subsequent answers. If your first question is about a specific problem, the rest of the survey skews negative. Mitigation: lead with a neutral, big-picture question (overall satisfaction) before drilling into specifics. Save problem-focused questions for later in the survey.
Acquiescence bias. Agree-disagree formats inflate scores because respondents, especially those who are less engaged, default to agreeing. "Do you agree that our support is helpful?" produces higher scores than "How would you rate our support?" Mitigation: use scales (poor to excellent) instead of agree-disagree. When you must use agree-disagree, include reverse-coded items.
How to mitigate all four. Randomize question order where possible. Use neutral language. Offer anonymous response options. Test your survey with 5-10 people before sending to your full audience. For analyzing customer feedback effectively, account for these biases in your interpretation, not just your design.
How to Calculate and Benchmark CSAT
CSAT is the most widely used satisfaction metric, but many teams calculate it incorrectly or benchmark against the wrong numbers. Here is the correct process.
CSAT Formula Step by Step
- Ask: "How satisfied are you with [specific experience]?" on a 1-5 scale
- Count the number of respondents who selected 4 (satisfied) or 5 (very satisfied)
- Divide by the total number of respondents
- Multiply by 100
Example: 200 customers respond. 85 select 5, 70 select 4, 25 select 3, 12 select 2, 8 select 1. CSAT = (85 + 70) / 200 x 100 = 77.5%.
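The four steps and the worked example above translate directly into code. A minimal sketch (illustrative only, not tied to any particular tool):

```python
def csat(responses):
    """CSAT %: share of respondents answering 4 or 5 on a 1-5 scale."""
    satisfied = sum(1 for r in responses if r >= 4)  # step 2: count 4s and 5s
    return satisfied / len(responses) * 100          # steps 3-4: divide, x 100

# The worked example: 85 fives, 70 fours, 25 threes, 12 twos, 8 ones
responses = [5] * 85 + [4] * 70 + [3] * 25 + [2] * 12 + [1] * 8
print(csat(responses))  # → 77.5
```

A common mistake is averaging the raw 1-5 scores instead; that yields a mean rating (here about 4.1), not a CSAT percentage, and the two are not interchangeable when benchmarking.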
Industry Benchmarks
Benchmarks vary significantly by industry. These figures come from the American Customer Satisfaction Index:
| Industry | Average CSAT |
|---|---|
| Ecommerce / Online Retail | 78-82% |
| Software / SaaS | 74-78% |
| Banking | 75-78% |
| Hotels | 73-76% |
| Airlines | 72-75% |
| Telecom / Internet Service | 62-68% |
| Government Services | 60-65% |
What "Good" Looks Like
- Below 60%: Critical issues that need immediate attention
- 60-70%: Below average; improvement is a priority
- 70-75%: Average; room for growth
- 75-80%: Good; you are meeting expectations for most customers
- 80-85%: Excellent; customers are genuinely happy
- Above 85%: World-class; protect what you are doing
Tracking Over Time
A single CSAT score is a snapshot. The trend is what matters. Track CSAT monthly or quarterly and look for:
- Sudden drops: Investigate what changed (product update, pricing change, support issue)
- Gradual declines: Often harder to diagnose; segment by customer type, touchpoint, and tenure to find the source
- Stagnation: You may be measuring customer satisfaction without acting on it
Set incremental improvement targets of 2-5 percentage points per quarter. Dramatic jumps are rare; steady improvement is the goal. Segment your score by touchpoint, customer segment, and channel to see where gains are coming from and where problems persist. Using a customer segmentation strategy ensures you are improving satisfaction for the right groups.
Free Customer Satisfaction Survey Template
Skip the blank page. Formbricks offers free, open-source survey templates you can deploy in minutes. Each template includes pre-written questions, smart targeting rules, and built-in analytics.
Three ways to collect satisfaction feedback with Formbricks:
- In-app surveys: Trigger CSAT, NPS, or CES surveys inside your product based on user behavior, plan, or lifecycle stage. Highest response rates (25-30%) because you capture feedback at the moment of experience.
- Website surveys: Embed surveys on specific pages to measure satisfaction with your website experience, checkout flow, or content. Target by page, scroll depth, or exit intent.
- Link surveys: Share a standalone survey URL via email, SMS, social media, or thank-you pages. Most flexible distribution for reaching customers outside your product.
How to get started:
- Sign up at formbricks.com (free tier available, no credit card required)
- Choose a customer satisfaction template or build from scratch
- Customize with questions from this guide for your specific touchpoints
- Set targeting rules to reach the right customers at the right time
- Launch and monitor responses in real time from your dashboard
Formbricks is open source, privacy-first, and supports self-hosting for teams that need full data control. It is a GDPR-compliant survey tool built for product teams, customer success, and marketing teams who want to collect targeted feedback without heavy engineering lift.
Get Your Free Customer Satisfaction Survey Template →
