Customer Effort Score (CES): Questions, Formula & Benchmarks (2026)
Johannes, CEO & Co-Founder · 12 minute read · March 25th, 2026
96% of customers who have high-effort experiences become disloyal, compared to only 9% of those with low-effort experiences (Gartner, originally CEB). That single stat reshaped how the best companies think about customer experience. While CSAT asks "How satisfied are you?" and NPS asks "Would you recommend us?", Customer Effort Score (CES) asks the question that predicts retention most accurately: "How easy was it?"
This guide covers everything you need to run a CES program: what CES is, how to calculate it, 35 ready-to-use survey questions by touchpoint, benchmarks by industry, how CES compares to CSAT and NPS, and a free template you can deploy in minutes.
What you will find in this guide:
- What CES is and where it came from
- The CES formula with a step-by-step calculation example
- CES vs. CSAT vs. NPS comparison
- 35 CES survey questions organized by touchpoint
- CES benchmarks by industry
- How to improve your CES score
- Best practices for CES surveys
- A free CES survey template
What Is Customer Effort Score (CES)?
Customer Effort Score is a metric that measures how much effort a customer has to exert to get an issue resolved, a product used, or a task completed. The lower the effort, the more likely the customer is to stay, buy again, and spend more.
CES was introduced by Matthew Dixon, Karen Freeman, and Nicholas Toman in a 2010 Harvard Business Review article titled "Stop Trying to Delight Your Customers". Their research across 75,000+ customer interactions revealed a counterintuitive finding: reducing customer effort is a far better predictor of loyalty than exceeding expectations or "delighting" customers.
The standard CES survey question uses an agree/disagree format:
"[Company] made it easy for me to [complete task]."
1 = Strongly Disagree ... 7 = Strongly Agree
This framing puts the focus on the company's responsibility to make things easy, not on the customer's ability to figure things out. A score of 7 means the customer felt zero friction. A score of 1 means they struggled through the experience.
CES works because it measures something specific and actionable. When a customer reports high effort, you can trace that effort back to a specific process, touchpoint, or interaction and fix it. That direct connection between measurement and action is what makes CES valuable for teams focused on customer experience analytics.
The CES Formula and How to Calculate It
The CES formula is straightforward:
CES = Sum of all scores / Number of responses
Step-by-Step Calculation Example
Say you survey 200 customers after a support interaction using a 1-7 scale. Here are the results:
| Score | Responses |
|---|---|
| 1 (Strongly Disagree) | 5 |
| 2 | 10 |
| 3 | 15 |
| 4 | 20 |
| 5 | 40 |
| 6 | 60 |
| 7 (Strongly Agree) | 50 |
Step 1: Multiply each score by its number of responses:
(1 x 5) + (2 x 10) + (3 x 15) + (4 x 20) + (5 x 40) + (6 x 60) + (7 x 50) = 5 + 20 + 45 + 80 + 200 + 360 + 350 = 1,060
Step 2: Divide by total responses:
1,060 / 200 = 5.3
Your CES is 5.3 out of 7. On a 7-point scale, anything above 5 is generally considered good.
Alternative: Percentage Method
Some teams prefer to report the percentage of customers who agree or strongly agree (scores of 6-7):
CES % = (Number of 6-7 responses / Total responses) x 100
In our example: (110 / 200) x 100 = 55% of customers found the experience easy.
Both methods are valid. The average score is better for tracking trends over time. The percentage method is better for communicating results to stakeholders who want a simple "what percent of customers had an easy experience?" answer.
To double-check your totals, use our free CES calculator with the same response counts and scale.
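Both methods can be expressed in a few lines of Python. This is a minimal sketch, not a production implementation; `counts` mirrors the example response table above.

```python
def ces_average(scores):
    """Mean CES on the 1-7 scale: sum of all scores / number of responses."""
    return sum(scores) / len(scores)

def ces_percentage(scores, threshold=6):
    """Percentage method: share of respondents scoring at or above the threshold (6-7)."""
    agreeing = sum(1 for s in scores if s >= threshold)
    return agreeing / len(scores) * 100

# Response counts from the table above: score -> number of responses
counts = {1: 5, 2: 10, 3: 15, 4: 20, 5: 40, 6: 60, 7: 50}
scores = [score for score, n in counts.items() for _ in range(n)]

print(ces_average(scores))     # 5.3
print(ces_percentage(scores))  # 55.0
```

Both functions take the same flat list of individual scores, so you can report the trend-friendly average and the stakeholder-friendly percentage from a single dataset.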
CES vs. CSAT vs. NPS
Each metric answers a different question. Using all three at different touchpoints gives you a complete view of the customer experience.
|  | CES | CSAT | NPS |
|---|---|---|---|
| What it measures | Effort / ease of a specific task | Satisfaction with a specific experience | Overall loyalty and likelihood to recommend |
| Standard question | "[Company] made it easy for me to [task]" | "How satisfied are you with [experience]?" | "How likely are you to recommend us?" |
| Scale | 1-7 Likert (Strongly Disagree to Strongly Agree) | 1-5 (Very Dissatisfied to Very Satisfied) | 0-10 |
| When to use | After a specific task or interaction | After a specific experience or touchpoint | Quarterly or semi-annually for relationship health |
| Best predicts | Retention, repeat purchase, reduced churn | Short-term satisfaction, immediate experience quality | Long-term loyalty, referrals, organic growth |
| Strength | Highly actionable (pinpoints friction) | Easy to understand, widely benchmarked | Strategic, predicts growth |
| Weakness | Narrow (only measures the interaction) | Superficial (satisfied customers still leave) | Broad (hard to act on without follow-up) |
The key insight from the original CEB research: the more effort a customer puts in, the less loyal they become, regardless of how satisfied they report being. A customer can rate their satisfaction as 4/5 and still churn if the process was painful. CES catches what CSAT misses.
That said, CES is not a replacement for CSAT or NPS. It is a complement. Use CES after task-based interactions, CSAT for experience quality, and NPS for the big-picture loyalty question. For a deep dive on satisfaction measurement, see our guide on measuring customer satisfaction.
35 CES Survey Questions by Touchpoint
Each question below includes the recommended format and a guidance note. Most CES questions use a 1-7 Likert scale (Strongly Disagree to Strongly Agree) or a 1-7 ease scale. Customize the bracketed text for your context.
General CES Questions (1-5)
These are versatile CES questions that work across any touchpoint. Start here if you are deploying CES for the first time.
1. "[Company] made it easy for me to handle my issue."
- Type: Likert 1-7 (Strongly Disagree to Strongly Agree) | Essential
- The classic CES question. Use this exact framing for benchmarking against published CES research.
2. "How easy was it to [complete task] today?"
- Type: Ease scale 1-7 (Very Difficult to Very Easy) | Essential
- A direct alternative to the agree/disagree format. Some respondents find ease scales more intuitive than Likert.
3. "On a scale of 1-7, how much effort did you personally have to put forth to [action]?"
- Type: Effort scale 1-7 (Very Low Effort to Very High Effort) | Recommended
- Inverts the framing to measure effort directly. Note: lower scores are better here, which is the reverse of the standard CES. Be consistent in your reporting.
4. "How easy was it to get the help you needed today?"
- Type: Ease scale 1-7 | Essential
- Works well as a general post-interaction question when the specific task varies across customers.
5. "The process was simpler than I expected."
- Type: Likert 1-7 (Strongly Disagree to Strongly Agree) | Nice-to-have
- Measures effort relative to expectations. Useful for identifying where you are exceeding or falling short of what customers anticipate.
Customer Support CES Questions (6-12)
Support interactions are the most common CES touchpoint. High-effort support experiences are the number one driver of disloyalty according to the original CEB research.
6. "How easy was it to resolve your issue with our support team?"
- Type: Ease scale 1-7 | Essential
- The core support CES question. Track this per channel (chat, email, phone) to compare effort across support channels.
7. "How many times did you have to contact us to resolve your issue?"
- Type: Multiple choice (1 / 2 / 3 / 4+) | Recommended
- Repeat contacts are the biggest driver of high effort. If most answers are 2+, your first-contact resolution rate needs work.
8. "Did you have to repeat information to multiple agents?"
- Type: Binary (Yes / No) | Recommended
- Repeating information is a top customer frustration. A high "Yes" rate points to gaps in your ticketing system or agent handoff process.
9. "How easy was it to reach a support agent?"
- Type: Ease scale 1-7 | Recommended
- Measures accessibility. Long wait times, confusing IVR menus, and buried contact pages all increase effort before the actual support interaction even starts.
10. "Was your issue resolved on the first contact?"
- Type: Binary (Yes / No) | Essential
- First-contact resolution (FCR) is the strongest lever for reducing effort. Track FCR alongside CES to see the direct relationship.
11. "How easy was it to find the right support channel?"
- Type: Ease scale 1-7 | Nice-to-have
- Customers should not have to search for how to get help. Low scores here suggest your contact options are buried or confusing.
12. "How much effort did the resolution process require from you?"
- Type: Effort scale 1-7 (Very Low Effort to Very High Effort) | Recommended
- Captures the total effort across the full resolution journey, not just the initial contact. Useful for complex issues that span multiple interactions.
Product and Onboarding CES Questions (13-18)
Product effort determines whether users adopt your product or abandon it. These questions are especially valuable during onboarding and after feature launches. Use in-app surveys with granular targeting to trigger them at the right moment. The "measure task accomplishment" template is a ready-made starting point for product CES.
13. "How easy was it to get started with [product]?"
- Type: Ease scale 1-7 | Essential
- Send after onboarding completion. Low scores early in the customer lifecycle predict churn before it happens; treat them as an early warning to reduce churn rate.
14. "How easy is it to accomplish [key task] in [product]?"
- Type: Ease scale 1-7 | Essential
- Replace [key task] with your product's core action. This question measures the effort of the task your product exists to simplify.
15. "How intuitive was the setup process?"
- Type: Ease scale 1-7 | Recommended
- Intuitive and easy are related but distinct. A process can be easy with hand-holding but not intuitive on its own. This question tests whether users can self-serve.
16. "How easy is it to find what you need in [product]?"
- Type: Ease scale 1-7 | Recommended
- Measures navigation and information architecture. Low scores suggest your product needs better search, clearer menus, or restructured workflows.
17. "How much effort does it take to complete your typical workflow in [product]?"
- Type: Effort scale 1-7 (Very Low to Very High) | Recommended
- Focuses on the daily user experience rather than a one-time event. Useful for established users, not just new ones.
18. "How easy was it to learn how to use [product]?"
- Type: Ease scale 1-7 | Nice-to-have
- Learning curve question. Low scores indicate a need for better documentation, tooltips, or guided walkthroughs.
Purchase and Checkout CES Questions (19-24)
Every extra step in a purchase flow costs conversions. CES surveys after checkout reveal where buyers hit friction.
19. "How easy was it to complete your purchase?"
- Type: Ease scale 1-7 | Essential
- The core checkout CES question. Track this after every purchase to catch regressions introduced by design changes or new payment flows.
20. "How easy was it to find the product you were looking for?"
- Type: Ease scale 1-7 | Recommended
- Measures search and browse experience. Low scores point to poor site search, confusing categories, or missing filters.
21. "How smooth was the checkout process?"
- Type: Ease scale 1-7 | Recommended
- "Smooth" captures both speed and absence of friction. Useful as a complement to the more general purchase question.
22. "How easy was it to apply a discount code?"
- Type: Ease scale 1-7 | Nice-to-have
- Promo code friction is a common checkout abandonment trigger. Only ask this when the customer used a code.
23. "How easy was it to choose between pricing plans?"
- Type: Ease scale 1-7 | Recommended
- Pricing page confusion is a hidden conversion killer for SaaS. Low scores mean your pricing page needs clearer differentiation or a recommendation engine.
24. "How easy was the return or exchange process?"
- Type: Ease scale 1-7 | Essential (conditional)
- Only trigger after a return or exchange. Post-return CES is a strong predictor of whether the customer will buy again despite having had a problem.
Website and Digital Experience CES Questions (25-30)
Digital self-service is where most customer effort lives today. These questions identify friction in your online experience before customers escalate to support.
25. "How easy was it to find the information you needed on our website?"
- Type: Ease scale 1-7 | Essential
- Measures content discoverability. Low scores mean customers cannot self-serve, which drives up support volume and effort.
26. "How easy was it to navigate our mobile app?"
- Type: Ease scale 1-7 | Recommended
- Mobile-specific CES. Over 50% of web traffic is mobile, so mobile ease is not optional.
27. "How easy was it to complete [task] on our website?"
- Type: Ease scale 1-7 | Recommended
- Replace [task] with specific actions: updating a subscription, downloading an invoice, finding a help article. Specificity makes the feedback actionable.
28. "How easy was it to manage your account online?"
- Type: Ease scale 1-7 | Recommended
- Account management covers billing, settings, and profile changes. Low scores often point to buried settings or unclear navigation.
29. "How easy was it to update your information?"
- Type: Ease scale 1-7 | Nice-to-have
- Targets a specific self-service task. Useful when you have recently changed your account management flow.
30. "How easy was it to use our self-service tools?"
- Type: Ease scale 1-7 | Recommended
- Broad self-service question covering knowledge base, FAQ, chatbot, and help center. Low scores mean your self-service is creating effort instead of reducing it.
Follow-Up and Open-Ended Questions (31-35)
Pair one of these with your CES rating question. The rating tells you the score. The open-ended question tells you why.
31. "What made [task] easy or difficult for you?"
- Type: Open-ended | Essential
- The most important follow-up question. This is where you learn what to fix. Pair it with every CES rating question.
32. "What would have made this process easier?"
- Type: Open-ended | Recommended
- Forward-looking version of question 31. Respondents often provide specific, actionable suggestions.
33. "What part of the experience required the most effort?"
- Type: Open-ended | Recommended
- Forces respondents to identify the single biggest friction point. More useful than asking about the experience broadly.
34. "Is there anything that almost made you give up?"
- Type: Open-ended | Essential
- Surfaces near-miss failures. These are the moments where a customer was one frustration away from abandoning the process. Fix these first.
35. "Any other feedback on how we can make things easier for you?"
- Type: Open-ended | Nice-to-have
- The catch-all. Place it at the end of your survey to capture anything the structured questions missed.
For tips on getting more people to answer these questions, see our guide on how to increase survey response rate.
CES Benchmarks by Industry
CES benchmarks vary depending on industry, touchpoint, and the scale used. The numbers below are based on a 7-point scale and represent general ranges compiled from customer experience research by Gartner, Qualtrics, and industry reports.
| Industry | Average CES (1-7 scale) | Notes |
|---|---|---|
| Software / SaaS | 5.2 - 5.6 | Onboarding complexity drives variance; companies with guided setup score higher |
| E-commerce | 5.4 - 5.8 | Checkout optimization is mature; returns process is the main effort driver |
| Financial Services | 4.8 - 5.3 | Regulatory requirements add steps; digital-first banks score higher |
| Telecommunications | 4.2 - 4.8 | Consistently lowest scores; plan changes and billing are top friction points |
| Healthcare | 4.5 - 5.0 | Scheduling and insurance verification create effort; patient portals help |
| Insurance | 4.6 - 5.1 | Claims processes drive most effort; digital claims filing improves scores |
| Travel and Hospitality | 5.0 - 5.5 | Booking is easy; cancellations and changes create effort |
| Retail (brick and mortar) | 5.3 - 5.7 | In-store experience is generally low-effort; returns vary widely |
How to use these benchmarks:
- If your CES is below 5.0, you have significant effort problems that likely drive churn
- If your CES is 5.0-5.5, you are average and have room to differentiate
- If your CES is above 5.5, you are performing well but should still track per-touchpoint
The most useful benchmark is your own score over time. Compare quarter over quarter and across touchpoints to identify trends and measure the impact of improvements. External benchmarks give you a rough sense of where you stand, but internal trend data drives the real decisions.
How to Improve Your CES Score
Improving CES is about systematically removing friction from every customer interaction. Here are six high-impact strategies, each tied to specific effort drivers identified in the CEB/Gartner research.
1. Prioritize First-Contact Resolution
Repeat contacts are the single biggest driver of high effort. Every time a customer has to follow up, their perceived effort compounds and their loyalty drops.
- Set FCR targets by channel (aim for 70%+ for chat and phone)
- Equip agents with full customer context so they can resolve without escalation
- Measure and track FCR alongside CES to see the direct correlation
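Tracking FCR alongside CES starts with the FCR calculation itself. A minimal sketch, assuming you log the number of contacts it took to resolve each issue (the data shape is illustrative):

```python
def first_contact_resolution_rate(contact_counts):
    """FCR: percentage of resolved issues that required exactly one contact.

    contact_counts: list with one entry per resolved issue, each the
    number of customer contacts that issue required.
    """
    resolved_first = sum(1 for contacts in contact_counts if contacts == 1)
    return resolved_first / len(contact_counts) * 100

# Five resolved issues: three took one contact, one took two, one took three
tickets = [1, 1, 2, 1, 3]
print(first_contact_resolution_rate(tickets))  # 60.0
```

Plotting this rate next to your support CES per week makes the correlation between repeat contacts and reported effort visible.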
2. Eliminate Channel Switching
Forcing customers to switch channels (start on chat, get told to call, then get redirected to email) creates enormous effort. Each switch resets context and adds frustration.
- Resolve issues in the channel the customer chose
- If a switch is necessary, warm-transfer with full context so the customer does not repeat themselves
- Make every channel capable of handling common issues end-to-end
3. Simplify Processes
Every extra click, form field, or step is effort. Audit your highest-volume customer tasks and cut anything that does not serve the customer.
- Reduce checkout steps (guest checkout, fewer form fields, auto-fill)
- Simplify onboarding (progressive disclosure instead of upfront configuration)
- Streamline account management (one-click updates, clear navigation)
4. Communicate Proactively
Proactive communication eliminates the effort of customers having to seek information. Instead of waiting for customers to ask "Where is my order?", tell them before they wonder.
- Send status updates automatically (order shipped, ticket in progress, renewal upcoming)
- Alert customers to known issues before they encounter them
- Provide next-step guidance at every stage of a process
5. Build Self-Service That Actually Resolves Issues
Bad self-service is worse than no self-service. A knowledge base that does not answer the question, a chatbot that loops endlessly, or an FAQ buried three clicks deep all increase effort.
- Test your self-service paths by trying to resolve real customer issues yourself
- Track self-service resolution rate (what percentage of visitors solved their problem without contacting support?)
- Update content based on the most common support tickets
6. Reduce Wait Times
Waiting is effort. Whether it is hold time, response time, or loading time, every second of waiting erodes the experience.
- Set and publish response time expectations by channel
- Use async channels (chat, email) effectively so customers do not have to sit on hold
- Monitor and optimize page load times for digital self-service
For each of these strategies, use your CES survey data to measure impact. Fix something, re-measure CES for that touchpoint, and track whether the score improves. For a complete framework on acting on feedback, see our guide on closing the feedback loop.
CES Survey Best Practices
How you design and deploy your CES survey matters as much as the questions you ask. Follow these practices to get reliable, actionable data.
Send CES surveys immediately after the interaction. CES measures effort, and effort perception fades fast. Send the survey within minutes of a task completion, not days later. A CES survey sent 48 hours after a support call captures a vague memory, not the actual experience.
Keep it short: 1-3 questions max. The ideal CES survey is one rating question plus one open-ended follow-up. That is it. Adding more questions increases effort for the respondent, which is ironic for a survey about reducing effort. If you need more data, send separate surveys for separate touchpoints.
Trigger automatically based on events. Do not send CES surveys manually or on a schedule. Set up event-based triggers: ticket closed, purchase completed, onboarding finished, self-service article viewed. With Formbricks, you can trigger in-app CES surveys based on specific user actions and target them to specific segments using granular targeting.
Measure per-touchpoint, not just overall. A single company-wide CES score hides the real story. Your checkout might score 6.2 while your returns process scores 3.8. Track CES by touchpoint so you know exactly where to focus.
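Per-touchpoint tracking is straightforward once each response is tagged with its touchpoint. A minimal sketch, assuming responses are stored as (touchpoint, score) pairs (the tags and values are illustrative):

```python
from collections import defaultdict

def ces_by_touchpoint(responses):
    """Group (touchpoint, score) pairs and return the mean CES per touchpoint."""
    buckets = defaultdict(list)
    for touchpoint, score in responses:
        buckets[touchpoint].append(score)
    return {tp: round(sum(s) / len(s), 2) for tp, s in buckets.items()}

responses = [
    ("checkout", 6), ("checkout", 7), ("checkout", 6),
    ("returns", 3), ("returns", 4), ("returns", 5),
]
print(ces_by_touchpoint(responses))  # {'checkout': 6.33, 'returns': 4.0}
```

A breakdown like this surfaces exactly the gap a single company-wide average hides: checkout is healthy while returns needs attention.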
Combine CES with CSAT and NPS. CES tells you how easy the experience was. CSAT tells you how satisfied the customer felt. NPS tells you how loyal they are overall. Together, they give you a three-dimensional view of the customer experience. Use CES for task-based interactions, CSAT for experience quality, and NPS for relationship health.
Act on the data and close the loop. Collecting CES data without acting on it erodes trust and wastes resources. Build a process to review CES results weekly, identify trends, assign owners to high-effort touchpoints, and communicate changes back to customers. For a detailed framework, see our guide on closing the feedback loop.
Respect survey fatigue. Do not survey the same customer after every interaction. Set frequency caps so that no customer receives more than one CES survey per month. For more on distribution strategies and response rate optimization, see our guide on choosing the right channel for each touchpoint.
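A frequency cap can be enforced with a small guard before each survey trigger. This is a sketch; in practice `last_surveyed` would come from your own response log, and many survey tools offer built-in caps.

```python
from datetime import datetime, timedelta

def can_survey(last_surveyed, now, cap=timedelta(days=30)):
    """True if the customer was never surveyed, or the cap window has elapsed."""
    return last_surveyed is None or now - last_surveyed >= cap

# Never surveyed before: eligible
print(can_survey(None, datetime(2026, 3, 25)))                    # True
# Surveyed 15 days ago: blocked by the 30-day cap
print(can_survey(datetime(2026, 3, 10), datetime(2026, 3, 25)))   # False
```

Run this check inside the event handler that fires the survey so every trigger path respects the same cap.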
Ensure privacy and compliance. If you operate in the EU or collect data from EU residents, your CES surveys must comply with GDPR. Use a GDPR-compliant survey tool and be transparent about how feedback data is stored and used.
Free CES Survey Template
Skip the blank page. Formbricks is an open-source experience management platform that lets you deploy CES surveys in minutes. Trigger them in-app, on your website, or via link. Target specific user segments based on behavior, plan, or lifecycle stage. Collect responses and analyze the feedback from a single dashboard.
How to get started:
- Sign up at formbricks.com (free tier available, no credit card required)
- Choose the CES survey template or create a CES survey from scratch
- Add your CES question plus one open-ended follow-up from this guide
- Set an event-based trigger (ticket closed, onboarding completed, purchase finished)
- Define your target audience with granular targeting rules
- Launch and monitor CES scores by touchpoint in real time
Formbricks is privacy-first and supports self-hosting for teams that need full data control. No data leaves your infrastructure unless you want it to.
Get Your Free CES Survey Template →
Try Formbricks now
