System Usability Scale (SUS)

Why is it useful?

The System Usability Scale (SUS) helps you quantify how users perceive the overall usability and learnability of your product. With just 10 standardized questions, it provides a reliable benchmark to compare different product versions or track usability improvements over time, making it ideal for spotting friction in the user experience early and often.

How to get started:

Once the Formbricks Widget is integrated, you can trigger the SUS survey after a user completes key workflows or at the end of a session. You can segment who sees it based on user attributes or triggered events. Soon, you’ll also be able to auto-target cohorts for deeper, behavior-based usability insights.

The System Usability Scale is a 10-question standardized survey that produces a single usability score between 0 and 100. Created in 1986, it has been used in thousands of studies and has robust benchmarking data, which means your score can be compared against industry averages with statistical confidence.

SUS measures perceived usability: how easy the system feels to use, not how easy it objectively is. That perception is what drives user adoption, satisfaction, and willingness to recommend.

How SUS works

The survey consists of 10 statements. Respondents rate their agreement with each on a 1-to-5 scale (Strongly disagree to Strongly agree). The statements alternate between positive and negative phrasing to reduce response bias.

The 10 SUS statements:

  1. I think that I would like to use this system frequently.
  2. I found the system unnecessarily complex.
  3. I thought the system was easy to use.
  4. I think that I would need the support of a technical person to be able to use this system.
  5. I found the various functions in this system were well integrated.
  6. I thought there was too much inconsistency in this system.
  7. I would imagine that most people would learn to use this system very quickly.
  8. I found the system very cumbersome to use.
  9. I felt very confident using the system.
  10. I needed to learn a lot of things before I could get going with this system.

Replace "system" with your product name when deploying.

Scoring methodology

The scoring is not intuitive, which is why it is important to understand the formula rather than eyeball the raw responses.

For odd-numbered statements (1, 3, 5, 7, 9): subtract 1 from the response. For even-numbered statements (2, 4, 6, 8, 10): subtract the response from 5. Sum all ten adjusted values and multiply the total by 2.5.

The result is a score between 0 and 100.
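The formula above can be sketched as a small Python function (Python is used here purely for illustration; the function name is ours):

```python
def sus_score(responses):
    """Compute the SUS score from a list of ten 1-5 ratings,
    ordered by statement number (statement 1 first)."""
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten responses, each between 1 and 5")
    adjusted = [
        # Odd-numbered statements (list index 0, 2, ...): response - 1.
        # Even-numbered statements (index 1, 3, ...): 5 - response.
        r - 1 if i % 2 == 0 else 5 - r
        for i, r in enumerate(responses)
    ]
    return sum(adjusted) * 2.5
```

A respondent who strongly agrees with every positive statement (5) and strongly disagrees with every negative one (1) scores the maximum: `sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1])` returns `100.0`, while an all-neutral response of 3s lands exactly at `50.0`.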

Interpreting the score:

  • Above 80: Excellent usability. Top 10% of products tested.
  • 68 to 80: Good usability. Above average.
  • 50 to 67: Below average. Usability issues need attention.
  • Below 50: Poor usability. Significant redesign likely needed.

The average SUS score across all studies is 68. That is your baseline comparison point.
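As a sketch, the interpretation bands translate into a simple lookup. The exact handling of boundary scores (a 68 counting as Good, an 80 counting as Good rather than Excellent) is our convention, since published bands are often stated loosely:

```python
def sus_band(score):
    """Classify a SUS score (0-100) into the usual interpretation bands.
    Boundary handling at exactly 68 and 80 is our own convention."""
    if score > 80:
        return "Excellent"
    if score >= 68:
        return "Good"
    if score >= 50:
        return "Below average"
    return "Poor"
```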

When to deploy a SUS survey

After onboarding. Once a user has completed onboarding and used the product for a few sessions, SUS captures their initial usability impression.

Before and after a redesign. SUS provides a before-and-after comparison that quantifies whether a redesign actually improved usability. This is one of the most common and most valuable uses.

Quarterly for benchmarking. Regular SUS measurements track usability over time. A declining score signals that new features or changes are hurting the user experience.

After user testing sessions. SUS is frequently used at the end of formal usability tests to capture the participant's overall impression.

When comparing product versions. If you are testing two design approaches (A/B testing at the experience level), SUS scores for each version provide a quantitative comparison.

Why SUS works despite its age

SUS has survived for four decades because of its statistical properties.

Reliability. SUS produces consistent results across studies. The same product tested by different groups will produce similar scores.

Validity. SUS scores correlate strongly with other usability measures (task completion rates, error rates, user satisfaction), confirming that it measures what it claims to measure.

Sensitivity. SUS can detect meaningful differences between products or product versions with relatively small sample sizes (as few as 12 to 15 respondents).

Benchmarkability. With thousands of published SUS studies, you can compare your score against a large reference dataset. This context turns an abstract number into a meaningful evaluation.

SUS survey best practices

Do not modify the questions. The SUS questions are standardized. Changing the wording invalidates the benchmarking data. You can replace "system" with your product name, but the rest of the statement should remain unchanged.

All 10 questions are required. The positive-negative alternation pattern and the scoring formula depend on all 10 questions being answered. Missing responses break the scoring.

Survey after meaningful use. SUS measures usability perception, which requires experience. A user who just logged in for the first time cannot meaningfully rate the 10 statements. Wait until they have completed at least one core workflow.

Sample size matters. While SUS works with small samples, aim for at least 20 responses for reliable averages and at least 50 for segment comparisons.

Do not over-interpret small differences. A SUS score of 72 is not meaningfully different from 74. Focus on changes of five points or more, which are more likely to reflect genuine usability differences.

Common mistakes

Interpreting the score as a percentage. A SUS score of 68 does not mean "68% usable." It is a relative score where 68 is the historical average. Think of it like a standardized test score, not a completion rate.

Surveying too early. First-session SUS scores reflect learning curve, not usability. Wait until users have had multiple sessions.

Only measuring once. A single SUS score is a snapshot. Regular measurement shows trajectory and catches usability regressions.

Not acting on the components. While the overall score is the headline metric, individual statement scores reveal specific issues. Low scores on statement 4 ("need technical support") point to complexity. Low scores on statement 6 ("inconsistency") point to design coherence issues.
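Item-level diagnosis like this can be sketched with a hypothetical helper that averages each statement's adjusted 0-to-4 score across respondents (the function name is ours):

```python
def item_averages(all_responses):
    """Average adjusted score (0-4) per SUS statement across respondents.

    all_responses: list of 10-item lists of raw 1-5 ratings, one per respondent.
    Low averages flag the specific statements dragging the total score down.
    """
    if not all_responses:
        raise ValueError("need at least one respondent")
    averages = []
    for i in range(10):
        # Odd-numbered statements (index 0, 2, ...) adjust to r - 1,
        # even-numbered ones to 5 - r, mirroring the overall formula.
        adjusted = [(r[i] - 1) if i % 2 == 0 else (5 - r[i]) for r in all_responses]
        averages.append(sum(adjusted) / len(all_responses))
    return averages
```

Note that the indices here are 0-based, so statement 4 ("need technical support") is at index 3 and statement 6 ("inconsistency") at index 5.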

Set up this survey in Formbricks

Formbricks supports the full 10-question SUS survey with Likert scale responses. The template includes all 10 standardized statements with the correct positive-negative alternation.

Deploy as an in-app survey triggered after a user has completed a defined number of sessions or key actions. Formbricks handles the scoring calculation automatically, so you see the final SUS score without manual computation.

Track SUS scores over time with recurring deployments. Compare scores across user segments to identify which cohorts experience the best and worst usability.
