Good survey questions: principles, types, and 20 examples (2026)
Johannes
CEO & Co-Founder
12 Minutes
April 15th, 2026
Survey data is only as reliable as the questions behind it. Pew Research Center's methodology team has documented that small changes in wording, order, or response options routinely swing measured opinion by 10 to 20 percentage points on the same underlying topic. That is not random noise. That is the difference between knowing what people think and measuring what you accidentally asked them.
This guide covers what actually makes a survey question good: the five principles, the six core question types, fifteen wording mistakes that break data quality, how to choose a response scale, how respondents actually answer, and how to pretest before launch. Every example is specific, and every rule comes from published survey methodology research.
What you will find in this guide:
- Five principles that separate good survey questions from bad ones
- The six core question types with a comparison table
- Fifteen wording mistakes that quietly corrupt your data
- How to pick a response scale (5-point, 7-point, NPS, CSAT)
- Open-ended vs closed-ended questions and when to use each
- Question ordering, flow, and the funnel approach
- A cognitive model of how respondents answer questions
- How to pretest questions with cognitive interviews
- Twenty good vs bad question rewrites side by side
- Free Formbricks survey templates you can deploy in minutes
What makes a survey question good
A good survey question meets five criteria. If any one is missing, the data is contaminated and no amount of analysis can recover it.
1. It measures one thing. A question that asks about two concepts at once produces ambiguous answers. "How happy are you with the price and quality of the product?" cannot be answered if the respondent loves the quality but hates the price. Split compound questions apart. One question, one concept.
2. It is answerable. The respondent must have the information the question requires. Asking users to recall their satisfaction with an event from nine months ago returns noise. Ask questions that match the respondent's actual memory and knowledge.
3. It is neutral. The question wording should not tilt the respondent toward any answer. Loaded adjectives ("excellent service"), presuppositions ("how often do you have problems with..."), and framing effects all push respondents in a direction. Neutral wording produces honest answers.
4. It uses a response format that fits the construct. Measuring intensity on a yes/no scale loses information. Measuring a binary fact on a 7-point scale asks for precision that does not exist. The format has to match the underlying variable.
5. It is necessary. Every question you ask costs you response rate. If you would not take action on the answer, cut the question. A survey is a decision tool, not a fact-collection exercise.
These five criteria are the foundation. The rest of this guide explains how to meet them in practice.
The six core types of survey questions
Survey questions fall into six types. Each measures a different kind of variable, and mixing them up is one of the most common survey design mistakes.
| Question type | Best for | Example | Pros | Cons |
|---|---|---|---|---|
| Likert scale (1 to 5 or 1 to 7) | Attitudes, agreement, frequency | "I find the product easy to use" (Strongly disagree to Strongly agree) | Comparable over time, easy to benchmark | Susceptible to acquiescence bias |
| Rating scale (0 to 10) | NPS, eNPS, granular measurement | "How likely are you to recommend us to a friend?" | Captures intensity, sensitive to small changes | Anchor points can feel arbitrary |
| Multiple choice | Categorization, single selection | "Which channel do you use most often?" | Fast for respondents, clean analysis | Limited to the options you provide |
| Binary (yes or no) | Factual screening, gating | "Have you used the feature this month?" | Lowest cognitive load | No nuance, no intensity |
| Ranking | Priorities, tradeoffs | "Rank these five benefits from most to least important" | Forces real tradeoffs | Higher cognitive load, dropout risk |
| Open-ended | Context, unexpected insights | "What would you change about this product?" | Rich qualitative data | 18% nonresponse rate per Pew Research |
Guideline: Use 70 to 80% closed-ended questions for trend tracking and benchmarking. Use 20 to 30% open-ended questions for qualitative discovery. Limit open-ended to two or three per survey. See our open-ended survey questions guide for a deeper treatment of qualitative question design.
Fifteen wording mistakes that break survey data
Most bad surveys are bad because of wording. These fifteen mistakes appear in more than half the surveys we see, and each one silently corrupts the data.
1. Double-barreled questions. "How satisfied are you with the speed and accuracy of our support?" mixes two constructs. Split into two questions.
2. Leading questions. "How much did you enjoy our excellent service?" presupposes enjoyment and excellence. Use neutral framing: "How would you describe your experience with our service?"
3. Loaded language. Words like "failed," "excessive," or "only" bias responses. Replace with neutral alternatives.
4. Double negatives. "Do you disagree that we should not remove this feature?" forces respondents to parse nested negations. Rewrite in positive form.
5. Absolute phrasing. "Do you always check our newsletter?" pushes respondents away from "yes" because nothing is always true. Use frequency scales instead.
6. Jargon and acronyms. If respondents do not know what NPS, CSAT, or MRR mean, do not use the terms. Define the concept in plain language.
7. Assumption of experience. "How satisfied were you with the onboarding call?" assumes every respondent had an onboarding call. Add a screener or an N/A option.
8. Unbalanced scales. "Excellent, good, fair" has no negative options. Include an equal number of positive and negative response choices.
9. Non-exhaustive options. Multiple-choice lists that do not cover all realistic answers force respondents to pick something false. Add "Other (please specify)" as a safety net.
10. Overlapping categories. Age buckets like "18 to 25, 25 to 35" let someone who is 25 pick either one. Use clean ranges: "18 to 24, 25 to 34."
11. Vague quantifiers. "Do you use the product often?" means different things to different people. Use concrete time anchors: "How many times did you use the product in the past seven days?"
12. Social desirability bias traps. Asking about politically sensitive or socially loaded behavior in non-anonymous surveys yields inflated answers. Use anonymous surveys for sensitive topics.
13. Memory decay. Questions about events more than three months old return mostly guesses. Ask close to the event or use "in the past 30 days" anchors.
14. Too many response options. Offering more than seven options pushes respondents into satisficing. Keep scales between 5 and 7 points.
15. Sentence length over 20 words. Long questions increase comprehension failure. Count your words. If a question runs past 20, rewrite it.
Every one of these is preventable. Run your draft survey through the list before you launch; the mechanical mistakes can even be caught automatically, as in the sketch below.
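Several of these checks are mechanical enough to automate. Here is a minimal TypeScript sketch of a draft-question linter; the word lists, regexes, and 20-word threshold are illustrative assumptions, and a regex pass only catches surface patterns, so it complements rather than replaces a human review.

```typescript
// Minimal draft-question linter for the mechanical mistakes above.
// Word lists and thresholds are illustrative; tune them for your surveys.
const ABSOLUTES = /\b(always|never|all|none|every)\b/i;
const VAGUE_QUANTIFIERS = /\b(often|sometimes|regularly|frequently|rarely)\b/i;

function lintQuestion(question: string): string[] {
  const issues: string[] = [];
  const wordCount = question.trim().split(/\s+/).length;

  if (wordCount > 20) issues.push(`Too long (${wordCount} words, max 20)`);
  if (/\band\b/i.test(question))
    issues.push("Contains 'and': check for a double-barreled question");
  if (ABSOLUTES.test(question)) issues.push("Absolute phrasing (always/never/every)");
  if (VAGUE_QUANTIFIERS.test(question)) issues.push("Vague quantifier (often/sometimes)");

  return issues;
}

// Flags both absolute phrasing and a vague quantifier:
console.log(lintQuestion("Do you always read our newsletter often?"));
```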
Response scales and how to choose one
Picking the wrong scale throws away information or invents precision that does not exist. Here is how to choose.
5-point Likert. Use for operational surveys where you need clean benchmarks and fast mobile completion. 5-point scales produce simple, comparable top-2-box percentages and are the default for customer and employee satisfaction tracking.
7-point Likert. Use for research surveys where you care about fine-grained intensity. Academic research by Krosnick and Presser shows that 7-point scales produce slightly higher reliability than 5-point when measuring attitudes, because they let respondents express "somewhat" versus "very."
0 to 10 rating scale. Use for NPS and eNPS. The 11-point scale is sensitive enough to distinguish promoters from passives and is the industry standard for relationship tracking. See our NPS question examples for proven phrasing.
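The NPS arithmetic itself is simple: the share of promoters (scores of 9 to 10) minus the share of detractors (0 to 6), with passives (7 to 8) counting only toward the total. A minimal sketch in TypeScript:

```typescript
// Net Promoter Score: % promoters (9-10) minus % detractors (0-6).
// Passives (7-8) count toward the total but toward neither group.
function calculateNps(scores: number[]): number {
  if (scores.length === 0) throw new Error("No responses");
  const promoters = scores.filter((s) => s >= 9).length;
  const detractors = scores.filter((s) => s <= 6).length;
  return Math.round(((promoters - detractors) / scores.length) * 100);
}

// 3 promoters, 1 passive, 2 detractors out of 6 responses -> NPS 17
console.log(calculateNps([10, 9, 9, 7, 5, 3]));
```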
Binary. Use for factual screening where there is no middle ground. "Do you have a manager? Yes or no" is a clean binary. "Are you satisfied? Yes or no" is not.
Whether to include a neutral midpoint. Neutral midpoints ("neither agree nor disagree") attract satisficers who do not want to commit. Remove the midpoint when you need to force a directional answer. Keep it when you believe neutrality is a real position.
Unipolar vs bipolar. Unipolar scales measure one end of a construct ("not at all satisfied to extremely satisfied"). Bipolar scales measure both ends ("very dissatisfied to very satisfied"). Use bipolar when the construct has a real opposite; use unipolar when you are measuring presence versus absence.
Open-ended vs closed-ended questions
Closed-ended questions are the backbone of survey data. They are fast to answer, easy to analyze, and produce clean benchmarks. Open-ended questions fill in the gaps closed-ended questions cannot reach.
The nonresponse problem. Pew Research found that open-ended questions receive an 18% item nonresponse rate compared to 1 to 2% for closed-ended. That means nearly one in five respondents skips every open-ended question. The asymmetry compounds across multiple open-ended questions in a row.
When to use open-ended. For qualitative discovery, unexpected insights, and follow-up probes after a rating. "What is the one thing we could improve?" at the end of a satisfaction survey is the most valuable question on the survey.
When to avoid open-ended. In high-frequency pulse surveys. In surveys with more than 15 questions. When you already know the answer categories you need to measure.
The ratio rule. 70 to 80% closed-ended for measurement, 20 to 30% open-ended for depth. Cap open-ended at two or three per survey so completion rates stay above 60%.
Question ordering and flow
Question order is invisible but powerful. Two surveys with identical questions in different orders can produce measurably different results.
The funnel approach. Start with broad, easy questions. Move to specific questions in the middle. End with demographics and sensitive items. This mirrors how human conversation works and minimizes early dropout.
Primacy and recency effects. On visual surveys, respondents tend to pick the first option they see (primacy). On audio surveys, they pick the last (recency). Randomize option order for multiple-choice items where order is not meaningful.
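Most survey tools randomize option order for you; if you are rolling your own, a Fisher-Yates shuffle is the standard way to get an unbiased random order. A quick sketch, with the anchor option ("Other") pinned to the end:

```typescript
// Fisher-Yates shuffle: returns a copy of the options in unbiased
// random order, leaving the original array untouched.
function shuffleOptions<T>(options: T[]): T[] {
  const shuffled = [...options];
  for (let i = shuffled.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [shuffled[i], shuffled[j]] = [shuffled[j], shuffled[i]];
  }
  return shuffled;
}

// Pin "Other" last by shuffling only the meaningful options.
const channels = ["Email", "Slack", "In-app chat", "Phone"];
console.log([...shuffleOptions(channels), "Other (please specify)"]);
```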
Priming and contrast. Asking about specific dissatisfactions before a general satisfaction question lowers the general rating. The specific questions "prime" the respondent to focus on problems. Put general questions before specific ones unless you have a reason not to.
Group questions by topic. Switching topics repeatedly increases cognitive load. Cluster questions by theme so respondents can stay in one mental frame.
Sensitive questions last. Income, politics, health, and demographics go at the end. If a respondent drops off, you still get their substantive answers.
How respondents actually answer questions
The four-stage cognitive model from Tourangeau, Rips, and Rasinski's "The Psychology of Survey Response" (2000) explains why wording, order, and format matter so much. Every answer goes through four stages:
1. Comprehension. The respondent parses the question and decides what it is asking. Failures here come from jargon, ambiguity, and long sentences.
2. Retrieval. The respondent searches memory for relevant information. Failures come from asking about events that are too old or too minor to remember accurately.
3. Judgment. The respondent weighs what they found in memory and forms an opinion. Failures come from vague questions that do not tell the respondent what standard to use.
4. Response. The respondent maps their judgment onto the available response options. Failures come from response formats that do not fit the underlying construct.
Every survey question failure can be traced to one of these four stages. When a question does not work, the fix is usually obvious once you know which stage broke.
Satisficing. When respondents cannot or will not go through all four steps carefully, they shortcut the process. Straight-lining, picking the first reasonable answer, and always selecting the midpoint are satisficing behaviors. Long surveys, boring questions, and bad response formats all push respondents into satisficing mode.
How to pretest survey questions before launch
Most survey problems are cheap to fix before launch and expensive to fix after. Pretesting takes an hour and catches most of them.
Cognitive interviewing. Sit with five representative respondents. Ask them to "think aloud" as they read and answer each question. Document every hesitation, misinterpretation, or clarifying question. Gordon Willis's research shows that five cognitive interviews surface roughly 80% of comprehension problems.
Expert review. Have a second writer read the survey cold. They will catch leading language, ambiguity, and missed options that the original author is blind to.
Pilot testing. Launch the survey to 20 to 50 real respondents. Look at item nonresponse (which questions people skip), completion time (are people rushing?), and straight-lining (are people picking the same option for every scale?). Any question with more than 5% nonresponse or heavy straight-lining needs to be rewritten.
Data review. Before drawing conclusions, look at the distribution of every question. If a question shows almost no variance (everyone picks "4 out of 5"), the question is not discriminating. If the distribution is U-shaped, the question is probably double-barreled or leading.
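Both the pilot checks and the data review are easy to automate. A minimal sketch, assuming each pilot response is a map from question ID to answer with null for skipped items; the data structures here are illustrative, and the 5% cutoff comes from the pilot-testing rule above:

```typescript
// Each pilot response maps question ID -> answer; null means skipped.
type Response = Record<string, number | null>;

const pilot: Response[] = [
  { q1: 4, q2: 5, q3: null },
  { q1: 5, q2: 2, q3: 3 },
  { q1: 3, q2: 3, q3: 3 },
];

// Item nonresponse rate: share of respondents who skipped a question.
function nonresponseRate(responses: Response[], questionId: string): number {
  const skipped = responses.filter((r) => r[questionId] === null).length;
  return skipped / responses.length;
}

// Straight-lining: identical answers to every scale question answered.
function isStraightLiner(response: Response): boolean {
  const answers = Object.values(response).filter((a): a is number => a !== null);
  return answers.length > 1 && answers.every((a) => a === answers[0]);
}

// Variance check: near-zero variance means a question is not
// discriminating between respondents.
function variance(values: number[]): number {
  const mean = values.reduce((sum, v) => sum + v, 0) / values.length;
  return values.reduce((sum, v) => sum + (v - mean) ** 2, 0) / values.length;
}

console.log(nonresponseRate(pilot, "q3")); // 0.33 -> far above the 5% threshold
console.log(pilot.map(isStraightLiner)); // [false, false, true]
console.log(variance([4, 5, 3])); // ~0.67 -> q1 does discriminate
```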
Good vs bad survey questions side by side
Here are common bad questions and their good rewrites. Every rewrite fixes a specific problem from the list above.
| Bad question | Problem | Good rewrite |
|---|---|---|
| "How satisfied are you with the speed and accuracy of our support?" | Double-barreled | "How satisfied are you with the speed of our support?" + "How satisfied are you with the accuracy of our support?" |
| "How much did you enjoy our excellent service?" | Leading, loaded | "How would you describe your experience with our service?" |
| "Do you always check your email first thing in the morning?" | Absolute phrasing | "In the past seven days, how often did you check your email within 30 minutes of waking up?" |
| "How often do you use the product often?" | Vague quantifier | "How many days in the past week did you use the product?" |
| "Do you disagree that we should not add this feature?" | Double negative | "Should we add this feature? Yes or no." |
| "How was your onboarding call?" | Assumes the call happened | "Did you have an onboarding call? If yes, how would you rate it?" |
| "Rate our service: excellent, good, fair" | Unbalanced scale | "Rate our service: very poor, poor, fair, good, excellent" |
| "Which of these best describes you? A, B, C" | Non-exhaustive | "Which of these best describes you? A, B, C, D, Other (please specify)" |
| "How old are you? 18 to 25, 25 to 35, 35 to 45" | Overlapping categories | "How old are you? 18 to 24, 25 to 34, 35 to 44" |
| "How would you rate our customer-centric, omnichannel engagement platform?" | Jargon | "How would you rate the experience of using our product?" |
For more before-and-after examples across different survey types, see our survey questions examples guide.
Writing survey questions for specific contexts
Different survey contexts have different constraints. Here is how the principles apply in practice.
Customer satisfaction. Keep surveys to 3 to 5 questions. Anchor on a single benchmark metric (CSAT or NPS) and add one or two drivers. Our customer satisfaction measurement guide goes deeper.
Employee surveys. Anonymity is non-negotiable. Keep length under 15 questions. Rotate topics across quarters. See our employee survey questions guide for the full framework.
Product feedback. Trigger surveys at natural moments in the user journey, not randomly. Use binary screeners to filter for the right respondents before asking rating questions. See how to use in-app surveys.
Post-event feedback. Send within 24 hours while memory is fresh. Use the post-event survey framework to structure reaction, learning, and intent to return.
Research surveys. Use 7-point scales and rigorous pretesting. Pilot with 50 respondents before launch. Document every wording decision so the data can be replicated later.
In every context, the five principles hold. Every rule on this page is a way of making sure respondents go through all four cognitive stages without shortcuts.
Best practices for getting better survey data
These are the practices that separate surveys that produce action from surveys that produce noise.
Keep surveys short. Completion rates fall sharply past 15 questions. If you need more data, run multiple shorter surveys instead of one long one. Our guide on increasing survey response rates covers length and incentives in detail.
Write before you design. Draft every question as plain text before you think about scales, layouts, or branching. A well-written question in a plain form beats a poorly written question in a beautiful interface.
Pretest with five real respondents. This single practice catches more problems than any other. Five people, 30 minutes each, thinking aloud.
Anchor to decisions. Before you ask a question, write down what action you will take for each possible answer. If you cannot name an action, the question is not necessary.
Respect anonymity where it matters. Sensitive topics produce honest answers only when respondents trust the channel. For employee and customer feedback on sensitive topics, use a tool that enforces anonymity by design.
Close the loop. Share results with respondents and tell them what changed because of their answers. Closing the feedback loop is what makes people willing to answer next time.
Free survey templates from Formbricks
Skip the blank page. Formbricks is an open-source experience management platform with free, pre-built survey templates you can deploy in minutes.
Why Formbricks for survey work:
- Open source and self-hostable. Survey data stays on your infrastructure. No third-party access, no data sharing, full compliance with internal privacy requirements. Self-hosting matters when you are asking sensitive questions and need to guarantee respondents that their answers stay private. See our self-hosting guide.
- Built-in anonymity. Anonymous surveys are a first-class feature, not an afterthought. Respondents trust the channel, and you get honest answers.
- Proven templates. Every template on Formbricks is based on validated question sets from published research and real-world deployments.
- Flexible distribution. Deploy via link, email, in-app survey, or website survey. Reach respondents in the channel that fits your use case.
- Privacy-first by default. GDPR-compliant out of the box. See our GDPR survey tool guide for the full compliance checklist.
How to get started:
1. Sign up at formbricks.com (free tier, no credit card required)
2. Browse survey templates or start from a blank survey
3. Customize the questions using the principles in this guide
4. Set distribution channels and response targets
5. Launch and monitor responses in real time from your dashboard
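If you are embedding in-app surveys, your product connects to Formbricks through the @formbricks/js SDK. A minimal sketch; the exact method names and options have changed across SDK versions, so treat the calls below as illustrative and follow the current Formbricks docs for setup:

```typescript
import formbricks from "@formbricks/js";

// Illustrative setup: the environment ID comes from your Formbricks
// dashboard. Option names vary by SDK version; check the current docs.
formbricks.init({
  environmentId: "<your-environment-id>",
  apiHost: "https://app.formbricks.com",
});

// Trigger in-app surveys from product events, e.g. after onboarding.
formbricks.track("onboarding_completed");
```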
Start writing better survey questions with Formbricks →
