40+ concept testing survey questions for products and marketing (2026)
Johannes
CEO & Co-Founder
12 Minutes
April 15th, 2026
Most products that fail at launch failed at the concept stage and no one noticed. By the time a product reaches the market, the organization has spent six to twelve months building something that a 30-minute concept test could have redirected. Concept testing is the cheapest way to find out whether an idea is worth building before you build it. A good concept test survey answers four questions: do people understand the concept, is it different from what exists, is it relevant to them, and would they actually use or buy it?
This guide gives you 40+ concept testing survey questions, the methodology choices that separate directional noise from reliable signal (monadic vs sequential monadic vs comparative), the Van Westendorp Price Sensitivity Meter for pricing, and a free template. It applies to product concepts, feature concepts, ad concepts, and brand concepts.
What you will find in this guide:
- What concept testing is and when to run it
- Monadic vs sequential monadic vs comparative methodology
- 40+ concept test survey questions grouped by purpose
- The Van Westendorp Price Sensitivity Meter (4 questions)
- MaxDiff for feature concept prioritization
- Best practices for sample size and recruitment
- Common mistakes that produce misleading data
- Free Formbricks concept testing template
What is a concept test
A concept test is a structured survey run with the target audience that measures how a product, feature, or marketing concept performs against a defined set of criteria before development. It is one of the highest-leverage market research methods because the cost of running it is almost nothing compared to the cost of building and launching the wrong thing.
What concept testing is good for:
- Go / no-go decisions before building a new product.
- Variant selection between multiple concepts competing for the same budget.
- Price range discovery for new products and features.
- Positioning validation for marketing campaigns and brand work.
- Feature prioritization when planning a roadmap with more options than resources.
What concept testing is not good for:
- Predicting the actual market share of a new product (intent to buy always overstates actual buying).
- Replacing qualitative research. The why behind a concept's performance comes from interviews and open-ended responses, not closed-ended survey data.
- Testing concepts that do not yet have concrete form. Respondents need something to react to.
For the related product-market fit framework, see our product market fit survey questions guide and the PMF best practice guide.
When to run a concept test
Before development starts. The ideal time is after you have a defined concept but before the engineering investment. This is when the survey can redirect the most resources.
Before a brand or ad launch. Creative concept tests before media spend are cheap compared to the cost of a campaign that does not land.
Before a major feature release. Feature concept tests can reveal that the feature solves a problem users do not have, or that the messaging around the feature is unclear.
Before pricing decisions. Van Westendorp and related pricing surveys are most useful when run before the first pricing commitment is made.
As a recurring check. Mature product teams run light concept tests every quarter on ideas in the roadmap. It keeps the pipeline honest and catches bad ideas before they get to sprint planning.
For broader market research framing, see our market research survey questions guide.
Monadic vs sequential monadic vs comparative
This is the most important design choice in a concept test, and the one most commonly skipped.
| Method | How it works | Pros | Cons | Best for |
|---|---|---|---|---|
| Monadic | Each respondent sees one concept only | Clean signal, no contamination | Expensive; needs N per concept | Big go/no-go decisions, primary concept validation |
| Sequential monadic | Each respondent sees all concepts in random order with the same questions repeated | More efficient sample use; direct comparison possible | Order effects; later concepts get fewer thoughtful answers | Variant tests, iteration rounds |
| Comparative | All concepts shown together, respondents pick or rank | Simple; surfaces preferences directly | Biased toward novelty and first impressions; no absolute judgment | Creative testing, brainstorm narrowing |
Practical guidance:
- Use monadic when the decision is big, expensive, and hard to reverse (new product launch, major rebrand). Monadic data is cleaner than anything else and protects against order effects that make comparative tests unreliable.
- Use sequential monadic for variant tests within a category (three ad concepts for the same product). Randomize the order aggressively and include attention checks.
- Use comparative when you only need a directional preference among two or three concepts and can accept order bias.
The choice of method affects sample size. Monadic needs 100 to 300 per concept. Sequential monadic can share respondents across concepts, so total N can be lower.
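As a rough sanity check on those sample sizes, the margin of error for a proportion (e.g. a top-box purchase-intent score) can be sketched in a few lines. This is a back-of-envelope normal approximation, not a full power analysis:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error for a proportion at sample size n,
    using the normal approximation (z=1.96 for ~95% confidence).
    p=0.5 is the worst case and gives the widest margin."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 200, 300):
    print(f"N={n}: ±{margin_of_error(n) * 100:.1f} pts")
# N=100 is roughly ±9.8 points; N=300 is roughly ±5.7 points
```

At N=100 per concept a score carries roughly a ±10-point margin, which is why monadic tests at the low end of that range should only be trusted to detect large differences between concepts.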
40+ concept testing survey questions
Each question is tagged with a type and priority (Essential, Recommended, or Nice-to-have). Every concept test should show the concept itself (image, description, mockup, ad creative, positioning statement) before the questions.
Comprehension (questions 1-5)
Start here. If people do not understand the concept, every other question is polluted.
1. In your own words, what is this product or idea?
- Type: Open-ended | Essential
- Free-recall comprehension check. If responses wildly diverge from your intended description, the concept is unclear.
2. How clear is this concept to you?
- Type: Likert (1-5) | Essential
- Self-reported clarity.
3. What does this product or idea do?
- Type: Open-ended | Essential
- Functional comprehension.
4. Who is this product or idea for?
- Type: Open-ended | Essential
- Audience recognition. If respondents cannot name the target, positioning is off.
5. Is there anything confusing or unclear about this concept?
- Type: Open-ended | Essential
- Surfaces specific confusion points.
Uniqueness and differentiation (questions 6-9)
6. How different is this from other products or ideas you have seen?
- Type: Likert (1-5) | Essential
- Differentiation from the respondent's reference set.
7. Does this concept remind you of anything else?
- Type: Open-ended | Essential
- Reveals the respondent's mental reference set. Sometimes surprising.
8. How unique do you find the idea behind this concept?
- Type: Likert (1-5) | Recommended
9. What is different or new about this compared to what you use today?
- Type: Open-ended | Recommended
Relevance and fit (questions 10-14)
10. How relevant is this product or idea to your needs?
- Type: Likert (1-5) | Essential
- Primary relevance signal.
11. What problem does this solve for you personally?
- Type: Open-ended | Essential
- Forces the respondent to articulate the problem in their own words.
12. How often do you encounter the problem this concept is designed to solve?
- Type: Multiple choice | Essential
- Daily, weekly, monthly, rarely, never. Frequency of the underlying problem is a strong purchase intent predictor.
13. How important is solving this problem for you?
- Type: Likert (1-5) | Essential
14. Does this concept fit into your current routine or workflow?
- Type: Likert (1-5) | Recommended
Appeal and interest (questions 15-19)
15. How appealing do you find this concept overall?
- Type: Likert (1-5) | Essential
16. How interested would you be in learning more about this?
- Type: Likert (1-5) | Essential
17. How likely would you be to try this if it were available?
- Type: Likert (1-5) | Essential
18. Would you tell a friend or colleague about this concept?
- Type: Likert (1-5) | Recommended
- Word-of-mouth intent.
19. What is your first reaction to this concept?
- Type: Open-ended | Recommended
Purchase and usage intent (questions 20-24)
20. If this were available today, how likely would you be to buy or use it?
- Type: Likert (1-5) | Essential
- The classic purchase intent question.
21. At what price would you seriously consider buying this? (if applicable)
- Type: Open-ended or multiple choice | Essential
22. How likely are you to recommend this to a friend? (NPS-style)
- Type: Rating (0-10) | Essential
- Word-of-mouth intent quantified.
23. If this were available now, how soon would you try or buy it?
- Type: Multiple choice | Recommended
- Immediately / Within a month / Within 3 months / Within a year / Never.
24. What would have to be true for you to actually buy or use this?
- Type: Open-ended | Essential
- Surfaces adoption barriers. Often the richest open-ended question in the survey.
Price sensitivity: Van Westendorp (questions 25-28)
The Van Westendorp Price Sensitivity Meter is a four-question framework that estimates an optimal price range from respondent pricing judgments.
25. At what price would you consider this product or idea to be priced so low that you would question the quality?
- Type: Open-ended (numeric) | Essential
- The "too cheap" price.
26. At what price would you consider this product or idea to be a bargain, a great buy for the money?
- Type: Open-ended (numeric) | Essential
- The "cheap" price.
27. At what price would you consider this product or idea to be getting expensive, but you would still consider buying it?
- Type: Open-ended (numeric) | Essential
- The "expensive" price.
28. At what price would you consider this product or idea to be so expensive that you would not consider buying it?
- Type: Open-ended (numeric) | Essential
- The "too expensive" price.
Plot the four cumulative curves across respondents. The intersections identify the optimal price point and the boundaries of the acceptable price range.
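A minimal sketch of the intersection logic, assuming you have the raw per-respondent answers to questions 25 and 28 (the same approach extends to the "cheap" and "expensive" curves; all prices below are hypothetical):

```python
def van_westendorp_opp(too_cheap, too_expensive, steps=500):
    """Estimate the Optimal Price Point: the price where the share of
    respondents calling it 'too cheap' equals the share calling it
    'too expensive'. Simplified sketch of one intersection; a full
    analysis computes all four curves and their crossings."""
    lo = min(min(too_cheap), min(too_expensive))
    hi = max(max(too_cheap), max(too_expensive))
    best_price, best_gap = lo, float("inf")
    for i in range(steps):
        p = lo + i * (hi - lo) / (steps - 1)
        # share who would call price p too cheap (their threshold >= p)
        share_too_cheap = sum(v >= p for v in too_cheap) / len(too_cheap)
        # share who would call price p too expensive (their threshold <= p)
        share_too_exp = sum(v <= p for v in too_expensive) / len(too_expensive)
        gap = abs(share_too_cheap - share_too_exp)
        if gap < best_gap:
            best_gap, best_price = gap, p
    return best_price

# hypothetical responses, prices in USD
too_cheap = [5, 10, 15, 20, 25, 12, 18, 8]
too_expensive = [20, 30, 25, 40, 35, 22, 28, 45]
print(round(van_westendorp_opp(too_cheap, too_expensive)))  # → 20
```

The grid search simply walks the price axis and keeps the price where the two cumulative shares are closest, which is the crossing point you would otherwise read off the plotted curves.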
Feature prioritization with MaxDiff (questions 29-32)
MaxDiff (maximum difference scaling) is a forced-choice method that yields reliable feature rankings without the cognitive overload of traditional importance ranking.
29. Of the following features, which is the most important to you?
- Type: Multiple choice | Essential
- Part of a repeated MaxDiff block. Respondents see 4 to 5 features and pick most and least important. Repeat across multiple subsets.
30. Of the following features, which is the least important to you?
- Type: Multiple choice | Essential
31. Would any of these features change your intent to use or buy?
- Type: Binary (Yes/No) + follow-up | Recommended
32. What feature is missing that you would expect from a product like this?
- Type: Open-ended | Essential
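To illustrate how questions 29 and 30 combine into a ranking, here is a simple count-based scoring sketch: best-picks minus worst-picks, normalized by how often each feature was shown. The feature names and tasks are hypothetical, and production MaxDiff studies typically fit a logit or hierarchical Bayes model rather than raw counts:

```python
from collections import Counter

def maxdiff_scores(tasks):
    """Count-based MaxDiff scoring: (times chosen best - times chosen
    worst) / times shown. Scores range from -1 (always worst) to +1
    (always best)."""
    best, worst, shown = Counter(), Counter(), Counter()
    for task in tasks:
        for item in task["shown"]:
            shown[item] += 1
        best[task["best"]] += 1
        worst[task["worst"]] += 1
    return {item: (best[item] - worst[item]) / shown[item] for item in shown}

# hypothetical MaxDiff tasks, each showing a subset of 4 features
tasks = [
    {"shown": ["export", "sso", "api", "themes"], "best": "api", "worst": "themes"},
    {"shown": ["export", "sso", "api", "alerts"], "best": "api", "worst": "sso"},
    {"shown": ["export", "themes", "alerts", "sso"], "best": "export", "worst": "themes"},
]
scores = maxdiff_scores(tasks)
print(sorted(scores, key=scores.get, reverse=True))
# → ['api', 'export', 'alerts', 'sso', 'themes']
```

Even this naive scoring produces a full ranking from forced choices, which is exactly what traditional "rate each feature 1-5 on importance" grids fail to do (everything ends up rated 4 or 5).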
Likes, dislikes, and open feedback (questions 33-42)
33. What do you like most about this concept?
- Type: Open-ended | Essential
- Positives in the respondent's own words.
34. What do you dislike or find concerning about this concept?
- Type: Open-ended | Essential
- Negatives. Always include alongside the positives.
35. What would you change about this concept to make it more appealing?
- Type: Open-ended | Essential
- Action-oriented improvement feedback.
36. How believable are the claims in this concept?
- Type: Likert (1-5) | Recommended
- Credibility check, especially important for ad and brand concepts.
37. Who do you think is most likely to use or buy this?
- Type: Open-ended | Recommended
38. What name fits this concept best?
- Type: Open-ended or multiple choice | Nice-to-have
- Useful for brand and product naming tests.
39. Are there any images, words, or phrases that stood out to you (positively or negatively)?
- Type: Open-ended | Recommended
40. How well does this fit with what you know about [brand]?
- Type: Likert (1-5) | Recommended
- Brand fit question for existing brands.
41. Is there anything that seems missing from this concept?
- Type: Open-ended | Recommended
42. Is there anything else you would like to share about this concept?
- Type: Open-ended | Essential
- Catch-all.
Best practices
Recruit from the actual target audience. Concept tests with the wrong sample produce confident but wrong answers. If the concept is for SMB owners, do not recruit from a general consumer panel.
Show the concept clearly. Respondents need something concrete to react to. Use a visual, a mockup, or a clear written description with a benefit statement. Vague concepts produce vague data.
Run monadic when the decision is big. Sequential monadic and comparative are tempting because they are cheaper, but they produce biased data on high-stakes decisions.
Include a comprehension check before anything else. If people do not understand the concept, downstream answers are noise.
Ask the "what would change your mind" question. The most actionable part of any concept test is the open-ended feedback on barriers to adoption.
Pair survey data with qualitative research. Run 5 to 10 qualitative interviews alongside every concept test. The survey tells you what; the interviews tell you why.
Benchmark against previous concept tests. Over time, your internal benchmark for purchase intent, relevance, and appeal becomes the best signal for whether a new concept is a winner.
For the broader survey design principles, see our good survey questions guide.
Common mistakes
Testing with the wrong audience. Data from the wrong sample is worse than no data.
Skipping comprehension checks. Half of concept test failures trace back to concepts that were not understood.
Using comparative testing for big decisions. Comparative data is biased by order and novelty. Monadic is worth the extra investment.
Relying on purchase intent alone. Intent always overstates actual behavior. Always pair intent with frequency of the underlying problem and willingness to pay.
Taking intent scores at face value. A "5" on a 1-5 intent scale rarely means the respondent will actually buy. Apply a reality discount (typically 0.5 to 0.7) when estimating real-world adoption.
No open-ended questions. Closed-ended data alone cannot explain why a concept is winning or losing.
Running only one round. Great concept testing programs iterate. Round 1 surfaces problems; round 2 tests fixes.
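The reality discount mentioned above can be made concrete. A common pattern is to weight only the top two intent boxes, with weights well below 1; the specific weights here (0.6 for a 5, 0.3 for a 4) are illustrative assumptions, not a standard, and are best calibrated against your own launch history:

```python
def adoption_estimate(intent_counts, weights=None):
    """Discount stated purchase intent to estimate real-world adoption.
    intent_counts maps each 1-5 intent score to its respondent count.
    Default weights are illustrative: top box discounted to 0.6,
    second box to 0.3, everything else to 0."""
    weights = weights or {5: 0.6, 4: 0.3}
    total = sum(intent_counts.values())
    expected = sum(weights.get(score, 0) * n for score, n in intent_counts.items())
    return expected / total

# hypothetical concept test: respondent counts per intent score
counts = {5: 40, 4: 60, 3: 50, 2: 30, 1: 20}
print(f"{adoption_estimate(counts):.0%}")  # → 21%
```

In this hypothetical, 50% of respondents gave top-two-box intent, but the discounted adoption estimate is 21%, a far safer planning number.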
Free concept testing template
Formbricks is an open-source experience management platform with free concept testing templates you can deploy in minutes.
Why Formbricks for concept testing:
- Open source and self-hostable. Concept data stays on your infrastructure, which matters when testing competitive concepts.
- Monadic and sequential monadic support. Randomize concept display across respondents.
- Image and video embedding. Show the concept directly inline with the questions.
- Open-ended response analysis. Easy coding of qualitative data alongside closed-ended metrics.
- Free tier. No credit card required.
How to get started:
- Sign up at formbricks.com
- Start from the concept testing template
- Add your concept image, description, and questions from this guide
- Recruit target-audience respondents via your channels
- Analyze results and iterate
Start your concept test with Formbricks →
For related frameworks, see our product market fit survey questions, product survey questions, market research survey questions, and survey questions examples. You can also start from the evaluate a product idea template, the pricing survey template, or the fake door follow-up template for validation-stage research.
