
Docs Feedback

Why is it useful?

This survey measures the clarity of developer documentation. It pinpoints pages that are unclear or confusing so that product teams can improve them and strengthen the developer experience.

How to get started:

Once you have set up the Formbricks Widget, you have two ways to pre-segment your user base: based on events and based on attributes. Soon, you will also be able to import cohorts from PostHog with just a few clicks.

Step-by-step manual


Good documentation reduces support tickets, accelerates onboarding, and increases product adoption. Bad documentation does the opposite. The problem is that most teams do not know which category their docs fall into until they measure.

A documentation feedback survey captures whether each page helped the reader accomplish their goal. Over time, this data builds a quality map of your entire docs site, showing you exactly where to invest editorial effort.

When to deploy a docs feedback survey

On every documentation page. Place a persistent, low-friction feedback widget at the bottom of each page. This is the standard approach and it works because it requires zero targeting logic. Every page gets measured.

After a user completes a multi-page flow. If your docs have step-by-step guides that span multiple pages, add a summary feedback question at the end of the sequence.

After a search-to-page navigation. When a user searches your docs and clicks a result, ask whether the page answered their question. This measures search-to-resolution effectiveness.

Docs feedback survey questions

The simplest version is a single binary question. More detailed versions add context questions for negative feedback.

Minimal version:

  1. Was this page helpful? | Yes / No | Required

Standard version:

  1. Was this page helpful? | Yes / No | Required
  2. What were you trying to accomplish? | Open text | Conditional on "No"
  3. What was missing or incorrect? | Open text | Conditional on "No"

Detailed version:

  1. Did this page help you accomplish what you needed? | Yes / Partially / No | Required
  2. How would you rate the clarity of this page? | 1-5 scale | Optional
  3. How accurate is the information on this page? | Accurate / Contains errors / Not sure | Optional
  4. What could we improve? | Open text | Optional

The minimal version maximizes response volume. The detailed version produces richer data but at lower completion rates. Most teams start with the minimal version and use the detailed version on high-traffic pages.
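The conditional logic of the standard version can be sketched as a plain data structure. This is a hypothetical schema for illustration, not the actual Formbricks template format:

```python
# Hypothetical survey definition illustrating the standard version's
# branching: open-text follow-ups appear only after a "No" answer.
SURVEY = {
    "name": "Docs Feedback",
    "questions": [
        {"id": "helpful", "type": "yes_no",
         "text": "Was this page helpful?", "required": True},
        {"id": "goal", "type": "open_text",
         "text": "What were you trying to accomplish?",
         "show_if": {"helpful": "No"}},
        {"id": "missing", "type": "open_text",
         "text": "What was missing or incorrect?",
         "show_if": {"helpful": "No"}},
    ],
}

def visible_questions(answers: dict) -> list:
    """Return the ids of questions to show, given the answers so far."""
    shown = []
    for q in SURVEY["questions"]:
        cond = q.get("show_if")
        if cond is None or all(answers.get(k) == v for k, v in cond.items()):
            shown.append(q["id"])
    return shown
```

A "Yes" respondent sees only the binary question; a "No" respondent sees both follow-ups.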

Building a docs quality dashboard

With feedback on every page, you can build a documentation quality map.

Helpfulness rate per page. Calculate the percentage of "Yes" responses for each page. Sort by helpfulness rate to find your weakest pages.

Volume-weighted priority. A page with 50% helpfulness and 1,000 monthly visitors needs attention before a page with 30% helpfulness and 10 monthly visitors. Weight quality scores by traffic to prioritize fixes.
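Both calculations fit in a few lines. The data shapes below (page URL to answer list, page URL to monthly visitors) are assumptions for illustration, not a Formbricks export format:

```python
# Per-page helpfulness rate and a traffic-weighted priority score.
# `responses` maps page URL -> list of "Yes"/"No" answers;
# `traffic` maps page URL -> monthly visitors. Both shapes are hypothetical.

def helpfulness_rate(answers: list) -> float:
    return answers.count("Yes") / len(answers)

def priority(responses: dict, traffic: dict) -> list:
    """Rank pages by (1 - helpfulness) * traffic: biggest fixable impact first."""
    scores = {
        page: (1 - helpfulness_rate(ans)) * traffic.get(page, 0)
        for page, ans in responses.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

responses = {
    "/docs/quickstart": ["Yes"] * 500 + ["No"] * 500,  # 50% helpful
    "/docs/legacy-api": ["Yes"] * 3 + ["No"] * 7,      # 30% helpful
}
traffic = {"/docs/quickstart": 1000, "/docs/legacy-api": 10}
```

With the example numbers from the text, the 50%-helpful, high-traffic quickstart page outranks the 30%-helpful, low-traffic legacy page.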

Trend tracking. After updating a page, compare the helpfulness rate before and after. This validates whether your edits actually improved the reader experience.
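A quick way to judge whether a before/after difference is more than noise is a two-proportion z-test. A stdlib-only sketch, with made-up survey counts:

```python
import math

def two_proportion_p(yes_a: int, n_a: int, yes_b: int, n_b: int) -> float:
    """Two-sided p-value for a difference in 'Yes' rates (normal approximation)."""
    p_pool = (yes_a + yes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (yes_b / n_b - yes_a / n_a) / se
    # Convert |z| to a two-sided p-value via the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Before the rewrite: 60 of 120 responses were "Yes". After: 90 of 110.
p = two_proportion_p(60, 120, 90, 110)
```

With these counts the improvement is clearly significant; with small samples the same percentage jump may not be, which is why low-traffic pages need longer measurement windows.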

Category analysis. Group pages by documentation type (API reference, how-to guides, conceptual explanations, troubleshooting). If one category consistently scores lower, the problem may be structural (format, depth, examples) rather than page-specific.
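The category rollup is a simple grouping over per-page rates. The category mapping below is hypothetical:

```python
from collections import defaultdict

def category_rates(page_rates: dict, categories: dict) -> dict:
    """Average per-page helpfulness rates within each documentation category."""
    buckets = defaultdict(list)
    for page, rate in page_rates.items():
        buckets[categories.get(page, "uncategorized")].append(rate)
    return {cat: sum(rates) / len(rates) for cat, rates in buckets.items()}

page_rates = {"/docs/api/users": 0.55, "/docs/api/orders": 0.45,
              "/docs/guides/setup": 0.90}
categories = {"/docs/api/users": "API reference",
              "/docs/api/orders": "API reference",
              "/docs/guides/setup": "how-to guide"}
```

If the API reference bucket averages well below the guides bucket, the fix is likely structural (add request/response examples to the reference format) rather than page-by-page editing.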

What docs feedback data reveals

Missing pages. The "what were you trying to accomplish" responses from unhelpful pages reveal topics you have not documented. These are content creation priorities.

Outdated content. "Contains errors" responses flag pages where the documentation has fallen behind the product. Outdated docs are worse than no docs because they lead users down wrong paths.

Clarity problems. Pages that score high on accuracy but low on clarity need editorial attention, not technical updates. The information is correct but the presentation is not working.

Examples gap. One of the most common "what could we improve" responses across all documentation is "add more examples." If your docs are conceptually correct but lack code samples, API examples, or step-by-step walkthroughs, this feedback will surface it.

Common mistakes

Not reviewing feedback regularly. A docs feedback widget that no one checks is worse than no widget because it trains users to expect that their feedback is ignored. Review feedback weekly or biweekly.

Only tracking the binary. "Was this page helpful: Yes/No" produces a number. The open-text follow-ups produce insight. Both matter, but the text is where the action items live.

Not closing the loop. When you update a page based on feedback, consider adding a note: "Updated on [date] based on user feedback." This signals that you listen and encourages future feedback.

Making the widget intrusive. The feedback prompt should be subtle and positioned below the main content. It should not overlay the documentation or require scrolling past a survey to reach the content.

Set up this survey in Formbricks

Formbricks supports inline website surveys that you can embed on every documentation page. The widget sits at the bottom of the page and expands to show follow-up questions only when the user clicks "No" or "Partially."

The template captures the page URL automatically, so you can filter and sort feedback by page without any additional setup. As feedback accumulates, this builds your docs quality dashboard for you.

Formbricks also supports custom events, so you can trigger a docs feedback survey after a user completes a search-to-page navigation, measuring whether the search effectively connected the user with the right documentation.
