In 2010, Sean Ellis was trying to find a repeatable signal for product-market fit across dozens of startups he was advising. He tried NPS, retention metrics, and qualitative interviews. None of them gave him a clean, consistent signal he could compare across companies.
So he designed a single survey question that did.
The method became the de facto standard for measuring PMF in SaaS. It's been used by Dropbox, Superhuman, Airbnb, and hundreds of other companies to test whether they had product-market fit before scaling. It requires no analytics infrastructure. It takes under two weeks to get meaningful results. And the threshold is strikingly specific: if 40% of your users say they'd be "very disappointed" without your product, you have PMF.
Note
What is the Sean Ellis PMF survey? The Sean Ellis PMF survey is a one-question survey that asks users: "How would you feel if you could no longer use [Product]?" with four response options: Very disappointed, Somewhat disappointed, Not disappointed, and I no longer use it. The percentage of users who answer "Very disappointed" is your PMF score. Sean Ellis found that products where 40% or more of users answered "Very disappointed" consistently achieved sustainable growth.
Note
TL;DR: Ask active users "How would you feel if you could no longer use [Product]?" If 40%+ say "Very disappointed," you have product-market fit. Below 40%: you don't — yet. The survey should target users who've experienced value (2+ weeks active, completed a core action). Aim for at least 40 responses, ideally 100. The follow-up questions — why they'd be disappointed and what type of person should use your product — are where the actionable insight lives.
The question itself
The Sean Ellis PMF survey is built around a single question:
"How would you feel if you could no longer use [Product]?"
With four fixed response options:
- Very disappointed
- Somewhat disappointed
- Not disappointed (it really isn't that necessary)
- N/A - I no longer use [Product]
That's it. One question. Four options.
Your PMF score is the percentage of respondents who choose "Very disappointed."
The benchmark: 40% or above indicates product-market fit.
Ellis derived this threshold empirically — he surveyed the user bases of dozens of startups and found that 40% was the consistent dividing line between companies that grew sustainably and companies that stalled. Below 40%, no amount of marketing or sales investment produces compounding growth. Above 40%, growth starts to work.
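The score calculation is simple enough to sketch in a few lines. This is an illustrative snippet (the response labels are made up, not from any real survey tool): count "Very disappointed" answers against all respondents except those who no longer use the product.

```python
def pmf_score(responses):
    """PMF score: percentage of active respondents who chose
    'Very disappointed'. 'I no longer use it' answers are excluded
    from the denominator."""
    active = [r for r in responses if r != "no_longer_use"]
    if not active:
        return 0.0
    very = sum(1 for r in active if r == "very_disappointed")
    return 100 * very / len(active)

# Illustrative sample: 100 respondents, 5 of whom no longer use the product.
responses = (
    ["very_disappointed"] * 42
    + ["somewhat_disappointed"] * 35
    + ["not_disappointed"] * 18
    + ["no_longer_use"] * 5
)
print(f"PMF score: {pmf_score(responses):.1f}%")  # 42/95 = 44.2%, above the 40% bar
```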
Why this question works
Most product satisfaction surveys ask users to rate their satisfaction, likelihood to recommend, or feature preferences. These metrics are useful — but they don't measure fit.
The "very disappointed" question works because it measures dependency, not satisfaction.
A user can be satisfied with your product and still switch if a better option appears. A user who'd be very disappointed to lose your product has integrated it into their workflow. It solves a problem they haven't solved elsewhere. That's fit.
The "somewhat disappointed" cohort is also informative — these users see value but haven't built a dependency. They're your product improvement signal. The gap between "somewhat disappointed" and "very disappointed" is where product development work usually sits.
The "not disappointed" group is equally revealing. These are people who signed up, tried it, and found it disposable. If this group is large, you have a targeting or positioning problem — you're attracting people who aren't your ICP.
Who to survey: the most important variable
The single biggest mistake teams make with the PMF survey is surveying the wrong people.
If you survey all users — including people who signed up last week, churned users, or users who never completed a core action — you'll get a deflated score that doesn't reflect your real product fit.
The Sean Ellis PMF survey should target active users who have experienced value.
The standard targeting criteria:
- Active in the last 2 weeks (users who are actively using the product, not dormant)
- Account age: at least 2 weeks (enough time to have experienced core value)
- Completed at least one core action (the key action that defines value in your product - for a survey tool, this might be "published at least one survey")
The "I no longer use [Product]" option exists specifically to handle users who slip through targeting — don't remove it, and filter those responses out when calculating your score.
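The targeting criteria above can be sketched as a simple filter. The `user` record and its field names here are hypothetical, purely to make the three rules concrete:

```python
from datetime import datetime, timedelta

def eligible_for_pmf_survey(user, now):
    """Standard PMF survey targeting. `user` is a hypothetical dict;
    the field names are illustrative, not any real schema."""
    return (
        now - user["last_active"] <= timedelta(days=14)      # active in last 2 weeks
        and now - user["signed_up"] >= timedelta(days=14)    # account at least 2 weeks old
        and user["core_actions_completed"] >= 1              # has experienced core value
    )

now = datetime(2025, 6, 1)
veteran = {
    "last_active": now - timedelta(days=3),
    "signed_up": now - timedelta(days=30),
    "core_actions_completed": 4,
}
newcomer = {
    "last_active": now - timedelta(days=1),
    "signed_up": now - timedelta(days=5),   # too new to have felt core value
    "core_actions_completed": 1,
}
print(eligible_for_pmf_survey(veteran, now))   # True
print(eligible_for_pmf_survey(newcomer, now))  # False
```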
Warning
Do not include trial users who haven't completed a core action. If you're a project management tool, surveying someone who signed up but never created a project will consistently understate your PMF score. You're measuring fit, not acquisition quality.
How to run the survey on your website
You can run the Sean Ellis PMF survey in two places: in your product (in-app) or on your website. Each has trade-offs.
In-app reaches your most active users at a natural moment in their workflow. It's the preferred method if you have a logged-in product. The survey typically appears as a modal or slide-in after a user completes a core action or hits a session milestone.
On-site (website) works if your product doesn't have a traditional logged-in interface, or if you want to survey website visitors who've used your product but aren't currently logged in. You can target returning visitors, visitors who match behavioral criteria (scrolled to a certain section, visited multiple times), or visitors coming from email links.
For on-site implementation with Selge, the setup looks like this:
- Create a new survey with the PMF question and four response options
- Add three follow-up questions (see below)
- Set targeting to: returning visitors only, minimum 2 visits, exclude users who haven't completed a core action
- Set the trigger to appear after 15 seconds on a page the user frequently visits (dashboard, settings, or a return visit to the homepage)
- Set a response cap and a cookie window so the same user isn't surveyed twice
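The five steps above can be summarized as a settings object. This is a hypothetical sketch — the field names and values are illustrative only, not Selge's actual API or configuration format:

```python
# Hypothetical settings sketch mirroring the setup steps above.
# None of these keys are Selge's real API; they just name the knobs involved.
pmf_survey_config = {
    "questions": [
        "How would you feel if you could no longer use [Product]?",
        "What type of person do you think would most benefit from [Product]?",
        "What is the main benefit you receive from [Product]?",
        "How can we improve [Product] for you?",
    ],
    "targeting": {
        "returning_visitors_only": True,
        "min_visits": 2,
        "exclude_if_no_core_action": True,
    },
    "trigger": {"type": "time_on_page", "delay_seconds": 15},
    "response_cap": 100,       # example value: stop once the sample is large enough
    "cookie_window_days": 90,  # example value: don't re-survey the same visitor
}
```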
The follow-up questions that make results actionable
The PMF score tells you whether you have fit. The follow-up questions tell you why — and more importantly, what to do about it.
Sean Ellis recommends adding three follow-up questions after the initial question:
Question 2: "What type of person do you think would most benefit from [Product]?"
This is an open-text question. The purpose is to get users to articulate your ICP in their own words. Users who answer "Very disappointed" will describe your ideal customer. This is more valuable than any internal positioning exercise — you're letting your best users write your marketing copy.
Common output: "SaaS founders who are frustrated with Typeform," "Marketing managers at B2B companies who need quick feedback without survey fatigue."
Question 3: "What is the main benefit you receive from [Product]?"
Open text. This surfaces the value proposition your users actually experience — which is often different from the one you thought you were delivering. Patterns in this response reveal your real differentiation.
Question 4: "How can we improve [Product] for you?"
Open text. This is directed at users who answered "Somewhat disappointed" — they see value but aren't dependent. Their improvement suggestions are your highest-priority product roadmap input.
How many responses do you need?
You need a minimum of 40 responses to draw any meaningful conclusion from the PMF survey. Aim for 100 responses before treating the score as stable.
Below 40 responses, the margin of error is wide enough that a few responses in either direction can swing you across the 40% threshold. The score isn't actionable.
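To see how wide that margin actually is, here's a quick sketch using the Wilson score interval, a standard confidence interval for proportions (the sample numbers are illustrative):

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# A 40% score from exactly 40 responses: 16 "very disappointed" out of 40.
lo, hi = wilson_interval(16, 40)
print(f"{lo:.0%} to {hi:.0%}")  # roughly 26% to 55%: the true score could be either side of 40%
```

With only 40 responses, a measured 40% is consistent with a true score anywhere from the mid-20s to the mid-50s, which is exactly why the score isn't actionable at that sample size.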
If you can't reach 40 active users who've experienced value, that's a finding in itself — your activated user base is too small to measure PMF reliably. Focus on activation before running the PMF survey.
Interpreting your results
40%+ "Very disappointed" — you have PMF
Your product is solving a real problem for a meaningful segment of users. The work now is understanding which users are in the "very disappointed" cohort (read their open-text answers), and optimizing acquisition and activation to funnel more of those specific users into your product.
Don't treat 40% as a ceiling. Superhuman famously used their PMF survey iteratively — they surveyed, found their core segment, removed features that appealed to the "not disappointed" cohort (because those features diluted the product's focus), and watched their score climb to 58%.
30-39% "Very disappointed" — you're close
You have real users who depend on your product. The issue is either that you're attracting too many users who aren't your ICP (diluting the score) or that the product isn't differentiated enough for a broader segment.
At this range: read the "somewhat disappointed" follow-up answers carefully. They'll tell you exactly what's preventing users from becoming dependent. These are usually 2-3 specific gaps — missing features, friction in a key flow, or a positioning mismatch.
20-29% "Very disappointed" — early signal, not fit
You have a product that solves a problem for some users, but it hasn't found the segment or the value proposition that creates dependency. This is normal for early-stage products.
The "very disappointed" cohort — even if it's only 20-25% — is extremely valuable. Read every open-text response from users in this group. Who are they? What do they do? What did they say the main benefit is? That's your ICP. Build toward them.
Below 20% — no signal yet
Your product hasn't found its user yet, or it's not solving a problem with enough weight to create dependency. This is honest and useful data. The fix is not marketing — it's product work.
PMF survey benchmarks
Sean Ellis published benchmarks across the companies he surveyed. Here are the reference points:
| Score | Interpretation |
|---|---|
| 40%+ | Product-market fit. Scale acquisition. |
| 30-39% | Strong signal. Segment and optimize. |
| 20-29% | Early indicators. Focus on ICP cohort. |
| Below 20% | No fit yet. Product work needed. |
These benchmarks come primarily from B2B SaaS. Consumer products tend to score lower and still succeed — the 40% threshold is most reliable for SaaS, where word-of-mouth and retention matter.
Notable published scores: Dropbox scored 48% before they launched the referral program that drove their growth. Superhuman started at 33%, analyzed the "very disappointed" cohort, removed features that were confusing non-ICP users, and reached 58%.
The PMF survey loop: survey, segment, improve, repeat
The Sean Ellis PMF survey is not a one-time measurement. It's a feedback loop.
Cycle 1: Survey your current active users. Get your baseline score.
Segment: Separate the "very disappointed" cohort from the rest. Who are they? What's their job title, use case, company size? What did they say the main benefit is? This is your ICP.
Act: Make product decisions that serve the "very disappointed" cohort. This often means removing features that attract the wrong users, not adding features.
Cycle 2: After 6-8 weeks of product changes, survey again. Compare the new score to the baseline.
At each cycle, you're either confirming that your changes moved the score up, or finding that you've drifted from what your best users care about. The score is a compass, not a destination.
Running the PMF survey with Selge
The Sean Ellis PMF survey is available as a pre-built template in Selge. The template includes:
- The "how disappointed" question with all four response options
- The three follow-up questions pre-configured as open text
- Recommended targeting settings (returning visitors, 2+ sessions)
- A 15-second delay trigger
To use it: create a new survey, select the PMF Survey template, update the product name in the question text, configure your targeting, and publish.
Results appear in your dashboard in real time. The response breakdown shows your PMF score prominently — you'll see the "very disappointed" percentage update as responses come in.
Use the PMF Survey template in Selge →
FAQ
What is the Sean Ellis PMF survey?
The Sean Ellis PMF survey is a one-question survey that asks users "How would you feel if you could no longer use [Product]?" with four response options. The percentage of users who answer "Very disappointed" is your product-market fit score. A score of 40% or above is the threshold Sean Ellis identified as indicating product-market fit, based on analysis of dozens of early-stage SaaS companies.
What does the 40% rule mean for product-market fit?
Ellis found empirically that 40% is the dividing line between companies that grow sustainably and companies that stall despite investment in marketing and sales. Products below 40% tend to have high churn, low word-of-mouth, and poor unit economics regardless of how much acquisition spend you apply. Above 40%, growth mechanisms start to compound. It's a threshold, not a guarantee — but it's a reliable diagnostic.
How many responses do you need for a PMF survey?
Minimum 40 responses to draw any conclusion. Aim for 100 responses before treating the score as stable. Below 40 responses, the margin of error is wide enough to be misleading. If you can't reach 40 active users who've completed a core action, focus on activation first.
What questions are in the Sean Ellis test?
The core question is: "How would you feel if you could no longer use [Product]?" with options: Very disappointed / Somewhat disappointed / Not disappointed / I no longer use it. The recommended follow-up questions are: "What type of person would most benefit from [Product]?", "What is the main benefit you receive from [Product]?", and "How can we improve [Product] for you?"
How do you calculate your PMF score?
Take the number of respondents who answered "Very disappointed," divide by the total number of respondents who answered "Very disappointed," "Somewhat disappointed," or "Not disappointed" (exclude "I no longer use it" from the denominator), and multiply by 100. That percentage is your PMF score.
What is a good product-market fit score?
40% or above is the standard benchmark. Above 50% is strong. Superhuman reached 58% after iteratively removing features that appealed to non-ICP users. For early-stage B2B SaaS, a score of 30-39% with a clearly identifiable "very disappointed" cohort is a strong signal worth building on — the product has found a segment, even if it hasn't yet found enough of them.