Last verified May 2026 · 7 min read
Methodology
How we source signup drop-off numbers, how the per-field cost calculator turns them into a dollar figure, and the refresh cadence that keeps everything in lockstep.
Primary sources
We draw on twenty primary publishers across academic UX research, vendor-published case data, industry benchmark reports, and product-growth writing. Every numeric claim cites the original publisher. Where two publishers disagree, we name both numbers.
| Source | Cadence | What we take from it |
|---|---|---|
| Baymard Institute | Annual + quarterly large-sample studies | Per-field drop-off (8-10pp per extra field), checkout abandonment rates extended to signup forms, mobile vs desktop conversion deltas. Paywalled at roughly $200/month for full database; we cite the public summaries and the figures Baymard themselves publish in conference talks and free reports. |
| Nielsen Norman Group | Continuous; topic-led | Form usability, error messaging, mobile form input, password UX. Free, citation-friendly, peer-respected. Specific articles cited inline include Jakob Nielsen on form length, Kara Pernice on password rules, and the NN/g mobile form audits. |
| Luke Wroblewski (LukeW) | Continuous | Inline validation, single-column forms, the Web Form Design book. The reference for form best-practice prescriptions. Quoted for the foundational design patterns; we add the 2026-era drop-off numbers Wroblewski himself does not publish. |
| Mixpanel Product Benchmarks | Annual report | Cross-industry product conversion rates (web and mobile). Signup conversion ranges by industry vertical (SaaS, fintech, ecommerce, media, dev tools). Public report; segments are broad rather than narrow but the dataset (millions of monthly active users) is large enough to be authoritative. |
| Amplitude Product Report | Annual | Activation and signup conversion deltas. Amplitude's benchmarks track funnel-step completion; for signup the relevant slice is start-to-complete conversion. Used to triangulate Mixpanel and Pendo figures. |
| Pendo Product Benchmarks | Annual | B2B SaaS-focused signup and activation rates. Tilted toward enterprise and mid-market where Pendo's customer base concentrates. Helpful when triangulating B2B-specific signup conversion bands. |
| OpenView SaaS Benchmarks Report | Annual | Free-trial-to-paid and signup-to-trial conversion by ARR band. Where the audit benchmarks for SaaS signup come from when we stratify by company stage (seed, Series A, Series B, growth, public). |
| Reforge | Continuous | Product-growth and signup-funnel writing from Brian Balfour, Casey Winters, Andrew Chen. Cited for qualitative framing on B2C onboarding patterns and the upper-funnel signup decision (which we then back with cited percentages from Baymard / Mixpanel / Segment). |
| Lenny Rachitsky / Lenny's Newsletter | Continuous | B2C and B2B product benchmark posts (signup conversion 30-70% typical range; activation 5-30% typical). Lenny aggregates from his network of product leaders; we cite where numbers are attributable to a named source within his posts. |
| Andrew Chen / a16z | Continuous | User acquisition, growth-loop framing, paid-organic conversion deltas. Quoted for the qualitative argument on signup as a conversion event in a larger acquisition loop, not the per-field percentages. |
| ChartMogul SaaS Subscription Index | Annual | Trial-to-paid conversion by ARR band. ChartMogul concentrates on the post-signup conversion (trial conversion) but their dataset includes signup-to-trial counts that calibrate the upstream funnel. |
| FirstMark Capital SaaS Survey | Annual | Growth-stage SaaS funnel metrics (signup, trial, paid). Sample skews to FirstMark's portfolio plus respondents; helpful as a third triangulation point alongside OpenView and KeyBanc. |
| KISSmetrics archive (Neil Patel) | Archive (active 2010-2018, archived since) | Foundational conversion-rate-optimisation data including A/B test case studies of signup form changes. Historical reference rather than a live source; we use only the case studies still verifiable against the original test screenshots. |
| Baymard / Statista form abandonment data | Quarterly | Cross-industry form abandonment percentages. Used to calibrate the calculator's defaults for unknown-vertical inputs. |
| Userpilot | Continuous | Product-tour, onboarding-flow, and signup-to-activation benchmark blog posts. Vendor-published, so we cross-check Userpilot numbers against Mixpanel or Pendo before citing. |
| Heap | Continuous | Drop-off analysis methodology and case studies. Helpful for the funnel-diagnostic framing on the calculator and benchmarks pages. |
| Segment SaaS benchmark report | Annual (State of Personalization Report) | OAuth conversion lift (+15-25pp B2C / +8-15pp B2B from the 2023 report). The most-cited single number on social-login conversion across the literature. |
| Auth0 (Okta) case data | Continuous | Magic link conversion lift (+15-30pp), passkey rollout case studies, email verification gate impact. Vendor-flavoured; we use Auth0 numbers where they are corroborated by independent sources (Slack disclosed magic-link numbers, Notion blog posts). |
| Cloudflare Turnstile / Google reCAPTCHA published benchmarks | As-published per release | Captcha friction cost numbers (reCAPTCHA v2 checkbox 2-5pp, Turnstile sub-1pp). Vendor-published; we use the conservative end of the published range and cross-check against independent CRO blogs. |
| NIST SP 800-63B Digital Identity Guidelines | Periodic (current revision 3) | Authoritative US government guidance on password rules: length over complexity, no forced rotation, no forced special characters. The single most-ignored research in signup UX; we cite the actual paragraph numbers, not a vendor summary. |
In scope
- The signup layer specifically: the form, the auth method, the verification step, the password rules, the captcha, the country / language defaults, and the mobile vs desktop adaptations.
- Per-field drop-off math and the dollar cost of one extra field at named LTV and CAC bands.
- OAuth (Google, Apple, GitHub, Microsoft) conversion lift over email plus password baselines.
- Magic link and passkey conversion data and the speed-of-repeat-login trade-off.
- Email verification gate strategies (hard gate, soft verify, verify-later, delay-until-action).
- B2B vs B2C signup form-length norms, including the PLG B2B exception.
- Industry-stratified benchmark bands: SaaS, ecommerce, fintech, healthtech, dev tools.
- Mobile vs desktop signup conversion deltas, including the autocomplete and keyboard-type contribution.
- International signup pitfalls: country dropdown defaults, phone formats, postcode validation.
Out of scope
- Onboarding and activation after signup (covered briefly on the home page but not the site's focus).
- Per-company contract pricing for auth vendors. The /auth-stack-comparison page covers public pricing only.
- Checkout abandonment in ecommerce, which is Baymard's primary remit and is its own discipline.
- SAML / SCIM enterprise SSO provisioning. We cover it briefly on /oauth-vs-email and /b2b-signup-form-fields; the deeper guidance lives at enterprise IAM publishers.
- Anti-fraud and KYC verification beyond the conversion-cost framing on /b2b-vs-b2c-norms.
- PCI compliance and other regulatory regimes that drive friction; we name them as legitimate reasons for added friction but do not write the compliance guidance.
Calculation framework
Six calculator rules, each tied to a named source band. The calculator at /calculator surfaces these explicitly in worked examples.
Per-field cost
annual_starts * 0.08 * LTV for each field beyond the second, applied linearly per extra field. The 0.08 is the conservative end of Baymard's aggregated checkout-form drop-off range (typically reported as 8-10pp per extra field). For B2B SaaS at $1500 LTV, two extra fields cost 16% * annual_starts * $1500 in lost LTV, before any CAC waste is added on top.
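As a minimal sketch of the rule (function and constant names here are illustrative, not the calculator's actual source):

```typescript
// Sketch of the per-field cost rule; helper names are hypothetical.
// 0.08 is the conservative end of Baymard's 8-10pp per-extra-field band.
const DROP_PER_FIELD = 0.08;

// Lost annual LTV from fields beyond the second, applied linearly per field.
function perFieldCost(annualStarts: number, ltv: number, fieldCount: number): number {
  const extraFields = Math.max(0, fieldCount - 2);
  return annualStarts * DROP_PER_FIELD * extraFields * ltv;
}
```

Plugging in the worked example above with an assumed 10,000 annual starts: two extra fields at $1500 LTV gives 0.16 × 10,000 × $1500, roughly $2.4M in lost LTV before CAC waste.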
OAuth lift attribution
+15-25pp baseline for B2C, +8-15pp for B2B per Segment 2023. We apply the midpoint when toggled on. The calculator does not double-count: if OAuth is toggled and email-verify is also toggled, OAuth's implicit verification absorbs the verify gate's drop.
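The absorption rule can be sketched as follows (a hypothetical simplification; the calculator's real toggle logic may differ):

```typescript
// Hedged sketch of the no-double-count rule; not the calculator's source.
type Segment = "b2c" | "b2b";

// Midpoints of the Segment 2023 bands: +15-25pp B2C, +8-15pp B2B.
const OAUTH_LIFT_PP: Record<Segment, number> = { b2c: 20, b2b: 11.5 };

// Net adjustment in percentage points for the OAuth / email-verify pair.
function oauthAdjustmentPp(segment: Segment, oauthOn: boolean, verifyGateDropPp: number): number {
  if (!oauthOn) return -verifyGateDropPp; // the verify gate drop applies on its own
  // OAuth's implicit email verification absorbs the verify gate's drop entirely.
  return OAUTH_LIFT_PP[segment];
}
```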
Magic link / passkey adjustment
+15-30pp signup conversion lift over email plus password per Auth0 case data and Slack / Notion disclosed numbers. The calculator's default uses the midpoint of that band, rounded down to +22pp. Trade-off note: magic links slow repeat login, a cost the activation funnel pays, not the signup funnel.
Email verify gate adjustment
Hard gate before product access: 8-20pp drop (typically modelled as 12pp). Verify-later or delay-until-action gates: 2-5pp drop. Soft verify (banner inside the product): roughly zero. Auth0 and Segment case data converge here.
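The gate bands above reduce to a small lookup; a sketch (the band choices follow the text, the function name is illustrative):

```typescript
// Hedged sketch of verify-gate drops in percentage points; names are illustrative.
type VerifyGate = "hard" | "verify-later" | "soft";

function verifyGateDropPp(gate: VerifyGate): number {
  switch (gate) {
    case "hard":
      return 12; // 8-20pp band, modelled at 12pp per the text
    case "verify-later":
      return 3.5; // midpoint of the 2-5pp band (covers delay-until-action too)
    case "soft":
      return 0; // in-product banner: roughly zero
  }
}
```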
Captcha friction adjustment
reCAPTCHA v2 checkbox: 2-5pp drop on legitimate signups. reCAPTCHA v3 invisible: 0.5-1.5pp drop. Cloudflare Turnstile: under 1pp drop. hCaptcha: 1.5-3pp drop. Numbers from Cloudflare and Google published benchmarks plus independent CRO blog cross-checks.
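Taken at the conservative (low) end of each published band, as the methodology prescribes, the captcha adjustment can be sketched as (map keys are illustrative labels, not vendor API names):

```typescript
// Published drop bands in percentage points, per the text; [low, high].
const CAPTCHA_BAND_PP: Record<string, [number, number]> = {
  "recaptcha-v2-checkbox": [2, 5],
  "recaptcha-v3-invisible": [0.5, 1.5],
  "turnstile": [0, 1], // "under 1pp"
  "hcaptcha": [1.5, 3],
};

// The adjustment uses the conservative (low) end of each band.
function captchaDropPp(kind: string): number {
  const band = CAPTCHA_BAND_PP[kind];
  return band ? band[0] : 0; // unknown or no captcha: no adjustment
}
```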
Mobile vs desktop adjustment
Mobile signup conversion runs 5-15pp below desktop per Baymard and NN/g mobile form research. The calculator does not split the result by device by default; the /mobile-vs-desktop sub-page covers the device-stratified math.
Refresh cadence
In the first business week of each month, every dated claim is re-checked against its source. The label changes in one place (src/lib/schema.ts: LAST_VERIFIED_DATE and LAST_VERIFIED_LABEL) and propagates to every visible Updated stamp, every Article schema dateModified, every WebSite schema dateModified, and every per-page Last verified line across 24 routes, so the visible date and the structured-data date cannot drift.
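The single-source pattern described above might look roughly like this (a sketch; the real src/lib/schema.ts will differ, and the date value and helper below are placeholders):

```typescript
// Sketch of the single-source date constants. The identifier names come from
// the text; the values and the helper function are illustrative placeholders.
const LAST_VERIFIED_DATE = "2026-05-04"; // ISO form for schema.org dateModified
const LAST_VERIFIED_LABEL = "Last verified May 2026"; // visible stamp

// Every route derives both the visible stamp and the structured data from the
// same constants, so the two dates cannot drift apart.
function articleSchema(headline: string) {
  return {
    "@context": "https://schema.org",
    "@type": "Article",
    headline,
    dateModified: LAST_VERIFIED_DATE,
  };
}
```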
Out-of-cycle refresh triggers:
- Annual benchmark release from Mixpanel, Amplitude, Pendo, OpenView, ChartMogul, FirstMark, or Segment.
- Material auth vendor pricing or product change (Auth0, Clerk, Stytch, Supabase Auth).
- NIST SP 800-63B revision update.
- New case study publication from a named company on signup flow numbers (Slack, Notion, Dropbox, Stripe, Shopify, Figma, Airbnb).
- Reader-submitted correction with verifiable source.
Limitations
- Benchmark ranges are central tendencies, not predictions. A specific product may run higher or lower than the cited bands depending on category, market maturity, and user intent.
- Vendor-published numbers (Auth0, Cloudflare, Userpilot, Heap) are vendor-flavoured. We use them only where cross-checkable against independent sources, and we use the conservative end of the published range.
- Calculator outputs are estimates, not guarantees. The right way to use the dollar figure is to support the decision to A/B test a change, not to forecast the lift you will see.
- OAuth lift, magic link lift, and email verify gate drops do not stack linearly. The calculator caps the combined adjustment to keep results inside realistic ranges; the explicit math is shown on /calculator.
- Some primary sources are paywalled (Baymard Institute). Where we cite a paywalled source, the underlying number is one Baymard or the relevant publisher has shared publicly in conference talks, free reports, or interview transcripts.
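The non-linear stacking noted in the limitations can be sketched as a simple clamp (the cap value here is purely illustrative; the explicit math lives on /calculator):

```typescript
// Hypothetical clamp on stacked adjustments, in percentage points.
// capPp is an illustrative bound, not the calculator's actual cap.
function combinedAdjustmentPp(adjustments: number[], capPp = 30): number {
  const sum = adjustments.reduce((acc, a) => acc + a, 0);
  return Math.max(-capPp, Math.min(capPp, sum));
}
```

For example, stacking a +20pp OAuth lift and a +22pp magic-link lift clamps the naive +42pp sum down to the cap rather than letting the lifts add linearly.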
Corrections
Email [email protected] with the page, the cited number, and the verifiable source. We aim for a five-business-day turnaround and acknowledge the fix in-page with the date.