# Research 07 — Consumer-Product Strategy for the Veritas MVP

> *What a 10-question consensus quiz can learn from 16Personalities, Pol.is, Spotify Wrapped, and the graveyard of invite-only launches; and how to ship one without degrading into either BuzzFeed or Cambridge Analytica.*

## Scope

This document researches the consumer layer of the Veritas Protocol — a 10-question consensus quiz whose output is a starter CPML (see `04-cpml.md`), plus four companion features: daily-refinement prompts, similar-people matchmaking, suggested-source discovery, and opposite-view surfacing. It asks what has worked, what has failed, and where the ethical and financial bright lines sit.

The frame used throughout: **every mechanic that makes a quiz spread also makes it seductive to misuse.** The consumer MVP is the protocol's front door. If it becomes a pigeonhole generator, the protocol inherits the reputation cost. If it stays purely earnest, it does not spread. The research is about finding the narrow path between those failure modes.

---

## 1. Viral-Quiz Case Studies

### 1.1 16Personalities

**Scale.** NERIS Analytics Ltd claims 460M+ lifetime test-takers; ~$7M/month equivalent search-traffic value ([Inpages, Jan 2025](https://inpages.ai/insight/marketing-strategy/16personalities.com)). De-facto consumer face of MBTI-shaped typing.

**Mechanics.** Free core, paid add-ons (Career Suite, Premium Profiles — [Terms](https://www.16personalities.com/terms)). The four-letter identity handle (INTJ, ENFP) — short, tweetable, biography-ready — is the most-copied pattern in consumer-identity products and directly relevant to CPML. ~6,000-word narrative result pages sell *being seen*, not accuracy. The fifth letter (Assertive/Turbulent) derives from the NEO-PI Big Five ([Wikipedia: MBTI](https://en.wikipedia.org/wiki/Myers%E2%80%93Briggs_Type_Indicator)) and partially inoculates the product against the pseudo-science critique ([Psychology Today 2020](https://www.psychologytoday.com/us/blog/quantum-leaps/202004/two-reasons-personality-tests-like-myers-briggs-could-be-harmful); [Stein & Swan 2019](https://swanpsych.com/publications/SteinSwanMBTITheory_2019.pdf)). `[UNVERIFIED]` ARR estimates ($10-50M range) are inferential.

### 1.2 Political Compass — 24-year compounding

Founded July 2001 by journalist Wayne Brittenden ([Wikipedia](https://en.wikipedia.org/wiki/The_Political_Compass)). 62 propositions, two axes. Compounded into meme infrastructure: r/PoliticalCompassMemes, Chinese Political Compass (Peking U, 2017), countless clones.

**Lesson.** A 2×2 grid is portable. A shareable output *image* with dot + quadrant label generates derivative content (memes, reaction videos, "where does X celebrity land") for years. The result image does more marketing than the result text.

### 1.3 8values, 9axes, LeftValues — fork ecology

8values is MIT-licensed on GitHub ([source](https://github.com/8values/8values.github.io)). Produced a fork ecosystem — LeftValues, 9Axes, SapplyValues, InfValues (45 axes), 8Dreams, plus localizations — each typically a single-maintainer brand of its own.

**Lesson.** Open-source quiz multiplies reach at zero marketing cost but fragments data. For Veritas, open-source quiz + protocol-level CPML schema means forks ship their own 10-question variants while still emitting CPML — the schema, not the quiz, is the moat.
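What "forks emit CPML" means in practice: a fork-compatible CPML could be as small as a JSON document carrying normalized domain weights plus provenance. A minimal sketch — every field name here is an illustrative assumption, not the actual schema in `04-cpml.md`:

```python
import json

def make_cpml(version, answers):
    """Fold quiz answers (domain -> raw score) into a starter CPML.

    Weights are normalized so they sum to ~1.0; provenance records which
    quiz variant produced them, so forks stay distinguishable downstream.
    """
    total = sum(answers.values()) or 1.0
    return {
        "cpml_version": version,  # schema version, independent of quiz version
        "domains": {d: round(s / total, 3) for d, s in answers.items()},
        "provenance": {"quiz": "8values-style-fork", "questions": len(answers)},
    }

cpml = make_cpml("0.1", {"empiricism": 0.8, "tradition": 0.3, "consensus": 0.5})
print(json.dumps(cpml, indent=2))
```

Any fork that emits this shape stays composable, regardless of how many or which questions it asks — which is exactly why the schema, not the quiz, is the moat.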

### 1.4 BuzzFeed quizzes — the dopamine engine

Formula ([HuffPost on Matt Stopera](https://www.huffpost.com/entry/buzzfeed-quiz-how-do-they-work_n_4810992); [LeadQuizzes guide](https://www.leadquizzes.com/blog/how-to-make-buzzfeed-quiz/)): (1) self-presentation reward (self-disclosure activates reward circuitry like food/money); (2) social validation loop; (3) *result flattery* — distributions skewed mildly positive because unflattering results don't get shared.

**Ethical problem.** (1) and (2) tap universal human psychology and are not dark patterns in themselves. (3) is where manipulation enters — deliberate distortion of the result distribution for sharing lift. A public-interest protocol cannot skew CPML output toward flattery; the output has to be honest even when it is boring.

### 1.5 OkCupid — the deep-profile pattern

~3,000 available match questions; users average ~50; recommended 50-100 ([OkCupid FAQ](https://okcupid-app.zendesk.com/hc/en-us/articles/22982200783771)). Each answer carries three signals: user's answer, desired partner's answer, importance weight. Retention is driven by the fact that *each additional answer improves match quality* — a ratchet.

**Applicable to Veritas.** The daily-refinement prompt (one question per day) is the OkCupid pattern stripped of dating. Each daily question adds one more coordinate to the CPML; leaving the app makes the CPML go stale. This is the strongest argument for the daily-prompt feature.
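OkCupid has publicly described its match score as the geometric mean of the two users' weighted satisfaction fractions; the sketch below follows that description, with importance weights as commonly reported (treat the exact point values as assumptions):

```python
import math

# Importance weights as commonly reported for OkCupid; illustrative values.
WEIGHTS = {"irrelevant": 0, "a_little": 1, "somewhat": 10, "very": 50, "mandatory": 250}

def satisfaction(prefs, importance, their_answers):
    """Weighted fraction of this user's preferences the other user satisfies."""
    earned = possible = 0
    for q, wanted in prefs.items():
        w = WEIGHTS[importance[q]]
        possible += w
        if their_answers.get(q) == wanted:
            earned += w
    return earned / possible if possible else 0.0

def match(a_pref, a_imp, a_ans, b_pref, b_imp, b_ans):
    sa = satisfaction(a_pref, a_imp, b_ans)
    sb = satisfaction(b_pref, b_imp, a_ans)
    return math.sqrt(sa * sb)  # geometric mean: one zero sinks the match
```

The ratchet is visible in the structure: every answered question adds a term to both satisfaction sums, so match quality measurably improves with each answer — the same incentive a daily CPML prompt creates.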

### 1.6 Ancestry / 23andMe — cautionary tale

23andMe filed Chapter 11 March 2025 ([NPR](https://www.npr.org/2025/03/24/nx-s1-5338622/23andme-bankruptcy-genetic-data-privacy); [Fortune](https://fortune.com/2025/03/24/timeline-23andme-wojcicki-bankruptcy-dna-testing-company/)). $3.5B peak (2021 SPAC) → ~$50M. Causes: one-time-purchase dynamics (no repeat), 2023 breach of ~7M profiles, $30M class-action settlement.

**Two lessons.** (a) One-shot products die — Veritas's daily-refinement loop is the survival feature. (b) Sensitive-data breaches destroy trust — CPMLs are Article-9 data ([Open Rights Group](https://www.openrightsgroup.org/blog/profiling-political-opinions-and-data-protection-the-legal-background/)); a breach of CPMLs, being political/epistemic profiles, would be worse than a genetic-data breach.

### 1.7 Spotify Wrapped — seasonal reveal

~2B social-media impressions from 2023 Wrapped ([NoGood](https://nogood.io/blog/spotify-wrapped-marketing-strategy/)). Design reverse-engineered from Instagram Stories / TikTok — 9:16 vertical, one-tap share. 2024 AI-podcast shift drew backlash from users who missed "Top Genres" / "Sound Town" ([MarketingWithDave](https://marketingwithdave.com/case-study-how-spotify-wrapped-became-a-viral-phenomenon-and-when-it-backfired/)).

**Lesson.** For Veritas, an annual "Your Year in Epistemology" — questions flipped on, CPML drift, new sources added — is high-leverage at near-zero cost once the data pipeline exists. Easiest seasonal growth loop available.

### 1.8 Pew Political Typology — credentialed variant

90k users in the first 24 hours of the 2014 launch; the 2011 edition drew ~1.5M over three years ([Pew](https://www.pewresearch.org/short-reads/2014/06/27/why-the-typology-quiz-questions-are-asked-the-way-they-are/)). 2021 edition: 10,221-person panel, weighted clustering around medoids on 27 items ([Pew Decoded](https://www.pewresearch.org/decoded/2021/11/09/behind-pew-research-centers-2021-political-typology/)).

**Lesson.** Published methodology + stratified panel buys institutional credibility a viral internet quiz cannot. Veritas should publish its 10-question rationale, domain mapping, and CPML-generation logic from day one. Pew is the model for "earnest quiz with citation."

---

## 2. Ethical / Dark-Pattern Analysis

The dark-patterns literature ([Oxford Academic: Shining a Light on Dark Patterns](https://academic.oup.com/jla/article/13/1/43/6180579); [arXiv:2503.01828, 2025](https://arxiv.org/html/2503.01828v1)) categorizes manipulative design into deception, obstruction, forced action, interface interference, social-proof abuse, sneaking, and urgency manipulation. Applied to quizzes:

| Pattern | Example | Defensible for Veritas? |
|---|---|---|
| **Forced-share to see result** | "Post to 3 friends to unlock" | **No.** Violates autonomy. Protocol-breaking. |
| **Email-gate before result** | Mid-quiz email capture | **Conditional.** Ask at end, make skippable. RevenueHunt's data: mid-quiz email capture "kills momentum" ([docs](https://docs.revenuehunt.com/customer-success/reduce-dropoff/)). |
| **Flattery skew** | Bias results toward "you are a thoughtful moderate" | **No.** Distorts the CPML's honesty. |
| **Sunk-cost trap** | Hide question count, make quiz feel endless | **No.** Always show progress. |
| **Reveal-later tease** | Questions 1-8 are "personality," 9-10 collect identifiers | **No.** All questions visible up-front. |
| **Social proof abuse** | "74,321 people took this today" | **Conditional.** Factual counts = fine. Fabricated or ambiguous counts = manipulation. |
| **Scarcity/urgency** | "Only 500 CPMLs left today" | **No.** False scarcity on free service. |
| **Addiction loop (daily streak)** | Duolingo-style streak pressure | **Conditional.** Gentle prompts = fine; streak-breaking guilt = harmful. The daily-refinement feature must not generate loss-aversion pressure. |
| **Genuine self-insight** | Honest result with caveats | **Yes.** The core value proposition. |
| **Transparent benefit** | Clear explanation of what the CPML does and who holds it | **Yes.** Required. |

**The Cambridge Analytica line.** The 2018 scandal turned on Aleksandr Kogan's "thisisyourdigitallife" quiz harvesting data from ~300k direct participants plus ~87M friends-of-friends ([UW Jackson School](https://jsis.washington.edu/news/facebook-data-privacy-age-cambridge-analytica/); [CNN 2019](https://www.cnn.com/2019/04/25/tech/facebook-personality-quizzes)). Facebook banned "apps with minimal utility, such as personality quizzes" in 2019. The regulatory residue:

- GDPR Article 9 treats political opinions as a special category requiring explicit consent or "substantial public interest" justification ([Open Rights Group](https://www.openrightsgroup.org/blog/profiling-political-opinions-and-data-protection-the-legal-background/)).
- EU Regulation 2024/900 (TTPA) on political-advertising transparency entered into force April 2024, with full application from October 2025 ([Qomon summary](https://qomon.com/blog/ttpa-gdpr-2024-900-what-every-political-and-nonprofit-organization-must-know)).
- EDPB opinion (2024): "consent or pay" models generally fail GDPR consent validity.

**Implication.** Any Veritas CPML that encodes political/religious/epistemic preferences is Article-9 data. Storage must be user-side by default (browser localStorage + optional end-to-end-encrypted backup), with an explicit consent flow for any server-side processing. A "browser-history reading" feature — raised in the brief — is probably the single highest-risk design choice and is discussed under §7.

---

## 3. Matching & Community Discovery Without Echo Chambers

### 3.1 Pol.is — anti-echo-chamber algorithm

Pol.is ([Wikipedia](https://en.wikipedia.org/wiki/Pol.is); [Democracy.earth](https://words.democracy.earth/hacking-ideology-pol-is-and-vtaiwan-570d36442ee5); [arXiv:2502.05017](https://arxiv.org/html/2502.05017v1)) clusters participants by voting pattern (PCA + k-means) and *elevates comments agreed with across clusters*. No replies — only agree/disagree/pass and author-new-comment. vTaiwan: 200k+ participants, 26 pieces of legislation.

**Direct applicability.** Closest living analog to what Veritas's matchmaking + opposite-view surfacing need to be. Three rules worth copying: (a) cluster first, recommend users from *adjacent* clusters whose agreement would be informative, not just "similar" users; (b) elevate cross-cluster agreement — when a claim gets agreement across opposed clusters, it's a consensus candidate; (c) no per-comment replies, which drive flame wars. Pol.is prevents them architecturally.
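Rule (b) — elevating cross-cluster agreement — reduces to scoring each comment by the product of per-cluster agreement rates, so one dissenting cluster sinks the score. A sketch, assuming cluster labels arrive from an upstream PCA + k-means step (the smoothing constants are illustrative, not Pol.is's exact formula):

```python
from collections import defaultdict

def group_informed_consensus(votes, clusters):
    """Score one comment by multiplying per-cluster agreement rates.

    votes:    {user: +1 agree / -1 disagree / 0 pass} on this comment.
    clusters: {user: cluster_id} from an upstream clustering step.
    The product form means a comment only scores high when *every* cluster
    leans toward agreement -- the anti-echo-chamber property.
    """
    agree, seen = defaultdict(int), defaultdict(int)
    for user, v in votes.items():
        c = clusters[user]
        seen[c] += 1
        if v == 1:
            agree[c] += 1
    score = 1.0
    for c in seen:
        score *= (agree[c] + 1) / (seen[c] + 2)  # Laplace-smoothed rate
    return score
```

A comment four users from two opposed clusters all agree with outscores a comment one cluster loves and the other rejects — which is the selection pressure Veritas's consensus surfacing needs.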

### 3.2 Bumble BFF / Meetup — interest-based grouping

Bumble BFF relaunched Sep 2025 on Geneva with group-first architecture ([TechCrunch](https://techcrunch.com/2025/09/18/bumble-bffs-revamped-app-is-here-focusing-on-friend-groups-and-community-building/)). Design: behavioral > stated interests (weight groups actually joined over tags selected); interest tags as soft filters not hard gates; 1:many community rooms reduce 1:1 pressure.

**Lesson.** "Similar to you on epistemology" is a dating-app pattern (collaborative filtering on self-described traits). "Would find the same three sources valuable" is *task-based* matching — more likely to produce durable ties. Veritas should match on shared *information needs*, not shared *labels*.

### 3.3 Recommender serendipity vs accuracy

Accuracy-diversity-serendipity trade-off well-mapped ([Frontiers 2023](https://www.frontiersin.org/journals/big-data/articles/10.3389/fdata.2023.1251072/full); [arXiv:1907.01591](https://arxiv.org/abs/1907.01591)). Standard tools: Maximal Marginal Relevance (MMR) for re-ranking with a diversity penalty; multi-armed bandits for explore/exploit.

**Concrete rule.** For community/source suggestions: generate N accurate candidates, then re-rank so top K contains at most ⌊K/2⌋ from the dominant cluster. Remainder fills from adjacent clusters — "similar but not identical." MMR applied to epistemic clusters rather than content topics.
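The rule above is a hard cap rather than full MMR (which would additionally subtract a per-candidate similarity penalty); the cap alone can be sketched directly:

```python
def rerank_with_cap(candidates, k, dominant):
    """Cap the dominant cluster at floor(k/2) slots in the top-k list.

    candidates: list of (item, cluster, relevance), pre-sorted by relevance
    descending. Relevance order is preserved; once the dominant cluster has
    used its quota, remaining slots fill from adjacent clusters.
    """
    cap = k // 2
    picked, dom_count = [], 0
    for item, cluster, _relevance in candidates:
        if len(picked) == k:
            break
        if cluster == dominant:
            if dom_count == cap:
                continue  # quota spent; skip further dominant-cluster items
            dom_count += 1
        picked.append(item)
    return picked
```

Full MMR would replace the `continue` with a score adjustment (relevance minus λ times similarity to already-picked items), but the cap is the enforceable floor on diversity either way.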

### 3.4 Bail result — forced exposure backfires

[Bail et al., PNAS 2018](https://www.pnas.org/doi/10.1073/pnas.1804840115): paying Twitter users to follow an opposing-ideology bot *increased* polarization — strongly for Republicans, weakly for Democrats. Naive "show the other side" is counterproductive.

**What works.** Pacing-and-leading contact; shared interests first, then disagreement (Braver Angels pattern). For Veritas: opposite-view surfacing is *not* an opposed-take feed — it's "users who agree with you on 6/10 of your frame but differ on topic X, here is their reasoning on X." Agreement first, disagreement within a relationship.

---

## 4. Opposite-View / Steel-Man Surfacing

### 4.1 Ground News and AllSides

Ground News: ~60k articles/day from 50k sources, merged by story with left/center/right color-coding ([ground.news/about](https://ground.news/about)). Revenue comes from Pro subscriptions. "Blind Spot" feature — stories one side over-covers and the other ignores. AllSides: crowdsourced bias ratings (community + expert panel), discussion forums; less sophisticated tech, longer bias-ratings track record.

### 4.2 Tangle News

500k+ newsletter readers, 60+ countries ([Editor & Publisher](https://www.editorandpublisher.com/stories/how-tangle-is-rebuilding-trust-in-political-news-through-transparent-multi-perspective-reporting,261167); [readtangle.com/about](https://www.readtangle.com/about/)). Format: one debate per day; best *left* arguments, best *right* arguments, best *center* arguments; editor's take last. Explicit steelmanning. Subscriber-funded, non-partisan, rated favorably by AllSides / Ad Fontes / MBFC.

**Lesson.** Most direct proof-of-concept that a revenue-sustainable steelman product works at consumer scale. Three transferable format rules: (a) one topic per day — attention is scarce; (b) three-perspective summary, not a debate forum; (c) editor's view last, clearly labeled.

### 4.3 Braver Angels + Mercier-Sperber

Braver Angels ([Wikipedia](https://en.wikipedia.org/wiki/Braver_Angels); [self-critique](https://braverangels.org/a-mild-sympathetic-critique-of-better-angels-debates/)): non-profit running Red/Blue workshops since 2016. Their debate format separates "speak" and "question" phases architecturally. Mixed evidence on polarization outcomes.

Mercier & Sperber's argumentative theory ([PMID 21447233](https://pubmed.ncbi.nlm.nih.gov/21447233/); [Edge](https://www.edge.org/conversation/hugo_mercier-the-argumentative-theory)): reasoning evolved for social persuasion/evaluation, not solitary truth-seeking. Individual reasoners produce myside bias; groups surfacing each other's flaws do better. Reasoning is a social technology.

**Implication.** Opposite-view surfacing is preparation for (or simulation of) an argumentative encounter — not solitary self-improvement. UX should frame it: "Here is the strongest case against your position from within the Catholic traditional frame. A Catholic reader would emphasize Y." Not "you should believe this."

### 4.4 UX patterns with evidence

Three moves have support from the literature + case studies: (a) side-by-side summaries across frames (Ground News / AllSides / Tangle) — reduces switching cost, surfaces framing differences; (b) "best case from each frame" writing (Tangle's innovation) — explicit strongest-version labeling, not straw-men; (c) shared-interest framing first, then disagreement (Bail implication, Braver Angels practice). What fails: raw exposure (backfire), reply-threaded debate (flame wars), forced unframed exposure (polarization).

---

## 5. Revenue-Mix Analysis for Public-Interest Protocols

### 5.1 Wikimedia — donations archetype

FY 2023-24: $185.4M total revenue, $174.7M donations ([Diff](https://diff.wikimedia.org/2024/11/07/highlights-from-the-fiscal-year-2023-2024-wikimedia-foundation-and-wikimedia-endowment-audit-reports/)). 8M donors. Average gift $10.58. Investment income $5.1M. Wikimedia Enterprise (B2B API) $3.4M gross. Endowment $144M. Pattern: small-dollar recurring from a broad base, plus investment income, plus B2B API. The long-tail donor base is the moat. Caveat: the Foundation has attracted criticism for aggressive fundraising banners inconsistent with its cash position ([Wikipedia:Fundraising statistics](https://en.wikipedia.org/wiki/Wikipedia:Fundraising_statistics)) — donation sustainability depends on preserving a genuine sense of need.

### 5.2 Mozilla — single-source-of-funds trap

2023: Foundation received $16.5M contributions + $18.6M royalties from Corporation ([State of Mozilla 2024](https://www.mozilla.org/en-US/foundation/annualreport/2024/article/financing-an-open-internet-mozillas-path-forward/)). Corporation derives the overwhelming majority of its revenue from the Google search-default deal, now an antitrust liability after Judge Mehta's ruling in *US v. Google*.

**Lesson.** Never take more than ~30% of revenue from any single counterparty. Protocol-level design should make this structurally hard.

### 5.3 Brave — advertising-as-commodity

$100M revenue Q1 2025; $30M ad revenue 2024; 100M+ MAU ([Brave blog](https://brave.com/blog/100m-mau/); [Stan Ventures](https://www.stanventures.com/news/brave-hits-100-million-users-and-100-million-revenue-5746/)). Mix: search partnerships, Brave Ads (BAT), Brave Rewards (30% platform / 70% user split). Independent search index.

**Lesson.** If an ad layer is ever contemplated, it must be opt-in, non-profiling (topic-level not person-level), and must not use CPML content. Brave's architecture is a reasonable reference.

### 5.4 Signal — loan-and-donations runway

2024 revenue $25.8M, up from $11.1M in 2023 ([Wikipedia](https://en.wikipedia.org/wiki/Signal_Foundation); [Business of Apps](https://www.businessofapps.com/data/signal-statistics/)). Needs ~$50M/year by 2025. Bootstrapped with $50M zero-interest 50-year loan from Brian Acton. In-app $3/$5 donations are the primary public channel.

**Lesson.** A single benefactor can buy 3-5 years of runway. Critical to *build the donor base in parallel*. Signal's donation ask is in-product, non-annoying, frequent; this is the model.

### 5.5 Protocol Labs / Filecoin — token-treasury

ProPGF Batch 1: $3.68M across 14 teams; RetroPGF Round 3: 585k FIL ([Filecoin](https://filecoin.io/blog/posts/the-future-of-public-goods-funding-in-filecoin-scaling-the-pl-pgf-vision); [Dev Grants Aug 2024](https://fil.org/blog/developer-grants-updates-august-2024)). Token-treasury-funded grants. Works when token has market value; fragile when it declines.

**Lesson.** If tokenomics (see `03-tokenomics-design.md`) include a treasury, RetroPGF is the best-proven way to fund ecosystem work without picking winners in advance. Option, not dependency.

### 5.6 Recommended Veritas mix

Drow's brief lists: AI-lab fees + certificate subscriptions + investigation market + donations. Mapped to the patterns above:

| Stream | Archetype | Target share | Lesson |
|---|---|---|---|
| AI-lab fees (B2B API) | Wikimedia Enterprise | 30-40% | High-value, low-counterparty-count; rate-limit exposure. |
| Certificate subs (validators) | SaaS | 20-30% | Most predictable; price at cost-plus, not value-based, to protect public-interest stance. |
| Investigation market | Two-sided marketplace | 10-20% | Take rate 5-15%; audit for manipulation risk. |
| Donations | Wikimedia / Signal | 15-25% | In-product ask; small recurring; annual "Wrapped"-style reveal drives donation conversion. |
| Treasury / grants | Protocol Labs | 0-10% | Optional; only if tokenomics ship. |

No single stream should exceed ~40% of revenue for >2 consecutive years.
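The concentration rule is mechanical enough to express as a check over yearly revenue figures (a sketch; the 40% limit and 2-year window are the thresholds proposed above, and stream names are illustrative):

```python
def concentration_flags(history, limit=0.40, max_years=2):
    """Flag revenue streams that exceed `limit` share of total revenue
    for more than `max_years` consecutive years.

    history: chronological list of yearly dicts {stream: revenue}.
    """
    streak, flagged = {}, set()
    for year in history:
        total = sum(year.values())
        for stream, revenue in year.items():
            if revenue / total > limit:
                streak[stream] = streak.get(stream, 0) + 1
                if streak[stream] > max_years:
                    flagged.add(stream)
            else:
                streak[stream] = 0  # streak broken; concentration resolved
    return flagged

history = [
    {"api": 50, "subs": 30, "donations": 20},
    {"api": 55, "subs": 25, "donations": 20},
    {"api": 60, "subs": 20, "donations": 20},
]
print(concentration_flags(history))  # the "api" stream trips the rule
```

Running the check annually, and publishing the result, would make the Mozilla-style single-counterparty trap structurally visible before it becomes structural.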

---

## 6. Launch Strategy

### 6.1 When to launch the consumer MVP

The Nuanu/Medicus/Organon-style 24-month roadmap to full protocol implies the MVP should ship **before** validators are in production, not after. Reasons:

1. **MVP generates the demand signal.** If 100k people build CPMLs and then discover there are no verdicts to compose, that pressure pulls validators into the protocol faster than any B2B sales process.
2. **CPML-first is honest.** Shipping validator layer first creates a period where "Veritas produces verdicts" is the public narrative — hard to course-correct to "Veritas produces plural verdicts that users compose."
3. **Risk management favors consumer-first.** A consumer product that gets abandoned costs dignity; a validator layer that gets abandoned costs institutional trust.

### 6.2 Standalone utility

The MVP must be useful without any validators populating the protocol. The quiz + CPML + CPML-based reading suggestions are useful **as a self-reflection and reading-list tool** regardless of the validator layer. Once validators exist, the CPML becomes additionally useful for composing plural verdicts — but that is upside, not prerequisite.

### 6.3 Brand architecture — three options

| Option | Upside | Downside |
|---|---|---|
| **A. MVP flies the Veritas Protocol brand** | Single brand to build; MVP users become protocol advocates | Political baggage of "truth" branding may attract pre-existing enemies; harder to pivot MVP if full protocol lags |
| **B. MVP has its own brand ("Compass"/"Frame"/etc.)** | Playful/approachable; protocol brand stays pristine for institutional audience; MVP can iterate without protocol-brand risk | Two brands to build; harder brand-equity transfer at full-launch |
| **C. MVP is "$NAME by Veritas Protocol"** | Endorsement architecture — MVP credibility borrows from protocol; protocol reach amplified by MVP | Requires both brands to be respectable simultaneously |

**Recommendation: Option C.** Apple / Google / Adobe have shown endorsement-branding handles this well. The consumer MVP is "$NAME, built on the Veritas Protocol." Protocol stays the institutional face; MVP gets playful identity. If the MVP is loved, "built on Veritas" becomes the marketing tagline. If the MVP stumbles, the protocol is insulated.

### 6.4 Invite-only vs open

Evidence is mixed ([Waitlister case studies](https://waitlister.me/growth-hub/blog/case-studies-successful-product-launches-powered-by-waitlists); [Brinna Thomsen](https://www.brinnathomsen.com/4-pitfalls-of-the-invite-only-strategy-used-by-apps-like-clubhouse-and-superhuman)). Gmail (2004): capacity-constrained, worked. Superhuman: hand-picked fit, worked but 5+ years to scale. Clubhouse: 10M waitlist, $4B valuation, lost 80% of users within months of opening — scarcity without retention. Dropbox: referral-rewarded, 3,900% growth in 15 months.

**Pattern.** Invite-only works when capacity is genuinely constrained or curation makes the product good. Neither applies to Veritas — a CPML quiz has ~zero marginal cost per user and improves with population size.

**Recommended: open launch with referral reward.** Each new user gets a limited "CPML Compare" mode (view how your CPML differs from a friend's) when a friend signs up. Dropbox pattern, not Clubhouse. Viral coefficient target 0.5-0.8 — below 1.0 is fine when combined with organic search and PR; self-propagating virality rarely materializes and chasing it leads to dark patterns.
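For intuition on why a sub-1.0 coefficient is still worthwhile: a referral cascade with coefficient k multiplies every organically acquired cohort by roughly 1/(1-k). A quick sketch:

```python
def total_users(seed, k, generations=20):
    """Total users produced by a referral cascade.

    Each cohort invites k times its own size; for k < 1 the series
    converges toward seed / (1 - k).
    """
    users, cohort = 0.0, float(seed)
    for _ in range(generations):
        users += cohort
        cohort *= k
    return users

# At k = 0.7, every 1,000 organic signups become ~3,333 total users;
# at k = 0.5 the multiplier is 2x. Sub-viral, but it compounds every
# PR hit and search-traffic spike for free.
print(total_users(1000, 0.7))
```

This is why the 0.5-0.8 target is rational: a 2-3x multiplier on all other acquisition channels, without the dark-pattern pressure that chasing k > 1 invites.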

### 6.5 First 1k / first 100k sequence

**Phase 1 — First 1k (M0-M2).** Closed alpha with rationalist-adjacent communities (LessWrong, ACX, Marginal Revolution, philosophy-of-science). High-feedback, low-offense-taking. Goal: refine 10 questions, stress-test CPML output, calibrate cross-frame argument rendering. Metric: >40% report result as "accurate" and recommend ≥1 new source.

**Phase 2 — First 10k (M2-M5).** Soft public launch: Show HN, Product Hunt, select long-form podcasts (Lex Fridman, Dwarkesh, EconTalk). Invite mechanic active (3 invites/user). Goal: surface edge cases (radical users, gaming, linguistic minorities). Metric: 30-day retention >25%; organic CPML-comparison / quote-screenshot UGC.

**Phase 3 — First 100k (M5-M12).** Full open launch: paid press (FT, The Atlantic, Wired profile). Seasonal "Your Year in Epistemology" reveal deployed. Validator onboarding begins — 100k CPMLs generate real demand signal. Goal: diversify beyond rationalist cluster, ≥30% of users from outside US/UK. Metric: 90-day retention >15%; ≥5 validators piloting; donations covering ≥10% of operating cost.

---

## 7. Risk Analysis

### 7.1 Political backlash scenarios

| Scenario | Probability | Severity | Mitigation |
|---|---|---|---|
| **Left-coded critique: "this flattens structural analysis into personal preferences"** | High | Medium | Publish honest methodology; include explicitly collectivist domain options in CPML schema; partner with a left-leaning academic advisor. |
| **Right-coded critique: "this is a social-credit system"** | High | Medium-High | CPML is user-owned and portable; no central scoring; make source code inspectable; disable any server-side aggregate score. |
| **Mainstream-media critique: "another political typology quiz, just more sophisticated"** | Very high | Low | Accept and absorb — the Pew typology quiz survives this critique by publishing methodology. |
| **State-aligned critique (CN/RU/TR)**: "foreign information operations" | Medium | High if site blocked | Host-neutral deployment; open-source quiz; localized forks acceptable; do not claim universal truth. |
| **Rationalist in-community critique**: "just Scott Alexander's taxonomy with extra steps" | Medium | Low | Acknowledge debt; cite influences; invite critique. |

### 7.2 Echo-chamber amplification risk

**The risk.** "Similar users" matchmaking converges users into tighter clusters over time. Measured by any standard diversity metric, naive collaborative filtering *will* produce filter bubbles.

**Design responses.**
- MMR re-ranking on cluster diversity (§3.3).
- Daily-refinement prompts must include at least 1-in-5 questions from *outside* the user's dominant domain (randomized injection).
- "People who agree with you on X but disagree on Y" framing — never pure agreement.
- Annual "Your drift this year" feature — show the user how their CPML has moved, in which direction, and whether it has narrowed or broadened. Broadening is celebrated.

### 7.3 Data-privacy backlash if browser-history reading is introduced

**The risk.** Reading browser history to infer CPML updates is the single highest-profile privacy move available. Under GDPR Article 9 it is almost certainly special-category data requiring explicit consent. Under US state privacy laws (CCPA, CDPA, CPA) it triggers additional disclosure obligations. If poorly implemented, it is a Cambridge-Analytica-scale story.

**Recommended design.** Do not ship browser-history reading in the MVP. If ever shipped:
- Client-side only. Browser extension that reads history *in the browser*, derives updates to the CPML *in the browser*, and never transmits raw browsing to the server.
- Explicit per-session opt-in, with a preview of what would be read.
- Open-source and third-party auditable.
- Default off, always.

### 7.4 Mis-categorisation / stereotyping risk

The MBTI critique (§1.1) applies directly. Dichotomous traits (E/I, etc.) are pseudo-science; 16 discrete types are barely better; a CPML with weighted domain membership is closer to a Big-Five continuum than to a typology. The design response:

- Never publish discrete "types" ("You are a Principled Pragmatist!"). Publish weighted multi-domain profiles.
- Provide confidence intervals on every domain weight.
- Explicitly communicate that the output is a *starting point*, refinable via daily prompts.
- Offer a "reset my CPML" button prominently.

### 7.5 Gamification of the quiz itself

Users will try to game the quiz (answer as "the ideal Veritas user" rather than honestly). This is not fully preventable. Mitigations: (a) questions have no obvious "correct" answers; (b) daily-refinement catches inconsistency over time; (c) the CPML is for the user's own use primarily — gaming it harms mostly the gamer.

---

## 8. Naming — Anchoring Terminology

Candidate name space: Compass, Frame, Lens, Prism, Mirror, Sphere, plus protocol-native options.

**High-level trademark / SEO observations** (based on USPTO search portal existence, public app-store listings, and general domain availability; none of this is legal advice — a proper clearance search is required before commitment):

| Name | Trademark-space density | SEO competition | Semantic fit | Verdict |
|---|---|---|---|---|
| **Compass** | Extremely crowded (Compass real estate, Compass.com, dozens of political-compass variants) | Very high | Strong (orientation metaphor) | **Avoid alone.** Consider as part of phrase: "Consensus Compass," "The Frame Compass." |
| **Frame** | Moderately crowded (Frame.io is Adobe-owned, strong tech-sector presence) | High | Very strong (framing = epistemic positioning) | **Feasible in a phrase.** "Your Frame," "Frame Protocol." |
| **Lens** | Crowded (Snap Lens, Google Lens, Lens protocol — the Aave social graph) | Very high | Strong (perspective metaphor) | **Avoid; Lens Protocol collision** — existing crypto-adjacent "Lens Protocol" is the biggest issue given Veritas's validator/protocol register. |
| **Prism** | Crowded but more dispersed; "Prism political quiz" exists as a static GitHub site; PRISM is an NSA surveillance codename (strong negative association) | High | Strong (light-splitting into spectrum) | **Weak** — the PRISM / NSA association is a long-term headwind the brand never escapes. |
| **Mirror** | Crowded (Mirror.xyz publishing platform is crypto-adjacent; Mirror fitness hardware owned by Lululemon) | Very high | Medium (mirror = reflection, but suggests narcissism risk) | **Weak** — Mirror.xyz is the closest collision, already owns the crypto-writer mindshare. |
| **Sphere** | Moderately crowded (Sphere.social, Sphere VR, Sphere crypto wallet) | Medium-high | Medium (sphere = worldview? Less precise) | **Feasible but diluted.** |

**Three recommended name candidates (in order of preference):**

1. **Frame** (product) + "Frame, by the Veritas Protocol" (full). Semantically strongest: "what's your frame on this?" is already idiomatic for epistemic positioning. Trademark density is manageable in a branded phrase. `[UNVERIFIED]` full clearance required.
2. **Compass** (product) + "Consensus Compass" as the differentiator. Highly crowded but the consensus qualifier disambiguates; the metaphor is clean. `[UNVERIFIED]` clearance required; likely needs pairing with a second distinctive term.
3. **Plural** (product). A short adjective-noun, semantically precise (the plural-verdict architecture's consumer face), less trademark collision than the elemental nouns. Pairs naturally: "What's your Plural?" "Your Plural is made of..." Underused in the trademark space relative to the others. `[UNVERIFIED]` preliminary — a Plural banking product and a Plural.live streaming service exist; clearance needed.

**Names to avoid:** Prism (NSA), Lens (Lens Protocol), Mirror (Mirror.xyz), Truth (politically loaded post-Truth Social), Veritas Quiz (demotes the protocol brand).

**SEO observation.** For any of these names, launching the quiz on a distinct domain (e.g. `frame.veritas-protocol.xyz` or `plural.is`) is strictly better than using the protocol's main domain. Makes the quiz independently shareable and lets the protocol's SEO accumulate separately.

---

## 9. Go-to-Market Timing

Consolidating §6 with the 24-month full-protocol timeline:

| Month | Protocol side | Consumer MVP side |
|---|---|---|
| 0-2 | Architecture spec frozen; CPML schema published | Closed alpha (§6.5 Phase 1) |
| 2-5 | First 3 validators in signed-MoU stage | Soft launch (Phase 2); 10k users |
| 5-12 | Validators in production; first plural verdicts | Open launch (Phase 3); 100k users; seasonal reveal |
| 12-18 | Investigation market pilot; certificate subscriptions live | User base expanding; daily prompts steady-state |
| 18-24 | Full protocol — AI-lab API, steady revenue | 500k-1M users; opposite-view feature mature |

**Critical answer to the brief's question "is the MVP useful standalone":** Yes. The MVP as a self-reflection + reading-list + annual-review tool has standalone utility even with zero validators. The CPML grows additional utility as validators populate, but the base product is not waiting on them.

---

## 10. Final Recommendations

### (a) Recommended first-launch MVP scope

1. **10-question consensus quiz** producing a starter CPML.
2. **CPML output** downloadable as JSON, shareable as a human-readable summary card (seasonal-reveal format, 9:16 optimized).
3. **Daily-refinement prompt** — one question per day, 30 seconds, adds one coordinate to the CPML.
4. **Suggested-source discovery** — given CPML, surface 3-5 high-quality sources the user has not seen, at least 40% from outside their dominant cluster.
5. **Opposite-view surfacing** — given a topic the user has marked, show the strongest case from the nearest adjacent cluster, labeled as such (Tangle/Pol.is hybrid).
6. **Similar-people matchmaking** — opt-in only; matches on *shared information needs* not *shared labels*; always shows a small disagreement alongside shared agreements.
7. **Open-source quiz** (MIT or CC-BY-SA) with the CPML schema documented publicly so forks can emit compatible CPMLs.

Scope-out for v1: browser-history reading, AI-agent integration, public CPML directories, premium tier.

### (b) Five design principles for the quiz UX

1. **No flattery skew.** Results are honest, with confidence intervals; no forced "positive" rendering.
2. **User owns the CPML.** Default storage is client-side. Server-side is opt-in, E2E-encrypted, explicitly consented per Article 9 GDPR.
3. **No forced-share gates.** Results visible before any share/email prompt. Share is a choice, not a cost.
4. **Cluster diversity by default.** Recommendations (sources, people, communities) are MMR-re-ranked so at least 40% of top K are from adjacent clusters — prevents echo-chamber drift.
5. **Transparent methodology.** The 10 questions, their mapping to validator domains, and the CPML-generation logic are public and peer-reviewable from day one. Pew is the reference; not BuzzFeed.

### (c) Three names + trademark-space assessment

1. **Frame** — strongest semantic fit; manageable trademark density in a branded phrase. `[UNVERIFIED]` full USPTO clearance required.
2. **Compass** (paired, e.g. "Consensus Compass") — crowded but qualifier-disambiguable. `[UNVERIFIED]` clearance required; likely needs the qualifying word to be distinctive enough.
3. **Plural** — least collision, precise semantics. `[UNVERIFIED]` preliminary only; a Plural banking product and a Plural.live streaming service exist; professional clearance required.

Avoid: Prism (NSA association), Lens (Lens Protocol collision), Mirror (Mirror.xyz collision).

---

## Verification Notes

Revenue / user numbers cited with year are from sources linked inline. Marked `[UNVERIFIED]` where claims are inferential or where a primary source is unavailable. Trademark observations are preliminary and non-legal; any final name requires professional clearance in relevant jurisdictions. All URL citations are live inline; sources can be extracted via any markdown-link parser.
