What four critics said about Veritas — in plain English.
We asked four independent reviewers to find problems with the proposal. They did. This is what they said, translated for normal humans. Nothing was softened.
~ 8-minute read · Honest · Print-friendly
1. Why we did this
Most projects publish a polished pitch and bury the criticism. We wanted the opposite. After publishing the v0.2 working paper, we dispatched four independent reviewers — each with a different specialty, each told to find problems, not reasons the design is good.
They wrote 320 KB of detailed critique across four reports. We synthesised them into a master review and published all five documents on the same website as the proposal. Nothing was edited. Nothing was removed.
If a reviewer concluded the core idea has a hole, that conclusion stayed in.
2. The four reviewers and what they found
The auditor found 7 critical problems and 11 high-severity ones in the working paper.
Biggest finding: The economic model is built on a single pillar — payments from AI laboratories. The optimistic projection ($695M–$2.78B/year) and the realistic-scenario projection ($17M/year) differ by a factor of 40 to 400. The paper hand-waves at the difference instead of reconciling it.
Also flagged: a citation about a US court case had the wrong year (caught and fixed). A footnote about a 1996 web standard misstated what it did (caught and fixed). Several supposed primary references were missing or unverifiable.
The philosophical reviewer found 3 critical and 4 high-severity logical and philosophical problems.
Biggest finding: The pluralism — the idea that "different communities check things differently and that's OK" — slides into philosophical relativism if you push on it. The paper says it's not relativism but doesn't engage with the standard philosophical objection (Boghossian's Fear of Knowledge). Sophisticated readers will catch this and decide the project is intellectually shallow.
Also: the protocol claims "no single authority" but the foundation actually controls five editorial surfaces. The CPML technical sketch cites a fancy formal framework (Bench-Capon's Value-based Argumentation Frameworks) but isn't actually an instance of that framework. That's "provenance laundering" — using an authority's name without doing the work.
The security reviewer found 5 critical attack scenarios.
Biggest finding: A motivated state actor with about $2 million per year could run 100 validator nodes in the system, building reputation honestly, then weaponising it. The protocol's defences (reputation math + jurisdictional diversity) work in steady state but are too weak in the early period when validators are scarce.
Other concrete attacks they identified:
- Bribing 3 of 5 validators to flip a high-value cascade event ($500K–$5M cost — small relative to what's at stake on contested public claims).
- Publishing a malicious "starter consensus profile" via SEO and tricking users into installing it.
- Compromising journal press credentials to fake a retraction (~$100K).
- Russian/Chinese/Turkish "foreign agent" criminalisation of validators in their jurisdictions.
- Spamming the write layer with $2,000 of fake attestations.
Their core observation: "Permissionless-write with post-hoc reputation weighting is not defended against adversaries with patience and budget to earn reputation honestly before weaponising it."
The editor found 50 problems — version labels in wrong places, contradictions between documents, internal team identifiers leaking into public prose.
Biggest finding: The phase budget says one thing in the paper, a different thing in the supporting document, a third thing in the deck. The validator count drifts between 5, 10, and 12. The list of refused operations is worded differently in three places — and one of those three silently broadens what's refused.
Verdict: "Not ready for publication without a focused 2–4 hour copyedit round." Not a rewrite — but the editorial discipline is below what serious external partners expect.
3. The four big problems they all agreed on
Independent reviewers reaching the same conclusion is the strongest signal. Four reviewers, four briefs, but they converged on these:
Problem A — The money plan rests on one shaky leg
If AI laboratories pay for grounding access at scale, the project pays for itself. If they don't, the validator-compensation model collapses, and with it the promise that "mutually-hostile communities can both check each other's claims" (because nobody's getting paid).
This needs honest tiering: pessimistic ($5–15M/yr), base case ($50–150M/yr), optimistic ($500M+/yr). Plus a fallback model — what does Veritas look like if AI labs don't sign up at all?
Problem B — "Permissionless write" and "narrow refusals" contradict each other
Two of the five things the protocol refuses to record (credible threats; mass-casualty-weapon synthesis instructions) are topical, not operational. They look like content moderation in disguise. The paper either has to redraw the line or admit that some refusals are content-based and explain why.
Problem C — "No single authority" isn't true
The foundation controls at least five editorial surfaces. That's substantial editorial power. The honest framing is "distributed authority with five disclosed editorial surfaces with appeal paths" rather than "no single authority decides anything."
Problem D — Adversarial costs are too low
$2 million/year buys a state-actor presence; $500K–$5M can flip a cascade event; $2K can spam the write layer. These are small relative to what's at stake on important public claims. The defences need to scale with the value at risk, not stay flat.
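One way to make "defences scale with the value at risk" concrete is a value-scaled quorum, one reading of the K-scaling cascade quorum that the v0.3 plan names. The sketch below is illustrative only: the base quorum of 3, the $1M step, and the order-of-magnitude growth rule are assumptions, not the Veritas spec.

```python
import math

def cascade_quorum(value_at_stake_usd: float,
                   base_quorum: int = 3,
                   step_usd: float = 1_000_000) -> int:
    """Required validator quorum grows with the value at stake:
    a base of 3, plus one validator per order of magnitude above $1M.
    (All constants here are illustrative, not protocol parameters.)"""
    if value_at_stake_usd <= step_usd:
        return base_quorum
    return base_quorum + math.ceil(math.log10(value_at_stake_usd / step_usd))
```

Under these assumptions a $500K event still needs 3 validators, but a $1B cascade needs 6 — so a bribe sized to flip a flat 3-of-5 quorum no longer flips the events where the stakes are highest.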
4. Sorting findings — answered, partly answered, still open
After the reviews came back, we went through every finding and asked: do we already have an answer? Does the v0.3 plan have an answer? Does the data we now have (especially the new oracle-economy research) resolve it? Or is it genuinely unsolved?
The findings split into three groups. The classification matters because "we have an answer but it's not yet shipped" is very different from "we don't yet know how to answer this and might never."
Has an answer (most of them)
Most findings have a clear response — either already shipped, committed in the v0.3 plan, or resolved by data:
- The 40-to-400× revenue gap is resolved. The new oracle-economy research builds a defensible base case at $8M–$24M Year-3 revenue. The aspirational $695M–$2.78B tier was never an honest base case; the comparable network UMA reached $5M after six years. Veritas now states its base case in the same band as comparable networks.
- The "permissionless write vs refusal list" contradiction has a clean fix: tiered enforcement. Only one operation is refused at the protocol level (verifying child sexual abuse material — genuinely incoherent because possession is the violation). Items 3 and 4 of the original list move down to aggregator-level filtering — which is honest about what they actually are.
- All 14 of the specific attacks the security reviewer named now have a corresponding defence: K-scaling cascade quorum, source-authenticated retractions, signed CPML registry, short-lived signing keys, state-actor disclosure, and a jurisdictional-diversity requirement, among others. Each closes a specific attack from the report.
- The cascade-bribery cost can be raised above 50% of contested value by setting the resolution-stake cap at 5% of token market cap (data-backed recommendation from the oracle-economy research). Closes the $500K–$5M whale-attack scenario.
- The CPML formal-framework citation problem has three named paths: build a real Value-based Argumentation Framework implementation, switch to a different formal basis, or drop the citation honestly. v0.3 picks one.
- The cold-start period gets a "cold-start validator pool" of ≥20 credentialed institutions for the first 12 months.
- The four citation errors the editor and the auditor caught (court date, web standard misstatement, journal reference, identifier dates) were already corrected in the v0.2 copyedit round.
- Six "internal" failure conditions (foundation editorial drift, CPML default convergence, investigation-market capture, etc.) are added to the four "external" ones the original paper had.
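The stake-cap point above is simple enough to sketch as arithmetic. This is a toy model: the $200M market cap is hypothetical, and the simplification that flipping a cascade verdict means buying out a 3-of-5 quorum of equal validator stakes is an assumption, not the Veritas corruption model.

```python
def resolution_stake_cap(market_cap_usd: float, cap_fraction: float = 0.05) -> float:
    """Per-event resolution stake is capped at a fraction of token market cap."""
    return market_cap_usd * cap_fraction

def quorum_flip_cost(event_stake_usd: float, quorum: int = 3, validators: int = 5) -> float:
    """Toy corruption model: equal stakes, and flipping a verdict
    requires buying out a quorum's share of the staked value."""
    return event_stake_usd / validators * quorum

stake = resolution_stake_cap(200_000_000)   # $10M cap per event
cost = quorum_flip_cost(stake)              # $6M to flip 3 of 5
```

Even in this crude model the flip cost lands above the $500K–$5M whale-attack range, which is the direction the recommendation pushes; the real mechanism would tie the cost to the contested value rather than a flat quorum share.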
Partly answered (direction set; full close needs more work)
- Foundation editorial power: v0.3 reduces 5 surfaces to 3 (smaller starter-CPML set; one minimal reference aggregator; chapter affiliation kept for legal accountability). Full reduction to 1 or 0 requires the protocol to be operating, not just on paper.
- Pluralism vs relativism: v0.3 writes the philosophical section (engages Boghossian directly; names the position as Lynchian; defines the architectural universals vs frame-relative objects). Full close requires multi-round philosophical review.
- AI-laboratory revenue assumption: tiered honestly in v0.3. But the actual validation requires an actual pilot.
Genuinely open (cannot promise to close in v0.3)
These are the things we will not pretend to have solved:
- Will AI labs actually integrate? No precedent network has captured frontier-lab traffic at scale. We can build the Phase II pilot. We cannot guarantee any lab signs up.
- Will the investigation market reach a stable equilibrium? Unknown until real traffic. Specifically: does the public-interest fund actually counter pay-to-muddy asymmetry?
- Will validator labour saturate before revenue grows? Unknown. The unit-economy research says small institutional validators are at break-even on the base case.
- Foreign-agent legal targeting: a state can criminalise our validators in their jurisdiction (Russia's "foreign agent" law, China's Article 105, Turkey's defamation regime). Jurisdictional diversity helps; nothing prevents legal action against a specific validator.
- Long-term token regulatory posture: the regulator decides. Mitigated by jurisdiction choice and the documented contingency to drop the burn mechanism, but the underlying uncertainty remains.
- Full philosophical close on pluralism: Boghossian's incoherence objection has been live for 20 years; serious philosophers still disagree about whether any response works. v0.3 writes the first version of our response. Full discharge is multi-year work.
This split lets you read the critique without a false-precision impression. Most findings are addressable. A few are not — and we say which.
5. What's already fixed
Some of the reviewer findings were small and specific. We did a copyedit round immediately after the review came back. Already in place:
- Stossel court case: corrected to 2021 dismissal + 2022 appeal affirmance (was wrong).
- W3C Verifiable Credentials date: corrected to 15 May 2025 (was wrong).
- Wojcik / PNAS / 2024 reference: replaced with the actual indexed paper (the original was unverifiable).
- "PICS labelled users": fixed to "PICS labelled content" (was a factual error in a key clause).
- v0.1 / v0.2 version labels: cleaned up across all documents.
- Internal team identifiers (agent codenames, internal email): removed from public prose.
- Pluralism caveat added: acknowledges Boghossian's incoherence objection inline; full response is v0.3 work.
- Revenue gap disclosed: the 40-400× difference between paper and supporting document is now flagged explicitly with a pointer to this critical review.
6. What's not fixed yet — and why
The substantive criticism — the four big problems above — needs structural design work, not just copyediting. That's the v0.3 plan. The plain-English version of the v0.3 plan covers it; the technical version has the full mapping of every reviewer finding to a workstream.
The summary of what v0.3 commits to:
- Tier the revenue model with three scenarios and a fallback if AI labs don't integrate.
- Resolve the permissionless / refusal contradiction by specifying which refusals happen at which layer (protocol vs aggregator vs user-profile).
- Reduce the foundation's editorial surfaces from 5 to 3 (long-term goal: even fewer).
- Strengthen adversarial defences — 14 named technical fixes including value-at-stake-scaled quorum, source-authenticated retractions, signed CPML registry.
- Write the pluralism-coherence section — explicitly engage with Boghossian and clarify the architectural universals (bare empirical claims) vs frame-relative objects (verdicts under specific consensus standards).
- Fix the CPML formal-framework citation — either build a real VAF implementation or honestly drop the citation.
- Single source of truth for numbers — one spreadsheet feeds all documents; no more drift.
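The single-source-of-truth commitment doesn't need heavy tooling — one machine-readable constants file that every document build imports is enough. A minimal sketch with hypothetical keys; the values shown are figures quoted elsewhere in this review, not the canonical spreadsheet:

```python
# figures.py -- hypothetical single source of truth for every number
# quoted across the paper, the supporting document, and the deck.
FIGURES = {
    "cascade_quorum": "3-of-5",
    "state_actor_budget_usd_per_year": 2_000_000,
    "write_spam_cost_usd": 2_000,
}

def figure(key: str):
    """Documents call this instead of hard-coding a number;
    an unknown key fails the build instead of silently drifting."""
    if key not in FIGURES:
        raise KeyError(f"{key!r} is not a canonical figure")
    return FIGURES[key]
```

With every document pulling from one dictionary, a validator count can no longer read 5 in the paper, 10 in the supporting document, and 12 in the deck.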
Estimated calendar: 12–13 weeks. Cost: not zero. Result: a paper that survives serious peer review.
7. What this exercise tells us about the project
Three observations a thoughtful reader might take from this critical review:
The substantive idea is respectable. No reviewer concluded the core proposal is unsalvageable. The architecture story is coherent. The use of mature open standards (Verifiable Credentials, IETF SCITT, libp2p, Sigstore, Ethereum L2s) is grounded. The novel parts — domain-indexed verdicts, cascading retraction, investigation market — are real contributions.
The editorial discipline is not yet at the level required to win serious partners. Foundations that fund open-infrastructure projects (Mozilla, Knight) will catch the 40-400× revenue gap on first reading. Senior philosophers will catch the unaddressed Boghossian objection. The next round of writing has to do better.
The team commissioned the criticism and published it. That's a signal. Most projects of this kind publish their pitch and let critics shout from outside. Veritas published the critics' reports on the same website as the pitch, with the same prominence, and is now publishing a v0.3 plan that addresses every CRITICAL and HIGH finding. Reasonable readers can decide what that's worth.