What we're going to fix — the v0.3 plan in plain English.
Four reviewers found problems with the v0.2 working paper. Some are easy. Some are hard. This is the plan to address all of them, what's getting deferred, and why.
~ 7-minute read
1. What we already fixed
Right after the critical review came back, we did a quick copyedit round to fix the easy stuff. Already done in version 0.2:
- Three citation errors that the super-reviewer caught (a court case had the wrong year, a 1996 web standard was misstated, a journal paper reference couldn't be verified).
- Old version labels (v0.1) leaking onto v0.2 pages.
- Internal team identifiers that had crept into the public-facing documents.
- The 40-to-400-times revenue gap is now flagged inline in the paper, with a pointer to the critical review.
- The pluralism / relativism question has a caveat acknowledging the philosophical objection (full response is v0.3 work).
That handled the small stuff. The substantial criticism needs the next version.
2. The seven things v0.3 will address
Seven workstreams, mapped to every CRITICAL and HIGH finding from the four reviewers. Some can run in parallel; some block others. Estimated calendar: 12–13 weeks.
W1: Stop pretending one revenue source is six revenue sources.
The biggest finding from the reviewers: the project depends almost entirely on AI laboratories paying for grounding access. The paper currently lists this as one of six equal income streams. It isn't.
v0.3 will say so plainly. Three revenue scenarios — pessimistic ($5–15M/year), base case ($50–150M), optimistic ($500M+). Plus a fallback model: what if AI labs don't sign up at all? Smaller operation, slower validator network, no investigation market at scale, foundation grant survival mode.
Single reconciled cost table: phase costs, validator counts, fee tiers — same numbers across paper, brief, deck, ideas. No drift.
W2: Resolve the contradiction we pretended wasn't there.
The protocol says "anyone can write" and also "we refuse five operations." They can't both be true in the strong sense. v0.3 will specify, layer by layer, where each refusal class lives.
At the protocol level: only one true refusal — the operation of "verifying CSAM" is incoherent (you can't verify what you can't possess legally). One item maximum on this list.
At the aggregator level: each aggregator publishes its own filter policy. Foundation reference aggregators filter certain categories; third-party aggregators may or may not.
At the user level: the consensus profile decides what gets surfaced for that user.
Items 3 and 4 of the current refusal list (credible threats, mass-casualty-weapon synthesis) move down to aggregator-level filtering — which is honest about what they actually are.
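The three layers can be sketched as a simple filtering pipeline. Everything here is illustrative: the class names, the example categories, and the idea of representing a filter policy as a category set are assumptions, not part of the spec.

```python
from dataclasses import dataclass, field

# The single protocol-level refusal (see above); everything else is
# filtering, not refusal. Category labels are invented for the sketch.
PROTOCOL_REFUSALS = {"csam"}

@dataclass
class Claim:
    claim_id: str
    category: str

@dataclass
class AggregatorPolicy:
    filtered_categories: set = field(default_factory=set)

@dataclass
class ConsensusProfile:
    surfaced_categories: set = field(default_factory=set)

def visible(claim, aggregator, profile):
    """A claim is refused at the protocol layer, filtered at the
    aggregator layer, or surfaced (or not) by the user's profile."""
    if claim.category in PROTOCOL_REFUSALS:
        return False  # never enters the protocol at all
    if claim.category in aggregator.filtered_categories:
        return False  # aggregator-level filter policy
    return claim.category in profile.surfaced_categories
```

The point of the layering: the protocol stays maximally open, while each aggregator and each user makes their own filtering choices explicit and inspectable.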
W3: "No single authority" was never quite true. Say so.
The foundation operates five editorial surfaces: which reference aggregators get the foundation badge, which starter consensus profiles ship, what's on the refusal list, which countries get chapters, who's on the dispute panel.
v0.3 acknowledges this openly. The honest framing is "distributed authority with five disclosed editorial surfaces with appeal paths."
Concrete reductions: only one minimal reference aggregator (others are third-party); only one minimal new-user consensus profile (others are community-published); chapters and panels stay (justified by legal accountability, not editorial preference).
Goal: from 5 surfaces to 3. Long-term goal: even fewer.
W4: The reviewer red-team listed 14 specific holes. Close them.
Sentinel found that adversarial-cost floors are too low. Costs don't scale with what's at stake. v0.3 implements 14 specific defences:
- The number of validators required to confirm a cascade scales with the value at stake. Below $10K, three validators. Above $1M, fifteen, plus a 72-hour confirmation window.
- Retractions need a signed credential from the original source — not just any validator claiming a retraction happened.
- State-aligned validators must disclose their affiliation in their credential.
- Foundation-published consensus profiles are cryptographically signed; readers refuse to load unsigned profiles by default. Stops the "trojan profile via SEO" attack.
- Validator signing keys rotate every 90 days.
- Posting a fake attestation costs five cents minimum, with rate limits per credential.
- For the first 12 months a "cold-start validator pool" of ≥20 institutions anchors reputation math. Closes the early-stage attack window.
- To recognise a consensus domain, three validators in three jurisdictions are required. No single state can dominate.
- Plus six more specific defences against oracle-manipulation, withdrawal attacks, investigation-market gaming, etc.
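As a rough sketch, the value-scaled confirmation rule might look like this. The sub-$10K and $1M+ tiers come from the list above; the middle tier is an assumed interpolation, and the function name is hypothetical.

```python
def validators_required(value_at_stake_usd):
    """Return (validator count, confirmation window in hours) for a
    cascade, scaled by value at stake.

    The 3-validator and 15-validator tiers are from the v0.3 plan;
    the middle tier is an ASSUMED interpolation for illustration."""
    if value_at_stake_usd < 10_000:
        return 3, 0
    if value_at_stake_usd <= 1_000_000:
        return 7, 24   # assumed middle tier, not from the paper
    return 15, 72      # high-value: 15 validators + 72h window
```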
W5: The hardest piece to write. We have to write it.
The paper invokes "epistemic pluralism" — the idea that different communities legitimately use different evidence standards. Sophisticated readers immediately ask: doesn't this slide into relativism (everything is just opinion)?
The reviewer pointed out: yes, if you don't carefully address it, that's exactly what readers will conclude. The paper doesn't address Boghossian's standard philosophical objection.
v0.3 adds a 1,500-word section that:
- States explicitly that we hold the Lynchian position (frame-scoped evidence standards; the world is not divided into incommensurable truths).
- Explicitly disavows the Rortyan and Foucauldian framings.
- Engages directly with Boghossian's incoherence objection.
- Names the architectural universals (bare empirical claims, validator attribution, retraction events) that compose the same way regardless of what consensus profile you use.
- Provides specific test cases — Holocaust denial, flat-earth, religious-scientific conflicts — and shows how the system handles each.
Plus six new internal failure conditions. The current paper lists only external ones (e.g., "AI labs don't integrate"). Internal ones include: "the foundation editorially drifts"; "user consensus profiles converge into a single default profile"; "investigation market gets captured"; "Boghossian engagement fails public test."
W6: Stop citing a formal framework we haven't actually implemented.
The CPML format (the file that holds your consensus profile) currently cites a formal framework called Value-based Argumentation Frameworks (Bench-Capon 2003, Amgoud-Vesic 2011). The reviewer pointed out that the JSON sketch isn't actually an instance of that framework. We're using the citation's authority without doing the formal work.
v0.3 picks one of three paths:
- Path A: Build a real Value-based Argumentation Framework implementation. Substantial formal work; a real reference resolver in code.
- Path B: Switch to a different formal basis (truth-maintenance systems, trust-based recommendation, probabilistic logic — there are options).
- Path C: Drop the formal-framework citation entirely. Present CPML as an engineering artefact, no theoretical claim.
Plus: define behaviour for empty profiles, contradictory profiles, profiles referencing non-existent consensus domains.
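For the degenerate-profile behaviour, a minimal loader in the Path C spirit (CPML as a plain engineering artefact) might reject the three cases just listed. The field names ("domains", "rules") and the known-domain set are invented for illustration; CPML's actual schema may differ.

```python
import json

# Hypothetical registry of recognised consensus domains.
KNOWN_DOMAINS = {"peer_review", "court_record", "eyewitness"}

def load_profile(raw):
    """Parse a CPML-like JSON profile, rejecting the degenerate cases:
    empty, referencing non-existent domains, or self-contradictory."""
    profile = json.loads(raw)
    domains = profile.get("domains", [])
    if not domains:
        raise ValueError("empty profile: no consensus domains selected")
    unknown = [d for d in domains if d not in KNOWN_DOMAINS]
    if unknown:
        raise ValueError(f"unknown consensus domains: {unknown}")
    rules = profile.get("rules", {})
    # Contradictory profile: the same domain both trusted and blocked.
    conflict = set(rules.get("trust", [])) & set(rules.get("block", []))
    if conflict:
        raise ValueError(f"contradictory rules for: {sorted(conflict)}")
    return profile
```

Whichever path v0.3 picks, the loader's contract for these edge cases has to be written down; failing loudly is one defensible default.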
W7: Single source of truth for every number. Glossary. Reference cleanup.
One spreadsheet containing every numerical claim across all v0.3 documents. Phase costs, validator counts, chapter counts, fee tiers, latency targets, revenue tiers. Every document draws from the same row of the spreadsheet. No more drift between paper, brief, deck, supporting documents.
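One way to enforce this is a registry keyed by claim, with every document referencing keys instead of hard-coding numbers. This is a toy sketch: the column names, the example rows, and the {{NUM:key}} marker convention are assumptions, not an existing tool.

```python
import csv
import io
import re

# Toy stand-in for the shared spreadsheet; values are examples.
REGISTRY_CSV = """key,value,unit
phase2_validators,20,institutions
cascade_window_high,72,hours
fake_attestation_fee,0.05,USD
"""

registry = {row["key"]: row for row in csv.DictReader(io.StringIO(REGISTRY_CSV))}

def render(template):
    """Replace {{NUM:key}} markers with the canonical value, so the
    paper, brief, and deck all draw from the same row."""
    return re.sub(r"\{\{NUM:(\w+)\}\}",
                  lambda m: registry[m.group(1)]["value"], template)
```

A missing key then fails the build instead of silently drifting, which is the whole point of the exercise.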
A glossary at /glossary/ for the load-bearing terms (consensus domain, validator, CPML, etc.) — pick one term per concept and use it consistently.
Every reference in the paper's bibliography is either cited in the body or removed. Every citation marker in the body resolves to a bibliography entry. (The reviewer found seven references listed but never cited.)
The five claims currently missing the "[unverified]" hedge get either restored hedges or properly sourced numbers.
Replace the placeholder nousaeternos@gmail.com contact destination with a working-group address (or accept form-only contact).
3. What we can't fix immediately
Some things require operational data we won't have until the protocol actually runs:
- Real AI-lab pilot benchmark numbers. Until one lab actually integrates and we measure hallucination reduction, the AI-grounding revenue case is theoretical.
- Real investigation-market dynamics. Pay-to-muddy detection, equilibrium pricing, asymmetry handling — these need real traffic to validate.
- Validator reputation math at scale. The algorithm (EigenTrust variant) is named in v0.3 but not battle-tested.
- Full reduction of foundation editorial surfaces to one or zero. Possible only after the protocol is operational, not while it's a paper.
- Country chapters beyond the initial three. EU, US, UK in Phase II. Others wait.
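For context on the reputation math, a toy EigenTrust-style power iteration looks like this. The paper names only the algorithm family; the damping factor and the mixing with a pre-trusted distribution are standard EigenTrust ingredients, not the project's actual parameters.

```python
def eigentrust(local_trust, pretrusted, alpha=0.15, iters=50):
    """Toy EigenTrust-style iteration.

    local_trust[i][j]: row-normalised trust validator i places in j.
    pretrusted: prior weight per validator (e.g. the cold-start
    anchor pool of >=20 institutions). Returns global scores."""
    n = len(local_trust)
    t = pretrusted[:]  # start from the priors
    for _ in range(iters):
        nxt = [0.0] * n
        for i in range(n):
            for j in range(n):
                nxt[j] += t[i] * local_trust[i][j]
        # Mix with the pre-trusted distribution to damp collusion rings.
        t = [(1 - alpha) * nxt[j] + alpha * pretrusted[j] for j in range(n)]
    return t
```

The open question isn't the iteration itself but its behaviour under real adversarial traffic, which is exactly the operational data v0.3 won't have.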
4. The order things happen in
Some workstreams can start immediately and run in parallel; others depend on each other.
- Phase A — weeks 1–2: Editorial pass (W7) and pluralism research (W5) start. Independent.
- Phase B — weeks 3–6: Refusal-list resolution (W2), governance rewrite (W3), CPML formal-spec path selection (W6).
- Phase C — weeks 5–10: Money plan rewrite (W1), adversarial hardening (W4), CPML implementation, pluralism section writing. Long-running parallel work.
- Phase D — weeks 10–12: Merge everything. Final editorial pass on the integrated document. Commission a follow-up critical review against the v0.3 draft.
- Phase E — weeks 12–13: Publish v0.3. Archive v0.2 alongside v0.1. Publish the follow-up critical review next to v0.3.
Total: 12–13 weeks if we have the right people on each workstream. Could compress to ~8 weeks with parallel editors and quicker philosophical consultation.
5. What v0.3 doesn't promise
v0.3 is a paper, not a running protocol. It will not yet have:
- Live validators verifying real-world claims.
- Live AI-lab integration with measured benchmark data.
- An operating investigation market.
- A working consumer-facing consensus quiz.
- Any blockchain transactions in production.
v0.3 is the last paper-only release. The version after that — v0.4 — will be a paper plus an actual implementation. That's where the proof lives.
6. What this exercise costs
Not zero. Six named workstream owners (a tokenomics analyst, a protocol architect, a red-team engineer, a philosophy consultant, a formal-methods consultant, an editorial lead) working over 12–13 calendar weeks. Plus the AI-lab pilot conversation (which has to start now if W1 is going to be honest).
Approximate cost of the v0.3 paper round itself: $80–150K of consultant + editorial time, depending on how the workstream owners get compensated and how much philosophical consultation we book.
We're not promising any of this. We're saying: this is what's queued up to address the criticism, and these are the rough costs of taking the project seriously.