
What we're going to fix — the v0.3 plan in plain English.

Four reviewers found problems with the v0.2 working paper. Some are easy. Some are hard. This is the plan to address all of them, what's getting deferred, and why.

~ 7-minute read

1. What we already fixed

Right after the critical review came back, we ran a quick copyedit round to fix the easy stuff; those fixes already shipped in version 0.2.

That handled the small stuff. The substantial criticism needs the next version.

2. The seven things v0.3 will address

Seven workstreams, mapped to every CRITICAL and HIGH finding from the four reviewers. Some can run in parallel; some block others. Estimated calendar: 12–13 weeks.

Workstream 1 — Money plan, honest version

Stop pretending one revenue source is six revenue sources.

The biggest finding from the reviewers: the project depends almost entirely on AI laboratories paying for grounding access. The paper currently lists this as one of six equal income streams. It isn't.

v0.3 will say so plainly, with three revenue scenarios: pessimistic ($5–15M/year), base case ($50–150M/year), optimistic ($500M+/year). Plus a fallback model for the case where AI labs don't sign up at all: smaller operation, slower validator network, no investigation market at scale, foundation-grant survival mode.
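As a toy illustration of how the three scenarios interact with a fallback threshold, here is a sketch. Only the scenario ranges come from the text; the annual-cost input, function names, and thresholds are placeholders invented for illustration.

```python
# Scenario ranges from the plan (annual grounding-access revenue, $M/year).
# The optimistic case is open-ended ("$500M+"), so its top is unbounded.
SCENARIOS_USD_M = {
    "pessimistic": (5, 15),
    "base": (50, 150),
    "optimistic": (500, float("inf")),
}

def covers(annual_cost_usd_m, scenario):
    """Hypothetical check: does a scenario's revenue range cover a given
    annual cost, miss it entirely (fallback-model territory), or cover it
    only at the top of the range?"""
    low, high = SCENARIOS_USD_M[scenario]
    if high < annual_cost_usd_m:
        return "shortfall"   # fallback model: grants, smaller operation
    if low >= annual_cost_usd_m:
        return "covered"
    return "depends"         # covered only toward the top of the range
```

The point of the sketch is the fallback logic: a single cost number can be covered, contingent, or a shortfall depending on which scenario materialises.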

Single reconciled cost table: phase costs, validator counts, fee tiers — same numbers across paper, brief, deck, ideas. No drift.

~ 2–3 weeks · closes 6 reviewer findings
Workstream 2 — Permissionless write vs refusal list

Resolve the contradiction we pretended wasn't there.

The protocol says "anyone can write" and also "we refuse five operations." Both can't be true in the strong sense. v0.3 will specify, layer by layer, where each refusal class lives.

At the protocol level: only one true refusal — the operation of "verifying CSAM" is incoherent (you can't verify what you can't possess legally). One item maximum on this list.

At the aggregator level: each aggregator publishes its own filter policy. Foundation reference aggregators filter certain categories; third-party aggregators may or may not.

At the user level: the consensus profile decides what gets surfaced for that user.

Items 3 and 4 of the current refusal list (credible threats, mass-casualty-weapon synthesis) move down to aggregator-level filtering — which is honest about what they actually are.
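The three layers above can be read as one filtering pipeline: protocol refusal first, then the aggregator's published policy, then the user's consensus profile. Everything in this sketch is illustrative; the class names, category labels, and `pipeline` function are assumptions, not the protocol's actual API.

```python
# Protocol layer: exactly one hard refusal, enforced on every write.
PROTOCOL_REFUSALS = {"verify_csam"}  # the one incoherent operation

class Aggregator:
    """Aggregator layer: each aggregator publishes its own filter policy."""
    def __init__(self, name, filtered_categories):
        self.name = name
        self.filtered_categories = set(filtered_categories)

    def surfaces(self, claim):
        return claim["category"] not in self.filtered_categories

class ConsensusProfile:
    """User layer: the profile decides what gets surfaced for that user."""
    def __init__(self, hidden_categories):
        self.hidden_categories = set(hidden_categories)

    def shows(self, claim):
        return claim["category"] not in self.hidden_categories

def pipeline(claim, aggregator, profile):
    if claim["operation"] in PROTOCOL_REFUSALS:
        return "refused-at-protocol"
    if not aggregator.surfaces(claim):
        return "filtered-by-aggregator"
    if not profile.shows(claim):
        return "hidden-by-profile"
    return "surfaced"
```

Under this model a credible-threat claim can be written permissionlessly, yet a foundation reference aggregator that filters that category simply never surfaces it: the filtering is policy, not protocol law.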

~ 2 weeks · closes the contradiction
Workstream 3 — Foundation power, named honestly

"No single authority" was never quite true. Say so.

The foundation operates five editorial surfaces: which reference aggregators get the foundation badge, which starter consensus profiles ship, what's on the refusal list, which countries get chapters, who's on the dispute panel.

v0.3 acknowledges this openly. The honest framing is "distributed authority with five disclosed editorial surfaces, each with an appeal path."

Concrete reductions: only one minimal reference aggregator (others are third-party); only one minimal new-user consensus profile (others are community-published); chapters and panels stay (justified by legal accountability, not editorial preference).

Goal: from 5 surfaces to 3. Long-term goal: even fewer.

~ 1 week · partial fix; full reduction is v0.4+
Workstream 4 — Hardening against attackers

The reviewer red-team listed 14 specific holes. Close them.

Sentinel found that adversarial-cost floors are too low: costs don't scale with what's at stake. v0.3 implements 14 specific defences, one for each hole the red-team listed.
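As a hedged illustration of what "costs scale with what's at stake" could mean, here is one possible stake-scaled cost floor. The formula, parameter names, and default values are placeholders, not figures from the paper or the red-team report.

```python
def challenge_cost_floor(base_fee, value_at_stake, exponent=0.5, floor_ratio=0.01):
    """Hypothetical stake-scaled cost floor: the price of attacking a claim
    grows with the value riding on it, and never drops below a fixed
    fraction of that value."""
    scaled = base_fee * (1 + value_at_stake) ** exponent
    return max(scaled, floor_ratio * value_at_stake)
```

The design intent is the property the red-team found missing: an attack on a high-stakes claim should cost materially more than an attack on a trivial one, with a hard floor tied to the stake itself.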

~ 4–8 weeks · closes 5 critical attack classes + the cold-start gap
Workstream 5 — The pluralism section we've been avoiding

The hardest piece to write. We have to write it.

The paper invokes "epistemic pluralism" — the idea that different communities legitimately use different evidence standards. Sophisticated readers immediately ask: doesn't this slide into relativism (everything is just opinion)?

The reviewer pointed out: yes, if you don't carefully address it, that's exactly what readers will conclude. The paper never engages Boghossian's standard philosophical objection.

v0.3 adds a 1,500-word section that addresses the objection directly.

Plus six new internal failure conditions. The current paper lists only external ones (e.g., "AI labs don't integrate"). Internal ones include: "the foundation editorially drifts"; "user consensus profiles converge into a single default profile"; "investigation market gets captured"; "Boghossian engagement fails public test."

~ 3–4 weeks · requires philosophy consultation · the most important single piece
Workstream 6 — The CPML technical specification

Stop citing a formal framework we haven't actually implemented.

The CPML format (the file that holds your consensus profile) currently cites a formal framework called Value-based Argumentation Frameworks (Bench-Capon 2003, Amgoud-Vesic 2011). The reviewer pointed out that the JSON sketch isn't actually an instance of that framework. We're using the citation's authority without doing the formal work.

v0.3 picks one of three paths and commits to it.

Plus: define behaviour for empty profiles, contradictory profiles, and profiles that reference non-existent consensus domains.
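A minimal sketch of what those edge-case rules might look like, assuming a JSON-style profile with a hypothetical `domain_preferences` field. The real CPML schema is a v0.3 deliverable and may differ entirely; the domain registry and field names here are invented.

```python
import json

KNOWN_DOMAINS = {"climate", "medicine", "elections"}  # hypothetical registry

def load_profile(raw):
    """Parse a CPML-style profile and apply the edge-case rules:
    empty -> minimal defaults; out-of-range weight -> recorded error;
    unknown consensus domain -> recorded error, entry skipped."""
    profile = json.loads(raw) if raw.strip() else {}
    prefs = profile.get("domain_preferences", {})

    # Empty profile: fall back to the minimal new-user defaults.
    if not prefs:
        return {"mode": "default", "domains": {}, "errors": []}

    resolved, errors = {}, []
    for domain, weight in prefs.items():
        if domain not in KNOWN_DOMAINS:
            errors.append(f"unknown domain: {domain}")
            continue
        if not (0.0 <= weight <= 1.0):
            errors.append(f"weight out of range for {domain}: {weight}")
            continue
        resolved[domain] = weight

    return {"mode": "custom", "domains": resolved, "errors": errors}
```

The design choice worth noting: invalid entries degrade to recorded errors rather than rejecting the whole profile, so a user's profile never silently disappears.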

~ 3–4 weeks · likely Path A or Path C
Workstream 7 — The editorial pass

Single source of truth for every number. Glossary. Reference cleanup.

One spreadsheet containing every numerical claim across all v0.3 documents. Phase costs, validator counts, chapter counts, fee tiers, latency targets, revenue tiers. Every document draws from the same row of the spreadsheet. No more drift between paper, brief, deck, supporting documents.
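One way the no-drift guarantee could be enforced mechanically: documents embed placeholder keys instead of literal numbers, and a check script flags any key missing from the canonical table. The file format, keys, and values below are invented for illustration; they are not figures from the paper.

```python
import csv
import io
import re

# Hypothetical canonical numbers file: one row per numerical claim.
CANONICAL = """key,value
phase1_cost_usd_m,12
validator_count_launch,150
fee_tier_basic_usd,99
"""

def load_canonical(text):
    return {row["key"]: row["value"] for row in csv.DictReader(io.StringIO(text))}

def check_document(doc_text, canonical):
    """Find {{key}} placeholders in a document and report any that do not
    resolve to a canonical row -- the 'no drift' rule, mechanised."""
    return [key for key in re.findall(r"\{\{(\w+)\}\}", doc_text)
            if key not in canonical]
```

With this pattern, the paper, brief, and deck all render `{{phase1_cost_usd_m}}` from the same row, so a number can only change in one place.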

A glossary at /glossary/ for the load-bearing terms (consensus domain, validator, CPML, etc.) — pick one term per concept and use it consistently.

Every reference in the paper's bibliography is either cited in the body or removed. Every citation marker in the body resolves to a bibliography entry. (The reviewer found seven references listed but never cited.)
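The two-way bibliography check can be automated with a short script. The citation-marker format assumed here ([Author YYYY]) is a guess at the paper's style, not a confirmed convention.

```python
import re

def crosscheck(body, bibliography_keys):
    """Two-way check: every citation marker like [Bench-Capon 2003] in the
    body must resolve to a bibliography entry, and every bibliography entry
    must be cited somewhere in the body."""
    cited = set(re.findall(r"\[([^\]]+?\d{4})\]", body))
    uncited = sorted(set(bibliography_keys) - cited)
    unresolved = sorted(cited - set(bibliography_keys))
    return uncited, unresolved
```

Run over the v0.2 text, a check like this would surface the seven never-cited references the reviewer found by hand.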

The five claims currently missing the "[unverified]" hedge get either restored hedges or properly sourced numbers.

Replace the placeholder nousaeternos@gmail.com contact destination with a working-group address (or accept form-only contact).

~ 1–2 weeks · always-running editorial work

3. What we can't fix immediately

Deferred to v0.4 or beyond

Some things require operational data we won't have until the protocol actually runs.

4. The order things happen in

Some workstreams can start immediately and run in parallel; others depend on each other.

Total: 12–13 weeks if we have the right people on each workstream. Could compress to ~8 weeks with parallel editors and quicker philosophical consultation.

5. What v0.3 doesn't promise

v0.3 is a paper, not a running protocol; it will not yet include an implementation.

v0.3 is the last paper-only release. The version after that — v0.4 — will be a paper plus an actual implementation. That's where the proof lives.

6. What this exercise costs

Not zero. Six named workstream owners (a tokenomics analyst, a protocol architect, a red-team engineer, a philosophy consultant, a formal-methods consultant, an editorial lead) and 12–13 calendar weeks. Plus the AI-lab pilot conversation, which has to start now if Workstream 1 is going to be honest.

Approximate cost of the v0.3 paper round itself: $80–150K of consultant + editorial time, depending on how the workstream owners get compensated and how much philosophical consultation we book.

We're not promising any of this. We're saying: this is what's queued up to address the criticism, and these are the rough costs of taking the project seriously.

"v0.3 is the response. v0.4 is the proof. Until both exist, the project is a serious proposal, not a working protocol."
Veritas Protocol · Plain-English v0.3 plan · April 2026