# Aegis — Consistency & Editorial QA of Veritas Protocol v0.2

**Reviewer.** Aegis (QA agent, Nous Aeternos team)
**Date.** 2026-04-20
**Scope.** All public-facing v0.2 artefacts:

- `BRIEF.md`
- `index.html` (brief + participation form)
- `paper/VERITAS-PROTOCOL-WHITEPAPER.md` (and its reader `paper/index.html`)
- `deck/index.html` (16-slide deck)
- `ideas/01-…` through `ideas/11-…` (design-capture)
- `ideas/research-01-…` through `ideas/research-07-…` (research inputs)
- `ideas/index.html` (ideas reader)

**Standard.** Publication-grade editorial review: pedantic on numbers, wording, internal references, placeholders, and tone. Any contradiction, drift, or unverified figure that a professional editor would catch before publishing gets flagged.

**Method.** Line-by-line read; cross-reference of 20+ anchor claims across every document they appear in; grep audit for placeholders, TODO markers, and `[UNVERIFIED]` tags; reference-list check against inline citations; link-surface inspection; tone and voice pass.

This report is pedantic by design. It is not a literary critique; it is an editorial gate check before publication.

---

## 0. TL;DR

**Editorial verdict: NOT READY FOR PUBLICATION. Needs one focused copy-edit round (est. 2–4 hours) before release.**

The substantive content is coherent, the architecture story holds together across documents, and the ideas artefacts are well-organised. What blocks publication right now is a cluster of **version-label contamination, broken path references in the brief HTML, numerical drift between the brief/paper/deck/ideas, and a handful of placeholder addresses and internal-team names leaking into the public surface.**

- **Critical findings (must fix before publish):** 5
- **High findings (should fix):** 15
- **Medium findings (editorial hygiene):** 21
- **Low / advisory notes:** 9

Total: **50 distinct findings** across the four publication surfaces. Full punch list at §13.

---

## 1. Methodology

I treated the **paper** as the authoritative anchor for every factual claim. The **brief** and **deck** are sales layers and must not contradict the paper. The **ideas** artefacts are the design-capture behind the paper; they must not contradict the paper at its own level of abstraction. The **research** reports are inputs and are allowed to contain `[UNVERIFIED]` marks and numerical ranges wider than the paper's point estimates — but the paper's numbers must be derivable from the research without selective rounding.

I cross-referenced twenty anchor claims (§4) through every document they appear in, then did an open-ended read for drift, placeholder leakage, and tone slippage.

---

## 2. Version labels and archival

### 2.1 **[CRITICAL C1] Brief HTML still labelled "v0.1" in four places**

The brief HTML (`index.html`) is packaged as the v0.2 public brief. Pages 1 and 2 are labelled v0.2 consistently. But v0.1 labels survive **four times** — three on the participation-form page (page 4, the contact surface) and one in the page-3 footer:

- `index.html:286` — masthead: `<span>v0.1</span>`
- `index.html:289–291` — `"This is a v0.1 working paper."` (copy in the form preamble)
- `index.html:274` — footer of page 3: `<span>Draft v0.1 · Collaborative Fact-Checking Working Group</span>`
- `index.html:363` — footer of page 4: `<span>Draft v0.1 · circulated for review · do not redistribute without permission</span>`

These four spots contradict every other surface. A reader who fills the form will see "v0.1" at exactly the moment of action.

**Fix.** Replace all four with `v0.2`. Remove `· circulated for review · do not redistribute without permission` on the final footer if the document is being *published* — that language is a private-circulation artefact.

### 2.2 **[CRITICAL C2] Paper HTML reader (`paper/index.html`) labels itself v0.1**

The whitepaper Markdown is v0.2. The reader shell at `paper/index.html` that renders it is still v0.1:

- `paper/index.html:9` — `<meta property="og:description" content="…Draft v0.1.">`
- `paper/index.html:129` — `<span class="brand">Working paper · v0.1</span>`
- `paper/index.html:131` — `<div class="draft">Draft · April 2026</div>` (OK for date, but adjacent to v0.1)
- `paper/index.html:145` — `<span>Working paper · draft v0.1</span>`

When the reader opens `/paper/`, the sidebar says v0.1 while the body says v0.2. This is immediately visible.

**Fix.** Update the reader chrome to v0.2. Also refresh the OpenGraph description — right now someone sharing the paper on social media still spreads the v0.1 tagline ("federated, cryptographically-signed substrate"). The v0.2 positioning is hybrid chain-plus-federation, and that tagline is flat wrong for v0.2.

### 2.3 **[HIGH H1] OpenGraph description still carries the v0.1 thesis**

`paper/index.html:9`:

> *"A federated, cryptographically-signed substrate for structured factual claims, with domain-indexed verdicts and cascading falsification. Draft v0.1."*

This is the v0.1 pitch and omits every v0.2 addition (permissionless write, chain, CPML, investigation market). Social-media shares show this description as the preview snippet. **This is the first thing half your readers will see.** It is a v0.1 artefact in the v0.2 distribution.

### 2.4 Dates — consistent

All surfaces read "April 2026" or "Draft · April 2026" uniformly. No drift detected.
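
Stale version labels are the dominant error class in this section and are trivially machine-checkable. A minimal pre-publish guard along these lines would have caught C1, C2, and H1 before review — the paths are the v0.2 artefact set from Scope (an assumption; an intentionally archived v0.1 copy would need an exclusion):

```ts
// version-guard.ts — pre-publish check: fail if any public surface still
// carries a stale v0.1 label. Paths assumed from the Scope list.
import { readFileSync } from "node:fs";

const surfaces = ["index.html", "paper/index.html", "deck/index.html", "ideas/index.html"];
let stale = 0;
for (const f of surfaces) {
  const hits = readFileSync(f, "utf8").match(/v0\.1/gi) ?? [];
  if (hits.length) {
    console.error(`${f}: ${hits.length} stale "v0.1" label(s)`);
    stale += hits.length;
  }
}
process.exitCode = stale ? 1 : 0;
```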

---

## 3. Terminology drift

### 3.1 "Consensus domain" vs "consensus frame" vs "epistemic frame" — **[HIGH H2] unresolved**

The paper and ideas use three overlapping terms without defining their relationship:

- `paper §5.4`: "domains" (`scientific-default`, `historical-academic-default`) — the technical name in CPML.
- `paper §6`, `ideas/01-…`, deck: "frame" (`mutually-hostile frames`, `ideologically-opposed frames`) — the social/political name.
- `ideas/01-…` passim: "epistemic groups" / "epistemic frames" — academic-philosophical register.

The CPML schema calls them `domain`. The brief and paper prose call the same thing "frame" roughly half the time. A glossary at the top of the paper, or a single one-sentence definition ("*frame* and *consensus domain* are synonyms; the technical CPML name is `domain`"), would prevent serious reader confusion. Without it, the reader has to infer that `mutually-hostile frames` and `scientific-default consensus domain` refer to the same layer.

### 3.2 "Validator" vs "verification center" vs "signing center" — **[HIGH H3] drift between paper and tokenomics research**

- Paper, brief, deck, ideas 01/02/09: **validator**.
- `research-03-tokenomics-deep.md` uses **"signing/verification centers"** repeatedly (e.g., lines 23, 117, 338, 516, 517).
- `03-tokenomics-design.md:33` says *"validators paid per service rendered"* in its Chainlink comparison, while the research whitepaper built on it switches to "signing centers."

The paper settles on `validator`. The research doc alone adds two further names for the same actor — three across the set. If the research doc is published in its current form, readers will think Veritas has three distinct roles.

**Fix.** Either (a) unify to `validator` in the research docs — low-effort find/replace; or (b) add a one-line note in the ideas reader at the top of `research-03`: *"The tokenomics research uses 'signing/verification center' as a synonym for the paper's 'validator.'"*

### 3.3 CPML abbreviations — consistent

CPML is always expanded on first mention as **Consensus Profile Markup Language**. Never used interchangeably with "consensus profile" as a competing term. Good.

### 3.4 "Chain" vs "blockchain" vs "L2" — consistent enough

Used naturally in context. No technical drift. The paper distinguishes *chain-anchored* / *on-chain* / *L2-settled* with precision.

### 3.5 Quiz naming (Frame / Compass / Plural) — **[MEDIUM M1] left explicitly ambiguous**

The brief, paper, deck, and `07-quiz-mvp.md` all say the working name is "Compass (with Frame and Plural as alternatives)" or variants. This is honest — the working group hasn't decided — but it means the public deck slide 15 lists `veritas-quiz (Frame / Compass)` and slide 16 doesn't pick either, while `07-quiz-mvp.md:65` says "Working name: **Compass**."

This is cosmetically fine for a draft but creates a branding hole in a public brief. It is worth resolving before external release because a funder seeing "Frame / Compass / Plural" three times reads it as indecision.

---

## 4. Cross-document consistency — twenty anchor claims

I picked twenty claims and verified each across the documents it appears in.

| # | Claim | Paper | Brief | Deck | Ideas | Verdict |
|---|---|---|---|---|---|---|
| 1 | Chain choice: **Base primary + Optimism mirror** (Phase II) | §5.2, §6.5 | §2 | slide 02, 05 | `10-…` | **DRIFT (see §4.1)** |
| 2 | Validator revenue share: **60–70%** | §8.2 | §1, §3 | slide 02, 11 | `03-…`, `05-…` | Consistent |
| 3 | Legal structure: **Swiss Stiftung + Verein + EU GmbH** | §7.4 | §2 | slide 11 | `03-…`, `research-03`, `research-05` | Consistent (see §4.2) |
| 4 | **9-seat dispute panel (3-3-2-1 composition)** | §7.5 | §3 (`9-seat`) | slide 12 | `09-…` | Consistent |
| 5 | **7-of-9 supermajority + 60-day comment** | §7.1, Appx C | §3 | slide 12 | `09-…` | Consistent |
| 6 | **5 operational refusals** with exact wording | §7.1, Appx C | §3 | slide 06, 12 | `09-…` | **DRIFT (see §4.3)** |
| 7 | Phase II gate: **5–10 validators**, 1 AI-lab LOI, peer-reviewed publication | §13.2 | §3 | slide 14 | — | **DRIFT (see §4.4)** |
| 8 | Investigation starting fees: **$300 / $1K / $3K / $10K+ / $2.5K adversarial** | §5.7 | §2 (pp3) | slide 10 | `06-…`, `research-03` | **DRIFT (see §4.5)** |
| 9 | **5–10% foundation fee on investigation** | §5.7 | — | — | `06-…` | Consistent |
| 10 | Phase I cost: **US$ 300–500K** | Appx B | — | slide 14 | `05-…` ($200–300K) | **DRIFT (see §4.6)** |
| 11 | Phase II cost: **US$ 600–900K** | Appx B | — | slide 14 | `05-…` ($400–600K) | **DRIFT (see §4.6)** |
| 12 | Phase II ops floor: **~$40K/month at 1M attestations/day** | §5.2 | — | — | `research-04` | Consistent |
| 13 | Base tx cost: **$0.01 median** | §6.5 | — | — | `10-…` ($0.0001–0.001), `research-04` (various) | **DRIFT (see §4.7)** |
| 14 | EAS per-attestation: **$0.002–0.004** | §5.2, §6.5 | — | — | `02-…` ($0.001) | **DRIFT (see §4.7)** |
| 15 | **~3,000 CAI members** | §2.1 (`roughly 3,000 affiliated organisations`) | §3 (`thousands of organisations`) | — | `research-02` | Consistent (range OK) |
| 16 | **thousands of retractions/year** (Retraction Watch) | §3 | §1 | slide 03 | — | Consistent |
| 17 | Year-3 treasury base case: **$695M–$2.78B** | §8.3 | — | — | `research-03` | Consistent |
| 18 | **Journalism funding $35–140M/year** | §8.3 (implicitly, 5% of inflow) | — | — | `research-03` (§13.2 explicit) | Consistent but **paper understates — see §4.8** |
| 19 | Country chapters day-one: **EU + US** | §7.2, §13.2 | — | slide 12 | `08-…`, `research-05` | **DRIFT (see §4.9)** |
| 20 | **60–70% validator compensation** (paper §8.2) ≠ **65% budget** (ideas/05 scenario) | §8.2 | §3 | slide 11 | `05-…` (65%), `03-…` (70%) | **Internal inconsistency — see §4.10** |

Ten of twenty anchor claims show material drift. Details:

### 4.1 Chain: "primary with mirror" vs "OP Mainnet" vs "Base or Optimism"

The paper, brief, and deck say **"Base as primary with Optimism mirror"**. `ideas/10-chain-selection.md` concludes the opposite:

> `10-…:152`: *"**Chain: Optimism (OP Mainnet).** Rationale. Public-goods funding alignment… Alternative. Base. If a partnership with Coinbase emerges…"*

`ideas/10` recommends **Optimism primary, Base alternative**. The paper, brief, and deck say **Base primary, Optimism mirror**. These are not the same architecture — "mirror" is not "alternative" (a mirror runs in parallel; an alternative is the fallback if primary isn't chosen). A reader comparing the paper to the chain-selection idea gets whiplash.

**Recommendation.** Pick one. If the team now endorses the paper version (Base + Optimism mirror), rewrite `ideas/10-…:115` and `152` to match. If the team still thinks Optimism is the better primary, then the paper, brief, and deck are wrong. This is a substantive technical decision masquerading as a copy-edit.

### 4.2 Swiss Stiftung configuration — minor

Paper `§7.4`: *"Swiss Stiftung plus Swiss Verein plus EU operating GmbH/BV (Ireland, Netherlands, or Malta), with deferred US 501(c)(3) plus Wyoming DAO LLC"*.

Deck slide 11 card: *"Swiss Stiftung + Swiss Verein + EU GmbH. MiCA utility-token classification. US 501(c)(3) + Wyoming DAO LLC deferred."*

Brief §2.2: *"Swiss Stiftung plus EU operating entity is the current leading legal structure"* (omits Swiss Verein).

The brief's shortened form is acceptable for a sales surface, but dropping Verein entirely while the paper and deck both explicitly list Stiftung *plus* Verein suggests the brief was updated at a different pass. Either keep both in the brief ("Swiss Stiftung + Verein + EU operating entity") or justify the simplification. **Small** — but this is exactly the kind of thing a reviewer who knows jurisdictions will notice.

### 4.3 Operational refusal list — **three distinct wordings across surfaces**

The five refusals have **three different phrasings** across documents. This is the single cleanest check for editorial hygiene because the list is the protocol's most politically sensitive commitment and must not drift.

**Paper §7.1 and Brief page 3:**

1. Attestations verifying child sexual abuse material.
2. Attestations verifying non-consensual intimate imagery of identifiable persons.
3. Attestations verifying credible imminent-harm threats against specific persons or protected groups.
4. Attestations verifying operational mass-casualty-weapon synthesis instructions.
5. Attestations verifying or locating active illegal markets.

**Paper Appendix C — shortens everything:**

1. Attestations verifying **CSAM**. (no "child sexual abuse material")
2. Attestations verifying non-consensual intimate imagery. (drops "of identifiable persons")
3. Attestations verifying credible imminent-harm threats. (drops "against specific persons or protected groups")
4. Attestations verifying mass-casualty-weapon synthesis instructions. (drops "operational")
5. Attestations verifying or locating active illegal markets.

**Ideas/09-refusals-and-panel.md §"narrow initial refusal list (v0.1 — starting proposal)":**

1. Attestations that **claim to verify** the content or authenticity of child sexual abuse material.
2. Attestations that **claim to verify** non-consensual intimate imagery of identifiable persons.
3. Attestations that verify threats of imminent harm directed at specific identifiable persons or protected groups (**as defined in ICCPR Article 20**).
4. Attestations that verify detailed synthesis or operational instructions for biological, chemical, or radiological weapons usable at mass-casualty scale.
5. Attestations that claim to verify or locate active illegal markets (**narcotics, trafficking, commissioned violence**).

Note also: ideas/09 calls it "v0.1 — starting proposal" in the heading. This is a v0.2 document. **[HIGH H4] header label contradicts the container version.**

**The drift in item 4** (paper §7.1 says *operational mass-casualty-weapon synthesis*; paper Appendix C drops *operational*; ideas/09 elaborates to *detailed synthesis or operational instructions for biological, chemical, or radiological weapons usable at mass-casualty scale*) is substantive: "operational" is the adjective that does the work of narrowing the refusal from "any weapons-related claim" to "claims that, if verified, make a weapon constructable." Dropping it in Appendix C silently broadens the refusal. That is the exact kind of creep the panel/60-day procedure exists to prevent.

**Fix.** Pick one canonical wording. Paper §7.1 wording is the best candidate (item-4 "operational" does real work). Replace the list in Appendix C and ideas/09 with a byte-for-byte copy. Parenthetical elaborations (ICCPR Article 20; narcotics, trafficking, commissioned violence) can be added below the canonical list as a commentary note.

**[CRITICAL C3]** for this whole item: the **five refusals are the protocol's most consequential commitment and must be word-for-word identical across every document**. Failure on this is the single most damaging editorial problem in the set.
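
Given C3's word-for-word requirement, this check is automatable. A sketch of a CI guard, assuming the repo layout in Scope and a "N. Attestations …" numbered-list shape in the Markdown sources (the HTML brief would need a separate `<li>`-based pass):

```ts
// refusal-guard.ts — CI check that the five-refusal list stays byte-identical
// across surfaces. Paths and the extraction heuristic are assumptions.
import { readFileSync } from "node:fs";

// Collect every numbered "N. Attestations …" line in a document, in order.
const refusals = (path: string): string[] =>
  readFileSync(path, "utf8")
    .split("\n")
    .filter((l) => /^\s*\d\.\s+Attestations/.test(l))
    .map((l) => l.replace(/^\s*\d\.\s+/, "").trim());

const paper = refusals("paper/VERITAS-PROTOCOL-WHITEPAPER.md");
const canonical = paper.slice(0, 5); // §7.1 copy; Appendix C's copy is paper[5..9]

for (const [name, list] of [
  ["paper Appendix C", paper.slice(5, 10)],
  ["ideas/09", refusals("ideas/09-refusals-and-panel.md").slice(0, 5)],
] as const) {
  list.forEach((item, i) => {
    if (item !== canonical[i]) {
      console.error(`${name}: refusal ${i + 1} drifts from §7.1:\n  §7.1: ${canonical[i]}\n  here: ${item}`);
      process.exitCode = 1;
    }
  });
}
```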

### 4.4 Phase II validator count — **[HIGH H5] drift between paper, ideas, and deck**

| Source | Number |
|---|---|
| Paper §13.2 ("5–10 institutional pilot validators") | 5–10 |
| Paper §8.3 ("~12 institutional validators funded") | ~12 |
| Deck slide 14 ("5–10 institutional validators") | 5–10 |
| `ideas/03-tokenomics-design.md:80` ("N=20 validators at Phase II") | 20 |
| `ideas/03-…:105` ("12 validators at $100K/each") | 12 |
| `ideas/05-revenue-model.md` table ("12 validators at $100K/each = feasible") | 12 |
| v0.1 paper (archival): 5–10 | 5–10 |

**Two separable numbers are in play:**
- **Gate number** (how many validators is Phase II launching with): 5–10.
- **Steady-state number at month 18** (how many validators are funded at partial-FTE): 12.

That's fine if both are labelled clearly. **But they are not labelled.** The paper §13.2 gate uses "5–10" and the same paper's §8.3 steady state uses "~12" with no cross-reference. The reader who spots both thinks the number drifted. `ideas/03-…:80`'s "N=20" is a purely illustrative scenario calculation but reads as if it's a target.

**Fix.** Every mention of validator count needs a qualifier: `gate validator cohort`, `steady-state validator cohort`, or `illustrative validator count`. The deck and brief should reconcile to "5–10 gate, scaling to ~12 at steady state."

### 4.5 Investigation fee schedule — **[HIGH H6] drift between paper, deck, ideas, and research**

**Paper §5.7 (canonical):** $300 quick, $1K standard, $3K deep, $10K+ extended, $2.5K adversarial cross-investigation. "Phase II, quant-agent recommendation."

**Brief page 2:** $300 quick, $1K standard, $3K deep, $10K+ extended. (Missing $2.5K adversarial — OK, brief can be abridged.)

**Deck slide 10:** $300 quick, $1K standard, $3K deep, $10K+ extended, $2.5K adversarial. Matches paper.

**Ideas/06-investigation-market.md:**

| Tier | Min validators | Starting price |
|---|---|---|
| Quick (1 claim, single domain) | 2 | $300 |
| Standard (1 claim, multi-domain) | 3 | $1,000 |
| Deep (primary-source work) | 5 | $3,000 |
| Extended (novel primary-source acquisition) | 5+ | $10,000+ |
| Adversarial cross | 5 | $2,500 |

Matches paper. OK.

**Research `research-03-tokenomics-deep.md` §10.2 — DIFFERENT tier schedule:**

| Tier | Scope | Fee |
|---|---|---|
| T1 Routine verification | Single-claim | $100–$300 |
| T2 Standard investigation | multi-source | **$1,500–$3,500** |
| T3 Deep investigation | document review | **$8,000–$25,000** |
| T4 Long-form | months of reporting | $75,000–$300,000 |

And research `§1` line 50: *"Launch investigation market at **$2,000–$8,000 flat-fee tiers**"*.

So:
- **Research recommends $2K–$8K flat-fee tiers** (with a four-tier $100–$300 / $1.5K–$3.5K / $8K–$25K / $75K+ structure).
- **Paper implements $300/$1K/$3K/$10K+/$2.5K**, which **does not reproduce the research's tier structure** — individual dollar values graze the research ranges, but the tier labels and cut-points differ throughout.

The paper says the fees are "quant-agent recommendation" but the quant-agent recommendation as-written is different. Either:
(a) the team simplified the quant recommendation to cleaner round numbers (fine — but then the paper should say "simplified from quant's T1–T3 range" rather than claim "quant-agent recommendation"), or
(b) a later design iteration changed the numbers (fine — but the research doc should carry a note, "update 2026-04-20: tiers revised to $300/$1K/$3K/$10K+/$2.5K in paper §5.7"), or
(c) someone is wrong.

**[HIGH H6].** A reviewer who reads both the paper and the research will flag this. It is the kind of finding that makes a reader question whether the paper's numbers are trustworthy.

### 4.6 Phase I / Phase II cost estimates — **[HIGH H7] paper and ideas disagree by roughly 50–70%**

**Paper Appendix B:**
- Phase I: **US$ 300–500K**
- Phase II: **US$ 600–900K**

**Deck slide 14:** matches paper.

**`ideas/05-revenue-model.md` §"Funding-source targets":**
- Phase I (0–6 months): **~$200–300K**
- Phase II (6–18 months): **~$400–600K**

These are meaningfully different. Ideas says Phase I is $200–300K; paper says $300–500K. Ideas says Phase II is $400–600K; paper says $600–900K. That's roughly a 50% uplift at the bottom of each range, and up to ~67% at the top, between the capture artefact and the published paper.

**The v0.1 paper had Phase I at $200–300K and Phase II at $400–600K** (exactly what ideas/05 still says). So the paper numbers were revised upward for v0.2; the ideas artefact didn't get the memo.

**Fix.** Update `ideas/05-revenue-model.md` to match paper Appendix B, or if ideas/05 is in fact the current best estimate, update the paper and the deck. **[HIGH H7]** — costs are the single most-scrutinised number by funders.

### 4.7 Chain transaction costs — **[MEDIUM M2] numerical drift, all within the same document family**

The paper and ideas give three different cost figures for "transaction on L2":

- Paper §5.2: `EAS on-chain $0.002–0.004 per attestation`.
- Paper §6.5: `Base tx cost $0.01 median; EAS on-chain $0.002–0.004`.
- Ideas/02 (hybrid architecture) §"Critical analysis": `~$0.001 per attestation on a modern L2`.
- Ideas/10 (chain selection) §"Tier 1": `Base $0.0001–0.001`; `Optimism ~$0.0005`.
- Research-04: Linea `$0.003–0.01`, others vary.

A copy-editor reading the public paper will see `$0.002–0.004` and `$0.01 median` within 40 paragraphs of each other (both within §5–§6) and raise an eyebrow. The distinction is `EAS attestation cost (contract-specific, $0.002–0.004)` vs `baseline L2 tx cost (chain-wide, $0.01 median)`. But the paper doesn't say that distinction out loud; the two numbers just sit next to each other.

**Fix.** Add a one-clause note at §5.2 or §6.5: *"Base median transaction cost (\$0.01) includes overhead; EAS-registered attestations, which are the dominant on-chain write type, cost \$0.002–0.004 per record."*

### 4.8 Journalism funding — **[MEDIUM M3] the paper quietly loses 5× on the low end**

Research `research-03-tokenomics-deep.md §13.2`: "Conservative: **$35M/year** to journalism. Base: **$140M/year**." The $695M–$2.78B treasury × 5% = $35M–$140M journalism funding.

Paper §8.3: `~$4.5M/year to investigative journalism` in the "Phase III scale-up" scenario with $17M/year gross. That's about 26% of revenue, but 26% of a much smaller base ($17M vs $695M–$2.78B in the research base-case).

The paper's $4.5M and the research's $35–140M are two scenarios at very different scales. That's fine — the paper is more conservative. But **the paper never mentions the higher scenario**, and neither the brief nor the deck carries any journalism-funding figure at all. A reader reading only the paper gets $4.5M; a reader who stumbles into the research gets $140M. That is a 30× delta.

**Fix.** Either (a) say in §8.3: *"Quant-agent upside scenarios see journalism allocations at \$35–140M/year contingent on AI-lab adoption at projected volumes"*, or (b) add a forward reference: *"See ideas/research-03 for upside modelling."* The paper needs to acknowledge the upside at least once to match the brief's actual ambitions. Right now the sales story is bigger than the paper.

### 4.9 Country chapters — **[HIGH H8] paper, deck, and ideas disagree on day-one chapters**

- **Paper §7.2:** *"Juris-agent research recommends day-one chapters: **EU-Brussels, US, UK, Switzerland**. Six-month addition: Germany, France. Twelve-month: Japan, Brazil, India."* (FOUR day-one chapters)
- **Paper §13.2 (Phase II deliverables):** *"EU + US country chapters."* (TWO)
- **Deck slide 12:** *"EU, US day-one; UK, Switzerland next."* (TWO day-one, TWO next)
- **Ideas/08-country-chapters.md §"Phased roll-out" Phase II:** *"EU chapter established first… US chapter established second."* (TWO)
- **Research-05-regulatory.md §"Day one (co-launch with Phase II)":** *"EU-level (Brussels-based coordinating entity), U.S., UK (CIC or Ltd), Switzerland (subsidiary of the Foundation — effectively the home chapter)."* (FOUR)

Paper §7.2 and research-05 say four day-one chapters. Paper §13.2, deck, and ideas/08 say two. The paper contradicts itself (§7.2 vs §13.2) with no bridging language.

**Fix.** The paper's own two sections diverge; that's the blocking fix. Either (a) Phase II ships two chapters (EU + US) with UK + CH as first-year adds, or (b) Phase II ships four. Pick one and make paper §7.2, paper §13.2, deck, and ideas/08 all match.

### 4.10 Validator revenue share: 60–70% vs 65% vs 70% — **[MEDIUM M4] rounding drift**

- Paper §8.2 and brief page 3 and deck slide 11: **60–70%**.
- Paper §8.3 ("$1.2M to validator compensation" out of "~$1.8M/year gross"): implies **~67%**.
- Ideas/05 §"Phase II launch": *"Validator budget at 65%: ~$1.2M → 12 validators"* — explicit 65%.
- Ideas/03-…: *"$840K/year → ~$84K/validator"* in the scenario using 70%.

This is pedantic but the policy statement (60–70%) and the example computations (65% in ideas/05, 70% in ideas/03) are inconsistent. Pick one anchor percentage for illustrative scenarios — either 65% or 70% — and hold it consistent across ideas/03 and ideas/05. The policy range 60–70% is fine for the paper.

---

## 5. Numerical-claim verification (all figures in brief + paper + deck)

Inventory of every quantitative claim in the three public-facing documents, with source and verification status.

| # | Claim | Where | Source | Status |
|---|---|---|---|---|
| 1 | "six orders of magnitude" AI generation vs human review | brief §1, paper §3.4 (`many orders of magnitude`), deck slide 03 | Unsourced rhetorical figure. | **[UNVERIFIED]** — no cited empirical basis |
| 2 | "tens of billions" AI-hallucination addressable market by 2026 | brief §3, paper §1.2, `ideas/11-…`, `research-03` §13 | Research-03 §13 marks the $10B–$50B estimate as `[UNVERIFIED — est.]`. | **[UNVERIFIED]** — source is an internal estimate, publicly unsourced |
| 3 | "~3,000 CAI affiliated organisations" | paper §2.1 | CAI membership is public; 3,000 is plausible. | Not re-verified but consistent; OK to ship with light hedge ("roughly 3,000"). |
| 4 | "thousands of retractions/year" (Retraction Watch) | paper §3, brief §1, deck slide 03 | Retraction Watch counts several thousand per year; claim is loose and safe. | OK |
| 5 | `$695M–$2.78B/year` Year-3 treasury | paper §6.4, §8.3 | Research-03 §13.1 explicit derivation with `[UNVERIFIED]` inputs. | **[UNVERIFIED]** — the paper should acknowledge this is a scenario with unverified inputs |
| 6 | `$35–140M/year` journalism funding | paper (implied, via §8.3 & §6.4); research-03 §13.2 explicit | Derived from #5 × 5%. | Inherits #5's unverified status |
| 7 | Phase II cost `$600–900K` | paper App B, deck slide 14 | Paper point-estimate; ideas/05 says $400–600K. | **Internally inconsistent — see §4.6** |
| 8 | Phase I cost `$300–500K` | paper App B, deck slide 14 | Same: ideas/05 says $200–300K. | **Internally inconsistent — see §4.6** |
| 9 | Phase II ops floor `~$40K/month at 1M attestations/day` | paper §5.2 | research-04 derives: Celestia ~$6K/month for 12GB + infra. | OK — within research bound |
| 10 | `Base tx cost $0.01 median` | paper §6.5 | ideas/10 (`$0.0001–0.001`). | **Drift — see §4.7** |
| 11 | `EAS $0.002–0.004` | paper §5.2, §6.5 | ideas/02 uses $0.001. | **Drift — see §4.7** |
| 12 | Celestia DA `~$0.35–0.81/MB` vs `Ethereum blobs ~$20.56/MB` | paper §5.2 | research-04 derivation. | Research-04 line 76 says `Celestia ~$6K/month` vs `Ethereum blobs ~$80K+/month` for 12 GB/month. That ratio is 13×, not the 25–58× that $20.56/$0.35 implies. **[MEDIUM M5] ratio drift** between the paper's per-MB numbers and the research's per-month numbers. |
| 13 | `60–70%` validator compensation | paper §8.2 | ideas/05 (65%), ideas/03 (70%). | Drift in illustrative scenarios — **§4.10** |
| 14 | `$300/$1K/$3K/$10K+/$2.5K` investigation fees | paper §5.7 | research-03 says $2K–$8K tiers. | **Drift — §4.5** |
| 15 | `~10K quiz users` Phase II gate | paper §13.1, deck slide 14 | Brief says "10K–100K" growth target at end of launch. Paper's gate is the lower bound. | OK |
| 16 | `100K+ quiz users` Phase III target | paper §13.3, deck slide 14 | Consistent across paper + deck. | OK |
| 17 | `9-seat dispute panel (3-3-2-1)` | paper §7.5, brief page 3, deck slide 12, ideas/09 | Consistent. | OK |
| 18 | `20+ sites publishing v0.2` Phase II gate | paper §13.1, deck slide 14 | Consistent. | OK |
| 19 | `5+ third-party-attesting organisations` Phase II gate | paper §13.1, deck slide 14 | Consistent. | OK |
| 20 | `1+ AI-lab research-pilot letter of intent` | paper §13.1, deck slide 14, brief §3 | Consistent. | OK |
| 21 | `3–5 chartered consensus domains` Phase II | paper §13.2, deck slide 14 | Consistent. | OK |
| 22 | `4–6 reference starter CPMLs` | paper §7.3, ideas/04 | Ideas/04 says 4–6; paper §7.3 says "4–6". | OK |
| 23 | `sub-50ms TTFB` edge cache | paper §5.3 | Unsourced target. | **[UNVERIFIED]** — engineering target, not a measured benchmark |
| 24 | `tens of milliseconds` AI grounding latency | brief, paper | Same engineering target. | **[UNVERIFIED]** |
| 25 | `several seconds` cascade propagation before chain confirmation | brief, paper §5.3 | Architectural claim; plausible. | OK — stated as design target |
| 26 | `~3000` match questions OkCupid | research-07 | Public. | OK |
| 27 | `~60 million users/year` 16Personalities | ideas/07 | External claim; commonly cited. | **[UNVERIFIED]** but not load-bearing for the protocol |
| 28 | `$1.67B` SSL market, `$2.64B` by 2032 | research-03 | Cited [coh-ssl]. | OK — sourced |
| 29 | `~$9M` Chainlink Reserve at launch | research-03 | Cited [messari-lnk]. | OK |
| 30 | `LINK market cap ~$6.6B` April 2026 | research-03 | Cited [cmc-lnk]. | OK |

**Summary numerical issues:** Five unverified-but-unflagged claims in public documents (#1, #2, #5, #23, #24). Seven internal-inconsistency numerical drifts (#7, #8, #10, #11, #12, #13, #14). The paper / brief / deck hide the `[UNVERIFIED]` markers that the research reports explicitly carry — which is a readability improvement in theory and a scientific-honesty regression in practice.
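
For reproducibility, the flagged ratios can be recomputed from the documents' own published figures in a few lines:

```ts
// figure-check.ts — recompute the ratios this section flags, so the drift
// claims in this audit are themselves reproducible. Pure arithmetic; every
// input is a figure quoted above.

// #5/#6 — journalism funding = 5% of the year-3 treasury scenario:
console.log(0.05 * 695e6, 0.05 * 2.78e9); // 34.75e6, 139e6 → $35M–$140M ✓

// §4.8 — paper's $4.5M vs the research upper bound:
console.log(140e6 / 4.5e6); // ≈ 31 → the "30× delta"

// #12 [M5] — per-MB vs per-month Celestia/Ethereum ratios:
console.log(20.56 / 0.81, 20.56 / 0.35); // ≈ 25.4, 58.7 (paper, per MB)
console.log(80_000 / 6_000);             // ≈ 13.3   (research-04, per month)

// §4.6 [H7] — paper vs ideas/05 phase budgets, bottom and top of range:
console.log(300 / 200, 500 / 300); // Phase I:  1.5×, ≈1.67×
console.log(600 / 400, 900 / 600); // Phase II: 1.5×, 1.5×
```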

**Recommendation.** The paper should acknowledge that specific large figures are scenario estimates with unverified inputs:

- In §1.2 / §2.3, add one clause: *"Market estimates of the AI-hallucination economic footprint are order-of-magnitude and contested; the scenario here assumes tens of billions without a point estimate."*
- In §6.4 and §8.3, add: *"Treasury inflow scenarios depend on AI-laboratory grounding-partnership volumes that are themselves contingent; the ranges reflect that uncertainty."*
- In §5.3, flag the `sub-50ms` as an engineering target subject to Phase II empirical validation.

---

## 6. Brief-to-paper promise audit (line-by-line)

I walked the brief line by line. Every brief claim is supported in the paper except the following gaps.

### 6.1 **[HIGH H9] Ideas/04 names four starter CPMLs (`cpml:minimal-evidence`, `cpml:scientific-first`, `cpml:journalism-mainstream`, `cpml:academic-plural`) — paper and brief give only a count**

The brief text is clean and doesn't enumerate starter CPMLs. `ideas/04-cpml.md` and the deck both say "4–6 foundation-published CPMLs," and the paper §7.3 says "4–6 reference starter CPMLs." The set of four example names appears only in `ideas/04-…:84–86`. So a reader who goes from paper → ideas/04 sees the concrete four; a reader who stays on the paper sees only the count. A small gap; the fix is to name the set in paper §7.3 or mark it explicitly as undecided (see [H9]).

### 6.2 **[HIGH H10] Brief introduces "sub-cultural" as a first-class frame example; paper is less explicit**

Brief page 2: *"state-aligned narratives, ideologically-opposed religious traditions, dissident communities, sub-cultural epistemic groups."* This is also in the paper §1.2 and §2.4 (implicitly). But "sub-cultural" as a category name doesn't appear in §7 governance. A reader wondering "do sub-cultural frames really get first-class treatment?" has to check ideas/01 to confirm. Paper §7.3 should say explicitly: *"Third-party CPMLs — including ideologically-opposed, sub-cultural, and state-aligned — are architecturally first-class."* It does not.

### 6.3 **[MEDIUM M6] Brief §3 footer says `Draft v0.1`**

Already flagged in §2.1 — listed here for completeness as a brief-specific promise issue.

### 6.4 Brief partners list includes "Wellcome Trust"; paper does not

Brief §3 "further reading": *"…foundations (Mozilla, Knight, MacArthur, Ford, Protocol Labs / [FFDW]…"*. **Does not mention Wellcome**.

Brief §3 earlier: the bullet section does not list Wellcome.

Paper §10 / Executive Summary: lists Mozilla, Knight, MacArthur, Ford, Protocol Labs / FFDW. **Does not mention Wellcome.**

Ideas/05-revenue-model.md: *"NGOs and foundations donate… Mozilla, Knight, MacArthur, Ford, Protocol Labs / FFDW, **Wellcome**"*.

Wellcome appears only in the ideas/05 expansion. The public docs are consistent with each other but diverge from the ideas. Minor — flag because a funder named-and-dropped would be annoyed. **[MEDIUM M7]**

### 6.5 Brief §2 "The foundation handles narrow operational-refusal list (page 3)" — oversimplifies the revision mechanism

Brief says the foundation handles the list; paper §7.1 says "panel supermajority… revisable only by 7-of-9 multi-stakeholder panel supermajority". The brief gives an overly simple reading. The paper gives the right mechanism. Brief should say "foundation stewards the list; revisions require panel supermajority." Small.

---

## 7. Placeholder and leakage audit

### 7.1 **[CRITICAL C4] "homototus", "Drow", "Nous" and internal team names leaking into public artefacts**

The `[UNVERIFIED]` tag count (44 across the research reports — see §7.3) is fine *inside research reports* — it is the right hygiene. But several internal identifiers leak into the public-facing surface:

- `ideas/05-revenue-model.md:108`: **"Drow / Nous self-funded + small seed"** — line in the "Funding-source targets" section. This is a public idea artefact. "Drow" and "Nous" are internal identifiers.
- `ideas/research-03-tokenomics-deep.md:5`: *"Author: Quant (financial-intelligence agent, Nous Aeternos team)"*. Every research report has a banner naming "Nous Aeternos" as the author organisation. Readers will ask "Who is Nous Aeternos?" — the answer is internal to this session and not intended for public audience.
- `research-03-…:7`: *"Status: Research whitepaper, Phase II design input. Draft for internal review."*. **Internal review** language in public doc.
- Research reports variously signed "quant", "juris", "herald", "scout", "sage", "architect", "strategos" — these are internal agent codenames. The paper's §6 and §7 refer to "quant-agent research", "juris-agent research", "architect-agent research" — which a public reader will read as specific human consultants with those cryptic codenames.
- `index.html:355` and `index.html:430`: **`mailto:nousaeternos@gmail.com`** — Drow's personal Gmail address baked into the public contact-form fallback, twice. It will attract spam immediately once indexed. (The honeypot label `hp_company` at `index.html:99` is harmless; noted only because it sits on the same surface.)

**Recommendation.**
1. For research reports: add a banner at the top of each saying something like: *"Draft working paper, circulated with Veritas Protocol v0.2 for transparency. Produced 2026-04. Author: internal research group."* Remove "Nous Aeternos team" from public version; replace with "Collaborative Fact-Checking Working Group research team" to match the brand on the paper.
2. For `ideas/05-revenue-model.md:108`: replace **"Drow / Nous self-funded + small seed"** with **"Founding team self-funded + small seed"**.
3. For the participation form: use a role-based address — `participate@veritas-working-paper.example` or similar — not a personal Gmail. At minimum, the *visible text* on the form should say "write directly to the working group" with the link hidden; right now the fallback prints "Write directly" with the raw `mailto:nousaeternos@gmail.com` in the href for any user who inspects or copies the link.
4. The paper's inline phrasing "quant-agent research", "juris-agent research", "architect-agent research" should be replaced with something neutral — "a commissioned tokenomics analysis", "commissioned regulatory analysis", "commissioned architecture analysis" — unless the team wants to publicly own the agent codenames (which are inscrutable to outside readers).

**This is the single most embarrassing finding in the audit.** A reader who ctrl-Fs "Drow" in the public release will find at least one hit.

### 7.2 **[HIGH H11] Form action references `veritas-wg@noreply.example`**

`index.html:293`: `action="mailto:veritas-wg@noreply.example"`. This is the form-action fallback for users whose JavaScript is disabled. The `.example` TLD is an IANA-reserved domain — the mail will bounce.

For users without JS, submitting the form opens their mail client with destination `veritas-wg@noreply.example`, which is undeliverable and reads like a clear placeholder. Replace with a deliverable address (same one the JS path uses, or a dedicated inbox).

### 7.3 Placeholder counts

Grep `TBD|TODO|FIXME|XXX|\[UNVERIFIED|to be written|to be determined|coming soon|\?\?\?`:
- Public paper (VERITAS-PROTOCOL-WHITEPAPER.md): **1** (pre-existing `[UNVERIFIED]` in CNIL reference text, not a placeholder)
- Brief: **1** (pre-existing)
- `ideas/03-tokenomics-design.md`: **1** — the forward reference "`12-validator-reputation.md` — future artefact" at line 34 names an artefact that doesn't exist in `/ideas/`. **[MEDIUM M8]** — reads as a broken link to a reader.
- `ideas/research-03-…`: **13** — all within the research doc, all appropriately marked. Fine.
- `ideas/research-02-…`: **9** — same.
- `ideas/research-05-…`: **9** — same.
- `ideas/research-07-…`: **8** — same.
- `ideas/research-04-…`: **3** — same.
- `ideas/research-01-…`: **1** — same.

Total public `[UNVERIFIED]` markers: 44 inside research reports (appropriate), 0 in paper, 0 in brief, 0 in deck. **Research hygiene is good; paper/brief/deck hygiene is *also* good, provided readers understand that claims in the paper that trace back to `[UNVERIFIED]` research are themselves contingent** — see §5 for the five specific figures that should inherit the marker.
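
For the record, this subsection's audit is repeatable as a script. The pattern is the grep quoted above; the file walk assumes Node ≥ 20 and a run from the repo root:

```ts
// placeholder-scan.ts — count placeholder/unverified markers per file.
// Same pattern as the grep above; requires Node >= 20 (recursive readdir).
import { readFileSync, readdirSync } from "node:fs";

const marker =
  /TBD|TODO|FIXME|XXX|\[UNVERIFIED|to be written|to be determined|coming soon|\?\?\?/g;

for (const f of readdirSync(".", { recursive: true })) {
  if (!/\.(md|html)$/.test(f)) continue; // only the publication surfaces
  const hits = readFileSync(f, "utf8").match(marker);
  if (hits?.length) console.log(`${f}\t${hits.length}`);
}
```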

### 7.4 **[HIGH H12] `ideas/03-tokenomics-design.md:34` references `12-validator-reputation.md` which does not exist**

> *"Witnet can borrow the reputation idea separately from the token (see `12-validator-reputation.md` — future artefact)."*

Readers of the ideas artefacts will click this and get a 404. Either (a) write the artefact, (b) remove the forward reference, or (c) mark it `// planned for v0.3`. A public document with a hanging forward reference looks sloppy.

### 7.5 **[MEDIUM M9] `ideas/06-investigation-market.md:102` references `72-legal-regulatory-landscape.md` — doesn't exist**

> *"Mitigations detailed in `72-legal-regulatory-landscape.md`:"*

The `72-` prefix is out-of-band (ideas only go up to 11). **This looks like a leftover from the v0.1 outline**, which used a different numbering scheme. Dead reference.

### 7.6 **[MEDIUM M10] `ideas/11-blockchain-debate.md` references non-existent `53-tokenomics-hard-analysis.md`**

Similar leftover v0.1 numbering: `ideas/11-blockchain-debate.md` line 42 cites `53-tokenomics-hard-analysis.md`. No such file. **Dead reference.**

---

## 8. Broken references

### 8.1 Paper internal section cross-refs — **[MEDIUM M11] check §12 naming drift**

Paper references "§12 Open Questions" — actually §12 is "Open research questions". Paper §7.5 and §8.3 and §R1/R2 implicitly assume forward references that all resolve. I audited § references and found them consistent *except* the title-vs-reference drift ("Open research questions" vs "Open questions"). Cosmetic.

### 8.2 Brief internal cross-refs — **[HIGH H13] `./ideas/#01-…` deep links work, but only via the reader's current hash-routing**

Brief links like `<a href="./ideas/#01-mutually-hostile-validators.md">` rely on the ideas reader's hash-routing. The reader's `renderDoc` (ideas/index.html:288) regex is `/^[a-z0-9\-]+\.md$/i` — alphanumeric and hyphens only, case-insensitive thanks to the `/i` flag. The brief's hashes — `#01-mutually-hostile-validators.md`, `#06-investigation-market.md`, `#07-quiz-mvp.md`, `#11-blockchain-debate.md`, `#research-01-cpml-academic.md`, etc. — all match. **These should resolve.**

The relative paths also check out: brief page 1 (line 181) links `./ideas/#11-blockchain-debate.md` from the site root, and deck slide 06 links `../ideas/#01-mutually-hostile-validators.md` from `/deck/` — both resolve to `/ideas/`. OK.

The deck's `../paper/#5-architecture`-style links (slides 03, 05, 08, 09, 14, 15) rely on the paper reader creating `id="5-architecture"`-style anchors on headings. The reader's `renderer.heading` does `slugify(text)`: `## 5 Architecture` → `5-architecture`. Checking each: `### 5.5 Cascading falsification` → `5-5-cascading-falsification`, which deck slide 08 uses — correct. `### 5.6 AI-read surface` → `5-6-ai-read-surface`; slide 09 uses it — correct. `## 3 Problem statement` → `3-problem-statement`; slide 03 uses it — correct.

**Deck internal cross-refs look correct.**

Deck slide 10's ideas links use the same `#10-chain-selection.md` hash style; the fragment matches the ideas reader's routing regex, so it resolves. All deck ideas cross-refs verified. ✓
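
The link contract being relied on here is small enough to pin down in code. In the sketch below, the ideas-reader regex is quoted verbatim from `ideas/index.html:288`, while `slugify` is an inferred re-implementation of the paper reader's heading slugger — an assumption, not a copy:

```ts
// link-contract.ts — spot-check the deep-link contracts described in §8.2.
// ideasRoute is verbatim from ideas/index.html:288; slugify is inferred.
const ideasRoute = /^[a-z0-9\-]+\.md$/i;

const slugify = (heading: string) =>
  heading.toLowerCase().replace(/[^a-z0-9]+/g, "-").replace(/^-+|-+$/g, "");

// Spot-checks from the prose above:
console.log(ideasRoute.test("01-mutually-hostile-validators.md")); // true
console.log(slugify("5.5 Cascading falsification")); // "5-5-cascading-falsification"
console.log(slugify("3 Problem statement"));         // "3-problem-statement"
```

Kept as a test fixture, a reader refactor that breaks this contract would fail loudly instead of silently — which is exactly the C5 ask.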

### 8.3 **[CRITICAL C5] Brief and deck deep links all depend on the ideas reader's current hash-routing contract**

Brief page 1: clicking "v0.2 supersedes v0.1" takes the reader to `/ideas/#11-blockchain-debate.md`. That's correct routing to the artefact titled "The Blockchain Debate". Fine.

But brief page 2 has `<a href="./ideas/#01-mutually-hostile-validators.md">§01 ideas</a>`, `<a href="./ideas/#06-investigation-market.md">§06 ideas</a>`. Brief page 3 same pattern for `#09`, `#06`, `#07`. Deck uses `#01-...`, `#04-...`, `#06-...`, `#09-...` exactly the same way. All correct for the ideas reader's hash routing.

Not a bug. Noted under Broken References because if anyone refactors the ideas reader's routing, all these will break silently.

### 8.4 External-link spot check

I spot-checked six citation URLs used in the paper references section and in the brief:

- `[1] RFC 6962` → https://datatracker.ietf.org/doc/html/rfc6962 — not present as link in paper; paper just says "RFC 6962, 2013". OK for a references list.
- `[7] IETF SCITT WG` → https://datatracker.ietf.org/wg/scitt/ — present in paper ref list and in brief §3 as live link. **I did not re-verify a live HTTP 200; the WG exists as of this review.** Should be fine.
- `[10] Vectara Hallucination Leaderboard` → https://github.com/vectara/hallucination-leaderboard — paper refs this. This repo exists. OK.
- `[11] Doyle 1979 JTMS` → doi.org/10.1016/0004-3702(79)90008-0 — brief page 2 includes this as a live link. Persistent DOI should resolve. OK.
- `[14] Bench-Capon VAF` → https://dl.acm.org/doi/10.1023/A:1023967128063 — brief page 2. (Note: the URL-encoded form `A%3A` shows up in the brief HTML. Browsers handle this; OK.)
- `[17–19] MiCA / DSA / AI Act Eur-Lex` — all present as Eur-Lex links in brief. Eur-Lex URLs are stable. OK.

External links look fine. I did not fire HTTP HEAD requests; that is a deployment pre-flight check, not an editorial audit.
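
If the team wants that pre-flight anyway, it is a dozen lines. The URL list below is just the six-link sample spot-checked above; a real run should extract every `href` from the built pages:

```ts
// link-preflight.ts — HEAD every external citation URL before deploy.
// Sample list = the six spot-checked links above; extend from the real pages.
const urls = [
  "https://datatracker.ietf.org/doc/html/rfc6962",
  "https://datatracker.ietf.org/wg/scitt/",
  "https://github.com/vectara/hallucination-leaderboard",
  "https://doi.org/10.1016/0004-3702(79)90008-0",
  "https://dl.acm.org/doi/10.1023/A:1023967128063",
];
for (const url of urls) {
  const res = await fetch(url, { method: "HEAD", redirect: "follow" });
  console.log(`${res.status} ${url}`);
}
```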

### 8.5 **[MEDIUM M12] Brief footer line 274 says "Draft v0.1" — already counted in §2.1 but affects link context**

The "Register interest ↓" anchor at brief page 3 footer is labelled under "Draft v0.1" which collides with the v0.2 chrome elsewhere. Same fix as §2.1.

---

## 9. Tone, voice, and public-facing hygiene

### 9.1 **[HIGH H14] Investigation-market example names specific political parties**

Deck slide 10 body:

> *"Adversaries pay for each other's claims to be investigated. Israeli-aligned and Palestinian-aligned parties, both believing the other false, both fund the mechanism that produces evidence. The protocol records both verdicts honestly."*

`ideas/06-investigation-market.md`:

> *"If an Israeli-aligned party and a Palestinian-aligned party both believe the other side's claims are false, they can both pay for investigations."*

This is a *powerful* design-demonstration. It is also a specific political example that will cost the team readers before they have a chance to absorb the architecture. Two considerations:

(a) It makes a strong and defensible point about design neutrality.
(b) It creates an immediate political filter on the paper. Readers will pattern-match "this project is taking a position on Israel/Palestine" — which it isn't, but the example will read that way to many.

**Recommendation.** Keep the example in ideas/06 where the audience is already sophisticated about the design. In the deck, which is a public presentation, consider substituting a less volatile adversarial pair: state-vs-state on historical atrocity attribution (Turkey/Armenia 1915, as named in ideas/01), or scientific-vs-religious on climate attribution. The *design point is identical*. The reader-cost is lower.

This is a judgement call, not a blocking finding. **[HIGH H14]** because the deck is the most-shared surface and the cost/benefit is unfavourable.

### 9.2 Public voice consistency — mostly OK

I scanned for first-person leakage ("I think", "we should", "as we discussed"):
- Paper: none.
- Brief: none.
- Deck: none.
- Ideas 01–11: several uses of "we" and "our" — e.g., `ideas/07-…:29` "starting with real feedback for the design decisions that otherwise get made in a vacuum." This is working-paper voice; acceptable for design-capture artefacts but **should not be published externally as v0.2 artefacts** without a pass.
- Research reports: first-person throughout. "I analyse", "I recommend", etc. This is the author-voice of a signed research report. Acceptable if the attribution is to a named external consultant. **It is not, currently** — see §7.1.

### 9.3 Casual language spots

- Paper: consistent. No drift.
- Brief: consistent, slightly warmer.
- Deck: short, scholarly, excellent voice.
- Ideas: mostly consistent; a handful of conversational lapses ("I want an expert to verify this claim to this rigour" — `ideas/06-…:55`). Acceptable for design-capture.
- Research: first-person, informal footnotes ("Don't copy blindly"). Acceptable if properly labelled as a commissioned research report; not acceptable as an anonymous "Collaborative Fact-Checking Working Group" artefact.

### 9.4 **[MEDIUM M13] Brief §3 subhead: "Why this works in 2026 but didn't in 2022"**

HTML `index.html:235`. The paper says "Why now" in executive summary. The brief's more specific framing is good rhetoric but the 2022 reference date doesn't map cleanly to the paper's claim (v0.2 vs v0.1 is a 2026-internal revision; the "2022 didn't work" framing refers to the earlier dead tokenomics projects and presumes the reader has read that context). Minor — keep if the team wants the specific framing.

### 9.5 **[MEDIUM M14] Deck slide 16 final text says "Collaborative Fact-Checking Working Group" — brief, paper, deck all consistent on this org name**

Looked for drift on the working-group name. Consistent.

---

## 10. Operational refusal list coherence — already flagged

See §4.3 ([CRITICAL C3]). This is the most important alignment problem in the set.

---

## 11. Funding targets and partners

### 11.1 Consistent foundations list

Across brief §3, paper §"Executive Summary" and §15, ideas/05:
- Mozilla
- Knight
- MacArthur
- Ford
- Protocol Labs / FFDW

This appears in all public-facing docs.

### 11.2 Added in research but not public

- `research-03`: no specific list.
- `research-05`: sometimes credited with adding **Wellcome Trust**, but the name does not actually appear in the doc text.
- `ideas/05-revenue-model.md`: expands to **Mozilla, Knight, MacArthur, Ford, Protocol Labs / FFDW, Wellcome**.

The public surface (brief §3 + paper §0 + paper §15) does **not** include Wellcome. Ideas does. Pick one. See §6.4.

### 11.3 "EU Democracy Shield" — not mentioned anywhere

The review brief mentions "EU Democracy Shield" as a possible partner — the term does not appear in any of the public docs or ideas. Nothing to reconcile. If the team intends to list it, it's not there yet.

---

## 12. Form / contact surface

### 12.1 **[HIGH H15] Contact form exists but is orphaned**

The brief's page 4 form posts to `/api/contact` (implemented at `functions/api/contact.ts` — a Cloudflare Pages Function). The form's existence is not acknowledged anywhere else:

- Paper §15 "Call for research partners" ends with: *"Contact and further materials via the project's public repositories."* No link to the participation form.
- Brief §3 "Four ways to help" links to ideas, not to the form.
- Deck slide 16 ("Call") says "we would very much like to talk" but has no contact CTA.
- The form itself is *inside* the brief HTML, so a reader opening `/paper/` or `/deck/` or `/ideas/` directly never sees it unless they navigate to `/`.

**Fix.** At minimum the paper's `§15` should end with a clear CTA: *"Register interest at [/participate](/#participate) or email the working group at [address]."* Same for deck slide 16. A contact form that requires the reader to discover the index page is a discovery tax.

### 12.2 Form itself — minor issues

- Form action fallback points at `veritas-wg@noreply.example` (placeholder) — **[HIGH H11]**.
- Fallback mail client destination is `nousaeternos@gmail.com` (personal) — **[CRITICAL C4]**.
- Form status text area calls `setStatus('success', [...])` on server-reported success; good error UX.
- `data.id` is interpolated into status text without escaping; if server returns a malicious id, it's rendered as text (via `createTextNode`). OK — no XSS.
- `submitted_ms` calculated client-side is a light bot-detection signal; the server should check plausibility. Not verified in this review (that would need a read of `contact.ts`) — a sketch of the check follows below.
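
A minimal sketch of that server-side check, in Cloudflare Pages Function shape. The field names (`submitted_ms`, `hp_company`) come from the form markup; the thresholds and everything else are assumptions, not a reading of the real `functions/api/contact.ts`:

```ts
// contact-guard.ts — sketch of the submitted_ms plausibility check.
// Hypothetical handler; thresholds are illustrative, not the real values.
export const onRequestPost = async ({ request }: { request: Request }) => {
  const data = await request.formData();
  const elapsed = Number(data.get("submitted_ms") ?? 0);

  // Reject if the honeypot is filled, or the client-reported fill time is
  // implausible (instant bot submit, or a bogus/overflowed timer).
  if (data.get("hp_company") || elapsed < 1500 || elapsed > 60 * 60 * 1000) {
    return new Response(JSON.stringify({ ok: false }), { status: 400 });
  }

  // …real handling (validation, storage, notification) continues here.
  return new Response(JSON.stringify({ ok: true, id: crypto.randomUUID() }));
};
```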

---

## 13. Punch list — everything in one place

### BLOCKING — MUST FIX BEFORE PUBLISH

**[C1]** Brief HTML still labelled v0.1 in four places — three in the participation-form section, one in the page-3 footer (`index.html:274, 286, 289–291, 363`). Replace with v0.2.

**[C2]** Paper HTML reader shell (`paper/index.html:9, 129, 145`) labelled v0.1 and carries a v0.1-era OpenGraph description. Update to v0.2 and rewrite the og:description to match v0.2 positioning.

**[C3]** Operational refusal list has three different wordings across paper §7.1, paper Appendix C, and ideas/09. Word-for-word identity is required for the protocol's most consequential commitment. Canonicalise on paper §7.1 wording.

**[C4]** Internal identifiers ("Drow", "Nous", "Nous Aeternos", agent codenames) leak into public-facing surface at `ideas/05-revenue-model.md:108`, every research report header, and at least two `mailto:nousaeternos@gmail.com` links in `index.html`. Replace with working-group-branded author lines and a project contact address.

**[C5]** Brief's hash-routing links into `/ideas/` rely on the ideas reader preserving its current hash routing (`#NN-....md` format). Any future refactor of that reader will silently break dozens of deep links across brief and deck. Documented here so the coupling is explicit.

### HIGH — SHOULD FIX

**[H1]** Paper reader `og:description` is a v0.1 thesis description. Rewrite.

**[H2]** Paper/ideas use "consensus domain", "frame", "epistemic frame" interchangeably without a glossary. Add a one-sentence definition where the CPML is introduced.

**[H3]** Research-03 calls the actor "signing/verification center"; paper calls it "validator". Unify or add a gloss.

**[H4]** `ideas/09-refusals-and-panel.md:82` has header "v0.1 — starting proposal" inside the v0.2 container. Fix label.

**[H5]** Validator count drift: paper says 5–10 gate and ~12 steady state without labelling which is which. Clarify across paper §13.2 and §8.3.

**[H6]** Investigation fee schedule in paper (`$300/$1K/$3K/$10K+/$2.5K`) does not match research-03's quant recommendation (`$2K–$8K tiers`). Either update paper attribution or reconcile numbers.

**[H7]** Phase I ($300–500K) and Phase II ($600–900K) cost estimates in paper disagree with `ideas/05-revenue-model.md` ($200–300K and $400–600K). Pick one set and propagate.

**[H8]** Country chapters day-one count: paper §7.2 says four (EU-Brussels, US, UK, Switzerland); paper §13.2 and deck and ideas/08 say two (EU + US). Reconcile.

**[H9]** Paper §7.3 says "4–6 reference starter CPMLs" without naming the concrete starter CPMLs that ideas/04 names. Add the concrete names (`cpml:minimal-evidence`, etc.) in paper §7.3 or note explicitly that the set is TBD.

**[H10]** Paper §7 governance doesn't echo the brief/deck's "sub-cultural frames first-class" language. Add.

**[H11]** Form fallback action `mailto:veritas-wg@noreply.example` is a placeholder (`.example` TLD is undeliverable). Fix.

**[H12]** `ideas/03-tokenomics-design.md:34` references non-existent `12-validator-reputation.md`. Remove, write, or label as planned.

**[H13]** Brief's deep-links into ideas rely on ideas reader's hash format; document the contract or add slug aliasing.

**[H14]** Deck slide 10's investigation-market example uses Israel/Palestine as the adversarial pair. Powerful but politically costly in a public deck. Consider substituting.

**[H15]** Paper §15 and deck slide 16 don't link to the participation form, which lives only on the brief index page. Add explicit CTAs.

### MEDIUM — EDITORIAL HYGIENE

**[M1]** Quiz name (Frame / Compass / Plural) resolution is ambiguous across three documents.
**[M2]** Chain transaction cost numbers ($0.002–0.004 EAS vs $0.01 Base median) appear near each other without explanation.
**[M3]** Paper quietly loses the $35–140M/year journalism-funding upside scenario present in research-03.
**[M4]** Validator compensation share illustrations split between 65% (ideas/05) and 70% (ideas/03).
**[M5]** Celestia-vs-Ethereum-blob ratio in paper §5.2 ($0.35 vs $20.56 per MB → 25–58×) doesn't match research-04's derived ratio ($6K vs $80K per month → 13×). Clarify.
**[M6]** Brief page 3 footer says "Draft v0.1" (counted in C1; also a standalone visible error).
**[M7]** Wellcome Trust listed as a target funder in ideas/05 but not in brief/paper/deck. Reconcile.
**[M8]** Same as [H12] from a different angle — the forward reference reads as a broken promise.
**[M9]** `ideas/06-investigation-market.md:102` references non-existent `72-legal-regulatory-landscape.md` (v0.1 outline leftover).
**[M10]** `ideas/11-blockchain-debate.md:42` references non-existent `53-tokenomics-hard-analysis.md` (v0.1 outline leftover).
**[M11]** Paper title drift: "§12 Open Questions" vs "§12 Open research questions".
**[M12]** Duplicate of M6 for the page-3 footer context.
**[M13]** Brief subhead "Why this works in 2026 but didn't in 2022" requires context to land; acceptable but noted.
**[M14]** Deck slide 16 copy is fine; noted for completeness.
**[M15]** Paper §0 (Executive Summary) uses "consumer MVP" and "quiz MVP" interchangeably with "Compass" and "Frame" — the three quiz names cohabit in the same 5-page abstract.
**[M16]** Paper §5.6 and ideas/06 both reference MCP without hyperlinking on first mention in paper (brief page 2 *does* link). Add link.
**[M17]** Paper §11 comparison table cell "Hostile-frame permissionless write" → "Prior art: None". That's true but bold; a reader will want a footnote. Consider adding: "Nostr relays permit signed-anyone posting; AT Protocol labellers permit permissionless labels; neither addresses factual-claim attestation specifically."
**[M18]** Paper §4 "Principle 5 — Falsifiability of claims about the protocol itself" is the single principle that sets Veritas apart from every similar proposal ever written. It deserves one more sentence of what this means operationally (e.g., "Hallucination-reduction benchmarks published pre-registration; if they don't hit target X, the AI-grounding pitch is scaled accordingly.").
**[M19]** Deck slide 02 ("Thesis"): four blocks labelled `§ 01` through `§ 04` within the slide. These are *slide-internal* bullets, not references to paper sections, but readers will scan them as paper references. Rename the numbered bullet style to avoid confusion (e.g., use `① ② ③ ④` or plain bullets).
**[M20]** `ideas/07-quiz-mvp.md:79` says "Week 8–12" but nearly every other date reference uses months. One-off unit inconsistency.
**[M21]** The protocol's formal name is **Veritas Protocol**; it's abbreviated as **Veritas** in prose. Nowhere is "Veritas" or "VRT" (the placeholder token name) disambiguated for the reader. Minor.

### LOW / ADVISORY

**[A1]** The paper has 24 numbered references but does not number them 1–24 in the "additional supporting research" list at paper end. Minor.

**[A2]** References `[2]` and `[3]` don't appear inline in the paper body; only `[1]` of that cluster is cited, with `[2]` (RFC 9162) kept in the list for context. Harmless.

**[A3]** `[UNVERIFIED]` markers in research docs are 44 total. If the research docs are published as v0.2 companion artefacts, a single top-of-file banner per research report explaining the marker convention would help the lay reader ("[UNVERIFIED] indicates a claim the author couldn't source to a public primary record; retained for transparency").

**[A4]** The paper has no table of contents; the HTML reader (`paper/index.html`) generates one via JS. Readers of the raw Markdown won't have a TOC. Optional fix: add a hand-rolled TOC at the top of the Markdown.

**[A5]** The brief's print layout (`@media print` rules in `index.html`) explicitly hides the participation form (`form.participate, #form-status, #page-4, .hud { display:none }`). Good — prints cleanly as a 3-page brief. **Positive finding.**

**[A6]** The ideas reader sanitises Markdown-rendered HTML via DOMPurify and loads via `cdn.jsdelivr.net`. The SRI hashes are in place. **Security-positive finding.**

**[A7]** The contact form JS uses `DocumentFragment` / `createTextNode` / `createElement` rather than `innerHTML` for rendering user-provided status. **Security-positive finding.** (Note the error-path uses `err.message` interpolation with `createTextNode` — safe.)

**[A8]** The brief's `<meta http-equiv="Content-Security-Policy">` explicitly whitelists fonts.googleapis.com and forbids frame-ancestors. **Security-positive finding.**

**[A9]** The deck's `localStorage` slide index key (`storageKey = 'veritas-protocol-v02-deck:slide:' + location.pathname`) is v0.2-stamped. **Positive finding** — future v0.3 deck won't resume from v0.2 state. Good.

---

## 14. Summary metrics

- **Documents audited:** 4 authoritative + 11 ideas + 7 research + 3 HTML readers = **25 public-facing files**.
- **Total findings:** 50.
- **Critical (blocking):** 5.
- **High:** 15.
- **Medium:** 21.
- **Low / advisory:** 9 (including 5 positive security/UX notes).
- **Numerical drifts (internal inconsistency):** 7.
- **Unverified figures in public surface without `[UNVERIFIED]` marker:** 5.
- **Broken references:** 3 (two to non-existent `ideas/` files; one forward-ref in `ideas/03`).
- **Version-label contradictions:** 8 distinct locations across paper reader, brief form page, and brief page-3 footer.
- **Word-for-word refusal-list variants:** 3.
- **Internal identifier leakage sites:** 6 (`ideas/05:108`, the research-report author banners, research-03's "internal review" status line, the paper's agent-codename attributions, and two `mailto:nousaeternos@gmail.com` links in `index.html`).

---

## 15. Recommendation to the editor

**Do not publish in current state.** One focused copy-edit pass addressing the five Critical and fifteen High findings unblocks publication. The Medium and Low findings can ship in a v0.2.1 copy-pass a week later.

Minimum gate for publication:

1. Every surface labelled v0.2. (C1 + C2 + H1)
2. Refusal list identical everywhere. (C3)
3. Internal identifiers stripped from public surface; contact email is a project address, not personal. (C4)
4. Form fallback action deliverable. (H11)
5. Country-chapter count reconciled (EU+US or EU+US+UK+CH). (H8)
6. Phase I / Phase II cost numbers reconciled between paper and ideas/05. (H7)
7. Investigation fee schedule reconciled with research-03 or paper attribution softened. (H6)

Estimated effort: **2–4 hours** of focused editing.

After that pass, the documents are publishable. The substantive content is strong, the architecture story is coherent, and the design-capture artefacts are well-organised. This is a document set worth shipping carefully — not worth shipping as-is.

---

*Aegis · 2026-04-20 · Veritas Protocol v0.2 editorial gate check · 50 findings · NOT READY — one focused copy-edit round required before public release.*
