The full proposal — in plain English.
A longer-form version of the simple page. Same ideas, more detail. We cover what we'd build, how money flows, who runs what, what could go wrong, and what the 12–18 month plan looks like.
~ 12-minute read · No jargon · Print-friendly
1. Why we wrote a long paper
The simple version of Veritas takes 8 minutes to read. The full working paper takes 30. Why bother with the long one?
Because honest answers take space. "How does the system handle a state-sponsored validator posting fake claims?" is not a one-paragraph answer — it touches on architecture, governance, economics, and law. Running away from those connections is how you end up with a polished pitch that falls apart on contact with reality.
The working paper is the long answer to "we'd actually have to think about all of this, wouldn't we." This page is its plain-English mirror — same structure, same answers, fewer footnotes.
2. The six properties — what makes Veritas different
Lots of fact-check projects exist. None of them combines all six of these:
1. Provenance — every claim has lineage
Each claim carries a small, signed record: where it came from, who checked it, when. Not just a link — a cryptographic signature that proves a specific person or institution actually checked it. Like a notarised statement, not a tweet.
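To make "small, signed record" concrete, here is a minimal sketch of what such a record could look like. The field names and shape are illustrative assumptions, not the actual Veritas schema.

```typescript
// Illustrative only: not the actual Veritas schema.
interface ClaimRecord {
  claimId: string;           // content hash of the claim text
  claimText: string;         // what was said
  sourceUrl: string;         // where it came from
  attestations: Attestation[];
}

interface Attestation {
  validatorId: string;       // cryptographic identifier of the checker
  verdict: "verified" | "contested" | "retracted";
  checkedAt: string;         // when it was checked (ISO-8601 timestamp)
  signature: string;         // validator's signature over the fields above
}
```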
2. Permissionless validators — anyone can check
You don't need permission to be a validator. A university can do it. So can a religious tradition. So can a sub-cultural community. So can a state-aligned group. The system records who said what; it doesn't gatekeep who's allowed to participate.
This is the controversial part. We could build a "trusted validators only" version that excludes anyone we're uncomfortable with. But then we're back to a single-frame system pretending to be plural. The whole point is that hostile communities can use the same substrate — and the reader picks who they trust.
3. Per-user composition — your view, your rules
Different users, looking at the same claim, see different verdicts — based on which validator communities they trust. This is your Consensus Profile: a small file that lives on your device and tells the system "for science, trust the scientific consensus; for legal questions, trust EU jurisprudence; for history, show me everything and let me decide."
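As a rough sketch of what that "small file" might contain (the domain names and options below are made up for illustration, not the real profile format):

```typescript
// Hypothetical consensus profile, stored on the user's own device.
// Domain names and validator labels are illustrative.
const myConsensusProfile = {
  science: { trust: ["scientific-consensus"] },
  law:     { trust: ["eu-jurisprudence"] },
  history: { trust: "show-everything-let-me-decide" },
};
```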
4. Cascading retraction — bad sources flag everyone who used them
Today, when a paper is retracted, the news article that quoted it stays up forever, with no warning. Veritas marks every claim that depended on the retracted source — within seconds. Your browser, search engines, AI systems — all see the flag. The journalist who originally cited the bad source gets a notice and can update.
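The core mechanic is a walk over the dependency graph: start at the retracted source and flag everything downstream of it. A minimal sketch, assuming the chain can tell us which claims cite which sources:

```typescript
// Sketch of the cascade idea: walk outward from a retracted source and flag
// every claim that depended on it. All names are illustrative.
function cascadeRetraction(
  retractedId: string,
  citedBy: Map<string, string[]>,         // source -> claims that cite it
  flag: (claimId: string) => void,
): void {
  const seen = new Set<string>([retractedId]);
  const queue = [retractedId];
  while (queue.length > 0) {
    const current = queue.shift()!;
    for (const dependent of citedBy.get(current) ?? []) {
      if (!seen.has(dependent)) {
        seen.add(dependent);
        flag(dependent);                  // browsers, search engines, AI systems all see this
        queue.push(dependent);
      }
    }
  }
}
```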
5. Open AI grounding — AI systems can ask before they answer
The big AI assistants (ChatGPT, Claude, Gemini) sometimes confidently make things up. Veritas gives them somewhere to ask "Is this claim verified, contested, or made up?" before they write the next sentence. Substantially fewer fake citations. Substantially fewer confidently wrong answers.
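In practice this could be a single lookup the assistant makes before emitting a sentence. A sketch under assumed names: the endpoint, request shape, and response field are hypothetical, not a published Veritas API.

```typescript
// Hypothetical grounding check an AI assistant could run before citing a claim.
// "unknown" roughly means "no record found": possibly made up.
async function groundClaim(claimText: string): Promise<"verified" | "contested" | "unknown"> {
  const res = await fetch("https://aggregator.example/v1/check", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ claim: claimText }),
  });
  const { status } = await res.json();   // the aggregator has already applied a consensus profile
  return status;
}
```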
6. Investigation market — pay to check what matters
If two parties care strongly about a contested claim — say, two opposing news organisations — they can both pay for a formal investigation. Money goes into escrow. Qualified investigators take the case. They check primary sources, document reasoning, sign a verdict. Money releases when they finish.
This is the most novel mechanism. It scales verification effort to where money lives. It gives investigative journalism a new revenue stream — paid per case, not per ad.
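One way to picture the escrow flow, with made-up states and field names (the real contract will differ):

```typescript
// Sketch of the escrow flow described above; illustrative only.
type EscrowState = "funded" | "assigned" | "verdict-signed" | "released";

interface Investigation {
  claimId: string;
  commissioners: string[];   // the parties who paid into escrow
  amount: number;            // funds held until the work is done
  investigator?: string;     // qualified investigator who takes the case
  state: EscrowState;
}

// Funds release only after a signed verdict is published.
function canRelease(inv: Investigation): boolean {
  return inv.state === "verdict-signed";
}
```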
3. How the architecture actually works
Two layers, doing different jobs.
The chain — the slow, public, permanent layer
Underneath, there's a public blockchain (specifically a "Layer 2" — fast and cheap, costs less than a cent per record). It holds:
- Every claim record (what was said).
- Every attestation (who checked it).
- Every retraction event (what was withdrawn).
- Every payment (who paid whom for what).
Permanent. Public. Auditable. Anyone can run a node and read the whole history.
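Put together, the chain holds four kinds of records. A rough type sketch with illustrative field names:

```typescript
// The four record kinds the chain holds, as a rough sketch (names illustrative).
type ChainRecord =
  | { kind: "claim"; claimId: string; claimText: string }
  | { kind: "attestation"; claimId: string; validatorId: string; verdict: string; signature: string }
  | { kind: "retraction"; claimId: string; reason: string; quorum: string[] }
  | { kind: "payment"; from: string; to: string; amount: number; purpose: string };
```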
The federation — the fast layer that you actually touch
Above the chain, there's a network of aggregators — services that read the chain, apply your consensus profile, and serve you the answer. They cache responses at the edge of the internet, so the answer comes back in milliseconds. Your browser, your AI assistant, the website you're reading — they all go through aggregators, not directly to the chain.
Different aggregators have different editorial policies. The reference ones (run by the foundation) follow specific rules. Third-party ones (run by anyone) follow their own. You choose which to use — like choosing a search engine.
The chain is like a giant filing cabinet — every document filed, with a date and signature. Slow, bulletproof, eternal.
The aggregators are like the index card system on top — fast lookups, organised the way you want them. If one index goes corrupt, you switch to another; the filing cabinet underneath is still fine.
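Here is how a single lookup might flow through the two layers, as a sketch: check the fast cache, fall back to the permanent chain, compose a verdict from only the validators the user trusts. All names and the composition rule are assumptions for illustration.

```typescript
type Verdict = "verified" | "contested" | "unknown";

// Illustrative aggregator lookup: fast cache first, then the permanent chain.
async function lookup(
  claimId: string,
  trustedValidators: Set<string>,                           // from the user's consensus profile
  cache: Map<string, Verdict>,                              // edge cache: answers in milliseconds
  readAttestations: (id: string) => Promise<{ validatorId: string; verdict: Verdict }[]>,
): Promise<Verdict> {
  const cached = cache.get(claimId);
  if (cached) return cached;
  const attestations = await readAttestations(claimId);     // slow, permanent chain layer
  const trusted = attestations.filter(a => trustedValidators.has(a.validatorId));
  const verdict: Verdict =
    trusted.length === 0 ? "unknown" :
    trusted.every(a => a.verdict === "verified") ? "verified" : "contested";
  cache.set(claimId, verdict);
  return verdict;
}
```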
4. Where the money comes from, where it goes
Six revenue streams. No advertising. No selling your data. No tokens with speculative price.
- AI laboratories pay for grounding access. (Biggest source if it materialises.)
- Websites pay an annual fee for a "Veritas-certified" badge — like the security padlock, but for facts.
- Users / apps pay subscriptions for premium features.
- Content publishers pay for priority verification of breaking-news claims.
- Investigation commissions from contested-claim parties.
- Foundations and donations (Mozilla, Knight, MacArthur, Ford-style funders).
Where does it go?
- 60–70% to the validators — the institutions doing the checking work.
- 20–30% to operations — software, legal, audits, foundation staff.
- 10–15% to a reserve fund — covers a year of operations if revenue dips.
The numbers are published. The treasury is auditable annually.
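As a worked example with made-up numbers, taking a hypothetical $1M revenue year and splits from the middle of those ranges:

```typescript
// Worked example with hypothetical figures; the real splits are the published ranges.
const annualRevenue = 1_000_000;              // hypothetical $1M year
const toValidators  = annualRevenue * 0.65;   // $650,000 to the checkers
const toOperations  = annualRevenue * 0.25;   // $250,000 to software, legal, audits, staff
const toReserve     = annualRevenue * 0.10;   // $100,000 into the reserve fund
```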
The biggest single risk in the project is whether AI laboratories actually sign up. If they don't, the validator-compensation model has to shrink dramatically. The critical review pointed out that our optimistic case ($695M–$2.78B/year) and our scenario projection ($17M/year) differ by a factor of roughly 40 to 165. The next version of the paper (v0.3) tiers these projections honestly: pessimistic, base, optimistic — with a fallback plan if the AI thesis fails.
5. Who runs what — three roles, no single boss
Validators — the checkers
Universities, libraries, newsrooms, research institutes. Plus self-organised communities outside formal institutions. Each gets a credential — a cryptographic identifier — and signs their attestations with it.
Reputation accumulates: validators who consistently produce careful, defensible checks earn weight in their domain. Validators who produce sloppy or partisan attestations lose weight. The math is open-source and auditable.
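The actual open-source math isn't reproduced here, but one simple way such a score could work is an exponentially weighted update per domain. This is purely a sketch of the idea:

```typescript
// Purely illustrative reputation update: nudge a per-domain weight up when an
// attestation holds up under scrutiny, down when it doesn't.
function updateWeight(current: number, attestationHeldUp: boolean, rate = 0.05): number {
  const outcome = attestationHeldUp ? 1 : 0;
  return (1 - rate) * current + rate * outcome;   // weight stays in [0, 1]
}
```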
The foundation — the rule-keeper
A non-profit foundation (we're proposing Swiss Stiftung + EU operating entity for legal reasons) maintains the technical specification, runs reference servers, manages the small list of operations the protocol refuses entirely, and convenes country chapters.
The foundation does not decide what's true. It maintains the substrate.
The critical review pointed out that the foundation has more editorial influence than the marketing implies. Five places where its choices matter: which reference aggregators it operates, which starter consensus profiles it ships, what's on the refusal list, which countries get chapters, who's on the dispute panel. The next version of the paper acknowledges this openly — "distributed authority with five disclosed editorial surfaces" rather than "no single authority."
You — the reader
Your consensus profile decides which validators count for you. Most users will use a default profile — that's fine. Power users tune it. Communities can publish their own profiles for members to opt into.
6. The country chapters — different laws, same protocol
One filter doesn't work for the whole world. German law forbids Holocaust denial. The US First Amendment protects most of the same speech. India has its own rules. Singapore, Brazil, the UK — each different.
The architecture handles this with country chapters:
- The chain is universal — claims from anywhere, visible to anyone with the right software.
- Country chapters operate aggregators for their jurisdiction with local-law compliance.
- The German chapter complies with German law. The US chapter with US law. And so on.
- Users can connect to a different chapter's view if they need to (with disclosure of the implications).
This is the same model Wikipedia uses for its national chapters. It's been tested. It works.
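A sketch of what chapter routing could look like in a client, with hypothetical chapter entries and the disclosure mentioned above:

```typescript
// Illustrative chapter table: each chapter runs aggregators that apply its
// local-law compliance rules. Endpoints and notes are made up.
const chapters: Record<string, { endpoint: string; complianceNote: string }> = {
  DE: { endpoint: "https://de.aggregator.example", complianceNote: "German law applies" },
  US: { endpoint: "https://us.aggregator.example", complianceNote: "US law applies" },
};

function chapterFor(countryCode: string, override?: string) {
  if (override && chapters[override]) {
    console.warn("Connecting to another chapter's view; its local-law filtering differs from yours.");
    return chapters[override];
  }
  return chapters[countryCode] ?? chapters["US"];
}
```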
7. Eight risks we know about — and what we're doing about them
The full paper lists ten. Here are the ones a regular reader should know:
- State propaganda gets validators. Hard to stop without gatekeeping. Our defence: state affiliation must be disclosed; your profile decides whether to count it; the reputation math doesn't let a single state dominate.
- AI labs don't integrate. Biggest single risk. Pilot with one lab during Phase II under a research grant; published benchmark; if the delta isn't real, the AI thesis shrinks accordingly.
- Foundation gets captured. Sector caps on board (no more than 30% from any one sector); 3-year terms; geographic distribution; multi-stakeholder dispute panel; published decisions with rationale.
- Validator gets sued. Attestations are framed as opinion ("under standard X, evidence Y, conclusion Z"). Foundation indemnification. Insurance. The legal argument is well-tested in US courts.
- Cascade attack. A bad-faith retraction triggers false flags on lots of legitimate claims. Defence: retractions require a quorum of independent validators; 24-hour pending window; recusal rules.
- Echo chambers. Users pick profiles that just reinforce their tribe. Default profiles surface "today's most-surprising claim from outside your frame" daily. We can't force open-mindedness; we can build for it.
- Investigation market gets gamed. Rich actors flood the system with commissions designed to muddy the waters. Defence: public-interest fund for under-resourced claims; pattern-detection on commissioner behaviour; commissioners can't buy private suppression — investigation outputs are public.
- Token regulatory issues. If tokens get reclassified as securities, the economic model gets hairy. Mitigation: jurisdiction choice (Switzerland + EU); start without burn-to-cash mechanism; pre-documented contingency for moving to USDT-only payments.
8. The three-phase plan
Phase I — months 0 to 6
Build the consumer "consensus quiz" — a fun standalone product that helps users discover their consensus profile. Build reference servers on a test blockchain. Submit specifications to standards bodies (W3C, IETF SCITT, schema.org). Cost: ~$300–500K.
Goal at end of Phase I: 20+ websites publishing in the format. 5+ third-party fact-check organisations attesting. One AI lab in early conversation. ~10,000 users have taken the consensus quiz.
Phase II — months 6 to 18
Foundation legally formed. 5–10 institutional pilot validators across 3+ countries. First chartered consensus domains live. AI lab integration with published hallucination-reduction benchmark. Investigation market live. Country chapters in EU + US. Dispute panel seated. Cost: ~$600–900K.
Goal at end of Phase II: measurable benefit demonstrated; protocol operational across jurisdictions; peer-reviewed publication; sustained revenue from at least three streams.
Phase III — months 18+
Scale up. Additional country chapters (UK, Japan, Brazil). Expand consensus domains. Decide whether to migrate to a sovereign blockchain (probably not — staying on Layer 2). Reach 100K+ quiz users.
9. The 35 things we have to build
The full paper lists every concrete software component. Here's the count, by layer:
- Protocol layer (5 things): Smart contracts, deployment tools, transparency-log adapter, utility token, treasury management.
- Federation layer (4): Aggregator service, edge cache, AI-assistant integration adapter, client SDK.
- Consumer layer (4): Profile-language specification, profile resolver, the consensus quiz, browser extension.
- Publisher layer (3): Publishing CLI, validator reference implementation, certificate management.
- Governance and operations (6): Foundation charter, dispute-panel rules, refusal-list specification, country-chapter framework, starter profiles, validator onboarding.
- Integration (5): AI-lab pilot, retraction-feed integration, schema bridge, investigation-market contract, public-interest investigation fund.
That's 27 named software components plus 8 governance / specification documents. Some are forks or adapters of existing open-source code (Sigstore, Bluesky's Ozone labeller, IETF SCITT reference implementations). Others are net-new.
The full paper's §14 names each component and what existing project it should fork from where applicable.
10. The honest summary
If this works, the web has a thin trust layer: a way for facts to carry their lineage, a way for retractions to actually reach readers, a way for AI to ground itself, a way for different communities to coexist on the same substrate without erasing each other.
If it doesn't work, the most likely failure modes are: AI laboratories never integrate (kills the funding model); the foundation gets captured (the substrate becomes another partisan tool); state actors weaponise the permissionless-write layer (the substrate gets associated with disinformation more than information).
The critical review found genuine problems. The next version of the paper addresses what can be addressed. Some things require operational data we won't have until the pilot runs.
The team is small. The funding is not yet committed. The validators don't exist yet. We're at the moment where a serious idea is either picked up by people who can carry it, or it stays a polished proposal in a folder.
If any of this resonates and you can help — fund a pilot, operate a validator, audit the code, charter a domain, publish a starter profile — there's a contact form on the brief and a published critical review showing exactly what's broken so far.