Veritas Protocol — A trust layer for the open web.
A three-page brief on a proposal to make factual claims on the web machine-readable, reviewable, and correctable — for readers, for institutions, and for the artificial-intelligence systems that now flood the web with unverifiable text.
1 · Why this matters
The web has a trust gap — and it's getting worse fast.
The web has layers for addressing (DNS), for transport (TCP/IP), for security (TLS), and for content structure (HTML, schema.org). It was never built with a standard layer for trust.
When a website says "Bolivia has 10 million hectares of degraded land," there is no structured way for a reader to ask the follow-up questions that matter. Says who? Based on what source? Checked by whom? Last verified when? If the source is retracted next year, how would the reader find out? These questions are currently answered only by hand, page by page, by each reader individually — or not at all.
That was manageable when making a credible-looking website took months of editorial work. It is no longer manageable. Generative AI systems now produce polished text roughly six orders of magnitude faster than any human review process can verify it. Fabricated citations look real. Confident claims travel faster than their corrections. Retraction Watch tracks thousands of retracted papers each year, and the signal almost never reaches the reader of the newspaper article that cited them. Meanwhile, AI systems spend enormous compute filtering their own unreliable outputs because they have no trusted corpus to check against.
The Veritas Protocol is a thin, open standard — similar in spirit to HTTPS, which standardised encryption, or to schema.org, which standardised structured metadata — that lets any website publish a machine-readable record of what it claims, where each claim comes from, who checked it, and how confident they are. When an upstream source is retracted, the protocol propagates the retraction to every site that relied on it. AI systems query the protocol before producing text, so they suppress claims that have been falsified, surface claims that are contested, and attribute each claim to its basis.
The protocol does not decide what is true. It standardises the metadata that lets readers, institutions, and AI systems decide for themselves, with clear evidence.
2 · How it works
Four properties, each delivered by mature open standards.
Provenance — every claim has lineage
Each claim carries a cryptographic identifier, a pointer to its primary source, and the signed identity of whoever checked it. The data model extends schema.org/ClaimReview, already recognised by search engines and used by fact-check organisations in the International Fact-Checking Network. Identity is handled through W3C Verifiable Credentials and Decentralized Identifiers. There is no new identity system to invent.
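A minimal sketch of what such a claim record might look like. The field names, the `urn:veritas:` identifier scheme, and the hashing convention are illustrative assumptions, not the protocol's actual schema; the real data model extends schema.org/ClaimReview and W3C DIDs as described above.

```python
import hashlib
import json

def claim_id(claim_text: str, source_url: str) -> str:
    """Derive a stable identifier for a claim (illustrative convention:
    SHA-256 over a canonical JSON encoding of claim text plus source)."""
    canonical = json.dumps({"claim": claim_text, "source": source_url},
                           sort_keys=True, separators=(",", ":"))
    return "urn:veritas:sha256:" + hashlib.sha256(canonical.encode()).hexdigest()

# Hypothetical claim record, loosely modelled on schema.org/ClaimReview.
record = {
    "@type": "ClaimReview",
    "claimReviewed": "Bolivia has 10 million hectares of degraded land",
    "itemReviewed": {"url": "https://example.org/primary-source"},
    "identifier": claim_id("Bolivia has 10 million hectares of degraded land",
                           "https://example.org/primary-source"),
    # Signed identity of the checker, expressed as a W3C DID.
    "author": {"@type": "Organization",
               "identifier": "did:web:validator.example.edu"},
}
assert record["identifier"].startswith("urn:veritas:sha256:")
```

Because the identifier is derived deterministically from the claim and its source, any two sites citing the same claim from the same source converge on the same identifier without coordination.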
Plural verdicts — truth can be frame-relative without being nothing
A single claim can be verified under one consensus domain and disputed under another. scientific-default, legal-jurisdiction-EU, journalism-default, historical-academic-default — these are genuinely different standards of evidence, and a protocol that picks one and discards the others misrepresents reality. The protocol records verdicts under each named domain without collapsing them. This is the proposal's main intellectual contribution. It is also where the governance problem concentrates: the protocol commits to refusing charters that fail a published admissibility criterion, including a hard list of claims for which credible adjudication has already been conducted. Openness to frames is not equivalence of findings. Full detail in §7 of the working paper.
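A sketch of the non-collapsing verdict structure, using the domain names from the text. Statuses and confidence values are invented for illustration; the point is structural: absence of a verdict under a domain is not a negative verdict.

```python
# One claim, independent verdicts per named consensus domain.
verdicts = {
    "scientific-default":    {"status": "verified", "confidence": 0.92},
    "legal-jurisdiction-EU": {"status": "disputed", "confidence": 0.40},
    "journalism-default":    {"status": "verified", "confidence": 0.85},
}

def verdict_for(domains: dict, domain: str):
    """Return the verdict under one named domain, leaving others intact.
    An absent domain yields None — 'no verdict', never 'false'."""
    return domains.get(domain)

assert verdict_for(verdicts, "scientific-default")["status"] == "verified"
assert verdict_for(verdicts, "legal-jurisdiction-EU")["status"] == "disputed"
assert verdict_for(verdicts, "historical-academic-default") is None
```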
Cascading falsification — retractions reach the reader
When a source is retracted, every claim that depended on it is marked for re-evaluation, and the event propagates to every subscribed consumer within seconds. The mechanism borrows from classical AI work on truth maintenance systems — a well-understood algorithm dating to 1979, now applied at internet scale. Propagation happens over libp2p gossip, the same peer-to-peer layer that runs Ethereum's and Filecoin's networks. No blockchain is required.
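The truth-maintenance step can be sketched as a breadth-first walk over a dependency graph. Graph shape and function names are illustrative, not the protocol API; the real system additionally signs and gossips each retraction event.

```python
from collections import defaultdict, deque

# Claims depend on sources (or on other claims). Retracting a node marks
# every transitive dependent for re-evaluation.
dependents = defaultdict(set)

def record_dependency(claim: str, basis: str) -> None:
    """Register that `claim` rests on `basis`."""
    dependents[basis].add(claim)

def retract(node: str) -> set:
    """Breadth-first propagation of a retraction through the graph.
    Returns the set of claims to mark 'needs re-evaluation'."""
    affected, queue = set(), deque([node])
    while queue:
        current = queue.popleft()
        for dep in dependents[current]:
            if dep not in affected:
                affected.add(dep)
                queue.append(dep)
    return affected

record_dependency("claim-A", "paper-1")
record_dependency("claim-B", "claim-A")   # B cites A, not paper-1 directly
record_dependency("claim-C", "paper-2")
assert retract("paper-1") == {"claim-A", "claim-B"}
assert "claim-C" not in retract("paper-1")
```

Note that claim-B is flagged even though it never cited paper-1 directly — this transitivity is what lets a retraction reach the newspaper article three hops downstream.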
AI-read surface — grounding at inference time
A simple, cacheable REST API lets a generative model check its output against the signed, domain-scoped claim store before emitting text. Public evidence on retrieval-augmented grounding shows measurable reductions in confidently wrong outputs. The protocol makes that corpus open, canonical, and signed, so every AI system can use it — not just the ones whose operators have built in-house grounding stacks.
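A sketch of the client-side decision a model might make per draft claim. The endpoint shape, status names, and return values are assumptions for illustration; the real API surface is defined in the working paper. Here an in-memory dict stands in for what would be a cacheable HTTP GET.

```python
def check_claim(store: dict, claim_text: str, domain: str) -> str:
    """Decide what to do with a draft claim before emitting it.
    A real client would query something like
    GET /v1/claims?text=...&domain=... ; `store` is a local stand-in."""
    entry = store.get(claim_text)
    if entry is None:
        return "no-record"                 # unknown claim: no backing either way
    verdict = entry["verdicts"].get(domain, {}).get("status")
    if verdict == "falsified":
        return "suppress"                  # do not emit
    if verdict == "disputed":
        return "flag-contested"            # emit, marked as contested
    if verdict == "verified":
        return "attribute"                 # emit, attributed to its signed basis
    return "no-record"                     # no verdict under this domain

store = {"X causes Y": {"verdicts": {"scientific-default": {"status": "falsified"}}}}
assert check_claim(store, "X causes Y", "scientific-default") == "suppress"
assert check_claim(store, "X causes Y", "legal-jurisdiction-EU") == "no-record"
```

The domain parameter matters: the same claim can be suppressed under one consensus domain and merely unrecorded under another, consistent with the plural-verdict model above.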
Who runs it
Institutions, not a company. Validators are universities, libraries, newsrooms, and research organisations — the institutions society already trusts to check things. They are credentialed by an open foundation hosted under an existing multi-stakeholder parent such as the Mozilla Foundation or the Linux Foundation. Their signed attestations are published in public transparency logs — the same cryptographic pattern used for HTTPS certificate transparency and for Sigstore software-supply-chain signing. If a validator drifts from consensus inside a domain, reputation erodes automatically and publicly. If credentials are compromised, they are revoked through a standard mechanism. If the foundation itself becomes captured, the published audit trail makes it visible.
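The transparency-log property can be illustrated with a minimal hash chain. Real deployments (Certificate Transparency, Sigstore) use Merkle trees with inclusion and consistency proofs; this simplified chain shows only the core guarantee that matters here: tampering with any published attestation is publicly detectable.

```python
import hashlib

def append(log: list, attestation: str) -> None:
    """Append an attestation, chaining its hash to the previous head."""
    prev = log[-1]["head"] if log else "0" * 64
    head = hashlib.sha256((prev + attestation).encode()).hexdigest()
    log.append({"attestation": attestation, "head": head})

def verify(log: list) -> bool:
    """Recompute the chain; any edited entry breaks every later head."""
    prev = "0" * 64
    for entry in log:
        expected = hashlib.sha256((prev + entry["attestation"]).encode()).hexdigest()
        if entry["head"] != expected:
            return False
        prev = expected
    return True

log = []
append(log, "did:web:lib.example.edu attests claim urn:veritas:abc verified")
append(log, "did:web:uni.example.org attests claim urn:veritas:def disputed")
assert verify(log)
log[0]["attestation"] = "tampered"   # any retroactive edit is detectable
assert not verify(log)
```

This is why a captured foundation cannot quietly rewrite history: auditors who hold an earlier copy of the log can detect any divergence by recomputation alone.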
3 · Why now, how to help
The window is open, but not indefinitely.
Why this works in 2026 but didn't in 2022
The necessary standards have only recently become mature enough to compose. W3C Verifiable Credentials 2.0 reached Recommendation. W3C Decentralized Identifiers 1.1 followed. The IETF working group for Supply-Chain Integrity, Transparency and Trust (SCITT) is producing drafts that are a near-perfect fit for the signed-attestation log layer of this protocol. Sigstore has proven that federated signing with short-lived certificates works at scale. The Content Authenticity Initiative has enrolled thousands of organisations in provenance standards for media. None of these were usable building blocks three years ago.
Regulation has also moved. The EU Digital Services Act introduces the Trusted Flagger pathway. The EU AI Act Article 50 creates transparency obligations for AI-generated content. The window for a voluntary, open, self-declaration standard to front-run heavier regulation is open now. It will not stay open long.
This is explicitly not a blockchain
An earlier generation of fact-checking projects attempted to combine this ambition with tokenomic incentives. Civil Media, Po.et, Bitpress, and several academic prototypes tried. None achieved sustained adoption among fact-checking organisations. The full working paper's §6 and §11 document the empirical record and the reasons a federated, token-free architecture delivers every property the protocol needs at roughly an order of magnitude lower cost and with substantially lower regulatory exposure. Certificate Transparency + Sigstore + classical truth-maintenance algorithms replicate every claimed property of a blockchain-based alternative, without the blockchain.
What is hard — honestly
- Governance. Deciding which consensus domains are admissible is the central substantive question. The working paper proposes an admissibility criterion and a hard list of non-admissible positions; both will be contested, and both are the right things to contest openly.
- Adoption. AI laboratories may not integrate a third-party grounding substrate quickly. The proposed first-mover pilot is with one laboratory under a research grant, publishing a measured reduction on a standard benchmark. That benchmark is the honest falsification test: if the delta is not real, the protocol must shrink its claims accordingly.
- Sustainability. Institutional validators do real work and need real support. The funding model is foundation grants, AI-laboratory service fees for high-volume access, institutional in-kind contributions, and major gifts. No tokens. No advertising. No equity.
Four ways to help
- Co-author the formal specification in the relevant standards bodies — W3C Credentials Community Group, IETF SCITT, schema.org.
- Operate a validator in the Phase II pilot network — universities, libraries, fact-check organisations, open-science consortia.
- Audit the protocol in the open — cryptographic, economic, governance, and legal contributions.
- Charter a consensus domain by publishing its editorial standard.
Partners are also sought from the AI-laboratory community, from foundations with programmes in open infrastructure and journalism (Mozilla, Knight, MacArthur, Ford, Protocol Labs / FFDW), from the International Fact-Checking Network, the Content Authenticity Initiative, and open-science infrastructure including Crossref, OpenAlex, and Retraction Watch.
Further reading
- Full working paper — abstract, 14 sections, 36 references, two appendices. Research-grade.
- Open research questions — eight specific problems where partner engagement is invited.
- Three-phase roadmap — phases, gates, exit conditions, cost estimates.
4 · Register interest
If any of this resonates with your agenda, we would very much like to talk.
This is a v0.1 working paper. We are looking for co-authors of the formal specification, pilot validators, open auditors, consensus-domain rapporteurs, funders, and AI-laboratory research partners. Tell us how you think you could contribute — we reply by email.