Veritas Protocol — explained for normal humans.
A trust label for facts on the internet. A way for you, news sites, and AI to know if something has been checked, by whom, and what happens when it turns out to be wrong.
~ 8-minute read · No jargon · Print-friendly
1. The problem in two paragraphs
The internet is full of confident-sounding statements. Some are true. Some are wrong. A lot are made up. Until recently, making a fake-but-credible-looking website took weeks of work — fake organisations, fake studies, fake people. Most people couldn't be bothered. So most things online were at least real attempts at being true.
Then AI happened. Now anyone can produce a polished fake site in an afternoon, with fake citations to fake studies. The cost of creating misinformation went down. The cost of checking things stayed the same — humans still take hours to verify a single claim. And the AI systems we use every day have the same problem: they make stuff up, and there's no good way for them to check before they answer.
Imagine if every food package said whatever the seller wanted: "100% organic," "no chemicals," "doctors recommend." With no rules, no checking, no labels you could trust. That's the internet's relationship with facts right now.
Veritas is — roughly — nutrition labels for facts on the web.
2. What Veritas actually does
It's a small, open standard. When a website says something — "Bolivia has 10 million hectares of degraded land" — it can attach a tiny invisible label that includes:
- Where the claim came from (a study, a government report, an interview)
- Who checked it (a university, a library, a newsroom)
- When they checked
- What happens if the source turns out to be wrong — every site that used it gets flagged
You don't have to read the labels. But your browser can. AI assistants can. Search engines can. Other websites can. The label is structured so software can use it, but a human can also click through and see exactly who said what and when.
Crucially, Veritas doesn't decide if something is true. It just gives you the receipts.
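To make the idea concrete, here is a sketch of what such a label might contain, written as a small Python dictionary. Every field name here is an illustrative assumption, not the actual Veritas schema; the real label format may differ.

```python
# A hypothetical Veritas-style label for one claim.
# All field names and values are illustrative only.
claim_label = {
    "claim": "Bolivia has 10 million hectares of degraded land",
    "source": {
        "type": "government_report",
        "title": "National Land Degradation Assessment",  # hypothetical
        "url": "https://example.gov/report",              # placeholder
    },
    "checked_by": "Example University Library",  # a validator
    "checked_on": "2025-03-14",
    "status": "verified",  # could also be "contested" or "retracted"
    "signature": "<validator's cryptographic signature>",  # placeholder
}

def summarize(label):
    """The human-readable one-liner a browser might show on click-through."""
    return (f'"{label["claim"]}" - {label["status"]} by '
            f'{label["checked_by"]} on {label["checked_on"]}')

print(summarize(claim_label))
```

The point of the structure is exactly what the text says: software can act on the fields, and a human can read the same information in one line.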
3. The "different communities" idea
This is the part that's a bit weird, but stick with us — it matters.
Different groups of experts check things differently. Scientists ask: "is this experiment reproducible?" Lawyers ask: "is there evidence that would stand up in court?" Historians ask: "what do the primary sources say?" Doctors ask: "what does the clinical trial show?"
For the same fact (say, "this drug works") these communities might reach different conclusions using different standards. All of them can be honest. They're just answering slightly different questions.
Yelp and Michelin both review restaurants. They use different standards. A restaurant could be 5 stars on Yelp and zero stars from Michelin. Neither is lying. They're measuring different things.
Veritas keeps the verdicts from all of them. Then you choose which standard you care about for which topic — and the system shows you the right view.
You set this up once via a small file called your Consensus Profile. "For science questions, I trust scientific consensus. For legal questions, I trust EU jurisprudence. For history, I want to see all the views and let me decide." The file lives on your device. You own it. Nobody else can change it without your permission.
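A tiny sketch of how a Consensus Profile might work in practice: a mapping from topics to trusted standards, plus a function that picks which verdict (or verdicts) to show you. The keys, values, and function are illustrative assumptions, not the real file format.

```python
# A hypothetical Consensus Profile: which checkers you trust, per topic.
# Keys and values are illustrative assumptions, not the real file format.
consensus_profile = {
    "science": "scientific_consensus",
    "law": "eu_jurisprudence",
    "history": "show_all_views",
}

def pick_verdict(topic, verdicts, profile):
    """Return the verdict from the community the reader trusts for this
    topic. "show_all_views" means: return every community's verdict."""
    preferred = profile.get(topic, "show_all_views")
    if preferred == "show_all_views":
        return verdicts
    return {preferred: verdicts.get(preferred, "no verdict recorded")}

# Example: the same claim checked by two different communities.
verdicts = {"scientific_consensus": "verified",
            "eu_jurisprudence": "contested"}

print(pick_verdict("science", verdicts, consensus_profile))
print(pick_verdict("history", verdicts, consensus_profile))
```

Notice that the verdicts themselves never change; only which ones your screen surfaces first. That is the "you choose the standard" idea in code form.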
4. What happens when something turns out to be wrong
Today, when a scientific paper is retracted, almost nobody finds out. The retraction notice gets buried; the original article that quoted the bad paper stays up forever, with no warning. People keep quoting the wrong thing for years.
It's like a food recall, but with no system to actually pull the food off the shelves. The recall happens. Nobody hears about it. People keep eating the bad batch.
Veritas does the equivalent of food-recall propagation. When a source gets retracted, the system automatically marks every claim that depended on it. Within seconds, your browser knows. Search engines know. AI assistants know. The website that originally cited the bad source gets a notice and can update or dispute.
It's not perfect — bad-actor sites can ignore the signal — but for honest sites and tools, the recall actually reaches the reader.
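The recall mechanism is essentially a walk over a citation graph. Here is a minimal sketch, under the assumption that the system knows which claims cite which sources; the graph shape and function are illustrative, not the actual protocol.

```python
# A minimal sketch of retraction propagation: when a source is retracted,
# everything that cited it, directly or indirectly, gets flagged.
from collections import deque

# claim -> list of claims/sources it cites (illustrative data)
depends_on = {
    "news_article": ["blog_post", "study_A"],
    "blog_post": ["study_A"],
    "encyclopedia_entry": ["study_B"],
}

def flag_retraction(retracted, depends_on):
    """Breadth-first walk collecting everything downstream of a retraction."""
    # Invert the graph: source -> the claims that cite it.
    cited_by = {}
    for claim, sources in depends_on.items():
        for src in sources:
            cited_by.setdefault(src, []).append(claim)
    flagged, queue = set(), deque([retracted])
    while queue:
        item = queue.popleft()
        for dependent in cited_by.get(item, []):
            if dependent not in flagged:
                flagged.add(dependent)
                queue.append(dependent)
    return flagged

print(sorted(flag_retraction("study_A", depends_on)))
# Retracting study_A flags both the blog post and the article citing it.
```

Retracting `study_A` flags the blog post that cited it and the news article that cited the blog post, while the unrelated encyclopedia entry is untouched. That is the "recall that reaches the shelves" behaviour described above.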
5. Why AI cares
The big AI assistants you use — ChatGPT, Claude, Gemini — sometimes confidently make things up. The technical word is "hallucination." It happens because they don't actually know what's true; they just produce text that sounds true.
If those AI systems can ask Veritas before they answer ("is this claim verified, contested, or made up?"), they can:
- Skip claims that have been falsified.
- Tell you when something is contested ("scientists say one thing, the courts say another").
- Say "I don't have a verified source for this" instead of inventing one.
This makes them measurably more honest. The companies that build these systems are willing to pay for the service — which is how Veritas pays its bills (we'll get to that).
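The three behaviours above can be sketched as a single "ask before you answer" step. The lookup function below is a stand-in stub, and its statuses are illustrative assumptions about how such an API might respond; it is not a real Veritas client.

```python
# A sketch of "ask before you answer": an assistant consults a (stubbed)
# Veritas lookup and adjusts its reply. The lookup and its statuses are
# illustrative assumptions, not a real API.
def veritas_lookup(claim):
    """Stand-in for a real Veritas query; returns a status for the claim."""
    known = {
        "Bolivia has 10 million hectares of degraded land": "verified",
        "This drug works": "contested",
    }
    return known.get(claim, "unverified")

def answer_with_grounding(claim):
    status = veritas_lookup(claim)
    if status == "verified":
        return f"{claim} (verified source attached)"
    if status == "contested":
        return f"{claim} is contested: different communities disagree."
    return "I don't have a verified source for this."

print(answer_with_grounding("This drug works"))
print(answer_with_grounding("The moon is made of cheese"))
```

The key design point is the last branch: when nothing is verified, the assistant says so instead of inventing a source.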
6. Who runs it
Three groups, with different roles:
Validators — the checkers
Universities, libraries, newsrooms, research organisations. The institutions society already trusts to check facts. Their checks are public and signed (cryptographically — like a fancy digital signature). If they get something wrong over time, their reputation drops; their future checks count for less.
Importantly: validators don't need permission to participate. Anyone can publish their checks. A research community outside formal institutions can self-organise. A religious tradition can have its own validators. A dissident group can post checks in jurisdictions where mainstream voices won't go. The system records who said what; you decide what counts.
The foundation — the rule-keeper
A non-profit foundation maintains the technical standard, runs reference servers, and publishes a short list of things the system refuses to do (for example, verifying child sexual abuse material, where the act of verification is itself illegal). It does not decide what's true.
You — the reader
Your Consensus Profile decides which checkers you trust on which topics. Most people will use a default — that's fine. Power users can fine-tune.
7. How it pays for itself
No advertising. No selling your data. No tokens that go up and down in price. The money comes from:
- AI companies pay subscription fees to use Veritas to ground their answers.
- Websites pay a small annual fee to get a "Veritas-certified" badge — like the lock icon next to the URL, but for facts.
- Foundations donate (Mozilla, Knight, MacArthur, Ford — the usual suspects for open-internet projects).
- Disputed claims: parties who care about a contested claim can pay to have it formally investigated. In effect, it's a new funding channel for investigative journalism: parties on both sides of an argument fund the investigation, professional checkers do the work, and the result is public.
About 60–70% of the money goes to paying the checkers. The rest covers operations and a reserve fund. The numbers are published; anyone can audit.
8. The honest part — what's still hard
This isn't shipping yet. It's a working paper — a serious proposal that could become real. We commissioned four independent reviewers to find problems with it, and they found some.
- The money plan depends a lot on AI companies signing up. If they don't, the whole funding model is shaky.
- Some of the security claims are optimistic. A motivated state actor with $2 million could probably mess with parts of the system. We're working on that.
- The promise that "no single entity decides" isn't quite true. The foundation has more power than the marketing suggests. We're being more honest about that in v0.3.
- The hard philosophical questions aren't fully answered. Specifically: how do you keep "different communities have different standards" from sliding into "everything is just opinion"? We have a plan; it's not finished.
All of this is on the same website as the cheerful version. We didn't hide the criticism. Read the critical review if you want the full list — it's deliberately published next to everything else.
9. Why we think it's worth trying anyway
Three reasons:
- The pieces exist. Every component (signatures, identity standards, transparency logs, gossip networks) is already built and proven. We're combining them, not inventing.
- Nobody else is doing it. Many fact-checking projects exist. None combines all six properties (provenance + permissionless validators + per-user composition + cascading retractions + AI-readable output + an investigation market). The space is real and unoccupied.
- The cost of doing nothing is rising. AI-generated misinformation isn't slowing down. The longer we wait, the more locked-in the bad equilibrium gets.
That's the pitch. If it works, the web becomes a bit more honest. If it doesn't, we publish what we learned and try something else.
If you want more
- The full proposal in plain English: A longer version of this page (~12 min). Same ideas, more depth on architecture, money, governance, build elements.
- What the critics said in plain English: Four independent reviewers tore the proposal apart. We synthesised their findings without softening them.
- What we're going to fix: The plan to address every problem the critics found. Seven workstreams, twelve weeks.
- The thinking behind it: Eleven design ideas + seven research investigations that fed the working paper. Each in two paragraphs.
- Where to invest: Four distinct paths to put money in (verification centers, the utility token, AI-augmented verification teams, and the application layer built on CPML).
- Common questions and worries: Answers to "but what about state propaganda?", "couldn't this just become another Wikipedia argument?", "how is this different from Community Notes?", and other reasonable doubts.
- Three-page brief: A tighter, slightly more technical version (~6 minutes). Same ideas, more references.
- Full working paper: The detailed version with citations, code sketches, regulatory analysis. ~30 minutes.
- Critical review: Independent reviewers tearing the proposal apart. Honest. Not soft. Worth reading if you're considering whether to take this seriously.
- Get in touch: There's a form on the brief page if you want to participate, ask questions, or critique us.
Plain-English version. The technical paper says the same things with more nuance.