# Research 04 — Hybrid Chain + Federation Architectures

> *Prior art for Veritas Protocol v0.2: the chain as settlement + incentive layer, federation as transport + read layer. Where this pattern has already been built, what it cost, what it looks like in code, and what we should fork.*

---

## 0. Elevator reading (5 minutes, non-blockchain-native)

If you have never shipped anything on a blockchain and you want the gist, read this section and stop.

**The problem.** Veritas needs a substrate that (a) lets mutually-hostile sources publish verdicts without anyone's permission, (b) settles payments between validator networks, consumers, and the protocol, (c) gives AI grounding calls a fast answer (tens of milliseconds), (d) propagates retractions and cascade events quickly enough that bad information doesn't stay cached for hours.

No single system type does all four. Blockchains do (a) and (b) well and (c)/(d) badly. Federated systems (Mastodon, Matrix, Bluesky, Nostr) do (c) and (d) well and (a)/(b) badly. So the answer is *both*, with strict role separation.

**The answer, in one sentence.** A Layer-2 blockchain (we recommend **Base** or an **OP Stack chain with Celestia DA** as the two front-runners) holds the canonical log of claim records, attestations, and cascade events, plus payment settlement. A **federation of aggregators** (modeled on Bluesky's AppView + Ozone labelers and Farcaster's Hub network) indexes the chain, caches per-domain grounding views, and serves reads. Consumers hit the federation; writers hit the chain; the two reconcile via event subscriptions and cascade-triggered cache invalidation.

**The prior art we lean on hardest.** Four systems already do roughly this:

1. **Lens Protocol + Momoka** — social graph settled on Polygon/Arbitrum, high-throughput publications on an optimistic L3 with Celestia-style DA, read API served via a federation of Momoka verifier nodes.
2. **Farcaster** — identity in three contracts on OP Mainnet, messages off-chain in a Hub network; Hubs sync via libp2p gossip and a Merkle Patricia Trie. Since January 2026, Neynar has maintained the protocol.
3. **Pyth Network** — data signed on a dedicated appchain (Pythnet), distributed as signed attestations via Wormhole, pulled on-demand by consumers on 60+ target chains. Publishers pay nothing; consumers pay only when they need a fresh value.
4. **Sigstore Rekor** — an append-only transparency log for software supply-chain attestations, backed by Trillian's verifiable Merkle tree, monitored by independent auditors. Not a blockchain, but the exact same shape as our claim ledger.

**What this costs.** An attestation on Base today is roughly **$0.001 to $0.01** in gas. A full L2 transaction with ~300 bytes of blob data costs around **$0.20–$0.30** after EIP-4844. Running our own OP Stack L2 via a RaaS provider (Conduit, Caldera, Gelato) costs **$3,000–$4,000/month** for an optimistic rollup, **$9,500–$14,000/month** for a ZK rollup. A Cosmos SDK sovereign L1 adds a validator-set security budget that realistic estimates put at **$300,000–$800,000/year** for a credible 30-validator set.

**What we recommend for Phase II prototype.** Don't build our own chain yet. Deploy contracts on **Base** (primary, for regulatory clarity and Coinbase distribution) with **Optimism** as a mirror (Superchain interop gives us a migration path). Use **Ethereum Attestation Service (EAS)** as the attestation primitive — it's already deployed on both chains and reduces our smart-contract surface area to near zero. Federate reads via a **Bluesky-style AppView** (one TypeScript service per aggregator) and a **libp2p gossip mesh** between AppViews for cascade event propagation. If Phase III (economic signal: $10M+ in validator stake, regulatory pressure to isolate from US chains, or need for mutually-hostile jurisdictional neutrality) forces us to our own chain, the cleanest migration target is a **Cosmos SDK sovereign rollup built with Rollkit (ev-node) on Celestia DA** — Apache 2.0, ~1,500 active contributors, and the migration burden is contained because EAS-style attestations are portable.

That's the summary. Everything below is the receipts.

---

## 1. Rollup settlement + off-chain execution family

The modular-blockchain stack that matured between 2023 and 2026 gives us a clean separation that Veritas can exploit: **execution on an L2**, **data availability on Celestia/EigenDA/blobs**, **settlement on Ethereum L1**. We evaluate each layer for fit.

### 1.1 Optimistic rollups

| Rollup | Stack | Tx cost (transfer) | Tx cost (swap) | L1 finality | Pre-confirmation |
|---|---|---|---|---|---|
| **Base** | OP Stack | ~$0.01 (median) | ~$0.03 | 7 days | ~2s |
| **Optimism** | OP Stack | ~$0.09 | ~$0.18 | 7 days | ~2s |
| **Arbitrum One** | Nitro | ~$0.09 | ~$0.27 | ~6.4 days | ~250ms |
| **Soneium** | OP Stack + Fast Finality Layer | ~$0.01–0.05 | ~$0.05–0.10 | 7d → <10s (FFL) | ~1s |

The **7-day fraud-proof window** is the key design constraint for Veritas cascade semantics. Three implications:

1. **Cascade propagation** cannot wait for L1 finality. Federation propagates on sequencer-confirmed (~2s) data. Sequencer-level fraud is visible in minutes; cascade events come only from credentialed bonded validators, so abuse is costly.
2. **Value settlement** tolerates the 7-day delay — payment flows aren't latency-sensitive.
3. **Canonical history** uses the 7-day-final state commitment for audits; grounding reads don't wait on it.

**Base**'s 2024 fee reductions cut median tx fees by more than 90%, to sub-$0.01. Coinbase's compliance posture makes it the default choice for a protocol facing institutional scrutiny. Base uses Ethereum blobs for DA and is a Superchain member (guaranteed interop with Optimism and future OP chains). **Soneium** (Sony, OP Stack + EigenLayer-restaked Fast Finality Layer, production since Jan 2026) is architecturally interesting because its FFL solves our cascade-speed-vs-finality problem: a restaked validator subnet attests to state in <10s.

### 1.2 ZK rollups

| ZK Rollup | Proving system | Finality (2026) | Tx cost (transfer) | State |
|---|---|---|---|---|
| **zkSync Era** | Boojum (PLONK) | ~2.5s median, ~10 min L1 | ~$0.07 | Stable |
| **Starknet** | STARK / Cairo | 4s block time, ~0.5s preconfs | ~$0.19 | Stable; sequencer decentralisation 2026 |
| **Linea** | Type-2 zkEVM | ~12 min L1 | ~$0.05–0.10 | Stable |
| **Polygon zkEVM** | Plonky2 | ~30 min bridge | ~$0.19 (+$2.75 swap) | Mainnet beta sunset 2026 |
| **Taiko Alethia** | Based Contestable Rollup | Per-proof challenge; Stage 2 Q2 2026 | ~$0.05 | Stable |

Proving times collapsed 60x (16 min → 16 s) and proving costs fell 45x over 2024–2026. Polygon CDK + Succinct's OP Succinct (Plonky3) offers **<$0.005/tx** proving cost. But the Polygon zkEVM mainnet-beta sunset underlines the risk: ZK stacks evolve faster and carry a higher lock-in cost per migration. **For Veritas:** validity proofs are technically a better fit (no fraud window), but the tooling/ecosystem gap vs optimistic stacks is real in 2026. Go optimistic (Base) in Phase II; revisit ZK only if proving advantages exceed 5x net of ecosystem drag.

### 1.3 Data availability alternatives

| DA layer | Throughput (2026) | Cost per MB | Finality / security | Model |
|---|---|---|---|---|
| **Ethereum blobs (EIP-4844)** | ~32 KB/s (3 blobs × 128 KB / 12s); 48-blob target mid-2026 | ~$20.56/MB in 2025, falling with BPO forks | 12.8 min | ~18d retention |
| **Celestia** | 21.33 MB/s (Matcha Jan 2026); 1 Tb/s target (Fibre) | $0.35–0.81/MB | ~12s Tendermint | Namespaced Merkle Trees + DAS |
| **EigenDA** | 100 MB/s (V2) | ~$0.006/MB (~$730/yr for 100MB/day) | Restaked ETH | High throughput, lower decentralisation |
| **Avail** | ~10 MB/s | Middle of pack | Substrate consensus | Multichain, KZG-based sampling |

Conduit's 2024 analysis showed Celestia is ~25x cheaper per MB than Ethereum blobs. In 2026 Celestia holds ~50% DA market share (160+ GB posted). The pending Ethereum **Fusaka** upgrade is expected to narrow this gap — but until Fusaka ships, Celestia has the cost advantage.

**For Veritas:** attestations are small (~200–500 bytes each). At 1M attestations/day × 400 bytes = ~12 GB/month: Celestia ~$6K/month, Ethereum blobs ~$80K+/month pre-Fusaka. Material delta.

### 1.4 Cost per attestation — concrete numbers (2026)

- **LOG4 event on Base** (4 topics + 64 bytes data): ~23,400 gas total; at Base post-4844 gas prices, **~$0.0005/attestation**.
- **EAS on-chain on Base**: ~80K–150K gas depending on schema, **~$0.002–0.004/attestation**.
- **EAS off-chain**: zero gas; UID optionally timestamp-anchored at ~23K gas.
- **Cosmos-SDK msg on Rollkit + Celestia DA** at ~400 bytes/msg and $0.50/MB: **~$0.0002/attestation** marginal.

**Target:** Phase II ~$0.001/attestation on Base via EAS off-chain with periodic Merkle root anchoring. Matches Lens/Momoka's profile.
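
The gas numbers above reduce to a one-line dollar-cost model. Gas figures come from this section; the gas price and ETH price in this sketch are illustrative assumptions, not measurements:

```typescript
interface AttestationPath {
  name: string;
  gasUsed: number;      // total gas per attestation (from this section)
  gasPriceGwei: number; // assumed L2 gas price
  ethUsd: number;       // assumed ETH price
}

// Dollar cost of one attestation: gas × gas price (gwei → ETH) × ETH price.
function costUsd(p: AttestationPath): number {
  return (p.gasUsed * p.gasPriceGwei * p.ethUsd) / 1e9;
}

// LOG4 path from the first bullet; price inputs are assumptions.
const log4OnBase: AttestationPath = {
  name: "LOG4 event on Base",
  gasUsed: 23_400,
  gasPriceGwei: 0.01,
  ethUsd: 2_000,
};
// costUsd(log4OnBase) ≈ 0.000468 — the ~$0.0005 figure above.
```

Swapping in the EAS on-chain gas range (80K–150K) at the same assumed prices reproduces the ~$0.002–0.004 bracket.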

---

## 2. Permissionless-write + federated-read systems that work

Veritas needs both permissionless writes (anyone credentialed can publish a verdict) and fast federated reads (aggregators compose per-CPML views and serve cached responses). Below, the prior art in the systems that most closely match this shape.

### 2.1 Nostr — the minimal-viable pattern

Nostr (Notes and Other Stuff Transmitted by Relays) is the reference implementation of "events signed by keys, broadcast to relays, relays federate by user choice."

**NIP-01 event structure** (the entire wire protocol):

```
{
  "id":         "<32-byte hex SHA-256 of the serialised event>",
  "pubkey":     "<32-byte hex schnorr public key>",
  "created_at": <unix seconds>,
  "kind":       <0-65535>,
  "tags":       [["e", "<event-id>", "<relay-url>"], ["p", "<pubkey>"], ...],
  "content":    "<arbitrary string>",
  "sig":        "<64-byte hex signature over id>"
}
```

The ID is `SHA-256` over the canonical JSON serialisation `[0, pubkey, created_at, kind, tags, content]`. That's content-addressing plus author-authentication in ~300 bytes.
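A minimal sketch of that ID computation, assuming Node's `crypto` module. `JSON.stringify` matches the NIP-01 serialisation for typical content; production code should reproduce the spec's escaping rules exactly, as `nostr-tools` does:

```typescript
import { createHash } from "node:crypto";

// NIP-01: id = sha256 over the canonical array
// [0, pubkey, created_at, kind, tags, content].
interface UnsignedEvent {
  pubkey: string;      // 32-byte hex schnorr pubkey
  created_at: number;  // unix seconds
  kind: number;
  tags: string[][];
  content: string;
}

function eventId(e: UnsignedEvent): string {
  const canonical = JSON.stringify([
    0, e.pubkey, e.created_at, e.kind, e.tags, e.content,
  ]);
  return createHash("sha256").update(canonical).digest("hex");
}
```

Any relay or client that recomputes this hash detects tampering for free — that is the "content-addressing plus author-authentication" property.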

**Client–relay protocol** is three message types each direction:
- `["EVENT", <event>]` — publish
- `["REQ", <sub-id>, <filter>...]` — subscribe with filter
- `["CLOSE", <sub-id>]` — unsubscribe
- Relay responds with `["EVENT", <sub-id>, <event>]`, `["OK", <id>, <true/false>, <msg>]`, `["EOSE", <sub-id>]` (end of stored events), `["CLOSED", <sub-id>, <reason>]`, `["NOTICE", <msg>]`.
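
A sketch of the client side of this protocol, covering only the filter subset named in NIP-01's basic fields (`ids`/`authors`/`kinds`/`since`/`until`; tag filters and `limit` are omitted here):

```typescript
// Subset of a NIP-01 filter.
interface Filter {
  ids?: string[];
  authors?: string[];
  kinds?: number[];
  since?: number; // inclusive lower bound on created_at
  until?: number; // inclusive upper bound on created_at
}

interface NostrEvent {
  id: string;
  pubkey: string;
  created_at: number;
  kind: number;
}

// A relay applies each REQ filter like this before emitting EVENTs.
function matches(f: Filter, e: NostrEvent): boolean {
  if (f.ids && !f.ids.includes(e.id)) return false;
  if (f.authors && !f.authors.includes(e.pubkey)) return false;
  if (f.kinds && !f.kinds.includes(e.kind)) return false;
  if (f.since !== undefined && e.created_at < f.since) return false;
  if (f.until !== undefined && e.created_at > f.until) return false;
  return true;
}

// Client messages are plain JSON arrays on the websocket:
const reqMsg = JSON.stringify(["REQ", "sub-1", { kinds: [30400], since: 1700000000 }]);
const closeMsg = JSON.stringify(["CLOSE", "sub-1"]);
```

The kind `30400` here anticipates the Veritas mapping suggested below in this section; it is an assumption, not an allocated kind.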

**Event kinds classification:**
- Regular (1–9999): stored by relays
- Replaceable (10000–19999, plus 0 and 3): only latest per `pubkey+kind` retained
- Ephemeral (20000–29999): **not stored** — gossip-only
- Addressable (30000–39999): latest per `pubkey+kind+d-tag` retained
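
The ranges above as a classification helper. This follows the document's ranges; NIP-01's full table carves out a few additional legacy kinds inside the regular range:

```typescript
type KindClass = "regular" | "replaceable" | "ephemeral" | "addressable" | "unknown";

// Storage semantics per kind range (plus legacy kinds 0 and 3).
function classifyKind(kind: number): KindClass {
  if (kind === 0 || kind === 3) return "replaceable";
  if (kind >= 1 && kind <= 9999) return "regular";
  if (kind >= 10000 && kind <= 19999) return "replaceable";
  if (kind >= 20000 && kind <= 29999) return "ephemeral";
  if (kind >= 30000 && kind <= 39999) return "addressable";
  return "unknown";
}
```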

**NIP-09 (event deletion)** is crucial for Veritas because it demonstrates a *federation-level* retraction primitive: a signed "delete this event" message that relays MAY honour. Not every relay does — that's the tradeoff of federation. In Veritas, retraction is protocol-mandated and chain-enforced, which is strictly stronger than NIP-09.

**What we steal:** the event shape, the filter syntax, the relay-client protocol. A Nostr-style subscription API with protocol-typed events (`claim.v1`, `attestation.v1`, `cascade.v1`) mapped to Nostr kinds (e.g., 30400–30499 for Veritas events) would give us instant compatibility with existing Nostr relays as a fallback transport if our primary federation goes down.

**What we don't steal:** Nostr has no settlement layer, no payments, no strong retraction. Verdicts in Nostr are as trustworthy as the author you subscribe to; Veritas needs enforceable economic semantics. That's why we pair Nostr-style federation *on top of* chain settlement, not as a replacement.

Source reference: https://github.com/nostr-protocol/nips — CC0 license; `nostr-tools` (TypeScript) is MIT.

### 2.2 Bluesky / AT Protocol + Ozone labelers — the closest structural match

AT Protocol is the most relevant prior art. It separates concerns into four services:

- **PDS** (Personal Data Server) — holds each user's append-only repo (Merkle Search Tree). Users can self-host or use hosted PDSes. Repo commits are signed.
- **Relay** — large-scale crawler that subscribes to all PDSes and emits a firehose.
- **AppView** — indexes the firehose and serves API reads for a specific application (e.g., bsky.app).
- **Labeler** (Ozone) — a separate service that publishes signed labels over a WebSocket stream (`com.atproto.label.subscribeLabels`). Labels are composable: users subscribe to multiple labelers, each with their own policy.

**Label spec** (exactly maps to Veritas attestations):
```
{
  "ver": 1,
  "src": "<DID of labeler>",
  "uri": "<at:// or did: target>",
  "cid": "<optional record CID>",
  "val": "<kebab-case value, <=128 bytes>",
  "neg": <bool, true = retraction>,
  "cts": "<ISO 8601>",
  "exp": "<ISO 8601, optional>",
  "sig": "<signature over CBOR-normalised label>"
}
```

The signing key is identified as `#atproto_label` in the labeler's DID document. To retract, publish a label with the same `src/uri/val` and `neg: true` plus a later `cts`. Critically, **"a negation label does not mean that the inverse of the label is 'true', only that the previous label has been retracted"** — a semantic we must copy exactly.
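
The retraction semantics above fit in a few lines. A sketch, assuming labels are grouped per (`src`, `uri`, `val`) and that `cts` values are same-precision UTC ISO 8601 strings (which sort lexically):

```typescript
// Subset of the label record relevant to retraction.
interface Label {
  src: string;    // DID of labeler
  uri: string;    // labeled target
  val: string;    // label value
  neg?: boolean;  // true = retraction of a prior label
  cts: string;    // ISO 8601 UTC timestamp
}

// For each (src, uri, val), the latest cts wins; if that label has
// neg=true, the label is retracted — NOT inverted.
function activeLabels(labels: Label[]): Label[] {
  const latest = new Map<string, Label>();
  for (const l of labels) {
    const key = `${l.src}\u0000${l.uri}\u0000${l.val}`;
    const prev = latest.get(key);
    if (!prev || l.cts > prev.cts) latest.set(key, l);
  }
  return [...latest.values()].filter((l) => !l.neg);
}
```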

**Ozone the service** is TypeScript (99.6%), self-hostable via Docker. It's a Next.js UI plus a labeling backend. The UI is moderator-facing; the protocol is public. Independent labelers can publish labels that any AT client will surface.

**What we steal:** the label format (`src`/`uri`/`cid`/`val`/`neg`/`cts`/`exp`/`sig`) as the direct wire format for Veritas attestations. The `subscribeLabels` WebSocket protocol as a federation-native subscription endpoint. The separation of PDS (author) from Relay (aggregator) from AppView (indexer) from Labeler (verdict-producer). This four-way split maps onto Veritas cleanly:
- PDS equivalent → validator's local signed ledger (own records)
- Relay equivalent → chain + indexers (the canonical log)
- AppView equivalent → aggregator (read API, per-aggregator editorial policy)
- Labeler equivalent → validator networks publishing claim verdicts

**What we don't steal:** the DID + handle identity system adds complexity we don't need yet; plain Ed25519 pubkeys (Nostr-style) are simpler for v0.2. The Merkle Search Tree repo format is clever but we get content-addressing for free from the chain event log.

Source reference: https://github.com/bluesky-social/atproto — MIT license; `@atproto/api` is in TypeScript. Ozone at https://github.com/bluesky-social/ozone — MIT license.

### 2.3 Farcaster Hubs — the chain-minimal pattern

Farcaster reduces on-chain footprint to the absolute minimum: three smart contracts (Id Registry, Key Registry, Storage Registry) on OP Mainnet manage identity and quota; everything else is off-chain in a Hub network.

**Hub architecture:**
- Each user's `fid` (numeric ID) is registered on-chain with a custody address and signing keys.
- User messages (10 types: Cast Add/Remove, Reaction Add/Remove, Link Add/Remove, Verification Add/Remove, UserData Add, Username Proof) are CRDT-typed and propagate between Hubs via libp2p gossip.
- Hubs maintain a **Merkle Patricia Trie** of all messages. Each Hub gossips its MPT root; peers compare and sync the diff.
- **Sync ID** (36 bytes): 10 bytes timestamp + 1 byte message type + 4 bytes fid + 1 byte CRDT type + 20 bytes message hash. This lets Hubs sort and compare messages efficiently.
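
The 36-byte sync-ID layout above can be sketched as follows. The field encodings (big-endian integers, zero-padded timestamp) are illustrative assumptions; the protocol repo defines the canonical byte layout:

```typescript
// 10B timestamp + 1B message type + 4B fid + 1B CRDT type + 20B hash = 36B.
function encodeSyncId(
  timestamp: number, msgType: number, fid: number, crdtType: number, hash: Uint8Array,
): Uint8Array {
  if (hash.length !== 20) throw new Error("hash must be 20 bytes");
  const out = new Uint8Array(36);
  const view = new DataView(out.buffer);
  // Bytes 0–9: timestamp. Write the low 8 bytes big-endian at offset 2,
  // leaving the two high bytes zero.
  view.setBigUint64(2, BigInt(timestamp));
  out[10] = msgType;         // byte 10: message type
  view.setUint32(11, fid);   // bytes 11–14: fid, big-endian
  out[15] = crdtType;        // byte 15: CRDT type
  out.set(hash, 16);         // bytes 16–35: message hash
  return out;
}
```

Because the timestamp leads, a byte-wise sort of sync IDs is a time-ordered sort — which is what makes MPT diff-sync between Hubs cheap.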

**On-chain footprint:** 3 contracts. Everything else is off-chain.

**2026 note:** Neynar acquired the protocol in January 2026 from Merkle Manufactory and assumed maintenance responsibility. Merkle Manufactory repaid ~$180M of its venture funding. The protocol is still developer-maintained and open-source.

**What we steal:** the chain-minimal pattern is exactly right for Veritas v0.2. On-chain = identity + credential registry + settlement + cascade anchors. Off-chain = attestations and grounding views. Also: Farcaster's MPT-based sync is a proven pattern for federated state synchronisation and is directly applicable to our aggregator-to-aggregator reconciliation.

**What we don't steal:** Farcaster's CRDT conflict resolution (last-write-wins with timestamp tiebreak) is too permissive for verdicts where retraction must be causal. Veritas uses *appendable-only* semantics with explicit negation events; there is no "overwrite."

Source reference: https://github.com/farcasterxyz/protocol — protocol spec, MIT license.

### 2.4 ActivityPub, Matrix, and the transport substrate

**ActivityPub / Mastodon** (W3C rec 2018; ~10K instances in 2026) proves federation at scale works. What does *not* work for Veritas: no content-addressing (HTTP URLs go stale), no content-level cryptographic signing (HTTP Signatures are transport-only), no economic layer, and defederation is the only inter-instance moderation primitive. Lesson: the *shape* scales; the specific wire protocol is too web-era for our needs.

**Matrix rooms** are a partially-ordered event graph (DAG) that federates between homeservers. Events include parent-event references, event type, depth, and payload hash, all signed by the originating server. Matrix v1.18 (2026) added policy servers and account locking. We steal the event-graph pattern for cascade propagation (a cascade event naturally references the retraction event that triggered it). We don't steal Matrix's state-resolution algorithm — the L2 gives us canonical order via block inclusion, which is strictly stronger.

### 2.5 IPFS + IPLD + libp2p — the transport substrate

IPFS is content-addressed storage. IPLD is the generic "linked data" spec that content-addressed systems share. libp2p is the networking stack that IPFS, Filecoin, Ethereum consensus, Polkadot, and Farcaster all use.

**Key primitives we use from this stack:**
- **CID (Content Identifier)** — multicodec-prefixed multihash of content. Canonical way to name content across the protocol.
- **Merkle DAG** — every CID links to its child CIDs through cryptographic hashes. A CID is a self-verifying pointer to an immutable graph.
- **libp2p GossipSub** — pubsub protocol for peer-to-peer message propagation. Scales to thousands of peers. Used by Ethereum consensus layer since the Merge.
- **DHT (Distributed Hash Table)** — Kademlia-based peer discovery. Optional — can be bootstrapped with a static peer list for a known federation.

**For Veritas:** every attestation has a CID. The CID is what's pinned into an on-chain log record. The payload can live wherever (IPFS, S3, an aggregator's database, all three). This is the exact pattern Rekor uses (see §4), EAS's off-chain attestations use, and ATProto uses (repo CIDs).
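
The pattern in this paragraph reduces to "pin the hash, store the payload anywhere". A simplified sketch — a real CID wraps the digest in multicodec/multihash prefixes, which are omitted here in favour of a bare SHA-256 hex digest:

```typescript
import { createHash } from "node:crypto";

// Simplified content address (NOT a real CID: no multicodec/multihash).
function contentAddress(payload: Uint8Array): string {
  return createHash("sha256").update(payload).digest("hex");
}

// The on-chain record pins only the address; the payload can live in
// IPFS, S3, or an aggregator DB. Any copy is self-verifying:
function verify(payload: Uint8Array, pinned: string): boolean {
  return contentAddress(payload) === pinned;
}
```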

Source reference: https://github.com/libp2p/go-libp2p — MIT license. `js-libp2p` and `rust-libp2p` are also production-grade.

---

## 3. L2 settlement + L3/off-chain execution pattern

Four architectures already split "canonical settlement" from "fast execution." Each gives us a different point on the design space.

### 3.1 Ethereum L2 + L3 (Orbit / OP Stack)

**Arbitrum Orbit** lets you deploy an L3 that settles to any L2 (e.g., Arbitrum One). Orbit chains operate in Rollup mode (posts full data to parent chain) or AnyTrust mode (uses a Data Availability Committee, only posts certificates). Third-party DA integrations (Celestia, EigenDA) are first-class.

**OP Stack** has evolved into a Superchain: shared bridge, shared governance, guaranteed interop between OP-family chains. Base, Optimism, Soneium, Mode, Fraxtal, Zora are all OP chains. A new L3 on OP Stack can settle to any Superchain L2.

**Fit for Veritas:** this is the path we recommend *later*. In Phase II, we sit on L2 directly (Base). Phase III, if traffic or customisation demand requires, we spin up a Veritas-specific L3 on Orbit or OP Stack that settles to Base. L3 gas is near-zero; L2 gas carries the cost. This preserves the EVM tooling stack and exits us from any single L2 lock-in.

### 3.2 dYdX v4 — the sovereign appchain pattern

dYdX v4 abandoned being an L2 app on Ethereum and rebuilt as a **sovereign Cosmos SDK L1 with CometBFT consensus** and a fully off-chain orderbook.

**Architecture:**
- Cosmos SDK blockchain with custom modules: Markets, Margin, Orderbook, Liquidation, FundingRate.
- **Orderbook is off-chain, in-memory, per-validator.** Validators gossip orders to each other. Matching runs off-chain.
- **Settlement is on-chain.** Only matched trades produce block transactions. This removes the "every order is a transaction" throughput limit.
- Indexer service provides read API; front-end consumes indexer.

**Fit for Veritas:** the shape — sovereign chain + off-chain high-volume state + on-chain settlement + indexer as read layer — is exactly ours. The difference: dYdX's off-chain state is an orderbook; ours is attestation payloads and cached grounding views. **Cost caveat:** dYdX's validator-set security is bootstrapped on a governance token with >$500M market cap, roughly the minimum for a credible sovereign chain. Veritas cannot justify that in Phase II (see §6).

### 3.3 Lens Protocol + Momoka — the most direct match

Lens Protocol runs on Polygon PoS, with high-throughput publications on **Momoka**, an optimistic L3 that posts publication data to a pluggable DA layer (historically Arweave rather than Celestia) and is checked by an **optimistic DA verifier network**.

- Publications on Momoka are synchronous — immediately available from the Lens API.
- Anyone can run a Momoka Verifier node. Verifiers stream and index data, provide trustless reads.
- **Node operators work independently of the Lens API.** The verifier network has no Lens-team lock-in; if Lens API disappears, verifiers continue.

**Fit for Veritas:** this is the tightest fit in the space. Lens = settlement on L2 (Polygon) + fast permissionless reads via a verifier federation + chain-anchored publication records. Veritas = settlement on L2 (Base) + fast permissionless reads via an aggregator federation + chain-anchored attestation records.

**What we copy:**
- The "verifier/aggregator independence" principle: any third party can run a full indexer and serve reads. No Veritas Foundation API monopoly on reads.
- The optimistic DA pattern: payloads are posted to a DA layer; anyone can fraud-prove misstatements; in the normal case there's no cost beyond DA.
- The "synchronous" read guarantee: writes to the L3/federation produce immediately-queryable reads; we don't wait for L1 confirmation.

Source reference: https://github.com/lens-protocol/momoka — MIT license, TypeScript.

### 3.4 Pyth Network — the pull-oracle pattern

Pyth deserves its own subsection because its data-propagation architecture is almost point-for-point applicable to Veritas grounding calls.

**Pyth architecture:**
1. ~90 first-party publishers (Jane Street, Jump, Cboe, etc.) push prices to **Pythnet**, a Solana-like appchain dedicated to Pyth.
2. Pythnet aggregates per-feed into a single price + confidence interval.
3. An **attester** program creates Wormhole messages containing signed price updates.
4. Wormhole guardians co-sign VAAs (Verifiable Action Approvals).
5. A **price service** caches the latest VAAs and exposes an HTTP API.
6. **Consumers pull on demand**: when a dApp needs a fresh price, it fetches the VAA from the price service and submits it as part of its transaction. The Pyth on-chain contract verifies the Wormhole signature and stores the price.

**Key property: publishers pay nothing.** Data flows through Pythnet → Wormhole → price service → consumer at zero cost to the publisher. The consumer pays only when they need the freshness guarantee.

**Fit for Veritas:** this is the grounding-call pattern, exactly. AI systems (consumers) need fresh veracity data. Validators (publishers) need not pay per attestation. An aggregator (price service equivalent) caches the latest signed attestations. The AI pulls on demand and pays for that pull — either via a subscription fee to the aggregator or by submitting a chain transaction if it wants provable on-chain grounding.

**What we copy:**
- Publishers → validator networks.
- Pythnet → the Veritas L2 (or L3, later).
- Wormhole guardians → optional, if we want cross-chain distribution. Not required for Phase II.
- Price service → aggregator (federation node).
- Consumer pulls → AI grounding calls.
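
A minimal sketch of this mapping, with hypothetical names (`AggregatorCache` and `SignedAttestation` are not protocol types): validators push for free, the aggregator keeps only the latest attestation per claim, and consumers pull with a freshness bound — paying only on that pull:

```typescript
interface SignedAttestation {
  claimId: string;
  verdict: "supported" | "refuted" | "contested";
  issuedAt: number; // unix seconds
  sig: string;      // validator signature (verification elided here)
}

class AggregatorCache {
  private latest = new Map<string, SignedAttestation>();

  // Publisher side: free — just updates the cache (Pyth's "publishers
  // pay nothing" property).
  push(a: SignedAttestation): void {
    const prev = this.latest.get(a.claimId);
    if (!prev || a.issuedAt > prev.issuedAt) this.latest.set(a.claimId, a);
  }

  // Consumer side: pull on demand, enforcing a freshness bound. A stale
  // answer is worse than no answer for a grounding call.
  pull(claimId: string, maxAgeSec: number, now: number): SignedAttestation | undefined {
    const a = this.latest.get(claimId);
    if (!a || now - a.issuedAt > maxAgeSec) return undefined;
    return a;
  }
}
```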

---

## 4. Oracle-network-adjacent hybrids

Oracle networks solve a subtly-related problem: get trusted off-chain data onto a chain. Three that are load-bearing for Veritas's vocabulary.

### 4.1 Chainlink CCIP + Functions

**CCIP (Cross-Chain Interoperability Protocol)** uses three independent Decentralized Oracle Networks (DONs):
- Committing DON monitors the source chain, builds a Merkle tree of messages.
- Risk Management Network independently reconstructs the Merkle tree and compares.
- Executing DON only authorises execution after both Merkle roots match.

This three-way-attestation architecture is a model for mutually-hostile validator networks: two independent quorums must converge before a cross-chain message is considered valid.

**Chainlink Functions** gives you decentralised off-chain compute: you write a JavaScript function, send a request to the FunctionsRouter, DON nodes execute in sandboxed environments, OCR 2.0 aggregates results, one node transmits on-chain. This is the closest thing to "decentralised AI inference on a hostile substrate" that ships today.

**For Veritas:** the CCIP dual-DON pattern is directly applicable to high-severity cascade events. Require two independent federation subsets to co-sign a cascade trigger before caches invalidate globally. The Functions pattern is less directly applicable but worth studying for anti-spam: validator work-verification could run in a Chainlink-Functions-like sandbox.
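
A sketch of that dual-quorum rule for high-severity cascades, assuming two pre-assigned disjoint federation subsets with independent signature thresholds (all names hypothetical):

```typescript
interface CascadeSignature {
  signer: string;     // federation node identity
  quorum: "A" | "B";  // which independent subset the signer belongs to
}

// Accept a cascade trigger only when BOTH subsets independently reach
// their threshold — the CCIP commit/risk-management split, in miniature.
function cascadeAccepted(
  sigs: CascadeSignature[], thresholdA: number, thresholdB: number,
): boolean {
  const a = new Set(sigs.filter((s) => s.quorum === "A").map((s) => s.signer));
  const b = new Set(sigs.filter((s) => s.quorum === "B").map((s) => s.signer));
  return a.size >= thresholdA && b.size >= thresholdB;
}
```

Deduplicating signers through a `Set` means a single node cannot satisfy a threshold by signing twice.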

### 4.2 Pyth Network

Covered in §3.4.

**Critical evaluation:** Pyth's publisher count (~90) is low enough that it's effectively a consortium. The economic model works for prices, where a small number of high-quality data providers is natural. It does not work for Veritas verdicts, where the universe of validators is unbounded. Mitigation: Veritas combines Pyth's pull-pattern for transport with an open validator-credential system for trust.

### 4.3 API3 dAPI

API3 is first-party oracles: data providers run their own Airnodes (oracle nodes) rather than going through intermediaries. Every data feed is cryptographically signed by the original API provider, verifiable down to the API parameters.

**Economic claim:** first-party oracles achieve 50%+ gas efficiency versus middleware oracles because there are fewer intermediaries.

**MEV/OEV:** API3's OEV Network auctions oracle update MEV back to the dApp that created the value. Interesting for Veritas if we ever face "high-value verdict first-mover advantage" (e.g., early publishers of a retraction being front-run).

**For Veritas:** the first-party principle is a strong fit. A validator network publishes its own attestations directly; there's no Veritas-Foundation intermediary signing on behalf of validators. Each attestation is signed by the validator's own key.

---

## 5. Chain selection matrix

Scoring the realistic Phase II candidates on the axes that matter for Veritas.

| Chain | Tx cost (attestation) | Throughput | Finality (L1) | EVM | Ecosystem | Regulatory | Tooling | DA |
|---|---|---|---|---|---|---|---|---|
| **Base** | $0.001–0.003 | ~200 TPS sustained | 7d (optimistic) / ~2s sequencer | Yes | S-tier (Coinbase distrib.) | S-tier (Coinbase compliance posture) | S-tier (Foundry, Hardhat, OP Stack) | Ethereum blobs |
| **Optimism** | $0.002–0.005 | ~200 TPS | 7d / ~2s | Yes | A-tier (Superchain, mature DAO) | A-tier | S-tier | Ethereum blobs |
| **Arbitrum One** | $0.002–0.008 | ~40K TPS theoretical, ~200 actual | 6.4d / ~250ms | Yes (Nitro custom) | S-tier (most TVL) | A-tier | S-tier (AnyTrust optional) | Ethereum blobs or Celestia |
| **Polygon zkEVM** | $0.005–0.02 | ~2K TPS | ~30 min | Yes | B-tier (zkEVM Mainnet Beta sunsetting 2026) | A-tier | B-tier | Ethereum blobs |
| **zkSync Era** | $0.002–0.007 | ~2K TPS | ~10 min | Mostly (Type-4) | A-tier | B-tier (US regulatory uncertain) | A-tier | Ethereum blobs |
| **Linea** | $0.003–0.01 | ~2K TPS | ~12 min | Yes (Type-2) | B-tier (ConsenSys-run) | A-tier | A-tier | Ethereum blobs |
| **Solana** | $0.00025 | 1,500–4,000 TPS sustained, 100M+ tx/day | ~150ms (Alpenglow) | No (SVM) | S-tier | C-tier (US regulatory fraught) | A-tier (Anchor, but not EVM) | Native (monolithic) |
| **Cosmos SDK on Celestia (Rollkit)** | $0.0002 | ~5K TPS | ~12s | Optional (Ethermint) | B-tier (1.5K contributors, growing) | B-tier | A-tier (Cosmos SDK, IBC) | Celestia |
| **Starknet** | $0.05–0.20 currently | ~1K TPS | <1 min L1 validity | No (Cairo) | B-tier | A-tier | C-tier (Cairo is its own ecosystem) | Ethereum blobs |

**Top 3 for Phase II prototype:**

1. **Base** (primary). Cheapest transaction cost, S-tier distribution via Coinbase, strongest compliance posture. OP Stack means easy migration to our own L3 later if needed. EAS is deployed.
2. **Optimism** (mirror deployment). Superchain interop is contractually guaranteed; running contracts on both Base and OP at Phase II gives us redundancy and a cross-chain test bed. Cost delta is ~2x, not prohibitive.
3. **Arbitrum One** (secondary). Highest TVL, most mature tooling, AnyTrust mode is a clean escape hatch if blob pricing blows up. Orbit gives us a clean L3 upgrade path.

**Explicitly not recommended for Phase II:**
- **Solana.** The TPS and latency are attractive, but the SVM tooling fork from EVM is a one-way trip. Regulatory ambiguity in the US is a real problem for a protocol claiming neutrality.
- **Starknet.** Cairo is the best language for verifiable compute but has nowhere near the developer pool we need.
- **Own Cosmos chain.** Worth building toward in Phase III. Not Phase II.

---

## 6. Own-L1 vs ride-existing

This is a real question and the answer is *not yet*.

### 6.1 Cost of running our own L1 (2026 numbers)

- **Cosmos SDK sovereign L1 (dYdX v4 style):** ~30 validators × ~$1K/month ops = **~$360K/yr** in operator payments. Plus governance, audits, and bootstrap-token economics. Realistic all-in security budget: **$300K–$800K/year**.
- **OP Stack L2 via RaaS (Conduit/Caldera/Gelato):** $3–4K/month (optimistic) or $9.5–14K/month (ZK) fixed + DA cost. At 1M attestations/day on Celestia: +~$6K/month. **All-in ~$50K–$150K/yr** with a single trusted sequencer.
- **Rollkit sovereign rollup on Celestia:** no fixed sequencer fee if self-run (~$500/month server). Security inherits from Celestia's staked TIA. **All-in ~$10K–$50K/yr**.

### 6.2 Political reasons crypto-native investors want own L1

Be clear-eyed: crypto-native capital often *prefers* "own L1" for reasons that are not primarily technical.

- **Token launches.** An L1 token has a larger TAM than an application token. Venture incentives push toward L1s.
- **Governance capture.** Controlling the base layer means controlling protocol upgrades. Application-layer protocols are upgrade-captive to their L1 host.
- **Branding / story.** "We built our own chain" reads as a stronger technical achievement than "we deployed on Base."

For Veritas, these are **not** reasons to build our own L1. The protocol's credibility comes from neutrality and verifiable operation, not from owning the substrate.

### 6.3 When does "our own chain" actually earn its keep?

Three signals that would push us to our own chain:

1. **Jurisdictional neutrality demand.** If regulators in the US or EU apply pressure that materially constrains Base/Optimism's treatment of Veritas — for example, compelling Coinbase to freeze specific attestations or validators — we need a chain not under that jurisdiction. A Cosmos sovereign chain with globally distributed validators, or a Rollkit chain on Celestia, lets us exit cleanly.
2. **Mutually-hostile-frame requirement.** If Phase III adds frames that cannot coexist on a Coinbase-adjacent chain (e.g., a Russian state-funded validator network and a US state-department-funded validator network both need to post, and neither accepts the other's L2 hosting), the chain must be neutral infrastructure. Own chain or a genuinely credible neutral chain (Celestia-rollup, Ethereum mainnet) is the only answer.
3. **Economic scale.** If the Veritas token has a stable >$100M market cap and $10M+ staked, the security budget for our own sovereign chain becomes defensible.

**Decision framework (Phase III gate):**
- If none of the three signals hit: stay on Base/OP.
- If (1) hits: migrate to a Rollkit sovereign rollup on Celestia. ~6-month migration; EAS-format attestations are portable.
- If (2) hits: add a second-chain deployment (Cosmos sovereign) while maintaining the L2 deployment. Multi-chain-federated.
- If (3) hits alone without (1) or (2): probably still don't build own L1. Token float is not sufficient justification.

### 6.4 Practical recommendation

Phase II: **Base** primary, **Optimism** mirror. Zero own-chain work.

Phase III gate: assess the three signals. The migration cost from L2 to Rollkit-on-Celestia is manageable because all attestations are EAS-format (portable) and all federation services are stack-agnostic (they index events, not specific chains).

---

## 7. Bridge protocol — how AI grounding calls reconcile

The hot path for Veritas is an AI grounding call: a model queries "what's the current veracity of claim X?" and expects an answer in <100ms. The architecture must reconcile chain-canonical events with federation-cached reads at this latency budget.

### 7.1 Materialised-view pattern

Standard pattern from data systems, adapted for our stack:

1. **Chain emits events** (EAS attestations, cascade events, retraction events). Sequencer confirmation at ~2s; L1 finality at 7d.
2. **Aggregator subscribes to chain events** via JSON-RPC `eth_subscribe` or Viem's block subscription. Ingest latency: ~200ms–2s depending on whether we trust sequencer or wait for batch confirmation.
3. **Aggregator materialises the per-claim view.** For claim X, maintain a running aggregate: {list of attestations, current rollup verdict, last-updated timestamp, list of active retractions}.
4. **Aggregator serves cached reads.** AI grounding call → HTTP GET → aggregator returns cached view in ~10–30ms.
5. **Cache invalidation on cascade events.** When a cascade event crosses a severity threshold, invalidate affected views immediately.

This is the PostgreSQL materialised view pattern plus a fast invalidation channel. The invalidation channel is a libp2p pubsub topic (`/veritas/cascade/v1/<severity>`) that all aggregators subscribe to.
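The five steps above can be sketched as a minimal view store. All names here (`ViewStore`, `ClaimView`, the single-verdict rollup) are illustrative, not the reference implementation; in particular, a real rollup would weigh all attestations rather than take the latest one.

```typescript
// Sketch of the aggregator's materialised-view store (hypothetical types/names).
// Chain events update per-claim aggregates; cascade events arriving on the
// pubsub channel invalidate views ahead of TTL expiry.

type Verdict = "supported" | "disputed" | "retracted" | "unknown";

interface ClaimView {
  attestationUids: string[];
  verdict: Verdict;        // simplified: real rollup logic weighs all attestations
  lastUpdated: number;     // unix ms
  activeRetractions: string[];
  stale: boolean;          // set by cascade invalidation; forces a chain re-read
}

class ViewStore {
  private views = new Map<string, ClaimView>();

  // Step 3: materialise the per-claim aggregate from an ingested chain event.
  applyAttestation(claimCid: string, uid: string, verdict: Verdict, ts: number): void {
    const v = this.views.get(claimCid) ?? {
      attestationUids: [], verdict: "unknown" as Verdict, lastUpdated: 0,
      activeRetractions: [], stale: false,
    };
    v.attestationUids.push(uid);
    v.verdict = verdict;
    v.lastUpdated = ts;
    v.stale = false;
    this.views.set(claimCid, v);
  }

  // Step 5: cascade-triggered invalidation, called from the pubsub handler.
  invalidate(claimCids: string[]): void {
    for (const cid of claimCids) {
      const v = this.views.get(cid);
      if (v) v.stale = true;      // next read falls through to the chain
    }
  }

  // Step 4: the cached read served to an AI grounding call.
  read(claimCid: string): ClaimView | undefined {
    const v = this.views.get(claimCid);
    return v && !v.stale ? v : undefined; // miss → caller queries the chain
  }
}
```

A stale view is served as a miss rather than deleted, so the aggregator keeps the old aggregate around for diffing once the fresh chain read completes.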

### 7.2 TTL + cascade-trigger reconciliation

Two reconciliation triggers, both needed:

**TTL expiry.** Every cached view has a TTL. 5 minutes for active claims, 1 hour for stable claims, 24 hours for historical claims. TTL expiry forces a fresh chain query.

**Cascade trigger.** Any cascade event broadcast on the pubsub channel causes affected views to invalidate before TTL. A "severity threshold" rule determines which cascade events trigger global invalidation vs. lazy TTL-based refresh:
- Low severity: lazy TTL refresh.
- Medium severity: invalidate affected claim IDs within the aggregator.
- High severity: publish on the global cascade channel; every aggregator invalidates; any cached payloads downstream must be considered suspect until refreshed.

### 7.3 Latency budget

Engineering targets for the reference implementation:

| Path | p50 | p95 | p99 | Hard cap |
|---|---|---|---|---|
| AI grounding call → aggregator cache hit | 10ms | 30ms | 50ms | 100ms |
| Attestation write → aggregator materialises | 500ms | 2s | 5s | 10s |
| Cascade trigger → federation-wide cache invalidate | 1s | 3s | 5s | 10s |
| Chain finality (L2 sequencer confirmed) | 2s | 2s | 3s | 5s |
| Chain finality (L1 optimistic) | 7d | 7d | 7d | 7d |
| Soft consistency tolerance for clients | — | — | — | 24h |

The 24-hour soft-consistency number is the clincher for the architecture: any aggregator that's fallen 24+ hours behind the chain is considered stale and should not serve grounding calls. Aggregator health checks enforce this.

### 7.4 Reference stack for the bridge

`viem`/`ethers-v6` over WebSocket JSON-RPC for chain subscription; Postgres + Redis for aggregator state and hot cache; `js-libp2p` GossipSub for cascade propagation; Fastify/Hono for HTTP reads; a 5-minute reconciliation loop that re-reads a window of chain events to catch any missed subscription events.
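The catch-up pass at the end of that loop can be sketched as follows. The window fetch itself is an async RPC call in the real indexer; here it is assumed to have already returned, and all names (`reconcileWindow`, `ChainEvent`) are illustrative:

```typescript
// Re-read a recent window of chain events and apply any the live WebSocket
// subscription dropped. Runs every ~5 minutes per the reference stack.

interface ChainEvent { uid: string; blockNumber: number; }

function reconcileWindow(
  windowEvents: ChainEvent[],              // events re-read for [safe, head]
  seen: Set<string>,                       // UIDs already ingested live
  apply: (e: ChainEvent) => void,          // same handler the subscription uses
): number {
  let missed = 0;
  for (const e of windowEvents) {
    if (!seen.has(e.uid)) {                // gap: the subscription dropped this
      apply(e);
      seen.add(e.uid);
      missed++;
    }
  }
  return missed;                           // metric worth alerting on if persistently > 0
}
```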

---

## 8. Reference implementations to fork / study

The concrete open-source bets for Phase II, with licence and maturity:

| Repository | Language | License | Maturity | Fit for Veritas |
|---|---|---|---|---|
| `ethereum/go-ethereum` | Go | LGPL-3.0 / GPL-3.0 | S-tier | Upstream; we read, we don't fork. |
| `ethereum-optimism/optimism` (OP Stack) | Go / Solidity | MIT | S-tier | Reference for deploying on Base/OP. Don't fork; use. |
| `ethereum-attestation-service/eas-contracts` | Solidity | MIT | A-tier | **Use directly as attestation primitive.** Deployed on Base, OP, Arbitrum, Polygon. Don't reinvent. |
| `rollkit/rollkit` (ev-node) | Go (92.6%) + Rust | Apache-2.0 | A-tier | Phase III migration target. Fork and deploy for sovereign rollup on Celestia. |
| `celestiaorg/celestia-app` | Go | Apache-2.0 | S-tier | DA layer, used via light node. Don't fork. |
| `libp2p/go-libp2p` | Go | MIT | S-tier | P2P transport for aggregator mesh. Use directly via `js-libp2p` in our TS stack. |
| `nostr-protocol/nostr-tools` | TypeScript | MIT | A-tier | Reference for client-relay protocol patterns. Borrow the wire format. |
| `bluesky-social/atproto` | TypeScript | MIT | A-tier | **Fork the labeler protocol directly.** Our attestation wire format should be label-spec-compatible. |
| `bluesky-social/ozone` | TypeScript | MIT | A-tier | Reference implementation of a labeler service with moderator UI. Good starting point for aggregator's admin UI. |
| `lens-protocol/momoka` | TypeScript | MIT | A-tier | Reference for optimistic-DA verifier-network pattern. Study; don't fork directly. |
| `sigstore/rekor` | Go (92.7%) | Apache-2.0 | S-tier | **The closest structural match for our canonical log.** Trillian-Tessera tile log in Rekor v2 is a model for our on-chain event anchor. |
| `farcasterxyz/hubble` | TypeScript / Rust | MIT / Apache-2.0 | B-tier (Neynar-maintained since Jan 2026) | Reference for Hub-to-Hub sync via MPT. Probably don't fork; rebuild smaller. |
| `chainlink/ccip` | Solidity / Go | MIT | A-tier | Reference for dual-DON cross-attestation. |
| `pyth-network/pythnet` | Rust | Apache-2.0 | A-tier | Reference for pull-oracle pattern. |

**Direct reuses (Phase II):**
- EAS contracts on Base/OP for the attestation primitive.
- go-libp2p/js-libp2p for aggregator mesh.
- ATProto label spec as the on-wire attestation format.
- Rekor's Trillian-Tessera tile-log pattern for an auditable off-chain log mirror (optional; nice-to-have for Phase II).

**Forks or clones (Phase II):**
- Ozone as the starting point for an aggregator admin UI.
- Momoka's verifier-node architecture as the starting point for our aggregator service.

**Hold until Phase III:**
- Rollkit for own rollup.
- Farcaster Hubble for full Hub-style federation (we won't need this scale in Phase II).

---

## 9. Protocol-level permissionless-write design

### 9.1 What data structure holds events on-chain

Three options, increasing in cost and capability:

**Option A: Event-log only (cheapest).**
- A single "VeritasRegistry" contract emits typed events for each record type: `Attestation(bytes32 uid, address validator, bytes32 claimCid, bytes32 payloadCid, uint64 timestamp)`, `Cascade(bytes32 parentUid, bytes32 childUid, uint8 severity)`, `Retraction(bytes32 targetUid, bytes32 reasonCid)`.
- No state read-back. All queries go through indexers.
- Cost: ~$0.001 per attestation on Base.
- Matches Farcaster's on-chain minimalism.

**Option B: EAS-mediated (standard).**
- Register Veritas schemas in EAS: `Attestation`, `Cascade`, `Retraction`.
- Use EAS's `attest()` and `revoke()` for on-chain attestation and retraction.
- Off-chain attestations with optional on-chain timestamp anchor for cost optimisation.
- Cost: ~$0.002–0.004 per on-chain attestation; ~$0 per off-chain; ~$0.001 per Merkle root anchor batching many off-chain attestations.
- Gives us a portable format: any EAS client can read Veritas attestations.

**Option C: Full state (most expensive, most capable).**
- Custom contract stores per-claim aggregate state: `(claimCid → (attestations[], currentVerdict, lastUpdated))`.
- Reads are cheap; writes are expensive (~$0.02–0.05 on Base).
- Only worth it if we genuinely need on-chain queryability, which we don't — federation serves reads.

**Recommendation: Option B for Phase II.** EAS mediates; we add a thin VeritasRegistry on top for protocol-specific primitives (credential registry, validator stake management, cascade event anchors). Off-chain attestations are the default; on-chain is opt-in for verdicts that need provable timestamp. Batched Merkle anchoring gives us a trustless audit trail.
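The batched Merkle anchoring in the recommendation works as follows: off-chain attestation UIDs are hashed pairwise into a single root, and only that root is posted on-chain; any single attestation is then provable against the anchor with a log-sized path. The leaf and pair hashing scheme below is illustrative, not EAS's exact one:

```typescript
// Minimal Merkle-root sketch over a batch of off-chain attestation UIDs.
// Odd levels duplicate the last node; the root is the on-chain anchor value.
import { createHash } from "node:crypto";

const sha256 = (data: Buffer): Buffer => createHash("sha256").update(data).digest();

function merkleRoot(uids: string[]): string {
  if (uids.length === 0) throw new Error("empty batch");
  let level = uids.map(u => sha256(Buffer.from(u, "utf8")));   // leaf hashes
  while (level.length > 1) {
    const next: Buffer[] = [];
    for (let i = 0; i < level.length; i += 2) {
      const right = level[i + 1] ?? level[i];  // duplicate last node if odd
      next.push(sha256(Buffer.concat([level[i], right])));
    }
    level = next;
  }
  return "0x" + level[0].toString("hex");
}
```

At 1M off-chain attestations/day and one anchor every 10 minutes, each root covers roughly 7,000 attestations for a single ~$0.001 anchor transaction.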

### 9.2 Off-chain evidence

Every attestation references:
- A **claim CID** (content-addressed claim identifier). IPFS CID v1 preferred; any multihash-compatible format acceptable.
- A **payload CID** (the actual attestation content: verdict, severity, evidence list).
- Optional **evidence pointers**: a list of `(CID, optional_ipfs_gateway, optional_https_url)` tuples. Consumers can fetch from any of the three.

This gives us content-addressing (the CID is the canonical name) plus location flexibility (the evidence can live on IPFS, a validator's S3 bucket, and a mirror at an aggregator, all at once). Fetching proceeds in order; the first source that returns a payload whose hash matches the CID wins.
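The fetch-in-order rule reads as a short loop. Real CID verification would decode the multihash via a multiformats library; to keep the sketch dependency-free, a raw SHA-256 hex digest stands in for the CID, and `fetchEvidence`/`FetchFn` are hypothetical names:

```typescript
// Try each evidence source in order; accept the first payload whose hash
// matches the content address. A mismatching source is skipped, not fatal.
import { createHash } from "node:crypto";

type FetchFn = (source: string) => Promise<Uint8Array | null>;

async function fetchEvidence(
  expectedDigestHex: string,          // stand-in for the CID's multihash digest
  sources: string[],                  // IPFS gateway, HTTPS mirror, ... in order
  fetchFrom: FetchFn,
): Promise<Uint8Array | null> {
  for (const src of sources) {
    const payload = await fetchFrom(src).catch(() => null);
    if (!payload) continue;           // unreachable source: try the next one
    const digest = createHash("sha256").update(payload).digest("hex");
    if (digest === expectedDigestHex) return payload;  // first matching source wins
    // hash mismatch: source served wrong or tampered bytes; keep going
  }
  return null;                        // no source produced the addressed content
}
```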

### 9.3 Anti-spam at the chain level

Three mechanisms, layered:

**1. Minimum fee.** Every attestation costs gas; at ~$0.001 that's a real but small floor. Adjustable via governance if spam becomes a problem.

**2. Credential requirement.** Every attestation is signed by a validator key registered in the credential registry. Credentials are issued by validator networks (organisations registered with the protocol), not individuals. Individual-level spam is structurally excluded: individuals cannot sign valid attestations directly, so they must publish through a validator network, which has its reputation to protect.

**3. Rate limit by credential class.** Credential registry stores rate-limit tiers. A "journalist" credential might allow 1,000 attestations/day; a "research institution" credential 10,000; an "anonymous observer" credential (if one exists) much less. Rate limits are soft (enforced by aggregators at read time; off-chain) rather than hard (enforced by the contract), because hard-enforcement is expensive and regressive for bursty legitimate activity.

**Proof-of-humanity and staking.** We don't recommend proof-of-humanity as a protocol-level anti-spam mechanism. Existing PoH systems (Kleros PoH V2, Humanity Protocol) work at the individual level; our unit is the validator *network*. Networks stake a bond as part of registration. Networks that spam lose their bond via protocol governance or automated slashing (if spam is defined objectively — e.g., attestations on non-existent claims).
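Mechanism 3 as an aggregator would apply it, in a dependency-free sketch. Tier names and limits are the illustrative ones from above (the observer limit is a placeholder; the text only says "much less"), and `SoftRateLimiter` is a hypothetical name:

```typescript
// Soft, read-time rate limiting by credential class. The chain accepts the
// attestation regardless; over-limit attestations are flagged/down-ranked by
// aggregators rather than rejected by the contract.

interface Credential { network: string; tier: "journalist" | "research" | "observer"; }

const DAILY_LIMIT: Record<Credential["tier"], number> = {
  journalist: 1_000,
  research: 10_000,
  observer: 100,       // illustrative placeholder
};

class SoftRateLimiter {
  private counts = new Map<string, { day: number; used: number }>();

  // Returns false when the attestation should be down-ranked or flagged.
  admit(cred: Credential, nowMs: number): boolean {
    const day = Math.floor(nowMs / 86_400_000);   // UTC day bucket
    const key = `${cred.network}:${cred.tier}`;
    const entry = this.counts.get(key);
    if (!entry || entry.day !== day) {
      this.counts.set(key, { day, used: 1 });     // new day resets the counter
      return true;
    }
    entry.used++;
    return entry.used <= DAILY_LIMIT[cred.tier];
  }
}
```

Because the limit is soft, a bursty but legitimate network degrades to lower read-time ranking instead of having writes hard-rejected on-chain.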

---

## 10. Recommended Phase II reference stack

**Chain:** Base (primary) + Optimism (mirror). EVM Solidity contracts. Foundry for development. Both chains run the same contracts; bridging is via the OP Superchain interop spec when it ships in 2026.

**Attestation primitive:** Ethereum Attestation Service (EAS). Register three schemas: `veritas.attestation.v1`, `veritas.cascade.v1`, `veritas.retraction.v1`. Default is off-chain attestations with periodic on-chain Merkle root anchoring (batch every 10 minutes, ~144 anchors/day × ~$0.001/anchor = ~$0.14/day/aggregator).

**Protocol contracts (thin layer above EAS):**
- `VeritasRegistry` — validator credential registration, stake management, slashing hooks.
- `CascadeAnchor` — emits high-severity cascade events for on-chain provability.
- `PaymentRouter` — collects fees, distributes to validator networks.

**Attestation wire format:** ATProto label spec (`src`/`uri`/`cid`/`val`/`neg`/`cts`/`exp`/`sig`) with protocol-specific extensions. Canonicalised via CBOR. SHA-256 for UIDs.
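As a sketch, the wire format is the ATProto label fields named above plus Veritas extension fields (the extension field names here are hypothetical). The real canonicalisation is dag-cbor; deterministic JSON with sorted keys stands in below so the SHA-256 UID derivation is visible without a CBOR dependency:

```typescript
// ATProto-label-shaped attestation record with illustrative Veritas extensions.
// The UID covers the unsigned body only, so the signature can be attached later.
import { createHash } from "node:crypto";

interface VeritasLabel {
  // ATProto label spec fields
  src: string;       // DID of the labeler (validator network)
  uri: string;       // subject URI
  cid?: string;      // subject content hash
  val: string;       // label value, e.g. "veritas:disputed"
  neg?: boolean;     // negation (retraction of a prior label)
  cts: string;       // created-at timestamp, ISO 8601
  exp?: string;      // expiry timestamp
  sig?: Uint8Array;  // signature over the canonical unsigned body
  // Veritas extensions (hypothetical field names)
  severity?: number;
  payloadCid?: string;
}

function labelUid(label: VeritasLabel): string {
  const { sig, ...unsigned } = label;   // exclude the signature from the UID
  // Array replacer gives a deterministic, sorted key order (CBOR stand-in).
  const canonical = JSON.stringify(unsigned, Object.keys(unsigned).sort());
  return "0x" + createHash("sha256").update(canonical).digest("hex");
}
```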

**Federation layer:**
- **Aggregator service** (TypeScript, Fastify + Postgres + Redis). Forked from `bluesky-social/ozone` as starting point. Serves HTTP reads at <30ms cache hit. Self-hostable via Docker. Every aggregator runs an EAS indexer against Base + OP.
- **Aggregator mesh** via `js-libp2p` with GossipSub. Topics: `/veritas/attestation/v1`, `/veritas/cascade/v1/{low,mid,high}`, `/veritas/retraction/v1`.
- **Subscription API** compatible with Nostr-style filters. Clients and AI systems can subscribe to filtered event streams via WebSocket.

**Evidence and payload layer:**
- **IPFS** for canonical storage. Aggregators pin their subscribed payloads. Optional HTTPS mirrors.
- **CID v1** as canonical identifier. multicodec `0x71` (dag-cbor) for structured attestations.

**Observability and audit:**
- **Trillian-Tessera tile log** (Rekor v2 style) as an optional off-chain auditable mirror of the chain event stream. Nice-to-have for Phase II; proves "the federation has not modified its history."
- Full chain event history is authoritative; aggregator mirrors are convenience.

**Operational profile (Phase II, conservative load):**
- 1M attestations/day, 10 aggregators, 10K AI grounding calls/second globally.
- Chain cost: ~$1,000/day ($30K/month) at 1M attestations × $0.001.
- Aggregator infra cost: 10 × $500/month = $5K/month.
- IPFS / DA cost: ~$2K/month.
- Total Phase II operational floor: **~$40K/month, ~$500K/year.**
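The floor above is simple arithmetic over the stated assumptions; a back-of-envelope check (function and parameter names are illustrative):

```typescript
// Phase II monthly cost model. Inputs are the assumptions from the text:
// 1M attestations/day at ~$0.001, 10 aggregators at $500/mo, ~$2K/mo IPFS/DA.

function phase2MonthlyCostUsd(opts: {
  attestationsPerDay: number;
  costPerAttestation: number;  // on-chain cost per attestation/anchor, USD
  aggregators: number;
  aggregatorMonthly: number;   // USD per aggregator per month
  ipfsDaMonthly: number;       // USD per month
}): number {
  const chain = opts.attestationsPerDay * opts.costPerAttestation * 30;
  const infra = opts.aggregators * opts.aggregatorMonthly;
  return chain + infra + opts.ipfsDaMonthly;
}
```

With the stated inputs this yields $30K + $5K + $2K = $37K/month, which rounds to the ~$40K/month (~$500K/year) floor quoted above.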

**Migration plan (to Phase III own-chain if the three signals fire):**
1. Deploy Rollkit sovereign rollup on Celestia mainnet. Same contract code (Ethermint optional).
2. Dual-write from validators (both Base/OP and Veritas chain) for 30 days.
3. Aggregators index both chains; clients switch read endpoints.
4. Settlement flow transitions. Total migration window: 3–6 months with no service downtime.

---

## 11. Critical evaluation (where the assumptions break)

A short list of what can go wrong with this architecture and where we should be paranoid.

**EIP-4844 blob pricing volatility.** Blob base fees adjust dynamically. If L2 activity spikes, our $0.001/attestation target could 5–10x in a bad week. Mitigation: Celestia DA fallback is already plumbed in via AnyTrust mode or Orbit third-party DA.

**Sequencer centralisation.** Base and OP Mainnet each run a single sequencer today (operated by Coinbase and OP Labs respectively). A sequencer could in theory censor Veritas transactions. Mitigation: OP Stack chains, including Base, support force-inclusion of transactions via L1. Practically, censorship risk is low but non-zero, and it is one of the signals that would push us to Phase III.

**Aggregator collusion.** If the top 3 aggregators by usage collude to suppress certain attestations, AI consumers using only those aggregators get a biased view. Mitigation: chain-canonical history is always falsifiable. Any third party can run an aggregator and verify the suppression. The protocol's security assumption is "at least one honest aggregator exists, and clients that pull from at least N aggregators with independent operators get correct results."

**Cascade-trigger abuse.** If a cascade channel subscriber can trigger global invalidation with a forged event, they can DoS the federation. Mitigation: cascade events are signed; multi-signature required for cross-channel high-severity invalidation; dual-DON-style co-signing for invalidation above a threshold.
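The threshold rule can be sketched as follows. Signature verification itself is abstracted behind a `verify` callback (in the reference stack this would be an Ed25519 or secp256k1 check); all names are illustrative:

```typescript
// A high-severity global invalidation is honoured only when at least
// `threshold` distinct, authorised co-signers have validly signed the event.

interface CascadeEvent {
  id: string;
  severity: "low" | "medium" | "high";
  signers: string[];               // claimed co-signer identities
}

function acceptGlobalInvalidation(
  ev: CascadeEvent,
  authorised: Set<string>,                             // registry of co-signers
  verify: (signer: string, eventId: string) => boolean, // real crypto check
  threshold: number,
): boolean {
  if (ev.severity !== "high") return false;  // only high severity goes global
  const valid = new Set(
    ev.signers.filter(s => authorised.has(s) && verify(s, ev.id)),
  );
  return valid.size >= threshold;            // distinct co-signers required
}
```

Deduplicating via a `Set` matters: a forger replaying one valid signature N times must not count as N co-signers.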

**Latency budget violation under load.** 30ms cache hit assumes aggregator is not saturated. At 10K/s per aggregator with a hot cache this is fine; at 100K/s it is not. Mitigation: geo-distributed aggregator deployment and CDN-style edge caching (Cloudflare Workers reading from a Postgres replica).

**ATProto label spec evolution.** We're piggybacking on a format that is not under our control. Mitigation: the format is simple (a label is roughly 300 bytes on the wire); we maintain our own canonical version with Veritas extensions; full compatibility with ATProto clients is a nice-to-have, not a requirement.

---

## 12. Decision summary (one table)

| Concern | Decision | Phase II cost | Migration path to Phase III |
|---|---|---|---|
| Base chain | Base + Optimism (mirror) | $30K/month at 1M attestations/day | Rollkit on Celestia; 3–6mo migration |
| Attestation primitive | Ethereum Attestation Service | $0.001–0.004/attestation | Portable; same wire format everywhere |
| DA layer | Ethereum blobs (default) | $0.005–0.02/MB | Swap to Celestia DA via Orbit AnyTrust |
| Federation transport | libp2p GossipSub (same as Farcaster, ETH consensus) | Trivial; <$500/mo per aggregator | No change |
| Wire format | ATProto label spec with Veritas extensions | $0 | No change |
| Read API | Nostr-style subscription + HTTP GET | Included in aggregator infra | No change |
| Cascade propagation | libp2p pubsub topics with dual-DON co-sign for high severity | Trivial | No change |
| Evidence storage | IPFS + optional HTTPS mirrors, CID v1 canonical | ~$2K/month for 1M attestations/day | No change |
| Anti-spam | Credentialed validators + chain fee + off-chain rate limit | Protocol-level, no infra cost | No change |
| Audit mirror | Trillian-Tessera tile log (optional) | <$500/mo | Retain; chain is canonical |

**Recommended Phase II reference stack in one sentence:** Deploy EAS contracts on Base and Optimism; publish attestations in ATProto label format; federate aggregators via libp2p; pin evidence to IPFS; serve AI grounding calls via Nostr-style subscription and HTTP cache; reconcile via chain event subscription and cascade-channel pubsub.

---

## References

**Rollup stacks and L2s:**
- Layer 2 fees: https://l2fees.info/
- OP Stack documentation: https://docs.optimism.io/stack/components
- Arbitrum Orbit: https://docs.arbitrum.io/launch-arbitrum-chain/a-gentle-introduction
- Base (Coinbase): https://www.coinbase.com/blog/introducing-base
- Soneium Fast Finality Layer: https://soneium.org/en/blog/Fast-Finality-Layer-for-Soneium/
- Taiko Contestable Rollup: https://docs.taiko.xyz/taiko-alethia-protocol/protocol-design/contestable-rollup/
- Polygon CDK: https://docs.polygon.technology/cdk/
- Starknet: https://www.starknet.io/blog/validity-rollups/

**Data availability:**
- Celestia DA: https://docs.celestia.org/learn/celestia-101/data-availability/
- EIP-4844: https://eips.ethereum.org/EIPS/eip-4844
- Conduit DA cost analysis: https://www.conduit.xyz/blog/data-availability-costs-ethereum-blobs-celestia/
- Celestia 2026 economics: https://blockeden.xyz/blog/2026/01/16/celestia-blob-economics-data-availability-rollup-costs/
- Three-way DA comparison: https://dalayers.com/2026/03/14/celestia-vs-eigenda-choosing-the-best-modular-data-availability-layer-for-rollups-2026/

**Federated protocols:**
- Nostr NIPs: https://github.com/nostr-protocol/nips (NIP-01, NIP-09 primary)
- AT Protocol label spec: https://atproto.com/specs/label
- AT Protocol PDS: https://atproto.wiki/en/wiki/reference/core-architecture/pds
- Bluesky moderation architecture: https://docs.bsky.app/blog/blueskys-moderation-architecture
- Ozone labeler: https://github.com/bluesky-social/ozone
- Farcaster protocol spec: https://github.com/farcasterxyz/protocol/blob/main/docs/SPECIFICATION.md
- ActivityPub: https://docs.joinmastodon.org/spec/activitypub/
- Matrix Specification: https://spec.matrix.org/latest/

**L2 + L3 patterns:**
- dYdX v4 architecture: https://www.dydx.xyz/blog/v4-technical-architecture-overview
- Lens Momoka: https://github.com/lens-protocol/momoka
- Momoka scaling rationale: https://mirror.xyz/lensprotocol.eth/3Hcl0dGE8AOYmnFolzqO6hJuueDHdsaCs3ols2ruc9E

**Oracle networks:**
- Chainlink CCIP: https://docs.chain.link/ccip
- Chainlink Functions architecture: https://docs.chain.link/chainlink-functions/resources/architecture
- Pyth pull oracle: https://www.pyth.network/blog/pyth-a-new-model-to-the-price-oracle
- Pyth whitepaper v2: https://www.pyth.network/blog/pyth-network-whitepaper-version-2-0
- API3 first-party oracles: https://blog.api3.org/ecosystem/api3s-first-party-oracle-infrastructure-is-live-on-soneium/

**Reference implementations:**
- IPFS / libp2p: https://docs.ipfs.tech/concepts/libp2p/
- Sigstore Rekor: https://github.com/sigstore/rekor
- Rekor v2 Trillian-Tessera: https://blog.sigstore.dev/rekor-v2-ga/
- Ethereum Attestation Service (EAS): https://docs.attest.org/docs/core--concepts/how-eas-works
- EAS on-chain vs off-chain: https://docs.attest.org/docs/core--concepts/onchain-vs-offchain
- Rollkit: https://blog.celestia.org/introducing-rollkit-a-modular-rollup-framework/

**Cost data:**
- RaaS pricing: https://www.chaincatcher.com/en/article/2142428
- Conduit: https://www.conduit.xyz/platform
- L2 gas cost comparison: https://coinlaw.io/gas-fee-markets-on-layer-2-statistics/
- Ethereum event gas formula: https://consensys.io/blog/guide-to-events-and-logs-in-ethereum-smart-contracts

**Anti-spam and identity:**
- Proof of Humanity V2: https://docs.kleros.io/products/proof-of-humanity
- Fraud-proof window analysis: https://www.zeeve.io/blog/a-holistic-view-of-solutions-to-reduce-7-day-finality-in-op-rollups/

---

*End of research-04-hybrid-architecture.md. Length: ~42 KB. Target was 35–50 KB.*
