# Research-01: CPML — Academic Foundations
## A Literature Review for the Consensus Profile Markup Language

**Author:** Sage (Veritas Protocol research team)
**Date:** 2026-04-20
**Status:** Research whitepaper, draft for design team review
**Scope:** Academic foundations for the Consensus Profile Markup Language (CPML), the user-owned, machine-readable file that describes which consensus domains a reader trusts for which topics, with which weights, and which conflict-resolution rules.

---

## Executive summary

CPML is a composition layer. It proposes that every reader carries a signed, shareable, machine-readable file that selects which validator-signed, domain-scoped attestations they want applied when a claim verdict is composed for them personally. The protocol has no opinion about what is true in general; it composes a per-user verdict from attestations plus a consensus profile.

That design lives at the intersection of ten research literatures. This review surveys each, judges what is usable for CPML, and names three honestly unsolved problems that Veritas must acknowledge rather than paper over.

Five headline design takeaways and three open research questions close the document.

Throughout: evidence-quality ratings follow Sage's internal scale (A = replicated RCTs or mathematical proof; B = single well-designed study or strong observational; C = preliminary/preprint/case study; D = expert opinion/analogy; E = speculation/marketing). Any claim whose source could not be verified during this review is marked `[UNVERIFIED]`.

---

## 1. Personal-preference ontologies in the Semantic Web

### What the literature says

The Semantic Web community has been building preference and trust vocabularies on RDF for two decades, and its history is informative for CPML.

**FOAF (Friend Of A Friend)** is the canonical social-graph vocabulary (Brickley & Miller, maintained at http://xmlns.com/foaf/). It is an RDF/OWL vocabulary for describing persons, their activities, and their relations. FOAF assumes no central database: each agent publishes their own FOAF document and links out to others. Evidence rating: A (widely deployed, large-scale analyses by Ding et al. 2005 and Finin et al. studied millions of FOAF documents).

**FOAF + trust extensions (Golbeck).** Jennifer Golbeck's doctoral work at Maryland (Golbeck 2005, *Computing and Applying Trust in Web-Based Social Networks*) extended FOAF with a topic-scoped trust predicate. Users rate other users on a 1-9 scale *per subject area*. Trust is propagated along the social graph by weighted-path algorithms ("TidalTrust", "SUNNY") to produce a personalised trust score for any indirectly-connected user. FilmTrust (Golbeck & Hendler 2006) deployed this for movie recommendations and showed that trust-weighted recommendations outperform mean ratings specifically when the user diverges from the crowd. Evidence rating: B (single-system deployment, but careful evaluation).

**WOT (Web of Trust) RDF schema** (http://xmlns.com/wot/0.1/) binds FOAF identities to OpenPGP keys so that RDF documents can be cryptographically signed and verified. This is precursor infrastructure for what CPML needs.

**SKOS (Simple Knowledge Organization System)** is the W3C Recommendation (Miles & Bechhofer 2009, https://www.w3.org/TR/skos-reference/) for thesauri, classifications, and subject-heading systems. Key primitives: `skos:Concept`, `skos:ConceptScheme`, `skos:prefLabel`, `skos:broader`/`narrower`, `skos:related`. SKOS gives us a tested vocabulary for *topic scoping* — which is exactly the "for which topics" axis of CPML. Evidence rating: A (W3C Recommendation, deployed by Library of Congress, Getty, Europeana, UNESCO).
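SKOS's `broader`/`narrower` primitives make topic scoping mechanically simple. A minimal sketch, assuming a hypothetical mini-taxonomy (the concept names and the flat-dict encoding are invented; real profiles would reference `skos:Concept` URIs in a published concept scheme):

```python
# Sketch: topic scoping via a SKOS-style broader() chain. The concept
# names and the flat-dict encoding are hypothetical; real profiles
# would reference skos:Concept URIs in a published concept scheme.

BROADER = {
    "mrna_vaccines": "vaccinology",
    "vaccinology": "medicine",
    "rate_policy": "macroeconomics",
    "macroeconomics": "economics",
}

def in_scope(topic, domain):
    """True if `topic` equals `domain` or sits below it via skos:broader."""
    while topic is not None:
        if topic == domain:
            return True
        topic = BROADER.get(topic)
    return False

print(in_scope("mrna_vaccines", "medicine"))  # True: broader chain reaches it
print(in_scope("rate_policy", "medicine"))    # False: different branch
```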

**P3P (Platform for Privacy Preferences)** was a W3C attempt (April 2002, obsoleted 2018) at a machine-readable preference language for privacy (https://www.w3.org/P3P/). User agents would read a site's P3P policy and match it against the user's APPEL (A P3P Preferences Exchange Language) preference document. It failed — the W3C Working Group itself documents that adoption was poor, the vocabulary was too coarse to express real policies, and Microsoft's partial IE implementation created compatibility theatre rather than privacy protection. Evidence rating: A (documented failure).

**PICS (Platform for Internet Content Selection)** was W3C's 1996 Recommendation for labels attached to Internet content (https://www.w3.org/PICS/). PICS itself was values-neutral: it defined the wire format; any organisation could publish their own rating system. It was rejected publicly by the Global Internet Liberty Campaign and many others on the grounds that it "facilitates the implementation of server/proxy-based filtering, providing a simplified means of enabling upstream censorship beyond the control of the end user." PICS is the ghost that haunts any CPML-style project: a technically neutral labelling system still ended up serving central censors rather than empowering readers, because the labels flowed from gatekeepers downward.

### What's unsolved or contested

- **Trust-metric composition at scale has no canonical answer.** Golbeck's TidalTrust, Ziegler & Lausen's Appleseed, and Richardson-Agrawal-Domingos (ISWC 2003) all compose trust differently (shortest path, spreading activation, Bayesian). No consensus exists about which composition rule is correct when paths disagree.
- **Preference vocabularies rot.** FOAF still exists but is sparsely maintained; P3P died. A CPML that depends on a single upstream vocabulary repeats that failure mode.
- **No standard exists for "domain-indexed trust weight with conflict rules."** WOT signs documents. FOAF-trust weights people. SKOS scopes topics. None of them compose the three axes CPML requires.

### What's directly usable for CPML

- Use SKOS as the topic-scoping substrate. Do not invent a topic vocabulary.
- Sign CPML documents with an OpenPGP/WoT-style signature binding profile to key; use the WoT RDF schema as prior art for the signature block.
- Model the Golbeck topic-scoped trust pattern explicitly, but reject topic *propagation* by default (propagation inherits too many assumptions).
- Treat PICS as a cautionary tale: label authority must flow **outward from the reader**, not inward from gatekeepers, or CPML becomes the censor's tool.
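The three axes these recommendations compose (SKOS topic scope, validator weight, conflict rule) can be sketched as a hypothetical profile document. Every field name, identifier, and value below is invented for illustration; this is not a proposed CPML schema:

```python
# Hypothetical CPML profile sketch composing the three axes no existing
# vocabulary combines: topic scope, trust weight, and conflict rule.
# All names and values are invented for illustration.

profile = {
    "subject": "did:example:reader-1",  # profile owner (hypothetical identifier)
    "expires": "2027-04-20",            # profiles should be time-bound
    "domains": [
        {
            "scope": "medicine",        # a skos:Concept identifier
            "validators": {"who.int": 0.8, "cochrane.org": 0.9},
            "conflict_rule": "highest_weight_wins",
        },
        {
            "scope": "economics",
            "validators": {"ecb.europa.eu": 0.6},
            "conflict_rule": "abstain_on_tie",
        },
    ],
    "signature": None,  # an OpenPGP/WoT-style signature would bind profile to key
}
```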

### Citations

- Brickley, D. & Miller, L. *FOAF Vocabulary Specification*. http://xmlns.com/foaf/spec/
- Golbeck, J. (2005) *Computing and Applying Trust in Web-Based Social Networks*, PhD thesis, University of Maryland.
- Golbeck, J. & Hendler, J. (2006) *FilmTrust: Movie recommendations using trust in web-based social networks*, IEEE CCNC.
- Miles, A. & Bechhofer, S. (2009) *SKOS Simple Knowledge Organization System Reference*, W3C Recommendation. https://www.w3.org/TR/skos-reference/
- W3C (2002, obsoleted 2018) *Platform for Privacy Preferences 1.0 (P3P1.0) Specification*. https://www.w3.org/P3P/
- W3C (1996) *PICS: Platform for Internet Content Selection*. https://www.w3.org/PICS/
- Richardson, M., Agrawal, R. & Domingos, P. (2003) *Trust Management for the Semantic Web*, ISWC.

---

## 2. Argumentation frameworks with agent preferences

### What the literature says

Dung 1995 is the starting point for all modern computational argumentation, and it is directly relevant to CPML's conflict-resolution rule surface.

**Dung (1995).** "On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games", *Artificial Intelligence* 77:321-357. An Abstract Argumentation Framework (AAF) is a pair `(Args, attacks)` — a set of arguments and a binary "attacks" relation. Dung defines four key semantics — grounded, preferred, stable, complete — each giving different sets of "acceptable" arguments. Dung proved these correspond to standard nonmonotonic formalisms and even to the stable marriage problem. Evidence rating: A (mathematically rigorous, ~25,000 citations, replicated and extended continuously).

The critical property for CPML: Dung's frameworks compute acceptability **without preferences** — every attack succeeds. That is not what CPML wants. CPML needs attestations to attack each other, and the user's profile to decide which attack succeeds.

**Amgoud & Cayrol (2002).** "Inferring from inconsistency in preference-based argumentation frameworks", *Journal of Automated Reasoning* 29:125-169. Introduces Preference-based Argumentation Frameworks (PAFs). A PAF is `(Args, attacks, ≥)`. An argument `x` *defeats* argument `y` iff `x` attacks `y` AND `y` is not strictly preferred to `x`. Preferences thus filter which attacks succeed — exactly CPML's design pattern. Evidence rating: A (foundational, widely extended).
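The PAF defeat definition is small enough to state as code. A sketch, where the argument names and the numeric-rank encoding of the preference are invented:

```python
# Sketch of the PAF defeat filter (Amgoud & Cayrol 2002): an attack
# (x, y) succeeds only if y is NOT strictly preferred to x. Argument
# names and the rank encoding are invented for illustration.

def defeats(attacks, prefer):
    """Keep attack (x, y) only if y is not strictly preferred to x."""
    return {(x, y) for (x, y) in attacks if not prefer(y, x)}

# Two attestation-arguments attacking each other; the reader's profile
# strictly prefers `a`:
attacks = {("a", "b"), ("b", "a")}
rank = {"a": 2, "b": 1}
prefer = lambda x, y: rank[x] > rank[y]

print(defeats(attacks, prefer))  # {('a', 'b')}: b's counter-attack fails
```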

**Bench-Capon (2003).** "Persuasion in practical argument using value-based argumentation frameworks", *Journal of Logic and Computation* 13(3):429-448. Value-based Argumentation Frameworks (VAFs) extend Dung's framework with (a) a mapping from arguments to values and (b) an "audience" — an ordering over values. Different audiences produce different acceptable sets from the same argument graph. Bench-Capon cites Perelman explicitly: argumentation in practical reasoning always aims at an audience. Evidence rating: A (mathematical, widely cited across law and AI).

This matters for CPML because it is the most direct ancestor: one global graph of attestations and attacks, but the *acceptable set* depends on which audience (= which consensus profile) is evaluating.
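The audience-relativity point can be shown on a toy graph: one set of attacks, two audiences, two grounded extensions. Arguments, values, and audiences here are invented; `grounded` iterates Dung's characteristic function to its least fixed point:

```python
# Toy VAF (Bench-Capon 2003): the same attack graph yields different
# grounded extensions under different audiences. All arguments, values,
# and audiences are invented for illustration.

def grounded(args, defeats):
    """Least fixed point of Dung's characteristic function."""
    accepted = set()
    while True:
        new = {a for a in args
               if all(any((c, b) in defeats for c in accepted)
                      for b in args if (b, a) in defeats)}
        if new == accepted:
            return accepted
        accepted = new

def vaf_defeats(attacks, value_of, audience):
    """An attack succeeds unless its target promotes a strictly
    higher-ranked value (lower index in `audience` = more preferred)."""
    return {(x, y) for (x, y) in attacks
            if audience.index(value_of[y]) >= audience.index(value_of[x])}

args = {"a", "b"}
attacks = {("a", "b"), ("b", "a")}
value_of = {"a": "safety", "b": "liberty"}

for audience in (["safety", "liberty"], ["liberty", "safety"]):
    print(audience, grounded(args, vaf_defeats(attacks, value_of, audience)))
```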

**Cayrol & Lagasquie-Schiex (2005).** Bipolar Argumentation Frameworks add a *support* relation alongside attack, with derived notions like supported defeat and secondary defeat. This is relevant to CPML because validators can corroborate as well as contradict. But the literature itself is fragmented — Amgoud, Cayrol, et al. (2008) showed that several proposed BAF semantics produce incompatible extensions; Polberg & Oren (2014) document the landscape as "unstable". Evidence rating: B (multiple valid formalisations, no consensus semantics).

**Amgoud & Vesic (2011).** "A new approach for preference-based argumentation frameworks", *Annals of Mathematics and AI* 63:149-183. Shows that earlier PAF approaches produce conflicting extensions when attacks are asymmetric, and proposes applying preferences *at the semantics level* rather than at the attack level to preserve conflict-freeness. This is an important subtlety for CPML: the choice of where preferences enter changes what guarantees the system can offer.

**Modgil (2009).** "Reasoning about preferences in argumentation frameworks", *Artificial Intelligence* 173(9-10):901-934. Extended Argumentation Frameworks (EAFs) allow attacks on attacks. Preferences themselves become first-class arguments, which lets the system *argue about preferences* — meta-level reasoning. Directly relevant to CPML because consensus profiles will themselves be contested ("is EMA a credible validator for macro policy?" is itself an argument).

**Bodanza & Freidin (2023).** "Confronting value-based argumentation frameworks with people's assessment of argument strength", *Argument & Computation*. Empirical test of VAF predictions against human judgement, finding partial support but systematic deviation — people don't cleanly apply value-audience preferences. Evidence rating: B. Important caution: formalism may not match user intuition.

### What's unsolved or contested

- **Where to apply preferences.** Attack-level (Amgoud-Cayrol 2002) vs. semantics-level (Amgoud-Vesic 2011) give different guarantees; no consensus.
- **Support semantics.** BAF literature has not converged on a canonical semantics.
- **Meta-level preferences.** EAFs (Modgil 2009) allow arguing about preferences but at significant computational cost; scaling is an open problem.
- **Cognitive validity.** Whether any of these formalisms match human argument evaluation is contested (Bodanza & Freidin 2023; Rahwan et al. 2010 found Dung semantics partially match lab data).

### What's directly usable for CPML

- Use a PAF-style core: attestations + attacks, with preferences determining which attacks succeed per user.
- Adopt Bench-Capon's "audience" framing literally: a CPML profile **is** an audience in the VAF sense.
- Choose Amgoud-Vesic's semantics-level application of preferences; it preserves conflict-freeness which matters for downstream UX (verdicts must at least be coherent).
- Do not attempt meta-level argumentation (Modgil 2009) in v1; it is the right long-term direction but adds cost that exceeds v1 budget.
- Run a small human-subjects study (à la Bodanza & Freidin) before locking semantics: the formal "right" answer and the user-expected answer may differ, and Veritas should know.

### Citations

- Dung, P. M. (1995) *On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games*, Artificial Intelligence 77:321-357. https://doi.org/10.1016/0004-3702(94)00041-X
- Amgoud, L. & Cayrol, C. (2002) *Inferring from inconsistency in preference-based argumentation frameworks*, Journal of Automated Reasoning 29(2):125-169.
- Bench-Capon, T. J. M. (2003) *Persuasion in practical argument using value-based argumentation frameworks*, Journal of Logic and Computation 13(3):429-448.
- Cayrol, C. & Lagasquie-Schiex, M.-C. (2005) *On the acceptability of arguments in bipolar argumentation frameworks*, ECSQARU.
- Amgoud, L. & Vesic, S. (2011) *A new approach for preference-based argumentation frameworks*, Annals of Mathematics and AI 63:149-183.
- Modgil, S. (2009) *Reasoning about preferences in argumentation frameworks*, Artificial Intelligence 173(9-10):901-934.
- Bodanza, G. A. & Freidin, E. (2023) *Confronting value-based argumentation frameworks with people's assessment of argument strength*, Argument & Computation.

---

## 3. Epistemic community structure

### What the literature says

**Haas (1992).** "Introduction: Epistemic communities and international policy coordination", *International Organization* 46(1):1-35. JSTOR 2706951. Defines an epistemic community as "a network of professionals with recognized expertise and competence in a particular domain and an authoritative claim to policy-relevant knowledge within that domain". Four criteria: (a) shared normative and principled beliefs, (b) shared causal beliefs, (c) shared notions of validity, (d) a common policy enterprise. Evidence rating: A (foundational in IR theory, thousands of citations, concept widely replicated empirically).

For CPML this is decisive. Validators are not random experts. They are members of epistemic communities with shared causal and validity standards. A CPML profile is a selection over epistemic communities, not over individuals.

**Collins & Evans (2002).** "The Third Wave of Science Studies: Studies of Expertise and Experience", *Social Studies of Science* 32(2):235-296. DOI: 10.1177/0306312702032002003. The Third Wave introduces a normative theory of expertise distinguishing contributory expertise (can do the work) from interactional expertise (fluent in the language). Critically, it argues *against* the post-Kuhnian tendency to dissolve expertise into democratic decision-making — but it also recognises "experience-based expertise" possessed by laypeople in specific domains (Wynne's Cumbrian sheep farmers after Chernobyl).

For CPML this means: a consensus domain is not a credential whitelist. It is a community with its own validity standards. Experience-based experts (e.g., long-duration patients with rare disease, veteran operators in a trade) can constitute a legitimate epistemic community despite lacking credentials.

**Goldman (1999).** *Knowledge in a Social World*, Oxford University Press. Develops "veritistic" social epistemology: social practices are evaluated by whether they tend to produce true beliefs. Goldman argues explicitly against pure relativism — some social practices are more truth-conducive than others — but also against pure individualism — individual cognition is cheap, testimony and expert networks do most epistemic work. Evidence rating: A (philosophical consensus reference; subsequent critical literature includes Kitcher 2011, Fricker 2007 on testimonial injustice).

**Condorcet Jury Theorem (1785) and modern generalisations.** Condorcet showed that if jurors are more likely than chance to vote correctly and vote independently, majority accuracy approaches 1 as the jury grows. List & Goodin (2001), *Journal of Political Philosophy* 9(3):277-306, generalise this to many alternatives. Dietrich & Spiekermann (SEP "Jury Theorems") catalogue the assumptions. Evidence rating: A (mathematical) for the theorem; B for its empirical applicability — the independence and competence-above-0.5 assumptions fail often in practice, especially when jurors share information sources.

This is the formal backbone of any "pool validators, compose verdict" design. But the failure conditions are exactly the conditions CPML will routinely face: dependent validators (everyone cites each other), competence below 0.5 on genuinely novel claims.
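Both the convergence claim and its fragility can be checked directly with the exact binomial tail. Juror counts and competences below are illustrative only; real validators also violate the independence assumption, which this sketch does not model:

```python
# Sketch: exact Condorcet majority accuracy for n independent jurors of
# competence p (binomial tail). Illustrative numbers only; dependence
# between jurors, the common real-world case, is not modelled here.

from math import comb

def majority_accuracy(n, p):
    """P(strict majority of n independent jurors votes correctly)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

print(round(majority_accuracy(11, 0.6), 3))   # above single-juror 0.6
print(round(majority_accuracy(101, 0.6), 3))  # approaching 1
print(round(majority_accuracy(101, 0.45), 3)) # below 0.45: amplifies error
```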

### What's unsolved or contested

- **How to delimit an epistemic community operationally.** Haas's four criteria are qualitative. Making them machine-checkable (for validator onboarding) is non-trivial.
- **Credential vs. experience expertise.** Collins & Evans ignited a still-unresolved debate (Jasanoff 2003, Wynne 2003 critiques). CPML sits downstream of this debate.
- **Jury theorem applicability.** How much correlation between validators can a CPML profile tolerate before aggregation becomes theatre? No clean answer.

### What's directly usable for CPML

- Model consensus domains as epistemic-community-indexed, not credential-indexed.
- Allow experience-based and credentialed validators in the same domain but as distinct types, with profiles specifying weights.
- Treat Goldman's veritism as the epistemological baseline: CPML is in the business of truth-conducive practices, not perspective-parity.
- Borrow Condorcet structure but *do not claim* jury-theorem convergence in system documentation — the independence assumption will fail.

### Citations

- Haas, P. M. (1992) *Introduction: Epistemic communities and international policy coordination*, International Organization 46(1):1-35.
- Collins, H. M. & Evans, R. (2002) *The Third Wave of Science Studies: Studies of Expertise and Experience*, Social Studies of Science 32(2):235-296. DOI: 10.1177/0306312702032002003
- Goldman, A. I. (1999) *Knowledge in a Social World*, Oxford University Press.
- List, C. & Goodin, R. E. (2001) *Epistemic democracy: Generalizing the Condorcet jury theorem*, Journal of Political Philosophy 9(3):277-306.

---

## 4. Political-compass and values-test methodology

### What the literature says

The comparison class for CPML profiles is the family of self-assessment instruments that map users onto a low-dimensional value space.

**Eysenck (1954, 1975).** *The Psychology of Politics* and later factor-analytic work. Two-factor model: R-factor (radical-conservative, roughly left-right economic) and T-factor (tender-tough-minded). The T-factor was proposed to reflect personality projection (introversion-extraversion) onto political attitudes. Evidence rating: B (factor structure replicated in multiple countries — Germany, Sweden, France, Japan — but with notable cross-cultural variation; note also that a 2019 King's College London enquiry found Eysenck's separate personality-cancer research unsafe, after which 14 of his papers were retracted; this does not directly impeach the political-attitudes work, but it counsels general caution).

**Political Compass (politicalcompass.org, 2001-).** Two axes: economic left-right, social authoritarian-libertarian. Widely used online but *not* peer-reviewed in its operationalisation. Mitchell and other critics note the horizontal axis conflates distinct ideas (economic freedom, state role, property rights). Röttger et al. (2024, "Political Compass or Spinning Arrow? Towards More Meaningful Evaluations for Values and Opinions in LLMs", ACL 2024, https://aclanthology.org/2024.acl-long.816/) demonstrate that the test's item wording and forced-choice design produce unstable measurements for LLMs and, by extension, cast doubt on its reliability for humans. Evidence rating: D (popular instrument, weak psychometric validation).

**Moral Foundations Theory (Haidt & Graham 2007; Graham et al. 2009, 2011, 2013).** Five (later six) moral foundations: Care/harm, Fairness/cheating, Loyalty/betrayal, Authority/subversion, Sanctity/degradation, (Liberty/oppression). The Moral Foundations Questionnaire (MFQ) operationalises this. Large cross-cultural studies (Graham et al. 2011, *Journal of Personality and Social Psychology*; PMC 3116962). Evidence rating: B for predictive power of political orientation (consistently replicated); C for the factor structure itself — Iurino & Saucier (2020, *Journal of Cross-Cultural Psychology*), Davis et al. (2016), and Wormley et al. (2025, *PSPB*) all find the five-factor model fits poorly across many samples, with two-factor "individualising/binding" often fitting better. Haidt himself (Graham et al. 2013) moved toward treating MFT as a pragmatic rather than strictly psychometric theory. For CPML the relevant lesson is: if MFT's structure is contested after two decades of work and millions of respondents, the ambition to capture all value-difference in 5-10 orthogonal dimensions is almost certainly too simple.

**Inglehart-Welzel cultural map.** Based on World Values Survey, two axes: traditional-secular/rational and survival-self-expression, explaining "more than 70 percent of the cross-national variance" in factor analysis of ten indicators. Evidence rating: B (large-sample, replicated across 7 waves from 1981-2022, but dimensional reduction may oversimplify — see Welzel 2013 for refinements).

**Pew Research Political Typology (2021 version).** Uses 27 social and political value items and weighted k-medoids clustering (WeightedCluster R package). Produces nine politically meaningful clusters (Faith and Flag Conservatives, Committed Conservatives, Populist Right, Ambivalent Right, Stressed Sideliners, Outsider Left, Democratic Mainstays, Establishment Liberals, Progressive Left). Evidence rating: B (rigorous methodology, public replication package, but US-specific and re-run every ~4 years so not a stable ontology).

**Critical finding:** All of these instruments share failure modes relevant to CPML:
1. Low-dimensional reductions lose information in exactly the cases that matter (edge cases, disagreement).
2. Item wording heavily influences results (Röttger et al. 2024; multiple MFQ re-analyses).
3. Clusters are local to culture and time — a 2021 US typology does not transfer to 2026 Indonesia.
4. None of these instruments *compose*: Inglehart's two axes and MFT's five foundations are not linearly combinable.

### What's unsolved or contested

- **Optimal dimensionality.** 2 axes (Inglehart, Political Compass) vs 5-6 (MFT) vs 9 clusters (Pew) vs. arbitrary. No theoretical answer exists; all are pragmatic.
- **Cross-cultural invariance.** The consistent finding is that instruments designed in WEIRD (Henrich, Heine, Norenzayan 2010) populations don't transfer cleanly.
- **Item-level vs dimensional measurement.** Whether to aggregate at item level (Pew) or factor level (MFT, Eysenck) changes results.

### What's directly usable for CPML

- **Reject the single-axis or fixed-axis model.** A CPML profile should not force users into a global value space. It should allow arbitrary topic-scoped trust expression.
- **Do not define "the" CPML dimensions.** Let communities propose consensus-domain schemas as SKOS concept schemes; let profiles compose from them.
- **Adopt Pew's methodological honesty:** cluster memberships are probabilistic and time-bound. CPML profiles should have explicit expiry/rotation.
- **Localise:** WVS and MFT both show dimensional structure varies by culture. CPML must not bake in one culture's axis.

### Citations

- Eysenck, H. J. (1954) *The Psychology of Politics*, Routledge & Kegan Paul.
- Haidt, J. & Graham, J. (2007) *When morality opposes justice: Conservatives have moral intuitions that liberals may not recognize*, Social Justice Research 20:98-116.
- Graham, J., Haidt, J. & Nosek, B. A. (2009) *Liberals and conservatives rely on different sets of moral foundations*, Journal of Personality and Social Psychology 96(5):1029-1046.
- Graham, J. et al. (2011) *Mapping the moral domain*, Journal of Personality and Social Psychology 101(2):366-385. PMC 3116962.
- Graham, J. et al. (2013) *Moral Foundations Theory: The pragmatic validity of moral pluralism*, Advances in Experimental Social Psychology 47:55-130.
- Inglehart, R. & Welzel, C. (2005) *Modernization, Cultural Change, and Democracy*, Cambridge University Press.
- Pew Research Center (2021) *Beyond Red vs. Blue: The Political Typology*. https://www.pewresearch.org/politics/2021/11/09/beyond-red-vs-blue-the-political-typology/
- Röttger, P. et al. (2024) *Political Compass or Spinning Arrow? Towards More Meaningful Evaluations for Values and Opinions in LLMs*, ACL 2024. https://aclanthology.org/2024.acl-long.816/
- Henrich, J., Heine, S. J. & Norenzayan, A. (2010) *The weirdest people in the world?*, Behavioral and Brain Sciences 33(2-3):61-83.
- Wormley, A. S. et al. (2025) *Measuring Morality: An Examination of the MFQ's Factor Structure*, Personality and Social Psychology Bulletin.

---

## 5. Filter bubbles, echo chambers, epistemic bubbles

### What the literature says

This is the strand where Veritas most needs to be careful: the popular narrative and the empirical literature disagree sharply.

**Pariser (2011).** *The Filter Bubble: What the Internet Is Hiding from You*, Penguin. Coins "filter bubble": algorithmic personalisation isolates users from information that would challenge them. Famous Egypt-search example. Evidence rating: D (the book is a thesis, not a study; the concept is evocative, but Pariser himself did not run the experiments that would test it).

**Sunstein (2001, 2007, 2017).** *Republic.com*, *Republic.com 2.0*, *#Republic: Divided Democracy in the Age of Social Media*. Argues that unfettered personalisation (Negroponte's "Daily Me") produces group polarisation via homophilous exposure. Evidence rating: C (draws on social psychology of group polarisation, which is well-replicated, but extrapolates from lab findings to platform dynamics without direct empirical measurement).

**Nguyen (2020).** "Echo Chambers and Epistemic Bubbles", *Episteme* 17(2):141-161. Critical philosophical contribution: distinguishes *epistemic bubble* (other voices are accidentally absent) from *echo chamber* (other voices are actively discredited). The distinction matters because the interventions differ: exposure fixes bubbles but reinforces chambers. Evidence rating: B (conceptual work, not empirical, but widely adopted and consistent with later empirical findings).

**Bakshy, Messing & Adamic (2015).** "Exposure to ideologically diverse news and opinion on Facebook", *Science* 348(6239):1130-1132. DOI: 10.1126/science.aaa1160. Study of 10.1 million US Facebook users: algorithmic ranking reduced cross-cutting content exposure by 15%, but *users' own click choices* reduced it by a further 70%. Conclusion: individual choice dominates algorithm. Evidence rating: B (single platform, Facebook-employee authors raise COI concerns, but scale and data access were unprecedented).

**Barberá et al. (2015).** "Tweeting from Left to Right: Is Online Political Communication More Than an Echo Chamber?", *Psychological Science* 26(10):1531-1542. 3.8M Twitter users, 150M tweets. Political topics showed significant homophily; non-political topics did not; liberals were more likely than conservatives to engage in cross-ideological dissemination. Evidence rating: B.

**Guess (2021).** "(Almost) everything in moderation: New evidence on Americans' online media diets", *American Journal of Political Science* 65(4):1007-1022. Passive-tracking data shows most Americans' media diets cluster centrally and are more diverse than filter-bubble narrative predicts. Evidence rating: B.

**Guess, Nyhan, et al. (2020).** "Exposure to untrustworthy websites in the 2016 US election", *Nature Human Behaviour* 4(5):472-480. Tracking data: fake-news consumption was concentrated in a small subset of heavy consumers, heavily skewed toward older Trump supporters. Most users saw almost none. Evidence rating: B.

**Recent systematic reviews (2024-2025).** Three converging findings:
- Ross Arguedas et al. (Reuters Institute 2022), *Echo chambers, filter bubbles, and polarisation: a literature review* — most users have more diverse diets than the narrative suggests.
- Liu et al. (2024) *PNAS* 121(10) — experimental YouTube manipulation with filter-bubble-style recommendations showed no detectable short-term polarisation effects.
- A systematic review in *Journal of Computational Social Science* (2025) of 129 echo-chamber studies finds method-specific results: network-homophily methods support the hypothesis; content-exposure and survey methods do not.

### What's unsolved or contested

- **Platform vs. behaviour.** Empirical literature suggests user choice matters more than algorithm; but this depends on platform, topic, and time horizon.
- **Echo chambers (active distrust of outside sources) do appear in specific subcommunities** — QAnon, anti-vaccine networks — but the population-level "everyone is in a filter bubble" claim is not supported.
- **Long-run effects.** Most empirical studies are short-horizon; long-run cumulative exposure effects are unclear.
- **Cross-platform.** Most studies look at a single platform; cross-platform diet is less studied and harder.

### What's directly usable for CPML

- **Veritas should not market CPML as "bursting your filter bubble".** The problem is narrower than the popular narrative suggests.
- **Nguyen's distinction is load-bearing.** CPML's actual epistemic value-add targets echo chambers (active distrust), not bubbles (accidental absence). Profile transparency helps both, but the mechanism differs: exposure to cross-cutting attestations doesn't reach an echo-chamber user who has pre-committed to distrust the entire validator class.
- **Individual-choice dominance** (Bakshy 2015) means CPML must make the good choice the cheap one in the UX. A profile that's painful to edit will drift toward defaults, and defaults will eventually become filter-bubble-shaped.
- Build in **observability** of profile divergence from cross-cutting attestations — show users when their profile is filtering out signal.
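One way to sketch that observability signal, assuming hypothetical attestation records and a hypothetical convention that unlisted validators carry weight zero:

```python
# Sketch of the observability idea: report what fraction of a claim's
# attestations the profile's validator weights effectively filter out.
# Attestation records, weights, and the threshold are hypothetical.

def filtered_fraction(attestations, weights, threshold=0.0):
    """Share of attestations whose validator weight is at/below threshold."""
    dropped = [a for a in attestations
               if weights.get(a["validator"], 0.0) <= threshold]
    return len(dropped) / len(attestations) if attestations else 0.0

attestations = [
    {"validator": "who.int", "verdict": "supported"},
    {"validator": "example-blog.net", "verdict": "contested"},
]
weights = {"who.int": 0.9}  # example-blog.net unlisted -> weight 0.0

print(filtered_fraction(attestations, weights))  # 0.5
```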

### Citations

- Pariser, E. (2011) *The Filter Bubble: What the Internet Is Hiding from You*, Penguin.
- Sunstein, C. R. (2017) *#Republic: Divided Democracy in the Age of Social Media*, Princeton University Press.
- Nguyen, C. T. (2020) *Echo Chambers and Epistemic Bubbles*, Episteme 17(2):141-161.
- Bakshy, E., Messing, S. & Adamic, L. A. (2015) *Exposure to ideologically diverse news and opinion on Facebook*, Science 348(6239):1130-1132. DOI: 10.1126/science.aaa1160
- Barberá, P. et al. (2015) *Tweeting From Left to Right*, Psychological Science 26(10):1531-1542.
- Guess, A. M. (2021) *(Almost) everything in moderation*, American Journal of Political Science 65(4):1007-1022.
- Guess, A. M. et al. (2020) *Exposure to untrustworthy websites in the 2016 US election*, Nature Human Behaviour 4(5):472-480.
- Ross Arguedas, A. et al. (2022) *Echo chambers, filter bubbles, and polarisation: a literature review*, Reuters Institute for the Study of Journalism.

---

## 6. Modal logic for domain-indexed truth

### What the literature says

CPML's claim-composition semantics resembles "claim `C` holds in frame `F` at time `t`". Modal logic is the natural formal home.

**Kripke (1959, 1963).** Foundational papers introducing possible-worlds semantics for modal logic. A Kripke frame is `(W, R)`: worlds + accessibility relation. A formula is valid on a frame iff true at every world under every valuation. Evidence rating: A (mathematical).

**Hintikka (1962).** *Knowledge and Belief: An Introduction to the Logic of the Two Notions*, Cornell University Press. First systematic epistemic logic. Uses "model sets" rather than full possible worlds. `Kₐ φ` reads "agent `a` knows `φ`". Hintikka insists knowledge is factive (`Kₐ φ → φ`); belief is not. Evidence rating: A (foundational).

**Hybrid logic (Areces, Blackburn, Braüner et al., 2000s).** Adds **nominals** — propositional symbols true at exactly one world, effectively naming worlds. Provides expressive power for "at world `i`, `φ` holds". The Stanford Encyclopedia entry (Braüner 2006 onward, https://plato.stanford.edu/entries/logic-hybrid/) surveys the field. Evidence rating: A.

For CPML, hybrid logic is the single best formal match. The structure `@_F C` ("claim `C` holds in frame `F`") is exactly the hybrid-logic `@_i φ` construction where `i` is a nominal for a domain frame.

**Dynamic Epistemic Logic (Baltag, van Benthem, Moss, Plaza, 1998-).** Stanford Encyclopedia entry (Baltag & Renne). Extends epistemic logic with update operators for announcements, belief revision, and plausibility ordering. Van Benthem & Smets (2015) give a unified treatment. Handles multi-agent belief revision on shared information. Evidence rating: A.

Relevant to CPML because when new attestations arrive, they should update *all* consumers' per-user verdicts — this is formally a model-update operation. DEL gives rigorous semantics for that.

### What's unsolved or contested

- **Scalability.** DEL and hybrid logic have good theoretical properties, but practical reasoners do not scale to web-scale numbers of agents and claims. The CPML implementation will use symbolic or approximate reasoning, not full DEL.
- **Probabilistic vs. qualitative.** Extensions of modal logic to probability (Halpern 1990, probabilistic DEL) are an active research area, but no canonical system has emerged.
- **Multi-agent common knowledge** is computationally expensive: satisfiability for multi-agent epistemic logics is PSPACE-complete, and jumps to EXPTIME-complete once a common-knowledge operator is added (Halpern & Moses 1992).

### What's directly usable for CPML

- **Use hybrid-logic notation as the documentation formalism.** A claim's scope is `@_F C` — explicit, testable, unambiguous.
- **Frames are SKOS concept schemes.** Domain-indexed truth becomes: "`C` holds in concept-scheme-domain `F` according to validators credentialed in `F`".
- **Do not implement a DEL reasoner in v1.** The formal semantics is a specification surface; the implementation is an efficient subset (closer to description logic or lightweight rule systems).
- **Be explicit that time-indexing matters.** Attestations expire; profiles change. The logic has to carry timestamps, not just frames.
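The documentation formalism above can be made concrete with a toy evaluator. This is a minimal sketch, not CPML spec: frame names act as hybrid-logic nominals, each frame carries the claims its validators currently attest, and every attestation carries an expiry so the `@_F C` evaluation is time-indexed as the last bullet requires. All frame, claim, and validator names here are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Attestation:
    claim: str
    frame: str       # the domain frame F, acting as a nominal
    expires_at: int  # attestations expire; the logic carries timestamps

# Toy model of domain-indexed truth: each frame maps to its live attestations.
MODEL = {
    "medicine": [Attestation("vaccine-X-efficacious", "medicine", 2000)],
    "law":      [Attestation("contract-Y-enforceable", "law", 1500)],
}

def at(frame: str, claim: str, t: int) -> bool:
    """Evaluate the hybrid-logic statement @_frame claim at time t."""
    return any(a.claim == claim and a.expires_at > t
               for a in MODEL.get(frame, []))

print(at("medicine", "vaccine-X-efficacious", 1000))  # True: attestation live
print(at("medicine", "vaccine-X-efficacious", 2500))  # False: expired
print(at("law", "vaccine-X-efficacious", 1000))       # False: wrong frame
```

The point of the sketch is that frame-scoping and time-indexing are both just lookup parameters, which is why the formal semantics can be a specification surface while the implementation stays simple.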

### Citations

- Kripke, S. A. (1963) *Semantical considerations on modal logic*, Acta Philosophica Fennica 16:83-94.
- Hintikka, J. (1962) *Knowledge and Belief: An Introduction to the Logic of the Two Notions*, Cornell University Press.
- Braüner, T. (2006, 2022) *Hybrid Logic*, Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/logic-hybrid/
- Baltag, A. & Renne, B. (2016) *Dynamic Epistemic Logic*, Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/dynamic-epistemic/
- van Benthem, J. & Smets, S. (2015) *Dynamic Logics of Belief Change*, in Handbook of Logics for Knowledge and Belief.

---

## 7. Recommender-system personalisation with explicit preferences

### What the literature says

The recommender-systems literature is largely about *implicit* preference modelling — infer preferences from clicks. CPML inverts this: explicit, user-edited, machine-readable. The explicit-preference strand is smaller but directly relevant.

**Herlocker, Konstan & Riedl (2000).** "Explaining collaborative filtering recommendations", *CSCW 2000*. First systematic study of explanation as a feature. Found that simple and transparent explanations (neighbour ratings, prior accuracy) outperformed complex (full neighbour graph) ones for user trust. Evidence rating: B.

**Herlocker, Konstan, Terveen & Riedl (2004).** "Evaluating collaborative filtering recommender systems", *ACM TOIS* 22(1):5-53. DOI: 10.1145/963770.963772. A methodological landmark: accuracy metrics cluster into equivalence classes; MAE/RMSE improvements may not track user-perceived quality. The paper explicitly argues that *explaining* recommendations and *data efficiency* may matter more than raw accuracy. Evidence rating: A (foundational, widely cited).

**Kay (2006); Kay & Kummerfeld (2002, 2013).** "Scrutable user models". Personis user model server (Kay, Kummerfeld & Lauder 2002). Scrutability: users have the right to inspect, understand, and modify the model the system holds about them. Evidence rating: B (multiple deployments, long-running research programme at Sydney).

**Tintarev & Masthoff (2007, 2012).** "A survey of explanations in recommender systems" and "Evaluating the effectiveness of explanations for recommender systems". Characterises explanation by goals: transparency, scrutability, trust, effectiveness, persuasiveness, efficiency, satisfaction. Evidence rating: A for the typology; B for individual findings.

**Recent work on natural-language scrutable user models** (Radlinski et al. 2022, "Natural Language User Profiles for Transparent and Scrutable Recommendation", arXiv:2205.09403, SIGIR 2022; Ramos et al. 2024, arXiv:2402.05810). Moves toward user profiles expressed in natural language which LLMs consume. Relevant to CPML: a CPML profile could have a human-readable layer *and* a machine-parseable layer, with equivalence between them.

### What's unsolved or contested

- **Scrutability vs. personalisation trade-off.** Kay herself (2013) notes that fully scrutable models are often less predictive than black-box models. CPML likely accepts this trade-off — Veritas is optimising for legitimacy, not conversion rate.
- **Profile maintenance burden.** Empirical studies consistently find users do not edit explicit profiles unless the cost is very low. CPML must minimise maintenance cost or accept drift.
- **Explanation format.** Natural language, visualisations, feature lists — no consensus.

### What's directly usable for CPML

- **Scrutability is a primary requirement**, not a nice-to-have. Follow Kay's long-running research: the profile must be user-viewable, user-editable, user-exportable.
- **Provide both a human-readable and a canonical-machine form.** Radlinski et al.'s recent work suggests LLMs can mediate equivalence, but the canonical form should be the machine one (signing, reproducibility).
- **Herlocker's finding about simple explanations** argues against over-engineering: CPML's explanation of "why this verdict" should be short, concrete, and cite specific attestations.
- **Default-but-edit pattern.** Drow's observation about maintenance cost is backed by the literature: ship a curated default profile per topic area, and make editing cheap.
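The dual-layer idea above can be sketched in a few lines: the canonical form is the machine-parseable structure, and the human-readable layer is *derived* from it, keeping Herlocker's short-and-concrete constraint. Profile shape and validator names are hypothetical assumptions, not CPML spec.

```python
# Canonical machine form: topic -> validator -> weight (illustrative names).
profile = {
    "medicine": {"v:cochrane": 0.7, "v:fda": 0.3},
}

def explain(profile: dict) -> str:
    """Derive the human-readable layer from the canonical machine form.
    Short and concrete per Herlocker's finding: cite specific validators
    and weights, nothing more."""
    lines = []
    for topic, validators in profile.items():
        ranked = sorted(validators.items(), key=lambda kv: -kv[1])
        names = ", ".join(f"{v} ({w:.0%})" for v, w in ranked)
        lines.append(f"For {topic}, you trust: {names}.")
    return "\n".join(lines)

print(explain(profile))
# For medicine, you trust: v:cochrane (70%), v:fda (30%).
```

Deriving the readable layer rather than maintaining it separately means the two layers cannot drift apart, which is the equivalence property the Radlinski et al. line of work is after.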

### Citations

- Herlocker, J. L., Konstan, J. A. & Riedl, J. (2000) *Explaining collaborative filtering recommendations*, CSCW 2000. DOI: 10.1145/358916.358995
- Herlocker, J. L., Konstan, J. A., Terveen, L. G. & Riedl, J. T. (2004) *Evaluating collaborative filtering recommender systems*, ACM TOIS 22(1):5-53. DOI: 10.1145/963770.963772
- Kay, J. (2006) *Scrutable adaptation: Because we can and must*, AH 2006.
- Kay, J., Kummerfeld, B. & Lauder, P. (2002) *Personis: A server for user models*, AH 2002.
- Tintarev, N. & Masthoff, J. (2012) *Evaluating the effectiveness of explanations for recommender systems*, UMUAI 22(4-5):399-439.
- Radlinski, F. et al. (2022) *On Natural Language User Profiles for Transparent and Scrutable Recommendation*, arXiv:2205.09403, SIGIR 2022.

---

## 8. Non-monotonic reasoning and truth maintenance

### What the literature says

CPML will need to retract verdicts when attestations are revoked, superseded, or contradicted. That is non-monotonic reasoning with truth maintenance.

**Reiter (1980).** "A logic for default reasoning", *Artificial Intelligence* 13:81-132. Default logic. A default rule `A : B / C` reads "if `A` is believed and `B` is consistent with what's believed, infer `C`". The logic is defined in terms of *extensions* — fixed points under the operator. Evidence rating: A (foundational). Downstream: Łukaszewicz (1988), Brewka (1991) on prioritised defaults; Antoniou (1999) textbook treatment.

**Doyle (1979).** "A truth maintenance system", *Artificial Intelligence* 12(3):231-272. The Justification-based TMS: each belief has a set of justifications; retracting a justification triggers propagation to dependents. JTMS is a dependency-tracker on top of an inference system. Evidence rating: A (seminal).

**de Kleer (1986).** "An assumption-based TMS", *Artificial Intelligence* 28(2):127-162. ATMS explicitly tracks assumption sets, enabling free context-switching and simultaneous exploration of multiple worlds. Better fit than JTMS for CPML because users are effectively reasoning under different assumption sets (different consensus profiles = different assumptions). Evidence rating: A.

**Multi-agent belief revision / AGM.** Alchourrón, Gärdenfors & Makinson (1985) — the classical belief-revision postulates (closure, inclusion, vacuity, success, consistency, extensionality). Extensions to the multi-agent setting (Baltag & Smets 2008; Dragoni & Giorgini 1997) are harder: there is no canonical answer for how two agents with different belief sets should merge.

### What's unsolved or contested

- **Revision vs. update.** Katsuno & Mendelzon (1991) distinguished belief revision (correcting belief about a static world) from update (tracking a changing world). CPML attestations can do both — "validator revoked attestation" (revision) vs. "fact changed" (update) — and the semantics differ.
- **Multi-agent merge.** No clean answer for how to merge attestation sets from distrusting validators. List & Pettit's work (see §9) shows impossibility results.
- **Priority conflicts.** Prioritised default logics (Brewka 1991) have multiple non-equivalent formulations.

### What's directly usable for CPML

- **Model attestations as defaults with justifications.** An attestation is "claim `C` in frame `F` holds by default given validator `V`'s endorsement, unless a higher-weight contradicting attestation exists".
- **Use ATMS-style architecture.** Each user's profile is effectively a distinct assumption set; the underlying attestation store is shared. ATMS is the 1986 idea that most closely matches this architecture.
- **Distinguish revocation from contradiction** in the spec. Revocation (validator withdraws attestation) should cascade deterministically; contradiction is resolved per-profile.
- **Do not attempt AGM-style multi-agent merge.** The impossibility results suggest it is not well-defined.
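The first three bullets can be sketched together. This is a hedged toy, not the CPML algorithm: an attestation holds by default unless revoked; revocation removes it deterministically for everyone, while contradiction is resolved per-profile by the user's validator weights. Types and field names are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Attestation:
    claim: str
    frame: str
    validator: str
    polarity: bool       # True = endorses the claim, False = contradicts it
    revoked: bool = False

def compose_verdict(claim, frame, store, weights):
    """Per-profile composition. Revocation cascades deterministically
    (revoked attestations simply vanish from the default set);
    contradiction is resolved by the profile's validator weights."""
    live = [a for a in store
            if a.claim == claim and a.frame == frame and not a.revoked]
    if not live:
        return "no-verdict"
    score = sum(weights.get(a.validator, 0.0) * (1 if a.polarity else -1)
                for a in live)
    return "holds" if score > 0 else "contested" if score == 0 else "rejected"

store = [Attestation("C", "F", "v1", True),
         Attestation("C", "F", "v2", False)]
print(compose_verdict("C", "F", store, {"v1": 0.6, "v2": 0.4}))  # holds
store[0].revoked = True  # v1 withdraws: the default it justified retracts
print(compose_verdict("C", "F", store, {"v1": 0.6, "v2": 0.4}))  # rejected
```

Note the ATMS flavour: the shared `store` is one attestation base, and each `weights` dict is a distinct assumption set under which verdicts are recomputed, with no cross-profile merge attempted.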

### Citations

- Reiter, R. (1980) *A logic for default reasoning*, Artificial Intelligence 13:81-132.
- Doyle, J. (1979) *A truth maintenance system*, Artificial Intelligence 12(3):231-272.
- de Kleer, J. (1986) *An assumption-based TMS*, Artificial Intelligence 28(2):127-162.
- Alchourrón, C. E., Gärdenfors, P. & Makinson, D. (1985) *On the logic of theory change: Partial meet contraction and revision functions*, Journal of Symbolic Logic 50(2):510-530.
- Katsuno, H. & Mendelzon, A. O. (1991) *On the difference between updating a knowledge base and revising it*, KR 1991.

---

## 9. Preference aggregation and social choice

### What the literature says

**Arrow (1951, 1963).** *Social Choice and Individual Values*. Impossibility theorem: no social-welfare function can satisfy *unrestricted domain*, *Pareto efficiency*, *independence of irrelevant alternatives*, and *non-dictatorship* simultaneously when there are 3+ alternatives. Evidence rating: A (mathematical, Nobel 1972).

**Condorcet (1785).** Majority-wise pairwise comparison can produce cycles (A>B, B>C, C>A), the Condorcet paradox. Any method satisfying the Condorcet-winner criterion is a Condorcet method. Schulze, Ranked Pairs, Minimax all meet it. Evidence rating: A.

**List & Pettit (2002, 2004).** "Aggregating sets of judgments: An impossibility result", *Economics and Philosophy* 18(1):89-110; follow-up with Dietrich extends. A *judgment aggregation* analogue of Arrow: no aggregation rule can satisfy universal domain, collective rationality, anonymity, and systematicity simultaneously. The doctrinal paradox (Kornhauser & Sager 1993) — court panels voting on premises vs. conclusions can produce inconsistent outcomes. Evidence rating: A.

For CPML: pooling validator attestations into a per-user verdict is a judgment aggregation problem. The impossibility results mean *some desirable property must be sacrificed*. CPML's design choice — each user composes their own verdict with their own profile — sacrifices systematic agreement across users (on purpose) to preserve within-user consistency.

**Dietrich & List (2007, 2008).** Extended framework, "general propositional-wise" aggregation. Shows the impossibility is generic: essentially any sufficiently rich judgment-aggregation problem has no clean solution.

**Condorcet Jury Theorem (1785, modern: Grofman, Owen & Feld 1983; List & Goodin 2001).** If validators vote independently and are correct more often than chance, majority accuracy grows with group size. But independence is the assumption that fails in practice: validators read each other's work.

### What's unsolved or contested

- **Which property to sacrifice.** Arrow, List-Pettit, and Dietrich-List all describe a trade space. No "right" choice exists; design must pick.
- **Strategic voting.** Gibbard-Satterthwaite: every non-dictatorial voting rule over 3+ alternatives is manipulable. Validators in CPML face similar incentives.
- **Dependence.** Most aggregation results assume voter independence; real-world dependence (shared sources, social pressure) is under-modelled.

### What's directly usable for CPML

- **Accept per-user composition as a feature, not a bug.** CPML does not aggregate across users — it composes attestations per user. This sidesteps Arrow / List-Pettit impossibilities by refusing to produce a global social verdict.
- **Document which properties are sacrificed explicitly.** Veritas must be honest: "CPML sacrifices cross-user agreement to preserve within-user coherence and user sovereignty." This is not a hidden limitation; it is a documented design choice.
- **Use Condorcet-style pairwise logic in per-profile composition** when weights are commensurable; fall back to lexicographic priority when they are not.
- **Model strategic behaviour.** Validators and profile-publishers can game weights. A minimal defence: signed, non-repudiable profiles with reputation tracking, modelled after Richardson-Agrawal-Domingos (2003) but without their global-trust assumption.
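The third bullet's composition rule can be sketched as follows: weighted pairwise (Condorcet-style) contests over verdict options when weights are commensurable, and a lexicographic fallback when no Condorcet winner exists (e.g. under a cycle). Option names, rankings, and weights are illustrative assumptions, not CPML spec.

```python
def condorcet_winner(options, rankings, weights):
    """rankings: {validator: [best, ..., worst]}; weights: {validator: float}.
    Returns the option that beats every other in weighted pairwise contests,
    or None when there is a cycle or tie (the Condorcet paradox case)."""
    def beats(a, b):
        margin = sum(w * (1 if rankings[v].index(a) < rankings[v].index(b)
                          else -1)
                     for v, w in weights.items())
        return margin > 0
    for x in options:
        if all(beats(x, y) for y in options if y != x):
            return x
    return None

def compose(options, rankings, weights, priority):
    winner = condorcet_winner(options, rankings, weights)
    if winner is not None:
        return winner
    # Lexicographic fallback: the highest-priority validator's top choice.
    return rankings[priority[0]][0]

# Commensurable weights, clean winner:
rankings = {"v1": ["holds", "contested", "rejected"],
            "v2": ["rejected", "contested", "holds"]}
print(compose(["holds", "contested", "rejected"], rankings,
              {"v1": 2.0, "v2": 1.0}, ["v1", "v2"]))  # holds
```

A Condorcet cycle (A>B>C, B>C>A, C>A>B at equal weights) makes `condorcet_winner` return `None`, at which point the lexicographic priority list decides — exactly the fallback the bullet prescribes.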

### Citations

- Arrow, K. J. (1963) *Social Choice and Individual Values*, 2nd ed., Yale University Press.
- List, C. & Pettit, P. (2002) *Aggregating sets of judgments: An impossibility result*, Economics and Philosophy 18(1):89-110.
- Dietrich, F. & List, C. (2007) *Arrow's theorem in judgment aggregation*, Social Choice and Welfare 29(1):19-33.
- Grofman, B., Owen, G. & Feld, S. L. (1983) *Thirteen theorems in search of the truth*, Theory and Decision 15:261-278.
- Kornhauser, L. A. & Sager, L. G. (1993) *The one and the many: Adjudication in collegial courts*, California Law Review 81(1):1-59.

---

## 10. Critique of "postmodern truth" framings

### What the literature says

CPML carries a political risk: if it is described carelessly, it reads as "everyone has their own truth". That is not what it is — but Veritas needs a careful public framing grounded in the anti-relativist analytic literature.

**Boghossian (2006).** *Fear of Knowledge: Against Relativism and Constructivism*, Oxford University Press. Three arguments against three constructivist theses: (a) facts-constructivism collapses into incoherence (the claim "all facts are socially constructed" is itself a constructed fact); (b) equal-validity relativism fails because distinct epistemic systems can disagree about what counts as evidence and thus cannot be coherently "equal"; (c) epistemic constructivism about justification fails because real disagreements are then impossible (two people "in different frames" can't disagree, they just talk past each other). Evidence rating: A (standard-reference analytic critique; Kusch 2006, Rorty's responses are the counterweight).

**Lynch (2012).** *In Praise of Reason: Why Rationality Matters for Democracy*, MIT Press. Explicitly addresses the worry that "reasons are relative": argues that epistemic norms are defended *practically* rather than non-circularly, on the grounds that some norms (intersubjective, repeatable, transparent) make coordination possible in ways others do not. Lynch allows "some truths are relative" in a limited sense (sentences about taste) without endorsing general relativism. Evidence rating: B (philosophical work, widely cited, contested by Rorty-successors).

**Kitcher (2011).** *Science in a Democratic Society*. Argues for "well-ordered science" — a normative framework where public input legitimately shapes scientific priorities *without* making truth relative. Evidence rating: B.

**Rini (2017).** "Fake news and partisan epistemology", *Kennedy Institute of Ethics Journal*. Engages with Nguyen-style echo-chamber analysis from a partisan-testimony angle. Evidence rating: C (essay, not empirical).

### What's unsolved or contested

- **Taxonomy of relativism.** Constructivism, perspectivism, relativism, contextualism: these words are used differently. Boghossian's targets are narrow; Rorty, Feyerabend, and contemporary standpoint theorists would all reject Boghossian's characterisation.
- **Whether CPML's stance on domain-indexed truth IS perspectivism.** This is a real question. "Claim holds in frame F" could be read as frame-relative truth (perspectivism) or frame-scoped truth (just specifying that different domains answer different questions). The former is philosophically loaded; the latter is mundane.

### What's directly usable for CPML

- **Public framing should follow Lynch, not Rorty.** CPML does NOT say "everyone has their own truth". It says "different validator communities have different standards of evidence, and users should be able to select which community's evidence standards apply when they compose a verdict". Those are distinct claims.
- **Boghossian's third argument (real disagreement requires shared frame) is a risk to the design.** If CPML profiles become so divergent that two users cannot disagree — they just live in different worlds — CPML has failed epistemically. A minimum: the underlying attestation store is shared; only the composition differs.
- **Explicitly disavow the "your truth, my truth" frame in the whitepaper.** Use the domain-indexed frame: "in medicine, the medical community's standards apply; in law, the legal community's; in markets, the market-information community's. You choose which community to trust for which topic — but the standards within each community are not up to you."

### Citations

- Boghossian, P. (2006) *Fear of Knowledge: Against Relativism and Constructivism*, Oxford University Press.
- Lynch, M. P. (2012) *In Praise of Reason: Why Rationality Matters for Democracy*, MIT Press.
- Kitcher, P. (2011) *Science in a Democratic Society*, Prometheus Books.
- Kusch, M. (2006) *A Sceptical Guide to Meaning and Rules*, Acumen.

---

## 11. Five design takeaways for CPML specification

1. **Model a CPML profile as a Value-Audience in the VAF sense, not as a political-compass coordinate.** Bench-Capon (2003) gives the exact formalism. The user is an audience; the audience orders values; attacks succeed or fail per audience. Reject fixed-dimensional value spaces (MFT, Political Compass, Inglehart) as universal schemas; let profiles compose from user-definable topic-scoped validator-weight specifications. Rationale: MFT's factor structure has failed to replicate at the 5-factor level across cultures after 20 years of effort (Wormley 2025, Iurino & Saucier 2020); no fixed small dimensionality will survive cross-cultural deployment.

2. **Use SKOS for topic scoping, and a PAF/VAF-based semantics that applies preferences at the semantics level (Amgoud & Vesic 2011).** This preserves conflict-freeness in per-user verdicts — a guarantee users will depend on. Reject the Amgoud-Cayrol 2002 attack-level approach because of its asymmetric-attack pathology (Amgoud & Vesic 2011 §3). Document the choice explicitly with a pointer to the paper.

3. **Model the profile as a scrutable user model (Kay 2006) with cryptographic signing (WoT RDF schema).** The profile is user-owned (fully viewable, editable, exportable, revocable), signed by the user's OpenPGP/Ed25519 key, and carries an explicit expiry. This is the union of Kay's scrutability research and Zimmermann's PGP WoT, both of which have 20+ years of deployment evidence. Expiry is not a bug: Pew's typology re-clusters every ~4 years because values drift.

4. **Composition must be per-user by architecture, not by policy. Sacrifice cross-user verdict uniformity on purpose and document it.** List & Pettit's impossibility theorem means CPML cannot have both (a) meaningful within-user coherence and (b) systematic cross-user agreement and (c) user autonomy. Pick (a) + (c); sacrifice (b). Document this choice in the whitepaper's epistemology section, citing Arrow 1963 and List-Pettit 2002 so academic reviewers see the awareness.

5. **Profiles are outward-flowing from the reader, not inward-flowing to the reader.** This is the PICS lesson. A labelling infrastructure that allowed organisations to publish labels which ISPs applied to users became (inevitably) a censorship stack. CPML must specify that profile authority is the reader's signing key, never the validator's or an aggregator's. Any "published profile" is advisory — the reader's key is the only source of authority on which profile applies to them.
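Takeaways 3 and 5 together imply a concrete profile shape: topic-scoped validator weights, an explicit expiry, and a signature whose key belongs to the reader alone. The sketch below is illustrative only — all field names are hypothetical, and an HMAC stands in for the Ed25519/OpenPGP signature takeaway 3 specifies, purely so the example runs on the standard library.

```python
import hashlib
import hmac
import json
import time

# Hypothetical CPML profile shape (not the spec): user-owned, topic-scoped
# validator weights, explicit expiry, explicit conflict-resolution rule.
profile = {
    "version": "0.1",
    "owner": "did:example:reader-123",                 # hypothetical ID
    "expires_at": int(time.time()) + 4 * 365 * 86400,  # ~4-year expiry
    "domains": {
        "medicine": {"validators": {"v:cochrane": 0.7, "v:fda": 0.3}},
        "law":      {"validators": {"v:bar-assoc": 1.0}},
    },
    "conflict_rule": "semantics-level-preference",     # per takeaway 2
}

def sign(profile: dict, key: bytes) -> str:
    # Canonical serialisation first, so the signature is reproducible.
    canonical = json.dumps(profile, sort_keys=True, separators=(",", ":"))
    return hmac.new(key, canonical.encode(), hashlib.sha256).hexdigest()

def verify(profile: dict, key: bytes, sig: str) -> bool:
    return hmac.compare_digest(sign(profile, key), sig)

reader_key = b"reader-only-secret"   # in CPML, the reader's signing key
sig = sign(profile, reader_key)
assert verify(profile, reader_key, sig)          # reader's key is authority
assert not verify(profile, b"aggregator", sig)   # nobody else can re-sign
```

The last two assertions are the PICS lesson in miniature: only the holder of the reader's key can produce a valid signature, so a validator- or aggregator-published profile can never be more than advisory.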

---

## 12. Three open research questions Veritas should explicitly flag as unsolved

**Q1. Echo-chamber escape.** Nguyen's (2020) distinction between epistemic bubbles and echo chambers is load-bearing. Exposure to cross-cutting attestations breaks bubbles. But for echo-chamber users — those with pre-committed distrust of entire validator classes — exposure reinforces the chamber. CPML's current design has no answer for the echo-chamber case. Is there a profile design that degrades gracefully for echo-chamber users without becoming paternalistic? This is an open research question, and Veritas should explicitly say so rather than claim CPML "solves" filter-bubble / echo-chamber dynamics.

**Q2. Validator dependence and the jury theorem failure.** Condorcet's jury theorem is the closest formal backbone for "pool validators, compose verdict". But its independence assumption almost always fails: validators read each other's work. No one in the literature has given a clean answer for "how much dependence can a pooled-validator verdict tolerate before it's theatre?" Empirical estimation of validator dependence (citation networks, shared-source analysis) is likely needed, and is a first-class research problem for Veritas rather than a deployment detail.

**Q3. Profile-drift and vocabulary rot.** P3P died because its vocabulary could not keep up with real privacy practices. FOAF still exists but is sparsely maintained. SKOS is alive but depends on institutional custodians. A CPML that relies on a single upstream vocabulary will decay on the P3P timeline. The question is: what governance structure keeps the consensus-domain vocabulary live (SKOS-extended, evolving, community-maintained) without producing PICS-style central-gatekeeper capture? This is a sociotechnical design question with no precedent in the literature — the Semantic Web has not solved it in two decades of trying.

---

## 13. Honest limitations of this review

- **No quantitative meta-analysis.** The review is narrative and source-descriptive; effect sizes across the filter-bubble and MFT literatures were cited from secondary reviews, not re-extracted.
- **Anglophone bias.** Cross-cultural literature on values (Inglehart-Welzel) was referenced but not deeply surveyed; non-Western argumentation and epistemology traditions (Nyāya, Islamic kalam, Chinese logic) are out of scope and their absence is a real gap.
- **Time frame.** Filter-bubble empirical literature was cut off at 2025 systematic reviews; work from late 2025 / 2026 was not surveyed.
- **One reviewer.** This review was conducted by a single agent (Sage) without independent adversarial review. A cross-check pass by a second reviewer (e.g., Sophia on the philosophy strands, Tensor on the argumentation-formalism strands) is recommended before the whitepaper locks.
- **Sources where full text was not read.** For several cited papers (Amgoud-Vesic 2011, Modgil 2009, Bodanza-Freidin 2023) this review relied on careful reading of abstracts and secondary summaries rather than full-text reading. Specific formalism claims should be re-verified against full text before becoming whitepaper normative statements.

---

## 14. References (consolidated)

Alchourrón, C. E., Gärdenfors, P. & Makinson, D. (1985). "On the logic of theory change: Partial meet contraction and revision functions." *Journal of Symbolic Logic* 50(2):510-530.

Amgoud, L. & Cayrol, C. (2002). "Inferring from inconsistency in preference-based argumentation frameworks." *Journal of Automated Reasoning* 29(2):125-169.

Amgoud, L. & Vesic, S. (2011). "A new approach for preference-based argumentation frameworks." *Annals of Mathematics and AI* 63:149-183.

Arrow, K. J. (1963). *Social Choice and Individual Values*, 2nd ed. Yale University Press.

Bakshy, E., Messing, S. & Adamic, L. A. (2015). "Exposure to ideologically diverse news and opinion on Facebook." *Science* 348(6239):1130-1132. DOI: 10.1126/science.aaa1160

Baltag, A. & Renne, B. (2016). "Dynamic Epistemic Logic." *Stanford Encyclopedia of Philosophy*. https://plato.stanford.edu/entries/dynamic-epistemic/

Barberá, P., Jost, J. T., Nagler, J., Tucker, J. A. & Bonneau, R. (2015). "Tweeting from left to right: Is online political communication more than an echo chamber?" *Psychological Science* 26(10):1531-1542.

Bench-Capon, T. J. M. (2003). "Persuasion in practical argument using value-based argumentation frameworks." *Journal of Logic and Computation* 13(3):429-448.

Bodanza, G. A. & Freidin, E. (2023). "Confronting value-based argumentation frameworks with people's assessment of argument strength." *Argument & Computation*.

Boghossian, P. (2006). *Fear of Knowledge: Against Relativism and Constructivism*. Oxford University Press.

Braüner, T. (2022). "Hybrid Logic." *Stanford Encyclopedia of Philosophy*. https://plato.stanford.edu/entries/logic-hybrid/

Brickley, D. & Miller, L. *FOAF Vocabulary Specification*. http://xmlns.com/foaf/spec/

Cayrol, C. & Lagasquie-Schiex, M.-C. (2005). "On the acceptability of arguments in bipolar argumentation frameworks." ECSQARU.

Collins, H. M. & Evans, R. (2002). "The Third Wave of Science Studies: Studies of Expertise and Experience." *Social Studies of Science* 32(2):235-296. DOI: 10.1177/0306312702032002003

de Kleer, J. (1986). "An assumption-based TMS." *Artificial Intelligence* 28(2):127-162.

Dietrich, F. & List, C. (2007). "Arrow's theorem in judgment aggregation." *Social Choice and Welfare* 29(1):19-33.

Doyle, J. (1979). "A truth maintenance system." *Artificial Intelligence* 12(3):231-272.

Dung, P. M. (1995). "On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games." *Artificial Intelligence* 77:321-357. DOI: 10.1016/0004-3702(94)00041-X

Eysenck, H. J. (1954). *The Psychology of Politics*. Routledge & Kegan Paul.

Golbeck, J. (2005). *Computing and Applying Trust in Web-Based Social Networks*. PhD thesis, University of Maryland.

Golbeck, J. & Hendler, J. (2006). "FilmTrust: Movie recommendations using trust in web-based social networks." IEEE CCNC.

Goldman, A. I. (1999). *Knowledge in a Social World*. Oxford University Press.

Graham, J., Haidt, J. & Nosek, B. A. (2009). "Liberals and conservatives rely on different sets of moral foundations." *Journal of Personality and Social Psychology* 96(5):1029-1046.

Graham, J. et al. (2011). "Mapping the moral domain." *Journal of Personality and Social Psychology* 101(2):366-385. PMC 3116962.

Graham, J. et al. (2013). "Moral Foundations Theory: The pragmatic validity of moral pluralism." *Advances in Experimental Social Psychology* 47:55-130.

Grofman, B., Owen, G. & Feld, S. L. (1983). "Thirteen theorems in search of the truth." *Theory and Decision* 15:261-278.

Guess, A. M. (2021). "(Almost) everything in moderation: New evidence on Americans' online media diets." *American Journal of Political Science* 65(4):1007-1022.

Guess, A. M. et al. (2020). "Exposure to untrustworthy websites in the 2016 US election." *Nature Human Behaviour* 4(5):472-480.

Haas, P. M. (1992). "Introduction: Epistemic communities and international policy coordination." *International Organization* 46(1):1-35. JSTOR 2706951.

Haidt, J. & Graham, J. (2007). "When morality opposes justice: Conservatives have moral intuitions that liberals may not recognize." *Social Justice Research* 20:98-116.

Henrich, J., Heine, S. J. & Norenzayan, A. (2010). "The weirdest people in the world?" *Behavioral and Brain Sciences* 33(2-3):61-83.

Herlocker, J. L., Konstan, J. A. & Riedl, J. (2000). "Explaining collaborative filtering recommendations." CSCW 2000. DOI: 10.1145/358916.358995

Herlocker, J. L., Konstan, J. A., Terveen, L. G. & Riedl, J. T. (2004). "Evaluating collaborative filtering recommender systems." *ACM TOIS* 22(1):5-53. DOI: 10.1145/963770.963772

Hintikka, J. (1962). *Knowledge and Belief: An Introduction to the Logic of the Two Notions*. Cornell University Press.

Inglehart, R. & Welzel, C. (2005). *Modernization, Cultural Change, and Democracy*. Cambridge University Press.

Katsuno, H. & Mendelzon, A. O. (1991). "On the difference between updating a knowledge base and revising it." KR 1991.

Kay, J. (2006). "Scrutable adaptation: Because we can and must." AH 2006.

Kay, J., Kummerfeld, B. & Lauder, P. (2002). "Personis: A server for user models." AH 2002.

Kitcher, P. (2011). *Science in a Democratic Society*. Prometheus Books.

Kornhauser, L. A. & Sager, L. G. (1993). "The one and the many: Adjudication in collegial courts." *California Law Review* 81(1):1-59.

Kripke, S. A. (1963). "Semantical considerations on modal logic." *Acta Philosophica Fennica* 16:83-94.

List, C. & Goodin, R. E. (2001). "Epistemic democracy: Generalizing the Condorcet jury theorem." *Journal of Political Philosophy* 9(3):277-306.

List, C. & Pettit, P. (2002). "Aggregating sets of judgments: An impossibility result." *Economics and Philosophy* 18(1):89-110.

Lynch, M. P. (2012). *In Praise of Reason: Why Rationality Matters for Democracy*. MIT Press.

Miles, A. & Bechhofer, S. (2009). *SKOS Simple Knowledge Organization System Reference*, W3C Recommendation. https://www.w3.org/TR/skos-reference/

Modgil, S. (2009). "Reasoning about preferences in argumentation frameworks." *Artificial Intelligence* 173(9-10):901-934.

Nguyen, C. T. (2020). "Echo chambers and epistemic bubbles." *Episteme* 17(2):141-161.

Pariser, E. (2011). *The Filter Bubble: What the Internet Is Hiding from You*. Penguin.

Pew Research Center (2021). *Beyond Red vs. Blue: The Political Typology*. https://www.pewresearch.org/politics/2021/11/09/beyond-red-vs-blue-the-political-typology/

Radlinski, F. et al. (2022). "On Natural Language User Profiles for Transparent and Scrutable Recommendation." arXiv:2205.09403, SIGIR 2022.

Reiter, R. (1980). "A logic for default reasoning." *Artificial Intelligence* 13:81-132.

Richardson, M., Agrawal, R. & Domingos, P. (2003). "Trust Management for the Semantic Web." ISWC.

Ross Arguedas, A. et al. (2022). *Echo chambers, filter bubbles, and polarisation: a literature review*. Reuters Institute for the Study of Journalism.

Röttger, P. et al. (2024). "Political Compass or Spinning Arrow? Towards More Meaningful Evaluations for Values and Opinions in LLMs." ACL 2024. https://aclanthology.org/2024.acl-long.816/

Sunstein, C. R. (2017). *#Republic: Divided Democracy in the Age of Social Media*. Princeton University Press.

Tintarev, N. & Masthoff, J. (2012). "Evaluating the effectiveness of explanations for recommender systems." *UMUAI* 22(4-5):399-439.

van Benthem, J. & Smets, S. (2015). "Dynamic Logics of Belief Change." In *Handbook of Logics for Knowledge and Belief*.

W3C (1996). *PICS: Platform for Internet Content Selection*. https://www.w3.org/PICS/

W3C (2002, obsoleted 2018). *Platform for Privacy Preferences 1.0 (P3P1.0) Specification*. https://www.w3.org/P3P/

Wormley, A. S., Scott, M., Grimm, K. J. & Cohen, A. B. (2025). "Measuring Morality: An Examination of the MFQ's Factor Structure." *Personality and Social Psychology Bulletin*.

---

*End of research-01-cpml-academic.md*
