How we audit claims

The process
in full.

No single existing system does what Peptigrade does. We stitch seven accepted methodologies into one workflow — applied recursively to every claim we publish, every claim we audit, and every popular framing we choose to deconstruct.

§ Two operating modes

Forward audit,
backward decomposition.

Path 1 · Claim → Evidence

The claim audit

Start with a claim circulating in popular discourse — “BPC-157 has wolverine-like effects”, “GHK-Cu reverses hair loss”, “methylene blue is a nootropic.” Decompose into atomic sub-claims, find every paper that touches each, and produce a verdict.

Output: /claims/[slug]

Path 2 · Artifact → Recursive origins

The source breakdown

Start with a podcast, paper, or post. Decompose every claim made. Trace each upstream — what did they cite, what did THEIR source cite, and so on, recursively, until you hit either a primary source or a dead end.

Output: /breakdowns/[slug] (coming next)

Both paths share the same underlying claim model: atomic claims, status tags, source provenance, recursive citation chain. The difference is the entry point — start from the claim, or start from the artifact — and the rendering.
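
To make that shared model concrete, here is a minimal sketch of what a claim record could look like. The field names and shapes are illustrative assumptions, not Peptigrade's actual schema; only the eight status tags and the supports / contradicts / mentions stances come from this page.

```ts
// Minimal sketch of the shared claim model. Field names are
// illustrative assumptions, not Peptigrade's actual schema.

type Status =
  | "VALIDATED" | "CONTESTED" | "UNVALIDATED" | "OVERSTATED"
  | "FALSIFIED" | "WITHDRAWN" | "DEPENDENT" | "SPECULATIVE";

// One node in the recursive citation chain.
interface SourceNode {
  ref: string;                                      // DOI or URL
  stance: "supports" | "contradicts" | "mentions";  // Scite-style citation tag
  upstream?: SourceNode;                            // where this source got the claim
}

// One atomic claim — the unit both paths operate on.
interface AtomicClaim {
  text: string;                   // exactly one assertion
  status: Status;
  provenance: SourceNode | null;  // null = no identifiable source (SPECULATIVE)
}
```

The recursive upstream pointer is what lets both paths share one model: Path 1 walks it to build the evidence chain for a claim; Path 2 builds it while decomposing an artifact.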

§ The workflow

Six steps,
applied recursively.

Every claim we publish — and every claim we audit from a popular source — passes through these six steps in order. Steps 03 and 04 are recursive: when a paper inherits a claim from an upstream paper, the chain is followed until we hit a primary source or determine the chain ends in opinion.

01 · Atomic claim extraction

Take the source artifact (paper, podcast, post, marketing copy). Decompose every assertion into atomic claims — one assertion per claim. A long sentence almost always contains 3–5 atomic claims that need to be evaluated separately.

Drawn from · Inspired by SciFact / FEVER. Decomposition is done by an editor and reviewed by a second editor before publication.
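
As a purely hypothetical example of what this step produces — the sentence and its sub-claims below are illustrative, not a published audit:

```ts
// Hypothetical decomposition — illustrative only, not a published audit.
const sentence =
  "BPC-157 accelerates tendon healing by upregulating growth factor receptors";

// One assertion per atomic claim; each gets its own verdict.
const atomicClaims = [
  "BPC-157 accelerates tendon healing",                  // empirical
  "BPC-157 upregulates growth factor receptors",         // mechanistic
  "That upregulation is what drives the healing effect"  // dependent on both above
];
```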

02 · Claim classification

Tag the type of claim: empirical (X causes Y), mechanistic (X works via pathway Z), comparative (X is better than Y), quantitative (effect is N%), predictive (X will produce Y), or definitional (X is a Z). Different claim types require different evidence.

Drawn from · Argumentation theory — different argument schemes have different validity criteria.
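
A sketch of how the type tag could drive what evidence a claim needs. The mapping below paraphrases the idea in this step; it is not the editorial rubric:

```ts
// Claim types from step 02. The required-evidence notes are a sketch
// of the idea, not Peptigrade's editorial rubric.
type ClaimType =
  | "empirical" | "mechanistic" | "comparative"
  | "quantitative" | "predictive" | "definitional";

const evidenceRequired: Record<ClaimType, string> = {
  empirical:    "controlled trials showing X actually causes Y",
  mechanistic:  "pathway-level data, often in vitro or animal",
  comparative:  "head-to-head comparisons of X and Y",
  quantitative: "adequately powered effect-size estimates",
  predictive:   "prospective data; untestable until the outcome occurs",
  definitional: "authoritative nomenclature, not trials",
};
```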

03 · Provenance trace

Where does the claim come from? Three possibilities: a directly cited paper; a claim inherited through a citation chain, where the cited paper itself took it from a paper further upstream (recursive); or speculation with no identifiable source. For inherited claims, we trace upstream recursively until we reach the original primary source or hit a dead end.

Drawn from · Citation network analysis. Tools like Scite, Connected Papers, and Litmaps inform the trace, but each chain is verified by hand.
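
The recursion is simple to state precisely. A toy version, with field names that are assumptions rather than the real schema:

```ts
// Sketch of step 03's recursive trace — fields are illustrative
// assumptions; real chains are verified by hand.
interface ChainNode {
  hasOriginalData: boolean;  // did this paper generate the finding itself?
  upstream?: ChainNode;      // the source it inherited the claim from
}

type TraceResult = "primary-source" | "dead-end";

function trace(node: ChainNode): TraceResult {
  if (node.hasOriginalData) return "primary-source"; // reached original evidence
  if (!node.upstream) return "dead-end";             // chain ends in opinion
  return trace(node.upstream);                       // follow one level upstream
}
```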

04 · Per-source bias assessment

At each node in the chain, apply an RoB2-style assessment: study type, sample size and power, blinding, pre-registration, conflict of interest, replication status, retraction status. A finding from a single un-replicated study counts for less than a finding replicated across labs.

Drawn from · Cochrane RoB2 + GRADE downgrade criteria.
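
The per-node checklist could be captured in a record like this — the fields mirror the criteria listed above, but the names and granularity are assumptions:

```ts
// RoB2-style per-node assessment — fields mirror the criteria above;
// names and granularity are assumptions.
interface BiasAssessment {
  studyType: "RCT" | "cohort" | "case-report" | "animal" | "in-vitro";
  sampleSize: number;
  adequatelyPowered: boolean;
  blinded: boolean;
  preRegistered: boolean;
  conflictOfInterest: boolean;
  replicatedAcrossLabs: boolean;  // single-lab findings count for less
  retracted: boolean;             // checked against Retraction Watch
}
```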

05 · Status determination

Apply the status tag based on the trace and the evidence quality at each node. Eight possible statuses: VALIDATED, CONTESTED, UNVALIDATED, OVERSTATED, FALSIFIED, WITHDRAWN, DEPENDENT, SPECULATIVE. OVERSTATED is reserved for composite popular framings that stack validated and unvalidated sub-claims; FALSIFIED is reserved for claims positively contradicted by replicated evidence, not mere absence of human data.

Drawn from · GRADE-style synthesis applied at the claim level rather than the recommendation level.
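
A toy sketch of the precedence among tags. Real determinations are editorial judgments, not a lookup table:

```ts
// Toy precedence sketch for step 05 — actual determinations are
// editorial judgments, not a lookup table.
function determineStatus(c: {
  hasSource: boolean;
  sourceRetracted: boolean;
  contradictedByReplication: boolean;
  evidenceCutsBothWays: boolean;
  supportedByReplication: boolean;
}): string {
  if (!c.hasSource) return "SPECULATIVE";              // no identifiable origin
  if (c.sourceRetracted) return "WITHDRAWN";           // structurally unsupported
  if (c.contradictedByReplication) return "FALSIFIED"; // positive contradiction
  if (c.evidenceCutsBothWays) return "CONTESTED";
  if (c.supportedByReplication) return "VALIDATED";
  return "UNVALIDATED";                                // exists but untested
  // OVERSTATED and DEPENDENT apply to composite and conditional
  // claims respectively, and are assessed separately.
}
```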

06 · Confidence + publication

Rate confidence in our status determination (High / Moderate / Low) based on how much evidence we found, how clear the chain was, and how recent the underlying studies are. Publish the result with status badge, evidence chain visualization, last-verified date, and machine-readable ClaimReview JSON-LD for AI Overviews.

Drawn from · Bayesian confidence overlay + structured data publication.
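
ClaimReview is a public schema.org type, so the payload shape is well defined. A hedged example of what one audit's markup could look like — the schema.org field names are real; every value below is a placeholder, not a published audit:

```ts
// Example ClaimReview JSON-LD. The schema.org field names are real;
// every value is a placeholder, not a published audit.
const claimReview = {
  "@context": "https://schema.org",
  "@type": "ClaimReview",
  url: "https://example.com/claims/some-claim-slug",  // placeholder slug
  claimReviewed: "Peptide X produces effect Y",       // placeholder claim
  author: { "@type": "Organization", name: "Peptigrade" },
  datePublished: "2025-01-01",                        // last-verified date
  reviewRating: {
    "@type": "Rating",
    alternateName: "UNVALIDATED",                     // status tag as the rating label
    ratingValue: 3, bestRating: 8, worstRating: 1     // illustrative scale
  }
};
```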

§ Status tags

Eight possible
statuses.

The most important one is UNVALIDATED. In science, “unvalidated” is not the same as “wrong.” It means the claim has not been adequately tested — which is the actual status of most popular peptide claims.

VALIDATED

Replicated, high-quality evidence supports this claim. Multiple independent groups have produced consistent results.

CONTESTED

Replicated evidence cuts both ways. Trials of comparable quality have produced opposing results.

UNVALIDATED

The claim exists but has not been adequately tested. NOT 'wrong' — just untested. The most common status for popular peptide claims.

OVERSTATED

Composite popular framing extends beyond what the underlying evidence supports. Parts may be validated in narrow scope (often animal data); the headline as stated is not. Reserved for framings that stack validated and unvalidated sub-claims.

FALSIFIED

Replicated, high-quality evidence positively contradicts this claim. Multiple adequately-powered trials have failed to show the claimed effect. Distinct from OVERSTATED, which reflects an absence of supportive human evidence rather than the presence of contradictory evidence.

WITHDRAWN

The underlying source has been retracted from publication. The claim is structurally unsupported.

DEPENDENT

The claim is true conditional on another claim that has not itself been validated. Common in mechanistic chains.

SPECULATIVE

No identifiable underlying source. The claim is opinion, anecdote, or extrapolation beyond what cited papers actually say.

§ Standards we stitched

Nothing we built
is novel.

Each component of our workflow is drawn from an existing accepted methodology. The novelty — if any — is the synthesis: applying these standards together, recursively, at the atomic-claim level, for the peptide domain specifically.

  • GRADE framework

    Used by WHO, Cochrane, BMJ, NICE.

    What we adopted · Letter grades for evidence quality. Downgrade reasons (study limitations, inconsistency, indirectness, imprecision, publication bias). Strength of recommendation.

  • Cochrane Risk of Bias 2 (RoB2)

    Cochrane systematic reviews.

    What we adopted · Per-study bias assessment across five domains (randomization, deviations from intervention, missing data, outcome measurement, selective reporting).

  • Scite.ai smart citations

    Adopted by NIH and major medical journals.

    What we adopted · Every citation classified as supports / contradicts / mentions. A cited paper does not necessarily support — it might contradict or merely reference.

  • SciFact / FEVER (Allen AI / Meta)

    Computational fact-verification research.

    What we adopted · Atomic claim-level evaluation. Three-way tags: SUPPORTED / CONTRADICTED / NOT ENOUGH INFO. Decompose long-form claims into single-assertion units before evaluating.

  • PubPeer + Retraction Watch

Post-publication peer review (PubPeer) and retraction tracking (Retraction Watch).

    What we adopted · Post-publication review. Claim-level integrity (image fraud, statistical errors). Retraction status checked on every cited source.

  • Reproducibility Project methodology

    RP:Psychology, RP:Cancer Biology, METRICS at Stanford.

    What we adopted · Replication status as a first-class field. Pre-registration awareness. Structural skepticism toward single-lab single-finding claims.

  • Bayesian epistemology + Walton argumentation schemes

    Philosophy of science, formal argumentation theory.

    What we adopted · Confidence as a graded property. Each argument decomposed into premises with critical questions per premise.

§ Why this matters

Anyone can grade peptides.
Almost no one will show their work.

A grade is just a letter. It tells you the destination but not the route. We publish the route — the atomic sub-claims, the evidence chain, the per-citation stance, and the explicit gaps — because that’s the difference between a search-engine answer you have to take on faith and a reference you can actually defend a clinical decision with.

It’s also exactly what AI Overviews and LLM citations need. Modern generative search systems prefer sources that publish structured claim → evidence → status data. We ship it as ClaimReview JSON-LD on every audit. The next time you see Peptigrade quoted in an AI answer, you’ll know why.