Field Notes
Anchor 12 · The instrument

The AI Discovery Score, explained.

The AI Discovery Score is the institution's instrument for measuring a subject's standing across AI systems. It is computed from the eight Discovery Signals, weighted according to the published methodology, and reported on a 0–1000 scale. This anchor explains what it measures and how the institution uses it.

Field · The instrument
Reading time · 8 minutes
First published · 10 May 2026
Reviewed · Methodology Council
The institution's position

The Score is an instrument, not a verdict. It is designed to support engagement decisions, longitudinal measurement, and editorial accountability — not to rank subjects against each other in any league-table sense. The institution publishes the methodology in full so that the Score is examinable.

What the Score measures

The AI Discovery Score is a composite measure of how AI systems treat a subject across the eight Discovery Signals. It captures both the static state of the subject's authority architecture and its behavioural standing inside the systems that consult that architecture.

The Score is not a measure of search ranking. It is not a measure of website quality. It is not a substitute for engagement-level metrics like enquiry volume or conversion. It is a measure of standing inside AI, and only that.

How it is computed

The Score is computed from per-signal scores combined by published weights and reported on a 0–1000 composite scale. The weights are reviewed annually by the Methodology Council and published in each State of AI Discovery.

  • Per-signal scores are computed from the underlying instrumentation: prompt-set responses, citation density, entity resolution checks, schema audits, and graph distance calculations.
  • Composite weighting is calibrated to predict system behaviour, not to satisfy any particular vendor or methodology preference.
  • The Methodology Council ratifies the per-signal weights annually; changes are recorded in the Annual.
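
The combination step above can be sketched as a weighted sum of per-signal scores scaled onto the 0–1000 composite. The signal names and weights below are illustrative placeholders, not the institution's published values:

```python
# Illustrative sketch of the composite computation. Signal names and
# weights are hypothetical placeholders, not the published methodology.
SIGNAL_WEIGHTS = {
    "authority": 0.20,
    "entity_resolution": 0.15,
    "authority_graph_distance": 0.15,
    "behavioural_reinforcement": 0.20,
    # remaining signals grouped here so the weights total 1.0
    "other_signals": 0.30,
}

def composite_score(per_signal: dict[str, float]) -> float:
    """Combine per-signal scores (each on 0-1) into a 0-1000 composite."""
    if abs(sum(SIGNAL_WEIGHTS.values()) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1.0")
    weighted = sum(SIGNAL_WEIGHTS[name] * per_signal.get(name, 0.0)
                   for name in SIGNAL_WEIGHTS)
    return round(weighted * 1000, 1)
```

A subject scoring 1.0 on every signal would report 1000.0; one scoring 0.5 everywhere would report 500.0. The real weights, published annually, would replace the placeholders above.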

An observation from the methodology

The Score is deliberately designed to be slow-moving on the structural signals (Authority, Entity Resolution, Authority Graph Distance) and faster-moving on the behavioural signal (Behavioural Reinforcement). This design choice expresses the institution's view that durable visibility is the goal; flash-in-the-pan visibility is not.
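One way to express this slow-versus-fast design in code, purely as an illustration, is to smooth each signal toward new observations at a different rate. The smoothing constants below are hypothetical; the methodology's actual mechanism is defined in its own publications:

```python
# Illustrative only: a signal can be made "slow-moving" by updating it
# toward new observations at a small rate. These alpha values are
# hypothetical, not part of the published methodology.
ALPHAS = {
    "authority": 0.1,                  # structural: slow-moving
    "entity_resolution": 0.1,          # structural: slow-moving
    "authority_graph_distance": 0.1,   # structural: slow-moving
    "behavioural_reinforcement": 0.5,  # behavioural: faster-moving
}

def update_signal(name: str, previous: float, observed: float) -> float:
    """Exponentially smooth a per-signal score toward a new observation."""
    alpha = ALPHAS[name]
    return (1 - alpha) * previous + alpha * observed
```

With these constants, a structural signal moves only a tenth of the way toward a new reading per period, while the behavioural signal moves half way: durable change registers, a one-off spike largely does not.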

What a good Score looks like

Across the institution's engagements, the following bands are practically useful.

  1. Band E · 0–400 — Below threshold. The subject is materially invisible to AI systems for the queries that should surface it. The work is foundational: entity reconciliation, authority architecture, citation-quality content.
  2. Band D · 400–600 — Emerging. The subject appears in some AI answers but inconsistently. The work is consolidating: closing authority distance, lifting citation source quality.
  3. Band C · 600–750 — Discoverable. The subject appears in most relevant AI answers and is durably cited. The work is refinement: editorial discipline, ongoing citation work, vertical-specific depth.
  4. Band B · 750–875 — Strongly discoverable. The subject is consistently surfaced across systems for the queries that should surface it. The work is sub-specialty depth and durability under change in the underlying systems.
  5. Band A · 875–1000 — Authoritative. The subject is consistently the cited source. The work is maintenance and defence — protecting the Score against change in the systems and the field.
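
The bands above map directly to score ranges. A minimal classifier, assuming boundary values (400, 600, 750, 875) belong to the higher band, which the text leaves open:

```python
# Map a 0-1000 composite score to the bands described above.
# Assumption: a boundary value (e.g. 400) falls in the higher band.
BANDS = [
    (875, "A", "Authoritative"),
    (750, "B", "Strongly discoverable"),
    (600, "C", "Discoverable"),
    (400, "D", "Emerging"),
    (0,   "E", "Below threshold"),
]

def band_for(score: float) -> str:
    """Return the band label for a composite score on the 0-1000 scale."""
    if not 0 <= score <= 1000:
        raise ValueError("score must be on the 0-1000 scale")
    for floor, letter, label in BANDS:
        if score >= floor:
            return f"Band {letter} · {label}"
    raise RuntimeError("unreachable: floor 0 catches all valid scores")
```

So a score of 399 reads as Band E and 400 as Band D under this boundary assumption.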

How the institution uses it

The Score is read by the institution's practitioners at engagement onboarding (baseline), at the close of each quarter (delta and trajectory), and at engagement close (final state). It is shared with the subject as part of the published quarterly review.

The Score is not used commercially as a ranking. The institution does not publish league tables of subjects. Aggregate Score data — by industry, geography, and signal — is published in the State of AI Discovery, where it serves the field, not particular subjects.

A closing observation

The Score is an instrument, not a brand. The institution measures because measurement disciplines work. Where the instrument is wrong, we say so, in writing, and improve it.

The methodology is published. The weights are published. The signal definitions are published. The institution's accountability for the Score is expressed in those publications and reviewed annually. That is the entire offer: not a number, but the methodology behind the number.

Read the Index

The Index is the application of the Score.

This anchor explains the instrument. The Index is the institution's quarterly measurement across thousands of subjects.

Read the Index →