Sample Audit · Redacted Edition

What an Audit looks like.

A redacted, illustrative copy of the AI Discovery Audit deliverable. The subject's identity has been removed; the methodology, structure, and evidence standard are exactly what a real subject receives at the conclusion of the engagement's first stage.

Pages · 18
Reading time · 20 minutes
Delivered · Within 4 weeks of engagement
Edition · v1.0 · 10 May 2026
AI Discovery Audit · Redacted Sample

Subject A — Mid-Market Professional Services

Multi-location professional services subject in a metropolitan Australian capital. Fifteen named professionals, three locations, A$28M annual revenue. The subject's identity is redacted; the audit otherwise stands.

418
SCORE / 1000
Executive summary

The institution's summary position.

Subject A holds a strong professional reputation in its metropolitan market and a substantial public footprint. It is materially under-represented inside AI systems for its category — a gap the institution attributes to two of the four classic invisibility causes.

AI Discovery Score
418 / 1000
Band D — Emerging
Dominant deficit
Entity confusion
+ citation absence
Recovery horizon
9–12 months
To Band B (750+)
Recommended tier
Mid-Market Engagement
Score Engine · Subject A · Audit Read
METHODOLOGY v1.0 · RUN 2026-05-04
AI Discovery Score
418 / 1000
Baseline · Band D — Emerging
Citation density
2.1 / 7 systems
Below segment median (3.4)
Authority Graph Distance
3.2 hops
Above target (≤ 2.0)

Eight-signal breakdown · baseline read

01 · Mention Frequency
51
Developing
02 · Authority
48
Developing
03 · Citation Source Quality
34
Critical
04 · Entity Resolution
28
Critical
05 · Schema and Structure
55
Developing
06 · Authority Graph Distance
38
Critical
07 · Behavioural Reinforcement
46
Developing
08 · Recency Stability
52
Developing
Confidential. Subject-bound. Methodology Council reviewed.
PROMPT SET · n=84 · COVERED SYSTEMS · 7
Findings

The eight findings, in detail.

Each signal is reported with the institution's evidence, the qualitative observation, and the band the subject sits in. Critical-band signals are weighted in the intervention sequencing.

Finding 01 · Mention Frequency

Present in 51 of 84 prompt-set responses

Subject A appears in slightly over half the relevant prompt-set responses — adequate at the consideration-stage prompts, sparse at the comparative prompts. The mentions are concentrated in lower-tier sources, which compounds the signal's weakness.

Developing
Finding 02 · Authority

Domain authority adequate; author authority absent

Subject A's domain is adequately authoritative in legacy SEO terms. The site does not, however, surface credentialed authorship at the page level — most substantive pages are unattributed, which AI systems penalise consistently.

Developing
Finding 03 · Citation Source Quality

Citations dominated by Tier III sources

Subject A's external presence is concentrated in directories and aggregators. Tier I citations — professional bodies, recognised editorial sources — are present but stale (latest dated reference older than 18 months). The citation graph is thin where it matters most.

Critical
Finding 04 · Entity Resolution

Three identities; no canonical reconciliation

The registered entity name, the trading name, and the Google Business Profile name are not reconciled. Two AI systems in the prompt set treat Subject A as two separate businesses. This is the dominant single contributor to the score deficit.

Critical
Finding 05 · Schema and Structure

LocalBusiness schema present; specific subtype absent

Subject A applies generic LocalBusiness schema across its property. The more specific and discriminating subtypes for the relevant professional category are not used; structured data depth is shallow.

Developing
Finding 06 · Authority Graph Distance

3.2 hops to nearest Tier I anchor

Subject A is, on average, 3.2 hops from the nearest Tier I authority anchor. The institution's threshold for confident AI surfacing is ≤ 2.0 hops. Closing this distance is the single highest-leverage move available to Subject A.

Critical
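The hop metric can be read as a shortest-path distance over the citation graph: nodes are sites, edges are links or citations, and the score is the number of hops from the subject to the nearest Tier I anchor (the reported 3.2 is an average over many such paths). A minimal sketch, using an entirely hypothetical toy graph — none of the node names below are Subject A's actual citation sources:

```python
from collections import deque

# Hypothetical link graph: nodes are sites, edges are citations/links.
# All node names are illustrative placeholders, not real data.
graph = {
    "subject": ["directory_a", "aggregator_b"],
    "directory_a": ["industry_blog"],
    "aggregator_b": ["industry_blog"],
    "industry_blog": ["professional_body"],  # links to a Tier I anchor
    "professional_body": [],
}
tier_one = {"professional_body"}  # the Tier I authority anchors

def hops_to_nearest_anchor(start):
    """Breadth-first search: link hops from start to the nearest Tier I node."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node in tier_one:
            return dist
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None  # no Tier I anchor reachable

print(hops_to_nearest_anchor("subject"))  # 3 hops in this toy graph
```

Closing the distance means adding an edge nearer the subject — a direct Tier I citation collapses the path to one hop, which is why Priority 03 targets earned Tier I coverage.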
Finding 07 · Behavioural Reinforcement

Reviews healthy; editorial mention rare

Subject A holds an above-segment review profile across Google and one Tier II directory. Recent editorial mention — the higher-weighted behavioural signal — is sparse and requires deliberate cultivation.

Developing
Finding 08 · Recency Stability

Score volatility moderate; trajectory flat

Subject A's standing has held flat across the four observable quarters. There is no recent decline, which is the institution's strongest baseline for engagement: foundations are stable; deficits are addressable.

Developing
Intervention priorities

The work, in order.

The institution's recommended sequence of work for Subject A. Each priority is sized for delivery within the engagement; the sequence is conservative.

Priority 01

Entity reconciliation, in writing

Establish a canonical entity record signed by the principal. Reconcile registered name, trading name, GBP, professional register, and aggregator listings to the canonical record. Target completion: weeks 1–4 of engagement. This is the prerequisite to every later move; until it is complete, additional content investment is suboptimal.

Effort
Low–medium · Institution-led
4 weeks · 3 stakeholder hours
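In machine-readable form, the canonical entity record typically lands as Organization-family JSON-LD that binds the registered name, trading name, and external profiles to one stable identifier. A minimal sketch — every name, identifier, and URL below is a hypothetical placeholder, since the subject is redacted:

```python
import json

# Hypothetical canonical entity record. All names and URLs are
# illustrative stand-ins, not Subject A's actual details.
canonical_record = {
    "@context": "https://schema.org",
    "@type": "ProfessionalService",
    "@id": "https://example.com/#organization",  # one stable identifier
    "legalName": "Example Holdings Pty Ltd",     # registered entity name
    "name": "Example Advisory",                  # trading name, matching GBP
    "alternateName": ["Example Advisory Group"], # legacy names, if any
    "url": "https://example.com/",
    "sameAs": [
        # External profiles that should all resolve to this one entity:
        "https://www.linkedin.com/company/example-advisory",
        "https://www.exampledirectory.com/listing/example-advisory",
    ],
}

print(json.dumps(canonical_record, indent=2))
```

The `@id` and `sameAs` values do the reconciliation work: every listing that carries or points to this record resolves to the same entity rather than spawning a second one.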
Priority 02

Authority Infrastructure rebuild

Practitioner-level authority architecture for the fifteen named professionals. Credentials, professional memberships, registration records, alumni links — all made machine-readable through schema and crosslinks. Target completion: weeks 4–12 of engagement.

Effort
Medium · Institution-led
8 weeks · 16 stakeholder hours
Priority 03

Citation Engine cycle one

Earned editorial coverage in three Tier I sources for the relevant professional category. The institution leads the editorial cultivation; Subject A reviews and approves before placement. Target: quarters one and two of engagement.

Effort
Medium–high · Institution-led
6 months · 12 stakeholder hours
Priority 04

Schema specificity and depth

Replace generic LocalBusiness schema with the appropriate specific subtype across the property. Add Practitioner schema for each named professional; add Service schema with practitioner attribution. Target: weeks 6–10 of engagement.

Effort
Low · Institution-led
4 weeks · 2 stakeholder hours
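The subtype-and-attribution pattern can be sketched in JSON-LD. The subject's category is redacted, so the "LegalService" subtype and every name below are hypothetical stand-ins for whichever specific schema.org subtype actually applies:

```python
import json

# Illustrative only: subtype, names, and URLs are hypothetical placeholders.
org_markup = {
    "@context": "https://schema.org",
    "@type": "LegalService",  # specific subtype replaces generic LocalBusiness
    "name": "Example Advisory",
    "url": "https://example.com/",
}

service_markup = {
    "@context": "https://schema.org",
    "@type": "Service",
    "serviceType": "Commercial Advisory",
    "provider": {  # practitioner attribution on the service
        "@type": "Person",
        "name": "A. Practitioner",
        "jobTitle": "Principal",
        "worksFor": {"@type": "LegalService", "name": "Example Advisory"},
    },
}

print(json.dumps([org_markup, service_markup], indent=2))
```

The discriminating signal is the specific `@type` plus the `Person` node on each service: the same pages, marked up this way, tell an AI system what kind of business this is and which credentialed professional stands behind each offering.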
Priority 05

Passage rewrite, twelve substantive pages

Rewrite the twelve highest-intent pages so substantive answers live as short, attributed, evidence-bearing passages that an AI system can cite directly. Target: quarters two and three. The page count falls; the depth increases; conversion improves in parallel.

Effort
Medium · Institution-led
10 weeks · 20 stakeholder hours
Engagement recommendation

The institution's recommendation.

Based on the audit, the institution recommends the engagement scope below. The recommendation is binding on the institution; it is not binding on Subject A, who is free to engage with us, with another provider, or not at all.

Read your own

Begin your Audit.

Every engagement begins with the Audit. The deliverable above is a redacted illustration; the version you receive is yours, scoped to your subject, and reviewed by a Specialist before it leaves the institution. Four weeks from instruction.

Methodology note. The sample above presents the standard structure of the AI Discovery Audit deliverable. Numerical values, signal scores, and prompt-set counts are illustrative and consistent with the institution's observed range across mid-market subjects in the relevant industry segments. Subject A is a composite redacted from the institution's engagement files; no individual subject is identifiable.

The Audit deliverable is reviewed by a Specialist (ADS) before release, signed by the Head of Practice for engagements above mid-market, and held under Tier III classification. Audit data is retained for seven years post-engagement closure per the Working Papers Retention Schedule.

For the methodology behind the eight signals, see The Index. For the editorial governance behind every published number, see the Charter.