A redacted, illustrative copy of the AI Discovery Audit deliverable. The subject's identity has been removed; the methodology, structure, and evidence standard are exactly what a real subject receives at the conclusion of the engagement's first stage.
Multi-location professional services subject in a metropolitan Australian capital. Fifteen named professionals, three locations, A$28M annual revenue. The subject's identity is redacted; the audit otherwise stands.
Subject A holds a strong professional reputation in its metropolitan market and a substantial public footprint. It is materially under-represented inside AI systems for its category — a gap the institution attributes to two of the four classic invisibility causes.
Each signal is reported with the institution's evidence, the qualitative observation, and the band the subject sits in. Critical-band signals are weighted in the intervention sequencing.
Subject A appears in slightly over half the relevant prompt-set responses — adequate at the consideration-stage prompts, sparse at the comparative prompts. The mentions are concentrated in lower-Tier sources, which compounds the signal's weakness.
Developing: Subject A's domain is adequately authoritative in legacy SEO terms. The site does not, however, surface credentialed authorship at the page level — most substantive pages are unattributed, which AI systems penalise consistently.
Developing: Subject A's external presence is concentrated in directories and aggregators. Tier I citations — professional bodies, recognised editorial sources — are present but stale (latest dated reference older than 18 months). The citation graph is thin where it matters most.
Critical: The registered entity name, the trading name, and the Google Business Profile name are not reconciled. Two AI systems in the prompt set treat Subject A as two separate businesses. This is the dominant single contributor to the score deficit.
Critical: Subject A applies generic LocalBusiness schema across its property. The more specific and discriminating subtypes for the relevant professional category are not used; structured data depth is shallow.
Developing: Subject A is, on average, 3.2 hops from the nearest Tier I authority anchor. The institution's threshold for confident AI surfacing is ≤ 2.0 hops. Closing this distance is the single highest-leverage move available to Subject A.
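A hop count of this kind is ordinary graph distance: the minimum number of link hops from the subject's domain to any Tier I anchor, computed with a breadth-first search over the citation graph. The sketch below is illustrative only — the node names and edges are hypothetical, not Subject A's actual link data.

```python
from collections import deque

def hops_to_anchor(graph, start, anchors):
    """Minimum number of link hops from `start` to any anchor
    node, via breadth-first search; None if unreachable."""
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node in anchors:
            return dist
        for neighbour in graph.get(node, ()):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, dist + 1))
    return None

# Hypothetical citation graph: directory and aggregator listings
# sit between the subject and the Tier I anchor.
graph = {
    "subject": ["directory", "aggregator"],
    "directory": ["aggregator"],
    "aggregator": ["tier2_press"],
    "tier2_press": ["professional_body"],
}
print(hops_to_anchor(graph, "subject", {"professional_body"}))  # → 3
```

An averaged score like 3.2 would then be the mean of this distance over the subject's pages; the remediation goal is to add direct citations that shorten the worst paths below the 2.0 threshold.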
Critical: Subject A holds an above-segment review profile across Google and one Tier II directory. Recent editorial mention — the higher-weighted behavioural signal — is sparse and requires deliberate cultivation.
Developing: Subject A's standing has held flat across the four observable quarters. There is no recent decline, which is the institution's strongest baseline for engagement: foundations are stable; deficits are addressable.
The institution's recommended sequence of work for Subject A. Each priority is sized for delivery within the engagement; the sequence is conservative.
Establish a canonical entity record signed by the principal. Reconcile registered name, trading name, GBP, professional register, and aggregator listings to the canonical record. Target completion: weeks 1–4 of engagement. This is the prerequisite to every later move; until it is complete, additional content investment is suboptimal.
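A canonical entity record of the kind described is commonly expressed as schema.org JSON-LD: one Organization node carrying the registered name, the trading name, and sameAs links to every profile being reconciled to it. The sketch below is a minimal illustration; every name and URL is a placeholder, since the subject's identity is redacted.

```python
import json

# Minimal canonical entity record as schema.org JSON-LD.
# All names and URLs are hypothetical placeholders.
canonical = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "legalName": "Example Professional Services Pty Ltd",  # registered entity name
    "name": "Example Partners",                            # trading name
    "url": "https://example.com.au/",
    "sameAs": [
        "https://maps.google.com/?cid=0000000000",               # Google Business Profile
        "https://www.example-register.gov.au/entity/0000",       # professional register
        "https://www.example-directory.com.au/example-partners", # aggregator listing
    ],
}
print(json.dumps(canonical, indent=2))
```

Once every external listing matches this one record, AI systems have a single unambiguous entity to resolve — which is why the audit treats reconciliation as the prerequisite to all later content work.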
Practitioner-level authority architecture for the fifteen named professionals. Credentials, professional memberships, registration records, alumni links — all made machine-readable through schema and crosslinks. Target completion: weeks 4–12 of engagement.
Earned editorial coverage in three Tier I sources for the relevant professional category. The institution leads the editorial cultivation; Subject A reviews and approves before placement. Target: quarters one and two of engagement.
Replace generic LocalBusiness schema with the appropriate specific subtype across the property. Add Practitioner schema for each named professional; add Service schema with practitioner attribution. Target: weeks 6–10 of engagement.
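The subtype replacement and practitioner attribution can be sketched in the same JSON-LD vocabulary. The category below (AccountingService) is purely illustrative, since the subject's category is redacted; the practitioner is modelled with schema.org Person, the standard vocabulary for named professionals, and every name and URL is a placeholder.

```python
import json

# Hypothetical replacement for generic LocalBusiness markup:
# a specific subtype, with each service attributed to a named
# practitioner. All names and URLs are placeholders.
practitioner = {
    "@type": "Person",
    "name": "Jane Example",
    "jobTitle": "Principal",
    "sameAs": ["https://www.example-register.gov.au/practitioner/0000"],
}
business = {
    "@context": "https://schema.org",
    "@type": "AccountingService",  # specific subtype, not LocalBusiness
    "name": "Example Partners",
    "employee": [practitioner],
    "makesOffer": [{
        "@type": "Offer",
        "itemOffered": {
            "@type": "Service",
            "name": "Advisory engagement",
            "provider": practitioner,  # practitioner attribution on the service
        },
    }],
}
print(json.dumps(business, indent=2))
```

The discriminating detail is in the leaves: a specific subtype plus per-service practitioner attribution gives an AI system far more to match against than a bare LocalBusiness node.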
Rewrite the twelve highest-intent pages so substantive answers live as short, attributed, evidence-bearing passages — citable to an AI system's reading. Target: quarters two and three. Page count falls; depth increases; conversion improves in parallel.
Based on the audit, the institution recommends the engagement scope below. The recommendation is binding on the institution; it is not binding on Subject A, who is free to engage the institution, another provider, or neither.
The institution's full method applied to Subject A's three locations and fifteen named professionals, with quarterly Score reviews against the published methodology and Score Engine console access for Subject A's marketing leadership.
The Audit fee (A$24,000) is credited against the first quarter's invoice at the institution's election. Engagement begins with the entity reconciliation work in Priority 01; the rest of the sequence runs in parallel from week four.
Every engagement begins with the Audit. The deliverable above is a redacted illustration; the version you receive is yours, scoped to your subject, and reviewed by a Specialist before it leaves the institution. Four weeks from instruction.
Methodology note. The sample above presents the standard structure of the AI Discovery Audit deliverable. Numerical values, signal scores, and prompt-set counts are illustrative and consistent with the institution's observed range across mid-market subjects in the relevant industry segments. Subject A is a composite redacted from the institution's engagement files; no individual subject is identifiable.
The Audit deliverable is reviewed by a Specialist (ADS) before release, signed by the Head of Practice for engagements above mid-market, and held under Tier III classification. Audit data is retained for seven years post-engagement closure per the Working Papers Retention Schedule.
For the methodology behind the eight signals, see The Index. For the editorial governance behind every published number, see the Charter.