Plain answers from the institution. The things prospective subjects ask on the discovery call, gathered here so you can read them at your own pace, in your own order.
What AI Discovery actually is, what we're not, and how the field is changing.
SEO optimises a page so that, when a person searches a keyword, the page ranks in a list of human-readable results. The reader is a human; the reward is a click.
AI Discovery measures and improves how AI systems — ChatGPT, Google AI Overviews, Perplexity, Claude, Gemini — quote and recommend you in their generated answers. The reader is a model; the reward is a citation.
The disciplines overlap at the substrate (indexability, structured data, authority signals) but diverge sharply on what they reward. SEO has historically tolerated long, keyword-dense pages. AI Discovery rewards short, attributed, evidence-bearing passages that the model can quote. The strategies are not the same.
No, and it matters that it isn't. An SEO agency that adds "AI" to its services is typically making four changes: schema, citation graph, credentialed authorship, and content shape. Those four do much of the foundational lift, but on their own they won't put you in AI answers; they're the substrate, not the practice.
The practice is the eight-signal measurement, the prompt-set instrumentation across the major systems, the Source Register, and the Score Engine console. None of those exist in conventional SEO.
Not replacing. Augmenting and reshaping. Conventional Google search will exist for years; what's changing is the share of commercial decisions that happen inside AI-generated answers before any clicks. Across the institution's measurement of indexed verticals, that share grew materially in 2025 and is forecast to grow further in 2026.
The point of AI Discovery is not to abandon SEO; it's to add the measurement layer the AI surface requires. Subjects who do both — SEO maintained, AI Discovery added — perform best.
The thirteen indexed industries we publish playbooks for cover roughly 80% of the operators we engage with. If yours isn't listed, the methodology still applies — the eight signals are not industry-specific.
Write to hello@aidiscovery.com.au with your category. If we don't have a playbook, we'll tell you, and we'll quote the engagement against the closest indexed industry rather than improvising.
We're an institution. That sounds grand, but it's accurate: we publish a Standard, maintain an Index, certify practitioners, operate the Score Engine (the proprietary technology), and engage subjects through professional services. Bloomberg, S&P, JD Power, and B Lab are the closest analogues. Not a consultancy, not an agency, not pure SaaS.
What's in it, what comes out, how long it takes.
Six numbered deliverables in one written report, reviewed by a Specialist before release.
Delivered within four weeks of instruction. From $800.
Yes. The institution publishes a redacted sample Audit showing the exact structure and depth of the deliverable. Subject identity is removed; methodology, findings, and presentation are exactly as you'd receive.
Four weeks from instruction to delivery. The institution does not compress this. Week one: instrumentation. Weeks two and three: diagnostic and synthesis. Week four: Specialist review, signing, delivery, and a 30-minute walkthrough call.
Yes, at the institution's election. Where the Audit is followed by a Visibility Retainer or Foundation engagement within ninety days, the Audit fee is credited against the first quarter's invoice. The institution does not require a subsequent engagement; the Audit is a complete deliverable on its own terms.
It does happen. Subjects who already score in Band A or strong Band B and have no material deficits across the eight signals don't need an ongoing engagement; they need maintenance only. Where that's the finding, the Audit will say so plainly. We'd rather be honest at $800 than dishonest at $36,000/year.
What happens after the Audit, how long results take, what we commit to.
No, and we say so plainly. The institution guarantees method, not outcome. We commit to running the published methodology to standard, with quarterly Score reviews against the baseline. The AI systems whose behaviour we measure are not under our control; any vendor who guarantees you a specific position in ChatGPT's answers is either lying or about to.
What we do commit to: documented quarterly progress, written explanation if a quarter underperforms expectation, and the right for you to leave the engagement if the methodology is being executed but results are not following.
Median time to Band B (750+) across the institution's twelve sample case studies is 8.2 months. The fastest engagement we've seen reached Band B in six months; the longest took eleven. The variation is mostly explained by where the subject started and how much foundational work (entity reconciliation, schema, credentialing) was missing.
Smaller, visible movements come earlier — typically within the first quarter for entity reconciliation and the second quarter for citation work to begin showing up in AI answers.
They will, repeatedly, and that's part of the work. The institution publishes a quarterly methodology review precisely because the underlying systems evolve. The eight signals are designed to be durable across system changes — they map to fundamentals (entity, authority, citation, schema) that no commercial AI system can plausibly stop weighting.
When a system makes a material change, it appears first in our quarterly Score reads. Engagements are adjusted in response, in writing, with the reasoning recorded.
Every engagement has a credentialed Practitioner (ADP) leading it, with a Specialist (ADS) reviewing deliverables of mid-market scope and above. Junior staff support; they don't lead. The institution's certification programme exists specifically to make this a defensible commitment.
Yes. The institution does not displace existing teams — we operate alongside them. Where you have a paid acquisition team, an SEO agency, or in-house content function, we coordinate. We will not duplicate work they are already doing competently.
Where their work runs counter to AI Discovery objectives — typically low-quality link building, content volume without depth, or schema misapplication — we'll raise it in writing with the engagement principal so they can be redirected.
What we measure, how the Score works, and where the methodology is published.
A 0–1000 composite score computed across eight Discovery Signals — Mention Frequency, Authority, Citation Source Quality, Entity Resolution, Schema and Structure, Authority Graph Distance, Behavioural Reinforcement, and Recency Stability. The signals are defined and weighted in the published Index methodology; the underlying instrument is the Score Engine.
Five bands: 0–400 (Below threshold), 400–600 (Emerging), 600–750 (Discoverable), 750–875 (Strongly discoverable), 875–1000 (Authoritative).
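As a rough illustration only (the published weights live in the Index methodology; the equal weighting, the assumption that each per-signal score shares the 0–1000 scale, and the boundary handling below are placeholders, not the institution's), a composite of this shape reduces to a weighted sum over the eight signals followed by a band lookup:

```python
# Illustrative sketch only: the weights and the boundary convention below are
# placeholders, not the published Index methodology or the Score Engine.

SIGNALS = [
    "mention_frequency", "authority", "citation_source_quality",
    "entity_resolution", "schema_and_structure", "authority_graph_distance",
    "behavioural_reinforcement", "recency_stability",
]

# Hypothetical equal weighting; the real weights are defined in the Index methodology.
WEIGHTS = {name: 1 / len(SIGNALS) for name in SIGNALS}

# Band upper edges as published: 0-400, 400-600, 600-750, 750-875, 875-1000.
BANDS = [
    (400, "Below threshold"),
    (600, "Emerging"),
    (750, "Discoverable"),
    (875, "Strongly discoverable"),
    (1000, "Authoritative"),
]


def composite_score(signal_scores: dict) -> float:
    """Weighted sum of the eight per-signal scores, each assumed to sit on a 0-1000 scale."""
    return sum(WEIGHTS[name] * signal_scores[name] for name in SIGNALS)


def band(score: float) -> str:
    """Map a 0-1000 composite score to its band label."""
    for upper, label in BANDS:
        if score <= upper:
            return label
    raise ValueError("score outside the 0-1000 range")
```

Under those placeholder equal weights, a subject scoring 760 on every signal lands at 760 overall, inside the Strongly discoverable band.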
Empirical. The Methodology Council reviewed thirty-one candidate signals during the v1.0 design year. Eight survived. The rest were redundant with one of the eight, behaved too noisily to be useful, or were measured downstream of an existing signal.
The Methodology Council reviews the signal set annually. If a ninth becomes warranted, it will be added publicly, with reasons, in the next State of AI Discovery.
Yes — we publish a free AI Visibility Check that returns a directional Score from your business identifiers and industry baselines. The full Score, computed by the Score Engine with prompt-set instrumentation across the five major AI systems, is produced during the Audit.
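For a concrete picture of what prompt-set instrumentation means, here is a minimal sketch of the general idea, not the Score Engine: a fixed set of prompts is run against each measured system and the generated answers are checked for the subject. The `ask_system` stub, the substring check, and the `Observation` structure are all hypothetical stand-ins introduced for illustration.

```python
# Sketch of the general idea behind prompt-set instrumentation, not the Score
# Engine. ask_system() is a hypothetical stand-in for whatever interface each
# AI system exposes; the substring check is a crude placeholder for citation
# detection.

from dataclasses import dataclass

SYSTEMS = ["ChatGPT", "Google AI Overviews", "Perplexity", "Claude", "Gemini"]


@dataclass
class Observation:
    system: str
    prompt: str
    cited: bool  # was the subject quoted or recommended in the generated answer?


def ask_system(system: str, prompt: str) -> str:
    """Hypothetical: submit the prompt to the named system and return its answer."""
    raise NotImplementedError


def run_prompt_set(prompt_set: list, subject: str) -> list:
    """Record, per system and prompt, whether the subject appears in the answer."""
    observations = []
    for system in SYSTEMS:
        for prompt in prompt_set:
            answer = ask_system(system, prompt)
            observations.append(
                Observation(system, prompt, cited=subject.lower() in answer.lower())
            )
    return observations


def mention_frequency(observations: list) -> float:
    """Share of prompt-system pairs in which the subject was cited, from 0 to 1."""
    return sum(o.cited for o in observations) / len(observations)
```

A real instrument would detect citations far more carefully than a substring match and would record which sources the answer attributed; the point of the sketch is only that the measurement is prompt-level and per-system.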
Yes. The Standard (the methodology specification), the Index (the measurement), the eight Discovery Signals, the Source Register (the sources we weight), and the Charter (the institution's editorial governance) are all published in full and reviewable. The institution amends them only through the procedure set out in the Charter, with the prior version recorded.
Fees, terms, what's included, how to leave.
The Audit is from $800, one-off. The Visibility Retainer is from $1,500/month. Multi-location enterprise engagements are custom-scoped, typically $50,000–$500,000/year. The Score Engine Institutional Licence (for buyers operating the technology themselves, not subjects being measured) is $250,000–$700,000/year.
The full Pricing page sets out every tier, with what's included at each.
Yes. The Visibility Retainer is month-to-month after the initial term. The institution does not lock subjects into long contracts; if the work is not producing the standing it should, leaving is the appropriate response.
Where you've prepaid annually for a discount, the unused portion is refunded pro rata, less any work already delivered against the period.
The Audit fee is refundable if, after delivery, you reasonably consider the report did not address what we said we'd address. We've not refunded an Audit since the institution opened, but the door is open and the standard is plain.
Retainer fees are refundable for the unused period on cancellation. Already-delivered work is not refundable; the institution has performed it.
The institution's view is that pricing held private until the second meeting is the practice of the previous discipline — designed to maximise the agency's optionality at the buyer's expense. We prefer transparency and accept the consequence of disqualifying buyers for whom the pricing is wrong.
Editorial independence, conflicts, data handling, governance.
The Charter requires it, and the operating structure enforces it: the Methodology Council (editorial) is structurally separate from the Head of Practice (engagements). Methodology decisions are not directed by which subjects pay us. Source Register admissions are governed by an independent process. The Standards Committee adjudicates conduct.
We also disclose the structure openly. The institution measures the field at the aggregate level (the Annual, the Index) and engages subjects at the per-subject level. The wall between those functions is in writing, reviewed by the Methodology Council, and amended only through the Charter procedure.
No. Per-subject Scores are held under Tier III classification per the Charter. The Annual publishes aggregate findings (vertical medians, signal trajectories, Source Register movements), never individual subject rankings. We don't publish league tables.
If you elect to be a published reference (some subjects do, after results have settled), it's by your written consent and on terms you set.
Yes. All engagement data is held under Tier III classification. The Score Engine console is subject-bound; access is restricted to the named delegates you authorise. Your data is not shared with other subjects, sold, or used for any purpose beyond your engagement.
The institution's Privacy Policy sets out the legal frame in full; the Tier classification is the operational discipline behind it.
The institution maintains a Conflicts Register. Where a conflict exists — typically a current or recent engagement with a direct competitor — it is disclosed in writing before any engagement begins, and a walled engagement structure is implemented or the engagement is declined.
The disclosure is to you, in writing, before you sign. If you're not satisfied with the wall, you can decline. We've declined engagements over conflicts; we'd rather lose a fee than the Charter.
Complaints are addressed to the Standards Committee at charter@aidiscovery.com.au. They are adjudicated by a committee independent of the engagement team, with written reasons. Outcomes range from no action through to suspension or revocation of certification where a practitioner is involved.
The institution publishes anonymised summaries of significant Standards Committee findings in each Annual, so the field can see how conduct is being adjudicated.
If your question isn't here, send it. We typically reply within two business days, and if your question is useful for the next operator we'll add it to this page.
From $800. Four-week turnaround. Six numbered deliverables. The fastest way to find out whether the institution's work is the right fit for your subject, and the worst case is that you walk away with a written diagnostic regardless.