Field Notes · Anchor 10 · The mechanism

How AI systems decide.

AI systems do not simply “know” which businesses to recommend. They consult a sequence of internal mechanisms — retrieval, reranking, citation weighting — each of which can be reasoned about. This anchor explains the mechanics in the form an operator needs, without the academic apparatus.

Field: The mechanism
Reading time: 9 minutes
First published: 10 May 2026
Reviewed: Methodology Council
The institution's position

AI systems make commercial recommendations through a three-stage process: retrieve, rerank, synthesise. Each stage applies different criteria, and a subject can succeed at one stage and fail at another. Effective work addresses all three.

Stage one — retrieval

The first stage is retrieval. When the user asks a commercial question, the AI converts the question into a set of search queries (sometimes one, often several) and runs them against an index. The index may be the system's own (Google's index for Gemini, a custom index for Perplexity) or an external one (Bing for ChatGPT, mixed for Claude). The retrieval stage returns a set of candidate sources.

Retrieval rewards two things: index inclusion and topical match. A subject not present in the index cannot be returned. A subject present but topically distant from the question's substance is also unlikely to be returned. This is the stage at which entity confusion does the most damage — a subject not recognised as a single entity matches its own queries less reliably.
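The two retrieval failure modes described above can be sketched in a few lines. This is a minimal illustration, not any system's actual retrieval logic: real systems combine keyword and dense-vector indexes, and the query-expansion step here is a placeholder. The index contents and the `min_overlap` threshold are invented for the example.

```python
# Toy retrieval stage: convert a question into query variants, then
# return sources whose text shares enough terms with any variant.
# Illustrative only; real retrieval uses keyword and vector indexes.

def expand_queries(question: str) -> list[str]:
    # Real systems generate several query variants; here we fake a second one.
    lowered = question.lower()
    return [lowered, " ".join(sorted(set(lowered.split())))]

def retrieve(index: dict[str, str], question: str, min_overlap: int = 2) -> list[str]:
    """Return candidate source IDs with sufficient term overlap with any query."""
    candidates = []
    for query in expand_queries(question):
        terms = set(query.split())
        for source_id, text in index.items():
            overlap = len(terms & set(text.lower().split()))
            if overlap >= min_overlap and source_id not in candidates:
                candidates.append(source_id)
    return candidates

index = {
    "plumber-a": "emergency plumber serving central london since 1998",
    "bakery-b": "artisan sourdough bakery in shoreditch",
}
print(retrieve(index, "emergency plumber london"))  # → ['plumber-a']
```

Note the two failure modes: a subject absent from `index` can never appear in the output, and a subject present but topically distant never clears the overlap threshold.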

Stage two — reranking

The second stage is reranking. The retrieval set is too large to pass directly to the model; the system therefore applies a reranker that scores each candidate on additional criteria and surfaces a smaller subset. The reranker considers authority signals (domain trust, editorial standing, schema accuracy), recency, attribution, and structural quality.

Reranking is the stage at which citation source quality and authority graph distance do most of their work. A subject with weak authority signals is dropped at the rerank, however strongly it performed at retrieval.
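A reranker of the kind described above can be sketched as a weighted score over authority-style signals. The signal names and weights here are illustrative assumptions, not any system's actual criteria; the point is the shape of the filter.

```python
# Toy reranker: score each retrieved candidate on authority-style
# signals and keep only the top few. Weights are invented for the sketch.

def rerank(candidates: list[dict], top_k: int = 2) -> list[dict]:
    """Sort candidates by a weighted authority score; return the top_k."""
    weights = {"domain_trust": 0.4, "recency": 0.2,
               "attribution": 0.2, "structure": 0.2}
    def score(c: dict) -> float:
        return sum(w * c.get(signal, 0.0) for signal, w in weights.items())
    return sorted(candidates, key=score, reverse=True)[:top_k]

candidates = [
    {"id": "a", "domain_trust": 0.9, "recency": 0.5, "attribution": 1.0, "structure": 0.8},
    {"id": "b", "domain_trust": 0.2, "recency": 0.9, "attribution": 0.1, "structure": 0.3},
    {"id": "c", "domain_trust": 0.7, "recency": 0.7, "attribution": 0.8, "structure": 0.9},
]
print([c["id"] for c in rerank(candidates)])  # → ['a', 'c']
```

Candidate `b` is recent but weak on every authority signal, so it is dropped here no matter how well it retrieved: exactly the failure the section describes.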

Stage three — synthesis and citation

The third stage is synthesis. The model receives the reranked set as context and generates the answer. As it generates, it tracks which sources supplied which claims and emits citations to the load-bearing sources. The model's behaviour at this stage is broadly consistent across systems: it prefers sources whose passages are short, attributed, evidence-bearing, and directly responsive to the prompt.

Synthesis is the stage at which passage-level citability does its work. A subject can survive retrieval, survive rerank, and still fail at synthesis if its content is not in the form the model can quote.
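The citability test at synthesis can be caricatured with a simple heuristic: short passages that carry a concrete figure are quotable, padded generalities are not. The word limit and the digit check are assumptions made for the sketch; real models attribute claims internally rather than applying any rule like this.

```python
# Toy citability check: a passage is "quotable" if it is short and
# carries a concrete figure. A stand-in for the model's preference for
# short, attributed, evidence-bearing passages.

def citable(passage: str, max_words: int = 25) -> bool:
    """Short passages containing a numeric fact pass; vague copy fails."""
    words = passage.split()
    has_evidence = any(w[0].isdigit() for w in words)
    return len(words) <= max_words and has_evidence

passages = {
    "src-1": "Founded in 1998, the firm handles roughly 4,000 callouts a year.",
    "src-2": "We are passionate about excellence and committed to quality in everything we do.",
}
cited = [sid for sid, text in passages.items() if citable(text)]
print(cited)  # → ['src-1']
```

`src-2` survives retrieval and could survive rerank, yet supplies nothing the model can quote: the stage-three failure the section describes.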

An observation from engagement

Most engagements that begin with a complaint of “invisibility” are, on diagnosis, failures at stage two (rerank) or stage three (synthesis), not stage one (retrieval). The subject is in the index. It is being filtered out at the rerank, or surviving the rerank only to be dropped before quotation. Each stage requires different work.

Operational implications

  1. Pass stage one with entity reconciliation, accurate schema, and indexability discipline.
  2. Pass stage two with authority signals: editorial standing, credentialed authorship, citation source quality, machine-readable professional memberships.
  3. Pass stage three with editorial substance: short, attributed, evidence-bearing passages that the model can quote without rephrasing.
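The three checkpoints above compose into a single filter chain. This sketch reduces each stage to a boolean gate, which is a deliberate simplification: the field names (`indexed`, `authority`, and so on) are invented for the example, and real stages score rather than gate. What it preserves is the structural claim: failing any one checkpoint removes the subject from the answer.

```python
# Toy pipeline: the three stages as sequential checkpoints. A subject
# must clear all three to be cited; the first failed stage is reported.

def pipeline(subject: dict) -> str:
    checks = [
        ("retrieval", subject.get("indexed") and subject.get("topical_match")),
        ("rerank", subject.get("authority")),
        ("synthesis", subject.get("citable_passages")),
    ]
    for stage, passed in checks:
        if not passed:
            return f"dropped at {stage}"
    return "cited"

print(pipeline({"indexed": True, "topical_match": True, "authority": False}))
# → dropped at rerank
print(pipeline({"indexed": True, "topical_match": True,
                "authority": True, "citable_passages": True}))
# → cited
```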

A closing observation

Each stage rewards a different discipline. Subjects whose teams understand the three stages stop investing in moves that work at one stage and not the others, and start building the cumulative architecture that survives all three.

The mechanism is, fundamentally, not opaque. It is a stack with three checkpoints. The institution's work is to build a subject capable of clearing all three.

Read the Standard

The Standard sits behind every stage.

This anchor explains the mechanism. The Standard is the document that frames the practice that addresses it.

Read the Standard →