What ChatGPT, Claude, Perplexity, and Gemini actually cite — and why it matters
Four engines, four very different corpus policies. A field guide to what each one trusts, and what that means for your AEO content strategy.
Treating "AI search" as one surface is the easiest way to waste a quarter of AEO work. The four largest engines differ in which content they see, which they trust, and which they'll name when asked for recommendations. Optimizing for the average is optimizing for none of them.
Here is the shape of what each engine actually reads.
ChatGPT
ChatGPT's base model is trained on a large web snapshot; for questions about specific brands and products, it relies on what it learned during training and, when web browsing is enabled, on live retrieval from Bing. The implication for you: your training-data footprint matters, but so does what Bing ranks today.
For buyers asking recommendation-shaped questions, ChatGPT tends to lean on review-style content, comparison articles, and well-structured product documentation. If your own site doesn't answer "what is it, who is it for, how is it different" in a clean hierarchy, the model fills the gap from someone else's framing.
What to optimize
Clear category pages. FAQ content on real buyer questions. Being referenced in comparison articles on sites Bing ranks highly.
Claude
Claude has a strong training corpus but — without explicit tool use — answers from what it learned, not what the web looks like today. With search tools enabled, Claude retrieves but cites more sparingly than Perplexity. Answers tend to be longer-form, more careful, and more willing to explain uncertainty.
Claude rewards depth. Long, substantive, genuinely technical writing that explains not just "what we do" but "why it works this way" tends to be recalled and paraphrased accurately. Thin marketing copy underperforms here more than on other engines.
What to optimize
In-depth guides. Explainers with mechanisms, not slogans. Research-backed posts that the model can reason from. Buyer-centric case studies over vendor-centric ones.
Perplexity
Perplexity is a retrieval-first engine: almost every answer is a synthesis of live sources, with inline citations. That makes it the most legible engine in the set — you can see exactly which domains shaped the answer.
The implication: Perplexity is the most responsive to fresh, well-structured, clearly authored content. A post published this week can be cited tomorrow. But it also dedupes ruthlessly: if three competitors all make the same argument, only the clearest statement of it earns the citation.
If you want to move an AEO metric in the next 30 days, Perplexity is usually where the leverage is. Fresh content with clear authorship gets cited.
What to optimize
Consistent publication cadence. Clear authorship and dates. Unique arguments, not restated consensus. Strong H-structure so the retriever can quote cleanly.
Gemini (and Google AI Overviews)
Gemini is deeply integrated with Google's index and Knowledge Graph. AI Overviews specifically pulls from sources Google already ranks — so traditional SEO authority transfers more directly here than on any other engine.
That's the good news. The less-good news: Google's AI Overviews are conservative about recommending small or newer brands. You tend to appear when your brand has accumulated the kinds of signals Google trusts — reviews, third-party articles, structured data, entity associations.
What to optimize
Schema markup. Entity clarity (your brand is about X, for Y, different from Z). Coverage on sites Google already trusts. None of this is new; the surface is.
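In practice, "schema markup" and "entity clarity" usually mean JSON-LD embedded in your pages using schema.org vocabulary. A minimal sketch, assuming a hypothetical brand ("Acme Analytics") and placeholder URLs; the exact fields you need depend on your category and Google's current structured-data guidelines:

```python
import json

# Hypothetical JSON-LD payload for entity clarity: who you are ("name"),
# what you're about and for whom ("description"), and which external
# profiles pin down the entity ("sameAs"). All values are placeholders.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Acme Analytics",
    "url": "https://example.com",
    "description": "Product analytics for mobile app teams",
    "sameAs": [
        "https://www.linkedin.com/company/example",
        "https://github.com/example",
    ],
}

# Serialize for embedding in a <script type="application/ld+json"> tag.
print(json.dumps(org, indent=2))
```

The `sameAs` links are what tie your site to the Knowledge Graph entity Google already has; the more consistent those associations are across trusted third-party sites, the less ambiguous your brand is to the engine.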
The strategic takeaway
You don't get to pick which engine your buyer uses. You do get to pick where you invest effort first. A simple heuristic: if you have no baseline presence anywhere, start with Perplexity (fastest feedback loop). If your presence is uneven, close the biggest per-engine gap. If you're strong everywhere except Google AI Overviews, that's a schema and third-party coverage problem, not a content problem.
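The heuristic above is simple enough to write down as a tiny function. A sketch, assuming a hypothetical 0-to-1 "presence score" per engine (e.g. the share of tracked prompts where your brand gets cited); the scoring metric is yours to define:

```python
def next_investment(presence: dict[str, float]) -> str:
    """Pick the engine to invest in next, per the heuristic above.

    `presence` maps engine name to a hypothetical 0-1 presence score,
    such as the fraction of tracked prompts that cite your brand.
    """
    if all(score == 0 for score in presence.values()):
        # No baseline anywhere: start where the feedback loop is fastest.
        return "Perplexity"
    # Otherwise close the biggest per-engine gap.
    return min(presence, key=presence.get)

# Strong everywhere except Google AI Overviews -> invest in Gemini signals.
print(next_investment({"ChatGPT": 0.7, "Claude": 0.6, "Perplexity": 0.8, "Gemini": 0.2}))
```

The point isn't the code; it's that the decision is mechanical once you measure each engine separately instead of tracking one blended "AI visibility" number.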
Measure separately. Invest where the gap is biggest. Revisit quarterly — the corpora change.