How to measure AI visibility (when Google Search Console won't help)
Traditional analytics stop at the click. AEO begins where the answer gets rendered.
For two decades, the definitive measure of organic visibility was a rank on a search-engine results page. Google Search Console told you the query, the impression, the click. Every AEO conversation eventually runs into the same wall: none of that exists when the answer is synthesized.
If a buyer asks ChatGPT "what's the best AEO platform for a mid-market SaaS brand," there is no SERP. There is no query impression. There is no click unless the model chose to cite a source and the user chose to follow it. The thing you need to measure — whether your brand showed up in the answer — leaves no trace in the analytics tools you already own.
Here's the measurement stack that replaces it.
1. Presence, not rank
Rank is a SERP concept. An AI answer doesn't have positions 1 through 10; it has a paragraph. The question is binary: were you named, or not? When multiple brands appear, the secondary question is order and adjacency — were you the first one mentioned, or the last? Were you anchored to the recommendation, or listed as an "also consider"?
Every meaningful AEO metric starts here. Presence rate per prompt tells you whether the model knows you exist for the kind of buyer who asks that question.
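A minimal sketch of that check in Python, assuming you've already captured the raw answer text. The brand names and the sample answer below are made up, and a production version would need alias lists and fuzzier matching, but the shape of the metric is the same: named or not, and in what order.

```python
import re

def mention_positions(answer: str, brands: list[str]) -> dict[str, int | None]:
    """Return the character offset of each brand's first mention, or None if absent."""
    positions = {}
    for brand in brands:
        # Word-boundary match so "Acme" doesn't fire on "Acmeworks".
        match = re.search(rf"\b{re.escape(brand)}\b", answer, flags=re.IGNORECASE)
        positions[brand] = match.start() if match else None
    return positions

answer = (
    "For a mid-market SaaS team, Acme Answers is the strongest fit. "
    "BrandBeacon is also worth considering if budget is the constraint."
)
positions = mention_positions(answer, ["Acme Answers", "BrandBeacon", "QuietCo"])

present = {brand: pos is not None for brand, pos in positions.items()}
order = sorted((b for b, p in positions.items() if p is not None), key=lambda b: positions[b])
print(present)  # {'Acme Answers': True, 'BrandBeacon': True, 'QuietCo': False}
print(order)    # ['Acme Answers', 'BrandBeacon'] — first-mentioned comes first
```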
2. Share of voice, per prompt class
Once you can detect presence, you can compare it. Group prompts by buyer intent — "best tool for X," "alternatives to Y," "how should I evaluate Z" — and measure your share of mentions across competitors. A 40% share of mentions on "best AEO platform" is a very different signal than a 4% share on "most affordable."
Don't average these. Averages hide where you're winning and losing. Break them out by prompt class and by engine.
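A sketch of that breakdown, assuming each prompt run is logged with its prompt class, engine, and the brands mentioned in the answer. The classes, engines, and counts here are illustrative; the point is the grouping key, (prompt class, engine), not the numbers.

```python
from collections import defaultdict

# One record per prompt run against one engine, with the brands mentioned in the answer.
runs = [
    {"prompt_class": "best tool for X",  "engine": "chatgpt",    "mentions": ["Acme", "BrandBeacon"]},
    {"prompt_class": "best tool for X",  "engine": "perplexity", "mentions": ["BrandBeacon"]},
    {"prompt_class": "alternatives to Y", "engine": "chatgpt",   "mentions": ["Acme", "QuietCo"]},
]

def share_of_voice(runs, brand):
    """Share of mentions per (prompt_class, engine) cell — deliberately never averaged."""
    totals, hits = defaultdict(int), defaultdict(int)
    for run in runs:
        key = (run["prompt_class"], run["engine"])
        totals[key] += len(run["mentions"])
        hits[key] += run["mentions"].count(brand)
    return {key: hits[key] / totals[key] for key in totals if totals[key]}

for (prompt_class, engine), share in share_of_voice(runs, "Acme").items():
    print(f"{prompt_class:>18} | {engine:<10} | {share:.0%}")
```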
3. Sentiment of the mention
Being mentioned is not the same as being recommended. A model can mention you in a comparison where a competitor is framed as the better option for the buyer's case. Sentiment analysis on just the sentences that name you — positive, neutral, or negative — is more actionable than sentiment on the whole answer.
If you only measure presence, you'll celebrate every mention. You should only celebrate the ones that push a buyer toward you.
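A rough sketch of mention-level sentiment: isolate the sentences that name you, then score only those. The keyword scorer below is a stand-in for whatever classifier you actually trust (an LLM judge, a lexicon model), and the brand names and answer text are invented.

```python
import re

def sentences_naming(answer: str, brand: str) -> list[str]:
    """Crude sentence split, keeping only sentences that name the brand."""
    sentences = re.split(r"(?<=[.!?])\s+", answer)
    return [s for s in sentences if re.search(rf"\b{re.escape(brand)}\b", s, re.IGNORECASE)]

def score_sentiment(sentence: str) -> float:
    """Placeholder keyword scorer; swap in a real classifier before relying on this."""
    positive = {"best", "strongest", "recommended", "excellent"}
    negative = {"avoid", "weaker", "expensive", "limited"}
    words = set(re.findall(r"[a-z']+", sentence.lower()))
    return float(len(words & positive) - len(words & negative))

answer = (
    "Acme is the strongest option for mid-market teams. "
    "BrandBeacon is cheaper, but its reporting is limited."
)
brand_sentences = sentences_naming(answer, "Acme")
scores = [score_sentiment(s) for s in brand_sentences]
print(brand_sentences, scores)  # score the sentences that name you, not the whole answer
```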
4. Citation source — do they link to you, or someone else?
Engines that surface sources (Perplexity, Google AI Overviews, Gemini with web access, Claude with search) show which pages they trusted when answering. If the model recommends you but cites a third-party review, you're dependent on that third party's framing. If it cites your own documentation, you own the narrative.
Track cited domains the same way you'd track backlinks in SEO. The difference: these citations are actively influencing the model's answer right now, not a ranking factor that compounds over months.
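A small sketch of that tracking, assuming you log the source URLs each engine exposes per answer. The URLs and the "owned" domain list below are placeholders; counting cited domains and splitting them into owned versus third-party is the backlink-report equivalent.

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical citation lists pulled from engines that expose sources.
cited_urls = [
    "https://docs.acme.example/answer-engine-guide",
    "https://reviews.example/acme-vs-brandbeacon",
    "https://docs.acme.example/pricing",
    "https://thirdpartyblog.example/best-aeo-platforms",
]

OWNED_DOMAINS = {"docs.acme.example", "acme.example"}

domains = Counter(urlparse(url).netloc for url in cited_urls)
for domain, count in domains.most_common():
    label = "owned" if domain in OWNED_DOMAINS else "third-party"
    print(f"{domain:<30} {count:>2}  ({label})")
```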
5. Per-engine drift
ChatGPT, Claude, Perplexity, and Gemini do not share a corpus. An AI visibility report that averages across engines hides the fact that you might be winning on ChatGPT and invisible on Perplexity. Always measure per-engine, always compare the gap.
The gap is the work. Closing it is the strategy.
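A sketch of the per-engine comparison, with made-up presence data: compute the presence rate per engine, then report the spread rather than the average.

```python
from collections import defaultdict

# One row per prompt run: was the brand present in that engine's answer?
# Engine names are real products; the presence data is invented for illustration.
runs = [
    {"engine": "chatgpt",    "present": True},
    {"engine": "chatgpt",    "present": True},
    {"engine": "perplexity", "present": False},
    {"engine": "perplexity", "present": True},
    {"engine": "gemini",     "present": False},
]

totals, hits = defaultdict(int), defaultdict(int)
for run in runs:
    totals[run["engine"]] += 1
    hits[run["engine"]] += run["present"]

rates = {engine: hits[engine] / totals[engine] for engine in totals}
gap = max(rates.values()) - min(rates.values())
print(rates)                     # per-engine presence rate
print(f"drift gap: {gap:.0%}")   # the spread between best and worst engine
```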
What this replaces
Search Console gave you query, impression, position, click-through rate, click. The AEO equivalent is prompt, presence, sentiment, cited source, click-through when followed. Same shape. Different surface. Same discipline: you can't improve what you don't measure.
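If it helps to see that mapping as a schema, here is a hypothetical record for one prompt run. The field names are chosen for illustration, not taken from any particular tool.

```python
from dataclasses import dataclass

@dataclass
class AeoObservation:
    """One row of the AEO equivalent of a Search Console record (illustrative fields)."""
    prompt: str                    # replaces the query
    engine: str                    # chatgpt, claude, perplexity, gemini...
    present: bool                  # replaces impression and position
    sentiment: float               # scored on the sentences that name you
    cited_domains: list[str]       # replaces the ranked URL
    clicked_through: bool | None   # only observable when a citation is followed

row = AeoObservation(
    prompt="best AEO platform for a mid-market SaaS brand",
    engine="chatgpt",
    present=True,
    sentiment=0.6,
    cited_domains=["docs.acme.example"],
    clicked_through=None,
)
print(row)
```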