Two metrics define brand visibility inside AI answers: mention share and answer share. They sound similar, get used interchangeably, and mean different things. Reading only one of them, or treating them as the same, misses half the picture and leads to the wrong action plan.
Liftable definition: Mention share measures whether AI assistants are naming your brand at all. Answer share measures whether you are the named recommendation when AI returns one. Mention share is the consideration metric; answer share is the win metric.
## Key terms in one place
- Mention share: The share of tracked prompts where AI assistants (ChatGPT, Gemini, Claude, Perplexity) name your brand anywhere in the answer. Counts presence, not position.
- Answer share: Of the prompts where AI returns a recommendation, the share where you are the named recommendation rather than a rival. Counts winning the slot, not just appearing.
- Co-mention: When AI names your brand alongside one or more rivals in the same answer. Counts toward mention share but not toward answer share unless you are the lead.
- Recommendation event: An AI answer that picks a specific brand or set of brands as the recommendation. Not every answer is a recommendation event; some are definitional or comparative without recommending.
## Mention Share vs. Answer Share at a Glance
| Dimension | Mention Share | Answer Share |
|---|---|---|
| What it measures | Presence: did AI name your brand anywhere in the answer? | Winning: are you the named recommendation when AI gives one? |
| Numerator | Prompts where your brand is mentioned at all | Prompts where your brand is the lead recommendation |
| Denominator | All tracked prompts | Prompts where any brand is recommended |
| SEO analogue | "Did we appear on the SERP at all?" | "Did we get position #1?" |
| Brand-tracking analogue | Unaided awareness | First-choice preference |
| Funnel stage | Top-of-funnel: are we in consideration? | Mid-funnel: when AI picks one, is it us? |
| Moves with | Listicle inclusions, third-party coverage, broad SEO presence | Decisive proof, comparison wins, strong positioning for specific buyers |
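The numerator/denominator split above can be made concrete in a few lines of Python. This is a minimal sketch: the record shape, field names, and brand names are illustrative assumptions, not the schema of any particular tracking tool.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class PromptResult:
    """One AI answer for one tracked prompt (hypothetical record shape)."""
    prompt: str
    mentioned: bool                       # brand named anywhere in the answer
    recommended_brand: Optional[str]      # lead recommendation, or None if no recommendation event

def mention_share(results: List[PromptResult]) -> float:
    """Presence: prompts where the brand is mentioned / all tracked prompts."""
    return sum(r.mentioned for r in results) / len(results)

def answer_share(results: List[PromptResult], brand: str) -> float:
    """Winning: prompts where the brand is the lead / prompts with any recommendation."""
    events = [r for r in results if r.recommended_brand is not None]
    if not events:
        return 0.0
    return sum(r.recommended_brand == brand for r in events) / len(events)
```

For example, across four tracked prompts where you are named in three but are the lead in only one of two recommendation events, mention share reads 0.75 while answer share reads 0.5.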
## The Four Quadrants Tell Different Stories
Reading mention share and answer share together creates a 2×2 that maps your visibility position cleanly. Each quadrant signals a different action:
| Position | What It Means | Action |
|---|---|---|
| High mention share, high answer share | Leading the category. AI both knows you and picks you when it picks anyone. | Defend. Refresh proof, monitor rivals, don’t let the position erode. |
| High mention share, low answer share | You’re in the consideration set but losing the pick. Buyers see you alongside rivals; AI doesn’t crown you. | Sharpen positioning for specific buyers. Publish “best for X buyer” comparison content. The lever is decisive proof, not more visibility. |
| Low mention share, high answer share | When AI does name you, you win. But AI rarely names you at all. | Expand reach. Get on more category listicles, earn third-party coverage, broaden SEO footprint. The lever is presence, not positioning. |
| Low mention share, low answer share | You don’t exist in AI answers for this category. | Foundation work. Classic SEO, listicle inclusion, analyst coverage. You can’t optimize answer share until AI knows you exist. |
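The quadrant read can be sketched as a small lookup. The 0.5 threshold here is an arbitrary illustration; in practice you would calibrate "high" versus "low" against your own category baseline.

```python
def quadrant(mention_share: float, answer_share: float, threshold: float = 0.5) -> str:
    """Map the two metrics to one of the four positions.

    The 0.5 threshold is illustrative, not a standard cutoff.
    """
    high_mention = mention_share >= threshold
    high_answer = answer_share >= threshold
    if high_mention and high_answer:
        return "Defend"
    if high_mention:
        return "Sharpen positioning"   # visible but losing the pick
    if high_answer:
        return "Expand reach"          # winning when named, rarely named
    return "Foundation work"           # absent from AI answers
```

Plotting weekly readings through a function like this makes quadrant drift visible: a brand sliding from "Defend" to "Sharpen positioning" is losing the pick before it loses visibility.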
## Why You Need Both Metrics
Mention share alone is misleading because it counts every mention equally. A brand named once in a 5-vendor list scores the same as a brand named as the single recommendation. That equates “in consideration” with “winning,” which is wrong, especially on engines like ChatGPT that often pick a single winner.
Answer share alone is also misleading because it ignores the prompts where you weren’t mentioned at all. If AI names a rival as the answer on a prompt you weren’t even in, that prompt doesn’t count against your answer share but it absolutely matters: you’re missing from that buyer’s consideration set entirely.
Together, they tell you which problem you have. High mention share with low answer share means “we’re visible but losing the pick.” Low mention share with high answer share means “we win when we’re named but we’re not named often enough.” Same brand, opposite playbooks.
## The Two Metrics Behave Differently Across the Four AI Assistants
Mention share and answer share aren’t equally meaningful on every engine. Each major AI assistant skews the relationship between the two:
| Engine | Behavior | What this means for the metrics |
|---|---|---|
| ChatGPT | Decisive: often picks one or two clear winners | Answer share is the more meaningful read; mention share without answer share signals you’re losing |
| Claude | Multi-vendor hedging: names several options per answer | Mention share is the more meaningful read; multi-vendor inclusion is the win, not solo recommendation |
| Gemini | AI Overviews often present structured lists | Both matter; mention share for “in the list,” answer share for “the lead position” |
| Perplexity | Citation-first with multi-source synthesis | Mention share matters but cited-source share matters more; the citation is the visibility win |
Read all four together, not just one. A brand that wins answer share on ChatGPT but loses mention share on Claude has very different work to ship than a brand with the opposite pattern.
## How to Track Both Metrics Daily
Single readings on either metric are noisy. AI answers rotate among credible sources between runs, naming you on one run and omitting you on the next for the same prompt. Track both metrics over time, not in snapshots:
- Define your prompt set: 20 to 40 prompts your target buyers actually ask, written in their language. Mix definitional, comparison, evaluation, and how-to queries.
- Run daily across all four engines: ChatGPT, Gemini, Claude, Perplexity. The Signal Desk automates this; manually sampling 80 to 160 prompts daily across four engines is impractical.
- Calculate both metrics weekly: Mention share = prompts where you’re named / total prompts. Answer share = prompts where you’re the lead / prompts where any brand was recommended.
- Read 30-day trend lines: A single week is too noisy. The 30-day rolling window is the right cadence for spotting real movement versus rotation noise.
- Map to the quadrant: Plot your weekly numbers on the 2×2 grid. Watch which quadrant you live in and which direction you’re moving.
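The 30-day rolling read in the steps above can be sketched as a simple windowed mean over daily metric values. The window size and input shape are illustrative; the point is that each plotted value averages away run-to-run rotation noise.

```python
from collections import deque
from statistics import mean
from typing import List

def rolling_trend(daily_values: List[float], window: int = 30) -> List[float]:
    """Rolling mean over daily metric readings (mention share or answer share).

    Early points average over fewer than `window` days until the buffer fills.
    """
    buf: deque = deque(maxlen=window)  # keeps only the most recent `window` readings
    trend = []
    for value in daily_values:
        buf.append(value)
        trend.append(mean(buf))
    return trend
```

On a 2-day window, a prompt that alternates between naming you and omitting you (1, 0, 1, 0) smooths to a stable 0.5 after the first reading, which is the real signal behind the rotation.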
## Common Mistakes
Three patterns trip up marketers reading these metrics for the first time:
- Reporting only mention share: Looks better than answer share for most brands because the bar is lower. Mention share alone hides whether you’re winning the recommendation. Always pair it with answer share.
- Averaging across all four engines: Different engines have different baselines. Claude’s mention share will typically run higher than ChatGPT’s for the same brand because Claude names more vendors per answer. Averaged reads hide engine-specific gaps. Read each engine separately.
- Reading single-day snapshots: Single readings can swing 20 to 40 percentage points run-to-run on the same prompt. Always read trend lines over 7 to 30 days, not screenshots.
## Bottom Line
Mention share tells you whether AI knows you exist for a category. Answer share tells you whether AI picks you when it picks anyone. Reading only one of them gets you the wrong action plan: low mention share calls for broader presence, while high mention share with low answer share calls for sharper positioning. Both are required for a complete read on AI answer visibility.
The TrendsCoded workstation reads both metrics daily across ChatGPT, Gemini, Claude, and Perplexity, plots your 30-day trend on the four-quadrant grid, and ships a weekly AEO Strategic Plan that names whether the next move is reach (broaden mention share) or sharpening (lift answer share). Product Position scoring reads which buyers each metric covers, so you know not just “which metric is moving” but “which buyer is moving it.”
