Glossary

Five terms that run the workstation

AI answer measurement glossary.

In one line: TrendsCoded measures whether AI models name, cite, and rank your brand for the buyers you sell to — daily across ChatGPT, Gemini, Claude, and Perplexity — and ships a weekly plan that closes the gap. These five terms are the vocabulary the workstation runs on.

  • Position Score

    #position-score

    In one line: A 0–100 read of how AI models name, cite, and rank your brand for a defined buyer, use case, region, and model.

    Definition

    Position Score is TrendsCoded's diagnostic measure of how a brand is positioned inside AI answers. It rolls up four sub-reads — whether the model names you, where it ranks you in lists, what it cites alongside you, and how confidently it recommends you — and normalizes them on a 0–100 scale per buyer × use case × region × model. A Position Score is always tied to a defined market context: there is no single global score.

    Example

    For a Series B fintech selling to mid-market CFOs in North America: Position Score on ChatGPT 73, Gemini 41, Claude 58, Perplexity 12. The 12 on Perplexity flags an answer-share gap with two named rivals; the 73 on ChatGPT is the position to defend.
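The roll-up described above can be sketched as a weighted average of the four sub-reads. The weights and field names below are hypothetical — TrendsCoded's actual weighting is not published — but the shape of the calculation is: four 0–1 sub-reads, one weighted sum, normalized to 0–100 per buyer × use case × region × model.

```python
from dataclasses import dataclass

@dataclass
class SubReads:
    """Four sub-reads for one buyer x use case x region x model, each on a 0-1 scale."""
    named: float        # does the model name the brand at all?
    rank: float         # list position, 1.0 = top of list
    citations: float    # strength of what is cited alongside the brand
    confidence: float   # how confidently the model recommends the brand

# Hypothetical weights -- illustrative only, not TrendsCoded's real weighting.
WEIGHTS = {"named": 0.35, "rank": 0.30, "citations": 0.20, "confidence": 0.15}

def position_score(r: SubReads) -> int:
    """Roll the four sub-reads up into a single 0-100 Position Score."""
    raw = (WEIGHTS["named"] * r.named
           + WEIGHTS["rank"] * r.rank
           + WEIGHTS["citations"] * r.citations
           + WEIGHTS["confidence"] * r.confidence)
    return round(raw * 100)

# One market context, one model:
print(position_score(SubReads(named=1.0, rank=0.8, citations=0.6, confidence=0.6)))  # → 80
```

Because every score is computed per market context and per model, the four numbers in the fintech example above are four separate runs of this calculation, not slices of one global score.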

  • AEO Strategic Plan

    #aeo-strategic-plan

    In one line: A weekly action plan with three concrete moves: the gap to close, the strength to defend, and the proof signal to publish.

    Definition

    The AEO Strategic Plan (Answer Engine Optimization Strategic Plan) is the weekly operating output of the workstation. It names one gap to close (where you are losing ground to a rival inside AI answers), one strength to defend (a position holding across models), and one proof signal to publish (the next capability claim, narrative proof, or structured content that should lift answer share). It is delivered Friday and is intended to be the marketing team's working document for the week — not a dashboard.

    Example

    Week of May 2: Gap — Perplexity is naming Rival X over you for 'best procurement automation for mid-market.' Defend — ChatGPT consistently names you for 'AP automation' since April. Ship — publish a third-party-cited integration case study with NetSuite to lift answer share on the Perplexity prompt set.

  • Signal Desk

    #signal-desk

    In one line: A daily ticker of what changed inside AI answers since yesterday — rivals gaining or losing rank, listicle drops, alternatives surfacing.

    Definition

    The Signal Desk is the daily read surface of the workstation. It runs prompt sets across ChatGPT, Gemini, Claude, and Perplexity every day and surfaces what moved: a rival entering or leaving a top-3 list, a new alternative being recommended, an attribute being credited differently, citation share shifting between sources. Think Bloomberg ticker, scoped to your category, target buyers, and target organizations.

    Example

    Today: Rival X gained rank +2 on Gemini for 'best contract lifecycle management.' Listicle drop on Perplexity removed two competitors and added one new alternative. Your brand picked up a new citation in ChatGPT for 'enterprise SOC2 readiness.'
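At its core, a ticker entry like the ones above comes from diffing today's ranked answer list against yesterday's for the same prompt × model. A minimal sketch of that diff, assuming answers are already reduced to ordered brand lists (the real pipeline's extraction step is not shown):

```python
def rank_moves(yesterday: list[str], today: list[str]) -> list[str]:
    """Diff two ranked answer lists for one prompt x model and report what moved."""
    y_pos = {brand: i for i, brand in enumerate(yesterday)}
    t_pos = {brand: i for i, brand in enumerate(today)}
    moves = []
    for brand, i in t_pos.items():
        if brand not in y_pos:
            moves.append(f"{brand}: new alternative (entered at #{i + 1})")
        elif y_pos[brand] != i:
            delta = y_pos[brand] - i  # positive = moved up the list
            moves.append(f"{brand}: rank {'+' if delta > 0 else ''}{delta}")
    for brand in y_pos:
        if brand not in t_pos:
            moves.append(f"{brand}: dropped from the list")
    return moves

# e.g. rank_moves(["A", "B", "C"], ["B", "A", "D"])
# → ["B: rank +1", "A: rank -1", "D: new alternative (entered at #3)", "C: dropped from the list"]
```

Run once per prompt × model per day, these per-list diffs are what roll up into the daily ticker.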

  • Mention Share

    #mention-share

    In one line: The percentage of relevant AI answers in your defined market that name your brand, measured over a rolling 30-day window.

    Definition

    Mention Share is the share of AI answers — across ChatGPT, Gemini, Claude, and Perplexity — that name your brand for the prompts that match your defined market (buyer × use case × region). It is measured over a 30-day rolling window because individual AI answers rotate; one-day snapshots are noise. Mention Share answers the question 'do AI assistants know we exist for this buyer?' before Answer Share answers 'do they recommend us?'

    Example

    Across 240 prompt-runs in the last 30 days for 'enterprise developer security tools, North America,' your brand was named in 42% of answers (101/240). Two named rivals sat at 71% and 58%; the gap to close on mention is roughly 30 percentage points.
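The arithmetic behind the example is a simple ratio over a rolling window. A sketch, assuming each prompt-run is reduced to a date and a named-or-not flag (the run data here is illustrative):

```python
from datetime import date, timedelta

def mention_share(runs: list[tuple[date, bool]],
                  today: date,
                  window_days: int = 30) -> float:
    """Share of prompt-runs in the rolling window whose answer named the brand.

    Each run is (run_date, brand_was_named). Returns 0.0 when the window is empty.
    """
    cutoff = today - timedelta(days=window_days)
    in_window = [named for d, named in runs if cutoff < d <= today]
    return sum(in_window) / len(in_window) if in_window else 0.0

# Mirroring the example: 101 of 240 runs named the brand → ≈ 0.42
runs = [(date(2025, 5, 1), i < 101) for i in range(240)]
print(f"{mention_share(runs, date(2025, 5, 15)):.0%}")  # → 42%
```

The window filter is why one-day snapshots never feed the number directly: a run only counts while it sits inside the trailing 30 days.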

  • Answer Share

    #answer-share

    In one line: Among AI answers that name your brand, the percentage where you are recommended in the top three over a rolling 30-day window.

    Definition

    Answer Share is the conditional measure that follows Mention Share: of the AI answers that named you, how often were you placed in the top three recommendations? It captures whether AI assistants treat you as a leading option for the buyer, not a long-tail mention. Answer Share is measured over a 30-day rolling window across the four major models and reads as a percentage of mentioned answers, not all answers.

    Example

    Of the 101 answers that named your brand in the last 30 days, 38 placed you in the top three (38%). Rival X's Answer Share over the same window was 64%; that 26-point gap is the lever the AEO Strategic Plan attacks first.

Methodology

How these numbers are produced.

Prompt sets are configured per category and resolved against a defined buyer × use case × region. Each prompt is run repeatedly across ChatGPT, Gemini, Claude, and Perplexity. Mention Share and Answer Share are computed over a rolling 30-day window because individual AI answers rotate among credible sources; single-day snapshots are noise.

Position Score is normalized 0–100 per buyer × use case × region × model. There is no global Position Score — every score is tied to a defined market context. The AEO Strategic Plan is generated weekly by ranking the largest gaps against the freshest signals, then naming the next proof artifact to publish.
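The "rank the largest gaps against the freshest signals" step can be sketched as a selection over per-prompt gaps and per-model positions. This is a toy illustration of the shape of that ranking, not TrendsCoded's actual plan generator; the proof-signal line in particular is a placeholder where the real plan names a concrete artifact:

```python
def weekly_plan(positions: dict[str, int],
                rival_gaps: dict[str, float]) -> dict[str, str]:
    """Pick the plan's three moves: the gap to close, the strength to defend,
    and the proof signal to publish.

    positions: Position Score per model; rival_gaps: answer-share gap in
    percentage points per prompt set. Both inputs here are hypothetical.
    """
    gap_prompt = max(rival_gaps, key=rival_gaps.get)      # largest gap → close it
    strong_model = max(positions, key=positions.get)      # strongest score → defend it
    return {
        "close": f"Largest answer-share gap: '{gap_prompt}' "
                 f"({rival_gaps[gap_prompt]:.0f} pts)",
        "defend": f"Strongest position: {strong_model} ({positions[strong_model]})",
        "ship": f"Publish the next proof artifact targeting '{gap_prompt}'",
    }
```

Fed the numbers from the earlier examples (ChatGPT 73 to defend, a 26-point Answer Share gap to close), this selection reproduces the Friday plan's structure: one gap, one strength, one proof signal.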

Last reviewed 2026-05-06

Want to see these numbers for your category? Start with a fixed-price one-week pilot — $500, no subscription.