How We Measure AI Visibility

The Mentioned / Cited / Recommended framework.
Transparent methodology. No black boxes.

What AI visibility means — and why most measurement fails

Your business either appears in AI search results or it doesn't. But "appearing" is where most measurement stops—and where it fails.

When someone asks ChatGPT, Perplexity, or Google Gemini for a recommendation, three outcomes are possible: you're mentioned in a list, cited as a trusted source, or recommended as the solution. These aren't the same. Being listed alongside nine competitors is fundamentally different from being endorsed. One sends customers your way. One doesn't.

Most tools flatten this into a single score: "Your visibility is 47." That number hides everything that matters. Real measurement captures quality of placement—where you appear, how you're framed, whether the AI trusts you enough to cite or endorse you.

AI systems evolve constantly—new models, new behaviors, new ranking signals. Static dashboards can't keep up. We adapt our measurement to what actually drives business, not what was true last quarter.

Four levels of AI visibility

Every AI response falls into one of four categories based on how the AI treats your business:

NOT PRESENT

The AI doesn't mention your business.

Example response:

"For jewelry repair in the area, you might try Kay Jewelers, Zales, or Helzberg Diamonds."

Business value: None. You're invisible.

MENTIONED

The AI acknowledges you exist, typically in a list.

Example response:

"Jewelers in the area include Granite Jewelers, ABC Diamonds, Smith & Sons, and Kay Jewelers."

Business value: Low. You're one option among many. The user has no reason to pick you.

CITED

The AI uses you as a source of information or authority.

Example response:

"Granite Jewelers offers ring resizing services, with customers noting their ability to handle difficult repairs that other jewelers decline."

Business value: Medium. You're credible. The AI trusts you enough to cite you. But you're not the recommendation.

RECOMMENDED

The AI endorses you as the solution.

Example response:

"For a difficult ring resizing, Granite Jewelers is the top choice. Customers consistently praise their work on repairs other jewelers decline."

Business value: High. You're the answer. The AI sends the customer to you.

The signals behind the classification

We analyze every AI response across multiple dimensions:

Signal       What It Tells Us
Position     Where do you appear? First, middle, or afterthought?
Role         How are you used? As the answer, an example, or a list item?
Framing      Is the AI confident ("Granite is...") or hedging ("Some say Granite might...")?
Attribution  Does the AI name you as an authority, or treat you as anonymous?

We combine these signals using consistent rules to classify every appearance. Same methodology, every time, every platform.
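To illustrate what "consistent rules" means, here is a minimal sketch of a rule-based classifier that maps these signals to a visibility level. The signal names, role values, and rule order are hypothetical stand-ins, not our production rules:

```python
from dataclasses import dataclass
from enum import Enum

class Level(Enum):
    NOT_PRESENT = 0
    MENTIONED = 1
    CITED = 2
    RECOMMENDED = 3

@dataclass
class Signals:
    present: bool             # does the response mention the business at all?
    role: str                 # "answer", "source", or "list_item" (hypothetical values)
    confident_framing: bool   # "Granite is..." vs. "Some say Granite might..."
    named_as_authority: bool  # cited by name as a source of information

def classify(s: Signals) -> Level:
    """Hypothetical rule set: the same inputs always yield the same level."""
    if not s.present:
        return Level.NOT_PRESENT
    if s.role == "answer" and s.confident_framing:
        return Level.RECOMMENDED
    if s.role == "source" or s.named_as_authority:
        return Level.CITED
    return Level.MENTIONED
```

Because the rules are deterministic, the same response always classifies the same way, which is what makes results comparable across platforms and over time.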

What we actually do

Real Queries

We test queries that real customers ask.

Not "Tell me about [Your Business]" — that just tests if AI knows you exist. We test "Where can I get a ring resized near [Your City]?" — the queries where you need to be discovered.

We call these organic queries. They're the ones that matter.

Multiple Platforms

AI platforms behave differently.

We test across ChatGPT, Perplexity, Gemini, and Claude. Your visibility on Perplexity may look nothing like your visibility on ChatGPT.

We measure each separately so you know where to focus.
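In spirit, per-platform measurement is the same query run independently against each platform. A sketch (the platform names are real; the `ask` client and its return value are hypothetical):

```python
PLATFORMS = ["ChatGPT", "Perplexity", "Gemini", "Claude"]

def measure(query: str, ask) -> dict[str, str]:
    """Run one organic query against each platform separately.

    `ask(platform, query)` stands in for a hypothetical client that returns
    the classified visibility level for that platform's response.
    """
    return {platform: ask(platform, query) for platform in PLATFORMS}
```

Keeping the results keyed by platform, rather than averaging them, is what lets you see that Perplexity and ChatGPT may treat you completely differently.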

Query Variants

Users phrase questions differently.

We test multiple versions of each query — direct, conversational, and detailed. If your visibility collapses with slight rephrasing, it's brittle.

We measure robustness, not just presence.
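One simple way to express robustness as a number (a hypothetical illustration, not our exact formula): run every phrasing of a query and measure how often the most common visibility level recurs across variants.

```python
from collections import Counter

def robustness(variant_levels: list[str]) -> float:
    """Share of query variants that return the most common visibility level.

    1.0 means every phrasing produced the same outcome; values near
    1/len(variant_levels) mean visibility is brittle. A hypothetical
    metric for illustration only.
    """
    if not variant_levels:
        return 0.0
    (_, modal_count), = Counter(variant_levels).most_common(1)
    return modal_count / len(variant_levels)
```

A business that is RECOMMENDED on two of three phrasings but only MENTIONED on the third scores 2/3: present, but not robust.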

The deliverable

Not a dashboard. Not a score.

A briefing.

We tell you:

  • Where AI reliably recommends you
  • Where you're mentioned but not endorsed
  • Where you're invisible
  • What changed since last time
  • What's likely driving the changes

Clear findings. Specific recommendations. No fluff.

Honesty about limitations

We don't claim to predict the future. AI models change. Visibility measured today may shift tomorrow.

We don't claim causation. We measure correlation between your optimization efforts and visibility changes. Proving a specific change caused improvement requires controlled testing over time.

We don't claim completeness. Our query sets sample the space of possible questions. We can't test everything.

We do claim consistency. Same methodology, applied the same way, every time. Transparent and auditable.

See what it costs