Share of Prompt

Almost every major marketing analytics platform shipped AI visibility features in 2025. HubSpot acquired xFunnel and launched an AEO Grader and Share of Voice tool. Semrush released an AI Visibility Toolkit. Ahrefs added Brand Radar. Profound, Gauge, SE Visible, Superlines, and at least a half-dozen other startups raised venture capital to build dedicated answer engine optimization platforms.

What does not yet exist is a simple, unified measurement framework that connects AI visibility to revenue. The tools can tell you whether ChatGPT mentioned your brand. They cannot yet tell you what that mention was worth, how it compares to your paid search investment, or when you should reallocate budget. Here’s how I’m thinking about it.

Three Layers of Share of Prompt

In traditional media, share of voice measures how often your brand appears relative to competitors. In the answer economy, I propose a more actionable concept: share of prompt. Share of prompt measures how frequently answer engines recommend you when customers ask category-level questions, and what that recommendation is worth. It has three measurable layers. The first is visibility: Are you being cited? The second is influence: Do those citations show up in conversion paths? The third is value: What is each agent-attributed session worth? Each layer has its own metric. Together, they create an executive scorecard that connects AI presence to revenue.

Where These Metrics Come From

I didn’t invent the three metrics I’ll describe here from scratch. They are deliberate adaptations of measurement concepts that marketing teams have used for decades, applied to a channel that did not exist two years ago.

Share of voice has been a core PR and media buying metric since the 1990s. Channel-attributed conversions are how every performance marketer measures paid search, social, and email. Revenue per session by source is a standard unit economics calculation used in every media mix model. The instrumentation I describe (UTM tagging, referrer detection, first-touch cookie persistence, GA4 custom dimensions, BigQuery attribution views) is standard analytics engineering. None of it requires novel technology.

What is new is the application. AI answer engines represent a fundamentally different channel. They do not send consistent referrer data. They do not provide impression counts. They synthesize recommendations rather than display ranked links. Any framework that claims precision here is marketing, not measurement. The goal is directionally correct, decision-grade analytics that improve as detection improves. That is what the following metrics are designed to deliver.

Layer 1: Brand Citation Rate (BCR), Your Raw Share of Prompt

BCR is the percentage of AI-generated answers in your category that mention your brand without the user prompting for it by name. This is the foundation of share of prompt. If you are not being cited, the other two layers are zero.

BCR = (answers mentioning your brand ÷ total answers sampled) × 100

The concept has strong precedent. The AEO tooling ecosystem has already converged on variations of this idea. Profound tracks “AI visibility” and citation rates across five major answer engines. HubSpot’s Share of Voice tool measures brand mention frequency across ChatGPT, Perplexity, and Gemini. Semrush, Ahrefs, Siftly, and others offer similar capabilities. BCR formalizes this into a specific, computable metric with a defined methodology.

Computing BCR requires active monitoring. Build a query set of your top 50 to 200 commercial intents (the prompts a potential customer would type into an answer engine when shopping your category). Run those queries weekly across the answer engines that matter to your audience and record four data points for each: whether your brand was mentioned, what position it appeared in, whether a link to your site was included, and whether your brand was the primary recommendation.
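For teams building this in-house, the logging step can be a simple record per query-engine pair. Here is a minimal Python sketch; the record fields mirror the four data points above, and the sample prompts and values are hypothetical:

from dataclasses import dataclass

@dataclass
class CitationRecord:
    query: str               # the commercial-intent prompt you ran
    engine: str              # which answer engine produced the answer
    mentioned: bool          # was your brand named at all?
    position: int | None     # order of mention among cited brands, None if absent
    linked: bool             # did the answer include a link to your site?
    primary: bool            # was your brand the primary recommendation?

def bcr(records: list[CitationRecord]) -> float:
    """Brand Citation Rate: percent of sampled answers mentioning the brand."""
    if not records:
        return 0.0
    return sum(r.mentioned for r in records) / len(records) * 100

# One hypothetical weekly run across two engines.
weekly = [
    CitationRecord("best project management tool for a 50-person team", "chatgpt", True, 2, True, False),
    CitationRecord("best project management tool for a 50-person team", "perplexity", False, None, False, False),
    CitationRecord("project management software with gantt charts", "chatgpt", True, 1, True, True),
]
print(f"BCR: {bcr(weekly):.1f}%")  # 2 of 3 sampled answers mention the brand -> 66.7%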

Segment BCR by intent cluster (brand queries, category queries, competitor queries) and by platform. Category and competitor clusters matter most because they represent consumers who have not yet chosen a brand.

Not all citations carry equal weight. A passing mention without a link has far less commercial value than a primary recommendation that sends traffic to your site. A simple weighting model captures this: 0.25 for a mention without a link, 0.75 for a mention with a link, and 1.00 for a primary recommendation with a link. (These weights reflect professional judgment and will need calibration as the market produces harder data.) Applying these weights produces a Weighted BCR you can trend over time and tie more plausibly to downstream behavior.

Weighted BCR = (sum of citation weights ÷ total answers sampled) × 100
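In code, the weighting collapses to a small lookup. A sketch, assuming one (mentioned, linked, primary) tuple per sampled answer; how to weight a primary recommendation without a link is not specified above, so this version falls back to the plain link weights:

def citation_weight(mentioned: bool, linked: bool, primary: bool) -> float:
    """Weights from the model above: 0.25 mention only, 0.75 mention with link,
    1.00 primary recommendation with link."""
    if not mentioned:
        return 0.0
    if primary and linked:
        return 1.00
    return 0.75 if linked else 0.25

def weighted_bcr(answers: list[tuple[bool, bool, bool]]) -> float:
    if not answers:
        return 0.0
    return sum(citation_weight(*a) for a in answers) / len(answers) * 100

answers = [(True, False, False), (True, True, False), (True, True, True), (False, False, False)]
print(f"Weighted BCR: {weighted_bcr(answers):.1f}%")  # (0.25 + 0.75 + 1.0 + 0) / 4 = 50.0%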

You can use three strategic thresholds as working benchmarks for non-brand category queries. BCR above 25 percent indicates strong positioning and high likelihood that the answer engine treats your brand as a go-to recommendation. BCR below 10 percent means you are functionally invisible. Between 10 and 25 percent, you are co-cited alongside competitors without a clear advantage.

These thresholds are informed by early data from AEO platforms like Profound, which shows that category leaders in established industries can achieve visibility scores above 30 percent in AI-generated answers for their vertical. The thresholds will need refinement as the market develops.

AI-generated answers are non-deterministic. The same prompt can return different results on different days, at different times, and for different user profiles. A brand that scores 30 percent BCR on Monday might score 18 percent on Thursday with identical queries. Weekly monitoring helps smooth this variance, and commercial AEO tools (Profound, HubSpot, Semrush) handle the sampling at scale. If you are building BCR measurement in-house, run each query multiple times per cycle and report the mean. Do not treat any single measurement as a stable score.
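A sketch of that repetition step, where ask_engine is a stand-in for whatever client you use to send a prompt and get back the answer text:

from statistics import mean
import random

def mean_mention_rate(query: str, brand: str, ask_engine, runs: int = 5) -> float:
    """Run the same prompt several times in one cycle and report the mean,
    because any single run of a non-deterministic engine is noise."""
    hits = [brand.lower() in ask_engine(query).lower() for _ in range(runs)]
    return mean(hits) * 100

# Fake engine for illustration only: answers vary run to run.
fake_engine = lambda q: random.choice(
    ["Popular picks are AcmePM and Trello.", "Trello and Asana lead this category."]
)
print(f"{mean_mention_rate('best pm tool for 50 people', 'AcmePM', fake_engine):.0f}%")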

Track BCR alongside an AEO visibility ladder with three levels: mentioned, linked, and primary call to action. A mention without a link rarely converts. A link without primary recommendation status will not scale. You need all three to capture meaningful agent-influenced revenue.

Layer 2: Agent-Influenced Conversions (AIC), Your Converted Share of Prompt

AIC is the count of purchases or signups where an AI agent or answer engine appears anywhere in the conversion path. BCR tells you whether agents are talking about you. AIC tells you whether that visibility is producing customers.

AIC = count of conversions where an agent source appears anywhere in the path

A consumer asks ChatGPT “what’s the best project management tool for a 50-person team” and gets your brand recommended. They visit your site, leave, come back a week later through a Google search, and convert. Traditional last-touch attribution gives credit to Google. The agent that actually shaped the decision gets nothing.

For this channel, a conversion counts as agent-influenced when an agent or answer engine appears anywhere in the conversion path, regardless of position. This is any-touch attribution, a deliberate departure from the last-touch model most teams default to. Agents do not occupy a fixed position in the purchase journey. In some cases, they compress discovery, consideration, and purchase into a single session. In others, they introduce a brand early and the consumer converts days later through a different channel. Last-touch attribution misses the first pattern. First-touch attribution misses the second. Any-touch captures both, which is why it produces the most accurate picture of agent influence.

This creates a measurement tension you should address head-on. If you report agent channels on any-touch while your paid search and social teams are measured on last-touch, you have an apples-to-oranges problem across your marketing org. The practical solution: run both models in parallel. Use last-touch for legacy reporting (what most dashboards show today) and any-touch for decision reporting (to understand influence and budget tradeoffs). The purpose of AIC is not to steal credit from other channels. It is to quantify influence so you can make better allocation decisions. Most organizations that start measuring agents on any-touch end up realizing every channel deserves it.
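Running both models in parallel takes only a few lines once conversion paths are stored as ordered lists of touch sources. A sketch with hypothetical paths (first touch to last touch):

AGENT_SOURCES = {"chatgpt", "perplexity", "gemini", "copilot"}

def any_touch_agent(path: list[str]) -> bool:
    """Any-touch: the conversion is agent-influenced if an agent source
    appears anywhere in the path, regardless of position."""
    return any(touch in AGENT_SOURCES for touch in path)

def last_touch(path: list[str]) -> str:
    """Last-touch: all credit goes to the final source before conversion."""
    return path[-1]

paths = [
    ["chatgpt", "direct"],          # agent introduced the brand, converted later via direct
    ["chatgpt", "google_organic"],  # the scenario above: agent first, Google search last
    ["google_ads", "email"],        # no agent involvement
]

aic = sum(any_touch_agent(p) for p in paths)
print(f"AIC (any-touch): {aic} of {len(paths)} conversions")    # 2 of 3
print("Last-touch credit:", [last_touch(p) for p in paths])     # agents get zero

Note that under last-touch, both agent-influenced conversions here are credited to other channels, which is exactly the gap AIC is meant to close.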

You detect agent involvement through three signals, and you should implement at least two because no single signal is reliable on its own. The first is UTM tagging. Add utm_source=chatgpt (gemini, perplexity, copilot, etc.) and utm_medium=answer on every link you control. ChatGPT has been observed appending utm_source=chatgpt.com on some outbound links natively, which means some of this data may already be flowing into your GA4 instance. Check your traffic acquisition report for chatgpt.com / referral before you build anything new. The second signal is referrer matching. Build an allowlist of known agent domains (chat.openai.com, perplexity.ai, gemini.google.com, copilot.microsoft.com) and flag matching sessions. Some agents strip referrer data or route through redirects, so this cannot be your only method. The third is partner ID mapping for affiliate or API relationships with agent platforms.
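Here is a sketch of the first two signals combined, checking UTM parameters first and falling back to the referrer allowlist; the domain and source lists mirror the ones above and will need to grow as new agents ship:

from urllib.parse import urlparse, parse_qs

AGENT_REFERRER_HOSTS = {"chat.openai.com", "chatgpt.com", "perplexity.ai",
                        "gemini.google.com", "copilot.microsoft.com"}
AGENT_UTM_SOURCES = {"chatgpt", "chatgpt.com", "gemini", "perplexity", "copilot"}

def detect_agent_source(landing_url: str, referrer: str | None) -> str | None:
    """Return the detected agent source for a session, or None.
    UTM tagging is checked first (explicit), then the referrer allowlist."""
    query = parse_qs(urlparse(landing_url).query)
    utm_source = (query.get("utm_source") or [""])[0].lower()
    if utm_source in AGENT_UTM_SOURCES:
        return utm_source
    if referrer:
        host = urlparse(referrer).netloc.lower()
        if host in AGENT_REFERRER_HOSTS:
            return host
    return None  # free-tier ChatGPT often sends neither signal, so this undercounts

print(detect_agent_source("https://example.com/?utm_source=chatgpt&utm_medium=answer", None))
print(detect_agent_source("https://example.com/pricing", "https://perplexity.ai/search"))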

Store the agent source at first touch with at least a 90-day cookie window to capture the full influence arc. MarTech and Bounteous have both documented the attribution gap that makes this persistence necessary. Treat AIC as a conservative, lower-bound estimate of agent influence. Detection is imperfect and will remain so for a while.
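A minimal server-side sketch of that first-touch persistence, using Flask purely for illustration (the framework, route, and cookie name are my assumptions, not a standard):

from flask import Flask, request, make_response

app = Flask(__name__)
NINETY_DAYS = 90 * 24 * 60 * 60  # cookie window in seconds

@app.route("/")
def landing():
    resp = make_response("landing page")
    # First-touch only: never overwrite an existing agent_source cookie, so a
    # later return visit through search or email keeps the original attribution.
    if "agent_source" not in request.cookies:
        source = request.args.get("utm_source", "").lower()  # or the fuller detector above
        if source in {"chatgpt", "gemini", "perplexity", "copilot"}:
            resp.set_cookie("agent_source", source, max_age=NINETY_DAYS)
    return resp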

Track AIC on 7-day, 28-day, and 90-day windows. Express it as a percentage of total conversions. For categories that already have meaningful agent traffic, AIC should reach 3 to 5 percent of total conversions within 90 days. This is an operational starting point based on the traffic growth rates reported by Conductor, Adobe, and Similarweb (which reported AI referral traffic to top websites grew 357% year-over-year, reaching 1.13 billion visits in June 2025). If you are materially below that range, your tagging is likely incomplete (a known problem, since free-tier ChatGPT users often send no referrer header, causing GA4 to classify visits as “direct”). If you are above it, you are ahead of most competitors and should accelerate investment in structured data, agent-optimized content, and AEO monitoring.
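Computing those windows from a stored conversion log is straightforward. A sketch with hypothetical data, read against the 3 to 5 percent starting point:

from datetime import date, timedelta

def aic_share(conversions: list[tuple[date, bool]], window_days: int, today: date) -> float:
    """Agent-influenced conversions as a percent of all conversions in a
    trailing window. conversions: (date, agent_influenced) pairs."""
    cutoff = today - timedelta(days=window_days)
    in_window = [agent for d, agent in conversions if d >= cutoff]
    return sum(in_window) / len(in_window) * 100 if in_window else 0.0

# Hypothetical log: roughly 1 in 25 conversions is agent-influenced.
today = date(2025, 9, 1)
log = [(today - timedelta(days=i), i % 25 == 0) for i in range(90)]
for window in (7, 28, 90):
    print(f"{window}-day AIC share: {aic_share(log, window, today):.1f}%")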

Layer 3: Downstream Revenue per Agent Session (DRAS), Your Monetized Share of Prompt

DRAS is total revenue attributed to agent-influenced sessions divided by the number of those sessions. BCR measures visibility. AIC measures influence. DRAS measures value. It answers the capital allocation question: What is each agent-driven session worth compared to the channels you are already funding?

DRAS = total revenue from agent-attributed sessions ÷ number of agent-attributed sessions

This is conceptually identical to revenue per click (RPC) segmented by traffic source, a calculation that performance marketing teams run for every paid channel. The difference is that agent traffic arrives through referral patterns rather than click-through from an ad, which means you need the tagging infrastructure described above to isolate it.

The comparison that matters most is DRAS versus your paid search RPC. If DRAS falls within 15 percent of your paid search RPC in either direction, you have reached approximate parity, and scaling is justified. If DRAS exceeds paid search RPC by more than 15 percent, agents are qualifying traffic better than your ads, meaning the audience arriving through answer engines has higher purchase intent. If DRAS lags by more than 15 percent, the problem is usually on your end: incomplete structured data, unclear pricing, landing page friction, or offer misalignment with the intent the agent is serving.
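As a sketch, the parity check reduces to a ratio and the three bands above; the band labels are mine:

def dras(agent_revenue: float, agent_sessions: int) -> float:
    """Downstream Revenue per Agent Session."""
    return agent_revenue / agent_sessions if agent_sessions else 0.0

def dras_band(dras_value: float, paid_search_rpc: float, tolerance: float = 0.15) -> str:
    """Compare DRAS to paid search RPC using the plus-or-minus 15 percent bands."""
    ratio = dras_value / paid_search_rpc
    if ratio > 1 + tolerance:
        return "agents qualify traffic better than ads: accelerate"
    if ratio < 1 - tolerance:
        return "fix your end: structured data, pricing, landing page friction"
    return "approximate parity: scaling is justified"

# Hypothetical quarter: $42,000 of agent-attributed revenue over 1,500 sessions.
value = dras(42_000, 1_500)                     # $28.00 per agent session
print(dras_band(value, paid_search_rpc=25.00))  # 28 / 25 = 1.12 -> parity band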

These three bands are operating guidelines I have developed through advisory work, not published research benchmarks. They are calibrated to be directionally useful while the industry collects enough data to set tighter thresholds. The underlying logic (compare unit economics across channels before reallocating budget) is standard practice in every performance marketing organization.

Your denominator depends entirely on your ability to detect agent-influenced sessions. Because detection is incomplete (as described in the AIC section), early DRAS numbers will be calculated from a smaller-than-actual session count. This inflates the metric and makes the channel look artificially efficient. As your tagging matures and you capture more sessions, expect DRAS to normalize downward. Do not set budgets based on early DRAS alone. Use it directionally until your detection rate stabilizes.

DRAS also exposes a common failure mode. Many teams celebrate rising agent traffic without checking whether it converts. Rising sessions with flat or declining DRAS means you are visible in answers but poorly positioned for conversion. At this writing, the remediation is mostly operational. Improvements come from publishing explicit pricing, cleaning up product specifications, adding structured schema markup, and reducing checkout friction.

What a Healthy Scorecard Looks Like

Read the three metrics together. They diagnose each other. Hypothetically speaking, over any given 90 days, a BCR of 22 percent (present in category queries, not dominant), AIC at 4.1 percent of total conversions (measurable influence), and DRAS running 11 percent above paid search RPC (high-intent traffic) would tell us that the brand has strong conversion and value but is leaving visibility on the table. The fix would include investing in structured data and content to push BCR above 25 percent, which should lift AIC proportionally.

If BCR were high but AIC were lagging, the problem would be downstream: landing pages, pricing, checkout friction. If all three were low, the issue would be foundational: tagging gaps, missing schema, no extractable brand facts. This three-metric scorecard tells you where to act.
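If you want to operationalize that reading, the diagnosis fits in a few lines. A sketch using the thresholds from earlier in this piece; the cutoff for "low" AIC is my working assumption:

def scorecard_diagnosis(bcr_pct: float, aic_pct: float, dras_premium_pct: float) -> str:
    """Map the three-layer scorecard to a next action.
    dras_premium_pct: DRAS vs paid search RPC, in percent (+11 means 11% above)."""
    if bcr_pct < 10 and aic_pct < 3 and dras_premium_pct < -15:
        return "foundational: fix tagging gaps, missing schema, extractable brand facts"
    if bcr_pct >= 25 and aic_pct < 3:
        return "downstream: landing pages, pricing, checkout friction"
    if bcr_pct < 25:
        return "visibility: invest in structured data and content to push BCR above 25 percent"
    return "healthy: hold course and keep monitoring"

# The hypothetical scorecard above: strong conversion and value, visibility on the table.
print(scorecard_diagnosis(bcr_pct=22, aic_pct=4.1, dras_premium_pct=11))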

What Comes Next

The measurement principles behind this framework (share of voice, multi-touch attribution, unit economics by source) are well-established. The instrumentation uses standard analytics tools. The application is what matters. AI agents now mediate discovery, synthesize recommendations, and increasingly compress the entire purchase journey into a single interaction.

The answer economy requires metrics all its own. Remember, you cannot optimize what you do not measure.

Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it. This work was created with the assistance of various generative AI models.

