Here’s an overview of the available ways to track AI visibility and performance.
The main thing to understand: nothing currently available gives you the complete picture of where, how often, and for which actual prompts or queries your website is showing up in AI search, let alone getting clicked on. We just don’t have that level of visibility yet.
You can either use a free service from Google or Bing, or pay for a third-party tool. The difference is that the free native reporting gives you limited data based on real user activity, whereas the third-party tools report on the results of “simulated” or “synthesized” AI search usage.
When considering different third-party measurement platforms, any of them can help you track high-level trends like visibility, share of voice, sentiment, and benchmarking. Each one has its own angle on how it generates the prompts it uses to track visibility and performance across different models, and some offer additional products like content editing tools. So at this point it’s a matter of choosing the one with the features, UI, and price point you prefer, then collecting enough data over time to see helpful patterns.
Here's a look at what each free option offers (and doesn't), followed by some best practices no matter where you're getting your AI visibility data.
Bing WMT AI Performance Report
On February 10, 2026, Bing launched the AI Performance report in Bing Webmaster Tools. It’s still in beta, and I did a deeper dive into that one a couple weeks ago.
The report includes a time series chart showing total citations and average number of cited pages, plus a table showing citations by grounding query and by URL.
Advantages
- Free
- Best option right now because it’s the only direct-from-source data we have for AI visibility
- 6 weeks old and they’ve already improved it: you can now filter queries or pages by searching for a term, see which grounding queries relate to which URLs and vice versa, and view timeline trends for an individual query cluster, page, or subfolder
- Also available in Microsoft Clarity (free, but it can slow down your load times)
Limitations
- No clicks. Only citation visibility.
- Only measures visibility in Copilot and unnamed “partners.” No Google, no ChatGPT, no Perplexity, no Grok, no Claude, etc.
- Max lookback to November 1, 2025
- Data is sampled, and Bing says results may be refined as more data is processed (so the numbers can change).
- No comparison filters in the report itself. If you want to compare two time periods, you have to export the data and do it in a spreadsheet.
- No actual user queries or prompts, only grounding queries, which are more like topical retrieval inputs
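Since the report has no built-in comparison filters, the period-over-period comparison has to happen on the exported data. Here’s a minimal sketch of that step in Python instead of a spreadsheet; the column names (“date”, “citations”) are placeholders, since Bing’s actual export format may differ:

```python
from collections import defaultdict
from datetime import date

# Rows as you might get them from an AI Performance report export.
# "date" and "citations" are assumed column names, not Bing's actual schema.
rows = [
    {"date": "2026-03-01", "citations": 12},
    {"date": "2026-03-08", "citations": 15},
    {"date": "2026-03-15", "citations": 21},
    {"date": "2026-03-22", "citations": 19},
]

# Compare everything before this date against everything on/after it.
split = date(2026, 3, 15)

totals = defaultdict(int)
for row in rows:
    period = "before" if date.fromisoformat(row["date"]) < split else "after"
    totals[period] += row["citations"]
```

A spreadsheet pivot table does the same job; the point is just that the two-period comparison is on you, not the report.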
Google Search Console Performance Report
Google doesn’t separate impressions and clicks for traditional organic search results from those for AI-generated results like AIO, AI Mode, and Web Guide. They’re all rolled into the GSC Performance report.
You can use a regex to filter for really long queries (say, nine words or longer), on the theory that those are more likely to represent the kinds of queries people use when getting AI-generated responses. Then you can watch those queries over time and see which pages are getting impressions and clicks.
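One possible pattern for that filter (the exact word-count threshold is a judgment call, and GSC’s regex filter uses RE2 syntax, where this pattern also works):

```python
import re

# Match queries with nine or more whitespace-separated words.
# {8,} repetitions of "word + space" followed by one final word = 9+ words.
LONG_QUERY = re.compile(r"^(\S+\s+){8,}\S+$")

queries = [
    "best mortgage rates",
    "what is the best fixed rate mortgage for first time buyers",
]

long_queries = [q for q in queries if LONG_QUERY.fullmatch(q)]
```

In GSC itself you’d paste just the pattern (`^(\S+\s+){8,}\S+$`) into the query filter with “Custom (regex)” selected; the Python wrapper here is only to show what it matches.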
Advantages
- Free
- Google has a much larger user base than Bing
- Includes clicks as well as impressions
- Shows actual queries, not grounding queries
- 16-month lookback, or more if you export to BigQuery
Limitations
- Query length is only a rough proxy. Sometimes it’s directionally useful, sometimes it isn’t.
- No way to isolate specific AI surfaces like AIO vs. AI Mode vs. Web Guide
- Google only. Nothing from ChatGPT, Perplexity, Claude, Grok, etc.
Google Analytics
In GA4, you can look at referral traffic and build an Exploration or custom detail report to isolate traffic from sources like ChatGPT, Perplexity, Copilot, Gemini, and others.
This lets you see the referrer, the landing page, and the number of sessions. If you’re publishing a lot of content around a specific topic (like mortgages) and you start seeing more referral traffic from generative AI systems to those pages, that’s at least a useful signal.
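The filtering itself usually comes down to matching referrer domains. Here’s a rough sketch of that logic; the domain list is illustrative and goes stale (referrer domains change, e.g. chat.openai.com became chatgpt.com), and the row shape is a placeholder for however you export your GA4 data:

```python
import re

# Referrer domains for common AI assistants. Illustrative, not exhaustive.
AI_SOURCES = re.compile(
    r"(chatgpt\.com|chat\.openai\.com|perplexity\.ai|"
    r"copilot\.microsoft\.com|gemini\.google\.com|claude\.ai)",
    re.IGNORECASE,
)

# Example rows as (session_source, landing_page, sessions) tuples,
# as you might export them from a GA4 Exploration.
rows = [
    ("chatgpt.com", "/mortgages/fixed-vs-variable", 42),
    ("google", "/mortgages/fixed-vs-variable", 1900),
    ("perplexity.ai", "/mortgages/first-time-buyers", 17),
]

ai_rows = [r for r in rows if AI_SOURCES.search(r[0])]
ai_sessions = sum(r[2] for r in ai_rows)
```

Inside GA4 you’d express the same thing as a “matches regex” filter on the session source dimension rather than exporting and filtering by hand.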
Advantages
- Free
- Flexible custom setup in Exploration reports (choose your dimensions and metrics)
- You can filter for specific AI referral sources or compare multiple sources
- You can isolate specific pages or page groups
- Long lookback
Limitations
- No query data
- Only shows click-through traffic to your site, not citation visibility
Best Practices
There are plenty of other platforms in this space: Ahrefs, Semrush, Moz, Similarweb, Profound, SE Ranking, Gumshoe, Clearscope, Bluefish AI, many others, and probably more by next week. We’re not testing or endorsing any of those; they’re just examples.
If one of them offers the information and interface you prefer at a price you’re comfortable with, it can probably help you monitor things like brand mentions, citation visibility, sentiment, share of voice, and how those are trending relative to similar brands.
A few things to keep in mind:
- Prompts and queries are highly individual and often context-dependent. There’s no way to report on someone asking ChatGPT “So, which one is best for me?” 20 turns deep in a conversation, or to distill all of that background information into a single trackable representative query.
- AI responses are highly personalized (user location, chatbot settings, memory/behavioral history (or lack thereof), provided or inferred personal data, and more).
- Responses and cited sources change a lot, even when the same person enters the same query into the same platform on the same device.
So the best approach is:
- Look for patterns over time, not one-off fluctuations from a single afternoon.
- Be clear on what you’re measuring: brand mentions, citations, sentiment, and product recommendations are all different.
- Be clear on what kind of query data you’re looking at: grounding queries, actual queries, and simulated queries are not the same thing.
- Accept that it’s still mostly a black box, and we’re all in a “do the best you can with what you have” situation until there’s more transparency.
At least right now, reporting on AI visibility and performance is messy. So the smartest place to invest is still the same foundational stuff: brand PR, consistency, entity clarity, relevant topical authority, information gain, rock-solid technical SEO, and delightful UX on all devices.
<div class="post-note-cute">If you’re figuring out AI visibility data and want help connecting the dots, you should <a href="https://momenticmarketing.com/contact" data-wf-native-id-path="af81076c-d99f-4f39-10d6-22a7db56c574" data-wf-ao-click-engagement-tracking="true" data-wf-element-id="af81076c-d99f-4f39-10d6-22a7db56c574">reach out to Momentic</a>. We can help you figure out what’s worth tracking, what isn’t, and how to make meaningful decisions based on what’s available now.</div>


