Catherine AI

Methodology

How Catherine Measures AI Visibility

Transparency is core to how we operate. This page explains what Catherine measures, how we categorize our findings, and what we can and cannot prove.

How We Categorize Our Findings

Every data point Catherine surfaces falls into one of three categories. We label them clearly so you always know the confidence level behind each finding.

Directly Observed

Data we collect firsthand by querying AI engines and recording their responses. This is raw, verifiable evidence.

  • Whether your business is mentioned in an AI-generated answer
  • The exact text of the AI response for a monitored query
  • Which competitors appear in the same answer
  • Your position in an AI-generated list (e.g., #3 of 5 recommended)

Externally Verified

Data we confirm by checking third-party platforms directly. We verify against the source, not just our own records.

  • NAP (Name, Address, Phone) consistency across 18+ directories
  • Whether your website contains structured data (JSON-LD schema)
  • Whether a citation source lists your business at the expected URL
  • Profile completeness on platforms like Google, Yelp, and Apple Maps
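
As an illustration, a NAP consistency check of this kind can be sketched as normalizing each listing's Name, Address, and Phone fields and comparing them against a reference record. The record shape, field names, and normalization rules below are invented for the example, not Catherine's actual implementation.

```python
import re

def normalize_phone(phone: str) -> str:
    """Strip formatting so '(555) 123-4567' and '555.123.4567' compare equal."""
    return re.sub(r"\D", "", phone)

def normalize_text(value: str) -> str:
    """Lowercase and collapse whitespace for name/address comparison."""
    return " ".join(value.lower().split())

def nap_consistent(listings: list[dict]) -> dict:
    """Compare each listing's NAP fields against the first listing.
    Returns a per-field consistency flag. Hypothetical record shape."""
    reference = listings[0]
    result = {"name": True, "address": True, "phone": True}
    for listing in listings[1:]:
        if normalize_text(listing["name"]) != normalize_text(reference["name"]):
            result["name"] = False
        if normalize_text(listing["address"]) != normalize_text(reference["address"]):
            result["address"] = False
        if normalize_phone(listing["phone"]) != normalize_phone(reference["phone"]):
            result["phone"] = False
    return result

# Three directory listings for the same (fictional) business:
listings = [
    {"name": "Acme Plumbing", "address": "12 Main St", "phone": "(555) 123-4567"},
    {"name": "ACME Plumbing", "address": "12 Main St", "phone": "555.123.4567"},
    {"name": "Acme Plumbing", "address": "12 Main Street", "phone": "5551234567"},
]
print(nap_consistent(listings))  # only the address is flagged inconsistent
```

Note that simple normalization catches formatting differences but not abbreviation variants ("St" vs "Street"), which is why those surface as inconsistencies to review.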

Evidence-Based Recommendations

Actions we recommend based on observed gaps and verified data. These are informed suggestions, not guaranteed outcomes.

  • Prioritized fix list ranked by estimated visibility impact
  • Content and schema recommendations for queries where you are absent
  • Citation opportunities based on where competitors are listed and you are not
  • Severity scoring that reflects the relative importance of each issue
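
A prioritized fix list of this kind can be sketched as sorting detected gaps by a severity-weighted impact estimate. The scale, field names, and example values below are illustrative assumptions, not Catherine's actual scoring model.

```python
from dataclasses import dataclass

@dataclass
class Gap:
    description: str
    severity: int          # 1 (minor) .. 5 (critical); hypothetical scale
    estimated_lift: float  # directional visibility-impact estimate, 0-1

def prioritized_fixes(gaps: list[Gap]) -> list[Gap]:
    """Rank gaps by severity first, then by estimated lift (illustrative only)."""
    return sorted(gaps, key=lambda g: (g.severity, g.estimated_lift), reverse=True)

gaps = [
    Gap("Missing JSON-LD LocalBusiness schema", severity=4, estimated_lift=0.3),
    Gap("Phone number mismatch on two directories", severity=5, estimated_lift=0.2),
    Gap("Incomplete Yelp profile", severity=2, estimated_lift=0.1),
]
for gap in prioritized_fixes(gaps):
    print(gap.severity, gap.description)  # highest-severity gap prints first
```

The ordering is deterministic, which keeps the fix list stable between monitoring cycles even when impact estimates are only directional.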

How the AI Visibility Score Works

Your AI Visibility Score (0–100) is a composite metric that blends two primary dimensions:

  • AI Answer Presence — how often your business appears in AI-generated answers for your monitored queries
  • Citation & Data Quality — the consistency and completeness of your business information across platforms that AI engines reference
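
A weighted blend of two dimensions like this can be sketched as follows. The 60/40 weighting and the input values are invented for illustration; the actual formula and weights are not published here.

```python
def visibility_score(answer_presence: float, citation_quality: float,
                     presence_weight: float = 0.6) -> int:
    """Blend two dimensions (each 0-1) into a 0-100 composite score.
    The weighting is a hypothetical example, not the real formula."""
    if not (0.0 <= answer_presence <= 1.0 and 0.0 <= citation_quality <= 1.0):
        raise ValueError("dimensions must be in [0, 1]")
    blended = presence_weight * answer_presence + (1 - presence_weight) * citation_quality
    return round(blended * 100)

# Appears in 7 of 10 monitored answers; citations 85% complete and consistent:
print(visibility_score(0.7, 0.85))  # → 76
```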

The score is recalculated after each monitoring cycle using the most recent data available. Historical scores are preserved so you can track movement over time.

The score reflects what Catherine has observed and verified. It does not predict future AI engine behavior or guarantee any particular position in AI-generated responses.

Score Freshness & Monitoring Cadence

Catherine runs monitoring cycles on a regular schedule, not in real time. The frequency depends on your plan:

  • Starter — monthly visibility scans
  • Growth — weekly priority scans
  • Enterprise — daily visibility reports

Between cycles, your score reflects the last completed scan. AI engine responses can change at any time, so your live visibility may differ from your most recent score.

How Recommendations Work

Catherine identifies gaps between your current state and best-practice signals that correlate with AI visibility. When we recommend an action, it is because:

  • We observed a specific gap in your monitored data
  • The gap corresponds to a signal that AI engines are known to reference (e.g., consistent NAP data, structured markup, authoritative citations)
  • Addressing the gap is expected to strengthen those signals

Recommendations are prioritized by estimated impact, but impact estimates are directional, not guaranteed. Actual results depend on many factors including AI engine behavior, competitive dynamics, and implementation quality.

Associated Outcomes, Not Guaranteed Results

When Catherine reports that your score moved after you implemented a recommendation, we are reporting an associated outcome — a change we observed in the same timeframe as your action.

We do not claim causation unless we can directly verify it. Many factors influence AI-generated answers, including:

  • Changes to AI engine models, training data, or ranking logic
  • Competitor actions and market shifts
  • Third-party platform updates (Google, Yelp, etc.)
  • Web crawling and indexing timelines outside our control

Catherine tracks what changed and when. Interpreting cause requires context that goes beyond any single platform.

What Catherine Can and Cannot Prove

What We Can Show

  • Whether you appear in monitored AI answers
  • How your visibility score has moved over time
  • Where your NAP data is inconsistent
  • Which competitors are cited for your queries
  • What actions were taken and what changed afterward

What We Cannot Guarantee

  • That visibility will improve from any specific action
  • How or when AI engines update their models
  • That generated schema or content will be indexed
  • That citation sources will remain accurate over time
  • Rankings or placement in any AI engine

Third-Party Dependencies

Catherine operates by querying and analyzing third-party systems. The accuracy and timeliness of our data depend on factors outside our control:

  • AI engine availability — if an AI platform changes its API, rate-limits our queries, or modifies its response format, our monitoring may be temporarily affected
  • Platform data freshness — directory listings, review platforms, and citation sources update on their own schedules
  • Crawling and indexing — changes you make to your website or listings may take days or weeks to be reflected in AI engine responses

We monitor the health of our data sources and flag when a source may be degraded. Your dashboard reflects the most recent successful data collection for each source.

Questions about our methodology? Contact us at hello@joincatherine.ai