Most managing partners do not know whether ChatGPT, Perplexity, Google AI Mode, or Claude ever names their firm. They check Google rankings monthly and assume that traffic equals visibility. In 2026, that is the wrong dashboard. AI search now handles 22 percent of all searches, up from 15 percent in 2025, and Google’s AI Overviews appear on roughly 50 percent of U.S. queries and 70 percent of informational ones. If a high-intent prospect asks ChatGPT for a probate attorney in your city and your firm is not in the top three names returned, you lost the opportunity before a search ad ever fired. This post walks you through a 30-minute audit you can run today, by hand, with no software, that tells you exactly where your firm stands across the four engines that matter and what to fix first.
The audit produces three numbers per engine: your raw citation rate, your branded discovery rate, and your share of voice against the firms you actually compete with. Those three numbers are the entire scoreboard. Everything else is opinion.
What you are actually auditing (and why SEO rank tracking misses it)
Traditional rank tracking measures whether your URL appears on a search results page. AI visibility measures something different: whether an AI assistant uses your firm’s name, content, or URL when synthesizing an answer to a buyer’s question. The two are correlated but not identical. ChatGPT matches Google’s page-one results less than 25 percent of the time. Perplexity and Claude mirror Google more closely at 75 percent each, Gemini at 50 percent. That gap is the entire point of an AI audit. You can rank fourth in Google for “best DUI lawyer Charlotte” and still get cited by ChatGPT in 8 out of 10 trials, or rank first and never be named at all. The signals each engine weighs are different.
The audit captures three failure modes that SEO tools cannot see. First, the engine knows your firm exists but never recommends you (zero citation rate). Second, the engine cites your firm only when the prompt already includes your name (high branded rate, zero discovery). Third, the engine cites you in some categories but never in your most profitable practice area (uneven share of voice). Each failure has a different fix, and you cannot diagnose any of them by checking Google rankings.
One more thing the audit does that rank tracking does not: it tells you who your AI competitors actually are. The firms that win AI citations in your market are usually not the firms running the loudest paid ads. They are the firms with the cleanest entity graph, the deepest review profiles, and the most editorial press coverage on Above the Law, ABA Journal, your state bar publication, or the regional business press. The audit surfaces those competitors by name. Knowing who is winning is half the work of catching them.
The 30-minute setup (five minutes of prep)
Open a fresh Google Sheet or a blank spreadsheet. Create six columns: Query, ChatGPT result, Perplexity result, Google AI Mode result, Claude result, Notes. You will fill 15 rows. That is the entire instrument.
Open four browser tabs: chat.openai.com (logged into a free or Plus account), perplexity.ai (a free account is fine), google.com (you need AI Mode enabled in your account region), and claude.ai. If you do not have an account on any of these, create one now. Use a private or incognito window in each so personalization does not contaminate the results. AI engines personalize responses when you are signed in with a history of legal queries, and the audit needs to reflect what a stranger sees, not what your account sees.
Pick a single test city. If your firm covers multiple cities, run the audit per city. Do not blend them. AI engines treat “personal injury lawyer in Tampa” and “personal injury lawyer in St. Petersburg” as different queries with different winning firms even when the cities are 25 minutes apart.
Step 1: Pick the 15 buyer queries that actually matter (five minutes)
Most law firm AI audits fail because they test vanity queries. “Best law firm in Phoenix” is not how a buyer thinks. A buyer with a slip-and-fall asks “who is the best premises liability lawyer in Phoenix” or “lawyer for grocery store fall injury Phoenix.” A buyer facing a DUI second offense asks “DUI defense attorney Phoenix second offense” or “what happens at a DUI arraignment in Maricopa County.” Specificity is the whole game.
Build your 15 queries from this template. Five practice-area-plus-city queries: “[practice area] lawyer [city].” Five problem-state queries: “what to do after [specific situation] in [city].” Five comparison queries: “best [practice area] lawyer in [city] for [sub-segment].” Examples for a personal injury firm in Charlotte:
- “personal injury lawyer Charlotte”
- “car accident lawyer Charlotte NC”
- “wrongful death attorney Charlotte”
- “premises liability lawyer Charlotte”
- “trucking accident lawyer Charlotte”
- “what to do after a car accident in Charlotte NC”
- “should I hire a lawyer for a minor car accident in NC”
- “how long do I have to file a personal injury claim in North Carolina”
- “best personal injury lawyer Charlotte for serious injury”
- “top car accident lawyers Charlotte high settlement”
- “Charlotte trucking accident lawyer Spanish speaking”
- “personal injury lawyer Charlotte contingency fee”
- “best wrongful death attorney Charlotte”
- “Charlotte personal injury attorney free consultation”
- “personal injury lawyer near me Uptown Charlotte”
Substitute your practice area, city, and the actual sub-segments your firm wants to win. Do not include your firm name in any query. The audit needs to measure unprompted recall, not prompted lookup.
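If you prefer to assemble the list programmatically, the five-five-five template can be sketched as a short script. Everything below is illustrative: the city, practice areas, situations, and sub-segments are placeholders you would swap for your own.

```python
# Sketch: build the 15 audit queries from the five-five-five template.
# All practice areas, situations, and sub-segments are placeholders.
def build_queries(city, practice_areas, situations, segments):
    queries = [f"{pa} lawyer {city}" for pa in practice_areas]          # 5 practice-area + city
    queries += [f"what to do after {s} in {city}" for s in situations]  # 5 problem-state
    queries += [f"best {pa} lawyer in {city} for {seg}"                 # 5 comparison
                for pa, seg in segments]
    return queries

queries = build_queries(
    "Charlotte",
    ["personal injury", "car accident", "wrongful death",
     "premises liability", "trucking accident"],
    ["a car accident", "a slip and fall", "a truck crash",
     "a dog bite", "a workplace injury"],
    [("personal injury", "serious injury"), ("car accident", "high settlement"),
     ("trucking accident", "Spanish speaking"), ("wrongful death", "nursing home cases"),
     ("personal injury", "contingency fee")],
)
assert len(queries) == 15  # five of each template type
```

The point of scripting it is consistency: the same 15 strings, pasted verbatim into all four engines, every quarter.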
Step 2: Run all 15 queries through all 4 engines (ten minutes)
This is the bulk of the audit and the part most firms skip because it feels tedious. It is not tedious. It is the work. Paste each query into each engine, exactly as written, with no follow-up prompt. Capture three things per result:
- Did the engine name your firm anywhere in the response? Yes or no. This is your citation count.
- Which competitors did the engine name? List them in order. The first three names matter most: buyers stop reading after the first two or three recommendations, so early placement captures nearly all of the attention.
- Did the engine link to your website as a source? This is separate from being named in the answer text. Perplexity and Google AI Mode show source links explicitly. ChatGPT and Claude link only when the model decides the source materially shapes the answer.
Move quickly. Do not refine the prompts. Do not click through. The goal is a snapshot, not a deep dive. Ten minutes of disciplined paste-and-record gets you 60 data points, more than enough to draw conclusions.
Two caveats. First, AI engines are non-deterministic. Run each query once for the audit. If you want statistical confidence, run each three times and average the result, but a single pass is sufficient for the directional read this audit gives. Second, ChatGPT’s web search behavior changes based on whether the model decides the question requires fresh information. Some legal queries trigger a web search and cite live sources, others draw from training data. Both count as citation events for the audit.
Step 3: Score your three numbers (five minutes)
Open your spreadsheet. Compute three rates per engine.
Citation rate is the percentage of your 15 queries where the engine named your firm at all. Industry benchmark for a firm doing serious AEO work is 10 to 25 percent. A firm with no AEO investment typically scores 0 to 7 percent. A market-leading firm in a competitive practice area can hit 40 to 60 percent. If your number is zero, your firm has an entity recognition problem, not a content problem. The engines do not recognize your firm as a distinct entity they can confidently recommend. This is the most common starting point and the easiest to fix.
Branded discovery rate is the percentage of citations that came from queries that did not include your firm’s name. For this audit, that is your full citation rate (since none of the 15 queries include your name). Track this number over time. If you start adding branded queries to a future audit, the discovery rate is what tells you whether AI engines are introducing new prospects to your firm or just confirming you to people who already know you. Discovery is what drives new client matters. Branded confirmation is hygiene.
Share of voice is the percentage of the total competitor mentions that went to your firm. Add up every law firm mentioned across all 60 query-engine combinations. Divide your firm’s mentions by the total. A 20 percent share of voice in a market with 5 to 7 active firms is excellent. A 5 percent share of voice means four of five buyers asking that engine will hear another firm’s name first.
Fill the same scorecard for each engine separately. Do not average. ChatGPT and Perplexity have only 11 percent overlap in cited domains, which means a firm winning on Perplexity can be invisible on ChatGPT, and the fix for each is different. Optimizing for the wrong engine wastes a quarter.
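The three rates are simple arithmetic over the spreadsheet rows. Here is a minimal per-engine scoring sketch; the row format and firm names are hypothetical, and the discovery-rate logic assumes you may later add branded queries to the mix as described above.

```python
# Sketch: score one engine's column of the audit spreadsheet.
# Each row: which firms the engine named, and whether the query was branded.
def score_engine(rows, firm):
    cited = [r for r in rows if firm in r["named"]]
    citation_rate = len(cited) / len(rows)
    # Discovery rate: share of your citations that came from unbranded queries.
    # In the base audit no query is branded, so this equals 1.0 whenever cited.
    discovery = [r for r in cited if not r["branded_query"]]
    discovery_rate = len(discovery) / len(cited) if cited else 0.0
    # Share of voice: your mentions divided by all firm mentions in this engine.
    total_mentions = sum(len(r["named"]) for r in rows)
    share_of_voice = len(cited) / total_mentions if total_mentions else 0.0
    return citation_rate, discovery_rate, share_of_voice

rows = [
    {"named": ["Acme Law", "Smith & Jones"], "branded_query": False},
    {"named": ["Smith & Jones"], "branded_query": False},
    {"named": [], "branded_query": False},
]
print(score_engine(rows, "Acme Law"))  # one citation across three queries
```

Run it once per engine column, never on the pooled data, for the reason above: the engines barely overlap.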
Step 4: Diagnose the failure mode (five minutes)
Three patterns explain almost every weak audit result. Look at your scorecard and find yours.
Failure mode one: invisible everywhere. Citation rate is zero or one across all four engines. The cause is almost always entity-graph weakness. The engines cannot connect your firm name to your practice area, jurisdiction, and review footprint with confidence. Fix order: LegalService schema on the homepage, Person schema on every attorney bio, Google Business Profile completeness check, claim Avvo and Martindale profiles, get listed on the state bar’s lawyer-referral directory. This is roughly 8 to 12 hours of focused work and moves citation rate from 0 to the 5 to 10 percent range within 30 to 60 days.
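For the LegalService markup specifically, a minimal JSON-LD sketch placed in a script tag of type application/ld+json on the homepage looks like this. Every name, URL, address, and phone number below is a placeholder, and the sameAs links should point to the directory profiles you actually claim.

```json
{
  "@context": "https://schema.org",
  "@type": "LegalService",
  "name": "Example Injury Law Firm",
  "url": "https://www.example-firm.com",
  "telephone": "+1-704-555-0100",
  "areaServed": { "@type": "City", "name": "Charlotte" },
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Example St",
    "addressLocality": "Charlotte",
    "addressRegion": "NC",
    "postalCode": "28202"
  },
  "sameAs": [
    "https://www.avvo.com/attorneys/example",
    "https://www.martindale.com/example"
  ]
}
```

The sameAs array is what stitches the entity graph together: it tells the engines that the website, the Avvo profile, and the Martindale profile are the same firm.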
Failure mode two: cited on Google AI Mode and Perplexity, invisible on ChatGPT. This is the most common pattern for firms with decent SEO and weak press. Perplexity and Google AI Mode lean on Google’s index. ChatGPT leans on training data plus its web search tool, and training data favors firms with editorial mentions in legal trade press, mainstream business press, and high-authority directories. Fix order: pitch one editorial story to Above the Law or your state bar publication, get a Best Lawyers or Super Lawyers listing if you do not already have one, build out a content cluster on a sub-segment your competitors ignore. Press placements are the single biggest move for ChatGPT visibility.
Failure mode three: cited in core practice area, invisible in profitable adjacent areas. A personal injury firm gets named for “car accident lawyer” but never for “trucking accident lawyer” or “wrongful death attorney.” The cause is content depth. The engines have learned what your firm is known for and stop recommending you in adjacent matters. Fix order: build out 3 to 5 dedicated practice-area pages per sub-segment, with FAQ schema on each, attorney bios that list each sub-segment under knowsAbout, and one case study or representative result per sub-segment. This is the highest-revenue fix because trucking, wrongful death, and serious-injury cases carry settlement values 5 to 50 times those of typical car accident matters.
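The knowsAbout piece of that fix can be sketched as Person markup on each attorney bio page. Again, the names and URL are placeholders; the sub-segments in knowsAbout should match the dedicated practice-area pages you build.

```json
{
  "@context": "https://schema.org",
  "@type": "Person",
  "@id": "https://www.example-firm.com/attorneys/jane-doe#person",
  "name": "Jane Doe",
  "jobTitle": "Partner",
  "worksFor": { "@type": "LegalService", "name": "Example Injury Law Firm" },
  "knowsAbout": [
    "Trucking accident litigation",
    "Wrongful death claims",
    "Premises liability"
  ]
}
```

Listing the adjacent sub-segments explicitly gives the engines a machine-readable reason to recommend the firm beyond its core category.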
Tools that automate this once you know what to watch
Run the manual audit first. The hand-counting is what teaches you what good and bad look like. Once you have the baseline, three tools handle the recurring tracking. Otterly.AI starts at 29 dollars a month and tracks citations across ChatGPT, Perplexity, Google AI Overviews, and a few others, with link-level reporting. Profound starts at 99 dollars a month, offers prompt-level analysis, and is the strongest pick if you want to share dashboards with non-technical stakeholders. AthenaHQ, at 295 dollars a month, goes deepest, covering eight platforms with revenue attribution, which mostly matters for e-commerce, less so for law firms. For a single-location firm, Otterly is enough. For a multi-office firm tracking 50 plus queries across cities, Profound earns its keep.
Run the manual audit every 90 days regardless of what tooling you adopt. Tools drift. Engines change. The hand-pass keeps your judgment calibrated to what real buyers actually see when they ask.
What the numbers should trigger
A citation rate under 5 percent across all engines means stop everything else and fix the entity graph. Schema, Google Business Profile, directory profiles, NAP consistency. No content investment moves the needle until the engines can identify your firm.
A 10 to 20 percent citation rate with weak share of voice means your firm is recognized but not preferred. The fix is editorial press and review depth. One Above the Law placement, three Super Lawyers attorneys, and 50 fresh Google reviews in the next 90 days will move share of voice 5 to 10 points.
A 25 percent plus citation rate with uneven coverage across practice areas means content depth in adjacent sub-segments. This is the fastest revenue lift available because the engines already trust your firm.
If you are above 40 percent across all four engines, you are winning. Defend the position with quarterly press, monthly content, and weekly review acquisition. Competitors notice market leaders and target them.
Frequently asked questions
How often should a law firm run this audit? Quarterly is the right cadence for most firms. Monthly is overkill unless you are in active fix mode and watching a specific intervention land. Annual is too slow given how fast the engines update their training and indexing.
Does the audit work for personal injury firms differently than transactional firms? The structure is the same. The query mix shifts. Transactional firms (estate planning, business formation, M&A) get fewer “near me” queries and more comparison and process queries: “how does a revocable trust work in Texas,” “best business formation lawyer for SaaS startups,” “M&A attorney biotech deals 50 million.” Use the same five-five-five template, just substitute the buyer’s actual mental model.
What if my firm is in a small market with low query volume? Audit the queries anyway. Even a market with 50 monthly searches per query produces qualified prospects when AI engines name you. Small markets are easier to dominate because fewer firms invest in AEO. A 60 percent share of voice in a market of 200,000 people is more profitable than a 10 percent share of voice in a market of 5 million.
Should I include my paid ads in the audit? No. Paid placements do not appear in AI engine responses. The audit measures organic AI visibility, which is a separate channel from search ads or LSAs.
How long until audit fixes show up in the engines? Schema and directory fixes register in 14 to 45 days across the four engines, fastest in Perplexity and Google AI Mode, slowest in ChatGPT. Press placements register in 30 to 90 days because the trade publications need to be indexed and the AI engines need to update their source weighting. Plan a 90-day re-audit, not a 14-day one.
Run it this week
The audit takes 30 minutes. The fixes take a quarter. The compounding takes a year. Firms that started AEO work in early 2025 are now showing 30 to 50 percent citation rates and pulling matters away from competitors who are still wondering whether AI search is real. It is real. It is 22 percent of all searches and rising. The audit is the cheapest possible first move.
If you want help running the audit on your firm and turning the diagnosis into a 90-day fix plan, that is exactly what we do at SubscribePR. Run the numbers on your own market with the AEO ROI calculator, or book a 20-minute call and we will run the audit on your firm for free during the conversation.