When a buyer types “best personal injury lawyer in Charleston” into ChatGPT, the model does not pull from a ranking algorithm. It scans a small pool of trusted sources, weighs what they say about you, and stitches together a three-firm recommendation. If your firm is not in that pool, you are not in the answer. The fix is mechanical: get cited by the seven legal directories that own the AI citation layer, ship the right schema on your site, and earn consensus across review platforms. Everything else is noise.
That is the short version. Here is how the machine actually works, and what managing partners should fund first.
What happens when someone asks ChatGPT for a lawyer
ChatGPT Search, which OpenAI rolled out broadly in late 2024 and made the default surface for logged-in users in 2025, does not “know” who the best lawyer in Atlanta is. When the prompt comes in, the system fires off a real-time query against its index, which combines the OAI-SearchBot crawl with Bing results. It pulls back roughly the first 20 to 30 sources. Then it does a deeper read on 5 to 8 of those. Then it selects 3 to 5 to cite in the final answer.
That selection step is where firms win or lose. The model is not reading every word on every site. It is looking for trust signals it can verify quickly. If your name appears on three of the sources it pulled, you are getting recommended. If it appears on zero, the buyer never sees you.
Google AI Mode and AI Overviews work on a similar pattern, except the index is Google’s own and the model is Gemini. Perplexity uses its own index plus partnerships. Claude pulls from a curated set of high-trust sources. Across all four engines, the funnel looks the same: query, retrieve, weight, synthesize, cite.
The seven directories that decide every legal answer
A 5WPR and Haute Lawyer report published earlier this year tested AI citations across the most common legal queries: “best personal injury lawyer near me,” “top divorce attorney in Houston,” “criminal defense lawyer Phoenix,” and so on. They found that seven directories produced almost every citation: Chambers, Legal 500, Super Lawyers, Best Lawyers, Martindale-Hubbell, Avvo, and Justia.
Four platforms show up most often in ChatGPT specifically: Super Lawyers, Avvo, and Martindale-Hubbell from that group, plus FindLaw, which sits just outside the seven but publishes in the same format. The reason is structural. Each of those platforms presents attorney profiles in a consistent, machine-readable layout with ratings, reviews, practice areas, and bar admissions. The AI does not have to interpret messy data. It pulls the fields it needs and trusts the source.
For a managing partner, the implication is uncomfortable. You can have the best law firm website in your market and still lose every AI recommendation because your Avvo profile is sparse, your Martindale rating is unclaimed, and you never paid for the Super Lawyers nomination process. The directories are the gatekeepers. The site is downstream.
Why AI engines are conservative with law firm recommendations
Legal services sit in what Google calls YMYL territory: Your Money or Your Life. AI platforms know that a bad legal recommendation can cost a client custody of their children, a personal injury settlement, or their freedom in a criminal case. So the models are tuned to be cautious in this category. They recommend firms with overwhelming, corroborated authority signals. They avoid firms with weak ones.
That caution shows up in three behaviors. First, the models prefer firms with multiple independent citations across the directory layer. One Avvo profile is not enough. They want Avvo plus Martindale plus Super Lawyers plus a Google Business Profile with 100+ reviews. Second, they discount firms that look promotional. A homepage full of “best in the state” claims with no third-party validation registers as low trust. Third, they weight content from the firm’s own attorneys, written under the attorney’s byline, with citations to actual statutes and case law, far higher than generic blog content.
This conservatism is why AEO for law firms is a different game from AEO for SaaS or e-commerce. The bar is higher. The shortcuts are fewer.
The four signals AI is actually reading
Strip away the marketing layer and the AI is checking four things on every law firm.
Directory citations. Does this firm appear on Avvo, Martindale, Super Lawyers, FindLaw, Justia, Chambers, or Legal 500? Are the profiles complete? Is the rating strong? Are the practice areas filled in?
Review consensus. What do clients say across Google, Avvo, and Lawyers.com? AI engines look for volume (50+ reviews), recency (reviews in the last 12 months), and rating consistency (4.5+ across platforms). One platform with 200 reviews and three platforms with 6 reviews each looks suspicious.
Schema markup on the firm’s own site. This is the technical layer most firms get wrong. AI engines parse your site faster and more confidently when you ship Attorney schema on bio pages, LegalService schema on practice-area pages, FAQPage schema on Q&A content, and Organization schema on the homepage. Without it, the engine has to infer. With it, the engine reads.
Press and editorial mentions. A profile in Above the Law, a quote in the ABA Journal, a feature in your state’s legal trade publication, all of these create citations the AI weights heavily. They also feed roundup articles (“top criminal defense attorneys in Texas”), which AI engines treat as pre-bundled comparisons and lean on disproportionately when picking three firms to cite.
If you are deciding where to spend a marketing dollar in 2026, the answer is one of those four. Not paid search. Not display ads. Not LinkedIn.
What this means for the firm website
The firm website still matters, but its job has changed. In 2018, the site was a destination. In 2026, the site is a source document the AI reads to verify what the directories already said.
The pages that matter most to AI engines, in order:
- Attorney bio pages. These need Attorney schema, bar admissions listed in plain text, education, years of experience, practice areas, and ideally a publication or speaking history. The AI uses these as the canonical record of who the lawyer is.
- Practice-area pages. These need LegalService schema, the geographic service area, the specific issues handled, and FAQ-style content that answers the questions a client would Google. “What is the average settlement for a slip and fall in South Carolina?” is the kind of query AI engines pull directly from a well-built practice page.
- Case results. AI engines treat case results as authority signals when they include settlement amounts, jurisdiction, and a one-paragraph summary. Generic “we got a great result for a client” pages do not count.
- An FAQ hub. A central FAQ page with FAQPage schema covers the long tail of buyer queries. It is the cheapest way to capture AI citations for question-style searches.
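The FAQ hub is the simplest of these to ship. Here is a hedged sketch of FAQPage markup using the practice-page question from above as the lone entry; the answer text is a placeholder, and a real hub would list many Question objects in the mainEntity array:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What is the average settlement for a slip and fall in South Carolina?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Settlements vary widely with injury severity and liability. Placeholder answer: replace with the attorney-written answer that appears on the page."
    }
  }]
}
</script>
```

Each answer in the markup must match the answer visible on the page word for word; mismatches between the structured data and the rendered content read as low trust.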
If a firm is starting from scratch, schema and FAQ content move the needle in 30 days. Directory citations take 90 days to compound. Press takes 6 months to fully feed back into the AI training and retrieval layers.
Why most law firms are losing this fight right now
Three failures show up over and over when we audit firms.
The first is unclaimed directory profiles. A firm has a profile on Avvo from 2014 that nobody updates. The bio is wrong, the practice areas are stale, the rating is mediocre. AI pulls from that profile because it is the canonical source.
The second is review concentration on Google only. The firm has 80 great Google reviews and zero Avvo reviews. AI engines flag the asymmetry as suspicious.
The third is no schema on the website. The firm pays $40,000 a year for a site that is beautiful and slow and has zero structured data. The AI cannot parse it confidently, so it leans harder on the directories. The site becomes irrelevant to the recommendation engine even when the buyer is right there reading it.
None of these are expensive to fix. They are simply ignored because most agencies still sell traditional SEO and have not updated the playbook.
The 30-day fix
If a firm wants to start showing up in AI recommendations this quarter, here is the order of operations.
Week one: Claim every directory profile (Avvo, Martindale, Super Lawyers, FindLaw, Justia, Lawyers.com, Google Business Profile). Fill them out. Match every detail across every platform. Inconsistency is the single fastest way to confuse an AI.
Week two: Ship Attorney, LegalService, Organization, and FAQPage schema on the firm site. A developer can do this in 6 to 10 hours. Validate with Google’s Rich Results Test.
Week three: Launch a review push to fix the Google-only asymmetry. Aim for 10 reviews each on Avvo and on the platform most relevant to your practice (Martindale-Hubbell for business clients, Justia for plaintiff work).
Week four: Audit press footprint. Pitch 3 trade publications and 1 regional paper on a current case angle, a regulatory commentary, or a settlement worth covering. Even one tier-two placement creates a citation the AI will pull from.
This is not glamorous work. It is also why the firms doing it are quietly winning every AI query in their market.
FAQ
How do AI engines like ChatGPT actually pick a law firm?
They scan 20 to 30 sources from their search index, do a deeper read on 5 to 8, then cite 3 to 5. The pool is dominated by Avvo, Martindale-Hubbell, Super Lawyers, Best Lawyers, Justia, Chambers, and Legal 500, with FindLaw close behind. Whichever firms appear most consistently across that pool get recommended.
Does my law firm website still matter for AI search?
Yes, but its role is to verify what the directories say, not to win the recommendation on its own. Attorney schema, LegalService schema, FAQ content, and case results are the highest-leverage pages.
Which legal directories matter most for AI visibility in 2026?
Avvo, Martindale-Hubbell, Super Lawyers, and FindLaw are the four most cited in ChatGPT. Chambers, Legal 500, Best Lawyers, and Justia complete the directory layer that drives almost every legal AI citation.
How long does it take to start showing up in AI recommendations?
Schema and directory cleanup move the needle in 30 days. Reviews compound over 90 days. Press feeds the AI retrieval layer over 6 months. Most firms see measurable lift in AI mentions within one quarter of starting.
Is AEO for law firms different from AEO for other industries?
Yes. Legal sits in YMYL territory, so AI engines are conservative. They demand more authority signals, more independent citations, and more verified review consensus before recommending a firm. The bar is higher than for SaaS or retail.
Want to know how AI sees your firm right now?
We run a free 20-minute AI visibility audit for law firms. We test your name across ChatGPT, Claude, Gemini, and Perplexity, name the specific directories where you are missing or weak, and show you the gaps in your schema and review profile. Book a call here or run the numbers yourself with our ROI calculator.