Generative Engine Optimization, graded.
Paste any URL. AgentGEOScore checks whether ChatGPT, Claude, Gemini, Perplexity, Groq, and the long tail of AI agents can find, read, and cite your site — then hands you back a grade out of one hundred, a ranked fix list with copy-pasteable HTML, and the ground-truth prompts to test whether AI search actually recommends you.
Or try a known-good one: stripe.com · anthropic.com · perplexity.ai
What we measure
Agent Access · 25%
Are GPTBot, ClaudeBot, PerplexityBot, Google-Extended, Applebot-Extended, Amazonbot, Bytespider, and the long tail of AI crawlers actually allowed in by your robots.txt? Many sites block them by accident through a stale wildcard Disallow rule. We audit every documented AI-bot user agent (see OpenAI's bot docs and Anthropic's crawler docs) and flag the ones that lose you citations.
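The fix is usually a handful of explicit groups; a minimal sketch, assuming you want all documented AI crawlers in (a crawler that finds a group naming its own user agent ignores the wildcard group):

    # Explicit groups take precedence over the User-agent: * group.
    User-agent: GPTBot
    Allow: /

    User-agent: ClaudeBot
    Allow: /

    User-agent: PerplexityBot
    Allow: /

    User-agent: Google-Extended
    Allow: /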
Discoverability · 20%
sitemap.xml, HTTPS, canonical URLs, response speed, SPA-rendering risk, Core Web Vitals via Google PageSpeed Insights, hreflang return-tag symmetry, and a multi-page sample so the homepage cannot be the only readable surface.
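As a point of reference, a head fragment that would pass the canonical and hreflang checks might look like this (example.com and the locales are placeholders; every locale page must carry the same alternate set pointing back, or the return-tag symmetry check fails):

    <link rel="canonical" href="https://example.com/pricing">
    <link rel="alternate" hreflang="en" href="https://example.com/pricing">
    <link rel="alternate" hreflang="de" href="https://example.com/de/pricing">
    <link rel="alternate" hreflang="x-default" href="https://example.com/pricing">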
Structured Data · 20%
schema.org JSON-LD presence, rich-type coverage (Organization, SoftwareApplication, FAQPage, Article, Product), validator-conformance of required properties per @type, OpenGraph, Twitter cards, and Person + sameAs E-E-A-T authorship signals.
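A representative JSON-LD block of the kind this category rewards (every name and URL below is a placeholder):

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Organization",
      "name": "Example Co",
      "url": "https://example.com",
      "logo": "https://example.com/logo.png",
      "sameAs": [
        "https://github.com/example",
        "https://www.linkedin.com/company/example"
      ]
    }
    </script>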
Content Clarity · 15%
Title and meta-description length, a single clean H1, semantic header / main / nav / footer / article landmarks, real visible text (not divs pretending to be a page), content depth against the Princeton GEO 1500–2500-word band (Aggarwal et al., 2024), and internal-linking quality.
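The markup shape this category looks for, reduced to its landmarks (an illustrative skeleton, not tool output):

    <body>
      <header>…site banner…</header>
      <nav>…primary navigation…</nav>
      <main>
        <article>
          <h1>One clear page title</h1>
          <p>Real visible text, not divs pretending to be a page.</p>
        </article>
      </main>
      <footer>…site footer…</footer>
    </body>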
Citation Probe · 20%
Live queries to Gemini (Google AI with grounded search), Mistral, Brave Search, Duck.ai, and Groq llama-3.3-70b on category-relevant prompts. Each probe asks a question your category should win, then checks whether the answer actually cites your domain. The Duck.ai probe requires no API key and always runs.
Ranked Fix List
Every failing check becomes a prioritized fix — tagged with severity, effort, expected score lift, and a copy-pasteable HTML snippet that works as-is. The list is sorted by impact per effort, so the top fix is always the one worth shipping first.
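The ordering is simple to sketch in Python (field names and values below are illustrative assumptions, not the report schema):

    fixes = [
        {"id": "add-json-ld", "severity": 4, "lift": 8, "effort": 1},
        {"id": "unblock-gptbot", "severity": 5, "lift": 12, "effort": 1},
        {"id": "add-sitemap", "severity": 3, "lift": 5, "effort": 2},
    ]
    # Highest severity first, then biggest expected lift, then least effort.
    fixes.sort(key=lambda f: (-f["severity"], -f["lift"], f["effort"]))
    print([f["id"] for f in fixes])  # unblock-gptbot ships first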
How AgentGEOScore works
You paste a URL. The backend fetches the homepage over HTTPS with a polite user agent, parses the HTML with lxml, and dispatches 8 scanner modules in parallel. Each scanner covers a focused, evidence-backed slice of the GEO literature: agent_access resolves robots.txt; discoverability verifies the sitemap and canonical URL; structured_data walks every JSON-LD block and validates the required properties for each schema.org @type; content_clarity measures landmarks and text density; js_rendering detects SPA shells whose real content lives in a bundle most AI crawlers will not execute; and a multi-page sampler crawls 5–10 internal links so your score reflects the site, not just a polished homepage.
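The dispatch pattern looks roughly like this (a minimal sketch with two toy scanners, assuming only httpx and lxml; not the actual AgentGEOScore source):

    import asyncio
    import httpx
    from lxml import html

    async def agent_access(client: httpx.AsyncClient, base: str) -> dict:
        # Fetch robots.txt and check whether GPTBot is mentioned at all.
        resp = await client.get(base.rstrip("/") + "/robots.txt")
        return {"name": "agent_access", "gptbot_mentioned": "GPTBot" in resp.text}

    async def content_clarity(tree) -> dict:
        # One clean H1 is among the clarity signals the real scanner measures.
        return {"name": "content_clarity", "h1_count": len(tree.xpath("//h1"))}

    async def scan(url: str) -> dict:
        headers = {"User-Agent": "geo-sketch/0.1 (+https://example.com)"}
        async with httpx.AsyncClient(headers=headers, follow_redirects=True) as client:
            resp = await client.get(url)
            tree = html.fromstring(resp.text)
            # Scanners are independent coroutines, so they run concurrently.
            results = await asyncio.gather(agent_access(client, url),
                                           content_clarity(tree))
        return {r["name"]: r for r in results}

    print(asyncio.run(scan("https://example.com")))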
"Cite Sources, Quotation Addition, and Statistics Addition are the three highest-impact content modifications for AI engine citation — measured on a 10,000-query benchmark across Perplexity-class search." — Aggarwal et al., GEO benchmark paper, KDD 2024
Once the scanners return, the citation probe layer queries 5 real LLMs and search engines — Gemini, Mistral, Brave, Duck.ai, Groq — with prompts derived from your page's category and entities. The probe asks a question your category should win, then checks whether the answer mentions your domain. Each probe degrades gracefully when its API key is unset, returning a clean skip rather than blocking the report.
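The graceful-skip contract is easy to sketch; the endpoint and model name below are Mistral's public ones, but the probe itself is illustrative, not the project's code:

    import os
    import httpx

    def probe_mistral(prompt: str, domain: str) -> dict:
        # A missing key yields a clean skip instead of blocking the report.
        key = os.environ.get("MISTRAL_API_KEY")
        if not key:
            return {"engine": "mistral", "status": "skipped"}
        resp = httpx.post(
            "https://api.mistral.ai/v1/chat/completions",
            headers={"Authorization": f"Bearer {key}"},
            json={"model": "mistral-small-latest",
                  "messages": [{"role": "user", "content": prompt}]},
            timeout=30,
        )
        answer = resp.json()["choices"][0]["message"]["content"]
        # The probe passes when the engine's answer mentions the domain.
        return {"engine": "mistral", "status": "ok", "cited": domain in answer}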
The score blends category results with documented weights: Agent Access 25%, Discoverability 20%, Structured Data 20%, Content Clarity 15%, Citation Probe 20%. The grading bands are A ≥ 90, B ≥ 75, C ≥ 60, D ≥ 40, else F. The fix list ranks every warning and failure by severity, then by expected score lift, then by effort — so the first item is always the highest-leverage thing to ship. Each fix carries an HTML snippet you can paste into your site templates and ship in 5 minutes.
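In code, the blend and the bands are a few lines (the weights are the documented ones; the category keys are illustrative):

    WEIGHTS = {"agent_access": 0.25, "discoverability": 0.20,
               "structured_data": 0.20, "content_clarity": 0.15,
               "citation_probe": 0.20}

    def grade(category_scores: dict) -> tuple:
        # Each category score is 0-100; the total is the weighted sum.
        total = sum(w * category_scores[c] for c, w in WEIGHTS.items())
        for letter, floor in (("A", 90), ("B", 75), ("C", 60), ("D", 40)):
            if total >= floor:
                return round(total), letter
        return round(total), "F"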
"AI engines treat undated content as stale. Visible 'Updated' dates paired with a machine-readable <time datetime> element raise citation rate by ~34% on the Perplexity and Google AI Overviews surfaces."
— Seenos AI-search freshness audit, 2026
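The pairing that audit describes is a one-line fix (the date is a placeholder):

    <p>Updated <time datetime="2025-06-01">June 1, 2025</time></p>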
Reproducibility: the backend ships 500 pytest tests under github.com/LindaHaviv/agentgeoscore, with all HTTP mocked via respx; the frontend has 23 vitest tests plus an axe-core accessibility suite; and a zero-tolerance gate blocks any WCAG 2.1 AA violation. Per-IP rate limits are 10/min on /api/scan, 5/min on /api/compare, and 30/min on /api/test-prompts.
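For a flavor of the respx pattern, here is a minimal illustrative test (not one of the 500):

    import httpx
    import respx

    @respx.mock
    def test_robots_fetch_is_mocked():
        # No network: respx intercepts the request and serves the canned body.
        respx.get("https://example.com/robots.txt").mock(
            return_value=httpx.Response(200, text="User-agent: GPTBot\nAllow: /")
        )
        resp = httpx.get("https://example.com/robots.txt")
        assert "GPTBot" in resp.text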
Frequently asked questions
What is generative engine optimization?
GEO is what SEO becomes when the readers are large language models. Classical SEO optimizes for ranking on Google search results; GEO optimizes for being found, read, and cited by AI agents like ChatGPT, Claude, Gemini, and Perplexity. The signals overlap (clean HTML, fast pages, structured data) but the failure modes diverge: JavaScript-only pages, missing schema.org JSON-LD, and unfriendly robots.txt rules silently remove you from AI answers without affecting Google ranking.
How is this different from a regular SEO audit?
Three things. First, the score blends 5 categories that mirror the AI-search literature, not Google ranking factors. Second, every check is grounded in a citation from the GEO literature or a documented platform behavior — no vibes-based heuristics. Third, the report ships a ranked fix list with copy-pasteable HTML and an expected score lift per fix, so you know exactly which change to ship first.
Do I need to install anything?
No. AgentGEOScore is a web tool — paste a URL on the homepage and the report renders. There is no signup, no cookies, no third-party tracking, and no scan results are persisted server-side. The full source is open under MIT on GitHub, so you can self-host if you would rather run scans privately.
Can I compare against competitors?
Yes. Every report has a side-by-side compare card. Drop in 1–3 competitor domains; the backend runs each through the same scoring pipeline and renders a per-category delta table. Results are cached for 1 hour, so re-running with the same competitors is instant.
Which AI engines do you probe for citations?
Five: Gemini (Google AI with native web access), Mistral, Brave Search (proxy for Perplexity's index), Groq llama-3.3-70b, and Duck.ai (DuckDuckGo's chat layer over GPT-4o-mini and Claude). All keys come from free-tier signups. The Duck.ai probe needs no key and always runs. Any probe whose key is missing returns a clean skip and the rest of the report still renders.
Is AgentGEOScore open source?
Yes — MIT licensed. The backend is FastAPI + httpx, typed with Pydantic. The frontend is Vite + React + TypeScript + Tailwind. Test suites cover backend (pytest + respx, 500 tests), frontend unit (vitest + Testing Library), and end-to-end accessibility and smoke flows (Playwright + axe-core across desktop, tablet, and mobile viewports).
All probes use free-tier APIs. No data stored. The source is open.