
OOOOOOOOOOOOOOOOOOOOOOOOOOO.CARRD.CO

15 Passed
2 Partial
21 Failed

QUICK WINS

Focus on these checks for the biggest score improvement.

  • WebMCP Declarative API (HIGH): No WebMCP signals found.
  • A2A Agent Card (HIGH): No A2A Agent Card found at /.well-known/agent.json.
  • JSON-LD present (HIGH): No valid JSON-LD blocks with schema.org @context found.

Full explanations and example fixes for each of these checks appear under Detailed Results.
Fix with AI Skills
$ /plugin marketplace add bartwaardenburg/isagentready-skills

DETAILED RESULTS

Page returned HTTP 200 - accessible to bots.

AI crawlers and agents must be able to fetch your pages via HTTP. If bots are blocked, AI systems cannot index or interact with your content.

Valid robots.txt found at ooooooooooooooooooooooooooo.carrd.co/robots.txt.

robots.txt is the universal standard for telling crawlers what they can access. AI crawlers like GPTBot, ClaudeBot, and Google-Extended check this file first before crawling your site.

Wildcard Allow: / in robots.txt permits all crawlers including AI agents.

Major AI systems use dedicated crawlers (GPTBot, ClaudeBot, Amazonbot). Explicit directives signal whether you want your content included in AI training and search results.
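If you want to signal intent explicitly rather than rely on the wildcard, robots.txt supports per-bot directives. A sketch using the vendor-published bot tokens (GPTBot, ClaudeBot, Google-Extended, Amazonbot — verify the current tokens against each vendor's documentation before deploying):

robots.txt plain
```text
# Explicitly allow the major AI crawlers
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: Google-Extended
Allow: /

User-agent: Amazonbot
Allow: /

# Everyone else
User-agent: *
Allow: /

Sitemap: https://ooooooooooooooooooooooooooo.carrd.co/sitemap.xml
```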

Valid XML sitemap found at ooooooooooooooooooooooooooo.carrd.co/sitemap.xml.

Sitemaps help AI crawlers discover all your pages efficiently. Without one, crawlers may miss important content or waste time discovering your site structure.

No valid llms.txt found at /llms.txt.

llms.txt
HIGH

llms.txt provides a curated entry point for AI agents — coding assistants, browsing agents, and task automation tools that fetch specific pages to complete tasks. Unlike crawlers that scrape everything, agents benefit from a structured overview of your most important pages.

Create an /llms.txt file with a markdown heading and relevant URLs to help LLMs understand your site.

llms.txt plain
# /llms.txt
# ooooooooooooooooooooooooooo.carrd.co

> ooooooooooooooooooooooooooo.carrd.co - your site description here.

## Docs
- [Getting Started](https://ooooooooooooooooooooooooooo.carrd.co/docs/getting-started)
- [API Reference](https://ooooooooooooooooooooooooooo.carrd.co/docs/api)

## Optional
- [Blog](https://ooooooooooooooooooooooooooo.carrd.co/blog)
- [Changelog](https://ooooooooooooooooooooooooooo.carrd.co/changelog)

Server returned HTML (content-type: text/html) when Accept: text/markdown was requested. No content negotiation support detected.

Content negotiation (text/markdown)
HIGH

AI agents fetch pages programmatically and benefit from machine-readable formats. Serving Markdown via Accept: text/markdown allows agents to consume structured content without HTML parsing overhead — saving up to 80% of tokens while preserving headings, links, and emphasis.

Serve Markdown when AI agents request it via `Accept: text/markdown`. Add a `Vary: Accept` header so CDN caches serve format-appropriate responses.

# Express.js middleware
// Serve Markdown only when the client prefers it over HTML, so that
// browsers (which send Accept: */*) still receive HTML.
app.get('*', (req, res, next) => {
  if (req.accepts(['text/html', 'text/markdown']) === 'text/markdown') {
    res.set('Content-Type', 'text/markdown')
    res.set('Vary', 'Accept')
    // markdownPath: your own mapping from a URL path to a Markdown file
    return res.sendFile(markdownPath(req.path))
  }
  next()
})

# Phoenix / Plug
# markdown_for/1 is your own helper returning the Markdown body for a path.
case get_req_header(conn, "accept") do
  [accept | _] when accept =~ "text/markdown" ->
    conn
    |> put_resp_content_type("text/markdown")
    |> put_resp_header("vary", "Accept")
    |> send_resp(200, markdown_for(conn.request_path))
    |> halt()
  _ -> conn
end

# Test: curl -H "Accept: text/markdown" https://ooooooooooooooooooooooooooo.carrd.co/

No restrictive meta robots or X-Robots-Tag directives found.

Meta robots tags and X-Robots-Tag headers control per-page indexing. Restrictive directives like noindex or noai prevent AI systems from using your content.

Only HTTP Last-Modified header found. This is a server-level signal - add content-level dates (dateModified in JSON-LD or article:modified_time meta tag) for stronger freshness signals.

Content freshness signals
MEDIUM

AI systems strongly prefer recent content. ChatGPT gives 3.2× more citations to content updated within 30 days. Clear date signals (dateModified, article:modified_time) help AI determine your content's recency.

Add a dateModified property to your JSON-LD structured data or an article:modified_time meta tag. Content with explicit modification dates gets 3.2× more AI citations.

JSON-LD html
<!-- Option 1: JSON-LD (preferred) -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Your Article Title",
  "datePublished": "2026-01-15T09:00:00Z",
  "dateModified": "2026-02-28T14:30:00Z"
}
</script>

<!-- Option 2: Open Graph meta tags -->
<meta property="article:published_time" content="2026-01-15T09:00:00Z">
<meta property="article:modified_time" content="2026-02-28T14:30:00Z">

<!-- Option 3: HTML time element -->
<time datetime="2026-02-28T14:30:00Z">Updated February 28, 2026</time>

<!-- Server-level: Ensure Last-Modified header is sent -->
# Nginx: add_header Last-Modified $date_gmt;
# Apache: FileETag MTime (default)

No valid JSON-LD blocks with schema.org @context found.

JSON-LD present
HIGH

JSON-LD is the preferred format for structured data by Google and AI search engines. It tells AI systems exactly what your page is about using a standardized vocabulary.

Add at least one <script type="application/ld+json"> block with "@context": "https://schema.org" describing your organization or page content. Schema markup provides a 73% higher selection rate in Google AI Overviews.

JSON-LD html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Your Company",
  "url": "https://ooooooooooooooooooooooooooo.carrd.co",
  "logo": "https://ooooooooooooooooooooooooooo.carrd.co/logo.png"
}
</script>

No Organization or WebSite schema type found in JSON-LD.

Organization / WebSite schema
HIGH

Organization and WebSite schemas establish your brand identity in the knowledge graph. AI systems use these to attribute content, display rich results, and build entity relationships.

Add a JSON-LD block with @type "Organization" or "WebSite" including at least "name" and "url" properties.

JSON-LD html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Your Company",
  "url": "https://ooooooooooooooooooooooooooo.carrd.co",
  "logo": "https://ooooooooooooooooooooooooooo.carrd.co/logo.png",
  "sameAs": [
    "https://twitter.com/yourcompany",
    "https://linkedin.com/company/yourcompany"
  ]
}
</script>

No high-value schema types found. We check for 17 recognized types including Article, Product, SoftwareApplication, FAQPage, LocalBusiness, VideoObject, and more.

High-value schema types
HIGH

Certain schema types (Article, Product, FAQPage, etc.) trigger rich results in Google and are used by AI search engines like Perplexity and ChatGPT to generate citations and structured answers.

Add JSON-LD markup for content-appropriate types like Article, Product, SoftwareApplication, FAQPage, LocalBusiness, Service, Event, or VideoObject to improve visibility in AI-powered search results.

JSON-LD html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "SoftwareApplication",
  "name": "Your App Name",
  "operatingSystem": "Web",
  "applicationCategory": "BusinessApplication",
  "offers": {
    "@type": "Offer",
    "price": "0",
    "priceCurrency": "USD"
  }
}
</script>

No FAQPage schema found. Sites with FAQPage schema are 8× more likely to be cited by ChatGPT.

FAQPage schema
HIGH

FAQPage schema is one of the strongest structured data signals for AI citation. Research shows websites with FAQPage schema are 8× more likely to appear in ChatGPT search results compared to those without it.

Add FAQPage structured data with Question/Answer pairs covering common queries about your product or service. This makes it one of the highest-ROI structured data investments for AI visibility.

JSON-LD html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is AI agent readiness?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "AI agent readiness measures how well a website can be discovered, understood, and interacted with by AI systems."
      }
    },
    {
      "@type": "Question",
      "name": "How do I improve my AI readiness score?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Add structured data, ensure clean semantic HTML, allow AI crawlers in robots.txt, and implement agent protocols."
      }
    }
  ]
}
</script>

No author attribution found. 96% of Google AI Overview citations come from sources with strong E-E-A-T signals including clear authorship.

Author attribution
HIGH

Author attribution is a key E-E-A-T signal. Google reports 96% of AI Overview citations come from sources with strong E-E-A-T, and expert-authored content is 3.2× more likely to be cited than generic staff-written content.

Add author information using JSON-LD (preferred), <meta name="author">, or <a rel="author">. Expert authorship is 3.2× more likely to earn AI citations.

JSON-LD html
<!-- Option 1: JSON-LD author in Article (preferred) -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Your Article Title",
  "author": {
    "@type": "Person",
    "name": "Jane Smith",
    "url": "https://ooooooooooooooooooooooooooo.carrd.co/team/jane-smith"
  }
}
</script>

<!-- Option 2: Meta tag -->
<meta name="author" content="Jane Smith">

<!-- Option 3: Linked author -->
<a rel="author" href="https://ooooooooooooooooooooooooooo.carrd.co/team/jane-smith">Jane Smith</a>

376 interactive elements with proper keyboard accessibility.

AI agents navigate pages using keyboard-style interactions via the accessibility tree. Positive tabindex values break natural focus order, and missing focusable elements on interactive widgets prevent AI agents from discovering and operating controls.

Body contains 6407 characters of visible text (server-side rendered).

Most AI crawlers and agents do not execute JavaScript; they parse raw HTML to understand page content. Without server-side rendering, your content is invisible to both training crawlers and task-completing agents.

No headings found on the page.

Heading hierarchy
HIGH

AI agents use headings to build a content outline and determine topic hierarchy. A clear h1→h2→h3 structure helps AI systems extract and summarize your content accurately.

Add a clear <h1> heading and use sequential heading levels (h1 -> h2 -> h3). Proper heading hierarchy helps AI agents understand page structure and content importance. See developer.mozilla.org/en-US/docs/Web/HTML/Element/Heading_Elements for heading element documentation.

HTML html
<h1>Page Title - One Per Page</h1>
  <h2>Main Section</h2>
    <h3>Subsection</h3>
    <h3>Subsection</h3>
  <h2>Another Section</h2>
    <h3>Subsection</h3>

No semantic HTML elements found (<header>, <nav>, <main>, <article>/<section>, <footer>).

Semantic HTML elements
HIGH

AI agents navigate pages using the accessibility tree, not visual layout. Semantic elements like <main>, <nav>, and <article> provide structural meaning that AI systems rely on to parse content.

Add missing semantic elements: header, nav, main, footer, article or section. Semantic HTML helps AI agents identify page regions and navigate content structure. See developer.mozilla.org/en-US/docs/Glossary/Semantics#semantics_in_html for semantic HTML documentation.

HTML html
<body>
  <header>
    <nav><!-- primary navigation --></nav>
  </header>
  <main>
    <article>
      <h1>Page Title</h1>
      <section><!-- content section --></section>
    </article>
  </main>
  <footer><!-- site footer --></footer>
</body>

No landmark regions, but 1 ARIA attribute(s) found (aria-labelledby).

Accessible landmarks & ARIA
MEDIUM

ARIA landmarks help AI agents identify page regions (navigation, main content, footer). This is the same API used by screen readers and is increasingly used by AI browsing agents.

Use semantic HTML elements to define page regions: <header>, <nav>, <main>, <footer>. These provide implicit ARIA landmarks without needing explicit role attributes. See w3.org/WAI/ARIA/apg/patterns/landmarks for the W3C landmarks pattern.

HTML html
<!-- Semantic elements provide implicit ARIA landmarks -->
<header><!-- banner landmark -->
  <nav aria-label="Primary"><!-- navigation landmark -->
    <a href="/" aria-current="page">Home</a>
  </nav>
</header>
<main><!-- main landmark -->
  <article>...</article>
</main>
<footer><!-- contentinfo landmark --></footer>

All 1 images have alt attributes.

AI agents cannot see images. Alt text is the only way for AI systems to understand image content, context, and relevance to the surrounding text.

Valid language attribute found: lang="en".

The lang attribute tells AI agents what language your content is in, enabling correct text processing, translation, and language-specific understanding.

All 50 sampled links have descriptive text.

AI agents use link text to understand navigation and discover related content. Generic text like 'click here' provides no context about where the link leads.

Descriptive page title found: "𖣠⚪𔗢⚪🞋⚪𔗢⚪𖣠◦୦◦◯◦୦◦⠀       ⠀◦୦◦◯◦୦◦𖣠⚪𔗢⚪🞋⚪𔗢⚪𖣠" (41 chars).

AI agents use the <title> element as the primary label when citing, indexing, or summarizing a page. A missing, empty, or generic title makes your page unidentifiable in AI-generated responses and search results.

All 375 buttons have accessible names.

AI agents invoke buttons by their accessible name in the accessibility tree. Without a discernible name, an AI agent cannot identify what a button does, making form submission and page interaction impossible.

No aria-hidden="true" usage found - no risk of accessibility tree corruption.

aria-hidden="true" removes elements from the accessibility tree. When applied to the <body> or to focusable elements, it corrupts the tree that AI agents navigate, causing them to miss content or encounter invisible controls.


No WebMCP signals found.

WebMCP Declarative API
HIGH

WebMCP is the W3C proposal for exposing website functionality directly to AI agents. Declarative tool annotations on forms let agents perform actions (search, purchase, book) without screen scraping.

Implement WebMCP to let AI agents interact with your site. Declarative: add tool-name and tool-description attributes to <form> elements. Imperative: use navigator.modelContext.provideContext() to register tools.

HTML html
<form tool-name="search" tool-description="Search products by keyword">
  <input type="text" name="query" tool-param-description="Search query">
  <button type="submit">Search</button>
</form>

No WebMCP manifest found at /.well-known/webmcp or /.well-known/webmcp.json.

WebMCP manifest
HIGH

A WebMCP manifest at /.well-known/webmcp.json enables pre-navigation discovery - AI agents can learn what tools your site offers before even loading the page.

Create a WebMCP manifest at /.well-known/webmcp.json with your tool definitions to enable pre-navigation agent discovery. Discovery is an active area of discussion in the WebMCP spec.

.well-known/webmcp.json json
{
  "spec": "webmcp/0.1",
  "tools": [
    {
      "name": "search",
      "description": "Search products",
      "url": "/search",
      "method": "GET",
      "parameters": [
        {"name": "q", "type": "string", "description": "Search query"}
      ]
    }
  ]
}

No A2A Agent Card found at /.well-known/agent.json.

A2A Agent Card
HIGH

Google's Agent-to-Agent (A2A) protocol enables autonomous agent-to-agent communication. An Agent Card at /.well-known/agent.json advertises your agent's capabilities to other AI systems.

Create an A2A Agent Card at /.well-known/agent.json with required fields: name, description, url, capabilities.

.well-known/agent.json json
{
  "name": "My Agent",
  "description": "An AI assistant for customer support",
  "url": "https://ooooooooooooooooooooooooooo.carrd.co/agent",
  "capabilities": {
    "streaming": true,
    "pushNotifications": false
  },
  "skills": [
    {"id": "support", "name": "Customer Support"}
  ]
}

No MCP discovery document found at /.well-known/mcp.json.

MCP Discovery
HIGH

The Model Context Protocol (MCP) by Anthropic is the standard for connecting AI assistants to external tools and data. A discovery document lets AI systems find and connect to your MCP server.

Create an MCP discovery document at /.well-known/mcp.json describing your MCP server capabilities.

.well-known/mcp.json json
{
  "mcpServers": {
    "myService": {
      "url": "https://ooooooooooooooooooooooooooo.carrd.co/mcp",
      "name": "My MCP Server",
      "description": "Provides tools for data access"
    }
  }
}

No OpenAPI or Swagger documentation found.

OpenAPI / API documentation
HIGH

OpenAPI specifications let AI agents understand and call your API programmatically. AI coding assistants and agent frameworks use OpenAPI to generate correct API calls automatically.

Create an OpenAPI 3.x specification and serve it at /openapi.json or /.well-known/openapi to enable AI agents to discover and use your API.

openapi.json json
{
  "openapi": "3.1.0",
  "info": {
    "title": "My API",
    "version": "1.0.0"
  },
  "paths": {
    "/api/search": {
      "get": {
        "summary": "Search items",
        "parameters": [
          {"name": "q", "in": "query", "schema": {"type": "string"}}
        ]
      }
    }
  }
}

No agents.json found at /.well-known/agents.json or /agents.json.

agents.json
HIGH

agents.json provides a directory of AI agent endpoints on your site. It helps agent orchestration systems discover and connect to your available AI services.

Create an agents.json file at /.well-known/agents.json with your agent endpoint definitions.

agents.json json
{
  "agents": [
    {
      "name": "Support Agent",
      "description": "Handles customer inquiries",
      "url": "https://ooooooooooooooooooooooooooo.carrd.co/agents/support",
      "protocol": "a2a"
    }
  ]
}

0/6 interactive surfaces covered by WebMCP tool annotations (0%). Forms: 0/0, Search: 0/0, Actions: 0/6, Filters: 0/0.

Interactive Surface Coverage
HIGH

Higher coverage of interactive surfaces with WebMCP annotations means AI agents can perform more actions on your site without screen scraping or brittle heuristics.

Add tool-name and tool-description attributes to uncovered forms and interactive elements to increase agent coverage.
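A sketch of what declarative coverage could look like for one uncovered surface, assuming the draft tool-* attribute names from the WebMCP proposal (the attribute set is still under discussion, and standalone buttons or actions may instead require the imperative navigator.modelContext API):

HTML html
```html
<form tool-name="contact" tool-description="Send a message to the site owner">
  <input type="email" name="email" tool-param-description="Reply-to email address">
  <textarea name="message" tool-param-description="Message body"></textarea>
  <button type="submit">Send</button>
</form>
```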


Site is served over HTTPS with a valid SSL certificate.

HTTPS is the baseline for secure communication. AI agents and crawlers refuse to interact with insecure sites, and browsers mark HTTP sites as unsafe.

Strict-Transport-Security header is not set. Note: sites on the HSTS preload list may not send this header but are still protected via browser preloading.

HSTS header
HIGH

HSTS prevents protocol downgrade attacks by telling browsers to always use HTTPS. AI agents that follow redirects benefit from HSTS as it eliminates insecure initial connections.

Add the Strict-Transport-Security header with max-age=31536000 (1 year) and includeSubDomains. If your site is on the HSTS preload list (hstspreload.org), the header is still recommended for first-visit protection.

Server config plain
# Nginx
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;

# Apache (.htaccess)
Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"

Content-Security-Policy header is not set.

Content-Security-Policy
HIGH

Content-Security-Policy prevents XSS and injection attacks. For AI agents that interact with your site, CSP ensures the page content hasn't been tampered with by malicious scripts.

Add a Content-Security-Policy header with directives like default-src, script-src, and style-src.

Server config plain
# Nginx
add_header Content-Security-Policy "default-src 'self'; script-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' data: https:; font-src 'self'" always;

# Apache (.htaccess)
Header always set Content-Security-Policy "default-src 'self'; script-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' data: https:; font-src 'self'"

X-Content-Type-Options header is not set.

X-Content-Type-Options
HIGH

This header prevents MIME type sniffing attacks. It ensures browsers and HTTP clients interpret your content exactly as intended, reducing the risk of content injection.

Add the header X-Content-Type-Options: nosniff to prevent MIME type sniffing.

Server config plain
# Nginx
add_header X-Content-Type-Options "nosniff" always;

# Apache (.htaccess)
Header always set X-Content-Type-Options "nosniff"

No frame protection found (neither X-Frame-Options nor CSP frame-ancestors).

Frame protection
HIGH

Frame protection prevents your site from being embedded in malicious iframes (clickjacking). This protects users who interact with your site through AI-suggested links.

Add X-Frame-Options: DENY (or SAMEORIGIN) or use CSP frame-ancestors directive.

Server config plain
# Nginx
add_header X-Frame-Options "DENY" always;
# Or via CSP:
add_header Content-Security-Policy "frame-ancestors 'none'" always;

# Apache (.htaccess)
Header always set X-Frame-Options "DENY"
# Or via CSP:
Header always set Content-Security-Policy "frame-ancestors 'none'"

No Access-Control-Allow-Origin header detected. The browser's same-origin policy is enforced by default - this is the most secure configuration for websites that don't expose cross-origin APIs.

CORS controls which domains can make cross-origin requests to your site. Without a CORS header, browsers enforce the same-origin policy - the most secure default. Only add CORS headers when you intentionally need to allow cross-origin API access.
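If you later expose a cross-origin API, scope the header to that path rather than the whole site. A server-config sketch (the /api/ path and allowed origin are placeholders for your own values):

Server config plain
```text
# Nginx: allow one trusted origin to read a public API path
location /api/ {
    add_header Access-Control-Allow-Origin "https://app.example.com" always;
    add_header Vary "Origin" always;
}
```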

Referrer-Policy header is not set.

Referrer-Policy
HIGH

Referrer-Policy controls what URL information is shared when users navigate away from your site. A strict policy protects user privacy and prevents leaking sensitive URL parameters.

Add a Referrer-Policy header (e.g., strict-origin-when-cross-origin or no-referrer).

Server config plain
# Nginx
add_header Referrer-Policy "strict-origin-when-cross-origin" always;

# Apache (.htaccess)
Header always set Referrer-Policy "strict-origin-when-cross-origin"

No AI training protections detected. Your content may be freely used for AI model training.

Common Crawl presence Unknown

Check timed out.

Training crawler blocking Not found

No training crawlers are blocked in robots.txt. All known AI training bots can access your content.
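If you would rather opt out, the same robots.txt mechanism can block the documented training crawlers. A sketch (bot tokens are vendor-published and change over time; blocking these affects training collection, not necessarily search visibility):

robots.txt plain
```text
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: Amazonbot
Disallow: /

User-agent: ClaudeBot
Disallow: /
```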

TDMRep declaration Not found

No TDMRep file found at /.well-known/tdmrep.json. The TDM Reservation Protocol (W3C) lets you declare whether you reserve text and data mining rights. See w3.org/community/reports/tdmrep/CG-FINAL-tdmrep-20240510 for the specification.
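A minimal /.well-known/tdmrep.json sketch based on the W3C community report (a JSON array of rules: "tdm-reservation": 1 reserves text and data mining rights for the matched location, and the optional "tdm-policy" URL is a placeholder for your own licensing terms):

.well-known/tdmrep.json json
```json
[
  {
    "location": "/*",
    "tdm-reservation": 1,
    "tdm-policy": "https://ooooooooooooooooooooooooooo.carrd.co/tdm-policy.json"
  }
]
```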

ai.txt policy Not found

No ai.txt file found. ai.txt lets you declare AI usage preferences for your content.

Developer Tools

SCAN AND FIX FROM YOUR EDITOR

Install our MCP server to scan any website from your terminal. Pair it with AI agent skills to fix failing checks automatically.

  • Scan any website directly from your terminal
  • Fix failing checks with AI agent skills
  • Re-scan to verify improvements
SCAN
$ claude mcp add isagentready-mcp -- npx -y isagentready-mcp
FIX
$ /plugin marketplace add bartwaardenburg/isagentready-skills
Free PDF Report

GET YOUR DETAILED REPORT

Receive a personalized report with actionable insights for improving your website's AI agent readiness.

  • Detailed score breakdown per category
  • Prioritized improvement recommendations
  • Step-by-step implementation guide

FROM THE BLOG

Deep dives into AI agent protocols, citation mechanics, and optimization strategies.

The Responsive Design Moment for AI Agents
11 min read

The web is going through the same shift it went through with mobile. First separate m.dot sites, then responsive convergence. AI agent readiness is following the same arc: separate agent APIs today, one adaptive content layer tomorrow. We trace the parallels, the data, and what comes after convergence — the personal LLM layer.

ai-agents web-standards getting-started
Content Negotiation for AI Agents: Why Sentry Serves Markdown Over HTML
9 min read

Sentry co-founder David Cramer shows how content negotiation — a 25-year-old HTTP standard — saves AI agents 80% of tokens. We break down the implementation: Accept headers, markdown delivery, authenticated page redirects, and what this means for every website preparing for agent traffic.

ai-agents seo getting-started
Vercel's agent-browser: Why a CLI Beats MCP for Browser Automation
10 min read

Vercel's agent-browser hit 22,000 GitHub stars in two months. It's a CLI, not an MCP server, and the data shows why: 94% fewer tokens, 3.5x faster execution, 100% success rate. We break down how it works, why it uses the accessibility tree, and what the 'less is more' finding means for your website.

ai-agents web-standards accessibility

EXPLORE MORE

Most websites score under 45. Find out where you stand.

SEE THE RANKINGS
See how ooooooooooooooooooooooooooo.carrd.co compares to other websites on AI agent readiness across all 5 categories.

COMPARE
Compare ooooooooooooooooooooooooooo.carrd.co side-by-side with a competitor across all 5 categories and 47 checkpoints.

HOW WE SCORE
Learn how the AI readiness score is calculated across discovery, structured data, semantics, protocols, and security.