Take Back Control of
AI Crawler Traffic

Cryptographic verification at your origin. Granular access policies. Pay-per-crawl programs for content monetization, all without CDN lock-in.

Built on Open Standards

RFC 9421
IETF Standard
HTTP Signatures
Message Authentication
JWKS
Key Distribution
Apache 2.0
Open License

Everything you need for
agent traffic management

Verify crawler identity, enforce access policies, and build monetization programs.

Origin Verification

Cryptographic identity using HTTP Message Signatures and JWKS. Verify requests at your server, not at a third-party CDN.

Policy Engine

Define access rules per crawler identity. Allow, deny, rate limit, or tier access with detailed analytics.

Early Access

Pay-per-Crawl

Create monetization programs with custom pricing tiers. Automated metering and reporting for settlement.

Simple Integration

WordPress plugin, Node.js middleware, Nginx sidecar. Add verification in minutes, not weeks.

No Lock-in

Open protocol, open source. Self-host everything or use our managed cloud. Your data, your infrastructure.

Network Effects

Crawlers registered anywhere work with any publisher. More participants means more value for everyone.

How it works

Five steps from crawler registration to verified, metered access.

01

Register identity

Crawlers generate cryptographic keys and publish them via a JWKS endpoint.
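
As a sketch, using only Node's built-in crypto (the key ID, `use` value, and well-known URL below are illustrative, not part of the OpenBotAuth spec):

```typescript
import { generateKeyPairSync } from 'node:crypto';

// Generate an Ed25519 key pair. The private key stays with the crawler;
// only the public half is published.
const { publicKey, privateKey } = generateKeyPairSync('ed25519');

// Export the public key as a JWK and wrap it in a JWKS document, which the
// crawler would serve at a stable URL such as
// https://crawler.example/.well-known/jwks.json (hypothetical).
const jwk = publicKey.export({ format: 'jwk' });
const jwks = {
  keys: [{ ...jwk, kid: 'crawler-key-1', use: 'sig' }],
};

console.log(JSON.stringify(jwks, null, 2));
```

Publishers later fetch this document and look up keys by `kid` when verifying signatures.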

02

Sign requests

Each HTTP request is signed using HTTP Message Signatures (RFC 9421).
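
A minimal sketch of what signing involves, again with Node's built-in crypto; the request values and key ID are made up, and a real client would cover the components its policy requires:

```typescript
import { generateKeyPairSync, sign } from 'node:crypto';

// Hypothetical request the crawler is about to send.
const method = 'GET';
const authority = 'publisher.example';
const path = '/articles/42';

// Covered components and signature parameters per RFC 9421.
const created = Math.floor(Date.now() / 1000);
const keyid = 'crawler-key-1';
const params = `("@method" "@authority" "@path");created=${created};keyid="${keyid}"`;

// The signature base lists each covered component on its own line and ends
// with the @signature-params pseudo-component.
const signatureBase = [
  `"@method": ${method}`,
  `"@authority": ${authority}`,
  `"@path": ${path}`,
  `"@signature-params": ${params}`,
].join('\n');

// Ed25519 signs the base directly (pass null as the digest algorithm).
const { privateKey } = generateKeyPairSync('ed25519');
const sig = sign(null, Buffer.from(signatureBase), privateKey);

// The two headers RFC 9421 defines, attached to the outgoing request.
const headers = {
  'Signature-Input': `sig1=${params}`,
  'Signature': `sig1=:${sig.toString('base64')}:`,
};
```

The verifier rebuilds the same base from the received request and the `Signature-Input` header, so both sides compute over identical bytes.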

03

Verify at origin

Your server validates the signature against the crawler's published keys.
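
In practice you would use the middleware shown further down; as an illustration of what it does under the hood, here is a hypothetical round trip with Node's crypto, where the origin re-imports the crawler's published JWK and checks the signature over the reconstructed base:

```typescript
import { generateKeyPairSync, sign, verify, createPublicKey } from 'node:crypto';

// Stand-in for step 2: the crawler signs a signature base.
const { publicKey, privateKey } = generateKeyPairSync('ed25519');
const signatureBase =
  '"@method": GET\n"@authority": publisher.example\n"@path": /articles/42';
const sig = sign(null, Buffer.from(signatureBase), privateKey);

// The origin fetches the crawler's JWKS and imports the matching JWK
// back into a key object...
const jwk = publicKey.export({ format: 'jwk' });
const crawlerKey = createPublicKey({ key: jwk, format: 'jwk' });

// ...then verifies the signature over the base it rebuilt from the request.
const valid = verify(null, Buffer.from(signatureBase), crawlerKey, sig);
```

Any change to the covered components (method, authority, path) changes the base and fails verification.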

04

Apply policy

Access rules determine the response. Usage is tracked for analytics.
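
A toy policy table to make this concrete; the rule shape, crawler IDs, and default-deny choice here are illustrative, not the OpenBotAuth policy API:

```typescript
// Rules keyed by verified crawler identity (hypothetical shape).
type Rule = { action: 'allow' | 'deny' | 'rate_limit'; limitPerMin?: number };

const policies: Record<string, Rule> = {
  'good-bot': { action: 'allow' },
  'heavy-bot': { action: 'rate_limit', limitPerMin: 60 },
  'blocked-bot': { action: 'deny' },
};

// Per-crawler counters; doubles as the usage data fed to analytics.
const requestCounts = new Map<string, number>();

function applyPolicy(crawlerId: string): number {
  const rule = policies[crawlerId] ?? { action: 'deny' }; // default-deny unknowns
  if (rule.action === 'deny') return 403;
  if (rule.action === 'rate_limit') {
    const count = (requestCounts.get(crawlerId) ?? 0) + 1;
    requestCounts.set(crawlerId, count);
    if (count > (rule.limitPerMin ?? 0)) return 429;
  }
  return 200;
}
```

The returned status code would drive the response in the middleware's request handler.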

05

Meter & monetize

Usage data feeds into pay-per-crawl programs with custom pricing.
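
For example, tiered per-request pricing could settle a billing period like this; the tier boundaries and prices below are invented for the sketch:

```typescript
// Volume tiers: requests up to each boundary are billed at that tier's rate.
type Tier = { upTo: number; pricePerRequest: number }; // price in USD

const tiers: Tier[] = [
  { upTo: 10_000, pricePerRequest: 0.002 },
  { upTo: 100_000, pricePerRequest: 0.001 },
  { upTo: Infinity, pricePerRequest: 0.0005 },
];

// Total charge for a period, walking metered usage through each tier.
function charge(requests: number): number {
  let total = 0;
  let prev = 0;
  for (const tier of tiers) {
    const inTier = Math.min(requests, tier.upTo) - prev;
    if (inTier <= 0) break;
    total += inTier * tier.pricePerRequest;
    prev = tier.upTo;
  }
  return total;
}
```

With these example numbers, 25,000 metered requests would settle at 10,000 × $0.002 + 15,000 × $0.001 = $35.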

Integrate in minutes

Add crawler verification to your existing stack with just a few lines of code.

View documentation
import { createVerifier } from '@openbotauth/verifier';

const verifier = createVerifier({
  registryUrl: 'https://api.openbotauth.com',
});

// In your request handler
app.use(async (req, res, next) => {
  const result = await verifier.verify(req);

  if (result.valid) {
    req.crawler = result.crawlerId;
    // Apply your policy
  }

  next();
});

Frequently asked questions

What is OpenBotAuth?

OpenBotAuth is an identity and verification system for AI agents and crawlers. It lets publishers verify crawler identity at the origin using cryptographic signatures, then meter and monetize access through pay-per-crawl programs.

How does origin verification work?

Crawlers register cryptographic keys and sign their HTTP requests using HTTP Message Signatures (RFC 9421). Publishers verify these signatures at the origin server using JWKS, without relying on a CDN or other third-party infrastructure.

What is pay-per-crawl?

Pay-per-crawl is a monetization model where publishers can charge AI companies and crawler operators for access to their content, with metering and billing handled programmatically. This is currently in early access.

Is OpenBotAuth open source?

Yes. The protocol specification and reference implementation are open source under Apache 2.0. OpenBotAuth Cloud provides managed infrastructure and program tooling on top of the open protocol.

Does this require CDN integration?

No. OpenBotAuth works at the origin server level. You can use any CDN, switch CDNs, or use no CDN at all. Verification happens on your infrastructure using standard HTTP.

How do I get started?

Request early access to OpenBotAuth Cloud, or check out the open source implementation on GitHub. We provide WordPress plugins, Node.js middleware, and Nginx integration patterns.

Ready to get started?

Join publishers and platforms already using OpenBotAuth to verify, meter, and monetize AI crawler access.