Monetize AI
Crawler Traffic

Built on the IETF Web Bot Auth draft and RFC 9421 HTTP Message Signatures: cryptographic verification, granular policies, and pay-per-crawl, all without CDN lock-in.

Built on Open Standards

RFC 9421
IETF Standard
HTTP Signatures
Message Authentication
JWKS
Key Distribution
Apache 2.0
Open License

Everything you need for
agent traffic management

Verify crawler identity, enforce access policies, and build monetization programs.

Origin Verification

Cryptographic identity using HTTP Message Signatures and JWKS. Verify requests at your server, not at a third-party CDN.

Policy Engine

Define access rules per crawler identity. Allow, deny, rate limit, or tier access with detailed analytics.

Early Access

Pay-per-Crawl

Create monetization programs with custom pricing tiers. Automated metering and reporting for settlement.

Simple Integration

WordPress plugin, Node.js and Python SDKs, or zero-code proxy. Add verification in minutes, not weeks.

No Lock-in

Open protocol, open source. Self-host everything or use our managed cloud. Your data, your infrastructure.

Network Effects

Crawlers registered anywhere work with any publisher. The more participants, the more value for everyone.

How it works

Five steps from crawler registration to verified, monetized access.

01

Register identity

Crawlers generate cryptographic keys and publish them via JWKS endpoint.

02

Sign requests

Each HTTP request is signed using HTTP Message Signatures (RFC 9421).

03

Verify at origin

Your server validates the signature against the crawler's published keys.

04

Apply policy

Access rules determine the response. Usage is tracked for analytics.

05

Meter & monetize

Usage data feeds into pay-per-crawl programs with custom pricing.
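Steps 1 through 3 can be sketched in a few lines using Node's built-in crypto. This is an illustrative sketch, not the OpenBotAuth API: the key id, signature components, and parameter string are example values, and a real deployment would publish the public key via a JWKS endpoint and rebuild the signature base from the incoming request.

```javascript
import crypto from 'node:crypto';

// Step 1: the crawler generates an Ed25519 key pair. The public key
// would be published at a JWKS endpoint for publishers to fetch.
const { publicKey, privateKey } = crypto.generateKeyPairSync('ed25519');

// Step 2: build the RFC 9421 signature base over selected request
// components, then sign it. "@authority" and "@path" are derived
// components defined by RFC 9421.
function signatureBase(components, params) {
  const lines = Object.entries(components).map(
    ([name, value]) => `"${name}": ${value}`
  );
  lines.push(`"@signature-params": ${params}`);
  return lines.join('\n');
}

const components = { '@authority': 'publisher.example', '@path': '/api/content' };
const created = Math.floor(Date.now() / 1000);
const params =
  `("@authority" "@path");created=${created};keyid="crawler-key-1";alg="ed25519"`;
const base = signatureBase(components, params);

// Ed25519 signing in Node passes null as the digest algorithm.
const signature = crypto.sign(null, Buffer.from(base), privateKey);

// Step 3: the origin rebuilds the same base from the incoming request
// and verifies the signature against the crawler's published key.
const verified = crypto.verify(null, Buffer.from(base), publicKey, signature);
console.log(verified); // true
```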

Integrate in minutes

Add crawler verification to your existing stack with just a few lines of code.

View documentation
import express from 'express';
import { openBotAuthMiddleware } from '@openbotauth/verifier-client/express';

const app = express();
app.use(openBotAuthMiddleware());

app.get('/api/content', (req, res) => {
  const oba = req.oba;
  if (oba.signed && oba.result?.verified) {
    // Verified bot - serve full content
    res.json({ content: 'Full article', agent: oba.result.agent });
  } else {
    // Unverified - serve teaser
    res.json({ content: 'Preview only...' });
  }
});

Frequently asked questions

What is OpenBotAuth?

OpenBotAuth is an identity and verification system for AI agents and crawlers. It lets publishers verify crawler identity at the origin using cryptographic signatures, then meter and monetize access through pay-per-crawl programs.

How does origin verification work?

Crawlers register cryptographic keys and sign their HTTP requests using HTTP Message Signatures (RFC 9421). Publishers verify these signatures at the origin server using JWKS, without relying on CDN or third-party infrastructure.
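The JWKS lookup described above can be sketched as follows, again using only Node's built-in crypto. The helper name, `kid` value, and JWKS shape are illustrative assumptions; a real origin would fetch the JWKS from the crawler's published URL and cache it.

```javascript
import crypto from 'node:crypto';

// Verify a signature base against a JWKS document, matching on key id.
function verifyAgainstJwks(jwks, keyid, base, signature) {
  const jwk = jwks.keys.find((k) => k.kid === keyid);
  if (!jwk) return false; // unknown key id: treat as unverified
  const publicKey = crypto.createPublicKey({ key: jwk, format: 'jwk' });
  return crypto.verify(null, Buffer.from(base), publicKey, signature);
}

// Demo: generate a key and export it as the JWKS a crawler would publish.
const { publicKey, privateKey } = crypto.generateKeyPairSync('ed25519');
const jwks = {
  keys: [{ ...publicKey.export({ format: 'jwk' }), kid: 'crawler-key-1' }],
};

const base =
  '"@authority": publisher.example\n' +
  '"@signature-params": ("@authority");keyid="crawler-key-1"';
const sig = crypto.sign(null, Buffer.from(base), privateKey);
console.log(verifyAgainstJwks(jwks, 'crawler-key-1', base, sig)); // true
```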

What is pay-per-crawl?

Pay-per-crawl is a monetization model where publishers can charge AI companies and crawler operators for access to their content, with metering and billing handled programmatically. This is currently in early access.

Is OpenBotAuth open source?

Yes. The protocol specification and reference implementation are open source under Apache 2.0. OpenBotAuth Cloud provides managed infrastructure and program tooling on top of the open protocol.

Does this require CDN integration?

No. OpenBotAuth works at the origin server level. You can use any CDN, switch CDNs, or use no CDN at all. Verification happens on your infrastructure using standard HTTP.

How do I get started?

Check out the open source implementation on GitHub. We provide a WordPress plugin, Node.js and Python SDKs, and a zero-code proxy for any HTTP backend.

Ready to get started?

Join publishers and platforms already using OpenBotAuth to verify, meter, and monetize AI crawler access.