9 min read · By Vladislav Rajtmajer

CAPTCHA without cookies: a proof-of-work approach

Why CAPTCHAs need cookies, and how I built one that doesn't — proof-of-work design behind captchaapi.eu with production measurements.

We've all been there. You're trying to sign in, you click "I'm not a robot", and instead of a simple checkbox you get a 3×3 grid of blurry photos. Click all squares with traffic lights. You miss one — was that a real traffic light, or just a pole with a light on top of it? Wrong. New grid. Crosswalks this time. By the third round you've forgotten what you were trying to do in the first place.

That's the visible part. Behind it, there's something less visible: a small army of cookies and trackers that decide whether you "look human" enough to be let through. The cookies do more than rate-limit your CAPTCHA — they feed a profiling graph that spans every site you've ever visited that uses the same provider.

This post is the engineering write-up of how I built a CAPTCHA that doesn't do any of that, and the trade-offs that came with it. It's not a privacy rant; it's an honest engineering question: do CAPTCHAs actually need cookies? Or did we end up with cookies because they were the easiest tool when the problem was first solved, and nobody went back to challenge that assumption after GDPR shifted the cost equation underneath?

Cookies turn out not to be necessary. I built captchaapi.eu without them. Here's how.

What's actually inside reCAPTCHA's data flow

I'll use reCAPTCHA as the canonical example because it's what most EU developers default to. When a visitor hits a page with reCAPTCHA enabled, the following happens in the background:

  • A _GRECAPTCHA cookie is set on google.com (cross-site cookie via iframe).
  • If the visitor is logged into a Google account, additional SID, HSID, SSID, APISID, SAPISID cookies are read from the Google account session.
  • Browser metadata is harvested: user agent, screen resolution, plugins, font list, time zone, language settings.
  • A risk score is computed based on the visitor's recent activity across every Google property they've visited and every site that uses reCAPTCHA.

That last point is what matters for GDPR. The risk model isn't local to your site — it's a cross-property graph spanning Google Search, Gmail, YouTube, every reCAPTCHA-protected site, and a few less obvious sources. Google calls it "advanced risk analysis"; from a GDPR perspective it's classic profiling under Article 4(4).

Plus: data leaves the EU. Google's processors are global, anchored in the US. After the Schrems II ruling invalidated Privacy Shield in 2020, EU data transfers to Google require Standard Contractual Clauses or DPF certification, and the transfer impact assessment has to acknowledge the surveillance risk that the CJEU explicitly flagged.

For an EU SaaS asking visitors to "click all squares with traffic lights", that's compliance overhead. It's not technically impossible — Google publishes their DPF certification, you sign their DPA, you tick the boxes. But it's the kind of overhead that creates downstream friction: cookie banners, CMP integrations, DPO sign-offs, and the worry about what happens when the next Schrems judgment lands.

Why traditional CAPTCHAs need cookies

Cookies aren't in CAPTCHAs because someone decided to be evil. They're there because they solved real engineering problems when the category was first built:

  1. Session continuity. A CAPTCHA challenge has two phases: issue and verify. The server needs to know "this verification request is for that specific challenge". Without state, you have to put the challenge into the cookie itself, signed and timestamped.

  2. Rate limiting per visitor. A bot can fake a User-Agent, but a freshly-minted cookie identifier is a useful (if weak) signal that this is a new session. Cookies give you a stable handle to throttle.

  3. Risk scoring across visits. "This visitor has solved 17 CAPTCHAs in the last 60 seconds" is a useful signal. Without per-visitor identity, you can't accumulate it.

  4. Cross-property correlation. "This visitor is Trusted User™ on 4,000 reCAPTCHA-protected sites" is what makes invisible CAPTCHA possible. Without it, every site evaluates strangers.

Each of these has an alternative when you're willing to redesign the protocol.

The proof-of-work alternative

The shape of a proof-of-work CAPTCHA is very simple:

  1. Server issues a challenge: a random seed plus a difficulty target (a 32-bit integer).
  2. Client iterates a counter, computing SHA-256(seed || counter) until the first 32 bits of the digest, read as an unsigned integer, fall below the target.
  3. Client submits the winning counter (the "nonce") to the server.
  4. Server re-hashes once to verify, accepts if valid.

There's no cookie. The challenge state lives in server-side cache (Redis, 2-minute TTL) keyed by the challenge ID. The client only needs the seed and target, both of which arrive in the issue response. When the client submits the nonce, it submits the challenge ID alongside; the server looks it up, re-hashes, and accepts.

What about rate limiting without cookies? Hash the IP. Specifically: SHA-256(IP || server-secret-salt), held in cache for 2 minutes, never written to disk. This is what GDPR Article 4(5) calls pseudonymisation — the original data can't be recovered without the salt, and the salt never leaves my server. The visitor's actual IP is never persisted.

What about risk scoring without a cross-site profile? Adaptive difficulty per IP rate. If a single hashed IP issues 1,000 challenges in 60 seconds, the difficulty for the next challenge from that IP scales up proportionally. This isn't ML-grade scoring — but it's enough to cost a botnet meaningful CPU time without correlating users across sites.
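The escalation logic can be sketched in a few lines. Again a Map stands in for Redis counters with a 60-second window, and the constants are illustrative rather than the production tuning:

```javascript
// Per-IP adaptive difficulty: the more challenges a hashed IP requests
// within the window, the lower (harder) its next target gets.
const WINDOW_MS = 60 * 1000;
const BASE_TARGET = 0x00ffffff; // ~256 expected hash iterations

const recent = new Map(); // hashedIp -> timestamps within the window

function targetFor(hashedIp) {
  const now = Date.now();
  const hits = (recent.get(hashedIp) || []).filter(t => now - t < WINDOW_MS);
  hits.push(now);
  recent.set(hashedIp, hits);
  // Scaling the target down with request rate: halving the target
  // doubles the expected CPU cost of the next solve.
  return Math.max(1, Math.floor(BASE_TARGET / hits.length));
}
```

A visitor hitting one form per minute always sees BASE_TARGET; a bot firing hundreds of requests sees its per-challenge cost grow with every request.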

The trade-off: I'm shifting the cost from "data on the visitor's device" to "CPU on the visitor's device". For tens of milliseconds of SHA-256 work — measured below — no cookie is set, no fingerprint is taken, no cross-site graph is built. For most EU SaaS forms — login, signup, contact, password reset, newsletter — that trade is straightforwardly favourable.

Implementation insights

A few things turned out more interesting than I expected.

The IP hash needs a server-side secret. A naïve SHA-256(IP) is reversible — there are only ~4 billion IPv4 addresses, and a precomputed rainbow table fits on a USB stick. Adding a server-side secret salt makes the hash non-reversible to anyone without server access. In practice this means hashing happens server-side; the client never sees the salt.

Redis matters more than I thought. PostgreSQL would work, but a 2-minute TTL on millions of ephemeral keys is a workload Redis is built for and Postgres isn't. The DB stays out of the hot path entirely; it only sees account data, project keys, and billing records.

Web Workers are non-negotiable. PoW computation on the main thread blocks the UI long enough to cause visible jank, especially on lower-end devices. Pushing the work into a Worker keeps the page responsive while the math happens in the background. The widget itself is ~16 KB minified, ~6 KB gzipped — small enough that the bundle-size argument against PoW just doesn't apply.

Adaptive difficulty needs to feel invisible to humans. Baseline target is set so a single "fresh" request from any IP completes well under 100 ms on average hardware. Difficulty escalates only when the same hashed IP issues many requests in quick succession. A normal human visitor hitting one form per minute will never see anything but baseline. A botnet trying to brute-force a login page will hit a difficulty wall within seconds.

Server-side is unglamorous. PHP/Laravel + Redis + PostgreSQL on a single Hetzner Cloud server in Nuremberg. No Kubernetes, no microservices. The whole thing runs on hardware that costs less per month than a London lunch.

Real measurements

Numbers are easy to claim and hard to verify, so the widget includes a data-captcha-debug flag that logs the timing breakdown to the browser console. Here's what came back from production captchaapi.eu, Free tier (which is the worst-case difficulty any visitor would see — paid-tier customers' visitors get 2-4× easier targets):

| Device                  | PoW solve (median) | Network RTT | Total end-to-end |
| ----------------------- | ------------------ | ----------- | ---------------- |
| Mac mini M4 (desktop)   | ~20 ms             | ~210 ms     | ~234 ms          |
| iPhone (Apple Silicon)  | ~63 ms             | ~190 ms     | ~234 ms          |

Median of 5 runs per device, lucky stochastic outliers (sub-1000 PoW iterations) excluded.

Two things stand out. First, mobile and desktop produce statistically identical total times — not because the iPhone is as fast as an M4 Mac at SHA-256 (it isn't, it's about 3× slower per iteration), but because the PoW work is dwarfed by the HTTPS round-trip to a Frankfurt-anchored EU API. The math is invisible relative to the network.

Second, the PoW solve comes in under 100 ms even on the worst-case Free tier on a flagship phone. The "PoW captchas are slow on mobile" assumption — which I held myself before measuring — turns out not to survive contact with current Apple Silicon. JavaScriptCore's JIT compiles the SHA-256 loop into something close to native ARM speed. The iPhone runs essentially the same engine architecture as the Mac.

Anyone reading this can verify on their own hardware:

<form data-captcha data-captcha-debug action="/login" method="POST">
    <!-- your fields -->
</form>

Open DevTools console, reload, see four timing lines per challenge. No DevTools Performance traces required.

Honest trade-offs

PoW isn't strictly superior. Here's where it loses:

  • Sophisticated, well-funded bots. A botnet with cheap CPU can burn through difficulty escalation. ML-based scoring (which is what reCAPTCHA invests in heavily) catches behavioural signals that PoW alone won't.
  • Mobile battery cost. Tens of milliseconds of SHA-256 work per request is a tiny battery hit on its own, but it adds up: a phone solving PoW for every form on the web would see it in battery life.
  • Threat-model coverage. PoW protects against "automation by default" — typical scraping bots, opportunistic credential stuffing, low-effort spam. It doesn't protect against ad fraud, account farming for high-value targets, or sophisticated targeted attacks. For those you need behavioural ML, device fingerprinting, or human review.
  • CAPTCHA-solver services. Networks paying humans 50¢ per 1,000 solved CAPTCHAs work just as well against PoW (the human clicks "Verify" and waits 100 ms — the underlying mechanism doesn't matter to them).

For most EU SaaS use cases — login forms, signup forms, contact forms, comments, newsletter signups, simple bot deterrence — these limitations don't matter. For exchanges, betting platforms, gaming auth, and other high-value targets, layer PoW with additional defences — or use a different category of tool entirely. Honest answer: if you're at that scale, you probably already know.

Why I open-source the widget

The widget code that runs on every visitor's device is published at /captcha.js — minified for size, but not obfuscated. The Proof-of-Work worker code is preserved verbatim with comments inside the bundle. Beautify the file and you can audit byte-for-byte what runs on visitors' devices.

I do this because if I'm running code on someone else's device, they should be able to audit it. The customer who integrates my widget can verify what I'm doing on their visitors' machines. The visitor can verify what I'm doing on their machine. The whole model rests on "I'm not collecting data" — the easiest way to validate that claim is to make the code readable.

It also keeps me honest. Five years from now, when I've forgotten what was important about the design, I'll be able to read my own code and remember. Future-me is one of the people the open bundle is for.

I'd rather be reverse-engineered than trusted blindly. Both are fine. Trusted-after-audit is even better.

A note on what this is and isn't

I should be straight about what I'm shipping, because the legal docs and the trust page already are.

captchaapi.eu is a one-person project. There's no enterprise SLA, no 24/7 on-call rotation, no operator-level ISO 27001 certificate (the underlying Hetzner infrastructure has those; my application layer doesn't, and I refuse to claim certifications I haven't earned). If your procurement team needs those things, this isn't the tool for you yet — and I'd rather tell you upfront than at contract-renewal time.

I'm also not trying to disrupt the CAPTCHA market. I'm not optimising for a revenue curve. I built this because a German customer of mine needed a CAPTCHA on their forms but couldn't reasonably justify reCAPTCHA or Cloudflare Turnstile under their compliance posture, and I went looking for a low-cost EU-only alternative aimed at small developers and small businesses — and the gap was wider than I'd expected. FriendlyCaptcha and a few others exist, but their pricing optimises for enterprise tiers, not for a freelancer or a 5-person startup running a side project at €9 a month. The lower price tiers were missing — and the engineering wasn't actually that hard.

I know that "honesty as a business strategy" is usually a polite way to say "won't scale". Maybe. I care more about shipping a thing I can be proud of than about the curve. If it works out, great; if it doesn't, the code is published, the design write-up is here, and someone else can build the next iteration without re-deriving the protocol.

Closing

GDPR made tracking expensive. Schrems II made US-anchored providers risky. PoW computation on modern devices is cheap enough that the alternative is now just a different kind of "fast enough". If you've been keeping reCAPTCHA on a form because switching seemed complicated — the technical objection has mostly evaporated. PoW CAPTCHAs work, they don't need cookie banners, the widget bundle ships in ~6 KB gzipped.

If captchaapi.eu fits your shape, the Free tier covers 5,000 challenges a month. If you'd rather build your own, the engineering recipe is in this post — go for it. Both are reasonable choices in 2026.

Corrections, technical objections, and missed considerations are welcome at security@captchaapi.eu — I read every email myself.

Try it on your own site

Free up to 5,000 challenges/month. No credit card.
