6 min read · By Vladislav Rajtmajer

Laravel RateLimiter and a race condition

Why tooManyAttempts() + increment() in Laravel breaks under 100 concurrent requests, and the one-line fix.

One of the manual rate-limiting patterns shown in the Laravel docs (under Manually Incrementing Attempts) looks like this:

if (RateLimiter::tooManyAttempts('send-message:'.$user->id, $maxAttempts = 5)) {
    return 'Too many attempts!';
}

RateLimiter::increment('send-message:'.$user->id);

// Send message...

It works fine. Right up until someone hits an endpoint capped at 5 requests per minute with 100 concurrent requests. Then all 100 get through.

I ran into this race condition while building rate limiting for captchaapi.eu, a PoW CAPTCHA API. Credit goes to @_newtonjob, who nailed it in 280 characters in a post on X:

Your Ratelimiting logic works until someone fires 100 concurrent requests on an endpoint that should be limited to 5 requests per minute. The fix: Ensure you/your agents also check the incremented count returned by RateLimiter::hit() and that it doesn't exceed the max attempts.

(Note: hit() and increment() are aliases — hit() is literally a one-line wrapper that calls increment(). The Laravel docs example used hit() in 8.x and 9.x, then switched to increment() from 10.x onward, but both still work and have identical behavior.)

Here's why it's a problem, the one-line fix, and what I took away from it.

Why is it a problem?

Walk through what happens on a single request:

  1. tooManyAttempts() reads the current count from cache
  2. Compares it against $maxAttempts
  3. Returns true or false
  4. If false, the code calls increment(), which bumps the count

That's two independent cache calls. Between step 1 and step 4 there's a window, usually a few microseconds, where another request can read the same stale value, pass the check, and increment too.
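For contrast, here's roughly what tooManyAttempts() does under the hood (a simplified sketch of Illuminate\Cache\RateLimiter::tooManyAttempts(); the exact code varies a little between versions). It's a plain read and a comparison, with no locking:

public function tooManyAttempts($key, $maxAttempts)
{
    // attempts() is just a cache read; nothing stops another request
    // from reading the same value at the same time.
    if ($this->attempts($key) >= $maxAttempts) {
        // The :timer key still exists, so the decay window hasn't expired.
        if ($this->cache->has($this->cleanRateLimiterKey($key).':timer')) {
            return true;
        }

        // Window expired: clear the stale counter and let the request through.
        $this->resetAttempts($key);
    }

    return false;
}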

At 100 concurrent requests, that window gets hit a hundred times at once:

  • Request 1 reads count = 0, passes the check (0 < 5), calls increment() → count = 1
  • Request 2 reads count = 0 (at the same instant), passes the check, calls increment() → count = 2
  • ...and so on for all 100

The counter ends up at 100, but all 100 requests already ran, and your backend just processed 100x the work you wanted. If the endpoint does something expensive (a PoW challenge, AI inference, an external API call), you just paid for 100 operations instead of 5.
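If you want to see it yourself, one quick way to fire concurrent requests from a test or a tinker session is Laravel's HTTP client pool. The URL and payload below are placeholders for whatever endpoint you're protecting:

use Illuminate\Support\Facades\Http;

// Fire 100 requests at the rate-limited endpoint in parallel,
// so they all land inside the same race window.
$responses = Http::pool(fn ($pool) => array_map(
    fn ($i) => $pool->post('https://example.test/api/send-message', ['body' => 'hi']),
    range(1, 100)
));

// Count how many were rejected. With the racy pattern, expect close
// to zero rejections instead of 95.
$rejected = count(array_filter(
    $responses,
    fn ($r) => str_contains($r->body(), 'Too many')
));

echo 'Got through: '.(100 - $rejected).' of 100'.PHP_EOL;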

Why the increment is atomic but the check isn't

If you crack open Laravel's source (Illuminate\Cache\RateLimiter::increment() in 12.x):

public function increment($key, $decaySeconds = 60, $amount = 1)
{
    $key = $this->cleanRateLimiterKey($key);

    $this->cache->add(
        $key.':timer', $this->availableAt($decaySeconds), $decaySeconds
    );

    $added = $this->withoutSerializationOrCompression(
        fn () => $this->cache->add($key, 0, $decaySeconds)
    );

    $hits = (int) $this->cache->increment($key, $amount);

    // ...

    return $hits;
}

And hit() is just an alias for increment():

public function hit($key, $decaySeconds = 60)
{
    return $this->increment($key, $decaySeconds);
}

The important part is $this->cache->increment($key, $amount). That's an atomic operation in the cache backend.

I use Redis in captchaapi.eu, where it maps to INCR (or INCRBY), one of the oldest, most battle-tested commands in Redis. It's atomic at the single-key write level: no two concurrent requests will read the same value, and each one gets a unique incremented result back. Memcached has an equivalent incr with the same guarantees.
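You can see the guarantee directly at the Redis level. This demo is sequential, so it's trivially race-free, but the same property holds when the two INCRs arrive from different connections in the same instant:

use Illuminate\Support\Facades\Redis;

Redis::del('demo:counter');

// INCR applies the bump atomically and returns the value *after* it,
// so no two callers can ever get the same number back.
$first = Redis::incr('demo:counter');  // 1
$second = Redis::incr('demo:counter'); // 2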

Here's the key thing: increment() returns the count after the increment. The return value is atomic, deterministic, and unique for every concurrent caller.

$hits = RateLimiter::increment($key);
// With 100 concurrent requests you get return values 1, 2, 3, ..., 100
// (in random order, but each value exactly once)

tooManyAttempts(), on the other hand, is a separate read. It can return a stale value, and the gap between that read and the next write is your race window.

The fix: one increment, check the return value

Drop the two-step pattern (tooManyAttempts → increment) and do it in one call:

$maxAttempts = 5;

$attempts = RateLimiter::increment('send-message:'.$user->id);

if ($attempts > $maxAttempts) {
    return 'Too many attempts!';
}

// Send message...

Now with 100 concurrent requests:

  • Each request gets a unique count after the increment
  • The first 5 get values 1–5 and pass
  • The remaining 95 get values 6–100 and get rejected

No window, no race. Redis's atomic increment is the single source of truth, and increment() gives you that truth directly.
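If you rate-limit more than one operation this way, it can be worth wrapping the pattern once so the two-step version doesn't creep back in. A small helper sketch (the name and signature are mine, not Laravel's; the decay window defaults to the same 60 seconds as increment()):

use Illuminate\Support\Facades\RateLimiter;

// Returns true if this attempt is allowed, false if the limit is exceeded.
// Always increments first, then checks the returned count.
function allowAttempt(string $key, int $maxAttempts, int $decaySeconds = 60): bool
{
    return RateLimiter::increment($key, $decaySeconds) <= $maxAttempts;
}

// Usage:
if (! allowAttempt('send-message:'.$user->id, 5)) {
    return 'Too many attempts!';
}

// Send message...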

One subtle thing worth pointing out: in the original pattern the increment happens after the check, so every request that slips through the race still bumps the counter, and under the burst the count overshoots anyway ("the counter shows 6 even though we didn't want to allow request #6"). In the new pattern you increment every time and check the return value, so the counter might show 100. That's fine, because anything over the limit got rejected. The counter readout looks the same, but the security model is stricter.

hit() vs. increment(): which one?

They do the same thing. hit() is literally function hit($key, $decay) { return $this->increment($key, $decay); }. The current Laravel docs (10.x+) show increment() in the manual-incrementing example; older versions (8.x, 9.x) used hit(). Both work, pick whichever reads better:

  • hit() for "register one event," the natural fit for rate limiting
  • increment() when you want to emphasize the atomic-counter aspect, or bump by more than 1 via the amount: parameter
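For example, counting a bulk send of five messages as five attempts in one atomic bump (decay window left at its 60-second default, passed positionally here):

RateLimiter::increment('send-message:'.$user->id, 60, 5);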

In captchaapi.eu I went with increment() because it matches the current docs and makes it obvious I care about the return value, not the side effect.

Where this doesn't help

One honest limitation: this protects against the race within a single Redis instance (or a Redis cluster, where each key lives on one shard). If you ran independent Redis instances per region without coordination and an attacker fired requests at every region, INCR's atomicity would only hold per region: each region would let the full limit through, so you'd get 5 requests times N regions instead of 5 total.

For captchaapi.eu this is plenty, because the whole app runs against a single Redis (Hetzner Nuremberg). For multi-region distributed rate limiting you'd need something like a sliding window log or a token bucket with a centralized source of truth. Different topic.

The other limit: this is a fix for counter-level races. If an attacker rotates IPs (botnet, residential proxy), no per-IP rate limit will stop them. That's a fundamentally different problem, and in captchaapi.eu I handle it with the PoW challenge itself.

One more thing worth being explicit about: for HTTP routes the recommended path in Laravel is the throttle middleware, not manual rate-limiting code. This post is specifically about the manual pattern, which is what you reach for when rate-limiting non-HTTP operations, custom logic inside a controller, or anything where the throttle middleware isn't a fit.
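For reference, that middleware path looks roughly like this (a minimal sketch with a named limiter; the route and controller names are illustrative):

use Illuminate\Cache\RateLimiting\Limit;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\RateLimiter;
use Illuminate\Support\Facades\Route;

// In a service provider's boot(): define a named limiter.
RateLimiter::for('send-message', function (Request $request) {
    return Limit::perMinute(5)->by($request->user()?->id ?: $request->ip());
});

// In routes/api.php: attach it with the throttle middleware.
Route::post('/send-message', SendMessageController::class)
    ->middleware('throttle:send-message');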

What I took away

A small fix, but a few things clicked for me that hadn't before.

1. "It works" is not the same as "it's secure." I had the documented manual pattern running in production for a while and never saw a bug, because no attacker had shown up at captchaapi.eu yet. These races are quiet. Application logs say nothing, monitoring shows green, and you only find out the limit never actually held when someone with wrk -c 100 decides to take a look. Now any time I review code that rate-limits an expensive operation, I start with: "What happens if 100 requests arrive in the same microsecond?"

2. Return values from atomic operations are code you don't have to write. Redis hands you a unique sequence number for free with every atomic increment. You can use it for rejection, sure, but also for other business logic you'd otherwise solve with more code and more locks. Calling tooManyAttempts() and ignoring increment()'s return value means throwing away information Redis already gave you.

3. The documented manual pattern isn't always the safest one. The Laravel docs have shown tooManyAttempts() + increment() (or hit() in older versions) under "Manually Incrementing Attempts" across every version from 8.x through 12.x. It isn't wrong. For most use cases (per-user limits, where users don't realistically make 100 concurrent requests) it's fine. But if you're building something where parallel abuse is the threat model, the docs aren't showing you the safest option. I read documentation a bit differently now: "Who's the assumed user here, and does their threat model match mine?"

4. X is where I pick up security gotchas. This particular fix reached me through a 280-character post from @_newtonjob, not through the docs or a security audit. Following Laravel folks who share concrete pattern-level bugs from real apps has done more for me than most security blog posts. Keep people in your feed who write "I hit X, fixed it like this." That's exactly the kind of pattern matching you need for your own code. Thanks, @_newtonjob.

TL;DR

// ❌ Race-prone — 100 concurrent requests will all get through
if (RateLimiter::tooManyAttempts($key, 5)) {
    return 'Too many!';
}
RateLimiter::increment($key);

// ✅ Race-safe — atomic increment + check the return value
if (RateLimiter::increment($key) > 5) {
    return 'Too many!';
}

One line shorter, one problem gone. If you've got Laravel rate-limiting code using the tooManyAttempts() + increment() (or hit()) two-step pattern, go through it and rewrite it to the single-call variant, especially if you're protecting an operation where every call costs you something.
