A Blog Under Siege: archive.today’s CAPTCHA Triggering DDoS-Like Traffic

Published: February 2026 · Tags: archive.today, DDoS, web-archives

TL;DR: Multiple reports say archive.today’s CAPTCHA page ran client-side JavaScript that repeatedly requested third-party sites’ search endpoints every few hundred milliseconds while the CAPTCHA page remained open, effectively creating DDoS-like traffic. Read the timeline, community discussion, and quick mitigation steps below.

What was observed

Investigators found a short `setInterval` loop inside the archive.today CAPTCHA page that issues `fetch()` calls to a blog’s search endpoint with a randomized query string roughly every 300 ms, preventing caching and continuously consuming server resources for as long as the CAPTCHA window stays open. The code and network behavior can be inspected in the original report.

// Reported loop (as described in the original write-up): every 300 ms,
// fetch the target's search endpoint with a random query string so each
// request bypasses caches and forces a fresh search.
setInterval(function () {
  fetch("https://target-blog.example/?s=" +
        Math.random().toString(36).substring(2, 3 + Math.random() * 8), {
    referrerPolicy: "no-referrer", // hide the CAPTCHA page as the referrer
    mode: "no-cors"                // fire-and-forget; the response is opaque
  });
}, 300);

Timeline & community reaction

The issue was reported around January 2026, and the incident drew discussion on Hacker News and Reddit, where users shared observations, screenshots, and context while debating intent and mitigations.

Why this matters

Client-side code that turns each visitor into a rapid-fire request generator can unintentionally weaponize ordinary traffic against small sites and their search endpoints. At one request every 300 ms, a single open tab generates roughly 3.3 requests per second, so even a modest number of concurrent CAPTCHA pages adds up to a sustained flood. That raises operational, ethical, and legal questions about archive tooling and how CAPTCHA/anti-bot pages should behave when they affect third-party resources.
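From the defender’s side, this traffic pattern is detectable in access logs: many search requests from the same client, each with a different query string, arriving faster than any human would type. The following is a minimal sketch of that idea; the log record shape (`ip`, `tsMs`, `path`) and the threshold are illustrative assumptions, not details from the original report.

```javascript
// Sketch: flag clients hammering a search endpoint with randomized queries.
// Assumed log shape: { ip, tsMs, path }; threshold is illustrative.
function flagAbusiveClients(logLines, maxPerSecond = 2) {
  const perClient = new Map(); // ip -> array of request timestamps (ms)
  for (const { ip, tsMs, path } of logLines) {
    if (!path.startsWith("/?s=")) continue; // only count search hits
    if (!perClient.has(ip)) perClient.set(ip, []);
    perClient.get(ip).push(tsMs);
  }
  const flagged = [];
  for (const [ip, times] of perClient) {
    times.sort((a, b) => a - b);
    // Observation window in seconds (at least 1 s to avoid divide-by-zero).
    const spanSec = Math.max((times[times.length - 1] - times[0]) / 1000, 1);
    if (times.length / spanSec > maxPerSecond) flagged.push(ip);
  }
  return flagged;
}
```

A client firing every 300 ms averages ~3.3 requests per second and trips the default threshold, while occasional legitimate searches do not.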

Quick mitigation checklist

  • Rate-limit high-cost endpoints (return 429 / 503 for abusive patterns).
  • Configure CDN or WAF to identify and block repeated requests from the same referrers or user agents.
  • Normalize/ignore obviously random query parameters for internal search endpoints.
  • Collect logs (request headers, timestamps) for abuse reports and forensics.
  • Consider adding offending domains to local DNS/hosts blocklists, or recommend that users enable content blockers (some existing blocklists already prevented these requests for users with blockers enabled).
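The first checklist item, rate-limiting a high-cost endpoint, can be sketched as a simple fixed-window counter per client. This is a minimal in-memory illustration, not production middleware; the class name and the limit/window values are assumptions for the example.

```javascript
// Minimal fixed-window rate limiter (in-memory sketch).
// Requests beyond `limit` within one window should receive a 429.
class FixedWindowLimiter {
  constructor(limit, windowMs) {
    this.limit = limit;        // max requests per window per client
    this.windowMs = windowMs;  // window length in milliseconds
    this.counters = new Map(); // clientKey -> { windowStart, count }
  }

  // Returns true if the request is allowed, false if it should be rejected.
  allow(clientKey, nowMs = Date.now()) {
    const entry = this.counters.get(clientKey);
    if (!entry || nowMs - entry.windowStart >= this.windowMs) {
      // New client, or the previous window has expired: start a fresh window.
      this.counters.set(clientKey, { windowStart: nowMs, count: 1 });
      return true;
    }
    entry.count += 1;
    return entry.count <= this.limit;
  }
}

// Example: allow at most 10 search requests per 10-second window per client.
const searchLimiter = new FixedWindowLimiter(10, 10_000);
```

In practice you would key on IP plus path, return `429 Too Many Requests` (or `503`) when `allow()` is false, and let a CDN or WAF handle this at the edge instead of in application code.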

Attribution & further reading

This post summarizes the original reporting and community threads, where full technical details, screenshots, email timelines, and discussion are available.
