A Cloudflare outage on November 18, 2025, briefly knocked large parts of the internet offline, affecting major online services and triggering widespread error messages. The company first suspected a massive cyberattack, but later said the disruption was caused by an internal problem tied to a file used by its Bot Management system.
The outage hit a wide range of sites and apps that rely on Cloudflare’s network for performance and protection. Reports tracked by outage-monitoring services showed disruptions across multiple well-known platforms, and Cloudflare’s own status updates said the company applied a fix and continued monitoring for remaining issues.
What users saw during the outage
During the incident, many websites that depend on Cloudflare became slow or unreachable. The outage temporarily affected large platforms including X and ChatGPT, and outage reports also flagged issues for services such as Spotify and Zoom.
Cloudflare’s status updates indicated that a fix had been applied and that the company believed the issue was resolved, while it continued monitoring for errors as services returned to normal. Cloudflare also warned that some customers could still have trouble logging into or using the Cloudflare dashboard even after the main fix was in place.
Early signs pointed to unusual traffic
In the early stages, Cloudflare linked the widespread outages to an “internal service degradation,” while its team worked through the disruption. A Cloudflare representative also told reporters the company detected “a surge in unusual traffic” aimed at one Cloudflare service, which led to errors for some traffic moving across its network.
At that time, Cloudflare said it did not yet know what had caused the unusual traffic surge and that its priority was restoring normal, error-free processing. The status page was updated repeatedly as the company worked on the issue over roughly two hours.
Cloudflare later blamed an internal Bot Management file
After the outage, Cloudflare’s leadership described how initial clues made the problem look like a very large attack. In an internal chat, Cloudflare co-founder and CEO Matthew Prince said, “I worry this is the big botnet flexing,” as staff discussed whether a known botnet called Aisuru could be involved.
Cloudflare’s later account pointed to an internal failure involving a “feature file” used by its Bot Management system. According to the company’s explanation, a change to permissions in one of its database systems caused the database to emit duplicate entries into the feature file, doubling the file’s size. Cloudflare said the larger-than-expected file then spread across the machines in its global network.
The software on those machines reads the feature file to keep Cloudflare’s Bot Management system current against evolving threats. But the routing software enforced a limit on the file’s size, and the newly enlarged file exceeded that limit, causing the software to fail and contributing to the widespread disruption.
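The failure chain described above can be pictured with a minimal sketch: a query that suddenly returns duplicate rows doubles the generated file, and a consumer with a hard entry limit fails when it reads the result. All names and the limit of 200 entries here are illustrative assumptions, not Cloudflare’s actual code or configuration.

```python
# Minimal sketch of the reported failure mode. The names and the
# 200-entry cap are hypothetical, not Cloudflare's real values.

FEATURE_LIMIT = 200  # hypothetical hard cap baked into the consuming software


def build_feature_file(rows):
    """Serialize database rows into a feature file (one feature per line)."""
    return "\n".join(rows)


def load_features(feature_file):
    """Parse the feature file, refusing anything over the preset limit."""
    features = feature_file.splitlines()
    if len(features) > FEATURE_LIMIT:
        # The consumer treats an oversized file as fatal, mirroring how
        # the enlarged file caused failures across the network.
        raise RuntimeError(
            f"feature file has {len(features)} entries, limit is {FEATURE_LIMIT}"
        )
    return features


# Normal case: the query returns one row per feature.
rows = [f"feature_{i}" for i in range(150)]
assert len(load_features(build_feature_file(rows))) == 150

# After the permissions change: the same query returns duplicates,
# doubling the file and pushing it past the loader's limit.
duplicated = rows + rows  # 300 entries
try:
    load_features(build_feature_file(duplicated))
except RuntimeError as exc:
    print(exc)  # feature file has 300 entries, limit is 200
```

The key point the sketch captures is that the generating side (the database query) and the consuming side (the size-limited loader) each behaved as designed; the outage emerged from the mismatch between them.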
Recovery and lingering issues
Prince said that after Cloudflare moved past its initial assumption of a hyper-scale DDoS attack, the company identified the root cause and stopped the spread of the oversized feature file, replacing it with a previous version. He said that step helped traffic “largely” return to normal.
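The recovery step Prince described, halting the bad file’s spread and restoring a previous version, resembles a last-known-good rollback pattern. The sketch below is purely illustrative of that pattern, under the same hypothetical 200-entry limit as above; it is not Cloudflare’s tooling.

```python
# Illustrative last-known-good rollback; names and limits are assumptions.

FEATURE_LIMIT = 200  # hypothetical cap enforced by the consuming software

known_good = [f"feature_{i}" for i in range(150)]  # last validated file


def validate(features):
    """Reject files that would crash the consumers downstream."""
    return len(features) <= FEATURE_LIMIT


def deploy(candidate, last_good):
    """Propagate the candidate only if it validates; otherwise roll back."""
    if validate(candidate):
        return candidate, "deployed"
    return last_good, "rolled back"


oversized = known_good * 2  # 300 entries, like the doubled feature file
active, status = deploy(oversized, known_good)
print(status, len(active))  # rolled back 150
```

Validating a generated file against its consumers’ limits before propagation, and keeping a known-good copy to fall back on, is the general safeguard this pattern provides.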
Even after the file rollback, Cloudflare still needed additional time to manage the surge in demand as traffic came back online. Prince said it took another two and a half hours to handle the increased load on parts of the network during the recovery process.
Why the outage mattered
Cloudflare plays a major role in how traffic moves across the web, and earlier company statements have said a significant share of the internet depends on its services for traffic management and protection. That central position is one reason the outage quickly spread beyond a single website or app.
The disruption also appeared to rattle investors. During the incident, Cloudflare’s stock dropped by about 3%, according to reporting that tracked the market reaction as the outage unfolded.
