
On November 18, 2025, a Cloudflare outage slowed down or cut off access to many services, in France and around the world, from social networks to generative AI. The company identified a traffic spike at 12:20 PM that triggered a chain of errors, followed by a progressive recovery during the afternoon. The investigation is ongoing, and the episode raises questions about the Web's dependency on a handful of infrastructure providers.
What happened on November 18, 2025
A little before 12:30 PM (Paris time), Cloudflare reported an incident on its global network. At 12:48 PM, the company stated it was investigating "an issue affecting multiple clients." Around 1:21 PM, a partial recovery was observed, but errors persisted. At 2:13 PM, additional functionality was restored. In the afternoon, Cloudflare deployed a fix, then announced a gradual stabilization, and a return to normal was noted by the end of the day. The company continues to monitor its infrastructure and has promised a post-incident report.
Meanwhile, the outage paralyzed entire sections of the Internet: slow pages, errors, and unavailable sites multiplied. Millions of internet users described converging symptoms: on some AI services, a message appeared asking them to "Please unblock challenges.cloudflare.com to proceed", and mobile access could sometimes bypass the blocks encountered on computers.
Condensed Timeline
- 12:20 PM: traffic spike observed by Cloudflare on one of its services.
- 12:48 PM: first status message; investigation ongoing.
- 1:21 PM: progressive recovery, though error rates remained high.
- 2:13 PM: additional functionality restored; fixes ongoing.
- Late afternoon: generalized fix, enhanced monitoring, and gradual normalization.
Key point: at the time of the first public communications, the exact origin of the traffic spike remained uncertain. The company acknowledged an incident without attributing it to a single cause.
Why a Cloudflare outage paralyzes so many services
Cloudflare is an infrastructure player providing CDN, DDoS protection, web application firewall, and DNS services to millions of sites. In practice, a large portion of web traffic passes through its points of presence before reaching publishers' servers. On average, the network routes about 81 million HTTP requests per second, a volume that explains, by simple leverage, the magnitude of the impact when a link malfunctions.
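To give a sense of scale, here is a back-of-the-envelope calculation based on the figure above; the incident duration and error rate are purely illustrative assumptions, not Cloudflare data.

```python
# Order-of-magnitude illustration: how many requests could fail during a
# degradation at ~81 million HTTP requests per second.
# The duration and error rate below are assumed values for illustration only.
requests_per_second = 81_000_000   # approximate volume cited above
incident_minutes = 30              # hypothetical window of degraded service
error_rate = 0.05                  # hypothetical 5% of requests failing

failed = requests_per_second * incident_minutes * 60 * error_rate
print(f"Roughly {failed:,.0f} failed requests over {incident_minutes} minutes")
# -> about 7.3 billion under these assumptions
```

Even conservative assumptions yield billions of affected requests, which is exactly the leverage effect described above.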
Add a structural factor: nearly 20% of websites use it as a reverse proxy to speed up page delivery and filter unwanted traffic. The concentration of critical functions among a few operators, Cloudflare but also other CDNs and cloud providers, creates a systemic dependency: when one coughs, entire ecosystems catch a cold.
The affected services: from AI to social networks
The outage disrupted major platforms, starting with X (formerly Twitter), which suffered significant interruptions. Generative AI services were hit as well: ChatGPT (OpenAI) and Claude experienced notable disruptions. News sites, including Ecostylia Magazine, entertainment platforms such as Spotify, creation tools such as Canva, and online games such as League of Legends were also affected. Even incident-measurement services like Downdetector displayed anomalies, a domino effect reaching the observation ecosystem itself.

The exact extent varied by region and usage: publishers with recovery plans, multi-CDN redundancy, or alternative routes kept the degradation limited. Others, tightly integrated into the Cloudflare chain, experienced outright interruptions.
What we know about the cause — and what we still don’t know
At the heart of the incident, Cloudflare reports an unusual increase in traffic affecting one of its services, which led to cascading errors for some clients. In the first hours, caution prevailed: no hasty attribution to a cyberattack or a third-party provider. The company acknowledged the incident, identified the problem, then deployed an emergency fix. A detailed post-mortem should outline the chain of events, the lessons learned, and any preventive measures.
This latency between symptom, diagnosis, and public explanation is normal at this scale: teams must stabilize before explaining. In a network handling tens of millions of requests per second, an apparently minor change can produce non-linear effects.
A revelation of digital vulnerabilities
This outage highlights a blind spot of contemporary digital infrastructure: essential services (information, payments, messaging, games, professional tools) rely on private components whose mechanisms are barely visible to the general public. The sovereignty and resilience of the Internet do not depend solely on access networks; they also play out in the intermediation layers that optimize and secure traffic.
Three issues emerge:
- Redundancy: encourage multi-CDN architectures, alternative DNS routes, and tested continuity plans.
- Transparency: publish detailed and comparable post-mortems to foster shared learning.
- Public interest: treat these infrastructure platforms as essential connectivity assets, held to requirements on availability, eco-design, and sobriety; limiting massive refreshes during an incident also matters, since they worsen the load.
Understanding: CDN, DDoS, "traffic spike"
- CDN (content delivery network): a global network of distributed servers that bring content closer to users to speed up display and absorb spikes (a quick way to spot a site served through such a network is sketched after this list).
- DDoS protection: mechanisms that filter malicious requests sent in large volumes to overwhelm a service. An imperfect filter or an overly sensitive configuration can degrade legitimate traffic.
- Traffic spike: a sudden increase in requests, expected (launch, breaking news) or not, that can exceed capacity thresholds and cause queues, latency, and errors.
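As a concrete illustration of the reverse-proxy model, here is a minimal sketch that checks whether a site appears to be served through Cloudflare by inspecting response headers it commonly sets ("Server: cloudflare" and "CF-RAY"); some deployments hide or rewrite these headers, so the result is indicative only.

```python
# Minimal sketch: does this URL appear to be served through Cloudflare?
# Cloudflare commonly sets "Server: cloudflare" and a "CF-RAY" response header,
# but some deployments hide or rewrite these, so this check is indicative only.
import urllib.request

def served_by_cloudflare(url: str, timeout: float = 10.0) -> bool:
    request = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(request, timeout=timeout) as response:
        headers = {name.lower(): value for name, value in response.headers.items()}
    return headers.get("server", "").lower() == "cloudflare" or "cf-ray" in headers

if __name__ == "__main__":
    print(served_by_cloudflare("https://www.cloudflare.com"))
```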
Good practices for users
- Check official statuses: the Cloudflare Status page and service status pages provide chronological updates.
- Switch devices or networks: if access from a computer is blocked, trying a mobile device (or vice versa) can unblock it.
- Limit refreshes: avoid compulsive F5s and repeated downloads that increase the load.
- Distinguish a local outage from a global one: test other sites, another DNS resolver, or a guest network to isolate the problem (a minimal probe is sketched after this list).
- Postpone sensitive actions: during instability, defer payments, critical updates, or configuration changes.
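A hedged sketch of that local-versus-global check: probe a few independent sites and compare the results. The probe list below is an arbitrary example, not a recommendation.

```python
# Quick local-vs-global check: if most independent sites respond, the problem
# is probably on the remote side rather than on your own connection.
# The probe URLs are arbitrary examples chosen for illustration.
import urllib.request

PROBES = [
    "https://www.wikipedia.org",
    "https://www.google.com",
    "https://www.cloudflare.com",
]

def reachable(url: str, timeout: float = 5.0) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except Exception:
        return False

for url in PROBES:
    status = "OK" if reachable(url) else "FAILED"
    print(f"{url}: {status}")
```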

Good practices for publishers
- Tested continuity plan (BCP/DRP) with multi-hosting and multi-CDN (a simple failover probe is sketched after this list).
- Instrumentation: logs, telemetry, and alerts independent of the main provider.
- Cautious deployments: change reviews, canaries, quick rollbacks.
- Communication: clear and timestamped updates to reduce user uncertainty.
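To make the multi-CDN idea concrete, here is a hedged sketch of a failover probe: it checks a primary and a backup endpoint and reports which one to route traffic to. The endpoint URLs are hypothetical placeholders, and the actual routing step (DNS update or load-balancer reconfiguration) is left out.

```python
# Hedged sketch of a multi-CDN failover probe. The health-check URLs are
# hypothetical placeholders; the routing change itself (DNS update,
# load-balancer API call) would follow from the decision made here.
import urllib.request

PRIMARY = "https://primary-cdn.example.com/health"    # hypothetical endpoint
SECONDARY = "https://backup-cdn.example.com/health"   # hypothetical endpoint

def healthy(url: str, timeout: float = 3.0) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return 200 <= response.status < 300
    except Exception:
        return False

def choose_origin() -> str:
    if healthy(PRIMARY):
        return PRIMARY
    if healthy(SECONDARY):
        return SECONDARY
    return "no CDN healthy: trigger the continuity plan"

if __name__ == "__main__":
    print(f"Route traffic to: {choose_origin()}")
```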
What this reveals about the public interest
Beyond technology, the incident affects access to information, service continuity, and trust. Public authorities and businesses have an interest in mapping their dependencies and funding alternatives or capacity reserves. Resilience designed collectively (open standards, interoperability, incident-publication requirements) reduces the collective risk surface.
What we are still waiting for
- A post-mortem detailing the root cause and the safeguards added.
- Sector-wide feedback on dependency chains and ways to limit their side effects.
- Possible recommendations for certifying or auditing infrastructure operators with high systemic impact.