Amazon Web Services (AWS) has pushed back against reports attributing recent cloud outages to its autonomous AI coding tool, Kiro. While a report from the Financial Times alleged that the AI agent autonomously executed commands that disrupted services, Amazon maintains that the incidents were caused by human error and misconfigured access controls.
The controversy centers on a service disruption that occurred in December 2025. According to reports, AWS engineers allowed the Kiro “agentic” coding tool to implement changes that resulted in a service interruption. Sources familiar with the incident told the Financial Times that the AI tool decided to “delete and re-create the environment,” triggering the outage.
Conflicting Accounts of the Incident
The company's account of the event's severity and cause differs from external reports. The Financial Times reported that the December incident resulted in a 13-hour service interruption. A senior AWS staff member reportedly stated that the company had experienced “at least two production outages in recent months” linked to AI tools. The staff member noted that engineers allowed the AI to address issues autonomously, adding that while the outages were minor, they were “completely predictable.”
Amazon, however, described the December disruption as an “extremely limited event” that affected only a single service—AWS Cost Explorer—in one of its two regions in Mainland China. An AWS spokesperson told CRN that the outage was “the result of user error—specifically misconfigured access controls—not AI.”
The company emphasized that the involvement of AI tools was coincidental. “The same issue could occur with any developer tool or manual action,” Amazon stated, asserting that the engineer involved had elevated permissions.
The Role of Kiro and Human Oversight
Kiro, launched in July 2025, is an agentic coding service designed to work alongside developers. It converts user prompts into detailed specifications, working code, documentation, and tests, with the aim of automating routine development tasks and helping customers solve problems.
The core of Amazon’s defense is that Kiro is not designed to act without human approval. The company stated that Kiro “requests authorization before taking action” by default. According to Amazon, the tool puts developers in control, requiring users to configure which actions the AI can take.
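The approval model Amazon describes — destructive actions blocked unless a human signs off, with operators configuring which actions an agent may take on its own — can be sketched roughly as follows. All names and the policy shape here are illustrative assumptions for the general pattern, not Kiro's actual interface:

```python
# Hypothetical sketch of an approval-gated agent executor: the agent
# proposes an action, and anything destructive or outside the operator's
# pre-approved list is blocked until a human explicitly authorizes it.
# These identifiers are invented for illustration, not taken from Kiro.

from dataclasses import dataclass, field

# Verbs treated as destructive regardless of configuration.
DESTRUCTIVE_VERBS = {"delete", "recreate", "terminate"}


@dataclass
class AgentPolicy:
    # Actions the operator has pre-approved for autonomous execution.
    allowed_actions: set = field(default_factory=set)

    def requires_approval(self, action: str) -> bool:
        verb = action.split(":", 1)[0]
        return verb in DESTRUCTIVE_VERBS or action not in self.allowed_actions


def execute(action: str, policy: AgentPolicy, approve) -> str:
    """Run `action` only if pre-approved or a human approver signs off."""
    if policy.requires_approval(action) and not approve(action):
        return f"blocked: {action}"
    return f"executed: {action}"


policy = AgentPolicy(allowed_actions={"read:logs", "deploy:staging"})
deny_all = lambda action: False  # no human available to approve

print(execute("read:logs", policy, deny_all))        # pre-approved, runs
print(execute("delete:environment", policy, deny_all))  # destructive, blocked
```

In this framing, the incident Amazon describes corresponds to granting the agent an overly broad `allowed_actions` set (or an approver that rubber-stamps everything), which is an access-control configuration problem rather than a flaw in the gating mechanism itself.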
However, reports suggest that in the instances cited, engineers may have deviated from standard procedures. Employees indicated that the AI tools were treated as extensions of human operators and granted operator-level permissions. In both reported incidents, engineers allegedly did not seek a second opinion before finalizing the AI’s changes.
New Safeguards and Broader Context
Following the December incident, AWS introduced stricter protocols, confirming it has implemented “numerous additional safeguards,” including mandatory peer review for production access. Amazon noted that the event did not impact its core compute, storage, database, or AI technologies.
The scrutiny comes as Amazon aggressively pushes for internal AI adoption. Reports indicate the company aims for 80 percent of its developers to engage with AI for coding tasks at least once a week. This pressure has led to concerns among some employees about the potential for errors and the “hallucinations” common in generative AI models.
Despite the operational hiccups, AWS continues to report strong financial growth. The cloud unit generated $35.6 billion in total sales during the fourth quarter of 2025, marking a 24 percent increase year over year, with an annual run rate now reaching $142 billion.
