The U.S. National Institute of Standards and Technology (NIST) has issued a preliminary draft “Cybersecurity Framework Profile for Artificial Intelligence” (IR 8596), offering organizations a structured way to manage cybersecurity risks tied to AI systems. The draft is open for public comment through Jan. 30, 2026, and NIST is scheduled to hold a workshop on Jan. 14, 2026, to discuss the document.
The release comes as many companies shift from testing AI to embedding it into everyday work, from customer-facing “agent” tools to regulated use cases like healthcare workflows. At the same time, cybersecurity leaders are warning that unmanaged internal AI use, geopolitical volatility, and supply chain pressures are reshaping how organizations need to plan for risk in 2026.
What the draft covers
NIST’s draft profile is designed to help organizations integrate AI security into existing cybersecurity governance, rather than treating AI as a separate or one-off project. It lays out three focus areas: securing AI system components, using AI defensively, and countering AI-enabled cyber threats.
Another description of the same draft frames it as a "three-pillar" approach (Secure, Defend, and Thwart) mapped to NIST's Cybersecurity Framework (CSF) 2.0, with AI-specific considerations layered on top. In that framing, "Secure" focuses on protecting AI systems from attacks such as data poisoning, model theft, and adversarial machine learning.
The “Defend” pillar emphasizes using AI to strengthen cybersecurity operations, while also recognizing that defensive AI can introduce new attack surfaces that must be addressed. “Thwart” centers on resilience against AI-enabled attacks, with examples that include AI-assisted phishing, deepfakes used to bypass authentication, and malware designed to evade detection.
Timeline details differ by source
Two sources give different release dates for the preliminary draft: one states NIST released IR 8596 on Dec. 16, 2025, while another puts the release on Dec. 17, 2025. Both state the public comment window runs through Jan. 30, 2026, and both mention the Jan. 14, 2026, workshop on the draft.
Why AI security is moving up the agenda
A major theme in one January 2026 industry update is that AI is becoming everyday infrastructure, with enterprises embedding it directly into workflows rather than confining it to isolated pilot projects. The same update highlights deployments such as AI assistants that can plan, recommend, and transact in retail settings, alongside infrastructure moves to secure compute and energy capacity.
Cybersecurity pressure is also rising from the outside. One 2026 risk outlook argues that geopolitical realignment and the weaponization of critical supply chains are now tightly linked to cyber exposure, pushing organizations toward proactive, intelligence-driven resilience.
That outlook also flags shipping and maritime logistics as prime targets, citing an August 2024 cyberattack on the Port of Seattle that caused outages and exposed personal data of some 90,000 individuals. It adds that the U.S. Coast Guard Cyber Command has reported a record number of maritime cyber missions responding to incidents across critical shipping infrastructure.
Governance meets day-to-day reality
Beyond external threats, organizations face internal risk as employees adopt personal or unvetted AI tools for everyday tasks—often described as “shadow AI.” The same 2026 outlook warns that without clear policies on data access, model use, and output validation, sensitive information can be exposed or misused.
Humanitarian organizations are shifting their approach to AI as well, with the discussion moving from whether to engage to how to do so responsibly without straining already stressed systems. A January 2026 sector newsletter says the emphasis is increasingly on integration, governance, and operational relevance, with fewer standalone experiments and more embedding of AI into core systems.
That humanitarian update also points to a growing recognition that capacity gaps—not the technology itself—can be the main constraint on adoption. It frames the central challenge as navigating AI’s potential benefits while managing risks tied to transparency, misuse, and broader social impacts.
