AI governance is becoming a central issue for businesses, policymakers and technology companies as the use of artificial intelligence moves faster than the systems designed to monitor it. Across the latest developments, the message is consistent: organizations want AI’s benefits, but many are still building the rules, audits and safeguards needed to manage risk.
A recent Grant Thornton survey highlighted that gap in the workplace, where nearly 80% of executives said their organizations would not pass an AI governance audit despite growing adoption. The same survey found that 75% of boards had approved significant AI investments, while 48% had not defined AI governance expectations and 46% had not put AI risk management programs in place.
Oversight gap
Axios reported that the pressure is rising as agentic AI systems begin handling tasks with less continuous human oversight, increasing the risk of regulatory scrutiny, legal exposure and costly mistakes. The survey also found that companies with fully integrated AI were almost four times as likely to report revenue growth as businesses still in the pilot phase, at 58% versus 15%.
Even so, confidence in governance appears uneven. Among businesses experimenting with AI, only 7% said they had very high confidence they could pass an independent audit within 90 days, compared with 74% of companies that had fully integrated AI systems. That split suggests many organizations are still trying to catch up on controls as AI spending and deployment continue to accelerate.
Policy moves
In Europe, policymakers are making the case that strong AI rules can support innovation instead of slowing it. Axios reported that European Commissioner Magnus Brunner said the EU’s AI Act creates guardrails that help build trust and provide one stable regulatory framework across member states, in contrast to the more fragmented state-by-state system in the United States.
Brunner acknowledged that Europe can be rigid and slow, but he argued that regulatory clarity can give companies more certainty as they build and sell AI products. He also said regulation should not be treated as the enemy of innovation, framing the EU approach as an effort to create security and direction rather than a regulatory free-for-all.
A similar push is taking shape in India at the state level. The Times of India reported that Karnataka has drafted a digital safety bill focused on social media regulation in the AI era, with mandatory labeling for AI-generated content and deepfakes, legal penalties for misuse, and requirements for platforms to act on harmful content within 24 to 48 hours.
The proposed law would also create a Karnataka Digital Safety and Social Media Regulatory Authority to oversee compliance and respond to digital risks. According to the report, the draft promises user protections such as the right to report harmful content, time-bound grievance redressal, and safeguards against harassment and misinformation. It also places emphasis on digital awareness and media literacy, while proposing fake news detection, deepfake tracking and real-time monitoring dashboards as part of phased implementation.
Enterprise controls
Technology companies are also trying to turn governance into a built-in part of AI deployment. In a Microsoft Community Hub post, the company said its open-source Agent Governance Toolkit is designed to bring runtime security to autonomous AI agents through policy enforcement, audit logging and site reliability practices.
Microsoft said the toolkit is meant to manage risks that can arise when agents call external APIs, process user data, and make autonomous decisions. The post said the toolkit addresses all 10 OWASP risks for agentic applications and is available in Python, TypeScript, Rust, Go, and .NET. Microsoft also described the package as framework-agnostic and said it can work with systems including LangChain, CrewAI, and Google ADK.
The company’s example focused on controlling agent behavior before actions are executed. Microsoft said one part of the toolkit acts as a policy engine that intercepts actions, while other components cover compliance mapping, audit trails, service level objectives and circuit breakers for reliability. In its own sample project, the company said governance was added to a six-agent travel planner in about 30 minutes.
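The interception pattern Microsoft describes, checking an agent's proposed action against policy before it runs and writing every decision to an audit trail, can be sketched in a few lines. This is a minimal illustrative example, not the Agent Governance Toolkit's actual API; the names `PolicyEngine`, `AuditLog`, and `AgentAction` are assumptions made for the sketch.

```python
# Hypothetical sketch of runtime policy enforcement for an AI agent.
# The class and method names here are illustrative, not taken from the
# Microsoft Agent Governance Toolkit.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AgentAction:
    agent: str      # which agent proposed the action
    tool: str       # e.g. "search_flights", "send_email"
    params: dict    # arguments the agent wants to pass


@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

    def record(self, action: AgentAction, allowed: bool, reason: str) -> None:
        # Append a timestamped record of every policy decision.
        self.entries.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "agent": action.agent,
            "tool": action.tool,
            "allowed": allowed,
            "reason": reason,
        })


class PolicyEngine:
    """Intercepts proposed actions and applies an allow-list before execution."""

    def __init__(self, allowed_tools: set, audit: AuditLog):
        self.allowed_tools = set(allowed_tools)
        self.audit = audit

    def authorize(self, action: AgentAction) -> bool:
        # Deny anything outside the allow-list; log the decision either way.
        if action.tool not in self.allowed_tools:
            self.audit.record(action, False, "tool not in allow-list")
            return False
        self.audit.record(action, True, "allowed")
        return True


audit = AuditLog()
engine = PolicyEngine({"search_flights", "search_hotels"}, audit)

ok = engine.authorize(AgentAction("planner", "search_flights", {"dest": "LIS"}))
blocked = engine.authorize(AgentAction("planner", "send_email", {"to": "user"}))
print(ok, blocked)          # True False
print(len(audit.entries))   # 2
```

The key design choice is that the agent never calls a tool directly: every call routes through the policy check, so the audit log is complete by construction, which is what makes a later governance audit tractable.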
Broader shift
Taken together, the recent survey findings, policy proposals and product announcements show AI governance moving from a secondary issue to a frontline concern. Companies are still racing to deploy AI for growth, but governments are writing new rules for content and safety while vendors market tools to police autonomous systems in production. The larger pattern is clear: as AI adoption spreads, the debate over accountability is becoming part of the rollout itself rather than something left for later.
