Brad Ferris
The Director's Lens
Edition 06 · AI Governance

AI Agents Don't Ask Permission. Boards Should.

74% of companies plan to deploy agentic AI within two years. Only 21% have a mature governance model for autonomous agents. That gap is where organisations will be exposed — and where directors need to act before, not after, deployment.

Published 28 April 2026
Read 6 minutes
The Governance Story

In January 2026, the Deloitte AI Institute released its annual State of AI in the Enterprise report — 3,235 director-to-C-suite respondents across 24 countries and six industries. The headline got traction: AI access has expanded, productivity gains are real, transformation is accelerating. But one finding buried in the agentic AI section should stop board members mid-sentence: 74% of companies plan to deploy agentic AI within two years, yet only 21% have a mature governance model for autonomous agents.

The governance story here isn't really about technology. It's about authority. Traditional AI — the kind most boards have been briefed on — makes recommendations. A human reviews the recommendation and decides whether to act. Agentic AI is different in kind. Agents set goals, reason through multi-step tasks, and take actions: they send communications, make purchases, modify systems, and coordinate with other agents. They operate at scale, in production, without a human approval gate on each move. The moment you deploy an AI agent, you've delegated authority. The question is whether the board knows it, and whether anyone has defined the scope of that delegation with the rigour any other material authority delegation would require.
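What a defined delegation looks like can be made concrete in a few lines of code. The sketch below is illustrative only: the action types, dollar limits, and function names are assumptions for this example, not the API of any real agent framework. The structural point is that every agent action passes through an explicit authorisation decision (allow, escalate to a human, or deny), and that this decision trail is something a board can actually inspect.

```python
# Minimal sketch of an agent authority gate. Every name, action type,
# and threshold here is an illustrative assumption, not a real framework.
from dataclasses import dataclass

# The delegation scope: the artefact a board should be able to inspect.
AUTONOMY_POLICY = {
    "send_email":    {"autonomous": True, "max_recipients": 50},
    "make_purchase": {"autonomous": True, "max_amount": 1_000},
    "modify_system": {"autonomous": False},  # always requires human approval
}

@dataclass
class AgentAction:
    kind: str
    amount: float = 0.0
    recipients: int = 0

def authorise(action: AgentAction) -> str:
    """Return 'allow', 'escalate' (route to a human), or 'deny'."""
    rule = AUTONOMY_POLICY.get(action.kind)
    if rule is None:
        return "deny"  # undefined action types sit outside the delegation
    if not rule["autonomous"]:
        return "escalate"
    if action.amount > rule.get("max_amount", float("inf")):
        return "escalate"  # within scope, but above the delegated limit
    if action.recipients > rule.get("max_recipients", float("inf")):
        return "escalate"
    return "allow"

# A $4,000 purchase is within the agent's remit but over its limit.
print(authorise(AgentAction("make_purchase", amount=4_000)))  # escalate
print(authorise(AgentAction("send_email", recipients=12)))    # allow
```

If nobody in the organisation can produce the equivalent of AUTONOMY_POLICY for a live agent, the delegation still exists; it is simply undocumented.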

There's a second governance thread the report surfaces, and it's a board accountability failure that's been building quietly. 84% of organisations are increasing AI investment. Yet only 25% have moved 40% or more of their AI experiments into production. Deloitte calls this "pilot fatigue" — organisations chasing the next shiny object, running hundreds of pilots with no clear pathway to value. For boards that have approved AI budgets and accepted AI strategy briefings over the past two years, this is a strategy oversight problem dressed as a technology problem. The four board functions in strategy — input, approval, monitoring, decisions — require that boards don't just rubber-stamp AI investment at approval and then disappear. The production gap suggests that's exactly what's happened.

The deeper issue is that AI governance is being treated as a technical problem when it is fundamentally an adaptive one. Most boards have received compliance-framed AI briefings: a risk register, a data privacy policy, a reference to the EU AI Act. These are technical responses to what is actually a non-programmed, novel challenge — one that requires questioning existing authority structures, escalation frameworks, and accountability models. A policy written for generative AI doesn't govern agents. A risk appetite statement that doesn't specify the scope of autonomous action is aspirational, not operational.

Questions I'd Ask in the Boardroom
  • For each agentic AI system in the pipeline — or live today — what decisions can the agent make independently, and which require human approval? Who defined those boundaries, and does the board have visibility over them?
  • When an AI agent takes an action that creates legal, financial, or reputational exposure, who is accountable within our governance structure? Does our existing framework actually answer that question, or do we have a gap?
  • Of our AI investment over the past two years, what percentage has moved from pilot to production and is generating measurable returns? If it's less than 40%, what specifically is management doing about it — and when did the board last ask?
  • Do we have a KRI framework for AI risk — not a risk register, but genuine leading indicators that would flag anomalous agent behaviour before it becomes a regulatory or reputational event? (A minimal sketch of what such indicators might look like follows this list.)
  • Was our AI governance framework designed for generative AI (which makes recommendations) or does it specifically address agentic AI (which takes actions)? What would need to change to cover agentic use cases?
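On the KRI question above, the difference between a risk register and a leading indicator is easier to show than to describe. A minimal sketch, assuming agent authorisation decisions are already being logged as 'allow', 'escalate', or 'deny' (as in the earlier gate example); the indicator names and thresholds are invented for illustration.

```python
# Sketch of leading-indicator KRIs over a rolling window of agent
# decisions. Indicator names and thresholds are illustrative assumptions.
from collections import deque

KRI_LIMITS = {
    "out_of_scope_rate": 0.02,  # agent keeps attempting undefined actions
    "escalation_rate": 0.20,    # delegated limits no longer fit the workload
}

class AgentKRIs:
    """Rolling window over recent authorisation decisions."""

    def __init__(self, window: int = 500):
        self.decisions = deque(maxlen=window)

    def record(self, decision: str) -> None:
        self.decisions.append(decision)

    def indicators(self) -> dict:
        n = max(len(self.decisions), 1)
        return {
            "out_of_scope_rate": self.decisions.count("deny") / n,
            "escalation_rate": self.decisions.count("escalate") / n,
        }

    def breaches(self) -> list:
        # Anything returned here goes to the risk committee *before*
        # the behaviour becomes a regulatory or reputational event.
        return [k for k, v in self.indicators().items() if v > KRI_LIMITS[k]]

kris = AgentKRIs()
for d in ["allow"] * 90 + ["deny"] * 5 + ["escalate"] * 5:
    kris.record(d)
print(kris.breaches())  # ['out_of_scope_rate']
```

The design choice that matters is the rolling window: a register records what went wrong last quarter, while indicators like these move before an incident does.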
Red Flags & Watch Points
  • Management presents agentic AI deployment as a progress update without a governance section attached. Treating autonomous agents as an IT delivery milestone — not a board-level authority event — is the primary failure mode. If the first the board hears of a live agent deployment is in a digital transformation update, the governance window has already closed.
  • AI risk appetite was written for generative AI and hasn't been revisited. A risk appetite statement with no explicit treatment of autonomous action, agent scope, or human-in-the-loop requirements isn't fit for purpose. Vague phrases like "responsible AI practices" have no operational teeth.
  • Pilots arrive with success metrics but no production pathway. Boards approving AI investment rounds should be asking: what's the exit from pilot? A board that has heard "we ran a successful proof of concept" twelve times without seeing a proportional increase in production deployments is funding pilot fatigue, not transformation.
  • AI governance resides entirely within IT or the CIO. When legal, compliance, risk, and the board have no sight of what specific actions agents are authorised to take, the Three Lines of Defence are effectively switched off for the fastest-growing risk category in the business.
Opportunity & Risk Balance

The opportunity is real and the Deloitte data makes it hard to dismiss. The 34% of companies genuinely transforming with AI — creating new products, reimagining core processes, rethinking business models — are pulling materially ahead of the 37% using AI at a surface level. More importantly, the companies with mature governance frameworks are the ones scaling fastest. This is the finding boards should internalise: governance isn't the handbrake on AI value creation. It's the mechanism that converts an experimental programme into a durable enterprise capability. The organisations that skipped governance to move faster are the ones stuck on the wrong side of the 25% production figure.

The risk side is asymmetric in a way that matters specifically for directors. When an AI agent takes an action that causes legal, financial, or reputational harm — and no board-level oversight framework for autonomous agents existed at the time — the liability question is a director question. The Corporations Act duty of care doesn't contain an exemption for technology the board didn't fully understand. Deloitte found that 73% of organisations are most concerned about AI data privacy and security risks, followed by legal, IP, and regulatory compliance at 50%. Both become dramatically harder to manage once agents are acting autonomously in production without defined authority boundaries.

Director's Recommendation
My position

Every board needs a formal position on agentic AI governance before it's in production — not after. This doesn't mean halting deployment. It means making the answers to the five boardroom questions above a precondition for production approval on the next agentic use case. Specifically:
  • a policy that defines the scope of agent autonomy and the conditions requiring human approval;
  • a clear accountability map for agent actions;
  • KRIs that surface anomalous agent behaviour before it escalates; and
  • a dedicated quarterly briefing on AI governance status that covers agentic systems explicitly, not rolled into a general digital transformation update.

On the investment accountability side, boards should require a production transition rate as a standing metric in AI strategy reporting. If the organisation has been running AI pilots for 18 months and fewer than 25% are in production, the strategy paper for the next AI budget request should open with an explanation of why — and what changes in governance, infrastructure, and resourcing will produce a different result. Approving the next round without that answer is repeating the error. The boards that build governance ahead of scale will extract the real value from AI. The others will be cleaning up incidents.
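For boards that want the standing metric pinned down, here is a minimal sketch of the calculation. The pilot register format and project names are invented for illustration; only the 25% threshold comes from the figures discussed above.

```python
# Sketch of the production transition rate as a standing board metric.
# Register entries and names are invented for illustration; the 25%
# threshold mirrors the figure used throughout this piece.
pilot_register = [
    {"name": "invoice-triage-agent", "status": "production"},
    {"name": "supplier-outreach",    "status": "pilot"},
    {"name": "contract-summariser",  "status": "retired"},
    {"name": "pricing-copilot",      "status": "pilot"},
    {"name": "hr-policy-assistant",  "status": "pilot"},
]

in_production = sum(p["status"] == "production" for p in pilot_register)
transition_rate = in_production / len(pilot_register)

print(f"Production transition rate: {transition_rate:.0%}")  # 20%
if transition_rate < 0.25:
    print("Below threshold: the next AI budget paper opens with the why.")
```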

Researched and drafted by Brad's agentic AI team. Edited and published by Brad Ferris.