Brad Ferris
The Director's Lens
Edition 08 · AI Governance

When AI Stops Saying and Starts Doing

McKinsey's 2026 AI Trust survey reports trust maturity is rising on average, but only one-third of organisations hit a meaningful score on governance and agentic-AI controls. The blind spot in their analysis is more revealing than the findings: governance is treated as a management function. Directors are absent. That mirrors how most organisations are actually structuring this work — and it is exactly the gap a board has to close before the next agentic deployment is approved.

Published: 28 April 2026
Read time: 7 minutes
The Governance Story

On 25 March, McKinsey published its annual State of AI Trust survey of around 500 organisations, headlined: trust maturity is rising on average, but only about one-third of organisations hit maturity level three or higher on strategy, governance, and the new agentic-AI controls dimension McKinsey added this year. Two-thirds of respondents now cite security and risk as the top barrier to scaling agentic AI — well ahead of regulation or technical limits. The framing is sharp: in the agentic era, the question is no longer whether AI says the wrong thing but whether it does the wrong thing.

That phrasing is correct, and it changes the governance problem entirely. Saying the wrong thing is a content risk — embarrassing, occasionally defamatory, manageable through review. Doing the wrong thing is a fiduciary one. An AI agent that places an order, transfers funds, lodges a regulatory filing, terminates a contract, or dispatches a customer message has executed a corporate act. Australian directors carry duties under sections 180–184 of the Corporations Act for the company's actions, and that exposure does not depend on whether the actor was a human, a process, or an autonomous system. McKinsey's data tells us boards are nowhere near ready for that.

The report's blind spot is more revealing than its findings. McKinsey treats governance as a management function — internal audit, ethics teams, AI governance roles, the chief AI officer. Directors, audit committees, and the fiduciary layer are absent from the analysis. That isn't a McKinsey oversight; it mirrors how most organisations are actually structuring this work. When the board–management line blurs under speed pressure, the board's distinctive contribution disappears. Setting the risk appetite for autonomous action, demanding clarity on the line between acceptable and unacceptable agent behaviour, holding the executive team accountable for agentic incident response — these are governance acts no internal audit team can perform on the board's behalf.

Questions I'd Ask in the Boardroom
  • What is our risk appetite for autonomous agent action, expressed as a dollar threshold, a counterparty type, and an irreversibility test? Show me the language we'd point to when an AI agent commits us to something material.
  • Which of our deployed AI agents currently have authority to do something — to transact, to commit, to communicate externally — without a human approving each instance? Where is that register maintained, and who reviews it?
  • If an agent acted beyond its authority tomorrow, what would our first 24 hours look like? Who is the named incident officer, and when would the board be informed?
  • McKinsey reports two-thirds of executives flagging security and risk as the top barrier to scaling agentic AI. Do our pace and ambition reflect that, or are we deploying ahead of our control maturity?
  • Of our annual technology spend, how much is allocated explicitly to AI trust — guardrails, evaluation, monitoring, incident response — versus net-new agent build? Is that ratio defensible?
  • McKinsey's data shows only one-third of organisations reaching governance maturity three or higher. What does our self-assessment look like across their five dimensions, and who has signed off on it?
Red Flags & Watch Points
  • The chief AI officer reports to the CEO and not separately to the board or audit committee. When the same executive owns build velocity and trust, velocity wins under pressure.
  • There is no kill-switch protocol. If you cannot articulate in plain language how each agent is paused and by whom, you do not have control of the system — you have a relationship with it.
  • Trust spend is bundled inside the AI build budget. When the same line item funds the agent and the guardrails, the guardrails are first to get cut when timelines slip.
  • Incident response capability hasn't been tested. McKinsey reports that almost 60 per cent of organisations that experienced AI-related incidents rated their response as merely satisfactory or worse. A tabletop exercise once a year is the minimum bar.
  • Only one or two board members can fluently discuss agentic AI. Boards that delegate AI fluency to a single director are one departure away from being unable to govern the system.
Opportunity & Risk Balance

McKinsey's most useful finding is buried in the spend data: organisations investing $25 million or more in responsible AI report materially higher maturity scores and are far more likely to deliver EBIT impact above 5 per cent. Set aside the obvious correlation issue — bigger and more sophisticated organisations both spend more and capture more value — and the directional message still stands. Starvation budgets on AI trust produce starvation outcomes. The optimisation question for a board is not whether to spend on guardrails but how to calibrate the ratio of build-spend to trust-spend so that velocity does not outrun control.

The risk side is asymmetric. A successful agentic deployment delivers incremental productivity. A single agentic incident — funds moved without authority, a regulatory filing made incorrectly, a customer commitment dispatched in error — is reportable, often headline-making, and increasingly a personal liability question for directors. The board should be optimising for upside that is durable, not upside that is cheap. Cheap upside is gone the first time something goes wrong.

Director's Recommendation
My position

Treat agentic AI as a board-level risk-appetite decision, not a chief AI officer build plan. Before the next agentic deployment is approved, the board should require five things: a written agentic-AI risk-appetite statement with explicit dollar, counterparty, and irreversibility thresholds; a separate trust-spend budget line that cannot be reallocated to build; a named incident officer with direct board reporting authority; a quarterly tabletop exercise on agentic incidents, with the board observing at least twice a year; and a competence-matrix check confirming that at least three directors can independently interrogate agentic governance. McKinsey calls trust an enabler rather than a tax, and that's right — but only for the boards that resource it that way. The boards that don't are running an unhedged trade on autonomy, and the regulators, the courts, and the market will eventually find them.

Researched and drafted by Brad's agentic AI team. Edited and published by Brad Ferris.