The governance frameworks that fail are not, for the most part, poorly designed. They articulate sensible principles. They identify relevant risks. They propose reasonable controls. They fail because of where they sit in the organisation, not what they contain.

This is an important distinction because it changes the diagnosis entirely. If frameworks fail because of poor design, the answer is better frameworks. If they fail because of structural authority problems, better frameworks make no difference. The organisation gets a more sophisticated document that is ignored in exactly the same ways as the previous one.

Authority Is Not Influence

Governance functions in most enterprises have the authority to advise, to recommend, and to report. They rarely have the authority to halt a product launch or require a reassessment before deployment proceeds. When governance and delivery timelines conflict, the function with authority over the timeline wins. This is not a cultural failure. It is an architectural one.

The practical consequence is that AI governance becomes a process that happens alongside deployment decisions rather than one that shapes them. The governance team is consulted. Their concerns are noted. The deployment proceeds. The documentation records that governance was involved. The risks the governance team raised remain unaddressed.

Most post-mortems of AI governance failures do not identify this dynamic. They identify failures of process, documentation, or oversight. The authority gap remains invisible because everyone involved had an interest in framing the failure as a process problem rather than a structural one.

Where the Problem Starts

Most AI governance frameworks are sponsored by the risk or compliance function. These functions have established authority over financial and regulatory risk, and they understand the governance problem in those terms. AI risk does not map cleanly onto either category. The result is that AI governance gets assigned to a function that has the vocabulary for risk but not the authority to govern a domain that primarily belongs to technology and product.

The board approves a policy. The policy is communicated. The relevant teams are trained. Nobody in the organisation has been given explicit authority to say no to an AI deployment on governance grounds. When the governance team tries to do so informally, they discover that authority and seniority are not the same thing.

What Governance Authority Actually Looks Like

Governance authority means that no AI system reaches production without documented sign-off from the governance function. It means the governance function has standing to raise concerns that can delay a launch, and that delaying a launch for governance reasons has the same organisational legitimacy as delaying it for security or legal reasons. It means there are consequences for bypassing governance, not just for non-compliance.

This is not common in current enterprise AI programmes. It requires a deliberate decision at board level to grant authority, not just assign responsibility. The difference between those two things is the difference between a governance programme that functions and one that produces documentation.

The Practical Question

The most useful question to ask when assessing an AI governance programme is not "what does the framework require?" It is "who can stop a deployment?" If the answer is unclear, or if the answer is nobody in the governance function, the framework is largely decorative.

This is a tractable problem. It does not require reorganisation or significant additional resource. It requires an explicit board decision to grant the governance function authority commensurate with its responsibility, and a clear articulation of what that authority covers. Organisations that have made this decision find that it changes the nature of governance conversations substantially. The governance team stops being a review function and starts being a decision-making one.