The discipline that was supposed to prevent AI harm produced frameworks, principles, and declarations, then largely watched as the industry did whatever it was going to do anyway. The same pattern is repeating in AI governance right now, and the window to change course is narrowing.

Consider the lifecycle of an AI ethics statement. First comes the announcement: a public declaration of principles, usually featuring words like "fairness," "transparency," and "accountability." Then come the frameworks. Then the workshops. The working groups. The whitepapers. Then, if you are paying attention: nothing. Or worse, the veneer of something. A checkbox. A committee that meets quarterly. A document that nobody outside that committee has read.

The Ethics Industry Built Its Own Graveyard

Between 2016 and 2022, over 160 AI ethics frameworks were published by governments, corporations, and civil society organisations. Researchers documented remarkable convergence around five core principles: transparency, justice, non-maleficence, responsibility, and privacy. This convergence was widely celebrated as evidence of emerging consensus.

It may also be part of the problem. When every organisation can produce a framework that checks the same principle boxes, the frameworks become table stakes: signals of legitimacy rather than instruments of change. Ethics statements serve a communications function. They signal intent. They do not govern behaviour.

The Pattern Is Repeating in AI Governance

The playbook looks familiar. Regulators announce consultations. Industry groups publish responsible AI commitments. Standards bodies convene working groups. Boards approve AI ethics policies. And the engineering teams deploying AI models in production make their own decisions, not because they are reckless, but because nobody gave them specific, implementable requirements that could survive contact with an actual codebase.

The governance gap is not a future problem. It is the problem that exists right now, in your organisation, in the distance between your AI policy document and your AI deployment infrastructure. The question is whether it gets closed deliberately or whether it closes by default, with the engineers making the calls your governance framework was supposed to make.

What Works Instead

Governance that functions has three properties that most ethics frameworks lack: specificity, enforceability, and technical grounding. "AI systems should be transparent" is a principle. "Every AI-generated customer decision must include a machine-readable explanation of the top three contributing factors, stored for seven years, accessible within 200ms" is a control. One of these survives an audit. The other does not.
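To make the distinction concrete, here is a minimal sketch of what that control might look like as a data structure an engineering team could actually implement and test. The names, constants, and fields below are illustrative assumptions, not drawn from any real regulation or codebase:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

RETENTION_PERIOD = timedelta(days=7 * 365)  # "stored for seven years"
LATENCY_BUDGET_MS = 200  # "accessible within 200ms"; enforced at the serving layer


@dataclass
class ContributingFactor:
    feature: str   # e.g. "credit_utilisation"
    weight: float  # signed contribution to the decision score


@dataclass
class DecisionExplanation:
    decision_id: str
    outcome: str                 # e.g. "declined"
    issued_at: datetime          # timezone-aware timestamp
    top_factors: list[ContributingFactor] = field(default_factory=list)

    def satisfies_control(self) -> bool:
        """Check the record against the control, not the principle:
        exactly three contributing factors, still within the retention window."""
        has_three_factors = len(self.top_factors) == 3
        within_retention = (
            datetime.now(timezone.utc) - self.issued_at <= RETENTION_PERIOD
        )
        return has_three_factors and within_retention
```

The specific fields matter less than the property they illustrate: a requirement expressed at this level can be tested in CI, checked in an audit, and rejected at code review. A principle can be none of those things.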

The organisations closing the governance gap treat AI governance as an engineering problem, not a communications exercise. They build evaluation pipelines. They implement decision audit trails. They write governance requirements that an engineer can implement without calling a lawyer to interpret what "appropriate" means.
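As one illustration of what a decision audit trail can mean in engineering terms, here is a hedged sketch of an append-only, hash-chained log. The file name, field names, and hashing choice are assumptions made for the example, not a prescribed standard:

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("decision_audit.jsonl")  # hypothetical location


def append_audit_record(decision_id: str, model_version: str, outcome: str,
                        factors: list[dict]) -> dict:
    """Append one decision record to an append-only JSONL audit trail.

    Each record carries the SHA-256 hash of the previous line, so any later
    edit or deletion breaks the chain and becomes detectable in an audit.
    """
    prev_hash = "0" * 64
    if AUDIT_LOG.exists():
        lines = AUDIT_LOG.read_text().strip().splitlines()
        if lines:
            prev_hash = hashlib.sha256(lines[-1].encode()).hexdigest()

    record = {
        "decision_id": decision_id,
        "model_version": model_version,
        "outcome": outcome,
        "top_factors": factors,  # machine-readable explanation payload
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,
    }
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Whether the trail lives in a flat file, a database, or a dedicated logging service is an implementation choice; what matters is that every production decision leaves a record an auditor can verify without asking the team that made it.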

The graveyard of well-intentioned AI ethics frameworks is large. The question facing every organisation deploying AI right now is whether their governance programme ends up in it.