De-Risking
Frontier AI.

AI governance without a stable AI practice is theatre. From board risk appetite to production evals, the real work is establishing everything underneath that's needed for governance to mean anything.

See the stack
You can't run your business on AI that's governed like a side hustle.

"Should" is not a control. It never will be.

AI governance fails in a predictable place. Not in the boardroom, where frameworks get approved. Not in the risk committee, where they get celebrated. It fails when it meets the reality of AI development - where documents full of "shoulds" meet a team that deals in specifics and a technology that evolves daily.

There should be guardrails. Models should be evaluated. Agents should be catalogued. Data provenance should be maintained.

Most of what's sold as AI governance is theatre - an ethics board, a set of principles, a policy document, a workshop for the leadership team. It does not survive contact with an AI practice that runs on improvisation. You cannot govern what you cannot evaluate. You cannot evaluate what you do not measure. You cannot measure when the organisation cannot yet articulate AI's role.

87% of executives claim to have clear AI governance frameworks - but fewer than 25% have fully implemented the technical tools to manage risks.

Source: IBM Newsroom

Real governance means taking AI seriously as a proposition. If your AI practice is chaos, governance is a pipe dream. The board cannot make informed decisions about AI when the engineering team is prompt-golfing a failing model with no KPIs. The risk committee cannot set appetite for systems that nobody can describe. The audit committee cannot ask the right questions of a practice that doesn't yet know what its own answers are.

This is the work: building governance into an AI practice that operates intentionally.

AI governance isn't a project.
It's an operating model.

The cost of an unmanaged AI estate compounds quietly. Regulatory exposure that only surfaces when the regulator asks. Audit findings that arrive with a board paper attached. Engineering teams becoming cautious where they should be productive, because no-one has told them what "safe" actually means.

The gap doesn't close on its own. It widens - through every deployment that bypasses triage, every vendor claim accepted without due diligence, every model in production that no-one is watching.

Address the Gap

<1%

of organisations have fully operationalised responsible AI

Source: World Economic Forum

<28%

of respondents are confident they can secure AI used in core business operations

Source: Cloud Security Alliance & Google Cloud

>60%

are already using or plan to use agentic AI this year

Source: Cloud Security Alliance & Google Cloud

The governance stack you need.
Built to ship.

Governance becomes real when it stops being a document and starts being a set of working components: sitting in your stack, mapped to your risks, running on every commit. These are the deliverables that close the gap between policy and production.

Tiered AI Impact Assessment


Full rigour where the stakes demand it. A proportionate process everywhere else.


A tiered assessment framework built around your actual use case landscape. Proportionate handling scales scrutiny to consequence: named decision rights, clear escalation paths, and a procurement track for vendor AI so "responsible AI" claims meet an actual due-diligence checklist before contract.
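
As an illustration only - the tier names, factors, and thresholds below are placeholders, not the framework itself - the core of proportionate handling can be as small as a rule that scores a use case on consequence and returns the level of scrutiny it gets:

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    customer_facing: bool      # does output reach customers directly?
    autonomous_action: bool    # can the system act without human sign-off?
    sensitive_data: bool       # does it touch regulated or personal data?

def assessment_tier(uc: UseCase) -> str:
    """Scale scrutiny to consequence: the higher the tier, the fuller the assessment."""
    score = sum([uc.customer_facing, uc.autonomous_action, uc.sensitive_data])
    if score >= 2:
        return "Tier 1: full impact assessment, named approver, pre-deployment evals"
    if score == 1:
        return "Tier 2: standard assessment and peer review"
    return "Tier 3: self-assessment, logged in the AI-BoM"

print(assessment_tier(UseCase("claims triage agent", True, True, True)))
```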

AI Bill of Materials


A living inventory of every model, dataset, prompt, tool, agent, and owner in your estate.


You cannot govern what you cannot see. Shadow AI is now the default state of every large organisation. The AI-BoM gives you data lineage, model provenance, dependency tracking, and a discovery programme for what's already running without your knowledge. The thing your audit committee will eventually demand.
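
A minimal sketch of what a single AI-BoM record might capture. The field names are illustrative; in practice the inventory would align to a recognised schema such as OWASP CycloneDX rather than anything bespoke:

```python
from dataclasses import dataclass, field

@dataclass
class AIBomEntry:
    """One governed asset in the AI estate: model, dataset, prompt, tool, or agent."""
    asset_id: str
    asset_type: str            # "model" | "dataset" | "prompt" | "tool" | "agent"
    owner: str                 # accountable person or team
    provenance: str            # where it came from: vendor API, open weights, in-house
    dependencies: list[str] = field(default_factory=list)  # upstream assets it relies on
    environment: str = "production"
    last_evaluated: str | None = None                       # date of the most recent eval run

inventory = [
    AIBomEntry("rag-support-bot", "agent", "customer-ops",
               provenance="in-house orchestration on vendor LLM",
               dependencies=["kb-embeddings-v3", "llm-vendor-api"]),
]
```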

AI Risk Taxonomy


A risk model built from your specific models, agents, data flows, and deployment context.


Every risk traces to a specific model, agent, data flow, or deployment pattern, and maps against ISO 42001, NIST AI RMF, the EU AI Act, and OWASP CycloneDX - so your regulatory posture is grounded in technical reality rather than generic categories. This is the artefact that lets your risk committee and your engineers talk about the same thing.
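
Hypothetically, one entry in that taxonomy might take the shape below. The risk, controls, and framework references are illustrative placeholders, not a mapping to copy:

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    risk: str                       # the concrete technical failure mode
    affected_assets: list[str]      # AI-BoM entries this risk applies to
    controls: list[str]             # guardrails or process controls that mitigate it
    framework_refs: dict[str, str]  # traceability into the standards your auditors use

retrieval_poisoning = RiskEntry(
    risk="Untrusted documents injected into the retrieval index steer agent answers",
    affected_assets=["rag-support-bot", "kb-embeddings-v3"],
    controls=["source allow-listing", "ingestion-time content scanning"],
    framework_refs={
        "NIST AI RMF": "MEASURE / MANAGE functions",   # illustrative, not exact clause IDs
        "ISO 42001": "operational controls",
        "EU AI Act": "risk management obligations",
    },
)
```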

Reusable Guardrails Library


A codified controls architecture: versioned, testable, and hot-swappable as better implementations emerge.


Every team is now building AI components. Without shared infrastructure to store, version, discover, and quality-check them, the same controls get rebuilt repeatedly, nothing is auditable, and the AI Bill of Materials has nothing to point at. The library is where reusable practice lives - versioned, tested, discoverable across teams - so the practice compounds rather than fragments. It's also where the community of practice forms: shared work creates shared standards, shared review, and the cross-team conversation that policy can't manufacture.
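
A minimal sketch of the shape one reusable guardrail might take: a versioned, unit-tested component with a predictable interface, so implementations can be swapped without rewriting callers. The names and the check itself are illustrative only:

```python
from dataclasses import dataclass

@dataclass
class GuardrailResult:
    passed: bool
    reason: str

class PiiOutputFilter:
    """One guardrail in the shared library: versioned, tested, discoverable by any team."""
    name = "pii-output-filter"
    version = "1.2.0"

    def check(self, model_output: str) -> GuardrailResult:
        # Simplified placeholder check; a real implementation would use a proper PII detector.
        flagged = any(marker in model_output.lower() for marker in ("tfn", "medicare", "passport"))
        if flagged:
            return GuardrailResult(False, "possible PII detected in model output")
        return GuardrailResult(True, "clean")

def test_filter_blocks_obvious_pii():
    assert not PiiOutputFilter().check("Customer TFN is 123 456 789").passed
```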

Evaluation & Observability Pipeline


Evals, tracing, and drift detection running in your CI/CD, so model behaviour is measured continuously - on every commit.


Pre-deployment evaluation suites for accuracy, bias, and capability boundaries. Production tracing and observability for live behaviour, integrated with your incident response so model failures route the same way as any other Sev-1. The audit trail your regulator will ask for, generated automatically as a by-product of normal operation.
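
In CI terms, the gate can be as plain as a test that runs an agreed evaluation set against every build and fails the pipeline when behaviour drops below threshold. The cases, threshold, and model stub below are stand-ins for whatever your own tiering and risk appetite decide:

```python
# Illustrative CI gate: run a fixed evaluation set against the current build and
# fail the pipeline if accuracy regresses past an agreed threshold.

EVAL_SET = [
    {"prompt": "Is pre-existing condition X covered under policy Y?", "expected": "no"},
    {"prompt": "What is the waiting period for service Z?", "expected": "2 months"},
]

ACCURACY_THRESHOLD = 0.95  # agreed with risk, recorded in the impact assessment

def call_model(prompt: str) -> str:
    """Stand-in for the real inference client - swap in your model call here."""
    canned = {
        "Is pre-existing condition X covered under policy Y?": "No, it is excluded.",
        "What is the waiting period for service Z?": "The waiting period is 2 months.",
    }
    return canned[prompt]

def test_eval_suite_meets_threshold():
    correct = sum(1 for case in EVAL_SET if case["expected"] in call_model(case["prompt"]).lower())
    accuracy = correct / len(EVAL_SET)
    assert accuracy >= ACCURACY_THRESHOLD, f"accuracy {accuracy:.2f} below agreed threshold"
```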

Frontier

Agentic Constraint Architecture


Decision-traceability, hard constraint layers, and red-teaming for AI systems that act without a human in the loop.


Multi-agent and autonomous deployments don't wait for governance to catch up. The failure modes are not the ones your existing risk framework anticipates. Constraint architecture, capability boundaries, and adversarial evaluation for systems where the cost of a wrong action is no longer just a wrong answer.
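
One concrete form a hard constraint layer can take: a checkpoint every agent action must pass through before it executes, with the decision and its reason written to a trace. The action names, limits, and log format here are illustrative:

```python
import json, time

ALLOWED_ACTIONS = {"read_policy_doc", "draft_reply"}  # explicit capability boundary
SPEND_LIMIT_AUD = 0.0                                 # this agent may not move money

def authorise(action: str, args: dict) -> bool:
    """Hard constraint check: runs before every agent action, outside the model's control."""
    permitted = action in ALLOWED_ACTIONS and args.get("spend_aud", 0) <= SPEND_LIMIT_AUD
    # Decision-traceability: every authorisation decision is logged, allowed or not.
    print(json.dumps({
        "ts": time.time(),
        "action": action,
        "args": args,
        "permitted": permitted,
    }))
    return permitted

# The agent proposes; the constraint layer disposes.
assert authorise("draft_reply", {"customer_id": "c-123"})
assert not authorise("issue_refund", {"spend_aud": 250.0})
```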

Governance Icebergs: the work beneath.

Engagements rarely begin with an organisation calmly identifying a governance need. They begin with pressure - a finding, a question that couldn't be answered, a competitor's headline that the board can't unsee. The governance language gets used because it's the language available. But what's often needed first is to make the AI practice intentional, so governance has something to take hold of. Three shapes that work has taken for past clients.

Scenario:

A regulatory finding had landed. The board was being asked questions it couldn't answer.

The pain was real and specific. A finding had landed, regulatory exposure was now visible, and the leadership team was being asked questions for which they did not yet have satisfactory answers. Assurance presumes a practice that can be assured, but the engineering team had models running with no agreed measures of performance, no instrumentation, and no shared vocabulary for what "working" meant. The first task was not the governance framework. It was defining what good looked like in concrete operational terms, instrumenting the systems already in production, and giving the team patterns they could use to recognise failure before the regulator did. The framework followed, with something to govern.

Scenario:

The board was asking about AI risk appetite. Nobody could answer.

The conversation kept stalling because the people being asked to set appetite had no working understanding of what they were setting it for. The exec team had been through training that left them more confident and no more capable - a panel of specialists explaining their specialisms, none of it integrated, none of it pitched at the altitude a board member actually needs. The work became rebuilding what executive AI literacy means: enough technical understanding to ask precise questions, enough strategic framing to allocate capital intelligently, enough governance fluency to set appetite without being captured by the people they're meant to be governing. Risk appetite is downstream of comprehension. The comprehension had to be built first.

Scenario:

The systems were already past where the field had answers.

A technically deep team, a practice that was already intentional, and questions genuinely at the frontier of the field - a combination that had already left a trail of governance specialists unable to grasp the situation. The work was building safety frameworks for systems whose failure modes are not yet well understood by anyone - agentic architectures, multi-agent coordination, self-modifying behaviour. Constraint architectures, red-team methodologies, decision-traceability for systems acting without a human in the loop. There is no established best practice here. That is what frontier means. What gets built here is what the rest of the field will eventually inherit.

Matt Newman - AI governance and safety advisor

From neural networks research to international AI standards.

I've spent twenty years at the intersection of technical AI and organisational reality - from early ML research and enterprise AI agent deployment, through large-scale global change programs, to shaping the IEEE standards, OWASP work, and AI safety research that will define AI governance for the decade ahead.

Fifteen years in AI and machine learning, a decade specifically in AI governance and safety, and more than thirty enterprise engagements across banking, insurance, energy, healthcare, and telecom.

"I help executives turn a vision into reliable assets, and engineers turn experiments into trusted systems."

That's what makes this work. A risk taxonomy that maps to your actual technology. Governance requirements that your engineers can implement without interpretation. Board assurance that is technically honest, and a board who are confident in having those conversations.

At SingularityNET I built safety frameworks for frontier AGI systems. At nib I'm operationalising ISO 42001 and NIST AI RMF through governance boards, LLM evaluation pipelines, and agentic observability infrastructure - sitting with engineers to work through exactly how guardrails land in their stack, and sitting with the risk committee to explain what that means in terms they can act on.

Work with me

Four disciplines.
One practitioner.

The AI industry is stuffed with one-dimensional 'experts'. Governance specialists who haven't sat with engineers. Safety researchers who don't read balance sheets. Change leads who don't know what an AI-BoM is. I've spent twenty years working across all four disciplines, because this work only succeeds when they're combined.

AI Safety Research

Frontier safety work on agentic systems, multi-agent architectures, and self-improving models.

Governance built on last year's AI is already behind. The risk landscape for agentic systems, multi-agent architectures, and self-improving models is moving faster than most frameworks anticipate, and the failure modes are genuinely different from those of conventional software. Active engagement at the frontier means the governance you receive is calibrated to where AI actually is, not where it was when the standard was drafted.

International Standards

Active contributor to the international standards that define how AI is governed and audited.

Most practitioners interpret governance standards. Having helped write them is a different thing. You understand which clauses are precise, which were negotiated into deliberate vagueness, and what the audit community will actually look for as evidence. That difference shows up when a framework needs to survive regulatory scrutiny, not just satisfy an internal checklist. Active contributor to IEEE P2863 (AI Governance), P7000, P7005, and P7014.

Enterprise Implementation

Twenty years operationalising governance and transformation inside major listed organisations.

Advisory work that doesn't account for organisational reality tends to produce impressive documents and limited change. Twenty years of operationalising governance and transformation inside major listed organisations - across banking, insurance, energy, healthcare, and telecom - means what gets recommended is shaped by what actually lands inside complex, politically federated environments, not by what looks right from the outside.

Change & Adoption

The discipline that determines whether a governance framework lives or dies.

A governance framework that the organisation doesn't adopt is just a liability: it signals intent without delivering control. The disciplines above produce the architecture; this one determines whether it takes hold. Change management at scale - across enterprise AI programs, global compliance rollouts, and communities of practice - is what separates governance that persists from governance that gets filed.

Thinking on the frontier.

All articles
Governance

Should Is Not a Control: How AI Ethics Built Its Own Graveyard

Governance

AI Governance for the Board

Governance

Interim Policy for Generative AI

Views

Australia's AI Action Plan – An Open Response

Strategy

Implementing AI Ethics: Complex architectures

Strategy

MLOps - Fast-tracking AI Ethics?


Let's close
the gap.

If you're a CRO, CISO, CDO, or board member trying to get ahead of AI risk - rather than catch up to it - I'd like to hear from you.

Whether that's building your first governance framework, hardening an enterprise-scale agentic deployment, stress-testing what your engineers have already built, or preparing your leadership team for what's coming - bring the specific problem. I'll tell you directly where I can help.