De-Risking
Frontier AI.

Most AI governance fails before the engineering begins. I work across the full distance - from board risk appetite to production guardrails and evals.

See how it works
"You can't run your business on AI that's governed like a side hustle."

AI governance fails in a predictable place. Not in the boardroom, where frameworks get approved. Not in the risk committee, where they get celebrated. It fails when it meets the reality of AI development - where documents full of "shoulds" meet a team that deals in specifics and a technology that evolves daily.

There should be guardrails. Models should be evaluated. Agents should be catalogued. Data provenance should be maintained.

Should is not a control. It never will be.

And the development team, left to interpret vague mandates disconnected from technical reality, will make their own decisions. The framework joins the graveyard of well-intentioned slides. The governance gap widens.

87% of executives claim to have clear AI governance frameworks — but fewer than 25% have fully implemented the technical tools to manage risks.

Source: IBM Newsroom

Production-Ready Governance

Risk triage, use-case funnels, and leadership upskilling that treats AI as a regulated business asset - not a technology experiment run by whoever raised their hand first.

From Policy to Production

Most engagements deliver the policy and leave. I stay in the room with the engineers - translating governance requirements into specific, implementable controls for your actual stack. Your pipelines, your agent orchestration, your evaluation infrastructure.

Agentic AI Safety

Autonomous multi-agent systems don't wait for governance to catch up. Decision-traceability, red-teaming, observability, and constraint architecture for AI that acts without a human in the loop.

Matt Newman — AI governance and safety advisor

15+

Years in AI/ML

10+

Years AI Governance & Safety

30+

Enterprise Engagements

From neural network research to international AI standards.

I've spent twenty years at the intersection of technical AI and organisational reality - from early ML research and enterprise AI agent deployment, through large-scale global change programs at Shell and Philips, to IEEE standards work, OWASP membership, and the AI safety research that will define AI governance for the decade ahead.

"I help executives turn a vision into reliable assets, and engineers turn experiments into trusted systems."

That's what makes this work. A risk taxonomy that maps to your actual technology. Governance requirements that your engineers can implement without interpretation. Board assurance that is technically honest, delivered to a board confident in having those conversations.

At SingularityNET I built safety frameworks for frontier AGI systems. At nib I'm operationalising ISO 42001 and NIST AI RMF through governance boards, LLM evaluation pipelines, and agentic observability infrastructure - sitting with engineers to work through exactly how guardrails land in their stack, and sitting with the risk committee to explain what that means in terms they can act on.

IEEE P2863 · ISO 42001 · NIST AI RMF · EU AI Act · AGI Safety · MLOps · Change Leadership
Work with me

Four pillars.
One outcome.

A framework built across twenty years at the intersection of technical depth and organisational reality. Designed to span the full distance - from board-level risk strategy to the engineering controls that make governance real.

Strategic Clarity

Converting AI complexity into decisions your leadership team can own. Every recommendation traces to P&L impact, risk reduction, or regulatory positioning — not just technical compliance.

Auditable Governance

IEEE- and ISO-standards-based frameworks that survive an audit committee meeting. Not governance theatre — systems your regulators, board, and legal team can rely on when they need to.

Security & Safety Assurance

From neural network research through frontier AGI safety work. Evaluation pipelines, red-teaming, and guardrails that scale from your first LLM deployment to autonomous multi-agent systems operating without human oversight.

Organisational Adoption

Governance frameworks fail when the organisation doesn't own them. I embed capability — training leaders, building governance boards, and running the change programs that make AI risk management a durable operating practice, not a project that ends with the final slide deck.

AI governance isn't a project.
It's an operating model.

Right now, somewhere in your organisation, someone is deploying an AI model that your risk team hasn't evaluated. An engineer is making a governance decision by default because nobody could articulate real requirements. A vendor is promising "responsible AI" with no mechanism you can audit, and no-one who can challenge their pitch.

The governance gap isn't a future problem. It's already costing you - in regulatory exposure, in audit unreadiness, in decisions you won't be able to explain when you need to, and in teams too nervous to use AI at all.

Address the Gap

<1%

of organisations have fully operationalised responsible AI

Source: World Economic Forum

<28%

of respondents are confident they can secure AI used in core business operations

Source: Cloud Security Alliance & Google Cloud

>60%

are already using or plan to use agentic AI this year

Source: Cloud Security Alliance & Google Cloud

Thinking on the frontier.

All articles
Governance

Should Is Not a Control: How AI Ethics Built Its Own Graveyard

Governance

AI Governance for the Board

Governance

Interim Policy for Generative AI

Strategy

Implementing AI Ethics: Complex architectures

Strategy

MLOps - Fast-tracking AI Ethics?


Let's close
the gap.

If you're a CRO, CISO, CDO, or board member trying to get ahead of AI risk — rather than catch up to it — I'd like to hear from you.

Whether that's building your first governance framework, hardening an enterprise-scale agentic deployment, stress-testing what your engineers have already built, or preparing your leadership team for what's coming — bring the specific problem. I'll tell you directly whether and how I can help.