"You can't run your business on AI that's governed like a side hustle."
AI governance fails in a predictable place. Not in the boardroom, where frameworks get approved.
Not in the risk committee, where they get celebrated. It fails when it meets the reality of AI development -
where documents full of "shoulds" meet a team that deals in specifics and a technology that evolves daily.
There should be guardrails. Models should be evaluated. Agents should be catalogued. Data provenance should be maintained.
Should is not a control. It never will be.
And the development team, left to interpret vague mandates disconnected from technical reality, will make
their own decisions. The framework joins the graveyard of well-intentioned slides.
The governance gap widens.
87% of executives claim to have clear AI governance frameworks —
but fewer than 25% have fully implemented the technical tools to manage risks.
Source: IBM Newsroom
Production-Ready Governance
Risk triage, use-case funnels, and leadership upskilling that treats AI as a regulated business
asset - not a technology experiment run by whoever raised their hand first.
From Policy to Production
Most consultancies deliver the policy and leave. I stay in the room with the engineers -
translating governance requirements into specific, implementable controls for your actual stack.
Your pipelines, your agent orchestration, your evaluation infrastructure.
Agentic AI Safety
Autonomous multi-agent systems don't wait for governance to catch up. Decision-traceability,
red-teaming, observability, and constraint architecture for AI that acts without a human in the loop.
15+
Years in AI/ML
10+
Years AI Governance & Safety
30+
Enterprise Engagements
About
From neural networks research to international AI standards.
I've spent twenty years at the intersection of technical AI and organisational reality - from
early ML research and enterprise AI agent deployment, through large-scale global change programs
at Shell and Philips, to shaping IEEE standards, contributing through OWASP, and pursuing the
AI safety research that will define AI governance for the decade ahead.
"I help executives turn a vision into reliable assets, and engineers turn experiments into trusted systems."
That's what makes this work. A risk taxonomy that maps to your actual technology. Governance
requirements that your engineers can implement without interpretation. Board assurance that is
technically honest, delivered to a board confident in having those conversations.
At SingularityNET I built safety frameworks for frontier AGI systems. At nib I'm
operationalising ISO 42001 and NIST AI RMF through governance boards, LLM evaluation pipelines,
and agentic observability infrastructure - sitting with engineers to work through exactly how
guardrails land in their stack, and sitting with the risk committee to explain what that means
in terms they can act on.
IEEE P2863 ISO 42001 NIST AI RMF EU AI Act AGI Safety MLOps Change Leadership
A framework built across twenty years at the intersection of technical depth and organisational
reality. Designed to span the full distance - from board-level risk strategy to the engineering
controls that make governance real.
01
Strategic Clarity
Converting AI complexity into decisions your leadership team can own. Every recommendation traces to P&L impact, risk reduction, or regulatory positioning — not just technical compliance.
02
Auditable Governance
IEEE- and ISO-standards-based frameworks that survive an audit committee meeting. Not governance theatre — systems your regulators, board, and legal team can rely on when they need to.
03
Security & Safety Assurance
From neural network research through frontier AGI safety work. Evaluation pipelines, red-teaming, and guardrails that scale from your first LLM deployment to autonomous multi-agent systems operating without human oversight.
04
Organisational Adoption
Governance frameworks fail when the organisation doesn't own them. I embed capability — training leaders, building governance boards, and running the change programs that make AI risk management a durable operating practice, not a project that ends with the final slide deck.
Impact
AI governance isn't a project. It's an operating model.
Right now, somewhere in your organisation, someone is deploying an AI model that your risk
team hasn't evaluated. An engineer is making a governance decision by default because nobody
could articulate real requirements. A vendor is promising "responsible AI" with no mechanism you
can audit, and no-one who can challenge their pitch.
The governance gap isn't a future problem. It's already costing you - in regulatory exposure,
in audit unreadiness, in decisions you won't be able to explain when you need to, and in teams too nervous to use AI at all.
Should Is Not a Control: How AI Ethics Built Its Own Graveyard
The discipline that was supposed to prevent AI harm produced frameworks, principles, and declarations — then largely watched as the industry did whatever it was going to do anyway. The same pattern is repeating in AI governance right now.
What Your Audit Committee Should Be Asking About AI Decisions (And Why They're Not)
Most audit committees still treat AI governance as an IT risk. By the time they catch up, the decisions have already been made and the liability has already accrued.
Using large language models to evaluate large language models introduces a class of systematic bias that most evaluation pipelines are not designed to detect.
Human Resources: Empowering the organisation to make responsible-use decisions
The rapid advancement of Artificial Intelligence (AI) and other emerging technologies has created new opportunities and challenges for businesses, requiring them to adapt and evolve.
Ethical debt in artificial intelligence refers to the accumulated cost of unresolved ethical issues that arise during a system's design, development, and deployment.
Quantum Computing & AI: Making a tricky explanation impossible?
Innovation in technology continues to accelerate, and the advent of quantum computing is undoubtedly one of the most significant developments in recent years.
Emerging technologies, particularly artificial intelligence (AI), machine learning (ML), and generative AI, present both opportunities and challenges for businesses across industries.
Generative AI offers significant opportunities for business transformation through automating content generation, enhancing creativity, and unlocking new revenue streams.
Thank you for providing to the people of Australia an opportunity to respond to key questions regarding Australia’s AI Strategy in the form of the AI Action Plan Consultation Paper.
The reality of AI adoption in our private and public institutions struggles to fit into frameworks created to promote responsible and ethical use - frameworks that focus heavily on the development process as the locus for effecting positive outcomes.
MLOps, the big bet for scaling machine learning, promises seamless development through to in-life use of models via automated DevOps CI/CD workflows. But what does this mean for an AI ethics discipline that has focused its firepower on single-shot development projects?
If you're a CRO, CISO, CDO, or board member trying to get ahead of AI risk — rather than
catch up to it — I'd like to hear from you.
Whether that's building your first governance framework, hardening an enterprise-scale agentic
deployment, stress-testing what your engineers have already built, or preparing your leadership
team for what's coming — bring the specific problem. I'll tell you directly whether and how
I can help.