International standards for AI accountability arrive in most organisations as compliance obligations: something to be documented, reported against, and filed. This is an understandable response to a requirement that can look like overhead. It also misses most of what standards are actually for.
A standard does not primarily tell you what to do. It creates the shared vocabulary that allows accountability claims to travel across organisational boundaries and be assessed by people who were not present when the decisions were made. Without that vocabulary, "responsible AI" means whatever the person asserting it wants it to mean at the moment they assert it. With it, the claim becomes testable.
Why Shared Vocabulary Matters
Consider what happens when an enterprise asks a vendor whether their AI system is governed responsibly. Without a standard, the vendor's answer is largely unchallengeable. They can describe their practices in terms they have selected, omit the aspects that do not reflect well, and frame everything in a way that makes independent verification difficult. The enterprise has no basis on which to push back.
With IEEE P2863 as a reference, the enterprise can ask specific, answerable questions: Do you have documented governance processes that meet the standard's requirements? Can you provide evidence of your risk assessment methodology? What is your process for maintaining documentation of AI systems in production? These are questions the vendor either can or cannot answer, and the answer is either consistent with the standard or it is not.
This is what standards do. They create a common language that allows accountability to be demanded and assessed by parties who do not share a deep technical background. The value is not primarily in the specific requirements. It is in the testability that having shared requirements creates.
What P2863 Actually Addresses
P2863 is a process standard, not a technical specification. It addresses the organisational requirements for AI governance: governance structure, risk assessment processes, documentation obligations, and the accountability mechanisms that connect AI systems to the organisations responsible for them. It does not specify how to build a model or what architecture to use. It specifies what you need to be able to demonstrate about how you govern the models you build and deploy.
This scope is deliberate. A technical specification would become obsolete as the technology evolved. A process standard can remain relevant across technology generations, provided the processes it requires are genuinely the processes that governance requires. The ambition of P2863 is to define what responsible AI governance looks like at the process level, such that compliance is meaningful rather than merely procedural.
The Gap Between Adoption and Implementation
Adopting P2863 as a standard and implementing P2863 as a governance practice are substantially different activities. Adoption requires a board resolution and a policy document. Implementation requires that every AI deployment in the organisation passes through a governance process that meets the standard's requirements, that the documentation of that process is maintained in a form that supports audit, and that the governance function has the standing and capability to enforce the requirements.
Most organisations that have "adopted" AI governance standards are somewhere between adoption and implementation. The gap between the two is where the actual governance work happens, and where most AI governance programmes currently struggle. The standard defines the destination. Getting there requires deliberate programme work that the standard does not prescribe.
Using P2863 Practically
The most immediate practical use of P2863 is as a structure for governance programme design. If the standard requires that you can demonstrate a risk assessment process for each AI deployment, that requirement becomes a design constraint for your governance programme. You build the processes and documentation that would allow you to demonstrate compliance, not because demonstration is the goal, but because the processes that allow demonstration are the processes that create genuine accountability.
The second practical use is in vendor and partner assessment. P2863 provides the questions to ask and the evidence to request. An AI vendor that cannot answer P2863-based governance questions has told you something important about their governance maturity, regardless of what their marketing materials assert.
The third use is in board and audit committee reporting. P2863 provides a structure for reporting on AI governance that is more informative than generic risk language. It allows governance progress to be assessed against a defined standard rather than against internal assertions, which is a more credible basis for board-level assurance.