Most conversations about AI governance start with risk. Risk of bias, risk of hallucination, risk of regulatory non-compliance. These are real concerns. But framing governance purely as risk management leads to systems that are heavy, slow, and adversarial — systems that teams work around rather than through.
I think governance is fundamentally a design problem: the question is not only which rules to enforce, but how teams experience them.
When a team inside a large organization wants to deploy an AI model, they encounter a series of questions: Is this model approved? What data was it trained on? Has it been evaluated for fairness? Who approved it? What are the usage constraints?
These are valid questions. But how they're asked — the interface — determines whether governance is a bottleneck or an enabler.
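One way to make that interface concrete is to treat the questions above as fields in a machine-readable record rather than prompts on a form. The sketch below is illustrative, not a real platform's API; the names (`ModelRecord`, `deployment_blockers`) are hypothetical, and a production system would track far more metadata.

```python
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    """Hypothetical machine-readable answers to the governance questions above."""
    name: str
    approved: bool
    approved_by: str
    training_data_sources: list
    fairness_evaluated: bool
    usage_constraints: list = field(default_factory=list)

    def deployment_blockers(self) -> list:
        """Return the governance questions still blocking deployment."""
        blockers = []
        if not self.approved:
            blockers.append("model not approved")
        if not self.training_data_sources:
            blockers.append("training data undocumented")
        if not self.fairness_evaluated:
            blockers.append("fairness evaluation missing")
        return blockers

# A team can ask "what's blocking us?" and get a precise answer.
record = ModelRecord(
    name="support-triage-v2",
    approved=True,
    approved_by="ml-review-board",
    training_data_sources=["internal-tickets-2023"],
    fairness_evaluated=False,
)
print(record.deployment_blockers())  # ['fairness evaluation missing']
```

The point of the sketch is the interface: the same questions, but answered by structured data the pipeline can act on, instead of prose an approver has to read.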
A well-designed governance system makes the right thing easy. It surfaces relevant policies at the right moment. It automates evaluations that can be automated. It creates clear, auditable records without requiring teams to fill out 40-page forms.
One pattern I've become convinced of: governance policies should be expressed as code, not documents. When a policy lives in a PDF, it gets interpreted differently by every team. When it lives as an automated check in a pipeline, it gets applied consistently.
This doesn't mean removing human judgment. It means encoding the easy decisions so humans can focus on the hard ones.
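As a minimal sketch of what policy-as-code can look like, here is an automated check that encodes two easy rules and routes a borderline case to a human reviewer. The names (`ModelSubmission`, `check_deployment_policy`) and the thresholds (an 0.80 accuracy floor, a 5-point fairness gap) are invented for illustration; real policies would come from your organization's actual standards.

```python
from dataclasses import dataclass

@dataclass
class ModelSubmission:
    contains_personal_data: bool
    eval_accuracy: float
    max_group_accuracy_gap: float  # worst accuracy gap across demographic groups

@dataclass
class PolicyResult:
    status: str    # "pass", "fail", or "needs_review"
    reasons: list

def check_deployment_policy(sub: ModelSubmission) -> PolicyResult:
    """Encode the easy decisions; escalate the hard ones to a human."""
    reasons = []
    # Easy, automatable rules: applied identically for every team,
    # with no room for per-team interpretation of a PDF.
    if sub.contains_personal_data:
        reasons.append("personal data requires privacy review")
    if sub.eval_accuracy < 0.80:
        reasons.append("accuracy below the 0.80 deployment floor")
    # A hard call: a large fairness gap is escalated, not auto-failed,
    # so a human reviewer can weigh context the check cannot see.
    if sub.max_group_accuracy_gap > 0.05:
        return PolicyResult("needs_review",
                            reasons + ["fairness gap exceeds 5 points"])
    return PolicyResult("fail" if reasons else "pass", reasons)

result = check_deployment_policy(
    ModelSubmission(contains_personal_data=False,
                    eval_accuracy=0.91,
                    max_group_accuracy_gap=0.08))
print(result.status)  # needs_review
```

Because the check runs in the pipeline, every team gets the same answer for the same submission, and the `needs_review` branch is exactly where human judgment re-enters.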
The end user of a governance platform isn't an auditor — it's a builder. An ML engineer, a product manager, a data scientist. If the system doesn't respect their workflow, they'll route around it. The best governance systems feel less like compliance checkpoints and more like guardrails on a highway — present, protective, but not in your way.
We're still early in figuring out what good AI governance looks like in practice. The regulatory landscape is shifting. The technology is moving fast. But I believe the organizations that treat governance as a design discipline — not just a legal one — will move faster and build more responsibly.