The Policy Illusion
Every ministry I have worked with in the last several years has, at some point, shown me a document. A framework. A strategy paper. A national AI roadmap stamped with an official seal and a launch date. And in every one of those meetings, I have asked the same question: what has changed on the ground since this was published?
The silence that follows is instructive. Not because the people in the room are evasive (they are often deeply committed, knowledgeable professionals) but because the document and the ground-level reality are operating in different registers. The policy exists. The transformation has not yet begun.
This is what I call the policy illusion: the institutional belief that publishing a governance framework is equivalent to enacting one. It is an understandable conflation. Governments are built around documentation. Legislation, regulation, policy, and procedure are the instruments through which states act. When a ministry produces a credible AI governance framework, it feels like progress, and in one narrow sense, it is. But it is the beginning of the work, not the end of it.
After seventeen years of advising on digital transformation across MENA and the GCC, working with ministries, national authorities, intergovernmental bodies, and development institutions, I have seen this pattern with enough regularity to treat it as structural, not incidental. The compliance gap between what a document says and what happens on the ground is not a matter of political will or budget. It is a matter of culture.
Culture Over Compliance
What does it mean for culture to lead governance? It means starting not with what the regulation requires, but with what the institution is actually capable of doing, and willing to do. These are different questions, and conflating them is where most AI governance initiatives begin their decline.
Bureaucratic inertia is real, and it is not irrational. Civil servants who have spent careers operating within defined risk thresholds have excellent reasons to be cautious about systems that introduce new and poorly understood accountability structures. When an AI system makes a decision that turns out to be wrong, who is responsible? When an algorithm flags a citizen's application as high-risk, who explains that decision, and to whom? These are not abstract questions. They are the questions that keep experienced officials up at night, and they are rarely answered satisfactorily in the frameworks that mandate AI adoption.
Fear of accountability is the dominant cultural barrier I encounter. Not laziness, not technophobia: fear. The incentive structures within most government institutions reward avoiding visible failure far more than they reward innovation. An official who adopts an AI tool and encounters a high-profile error has made a career-limiting decision. An official who delays adoption indefinitely has committed no identifiable sin. Until governance frameworks address this asymmetry directly, compliance will remain surface-level.
"A policy document does not change behaviour. Culture changes behaviour. The question for every AI governance initiative is not 'what have we written?' but 'what have we changed?'"
- Rima Taha

Culture change in government is slow by design. Institutions that serve millions of people cannot pivot like startups, nor should they. But slow does not mean impossible, and managed does not mean unambitious. The ministries I have seen make genuine progress on AI adoption share a common pattern: they invested simultaneously in technical capability, public trust, and internal culture, treating these as parallel tracks rather than sequential phases.
What culture change looks like in practice is less dramatic than the term suggests. It looks like a working group that includes a sceptic from HR alongside the CTO. It looks like a training programme that addresses not just tool usage but accountability frameworks and grievance mechanisms. It looks like leadership that openly acknowledges uncertainty, that says "we are learning as we go, and here is the structure within which we are doing that", rather than projecting false confidence.
The Stakeholder Ecosystem
Government AI governance does not happen in a single ministry or a single working group. It happens across an ecosystem of actors whose interests, incentives, and capacities are radically different, and who must nonetheless arrive at a coherent shared practice.
Civil servants are the implementation layer. They are the people who will actually use, report on, and be judged by AI systems. Their practical concerns (workload, accountability, skill gaps, job security) must be addressed not in a footnote but as a central design consideration. Governance frameworks that treat civil servants as passive recipients of top-down mandates generate the resistance they were designed to prevent.
Elected officials set the political context within which governance operates. They are often the primary audience for framework documents, the "strategy papers" designed to signal ambition rather than guide implementation. The challenge is that political cycles are shorter than institutional culture shifts. A framework launched under one administration may be deprioritised or reversed under the next. Governance design must therefore build durability into its structures: accountability mechanisms, independent review bodies, and cross-party commitments that outlast individual mandates.
Technology vendors are a structural presence in government AI, often supplying the platforms through which governance is enacted. The relationship between a vendor and a ministry is inherently asymmetric in information terms: the vendor understands the technology far better than the ministry, and has a financial interest in continued adoption. Governance frameworks that do not include robust vendor accountability mechanisms (independent audits, explainability requirements, contract terms that protect the public interest) are frameworks with a significant structural gap.
Civil society and the public are the constituencies that government AI ultimately serves, and the most consistently underrepresented stakeholders in governance design. Decisions about AI in public services (welfare assessment, border processing, healthcare allocation) have profound effects on individual lives. Meaningful public participation in governance is not a nice-to-have. It is the difference between a system that serves citizens and one that processes them.
The ministries that navigate AI successfully are not those with the best frameworks but those that invest simultaneously in technical capability, public trust, and internal culture. These three things do not follow from each other automatically.
Governance Implementation Frameworks
The practical sequence for implementing AI governance in a government institution is not identical to the theoretical sequence outlined in most policy documents. What works on the ground follows a different logic, one that begins with honesty about current state, rather than aspiration about future state.
| Dimension | Compliance-First Approach | Culture-First Approach |
|---|---|---|
| Starting point | Legal/regulatory requirement | Institutional readiness assessment |
| Primary actor | Legal/policy team | Multi-stakeholder working group |
| Timeline | Defined by legislation | Organic, milestone-based |
| Success metric | Audit pass rate | Behavioural adoption rate |
| Risk | Surface compliance, no real change | Slower launch, deeper durability |
The governance implementation sequence I recommend has five stages, each of which must be completed before the next begins, not because rigidity is virtuous, but because skipping stages generates the kind of fragile deployments that become cautionary tales.
Diagnostic
A genuine readiness assessment across technical, cultural, and regulatory dimensions. Not a self-assessment by the ministry, but an independent evaluation that surfaces what the institution does not know it does not know. This stage is often skipped in the rush to demonstrate progress, which is precisely why so many implementations fail at stage three.
Stakeholder Architecture
Identify the full ecosystem of actors (champions, sceptics, decision-makers, affected communities) and map the decision gates each will influence. Governance without stakeholder architecture is governance without a theory of change.
Capability Building
Training, tooling, and team structuring must precede deployment, not accompany it. Introducing an AI system into a team that lacks the conceptual or technical vocabulary to work with it creates the conditions for either gaming or avoidance. Both outcomes produce the same result: a system that exists but does nothing.
Phased Rollout
Pilot in a low-stakes context, where errors are recoverable, learning is possible, and the blast radius of a mistake is contained. Document what you learn. Revise before scaling. The political pressure to demonstrate broad deployment quickly is the single most reliable predictor of implementation failure.
Governance Review
Quarterly outcome reviews with an external accountability mechanism that is independent of the ministry's own reporting chain. Governance without external review is governance that measures its own success, which is not governance at all.
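The sequencing rule (no stage begins before its predecessor is complete) is simple enough to state in code. The sketch below is illustrative only: the stage names come from this framework, while the tracker class and its method are hypothetical scaffolding, not a tool any ministry actually uses.

```python
from enum import IntEnum


class Stage(IntEnum):
    """The five implementation stages, in the order described above."""
    DIAGNOSTIC = 1
    STAKEHOLDER_ARCHITECTURE = 2
    CAPABILITY_BUILDING = 3
    PHASED_ROLLOUT = 4
    GOVERNANCE_REVIEW = 5


class ImplementationTracker:
    """Records stage completion and refuses to let any stage be skipped."""

    def __init__(self) -> None:
        self.completed: set[Stage] = set()

    def complete(self, stage: Stage) -> None:
        if stage > Stage.DIAGNOSTIC:
            predecessor = Stage(stage - 1)
            if predecessor not in self.completed:
                raise RuntimeError(
                    f"Cannot complete {stage.name}: {predecessor.name} is unfinished."
                )
        self.completed.add(stage)


tracker = ImplementationTracker()
tracker.complete(Stage.DIAGNOSTIC)
tracker.complete(Stage.STAKEHOLDER_ARCHITECTURE)
# tracker.complete(Stage.PHASED_ROLLOUT)  # raises: CAPABILITY_BUILDING is unfinished
```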
The question I encounter most frequently when presenting this framework to government audiences is how governance success should be measured.
Measuring Governance Success
If the primary measure of AI governance success is audit pass rates, then the primary incentive is to pass audits. This sounds obvious stated plainly, but it is the precise dynamic that has made compliance-first governance so ineffective. What you measure is what you get, and most governance frameworks measure the wrong things.
The metrics that actually indicate governance success are behavioural and systemic, not documentary. They include:
- Adoption rates by department: Not just whether a system has been deployed, but the proportion of eligible staff actively using it for the decisions it was designed to support. Low adoption rates in deployed systems are a governance signal, not a communications problem.
- Public trust levels: Measured through independent survey mechanisms, not government self-reporting. Public trust in AI-assisted public services is both an intrinsic governance goal and a leading indicator of adoption sustainability. A system that citizens actively distrust will generate political pressure that eventually forces reversal.
- Audit exception rates: The frequency with which AI system outputs are overridden by human decision-makers, and the documented reasons for those overrides. High exception rates may indicate a poorly calibrated system; low exception rates may indicate that oversight is not being exercised meaningfully. Both require investigation.
- Speed and quality of decision-making: Not just throughput, but quality-adjusted throughput. AI governance that accelerates decisions while degrading their accuracy is not an improvement. Independent sampling of decision quality before and after AI adoption provides the baseline evidence needed to make this assessment.
- Capability benchmarks per department: Assessed annually, measuring not just tool literacy but conceptual understanding. Can staff identify when an AI recommendation should be questioned, and do they have the institutional permission to do so?
None of these metrics are easy to collect. All of them require investment in monitoring infrastructure that most governance frameworks do not budget for. This is not accidental: governance frameworks written by policy teams are designed to govern, not to measure. Measurement is treated as an operational afterthought. Reversing this (making measurement design as central to governance as regulation design) is one of the most consequential changes a ministry can make.
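As a concrete illustration of what measurement design might mean in practice, here is a minimal sketch that computes three of the metrics above from decision records. It is a sketch under stated assumptions, not a working system: DecisionRecord, override_reason, and every other name in it are hypothetical, and a real implementation would be built on the ministry's own case-management data.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class DecisionRecord:
    """One AI-assisted decision. The fields are hypothetical, for illustration."""
    staff_id: str
    ai_recommendation: str
    final_decision: str
    override_reason: Optional[str]  # documented reason when a human overrode the AI


def adoption_rate(active_users: set[str], eligible_staff: set[str]) -> float:
    """Share of eligible staff actively using the system, not merely trained on it."""
    return len(active_users & eligible_staff) / len(eligible_staff)


def exception_rate(records: list[DecisionRecord]) -> float:
    """Share of AI recommendations overridden by a human decision-maker.
    Very high and very low values both warrant investigation."""
    overrides = [r for r in records if r.final_decision != r.ai_recommendation]
    return len(overrides) / len(records)


def undocumented_override_rate(records: list[DecisionRecord]) -> float:
    """Share of overrides with no documented reason."""
    overrides = [r for r in records if r.final_decision != r.ai_recommendation]
    if not overrides:
        return 0.0
    return sum(1 for r in overrides if not r.override_reason) / len(overrides)
```

One design choice is worth noting: undocumented_override_rate treats an override without a recorded reason as a governance signal in its own right, because oversight that leaves no paper trail cannot be reviewed.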
The institutions that will navigate the AI transition successfully are not those that publish the most sophisticated frameworks. They are those that build the institutional muscle to learn from implementation, to take the gap between what a document says and what happens on the ground not as an embarrassment to be managed, but as information to be used. That, ultimately, is what governance is for.