For artificial intelligence in government to have a positive impact, it must be trusted by the employees using it and the citizens whose lives may be affected by it. As Jennifer Robinson, global strategic advisor for SAS’ public sector practice, makes clear, AI governance is imperative to ensuring responsible use. So what exactly does AI governance comprise, and how should it be approached and implemented?
There is a disconnect between embracing and governing AI within governments.
A recent IDC Data and AI Impact Report found that while 78% of public sector organisations globally say they fully trust AI, only 40% have invested in the governance and safeguards needed to make that trust well‑founded.
That disconnect matters. AI is no longer confined to pilot projects or back‑office experimentation. It is increasingly embedded in everyday government workflows, automating work once performed manually by civil servants. Increasingly, AI will shape actions and decisions that have real consequences for citizens.
Automation raises the stakes for trust. Can employees trust AI outputs enough to rely on them when making consequential decisions and taking actions? Can citizens trust their governments to use their data responsibly and deploy AI fairly?
AI governance exists to help organisations answer “yes” to those questions. It is the strategic and operational framework that ensures AI is trustworthy, ethical, and compliant. AI governance spans oversight, compliance, operations, and culture to provide the guardrails needed to manage AI responsibly across its entire lifecycle.
AI governance extends beyond compliance
When people hear about governance, they often think of regulations. But regulation isn’t the starting point for governments. Trust is. Policy, in many cases, is the mechanism governments are now using to deliver trust at scale.
One of the greatest misconceptions about AI governance is that it is synonymous with regulatory compliance. Kalliopi Spyridaki, SAS’ chief privacy strategist, notes that compliance is necessary but that governance must come first. “AI governance should not be driven by compliance obligations alone,” Spyridaki emphasises. When organisations treat governance as a downstream requirement, they are forced into reactive oversight that can limit innovation rather than support it.
True governance begins earlier, with clear accountability, risk classification, and transparency embedded into AI systems from their inception. This approach does not stifle innovation; it enables it by creating confidence among leaders, employees, regulators, and the public that AI can be used responsibly at scale.
Spyridaki also underscores that AI governance does not exist in a legal vacuum. Longstanding public sector obligations around data protection, data sharing, security, and access to information already shape how governments deploy technology. AI-specific regulations build on that foundation. Therefore, irrespective of regulatory regimes across jurisdictions, governments should internalise the governance principles of accuracy, accountability, and transparency that are common across the public sector.
Intentionality drives operational trust
In the public sector, the risks of AI range from personal harms (such as unfair benefit determinations) to systemic harms (such as erosion of trust in public institutions). AI governance mitigates these risks by establishing clear standards, accountability mechanisms, transparency requirements, and multidisciplinary oversight structures that include ethics, legal, and domain expertise.
Vrushali Sawant, a SAS data scientist and member of its data ethics practice, emphasises intentionality as the backbone of trustworthy AI. “Intentionality means designing with purpose and accountability,” she says, particularly in public sector environments where trust and equity are paramount.
Governments must begin by asking fundamental questions: Who benefits? Who could be harmed? Is AI the right tool for this problem? This is the ethical enquiry that must then persist throughout the AI lifecycle.
This intentionality extends into putting the principles of ‘responsible AI’ into practice by embedding them into AI systems. Ongoing monitoring, auditing, and remediation mechanisms ensure models remain aligned with policy, values, and public expectations.
Tools like Model Cards, which Sawant describes as “nutrition labels for AI”, play a critical role by documenting purpose, training data, fairness assessments, and limitations. Combined with audits and usage tracking, they transform governance from a checkbox into a living, visible practice that builds trust across technical and non‑technical stakeholders alike.
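As a rough illustration only (not a SAS or government standard), a model card can be captured as structured metadata that travels with the model. The fields and example values below are hypothetical, but they show how purpose, training data, fairness checks, and limitations can be recorded in one reviewable place.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Minimal 'nutrition label' for an AI model (illustrative fields only)."""
    name: str
    purpose: str                      # what the model is for, and what it is not for
    training_data: str                # provenance and scope of the data used
    fairness_assessment: str          # how and when bias was evaluated
    limitations: list[str] = field(default_factory=list)
    owner: str = ""                   # accountable business owner, not just the developer

# Hypothetical public sector example
benefits_triage_card = ModelCard(
    name="benefits-triage-v2",
    purpose="Prioritise benefit claims for human review; not for automated denial.",
    training_data="Historical claims 2019-2023, anonymised and checked for representativeness.",
    fairness_assessment="Disparate impact reviewed across age and region before each release.",
    limitations=["Not validated for emergency claims", "Requires human sign-off on outcomes"],
    owner="Benefits policy team",
)
```

Kept under version control and reviewed alongside audits and usage logs, a record like this gives non-technical stakeholders something concrete to inspect.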
While using responsible AI systems is essential, Sawant emphasises that governments often underestimate risks that are ethical, social, and operational, not just technical. These include fraudulent digital services, deepfake-driven mis- and disinformation, biased automated decisions, and attacks on public sector systems such as smart city infrastructure. Shadow AI, the unsanctioned use of AI tools or applications by employees or end users without the approval or oversight of the IT department, poses major risks including data leaks, intellectual property loss, and compliance breaches. Because these risks do not sit neatly within cybersecurity or IT domains, they require broader governance frameworks that extend beyond technical controls.
A centralised view of models, agents, and use cases becomes crucial to make AI visible, governable, and ready to scale. It helps identify shadow AI and supports a foundational AI governance approach that accelerates innovation.
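One simple way to picture such a centralised view is an inventory that records every model, agent, and use case in operation and whether it has passed governance review. The sketch below is purely illustrative; the asset names and fields are hypothetical.

```python
# Illustrative central AI inventory: anything not yet through governance review
# is a candidate for shadow AI and needs follow-up.
inventory = [
    {"asset": "benefits-triage-v2", "type": "model", "use_case": "claims triage", "reviewed": True},
    {"asset": "citizen-chat-agent", "type": "agent", "use_case": "citizen enquiries", "reviewed": True},
    {"asset": "ad-hoc-translation-tool", "type": "tool", "use_case": "document translation", "reviewed": False},
]

needs_review = [item["asset"] for item in inventory if not item["reviewed"]]
print("Assets awaiting governance review:", needs_review)
```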
Literacy and culture lead to durable AI governance
Governance is as much about people as it is about technology. According to SAS’ principal trustworthy AI specialist, Josefin Rosén, “AI literacy really sits at the heart of building a culture of responsible innovation”. Without a baseline understanding of how AI systems work and where their limitations lie, it is difficult for government leaders, employees, and policymakers to procure, deploy, and oversee AI effectively.
Building a culture that understands AI and AI governance enables more informed decision making, reduces risks, and builds greater trust in AI results. Yet many governments underinvest in educating their workforce in AI and AI governance relative to AI development, allocating only a small portion of their AI budgets to education and oversight.
As governments move toward greater automation, including AI agents and autonomous systems, the need for continuous vigilance has become even more pronounced. Trust does not emerge by accident. It is built intentionally through governance disciplines that evolve alongside technology.
Governments have important work to do to ensure their data and AI systems are as trustworthy as their employees and citizens need and expect them to be. The time to invest in AI governance is now.
---
Author(s): Jennifer Robinson
This article is republished from: Global Government Forum, 02.02.2026

