Beyond Security: Why 'Data Use Governance' is the Essential Architecture for Government AI
From Passive Security to Active Governance of Data Usage
Governments worldwide – from the European Union with its AI Act to the United States with its 'Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence' – are racing to establish frameworks for the (genuinely) safe and trustworthy deployment of Artificial Intelligence.
But beneath these high-level directives lies a universal reality that bridges technology issues and management practices: you can't have responsible AI without first mastering agreed, appropriate ethics and policies governing the collection and use of the data that feeds it.
And that, in turn, requires a new discipline and a new mindset beyond simple data security. It requires rigour and professionalism in data use governance, grounded in watertight ethics, practices and policies.
Government agencies have spent years focusing on data security and related governance – protecting citizens' data from breaches. And yet, compared with the active practice of data use governance, the "data security" mindset is, to a large degree, passive: "set and forget".
An Active Discipline
Data use governance is an "active" discipline. It requires creating comprehensive data inventories, defining clear rules, and automating the enforcement of those rules.
Only through a structured, continuously monitored, action-oriented improvement approach can public sector leaders fully unlock the benefits of AI – that is, without eroding the trust and confidence that is foundational to any government entity's relationship with the public it serves.
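To make the "active discipline" concrete, here is a minimal, hypothetical sketch of what an inventory-plus-enforcement step might look like in code. The record fields, purpose names, and rule are illustrative assumptions, not a description of any agency's actual system.

```python
from dataclasses import dataclass

# Hypothetical data-inventory record: every asset is catalogued with a
# classification and the purposes it has been explicitly approved for.
@dataclass(frozen=True)
class DataAsset:
    name: str
    classification: str        # e.g. "public", "internal", "sensitive-pii"
    approved_uses: frozenset   # purposes this asset may legitimately serve

def may_use(asset: DataAsset, purpose: str) -> bool:
    """Automated enforcement: an AI workload may touch an asset only if
    the requested purpose appears on the asset's approved-use list."""
    return purpose in asset.approved_uses

# Example asset (illustrative): water-billing records are sensitive PII,
# approved only for billing and service notifications.
water_billing = DataAsset(
    name="water_billing_records",
    classification="sensitive-pii",
    approved_uses=frozenset({"billing", "service-notifications"}),
)

print(may_use(water_billing, "billing"))                    # True
print(may_use(water_billing, "law-enforcement-profiling"))  # False
```

The point of the sketch is the shift in posture: rather than guarding data at rest, every proposed *use* is checked against the inventory before access is granted.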
The sweeping AI Executive Order issued by the U.S. White House in October 2023, under then-President Joe Biden, built on the earlier Blueprint for an AI Bill of Rights and put that country's government agencies under a clear mandate to ensure the AI they build and deploy is safe, secure, and trustworthy.
Traditionally, data governance focused on securing highly organised, structured database-and-spreadsheet type data inputs – stored in a central warehouse for predictable reports. By stark comparison, the new reality of data use governance must control how Artificial Intelligence systems and automated agents access a chaotic mix of data from all corners of an agency – from structured databases to unstructured files such as emails, PDFs, and reports.
The core problem is that AI models can indiscriminately access and "learn" from any data they can reach, including sensitive and personally identifiable information (PII). Crucially, once learned by many current AI models, this data cannot be easily removed.
It's Not A Tech Problem, It's A Trust Problem
While the private sector worries about brand damage, the stakes for government are even higher: the erosion of public trust.
When an agency's AI systems misuse citizen data, it isn't just a compliance failure; it's a violation of the social contract.
To demonstrate and uphold that social contract of public trust, agencies must govern data based on four key contexts:
- Data Context (The "What"): What is this data? Is it a public record or a sensitive health document?
- Consent Context (The "Permission"): Did the citizen provide this data specifically for this purpose? Data for a water bill should not be used for law enforcement profiling.
- Regulatory Context (The "Rules"): What specific laws govern the use of this data, such as HIPAA or GDPR?
- Business Context (The "Why"): What is the specific, legitimate public purpose for this AI to use this data?
Answering these four questions is the only way to enforce responsible use policies. Without them, an agency cannot prove its actions are legal, compliant, or ethical.
And the risk is a loss of the citizenry’s trust that is nearly impossible to regain.
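The four contexts above can be read as a single authorisation gate: a use of data proceeds only if every context checks out. The following sketch is purely illustrative – the function name, dictionary fields, and example values are assumptions, not a real agency API.

```python
# Hypothetical sketch: evaluating the four governance contexts before an
# AI system is permitted to use a dataset. Any single failure blocks use.

def authorise_use(data_ctx, consent_ctx, regulatory_ctx, business_ctx):
    """Return (approved, per-context results) for a proposed data use."""
    checks = {
        # Data context (the "what"): the asset must be classified.
        "data": data_ctx["sensitivity"] != "unknown",
        # Consent context (the "permission"): the citizen consented
        # to this specific purpose.
        "consent": business_ctx["purpose"] in consent_ctx["consented_purposes"],
        # Regulatory context (the "rules"): every applicable law is satisfied.
        "regulatory": all(regulatory_ctx["satisfied"].get(law, False)
                          for law in regulatory_ctx["applicable_laws"]),
        # Business context (the "why"): a legitimate public purpose is stated.
        "business": bool(business_ctx.get("legitimate_public_purpose")),
    }
    return all(checks.values()), checks

# Illustrative request: using health records for care coordination.
approved, detail = authorise_use(
    data_ctx={"sensitivity": "sensitive-health"},
    consent_ctx={"consented_purposes": {"care-coordination"}},
    regulatory_ctx={"applicable_laws": ["HIPAA"], "satisfied": {"HIPAA": True}},
    business_ctx={"purpose": "care-coordination",
                  "legitimate_public_purpose": "coordinate patient care"},
)
print(approved)  # True: all four contexts check out
```

Note the design choice: the gate is conjunctive. A use that is legal and useful but outside the consented purpose – water-bill data repurposed for law-enforcement profiling, say – fails the consent check and is blocked.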
From Data Gatekeepers to Trustworthy, Purpose-Driven Innovators
In short, the transition to AI requires government agencies to shift from being reactive data gatekeepers to proactive enablers of responsible, purpose-driven, policy-centric, privacy-protecting innovation. And a growing consensus is emerging among technology leaders that the old, data-centric model of governance is no longer sufficient to underpin that imperative.
Ultimately, mastering data use governance is not just a technical or compliance challenge; it is a prerequisite for any government wanting to innovate responsibly. As agencies move forward, the choice is not if they will adopt AI, but how they will build the public trust necessary for it to succeed. That foundation is built on a proactive, policy-driven, and purpose-based approach to the data they hold.
Note from Author, Jordan Kelly (Editor-in-Chief):
This is a vast area of intensifying government, vendor and public interest. There’s a lot to unpack. I’ll be doing that in future deep dives and subject matter expert interviews.