AI-ENABLED PUBLIC GOVERNANCE IN DEVELOPING STATES: SERVICE DELIVERY GAINS, ACCOUNTABILITY RISKS, AND A PRACTICAL RISK-BASED REGULATORY MODEL
DOI: https://doi.org/10.52152/wja5db40

Keywords: AI governance, digital government, accountability, risk-based regulation, public administration, developing states, trustworthy AI, algorithmic impact assessment, public sector innovation

Abstract
Governments are moving quickly from small artificial intelligence (AI) pilots to operational use in public administration, especially in citizen services, compliance, fraud detection, and planning. This shift is no longer theoretical: the United States’ consolidated federal inventory reported more than 1,700 AI use cases across agencies, including a significant subset classified as rights- or safety-impacting. At the same time, evidence from advanced administrations shows that well-designed “assistive” systems can produce measurable gains, such as sharply reduced response times in service workflows and time savings for public servants.
However, without clear governance, AI can weaken accountability through opaque decision pathways, biased outcomes linked to poor or unrepresentative data, staff over-reliance (“automation bias”), and weak or inaccessible channels for citizens to challenge outcomes. These risks are more acute in developing states, where institutional capacity, procurement maturity, data governance, and independent oversight are often uneven. The central policy problem is therefore not whether governments should use AI, but how they can adopt it while preserving procedural fairness, explainability, and public trust.
This paper proposes a practical, risk-based governance model for the public sector that translates international principles into operational controls that resource-constrained administrations can implement. The model classifies government AI into low-, medium-, and high-risk uses, and aligns safeguards to impact. For high-risk systems, such as eligibility and sanctions decisions, law-enforcement support, and biometric identification, the paper specifies minimum deployment requirements: named accountability ownership, meaningful human oversight, pre-deployment impact assessment, proportionate explainability, data quality and fairness testing, security controls with audit trails, enforceable procurement clauses for vendor accountability, and accessible grievance and review mechanisms.
The paper also provides a phased implementation roadmap: (i) governance rules, procurement templates, and an AI registry; (ii) testing, monitoring, and audits; and (iii) stronger independent oversight, transparency, and redress. The paper’s contribution is a governance framework that enables service delivery gains while preventing the most damaging failure mode in public administration: high-impact automation that becomes effectively unchallengeable.
License
Copyright (c) 2026 Lex localis - Journal of Local Self-Government

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


