The rapid advancement of artificial intelligence and digital technologies presents both unprecedented opportunities and fundamental challenges to the rule of law. The question before policymakers is not whether to regulate these technologies, but how to craft frameworks that safeguard constitutional rights and democratic principles while enabling lawful innovation that serves the public interest.
The Governance Crisis
Current negotiations around the UN Global Digital Compact offer a critical opportunity to establish international technology governance norms. However, without proactive rights-based frameworks, concerning trajectories are emerging:
National governments may adopt surveillance and AI systems without democratic oversight, enabling authoritarian control and suppressing civil liberties in the name of efficiency or security. For example, Ghana spent over US$250 million between 2018 and 2021 on a “safe city” project involving over 8,400 CCTV cameras equipped with facial recognition technology, with equipment from Chinese companies streaming data to a national surveillance center, raising concerns about the erosion of democratic norms.
Multilateral institutions, including the UN, risk missing the opportunity to reflect Global South perspectives in digital governance, perpetuating colonial patterns of exclusion in global policymaking. This governance gap is particularly stark given that African governments collectively spend over US$1 billion annually on digital surveillance technologies, yet the data protection laws of many African countries, including Chad, Côte d’Ivoire, Egypt, Malawi, Mali, Senegal, Seychelles, and Tunisia, do not regulate automated processing at all.
Donor agencies may fund digital initiatives that deepen inequities or subject communities to opaque, unaccountable systems, inadvertently undermining the very development goals they seek to achieve.
These trajectories directly undermine SDG 16 (peace, justice, and strong institutions) and the foundational principle of leaving no one behind.
Rights-Based Governance Framework
This paper proposes a comprehensive framework built on five core principles:
1. Human Rights Centrality: All AI governance must be anchored in internationally recognized human rights, particularly privacy, expression, non-discrimination, and democratic participation.
2. Inclusive Institutional Design: Governance bodies must ensure meaningful Global South and civil society representation in policy development.
3. Transparency and Accountability: AI systems affecting human rights require transparent development, public auditing, and clear accountability mechanisms.
4. Capacity Building: International cooperation must prioritize institutional capacity in developing nations and equitable tech access.
5. Precautionary Approach: Given the potential for irreversible harm, governance frameworks should require demonstration of safety and rights-compliance before deployment.
Policy Recommendations
For National Governments:
- Legislate and enforce digital rights protections, including privacy, expression, and transparent use of technology.
- Establish multi-stakeholder oversight bodies (with civil society representation) to monitor AI and other digital implementations.
- Host inclusive consultations before deploying digital systems that significantly affect citizens’ lives.
For Multilateral Institutions (including UN):
- Embed digital rights and civic representation into the architecture of the Global Digital Compact.
- Ensure meaningful civil society participation, particularly voices from marginalized and Global South communities, in governance dialogues.
- Commission regular assessments of rights impacts from digital tools and publish findings.
For Donor Agencies:
- Prioritize support for locally led civil society initiatives that enhance digital accountability and resilience.
- Tie funding for technology programs to explicit human rights safeguards and result-oriented benchmarks.
- Facilitate cross-region civil society networks advocating for inclusive digital governance.
For Technology Companies:
- Adopt rights-by-design principles in AI development processes.
- Submit to independent auditing of systems with significant social impact.
- Engage meaningfully with affected communities and civil society organizations.
- Ensure equitable access to beneficial AI applications.
Conclusion
This is our chance to create AI that respects and protects people’s rights. If we delay, we risk locking in technologies that weaken democracy and deepen existing inequalities.
As the UN negotiates the Global Digital Compact, the first comprehensive global framework for digital cooperation and AI governance, we must seize the opportunity to establish human-centered rules and guidelines for responsible AI development. This means putting justice, fairness, and human dignity at the center of design and development.
About the Institute for AI Policy & Governance (IAPG)
The Institute for AI Policy & Governance works to ensure artificial intelligence development serves the global public interest, focusing on human rights protection and equitable access to AI benefits across all communities.