FEATURE | AI GOVERNANCE | SINGAPORE
As 88 nations endorsed a landmark AI declaration in New Delhi, the world’s third-ranked AI hub found itself at a crossroads between global ambition and domestic urgency — armed with the most sophisticated voluntary governance architecture in Asia, yet constrained by the very logic that made it possible.
By Analysis Desk | February 22, 2026
Sources: AFP, Wikipedia, IMDA, MAS, Smart Nation Singapore, HRM Asia, CSIS, IAPP, Budget 2026 statements

I. The Summit, the Declaration, and the Silence of Binding Law
When the gavel fell in New Delhi on February 21, 2026, signalling the close of the AI Impact Summit, the applause was genuine — and the ambiguity was no less so. The New Delhi Declaration on AI Impact, endorsed by 88 signatories including the United States, China, the European Union, Russia, and Singapore, represented the broadest multilateral consensus on artificial intelligence yet achieved. It called for AI that is ‘secure, trustworthy and robust,’ acknowledged AI as ‘an inflection point in the trajectory of technological evolution,’ and urged nations to ensure that AI’s benefits are ‘shared by humanity.’
What it did not do was bind anyone to anything.
“The fact that this declaration drew such wide endorsement tells you what kind of agenda it is: one that is AI-industry approved, not one that meaningfully protects the public.” — Amba Kak, AI Now Institute
Critics were swift. Amba Kak, co-executive director of the AI Now Institute, called the outcome ‘another round of generic voluntary promises.’ The declaration’s broad endorsement — particularly by the United States, which had refused to sign the preceding Paris summit statement in 2025 — was read not as a triumph of multilateral cooperation but as evidence of the declaration’s harmlessness to powerful industry interests. The US delegation, led by Michael Kratsios, had explicitly stated that Washington ‘totally rejects global governance of AI,’ and yet found the New Delhi text sufficiently innocuous to append its signature.
Computer scientist and AI safety campaigner Stuart Russell offered a more measured reading, calling the commitments ‘not completely inconsequential’ while urging countries to build toward ‘binding legal commitments to protect their peoples.’ That gap — between the voluntary architecture that produced 88 signatures and the binding frameworks that might actually constrain harm — is precisely the terrain on which Singapore’s own AI governance story unfolds.

II. Singapore’s Position: Third in the World, First in Asia
Singapore arrived at New Delhi not as a passive signatory but as one of the world’s most architecturally sophisticated AI governance states. Tortoise Media’s Global AI Index ranks the city-state third globally — behind only the United States and China — and describes it as ‘Asia’s most dynamic AI hub after China.’ The International Monetary Fund estimates that approximately 77 percent of Singapore’s employed workers are highly exposed to AI — substantially above the advanced-economy average of 60 percent and dramatically above the emerging-market average of 40 percent — owing to Singapore’s unusually high concentration of knowledge-sector employment.
Since launching its first National AI Strategy in 2019, Singapore has constructed a layered governance ecosystem that includes: the Model AI Governance Framework (2019, updated 2020), which provides voluntary principles-based guidance for private sector deployment; the AI Verify testing toolkit, now operated by the AI Verify Foundation with over 90 member organisations by 2025 and a Global Model Evaluation Toolkit aligned with OECD and GPAI standards; the Singapore AI Safety Red Teaming Challenge (editions in 2025 and 2026), which tested GenAI applications for data leakage risks across English and regional Asian languages; the Singapore Conference on AI (SCAI), which in April 2025 convened more than 100 voices from academia, industry and government, culminating in the Singapore Consensus on Global AI Safety Research Priorities; and most recently, the Model AI Governance Framework for Agentic AI, unveiled at the World Economic Forum on January 22, 2026 — the first governance framework of its kind in the world dedicated specifically to autonomous AI agents.
This is a remarkable institutional portfolio for a city-state of 5.8 million. But the word that recurs throughout every framework, every toolkit, every initiative, is the same word that haunts the New Delhi Declaration: voluntary.
“There aren’t any AI-specific laws or AI enforcement agencies in Singapore. Enforcement is limited to existing laws, such as those governing data protection, cybersecurity, copyright and online safety.” — IAPP Global AI Governance Overview, 2025

III. Voluntary by Design: A Strategic Choice and Its Tensions
Singapore’s preference for non-binding, principles-based governance is not an oversight or a failure of political will. It is a deliberate strategic positioning. As one of the world’s most trade-dependent open economies, Singapore has an acute institutional interest in remaining a hospitable location for technology investment. Regulatory regimes that are too prescriptive risk deterring the very industry activity — the AI Centres of Excellence, the hyperscaler cloud investments, the research partnerships with Google DeepMind, Microsoft, and Anthropic — that underpin the city-state’s ambition to be the region’s AI hub.
The logic runs deeper still. Singapore’s regulatory architecture has historically operated through what scholars of small-state governance describe as competitive advantage through credibility rather than coercive compliance — establishing itself as the trusted interlocutor between East and West, a jurisdiction that both China-adjacent and US-aligned firms can regard as a neutral, reliable operating environment. Mandatory AI law of the kind being developed in the European Union would risk disrupting this positioning, not least because Singapore’s tech ecosystem is deeply integrated into US-led supply chains.
This explains why Singapore championed the ASEAN Guide on AI Governance and Ethics (released during the fourth ASEAN Digital Ministers’ Meeting in February 2024) and why its AI Verify Foundation has worked intensively on global assurance interoperability — on making Singapore’s voluntary testing standards mutually recognisable with OECD and GPAI criteria. The goal is not regulation per se but governance through technical standard-setting: shaping the norms by which AI systems are evaluated, without constraining deployment through law.
Yet the same strategic logic that makes Singapore effective at multilateral AI norm-setting also exposes a structural vulnerability. Voluntary frameworks, no matter how technically sophisticated, cannot compel compliance from firms or foreign governments. And as the New Delhi Declaration demonstrated, the international community’s capacity to produce wide endorsement correlates inversely with the stringency of the obligations endorsed.

IV. The Agentic AI Framework: A World First, Still Voluntary
Singapore’s January 2026 Model AI Governance Framework for Agentic AI deserves particular scrutiny in the context of the New Delhi summit, because it illustrates both what Singapore does uniquely well and where the limits of voluntary governance become most acute.
Unlike conventional generative AI — which responds to prompts — agentic AI systems can independently reason, plan, and execute tasks: updating databases, authorising payments, managing workflows, adapting dynamically to their environment. The risks are commensurately greater: unauthorised actions, cascading failures, data leakage, and what the framework terms ‘automation bias,’ the tendency to over-trust systems that have previously performed reliably. The framework is structured around four governance dimensions: assessing and bounding risks upfront; ensuring meaningful human accountability through defined approval checkpoints; establishing technical safeguards throughout the deployment lifecycle; and conducting regular audits with training to recognise failure modes.
According to Workday research, 79 percent of organisations in Singapore are already deploying or piloting AI agents. The framework thus addresses an immediately practical governance gap, not a theoretical future risk. Legal commentators at Hogan Lovells have noted that ‘although the MGF does not impose binding legal obligations, it provides a strong indication of Singapore’s regulatory trajectory and establishes practical best practices for industry adoption.’ Whether that trajectory will eventually arrive at binding law — and on what timeline — remains Singapore’s central AI governance question.
79% of organisations in Singapore are already deploying or piloting AI agents — the fastest adoption rate in Southeast Asia.

V. The Workforce Question: Singapore’s Most Pressing Domestic Imperative
If the international AI governance debate is characterised by an excess of declaration and a deficit of enforcement, Singapore’s domestic AI challenge is, if anything, more urgent. The IMF’s assessment that 77 percent of Singaporean workers face high AI exposure is not merely a macroeconomic statistic. It is a social and political fact that PM Lawrence Wong acknowledged directly in his Budget 2026 statement on February 12, days before the New Delhi summit opened.
Wong announced the formation of a National AI Council, chaired by himself, to coordinate Singapore’s AI strategy across four ‘missions’ in advanced manufacturing, transport and connectivity, financial services, and healthcare. He also announced that SkillsFuture Singapore (SSG) and Workforce Singapore (WSG) would be merged into a single statutory board under joint oversight of the Ministry of Education and the Ministry of Manpower — a structural reform designed to create a ‘one-stop shop for skills training, career guidance, and job matching services.’ The objective, in Wong’s framing, is to ensure that ‘every Singaporean who is willing to adapt and learn will continue to secure a good job and earn a good living.’
The scale of the challenge is substantial. Singapore’s Civil Service College scenario analysis has described a trajectory in which knowledge workers in law, brokerage, and management consulting face pay cuts or layoffs as AI augmentation deepens. Research commissioned for Budget 2026 found that approximately nine in ten organisations have already changed or displaced roles due to AI. The World Economic Forum’s Future of Jobs Report 2025 projects that AI will create 170 million new roles globally by 2030 while displacing 92 million — a net gain of 78 million jobs, but one that masks profound distributional consequences, with displacement concentrated at the lower end of the skill spectrum and gains accruing disproportionately to capital and to workers whose skills complement AI.
For Singapore, this distributional risk has a specific structural dimension. As CSIS analysts have documented, the city-state hosts a substantial migrant workforce concentrated in construction, domestic services, manufacturing, and logistics. These workers are largely excluded from SkillsFuture and related national upskilling programmes, yet they are among the most exposed to robotisation and automation. If displacement pushes them back to their home countries, Singapore will in effect export the social costs of AI-driven disruption to source economies — undermining its commitments to narrowing the development gap among ASEAN member states and making the city-state’s claim to inclusive AI leadership considerably harder to sustain.
“Fear cannot be Singapore’s response” to AI disruption. The government’s competitive advantage lies not in building frontier models but in deploying AI effectively, responsibly and at speed. — PM Lawrence Wong, Budget 2026

VI. Singapore as Governance Exporter: The ASEAN Dimension
Singapore’s most consequential contribution to AI governance may ultimately be less about what it does domestically and more about what it models and exports regionally. As a founding member of the Global Partnership on AI, an active contributor to the OECD AI Policy Observatory (despite not being an OECD member), and the convening authority behind the Singapore Consensus on Global AI Safety Research Priorities, Singapore occupies a structural position as a broker between AI governance cultures that might otherwise speak past each other.
Within ASEAN specifically, Singapore’s governance frameworks function as a de facto regional template. The ASEAN Guide on AI Governance and Ethics, released in February 2024, bears the unmistakable imprint of Singapore’s Model Framework. Countries in the region that lack the regulatory capacity to develop comprehensive domestic AI governance from scratch — which is most of them — effectively default to Singapore’s architecture as a reference standard. This is governance influence through technical credibility, not political coercion.
The agentic AI framework released in January 2026 has already been described by observers in resource-constrained public sector environments as ‘a strategic blueprint’ for digital government beyond Singapore’s borders. Former Maldives Minister of State Mohamed Shareef wrote that the framework’s explicitly iterative design — sector-agnostic, principle-based, built for continuous updating rather than comprehensive one-off legislation — is particularly suited to fast-developing economies that cannot afford to wait for perfect regulatory frameworks before acting. The insight is not merely operational; it is a model for navigating the fundamental mismatch between the pace of AI development and the pace of democratic legislative process.

VII. Financial Services: Where Voluntary Governance Meets Regulatory Teeth
There is one sector in Singapore where the transition from voluntary guidance toward something approaching binding obligation is already visible: financial services. In November 2025, the Monetary Authority of Singapore issued a consultation paper proposing Guidelines on AI Risk Management for all financial institutions under its purview. Unlike the Model AI Governance Framework, which is explicitly voluntary, the MAS guidelines would operate within MAS’s supervisory framework — meaning that non-compliance carries regulatory consequences, even if the guidelines themselves are framed as expectations rather than hard law.
The proposed guidelines cover board-level oversight of AI risk management; AI inventory maintenance and materiality assessment; lifecycle controls across data management, fairness, transparency, explainability, and human oversight; and third-party risk management. The consultation closed in January 2026. This is the closest Singapore has yet come to sector-specific mandatory AI governance, and it is notable that the sector chosen is the one in which Singapore’s regulatory credibility is most established and most economically consequential.
The MAS guidelines may represent a template for what Singapore’s AI governance transition could look like more broadly: a gradual migration from voluntary frameworks to sector-specific supervisory expectations, calibrated to the risk profile of each domain, preserving the innovation-friendly posture that attracts investment while progressively closing the enforcement gap that critics of voluntary governance have consistently identified.

VIII. The Road to Geneva: What Singapore Should Do Next
The next AI Impact Summit is scheduled for Geneva in 2027. In the interim, a UN panel on AI — the Independent International Scientific Panel on Artificial Intelligence, with 40 confirmed members — will begin working toward ‘science-led governance,’ as UN Secretary-General António Guterres described it. Singapore has an opportunity, and arguably an obligation, to shape that interlude constructively.
Several priorities deserve particular attention. Singapore should accelerate the migration of its voluntary Model Frameworks toward sector-specific supervisory expectations, following the MAS template, in healthcare, public sector AI deployment, and critical information infrastructure. The agentic AI framework’s launch is an opportune moment to assess which of its four governance dimensions require binding obligations — particularly around human accountability checkpoints in high-stakes deployments — rather than voluntary best practices.
Singapore should also use its convening authority to push the ASEAN AI governance conversation from principles to mechanisms. The ASEAN Guide on AI Governance and Ethics is a starting point, not an endpoint. Concrete proposals for mutual recognition of AI assurance assessments, common standards for algorithmic impact disclosure, and ASEAN-wide worker protection frameworks for AI-displaced migrant labour would represent a substantive regional contribution that goes beyond the New Delhi Declaration’s generic commitments.
Finally, Singapore should be willing to engage the deeper intellectual debate that the New Delhi Declaration conspicuously avoided: whether the innovation-friendly, voluntary, assurance-based governance model is adequate to the risk profile of increasingly autonomous AI systems, or whether — as Stuart Russell and the safety research community argue — binding legal commitments will ultimately be necessary to protect populations from risks that no voluntary framework can compel industry to internalise.
“Countries should build on these voluntary agreements to develop binding legal commitments to protect their peoples so that AI development and deployment can proceed without imposing unacceptable risks.” — Stuart Russell, AI Safety Researcher

Conclusion: The Ambition and the Gap
Singapore stands in 2026 at a genuinely remarkable position in the global AI landscape: third in the world by AI readiness, first in Asia by governance sophistication, uniquely positioned between the US-led innovation agenda and the safety-centred regulatory traditions of Europe, and increasingly important as a model for the Global South. Its voluntary frameworks are technically serious, iteratively designed, and internationally connected in ways that few comparable jurisdictions can match.
But the New Delhi Declaration is a reminder that the global AI governance project remains caught in a structural trap: the broader the consensus, the weaker the commitment; the weaker the commitment, the less protected the public. Singapore’s governance architecture is, in its essentials, built on the same voluntary logic as the declaration that 88 nations signed in New Delhi — the same declaration that critics dismissed as ‘AI-industry approved, not one that meaningfully protects the public.’
The question Singapore must now answer — not for Delhi, but for Geneva, and for itself — is whether being the world’s most sophisticated practitioner of voluntary AI governance is sufficient, or whether leadership in this domain ultimately demands the harder work of defining what must be binding, who must be protected, and what risks no market signal will ever adequately price. The frameworks are in place. The infrastructure is being built. The workforce transition has begun. What remains is a political and moral choice about the limits of voluntarism — and Singapore, more than perhaps any other small state in the world, has both the standing and the capability to make that choice consequentially.

Sources: AFP/Yahoo News (February 21, 2026); Wikipedia – India AI Impact Summit 2026; IMDA.gov.sg; IAPP Global AI Governance: Singapore (2025); Smart Nation Singapore; MAS Guidelines for AI Risk Management (November 2025); HRM Asia – Budget 2026 AI Workforce Analysis; CSIS New Perspectives on Asia – AI and Singapore’s Foreign Workforce; Nemko Digital – Singapore AI Regulation; Hogan Lovells – Singapore Agentic AI Framework Client Alert; CDOTrends; GovInsider Asia; Budget 2026 statements (MyCareersFuture/WSG).