Executive Summary
On 3 March 2026, X (formerly Twitter) announced a policy suspending creators from its revenue-sharing programme for 90 days if they post AI-generated videos of armed conflicts without disclosure. Repeat offenders face permanent suspension. The announcement, framed by X Head of Product Nikita Bier as essential to protecting “information authenticity” during wartime, came against the backdrop of active US-Israel-Iran hostilities.
This case study analyses the policy’s architecture, its embedded ideological tensions, enforcement limitations, and global outlook. It then draws specific implications for Singapore, a city-state that sits at the intersection of sophisticated digital infrastructure, high social media penetration, and acute sensitivity to information warfare given its multiracial, multi-religious social fabric.
| Key Argument: The policy’s significance lies not in its breadth — it is deliberately narrow — but in what its design choices reveal about the structural constraints facing platform governance in the generative AI era. |
1. Case Background
1.1 The Policy Announcement
X’s March 2026 announcement introduced a tiered financial penalty framework rather than a content-removal regime:
- First offence: 90-day suspension from the Creator Revenue Sharing Programme
- Repeat offence: Permanent suspension from the programme
- Detection mechanisms: Community Notes (crowd-sourced), C2PA metadata signals, and unspecified technical detection
The policy targets a specific intersection: content that is (a) AI-generated, (b) depicts armed conflict, and (c) is published without disclosure. Content satisfying only one or two of these criteria is not addressed by this policy.
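The policy’s narrow scope is easiest to see as decision logic. The sketch below is a minimal illustration of the criteria described above; all names are hypothetical and do not reflect X’s actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Post:
    is_ai_generated: bool
    depicts_armed_conflict: bool
    discloses_ai_use: bool
    creator_in_revenue_programme: bool
    prior_offences: int

def sanction(post: Post) -> str:
    # The policy fires only on the three-way intersection of criteria (a)-(c).
    violates = (post.is_ai_generated
                and post.depicts_armed_conflict
                and not post.discloses_ai_use)
    if not violates:
        return "no action"
    # Structural coverage gap: non-enrolled creators face no sanction at all.
    if not post.creator_in_revenue_programme:
        return "no action"
    if post.prior_offences >= 1:
        return "permanent suspension from revenue-sharing programme"
    return "90-day suspension from revenue-sharing programme"
```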
1.2 The Financial Architecture: Incentive-Based Rather Than Prohibitive
This is the policy’s most analytically significant feature. By operating through the revenue-sharing programme rather than as a content moderation rule, X has constructed a financial disincentive rather than a legal prohibition or a removal mandate.
| Critical Distinction: A creator who posts undisclosed AI conflict footage and is not enrolled in the revenue-sharing programme faces no sanction under this policy. The policy’s reach is structurally bounded by programme membership. |
This design choice is coherent with the Musk-era ideological position that content moderation constitutes censorship. The platform avoids:
- Ordering removal of specific content
- Creating categories of prohibited speech
- Engaging in the editorial judgements it has explicitly renounced since the 2022 acquisition
The policy thus represents a form of platform governance that routes enforcement through economic relationships rather than speech restrictions — a distinction with significant implications for both legal analysis and policy evaluation.
1.3 Ideological Context: The 2022–2026 Policy Arc
Following Elon Musk’s US$44 billion acquisition of Twitter in October 2022, the platform systematically dismantled its misinformation frameworks, including policies on COVID-19 health misinformation, election integrity, and synthetic media labelling. The stated rationale was that such policies constituted over-broad suppression of legitimate speech.
The March 2026 announcement represents a partial, structurally limited reversal of this trajectory. The reversal was triggered not by domestic political pressure but by active armed conflict, suggesting that X’s commitment to laissez-faire governance has an upper bound defined by geopolitical stakes rather than by platform ethics.
| Period | Moderation Stance | Key Actions |
| --- | --- | --- |
| 2020–2022 (Twitter) | Active enforcement | COVID misinformation labels; synthetic media policy; election integrity rules |
| 2022–2025 (Post-acquisition) | Deregulatory / anti-censorship | Removal of most moderation policies; reduced Trust & Safety staffing; Community Notes as replacement |
| March 2026 (Current) | Selective re-regulation | AI disclosure requirement for conflict content within revenue programme; financial rather than speech-based sanction |
2. Analytical Framework
2.1 Governance-by-Contract vs. Governance-by-Norm
Platform governance theory distinguishes between norm-based governance (rules about permissible speech that apply to all users) and contract-based governance (rules embedded in commercial agreements that apply only to parties to those agreements). X’s policy is an instance of the latter.
This distinction matters because governance-by-contract is less susceptible to legal challenge on free speech grounds in most jurisdictions — a commercial entity may lawfully decline to pay a creator without being required to justify the decision as a matter of public law. However, it also produces coverage gaps that norm-based governance would not.
2.2 Enforcement Pathways and Their Limitations
Community Notes
Community Notes relies on a consensus mechanism among registered contributors to attach contextual labels to posts. Research on the system has identified three structural problems in the context of time-sensitive conflict content:
- Latency: Median labelling time for contested content is measured in hours, during which viral spread can achieve irreversible reach
- Coverage asymmetry: High-engagement accounts generate notes faster than low-engagement accounts, creating uneven protection
- Strategic manipulation: Organised actors can flood the system with misleading notes to suppress legitimate corrections
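The latency problem is structural rather than incidental. A deliberately simplified toy model, sketched below, shows why: a note cannot surface until enough contributors from more than one viewpoint cluster rate it helpful, and during a fast-moving conflict event those cross-cluster ratings may never arrive in time. The real system infers clusters via matrix factorisation over rating histories; here cluster labels are assumed inputs and all thresholds are illustrative.

```python
from collections import defaultdict

def note_visible(ratings: list[tuple[str, bool]], min_per_cluster: int = 3) -> bool:
    """Toy consensus gate: the note surfaces only once raters from at least
    two viewpoint clusters each supply enough 'helpful' ratings."""
    helpful = defaultdict(int)
    for cluster, is_helpful in ratings:
        if is_helpful:
            helpful[cluster] += 1
    clusters = {c for c, _ in ratings}
    return len(clusters) >= 2 and all(helpful[c] >= min_per_cluster for c in clusters)

# While ratings arrive from only one cluster, the gate stays closed and the
# post keeps spreading unlabelled: the latency problem described above.
print(note_visible([("a", True)] * 10))                     # False: one cluster only
print(note_visible([("a", True)] * 3 + [("b", True)] * 3))  # True: cross-cluster consensus
```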
C2PA Metadata Signals
The Coalition for Content Provenance and Authenticity (C2PA) standard embeds cryptographic provenance data in AI-generated content at the point of creation. X’s reference to this standard as a detection mechanism is significant, but faces a well-documented limitation: metadata is stripped when files are re-encoded, screenshotted, or passed through messaging applications prior to upload. A sophisticated actor can trivially defeat metadata-based detection.
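The fragility is easy to demonstrate even outside video pipelines. A minimal sketch using Pillow, with hypothetical filenames, shows how an ordinary re-encode silently discards embedded metadata; C2PA manifests in video containers are vulnerable to the same class of transformation:

```python
from PIL import Image  # pip install Pillow

# A plain open-and-save cycle drops EXIF metadata by default, because
# Pillow only preserves it when exif= is passed explicitly to save().
src = Image.open("photo_with_provenance.jpg")   # hypothetical input file
print("metadata present:", "exif" in src.info)  # True if the file carries EXIF

src.save("reencoded.jpg")                       # default save: metadata discarded
out = Image.open("reencoded.jpg")
print("metadata present:", "exif" in out.info)  # False after re-encoding
```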
Coverage Gap: Non-Monetised Accounts
As noted above, the policy creates no obligation or sanction for creators outside the revenue-sharing programme. Given that the most strategically consequential synthetic media in conflict contexts is typically produced and distributed by state or state-adjacent actors, who are rarely if ever enrolled in commercial creator programmes, the policy may have limited effect on the actors most capable of harm.
3. Outlook: Structural Trends and Scenarios
3.1 The Generative AI Escalation Curve
The immediate catalyst for X’s policy is the declining cost and increasing accessibility of video synthesis. Capabilities that required professional-grade computing infrastructure in 2022 are now accessible via consumer applications with minimal technical skill. This creates a structural escalation problem:
- Detection costs scale with adversarial sophistication — better synthesis models produce output that defeats current detection benchmarks
- Disclosure obligations are voluntary at point-of-creation and unenforceable at point-of-distribution for non-C2PA-compliant tools
- The gap between synthesis capability and detection capability is widening, not narrowing
3.2 Competitive Platform Dynamics
X’s policy change creates competitive pressure on other major platforms. Meta, YouTube, and TikTok each maintain more comprehensive AI-labelling frameworks, but enforcement consistency is variable. Should X’s financial penalty model prove effective — or more importantly, should it attract favourable regulatory and reputational treatment — other platforms may adopt similar incentive-architecture approaches as complements to existing removal-based policies.
3.3 Regulatory Trajectory
The European Union’s AI Act (2024) and Digital Services Act (2022) both impose obligations on platforms with respect to AI-generated content and systemic risks, though implementation timelines vary. In the United States, no equivalent federal framework exists. Singapore’s regulatory approach is discussed in Section 4.
A likely medium-term trajectory is regulatory fragmentation: platforms operating differentiated compliance frameworks by jurisdiction, creating arbitrage opportunities for state and non-state actors seeking to exploit the gaps between regulatory regimes.
3.4 Scenario Analysis: 2026–2028
| Scenario | Conditions | Likely Outcome |
| --- | --- | --- |
| Policy Expansion | AI synthesis quality stabilises; regulatory pressure increases; advertiser demands | X broadens policy to non-monetised accounts; other platforms converge on similar frameworks |
| Policy Stagnation | Conflict subsides; no major enforcement action; regulatory pressure absent | Policy remains narrowly scoped; Community Notes system fails to scale; synthetic media normalises |
| Policy Collapse | Renewed free-speech criticism; advertiser withdrawal; political pressure | Policy reversed; X returns to deregulatory position; conflict content disclosure becomes entirely voluntary |
4. Singapore Context
4.1 Singapore’s Exposure Profile
Singapore’s exposure to the risks addressed by X’s policy is structurally distinct from that of larger Western democracies. Three characteristics define the local context:
- High digital penetration: Singapore has among the highest social media penetration rates in Southeast Asia, with X (Twitter) maintaining significant usage among educated professionals, journalists, and policymakers
- Ethnic and religious sensitivity: Singapore’s multiracial social compact is acutely vulnerable to synthetic media depicting interethnic or interreligious violence, whether domestic or extrapolated from foreign conflicts
- Geostrategic positioning: As a hub for regional media, financial services, and diplomatic activity, Singapore is a high-value target for state-sponsored information operations emanating from regional actors
4.2 Existing Regulatory Framework
Singapore has invested significantly in legislative infrastructure for information governance. The Protection from Online Falsehoods and Manipulation Act (POFMA, 2019) empowers ministers to issue correction directions and take-down orders against false statements of fact where doing so is in the public interest. The Online Safety Act (2022) extended obligations to social media services with significant reach in Singapore, requiring platforms to address harmful content categories.
However, neither instrument was designed with AI-generated synthetic media as a primary threat vector. POFMA’s “false statement of fact” framing creates definitional ambiguity for AI-generated video content: a video may be misleading without containing any explicitly false factual claim, operating instead through fabricated visual context.
| Regulatory Gap: Singapore’s current legislative framework does not contain a specific provision targeting AI-generated synthetic media in conflict or crisis contexts. POFMA’s false-statement-of-fact framework may be insufficient for visual deepfake content, which operates through context manipulation rather than explicit propositional falsehood. |
4.3 Institutional Capacity
The Infocomm Media Development Authority (IMDA) and the Ministry of Communications and Information (MCI) have both invested in media literacy and counter-misinformation infrastructure. The government’s Factually portal and the Digital for Life movement represent demand-side interventions. However, supply-side technical capacity for detecting AI-generated video at scale remains an open question.
4.4 Specific Risks for Singapore
Foreign Conflict Blowback
Synthetic videos depicting violence attributed to ethnic or religious groups — even in foreign conflict theatres — can generate domestic social tension in Singapore’s multiracial context. The US-Israel-Iran conflict, for example, carries potential for AI-fabricated content depicting Muslims or Jewish communities in ways that could be weaponised domestically.
Regional State-Actor Operations
Regional state actors with sophisticated information operations capabilities are known to target Singapore as part of broader influence campaigns. AI-generated conflict content offers a relatively low-cost, high-impact tool for such operations, particularly given Singapore’s role as a regional media and financial hub.
Platform Dependency Risk
Singapore’s regulatory framework depends substantially on platform cooperation. X’s partial reversal on content governance illustrates how quickly platform postures can shift. A Singapore regulatory environment that presumes stable platform compliance is structurally exposed to shifts in platform governance driven by ownership changes, commercial pressures, or geopolitical considerations outside Singapore’s control.
5. Solutions and Policy Recommendations
5.1 For Platform Operators (X and Peers)
5.1.1 Extend Policy Scope Beyond Monetised Accounts
The current policy’s limitation to revenue-sharing programme members creates a structural coverage gap. X should extend the disclosure obligation — though not necessarily the same financial sanction — to all accounts posting AI-generated conflict content. This could be implemented as a community-standards requirement rather than a revenue-programme requirement, reducing the legal complexity while expanding coverage.
5.1.2 Invest in Pre-Distribution Detection
Community Notes is a post-distribution mechanism. The strategic priority should be pre-distribution detection integrated into the upload pipeline. Partnerships with provenance-verification technology providers (e.g., C2PA-compliant synthesis tools, Adobe Content Authenticity Initiative) could enable automated flagging at upload rather than post-viral correction.
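One plausible shape for such a pipeline, sketched below under the assumption that a C2PA-style provenance validator and a conflict-content classifier are available as callables, is to gate distribution on provenance checks at upload time. Function names and the review threshold are illustrative, not an existing X API:

```python
from typing import Callable, Optional

def handle_upload(
    video_bytes: bytes,
    verify_provenance: Callable[[bytes], Optional[dict]],  # returns manifest, or None if absent/stripped
    conflict_score: Callable[[bytes], float],              # classifier confidence, 0.0 to 1.0
) -> str:
    manifest = verify_provenance(video_bytes)
    if manifest is not None and manifest.get("ai_generated"):
        # Valid provenance declaring AI generation: apply the label automatically.
        return "publish_with_ai_label"
    if manifest is None and conflict_score(video_bytes) > 0.8:
        # Likely conflict footage with no (or stripped) provenance data:
        # route to review *before* distribution, not after virality.
        return "hold_for_review"
    return "publish"
```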
5.1.3 Publish Transparent Enforcement Metrics
Policy credibility requires verifiable enforcement data. X should publish quarterly transparency reports specifically addressing AI-generated content actions, including: number of cases reviewed, suspension rates, Community Notes latency for AI-conflict content, and detection method breakdown.
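A possible schema for such a report, sketched below with hypothetical field names and placeholder figures, makes the proposed metrics concrete:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AIEnforcementReport:
    quarter: str
    cases_reviewed: int
    first_offence_suspensions: int
    permanent_suspensions: int
    median_note_latency_minutes: float  # Community Notes latency on AI-conflict posts
    detections_by_method: dict          # counts per detection pathway

# Placeholder figures for illustration only.
report = AIEnforcementReport(
    quarter="2026-Q2", cases_reviewed=1240,
    first_offence_suspensions=87, permanent_suspensions=9,
    median_note_latency_minutes=214.0,
    detections_by_method={"community_notes": 61, "c2pa": 18, "classifier": 8},
)
print(json.dumps(asdict(report), indent=2))
```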
5.2 For Singapore Regulators and Policymakers
5.2.1 POFMA Amendment: Synthetic Media Provisions
The government should consider targeted amendments to POFMA to address AI-generated synthetic media as a distinct category. Specifically, a “misleading synthetic depiction” provision would allow ministers to issue correction or take-down directions for AI-generated audio-visual content that creates a materially false impression of events, even absent a discrete false factual claim.
5.2.2 Mandatory AI Provenance Labelling for News-Adjacent Content
Singapore could consider extending its Online Safety Act obligations to require platforms to implement AI-provenance labelling for content shared via accounts operated by news organisations, public figures, or verified entities. This would target the accounts whose synthetic media posts carry the greatest potential for influence at relatively low compliance cost.
5.2.3 Regional Coordination via ASEAN Digital Ministers Framework
Information operations that target Singapore rarely originate domestically. Regional coordination on AI-generated media standards — potentially through the ASEAN Digital Ministers (ADGMIN) framework — would reduce arbitrage opportunities for actors who exploit jurisdictional gaps. Singapore is well-positioned to lead such coordination given its existing digital governance reputation.
5.2.4 Investment in Technical Detection Capacity
IMDA should invest in or commission technical capacity for AI-generated video detection that is independent of platform cooperation. This serves a dual purpose: enabling autonomous government verification of suspected synthetic media, and providing a credible technical basis for POFMA or Online Safety Act actions that may be contested by platform operators.
5.2.5 Public Media Literacy: Conflict-Specific Curriculum
Existing media literacy frameworks focus predominantly on text-based misinformation. Given the accelerating realism of AI-generated video, Singapore’s Digital for Life programme should develop conflict-specific curricula that teach citizens to identify synthetic video markers, understand provenance metadata, and maintain epistemic caution during geopolitical crises.
5.3 Solutions Matrix
| Recommendation | Actor | Time Horizon | Complexity | Priority |
| --- | --- | --- | --- | --- |
| Extend policy beyond monetised accounts | X / Platforms | Near-term | Low | High |
| Pre-distribution AI detection pipeline | X / Platforms | Medium-term | High | High |
| Transparency reporting (AI enforcement) | X / Platforms | Near-term | Low | Medium |
| POFMA synthetic media amendment | Singapore Govt | Medium-term | Medium | High |
| AI provenance labelling (news-adjacent) | Singapore / IMDA | Medium-term | Medium | High |
| ASEAN ADGMIN regional coordination | Singapore / MFA | Long-term | High | Medium |
| Technical detection capacity (IMDA) | Singapore / IMDA | Medium-term | High | High |
| Media literacy: conflict-specific curriculum | MCI / MOE | Near-term | Low | Medium |
6. Conclusion
X’s March 2026 AI disclosure policy is a significant but deliberately constrained intervention. Its significance lies in what it reveals about the architecture of platform governance in the generative AI era: a preference for incentive-based over norm-based regulation, a reliance on technical signals whose limitations are well-known, and a selective engagement with information integrity that is calibrated to geopolitical salience rather than to systemic risk.
For Singapore, the policy’s limitations are as instructive as its provisions. A city-state with acute vulnerability to information operations, high digital penetration, and a delicate social compact cannot rely on the commercial governance decisions of foreign platform operators as a primary line of defence. The policy creates an opportunity: to use X’s partial pivot as a catalyst for strengthening Singapore’s own legislative, technical, and regional frameworks for governing AI-generated synthetic media in conflict and crisis contexts.
The fundamental challenge ahead is that AI synthesis capability is outpacing both detection technology and regulatory framework development. Closing that gap will require coordinated action across platforms, governments, and civil society — action that is currently distributed, fragmented, and in the case of platform operators, structurally subordinate to commercial and ideological considerations that may not align with public interest.
| Closing Note: The question is not whether synthetic media will shape future conflicts — it already does. The question is whether governance frameworks will evolve fast enough to meaningfully shape the information environment in which those conflicts are perceived. |
References and Further Reading
Bier, N. (2026, March 3). Statement on AI disclosure policy for conflict content. X (Twitter).
European Parliament. (2024). Artificial Intelligence Act. European Union.
European Commission. (2022). Digital Services Act. European Union.
Infocomm Media Development Authority. (2023). Digital for Life movement overview. IMDA Singapore.
Ministry of Law Singapore. (2019). Protection from Online Falsehoods and Manipulation Act (POFMA). Singapore Statutes Online.
Ministry of Communications and Information. (2022). Online Safety Act. Singapore.
Riedl, M. J., & DiFranzo, D. (2022). Platform governance in the age of synthetic media. Journal of Information Policy, 12(1), 1–35.
Wardle, C., & Derakhshan, H. (2017). Information disorder: Toward an interdisciplinary framework for research and policy making. Council of Europe Report DGI(2017)09.
Witness. (2023). Deepfakes and conflict: A practical guide for human rights documenters. Witness.org.
World Economic Forum. (2024). Global Risks Report 2024: AI-enabled misinformation as a top-tier global risk. WEF.