The AI (Regulation) Briefing: What’s going on? Why should we care?
- Oct 10

Image by TheDigitalArtist.
By Persijn (Percy) De Vries
The tldr:
Global AI regulation presents a fragmented picture, with no overarching international standards or agreements likely to materialise soon as states prioritise their national AI sectors
Regulations, insofar as they exist, range from prescriptive in the EU and ‘middle-of-the-road’ in the UK and some U.S. states, to rules leaving room for state-centric control in China and ‘open access’ to users’ data in India
The U.S., as the leading AI producer, is unlikely to implement federal-level rules governing AI deployment and safety as the EU has, which could have profound implications for global use
A ‘Brussels effect’ or Western-led international standard is unlikely to materialise, especially given U.S. retrenchment from multilateral institutions and internal disagreement among Western states on regulation
Some international standards would be welcome, considering the various implications the use of AI systems may have for daily life, and AI’s potential military applications
The added issue of potential sentience, while remote, remains unaddressed, as legal and political frameworks aren’t prepared for this possibility
AI regulations have been legislated into existence over the past few years and will likely develop in a diverse and inconsistent pattern globally. Given the divergent political goals and priorities of the major AI powers and an increasingly fractious international order, regulatory ‘islands’ will become the norm. Noting AI’s growing role in daily life, its rapid development, and issues such as its potential use in warfare, surveillance, and work, rules governing its deployment and use are increasingly needed. Many unintended consequences may also emerge from these developments alongside the intended ones. This article sets out a ‘picture’ of the status quo, where AI regulations may end up, and what effect these could have on the development of AI systems across the world.
What’s happening across the world
The four major AI-producing powers are the U.S., China, the UK, and India according to various metrics[1]. These are broadly host to AI companies, research and development capacity, AI-related economic activity, and infrastructure for AI’s development. While other states’ sectors are represented in the top 10, such as the UAE, South Korea, Japan, and Singapore[2], this article focusses on the global top four. Notably, the U.S. has a significant edge over those further down the ranking in terms of these core competencies. The EU is represented by Germany and France, yet both sit behind the top four in capacity. However, as the world’s largest single market, and given its proactive role in forming rules affecting product development and trade, the EU is given special focus within this briefing.
Noting the prominence of the top four and the EU’s position, focus is therefore placed on rules in these jurisdictions and their realised and potential effects on development and deployment. While various other significant jurisdictions such as Brazil, Indonesia, and Saudi Arabia have legislated for AI to a substantial extent[3], their ability to influence these systems’ designs is limited by their reduced roles in producing them. Admittedly, legal ‘innovations’ could equally come from such jurisdictions: the UAE was the first worldwide to open an AI ministry in 2017[4], Japan set out its National AI Strategy in 2022[5], and New Zealand prioritised ‘human-centric’ AI development in its ‘Algorithm Charter’[6].
United States
No overarching federal regulations exist; rather, a system of regulatory islands has emerged as states individually opt to draw up their own rules. These diverge from layered regimes built on protecting individual data and best practices in California and Colorado, to states that haven’t enacted any legislation, such as Wyoming[7]. States’ willingness to introduce regulations often tracks political persuasion. Although even liberal California fell short of introducing an explicit AI Act in 2024 following a gubernatorial veto[8], it does have an ‘EU-inspired’, GDPR-like data privacy regime[9].
At the federal level, the Trump administration has prioritised cutting rules, repealing Biden-era AI safety orders, and had previously backed an effort to introduce a decade-long federal moratorium on regulation[10]. That effort failed to pass following organised opposition within Congress[11], particularly from within the MAGA movement and among Democrats.
Silicon Valley has so far enjoyed a particularly close relationship with the incumbent administration, which has informed its approach to rule-making. The appointment of tech investor David Sacks to lead the President’s AI policy unit, and the courting of tech executives at the inauguration and a recent White House lunch[12], demonstrate as much compared to relations during the Biden era. While these may forebode a tech-sector-friendly approach, underlying political factors may strain this relationship in future, such as a MAGA base generally sceptical of Silicon Valley[13] or potential Democratic gains in the 2026 mid-terms. Collectively, these could shift the congressional political calculus in favour of federal regulation.
Overarching these are efforts, spanning administrations, to silo U.S. AI from Chinese competition and secure its dominance[14], as part of an emerging AI ‘arms race’[15]. Proactive efforts include measures to wall off U.S. data from use in China and restrictions on exports of technological components. American AI may still dominate in technical ability and global use, yet Chinese systems have proven more competitive in capability than previously expected, as was the case with DeepSeek’s ‘reveal’.
China
China employs a network of rules relating to specific AI use-cases, and an AI standards committee made up of leading tech-sector representatives. It is notably one of the first states globally to proactively participate in setting standards for AI’s development and use[16]. China’s government is also currently considering additional legislation, as an overarching single framework is absent. As with other industrial sectors and resources, China’s approach is mainly informed by onshoring AI capacity[17], firewalled from outside influences. Data and development sovereignty inform domestic production and deployment, with foreign companies barred from market access[18].
Chinese systems are likely to develop in accordance with a rules order that facilitates development[19] over individual rights protections, and one that permits an echo of the state’s viewpoints on various issues. Expect, therefore, that future AI systems will reflect ‘Chinese [CCP] characteristics’. Take the example of an investigation by Christo Grozev, who ran his article on a Russian state-associated cyber unit through DeepSeek, which churned out an erroneous, Chinese-state-centric summary of it[20]. DeepSeek’s editorialisation of summarised news and historical content has also seen it banned from use in government departments in Taiwan[21].
United Kingdom
To date, no explicit AI regulation exists in the UK; rather, AI deployment and use is governed by a web of data privacy and digital services laws. Its government has for years consecutively set out in position and guidance papers the intention to legislate[22], and has developed oversight bodies to promote safe-use standards[23]. The UK has been proactive in seeking universal agreement on ‘safety’ standards, hosting the first AI Safety Summit, which produced the Bletchley Park Declaration in 2023[24]. Despite these efforts, recent international pressure, sourced mainly from the U.S., has shifted the calculus in favour of delaying legislation to an undetermined future date[25].
Indications are that this ‘accommodative’ strategy stems from a bet that tech- and AI-related development can boost the government’s growth goals. Another is to maintain close bonds with the U.S. and continue courting favour with the Trump administration. Britain will likely continue to ‘hedge’ a third way between a lax U.S.-style regime and a prescriptive EU one, as demonstrated by the ‘Tech Prosperity Deal’[26] and the recent investment announcements by American tech firms accompanying Trump’s state visit[27]. Being lax on rules in this domain is, at least for Britain, rewarded with desired investment.
India
India likewise has yet to legislate to regulate the deployment or use of AI, despite indications that it may do so. So far, India’s approach has mainly been characterised as an ‘open door’ policy towards AI companies’ operations, with its citizens’ data consequently being comprehensively ‘mined’[28]. Government advisories issued in 2024 stipulate rules aspiring to prevent algorithmic discrimination and deepfakes[29], and a Developer’s Playbook for Responsible AI sets out the government’s framework for the AI sector[30]. Further talk has revolved around amending the 2000 IT Act to cover AI or introducing comprehensive AI regulations[31]. Central government and leading industry bodies are currently signalling light-touch rules that favour innovation and development, over the 2024 TEC draft consultation that recommended EU-style risk-based regulations[32].
European Union
The EU’s 2024 ‘AI Act’ is to date the most ‘prescriptive’ of these five examples, and sets out rules governed by potential risk levels rather than the explicit user protections customarily addressed in EU law. Developers and deployers of ‘unacceptable risk’ AI systems are already bound by the Act, and ‘high risk’ systems will join them in August 2026[33]. However, critique from leading AI companies such as Google and Meta[34], and from leading political voices such as Mario Draghi[35], has prompted the European Commission to clarify the Act’s provisions. The challenge of getting its 27 members to implement the Act further complicates its realisation.
While the Commission and Parliament may have intended to set out clear rules for use, eventual implementation may yet be complicated by overarching factors. The critique alluded to above, coupled with the Trump administration’s explicit warnings not to regulate American companies[36], may lead to a ‘watering down’ of measures. The Commission is due to publish a ‘Digital Omnibus’ in December which, additionally in response to critique, seeks to simplify the guidance issued alongside the Act’s provisions[37]. In the meantime, the perceived stringency of EU rules, mainly among American companies, restricts EU citizens’ access to the latest AI systems: Google recently signed the EU’s AI Charter, yet Meta refuses to do so[39]. Expect EU rules to tread the ‘fine line’ between the protective intent above and producers’ demands to minimise it.
The bottom line

Image by Dusan Cvetanovic.
Overall, a fragmented landscape of ‘regulatory islands’ exists in which international agreement on AI development and deployment, such as that called for by the UN Secretary-General[40], is unlikely to materialise. In a fragmented international system where multilateral institutions are less influential than regional and bilateral forums, and great-power competition prevails, such an aspiration is highly unlikely to be realised. It should, however, be noted that all the AI powers covered in this article are signatories to non-binding international recommendations and standards for ethical development and use[41].
That fragmentation was explicitly on display when India’s Modi joined 40 other world leaders at the September SCO summit in Beijing[42], commonly seen as a diplomatic rekindling in response to the Trump administration’s tariff policies. Despite China’s declaration at this gathering of a ‘Global Governance Initiative’ intended to revitalise multilateral institutions as their previous benefactor, the U.S., retreats from them[43], it is unlikely to spur an international rule-making initiative.
Could, alternatively, a type of ‘Brussels effect’ take place in which non-EU states eventually follow the EU’s example? Unlikely: outside pressure on the EU from the Trump administration not to (further) regulate development[44], and a current lack of internal EU AI capacity, limit its ability to set the global standard. Other jurisdictions may eventually adopt risk-based rules structures, as demonstrated in the recent Californian bill[45], yet dominance of the AI market by non-EU entities[46] undercuts that potential. The Trump administration’s reluctance towards international cooperation on development rules makes this less likely still[47]. When the diversified approaches of non-Western AI powers such as China and India are added in, that reality becomes starker.
Why care?
AI systems are proliferating into many aspects of daily life, which can return benefits as well as severe potential drawbacks. For example, easing time otherwise spent on ‘lower value’ tasks, such as scraping databases to produce summaries, could simplify academic or work research. Yet equally, dependency on AI systems to create summary materials could undercut or infringe copyright and personal or sensitive data. Likewise, an ‘AI musician’ could produce desirable music content, but at the cost of crediting human musicians for their works[48]. Rules therefore become desirable in principle to set out a baseline of what constitutes ‘fair use’ for an AI system and to prevent these issues from arising. The above is but one example, to say nothing of the surveillance-power implications that deploying AI systems could have for law enforcement.
Issues further arise out of AI’s potential dual-use application in warfare. While debates persist on the degree of AI’s involvement[49], its use in the ‘Gospel’ programme[50] presents a dilemma between meeting demands for innovation and the certainty of protecting civilians in military strikes[51]. Including AI to analyse intelligence can assist and speed up the decision-making process for air strikes as much as it can allegedly increase the room for erroneous ones that harm civilians[52], as humans are increasingly removed from being ‘in the loop’. Further, as with innovations in drone technology, the inclusion of AI systems can give certain armed forces an asymmetric technological advantage in combat; expect, therefore, a continuing and ‘escalating’ AI arms race. It follows that, as with other potentially ‘game-changing’ technologies, a set of international ground rules to prevent excessive harm is desirable.
Rule-making becomes additionally salient when considering that AI systems are developed outside the direct reach of oversight. Unlike nuclear weapons, AI systems are developed in the private sector, putting them beyond the immediate reach of regulatory control. While there are benefits to privately driven enterprise, a sector that has become, and will continue to become, more central to everyday life warrants a degree of oversight of its development and use. The tech sector continues to advocate self-regulation on the grounds that it best knows its own limitations[53]; however, history provides examples of how such arrangements have fallen short of protecting civil liberties, such as data privacy in the Cambridge Analytica scandal[54]. To prevent a future ‘scandal’, it may be desirable, and moreover in the public interest, to use rules-bound frameworks to incentivise AI companies to produce ‘human- and data-privacy-friendly’ systems rather than ones motivated purely by technological or financial advancement. It isn’t, for example, unforeseeable that developers facing a dilemma in which potential financial or technological gain comes at the cost of end-users’ privacy would opt to prioritise their bottom line, predominantly because there is no overarching ‘incentive’ or coercive mechanism to prevent them from doing so. A potential conflict of interest therefore remains when it comes to determining what’s permitted in technological development. To prevent such a dilemma from occurring, at the very least some degree of oversight globally is desirable.
The (added) issue of sentience
Although an ancillary matter, if sentience does occur, a fundamental rethink of our rights systems would likely be required. While machine consciousness is subject to debate, even sceptics such as Sussex University’s Anil Seth can’t completely rule out the possibility[55]. Compare the capabilities of current AI systems to an autonomous system’s ability to pass the Turing test in 2014[56], and a trajectory of potentially exponential development becomes plausible. In such an event, leading researchers point out, our societies are inadequately prepared politically, legally, and logistically. Would a complete rewrite of human and AI rights codes be required? Would human-AI relations be governed by a set of rules akin to codes on animal rights protections? Or by a lack thereof, as is the case with cattle across the world? Such an event could present humanity with a ‘shock’ that spurs a policy ‘innovation’, out of which such a framework could come into existence, as the hole in the ozone layer did for the Montreal Protocol[57], or the effects of nuclear bomb testing did for the Test-Ban Treaty[58].
So what now?
It’s unlikely, considering current circumstances, that a global call or action to set minimum standards for AI development and use is forthcoming; a possible sentience issue, however, could spur action. AI development, as covered in this brief, is currently for the most part tied to sustaining and/or supporting nation-building. In the case of a U.S. guided by an ‘America First’ agenda, this is even more profoundly evident than previously. The U.S. and China still host most of the world’s production and fundraising capacity, and have for the most part taken protective measures against each other to preserve their domestic sectors. While they don’t necessarily constitute an AI duopoly, policies in other jurisdictions are shaped by their respective positions; see Britain and India in their hedging and accommodative approaches. The era of multipolar and great-power competition in geopolitics also doesn’t bode well for those who call for collective action to address AI risks. The current trajectory suggests that the world will continue to splinter in AI rules and in access to AI tools, alongside daily lived experiences, as these systems’ usage becomes ubiquitous. However, a (remotely) potential ‘shock’ or ‘black swan’ event[59], such as AI sentience, could fundamentally alter that course and spur the often-desired call for collective action.
[1] Stanford University, Global AI Power Rankings: https://hai.stanford.edu/news/global-ai-power-rankings-stanford-hai-tool-ranks-36-countries-in-ai; Tortoise Media, ‘The Global AI Index’: https://www.tortoisemedia.com/data/global-ai#rankings; Our World in Data, ‘Cumulative number of large-scale AI systems by country since 2017’: https://ourworldindata.org/grapher/cumulative-number-of-large-scale-ai-systems-by-country
[2] ibid.
[3] IAPP (2025), Global AI Law and Policy Tracker: https://iapp.org/resources/article/global-ai-legislation-tracker/
[4] UNESCO: https://www.unesco.org/creativity/en/policy-monitoring-platform/minister-state-artificial-intelligence; and IAPP (2025), Global AI Law and Policy Tracker: https://iapp.org/resources/article/global-ai-legislation-tracker/
[5] IAPP (2025), ibid.
[6] New Zealand Government, ‘Algorithm Charter’ (July 2020): https://data.govt.nz/assets/data-ethics/algorithm/Algorithm-Charter-2020_Final-English-1.pdf
[7] BCLP: U.S. Regulation Tracker https://www.bclplaw.com/en-US/events-insights-news/us-state-by-state-artificial-intelligence-legislation-snapshot.html
[8] Politico (May 2025): https://www.politico.com/news/2025/05/12/how-big-tech-is-pitting-washington-against-california-00336484
[9] Bloomberg Law (2023): https://pro.bloomberglaw.com/insights/privacy/privacy-laws-us-vs-eu-gdpr/#data-protection
[10] Oxford Business Law (2025): https://blogs.law.ox.ac.uk/oblb/blog-post/2025/06/ai-regulation-politics-fragmentation-and-regulatory-capture and Financial Times (2025): https://www.ft.com/content/d6aac7f1-b955-4c76-a144-1fe8d909f70b
[11] Oxford Business Law (2025): https://blogs.law.ox.ac.uk/oblb/blog-post/2025/06/ai-regulation-politics-fragmentation-and-regulatory-capture; Financial Times (2025): https://www.ft.com/content/d6aac7f1-b955-4c76-a144-1fe8d909f70b; and Stimson Centre (2025): https://www.stimson.org/2025/ai-regulation-bigger-is-not-always-better/
[12] Financial Times (2025): https://www.ft.com/content/d6aac7f1-b955-4c76-a144-1fe8d909f70b
[13] ibid.
[14] BBC News (Sept 2025): https://www.bbc.co.uk/news/articles/creve4x8drgo
[15] BBC News (October 2025); and Centre for Strategic and International Studies: https://www.csis.org/analysis/understanding-us-allies-current-legal-authority-implement-ai-and-semiconductor-export
[18] White & Case summary; The Irish Times (April 2025); and CNBC: https://www.cnbc.com/2024/09/12/china-tech-companies-ai-models-vs-openai-google-meta.html
[19] Forbes (2024): https://www.forbes.com/sites/johannacostigan/2024/03/22/chinas-new-draft-ai-law-prioritizes-industry-development/; and Georgetown Centre for Security and Emerging Technology (2024): https://cset.georgetown.edu/publication/china-ai-law-draft/
[21] Taiwan Ministry of Digital Affairs (January 2025): https://moda.gov.tw/press/press-releases/15104
[22] UK Govt Policy Paper (2023), Consultation on Copyright and AI (2024), AI Opportunities Action Plan (2025), and Generative AI Framework Review Playbook for Government (2025)
[23] AI Standards Hub: https://aistandardshub.org/the-ai-standards-hub/
[24] UK Govt (2023) AI Safety Summit Overview and Bletchley Park Declaration.
[25] The Guardian (June 2025): https://www.theguardian.com/technology/2025/jun/07/uk-ministers-delay-ai-regulation-amid-plans-for-more-comprehensive-bill
[26] UK Govt (Sept 2025): Memorandum
[27] BBC News (Sept 2025): https://www.bbc.co.uk/news/articles/cx2nllgl3q7o
[28] Economist (Sept 2025): https://www.economist.com/asia/2025/09/17/ai-is-erupting-in-india
[29] Carnegie Endowment for International Peace (2024): https://carnegieendowment.org/research/2024/11/indias-advance-on-ai-regulation?lang=en
[30] Nasscom (2024): https://nasscom.in/ai/pdf/the-developer's-playbook-for-responsible-ai-in-india.pdf
[31] Carnegie Endowment for International Peace (2024): https://carnegieendowment.org/research/2024/11/indias-advance-on-ai-regulation?lang=en
[32] Financial Express (2024): https://www.financialexpress.com/business/digital-transformation-self-regulation-in-ai-models-on-the-cards-3674308/
[33] Netherlands Ministry of Economic Affairs (Sept 2025): AI Act Guide
[34] Euronews (July 2025): https://www.euronews.com/next/2025/07/02/commission-mulls-offering-companies-signing-ai-code-compliance-grace-period and CNBC (July 2025): https://www.cnbc.com/2025/07/30/big-tech-split-google-to-sign-eus-ai-guidelines-despite-meta-snub.html
[35] Euronews (Sept 2025): https://www.euronews.com/my-europe/2025/09/16/draghi-calls-for-pause-to-ai-act-to-gauge-risks
[37] Euronews (Sept 2025): https://www.euronews.com/my-europe/2025/09/16/draghi-calls-for-pause-to-ai-act-to-gauge-risks
[39] CNBC (July 2025): https://www.cnbc.com/2025/07/30/big-tech-split-google-to-sign-eus-ai-guidelines-despite-meta-snub.html
[40] United Nations (Sept 2025): https://news.un.org/en/story/2025/09/1165942
[41] See IAPP (2025), Global AI Law and Policy Tracker, in regard to the G20 Principles, UNESCO Recommendations, and the OECD’s AI Policy Observatory, specifically under the ‘Wider AI Context’ drop-down(s).
[42] BBC News (August 2025): https://www.bbc.co.uk/news/articles/clyrwv0egzro; and Chatham House (Sept 2025): https://www.chathamhouse.org/2025/09/modis-sco-summit-visit-shows-china-and-india-want-reset-relations-dragon-elephant-tango
[43] BBC Monitoring (Sept 2025): https://monitoring.bbc.co.uk/product/b0004ium; and Chatham House (Sept 2025): https://www.chathamhouse.org/2025/09/it-may-take-generation-stable-new-world-order-emerge
[44] The Times (YouTube); and Financial Times (Feb 2025): https://www.ft.com/content/b4e10389-1a66-4c3e-922e-a4d74b616ec6
[45] Politico (May 2025): https://www.politico.com/news/2025/05/12/how-big-tech-is-pitting-washington-against-california-00336484
[47] NBC News (Sept 2025): https://www.nbcnews.com/tech/tech-news/us-rejects-international-ai-oversight-un-general-assembly-rcna233478
[48] The Guardian (Jan 2025): https://www.theguardian.com/music/2025/jan/27/elton-john-paul-mccartney-criticise-proposed-copyright-system-changes-ai; Computerworld (May 2025) Opinion: https://www.computerworld.com/article/3992196/ai-vs-copyright.html
[49] RUSI (2024) Commentary: https://www.rusi.org/explore-our-research/publications/commentary/israels-targeting-ai-how-capable-it
[51] The Guardian (2023): https://www.theguardian.com/world/2023/dec/01/the-gospel-how-israel-uses-ai-to-select-bombing-targets;
[52] ibid.
[54] The Guardian (2018): https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election
[55] Nautilus (2023): https://nautil.us/why-conscious-ai-is-a-bad-bad-idea-302937/; and BBC News (2025).
[56] The Guardian (2014): https://www.theguardian.com/technology/2014/jun/09/what-is-the-alan-turing-test
[57] UN Environmental Programme: https://www.unep.org/ozonaction/who-we-are/about-montreal-protocol
[58] U.S. National Archives: https://www.archives.gov/milestone-documents/test-ban-treaty
[59] Black Swan event: https://www.britannica.com/topic/black-swan-event