Global AI Newsletter·Issue 2
I. Domestic Governance Developments
(I) Policy and Legislative Updates
(II) Law Enforcement and Judicial Updates
(III) International Cooperation Updates
1. China issued the "China's Initiative on Deepening China-ASEAN Digital Governance Cooperation".
(V) Enterprise Compliance Updates
II. International Governance Developments
(I) Policy and Legislative Updates
1. The U.S. House of Representatives introduced the Cyber Deterrence and Response Act of 2025.
2. U.S. Democratic lawmakers reintroduced the Artificial Intelligence Civil Rights Act.
3. The European Union established the IMPACTS-EDIC digital infrastructure consortium.
4. The Australian Government released the National AI Plan.
5. Canada launched the national standard "Accessible and Equitable Artificial Intelligence Systems".
6. U.S. senators promoted the SAFE Chips Act to curb China's AI development.
7. The U.S. Senate deadlocked over AI-related provisions in the NDAA.
8. The European Union hosted the Safer Internet Forum.
9. The African Union Commission held a validation workshop on the Continental Data Governance Framework.
10. The EU signed a memorandum of understanding to support AI Gigafactories.
11. The European Electronic Components and Systems Forum was held in Malta.
12. The EU AI Board met on the Digital Simplification Package and AI Act implementation.
13. The Japanese Government will ease data consent rules for artificial intelligence development.
(II) Law Enforcement and Judicial Updates
1. The European Commission opened a formal antitrust investigation into Meta's WhatsApp policy.
2. The European Commission imposed a fine of 120 million euros on X.
(III) International Cooperation Updates
2. The software service provider Salesforce released the Global AI Readiness Index Report.
I. Domestic Governance Developments
(I) Policy and Legislative Updates
On 2 December, the National Development and Reform Commission (NDRC), the National Data Administration (NDA), the Ministry of Education (MOE), the Ministry of Science and Technology (MOST), and the Organization Department of the Communist Party of China Central Committee jointly issued the "Opinions on Strengthening the Construction of Data Factor Disciplines and Specialties and Digital Talent Teams". The document aims to strengthen data factor-related disciplines and digital talent teams; to establish a mechanism for adjusting discipline and specialty settings, and a talent training model, driven by technological progress and national strategic needs in the data field; to activate data factors as an innovation engine empowering new quality productive forces; and to promote the integration of the education and talent chains with the industrial and innovation chains in the data sector.
The Opinions put forward four key initiatives. First, improve data factor disciplines and specialties under the guidance of national strategies: optimize the setup of disciplines and specialties, support hierarchical and classified development, and strengthen core teaching elements. Second, advance vocational education in the data industry oriented by industrial development: build an industry-education integration ecosystem, promote education and teaching reform, and enrich curriculum and textbook resources. Third, promote academic research in the data field through organized scientific research: strengthen research organizations, accelerate research on key priority areas, and consolidate the foundation of scientific data. Fourth, promote industry-university-research-application collaboration in the data field, using application scenarios as carriers: construct typical application scenarios, innovate collaborative training models, and build technological innovation platforms.
Link: https://www.ndrc.gov.cn/xwdt/tzgg/202512/t20251202_1402115.html
On 6 December, the Cyberspace Administration of China drafted the Measures for the Risk Assessment of Network Data Security (Draft for Comments) and solicited public comments. This draft is intended to regulate network data security risk assessment activities, safeguard network data security, and promote the lawful, reasonable and effective use of network data.
In terms of assessment content, network data processors handling important data shall conduct a risk assessment of their network data processing activities annually. If a significant change in the security status of important data may adversely affect data security, a risk assessment shall be promptly conducted on the changed part and its impact. Network data processors handling general data are encouraged to conduct a risk assessment at least once every three years.
In terms of assessment methods, network data processors may conduct risk assessments themselves or engage third-party assessment institutions. Those conducting self-assessments shall designate dedicated personnel to take charge; those engaging institutions shall give priority to certified assessment institutions and specify the rights, responsibilities and confidentiality obligations of both parties by contract. Assessment institutions shall not subcontract the work to other institutions and shall be responsible for the truthfulness, validity and completeness of their assessment reports. The same institution and its affiliates shall not conduct assessments for the same network data processor more than three consecutive times.
Link: https://www.cac.gov.cn/2025-12/06/c_1766578179367262.htm
(II) Law Enforcement and Judicial Updates
On 1 December, the Shanghai Cyberspace Administration and the Shanghai Administration for Market Regulation jointly released five typical cases of failure to fulfill personal information protection obligations. Among them, three involved exposing user data to the internet, one involved forcing users to provide mobile phone numbers to place orders, and one involved illegally using user information to issue false prescriptions. In accordance with the law, the cyberspace administration ordered the enterprises involved to rectify within a specified time limit and imposed warnings and fines on them.
Link: https://mp.weixin.qq.com/s/vmqhYgpgB4zQgIWGHEc4-Q
On 4 December, the National Computer Virus Emergency Response Center (CVERC) named 69 mobile applications that illegally collect and use personal information, in violation of the "Personal Information Protection Law (PIPL)" in the following 11 respects: (i) the privacy policy is not sufficiently prominent; (ii) the purposes, methods, and scope of collecting and using personal information are not fully disclosed; (iii) individuals are not informed about other personal information processors; (iv) personal information is collected, or collection permissions are enabled, before users' consent is obtained; (v) the functions for correcting or deleting personal information and cancelling user accounts are overly complicated; (vi) the mechanism for accepting and handling complaints and reports is not sufficiently convenient; (vii) no accessible or prominent channel is provided for users to withdraw consent to personal information processing; (viii) no accessible or convenient option is provided for users to refuse personalized recommendations; (ix) when processing sensitive personal information, processors do not inform individuals of the necessity of the processing and its impact on personal rights and interests; (x) no corresponding security measures, such as encryption and de-identification, are adopted; (xi) no privacy policy is provided at all.
Link: https://www.cverc.org.cn/zxdt/report20251204.htm
(III) International Cooperation Updates
1. China issued the "China's Initiative on Deepening China-ASEAN Digital Governance Cooperation".
On 4 December, the 2nd China-ASEAN Digital Governance Dialogue was held in Beihai, Guangxi. With the theme “Enhancing Exchanges and Cooperation, Promoting Connectivity”, the dialogue set three topics: “Cybersecurity”, “AI Governance” and “Cross-border Data Flow”.
Wang Jingtao, Deputy Director of the Cyberspace Administration of China (CAC), stated in his opening speech that China stands ready to work with all ASEAN countries to strengthen exchanges and cooperation, promote connectivity, intensify dialogue on key and hot-button issues in current digital governance, explore cooperation models, uphold security and stability, and serve regional development. Wang Jingtao put forward three suggestions for deepening China-ASEAN digital governance cooperation: First, uphold openness and cooperation, and strengthen the alignment of development strategies in the digital field; second, uphold shared security, and jointly address cybersecurity risks and challenges; third, uphold collaborative governance, and enhance the interoperability of rules.
During the meeting, China issued the “China’s Initiative on Deepening China-ASEAN Digital Governance Cooperation”. The initiative stated that China is willing to establish an efficient cybersecurity emergency response cooperation mechanism and a cybersecurity threat information sharing platform with ASEAN countries, strengthen exchanges and cooperation among enterprises, universities, research institutions and industry associations in the field of artificial intelligence, jointly participate in the formulation of global AI governance rules, explore policy and institutional arrangements for cross-border data flow, and jointly build a closer China-ASEAN Community with a Shared Future.
Link: https://www.cac.gov.cn/2025-12/04/c_1766577643987382.htm
On 4 December, President Xi Jinping of the People’s Republic of China and President Emmanuel Macron of the French Republic jointly attended and addressed the closing ceremony of the 7th Meeting of the China-France Business Council in Beijing.
In his speech, President Xi noted that China regards France as an important and indispensable economic and trade cooperation partner and welcomes France's active participation in Chinese modernization. He also expressed support for capable and willing Chinese enterprises investing and doing business in France. President Xi stated that the two sides should tap the cooperation potential of emerging sectors such as artificial intelligence, the green economy and the digital economy, continue to promote open industrial- and supply-chain cooperation, and provide a fair, transparent, non-discriminatory and predictable business environment for enterprises of both countries.
Link: https://www.gov.cn/yaowen/liebiao/202512/content_7050271.htm
On 1 December, at the “AI and Security Forum” of the 2025 “AI+” Industry Ecosystem Conference, Pei Wei, Deputy Secretary-General of the Internet Society of China, and Li Wei, Deputy Director of the Cloud Computing and Big Data Institute of the China Academy of Information and Communications Technology, officially released the Research Report on the Safe Development of Cloud-based Intelligent Agents.
The report comprises three major sections: an overview of the safe development of cloud-based intelligent agents, the construction of protection systems, and prospects for the industry's future development. The first section traces the development of cloud-based intelligent agents—from the large-scale implementation of agent scenarios, the accelerating pace of development and the growth of industrial scale to the gradual emergence of risks and challenges—and elaborates on the institutional system for security governance, the demands of industrial development, and the improvement of technical capabilities. The second section explores the construction of security protection systems for cloud-based intelligent agents in depth, proposing a four-step path: clarifying protection requirements, building technical frameworks, improving governance systems, and establishing a trusted ecosystem. Drawing on four major sectors—government affairs, communications, energy, and finance—it presents practical cases of intelligent agent security protection. The third section looks ahead to the development of the cloud-based intelligent agent security industry: from the perspectives of top-level design, the market, technology, and the ecosystem, it analyzes the challenges the industry currently faces and proposes corresponding development directions.
The report aims to provide decision-making references and practical guidelines for relevant industry stakeholders, facilitate the technological innovation of cloud-based intelligent agent security, the standardized development of the industry, and the improvement of governance systems. It ultimately seeks to achieve the development goal of “ethical AI and security with credibility”, and lay a solid security foundation for the high-quality development of the digital economy.
Link: https://www.ncsti.gov.cn/kjdt/ztbd/2925rgzncystdh/202512/t20251204_231106.html
(V) Enterprise Compliance Updates
On 1 December, a limited number of Nubia M153 engineering prototypes pre-installed with the technical preview version of Doubao Mobile Assistant went on sale. The Doubao Mobile Assistant team stated that the device could perform cross-app tasks for users, such as price comparison and batch file download and consolidation.
On the evening of 2 December, many users reported that when Doubao Mobile Assistant operated mobile phone functions involving WeChat, the app would crash and users could even be unable to log in. The technical team responded that its product requires users' active authorization to obtain the INJECT_EVENTS permission, and that human intervention is required when operating third-party apps involving sensitive authorization, so as to protect user privacy and keep permissions under control.
On 5 December, the Doubao Mobile Assistant team stated that it would impose standardized restrictions on its AI-enabled phone operation capabilities, mainly in three areas: first, scenarios meant to involve genuine user interaction, such as score brushing and incentive-based activities; second, financial scenarios directly related to users' fund security, such as banking and internet finance; third, gaming scenarios involving competitive rankings.
Link: https://mp.weixin.qq.com/s/GgR0ndeBuSk2R0sWo_Bq5g
II. International Governance Developments
(I) Policy and Legislative Updates
1. The U.S. House of Representatives introduced the Cyber Deterrence and Response Act of 2025.
On 1 December, August Pfluger, a Republican Congressman from Texas, introduced the "Cyber Deterrence and Response Act of 2025". The Act would task the Office of the National Cyber Director (ONCD) with formally identifying and holding accountable the parties responsible for recent cyberattacks against the United States, including foreign entities, individuals and other groups, and with improving the sharing of existing attack-attribution data. It would also establish a new "National Distribution Framework" to foster closer collaboration among federal agencies, coordinate the attack-attribution network across government bodies, and enable the private sector and allied nations to share relevant information. The Act has been referred to the House Committees on Foreign Affairs, Financial Services, and Oversight and Government Affairs for further consideration; to date, the Senate has not introduced a companion version.
Link: https://pfluger.house.gov/news/documentsingle.aspx?DocumentID=2687
2. U.S. Democratic lawmakers reintroduced the Artificial Intelligence Civil Rights Act.
On 2 December, Senator Edward J. Markey of Massachusetts and Congresswoman Yvette Clarke of New York reintroduced the "Artificial Intelligence Civil Rights Act". The main provisions of the Act are as follows: first, to ensure that AI algorithms do not make discriminatory decisions based on race, gender or other unfair criteria when they affect people in key areas such as housing, employment, health and education; second, to require companies that develop and deploy AI systems to disclose how their algorithms operate, so that the public can understand how these algorithms influence decisions; third, to require rigorous testing of all AI systems both before and after deployment to guarantee their fairness and accuracy; fourth, to establish a robust regulatory framework to ensure that the use of AI systems is consistent with civil rights and social justice.
3. The European Union established the IMPACTS-EDIC digital infrastructure consortium.
On 2 December, the European Union adopted a formal decision to establish the “Innovative Massive Public Administration Inter Connected Transformation Services European Digital Infrastructure Consortium” (IMPACTS-EDIC). The initiative aims to enable EU member states to jointly develop, deploy and operate integrated digital solutions under a distinctive governance model and legal personality. The consortium has its statutory seat in Athens, Greece, with founding member states including Greece, Croatia, Hungary, Poland, the Netherlands and Ukraine. Other member states may subsequently join on equal terms, thereby advancing broader European cooperation.
The core mission of IMPACTS-EDIC is to drive cross-border interoperability of digital public services and accelerate technology deployment to support the implementation of the “Interoperable Europe Act”. The Act is designed to facilitate collaboration among member states, European institutions and agencies and the development of innovative interoperability solutions, enhancing the efficiency of public services. Leveraging shared governance and technological innovation, IMPACTS-EDIC will promote the application of government technology and strengthen interoperability based on the “European Interoperability Framework” (EIF). In doing so, it will streamline administrative procedures, reduce societal burdens, and deliver benefits to citizens, businesses and the economy as a whole.
In terms of operational mechanisms, IMPACTS-EDIC will coordinate all parties through dedicated working groups with clearly defined powers and responsibilities, focusing on technical implementation, legal alignment and capacity building. This will ensure the effective advancement of the following core actions: developing interoperable solutions for the digitalisation of public services; promoting coordination with existing digital infrastructure consortia; engaging stakeholders; and driving the creation of advanced public services while upholding digital sovereignty.
Link: https://interoperable-europe.ec.europa.eu/interoperable-europe/news/commission-launches-impacts-edic
4. The Australian Government released the National AI Plan.
On 2 December, the Australian Government officially released the “National AI Plan”, defining its core objectives as “Capturing Development Opportunities, Widely Sharing Benefits and Safeguarding National Security”, and putting forward a cross-departmental national roadmap for AI capability building. Led by the Department of Industry, Science and Resources, the plan aims to drive the development of intelligent infrastructure, launch talent training initiatives, boost investment in scientific research, and facilitate industry-focused technology adoption and implementation. Meanwhile, it seeks to address AI-related risks by establishing relevant laws, regulations and regulatory frameworks, and promoting the practice of responsible AI application.
The plan sets out three core objectives explicitly: first, capturing development opportunities. By building intelligent infrastructure, supporting domestic AI technological capabilities and attracting global investment, it will help advance the large-scale development of the industry, create high-quality jobs, and enhance Australia’s competitiveness in the global AI landscape. Second, widely sharing benefits. It will push for the widespread adoption of AI technologies across all sectors, strengthen relevant training for Australian workers, and leverage AI to optimise public services, ensuring that people of all ages, regions and genders can equally access the conveniences brought by AI. Third, safeguarding national security. It will establish and improve relevant laws, regulations and regulatory frameworks to mitigate potential AI-related risks, promote the practice of responsible AI application, uphold Australia’s core values through international cooperation, and flexibly respond to emerging risks in the AI domain.
Link: https://www.industry.gov.au/publications/national-ai-plan
5. Canada launched the national standard "Accessible and Equitable Artificial Intelligence Systems".
On 3 December, Accessibility Standards Canada, jointly with Innovation, Science and Economic Development Canada, launched the national standard “Accessible and Equitable Artificial Intelligence Systems” as a framework to guide public sector bodies and enterprises in achieving “accessibility plus fairness” when designing, procuring and deploying AI systems.
The core provisions of the standard are as follows: first, it requires the integration of usage scenarios for persons with disabilities and vulnerable groups into the design phase of AI systems, and avoids technological exclusion through accessible interface design, alternative interaction methods and other measures; second, it emphasises the identification and mitigation of biases during data collection and model training, and reduces disproportionate adverse impacts on specific groups through steps such as data quality assessment and representativeness review; third, it encourages organisations to establish AI risk management and accountability mechanisms, such as appointing senior roles responsible for AI governance, documenting the decision-making rationale of models, and providing users with appeal and explanation channels.
6. U.S. senators promoted the SAFE Chips Act to curb China's AI development.
On 3 December, as reported by U.S. media outlet MeriTalk, the United States Senate Committee on Foreign Relations was advancing the "Secure and Feasible Exports (SAFE) Chips Act" to strengthen chip export controls and prevent China's rise in the field of artificial intelligence (AI). Co-sponsored by Senator Pete Ricketts of Nebraska and Senator Chris Coons of Delaware, the Act aims to codify, for a period of 30 months, the chip export restrictions currently in place under the Trump administration. Gregory Allen of the Center for Strategic and International Studies (CSIS) emphasized that despite China's continuous improvement in AI model research and development capabilities, the United States still dominates the global AI chip market, and China's ability to manufacture high-performance chips still lags. He argued that controlling exports of chip manufacturing equipment would be a crucial factor in counterbalancing China's AI rise.
Link: https://www.meritalk.com/articles/senators-consider-tougher-chip-controls-to-halt-chinas-ai-rise/
7. The U.S. Senate deadlocked over AI-related provisions in the NDAA.
On 4 December, Bloomberg reported that the U.S. Senate was deadlocked over two significant provisions related to artificial intelligence (AI) and investment in China during conference committee negotiations on the National Defense Authorization Act (NDAA): first, the "Guaranteeing Access and Innovation for National Artificial Intelligence Act" (GAIN AI Act), which would require advanced AI chip manufacturers to prioritise domestic U.S. AI demand before exporting to China and other arms-embargoed countries; and second, the "Foreign Investment Guardrails to Help Thwart China Act" (FIGHT China Act), which seeks to codify the Treasury Department's new outbound investment screening mechanism, restrict U.S. capital flows to sensitive technology sectors in China, and establish transparency and sanctions regimes.
Currently, the Senate leans toward retaining these provisions, while the House of Representatives adopts a more cautious stance, fearing knock-on effects on industrial supply chains and diplomatic relations. Compromise proposals under discussion among lawmakers include: stripping certain provisions from the NDAA to be introduced as separate, subsequent standalone legislation; or drawing on the foreign military sales approval model to establish a special committee for case-by-case reviews of critical AI chip exports, aiming to strike a balance between national security and technological industry competitiveness.
Link: https://punchbowl.news/article/tech/gain-ai-ndaa-2/
8. The European Union hosted the Safer Internet Forum.
On 4 December, the European Union hosted the Safer Internet Forum (SIF) in Brussels. Centered on the theme “Why age matters: Protecting and empowering youth in the digital age”, the Forum adopted a hybrid format (combining in-person and virtual participation) to explore how to ensure age-appropriate online experiences for minors through a proportionate, children’s rights-centric approach. Discussions covered age assurance methods and other complementary tools to support parents in making responsible choices, with specific focus on the video game sector. The Forum brought together diverse public and private stakeholders, including children and young people themselves, to share the latest developments within and beyond the EU and anticipate future trends in child online protection.
Link: https://better-internet-for-kids.europa.eu/en/sif
9. The African Union Commission held a validation workshop on the Continental Data Governance Framework.
From 1 to 4 December, to implement the "AU Data Policy Framework" and accelerate the establishment of the "Digital Single Market", the African Union Commission (AUC) hosted a four-day validation workshop on the Continental Data Governance Framework in Addis Ababa, the capital of Ethiopia. The workshop brought together government officials of member states, representatives of regional organizations, academics, and international partners to jointly review and validate a set of policy documents and technical guidelines aimed at unifying and coordinating continental data governance.
The workshop reaffirmed that since its adoption in 2022, the “AU Data Policy Framework” has served as a benchmark for guiding member states in formulating national data strategies and systems, emphasizing transparency, accountability, inclusivity, fair competition, and the protection of digital rights. During the discussions, participants focused on the following types of issues: first, how to strike a balance between safeguarding data sovereignty and promoting cross-border data flows; second, how to reduce the impediment of regulatory fragmentation to regional digital trade and data-driven innovation through unified standards and mutual recognition mechanisms; third, how to strengthen data security and personal information protection to enhance public trust, thereby underpinning the sound development of the digital economy and emerging technologies such as artificial intelligence.
10. The EU signed a memorandum of understanding to support AI Gigafactories.
On 4 December, Henna Virkkunen, Executive Vice-President of the European Commission for Tech Sovereignty, Security and Democracy, Nadia Calvino, President of the European Investment Bank Group (EIB Group), and Merete Clausen, Deputy Chief Executive of the European Investment Fund (EIF), jointly signed a memorandum of understanding aimed at supporting the development and deployment of AI Gigafactories across the European Union.
The document establishes a cooperation framework to accelerate the financing and development of AI Gigafactories, which will serve as the core pillars of Europe’s future AI infrastructure. The EIB Group will provide tailored advisory support to consortia that responded to the European Commission’s “informal Call for Expression of Interest”, helping them transform forward-looking concepts into bankable concrete projects to participate in the formal call for the establishment of AI Gigafactories planned for early 2026, and paving the way for potential co-financing by the EIB.
This memorandum of understanding will advance the implementation of the “InvestAI” initiative, announced by Ursula von der Leyen, President of the European Commission, at the AI Action Summit in Paris in February 2025. The initiative plans to mobilise a €20 billion facility to support the construction of up to five AI Gigafactories—large-scale computing facilities dedicated to the development and training of next-generation AI models.
Beyond unlocking investment, this partnership aims to translate Europe’s AI vision into tangible, large-scale infrastructure to drive innovation, strengthen technological sovereignty, and establish the EU as a global leader in artificial intelligence.
11. The European Electronic Components and Systems Forum was held in Malta.
From 3 to 4 December, the European Forum for Electronic Components and Systems (EFECS) 2025 was held in St Julian’s, Malta. Under the theme “Accelerate Innovation: Building European Competitiveness”, the Forum brought together industry leaders, policymakers, researchers and public authorities from Europe’s electronic components and systems (ECS) sector to jointly explore Europe’s strategic priorities across the semiconductor value chain. The event highlighted the role of the Chips Joint Undertaking (JU) in driving research, innovation and capacity-building under the “European Chips Act”, while a dedicated exhibition area showcased EU-funded projects and industry–research collaboration outcomes, providing opportunities for extensive networking and consortium building.
12. The EU AI Board met on the Digital Simplification Package and AI Act implementation.
On 4 December, the European Union’s AI Board held a meeting to discuss the Digital Simplification Package and review the current priorities for implementing the AI Act. Prior to this, the European Commission had put forward the “Digital Omnibus proposal”, aiming to streamline rules related to artificial intelligence, cybersecurity and data. This package also includes the “Data Union Strategy”, which seeks to ensure the availability of high-quality data for AI.
Against this backdrop, the AI Board explored the next steps in implementing the "AI Act". These include the Commission developing guidelines to bridge the gaps left by the most delayed standards, thereby giving stakeholders guidance instruments while the standardisation process continues. Board members also discussed national implementation progress on the AI Act, sharing good practices for addressing challenges and experiences in establishing effective coordination mechanisms. In addition, the AI Office presented its work and technological developments on AI in health and the life sciences, and the Board endorsed the 2026 workplans of several subgroups.
Link: https://digital-strategy.ec.europa.eu/en/news/sixth-ai-board-meeting
13. The Japanese Government will ease data consent rules for artificial intelligence development.
On 4 December, a proposal released by the Government of Japan calls for revisions to the “Personal Information Protection Act”, which stipulates that individuals’ consent is required for sharing personal data with third parties and for collecting sensitive information (such as medical records or criminal records). To accelerate the development of artificial intelligence (AI), the Government of Japan plans to relax the consent requirements for accessing personal information while imposing tougher penalties for intentional misuse.
Under the draft of the major amendment to the Act, consent will no longer be required if the data is used solely for creating statistical information. Business leaders have argued that strict consent rules hinder AI research, which relies on vast amounts of training data. AI developers often collect information by automatically scanning publicly available web pages, where sensitive personal data may be present. The revised regulations will allow the use of such data without prior approval, provided that it is processed into statistical form rather than shared in a way that could identify individuals. The government views this move as part of a broader national strategy to strengthen economic security through AI.
Link: https://www.asahi.com/ajw/articles/16203430
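The exemption hinges on reducing personal records to statistics that cannot single out an individual. As a minimal illustrative sketch (the field names and the small-group suppression threshold are assumptions for illustration, not taken from the proposal), aggregation with suppression of small groups might look like:

```python
# Illustrative only: turn raw personal records into aggregate statistics,
# the kind of output the draft amendment would exempt from prior consent.
# Field names ("diagnosis_category") and the threshold are hypothetical.
from collections import Counter

def to_statistics(records, min_group_size=5):
    """Count records per category, suppressing categories with too few
    people so that no individual can be re-identified from the output."""
    counts = Counter(r["diagnosis_category"] for r in records)
    return {cat: n for cat, n in counts.items() if n >= min_group_size}

records = [{"name": "A", "diagnosis_category": "flu"} for _ in range(6)] + \
          [{"name": "B", "diagnosis_category": "rare_condition"}]

stats = to_statistics(records)
# The six-person "flu" group survives as a count; the singleton
# "rare_condition" record is suppressed, since publishing it could
# identify the one person behind it.
```

Only aggregate counts leave the function; names and individual rows never appear in the output, which is the distinction the draft draws between statistical use and identifying disclosure.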
(II) Law Enforcement and Judicial Updates
1. The European Commission opened a formal antitrust investigation into Meta.
On 4 December, the European Commission announced the opening of a formal antitrust investigation to assess whether Meta’s new policy on AI providers’ access to WhatsApp breaches EU competition rules. In October 2025, Meta introduced a policy named the “WhatsApp Business Solution”, which prohibits AI providers from using the tool that allows businesses to communicate with customers via WhatsApp. The European Commission is concerned that the policy may prevent third-party AI providers from offering their services through WhatsApp within the European Economic Area (EEA).
According to the European Commission, several AI providers already offer access to their AI assistants via WhatsApp in the EEA, enabling direct interaction between users and AI assistants within the app. Under the new policy, however, this practice may be prohibited: AI providers competing with Meta may be blocked from reaching their customers through WhatsApp, while Meta’s own “Meta AI” service remains accessible to users on the platform. The European Commission takes the view that this conduct may violate Article 102 of the Treaty on the Functioning of the European Union (TFEU) and Article 54 of the European Economic Area Agreement, which prohibit undertakings from abusing a dominant market position.
Link: https://ec.europa.eu/commission/presscorner/detail/en/ip_25_2896
2. The European Commission imposed a fine of 120 million euros on X.
On 5 December, the European Commission imposed a fine of €120 million on the social media platform X under the Digital Services Act (DSA), finding that the platform had breached its transparency obligations under the DSA. The decision identifies three violations: the misleading design of X’s “Blue Checkmark” feature, the lack of transparency in X’s ad library, and X’s failure to grant researchers access to the platform’s public data as required.
First, X’s use of the “Blue Checkmark” to indicate a “verified account” deceives users, violating the DSA’s prohibition on deceptive design in online platform services. On X, anyone can obtain the “Blue Checkmark” simply by paying a fee, and the platform does not substantively verify the account holder’s identity, making it difficult for users to judge the authenticity of the accounts they interact with. Second, X’s ad library has access barriers and lacks key information, hindering researchers and the public from independently scrutinising potential risks in online advertising and failing to meet the DSA’s accessibility requirements for platform ad libraries. Third, X has failed to fulfil its obligation under the DSA to provide researchers with access to the platform’s public data; its blanket ban on eligible researchers independently accessing public data undermines research into the platform’s systemic risks.
The European Commission opened its investigation on 18 December 2023 and adopted the formal non-compliance decision on 5 December 2025. X must submit to the European Commission specific corrective measures addressing the misleading “Blue Checkmark” practices within 60 days, and a comprehensive action plan within 90 days.
Link: https://ec.europa.eu/commission/presscorner/detail/en/ip_25_2934
(III) International Cooperation Updates
On 1 December, the Consórcio Nordeste (Northeast Consortium) held an assembly in Teresina, Piauí, and announced the establishment of the Centro de Inteligência Artificial do Nordeste (CIAN, Northeast Artificial Intelligence Center). The center aims to unite federal universities, Dataprev, Huawei, and relevant departments of the Brazilian Federal Government to position the Northeast region as a hub for AI innovation focused on public sector applications.
CIAN operates around five strategic pillars: first, building high-performance computing and cloud infrastructure; second, systematically training AI and data professionals; third, developing AI solutions for areas such as public health, education, and financial management; fourth, promoting international technology transfer and cooperation—particularly collaboration with China in AI and data infrastructure; fifth, fostering a regional innovation ecosystem conducive to startups and research institutions. Supported by Dataprev, Huawei, and the Federal Government, the project aims to train approximately 40,000 relevant professionals within three years, establishing the Northeast as a key pillar for Brazil’s digital sovereignty and the digitalisation of public services.
On 1 December, the European Union (EU) and Singapore held their second Digital Partnership Council meeting in Brussels, reaffirming their intention to cooperate across a range of digital areas including artificial intelligence (AI) and cybersecurity. Both sides reiterated their commitment to enhancing mutual competitiveness, fostering innovation, and shaping digital rules and standards.
The European Commission and Singapore highlighted that they will deepen collaboration in the following areas going forward: first, in the field of AI, reaffirming the importance of the administrative arrangement on collaboration in AI safety and exchanging views on research in large language models, including the EU’s “Alliance for Language Technologies European Digital Infrastructure Consortium (ALT-EDIC)” and Singapore’s “Sea-Lion” project; second, in online safety and tackling scams, committing to jointly address risks arising from online platforms, continue exchanging views on the best ways to protect consumers, and focus on the protection and empowerment of minors online—including exploring the potential application of age verification tools; third, in trust services, exploring cross-border interoperable use cases for verifiable credentials (such as existing digital identity systems); fourth, in cybersecurity, continuing cooperation to ensure both markets are cyber-resilient, emphasizing the importance of bilateral and multilateral actions, and continuously evaluating cybersecurity risks; fifth, in data, leveraging the positive role of bilateral cooperation in boosting data flows, exploring possibilities for expanding such cooperation, and collaborating on data spaces; and finally, in semiconductors and quantum technologies, expressing interest in collaborative research through frameworks such as “Horizon Research” and welcoming cross-border investments in the semiconductor ecosystem.
Link: https://ec.europa.eu/commission/presscorner/detail/en/ip_25_2851
On 1 December, as reported by Bloomberg, the United States is advancing a new strategy aimed at enhancing supply chain security for computer chips and critical minerals required for AI technology through agreements with eight allied nations. Led by Jacob Helberg, Under Secretary of State for Economic Affairs at the U.S. Department of State, this initiative will involve a meeting at the White House on 12 December, with participating countries including Japan, South Korea, Singapore, the Netherlands, the United Kingdom, Israel, the United Arab Emirates and Australia. The focus of the meeting is to seek cooperation agreements in areas such as energy, critical minerals, high-end semiconductor manufacturing, AI infrastructure, and transportation logistics.
This strategy builds on the U.S. energy resource governance initiative launched during the Trump administration, aiming to ensure the supply chain security of critical minerals like lithium and cobalt. It also continues the Biden administration’s Mineral Security Partnership (MSP), promoting foreign investment and Western technological support for the mining industry in developing countries. Helberg noted that the AI sector currently presents a “bipolar” landscape, with competition primarily between the United States and China. This cooperation plan reflects a “U.S.-centric” strategy, intended to address challenges and opportunities in the AI sector alongside trusted allies, rather than merely responding to competition from China.
From 2 to 3 December, at the International AI Standards Summit held in Seoul, Republic of Korea, the International Electrotechnical Commission (IEC), the International Organization for Standardization (ISO), and the International Telecommunication Union (ITU) jointly issued the “Seoul Statement”. This statement outlines the three organizations’ shared vision for how international standards will support the development and deployment of trustworthy AI systems that benefit society, drive innovation, and uphold fundamental rights.
To advance sustainable development and ensure that all people and societies can benefit from the AI revolution, the “Seoul Statement” sets out four key commitments: first, to actively incorporate socio-technical dimensions in standards development; second, to deepen the understanding of the interplay between international standards and human rights, recognizing their importance and universality; third, to strengthen an inclusive and dynamic multistakeholder community for the development and application of international standards governing the design, deployment, and governance of AI; fourth, to enhance public-private collaboration on AI capacity building.
Through their complementary mandates and longstanding collaboration, IEC, ISO, and ITU are working to ensure that AI standards reflect global needs, support regulatory alignment, and foster interoperability, trust, and inclusion in the digital age.
Link: https://www.iso.org/news/2025/12/ai-standards-summit
On 3 December, the Cybersecurity and Infrastructure Security Agency (CISA) of the United States and the Australian Cyber Security Centre (ACSC) jointly released the “Principles for the Secure Integration of Artificial Intelligence in Operational Technology” – a guidance document developed with the participation of cybersecurity authorities from seven countries including Canada, Germany, the Netherlands, New Zealand, and the United Kingdom. It provides security guidelines for critical infrastructure sectors such as power, manufacturing, and energy when introducing AI systems.
The document emphasizes three key points: first, operational technology (OT) asset management and risk identification must take precedence. Organizations need to map the interactive relationships between AI and existing control systems, sensors, and networks, and assess the potential impact of false triggers and erroneous decisions on safe production; second, the “security-by-design” concept should be integrated into the model development and deployment phases, involving security reviews of training data, model supply chains, and third-party components to guard against data poisoning, backdoors, and unauthorized access; third, continuous monitoring of AI systems’ operational status in OT environments is required, with log recording, anomaly detection, and emergency response processes established to ensure rapid rollback to manual or traditional control modes if AI behaves abnormally.
These principles represent an initial international consensus on securing “AI + critical infrastructure”, enabling operators in the Americas and elsewhere to adopt AI for optimised scheduling and maintenance while keeping cybersecurity and physical security risks within acceptable limits.
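The third principle above, continuous monitoring with rollback to manual or traditional control, can be sketched in a few lines. This is a minimal illustration under assumed names and thresholds (the `SAFE_RANGE` limits, controller callables, and class name are hypothetical, not taken from the guidance document):

```python
# Illustrative sketch: supervise an AI controller's outputs in an OT
# environment, log anomalies, and fall back to a traditional controller
# when the AI's output leaves a plant-defined safe range.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ot-monitor")

SAFE_RANGE = (0.0, 100.0)  # plant-specific setpoint limits (assumed)

class SupervisedController:
    def __init__(self, ai_controller, manual_controller):
        self.ai = ai_controller          # callable: sensor reading -> setpoint
        self.manual = manual_controller  # traditional fallback controller
        self.ai_enabled = True

    def step(self, sensor_reading):
        if self.ai_enabled:
            setpoint = self.ai(sensor_reading)
            if SAFE_RANGE[0] <= setpoint <= SAFE_RANGE[1]:
                return setpoint
            # Anomaly: record it and roll back to the traditional controller.
            log.warning("AI setpoint %.1f out of range; reverting to manual",
                        setpoint)
            self.ai_enabled = False
        return self.manual(sensor_reading)

ctrl = SupervisedController(ai_controller=lambda s: s * 2.0,
                            manual_controller=lambda s: 50.0)
ctrl.step(10.0)  # AI output 20.0 is in range, so it is used
ctrl.step(80.0)  # AI output 160.0 is out of range: logged, fallback engaged
ctrl.step(10.0)  # AI stays disabled; the manual controller answers
```

The key design point, matching the document's emphasis, is that the rollback is one-way and immediate: once an anomaly is logged, the AI path stays disabled until an operator deliberately re-enables it.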
On 3 December, the United Nations Educational, Scientific and Cultural Organization (UNESCO) Office for the Caribbean, in partnership with the Government of Trinidad and Tobago and the United Nations Development Programme (UNDP), launched a comprehensive national assessment initiative to evaluate the country’s preparedness for the ethical, inclusive, and human-centred adoption of artificial intelligence (AI). At the core of this effort is UNESCO’s “AI Readiness Assessment Methodology (RAM)”, the first international diagnostic tool developed to evaluate countries’ readiness to govern AI in alignment with human rights, ethical principles, and the Sustainable Development Goals.
The key dimensions of the assessment include legal and regulatory, social and cultural, economic, scientific and educational, and technological and infrastructural aspects, providing a holistic overview of national strengths, emerging risks, and critical gaps requiring attention to ensure responsible AI governance.
Link: https://www.unesco.org/en/articles/unesco-supports-trinidad-and-tobago-advancing-ai-readiness
On 2 December, the United Nations Development Programme (UNDP) released its latest report titled “The Next Great Divergence: Why AI may widen inequality between countries”. The report warns that if artificial intelligence (AI) is not properly harnessed, it may repeat the “Great Divergence” of the Industrial Revolution, widen the gap between developed and developing countries, and calls for policy measures to mitigate this risk.
The report emphasizes that while AI opens new pathways for development, countries start from vastly different conditions and therefore differ greatly in their capacity to seize opportunities and manage risks. Without strong policy intervention, these gaps are likely to persist and widen, reversing the long-standing trend of narrowing development gaps. The report also notes that although AI holds enormous potential in healthcare, agriculture, and disaster management, its development should follow a “human-centred” approach rather than focus solely on productivity. Without attention to the core principles of AI ethics and inclusive governance, millions of jobs, especially those held by women and young people, face significant displacement risks from automation.
Link: https://www.undp.org/asia-pacific/publications/next-great-divergence
2. The software service provider Salesforce released the Global AI Readiness Index Report.
On 7 December, software service provider Salesforce released the “Global AI Readiness Index: Scaling Adoption of AI Agents in the Enterprise”, which assesses the readiness of 16 major markets for the “agentic AI” era. The report evaluates AI readiness across five dimensions: regulatory frameworks and infrastructure, AI adoption and diffusion, AI innovation capabilities, investment environment, and talent and skills development. It finds that while many countries have relatively sound AI regulatory systems, significant gaps remain in innovation capabilities, investment support, and local AI ecosystem development, with small and medium-sized enterprises (SMEs) and workforce skills training in particular requiring urgent attention.
The report draws the following conclusions: In terms of regulatory frameworks, most countries have achieved a certain level of maturity in legal and data governance, but there is still a need to refine ethical and accountability frameworks related to “agentic AI”. In terms of innovation and investment, there are large disparities in AI innovation and investment: some countries lead in capital, enterprise ecosystems, and research collaboration, while others face bottlenecks in funding and technology translation. In terms of talent and skills, talent reserves and skills development are key drivers of AI application—especially the AI literacy and skills training of the general workforce.