Global AI Newsletter·Issue 3
Table of Contents
I. Domestic Governance Developments
(I) Policy and Legislative Updates
(II) Law Enforcement and Judicial Updates
1. Ministry of Industry and Information Technology reports 24 apps infringing user rights
2. Seven Courts in China Collaborate to Combat Unfair Competition in the AI Era
(III) International Cooperation Updates
1. China released the “China’s Policy Paper on Latin America and the Caribbean”
II. International Governance Developments
(I) Policy and Legislative Updates
1. South Korea details new regulations on the use of raw data
3. Minimum age system for social media in Australia officially implemented
4. The Spanish Data Protection Agency has established an AEPD laboratory
5. EU and Virtual World Association sign the “European Partnership for Virtual Worlds” agreement
6. Vietnam’s National Assembly passes first Artificial Intelligence Law
7. Vietnam’s National Assembly passes revised High-Tech Law
8. The European Commission adopts the main work plan of Horizon Europe 2026-2027
10. President Trump signed an executive order on unifying the national AI policy
11. High-level Panel on Digital Markets Act holds fifth meeting
(II) Law Enforcement and Judicial Updates
2. The United States allows Nvidia to sell advanced AI chips to China
(III) International Cooperation Updates
2. EU and Canada strengthen digital partnership in AI, digital identity and independent media
3. The Linux Foundation and tech giants have teamed up to establish the “Agentic AI Foundation”
5. The European Commission held a seminar on the protection of minors in online marketplaces
6. Australia and Indonesia strengthen cyber security cooperation
1. OpenRouter releases “State of AI” empirical research report
3. OpenAI releases “The State of Enterprise AI 2025”
I. Domestic Governance Developments
(I) Policy and Legislative Updates
On December 12, ten government agencies including the State Administration for Market Regulation, the Cyberspace Administration of China, and the Ministry of Industry and Information Technology jointly released the Guidelines on Enhancing Product and Service Quality in Online Trading Platforms (hereafter “the Guidelines”). The policy aims to drive innovation and quality improvement in online sales products. The document outlines 15 concrete measures across five key areas: upgrading product and service quality, strengthening comprehensive quality management for business entities, combating illegal online trading practices, enhancing quality supervision of trading platforms, and fostering a secure and trustworthy online consumption environment.
Regarding the improvement of product and service supply quality, the Guidelines propose three measures: first, promoting quality enhancement and innovation in online sales products; second, refining quality management rules for online services; and third, cultivating trustworthy online business entities.
To strengthen the comprehensive quality management of business entities, the Guidelines propose three measures: first, enhancing platforms’ end-to-end quality management capabilities; second, improving the quality management competencies of platform operators; and third, reinforcing the professional standards of live-streaming e-commerce practitioners.
In terms of regulating illegal business practices in online transactions, the Guidelines propose three measures: first, addressing violations such as product non-conformity; second, tackling deceptive marketing practices; and third, combating unfair competition and price violations.
To strengthen the quality supervision of online trading platforms, the Guidelines propose three measures: first, enhancing comprehensive quality supervision across the entire supply chain; second, advancing intelligent supervision; and third, improving inter-agency coordination in supervision.
To foster a secure and trustworthy online consumption environment, the Guidelines propose three measures: first, promoting information transparency; second, enhancing after-sales services; and third, improving the convenience of consumer rights protection.
Link: https://www.samr.gov.cn/xw/xwfbt/art/2025/art_0f9e339b70ee42e99396416820947fc4.html
On December 14, Fujian Province released the “Guidelines for the Operational Procedures of Data Asset Full-Process Management (Trial)” (hereinafter referred to as the “Guidelines”), providing concrete operational guidance for pilot data asset management initiatives, standardizing management processes, and establishing an effective data asset management framework.
The Guidelines specify that data asset management encompasses the entire lifecycle, from asset ledger compilation and registration to authorized operations, revenue distribution, trading, and disposal. Using a visual ‘charts + text’ format, the Guidelines detail operational procedures and requirements for each phase. Furthermore, the Guidelines emphasize the establishment of supervision and risk prevention mechanisms to strengthen data security and risk control throughout the process, ensuring standardized management of data assets.
Link: https://www.fujian.gov.cn/zwgk/ztzl/sxzygwzxsgzx/sdjj/szjj/202512/t20251214_7045803.htm
(II) Law Enforcement and Judicial Updates
1. Ministry of Industry and Information Technology reports 24 apps infringing user rights
On December 9, China’s Ministry of Industry and Information Technology (MIIT) released a list of 24 apps found to violate user rights. The findings included: 8 apps failing to disclose their personal data collection policies; 14 engaging in unauthorized data collection; 8 imposing excessive or frequent permission requests; 1 deceiving users into providing sensitive information; 2 causing random redirects through pop-up windows; 6 collecting data beyond authorized scopes; 4 failing to properly disclose SDK information; and 1 forcing users to enable targeted push notifications. Notably, some apps were found to have multiple violations.
Link: https://wap.miit.gov.cn/jgsj/xgj/gzdt/art/2025/art_a6d230b9afb04ae797e6ef7e284f20c0.html
2. Seven Courts in China Collaborate to Combat Unfair Competition in the AI Era
On December 6, the 7th Pudong Forum on Intellectual Property Judicial Protection in Free Trade Zones, themed “Frontier Issues in Anti-Unfair Competition Law in the AI Era”, was held in Pudong, Shanghai, alongside the 2025 Annual Conference of the Intellectual Property Law Research Association of the Shanghai Law Society. During the event, seven courts jointly released the “Judicial Cooperation Initiative on Anti-Unfair Competition in the AI Era”, providing fair and efficient judicial safeguards for the high-quality development of artificial intelligence.
Link: http://www.legaldaily.com.cn/index/content/2025-12/10/content_9304127.html
On December 12, the National Cybersecurity Standardization Technical Committee (NCSSTC) launched a public consultation on six national standards: “Cybersecurity Technology-Application Interface Specification for Cryptographic Devices”, “Cybersecurity Technology-Security Monitoring Method for Government Cloud Platforms”, “Cybersecurity Technology-Security Requirements for Blockchain Consensus Mechanisms”, “Data Security Technology-Security Requirements for Public Data Openness”, “Cybersecurity Technology-Security Technical Requirements for IoT Perception Terminal Applications”, and “Cybersecurity Technology-Security Requirements for Cryptographic Modules”. Feedback should be submitted to the committee’s secretariat by February 10, 2026.
Link: https://www.tc260.org.cn/portal/suggestion?sessionid=
On December 13, DouBao responded to an earlier online misinterpretation claiming that it could access protected interface content such as bank security keyboards, stating: “DouBao Mobile Assistant only initiates screenshotting when a user command is received, and cannot capture Secure-tagged pages in third-party apps. Screenshots uploaded to the cloud-based large model are used solely for visual understanding and reasoning, and are not stored in the cloud after task completion.” DouBao Mobile Assistant had earlier released a technical preview on December 1, demonstrating its phone interaction and operation capabilities. On December 10, the company explained the restrictions on certain apps: some Alibaba-affiliated apps have gradually lifted their device login restrictions, while DouBao has simultaneously removed the Mobile Assistant’s phone operation capabilities for the related apps; for other apps that currently block the assistant, the company is actively communicating with the relevant manufacturers.
Link: https://mp.weixin.qq.com/s/QuLEnFlKK6OAAvunHFmgOA
(III) International Cooperation Updates
1. China released the “China’s Policy Paper on Latin America and the Caribbean”
On December 10, China released the “China’s Policy Paper on Latin America and the Caribbean”. The document comprehensively elaborates China’s policy toward Latin America, aiming to further promote China-Latin America relations and cooperation in various fields.
Article 9 of the second part of the document states that China is willing to strengthen the construction of intergovernmental cooperation mechanisms for scientific and technological innovation with Latin America, enhance exchanges among researchers, and support the China-Latin America Technology Transfer Center in playing a role in promoting the improvement of scientific research capabilities and the transformation of scientific and technological achievements. China is willing to engage in dialog and cooperation with Latin America in the field of artificial intelligence, jointly implement the “Global AI Governance Initiative”, “AI Capacity Building Inclusive Plan”, and “AI Global Governance Action Plan”, and work together to advance the development and governance of global artificial intelligence.
Link: https://www.mfa.gov.cn/zyxw/202512/t20251210_11770005.shtml
(IV) Research Updates
On December 10, the China Academy of Information and Communications Technology (CAICT) Terminal Laboratory and the Center for Public Policy Research at Peking University jointly released the Development Report on Government Intelligent Agents (2025).
The report analyzes government intelligent agents across five dimensions. On development background and conceptual framework, it highlights that the advancement of government intelligent agents results from the combined effects of supply-side forces, demand-side forces, and policy frameworks, and defines government intelligent agents as AI systems embedded in governance and public service systems, capable of autonomously sensing environments, making independent decisions, utilizing tools, and executing tasks. Regarding technical components and architecture, the report outlines key technological elements including large language models, a six-tiered structure with two supporting systems, and three deployment models featuring standardized software platform services. On transformational value and application scenarios, the report notes that government intelligent agents are transitioning from proof-of-concept to large-scale implementation, achieving leaps in technology, capability, and value, demonstrating their potential to reshape governance models, enhance service efficiency, and build collaborative ecosystems. Concerning challenges, the report identifies primary constraints in reliability, feasibility, and controllability, including technological immaturity, barriers to adaptation in government environments, and governance and risk-management pressures during large-scale deployment. For future development, the report proposes five strategic directions: strengthening top-level design, upgrading technical cores, fostering scenario innovation, innovating organizational models, and building collaborative ecosystems. The appendix concludes with 32 exemplary cases of government intelligent agent applications.
Link: http://www.caict.ac.cn/kxyj/qwfb/ztbg/202512/P020251210374654155856.pdf
II. International Governance Developments
(I) Policy and Legislative Updates
1. South Korea details new regulations on the use of raw data
On December 8, the Fair Trade Commission of South Korea announced the “2025 Plan to Improve Competition Restriction Regulations.” Starting in the second half of 2026, raw data that has not been pseudonymized (for example, by applying facial mosaics) may be used for AI training.
The plan proposes that the South Korean government will permit the direct use of raw data for AI training, provided the personal information was lawfully collected, meets specific requirements, and passes review by the Personal Information Protection Commission. The exception will apply only to cases serving the public interest, cases where anonymization or pseudonymization would make development difficult, or cases where infringement risks are low. Once implemented, the plan is expected to improve recognition accuracy through AI training on such data, reduce data preprocessing costs and time, boost the domestic AI technology ecosystem, and strengthen competitiveness.
2. U.S. House passes final NDAA 2026 without AI chip export priority or AI preemption provisions
The Hill reported on December 9 that the U.S. House of Representatives has passed the final version of the National Defense Authorization Act for Fiscal Year 2026 (NDAA 2026). During the drafting and negotiation of the NDAA, a proposal was discussed that would require semiconductor manufacturers to prioritize meeting domestic customer demand before exporting advanced AI chips. This bill, known as the Guaranteeing Access and Innovation for National AI Act (GAIN AI Act), was originally included in the NDAA draft as a Senate amendment. However, when Congress reached final agreement on the NDAA, the export priority clause was excluded, meaning no new restrictions on AI chip exports will be imposed in this year’s defense legislation. Equally noteworthy is the “preemption” proposal, which sought to establish federal regulatory primacy over AI through the NDAA to prevent conflicts between state or local governments and federal policy on AI-related matters such as privacy, liability, and algorithmic rules. According to congressional leaders, this proposal was also not included in the final National Defense Authorization Act.
Link: https://thehill.com/policy/technology/5639209-ndaa-ai-preemption-chip-exports/?tbref=hp
3. Minimum age system for social media in Australia officially implemented
On December 10, the Australian federal government’s “Online Safety Amendment (Social Media Minimum Age) Act 2024” officially entered its implementation phase. Under the legislation, major social media platforms operating in Australia must take “reasonable steps” to prevent minors under 16 from registering for or continuing to use their services. Overseen by the eSafety Commissioner, the framework has been hailed by international media as the world’s first national-level mandatory minimum-age regulation for social media.
After the new rules took effect, several social media platforms began restricting or removing accounts suspected of belonging to users under 16, while adjusting their registration processes and user terms accordingly. The law does not require users to submit identity documents directly, but it obliges platforms to implement compliant age assurance through technical and administrative measures. The Office of the Australian Information Commissioner (OAIC) has also issued supporting guidelines, emphasizing that platforms must adhere to the Privacy Act when enforcing the minimum age requirement, uphold the principle of data minimization, and avoid excessive collection and processing of minors’ personal information. Some platforms have raised legal or policy objections to the system over issues such as age-verification technology, privacy protection, and compliance costs, and related proceedings are still underway.
Link: https://www.oaic.gov.au/privacy/your-privacy-rights/social-media-minimum-age
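The data-minimization principle emphasized in the privacy guidance above can be sketched in code: a platform checks whether a user meets the minimum age, then retains only the pass/fail outcome rather than the date of birth itself. The following Python sketch is purely illustrative and not drawn from any platform’s actual implementation; the function names and record format are hypothetical.

```python
from datetime import date

MINIMUM_AGE = 16  # threshold set by the Australian legislation


def is_at_least(minimum_age: int, dob: date, today: date) -> bool:
    """Return True if a person born on `dob` has reached `minimum_age` by `today`."""
    # Count completed years, subtracting one if this year's birthday hasn't happened yet.
    had_birthday_this_year = (today.month, today.day) >= (dob.month, dob.day)
    age = today.year - dob.year - (0 if had_birthday_this_year else 1)
    return age >= minimum_age


def minimized_age_record(dob: date, today: date) -> dict:
    """Data-minimization pattern: store only the boolean outcome of the check.

    The date of birth is used transiently and never retained, in line with the
    principle of avoiding excessive collection of minors' personal information.
    """
    return {"age_check_passed": is_at_least(MINIMUM_AGE, dob, today)}
```

For example, `minimized_age_record(date(2012, 5, 1), date(2025, 12, 10))` yields `{"age_check_passed": False}`, with no date of birth kept in the record.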
4. The Spanish Data Protection Agency has established an AEPD laboratory
On December 10, Spain’s Data Protection Agency (Agencia Española de Protección de Datos, AEPD) established the AEPD Laboratory (Laboratorio de la AEPD) and incorporated this initiative into its 2025-2030 strategic plan. The AEPD Laboratory is dedicated to fostering interdisciplinary thinking, identifying emerging technology trends, and strengthening personal data protection, aiming to build an integrated industry-academia-research platform that emphasizes both prevention and collaboration.
The laboratory will focus on four key initiatives: First, it will launch the journal *Revista de Privacidad, Innovación y Tecnología* (Privacy, Innovation, and Technology) to foster interdisciplinary collaboration and research breakthroughs in fields like artificial intelligence. Second, the laboratory has established the “Blog Lab” as a non-commercial platform for rigorous, independent, and authentic discussions on privacy and data protection, bringing together external experts for analysis, reflection, and debate. Third, it will introduce the audiovisual program “Hidden Dialogs,” featuring dialogs between the AEPD director and experts to address pressing data protection issues and digital challenges. Fourth, the laboratory will track innovative projects, research reports, and industry trends in privacy and data protection, providing valuable insights for practitioners and the public.
Link: https://www.aepd.es/prensa-y-comunicacion/notas-de-prensa/agencia-lanza-laboratorio-aepd
5. EU and Virtual World Association sign the “European Partnership for Virtual Worlds” agreement
On December 10, the European Union and the Virtual World Association (VWA), comprising 18 member states, signed the ‘European Partnership for Virtual Worlds’ agreement. This initiative aims to bridge industry, academia, research institutions, and end-users to jointly advance scientific research and innovation in the virtual world sector.
The “European Partnership for Virtual Worlds” will focus on advancing initiatives across four key domains. In industry and sustainable development, the partnership will leverage digital twin technology and virtual prototyping to reduce costs and environmental impacts in manufacturing and engineering. In healthcare and wellness, it will explore innovative applications of virtual reality simulation in medical training, surgical planning, and rehabilitation therapy. In education and skills development, the partnership commits to providing interactive 3D learning environments for schools and universities, enabling students and teachers to access high-quality resources in safe, controlled settings. In the cultural and creative sectors, it will create immersive museum tours, theatrical performances, and interactive cultural heritage experiences for all Europeans.
Furthermore, the inaugural Strategic Research and Innovation Agenda, published alongside the Partnership, outlines key priorities for the future development of Web 4.0, covering applications, technologies, and related challenges such as ethical, legal, and social issues, sustainability, and governance.
6. Vietnam’s National Assembly passes first Artificial Intelligence Law
On December 10, Vietnam’s 15th National Assembly passed its first Artificial Intelligence Law, an 8-chapter, 36-article legislation that will take effect on March 1, 2026. This landmark law establishes Vietnam’s first comprehensive framework for regulating AI development, application, and governance.
The Artificial Intelligence Law establishes a development-oriented legal framework that balances risk and innovation. It upholds the human-centric principle, mandating that AI serve rather than replace humans, and requires human oversight for critical decisions. The core mechanism implements tiered risk-based management: stringent data controls, certifications, and monitoring apply to high-risk sectors like finance, healthcare, and judiciary, while lenient regulations encourage innovation in low-risk applications. To foster ecosystem growth, the law introduces a National AI Development Fund, an AI voucher subsidy system, and a regulatory sandbox for companies to test sensitive technologies.
Furthermore, the law explicitly states that the state will invest in building national AI computing centers and open data systems to reduce computing costs, while incorporating AI fundamentals into general education, encouraging universities to establish specialized programs, and attracting international experts to develop a talent strategy.
7. Vietnam’s National Assembly passes revised High-Tech Law
On December 10, Vietnam’s 15th National Assembly passed the revised High-Tech Law, comprising six chapters and 27 articles. The law, effective from July 1, 2026, elevates the development of high-tech and strategic technologies to the level of national strategic breakthroughs, aiming to ensure national defense security and enhance the country’s technological autonomy.
At its core, the law declares that high-tech and strategic technology activities receive the highest priority in investment, tax, land, and other preferential policies. It requires the national budget to prioritize the research, development, and commercialization of relevant technologies, and to invest in the technological infrastructure needed for digital and green transformation.
On legal protection, the law expressly prohibits the use of high-tech activities to harm national interests, public safety, or intellectual property rights, and establishes special talent-attraction policies intended to provide the best working and living conditions for high-tech talent at home and abroad, building the talent foundation that supports industrial innovation.
Link: https://mst.gov.vn/quoc-hoi-thong-qua-luat-cong-nghe-cao-sua-doi-197251210182921119.htm
8. The European Commission adopts the main work plan of Horizon Europe 2026-2027
On December 11, the European Commission adopted the main work programme of Horizon Europe for 2026-2027. Horizon Europe is the EU’s €93.5 billion research and innovation programme running from 2021 to 2027, and this work programme covers the programme’s final cycle.
The annual work plan focuses on three key priorities. First, innovative approaches to tackling cross-disciplinary challenges: a major novelty of the 2026-2027 programme is the introduction of horizontal calls, which address interdisciplinary issues spanning clean energy, artificial intelligence, and other fields. A €90 million budget for AI applications in science supports trustworthy AI solutions in advanced materials, agriculture, and healthcare, strengthening Europe’s leadership in developing ethical and secure AI technologies. Second, attracting and retaining talent: the “Select Europe” initiative invests €50 million in research infrastructure to enhance international collaboration and training opportunities, and European Research Area (ERA) chairs will receive €240 million to attract top scientists to less research-intensive regions. The plan also ensures continued access to critical research infrastructure and data. Third, streamlining the Horizon Europe application process: a key simplification is allocating around half of the budget through one-off lump-sum funding, significantly reducing the administrative burden on applicants.
Link: https://ec.europa.eu/commission/presscorner/detail/en/ip_25_3022
9. Digital Commons European Digital Infrastructure Consortium officially established
On December 11, the Digital Commons European Digital Infrastructure Consortium (DC-EDIC) was officially established in The Hague. DC-EDIC comprises France, Germany, the Netherlands, and Italy, and has gained growing support from candidate members (Luxembourg and Slovenia) and observers (Poland and Belgium).
DC-EDIC pioneered a collaborative framework for digital sharing, uniting public sectors, open-source communities, and enterprises to pool resources for developing critical open-source components and accelerate the shift from isolated pilot projects to shared digital infrastructure. As an incubator and one-stop platform, it provides funding, technical support, legal assistance, and cross-border collaboration models to help governments reuse proven open-source building blocks.
10. President Trump signed an executive order on unifying the national AI policy
On December 11, U.S. President Donald Trump signed an executive order titled “Ensuring A National Policy Framework for Artificial Intelligence,” which establishes a unified federal policy framework for AI to eliminate barriers to national innovation and competitiveness caused by inconsistent state regulations. The order emphasizes that the United States must promote AI technology development through coordinated policies to safeguard national security, economic interests, and global competitiveness.
The executive order primarily contains the following provisions: First, it mandates the federal government to establish a “minimum burden” national standard for artificial intelligence, aiming to prevent the compliance dilemma of “50 different regulatory systems” caused by inconsistent or overly stringent AI regulations across states. Second, the Department of Justice will form an AI Litigation Task Force within 30 days, tasked with challenging state laws that conflict with national AI policies. Third, the Department of Commerce will evaluate existing state AI laws within 90 days, identifying burdensome regulations that conflict with federal policies and submitting them to the AI Litigation Task Force; states with conflicting regulations may lose eligibility for certain federal grants. Fourth, the Federal Communications Commission (FCC) and the Federal Trade Commission (FTC) will each initiate procedures within 90 days to establish federal reporting and disclosure standards for AI models. Fifth, it calls for a unified federal AI regulatory framework covering a wide range of AI applications, while stipulating that certain state laws (such as child safety protections) may remain valid under federal law.
11. High-level Panel on Digital Markets Act holds fifth meeting
On December 12, the High-Level Panel on the Digital Markets Act convened its fifth meeting. During the session, experts, scholars, and consumer representatives jointly examined the panel’s potential role in harmonizing disparate regulatory frameworks for digital markets, with particular focus on the Digital Markets Act (DMA).
Members of the High-Level Panel explored potential approaches to enhance collaboration in implementing the EU’s digital regulatory framework and adopted a joint document on artificial intelligence. The document outlines the regulatory interactions concerning AI-related issues and proposes ongoing exploration of closer cross-regulatory cooperation among competent authorities in the development and deployment of AI systems.
In addition, the meeting discussed progress in the implementation of the Digital Markets Act at both public and private levels, and the High-Level Panel clarified the work of its thematic subgroups on data-related obligations, interoperability and AI to ensure a coordinated division of labor and enhanced collaboration among its members.
(II) Law Enforcement and Judicial Updates
1. U.S. Department of Justice dismantles China-linked AI chip smuggling network
On December 8, the U.S. Department of Justice (DOJ) announced that it had dismantled an artificial intelligence chip smuggling network linked to China during the “Gatekeeper” enforcement operation, and disclosed the seizure of advanced GPUs and other critical hardware worth over $50 million. Such high-end chips and computing equipment play a key role in training and deploying artificial intelligence models; their illegal circulation circumvents export controls and poses a potential threat to U.S. national security and technological security. The DOJ has filed criminal charges against the individuals and entities involved and is simultaneously advancing asset seizure and recovery procedures.
Link: https://www.justice.gov/opa/pr/us-authorities-shut-down-major-china-linked-ai-tech-smuggling-network
2. The United States allows Nvidia to sell advanced AI chips to China
According to a BBC News report on December 9, U.S. President Donald Trump announced that he would allow NVIDIA, the U.S. AI chip giant, to sell its advanced H200 AI accelerator chips to Chinese customers, marking a significant adjustment in U.S. export policy on high-performance chips to China. The measure applies to sales to customers approved by the U.S. Department of Commerce, and it is not limited to NVIDIA: it may also extend to other U.S. chipmakers such as AMD and Intel. The chip covered by this relaxation is NVIDIA’s H200, positioned for high-performance AI inference and training. Although less advanced than the company’s latest Blackwell-series chips, its performance far exceeds the previously permitted models. The U.S. government will impose a 25% revenue share on these sales in exchange for export licenses.
Link: https://www.bbc.com/news/articles/ckg9q635q6po
3. European Commission investigates Google’s use of publisher and YouTube content for AI services
On December 9, the European Commission announced an antitrust investigation into whether Google’s use of online publishers’ content and of YouTube content for its AI services violates EU competition rules. The Commission identified two key concerns. First, Google provided content from online publishers to its generative AI services on search pages without offering adequate compensation or giving publishers the option to opt out; the investigation will examine how extensively Google’s AI-powered search results and AI models rely on publishers’ content without appropriate compensation, given that publishers who refuse to cooperate risk losing access to Google Search. Second, the Commission questions whether Google used videos and other content uploaded to YouTube to train its generative AI models without compensating content creators or offering them the option to refuse. Video creators must grant Google access permissions when uploading content to YouTube, allowing the company to use the data for multiple purposes, including training AI models; meanwhile, YouTube’s policies prohibit competitors from using its content to develop rival AI models.
The investigation focuses on whether Google has placed artificial intelligence model developers in a competitive disadvantage by imposing unfair terms and conditions on online publishers and content creators, or by granting itself privileged access to such content. The European Commission contends that Google’s conduct may violate Article 102 of the Treaty on the Functioning of the European Union (TFEU) and Article 54 of the European Economic Area Agreement (EEA Agreement) regarding the abuse of a dominant market position by business operators.
Link: https://ec.europa.eu/commission/presscorner/detail/en/ip_25_2964
(III) International Cooperation Updates
1. EU and Canada hold first meeting of the Digital Partnership Council
On December 8, Canada’s Ministry of Innovation, Science and Economic Development (ISED) and the European Commission jointly released the “Joint Statement of the First Meeting of the Canada–European Union Digital Partnership Council,” formally establishing a regular, high-level coordination mechanism between the two sides in the field of digital governance.
The joint statement highlights artificial intelligence as a cornerstone of Canada-EU digital cooperation, with both sides committing to sustained collaboration in areas including trustworthy AI, data governance frameworks, cybersecurity capacity building, and digital infrastructure development. The declaration underscores their shared values of human-centered approaches and fundamental rights protection, while balancing technological innovation with robust risk management. Furthermore, the statement designates the Digital Partnership Council as a key platform for aligning positions in international multilateral forums, particularly in establishing unified policy positions on global AI regulations, technical standards, and cross-border data flows.
2. EU and Canada strengthen digital partnership in AI, digital identity and independent media
On December 8, the European Union and Canada convened in Montreal to reaffirm their shared interests and strengthen digital cooperation. The meeting took place shortly after the G7 Ministerial Conference on Industry, Digital and Technology hosted by Canada. During the meeting, both sides unveiled strategies to enhance competitiveness and digital sovereignty, while reaffirming their commitment to supporting businesses, particularly small and medium-sized enterprises, through smart regulation.
The meeting’s key agenda included: First, supporting innovation and advancing AI applications in strategic fields. Both sides signed a Memorandum of Understanding (MoU) on artificial intelligence to enhance collaboration in AI standards, regulation, technological development, and practical applications. Under the EU’s “Apply AI Strategy” and the corresponding Canadian framework, the partnership will accelerate best practices in applying AI across healthcare, manufacturing, energy, culture, science, and public services. Second, deepening trust service cooperation. To strengthen collaboration on digital credentials and trust services, the EU and Canada signed the “Memorandum of Understanding on Digital Credentials and Trust Services” during the meeting. Both sides plan to establish a joint forum to facilitate collaborative testing of digital credential technologies, promote pilot projects, and share information. Third, enhancing media independence. The two sides will explore closer cooperation to strengthen independent media and support local news reporting. Additionally, they will discuss measures to improve online information integrity, addressing challenges posed by generative AI and risks from foreign information manipulation and interference. Fourth, deepening and expanding technology cooperation. The EU and Canada are committed to jointly advancing secure international digital connectivity, including 5G and submarine cables. They will also deepen collaboration in quantum technology, semiconductors, and high-performance computing.
Link: https://ec.europa.eu/commission/presscorner/detail/en/ip_25_2974
3. The Linux Foundation and tech giants team up to establish the “Agentic AI Foundation”
On December 9, the Linux Foundation announced the establishment of the Agentic AI Foundation (AAIF), a new organization dedicated to providing a neutral, open governance platform for rapidly evolving agentic AI technologies. The foundation aims to prevent fragmentation into incompatible, company-specific systems while promoting secure, transparent, and community-driven standardization and technical collaboration. Co-founded by leading tech companies including OpenAI, Anthropic, and Block, AAIF has received support from industry leaders such as Google, Microsoft, Amazon Web Services (AWS), Bloomberg, and Cloudflare. At its launch, AAIF incorporated contributions from three key open-source projects: Anthropic donated the Model Context Protocol (MCP), a universal standard for connecting AI models with tools, data, and applications; OpenAI contributed AGENTS.md, a project-specific instruction and context framework for AI agents; and Block provided its open-source agent framework, which integrates with MCP to support agentic AI development in local environments.
4. The US and international partners issue a joint alert on pro-Russian cyberattacks against critical infrastructure
On December 9, the U.S. Cybersecurity and Infrastructure Security Agency (CISA), the Federal Bureau of Investigation (FBI), the National Security Agency (NSA), and international partners including the UK and Canada jointly issued a cybersecurity alert. The alert urged critical infrastructure operators to take immediate action to strengthen defenses against cyberattacks by pro-Russian hacker groups targeting U.S. and global critical infrastructure. The joint release highlighted that multiple hacker groups claiming to support Russia are exploiting vulnerable remote access interfaces (such as internet-exposed VNC ports) to conduct “opportunistic” attacks on energy, water, food, agriculture, and other critical operational technology (OT) systems. Attack methods include scanning exposed endpoints, gaining access through weak passwords, and attempting to disrupt industrial control devices. While less sophisticated than advanced persistent threats (APTs), these attacks still pose substantial risks to infrastructure. CISA and partner agencies also provided technical guidance and priority protection recommendations to help critical industries enhance their visibility into, and resilience against, potential threats. The alert called on all critical infrastructure owners, operators, and supply chain partners to immediately review and reinforce security measures to contain these hacker activities.
5. The European Commission held a seminar on the protection of minors in online marketplaces
On December 10, the European Commission convened a seminar on online marketplaces with representatives from multiple e-commerce platforms. The meeting aimed to discuss with participants existing and forthcoming measures to protect minors, as well as the implementation of the guidelines on the protection of minors under the Digital Services Act (DSA).
The conference centered on two key themes. First, age verification, account setup, and interface design in online marketplaces. Participants examined the age assurance requirements outlined in the Guidelines on the Protection of Minors, along with recommendations for account setup and interface design to enhance minors’ safety, and explored practical ways to implement these measures in online marketplaces. Second, commercial practices, content moderation, and reporting mechanisms in online marketplaces. The conference also delved into the potential exploitation of minors’ limited commercial knowledge and manipulative marketing tactics. Attendees shared insights on reporting mechanisms, management policies, and user support tools in online marketplaces, as well as strategies for platforms to address these issues.
6. Australia and Indonesia strengthen cyber security cooperation
On December 10, the Australian Department of Defence and the Indonesian National Armed Forces jointly hosted a cybersecurity cooperation seminar in Jakarta. The event, part of Australia’s “Indo-Pacific Endeavour 2025” framework, aimed to strengthen bilateral defense collaboration in addressing cyber threats. Held at the Australian Embassy in Jakarta, the seminar featured discussions between cybersecurity experts from both sides on current threat landscapes, defensive cybersecurity measures, and enhancing organizational cybersecurity awareness. The exchange focused on practical experiences in risk identification, incident response, and building cyber resilience. Australian representatives emphasized that this collaboration deepened mutual understanding in cyber protection and laid the groundwork for future joint capacity-building initiatives.
Link: https://www.defence.gov.au/news-events/news/2025-12-10/australia-indonesia-deepen-cyber-ties
(IV) Research Updates
1. OpenRouter releases empirical research report “State of AI”
In early December, OpenRouter, an AI model API aggregation platform, collaborated with venture capital firm Andreessen Horowitz (a16z) to release an empirical research report titled “State of AI”. Based on analysis of over 100 trillion real tokens, the study examined how Large Language Models (LLMs) are used in practice worldwide, making it the largest analysis of real-world LLM usage to date. The report highlights that since the launch of the first widely adopted reasoning model in December 2024, AI usage patterns have undergone fundamental changes: the traditional single-turn text-generation model has given way to multi-step reasoning and agent-style interactions, driving rapid growth in new applications.
The report’s key findings include: First, the rapid rise of open-source models. The adoption of open-source weight models has surged, particularly in creative role-playing and programming assistance tasks, demonstrating diverse usage trends. Second, diversified usage patterns. Creative role-playing tasks far exceed traditional productivity tasks, with coding assistance traffic skyrocketing to surpass the combined volume of many paid models, indicating user demand far exceeds product positioning expectations. Third, significant user retention phenomena. The report reveals a “Cinderella Glass Slipper Effect,” where early users’ strong fit with specific models drives long-term engagement, while new users tend to stay with models that meet their needs. Fourth, global usage trends and impacts. The report covers real data from various regions worldwide, showing performance differences between open-source and closed-source models across markets and tasks, as well as how users choose and utilize LLMs in practical scenarios.
Link: https://openrouter.ai/state-of-ai
2. RAND releases report “Manipulating Minds: Security Implications of AI-Induced Psychosis”
On December 8, RAND Corporation released a research report titled “Manipulating Minds: Security Implications of AI-Induced Psychosis,” which systematically analyzes the potential for large language models (LLMs) to induce or exacerbate delusions and psychotic symptoms during human interactions. For the first time, the report elevates this phenomenon to the level of a national security and social stability risk.
The core conclusions of the report include: First, large language models (LLMs) may amplify users’ delusional structures. Designed to provide “cooperative responses” in conversations, LLMs may inadvertently confirm, rationalize, or even systematically reinforce users’ delusional beliefs, particularly in long-term, intensive interaction scenarios. Second, while AI is not a “pathogen,” it may serve as a “trigger or amplifier.” AI itself does not create mental illnesses, but for individuals with underlying mental vulnerabilities or medical histories, it may induce, accelerate, or exacerbate psychotic symptoms. Third, this risk extends beyond public health concerns to involve national security. If AI-induced psychotic phenomena spread on a large scale, it could pose safety risks to high-risk positions such as military personnel, intelligence analysts, and critical infrastructure operators, thereby jeopardizing organizational decision-making and operational security. Fourth, the possibility of adversarial exploitation cannot be overlooked. In the future, it cannot be ruled out that state or non-state actors may deliberately use generative AI to induce, manipulate, or amplify mental abnormalities as tools for psychological warfare, influence operations, or social disruption.
Link: https://www.rand.org/pubs/research_reports/RRA4435-1.html
3. OpenAI releases “The State of Enterprise AI 2025”
On December 10, OpenAI released its “The State of Enterprise AI 2025” report, which analyzes AI adoption trends, implementation depth, and practical value in enterprises through real-world data from over 1 million corporate clients and a survey of 9,000 employees. The report highlights that enterprise AI is transitioning from experimental tools to core workflow infrastructure, with this shift already driving tangible improvements in productivity, business outcomes, and work efficiency.
The report’s key findings include: First, rapid expansion of AI adoption. Enterprise AI usage has grown significantly, with ChatGPT Enterprise seats increasing by approximately 9 times year-over-year, weekly active message volume rising by about 8 times, and API inference token usage surging by 320 times. Second, marked productivity gains. Over two-thirds of surveyed employees reported AI enhancing work efficiency or quality, saving an average of 40-60 minutes daily, particularly in technical tasks like data analysis and programming. Third, deep integration of AI into business processes. Custom GPTs and Projects usage grew by 19 times, demonstrating enterprises’ shift from basic queries to automating repetitive, multi-step workflows. Fourth, broad industry growth. All sectors expanded AI applications, with technology, healthcare, and manufacturing showing particularly strong growth. Even slower-starting industries achieved significant annual increases. Fifth, a widening gap between leaders and laggards. The report reveals that top-tier employees and companies outperform mid-level users by 6 times in AI frequency and nearly 2 times in effectiveness, highlighting how corporate competitiveness correlates with AI deployment maturity.
4. EU Scientific Advice Mechanism releases report “Artificial Intelligence in Emergency and Crisis Management”
On December 11, the Scientific Advice Mechanism (SAM) of the European Union released a special report titled “Artificial Intelligence in Emergency and Crisis Management” and proposed a series of action recommendations. Against the backdrop of artificial intelligence profoundly transforming data processing and accelerating the development of tools and applications in the emergency field, the report aims to systematically integrate existing evidence and provide the EU Emergency Response Coordination Centre (ERCC) with clear, reliable decision-making references on the capabilities, limitations, and development pathways of AI applications.
The report unfolds around five key themes: First, it defines and categorizes the scope of artificial intelligence in crisis management, explaining critical tool types to non-experts while identifying current theoretical gaps. Second, it conducts an in-depth analysis of multifaceted challenges AI faces in legal, governance, data, security, and environmental domains. Third, it systematically evaluates AI’s practical performance in core tasks like monitoring, early warning, assessment reporting, and decision support, comparing it with traditional human-driven processes. Fourth, through case studies of specific disasters, the report vividly illustrates the potential, current status, and limitations of AI tools. Fifth, based on the aforementioned analysis, the report distills actionable, evidence-based conclusions.
The Group of Chief Scientific Advisors (GCSA) of the European Union has supplemented the report with a series of recommendations. The advisory team emphasized the need to conduct technical assessments of potential risks and social acceptability before deploying artificial intelligence technologies, establish a database of existing AI tools, and systematically evaluate the feasibility of alternative solutions. Additionally, the team highlighted the importance of data standardization, reiterated the principle of keeping humans at the core of decision-making processes, and pointed out the urgent need to enhance professional training for relevant practitioners.