Stability vs rights: How the US and China approach AI regulation

In early September 2023, 11 Chinese companies received clearance from the Cyberspace Administration of China (CAC) to release their mass-market large language models (LLMs). None of these currently matches the reasoning, articulation and problem-solving capabilities of leading Western models, notably OpenAI's ChatGPT and the more capable GPT-4 (see CHINA: Chinese LLMs advance within strict guardrails – Oxford Analytica Daily Brief, September 5, 2023).

Even so, the global proliferation of generative artificial intelligence (GenAI) tools has accelerated debates about regulation, as policymakers, businesses and societies grapple with the technology's manifold consequences and potential large-scale risks. At the same time, international collaboration on AI seems unlikely amid current geopolitical rivalries, particularly between the United States and China. In the worst-case scenario, great-power competition could incentivise the development of potentially dangerous AI systems in pursuit of strategic advantage.

This piece focuses on the risks of AI and on how the United States and China are each approaching policy and regulation to contain potential threats. Both share similar concerns about the rise of AI, but they identify and respond to those concerns differently: China prioritises political stability, economic progress and the management of social concerns, while the United States emphasises the protection of fundamental rights and the maintenance of the market economy.

These divergent approaches will shape not only how AI regulation develops, but also how AI itself evolves and is used across regions in the years ahead.