Commentary

The U.S. and China should have an open-minded AI discussion, not a security-focused one

June 14, 2024

COMMENTARY BY:

Ao Gu

Research Assistant Intern
Trade ‘n Technology Program

Cover Image Source: Getty Images, Royalty-Free

The recent U.S.-China AI dialogue in Geneva in early May 2024 illustrates the difficulty of constructing an AI governance structure. In short, the United States wants to ensure an AI governance system that is “secure, safe, trustworthy,” while China expects it to adhere to the principle of “AI for good” and to expand the United Nations’ role in global AI governance. These differing priorities suggest a scenario in which global AI governance built on cooperation between the two major powers remains, for the most part, out of reach. To move this critical bilateral cooperation forward, the two countries should therefore devote more effort to promoting AI’s role in providing public goods for developing countries, an agenda that gives them the opportunity to hold “candid and constructive” conversations on AI governance.

AI’s role in promoting developing countries’ access to public goods is significant and requires an effective and fair AI governance structure. For instance, according to the World Economic Forum’s report Shaping the Future of Learning: The Role of AI in Education 4.0, generative AI can automate and augment around 20% of educators’ clerical tasks, reduce administrative burdens, and free up more time for teachers to focus on personalization, improving pedagogy, and supporting students’ socio-emotional needs. This application is critical in poverty-stricken regions that lack educators and need more skilled workers. Another relevant sector is medical research, where cutting-edge AI models can give scientists robust protein structure predictions that help develop antibodies or treatments for cancer, HIV, or genetic diseases. For instance, AI models could have assisted in more rapidly developing antibodies during the recent coronavirus pandemic. At the same time, AI’s impact on the economy is significant: many companies, such as Google, Tesla, Bosch, BMW, and Tencent, have adopted AI in their manufacturing and programming processes, many of which take place in underdeveloped regions. Neither the U.S. nor China alone has adequate resources and expertise to accomplish these tasks. A recent OECD report shows that the U.S. leads in private investment in AI development while China leads in the number of young AI talents. If the two countries can establish an authentic and consistent framework for global AI cooperation, their combined efforts will make significant contributions toward introducing AI technology to developing countries.

Moreover, unlike advanced economies, developing countries have difficulty building AI policies and digital infrastructure on their own and need external support to do so. Since AI technology might cause social tensions such as large-scale unemployment, governments need the knowledge and associated infrastructure to compensate for the disruption that AI’s integration into the economy and society could cause. According to the IMF’s analysis of AI’s impact on global labor markets, 40% of global employment is potentially exposed to AI. In advanced economies, about 60% of jobs may be impacted by AI, yet half of them could benefit from AI adoption. In developing countries, although the share of employment exposed to AI is lower than in advanced economies, these countries lack the range of infrastructure and technology needed to harness AI’s benefits. A multilateral AI framework, such as one led by the UN or the World Bank and assisted by Washington and Beijing, could provide policy suggestions for developing countries to better integrate AI technology into their economies and societies. Indeed, given its ongoing efforts to protect American workers from the risks of artificial intelligence, the United States can play a critical role in helping developing countries establish a resilient policy toolbox against the likely unprecedented social disruption caused by AI. China, as the first country to build out domestic AI regulations and associated monitoring agencies, can likewise provide invaluable lessons for developing countries.

However, one significant obstacle to cooperation on global AI governance remains: the sensitivities between the U.S. and China over the security implications of AI applications. First, during their inaugural bilateral AI dialogue in May 2024 in Geneva, the U.S. raised its concern about China’s “misuse” of AI in the military sector. Legislative activity in the U.S. Congress, such as the House Foreign Affairs Committee’s new AI bill submitted on May 22, would impose new export controls on AI models and further narrows the space for discussing U.S.-China cooperation on AI governance. Second, reaching a near-term consensus on unimpeded cross-border data flows is difficult because both the U.S. and China have similar security concerns about data privacy and the sensitivity of biometric information. Fortunately, the Cyberspace Administration of China’s new regulation allowing certain levels of transnational data flow offers a potential pathway for the two countries to advance this critical agenda for global AI governance.

Clarifying misconceptions about each other’s intentions in using AI is important to resolving obstacles around AI security. According to open sources describing the Track 2 dialogue in Thailand, Lt. Gen. Jack Shanahan, a retired Air Force pilot and the first director of the Defense Department’s Joint AI Center, said that “it took us a while to understand, with the Chinese, that they look at lethal autonomous weapons systems—in their definition—just differently than we did.” While admittedly taxing, such constructive bilateral communication can help the two countries avoid disputes stemming from misperceptions of each other’s definitions of AI and its applications. Keeping such channels open, particularly with a focus on soft-landing agendas such as AI in education and medical research, thus lays the foundation for a consistent AI dialogue between the two governments.

Promoting global AI governance is a necessary step not only for halting the continued pessimism surrounding U.S.-China exchanges on AI but also for encouraging a consensus-based dialogue among all AI users and developers. Reaching this goal will admittedly be challenging. Some critics express concerns about the militarization of AI applications, invoking “AI warfare,” “AI’s ethical dilemma,” and a “terminator scenario.” However, many technologies have faced a period of moral hazard following their creation, such as nuclear weapons in the mid-to-late 20th century; such a situation is not unique to AI. This pessimistic view of militarized AI adoption derives from a technological determinism which holds that AI will inevitably be used as a weapon against countries’ survival, triggering a new round of AI-linked arms racing. Instead of viewing AI development through this pessimistic lens, a technological indeterminism that emphasizes the intention behind a technology’s use is far better suited to the current stage of AI development. As long as states and major stakeholders share a consensus on using AI to provide public goods and generate economic prosperity, multilateral AI governance can begin to facilitate an inclusive and harmonious global AI ecosystem for the global good.