By Zhu Qichao
The development and application of artificial intelligence (AI) are profoundly changing how human society evolves and how people live, and they are fundamentally altering the nature of warfare. While AI empowers various industries, it has also raised concerns about the possible misuse, abuse, and indiscriminate use of AI weapons. China's Global Security Initiative Concept Paper, released in February, calls for strengthening international security governance of emerging technologies, including AI, and highlights the key directions for enhancing international security governance of military AI. China has issued position papers on the military application and ethical governance of AI, and stands ready to strengthen communication and exchanges with the international community on AI security governance, promote the establishment of an international mechanism with broad participation, and develop a governance framework and standard norms based on broad consensus.
I. Challenges in international security governance of military artificial intelligence
In February 2023, the first global Summit on Responsible Artificial Intelligence in the Military Domain, held in The Hague, adopted a "Call to Action" emphasizing the importance of the responsible development and use of military AI systems and reflecting a basic consensus in the international community on international norms for the military application of AI. Yet only slightly over 60 of the more than 80 countries represented at the summit endorsed the Call to Action. Achieving broad consensus and coordinated action in the international security governance of military AI still faces numerous challenges.
The international community remains deeply divided over rules for lethal autonomous weapon systems (LAWS). Many developing countries and non-governmental organizations, represented by the Non-Aligned Movement and the African Union, strongly advocate restricting or even banning LAWS and call for a legally binding international instrument on the matter. Some developed countries, typified by the European Union, emphasize the dual-use nature of AI and favor a non-legally binding political declaration. Some major powers take a relatively passive attitude toward arms control for LAWS and firmly oppose the establishment of an international treaty banning them.
Geopolitical confrontation significantly hinders global unity and cooperation. Great power competition and confrontation have reached their most intense level since the end of the Cold War, and some countries have been forcibly excluded from international cooperation frameworks and systems, leaving them without channels to fully participate in discussing and formulating international norms for military applications of AI. For instance, because of the Ukraine crisis, Russia was not invited to the summit on the responsible use of AI in the military domain and has even been deliberately shut out of related dialogues and negotiations.
AI is a collective achievement of human intelligence, and all of humanity should benefit from it. However, certain countries erect technological barriers that restrict cooperation and exchange in AI.
At the same time, the potential risks arising from the autonomy and opacity of AI technology demand global collaboration. Yet while some countries, such as the US, verbally profess strong support for international cooperation on the responsible use of AI, they also treat AI technology as a tool for maintaining their own dominance in geopolitical competition. They form exclusive "small circles" or "small groups" and artificially erect technological barriers in areas closely related to AI development and application, such as chips and 5G communications, obstructing international exchange in AI technology. This has become a stumbling block to international cooperation in responsible AI governance.
II. Suggestions for building an international security governance framework for military AI
Although an international treaty regulating the military application of AI remains out of reach for now, China has put forward its own ideas and plans for building a community with a shared future for mankind in the field of AI. In December 2021, China submitted the Position Paper of the People's Republic of China on Regulating Military Applications of Artificial Intelligence to the Sixth Review Conference of the Convention on Certain Conventional Weapons, advocating that all countries uphold a vision of common, comprehensive, cooperative, and sustainable security, seek consensus on regulating the military application of AI through dialogue and cooperation, and establish an effective governance mechanism. The Global Security Initiative Concept Paper reiterates that China is willing to strengthen communication and exchanges with the international community on AI security governance, promote the establishment of an international mechanism with broad participation, and develop a governance framework and standard norms based on broad consensus. The following are suggestions on how to build an international security governance framework for the military application of AI:
Forge broad consensus. From the perspective of building a global community of shared future, we should give full play to the role of the United Nations as a platform and build broad consensus among developed countries, emerging economies, and developing countries on the development, application, and security governance of AI. We should fully consider the legitimate concerns of countries with different national, ethnic, and religious backgrounds, and establish guiding principles that integrate values and ethics, security and controllability, fairness and inclusiveness, openness and tolerance, and peaceful utilization.
Define the responsibilities of participants. The stakeholders in military AI security governance include not only sovereign states and non-state actors that use AI weapons, but also the parties involved in research and development, manufacturing, and testing and evaluation, as well as industry regulators and international organizations. Given the possible consequences of the misuse and abuse of military AI, responsibilities must be differentiated among the relevant stakeholders so that all jointly shoulder the responsibilities and obligations of military AI security governance.
Define the governance content. Military AI security governance must scientifically define the objects to be governed, so that relevant laws and regulations can be applied within specific contexts. It is therefore necessary to clarify that security governance should focus on the risks, harms, and negative impacts that the military application of AI may cause. Scientific assessments should be conducted of accidents with different levels of damage, security risks of different types and severities, and negative impacts in different scenarios and fields, with reasonable standards for tracing the responsibilities of different stakeholders.
Improve governance tools. Military AI security governance requires a sophisticated set of governance tools for support, ranging from macro-level ethical guidelines and guiding principles to specific laws and regulations, technical standards, and policy mechanisms. They can also include relevant databases, prototype systems, and testing, evaluation, and certification tools. Developing these governance tools requires both the proactive action of sovereign states and the evaluation and supervision of industry associations and international organizations.
Expand communication platforms. Military AI security governance involves a wide range of factors, is constantly evolving, and even carries elements of confrontation and strategic gaming among different parties. Sovereign states, industry enterprises, and international organizations need to actively expand communication platforms, strengthen cross-field, cross-departmental, and cross-regional cooperation and exchanges, share new knowledge and experience, and resolve conflicts and differences. Countries at the forefront of military AI technology, such as China, the US, and Russia, need to strengthen communication and dialogue, explore the establishment of relevant confidence-building measures, and prevent the misuse and abuse of AI from escalating wars and conflicts or causing humanitarian disasters. In doing so, they can promote the building of a community with a shared future for mankind in the field of military AI and work with the international community to maintain global strategic stability and safeguard human peace and welfare.
The author is director of the National Defense Technology Strategy Think Tank, National University of Defense Technology.