Framework for AI Global Accord


Nazli Choucri

Professor
Political Science Department
Massachusetts Institute of Technology


Despite the expansion of innovations and applications of Artificial Intelligence (AI), the fact remains that the international community, public and private, has not yet developed norms, rules, or standards, nor has it reached agreement on the broad contours of shared principles and behaviors.

An initial step toward global accord was prepared by the Boston Global Forum—at the invitation of the United Nations Academic Impact and associated with the UN Centennial—and presented in Remaking the World Toward an Age of Global Enlightenment (Tuan, 2021).

Below are selections from Choucri (2021), entitled "Framework for an AI International Accord."

Emergent Global Challenges

Advances in information and communication technologies—global Internet, social media, Internet of Things, and a range of related science-driven innovations and generative and emergent technologies—continue to shape a dynamic communication and information ecosystem for which there is no precedent.

These advances are powerful in many ways. Foremost among these in terms of salience, ubiquity, pervasiveness, and expansion in scale and scope is the broad area of AI. These advances have created a new global ecology, yet the ecology remains opaque and must be better understood—an ecology of “knowns” that is evolving in ways that remain largely “unknown.” Especially compelling is the acceleration of AI—in all its forms—with far-ranging applications for shaping a new global ecosystem.

Call for Accord on Artificial Intelligence

The world of AI today is framed by a set of unknowns—known unknowns and unknown unknowns—where technological innovations interact with the potential for the total loss of human control. Especially elusive is the management of embedded insecurities in applications of this technology and the imperatives of safety and sustainability.

Without adequate guidelines and useful directives, the undisciplined use of AI poses risks to the well-being of individuals and creates fertile ground for economic, political, social, and criminal exploitation. The international community recognizes the challenges and opportunities, as well as the problems and perils, associated with AI.

Many countries have already announced national strategies to promote the proper use and development of AI. At the operational level, however, there are as yet no authoritative modes and methods for reviewing and regulating algorithms. This is yet another "open" space, in the full sense of the word.

At the core of this imperative is the need to establish a common understanding of policy and practice, anchored in general principles that help maximize the "good" and minimize the "bad" associated with AI. Given such ambiguities and uncertainties, it is not surprising that the international community has not yet fully grasped the implications of the new "unknowns" and the potential threats to the global order.

There is a long tradition of consensus-based social order founded on cohesion and agreement rather than on the use of force or formal regulation and legislation. Such consensus is often a necessary precursor for managing change and responding to societal needs. The foundational logic addresses four premises: What, Why, Who, and How.

What

An international agreement on AI is about supporting a course of action that is inclusive and equitable. It is designed to focus on relationships among people, governments, and other key entities in society.

Why

To articulate prevailing concerns and identify points of convergence, and to frame fair and equitable ways of addressing and managing potential threats.

Who

In today’s world, framing an international accord for AI must be inclusive of:

  • Individuals as citizens and members of a community
  • Governments that execute the goals of their citizens
  • Corporate and private entities with business rights and responsibilities
  • Civil society that transcends the above
  • Innovators of AI and related technologies
  • Analysts of ethics and responsibility

None of the above can be left out. Each of these constitutes a distinct center of power and influence, and each has rights and responsibilities.

The starting point for implementation consists of four basic principles that provide solid anchors for the Artificial Intelligence International Accord.


How

  • Fairness and Justice for All: The first principle is already agreed upon in the international community as a powerful aspiration. It is the expectation that all entities, private and public, treat others with fairness and justice and are treated the same way.
  • Responsibility and Accountability for Policy and Decisions, Private and Public: The second principle recognizes the power of the new global ecology that will increasingly span all entities worldwide, private and public, developing and developed.
  • Precautionary Principle for Innovations and Applications: The third principle is well established internationally. It does not impede innovation but supports it. It does not push for regulation but supports initiatives to explore the unknown with care and caution.
  • Ethics-in-AI: Fourth is the principle of ethical integrity, for the present and the future. Different cultures and countries may have different ethical systems, but everyone, everywhere recognizes and adopts some basic ethical precepts. At issue is incorporating the commonalities into a global ethical system for all phases, innovations, and manifestations of artificial intelligence.

Jointly, these four features (What, Why, Who, and How) create powerful foundations for framing and implementing an emergent Artificial Intelligence International Accord.

Artificial Intelligence International Accord

The Artificial Intelligence International Accord (AIIA) Draft Framework recognizes pathbreaking initiatives, notably the Convention on Cybercrime and the EU General Data Protection Regulation (GDPR), that signal specific policies to protect the integrity of information and the values that support this integrity.

In addition, the AIIA recognizes the ongoing deliberations in the European Union regarding the future of AI and the best means of supporting EU objectives, as well as those of member states. The Draft Framework also acknowledges the deliberations of the United States National Security Commission on Artificial Intelligence and its Final Report (Schmidt et al., 2021).

Consistent with the legal principle of a rules-based international community, the Draft Framework consists of several initial procedural and operational strategies, as follows.

Framework Design

Consistent with the principles and provisions of the Convention on Cybercrime and the EU General Data Protection Regulation (GDPR), and respecting the Social Contract for the AI Age, the AIIA Draft Framework is conceived and designed as:

  • A multi-stakeholder, consensus-based international agreement to establish common policy and practice in development, use, implementation and applications of AI.
  • Anchored in the balance of influence and responsibility among governments, businesses, civil society, individuals, and other entities.
  • Respectful of national authority and international commitments with required assurances of rights and responsibilities for all participants and decision-entities.
  • To consolidate the design into a formal International Accord, it is essential to
    • Review the structure and content of legal frameworks for AI at various levels of aggregation to identify elements essential for an international AI legal framework;
    • Recognize methods to prevent abuses by governments and businesses in uses of AI, Data, Digital Technology, and Cyberspace (including attacks on companies, organizations, and individuals, and other venues of the Internet);
    • Consolidate working norms to manage all aspects of AI innovations; and
    • Construct and enable response-systems for violations of rights and responsibilities associated with the development, design, applications, or implementation of AI.

Support System for the AIIA Framework

Based on the internationally recognized Precautionary Principle, the support system for the AIIA Framework is expected to facilitate and formalize the Framework and its implementation. It includes the following products and processes:

  • Code of Ethics for AI Developers and AI Users.
  • Operational systems to monitor AI performance by governments, companies, and individuals.
  • Certification for AI Assistants to enable compliance with new standards.
  • Establishment of a multidisciplinary scientific committee to provide independent review and assessment of innovations in AI and directives for safe and secure application, consistent with human rights and other obligations.
  • Enabling a Social Contract for the AI Age, to be supported by the United Nations, governments, companies, civil society, and the international community.
  • Consolidation of a World Alliance for Digital Governance as the global authority to enforce the emergent accord.

End Note: Challenges, Opportunities and Next Steps

The End Note addresses briefly some salient challenges, followed by highlights of opportunities, and concludes with a brief word of caution.

The Challenges

Technology and innovation are growing much faster than regulatory frameworks anywhere, and most certainly at the international level. Of course, we do not want regulations to change at the pace of technological change; that would create chaos, and it is easy to imagine why and how.

We can expect innovations in AI to grow much faster than has been the case so far, due in large part to new generations being educated in AI early on. We tend to think that the key players in the AI arena are companies, governments, and academic researchers, but we are overlooking youth as the growth asset that will buttress both society and AI in the decades to come. It is foolhardy to ignore what are likely to be the real challenges, namely the scale and scope of (a) unknowns, (b) unknown "unknowns," and (c) their intended and unintended consequences, individually and collectively.

The Opportunities

Among the major opportunities before us is to inquire: What is the best precedent? Is it nuclear power? Is it climate change? What are other high-risk areas? Usually, we respond to such questions long after the fact. But can we avoid this delay? At this point, we have an opportunity to seriously consider the properties of a global accord in AI before we are faced with a major disaster.

Of high value, for example, is to consider and address the role of ethics in courses on innovations in AI, as well as ethics for all uses and users. So, too, it is important to focus on international law relevant to AI. There are many other high-value issues to consider at this point. The reason is this: The lines of political contention are not yet clearly drawn among potentially conflicting perspectives (or countries).

Therefore, now is the opportunity to proceed before these are consolidated into “lines in the sand.”

References:

Choucri, N. (2021). Framework for an artificial intelligence international accord. In N. A. Tuan (Ed.), Remaking the world: Toward an age of global enlightenment (pp. 27–44). Boston Global Forum, United Nations Academic Impact. http://hdl.handle.net/1721.1/141737

Tuan, N. A. (Ed.). (2021). Remaking the world: Toward an age of global enlightenment. Boston Global Forum, United Nations Academic Impact. https://bostonglobalforum.org/publications/the-age-of-global-enlightenment/

Schmidt, E., Work, R., Catz, S., Horvitz, E., Chien, S., Jassy, A., Clyburn, M., Louie, G., Darby, C., Mark, W., Ford, K., Matheny, J., Griffiths, J.-M., McFarland, K., & Moore, A. (2021). Final report. National Security Commission on Artificial Intelligence. https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf

European Parliament & Council of the European Union. (2016). Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). Official Journal of the European Union. http://data.europa.eu/eli/reg/2016/679/2016-05-04

Council of Europe. (2001). Convention on Cybercrime (ETS No. 185). Council of Europe. https://www.coe.int/en/web/conventions/full-list?module=treaty-detail&treatynum=185
