

Four Questions on AI Global Governance Following the G7 Hiroshima Summit


Leaders from the Group of Seven (G7) nations—the United States, United Kingdom, Canada, France, Germany, Italy, and Japan—convened in Hiroshima for a summit from May 19-21 to discuss potential multilateral solutions to global issues. At this year’s G7 summit, leaders discussed topics ranging from the war in Ukraine to clean energy economies and from global life expectancy to generative Artificial Intelligence (AI)—a type of AI that can generate text, images, audio, and video.   

After the summit, leaders expressed intent to develop standards and regulations for AI governance in line with the nations’ shared democratic values. It remains unclear at this time what exactly those standards would entail, but G7 leaders left the summit tasking relevant ministers to establish the Hiroshima AI process for discussions on generative AI by the end of 2023.

We asked SIS and Kogod School of Business professor Derrick Cogburn a few questions about the areas world leaders should consider when designing a framework for regulating AI and the importance of discussions on generative AI at the multilateral level. Cogburn is the co-faculty director of the AU Internet Governance Lab, serves as a faculty coordinator of the AU Strategic Research Initiative on Inclusive Technology Policy, and is a member of the Global Academic Network of the Center for AI and Digital Policy. Cogburn also teaches a graduate course at SIS called Big Data Analytics and Text Mining.

Note: American University is hosting the North American School of Internet Governance on June 10-11. The theme of NASIG 2023 is inclusive internet governance.

This year's Group of Seven summit featured a discussion among world leaders about artificial intelligence. After the summit, leaders of the G7 countries are now calling for the development of standards for AI. In your view, what specific areas should leaders address when designing this framework?
The outcome document of the 2023 G7 Summit indicates a wide-ranging discussion. These global leaders addressed numerous important digital issues, including key global governance challenges, the ongoing digital divide, inclusive education, and global internet governance.
While it is critical for the G7 to move in this direction, it is already behind some of its individual members and partners, including the US and especially the European Union. The G7 framework will have to take these national and regional efforts into consideration as it develops its own strategy for regulating artificial intelligence. The issues related to AI are wide-ranging, each with its own challenges and opportunities: they range from autonomous self-driving cars, to automated decision making on job applications, to generative AI creating art, music, and textual responses to prompts that seem eerily lifelike (think ChatGPT).
This latter concept is very new and poses both a very real threat and a tremendous opportunity (hence the need for regulation, not banning). The "GPT" in ChatGPT stands for Generative Pre-trained Transformer. GPT was enabled by the "transformer" architecture introduced in a 2017 Google research paper; transformers allow neural networks and deep learning algorithms to maintain "attention," or focus, on the specific dimensions of a query through many layers of the network as it works through its large language model (which has been trained on an unprecedented amount of textual data, the specifics of which have not been revealed to the public). When a user enters a query, the model first interprets it and then generates a response by testing the query against its pre-trained language model, essentially predicting the most likely appropriate response based on the data in the model. Because the LLM is trained on so much data, it can produce widely varying responses to a query, including responses that are completely false but sound very authoritative (called hallucinations). The pre-trained model is also subject to the wide range of biases and disinformation contained in the underlying textual data on which it was trained. Think of all the websites, blog posts, reports, television, and meeting transcripts that are publicly available and could be feeding this model.
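To make those mechanics concrete, here is a minimal, self-contained sketch of the two ideas just described: scaled dot-product attention and predicting the most likely next token. It is purely illustrative, not the actual GPT implementation; the toy vocabulary, random weights, and function names are hypothetical stand-ins.
== (Illustrative code sketch, Python)
# Toy illustration only: a miniature version of attention plus
# next-token prediction. Real GPT models stack many learned layers;
# the vocabulary and weights here are random, hypothetical stand-ins.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Each token scores every other token, letting the layer keep
    # "attention" on the parts of the input that matter most.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

rng = np.random.default_rng(0)
vocab = ["the", "model", "predicts", "the", "next", "token"]
d = 8                                   # embedding dimension
X = rng.normal(size=(len(vocab), d))    # stand-in token embeddings

context = attention(X, X, X)            # self-attention over the prompt
logits = context[-1] @ rng.normal(size=(d, len(vocab)))
probs = softmax(logits)                 # distribution over the vocabulary
print("most likely next token:", vocab[int(np.argmax(probs))])
==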
On the positive side, the “assistive” nature of these tools is immense and very real. For example, here is a response to your question (prompt) using ChatGPT. It took about four seconds to “think” and write this response:
== (AI Generated Content, from ChatGPT, 7:20 a.m. 23 May 2023)
When designing a framework for AI standards, leaders of the G7 countries should address several specific areas to ensure the responsible development and deployment of artificial intelligence. Here are some key considerations:
Ethical and Legal Frameworks: Leaders should focus on developing ethical guidelines and legal frameworks to govern AI systems. This includes addressing issues such as transparency, accountability, privacy, and security. Ethical considerations should encompass fairness, avoiding biases, and preventing discrimination in AI algorithms and decision-making processes.
Data Governance: Proper data governance is crucial for AI systems. Leaders should emphasize the responsible collection, storage, and use of data, ensuring compliance with relevant privacy regulations. It is important to strike a balance between data accessibility and protection, taking into account issues of consent, data ownership, and data sharing.
Safety and Security: Leaders should prioritize the safety and security of AI systems. This involves robust cybersecurity measures to protect against potential threats and vulnerabilities. Additionally, guidelines for ensuring the safety and reliability of AI applications should be established, particularly in critical areas such as autonomous vehicles, healthcare, and defense.
Bias and Fairness: Addressing bias in AI algorithms and systems is essential. Leaders should encourage the development of methods to detect and mitigate biases in training data, as well as the establishment of guidelines to ensure fairness and inclusivity in AI technologies. This includes promoting diversity and inclusivity in AI research and development teams.
Accountability and Liability: Clear guidelines should be set to determine accountability and liability when AI systems cause harm or make incorrect decisions. Leaders should consider issues of responsibility between developers, users, and AI systems themselves, and establish mechanisms to address potential legal and ethical challenges arising from AI deployment.
International Collaboration: Collaboration and information sharing among nations are essential for developing consistent AI standards. Leaders should promote international cooperation in AI research, development, and regulation, fostering partnerships that allow for the exchange of knowledge, best practices, and experiences.
Education and Workforce Impact: As AI continues to advance, leaders should focus on education and training programs to equip individuals with the skills needed to adapt to the changing job market. Leaders should invest in reskilling and upskilling initiatives to ensure that the workforce can thrive in the age of AI.
Socioeconomic Impacts: Leaders should address the socioeconomic impacts of AI, considering issues such as job displacement, income inequality, and the digital divide. Strategies should be developed to mitigate any negative effects on vulnerable communities and ensure equitable access to AI technologies.
By addressing these areas, leaders can lay the groundwork for a responsible and inclusive AI framework that promotes innovation while safeguarding societal values and well-being.
==
With these kinds of possibilities, we know generative AI will not be a simple fad but will continue to grow in terms of its potential and its challenges. 
In a statement, G7 leaders recognized "the governance of the digital economy should continue to be updated in line with our shared democratic values" and vowed to work with tech companies to drive "responsible innovation" that prioritizes security and safety, particularly for children. Given the rapid ascension of AI technologies, what could regulation look like? 
We know that the competitive pressures facing these companies mean they will not regulate themselves, even though they realize there should be guardrails in place. There are some good foundations in the Biden Administration's Blueprint for an AI Bill of Rights.
In the US, the Center for AI and Digital Policy helped lead a complaint to the Federal Trade Commission (FTC), essentially arguing that given the danger of generative AI, the FTC should impose a moratorium on commercially available generative AI until key guardrails can be put in place. These guardrails will be incredibly difficult to develop and enforce. Some proposals have drawn on the ideas in the Biden Blueprint, while others include ideas such as requiring AI-generated content to carry a digital watermark that can be automatically detected (a toy sketch of that idea follows).
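Purely as an illustration of how automatic detection might work, here is a toy sketch loosely inspired by proposed "green list" text-watermarking schemes; the hashing rule and function names are hypothetical simplifications, not any company's actual method.
== (Illustrative code sketch, Python)
# Hypothetical simplification of "green list" watermarking: a
# generator would bias its word choices toward tokens whose hash is
# "green"; a detector then flags text whose green-token fraction is
# improbably higher than the ~50% expected from unwatermarked text.
import hashlib

def is_green(token: str) -> bool:
    # Pseudorandom split of the vocabulary via a stable hash.
    return hashlib.sha256(token.encode()).digest()[0] % 2 == 0

def green_fraction(text: str) -> float:
    tokens = text.split()
    return sum(is_green(t) for t in tokens) / max(len(tokens), 1)

# A real detector would run a statistical test against the expected
# baseline; here we simply report the fraction.
print(green_fraction("example output text to be scored"))
==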
Another key area is privacy and copyright. These large language models are trained on huge corpora of publicly available textual and visual data: data that was generated by other human beings but is now training these models to respond "like" those human beings. The models can sound like particular singers, create paintings like particular artists, generate textual responses like particular writers, and write code like particular programmers. All of this content may have been provided for free, but the AI companies are now, in some cases, charging for access to models and outputs built on that work. Granted, there is a "sweat of the brow" accommodation here, but where is the line?
Are there nations that have already successfully developed a framework to address safety and regulatory concerns surrounding AI, and if so, what can G7 leaders learn from these nations?
The European Union is at the forefront of establishing legislation on AI; its proposed AI Act is likely to become the world's first comprehensive AI law. Most other countries are moving much more slowly, and that pace may allow AI technology to continue to outpace their legislation.
Leaders also left the summit tasking "relevant ministers" to establish the Hiroshima AI process through a G7 working group that will discuss issues surrounding generative AI by the end of this year. In a statement, leaders said these discussions would include topics like governance, copyright, responsible utilization of AI, and disinformation. What is the importance of this discussion globally as new AI software becomes available?
Yes, these are the kinds of issues that need to be addressed, but the pace of addressing them is much too slow, and the dangers continue to grow. For example, a recent US Senate hearing on the oversight of AI opened with a completely fake, AI-generated audio introduction mimicking the voice of the chair, Senator Richard Blumenthal (D-CT).
The tremendous danger posed by generative AI to foster disinformation and confusion is matched by its incredible potential to assist citizens and level the playing field in terms of access to the world's knowledge.