ARTIFICIAL INTELLIGENCE, UN AND GLOBAL SAFETY
The UN adopts a resolution that will guide players in the use of AI, writes Sonny Aragba-Akpore
When, on October 30, 2023, United States President Joe Biden signed an Executive Order to ensure that Artificial Intelligence (AI) is made safe and accessible to all humanity, he saw the shape of things to come.
Perhaps he wanted to forestall what could happen in the future if the human race becomes addicted to the workings of AI.
For instance, the United States is in court with Apple over what it terms the unnecessary monopoly the company enjoys through its proprietary products, including iPhones, iPads and the rest of that platform.
Deriving from the Executive Order, therefore, the US and 122 other nations sponsored a resolution to make AI safe and available for all humanity at the UN General Assembly in New York on Thursday, March 21, 2024. And the UN General Assembly endorsed it overwhelmingly.
Sponsored by the United States and co-sponsored by 123 countries, including China, the proposal was adopted by consensus with a bang of the gavel and without a vote, meaning it has the support of all 193 member nations.
U.S. Vice President Kamala Harris and National Security Advisor Jake Sullivan called the resolution “historic” for setting out principles for using artificial intelligence in a safe way.
Secretary of State Antony Blinken called it “a landmark effort and a first-of-its-kind global approach to the development and use of this powerful emerging technology.”
As the first resolution on artificial intelligence ever approved by the General Assembly, it lends support to an international effort to ensure the powerful new technology benefits all nations, respects human rights and is "safe, secure and trustworthy."
The International Telecommunication Union (ITU) has been at the forefront of promoting standards and regulations that could serve as guidelines for AI development, and the U.N. resolution has added strength to ITU positions.
“AI must be in the public interest – it must be adopted and advanced in a way that protects everyone from potential harm and ensures everyone is able to enjoy its benefits,” Harris said in a statement.
At last September’s gathering of world leaders at the General Assembly, President Biden said the United States planned to work with competitors around the world to ensure AI was harnessed “for good while protecting our citizens from this most profound risk.”
And on October 30, 2023 he signed the Executive Order that gave birth to the resolution sponsored on Thursday, March 21, 2024.
As part of the Biden-Harris Administration’s comprehensive strategy for responsible innovation, the Executive Order builds on previous actions the President has taken, including work that led to voluntary commitments from 15 leading companies to drive safe, secure, and trustworthy development of AI.
The EO requires that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. In accordance with the Defense Production Act, the Order requires companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety to notify the federal government when training the model and to share the results of all red-team safety tests. These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public.
The Order also directs agencies to develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy. The National Institute of Standards and Technology will set rigorous standards for extensive red-team testing to ensure safety before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems' threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks. Together, these are the most significant actions any government has ever taken to advance the field of AI safety.
To protect against the risks of using AI to engineer dangerous biological materials, the Order calls for strong new standards for biological synthesis screening. Agencies that fund life-science projects will make these standards a condition of federal funding, creating powerful incentives to ensure appropriate screening and to manage risks potentially made worse by AI.
It also protects Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content. The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content. Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic, and to set an example for the private sector and governments around the world.
The Order further establishes an advanced cybersecurity program to develop AI tools that find and fix vulnerabilities in critical software, building on the Biden-Harris Administration's ongoing AI Cyber Challenge. Together, these efforts will harness AI's potentially game-changing cyber capabilities to make software and networks more secure.
Finally, it orders the development of a National Security Memorandum, to be prepared by the National Security Council and White House Chief of Staff, directing further actions on AI and security. This document will ensure that the United States military and intelligence community use AI safely, ethically, and effectively in their missions, and will direct actions to counter adversaries' military use of AI.
Although the EO was specific to the USA, the adoption of the resolution by the U.N. General Assembly has carved out a position that will guide all global players in the AI firmament.
Strangely, over the past few months, the United States worked with more than 120 countries at the United Nations — including Russia, China and Cuba — to negotiate the text of the resolution adopted on Thursday, March 21, 2024.
“In a moment in which the world is seen to be agreeing on little, perhaps the most quietly radical aspect of this resolution is the wide consensus forged in the name of advancing progress,” U.S. Ambassador Linda Thomas-Greenfield told the assembly just before the vote.
“The United Nations and artificial intelligence are contemporaries, both born in the years following the Second World War,” she said. “The two have grown and evolved in parallel. Today, as the U.N. and AI finally intersect we have the opportunity and the responsibility to choose as one united global community to govern this technology rather than let it govern us.”
Shortly after the vote, representatives from the Bahamas, Japan, the Netherlands, Morocco, Singapore and the United Kingdom enthusiastically supported the resolution, joining the U.S. ambassador, who called it "a good day for the United Nations and a good day for multilateralism."
Thomas-Greenfield was quoted by agency reports as saying that she believes the world's nations came together in part because "the technology is moving so fast that people don't have a sense of what is happening and how it will impact them, particularly for countries in the developing world.
“They want to know that this technology will be available for them to take advantage of it in the future, so this resolution gives them that confidence,” Thomas-Greenfield said. “It’s just the first step. I’m not overplaying it, but it’s an important first step.”
The ITU has big plans for AI, stating that the future will see large parts of our lives influenced by artificial intelligence technology. Machines can execute repetitive tasks with complete precision, and with recent advances in AI, machines are gaining the ability to learn, improve and make calculated decisions in ways that will enable them to perform tasks previously thought to rely on human experience, creativity, and ingenuity.
The ITU believes that "AI innovation will be central to the achievement of the United Nations' Sustainable Development Goals (SDGs) by capitalizing on the unprecedented quantities of data now being generated on sentiment, behavior, human health, commerce, communications, migration and more," adding that ITU will provide a neutral platform for government, industry and academia to build a common understanding of the capabilities of emerging AI technologies and the consequent needs for technical standardization and policy guidance.
By May 29 this year, when Nigeria marks the first year of a new regime and speeches are being made at Eagle Square or somewhere else in the country, global technology leaders will converge in Geneva. But Nigeria is not likely to be on their minds, as discussions will focus on AI governance, exploring the surge in global efforts to craft AI policy, regulation, and governance frameworks.
Aragba-Akpore is a member of THISDAY Editorial Board