With the increasing use and development of artificial intelligence, the question arises as to how the world can collectively decide to regulate a powerful but potentially dangerous industry. Even if individual countries introduce appropriate legislation, will it matter if other countries do not follow suit?
The mostly rational, sometimes absurd fear of artificial intelligence (AI) is fueling calls for greater regulation and even a moratorium on further development of the technology.
This is to ensure that we do not unleash forces that are beyond our control.
However, this is likely to be a difficult undertaking as the capabilities of AI are not yet fully understood and there are many competing views and opinions about its benefits and dangers.
What makes matters worse is that any legal framework can only reach as far as a regulatory authority’s jurisdiction allows. This could lead to AI being developed under looser controls elsewhere and then spreading into the broader digital ecosystem.
Demands for AI control
This has led some voices in government and business to call for global regulations on the development and use of AI. To date, there has been no serious attempt to tackle such a complex issue.
But as AI makes its way into the digital mainstream and countries begin to develop their own restrictions on the technology, there should be further impetus for a global solution.
Recently, Ursula von der Leyen, the President of the European Commission, called on the EU to take the lead in developing a global framework for AI, similar to the Intergovernmental Panel on Climate Change.
The aim is to promote the safe and responsible development of AI by bringing together the leading experts from government, business, science and other areas.
US President Joseph Biden then gave a speech at the United Nations in which he promised to work with world leaders to “ensure that we use the power of artificial intelligence for good, while protecting our citizens from this great danger.”
However, none of these politicians speak for the entire world. It therefore remains unclear how global their preferred regulatory frameworks would actually be if implemented.
The only body that can claim truly global standing is the United Nations, and its efforts to curb AI globally are only just beginning.
A proposal from the United Nations
Earlier this year, the United Nations Educational, Scientific and Cultural Organization (UNESCO) called on all countries in the world to fully implement its Recommendation on the Ethics of Artificial Intelligence, which was unanimously adopted by all member states in 2021.
The framework lists a set of values and principles to guide the development and implementation of AI.
It also provides a readiness assessment tool that lets regulators determine whether users have the skills and competencies needed to make proper use of the AI-powered resources available to them.
In addition, regulators are asked to report periodically on progress in dealing with AI in their country.
Regulations of any kind and the authorities responsible for enforcing them are often criticized – especially by those to whom they apply.
And while there are many examples of regulation run amok (mattresses with labels that read, “This label may not be removed under penalty of law”), it is fair to say that without rules for things like clean air, clean water and the safe handling of food and other goods, our world would be a much less pleasant place.
So are there any precedents to follow when it comes to the global application of AI?
A possible model is the International Civil Aviation Organization (ICAO), says Roman Perkowski of telecommunications service provider TS2.
ICAO, which was founded in 1944 and operates under the auspices of the United Nations, oversees the standards, practices and policies that enable countries to share airspace and coordinate air operations for mutual benefit.
An important aspect of its mandate is the joint development of regulations and procedures between the individual aviation authorities.
This ensures that they do not work in conflict with each other or endanger flight operations across international borders.
This is certainly a difficult task with many competing goals and perspectives. But the idea of a global clearinghouse to help coordinate the individual regulatory measures of numerous countries would be an excellent starting point for AI.
However, the question remains whether there is enough national self-interest to create a common environment for intelligent technologies, as is the case with air transport.
Another reason for the difficulties is that there is still no clear consensus on the regulation of AI.
On the one hand, no one wants AI to do things that harm the public, whether on its own or at the behest of malicious actors. On the other hand, no one wants to stifle creative development or diminish the technology’s benefits.
In a recent article on The Conversation, Stan Karanasios, an associate professor at the University of Queensland, Olga Kokshagina, an associate professor at the École des Hautes Etudes Commerciales du Nord in France, and Pauline C. Reinecke, a research fellow at the University of Hamburg, point out that leading AI developers and practitioners are calling on states to regulate this technology in a coordinated way.
This is a good sign. But if it actually came to that, would these industry titans support such measures that serve the public interest, or would they try to shape the rules to suit their own interests?
Perhaps the most significant aspect of AI that hinders any form of regulation is the speed at which it develops.
We are still a long way from the most basic regulations at the national level, let alone a global framework.
Until that happens, the technology will likely be operating in ways that are still only conceptual today; such is the nature of laws and regulations, which always trail the things they govern.
So for now we are likely to see a free-for-all in the AI industry, with the public relying on the wisdom and goodwill of scientists and business leaders to keep us safe.