Why Is It Still Too Early to Regulate Artificial Intelligence?

Artificial intelligence itself should be regulated only with great care for now, since the concept is still too broad.

What should be regulated more heavily are its applications, including autonomous driving, cybersecurity, and military use.

It's way too early to regulate a fundamental technology such as artificial intelligence. If you ask any expert today what exactly should be regulated in AI, the answer would inevitably be “we don’t know”.

While the rapid progress of the technology should be seen in a positive light, it is important to exercise some caution and introduce laws that support, rather than hinder, the progress of AI technology.

Governments and organizations should be truthful about how they manage artificial intelligence in each of their processes and should not exaggerate what an algorithm can deliver.

By 2025, artificial intelligence (AI) is expected to improve our daily lives significantly by handling some of today's complex tasks with great efficiency.

What is Regulation?


Regulation consists of requirements the government imposes on private firms and individuals to achieve specific purposes. Failure to meet regulations can result in fines, orders to cease doing certain things, or, in some cases, even criminal penalties.

There are two types of regulation: economic and social.

Economic regulation refers to rules that limit who can enter a business (entry controls) and what prices they may charge (price controls). For example, taxi drivers and many professionals (lawyers, accountants, beauticians, financial advisers, etc.) must have licences in order to do business; these are examples of entry controls. As for price controls, for many years, airlines, trucking companies, and railroads were told what prices they could charge, or at least not exceed. Companies providing local telephone service are still subject to price controls in all states.

Social regulation refers to the broad category of rules governing how any business or individual carries out its activities, with a view to correcting one or more “market failures.” A classic way in which the market fails is when firms (or individuals) do not take account of the costs their activities may impose on third parties (see externalities). When this happens, the activities will be pursued too intensely or in ways that fail to stem harm to third parties. For example, left to its own devices, a manufacturing plant may spew harmful chemicals into the air and water, causing harm to its neighbours. Governments respond to this problem by setting standards for emissions or even by requiring that firms use specific technologies (such as “scrubbers” for utilities that capture noxious chemicals before steam is released into the air).

What Are the 3 Stages of Artificial Intelligence?


Artificial intelligence is the field of developing computing systems capable of performing tasks that humans are very good at, for example recognising objects, recognising and making sense of speech, and making decisions in a constrained environment.

There are 3 stages of artificial intelligence:

1. Artificial Narrow Intelligence (ANI), which has a limited range of capabilities. Examples include AlphaGo, IBM's Watson, virtual assistants like Siri, disease-mapping and prediction tools, self-driving cars, and machine learning models such as recommendation systems and deep learning translation (a minimal code sketch follows this list).

2. Artificial General Intelligence (AGI), which has attributes that are on par with human capabilities. This level hasn't been achieved yet. 

3. Artificial Super Intelligence (ASI), which has skills that surpass humans and can make them obsolete. This level hasn't been achieved yet. 
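To make "narrow" concrete, here is a minimal, purely illustrative sketch in Python (the use of scikit-learn and the iris task are assumptions of this example, not taken from any system named above): a model that learns exactly one task and can do nothing else.

```python
# Hypothetical illustration of Artificial Narrow Intelligence (ANI):
# a model trained for exactly one task (classifying iris flowers).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)  # narrow: maps flower measurements to species, nothing more
model.fit(X_train, y_train)

print("accuracy on this one narrow task:", model.score(X_test, y_test))
# The same model cannot translate text, drive a car, or play Go;
# that single-task nature is what separates ANI from AGI.
```

The gap between a single-purpose model like this and AGI, which would match human capability across tasks, is why the second and third stages remain unachieved.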

Progress of Artificial Intelligence Regulation Worldwide


The basic approach to regulation focuses on the risks and biases of AI's underlying technology, i.e., machine-learning algorithms, at the level of the input data, algorithm testing, and the decision model. It also considers whether explanations of biases in the code are understandable for prospective recipients of the technology and technically feasible for producers to convey.
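As a hedged illustration of what testing at the level of the decision model can look like, the sketch below computes one deliberately simple bias metric, the demographic-parity gap, on hypothetical model outputs; real audits use real decision data and a range of metrics, and the threshold here is an arbitrary assumption for illustration only.

```python
# Hypothetical sketch of a simple algorithmic-bias check, not a real audit tool.
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for audit data: a model's yes/no decisions and group membership.
decisions = rng.integers(0, 2, size=1000)   # 1 = approve, 0 = reject
group = rng.integers(0, 2, size=1000)       # two demographic groups, 0 and 1

rate_0 = decisions[group == 0].mean()
rate_1 = decisions[group == 1].mean()

# Demographic-parity gap: difference in approval rates between the groups.
parity_gap = abs(rate_0 - rate_1)
print(f"approval rate, group 0: {rate_0:.2f}")
print(f"approval rate, group 1: {rate_1:.2f}")
print(f"parity gap:             {parity_gap:.2f}")

# A regulator-style rule might flag any model whose gap exceeds a threshold
# (0.05 is an arbitrary placeholder); choosing the metric and the threshold
# is exactly the hard, unresolved policy question.
if parity_gap > 0.05:
    print("flag for review")
```

Conveying why such a gap arises, in terms a prospective recipient of the technology can understand, is the harder, explanation-focused half of that regulatory approach.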

In 2017, Elon Musk called for the regulation of artificial intelligence.

According to NPR, the Tesla CEO was "clearly not thrilled" to be advocating for government scrutiny that could impact his own industry, but believed the risks of going completely without oversight were too high: "Normally the way regulations are set up is when a bunch of bad things happen, there's a public outcry, and after many years a regulatory agency is set up to regulate that industry. It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilization." In response, some politicians expressed skepticism about the wisdom of regulating a technology that is still in development.

Responding both to Musk and to February 2017 proposals by European Union lawmakers to regulate AI and robotics, Intel CEO Brian Krzanich argued that AI is in its infancy and that it is too early to regulate the technology. Instead of trying to regulate the technology itself, some scholars have suggested developing common norms, including requirements for the testing and transparency of algorithms, possibly in combination with some form of warranty.


The Global Partnership on Artificial Intelligence was launched in June 2020, stating a need for AI to be developed in accordance with human rights and democratic values, to ensure public confidence and trust in the technology, as outlined in the OECD Principles on Artificial Intelligence (2019).

Regulating Artificial Intelligence Requires a Deeper Understanding


Artificial intelligence as a basic technology should not be regulated. It also seems impractical for the government to stop you from implementing a neural network on your laptop. However, there are applications of AI, for example autonomous driving, that need regulation. AI also has new implications for antitrust (the regulation of monopolies) that regulators have not yet thought through but should.
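To make that point concrete, here is a purely illustrative sketch of how little it takes to implement a neural network on a laptop using nothing but NumPy; the network, data, and hyperparameters are all assumptions chosen only to keep the example small.

```python
# A tiny two-layer neural network trained on XOR with plain NumPy.
# The point: the "basic technology" fits in ~25 lines, which is why regulating
# the technology itself (rather than its applications) is hard to even enforce.
import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)             # forward pass, hidden layer
    out = sigmoid(h @ W2 + b2)           # forward pass, output
    d_out = (out - y) * out * (1 - out)  # backprop through squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out; b2 -= d_out.sum(axis=0, keepdims=True)
    W1 -= X.T @ d_h;   b1 -= d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # typically converges toward [[0], [1], [1], [0]]
```

Anyone with a laptop can run this; the meaningful regulatory questions only appear once such models are embedded in cars, weapons, or financial systems.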

Most of the discussion about AI regulation stems from irrational fears about artificial super intelligence, rather than from a deeper understanding of what the technology can and cannot do. Because AI today is still immature and developing rapidly, heavy-handed regulation by any country will stunt that country's AI progress.

However, some AI use cases need regulation, both to protect individuals and to accelerate adoption. The automotive industry is already heavily regulated to ensure safety. Thinking through how those regulations should change in light of new AI capabilities such as autonomous driving will help the whole industry. The same goes for other areas, including pharmaceuticals, arms control, financial markets, and so on. But regulation should be industry-specific and based on a thoughtful understanding of the use cases and of the outcomes we do and don't want to see in specific sectors, rather than on the basic technology.

Governments also have a big role to play in addressing the coming job displacement caused by AI, for example by providing basic income and retraining.

The rise of artificial intelligence is creating new ways for companies to compete, become dominant, and shut out competitors. Antitrust regulators are way behind corporations in understanding this new basis of competition, and have a lot of catching up to do.

Conclusion


Regulating AI applications should not be done in a way that stifles the existing momentum in research and development.

If governments decided to regulate AI any time soon, they would not know how to do it. What's even worse, we could end up letting people with absolutely no understanding of the technology do it. Combine this with the earlier point that AI is a fundamental technology, and we have a recipe for disaster. It would be worse than having let governments regulate the Internet in the 1980s.

Leading AI researcher Geoff Hinton has stated that it is very hard to predict what advances AI will bring beyond five years, noting that exponential progress makes the uncertainty too great.

Artificial intelligence should be regulated carefully because it is a fundamental technology, and at this point we would not know what to regulate or how to get enough international support for that to happen.

Legal and regulatory frameworks typically operate around a clear sense of who is acting, what their mindset was at the time of the action and where the act takes place. These are all fine principles in theory but as soon as this approach is applied in the context of AI, it is clear that new ways of thinking are required.
