3 Reasons Why Governments Need to Regulate Artificial Intelligence

3 Reasons Why Governments Need to Regulate Artificial Intelligence

Naveen Joshi 18/01/2021 4
3 Reasons Why Governments Need to Regulate Artificial Intelligence

Artificial Intelligence (AI) research, although far from reaching its pinnacle, is already giving us glimpses of what a future dominated by this technology can look like.

While the rapid progress of the technology should be seen with a positive lens, it is important to exercise some caution and introduce worldwide regulations for the development and use of AI technology.

The constant research in the field of technology, in addition to giving rise to increasingly powerful applications, is also increasing the accessibility to these applications. It is making it easier for more and more people, as well as organizations, to use and develop these technologies. While the democratization of technology that is transpiring across the world is a welcome change, it cannot be said for all technological applications that are being developed.

The usage of certain technologies should be regulated, or at the very least monitored, to prevent the misuse or abuse of the technology towards harmful ends. For instance, nuclear research and development, despite being highly beneficial to everyone, is highly regulated across the world. That’s because, nuclear technology, in addition to being useful for constructive purposes like power generation, can also be used for causing destruction in the form of nuclear bombs. To prevent, international bodies have restricted nuclear research only to the entities that can keep the technology secure and under control. Similarly, the need for regulating AI research and applications is also becoming increasingly obvious. Read on to know why.

1. Artificial Intelligence is a Double-Edged Sword

AI research, in recent years, has resulted in numerous applications and capabilities that used to be, not long ago, reserved for the realm of futuristic fiction. Today, it is not uncommon to come across machines that can perform specific logical and computational tasks better than humans. They can perform feats such as understanding what we speak or write using natural language processing, detecting illnesses using deep neural networks, and playing games involving logic and intuition better than us. Such applications, if made available to the general public and businesses worldwide, can undoubtedly make a positive impact in the world.

For instance, AI can predict the outcome of different decisions made by businesses and individuals and suggest the optimal course of action in any situation. This will minimize the risks involved in any endeavor and maximize the likelihood of achieving the most desirable outcomes. They can help businesses become more efficient by automating routine tasks and preserve human health and safety by undertaking tasks that involve high stress and hazard. They can also save lives by detecting diseases much earlier than can be diagnosed by human doctors. Thus, any progress made in the field of AI will result in an improvement in the overall standard of human life. However, it is important to realize that, like any other form of technology, AI is a double-edged sword. AI has a dark side, too. If highly advanced and complex AI systems are left uncontrolled and unsupervised, they stand the risk of deviating from desirable behavior and perform tasks in unethical ways.

There have been many instances where AI systems tried to fool its human developers by “cheating” in the way they performed tasks they were programmed to do. For example, an AI tasked with generating virtual maps from real aerial images cheated in the way it performed its task by hiding data from its developers. This was caused by the fact that the developers used the wrong metric to evaluate the AI’s performance, causing the AI to cheat to maximize the target metric. While it’ll be a long time before we have sentient AI that can potentially contemplate a coup against humanity, we already have AI systems that can cause a lot of harm by acting in ways not intended by the developers. In short, we are currently at more risk of AI doing things wrong than them doing the wrong things.

2. Artificial Intelligence Ethics is Not Enough

To prevent AI from doing things wrong (or doing the wrong things), it is important for the developers to exercise more caution and care while creating these systems. And the way the AI community is trying to achieve this currently is by having a generally accepted set of ethics and guidelines surrounding the ethical development and use of AI. Or, in some cases, ethical use of AI is being inspired by the collective activism of individuals in the tech community. For instance, Google recently pledged to not use AI for military applications after its employees openly opposed the notion. While such movements do help in mitigating AI-induced risks and regulating AI development to a certain extent, it is not a given that every group involved in developing AI technology will comply with such activism.

AI research is being performed in every corner of the world, often in silos for competitive reasons. Thus, there is no way to know what goes on in each of these places, let alone stopping them from doing anything unethical. Also, while most developers try and create AI systems and test them rigorously to prevent any mishaps, they may often compromise such aspects while focusing on performance and on-time delivery of projects. This may lead to them creating AI systems that are not fully tested for safety and compliance. Even small issues can have devastating ramifications based on the application. Thus, it is necessary to institutionalize AI ethics into law, which will make regulating AI and its impact easier for governments and international bodies.

3. Artificial Intelligence Safety Can Only be Achieved with More Regulation

Legally regulating AI can ensure that AI safety becomes an inherent part of any future AI development initiative. This means that every new AI, regardless of its simplicity or complexity, will go through a process of development that immanently focus on minimizing non-compliance and chances of failure. To ensure AI safety, the regulators must consider a few must-have tenets as a part of the legislation. These tenets should include:

  • the non-weaponization of AI technology, and
  • the liability of AI owners, developers, or manufacturers for the actions of their AI systems.

Any international agency or government body that sets about regulating AI through legislation should consult with experts in the field of artificial intelligence, ethics and moral sciences, and law and justice. Doing so helps in eliminating any political or personal agenda, biases, and misconceptions while framing the rules for regulating AI research and application. And once framed these regulations should be upheld and enforced strictly. This will ensure that only the applications that comply with the highest of the safety standards are adopted for mainstream use.

While regulating AI is necessary, it should not be done in a way that stifles the existing momentum in AI research and development. Thus, the challenge will be to strike a balance between allowing enough freedom to developers to ensure the continued growth of AI research and bringing in more accountability for the makers of AI. While too much regulation can prove to be the enemy of progress, no regulation at all can lead to the propagation of AI systems that can not only halt progress but can potentially lead to destruction and global decline.

Share this article

Leave your comments

Post comment as a guest

0
terms and condition.
  • Emma Underwood

    More regulation could stifle AI

  • Julie Mackenzie

    Regulating AI will save jobs and protect people from disasters.

  • Rob Holyhead

    Excellent info

  • David Martinez

    AI is a double edge sword with infinite potential !!!

Share this article

Naveen Joshi

Tech Expert

Naveen is the Founder and CEO of Allerin, a software solutions provider that delivers innovative and agile solutions that enable to automate, inspire and impress. He is a seasoned professional with more than 20 years of experience, with extensive experience in customizing open source products for cost optimizations of large scale IT deployment. He is currently working on Internet of Things solutions with Big Data Analytics. Naveen completed his programming qualifications in various Indian institutes.

   
Save
Cookies user prefences
We use cookies to ensure you to get the best experience on our website. If you decline the use of cookies, this website may not function as expected.
Accept all
Decline all
Read more
Analytics
Tools used to analyze the data to measure the effectiveness of a website and to understand how it works.
Google Analytics
Accept
Decline