Eschew Your Quest for Cybersecurity Perfection with Worst-Case AI Safety

Naveen Joshi 27/12/2022

Worst-case AI safety, closely related to the concept of failsafe AI, aims to reduce “s-risks,” or risks of human suffering. Such risks can arise in cybersecurity as well.

AI grows more capable by the day. Machine learning is advancing toward Artificial General Intelligence (AGI) that is human-like: the point at which AI will be able to "think" while performing actions and making decisions much as a human being would. Beyond the autonomous decision-making and action that existing AI-based tools already offer, AGI is also expected to exhibit distinctively human traits such as empathy and solidarity. Despite these positives, AI can also cause harm to human beings. Such AI-generated harms are classified as “s-risks,” or “suffering-related risks.” The concept of s-risks is still somewhat vague and speculative. Worst-case AI safety is a machine learning safety concept that seeks to reduce the threat of “particularly bad outcomes” among s-risks. Hypothetically, certain cybersecurity risks may also fall into this category, with the potential to hurt humanity in several ways.

Generally, AI-based cybersecurity tools are built to pursue near-perfect data security. Worst-case AI safety inverts that priority: the impact of the most severe AI-generated s-problems must be minimized first, eschewing perfection and accepting some losses along the way.
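To make that inversion concrete, the short Python sketch below contrasts average-case policy selection with worst-case (minimax) selection. The policies, threat scenarios and loss numbers are hypothetical assumptions invented for illustration, not data from any real security tool.

```python
# Hypothetical loss ("suffering") each defense policy incurs in each
# threat scenario. Policy names, scenarios and numbers are invented
# purely for illustration.
losses = {
    "policy_a": {"phishing": 1, "ransomware": 2, "rogue_ai": 9},
    "policy_b": {"phishing": 4, "ransomware": 4, "rogue_ai": 5},
}

def average_loss(policy_losses):
    """Expected loss, assuming all scenarios are equally likely."""
    return sum(policy_losses.values()) / len(policy_losses)

def worst_loss(policy_losses):
    """Loss in the single most damaging scenario."""
    return max(policy_losses.values())

# An average-case optimizer picks policy_a (mean loss 4.0 vs. ~4.33)...
best_average = min(losses, key=lambda p: average_loss(losses[p]))
# ...but a worst-case (minimax) optimizer picks policy_b, because it
# caps the most damaging scenario at 5 instead of 9.
best_worst_case = min(losses, key=lambda p: worst_loss(losses[p]))

print(best_average)     # policy_a
print(best_worst_case)  # policy_b
```

The design point is that a minimax rule deliberately gives up the best average outcome in exchange for a guaranteed cap on the worst one, which is exactly the trade-off worst-case AI safety proposes.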

Using Tripwires to Mitigate Cybersecurity-Related S-Problems


Tripwires are mechanisms that halt an AI system when it shows specific signs of autonomously causing dangerous cybersecurity-related s-risks. Put simply, a tripwire shuts down or disables the AI tool the moment a predefined danger condition is met. Determining whether an AI can attain sentience and mount malevolent cybersecurity attacks is tricky; tripwires offer a practical way to limit an advanced AI's ability to cause s-risks regardless.
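As a rough illustration of how a tripwire might be wired up, here is a minimal Python sketch that wraps an AI system in a monitor and halts it when a predefined danger signal fires. The telemetry fields, thresholds and the wrapped system's act()/stop() interface are all assumptions made for this example; a real tripwire would hook into the system's actual instrumentation.

```python
# Minimal tripwire sketch: a monitor that halts a wrapped AI system
# as soon as it observes a predefined danger signal.
# All signal names, thresholds and interfaces are illustrative assumptions.

class TripwireError(RuntimeError):
    """Raised when a tripwire condition fires."""

class TripwiredSystem:
    def __init__(self, system, max_outbound_connections=10):
        self.system = system                     # assumed AI system object
        self.max_outbound = max_outbound_connections
        self.halted = False

    def check(self, telemetry):
        # Example conditions: attempts at self-modification or a burst of
        # outbound network connections are treated as danger signals.
        if telemetry.get("self_modification_attempts", 0) > 0:
            self.shutdown("self-modification detected")
        if telemetry.get("outbound_connections", 0) > self.max_outbound:
            self.shutdown("abnormal network activity")

    def shutdown(self, reason):
        self.halted = True
        self.system.stop()                       # assumed stop() hook
        raise TripwireError(f"tripwire fired: {reason}")

    def step(self, observation):
        if self.halted:
            raise TripwireError("system already halted")
        # Assumed interface: act() returns a proposed action plus telemetry.
        action, telemetry = self.system.act(observation)
        self.check(telemetry)                    # trip BEFORE releasing the action
        return action
```

The key design choice is where the check sits: the tripwire inspects telemetry after the system proposes an action but before that action takes effect, so a dangerous action never leaves the sandbox.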

Conducting Research to Delve Deeper into S-Risks and Cybersecurity

Cybersecurity may seem a tame risk compared to the decidedly scarier ones that out-of-control AI could create. Nevertheless, data scientists working on s-risks need to cover every area in which AI can cause human suffering. Conducting proper research, asking the right questions, and building robust AI governance policies and strategies in organizations and countries can minimize the threat of s-problems. As AI develops, the risk of negative artificial sentience also grows. To address such problems, specialized approaches such as failsafe AI and worst-case AI safety can be used. Minimizing s-problems is a pressing issue for AI users today and in the future.


Naveen Joshi

Tech Expert

Naveen is the Founder and CEO of Allerin, a software solutions provider that delivers innovative and agile solutions that enable organizations to automate, inspire and impress. He is a seasoned professional with more than 20 years of experience, including extensive work customizing open source products for cost optimization of large-scale IT deployments. He is currently working on Internet of Things solutions with Big Data Analytics. Naveen completed his programming qualifications at various Indian institutes.
