The Potential Dangers Of Artificial Intelligence Chatbots

AI chatbots such as ChatGPT, Google Bard, and Microsoft Bing can easily be counted among the most remarkable inventions of recent times.

These conversational AIs are capable of producing quality responses and content pieces in a matter of seconds.

In fact, they are so convincing that a Google engineer claimed LaMDA, the conversational model behind Bard, was sentient. This opens a Pandora's box: no one can predict exactly what will come out of it, or the dangers that will come with it.

Right now, the adoption of conversational AI in business is sending shivers down the spines of writers, developers, and other creatives, who fear that these tools might end up taking their place. The fear isn't baseless, either: many recent layoffs have hit sales, marketing, and recruitment departments, where content production is a core part of the work.

An article by Forbes describes these conversational AIs, such as ChatGPT, Google Bard, and Microsoft Bing, as the "elephant in the room". Since its launch, ChatGPT has attracted 100 million users, making it the fastest-growing consumer internet application to date.

The content these applications generate is convincing enough that companies are adopting them for mass content production. AI-generated content has taken off across formats such as:

  • Blog articles

  • Product descriptions

  • Cold emails

  • Sales copy

  • Book descriptions, etc.

This is simply the tip of the iceberg: much bigger threats are surfacing as AI chatbots go mainstream and become better versed in human language.

Dangers of AI Chatbots - That Are Too Obvious to Deny!

With the increasing popularity of AI chatbots, the number of potential threats is also increasing. These threats can be grouped into three categories:

1. Privacy and Confidential Data Concerns of AI Chatbots

Recently, ChatGPT came under scrutiny in Italy for violating GDPR, which led to a temporary ban of the application there. ChatGPT was also questioned for not having an age-verification (18+) filter on the platform.

Amidst this, OpenAI was ordered to stop processing the personal data of Italian users. The ban has since been revoked, but it raises a much bigger question: to what degree is our personal data on social media platforms and other forums free from the clutches of AI chatbots?

Adding to this, there have been instances where company employees leaked confidential data on the platform. Samsung has told its employees not to use ChatGPT on the internal network or on company devices, in order to prevent leaks of sensitive information. The investment bank JPMorgan has taken a similar step.

There have also been instances where employees used ChatGPT to review their code or even generate code. Code used inside a company is sensitive information and can't be made public; otherwise, it becomes open to exploitation.
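
As a rough illustration of the kind of guardrail this calls for, here is a minimal sketch of a pre-submission filter that scans a prompt for obvious secrets before it leaves the corporate network. The secret patterns and the send_to_chatbot stub are hypothetical placeholders, and a real deployment would need far more robust detection.

```python
import re

# Hypothetical patterns for obvious secrets; a production filter would need
# a much broader ruleset (entropy checks, internal hostnames, and so on).
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                        # AWS access key IDs
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),  # PEM private keys
    re.compile(r"(?i)(api[_-]?key|password|secret)\s*[:=]\s*\S+"),
]

def contains_secret(prompt: str) -> bool:
    """Return True if the prompt appears to contain sensitive material."""
    return any(pattern.search(prompt) for pattern in SECRET_PATTERNS)

def safe_submit(prompt: str) -> str:
    """Block a risky prompt locally instead of sending it to an external API."""
    if contains_secret(prompt):
        raise ValueError("Prompt blocked: possible confidential data detected.")
    return send_to_chatbot(prompt)

def send_to_chatbot(prompt: str) -> str:
    # Placeholder for the real call to whichever external chatbot API is in use.
    return "(chatbot response)"

if __name__ == "__main__":
    print(safe_submit("Explain what a race condition is."))    # allowed
    try:
        safe_submit("Review this config: api_key = sk-12345")  # blocked
    except ValueError as error:
        print(error)
```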

All of this happened even though ChatGPT itself prompts users not to share sensitive data. And since most companies don't run a GPT of their own, it is difficult for them to see how employees are actually using the tool. This tension has sparked more debate in the community than ever before.

Michael Chui, a partner at the McKinsey Global Institute, stated that "even when it's not sanctioned or blessed by IT, people are finding [chatbots] useful". He added: "Throughout history, we've found technologies that are so compelling that individuals are willing to pay for it. PCs were similar, so we're seeing the equivalent with generative AI."

The statement highlights that, controlled or not, people are going to use AI chatbots anyway. The only way to fully control the use of such applications inside an organization is to run its own GPT for internal work.
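
In practice, that could be as simple as pointing internal tooling at a model server hosted inside the company network, so prompts never leave it. The endpoint URL and payload shape below are hypothetical placeholders, not any particular vendor's API:

```python
import json
import urllib.request

# Hypothetical internal endpoint; in a real setup this would be the address
# of a self-hosted model server running inside the corporate network.
INTERNAL_GPT_URL = "http://gpt.internal.example.com/v1/chat"

def ask_internal_gpt(prompt: str) -> str:
    """Send a prompt to the in-house model so data never leaves the network."""
    payload = json.dumps({"prompt": prompt}).encode("utf-8")
    request = urllib.request.Request(
        INTERNAL_GPT_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["answer"]

# Example usage (assumes the internal server is actually running):
# print(ask_internal_gpt("Summarize this incident report."))
```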

2. Design of Misinformation and Propaganda Made Easy

Gordon Crovitz, co-chief executive of NewsGuard, told The New York Times: "Crafting a new false narrative can now be done at dramatic scale, and much more frequently - it's like having A.I. agents contributing to disinformation."

ChatGPT is a far more sophisticated AI chatbot than any of its predecessors, which makes it a compelling tool for creating false narratives and propaganda. Fed enough false information and a user prompt to build a story around, it can be used to flood online forums with misinformation with ease. In fact, it is capable of producing narratives that are genuinely believable.

When asked to create a narrative around Area 51, these were the results:

[Screenshot: ChatGPT's response to the Area 51 prompt. Source: chat.openai.com]

With a little craftiness in the prompt and some post-editing of the answer, this response can become a perfect recipe for spreading misinformation about Area 51. The answer itself notes that the U.S. government has created a mystery around Area 51, but it does not dismiss the premise of the question.

3. Responses That Can Manipulate and Be Manipulated

Once again, ChatGPT was asked how it could be used for emotional manipulation. Here is the answer:

[Screenshot: ChatGPT's response on how it could be used for emotional manipulation. Source: chat.openai.com]

As stated in the response itself, ChatGPT is built on natural language processing (NLP), as are the other AI chatbots available on the internet, such as Bard, Microsoft Bing, Jasper.AI, NeevaAI, etc.

Using NLP, these chatbots can understand the language and the emotion behind a query and generate their responses accordingly.
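
As a toy illustration of that idea (and only the idea; real chatbots use trained language models, not keyword lists), here is a minimal sketch of tone detection steering a reply:

```python
import re

# Toy lexicons standing in for a trained sentiment model.
ANGRY_WORDS = {"wrong", "stupid", "useless", "liar", "terrible"}
FRIENDLY_WORDS = {"please", "thanks", "great", "appreciate", "helpful"}

def detect_tone(query: str) -> str:
    """Classify the emotional tone of a query from simple word counts."""
    words = set(re.findall(r"[a-z']+", query.lower()))
    anger = len(words & ANGRY_WORDS)
    warmth = len(words & FRIENDLY_WORDS)
    if anger > warmth:
        return "hostile"
    if warmth > anger:
        return "friendly"
    return "neutral"

def respond(query: str) -> str:
    """Shape the reply according to the detected tone."""
    tone = detect_tone(query)
    if tone == "hostile":
        return "I'm sorry this is frustrating. Let's try again."
    if tone == "friendly":
        return "Happy to help!"
    return "Here is the information you asked for."

print(respond("You are wrong and useless."))  # hostile -> conciliatory reply
print(respond("Thanks, that was helpful!"))   # friendly -> warm reply
```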

The responses can’t be replicated because Microsoft continuously upgrades the algorithm. However, there have been instances where the Microsoft-backed Chatbot got incited by the user, as reported on multiple Reddit and Twitter handles, and started to show rude behaviors and even gaslighting & manipulating users.

Below is a similar response that was reported by a Twitter user:

[Screenshot: the reported Bing conversation. Source: https://twitter.com/MovingToTheSun/status/1625156575202537474/photo/4]

The screenshot shows an interaction in which Microsoft Bing behaves rudely toward its user when put under pressure and confusion. There have been many similar instances.

This happens largely because these chatbots derive their conversational skills from data scraped from the internet and from the conversations users have with them. It is therefore possible both to manipulate AI chatbots like ChatGPT, Bard, and Microsoft's Bing, and to be manipulated by them.

Now imagine a scenario in which the responses of a conversational AI are hijacked and millions of users consume the resulting disinformation at the same time. That kind of access could be used to sway votes toward the wrong candidate, fuel propaganda, incite hatred, and much more.

How Are AI Chatbots Being Used for Scams?

Using a conversational AI is as easy as searching "How to use ChatGPT?" and following the results. Many scammers have turned that accessibility to their advantage to pull off scams online. Here are some examples:

1. Romancing with the Victim

Many people today use online dating applications to find love, and scammers have been using AI chatbots to generate their responses for them. Those responses are convincing enough for victims to trust the person on the other end and get drawn into a scam.

2. Phishing in the Sea of People

Phishing has long been among the most prominent cybersecurity attacks. With the help of AI chatbots, scammers can now draft convincing messages at scale to steal users' personal data. Using these chatbots, they masquerade as banks, credit card providers, friends, and colleagues to win people's trust. The goal is to get people to act on instinct, without thinking through the consequences.

3. Catfishing to Steal Money and Data

With thousands of images of attractive men and women circulating on the internet, creating a fake profile has become much easier. Scammers build convincing profiles online and use AI chatbots to sustain online relationships. In doing so, they prey on people's money, personal data, and even mental health.

Summing Up

AI chatbots are not necessarily evil. They mirror our capabilities as individuals and form their responses based on our data. Still, they shouldn't be trusted blindly, for the following reasons:

  • Data used for training the model can often be biased

  • AI doesn’t always understand the context of the conversation

  • It is not an expert in the domain

  • It doesn’t always provide responses from verified sources

  • Being a machine, it is not bound by ethics

However, it doesn’t mean that they can’t be used in different areas of work. For different services where confidentiality is not a big concern, these AI chatbots can be used with human censorship. With the rise of AI chatbots, it is only the beginning, and the true reality of things can only be seen in the near future.

Google has yet to release Bard at full scale. It will be interesting to see what happens when the company with the largest share of the search engine market starts reviewing content generated by its own platform. Until then, let's take part in this worldwide phenomenon, but with our own self-censorship to decide what is good and what is bad.


Luke Fitzpatrick

Tech Expert

Luke Fitzpatrick has been published in Forbes, Yahoo! News and Influencive. He is also a guest lecturer at the University of Sydney, lecturing in Cross-Cultural Management and the Pre-MBA Program. You can connect with him on LinkedIn.
