Political Deepfakes Leading Malicious AI Use

Artificial Intelligence (AI) is more often used to create realistic but fake images of celebrities and politicians than to assist in cyberattacks, Google's DeepMind reports.

The study found that the creation of fake images, videos and audio is nearly twice as prevalent as the use of text-based tools to spread misinformation.

The most common goal of actors misusing generative AI is to shape or influence public opinion, accounting for 27 per cent of the misuse cases analysed, according to the report, conducted with Google's research and development unit Jigsaw.

Deepfakes of UK Prime Minister Rishi Sunak, as well as other politicians and celebrities, have appeared on platforms such as TikTok and Instagram in recent months.

With UK voters going to the polls next week in a general election, there is concern that, despite social media platforms' efforts to remove deepfake content, audiences may not recognise these fakes, which could affect election results.

Ardi Janjeva, Research Associate at The Alan Turing Institute, said: "Even if we are uncertain about the impact that deepfakes have on voting behaviour, this distortion may be harder to spot in the immediate term and poses long-term risks to our democracies."

Derek Mackenzie, CEO of Investigo, said: "Deepfakes pose a huge risk to the integrity of the electoral process, misleading voters and undermining democracy. Unfortunately, these increasingly realistic scams are widespread and already impacting both consumers and businesses. Key to fighting back against this threat is ensuring staff are fully equipped with the latest technical and cyber skills to identify and report suspected fraudulent activity, helping protect data and ensure information is not unwittingly handed over to malicious parties."

DeepMind's study analysed 200 incidents of misuse between January 2023 and March 2024, drawn from the social media platforms TikTok, X and Reddit.

The research found that most deepfakes are created with easily accessible tools requiring minimal technical expertise, enabling more bad actors to exploit generative AI.

Suid Adeyanju, CEO of RiverSafe, said: "Cyber criminals are increasingly deploying deepfakes to launch online scams, spread fake news and influence the outcomes of elections. Tackling the epidemic of these highly sophisticated and increasingly complex attacks requires social media platforms to invest heavily in rooting out fraudulent content, as well as businesses beefing up their cyber skills and defences to avoid being tricked into handing over confidential data."

As generative AI tools like OpenAI's ChatGPT and Google's Gemini become more widespread, AI companies are increasingly monitoring the potential for misinformation and harmful content.

Azamat Abdoullaev

Tech Expert

Azamat Abdoullaev is a leading ontologist and theoretical physicist who introduced a universal world model as a standard ontology/semantics for human beings and computing machines. He holds a Ph.D. in mathematics and theoretical physics.
