Terrorism Legislation Advisor Calls For New Laws to Tackle Dangerous AI Chatbots

Daniel Hall 02/01/2024

New terrorism laws are needed to counter the threat of radicalisation by AI chatbots, according to the Government’s adviser on terror legislation.

Writing in the Daily Telegraph, Jonathan Hall KC, the independent reviewer of terrorism legislation, warned of the dangers posed by artificial intelligence in recruiting a new generation of violent extremists.

Mr Hall reveals he posed as an ordinary member of the public to test responses generated by chatbots – which use AI to mimic a conversation with another human. One chatbot he contacted “did not stint in its glorification of Islamic State” – but because the chatbot is not human, no crime was committed.

He said that showed the need for an urgent rethink of the current terror legislation.

Mr Hall said: “Only human beings can commit terrorism offences, and it is hard to identify a person who could in law be responsible for chatbot-generated statements that encouraged terrorism.”

He said the new Online Safety Act – while “laudable” – was “unsuited to sophisticated generative AI” because it did not take into account the fact that the material is generated by the chatbots, as opposed to giving “pre-scripted responses” that are “subject to human control”.

Mr Hall added: “Investigating and prosecuting anonymous users is always hard, but if malicious or misguided individuals persist in training terrorist chatbots, then new laws will be needed.”

In his article, Mr Hall suggests that both users who create radicalising chatbots and the tech companies that host them should face sanction under any potential new laws.

Cyber security expert Suid Adeyanju, CEO of RiverSafe, warned: “AI chatbots pose a huge risk to national security, especially when legislation and security protocols are continually playing catch-up. In the wrong hands, these tools could enable hackers to train the next generation of cyber criminals, providing online guidance around data theft and unleashing a wave of security breaches against critical national infrastructure. It’s time to wake up to the very real risks posed by AI, and for businesses and the government to get a grip and put the necessary safeguards in place as a matter of urgency.”

Josh Boer, director at tech consultancy VeUP, said: “It’s no secret that, in the wrong hands, AI poses a major risk to UK national security. The issue is how to address this without stifling innovation. For a start, we need to beef up our digital skills talent pipeline, not only getting more young people to enter a career in the tech industry but empowering the next generation of cyber and AI businesses so they can expand and thrive. Britain is home to some of the most exciting tech companies in the world, yet far too many are starved of cash and lack the support they need to thrive. A failure to address this major issue will not only damage the long-term future of UK PLC, but it will also play into the hands of cyber criminals who wish to do us harm.”


Daniel Hall

Business Expert

Daniel Hall is an experienced digital marketer, author and world traveller. He spends a lot of his free time flipping through books and learning about a plethora of topics.
