The concept of ‘fake news’ has emerged not only in journalism but also in other, supposedly credible sources. AI and computer vision must be put to good use to verify the authenticity of information on the web.
We are going through a dangerous phase in our history. Since the beginning of humankind, there haven't been many instances in which we have had to deal with a deadly global pandemic, political and social unrest in various parts of the world, and the looming, inevitable threat of climate change all at the same time. Unfortunately, alongside these crises, another threat to humanity has re-emerged in the past decade or so—rampant misinformation.
Although it has been a near-continuous presence in the world for several centuries, misinformation in its current online avatar is a dangerous tool, especially during a pandemic. The World Health Organization (WHO) and other governing bodies have repeatedly urged people to be cautious when consuming web content related to drugs and treatments for COVID-19, and individuals can report false claims about the disease directly to WHO. The spreaders of misinformation intend to create discord among the masses and sway their opinions to suit their agendas; mostly, misinformation is used to sow seeds of bias and hatred. Unfortunately, misinformation is easily digested because, in many cases, it is a ready-made validation of people's existing prejudices and false beliefs.

Today, misinformation is widely prevalent in journalism and other fields. In fact, its spread can be likened to that of a virus: the misinformation ‘infodemic’ mutates as it travels through social media platforms and is lapped up by thousands of people, knowingly or unknowingly. Ironically, AI is one of the reasons misinformation can spread with the speed and efficiency it does today. Generators of misinformation may use cutting-edge technology to disperse their false theories in the public domain and make them appear to the masses like the gospel truth.
At the same time, AI, along with its subfield computer vision, can also be used to eliminate misinformation from the web, or at least to provide a verification screen so that people can check the authenticity of online information before digesting it. Here are a few tools with which we can deal with the rampant spread of misinformation:
The problem of misinformation spread can be dealt with effectively if AI-powered tools are used. The main benefit of automated tools is that human intervention is minimized, so the process can be carried out with zero (or minimal) influence from partisan opinions.
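One way to picture such automated rating is as a function that combines several quality signals into a single score. The signal names and weights below are invented purely for illustration; real scoring systems, such as Google's, are proprietary and learn their weightings from data rather than hard-coding them.

```python
from dataclasses import dataclass

@dataclass
class SiteSignals:
    """Hypothetical per-site signals an automated rater might collect."""
    claim_accuracy: float      # fraction of sampled claims verified as correct (0-1)
    plagiarism_ratio: float    # fraction of content flagged as copied (0-1)
    broken_link_ratio: float   # fraction of outbound links that are dead or untrusted (0-1)

def credibility_score(s: SiteSignals) -> float:
    """Combine the signals into a single 0-100 score.

    The weights are invented for illustration; a production system
    would learn them from labelled data.
    """
    score = (0.60 * s.claim_accuracy
             + 0.25 * (1.0 - s.plagiarism_ratio)
             + 0.15 * (1.0 - s.broken_link_ratio))
    return round(100 * score, 1)

# A site with mostly accurate, original content ranks far higher
# than one with many false claims and copied text.
good_site = credibility_score(SiteSignals(0.95, 0.05, 0.10))
bad_site = credibility_score(SiteSignals(0.30, 0.60, 0.50))
```

A score like this could then feed directly into search ranking, so that low-scoring sites are deprioritized in results.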
Organizations such as Google use specialized scoring tools and methods to rate websites based on the accuracy of their content. The accuracy of the data published by news websites and other information providers on the internet becomes the yardstick by which websites are prioritized in search results. According to Google, its algorithms assign each website an accuracy score based on the authenticity of its content; the use of third-party links, plagiarism, and other factors also play an important role in how sites are ranked. Google's efforts to curb online misinformation have prompted other organizations to strive for the same goal.

Several AI- and big data-assisted tools exist today to eliminate online misinformation. One such tool is Crosscheck. In 2017, the French news media introduced the Google- and Facebook-backed tool to detect and either eliminate or debunk false election-related reports across the internet in the run-up to that year's French presidential elections. According to First Draft News, the media organization that created Crosscheck, the tool was made for finding, verifying, and publishing election-related content found on the internet. Crosscheck included or was assisted by several other tools and components that facilitated its operations:
CrowdTangle helped Crosscheck discover and closely track real-time, evolving election-related content on the web.
Google provided a tool to monitor election-related searches across the internet.
Hearken, an engagement management system, gathered questions asked by the general public about the elections and then provided easy-to-understand responses to them.
The Check tool handled information verification: the correctness of online information was checked by comparing it against the massive swathes of verified information in Crosscheck's databases.
Another tool used big data and predictive analytics to forecast which incidents and events would go viral.
Yet another big data-driven tool ingested a steady stream of information from more than 600 news and media websites and classified it by usefulness and accuracy.
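The verification step performed by a tool like Check can be pictured as a nearest-neighbour lookup: an incoming claim is compared against a corpus of already-verified statements. The sketch below uses simple Jaccard word overlap, and the "verified" statements are invented examples; a real system would use far richer matching than this.

```python
def tokens(text: str) -> set:
    """Lowercased word set for crude overlap comparison."""
    return set(text.lower().split())

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: overlap divided by the union of the two word sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Invented stand-ins for verified statements in a fact-checking database.
VERIFIED = [
    "the election takes place on sunday 7 may",
    "both candidates qualified for the runoff",
]

def best_match(claim: str, database: list) -> tuple:
    """Return the most similar verified statement and its overlap score.

    A low score means the claim has no close counterpart in the verified
    corpus and should be routed to a human fact-checker instead.
    """
    scored = [(ref, jaccard(tokens(claim), tokens(ref))) for ref in database]
    return max(scored, key=lambda pair: pair[1])

match, score = best_match("the runoff election is on sunday 7 may", VERIFIED)
```

The threshold at which a claim counts as "verified" versus "needs human review" is a policy decision, not something the similarity measure decides on its own.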
Apart from these, Pheme, an EU-funded tool, helped the French media gauge the accuracy of election-related content and questions found online. Pheme was created in collaboration with other technology and information partners of the EU, and it used big data, natural language processing, data mining, social network analysis, and data visualization to eliminate misinformation from the web.

As we can see from the examples of Google and Crosscheck, AI and big data are vital to assessing how accurate something posted on the internet is. Governments worldwide and international bodies should use such tools to detect and eliminate (or at least reduce) misinformation, lies, and baseless conspiracy theories on the internet through features such as email verification, social media analysis, and more.
Doctored visual data on social media platforms qualifies as misinformation too, and advanced computer vision tools can be used to weed out the false information it carries. Facebook, for instance, uses ObjectDNA, among other tools and technologies, for this purpose. According to Facebook, unlike most standard computer vision systems, ObjectDNA focuses on specific details within an image while overlooking background clutter and noise in order to gauge the veracity of the information it contains. These details allow website administrators to flag an image as inappropriate or inaccurate after inspection, using previously flagged pictures as references even when two images differ starkly. Facebook also uses LASER, a text-processing tool that embeds sentences from many languages into a shared space to ascertain semantic similarities or differences in text found in visual data. As a result, Facebook's tools work on images as well as text.
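The idea behind language-agnostic sentence embeddings such as LASER can be illustrated with a toy model: map words from different languages onto shared concept IDs and compare the resulting vectors. The tiny bilingual lexicon below is invented for illustration; LASER learns such a shared space automatically from large multilingual corpora rather than from a hand-written dictionary.

```python
from collections import Counter
from math import sqrt

# Toy "language-agnostic" lexicon mapping English and French words to
# shared concept IDs -- a hand-made stand-in for what LASER learns.
CONCEPTS = {
    "vote": "C1", "voter": "C1",
    "today": "C2", "aujourd'hui": "C2",
    "people": "C3", "gens": "C3",
}

def embed(sentence: str) -> Counter:
    """Map a sentence to a bag of shared concept IDs (unknown words dropped)."""
    return Counter(CONCEPTS[w] for w in sentence.lower().split() if w in CONCEPTS)

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two concept-count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# An English and a French sentence about the same event map to the same
# concepts, so they score as highly similar despite sharing no words.
sim = cosine(embed("people vote today"), embed("les gens voter aujourd'hui"))
```

Because the comparison happens in the shared concept space, the same check works across languages, which is exactly what makes this approach useful for moderating a multilingual platform.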
Apart from regular social media posts, deepfakes also belong in the bracket of false-information-spreading elements. Problematic deepfakes have the power to spread misinformation on an industrial scale due to their impressive make-believe capabilities. Today, deepfakes can be countered with certain tools, blockchain systems being one of them. As we will see later, people who consume this kind of content need to be aware that visual data can be manipulated. To deal with deepfakes head-on, Facebook has assembled an AI red team as well as a detection model built from eight deep learning neural networks. The networks were trained on specialized datasets so that they can track down known deepfakes, at least the contentious ones, and identify new ones in real time with advanced computer vision-enabled data synthesis measures. When the system encounters a brand-new type of deepfake, it looks for similar ones and uses them to retrain its models.
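The ensemble pattern described above, several detectors whose outputs are combined into one verdict, can be sketched in a few lines. The "detectors" below are trivial threshold functions invented for illustration; in Facebook's real system they would be eight trained deep neural networks operating on image data.

```python
from statistics import fmean
from typing import Callable, Sequence

# Each "network" is stood in for by a simple scoring function that maps
# image features to a fakeness probability in [0, 1].
Detector = Callable[[Sequence[float]], float]

def make_threshold_detector(index: int, cutoff: float) -> Detector:
    """Toy detector: fires when one hand-picked feature exceeds a cutoff."""
    return lambda feats: 1.0 if feats[index] > cutoff else 0.0

def ensemble_score(feats: Sequence[float], detectors: Sequence[Detector]) -> float:
    """Average the detectors' outputs; higher means more likely a deepfake."""
    return fmean(d(feats) for d in detectors)

detectors = [make_threshold_detector(i, 0.5) for i in range(3)]
suspicious = [0.9, 0.8, 0.7]   # all three detectors fire
benign = [0.1, 0.2, 0.3]       # none fire

flagged = ensemble_score(suspicious, detectors) > 0.5
```

Averaging several independent detectors makes the system harder to fool than any single model: a deepfake technique that evades one network is less likely to evade all of them at once.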
Using Facebook's example, we can see that visual data, from social media images to deepfakes, can be scanned efficiently to nip misinformation in the bud.
Most importantly, users must educate themselves through various sources so that they do not fall prey to misinformation on the internet. Simple habits, such as avoiding shady, illicit websites and verifying messages and posts before forwarding them, can stop misinformation in its tracks.
Big data, computer vision, and AI have advanced greatly over the last decade or so, but basic knowledge of what is right and wrong cannot be guaranteed by technologies alone. Users must show curiosity to get to the bottom of a piece of information before believing and acting on it. Most importantly, nothing found on the internet should ever be taken at face value. Check three or four sources for accuracy (the more the better), and you're good to go. As they say, take everything with a pinch of salt.
Naveen is the Founder and CEO of Allerin, a software solutions provider that delivers innovative and agile solutions designed to automate, inspire, and impress. A seasoned professional with more than 20 years in the industry, he has extensive experience customizing open-source products for cost optimization in large-scale IT deployments. He is currently working on Internet of Things solutions with big data analytics. Naveen completed his programming qualifications at various Indian institutes.