The global health emergency brings infodemic risks into the spotlight
Eat garlic, rub sesame oil on your skin, or gargle mouthwash to protect yourself from infection with the novel coronavirus. In recent weeks, the Internet and social media have filled up with bizarre remedies for the 2019-nCoV coronavirus, alongside fake news that has fuelled fear, hatred and reactions such as boycotts of Chinese restaurants and racist attacks against Asian people. Conspiracy theories have also flourished: many claimed that a secret Chinese government lab in Wuhan created the virus and accidentally released it, or that Big Pharma helped spread it in order to sell vaccines (which do not yet exist).
Misinformation is dangerous, and alongside the coronavirus emergency itself the World Health Organisation has warned against the risks of a massive infodemic: an overabundance of inaccurate information that makes it hard for people to find trustworthy sources and reliable guidance when they need them. The WHO has set up a task force to debunk fake news about the coronavirus, calling on national governments to work for quality, transparent information.
Social networks – from Facebook and Twitter to TikTok – are also trying to fight the infodemic, but fake news is an increasingly worrying phenomenon well beyond the coronavirus.
We all point an accusing finger at trolls and fake-news machines, an illicit money-making business aimed at influencing public opinion and consensus, from purchases to political votes. Scientific research is not exempt from this risk: Nature has recently relaunched a campaign against predatory journals, which are characterised by false or misleading information, deviation from best editorial practices, a lack of transparency, and aggressive solicitation methods. According to its estimate, about 400,000 papers are published in such journals every year, to the detriment of the scientific community and its stakeholders, including patients.
From the user's viewpoint, the mechanisms that trap people in fake news are well known. The polarisation of opinions around a topic feeds confirmation bias, the prejudice that pushes us into reassuring comfort zones where our version of the truth is never questioned. We end up in a filter bubble, where algorithms serve us content ever more centred on our existing beliefs. And the difference between what is true and what is false fades away, researchers say.
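To see how such a feedback loop can arise, consider a deliberately naive recommender that ranks items purely by their similarity to what a user has already engaged with. This is only an illustrative sketch (the scoring rule, the item data and the function names are all hypothetical, not any real platform's system):

```python
# Naive filter-bubble sketch: rank items by overlap with the user's past
# interests, then let each click feed those interests back into the ranking.

def similarity(item_topics: set, history: set) -> float:
    """Jaccard similarity between an item's topics and the user's history."""
    if not item_topics or not history:
        return 0.0
    return len(item_topics & history) / len(item_topics | history)

def recommend(items: list, history: set, k: int = 3) -> list:
    """Return the k items whose topics best match what the user already likes."""
    return sorted(items, key=lambda it: similarity(it["topics"], history),
                  reverse=True)[:k]

items = [
    {"title": "Miracle garlic cure",   "topics": {"health", "remedies"}},
    {"title": "Secret-lab conspiracy", "topics": {"health", "conspiracy"}},
    {"title": "Peer-reviewed study",   "topics": {"health", "science"}},
    {"title": "Local sports results",  "topics": {"sports"}},
]

history = {"health", "remedies"}
for round_no in range(3):
    feed = recommend(items, history)
    print(round_no, [item["title"] for item in feed])
    # The user clicks the top recommendation; its topics reinforce the history,
    # so the next round's feed is centred on the same beliefs as before.
    history |= feed[0]["topics"]
```

Because the score rewards only agreement with past behaviour, items outside the bubble, such as the sports story, never surface, and the feed converges on the user's existing interests.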
And distraction makes things even worse. People do not always lack critical thinking; sometimes they are simply distracted: when sharing something online, they read too quickly, or chase likes and engagement from their followers instead of checking the quality of the content.
That is the finding of a team of researchers from MIT and the University of Regina in Canada, whose new paper suggests that misinformation can be effectively countered simply by prompting people to pay more attention to their online behaviour. Is this an invitation to add the question ‘Are you sure?’ before clicking ‘Share’?
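As a toy illustration of what that nudge might look like in a share flow (the function names and the prompt below are hypothetical, not any platform's actual interface), the idea boils down to a single extra confirmation step before a post goes out:

```python
# Hypothetical accuracy nudge: ask "Are you sure?" before publishing a post.

def share(post: str, confirm) -> bool:
    """Publish the post only if the user confirms the accuracy prompt."""
    if confirm(f"Are you sure? Have you checked that this is accurate?\n  {post!r}"):
        print("Shared:", post)
        return True
    print("Share cancelled.")
    return False

if __name__ == "__main__":
    # A console stand-in for the dialog a real app would display.
    ask = lambda prompt: input(prompt + " [y/N] ").strip().lower() == "y"
    share("Garlic cures the coronavirus!", ask)
```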