For as long as it has been around, the Internet has been the first port of call for people feeling unwell and wanting to check their symptoms. Worried about an unexplained ache or pain? Anxious about a general feeling of malaise? Ask Google.
But search engines frequently lead people straight to the very worst-case scenario: algorithms rank pages according to how often a keyword recurs and how many clicks a page has received. The top-ranking sites get clicked repeatedly, and so they stay at the top of the results page, even if the diseases they suggest are rare or the information is dubious. Before you know it, you’ve convinced yourself you have just hours to live.
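To see why this feedback loop is so hard to break, here is a toy simulation in Python. It is a sketch only: the page names, weights and click probabilities are all made up, and this is not how any real search engine actually ranks results. It simply shows the rich-get-richer dynamic described above, where an early lead in clicks keeps a page on top regardless of quality.

```python
import random

# Hypothetical pages with made-up keyword and click counts.
pages = {
    "rare-disease-scare.example": {"keyword_hits": 9, "clicks": 50},
    "sober-medical-advice.example": {"keyword_hits": 7, "clicks": 10},
}

def score(page):
    # Rank by keyword recurrence plus accumulated clicks.
    return page["keyword_hits"] + 0.1 * page["clicks"]

for day in range(30):
    ranking = sorted(pages, key=lambda name: score(pages[name]), reverse=True)
    # Most users click the top result, reinforcing its position.
    clicked = ranking[0] if random.random() < 0.8 else ranking[-1]
    pages[clicked]["clicks"] += 1

# The scare page's head start compounds: it stays on top.
print(sorted(pages.items(), key=lambda kv: score(kv[1]), reverse=True))
```

Because clicks feed back into the score, the gap between the two pages tends to widen over time rather than close.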
Self-diagnosis-by-search-engine can fuel needless anxiety. In most cases, the cure lies in an informed diagnosis from a trusted medical professional. (Unless, of course, you’re a cyberchondriac – a term recently coined for hypochondriacs whose condition is exacerbated by access to endless online information about ailments.)
But when someone moves from self-diagnosis to self-treatment, the provenance and intent of health information online can get a whole lot murkier.
Unverified health information has proliferated online in recent years, promoting miracle cures, dangerous diets, and alternative medical therapies – all generated without the editorial or medical oversight applied to its offline equivalent.
In a high-profile recent example, Gwyneth Paltrow’s “Goop” lifestyle brand was reported to Trading Standards and the Advertising Standards Authority in the UK for promoting “potentially dangerous” advice related to “unproven” health products. This included a supplement for pregnant women containing 110 per cent of the “daily value” dose of Vitamin A, despite explicit NHS and WHO advice that pregnant women should avoid supplements containing Vitamin A because of the risk of harm to the unborn baby.
Even more sinister is the growing evidence that disinformation, spread by bots across popular online platforms, is being used to deliberately undermine official public health campaigns. In a recent report, the LSE’s Trust, Truth and Technology Commission identified “irresponsibility” as one of “five giant evils” fuelling the current information crisis:
“Irresponsibility arises because power over meaning is held by organisations that lack a developed ethical code of responsibility and that exist outside clear lines of accountability and transparency.”
When online platforms fail to prevent the spread of false health information by social media bots, it can have serious consequences: “The absence of transparent standards for moderating content and signposting quality can mean the undermining of confidence in medical authorities and declining public trust in science and research. This has been visible in anti-vaccination campaigns when Google search was found to be promoting anti-vaccine misinformation. All over Europe, the anti-vaccination movement, informed via social media, is leading to a measurable decline in the rate of vaccination.”
Research published by the American Public Health Association has shown that exposure to negative information on vaccination leads to increased hesitancy and delay amongst parents, who are then more likely to turn to the Internet for information and less likely to trust healthcare providers and public health experts on the subject.
In the UK, a recent Academy of Medical Sciences report found that only 37 per cent of the public trust evidence from medical research. Once trust has been eroded, it is very difficult for it to be restored.
So, what can be done to address the impact of fake health news and to reduce our willingness to believe and share it?
Firstly, media literacy and critical thinking need to be improved across society: equipping and empowering individuals to spot, critique and fact-check fake news – on whatever topic – when they come across it online. The BBC has some excellent practical tips on how to spot fake health stories.
Secondly, those who facilitate the spread of fake news – on whatever topic – must be held accountable. The final report from the high-profile Select Committee Inquiry on Disinformation and Fake News is likely to make far-reaching recommendations in this area, while the Law Commission, noting the increasing spread of health misinformation and disinformation online, has been pondering the “legitimate question” of whether the law relating to false claims should extend beyond traditional contexts such as “fraud, consumer protection and the administration of justice”.
The time has long gone when a “quack”, purveying “dodgy” miracle cures or unfounded health advice, was easy to spot and dismiss. As the Law Commission put it: “false health claims might have [once] been tolerated on the basis of a broader commitment to freedom of expression. Could it now be argued that the potential harm caused by such conduct is so great that it justifies criminalisation?”
For more information, please visit AXA PPP healthcare Health Tech&You.