Fake news has become pernicious and widespread, so in this article we look at how the search engines are facing up to the enormous challenge of separating the real from the fake.
Those trying to combat the spread of fake news face a common set of challenges, such as those identified by Richard Zack, CEO of OurNews, which include:
- There are people (and state-sponsored actors) worldwide who are making it harder for others to know what to believe (e.g. by spreading fake news and misinformation, and by distorting stories).
- Many people don’t trust the media or don’t trust fact-checkers.
- Simply presenting facts doesn’t change people’s minds.
Other challenges include:
- ‘Confirmation bias’ means that we like to read stories that confirm our existing beliefs, so many fake news stories will always find believers.
- Young people (heavy users of social media) may be more susceptible to seeing and believing fake news, according to research from Stanford’s Graduate School of Education. Most 18-to-24-year-olds consume news via social media. For example, over half of teens (54 per cent) get news from social media and 50 per cent get news from YouTube (CommonSense 2019), while research in 2020 found that over a quarter of 18-to-24-year-olds get their news from Instagram, 19 per cent from Snapchat, and 6 per cent from TikTok. With social media platforms also battling against a tide of fake news, this is a real challenge that extends beyond search engines.
- Fake news is attractive and often seems more interesting than truth.
- People find it difficult to spot fake news.
Search Engine Algorithms Promoting Fake News?
Another, less obvious, challenge that some search analysts have highlighted is that search engine algorithms may promote sensational fake news above real stories and may, therefore, also profit from showing it. The thinking is that people are simply drawn to click on links to stories or information that look sensational or controversial. When a link is clicked on, this tells the search engine algorithm that the link was relevant to the search query (i.e. the algorithm awards it ‘link relevance’). If the link is clicked on enough times by others, and so receives more link relevance, it moves up the search engine rankings and is given greater prominence, even though the page may contain fake news. This positive feedback loop can, therefore, ensure that even a fake story keeps getting served and clicked on, and ultimately becomes circulated and believed as truth.
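The feedback loop described above can be sketched as a toy simulation. All names, “appeal” values, and the rank discount below are illustrative assumptions for the sake of the sketch, not any real search engine’s actual algorithm:

```python
# Toy model of the click-driven ranking feedback loop: clicks are read
# as "link relevance", which raises rank, which attracts more clicks.

# "Appeal" models how tempting a link looks; the sensational story is
# assumed to attract more clicks than the measured one (an assumption).
APPEAL = {"measured-report": 0.2, "shock-headline": 0.5}

def simulate(rounds):
    """Accumulate 'link relevance' from expected clicks per round."""
    relevance = {name: 0.0 for name in APPEAL}
    for _ in range(rounds):
        # Rank results by accumulated relevance, highest first.
        ranked = sorted(relevance, key=lambda n: -relevance[n])
        for rank, name in enumerate(ranked):
            # Lower-ranked links are seen less, so they earn fewer clicks.
            expected_clicks = APPEAL[name] / (rank + 1)
            relevance[name] += expected_clicks  # clicks read as relevance
    return relevance

scores = simulate(10)
top = max(scores, key=scores.get)
print(top)  # → shock-headline
```

Once the sensational link edges ahead, its higher rank compounds its click advantage each round, which is exactly the self-reinforcing loop described above.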
In addition to earning revenue from adverts, search engines also track user behaviour and sell the data through real-time bidding, and ad-driven search engines can show better metrics if they reward clicks on enticing links. This means that links to sensational fake news stories and videos can drive search engine revenue while doing nothing for the user, who ends up reading fake stories. In short, it can be in a search engine company’s interest simply to show users what they want to read or watch, some of which may be fake.
What Are Search Engines Doing About The Problem?
Taking Google as the main example, search engines are keen to tell users what they are doing to combat the problem of fake news.
How Google Fights Disinformation – 3 Principles
Back in 2019, when the impact of fake news had been felt both in US elections and in wider society in what had been dubbed a ‘post-truth era’, Google (in its ‘How Google Fights Disinformation’ White Paper) laid out three foundational principles for how it would be tackling the spread of fake news / misinformation in Google Search, Google News, YouTube, and the company’s advertising systems going forward. These are:
1. Make Quality Count. Google says that its “ranking algorithms” treat websites and content creators fairly and evenly, but they also ensure the usefulness of Google’s services, as measured by user testing, and don’t foster the ideological viewpoints of the individuals that build or audit them.
2. Counteract Malicious Actors. Here, Google admits that “Algorithms cannot determine whether a piece of content on current events is true or false, nor can they assess the intent of its creator just by reading what’s on a page”. However, Google’s policies across Google Search, Google News, YouTube, and its advertising products clearly set out what is prohibited, and the company says that it has “invested significant resources” in combatting deliberate ‘spam’ practices designed to deceive and gain greater visibility for content.
3. Give Users More Context. This involves Google users being shown “Knowledge” or “Information” Panels in Google Search and YouTube, providing high-level facts about a person or issue, using labels to show that content has been fact-checked, as well as offering users the chance to see “Breaking News”, “Top News” shelves, and “Developing News” information panels.
Google also says that it has teamed up with outside news experts and dedicated “significant resources” to supporting quality journalism. For example, this includes launching the Google News Initiative (GNI) in 2018, participating in and providing financial support to the Trust Project (http://thetrustproject.org/), partnering with Poynter’s International Fact-Checking Network (IFCN), and supporting the work of researchers who explore the issues of disinformation and trust in journalism.
What About Bing?
Back in 2019, a Stanford Cyber Policy Center report found that Bing’s SERPs contained dubious information more often than Google’s, noting that “Bing returns disinformation and misinformation at a significantly higher rate than Google does”.
Nevertheless, Bing appears to have been tackling fake news / disinformation / misinformation in similar ways to Google. For example, Bing introduced fact-checking labels as far back as 2017.
In April 2020, as part of an announcement about how it was promoting trusted information in response to COVID-19, Microsoft outlined many of the ways that it tackles misinformation generally. For example, Microsoft highlighted how curated resources were being used across Bing, LinkedIn, Microsoft News and Microsoft Advertising, and how Bing could prioritise trusted news sources and could use algorithmic defences against certain types of misinformation.
COVID-19 Medical Misinformation Challenge
The COVID-19 pandemic brought the dangers of fake news into even sharper focus as medical misinformation became a very serious threat. To help counter this, Google announced in 2020 that it was investing $6.5 million in funding global fact-checkers to focus on coronavirus misinformation. Google’s YouTube also introduced a policy to tackle any content that contradicts WHO advice.
Also, in response to health misinformation (COVID-19), Microsoft created COVID-19 information hubs in 53 markets globally, with an experienced team editing content from more than 4,500 of its “trusted” news brands.
Automation and AI
Many people now consider automation and AI to be ‘intelligent’, fast, and scalable enough to start tackling the vast amount of fake news that is being produced and circulated. For example, Google and Microsoft have been using AI to help assess the truthfulness of articles. Also, initiatives like the Fake News Challenge (http://www.fakenewschallenge.org/) seek to explore how AI technologies, particularly machine learning and natural language processing, can be employed to combat fake news, and support the idea that AI holds promise for automating significant parts of the process human fact-checkers use to determine whether a story is real or a hoax.
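To illustrate the kind of automation involved, the sketch below uses a crude word-overlap heuristic to guess whether an article body is even related to a headline, loosely in the spirit of the Fake News Challenge’s stance-detection task. The scoring rule and threshold are invented for illustration; real stance detectors are trained machine-learning models, not hand-written rules like this:

```python
import re

def tokens(text):
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z']+", text.lower()))

def stance(headline, body, threshold=0.3):
    """Label a headline/body pair 'related' or 'unrelated' by the
    fraction of headline words that also appear in the body.
    (Toy heuristic with an arbitrary threshold, for illustration only.)"""
    h, b = tokens(headline), tokens(body)
    overlap = len(h & b) / len(h) if h else 0.0
    return "related" if overlap >= threshold else "unrelated"

print(stance("Mayor opens new bridge",
             "The mayor cut the ribbon on the new bridge on Tuesday."))
# → related
print(stance("Mayor opens new bridge",
             "Stock markets fell sharply after the announcement."))
# → unrelated
```

Filtering out unrelated pairs like this is only the first, easiest step; judging whether a related body actually supports or contradicts a claim is the hard part that human fact-checkers still do best.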
However, the human-written rules underpinning AI, and the way AI is ‘trained’, can also introduce bias. Whilst AI can do many impressive things, it is not yet able to exercise anything like human judgement, which is based on past experience and accumulated knowledge. This means that AI is not yet the single main way to tackle fake news at scale, although it is certainly helping.
Whether or not search engines benefit from fake news content, the problem of its spread goes way beyond search engines. Social media companies are also involved in an ongoing battle to tackle the problem, as are national and global news media outlets of all kinds. Much of the focus of the fake news problem has actually been on social media companies (e.g. Facebook), which have also introduced their own measures to tackle it (e.g. fact-checking and introducing their own curated news). The fact is that tackling fake news requires wide co-operation, collaboration, and initiatives between multiple parties, such as fact-checkers, civil society organisations, researchers, media and tech companies, government agencies and more, to bring about a bigger societal change in the right direction.