Google and Facebook are failing to take action to remove online scam adverts even after fraud victims report them, raising concerns that the reactive approach to fraudulent content taken by online platforms is not fit for purpose, Which? research has revealed.
The consumer champion’s survey found that a third (34%) of victims who reported an advert that led to a scam on Google said the advert was not taken down by the search engine, while a quarter (26%) of victims who reported an advert on Facebook that resulted in them being scammed said the advert was not removed by the social media site.
Which? believes that the significant flaws in the current reactive approach to tackling online scams make a clear case for online platforms to be given legal responsibility for preventing fake and fraudulent adverts from appearing on their sites.
Which? is calling for the government to take the opportunity to include content that leads to online scams in the scope of its proposed Online Safety Bill.
Of those who said they had fallen victim to a scam as a result of an advert on a search engine or social media, a quarter (27%) said they’d fallen for a fraudulent advert they saw on Facebook and one in five (19%) said a scam targeted them through Google adverts. Three per cent said they’d been tricked by an advert on Twitter.
The survey also highlighted low levels of engagement with the scam-reporting processes on online platforms. More than two in five (43%) scam victims conned by an advert they saw online, whether via a search engine or on social media, said they did not report the scam to the platform hosting it.
The most common reason victims gave for not reporting scam adverts to Facebook was that they didn’t think the platform would do anything about it or take the advert down – the response of nearly a third (31%) of victims.
For Google, the main reason for not reporting the scam ad was that the victim didn’t know how to do so – this applied to a third (32%) of victims. This backs up the experience of Which?’s researchers, who similarly found it was not immediately clear how to report fraudulent content to Google, and that when they did, the process involved navigating five complex pages of information.
Worryingly, over half (51%) of the 1,800 search engine users Which? surveyed said they did not know how to report suspicious ads that appear in their search listings, while over a third (35%) of 1,600 social media users said they didn’t know how to report a suspicious advert seen on social media channels.
Another issue identified by victims Which? has spoken to is that even when fake and fraudulent adverts are successfully taken down, they often pop up again under different names.
One scam victim, Stefan Johansson, who lost £30.50, told Which? he had repeatedly reported a scam retailer operating under the names ‘Swanbrooch’ and ‘Omerga’ to Facebook.
He believes the social media site has a ‘scattergun’ approach to removing the ads and says that a week rarely goes by when he doesn’t spot dodgy ads in his newsfeed, posted by what he suspects are unscrupulous companies.
Another victim, Mandy, told Which? she was tricked by a fake Clarks ‘clearance sale’ advert she saw on Facebook. She paid £85 for two pairs of boots, but instead she received a large box containing a pair of cheap sunglasses.
‘I’ve had a lot of back and forth with my bank over the past six months, trying to prove that I didn’t receive what I ordered,’ Mandy said. Facebook has since removed this advert and the advertiser’s account.
The tech giants make significant profits from adverts, including ones that lead to scams. These companies have some of the most sophisticated technology in the world, but the evidence suggests they are failing to use it to prevent scammers from abusing their platforms with fake and fraudulent content posted on an industrial scale to target victims.
The combination of inaction from online platforms when scam ads are reported, low reporting levels by scam victims, and the ease with which advertisers can post new fraudulent adverts even after the original ad has been removed suggests that online platforms need to take a far more proactive approach to prevent fraudulent content from reaching potential victims in the first place.
Consumers should also sign up to Which?’s scam alert service to familiarise themselves with some of the latest tactics used by fraudsters, particularly given the explosion of scams since the coronavirus crisis began.
The consumer champion has also launched a Scam Sharing tool to help it gather evidence in its work to protect consumers from fraud. The tool has received more than 2,500 reports since it went live three weeks ago.
Adam French, Consumer Rights Expert at Which?, said: “Our latest research has exposed significant flaws with the reactive approach taken by tech giants including Google and Facebook in response to the reporting of fraudulent content – leaving victims worryingly exposed to scams.
“Which? has launched a free scam alert service to help consumers familiarise themselves with the latest tactics used by fraudsters, but there is no doubt that tech giants, regulators and the government need to go to greater lengths to prevent scams from flourishing.
“Online platforms must be given a legal responsibility to identify, remove and prevent fake and fraudulent content on their sites. The case for including scams in the Online Safety Bill is overwhelming and the government needs to act now.”
Google responded: “We’re constantly reviewing ads, sites and accounts to ensure they comply with our policies. As a result of our enforcement actions (proactive and reactive), our team blocked or removed over 3.1 billion ads for violating our policies.
“As part of the various ways we are tackling bad ads, we also encourage people to flag bad actors they’re seeing via our support tool, where you can report bad ads directly. It can easily be found on Search when looking for ‘How to report bad ads on Google’ and filling out the necessary information. It is simple for consumers to provide the required information for the Google ads team to act accordingly.
“We take action on potentially bad ads reported to us and these complaints are always manually reviewed.
“We have strict policies that govern the kinds of ads that we allow to run on our platform. We enforce those policies vigorously, and if we find ads that are in violation we remove them. We utilize a mix of automated systems and human review to enforce our policies.”
A spokesperson for Facebook responded: “Fraudulent activity is not allowed on Facebook and we have taken action on a number of pages reported to us by Which?.
“Our 35,000-strong team of safety and security experts work alongside sophisticated AI to proactively identify and remove this content, and we urge people to report any suspicious activity to us. Our teams disable billions of fake accounts every year and we have donated £3 million to Citizens Advice to deliver a UK Scam Action Programme.”
A Twitter spokesperson said: “Where we identify violations of our rules, we take robust enforcement action.
“We’re constantly adapting to bad actors’ evolving methods, and we will continue to iterate and improve upon our policies as the industry evolves.”
To sign up to Which?’s scam alert service, visit www.which.co.uk/scamalerts.