Streaming has changed the way we listen to music. In the UK, more than 80% of recorded music is now listened to via a streaming service rather than using traditional physical media like CDs and vinyl.
Linking the creators making the music and the fans listening to it through a streaming service is a complex network of companies that help make, promote and distribute recorded music.
The Competition and Markets Authority’s (CMA) study will examine the music streaming market, from creator to consumer, paying particular attention to the roles played by record labels and music streaming services.
As part of its assessment of how well the market is working for audiences, the CMA will consider whether innovation is being stifled and if any firms hold excessive power. The CMA’s study will help build a deeper understanding of how firms in the market influence listeners’ choices and experiences.
While focussing on potential harm to consumers, the CMA will also assess whether any lack of competition between music companies could affect the musicians, singers and songwriters whose interests are intertwined with those of music lovers.
If the CMA finds problems, it will consider what action may be necessary.
Andrea Coscelli, Chief Executive of the CMA, said: “Whether you’re into Bowie, Beethoven or Beyoncé, most of us now choose to stream our favourite music.
“A vibrant and competitive music streaming market not only serves the interests of fans and creators but helps support a diverse and dynamic sector, which is of significant cultural and economic value to the UK.
“As we examine this complex market, our thinking and conclusions will be guided by the evidence we receive.”
The CMA has also begun a market study of mobile ecosystems, and in April 2021 it launched the Digital Markets Unit, which is operating in shadow form pending legislation that will provide it with its full powers.
An independent CMA Inquiry Group is also separately investigating Sony’s completed acquisition of ‘artist and label’ services provider AWAL.
The market study takes place in parallel to a wide range of work being done by the UK government in these markets. While the CMA’s work will focus on competition issues, it will maintain a coherent approach with other related work including initiatives being undertaken by the Department for Digital, Culture, Media & Sport, the Intellectual Property Office and the Centre for Data Ethics and Innovation.
Every minute 167 million videos are watched by TikTok users
On YouTube 694,000 hours are streamed each minute – the equivalent of nearly 80 years of footage
On Amazon $283,000 are spent by customers every minute
What can happen in an internet minute? Millions of videos, messages, emails, and texts are uploaded and viewed, and the content consumed adds up to hundreds of thousands of hours in real-time.
Every minute 167 million videos are watched by TikTok users, recent research by advertising specialists N.Rich reveals.
In the study, Statista data was analysed to calculate the public’s engagement with the most popular corners of the internet.
Facebook Live receives 44 million views each minute, while 12 million messages are sent on Apple’s iMessage in the same timeframe.
Each internet minute, 5.7 million searches take place on Google, while 2 million messages are sent via Snapchat over the same period.
Each minute, 694,000 hours are streamed on YouTube – the equivalent of nearly 80 years of footage.
Relative newcomer Discord is responsible for 668,000 messages being sent every minute.
On Twitter, 575,000 tweets are posted during the same timeframe.
Video streaming site Netflix is also popular, with 452,000 hours watched each minute on the website.
Amazon lives up to its powerhouse reputation, with $283,000 spent on the e-commerce site each minute. That’s almost $7 million spent every 24 minutes.
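For readers who want to sanity-check how quickly these per-minute figures scale, here is a minimal back-of-the-envelope sketch (the two values are taken from the study’s numbers above; the variable names are illustrative only):

```python
# Back-of-the-envelope scaling of the per-minute figures quoted above.
per_minute = {
    "YouTube hours streamed": 694_000,
    "Amazon dollars spent": 283_000,
}

MINUTES_PER_DAY = 60 * 24  # 1,440 minutes in a day

for name, value in per_minute.items():
    per_day = value * MINUTES_PER_DAY
    print(f"{name}: {per_day:,} per day")

# Amazon's $283,000 a minute scales to over $400 million a day.
```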
Commenting on the study, a spokesperson for N.Rich said, “With a vast number of people online and advertisers vying for the attention of potential customers, it’s vital that you speak to customers in a way that they hear you and feel heard too.
“You wouldn’t speak to your best friend the same way as your grandmother. That’s why you need to adjust your message for each platform and find the right customers where they are – be it on TikTok, Facebook, or elsewhere.”
One internet minute (platform – amount per minute)
TikTok – 167 million videos watched
Facebook Live – 44 million views received
iMessage (Apple) – 12 million messages sent
Google – 5.7 million searches
Snapchat – 2 million messages sent
YouTube – 694,000 hours streamed
Discord – 668,000 messages sent
Twitter – 575,000 tweets posted
Netflix – 452,000 hours watched
Amazon – $283,000 spent
The study was conducted by N.Rich, which offers a rich array of intent data and ad inventory that enable marketers to drive awareness and lead generation effectively.
With 850 million downloads last year, TikTok is the world’s most popular app
However, four Facebook-owned social platforms appear among the top six, with a combined figure of more than two billion downloads
This accounts for almost half of the Top 10’s global download figures
A new study shows that while TikTok has been revealed as the world’s most popular and downloaded mobile app, social media giant Facebook dominates the market with four of its platforms.
Research conducted by app development company Bacancy Technology outlines that social video-sharing platform TikTok was downloaded a total of 850 million times last year, receiving 250 million more downloads than the second most popular app on the list, WhatsApp.
However, having acquired Instagram in 2012 and WhatsApp in 2014, Facebook takes up four spots in the Top 10 most popular apps of 2020, with a combined total of more than two billion downloads. This figure accounts for 46% of the total download figures within the top 10 list.
Following last year’s increase in remote working, conference call platform Zoom – which ranks fifth on the global list – received 477 million downloads last year, coupled with an increase in revenue of 317% over figures from 2019.
Top 10 Global Apps of 2020 (app – downloads in 2020)
TikTok – 850 million
WhatsApp – 600 million
Facebook – 540 million
Instagram – 503 million
Zoom – 477 million
Messenger – 404 million
Snapchat – 281 million
Telegram – 256 million
Google Meet – 254 million
Netflix – 223 million
Assessing a number of the most popular social platforms, the United States – which has the third-largest population in the world – ranks as the country with the highest active user counts for TikTok, Instagram and Snapchat.
India, with the second-highest population in the world, has the largest number of users of both WhatsApp and Facebook.
Country with the highest user count for each app (raw user figures)
TikTok – United States: 65.9 million users (19% of population)
WhatsApp – India: 390 million users (27%)
Facebook – India: 320 million users (22%)
Instagram – United States: 140 million users (42%)
Snapchat – United States: 108 million users (32%)
However, analysing an app’s number of users as a percentage of each country’s population reveals that TikTok has the highest penetration in the Netherlands with 22% of its population using TikTok. This is followed by the US (19%) and Norway (18%).
Seven in 10 Spanish citizens are actively using WhatsApp, while three quarters of the population of the Philippines have active profiles on Facebook – a figure that trumps both the US (57%) and the UK (55%).
Half of the Turkish population is active on Instagram – the highest usage of any country – while Snapchat is most popular in Saudi Arabia, where 55% of the population holds an active account.
Naturally, these figures do not factor in fake accounts or users with more than one social media account, but they give an interesting indication of each app’s global popularity.
Countries with the highest user counts for each app (as a percentage of population)
TikTok – Netherlands: 3.8 million users of 17,178,419 (22%); US: 65.9 million of 333,230,770 (19%); Norway: 1 million of 5,469,887 (18.2%)
WhatsApp – Spain: 33 million of 46,775,584 (70%); Italy: 35 million of 60,359,657 (58%); Germany: 48 million of 84,091,604 (57%)
Facebook – Philippines: 83 million of 111,249,116 (74%); Thailand: 51 million of 70,000,662 (72%); Mexico: 93 million of 130,486,512 (71%)
Instagram – Turkey: 44 million of 85,379,961 (51%); Brazil: 95 million of 214,289,417 (44%); US: 140 million of 333,230,770 (42%)
Snapchat – Saudi Arabia: 19.6 million of 35,433,662 (55%); France: 24.5 million of 65,439,407 (37%); United States: 108 million of 333,230,770 (32%)
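The penetration percentages in the table are simply active users divided by population. A minimal sketch reproducing two rows (all figures are taken from the table above):

```python
# Penetration = active users / population, for two rows of the table above.
rows = [
    ("TikTok", "Netherlands", 3_800_000, 17_178_419),
    ("Snapchat", "Saudi Arabia", 19_600_000, 35_433_662),
]

for app, country, users, population in rows:
    pct = users / population * 100
    print(f"{app} in {country}: {pct:.0f}% of the population")
# → 22% and 55%, matching the table.
```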
TikTok triumphs once again as the world’s highest-grossing app, with reported revenue of 540 million USD last year. Despite lockdown restrictions limiting face-to-face contact, online dating platform Tinder lands in second place with 513 million USD, and streaming video giant YouTube rounds off the top three, snatching up 478 million USD.
Highest-grossing apps of 2020 (global; revenue in millions USD)
TikTok – 540
Tinder – 513
YouTube – 478
Disney – 314
Tencent Video – 300
Piccoma – 289
Line Manga – 249
iQIYI – 240
Netflix – 209
Commenting on the findings, a spokesperson for Bacancy Technology said: “Lockdown restrictions and the ‘stay at home’ mantra of 2020 caused many of us to turn to the internet and various apps for entertainment and to indulge in some level of human interaction.
“TikTok’s seemingly endless library of entertainment has clearly captured the attention of millions of people, while the staple social media apps continue to be an integral part of our daily lives.”
This research was conducted by app development company Bacancy Technology, an exclusive hub of top software developers, UI/UX designers, QA experts and more, offering development services aimed at the creation of high-end, enviable applications.
With just three weeks to go until the much-anticipated St James Quarter phase one opening, the new fashion district is celebrating the launch of its official social channels – by framing the perfect selfie spots to capture the city’s brand-new skyline!
Four giant picture frames can be found at some of Edinburgh’s most Instagrammable locations, including Calton Hill, St Andrew Square, The Mound and Neighbourgood Market in Stockbridge, for followers to capture their own selfies and win £150 vouchers – #nofilter needed.
To enter, step into the frame, share your pic, and tag and follow St James Quarter on Instagram and Facebook. The competition runs until Sunday 6 June, with entries officially closing at 9pm on Monday 7 June.
Google and Facebook are failing to take action to remove online scam adverts even after fraud victims report them, raising concerns that the reactive approach to fraudulent content taken by online platforms is not fit for purpose, Which? research has revealed.
The consumer champion’s survey found that a third (34%) of victims who reported an advert that led to a scam on Google said the advert was not taken down by the search engine, while a quarter (26%) of victims who reported an advert on Facebook that resulted in them being scammed said the advert was not removed by the social media site.
Which? believes that the significant flaws in the current reactive approach to tackling online scams make a clear case for online platforms to be given legal responsibility for preventing fake and fraudulent adverts from appearing on their sites.
Which? is calling for the government to take the opportunity to include content that leads to online scams in the scope of its proposed Online Safety Bill.
Of those who said they had fallen victim to a scam as a result of an advert on a search engine or social media, over a quarter (27%) said they’d fallen for a fraudulent advert they saw on Facebook and one in five (19%) said a scam targeted them through Google adverts. Three per cent said they’d been tricked by an advert on Twitter.
The survey also highlighted low levels of engagement with the scam reporting processes on online platforms. More than two in five (43%) scam victims conned by an advert they saw online, via a search engine or social media ad, said they did not report the scam to the platform hosting it.
The biggest reason for not reporting adverts that caused a scam to Facebook was that victims didn’t think the platform would do anything about it or take it down – this was the response from nearly a third (31%) of victims.
For Google, the main reason for not reporting the scam ad was that the victim didn’t know how to do so – this applied to a third (32%) of victims. This backs up the experience of Which?’s researchers who similarly found it was not immediately clear how to report fraudulent content to Google, and when they did it involved navigating five complex pages of information.
Worryingly, over half (51%) of the 1,800 search engine users Which? surveyed said they did not know how to report suspicious ads that appear in their search listings, while over a third (35%) of 1,600 social media users said they didn’t know how to report a suspicious advert seen on social media channels.
Another issue identified by victims that Which? has spoken to is that even if fake and fraudulent adverts are successfully taken down they often pop up again under different names.
One scam victim, Stefan Johansson, who lost £30.50, told Which? he had repeatedly reported a scam retailer operating under the names ‘Swanbrooch’ and ‘Omerga’ to Facebook.
He believes the social media site has a ‘scattergun’ approach to removing the ads and says that a week rarely goes by when he doesn’t spot dodgy ads in his newsfeed, posted by what he suspects are unscrupulous companies.
Another victim, Mandy, told Which? she was tricked by a fake Clarks ‘clearance sale’ advert she saw on Facebook. She paid £85 for two pairs of boots, but instead she received a large box containing a pair of cheap sunglasses.
‘I’ve had a lot of back and forth with my bank over the past six months, trying to prove that I didn’t receive what I ordered,’ Mandy said. Facebook has since removed this advert and the advertiser’s account.
The tech giants make significant profits from adverts, including ones that lead to scams. These companies have some of the most sophisticated technology in the world but the evidence suggests they are failing to use it to prevent scammers from abusing the platforms by using fake and fraudulent content on an industrial scale to target victims.
The combination of inaction from online platforms when scam ads are reported, low reporting levels by scam victims, and the ease with which advertisers can post new fraudulent adverts even after the original ad has been removed, suggests that online platforms need to take a far more proactive approach to prevent fraudulent content from reaching potential victims in the first place.
Consumers should also sign up to Which?’s scam alert service in order to familiarise themselves with some of the latest tactics used by fraudsters, particularly given the explosion of scams since the coronavirus crisis.
The consumer champion has also launched a Scam Sharing tool to help it gather evidence in its work to protect consumers from fraud. The tool has received more than 2,500 reports since it went live three weeks ago.
Adam French, Consumer Rights Expert at Which?, said: “Our latest research has exposed significant flaws with the reactive approach taken by tech giants including Google and Facebook in response to the reporting of fraudulent content – leaving victims worryingly exposed to scams.
“Which? has launched a free scam alert service to help consumers familiarise themselves with the latest tactics used by fraudsters, but there is no doubt that tech giants, regulators and the government need to go to greater lengths to prevent scams from flourishing.
“Online platforms must be given a legal responsibility to identify, remove and prevent fake and fraudulent content on their sites. The case for including scams in the Online Safety Bill is overwhelming and the government needs to act now.”
Google responded: “We’re constantly reviewing ads, sites and accounts to ensure they comply with our policies. As a result of our enforcement actions (proactive and reactive), our team blocked or removed over 3.1 billion ads for violating our policies.
“As part of the various ways we are tackling bad ads, we also encourage people to flag bad actors they’re seeing via our support tool where you can report bad ads directly. It can easily be found on Search when looking for “How to report bad ads on Google” and filling out the necessary information. It is simple for consumers to provide the required information for the Google ads team to act accordingly.
“We take action on potentially bad ads reported to us and these complaints are always manually reviewed.”
“We have strict policies that govern the kinds of ads that we allow to run on our platform. We enforce those policies vigorously, and if we find ads that are in violation we remove them. We utilize a mix of automated systems and human review to enforce our policies.”
A spokesperson for Facebook responded: “Fraudulent activity is not allowed on Facebook and we have taken action on a number of pages reported to us by Which?.
“Our 35,000 strong team of safety and security experts work alongside sophisticated AI to proactively identify and remove this content, and we urge people to report any suspicious activity to us. Our teams disable billions of fake accounts every year and we have donated £3 million to Citizens Advice to deliver a UK Scam Action Programme.”
A Twitter spokesperson said: “Where we identify violations of our rules, we take robust enforcement action.
“We’re constantly adapting to bad actors’ evolving methods, and we will continue to iterate and improve upon our policies as the industry evolves.”
Facebook to make it harder for people to find groups and profiles that buy and sell fake reviews
16,000 trading groups removed with suspensions or bans for users who create these groups
It comes after a CMA investigation found more evidence of misleading content
This latest action by the Competition and Markets Authority (CMA) follows reports that fake and misleading reviews continued to be bought and sold on the social media platforms.
In January 2020, Facebook committed to better identify, investigate and remove groups and other pages where fake and misleading reviews were being traded, and prevent them from reappearing.
Facebook gave a similar pledge in relation to its Instagram business in May 2020, after the CMA had identified similar concerns.
A follow-up investigation found evidence that the illegal trade in fake reviews was still taking place on both Facebook and Instagram and the CMA intervened for a second time.
Facebook has now removed a further 16,000 groups that were dealing in fake and misleading reviews. It has also made further changes to its systems for identifying, removing and preventing such content on its social media platforms to ensure it is fulfilling its previous commitments.
These include:
suspending or banning users who are repeatedly creating Facebook groups and Instagram profiles that promote, encourage or facilitate fake and misleading reviews
introducing new automated processes that will improve the detection and removal of this content
making it harder for people to use Facebook’s search tools to find fake and misleading review groups and profiles on Facebook and Instagram
putting in place dedicated processes to make sure that these changes continue to work effectively and stop the problems from reappearing
Andrea Coscelli, Chief Executive of the CMA, said: “Never before has online shopping been so important. The pandemic has meant that more and more people are buying online, and millions of us read reviews to enable us to make informed choices when we shop around.
“That’s why fake and misleading reviews are so damaging – if people lose trust in online reviews, they are less able to shop around with confidence, and will miss out on the best deals. It also means that businesses playing by the rules miss out.
“Facebook has a duty to do all it can to stop the trading of such content on its platforms. After we intervened again, the company made significant changes – but it is disappointing it has taken them over a year to fix these issues.
“We will continue to keep a close eye on Facebook, including its Instagram business. Should we find it is failing to honour its commitments, we will not hesitate to take further action.”
This move follows the UK Government’s announcement that a dedicated Digital Markets Unit (DMU) will be set up within the CMA from April 2021.
Once the necessary legislation is in place, this will introduce and enforce a new code for governing the behaviour of platforms that currently dominate the market. As part of this process, the CMA has been advising government on the design and implementation of a pro-competition regime for digital markets.
Rocio Concha, Director of Policy and Advocacy at Which?, said: “We’ve previously raised the alarm about fake review factories continuing to operate at scale on Facebook, leaving online shoppers at huge risk of being misled. The tech giant failed to meet its earlier commitment to the CMA, so it is positive that the regulator has stepped in and demanded more robust action.
“Facebook must deliver this time round – it has shown it has the sophisticated technology to eradicate these misleading review groups and needs to do so much more swiftly and effectively.
“The CMA and Facebook now need to monitor the situation and if the problems persist the regulator must take stronger measures to ensure that trust in online reviews does not continue to be undermined.
“Online platforms should also have greater legal responsibility for tackling fake and fraudulent content and activity on their sites.”
The charity is hosting its second virtual ‘Speak Out. Stay Safe’ assembly on Tuesday morning at 10am to help keep children safe and well during the pandemic
Parents and children are being encouraged to join the assembly that will be held on the NSPCC Facebook page
The first virtual assembly that aired online in June has had more than 100,000 views
Amid growing concerns about the impact of COVID-19 on children’s safety, mental health and wellbeing, the NSPCC is holding a second virtual ‘Speak Out. Stay Safe’ assembly on Facebook tomorrow (Tuesday February 23rd) at 10am.
The special broadcast suitable for children aged five and over aims to help them understand how to speak to a trusted adult if they feel anxious or worried, and it explains the support that Childline can offer.
It will also focus on some additional concerns that some children are experiencing due to the pandemic.
The assembly will see the return of guest hosts Ant & Dec and features an appearance from comedian David Walliams. The TV duo, long-term supporters of the children’s charity, hosted the first online assembly in June last year, which received over 100,000 views on Facebook and YouTube.
A recording of Tuesday’s online assembly will also be available on the NSPCC’s website, Facebook and YouTube channel after the event for anyone who misses it.
One Facebook user who watched the first assembly said: “My boys loved it and following a family bereavement this gave them an opportunity to talk about their feelings. We took blank paper and marked it for every worry or fear we had and shared.”
With many vulnerable children still facing increased risks at home, and others struggling with their mental health due to the challenges posed by the pandemic, it’s vital that children know what to do and who to speak to if something is worrying or upsetting them.
The coronavirus-related worries the assembly will cover include children not being able to see their family and friends, changes in daily routines, experiencing new feelings and spending more time online.
Guest hosts Ant & Dec said:
Ant: “After what has been an incredibly difficult start to the year for many young people, we feel privileged to once again be hosting the NSPCC’s virtual assembly for children and their families.”
Dec: “We hope we can remind children that they don’t have to just carry their worries with them – they can always speak to someone they trust if they’re feeling sad, overwhelmed, or unsafe.”
The NSPCC’s Head of School Service, Janet Hinton, said: “The lockdown has turned the lives of children upside down and many are struggling to cope with the challenges it has posed.
“Although our trained ‘Speak Out. Stay Safe’ staff can’t currently go into schools, it is essential that every child knows who they can turn to if they need help and support.
“After watching the assembly, parents and carers can continue this conversation with their children by visiting the NSPCC website where they can find additional activities.”
Prior to the pandemic, ‘Speak Out. Stay Safe’ had been delivered in 96% of primary schools across Scotland, with trained NSPCC volunteers and staff delivering the assembly and workshop with the help of the ‘Speak Out. Stay Safe’ mascot, Buddy the speech bubble.
The importance of empowering children to understand that they have the right to speak out and stay safe was highlighted in a recent court case, which saw 63-year-old Sidney Sales from Luton jailed for three years after a girl who had seen the NSPCC assembly at school spoke out about the abuse she had suffered.
Adults concerned about a child can contact the NSPCC helpline seven days a week on 0808 800 5000, or email help@nspcc.org.uk.
YummiCommunity to support parents through lockdown
Scottish eco-friendly toy company, Yummikeys, has launched an online Facebook community to help parents across the UK virtually unite during the pandemic and beyond.
YummiCommunity, an all-inclusive hub which will allow users to share stories, tips, tricks and woes, aims to make times a little less lonely during the third country-wide lockdown.
Hot-button topics related to parenthood, tips for getting through lockdown and activities to keep children occupied at home will be mixed with light-hearted parenting humour, hubs for sharing views, questions and worries, and the most up-to-date Yummikeys news and launches.
With a combined social following of almost 30,000 – the majority of them Facebook followers – Yummikeys founder Elspeth Fawcett decided this was the best way to connect parents like her who may feel they need a little extra support:
“The YummiCommunity is basically a parent and baby club, but virtual, allowing you to ask for advice, share your stories and have your mini parenting breakdowns (surprise, we all have them), in an environment that is all inclusive and completely non-judgemental.
“On a daily basis I get the loveliest messages, emails and comments saying how a set of Yummikeys has cooled sore gums, or a YummiNecklace has transformed feeding time, but I also have people saying they wish they had known about us, as well as the tips and tricks they get from our current socials, sooner.
“The idea of the YummiCommunity has been in my mind for a while, but when we went into lockdown 3, I knew I needed to make it happen. I want this platform to be a resource for new and seasoned parents and carers alike, as well as somewhere to ‘go’ when you need a little extra guidance.”
Lockdown has seen Yummikeys’ best sales months to date, with new parents eager to soothe babies at a time when support from family and friends is not readily available. The YummiCommunity aims to offer additional encouragement to those parents as well.
The brand’s best-selling Yummikeys, Yummirings and YummiNecklace were joined in 2020 by the East Lothian company’s new Ultrasound Necklace – a personalised piece of jewellery for mums that sees their ultrasound etched into a disc.
Platforms endorse the principle that no company should be profiting from COVID-19 vaccine mis/disinformation and commit to swifter responses to flagged content
Platforms will step up work with public health bodies to promote factual and reliable messages
Digital Secretary Oliver Dowden and Health Secretary Matt Hancock have agreed with social media platforms new measures to limit the spread of vaccine misinformation and disinformation and help people find the information they need about any COVID-19 vaccine.
At a virtual roundtable to address the growth of vaccine disinformation, Facebook, Twitter and Google committed to the principle that no company should profit from or promote COVID-19 anti-vaccine disinformation, to respond to flagged content more swiftly, and to work with authorities to promote scientifically accurate messages.
As the UK moves closer to developing a safe and effective COVID-19 vaccine, Mr Dowden and Mr Hancock used the roundtable to welcome the progress these companies have made in strengthening their policies towards false coronavirus information and helping publicise the steps people should take to prevent the spread of the virus.
But the ministers raised concerns about the length of time misleading and false information about coronavirus vaccines remains on platforms, and called for swifter action to tackle such content.
Together the platforms have now agreed:
To commit to the principle that no user or company should directly profit from COVID-19 vaccine mis/disinformation. This removes an incentive for this type of content to be promoted, produced and circulated.
To ensure a timely response to mis/disinformation content flagged to them by the government.
To continue to work with public health bodies to ensure that authoritative messages about vaccine safety reach as many people as possible.
To join new policy forums over the coming months to improve responses to mis/disinformation and to prepare for future threats.
The forums will see the government, social media platforms, public health bodies and academia increase their cooperation and ongoing information sharing to deliver a better understanding of the evolving threat caused by false COVID-19 vaccine narratives.
Digital Secretary Oliver Dowden said: “Covid disinformation is dangerous and could cost lives. While social media companies are taking steps to stop it spreading on their platforms there is much more that can be done.
“So I welcome this new commitment from social media giants not to profit from or promote flagged anti-vax content, given that making money from this dangerous content would be wrong.”
Health Secretary Matt Hancock said: “After clean water, vaccination is the most effective public health intervention in the world and has saved countless lives across the globe, eradicating one disease entirely.
“I am encouraged that social media companies have agreed to do more to prevent the spread of dangerous misinformation and disinformation on their platforms.
“We want users to have greater access to reliable and scientifically-accurate information on vaccines from trusted sources like the NHS so they can make informed decisions to protect themselves and their loved ones.”
Vaccines are overwhelmingly safe and effective healthcare solutions. Ministers used the meeting, which also included representatives from fact-checking charities, academics and data experts, to highlight that robust action must be taken against misleading messaging and content online which could harm and discourage people from protecting themselves or their loved ones.
Throughout the pandemic the government’s Counter Disinformation Unit has been developing a picture of the extent, scope and reach of disinformation and working with online platforms to ensure appropriate action is taken.
The unit has observed a range of false narratives about coronavirus vaccines across multiple platforms, including widespread misuse of scientific findings and baseless claims challenging the safety of vaccines or plans for their deployment.
Ronan Harris, Google UK Managing Director, said: “Since the beginning of the COVID-19 epidemic, we have worked relentlessly to promote authoritative content from the NHS and to fight misinformation.
“In the last few months, we have continued to update our policies to make sure that content contradicting scientific consensus about the virus is swiftly removed and demonetised.
“Today, we are redoubling our commitment to take effective action against covid vaccine misinformation and to continue to work with partners across Government and industry to make sure people in the UK have easy access to helpful and accurate information.”
Katy Minshall, Head of UK Public Policy, Twitter UK, said: “We are focused on protecting the public conversation and helping people find authoritative information on Twitter – in May 2019, we launched a search prompt that serves people with credible vaccine information from the NHS.
“In January this year, we launched a dedicated COVID-19 search prompt, ensuring that when people come to the service for information, they’re met with authoritative public health information first. To date, over 160 million people have visited the Twitter COVID-19 curated page over two billion times.
“Since introducing COVID misinformation policies in March, and as we’ve doubled down on tech, our automated systems have challenged millions of accounts which were targeting discussions around COVID-19 with spammy or manipulative behaviours.
“We remain committed to combating misinformation about COVID-19, and continue to take action on accounts that violate our Rules. We look forward to continued collaboration with government and industry partners in our work towards improving the health of the public conversation.”
Rebecca Stimson, Facebook’s Head of UK Public Policy, said: “We’re working closely with governments and health authorities to stop harmful misinformation from spreading on our platforms.
“Ads that include vaccine hoaxes or discourage people from getting a vaccine are banned, we remove harmful misinformation about Covid-19 and put warning labels over posts marked as false by third party fact checkers.
“We’re also connecting people to accurate information about vaccines and Covid-19 whenever they search for these topics. In the first months of the pandemic we directed more than 3.5 million visits to official advice from the NHS and UK government and we’re pleased to continue to support public health efforts.”
Social media users are seriously underestimating their chances of falling victim to online fraud and suffering devastating emotional and financial consequences because tech giants are not doing enough to warn and protect them, Which? is warning.
The consumer champion’s latest research using an online community of Facebook users showed that a majority were lulled into a false sense of security by the platform’s social nature. They mistakenly assumed they could spot fraud and that the company’s systems would protect them effectively.
However, Which? found a third of participants did not know that fake products might be advertised on the site – putting them at risk of falling victim to purchase scams. A quarter did not spot an investment scam advert with a fake endorsement from a celebrity.
If this were to be replicated across Facebook’s user base of 44 million Britons, huge numbers of users could potentially be at risk from fraudsters who lure in victims with fake accounts, posts and paid-for ads on the site.
The financial consequences for those tricked by these fraudsters as well as those who post scam adverts on websites and search engines like Google can be devastating.
Which? has heard from many victims of these types of scams – including a man who lost almost £100,000 after clicking on an online investment advert featuring fake endorsements from MoneySavingExpert’s Martin Lewis and Deborah Meaden from BBC show Dragons’ Den.
The emotional consequences are equally serious. Scam victims told Which? that it had shaken their confidence in themselves and their ability to trust other people. A woman who lost £30,000 to an investment scam which featured prominently on Google said she still feels shame and despair 15 months on from her ordeal, adding: “It breaks you as a person.”
Which? is calling on the Department for Digital, Culture, Media and Sport (DCMS) to act now and include online scams in the upcoming Online Harms Bill so that consumers are protected from this huge and growing problem.
Which? carried out in-depth research with an online community of Facebook users over 10 days, and also conducted a nationally representative online survey including 1,700 Facebook users, as part of its new policy report ‘Connecting the world to fraudsters? Protecting social media users from scams’.
The research found that older social media users are often more concerned about scams, and perceived as being at greater risk by their fellow users. But the findings suggested that younger people may actually be more susceptible to scams as they are more persuadable and more likely to take risks, such as taking part in online shopping and quizzes used by some fraudsters.
Knowledge among users of what Facebook does to protect people from becoming a victim of a scam was low, although users assumed Facebook did have systems and processes in place. However, when details of Facebook’s actual systems and processes were explained, users were sceptical about their effectiveness and questioned whether they are sufficient.
Just three in 10 (30%) respondents to Which?’s online survey of Facebook users said they were aware of the scam ad reporting tool introduced by the site in 2019. Only a third of these (10% overall) said they had used the tool themselves.
Which?’s research was conducted with a focus on Facebook due to its size and influence in the social media landscape. However, the consumer champion believes that the findings and implications of this research can be reasonably extended to apply to other similar social networking sites and online platforms.
The amount of money lost to fraud every year is huge. In the year to June 2020, Action Fraud received 822,276 fraud reports, and the value of losses from reported incidents was £2.3 billion. Action Fraud estimates that 85 per cent of all fraud in the year to June 2020 was carried out digitally.
Which? spoke to one man, retired and in his seventies, who lost almost £100,000 to a Bitcoin scam, which started in February 2019, by a company called Fibonetix. He had seen an online advert which had fake endorsements from celebrities including MoneySavingExpert’s Martin Lewis and businesswoman Deborah Meaden.
The man, who preferred to remain anonymous, told Which?: “Being scammed in this way was utterly devastating. I think about it virtually every day and it’s really affected my confidence, my ability to make decisions and has ultimately changed the person that I am. Fortunately I have been able to get through it with the support of my family.”
Another victim, a sound engineer in her forties, was searching for investment advice on Google and ended up submitting her contact details to a firm that seemed legitimate. After receiving a phone call a few days later, she fell victim to an incredibly sophisticated scam, which took place over several weeks and cost her £30,000. Her case is currently being investigated by the Financial Ombudsman Service.
She says the experience has impacted her mental and physical health and that “it’s been really traumatic. At the time it felt like no one cared or wanted to discuss my case with me. It breaks you as a human being and leaves you scared of the outside world.”
Despite it happening 15 months ago she says: “It’s still hard to trust yourself and others. Often people think these things only happen to older people and it takes a long time to not feel like an idiot. There’s a lot of shame and despair which hasn’t gone away and I’m still awaiting closure to this day.”
Which? is calling for online platforms, including social media sites, to be given greater responsibility to prevent scam content appearing on their platforms.
The government has a perfect opportunity to deliver this in the upcoming Online Harms Bill. If it does not, ministers must set out their proposals for further legislative action to effectively protect consumers from online scams.
Rocio Concha, Director of Policy and Advocacy at Which?, said: “The financial and emotional toll of scams can be devastating and it is clear that social media firms such as Facebook are failing to step up and properly protect users from fraudsters on their sites.
“The time for serious action on online scams is now. If the government doesn’t grasp the opportunity to deliver this in the upcoming online harms bill, it must urgently come forward with new proposals to stem the growing tide of sophisticated scams by criminals online.”