Social media sites rife with scam car insurance ‘ghost brokers’, says Which?

Social media sites are rife with dodgy companies offering car insurance that is either non-existent or missing key details, potentially leaving tens of thousands of drivers uninsured on the roads, Which? research has found.

‘Ghost broking’ is a scam that cost its average victim £1,950 last year. It involves ‘brokers’ forging insurance paperwork outright or, more commonly, selling victims a ‘real’ policy at a reduced price by changing some of the victim’s details in the application, such as their address or claims record. It leaves those affected potentially liable for fraud and at risk of penalties for driving uninsured.

Ghost brokers mainly operate online, particularly on social media. In May, Which? searched on social media platforms for profiles and pages that showed signs of being run by scammers.

Which? analysed the first 50 pages returned from a search for ‘cheap car insurance’ on Facebook, Instagram and TikTok. Of the 47 profiles returned by the search on Instagram, more than half (25) appeared to be offering quotes or cover to UK drivers while showing no signs of being authorised by the Financial Conduct Authority (FCA).

In a separate search, Which? found one Instagram profile that boasted it could save customers ‘up to 50%’ on their premium – it also offered ‘NCB (no-claims bonus) Documents’ and ‘Speeding Ticket Removal’. It had 45,900 followers – more than the five biggest insurers combined – and claimed to have ‘over six years experience in [its] field’. It also had a sister profile with an additional 15,200 followers. Which? flagged these to Instagram, and both have since been taken down.

On Facebook, seven of the 50 profiles analysed were dubious. On video-sharing site TikTok, two of the 50 were suspect.

Experts from the police and the insurance industry who spoke to Which? broadly agreed that ghost brokers operate most prolifically on Facebook and Instagram.

According to the Insurance Fraud Bureau, last year insurers collectively reported more than 21,000 policies that could be connected to the scam.

Some victims will not report being scammed because they are too embarrassed. Others might be aware their quotes have been manipulated, but ghost brokers can be persuasive in downplaying the significance of this.

Some ghost brokers also put real effort into creating a positive word-of-mouth buzz, which helps them seem trustworthy.

Some 517 cases of ghost broking – with losses totalling £1 million – were reported to Action Fraud in 2021. However, this figure only captures people who made a report to Action Fraud and actually knew they had bought a fraudulent policy; the true numbers are likely to be much higher.

Many of these losses, unsurprisingly, were from young drivers, who face the steepest premiums. Ghost brokers also heavily target non-native English speakers.

People who have not even bought a policy can also be affected by the scam through having their address or other details used as part of forged insurance paperwork.

To test how social media platforms are vetting unregulated insurance middlemen, Which? set up six accounts of its own on Facebook, Instagram and TikTok, claiming to be car insurance brokers.

Which? promised cheap quotes and asked interested drivers to get in touch via a mobile phone number or by direct message on the platform.

The two profiles Which? set up on Facebook were taken down by the site within a few days, as was an Instagram profile linked to an email address containing the word ‘ghostbrokerscammer’. However, a second Instagram profile, connected to a less conspicuous email with a ‘normal’ name (e.g. ‘johnsmith’), stayed up for 35 days until Which? took it down.

The two TikTok profiles, one linked to a ‘ghostbrokerscammer’ email, also stayed up for the same period.

Which? believes social media companies should have stronger processes in place to protect consumers from fraudulent pages offering financial services.

When the Online Safety Bill comes into force, platforms should be required to prevent this kind of activity. To ensure this is the case, Which? is calling on the government to amend the Bill to ensure its definition of fraud does not allow some scammers to slip through the net and to guarantee that Ofcom has appropriate powers to adequately enforce the Bill when it becomes law.

Meanwhile, consumers should be wary of insurance brokers selling their services on social media and should carry out basic background checks to ensure they are not buying a fraudulent or misleading insurance policy – and are dealing with a company that is actually authorised by the FCA.

Jenny Ross, Which? Money Editor, said: “Ghost broking is a really nasty kind of fraud, where scammers operate by stealth and typically take advantage of those who feel locked out of, or bewildered by, the car insurance market.

“Social media sites must do much more to crack down on car insurance scammers that are infiltrating their sites and harming consumers, and should address these problems now, ahead of the Online Safety Bill becoming law.

“The Online Safety Bill should require platforms to tackle this type of fraudulent content. The government must ensure this happens by amending the Bill so that its definition of fraud does not allow some scammers to slip through the net and guaranteeing Ofcom is ready to enforce these new laws when they come into force.”

Online Safety Bill: second reading at Westminster this week

TOUGH new internet laws to protect young people, uphold free speech and make sure there are no safe spaces for criminals online return to Parliament for their second reading this week.

  • Online safety legislation protecting children will be debated in the Commons
  • Comes as new plans to support vulnerable people and fight falsities online are launched
  • Funding boost will help improve people’s critical thinking online through a new expert Media Literacy Taskforce, alongside proposals to pay for training for teachers and library workers

MPs will debate the government’s groundbreaking Online Safety Bill, which requires social media platforms, search engines and other apps and websites that allow people to post content to improve the way they protect their users.

Ofcom, the regulator, will have the power to fine companies failing to comply with the laws up to ten per cent of their annual global turnover, force them to improve their practices and block non-compliant sites. Crucially, the laws have strong measures to safeguard children from harmful content such as pornography and child sexual abuse.

Ahead of Tuesday’s debate, the government is launching the next phase of its Online Media Literacy Strategy. It aims to help vulnerable and ‘hard-to-reach’ people, such as those who are digitally excluded or from lower socio-economic backgrounds, navigate the internet safely and teach them to spot falsities online. 

The Department for Digital, Culture, Media and Sport (DCMS) will spend £2.5 million to advance the plan over the next year, including on training, research and expert advice.

This includes a new Media Literacy Taskforce featuring experts from a range of disciplines and a boost to the Media Literacy Fund, which gives teachers and local service providers the skills they need to teach people to improve their critical thinking about what they see online.

Digital Secretary Nadine Dorries said: “The time has come to properly protect people online and this week MPs will debate the most important legislation in the internet age.

“Our groundbreaking Online Safety Bill will make the UK the safest place to surf the web. It has been significantly strengthened following a lengthy period of engagement with people in politics, wider society and industry.

“We want to arm everyone with the skills to navigate the internet safely, so today we’re also announcing a funding boost and plans for experts to join forces with the government to help people spot dodgy information online.”

Thinking critically online has never been more important. There was a rise in misinformation and disinformation on social media and other online platforms during the global pandemic and the Kremlin continues to use disinformation to target UK and international audiences to justify its actions in Ukraine.

Ofcom research shows adults are often overconfident in their ability to detect disinformation and only 32 per cent of children aged 12 to 17 know how to use online flagging or reporting functions.

Forty per cent of adult internet users do not have the skills to assess online content critically and children up to the age of 15 are particularly vulnerable.

A new Media Literacy Taskforce with 18 experts from a range of relevant organisations, including Meta, TikTok, Google, Twitter, Ofcom and the Telegraph as well as universities and charities, will work with the government as part of its strategy to tackle disinformation and help hard-to-reach and vulnerable groups in society think about what they see on the web, including improving their ability to protect their data and privacy.

The taskforce will look at new ways to identify and reach people most in need of education. This could include working through local authorities or coordinating support offered by local services to roll out training.

The Media Literacy Fund will expand a pilot ‘Train the Trainer’ programme which ran last year to give teachers, library workers and youth workers more skills to help boost people’s critical thinking skills.

New research will be commissioned to understand the root causes of poor media literacy and the effectiveness of different methods which aim to build people’s resilience to misinformation and disinformation.

The fund will have a broader scope including a focus on improving media literacy provision for people who are particularly vulnerable online – such as children or people suffering with mental health issues.

Since it launched in July 2021, the Online Media Literacy Strategy has provided £256,000 in grant funding to five organisations to adapt media literacy resources for teachers working with disabled children, run a successful awareness campaign to promote Safer Internet Day and empower LGBTQ+ young people with tools to deal with online abuse.

Nick Poole, Chief Executive of the Chartered Institute of Library and Information Professionals (CILIP) said: “Media literacy is the key to helping people lead healthier, happier and safer lives online, particularly the most vulnerable and hardest-to-reach in our society.

“As a member of the DCMS Media Literacy Taskforce, I welcome the breadth and ambition of this new Action Plan, which demonstrates the government’s commitment to this important agenda. As librarians and information professionals, we look forward to playing our part in bringing it to fruition.”

Will Gardner OBE, CEO of Childnet International and a Director of the UK Safer Internet Centre, said: “Media literacy is a core part of Childnet’s work with children, young people, parents and carers, and we fully support the Media Literacy focus and work of the DCMS. This work has never been as important as it is now.

“There is a great deal of work being done in this space in the UK. The government is playing an important role in helping to identify where there are gaps and where focus or learning is needed, and then supporting responses to that.

“As part of the UK Safer Internet Centre, in February 2022 we worked closely with the DCMS in helping to promote the Safer Internet Day campaign to LGBTQ+ young people. We fully support the continued focus of the Action Plan, including ensuring that ‘hard-to-reach’ groups are supported as well as those who are particularly vulnerable online.”

Safer Internet Day: Digital Minister announces greater protections for children from online pornography

  • Online Safety Bill will force pornography websites to prevent underage access including by using age verification technologies
  • New measure goes further than the bill’s existing protections by bringing all websites offering pornography online into scope

Children will be better protected from online pornography under new measures to bring all websites that display it into scope of the government’s pioneering new internet safety laws.

On Safer Internet Day, Digital Minister Chris Philp is announcing the Online Safety Bill will be significantly strengthened with a new legal duty requiring all sites that publish pornography to put robust checks in place to ensure their users are 18 years old or over.

This could include adults using secure age verification technology to verify that they possess a credit card and are over 18 or having a third-party service confirm their age against government data.

If sites fail to act, the independent regulator Ofcom will be able to fine them up to 10 per cent of their annual worldwide turnover or block them from being accessible in the UK. Bosses of these websites could also be held criminally liable if they fail to cooperate with Ofcom.

A large amount of pornography is available online with little or no protections to ensure that those accessing it are old enough to do so. There are widespread concerns this is impacting the way young people understand healthy relationships, sex and consent. Half of parents worry that online pornography is giving their kids an unrealistic view of sex and more than half of mums fear it gives their kids a poor portrayal of women.

Age verification controls are one of the technologies websites may use to prove to Ofcom that they can fulfil their duty of care and prevent children accessing pornography.

Digital Minister Chris Philp said: “It is too easy for children to access pornography online. Parents deserve peace of mind that their children are protected online from seeing things no child should see.

“We are now strengthening the Online Safety Bill so it applies to all porn sites to ensure we achieve our aim of making the internet a safer place for children.”

Many sites where children are likely to be exposed to pornography are already in scope of the draft Online Safety Bill, including the most popular pornography sites as well as social media, video-sharing platforms and search engines. But as drafted, only commercial porn sites that allow user-generated content – such as videos uploaded by users – are in scope of the bill.

The new standalone provision ministers are adding to the proposed legislation will require providers who publish or place pornographic content on their services to prevent children from accessing that content.

This will capture commercial providers of pornography as well as the sites that allow user-generated content. Any company that runs such a pornography site accessible to people in the UK will be subject to the same strict enforcement measures as other in-scope services.

The Online Safety Bill will deliver more comprehensive protections for children online than the Digital Economy Act by going further and protecting children from a broader range of harmful content on a wider range of services.

The Digital Economy Act did not cover social media companies, where a considerable quantity of pornographic material is accessible, and which research suggests children use to access pornography.

The government is working closely with Ofcom to ensure that online services’ new duties come into force as soon as possible following the short implementation period that will be necessary after the bill’s passage.

The onus will be on the companies themselves to decide how to comply with their new legal duty. Ofcom may recommend the use of a growing range of age verification technologies available for companies to use that minimise the handling of users’ data. The bill does not mandate the use of specific solutions as it is vital that it is flexible to allow for innovation and the development and use of more effective technology in the future.

Age verification technologies do not require a full identity check. Users may need to verify their age using identity documents but the measures companies put in place should not process or store data that is irrelevant to the purpose of checking age. Solutions that are currently available include checking a user’s age against details that their mobile provider holds, verifying via a credit card check, and other database checks including government held data such as passport data.

Any age verification technologies used must be secure, effective and privacy-preserving. All companies that use or build this technology will be required to adhere to the UK’s strong data protection regulations or face enforcement action from the Information Commissioner’s Office.

Online age verification is increasingly common practice in other online sectors, including online gambling and age-restricted sales. In addition, the government is working with industry to develop robust standards for companies to follow when using age assurance tech, which it expects Ofcom to use to oversee the online safety regime.

Online safety law to be strengthened to stamp out illegal content

Bill strengthened with new list of criminal content for tech firms to remove as a priority

  • List includes online drug and weapons dealing, people smuggling, revenge porn, fraud, promoting suicide and inciting or controlling prostitution for gain
  • New criminal offences will be added to the bill to tackle domestic violence and threats to rape and kill

Flagship UK laws to protect people online are being toughened up with new criminal offences and extra measures to force social media companies to stamp out the most harmful illegal content and criminal activity on their sites quicker.

Digital Secretary Nadine Dorries has announced that the extra priority illegal offences to be written on the face of the bill include revenge porn, hate crime, fraud, the sale of illegal drugs or weapons, the promotion or facilitation of suicide, people smuggling and sexual exploitation. Terrorism and child sexual abuse are already included.

Previously the firms would have been forced to take such content down after it had been reported to them by users but now they must be proactive and prevent people being exposed in the first place.

It will clamp down on pimps and human traffickers, extremist groups encouraging violence and racial hate against minorities, suicide chatrooms and the spread of private sexual images of women without their consent.

Naming these offences on the face of the bill removes the need for them to be set out in secondary legislation later and Ofcom can take faster enforcement action against tech firms which fail to remove the named illegal content.

Ofcom will be able to issue fines of up to 10 per cent of annual worldwide turnover to non-compliant sites or block them from being accessible in the UK.

Three new criminal offences, recommended by the Law Commission, will also be added to the Bill to make sure criminal law is fit for the internet age.

Digital Secretary Nadine Dorries said: “This government said it would legislate to make the UK the safest place in the world to be online while enshrining free speech, and that’s exactly what we are going to do.

“Our world leading bill will protect children from online abuse and harms, protecting the most vulnerable from accessing harmful content, and ensuring there is no safe space for terrorists to hide online.

“We are listening to MPs, charities and campaigners who have wanted us to strengthen the legislation, and today’s changes mean we will be able to bring the full weight of the law against those who use the internet as a weapon to ruin people’s lives and do so quicker and more effectively.”

Home Secretary Priti Patel said: “The internet cannot be a safe haven for despicable criminals to exploit and abuse people online.

“Companies must continue to take responsibility for stopping harmful material on their platforms. These new measures will make it easier and quicker to crack down on offenders and hold social media companies to account.”

The new communications offences will strengthen protections from harmful online behaviours such as coercive and controlling behaviour by domestic abusers; threats to rape, kill and inflict physical violence; and deliberately sharing dangerous disinformation about hoax Covid-19 treatments.

The UK Government is also considering the Law Commission’s recommendations for specific offences to be created relating to cyberflashing, encouraging self-harm and epilepsy trolling.

To proactively tackle the priority offences, firms will need to make sure the features, functionalities and algorithms of their services are designed to prevent their users from encountering this content and to minimise the length of time it is available. This could be achieved by automated or human content moderation, banning illegal search terms, spotting suspicious users and having effective systems in place to prevent banned users opening new accounts.

New harmful online communications offences:

Ministers asked the Law Commission to review the criminal law relating to abusive and offensive online communications in the Malicious Communications Act 1988 and the Communications Act 2003.

The Commission found these laws have not kept pace with the rise of smartphones and social media. It concluded they were ill-suited to address online harm because they overlap and are often unclear for internet users, tech companies and law enforcement agencies.

It found the current law over-criminalises, capturing ‘indecent’ images shared between two consenting adults – known as sexting – where no harm is caused. It also under-criminalises, leaving some harmful communications without appropriate criminal sanction.

In particular, abusive communications posted in a public forum, such as posts on a publicly accessible social media page, may slip through the net because they have no intended recipient. It also found the current offences are sufficiently broad in scope that they could constitute a disproportionate interference in the right to freedom of expression.

In July the Law Commission recommended more coherent offences. The Digital Secretary today confirms new offences will be created and legislated for in the Online Safety Bill.

The new offences will capture a wider range of harms in different types of private and public online communication methods. These include harmful and abusive emails, social media posts and WhatsApp messages, as well as ‘pile-on’ harassment where many people target abuse at an individual such as in website comment sections. None of the offences will apply to regulated media such as print and online journalism, TV, radio and film.

The offences are:

A ‘genuinely threatening’ communications offence, where communications are sent or posted to convey a threat of serious harm.

This offence is designed to better capture online threats to rape, kill and inflict physical violence or cause people serious financial harm. It addresses limitations with the existing laws which capture ‘menacing’ aspects of the threatening communication but not genuine and serious threatening behaviour.

It will offer better protection for public figures such as MPs, celebrities or footballers who receive extremely harmful messages threatening their safety. It will address coercive and controlling online behaviour and stalking, including, in the context of domestic abuse, threats related to a partner’s finances or threats concerning physical harm.

A harm-based communications offence to capture communications sent to cause harm without a reasonable excuse.

This offence will make it easier to prosecute online abusers by abandoning the requirement under the old offences for content to fit within proscribed yet ambiguous categories such as “grossly offensive,” “obscene” or “indecent”.

Instead it is based on the intended psychological harm, amounting to at least serious distress, to the person who receives the communication, rather than requiring proof that harm was caused. The new offences will address the technical limitations of the old offences and ensure that harmful communications posted to a likely audience are captured.

The new offence will consider the context in which the communication was sent. This will better address forms of violence against women and girls, such as communications which may not seem obviously harmful but when looked at in light of a pattern of abuse could cause serious distress. For example, in the instance where a survivor of domestic abuse has fled to a secret location and the abuser sends the individual a picture of their front door or street sign.

It will better protect people’s right to free expression online. Communications that are offensive but not harmful and communications sent with no intention to cause harm, such as consensual communication between adults, will not be captured. It will have to be proven in court that a defendant sent a communication without any reasonable excuse and did so intending to cause serious distress or worse, with exemptions for communication which contributes to a matter of public interest.

An offence for when a person sends a communication they know to be false with the intention to cause non-trivial emotional, psychological or physical harm.

Although there is an existing offence in the Communications Act that captures knowingly false communications, this new offence raises the current threshold of criminality. It covers false communications deliberately sent to inflict harm, such as hoax bomb threats, as opposed to misinformation where people are unaware what they are sending is false or genuinely believe it to be true.

For example, if an individual posted on social media encouraging people to inject antiseptic to cure themselves of coronavirus, it would have to be proven in court that the individual knew this was not true before posting it.

The maximum sentences for each offence will differ: someone found guilty could go to prison for up to two years for the harm-based offence, up to 51 weeks for the false communications offence and up to five years for the threatening communications offence.

The maximum sentence was six months under the Communications Act and two years under the Malicious Communications Act.

Professor Penney Lewis, Commissioner for Criminal Law, said: “The criminal law should target those who specifically intend to cause harm, while allowing people to share contested and controversial ideas in good faith.

“Our recommendations create a more nuanced set of criminal offences, which better protect victims of genuinely harmful communications as well as better protecting freedom of expression.

“I am delighted that the Government has accepted these recommended offences.”

Child abuse image crimes in Scotland pass 3,000 in five years

Calls for stronger Online Safety Bill

  • Child abuse image offences recorded by Police Scotland up 13% last year and reach over 3,100 in just five years
  • Social media being used as a ‘conveyor belt’ to produce child abuse images on an industrial scale
  • NSPCC sets out five-point plan to strengthen Online Safety Bill so it decisively disrupts the production and spread of child abuse material on social media

More than 3,000 child abuse image crimes were recorded by Police Scotland over the last five years, the NSPCC has revealed today.

Data obtained from Police Scotland shows the number of offences relating to possessing, taking, making, and distributing child abuse material peaked at 660 last year (2020/21) – up 13% from 2019/20.

The NSPCC previously warned the pandemic had created a ‘perfect storm’ for grooming and abuse online.

The charity said social media is being used by groomers as a conveyor belt to produce and share child abuse images on an industrial scale. It added that the issue of young people being groomed into sharing images of their own abuse has become pervasive.

The NSPCC is urging the UK Culture Secretary Nadine Dorries to seize the opportunity to strengthen the Online Safety Bill, so it results in decisive action that disrupts the production and spread of child abuse material on social media.  

The child protection charity said that behind every offence could be multiple victims and images, and children will continue to be at risk of an unprecedented scale of abuse unless the draft legislation is significantly strengthened.

Ahead of a report by Parliamentarians who scrutinised the draft Online Safety Bill expected next week, the NSPCC, which has been at the forefront of campaigning for social media regulation, set out a five-point plan to strengthen the legislation so it effectively prevents online abuse.

The charity’s online safety experts said the Bill currently fails to address how offenders organise across social media, doesn’t effectively tackle abuse in private messaging and fails to hold top managers liable for harm or give children a voice to balance the power of industry.

The NSPCC is critical of the industry response to child abuse material. A Facebook whistle-blower recently revealed that Meta applies a return-on-investment principle to combating child abuse material and does not know the true scale of the problem, as the company “doesn’t track it”.

And research by the Canadian Centre for Child Protection has raised concerns about whether some platforms have consistent and effective processes to take down child abuse images, with some companies pushing back on removing abuse images of children as young as ten.

NSPCC Chief Executive, Sir Peter Wanless, said: “The staggering amount of child sexual abuse image offences is being fuelled by the ease with which offenders are able to groom children across social media to produce and share images on an industrial scale.

“The UK Government recognises the problem and has created a landmark opportunity with the Online Safety Bill. We admire Nadine Dorries’ declared intent that child protection is her number one objective.

“But our assessment is that the legislation needs strengthening in clear and specific ways if it is to fundamentally address the complex nature of online abuse and prevent children from coming to avoidable harm.” 

The NSPCC’s five-point plan lays out where the Online Safety Bill must be strengthened to:

  1. Disrupt well-established grooming pathways: The Bill fails to tackle convincingly the ways groomers commit abuse across platforms to produce new child abuse images. Offenders exploit the design features of social media sites to contact multiple children before moving them to risky livestreaming or encrypted sites. The Bill needs to be strengthened to require platforms to explicitly risk assess for cross-platform harms.
  2. Tackle how offenders use social media to organise abuse: The Bill fails to address how abusers use social media as a shop window to advertise their sexual interest in children, make contact with other offenders and post digital breadcrumbs as a guide for them to find child abuse content. Recent whistle-blower testimony found Facebook groups were being used to facilitate child abuse and signpost to illegal material hosted on other sites.
  3. Put a duty on every social media platform to have a named manager responsible for children’s safety: To focus minds on child abuse every platform should be required to appoint a named person liable for preventing child abuse, with the ultimate threat of criminal sanctions for product decisions that put children in harm’s way.
  4. Give the regulator more effective powers to combat abuse in private messaging: Private messaging is the frontline of child abuse but the regulator needs clearer powers to take action against companies that don’t have a plan to tackle it. Companies should have to risk assess end-to-end encryption plans before they go ahead so the regulator is not left in the dark about abuse taking place in private messaging.
  5. Give children a funded voice to fight for their interests: Under current proposals for regulation, children who have been abused will get fewer statutory protections than bus passengers or Post Office users. The Bill needs to provide for a statutory body, funded by an industry levy, to represent the interests of children.

The NSPCC is mobilising supporters to sign an open letter to Nadine Dorries asking the UK Culture Secretary to make sure children are at the heart of the Online Safety Bill.

The NSPCC’s full analysis of the draft Online Safety Bill is set out in their ‘Duty to Protect’ report.

Record number of recorded grooming crimes in Scotland

Calls for UK Government to bolster online safety legislation

  • Offences of Communicating Indecently with a Child recorded by Police Scotland increase by 80% in last five years
  • True scale of grooming likely to be higher as Facebook tech failures saw drop in removal of abuse material during pandemic
  • UK Government Culture Secretary Oliver Dowden urged to strengthen draft Online Safety Bill to ensure it responds to the rising threat

Crimes of communicating a sexual message to a child have risen by 80 per cent in the last five years to an all-time high, Police Scotland figures obtained by the NSPCC reveal.

Offenders are exploiting risky design features on apps popular with children, the child protection charity has warned.

The NSPCC is calling on the UK Government to respond by ensuring the ambition of the Online Safety Bill matches the scale of the biggest ever online child abuse threat.

The data provided by Police Scotland reveals:

  • there were 685 offences of Communicating Indecently with a Child recorded between April 2020 and March 2021
  • that’s an increase of 80 per cent from 381 in 2015/16
  • there was also an increase of 5 per cent from 2019/20 – making the number of crimes recorded in the last year a record high
  • for offences against children under the age of 13, the number of recorded crimes rose by 11 per cent, from 334 to 370, between 2019/20 and 2020/21

A 15-year-old girl told one of our Childline counsellors: “I’ve been chatting with this guy who’s like twice my age. This all started on Instagram but lately our chats have been on WhatsApp.

“He seemed really nice to begin with, but then he started making me do these things to ‘prove my trust to him’, like doing video chats with my chest exposed.”*

The NSPCC believes last year’s figures do not give a full understanding of the impact of the pandemic on children’s safety online.

The charity notes that, due to two technology failures, Facebook removed less than half as much child abuse content in the last six months of 2020 as it had previously.

The charity says tech firms failed to adequately respond to the increased risk children faced during lockdowns because of historic inaction on designing their sites safely for young users.

The NSPCC welcomes the recent flurry of safety announcements from companies such as Instagram, Apple and TikTok, but says tech firms are playing catch up in responding to the threat after years of poorly designed sites.

The charity is calling on the Culture Secretary Oliver Dowden to step up the ambition of the UK Government’s Online Safety Bill to ensure proposals comprehensively tackle an online abuse threat that is greater than ever.

The NSPCC says the Draft Online Safety Bill published in May needs to go much further to keep children safe and ensure it creates a practical response that corresponds to the scale and nature of the child abuse problem.

The Bill is due to be scrutinised by a Joint Committee of MPs and Lords from September, which experts say is a crucial opportunity to ensure legislation provides solutions that comprehensively fix the way platforms are exploited by abusers.

The NSPCC wants to see the Bill strengthened to address how abuse rapidly spreads across platforms and ensure it responds effectively to content that facilitates abuse.

Joanne Smith, NSPCC Scotland policy and public affairs manager, said: “The failings of tech firms are resulting in record numbers of children being groomed and sexually abused online.

“To respond to the size and complexity of the threat, the UK Government must make child protection a priority in legislation and ensure the Online Safety Bill does everything necessary to prevent online abuse.

“Legislation will only be successful if it achieves robust measures to keep children truly safe now and in the future.”

The NSPCC is also urging Facebook to invest in technology to ensure plans for end-to-end encryption will not prevent the tech firm from identifying and disrupting abuse.

The charity says Facebook should proceed only when it can prove child protection tools will not be compromised and wants tougher measures in the Online Safety Bill to hold named-managers personally liable for design choices that put children at risk.

The NSPCC has been calling for Duty of Care regulation of social media since 2017 and has been at the forefront of campaigning for the Online Safety Bill.

Hundreds of children safeguarded as online abuse reports increase

Hundreds of children have been safeguarded by police enforcement as reports of online child sexual abuse increased during the last year, information released today by Police Scotland shows.

Police Scotland’s 2020-21 Quarter 4 Performance Report and Management Information showed there were a total of 1,966 child sexual abuse crimes recorded during the year, an increase of 5.9% compared to the previous year (1,857) and 24.9% greater than the five-year average of 1,574.

The Performance Report outlines the safeguarding of 434 children through the enforcement of 649 National Online Child Abuse Prevention (NOCAP) packages between September 2020 and March this year.

NOCAP packages provide intelligence and evidence which underpins investigations carried out to identify and arrest online child abusers.

Deputy Chief Constable Fiona Taylor said: “The rise in reports of online child sexual abuse has continued and accelerated during this period, and the Performance Report draws attention to vital work to safeguard hundreds of children through the enforcement of National Online Child Abuse Prevention (NOCAP) packages.

“Online child sexual abuse is a national threat and tackling it is a priority for Police Scotland. The implementation of our Cyber Strategy will ensure we continue to build capacity and capability to keep people safe in the virtual space.”

The reports also provide an insight into the effect of coronavirus restrictions on the policing needs and requirements of communities during 2020-21.

Crime reports fell overall, with 6,361 fewer violent crimes reported compared to the previous year, a decrease of 10%, while there were also 55 fewer road fatalities, a decrease of 33% from 165 to 110.

Acquisitive crime, such as housebreakings and shoplifting, fell by 18% year on year (from 109,460 to 89,731).

Detection rates increased in a number of crime categories where reported offences had decreased, including overall violent crime (up 3.3 percentage points) and acquisitive crime (up 0.3 percentage points).

However, reported frauds increased by 38.2%, from 10,875 in 2019-20 to 15,031 during the last year – up 78.1% on the five-year average of 8,439 reported crimes.

DCC Taylor said: “The reporting year 2020-21 was truly an exceptional period, covering from just a few days after the country first entered lockdown up until the beginning of April 2021.

“While it may be years before some of the changes to how people live their lives and to the nature of crime are fully understood, this information demonstrates the significant impact coronavirus restrictions have had on reported crime, detection rates and other policing requirements during this unique time.

“Overall violent crime reduced by around 10% year on year. Year on year increases of violent crime were reported during only the months of July and August, when restrictions had been eased.

“Acquisitive crime, such as shoplifting, also declined overall by almost a fifth compared to the year before and against the five-year average.

“The number of people killed and seriously injured on our roads is down about a third on the year before.

“While this is to be welcomed, it is important to note reductions in reported crime did not occur in every category.

“As restrictions ease, we will continue to report on changes to the policing requirements of communities and the challenge of maintaining higher detection rates in the context of reported crime which is closer to pre-pandemic levels, as well as increasing demand in areas such as fraud and online child abuse.”

An NSPCC Scotland spokesperson said: “These latest figures are further evidence of the increasing risk to children posed by child sexual offenders online.

“It is right and crucial that Police Scotland is tackling these crimes as a priority, through arresting suspects and working with partners to raise awareness of the issue. But it is clear we cannot continue with the status quo, where it’s left to law enforcement to tackle child abuse but social networks fail to do enough to proactively prevent and disrupt it from happening in the first place.

“The UK Government needs to deliver on its promise to put the protection of children front and centre of the Online Safety Bill, with tech firms being held to account if they fail in their duty of care.”

The 2020-21 Q4 Performance Report will be presented to the Scottish Police Authority’s Policing Performance Committee on Tuesday, 8 June.

The Performance Report and Management Information can be found at https://www.scotland.police.uk/about-us/our-performance/

Scottish adults support tough new laws and sanctions on tech firms to combat child abuse

  • Poll shows widespread public support for stronger legislation to protect children from abuse online
  • Comes as NSPCC report says UK Government’s Online Safety Bill must be more ambitious to comprehensively tackle sexual abuse
  • Charity chief calls for no compromise on children’s safety being at the heart of new laws

The Scottish public overwhelmingly back robust new laws to protect children from abuse on social media and want bosses to be held responsible for safety, new polling suggests.

An NSPCC/YouGov survey found that more than nine in ten respondents (95%) in Scotland want social networks and messaging services to be designed to be safe for children.

The poll of more than 2,000 adults across the UK*, of whom 179 were from Scotland, shows huge support for putting a legal requirement on tech firms to detect and prevent child abuse, and for strong sanctions against directors whose companies fail.

91% of respondents in Scotland want firms to have a legal responsibility to detect child abuse, such as grooming, taking place on their sites.

And almost four in five Scottish adults (79%) support prosecuting senior managers of social media firms if their companies consistently fail to protect children from abuse online, while 83% of respondents want social media bosses fined for consistent failures.

NSPCC Chief Executive Sir Peter Wanless said it shows a huge public consensus for robust Duty of Care regulation of social media.

He is urging the UK Culture Secretary Oliver Dowden to listen by ensuring his landmark Online Safety Bill convincingly tackles online child abuse and puts the onus on firms to prevent harm. Mr Dowden set out the UK Government’s vision for legislation in December.

The survey found that just ten per cent of Scottish adults think sites are regularly designed safely for children, but 77% support a legal requirement for platforms to assess the risks of child abuse on their services and take steps to address them.

It comes as the NSPCC’s ‘Delivering a Duty of Care’ report, released earlier this week, assessed plans for UK legislation against its six tests for the UK Government to achieve bold and lasting protections for children online.

It found that the UK Government is failing on a third of the indicators (nine out of 27), with tougher measures needed to tackle sexual abuse and to give Ofcom the powers it needs to develop and enforce regulation fit for decades to come.

Sir Peter Wanless said: “Today’s polling shows the clear public consensus for stronger legislation that hardwires child protection into how tech firms design their platforms.

“Mr Dowden will be judged on whether he takes decisions in the public interest and acts firmly on the side of children with legislation ambitious enough to protect them from avoidable harm.

“For too long children have been an afterthought for Big Tech but the Online Safety Bill can deliver a culture change by resetting industry standards and giving Ofcom the power to hold firms accountable for abuse failings.”

The NSPCC is calling for legislation to be more robust so it can successfully combat online child abuse at an early stage and before it spreads across platforms.

They want a requirement for tech firms to treat content that facilitates sexual abuse with the same severity as material that meets the criminal threshold.

This means clamping down on the “digital breadcrumbs” dropped by abusers to guide others towards illegal material. These include videos of children just moments before or after they are sexually abused – so-called ‘abuse image series’ – that are widely available on social media.

The charity also wants Ofcom to be able to tackle cross-platform risks, where groomers target children across the different sites and games they use – something firms have strongly resisted.

In its report, the NSPCC called on the UK Government to commit to senior management liability to make tech directors personally responsible for decisions on product safety.

They say this is vital to drive cultural change and provide an appropriate deterrent against a lax adoption of the rules.

The charity wants to see senior management liability similar to the successful approach in financial services. Under the scheme, bosses taking decisions which could put children at risk could face censure, fines and in the case of the most egregious breaches of the Duty of Care, criminal sanctions.

They warn that the UK Government has softened its ambition and at present just proposes liability for narrow procedural reasons, which will only be enacted later down the line.

The NSPCC has been the leading voice for social media regulation and the charity set out detailed proposals for a Bill in 2019.

The UK Government’s White Paper consultation response in December set out the framework for an Online Safety Bill that is expected in the Spring.