Ofcom: Proposed measures to improve children’s online safety

As the UK’s online safety regulator, we have published a package of proposed measures that social media and other online services must take to improve children’s safety when they’re online.

In this article, we explain some of the main measures and the difference we expect them to make. Whether you are a parent, carer or someone working with children, this can help you understand what is happening to help children in the UK live safer lives online.

Protecting children is a priority

Protecting children so they can enjoy the benefits of being online, without experiencing the potentially serious harms that exist in the online world, is a priority for Ofcom.

We’re taking action – setting out proposed steps online services would need to take to keep kids safer online, as part of their duties under the Online Safety Act.

Under the Act, social media apps, search and other online services must prevent children from encountering the most harmful content relating to suicide, self-harm, eating disorders, and pornography. They must also minimise children’s exposure to other serious harms, including violent, hateful or abusive material, bullying content, and content promoting dangerous challenges.

What will companies have to do to protect children online?

First, online services must establish whether children are likely to access their site – or part of it. Second, if children are likely to access it, the company must carry out a further assessment to identify the risks their service poses to children, including the risks that come from the design of the service, its functionalities and its algorithms. They then need to introduce safety measures to mitigate these risks.

Our consultation proposes more than 40 safety measures that services would need to take – all aimed at making sure children enjoy safer screen time when they are online. These include:

  • Robust age checks – our draft Codes expect services to know which of their users are children in order to protect them from harmful content. In practice, this means that all services which don’t ban harmful content should introduce highly effective age checks, either to prevent children from accessing the entire site or app, or to age-restrict parts of it for adults-only access.
  • Safer algorithms – under our proposals, any service that recommends personalised content to users and is at high risk of harmful content must design its algorithms to filter out the most harmful content from children’s feeds and downrank other harmful content. Children must also be able to give negative feedback so the algorithm can learn what content they don’t want to see (see the sketch after this list).
  • Effective moderation – all services, including social media apps and search services, must have content moderation systems and processes that take quick action on harmful content. Large search services should use a ‘safe search’ setting for children, which cannot be turned off and must filter out the most harmful content. Other broader measures require clear policies from services on what kind of content is allowed, how content is prioritised for review, and for content moderation teams to be well-resourced and trained.
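
To illustrate how filtering, downranking and negative feedback might fit together in a recommender pipeline, here is a minimal sketch. It assumes each candidate post carries a harm score from a classifier; the thresholds, penalty values and field names are illustrative assumptions, not taken from our draft Codes.

```python
# A minimal sketch of feed filtering and downranking for a child's account.
# Thresholds, penalties and the harm-scoring scheme are illustrative only.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    relevance: float   # recommender's base relevance score
    harm_score: float  # 0.0 (benign) to 1.0 (most harmful), from a classifier

FILTER_THRESHOLD = 0.8    # "most harmful" content: removed from children's feeds
DOWNRANK_THRESHOLD = 0.4  # "other harmful" content: pushed down the feed
DOWNRANK_PENALTY = 0.5    # multiplier applied to downranked posts

def rank_feed_for_child(candidates: list[Post], disliked_ids: set[str]) -> list[Post]:
    """Rank a child's feed: filter the most harmful posts, downrank other
    harmful posts, and demote anything the child has marked as unwanted."""
    scored = []
    for post in candidates:
        if post.harm_score >= FILTER_THRESHOLD:
            continue  # filter out the most harmful content entirely
        score = post.relevance
        if post.harm_score >= DOWNRANK_THRESHOLD:
            score *= DOWNRANK_PENALTY  # downrank other harmful content
        if post.post_id in disliked_ids:
            score *= 0.1  # negative feedback: strongly demote this item
        scored.append((score, post))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [post for _, post in scored]
```

In a real service the negative feedback would also feed back into the recommendation model itself; the sketch only shows the ranking step.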

What difference will these measures make?

We believe these measures will improve children’s online experiences in a number of ways. For example:

  • Children will not normally be able to access pornography.
  • Children will be protected from seeing, and being recommended, potentially harmful content.
  • Children will not be added to group chats without their consent.
  • It will be easier for children to complain when they see harmful content, and they can be more confident that their complaints will be acted on.

Our consultation follows proposals we’ve already published for how children should be protected from illegal content and activity such as grooming, child sexual exploitation and abuse, as well as how children should be prevented from accessing pornographic content.

Next steps

Our consultation is open until 17 July and we welcome any feedback on the proposals. We expect to finalise our proposals and publish our final statement and documents in spring next year.

Please submit responses using the consultation response form (ODT, 108.1 KB).

Have you booked a Brazilian Butt Lift?

Edinburgh residents are being urged to contact the City of Edinburgh Council if they have booked a procedure known as a Brazilian Butt Lift (BBL) in the Capital this weekend.

The Council’s Environmental Health team has been made aware that there may be BBL procedures taking place from Friday 26 to Sunday 28 April, booked through people responding to social media posts.

Other local authorities in the United Kingdom have received complaints after similar procedures were carried out resulting in people suffering serious health complications such as sepsis.

Cllr Neil Ross, Convener of the Regulatory Committee at the City of Edinburgh Council, said: “We have been made aware that there may be procedures known as Brazilian Butt Lifts being performed in Edinburgh this weekend and we have concerns about the safety of such procedures.

“We are concerned about the potential risk to public health and would urge anyone who may have booked such a procedure this weekend to contact us as a matter of urgency.”

Anyone who may have a BBL procedure booked from Friday 26 April to Sunday 28 April in Edinburgh should e-mail environmentalhealth@edinburgh.gov.uk or phone 0131 200 2000.

Smartphone Free Childhood call for WhatsApp to reverse age reduction policy

Smartphone Free Childhood, the grassroots parents’ movement, has called on WhatsApp to reverse today’s change in age policy, which lowers the minimum age of use from 16 to 13 years old. 

As of April 11th, anyone in Europe over the age of 12 can now legally access the messaging service, after WhatsApp made a planned change to its age restriction policy. 

This comes despite growing national calls for greater protections for children around smartphone and social media use, including from the 60,000 parents who have joined Smartphone Free Childhood since it launched spontaneously eight weeks ago.

A recent nationwide poll found that 95% of parents said they wanted big tech companies to do more to protect their children, with 80% believing that age limits on social media were too low. 

Daisy Greenwell, co-founder of Smartphone Free Childhood said: “WhatsApp are putting shareholder profits first and children’s safety second.

“Reducing their age of use from 16 to 13 years old is completely tone deaf and ignores the increasingly loud alarm bells being rung by scientists, doctors, teachers, child safety experts, parents and mental health experts alike. This policy boosts their user figures and maximises shareholder profits at the expense of children’s safety.

“Lowering their age restrictions from 16 to 13 sends a message to parents that WhatsApp is safe for those over the age of 12, and yet a growing body of research suggests otherwise.

“Meanwhile parents and teachers in the Smartphone Free Childhood community regularly reach out to us to share that it is a contributor to bullying, sleep disruption and access to harmful content.”

Meanwhile a growing body of research continues to raise serious questions about how suitable closed group messaging apps are for children and young teens. One recent study² found that 56% of students aged 8-18 reported that they had experienced cyberbullying in their class WhatsApp groups. 

Elsewhere, heavy use of screen media has been associated with shorter sleep duration and more mid-sleep awakening (in a study of more than 11,000 British children³) and many teachers have anecdotally reported to Smartphone Free Childhood that late night activity on WhatsApp is an increasing problem affecting children’s mood and ability to concentrate in class. 

Speaking about her recent report in partnership with the Association of School and College Leaders, Dr Kaitlyn Regehr, Associate Professor of Digital Humanities at UCL, said: “Our report shows that private, or closed, groups can enable more extreme material being shared, which in turn can have implications for young people’s offline behaviours.

“Young people increasingly exist within digital echo-chambers, which can normalise harmful rhetoric.”

Furthermore, numerous reports link WhatsApp to children accessing extreme content – including sexual imagery, self-harm material⁵ and videos of extreme violence such as beheadings and terrorist attacks. Studies show that nearly a quarter of people viewing such content on social media will experience symptoms of PTSD⁶.

Meanwhile, the end-to-end encryption on WhatsApp threatens children’s safety on the app, making it hard for parents to understand who their children are talking to and leaving them at risk of grooming by sexual predators.

One in ten children report⁷ using the messaging site to talk to people they don’t already know, and one in six 14-16 year-olds have received something distressing from a stranger on the site.

Despite these significant concerns, WhatsApp have as yet given no indication of how they plan to protect all the new under-16 users on their site, or how they will comply with UK law to remove the millions of under-13s already on the platform. 

Child sexual abuse image crimes at record high in Scotland last year

  • Child sexual abuse image offences recorded by Police Scotland increased by 15 per cent between April 2022 and March 2023
  • NSPCC wants robust implementation of the Online Safety Act with Ofcom encouraged to strengthen its approach to tackling child sexual abuse
  • Meta urged to pause rollout of end-to-end encryption until plans for Facebook and Instagram can be risk assessed under new online safety regulations

The number of child sexual abuse image offences recorded by Police Scotland was at a record high last year – up by 15 per cent from the previous year, data analysed by the NSPCC has revealed.

A total of 765 offences in which child abuse images were collected and distributed were logged in 2022/23, according to Police Scotland data.¹

Since 2017/18, when the NSPCC first called for social media regulation, a total of 3,877 crimes have been recorded while children and families have waited for online safety laws.

The charity said the figures show the need for swift and ambitious action by tech companies to address what is currently happening on their platforms and for Ofcom to significantly strengthen its approach to tackling child sexual abuse through effective enforcement of the Online Safety Act.

The figures come as insight from Childline shows young people being targeted by adults to share child sexual abuse images via social media and the calculated use of end-to-end encrypted private messaging apps by adults to find and share child abuse images.

A 14-year-old girl told the NSPCC-run service: “One night I got chatting with this guy online who I’d never met and he made me feel so good about myself. He told me he was 15, even though deep down I didn’t believe him.

“I sent him a couple of semi-nudes on Snap(chat), but then instantly regretted it. I asked him to delete the pics, but he just kept on making me do stuff for him not to post them – like getting me to strip live on camera. I just want to block him, but if I block him he will just post the pictures.”

A 15-year-old boy told Childline: “A while ago I saw a video on YouTube about how a guy was busting paedophiles and creeps on the internet by pretending to be a kid, and I kind of wanted to do a similar thing.

“I looked around Instagram for the creepiest accounts about kids my age and younger. In the end, I came across this link on one of their stories. It’s a link to a WhatsApp group chat in which [child sexual abuse material] is sent daily! There are literally hundreds of members in this group chat and they’re always calling the kids ‘hot’ and just being disgusting.”

  1. Police Scotland recorded crime data on the Scottish Government website.
Police force   2017/18   2018/19   2019/20   2020/21   2021/22   2022/23   Total
Scotland       658       554       584       660       662       765       3,877

Online Safety Act implementation

The NSPCC said that disrupting online child sexual abuse taking place at increasing levels will require regulated tech platforms to introduce systemic changes to their products to stop them being used to organise, commit, and share child abuse.

A consultation into Ofcom’s first codes for companies to adopt to disrupt child sexual abuse on their platforms closed last week.

The NSPCC wants these measures introduced without delay but has urged Ofcom to begin work on a second version of the codes that will require companies to go much further.

The charity said companies should be required to use technology that can help identify and tackle grooming, sextortion and new child abuse images.

They also want tougher measures for private messaging services to make child protection a priority, including in end-to-end encrypted environments.

The NSPCC warned that Meta’s roll-out of end-to-end encryption on Facebook and Instagram will prevent authorities from identifying offenders and safeguarding victims.

The charity wants plans paused until Meta can prove child safety will not be compromised, and has urged all parties to find a balance between the safety and privacy of all users, including children. The NSPCC said further rollout should be delayed until Ofcom can study Meta’s risk assessment as part of the new regulatory regime.

Sir Peter Wanless, NSPCC Chief Executive, said: “It’s alarming to see online child abuse continue to rise, especially when tech companies should be acting to make their sites safe by design ahead of incoming regulation.

“Behind these crimes are children who have been targeted by adults who are able to organise and share sexual abuse with other offenders seamlessly across social media and messaging apps.

“The Online Safety Act sets out robust measures to make children fundamentally safer on the sites and apps they use so they can enjoy the benefits of a healthy online experience.

“Ofcom has been quick off the blocks but must act with greater ambition to ensure companies prioritise child safety in the comprehensive way that is so desperately needed.”

Susie Hargreaves OBE, Chief Executive of the Internet Watch Foundation, the UK’s front line against child sexual abuse imagery online, said: “This is a truly disturbing picture, and a reflection of the growing scale of the availability, and demand, for images and videos of children suffering sexual abuse.

“The people viewing and sharing and distributing this material need to know it is not a victimless crime. They are real children, suffering real abuse and sexual torture, the effects of which can linger a lifetime.

“That more and more people are trying to share and spread this material shows we should all be doing everything we can to stop this, building more, and more innovative, solutions to keep children safe.

“The IWF is ready to support technology companies and Ofcom in implementing the Online Safety Act to help make the UK the safest place in the world to be online.”

Hold the front page: Rebrand for Scottish Newspaper Society

The trade association for Scotland’s news publishers, the Scottish Newspaper Society, has been renamed Newsbrands Scotland, dropping “newspaper” from its title for the first time in a history stretching back 108 years.

While printed newspapers remain an important part of news publishers’ operations, the name change reflects modern newsrooms which reach far bigger audiences through digital platforms, with news operations working round the clock to deliver news to readers in the format they want, when they want it.

Newsbrands Scotland’s inaugural president, National World plc’s chief commercial officer Mark Hollinshead, said: “Our newsbrands reach more people than they ever did in the print-only days and the new name of our trade association reflects the multi-platform reality of the modern newsroom.”

Industry research [TGI, December 2022] shows that nine out of ten adults in Scotland engage with print or digital newsbrands at least once a week and are seven per cent more likely to rely on newspapers to stay informed than UK adults. And according to the latest JICREG analysis, 96 per cent of Scots read a local brand once a month.

Mark added: “Ever since the advent of the internet, Scottish news publishers have been evolving and innovating to keep their audiences well-served with up-to-the-minute, trusted information and analysis, and the audience figures speak for themselves.

“Scottish newsbrands keep communities across the country well-informed and connected, have a vital role to play in holding national and local politicians to account for the decisions they make, and are an essential means for services, businesses and charities to communicate with their users and customers.”

Further research from the news industry marketing body Newsworks reveals people are 2.4 times more likely to find news brands reliable than social media, and three-quarters believe it is important their news comes from a respected and recognised news provider.

Newsbrands Scotland director John McLellan said: “Our titles continue to provide a depth and breadth of coverage that few, if any, networks can match, and the fact that all our members are independently regulated is also vital for maintaining public trust.

“Readers want to know they are being provided with professionally produced news, and our commercial partners benefit because readers recognise they are in a trusted environment.

“News publishers also continue to support and train the journalists of the future, and it’s important for our name to reflect an industry that is always looking forward.”

The rebranding project was a collaborative effort across member companies, with the branding design produced by DC Thomson and the marketing campaign devised by National World, with input from News UK and Newsquest Scotland.

“This was a very good example of publishers working together for the benefit of the whole sector in Scotland, whether society members or not,” added John McLellan.

Online Safety Bill ready to become law

  • The Online Safety Bill has been signed off by the Houses of Parliament and will become law soon
  • The bill will make the UK the safest place in the world to be online by placing new duties on social media companies – honouring our manifesto commitment
  • The bolstered bill has been strengthened through debate, with firmer protections for children, more control for adults and clarity for social platforms

The Online Safety Bill has passed its final Parliamentary debate and is now ready to become law.

This major milestone means the government is within touching distance of delivering the most powerful child protection laws in a generation, while ensuring adults are better empowered to take control of their online lives and protect their mental health.

The bill takes a zero-tolerance approach to protecting children and makes sure social media platforms are held responsible for the content they host. If they do not act rapidly to prevent and remove illegal content and stop children seeing material that is harmful to them, such as bullying, they will face significant fines that could reach billions of pounds. In some cases, their bosses may even face prison.

The bill has undergone considerable parliamentary scrutiny in both Houses and has come out with stronger protections for all.

Technology Secretary Michelle Donelan said: “The Online Safety Bill is a game-changing piece of legislation. Today, this government is taking an enormous step forward in our mission to make the UK the safest place in the world to be online.

“I am immensely proud of what we have achieved with this bill. Our common-sense approach will deliver a better future for British people, by making sure that what is illegal offline is illegal online. It puts protecting children first, enabling us to catch keyboard criminals and crack down on the heinous crimes they seek to commit.

“I am deeply thankful for the tireless campaigning and efforts of parliamentarians, survivors of abuse and charities who have all worked relentlessly to get this bill to the finish line.”

Without this groundbreaking legislation, the safety of children across the country would be at stake and the internet would remain a wild west of content, putting children’s lives and mental health at risk. The bill has a zero-tolerance approach to protecting children, meaning social media platforms will be legally responsible for the content they host and keeping children and young people safe online.

Social media platforms will be expected to:

  • remove illegal content quickly or prevent it from appearing in the first place, including content promoting self-harm
  • prevent children from accessing harmful and age-inappropriate content
  • enforce age limits and age-checking measures
  • ensure the risks and dangers posed to children on the largest social media platforms are more transparent, including by publishing risk assessments
  • provide parents and children with clear and accessible ways to report problems online when they do arise

In addition to its firm protections for children, the bill empowers adults to take control of what they see online. It provides three layers of protection for internet users which will:

  1. Make sure illegal content will have to be removed
  2. Place a legal responsibility on social media platforms to enforce the promises they make to users when they sign up, through terms and conditions
  3. Offer users the option to filter out harmful content, such as bullying, that they do not want to see online

If social media platforms do not comply with these rules, Ofcom could fine them up to £18 million or 10% of their global annual revenue, whichever is greater – meaning fines handed down to the biggest platforms could reach billions of pounds.
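
As a quick illustration of the “whichever is greater” rule, this sketch computes the maximum possible fine for a given revenue figure; the £100 billion revenue used in the example is invented for illustration.

```python
# Illustrative arithmetic for the maximum fine under the Act: the greater of
# £18 million or 10% of global annual revenue. The revenue figure is invented.
def max_fine(global_annual_revenue_gbp: float) -> float:
    return max(18_000_000, 0.10 * global_annual_revenue_gbp)

print(f"£{max_fine(100_000_000_000):,.0f}")  # £10,000,000,000 for £100bn revenue
```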

Also added to the bill are new laws to decisively tackle online fraud and violence against women and girls. Through this legislation, it will be easier to convict someone who shares intimate images without consent and new laws will further criminalise the non-consensual sharing of intimate deepfakes.

The change in the law will make it easier to charge abusers who share intimate images, putting more offenders behind bars and better protecting the public. Those found guilty of this base offence face a maximum penalty of six months in custody.

Former Love Island star and campaigner Georgia Harrison said: “Violence against women and girls is so common, with one in three women in the UK having experienced online abuse or harassment.

“The Online Safety Bill is going to help bring this to an end, by holding social media companies accountable for protecting women and girls from online abuse.”

Under the bill, the biggest social media platforms will have to stop users being exposed to dangerous fraudulent adverts by blocking and removing scams, or face Ofcom’s huge new fines.

The government has recently strengthened the bill even further, by amending the law to force social media firms to prevent activity that facilitates animal cruelty and torture (such as paying or instructing torture). Even if this activity takes place outside the UK but is seen by users here, companies will be forced to take it down.

Anticipating the bill coming into force, the biggest social media companies have already started to take action. Snapchat has started removing the accounts of underage users and TikTok has implemented stronger age verification.

Ofcom Chief Executive, Dame Melanie Dawes said: “Today is a major milestone in the mission to create a safer life online for children and adults in the UK. Everyone at Ofcom feels privileged to be entrusted with this important role, and we’re ready to start implementing these new laws.

“Very soon after the bill receives Royal Assent, we’ll consult on the first set of standards that we’ll expect tech firms to meet in tackling illegal online harms, including child sexual exploitation, fraud and terrorism.”

While the bill has been in progress, the government has been working closely with Ofcom to ensure changes will be implemented as quickly as possible when it becomes law.

The regulator will immediately begin work on tackling illegal content and protecting children’s safety, with its consultation process launching in the weeks after Royal Assent. It will then take a phased approach to bringing the Online Safety Bill into force.

Passing of the Online Safety Bill ‘a momentous day for children’ says NSPCC chief

  • The Online Safety Bill will finally require tech companies to make their sites safe by design for children
  • New laws come after years of campaigning and robust political scrutiny in Parliament
  • NSPCC CEO Sir Peter Wanless thanks survivors and bereaved parents and urges tech companies to seize the opportunity offered by regulation
  • Survivors of online child abuse tell how the Online Safety Bill will address further preventable harm to countless other children
  • Charity releases video with young people welcoming the Online Safety Bill

The NSPCC has welcomed the passing of the Online Safety Bill, a ground-breaking piece of legislation they say will radically change the landscape for children online.

After years of campaigning, tech companies will now have a legal duty to protect children from sexual abuse and harmful material on social media sites, gaming apps and messaging services.

The UK Government first promised regulation to help protect children online at the NSPCC’s annual conference in 2018, following the launch of the charity’s Wild West Web campaign.

The charity has helped strengthen the legislation during its long journey through UK Parliament, ensuring that it results in regulation that comprehensively protects children online.

The charity says the legislation will mark a new era for children’s safety at a time when online child abuse offences are at a record high and children continue to be bombarded with harmful suicide and self-harm content on social media.

In August this year, NSPCC Scotland revealed that more than 3,500 online grooming crimes* had been recorded by Police Scotland over the past six years while the legislation was being discussed. Last year (2022/23), 593 Communicating Indecently with a Child offences were recorded in Scotland, with more than half of the offences against children under the age of 13.  

The Online Safety Bill was published in May 2021 and has been subject to robust scrutiny and debate by MPs, Lords and civil society.

Its importance was starkly highlighted by the inquest into the death of 14-year-old Molly Russell in September last year, which ruled that the self-harm and suicide content that Molly had been recommended on social media had contributed to her death.

Ruth Moss, whose 13-year-old daughter Sophie died by suicide after viewing harmful content online, joined forces with Molly’s father Ian Russell and other parents whose children died following exposure to harmful content online, to form the Bereaved Families for Online Safety group to strengthen the protections in the Bill.

The Edinburgh nurse has been campaigning with the NSPCC for several years for robust new legislation that would force tech bosses to make their sites safe for children.

Ruth said: “I’m pleased that the Bill has passed. I have always argued that self-regulation by tech companies hasn’t worked. Numerous families, including mine, have highlighted these failings over many years. So, I welcome the bill wholeheartedly. It is a big step in offering protection to children online.

“For at least two years, we struggled to keep my daughter Sophie safe online. In spite of removing devices, restricting internet use, implementing parental controls and having conversations about internet safety, these were not enough to prevent her from being exposed to websites that promoted self-harm, suicide and contained dark, graphic, harmful material. Complaining to internet and social media companies was either impossible or futile.

“The impact of Sophie viewing this harmful material was a deterioration in her existing mental health struggles, with devastating consequences. Sophie was 13 years old when she died by suicide. We will never truly recover from her death, and it is rightly every parent’s worst nightmare.

“This Online Safety Bill may not solve all the issues that children have online. But it is essential to start regulating online platforms. They have a duty of care to keep their users safe to the best of their ability.

“Of course, I do have some reservations about the Online Safety Bill. It is a new piece of legislation that has not had the chance to be tested – so there will be some unknowns. And no piece of legislation will be perfect. We will only really know if the legislation goes far enough over time. 

“The Bill will need to stay relevant. Like other new legislation, it will develop over time with the changing environments in which we live. Technology changes and, with it, the legislation around it will need to keep up. But this is a good first step. It sends a message to tech companies that safety should not be compromised for the sake of profit and that tech companies cannot deny responsibility for keeping their service users safe on their websites.

“In my opinion, the enforcement of the Bill is key. This will be challenging. It will require Ofcom going up against some of the most powerful and influential organisations in the world. Ofcom will have a difficult job. Currently, I have confidence that they will do what is necessary to uphold the legislation where needed, however time will tell.

“As with any piece of complex legislation, there were amendments that did not get passed – around ‘legal but harmful’ content for adults, the appointment of a children’s advocate and other areas that I would like to have seen included in the bill. But again, no Bill is perfect, and I am pleased to see it passed.”

The Online Safety Bill has been shaped in large part by survivors of abuse, bereaved parents and young people themselves who have campaigned tirelessly to ensure the legislation leads to real-world change for children.

Aoife (19) from East Kilbride, South Lanarkshire, was 15 when she was exploited by a man online who pretended to be a teenager. She said: “He convinced me to send him photos and then blackmailed me with them.

“It was terrifying but luckily I knew to report to Child Exploitation and Online Protection (CEOP) and he was eventually convicted.

“I know this kind of thing happens all the time – we need the new law to stop it from hurting more lives.”

Sir Peter Wanless, NSPCC Chief Executive, said: “We are absolutely delighted to see the Online Safety Bill being passed through Parliament. It is a momentous day for children and will finally result in the ground-breaking protections they should expect online.

“At the NSPCC we hear from children about the completely unacceptable levels of abuse and harm they face online every day. That’s why we have campaigned strongly for change alongside brave survivors, families, young people and parliamentarians to ensure the legislation results in a much safer online world for children.

“Children can benefit greatly from life online. Tech companies can now seize the opportunity to embrace safety by design. The NSPCC is ready to help them listen to and understand the online experiences of their young users to help ensure every child feels safe and empowered online.”

The NSPCC’s commitment to protect children online does not end with the passing of the Bill and the charity will continue to advocate to ensure it results in truly safe online spaces for children.

Online Safety Bill Timeline:

  • 2014 NSPCC launches a campaign calling for a new offence to make online grooming a crime, by making it illegal for an adult to send a child a sexual message.  50,000 people signed our petition
  • 2015 The Government included the offence in the Serious Crime Act 2015, but it took two more years of sustained campaigning before they finally brought the offence into force so police could take action and arrest offenders
  • April 2017 Sexual Communication with a Child became an offence
  • April 2017 The NSPCC first called on Government to bring in statutory regulation of social networks
  • Dec 2017 NSPCC call for tech companies to have a legal duty of care to keep children safe
  • April 2018 Launch of NSPCC’s Wild West Web campaign 
  • June 2018 Following an NSPCC campaign, then Culture Secretary Matt Hancock commits to legislate to protect children  
  • Feb 2019 Taming the Wild West Web was published outlining a plan for regulation 
  • April 2019 Government publishes the Online Harms White Paper 
  • January 2020 Online Harms paving bill, prepared by the Carnegie Trust and introduced by Lord McNally, was selected for its first reading in the Lords 
  • February 2020 Government publishes its initial response to the Online Harms White Paper consultation, announcing Ofcom as the likely watchdog
  • September 2020 NSPCC sets out six tests for the Online Harms Bill in its Regulatory Framework 
  • December 2020 Government published its Online Harms White Paper consultation response  
  • March 2021 NSPCC analysis of the consultation response found significant improvement is needed in a third of areas of proposed legislation if the Online Safety Bill is to extensively protect children from avoidable harm and abuse 
  • May 2021 Government publishes draft Online Safety Bill  
  • September 2021 Parliamentary scrutiny begins, and the NSPCC publish Duty to Protect – An assessment of the draft Online Safety Bill against the NSPCC’s six tests for protecting children 
  • October 2021 Facebook whistleblower Frances Haugen gives evidence to the Joint Committee on the Draft Online Safety Bill
  • December 2021 The joint committee on the Draft Online Safety Bill call for a number of changes to the legislation to better prevent child abuse 
  • January 2022 DCMS Committee back NSPCC’s call for the Online Safety Bill to put a legal duty on tech firms to disrupt the way offenders game social media design features to organise around material that facilitates abuse  
  • January 2022 The Petitions Committee also called for the Online Safety Bill to be strengthened 
  • March 2022 NSPCC urge Government to listen to the overwhelming consensus of Parliamentarians, civil society and the UK public to close significant gaps in the way the Online Safety Bill responds to grooming
  • March 2022 The Government publishes the Online Safety Bill 
  • April 2022 Online Safety Bill has its Second Reading and NSPCC publish its Time to Act report which sets out where the Bill can be improved to become a world-leading response to the online child abuse threat 
  • May 2022 Online Safety Bill Committee Stage begins 
  • July 2022 Online Safety Bill Report Stage debates  
  • Summer 2022 Online Safety Bill delayed by two changes of Prime Minister
  • September 2022 Inquest into the death of 14-year-old Molly Russell finds social media contributed to her death
  • December 2022 Online Safety Bill returns to Parliament
  • December 2022 Bereaved Families for Online Safety formed to campaign for strong protections for children and families through the Online Safety Bill
  • January 2023 Conservative MP rebellion backs NSPCC amendment that forces Government to commit to holding senior tech managers liable for harm to children
  • January 2023 Online Safety Bill begins its journey through the House of Lords
  • Spring 2023 Government amendments strengthen protections for children following campaigning by civil society, including NSPCC and Bereaved Families for Online Safety
  • September 2023 Online Safety Bill due its Third Reading in the House of Lords and to return to Parliament for final passage

New tech partnership with social media to ‘stop the boats’

  • Partnership with social media companies to clamp down on people smugglers’ operations online
  • Illegal crossings remain down on last year and returns are at their highest level since 2019
  • Extra funding and resources for law enforcement to tackle harmful content

A voluntary partnership between social media companies and government will accelerate action to tackle people smuggling content online, such as criminals sharing information about illegal Channel crossings, Prime Minister Rishi Sunak has announced today [Sunday 6th August].

It comes as new figures show the government continues to make progress on the Prime Minister’s plan to stop the boats: crossings remain down on last year, the legacy asylum backlog has been reduced by a third since December 2022, and enforced returns of people with no right to be in the UK are at their highest level since 2019.

While figures from the NCA show that over 90% of online content linked to people smuggling is taken down when social media companies are notified, the partnership between tech firms and government will drive forward efforts to clamp down on the tactics being used by criminal gangs who use the internet to lure people into paying for crossings.

This content can include discount offers for groups of people, free spaces for children, offers of false documents and false claims of safe passage – targeting vulnerable people for profit and putting people’s lives at risk through dangerous and illegal journeys.

Prime Minister Rishi Sunak said: “To stop the boats, we have to tackle the business model of vile people smugglers at source.

“That means clamping down on their attempts to lure people into making these illegal crossings and profit from putting lives at risk.

“This new commitment from tech firms will see us redouble our efforts to fight back against these criminals, working together to shut down their vile trade.”

Home Secretary Suella Braverman said: “Heartless people smugglers are using social media to promote their despicable services and charge people thousands of pounds to make the illegal journey into the UK in unsafe boats.

“They must not succeed.

“This strengthened collaboration between the National Crime Agency, government and social media companies will ensure content promoting dangerous and illegal Channel crossings doesn’t see the light of day.”

The partnership will build on the close working already in place between government and social media companies, and includes a range of commitments to explore increased collaboration.

Under this initiative, social media companies will look to increase cooperation with the National Crime Agency to find and remove criminal content and step up the sharing of best practice both across the industry and with law enforcement.

The voluntary partnership also includes a commitment to explore ways to step up efforts to redirect people away from this content when they come across it online. This approach is already widely being used successfully by platforms, for example around harmful content promoting extremism or eating disorders, where people are presented with alternative messages to displace, rebut or undermine the damaging content they searched for – diverting them away from harmful messaging and misinformation.
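
A minimal sketch of that redirection approach might look like the following, assuming a search handler that matches queries against a blocklist of smuggling-related terms; the term list, the message and the handler name are all hypothetical, not drawn from any real platform.

```python
# Hypothetical sketch of redirect messaging: when a search query matches
# known people-smuggling terms, return safety information instead of results.
SMUGGLING_TERMS = {"channel crossing offer", "small boat passage", "discount crossing"}

SAFETY_MESSAGE = (
    "Crossings arranged by smugglers are illegal and life-threatening. "
    "See official guidance on safe and legal routes."
)

def handle_search(query: str, run_search) -> str:
    normalised = query.lower().strip()
    if any(term in normalised for term in SMUGGLING_TERMS):
        return SAFETY_MESSAGE  # displace harmful content with counter-messaging
    return run_search(query)  # otherwise fall through to normal search

# Example: handle_search("Small boat passage to UK", lambda q: "results...")
```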

Alongside the partnership, the government will also set up a new centre led by the National Crime Agency and Home Office to increase the capacity and capability of law enforcement to identify this content on social media platforms.

Known as the ‘Online Capability Centre’, backed by £11m funding, its work will focus on undermining and disrupting the business model of organised crime groups responsible for illegal crossings and using the internet to facilitate these journeys by intensifying efforts to combat their online activity.

The centre will be staffed by highly trained technical specialists alongside law enforcement officers and will work by building a clearer picture of the scale of illegal immigration material online.

They will work with internet companies to identify more of this material, notifying platforms so they can take the appropriate action. The centre will also focus on developing and building a bank of intelligence around the criminal networks who are promoting people smuggling services online, which will help improve law enforcement’s ability to identify content and in turn help drive investigations.

To harness the potential of new technology such as AI to clamp down on criminals’ content, government will also hold a ‘hackathon’ event with industry experts in order to develop innovative new tools which will better detect people smugglers’ publicly available content online, to help social media companies take it down more quickly.

Government will also intensify the existing work taking place with social media companies ahead of the Online Safety Bill coming into effect.

Once in force, under the Bill social media companies will be required to make sure their systems and processes are designed to prevent people coming into contact with illegal content created by people smugglers, minimise how long this content is available online and remove it as soon as possible once they become aware of it.

Alongside this, the Bill also requires major platforms to publish annual transparency reports setting out what they’re doing to tackle online harms. This could include information around how content around illegal migration is spread across platforms, how frequently it is uploaded, and what systems and processes companies have in place to deal with this kind of content.

The partnership confirmed today also builds on the work of the “Social Media Action Plan”, a voluntary agreement between the Home Office, National Crime Agency and five major social media platforms in 2021 to increase understanding of how organised criminals used their platforms to promote illegal services.

To date, this cooperation has seen more than 4,700 posts, pages or accounts removed or suspended, increasing disruption of organised crime groups’ activity, and today’s partnership will drive further progress.

Stopping the boats is one of the Prime Minister’s top five priorities and the government is fully focused on delivering his whole-system plan to tackle illegal migration. This includes:

  • stepping up law enforcement activity, with 50% more illegal working visits carried out in the first half of this year compared to the first half of last year
  • tackling the legacy asylum backlog, which has reduced by nearly a third since the end of December
  • passing the Illegal Migration Act which will ensure that people who come to the UK illegally will be detained and swiftly removed.

Working with international partners to tackle this global challenge is another key strand of efforts to stop the boats, and since taking office the PM has secured new agreements with allies, including strengthened partnerships with France and Albania which will see 40% more patrols on French beaches, and have resulted in a 90% drop in Albanian small boat arrivals in the first quarter of 2023 compared to the same period last year.

New rules to crack down on illegal ads and protect children online

  • Crack down on fake celebrity endorsements and illegal weapons adverts as new Government rules safeguard consumers and protect children
  • Ministers will convene a new taskforce to drive industry-led action
  • Proposed rules will strike a balance between internet safety and supporting innovation

Social media platforms, websites and services like advertising display networks will have to take tougher action to stop children seeing age-restricted adverts for products like alcohol or gambling.

Fake celebrity scams and pop-up malware from hackers will also be clamped down on as part of new rules to make advertising regulation fit for the digital age.

The plans are published today by the government in response to its Online Advertising Programme.

Online advertising includes the banners or displays which appear around the content of a website, results prioritised at the top of search engines, and pop-ups on a user’s screen. It helps businesses grow by reaching targeted audiences and can be cheaper and quicker than traditional advertising formats. Last year it accounted for three quarters (£26.1 billion) of the £34.8 billion spent on advertising in the UK.

Its rapid development, combined with changes in technology and complex supply chains between marketers and platforms, make it difficult to stop illegal ads appearing.

People frequently encounter fraudulent celebrity endorsements for financial scams, legitimate-looking pop-ups containing hidden malware, and promotions for products prohibited under UK law – such as weapons, drugs, counterfeit fashion and fake ticketing.

Children can be exposed to ads for age-restricted products such as alcohol, gambling and adult-rated films and games.

Creative Industries Minister Sir John Whittingdale said: “Advertising is a huge industry in which Britain is a world leader. However, as online advertising has taken a steadily bigger share, the rules governing it have not kept pace and so we intend to strengthen them to ensure consumers are properly protected.

“Our plans will shut down the scammers using online adverts to con people out of their cash and will stop damaging and inappropriate products being targeted at children.

“We will make sure that our proposed regulation helps keep people safe while supporting and enhancing the legitimate advertising industry so it can maximise its innovation and potential.”

There is currently a self-regulatory system for the content and placement of online adverts in the UK, overseen by the Advertising Standards Authority (ASA). The ASA has a strong record of delivering consistent, effective results and holding legitimate advertisers accountable. However, regulators are not empowered to address illegal harms in the same way as they address harmful advertising by legitimate businesses.

The government intends to introduce new rules to tackle illegal paid-for online adverts and increase protections for children. A range of targeted legislative and non-legislative measures will address the most serious risks linked to online advertising. This approach complements the Online Safety Bill, which is targeted at user generated content, and will build on measures tackling fraudulent advertising in that legislation.

The new statutory regulation will put more responsibilities on major players across the online advertising supply chain. As well as online publishers, apps and websites serving ads, ‘adtech’ intermediary services which facilitate the placement and distribution of online adverts will be in scope. Promotional posts by social media influencers where they receive payment or free products will also be covered.

Social media firms, search engines and other websites will be required by law to have proportionate systems and processes to stop people being served illegal adverts, and prevent under-18s seeing adverts for products and services illegal to be sold to them. This will improve safety, transparency and consumer trust by introducing more effective action while supporting industry growth.
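
As a rough illustration of what an age-gating check inside an ad server could look like, here is a sketch under the assumption that each advert carries a category label and each user a verified age; the category names and the 18+ threshold are assumptions for the example, since the statutory rules are still to be consulted on.

```python
# Illustrative age-gating check for serving adverts. Categories and the
# threshold are assumptions; the actual statutory requirements may differ.
AGE_RESTRICTED_CATEGORIES = {"alcohol", "gambling", "adult_films"}

def can_serve_ad(ad_category: str, verified_age: int | None) -> bool:
    """Serve age-restricted adverts only to users verified as 18 or over."""
    if ad_category in AGE_RESTRICTED_CATEGORIES:
        return verified_age is not None and verified_age >= 18
    return True  # unrestricted categories can be served to anyone
```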

In due course, the government will launch a further consultation on the details of potential legislation – including its preferred choice for a regulator to oversee the new illegal paid-for advertising rules. New legislation would not affect the ASA’s remit for the content and placement of legitimate paid-for advertising online.

Ministers will this week convene a new taskforce to gather more evidence around illegal advertising and build on industry initiatives to tackle harms and increase protections for children before the legislation is introduced.

The taskforce will be chaired by Creative Industries Minister John Whittingdale and Mark Lund, the chair of the Advertising Standards Board of Finance and former president of McCann UK and Europe.

The group will include representatives from across the advertising industry, including the ASA, as well as tech trade bodies, consumer groups and the government’s Anti-Fraud Champion, Anthony Browne.

Mark Lund, chair of The Advertising Standards Board of Finance and deputy chair of the Online Advertising Taskforce, said: “UK advertising is a dynamic engine for the UK economy because it’s creative and trusted.

“So, I’m delighted to be helping lead the taskforce’s role in strengthening industry’s response to illegal harms advertising and the protection of children online, building on the long-term success of the ASA and the self-regulation system in keeping both trust and creativity at world-leading levels.”

Anti-Fraud Champion Anthony Browne said: “We remain absolutely committed to fighting fraud and this is another example of the government delivering on a pledge from its pioneering Fraud Strategy.

“Eighty percent of fraud is cyber enabled and it often starts with fraudulent posts and adverts on social media. I am therefore pleased to see new measures being introduced to tackle these.

“The government will continue to work with industry, and law enforcement, to prevent fraud from happening and ensure better support is given to the public.”

Seven Days to Stamp Deadline – Make a ‘Card Commitment’ to boost positivity

Use expiring stamps to bring a moment of joy

It’s seven days before millions of non-barcoded Royal Mail stamps become invalid, and the UK’s Greeting Card Association is asking Brits to use one of those stamps this week to change someone’s life.

Before the 31 July expiry date, the GCA, which is proud to represent many local high street card retailers, is encouraging people this week to make a ‘card commitment’, using one of those stamps to bring the power of thoughtfulness to someone who really needs it.

“Sending and exchanging cards promotes wellbeing and mental health, lighting up the life of recipients and senders alike,” said GCA chief executive officer Amanda Fergusson.

“What’s more, that simple act nurtures local independent businesses on the high streets we all love, supports local charities and organisations in the communities we care for, and helps protect the Royal Mail delivery service we all treasure.

“The use of an expiring stamp next week to connect with someone who would love to know you’re in their thoughts would be a small act that may have an incredible impact,” added Amanda.

The GCA will encourage Brits making a #Cardmitment this week to share their card-sending stories on the GCA’s Instagram site and social media feeds of its 500 members – from small high street card retailers to some of the largest publishers in a creative industry worth over £1.5bn to the UK economy.

The altruistic act of sending a card can be powerful, reducing the sender and recipient’s negativity, stress and loneliness, and promoting positive mental health.

Sending greeting cards can be a way to spread kindness and positivity, and doing so makes the sender and recipient feel connected, and better about themselves.

The suggestion marks the beginning of a significant GCA #Cardmitment campaign that, over the coming months, will highlight how powerful the simple, British act of sending a card can be to individuals, communities and society.

#Cardmitment

The most influential Wimbledon players of 2023

Novak Djokovic is the most influential tennis player at Wimbledon 2023 – but who else makes the top ten?

  • Novak Djokovic is the most influential tennis player competing at Wimbledon this year, generating up to £54,700 per sponsored Instagram post 
  • Nick Kyrgios is the second highest earning on Instagram, earning potentially up to £22,700 on each sponsored post
  • Andy Murray is the third most influential, earning up to £12,800 on each post, while Stefanos Tsitsipas is fourth, earning up to £12,200

A new study names Novak Djokovic as the most influential tennis player competing at Wimbledon in 2023, earning potentially £54,700 per sponsored Instagram post. 

The research, conducted by CasinoAlpha.co.nz, used an Instagram pricing calculator to create a list of the most influential Wimbledon tennis stars competing in this year’s tournament and established how much they can earn per sponsored Instagram post.
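
CasinoAlpha.co.nz does not publish the calculator’s formula, but influencer pricing tools typically scale a rate by follower count and engagement. The sketch below is a purely hypothetical model of that kind, with an invented coefficient, included only to show the shape of such an estimate.

```python
# Purely hypothetical pricing model: the real calculator's formula is not
# public, and the per-user coefficient below is invented to give figures of
# roughly the same magnitude as the study's estimates.
def estimate_post_value(followers: int, engagement_rate: float,
                        gbp_per_engaged_user: float = 0.15) -> float:
    """Estimate sponsored-post value from the expected engaged audience."""
    engaged_users = followers * engagement_rate
    return engaged_users * gbp_per_engaged_user

# Djokovic: 13,000,000 followers at 2.7% engagement
print(f"£{estimate_post_value(13_000_000, 0.027):,.0f}")  # prints £52,650
```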

Novak Djokovic, the current men’s singles Wimbledon champion, is the most influential tennis player at Wimbledon this year. He could earn up to £54,700 from each sponsored post on Instagram.

He has the highest following of any Wimbledon tennis player, at 13 million, and an engagement rate of 2.7%. Djokovic earned his following by winning seven Wimbledon titles and 94 singles titles altogether across his career at various competitions.

Nick Kyrgios has been named as the second most influential tennis player at Wimbledon this year, earning up to £22,700 per Instagram post. Although he has never won Wimbledon, Kyrgios is a favourite to win in 2023. He has the second highest follower count at 4.1 million and a high engagement rate of 3.7%.

The third most influential competitor is Andy Murray who can earn up to £12,800 on each sponsored post on Instagram.

Another men’s singles champion, Andy Murray, has won the Wimbledon championship twice and holds 46 singles titles, making him one of the favourites to win for the third time at Wimbledon this year. He has a following of 2 million and a high engagement rate of 6.1%.

The fourth most influential tennis player at Wimbledon this year is Stefanos Tsitsipas, potentially earning up to £12,200 per sponsored Instagram post. The tennis star, currently living in Monte Carlo, has a large following of 1.8 million and a 3.3% engagement rate.

Matteo Berrettini is the fifth most influential player at Wimbledon on Instagram. His following of 1.6 million and an engagement rate of 7.1% can earn him up to £11,100 on each sponsored post. 

Iga Swiatek is the sixth most influential tennis player at Wimbledon and the most influential female tennis player at Wimbledon this year.

Her combination of a follower count of 1.3 million and a high engagement rate of 9% could potentially earn her up to £9,300 on each sponsored Instagram post.

Commenting on the findings, Tudor Turiceanu, CEO of CasinoAlpha.co.nz, said: “Overall, Wimbledon is regarded as one of the greatest tournaments in the tennis calendar and is the oldest tennis tournament in the world, dating back to 1877.

“The players at Wimbledon are considered to be the best in the world and the Wimbledon title can allow a tennis player to become more influential on social media, reaping rewards both on the court and off.”

Name                 Instagram handle      Followers    Engagement rate   Earnings per sponsored post
Novak Djokovic       djokernole            13,000,000   2.7%              £54,700
Nick Kyrgios         k1ngkyrg1os           4,100,000    3.7%              £22,700
Andy Murray          andymurray            2,000,000    6.1%              £12,800
Stefanos Tsitsipas   stefanostsitsipas98   1,800,000    3.3%              £12,200
Matteo Berrettini    matberrettini         1,600,000    7.1%              £11,100
Iga Swiatek          iga.swiatek           1,300,000    9%                £9,300
Ons Jabeur           onsjabeur             913,100      5.1%              £7,100
Coco Gauff           cocogauff             890,500      0.2%              £7,000
Jannik Sinner        janniksin             841,000      5.8%              £6,700
Aryna Sabalenka      sablenka_aryna        661,600      9.7%              £5,600