Experts convene for day one of first global AI Safety Summit

  • The US, France, Germany, Italy, Japan and China among nations confirmed to attend Bletchley Park Summit
  • Historic venue will play host to crucial talks on the risks and opportunities posed by rapid advances in frontier AI
  • Technology Secretary Michelle Donelan to call for international collaboration to mitigate the risks of AI

Leading AI nations, businesses, civil society and AI experts will convene at Bletchley Park today (Wednesday 1 November) for the first ever AI Safety Summit where they’ll discuss the global future of AI and work towards a shared understanding of its risks.

Technology Secretary Michelle Donelan will open the event by welcoming an expert cast list before setting out the UK government’s vision for safety and security to be at the heart of advances in AI, in order to enable the enormous opportunities it will bring.

She will look to make progress in talks that pave the way for a safer world by identifying risks, opportunities and the need for international collaboration, before highlighting the consensus on the scale, importance and urgency of AI’s opportunities, and the necessity of mitigating frontier AI risks in order to unlock them.

The historic venue will play host to the landmark 2-day summit, which will see a small but focused group comprising AI companies, civil society and independent experts gather around the table to kickstart urgent talks on the risks and opportunities posed by rapid advances in frontier AI – especially ahead of new models launching next year, whose capabilities may not be fully understood.

The US, France, Germany, Italy, Japan and China are among nations confirmed as attendees at the AI Safety Summit. Representatives from The Alan Turing Institute, the Organisation for Economic Co-operation and Development (OECD) and the Ada Lovelace Institute are also among the groups confirmed to attend, highlighting the depth of expertise of the delegates expected to take part in crucial talks.

As set out by Prime Minister Rishi Sunak last week, the summit will focus on understanding the risks, from potential threats to national security through to the dangers that a loss of control of the technology could bring. Discussions around issues likely to impact society, such as election disruption and the erosion of social trust, are also set to take place.

The UK’s AI sector already employs over 50,000 people and contributes £3.7 billion to the economy annually. Additionally, the UK is home to twice as many AI companies as any other European country, and hundreds more AI companies start up in the UK every year, growing the economy and creating more jobs.

As such, day one of the summit will also host several roundtable discussions dedicated to improving frontier AI safety with key developers such as OpenAI, Anthropic and the UK-based DeepMind. Delegates will consider how risk thresholds, effective safety assessments, and robust governance and accountability mechanisms can be defined to enable developers to scale frontier AI safely.

Secretary of State for Technology, Michelle Donelan MP said: “AI is already an extraordinary force for good in our society, with limitless opportunity to grow the global economy, deliver better public services and tackle some of the world’s biggest challenges.

“But the risks posed by frontier AI are serious and substantive and it is critical that we work together, both across sectors and countries to recognise these risks.

“This summit provides an opportunity for us to ensure we have the right people with the right expertise gathered around the table to discuss how we can mitigate these risks moving forward. Only then will we be able to truly reap the benefits of this transformative technology in a responsible manner.”

Discussions are expected to centre on the risks emerging from rapid advances in AI, before exploring the transformative opportunities the technology has to offer – including in education and in areas for international research collaboration.

The Secretary of State will be joined by members of the UK’s Frontier AI Taskforce – including its Chair, Ian Hogarth – which was launched earlier this year to evaluate the risks of frontier AI models, and by representatives from nations at the cutting-edge of AI development.

They will also look at what national policymakers, the international community, and scientists and researchers can do to manage the risks and harness the opportunities of AI to deliver economic and social benefits around the world.

Day one will conclude with a panel discussion on the transformative opportunities of AI for public good now and in the long-term, with a focus on how it can be used by teachers and students to revolutionise education.

Technology Secretary Michelle Donelan will also take to the podium to deliver closing remarks to delegates, before the curtain falls on what is hoped will be an historic first day of the first ever global AI Safety Summit.

AI Summit is dominated by Big Tech and a “missed opportunity”

  • More than 100 UK and international organisations, experts and campaigners sign open letter to Rishi Sunak
  • Groups warn that the “communities and workers most affected by AI have been marginalised by the Summit.”
  • “Closed door event” is dominated by Big Tech and overly focused on speculative risks instead of AI threats “in the here and now”, PM told
  • Signatories to letter include leading human rights organisations, trade union bodies, tech orgs, leading academics and experts on AI

More than 100 civil society organisations from across the UK and world have branded the government’s AI Summit as “a missed opportunity”.

In an open letter to Prime Minister Rishi Sunak, the groups warn that the “communities and workers most affected by AI have been marginalised by the Summit” while a select few corporations seek to shape the rules.

The letter has been coordinated by the TUC, Connected by Data and Open Rights Group and is released ahead of the official AI Summit at Bletchley Park on 1 and 2 November. Signatories to the letter include:

  • Major and international trade union confederations – such as the TUC, AFL-CIO, European Trade Union Confederation, UNI Global and the International Trade Union Confederation – representing tens of millions of workers worldwide
  • International and UK human rights orgs – such as Amnesty International, Liberty, Article 19, Privacy International, Access Now
  • Domestic and international civil society organisations – such as Connected by Data, Open Rights Group, 5Rights, Consumers International
  • Tech community voices – such as Mozilla, AI Now Institute and individuals associated with the AI Council, Alan Turing Institute and British Computer Society
  • Leading international academics, experts, members of the House of Lords

Highlighting the exclusion of civil society from the Summit, the letter says: “Your ‘Global Summit on AI Safety’ seeks to tackle the transformational risks and benefits of AI, acknowledging that AI ‘will fundamentally alter the way we live, work, and relate to one another’.

“Yet the communities and workers most affected by AI have been marginalised by the Summit.

“The involvement of civil society organisations that bring a diversity of expertise and perspectives has been selective and limited.

“This is a missed opportunity.”

Highlighting the Summit’s lack of focus on immediate threats of AI and dominance of Big Tech, the letter says: “As it stands, the Summit is a closed door event, overly focused on speculation about the remote ‘existential risks’ of ‘frontier’ AI systems – systems built by the very same corporations who now seek to shape the rules.

“For many millions of people in the UK and across the world, the risks and harms of AI are not distant – they are felt in the here and now.

“This is about being fired from your job by algorithm, or unfairly profiled for a loan based on your identity or postcode.

“People are being subject to authoritarian biometric surveillance, or to discredited predictive policing.

“Small businesses and artists are being squeezed out, and innovation smothered as a handful of big tech companies capture even more power and influence.

“To make AI truly safe we must tackle these and many other issues of huge individual and societal significance. Successfully doing so will lay the foundations for managing future risks.”

Calling for a more inclusive approach to managing the risks of AI, the letter concludes: “For the Summit itself and the work that has to follow, a wide range of expertise and the voices of communities most exposed to AI harms must have a powerful say and equal seat at the table. The inclusion of these voices will ensure that the public and policy makers get the full picture.

“In this way we can work towards ensuring the future of AI is as safe and beneficial as possible for communities in the UK and across the world.”

Senior Campaigns and Policy Officer for Connected by Data Adam Cantwell-Corn said: “AI must be shaped in the interests of the wider public. This means ensuring that a range of expertise, perspectives and communities have an equal seat at the table. The Summit demonstrates a failure to do this.

“The open letter is a powerful, diverse and international challenge to the unacceptable domination of AI policy by narrow interests.

“Beyond the Summit, AI policy making needs a re-think – domestically and internationally – to steer these transformative technologies in a democratic and socially useful direction.”

TUC Assistant General Secretary Kate Bell said: “It is hugely disappointing that unions and wider civil society have been denied proper representation at this Summit.

“AI is already making life-changing decisions – like how we work, how we’re hired and who gets fired.

“But working people have yet to be given a seat at the table.

“This event was an opportunity to bring together a wide range of voices to discuss how we deal with immediate threats and make sure AI benefits all.

“It shouldn’t just be tech bros and politicians who get to shape the future of AI.”

Open Rights Group Policy Manager for Data Rights and Privacy Abby Burke said: “The government has bungled what could have been an opportunity for real global AI leadership due to the Summit’s limited scope and invitees.

“The agenda’s focus on future, apocalyptic risks belies the fact that government bodies and institutions in the UK are already deploying AI and automated decision-making in ways that are exposing citizens to error and bias on a massive scale.

“It’s extremely concerning that the government has excluded those who are experiencing harms and other critical expert and activist voices from its Summit, allowing businesses who create and profit from AI systems to set the UK’s agenda.”

The full letter reads:

An open letter to the Prime Minister on the ‘Global Summit on AI Safety’

Dear Prime Minister,

Your ‘Global Summit on AI Safety’ seeks to tackle the transformational risks and benefits of AI, acknowledging that AI “will fundamentally alter the way we live, work, and relate to one another”.

Yet the communities and workers most affected by AI have been marginalised by the Summit.

The involvement of civil society organisations that bring a diversity of expertise and perspectives has been selective and limited.

This is a missed opportunity.

As it stands, the Summit is a closed door event, overly focused on speculation about the remote ‘existential risks’ of ‘frontier’ AI systems – systems built by the very same corporations who now seek to shape the rules.

For many millions of people in the UK and across the world, the risks and harms of AI are not distant – they are felt in the here and now.

This is about being fired from your job by algorithm, or unfairly profiled for a loan based on your identity or postcode.

People are being subject to authoritarian biometric surveillance, or to discredited predictive policing.

Small businesses and artists are being squeezed out, and innovation smothered as a handful of big tech companies capture even more power and influence.

To make AI truly safe we must tackle these and many other issues of huge individual and societal significance. Successfully doing so will lay the foundations for managing future risks.

For the Summit itself and the work that has to follow, a wide range of expertise and the voices of communities most exposed to AI harms must have a powerful say and equal seat at the table. The inclusion of these voices will ensure that the public and policy makers get the full picture.

In this way we can work towards ensuring the future of AI is as safe and beneficial as possible for communities in the UK and across the world.

XL Bully type dogs to be banned

From 31 December 2023, breeding, selling, advertising, rehoming, abandoning or allowing an XL Bully dog to stray will be illegal

New laws banning XL Bully type dogs have been laid in Parliament today, as the Government adds the breed to the list of dogs banned under the Dangerous Dogs Act.

The announcement fulfils the Government’s pledge to put in place laws to ban the breed by the end of the year and protect the public following a concerning rise in fatal attacks.

Under the new rules, which come into force at the end of the year, it will be illegal to breed, sell, advertise, exchange, gift, rehome, abandon or allow XL Bully dogs to stray in England and Wales.

From this date, these dogs must be kept on a lead and muzzled in public. Owners of XL Bully dogs are encouraged to start training their dog to wear a muzzle and to walk on a lead ahead of the legal restrictions coming into force.

Breeders have also been told to stop mating these types of dogs from now on, in preparation for it becoming a criminal offence to sell or rehome these dogs.

From 1 February 2024, it will then become illegal to own an XL Bully dog if it is not registered on the Index of Exempted Dogs. By staggering these two dates, current owners of this breed will have time to prepare for the new rules.

Owners who wish to keep their dogs will have until the end of January to register them and will be forced to comply with strict requirements. As well as being muzzled and kept on a lead in public, these dogs must also be microchipped and neutered.

Dogs under one year old when the ban comes in must be neutered by the end of 2024; older dogs must be neutered by the end of June 2024.

From 1 February, owners without a Certificate of Exemption face a criminal record and an unlimited fine if they are found to be in possession of an XL Bully type, and their dog could be seized.

Environment Secretary Thérèse Coffey said: “We are taking quick and decisive action to protect the public from tragic dog attacks and today I have added the XL Bully type to the list of dogs prohibited under the Dangerous Dogs Act.  

“It will soon become a criminal offence to breed, sell, advertise, rehome or abandon an XL Bully type dog, and they must also be kept on a lead and muzzled in public. In due course it will also be illegal to own one of these dogs without an exemption.

“We will continue to work closely with the police, canine and veterinary experts, and animal welfare groups, as we take forward these important measures.”

Owners may choose to have their dog put to sleep by a vet, with compensation provided to help with these costs. Further details on how to apply for compensation and the evidence required to make a claim will be provided soon.  

As part of the process, the definition of the ‘XL Bully’ breed type has also been published today. This follows meetings of an expert group, convened by the Environment Secretary and made up of police, local authority, veterinary and other animal welfare experts, to help define the breed. The definition provides clear assessment criteria for owners and enforcement authorities and is a requirement under the Dangerous Dogs Act in order to deliver the ban.

Owners can access the most up-to-date information on what action they need to take, and when, on the dedicated page, Prepare for the ban on XL Bully dogs – GOV.UK (www.gov.uk).

Owners whose dogs are dangerously out of control are already breaking the law, and the enforcement authorities have a full range of powers to apply penalties to them. Under the Dangerous Dogs Act, people can be put in prison for up to 14 years, be disqualified from ownership or their dangerous dogs can be euthanised.