Experts convene for day one of first global AI Safety Summit

  • The US, France, Germany, Italy, Japan and China among nations confirmed to attend Bletchley Park Summit
  • historic venue will play host to crucial talks around risks and opportunities posed by rapid advances in frontier AI
  • Secretary of State Michelle Donelan to call for international collaboration to mitigate risks of AI

Leading AI nations, businesses, civil society and AI experts will convene at Bletchley Park today (Wednesday 1 November) for the first ever AI Safety Summit where they’ll discuss the global future of AI and work towards a shared understanding of its risks.

Technology Secretary Michelle Donelan will open the event by welcoming an expert list of delegates before setting out the UK government’s vision for safety and security to be at the heart of advances in AI, in order to enable the enormous opportunities it will bring.

She will seek to make progress in talks that pave the way for a safer world by identifying risks, opportunities and the need for international collaboration, before highlighting the emerging consensus on the scale, importance and urgency of AI’s opportunities, and on the need to mitigate frontier AI risks in order to unlock them.

The historic venue will play host to the landmark 2-day summit, which will see a small but focused group comprising AI companies, civil society and independent experts gather around the table to kickstart urgent talks on the risks and opportunities posed by rapid advances in frontier AI – especially ahead of new models launching next year, whose capabilities may not be fully understood.

The US, France, Germany, Italy, Japan and China are among nations confirmed as attendees at the AI Safety Summit. Representatives from The Alan Turing Institute, the Organisation for Economic Co-operation and Development (OECD) and the Ada Lovelace Institute are also among the groups confirmed to attend, highlighting the depth of expertise of the delegates who are expected to take part in crucial talks.

As set out by Prime Minister Rishi Sunak last week, the summit will focus on understanding the risks such as potential threats to national security right through to the dangers a loss of control of the technology could bring. Discussions around issues likely to impact society, such as election disruption and erosion of social trust are also set to take place.

The UK already employs over 50,000 people in the AI sector and contributes £3.7 billion to our economy annually. Additionally, the UK is home to twice as many AI companies as any other European country, and hundreds more AI companies start up in the UK every year, growing our economy and creating more jobs.

As such, day one of the summit will also host several roundtable discussions dedicated to improving frontier AI safety with key developers such as OpenAI, Anthropic and the UK-based Google DeepMind. Delegates will consider how risk thresholds, effective safety assessments, and robust governance and accountability mechanisms can be defined to enable the safe scaling of frontier AI by developers.

Secretary of State for Technology, Michelle Donelan MP said: “AI is already an extraordinary force for good in our society, with limitless opportunity to grow the global economy, deliver better public services and tackle some of the world’s biggest challenges.

“But the risks posed by frontier AI are serious and substantive and it is critical that we work together, both across sectors and countries to recognise these risks.

“This summit provides an opportunity for us to ensure we have the right people with the right expertise gathered around the table to discuss how we can mitigate these risks moving forward. Only then will we be able to truly reap the benefits of this transformative technology in a responsible manner.”

Discussions are expected to centre around the risks emerging from rapid advances in AI, before exploring the transformative opportunities the technology has to offer – including in education and areas for international research collaborations.  

The Secretary of State will be joined by members of the UK’s Frontier AI Taskforce – including its Chair, Ian Hogarth – which was launched earlier this year to evaluate the risks of frontier AI models, and by representatives from nations at the cutting-edge of AI development.

They will also look at what national policymakers, the international community, and scientists and researchers can do to manage the risks and harness the opportunities of AI to deliver economic and social benefits around the world.

Day one will conclude with a panel discussion on the transformative opportunities of AI for public good now and in the long-term, with a focus on how it can be used by teachers and students to revolutionise education.

Technology Secretary Michelle Donelan will also take to the podium to deliver closing remarks to delegates, before the curtain falls on what is hoped will be an historic first day of the first ever global AI Safety Summit.

AI Summit is dominated by Big Tech and a “missed opportunity”

  • More than 100 UK and international organisations, experts and campaigners sign open letter to Rishi Sunak
  • Groups warn that the “communities and workers most affected by AI have been marginalised by the Summit.”
  • “Closed door event” is dominated by Big Tech and overly focused on speculative risks instead of AI threats “in the here and now” – PM told
  • Signatories to letter include leading human rights organisations, trade union bodies, tech orgs, leading academics and experts on AI

More than 100 civil society organisations from across the UK and world have branded the government’s AI Summit as “a missed opportunity”.

In an open letter to Prime Minister Rishi Sunak the groups warn that the “communities and workers most affected by AI have been marginalised by the Summit” while a select few corporations seek to shape the rules.

The letter has been coordinated by the TUC, Connected by Data and Open Rights Group and is released ahead of the official AI Summit at Bletchley Park on 1 and 2 November. Signatories to the letter include:

  • Major and international trade union confederations – such as the TUC, AFL-CIO, the European Trade Union Confederation, UNI Global and the International Trade Union Confederation – representing tens of millions of workers worldwide
  • International and UK human rights orgs – such as Amnesty International, Liberty, Article 19, Privacy International, Access Now
  • Domestic and international civil society organisations – such as Connected by Data, Open Rights Group, 5 Rights, Consumers International
  • Tech community voices – such as Mozilla, AI Now Institute and individuals associated with the AI Council, Alan Turing Institute and British Computer Society
  • Leading international academics, experts, members of the House of Lords

Highlighting the exclusion of civil society from the Summit, the letter says: “Your ‘Global Summit on AI Safety’ seeks to tackle the transformational risks and benefits of AI, acknowledging that AI “will fundamentally alter the way we live, work, and relate to one another”.

Yet the communities and workers most affected by AI have been marginalised by the Summit.

The involvement of civil society organisations that bring a diversity of expertise and perspectives has been selective and limited.

This is a missed opportunity.”

Highlighting the Summit’s lack of focus on immediate threats of AI and dominance of Big Tech, the letter says: “As it stands, the Summit is a closed door event, overly focused on speculation about the remote ‘existential risks’ of ‘frontier’ AI systems – systems built by the very same corporations who now seek to shape the rules.

For many millions of people in the UK and across the world, the risks and harms of AI are not distant – they are felt in the here and now.

“This is about being fired from your job by algorithm, or unfairly profiled for a loan based on your identity or postcode.

People are being subject to authoritarian biometric surveillance, or to discredited predictive policing.

Small businesses and artists are being squeezed out, and innovation smothered as a handful of big tech companies capture even more power and influence.

To make AI truly safe we must tackle these and many other issues of huge individual and societal significance. Successfully doing so will lay the foundations for managing future risks.”

Calling for a more inclusive approach to managing the risks of AI, the letter concludes: “For the Summit itself and the work that has to follow, a wide range of expertise and the voices of communities most exposed to AI harms must have a powerful say and equal seat at the table. The inclusion of these voices will ensure that the public and policy makers get the full picture.

In this way we can work towards ensuring the future of AI is as safe and beneficial as possible for communities in the UK and across the world.”

Senior Campaigns and Policy Officer for Connected by Data Adam Cantwell-Corn said: “AI must be shaped in the interests of the wider public. This means ensuring that a range of expertise, perspectives and communities have an equal seat at the table. The Summit demonstrates a failure to do this.

“The open letter is a powerful, diverse and international challenge to the unacceptable domination of AI policy by narrow interests.

“Beyond the Summit, AI policy making needs a re-think – domestically and internationally – to steer these transformative technologies in a democratic and socially useful direction.”

TUC Assistant General Secretary Kate Bell said: “It is hugely disappointing that unions and wider civil society have been denied proper representation at this Summit.

“AI is already making life-changing decisions – like how we work, how we’re hired and who gets fired.

“But working people have yet to be given a seat at the table.

“This event was an opportunity to bring together a wide range of voices to discuss how we deal with immediate threats and make sure AI benefits all.

“It shouldn’t just be tech bros and politicians who get to shape the future of AI.”

Open Rights Group Policy Manager for Data Rights and Privacy Abby Burke said: “The government has bungled what could have been an opportunity for real global AI leadership due to the Summit’s limited scope and invitees.

“The agenda’s focus on future, apocalyptic risks belies the fact that government bodies and institutions in the UK are already deploying AI and automated decision-making in ways that are exposing citizens to error and bias on a massive scale.

“It’s extremely concerning that the government has excluded those who are experiencing harms and other critical expert and activist voices from its Summit, allowing businesses who create and profit from AI systems to set the UK’s agenda.”

The full letter reads:

An open letter to the Prime Minister on the ‘Global Summit on AI Safety’

Dear Prime Minister,

Your ‘Global Summit on AI Safety’ seeks to tackle the transformational risks and benefits of AI, acknowledging that AI “will fundamentally alter the way we live, work, and relate to one another”.

Yet the communities and workers most affected by AI have been marginalised by the Summit.

The involvement of civil society organisations that bring a diversity of expertise and perspectives has been selective and limited.

This is a missed opportunity.

As it stands, the Summit is a closed door event, overly focused on speculation about the remote ‘existential risks’ of ‘frontier’ AI systems – systems built by the very same corporations who now seek to shape the rules.

For many millions of people in the UK and across the world, the risks and harms of AI are not distant – they are felt in the here and now.

This is about being fired from your job by algorithm, or unfairly profiled for a loan based on your identity or postcode.

People are being subject to authoritarian biometric surveillance, or to discredited predictive policing.

Small businesses and artists are being squeezed out, and innovation smothered as a handful of big tech companies capture even more power and influence.

To make AI truly safe we must tackle these and many other issues of huge individual and societal significance. Successfully doing so will lay the foundations for managing future risks.

For the Summit itself and the work that has to follow, a wide range of expertise and the voices of communities most exposed to AI harms must have a powerful say and equal seat at the table. The inclusion of these voices will ensure that the public and policy makers get the full picture.

In this way we can work towards ensuring the future of AI is as safe and beneficial as possible for communities in the UK and across the world.

‘Game-changing’ exascale supercomputer planned for Edinburgh

Edinburgh has been selected to host a next-generation supercomputer that will fuel economic growth and create high-skilled jobs, building on the success of a Bristol-based AI supercomputer

  • Edinburgh nominated to host next-generation compute system, 50 times more powerful than our current top-end system
  • national facility – one of the world’s most powerful – will help unlock major advances in AI, medical research, climate science and clean energy innovation, boosting economic growth
  • new exascale system follows AI supercomputer in Bristol in transforming the future of UK science and tech and providing high-skilled jobs

Edinburgh is poised to host a next-generation compute system amongst the fastest in the world, with the potential to revolutionise breakthroughs in artificial intelligence, medicine, and clean low-carbon energy.

The capital has been named as the preferred choice to host the new national exascale facility, as the UK government continues to invest in the country’s world-leading computing capacity – crucial to the running of modern economies and cutting-edge scientific research.

Exascale is the next frontier in computing power, where systems are built to carry out extremely complex functions with increased speed and precision. This in turn enables researchers to accelerate their work into some of the most pressing challenges we face, including the development of new drugs, and advances in nuclear fusion to produce potentially limitless clean low-carbon energy.

The exascale system hosted at the University of Edinburgh will be able to carry out these complicated workloads while also supporting critical research into AI safety and development, as the UK seeks to safely harness its potential to improve lives across the country.

Science, Innovation and Technology Secretary Michelle Donelan said: “If we want the UK to remain a global leader in scientific discovery and technological innovation, we need to power up the systems that make those breakthroughs possible.

“This new UK government funded exascale computer in Edinburgh will provide British researchers with an ultra-fast, versatile resource to support pioneering work into AI safety, life-saving drugs, and clean low-carbon energy.

“It is part of our £900 million investment in uplifting the UK’s computing capacity, helping us forge a stronger Union, drive economic growth, create the high-skilled jobs of the future and unlock bold new discoveries that improve people’s lives.”

Computing power is measured in ‘flops’ – floating point operations – that is, the number of arithmetic calculations a computer can perform every second. An exascale system will be 50 times more powerful than our current top-end system, ARCHER2, which is also housed in Edinburgh.
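The scale of these figures can be made concrete with a quick back-of-envelope calculation. The sketch below uses only the numbers implied by the article – one exaflop is 10^18 operations per second, and the “50 times” comparison with ARCHER2 – so the resulting ARCHER2 figure is illustrative, not an official benchmark:

```python
# Back-of-envelope comparison of exascale computing power.
# All figures are illustrative, derived only from the article's
# "50 times more powerful than ARCHER2" claim.

EXAFLOP = 10**18  # one exaflop: 10^18 floating point operations per second

exascale_flops = 1 * EXAFLOP
archer2_flops = exascale_flops / 50  # implied ARCHER2 throughput

print(f"Exascale system:  {exascale_flops:.0e} flop/s")
print(f"Implied ARCHER2:  {archer2_flops:.0e} flop/s")  # ~20 petaflops
print(f"Speed-up factor:  {exascale_flops / archer2_flops:.0f}x")
```

Under these assumptions, ARCHER2 works out at roughly 2 × 10^16 flop/s (about 20 petaflops), which is broadly in line with publicly reported figures for that machine.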

The investment will mean new high-skilled jobs for Edinburgh, while the new national facility would vastly upgrade the UK’s research, technology and innovation capabilities, helping to boost economic growth, productivity and prosperity across the country in support of the Prime Minister’s priorities.

UK Research and Innovation Chief Executive Professor Dame Ottoline Leyser said: “State-of-the-art compute infrastructure is critical to unlock advances in research and innovation, with diverse applications from drug design through to energy security and extreme weather modelling, benefiting communities across the UK. 

“This next phase of investment, located at Edinburgh, will help to keep the UK at the forefront of emerging technologies and facilitate the collaborations needed to explore and develop game-changing insights across disciplines.”

Secretary of State for Scotland, Alister Jack, said: “We have already seen the vital work being carried out by ARCHER2 in Edinburgh and this new exascale system, backed by the UK government, will keep Scotland at the forefront of science and innovation.

“As well as supporting researchers in their critical work on AI safety this will bring highly skilled jobs to Edinburgh and support economic growth for the region.”

The announcement follows the news earlier this month that Bristol will play host to a new AI supercomputer, named Isambard-AI, which will be one of the most powerful for AI in Europe.

The cluster will act as part of the national AI Research Resource (AIRR) to maximise the potential of AI and support critical work around the safe development and use of the technology.

Plans for both the exascale system and the AIRR were first announced in March, as part of a £900 million investment to upgrade the UK’s next-generation compute capacity, and will deliver on two of the recommendations set out in the independent review into the Future of Compute.

Both announcements come as the UK prepares to host the world’s first AI Safety Summit on 1 and 2 November.

The summit will bring together leading countries, technology organisations, academics and civil society to ensure we have global consensus on the risks emerging from the most immediate and rapid advances in AI and how they are managed, while also maximising the benefits of the safe use of the technology to improve lives.