A new mission announced by the Prime Minister will accelerate the use of AI in life sciences to tackle the biggest health challenges of our generation
In a speech on Thursday, the Prime Minister announced that £100 million in new government investment will be targeted at areas where rapid deployment of AI has the greatest potential to create transformational breakthroughs in treatments for previously incurable diseases.
The AI Life Sciences Accelerator Mission will capitalise on the UK’s unique strengths in secure health data and cutting-edge AI.
The Life Sciences Vision encompasses 8 critical healthcare missions that government, industry, the NHS, academia and medical research charities will work together at speed to solve – from cancer treatment to tackling dementia.
The £100 million will help drive forward this work by exploring how AI could address these conditions, which carry some of the highest rates of mortality and morbidity.
For example, AI could further the development of novel precision treatments for dementia. This new government funding for AI will help us harness the UK’s world-class health data to quickly identify those at risk of dementia and related conditions, ensure that the right patients are taking part in the right trials at the right time to develop new treatments effectively, and give us better data on how well new therapies work.
By using the power of AI to support the growing pipeline of new dementia therapies, we will ensure the best and most promising treatments are selected to go forwards, and that patients receive the right treatments that work best for them.
AI-driven technologies are showing remarkable promise in diagnosing, and potentially treating, mental ill health. For example, leading companies are already using conversational AI that supports people with mental health challenges and guides them through proactive prevention routines, escalating cases to human therapists when needed – all of which reduces the strain on NHS waiting lists.
This funding will help us to invest in parts of the UK where the clinical needs are greatest to test and trial new technologies within the next 18 months. Over the next 5 years, we will transform mental health research through developing world-class data infrastructure to improve the lives of those living with mental health conditions.
Prime Minister Rishi Sunak said: “AI can help us solve some of the greatest social challenges of our time. AI could help find novel dementia treatments or develop vaccines for cancer.
“That’s why today we’re investing a further £100 million to accelerate the use of AI on the most transformational breakthroughs in treatments for previously incurable diseases.”
Secretary of State for Science, Innovation and Technology Michelle Donelan said: “This £100 million Mission will bring the UK’s unique strengths in secure health data and cutting-edge AI to bear on some of the most pressing health challenges facing society.
“Safe, responsible AI will change the game for what is possible in healthcare, closing the gap between the discovery and application of innovative new therapies, diagnostic tools, and ways of working that will give clinicians more time with their patients.”
Health and Social Care Secretary Steve Barclay said: “Cutting-edge technology such as AI is the key to both improving patient care and supporting staff to do their jobs and we are seeing positive impacts across the NHS.
“This new accelerator fund will help us build on our efforts to harness the latest technology to unlock progress and drive economic growth.
“This is on top of the progress we have already made on AI deployment in the NHS, with AI tools now live in over 90% of stroke networks in England – halving the time it takes stroke victims to get treatment in some cases, helping to cut waiting times.”
Building on the success of partnerships already using AI in areas like identifying eye diseases, industry, academia and clinicians will be brought together to drive forward novel AI research into earlier diagnosis and faster drug discovery.
The government will invite proposals bringing together academia, industry and clinicians to develop innovative solutions.
This funding will target opportunities to deploy AI in clinical settings and improve health outcomes across a range of conditions. It will also look to fund novel AI research which has the potential to create general purpose applications across a range of health challenges – freeing up clinicians to spend more time with their patients.
This supports work the government is already doing across key disease areas. Using AI to tackle dementia, for example, builds on our commitment to double dementia research funding by 2024, reaching a total of £160 million a year.
The Dame Barbara Windsor Dementia Mission is at the heart of this, enabling us to accelerate dementia research and give patients access to the exciting new wave of medicines being developed.
Artificial Intelligence behind three times more daily tasks than we think
- Most people believe they use AI only once a day, when in fact they use it three times as often
- One in two of us (51%) feel nervous about the future of AI, with over a third concerned about privacy (36%) and that it will lead to mass unemployment (39%)
- However, nearly half of people recognise its potential for manufacturing (46%), over a third see its role in improving healthcare (38%) and medical diagnosis (32%), and a quarter of people think it can help in tackling climate change (24%)
- As the AI Safety Summit nears, over a third (36%) think the government needs to introduce more regulation as AI develops
The surge in Artificial Intelligence (AI) has left a third of us fearing the unknown, yet we have three times as many daily interactions with AI as most people realise, new research from the Institution of Engineering and Technology (IET) reveals.
On average, the UK public recognises AI plays a role in something we do at least once a day – whether that be curating a personalised playlist, mapping out the quickest route from A to B, or simply helping to write an email.
However, hidden touchpoints can be found in search engines (69%), social media (66%) and streaming services (51%), which all discreetly use AI, as well as in tools such as Google Translate (31%) and autocorrect and grammar checkers (29%).
Despite its everyday use, over half of us (51%) admit to nervousness about a future with AI, with nearly a third of people feeling anxious about what it could do in the future (31%). Over a third are concerned about privacy (36%) or fear that it will lead to mass unemployment (39%).
Those surveyed who felt nervous did so because of not knowing who controls AI (42%) and not being able to tell what is real or true among AI-generated fakes (40%). They also expressed concerns that AI will become autonomous and out of control (38%), and that it will surpass human intelligence (31%).
But people do recognise and welcome the role it will play in revolutionising key sectors, such as manufacturing (46%) and healthcare (39%) and specifically medical diagnosis (32%), as well as tackling issues such as climate change (24%).
Dr. Gopichand Katragadda, IET President and a globally recognised AI authority, said: “Artificial Intelligence holds the potential to drive innovation and enhance productivity across diverse sectors like construction, energy, healthcare, and manufacturing. Yet, it is imperative that we continually evolve ethical frameworks surrounding Data and AI applications to ensure their safe and responsible development and utilisation.
“It is natural for individuals to have concerns about AI, particularly given its recent proliferation in technical discussions and media coverage. However, it’s important to recognise that AI has a longstanding presence and already forms the foundation of many daily activities, such as facial recognition on social media, navigation on maps, and personalised entertainment recommendations.”
As the UK AI Safety Summit nears (1-2 November) – which will see global leaders gather to discuss the risks associated with AI and how they can be mitigated through coordinated action – the research reveals that 36% of Brits think the government needs to do more to regulate and manage AI development, with 30% of those who feel nervous about AI believing that government regulation cannot keep pace with AI’s evolution.
Those surveyed also shared concerns about the lack of information around AI and a lack of the skills and confidence needed to use the technology, with over a quarter of people saying they wished there was more information about how it works and how to use it (29%).
Gopi added: “What we need to see now is the UK government establishing firm rules on which data can and cannot be used to train AI systems – and ensure this is unbiased.
“This is necessary to ensure AI is used safely and to help prevent incidents from occurring – and it is fundamental to maintaining public trust, which underpins the economic and social benefits AI can bring.”
The research for the IET was carried out online by Opinion Matters from 16 to 18 October 2023 among a nationally representative panel of 2,008 consumers from across the UK.
To find out more about the IET’s work in AI, please visit: What the IET is doing around AI.
AI Summit dominated by Big Tech and a “missed opportunity”, say civil society organisations
- More than 100 UK and international organisations, experts and campaigners sign open letter to Rishi Sunak
- Groups warn that the “communities and workers most affected by AI have been marginalised by the Summit.”
- “Closed door event” is dominated by Big Tech and overly focused on speculative risks instead of AI threats “in the here and now”- PM told
- Signatories to letter include leading human rights organisations, trade union bodies, tech orgs, leading academics and experts on AI
More than 100 civil society organisations from across the UK and world have branded the government’s AI Summit as “a missed opportunity”.
In an open letter to Prime Minister Rishi Sunak the groups warn that the “communities and workers most affected by AI have been marginalised by the Summit” while a select few corporations seek to shape the rules.
The letter has been coordinated by the TUC, Connected by Data and Open Rights Group and is released ahead of the official AI Summit at Bletchley Park on 1 and 2 November. Signatories to the letter include:
- Major UK and international trade union confederations – such as the TUC, AFL-CIO, European Trade Union Confederation, UNI Global and the International Trade Union Confederation – representing tens of millions of workers worldwide
- International and UK human rights orgs – such as Amnesty International, Liberty, Article 19, Privacy International, Access Now
- Domestic and international civil society organisations – such as Connected by Data, Open Rights Group, 5Rights and Consumers International
- Tech community voices – such as Mozilla, the AI Now Institute and individuals associated with the AI Council, Alan Turing Institute and British Computer Society
- Leading international academics, experts, members of the House of Lords
Highlighting the exclusion of civil society from the Summit, the letter says: “Your ‘Global Summit on AI Safety’ seeks to tackle the transformational risks and benefits of AI, acknowledging that AI ‘will fundamentally alter the way we live, work, and relate to one another’.
“Yet the communities and workers most affected by AI have been marginalised by the Summit. The involvement of civil society organisations that bring a diversity of expertise and perspectives has been selective and limited.
“This is a missed opportunity.”
Highlighting the Summit’s lack of focus on immediate threats of AI and dominance of Big Tech, the letter says: “As it stands, the Summit is a closed door event, overly focused on speculation about the remote ‘existential risks’ of ‘frontier’ AI systems – systems built by the very same corporations who now seek to shape the rules.
“For many millions of people in the UK and across the world, the risks and harms of AI are not distant – they are felt in the here and now.
“This is about being fired from your job by algorithm, or unfairly profiled for a loan based on your identity or postcode.
“People are being subject to authoritarian biometric surveillance, or to discredited predictive policing.
“Small businesses and artists are being squeezed out, and innovation smothered as a handful of big tech companies capture even more power and influence.
“To make AI truly safe we must tackle these and many other issues of huge individual and societal significance. Successfully doing so will lay the foundations for managing future risks.”
Calling for a more inclusive approach to managing the risks of AI, the letter concludes: “For the Summit itself and the work that has to follow, a wide range of expertise and the voices of communities most exposed to AI harms must have a powerful say and equal seat at the table. The inclusion of these voices will ensure that the public and policy makers get the full picture.
“In this way we can work towards ensuring the future of AI is as safe and beneficial as possible for communities in the UK and across the world.”
Senior Campaigns and Policy Officer for Connected by Data Adam Cantwell-Corn said: “AI must be shaped in the interests of the wider public. This means ensuring that a range of expertise, perspectives and communities have an equal seat at the table. The Summit demonstrates a failure to do this.
“The open letter is a powerful, diverse and international challenge to the unacceptable domination of AI policy by narrow interests.
“Beyond the Summit, AI policy making needs a re-think – domestically and internationally – to steer these transformative technologies in a democratic and socially useful direction.”
TUC Assistant General Secretary Kate Bell said: “It is hugely disappointing that unions and wider civil society have been denied proper representation at this Summit. AI is already making life-changing decisions – like how we work, how we’re hired and who gets fired.
“But working people have yet to be given a seat at the table.
“This event was an opportunity to bring together a wide range of voices to discuss how we deal with immediate threats and make sure AI benefits all.
“It shouldn’t just be tech bros and politicians who get to shape the future of AI.”
Open Rights Group Policy Manager for Data Rights and Privacy Abby Burke said: “The government has bungled what could have been an opportunity for real global AI leadership due to the Summit’s limited scope and invitees.
“The agenda’s focus on future, apocalyptic risks belies the fact that government bodies and institutions in the UK are already deploying AI and automated decision-making in ways that are exposing citizens to error and bias on a massive scale.
“It’s extremely concerning that the government has excluded those who are experiencing harms and other critical expert and activist voices from its Summit, allowing businesses who create and profit from AI systems to set the UK’s agenda.”