£12 million for UK projects to address rapid AI advances


A series of breakthrough projects has been awarded £12 million to address the challenges of rapid advances in artificial intelligence (AI).

Three initiatives in the UK will look to tackle emerging concerns around generative AI and other forms of AI currently being built and deployed across society.

The projects cover the health and social care sectors, law enforcement and financial services.

An additional two projects, funded by UK Research and Innovation (UKRI), are looking at both how responsible AI can help drive productivity and how public voices can be amplified in the design and deployment of these technologies.

Funding has been awarded by Responsible AI UK (RAI UK), and the projects form the pillars of its £31 million programme, which will run for four years. RAI UK is led from the University of Southampton and backed by UKRI through the UKRI Technology Missions Fund and the Engineering and Physical Sciences Research Council (EPSRC). UKRI has also committed an additional £4 million of funding to further support these initiatives.

Addressing complex socio-technical challenges

Professor of AI Gopal Ramchurn, from the University of Southampton and CEO of RAI UK, said the projects are multidisciplinary and bring together computer and social scientists, alongside other specialists.

He added:

These projects are the keystones of the Responsible AI UK programme and have been chosen because they address the most pressing challenges that society faces with the rapid advances in AI.

The projects will deliver interdisciplinary research that looks to address the complex socio-technical challenges that already exist or are emerging with the use of generative AI and other forms of AI deployed in the real-world.

The concerns around AI are not just for governments and industry to deal with – it is important that AI experts engage with researchers and policymakers to ensure we can better anticipate the issues that will be caused by AI.

Turning the UK into a powerhouse for future AI development

Since its launch last year, RAI UK has delivered £13 million of research funding. It is developing its own research programme to support ongoing work across major initiatives such as the AI Safety Institute, the Alan Turing Institute, and Bridging Responsible AI Divides UK.

RAI UK is supported by UKRI, the largest public funder of research and innovation, as part of government plans to turn the UK into a powerhouse for future AI development.

Dr Kedar Pandya, UKRI Technology Missions Fund Senior Responsible Owner and Executive Director at EPSRC, said:

AI has great potential to drive positive impacts across both our society and economy. This £4 million of funding through the UKRI Technology Missions Fund will support projects that are considering the responsible use of AI within specific contexts. These projects showcase strong features of the responsible AI ecosystem we have within the UK and will build partnerships across a diverse set of organisations working on shared challenges.

These investments complement UKRI’s £1 billion portfolio of investments in AI research and innovation, and will help strengthen public trust in AI, maximising the value of this transformative technology.

Using AI to support police and courts

The £10.5 million awarded to the keystone projects was allocated from UKRI’s Technology Missions Fund investment at the inception of RAI UK last year.

This includes nearly £3.5 million for the PROBabLE Futures project, which is focusing on the uncertainties of using AI for law enforcement.

Its lead, Professor Marion Oswald MBE from Northumbria University, said that AI can help police and the courts to tackle digital data overload and unknown risks, and to increase operational efficiency.

She added:

The key problem is that AI tools take inputs from one part of the law enforcement system, but their outputs have real-world, possibly life-changing, effects in another part – a miscarriage of justice is only a matter of time.

Our project works alongside law enforcement and partners to develop a framework that understands the implications of uncertainty and builds confidence in future probabilistic AI, with the interests of justice and responsibility at its heart.

Limited trust in large language models

Around £3.5 million has also been awarded to a project addressing the limitations of large language models, known as LLMs, in medical and legal settings.

Professor in Natural Language Processing Maria Liakata, from Queen Mary University of London, said:

LLMs are being rapidly adopted without forethought about the repercussions.

For instance, UK judges are allowed to use LLMs to summarise court cases and, on the medical side, public medical question answering services are being rolled out.

Our vision addresses the socio-technical limitations of LLMs that challenge their responsible and trustworthy use, particularly in medical and legal use cases.

Power back in hands of people who understand AI

The remaining £3.5 million is for the Participatory Harm Auditing Workbenches and Methodologies project led from the University of Glasgow.

According to principal investigator Dr Simone Stumpf, its aim is to maximise the potential benefits of predictive and generative AI while minimising the potential for harm arising from bias and ‘hallucinations’, where AI tools present false or invented information as fact.

She added:

Our project will put auditing power back in the hands of the people who best understand the potential impact in the four fields in which these AI systems operate.

By the project’s conclusion, we will have developed a fully featured workbench of tools to enable people without a background in artificial intelligence to participate in audits, make informed decisions, and shape the next generation of AI.

Read more about the three AI projects and RAI UK.

Including public voices in Responsible AI

UKRI has invested an additional £4 million through the UKRI Technology Missions Fund to support both the keystone projects and additional satellite projects.

£750,000 has been awarded to the Digital Good Network, The Alan Turing Institute and the Ada Lovelace Institute to ensure that public voices are attended to in AI research, development and policy.

The project will synthesise, review, build and share knowledge about public views on AI and about engaging diverse publics in AI research, development and policy. A key aim is to promote equity-driven approaches to AI development, amplifying the voices of underrepresented groups.

Project lead, Professor Helen Kennedy, said:

Public voices need to inform AI research, development and policy much more than they currently do. This project brings together some of the best public voice thinkers and practitioners in the UK, and we’re excited to work with them to realise the project’s aims.

Understanding the Responsible AI landscape

A further £650,000 has been awarded to The Productivity Institute to gain insights into how the uptake of responsible AI can be incentivised through incentive structures, business models and regulatory frameworks.

The institute aims to better understand how responsible AI can drive productivity, ensure the technologies are deployed responsibly across society, and enhance the UK’s prosperity.

Project lead Professor Diane Coyle said:

This is an opportunity for the UK to drive forward research globally at the intersection of technical and social science disciplines, particularly where there has been relatively little interdisciplinary research to date. We are keen to enhance connections between the research communities and businesses and policymakers.

Further information

Professor Gopal Ramchurn

Professor of AI Gopal Ramchurn from the University of Southampton is principal investigator for the project and the CEO and director of RAI UK. His research at Southampton focuses on the design of responsible AI across energy and disaster management. He is also the CEO of Empati Limited, a climate tech start-up, and chairman of Sentient Sports, a sports AI start-up.

RAI UK

RAI UK is connecting UK research into responsible AI to leading research centres and institutions around the world, delivering world-leading best practices for how to design, evaluate, regulate and operate AI systems in ways that benefit people, society and the nation.

UKRI Technology Missions Fund

The UKRI Technology Missions Fund is designed to exploit the UK’s global leadership in transformative technologies to help solve specific problems, while also helping cement that leading position. Overall, UKRI is investing £250 million in Technology Missions to enable new and existing capabilities and capacity in artificial intelligence, quantum technologies and engineering biology from 2023 to 2025 and beyond. A further £70 million has been announced to support future telecommunications.

Top image: Credit: primeimages, E+ via Getty Images
