Responsible AI projects to boost creative and environment sectors

Man wearing a VR headset and holding a guitar

Three BRAID Responsible AI Demonstrator projects will explore the uses of AI in live music and the arts as well as the environmental governance of AI.

The Arts and Humanities Research Council’s (AHRC) Bridging Responsible artificial intelligence (AI) Divides (BRAID) programme has awarded £3.5 million to three new projects.

The projects will investigate how responsible AI tools can create more opportunities in the creative sector and build environmental resilience.

AI technologies of the future

This research is supporting the development of responsible AI technologies of the future that:

  • the public can trust
  • businesses will adopt
  • address current societal, economic and environmental challenges

The BRAID programme was launched by AHRC in 2022, with a total of £15.9 million in planned funding through to 2028.

In partnership with the Ada Lovelace Institute and the BBC, BRAID’s multidisciplinary team is led by co-directors Professor Ewa Luger and Professor Shannon Vallor at The University of Edinburgh.

BRAID seeks to enrich, expand and connect a mature, sustainable and responsible AI ecosystem by leveraging the power of the arts and humanities to enable more humane, inspired, equitable and resilient forms of AI innovation.

Responsible approaches to AI

BRAID Co-Directors Professors Ewa Luger and Shannon Vallor said:

There is an urgent demand for more responsible approaches to the use of AI in the creative sector, and to the governance of AI’s rapidly accelerating environmental impacts.

BRAID are thrilled to welcome these three groundbreaking projects that leverage the power of the arts and humanities alongside AI expertise to address these challenges head-on.

AHRC Executive Chair Professor Christopher Smith said:

We are excited to support research that explores the positive role of AI in the creative and environmental sectors at a time when the debate around the responsible use of AI is dominating so many discussions.

These projects will support artists and creators in working with new technologies while protecting creators in sectors such as live music and contemporary art.

AI is an exciting new leap in technology and AHRC is proud to fund research on how to use AI to build resilience and responsibility in the creative and environmental sectors.

Equitable futures for the live music sector

Performance, Participation, Provenance and Reward (P3R) in responsible AI is led by Professor Jen Parker-Starbuck at Royal Holloway, University of London.

P3R is delivered by Royal Holloway, University of London as part of the CoSTAR National Lab for Creative Industries with academic partners:

  • Abertay University
  • University of Surrey and the Institute for People-Centred AI
  • National Film and Television School

P3R will galvanise new and developing musical talent around access to the next generation of responsible AI tools and services.

It will explore new technologies, new business models and new approaches to data provenance in pursuit of an equitable future in live music.

New responsible tools

Specifically, it aims to provide artists, audiences and venues with the capabilities needed to manage ethical provenance and reward, and the freedom to explore new forms of value arising out of performance and participation.

P3R will develop a demonstrator, with the support of artists, audiences, venues and other industry stakeholders, that puts new responsible tools into the hands of musicians.

It will help them find audiences in new places and will seek to address the increasing erosion of grassroots venues across the UK.

Creating immersive art experiences through AI

Responsible use of AI in the creation, archiving, reactivation and conservation of artworks and their archives is led by Dr Lydia Farina at the University of Nottingham and co-investigators based in:

  • Coventry University
  • Goldsmiths, University of London
  • London South Bank University
  • University of Dundee
  • University of Exeter
  • National Archives

This demonstrator will generate key new knowledge on responsible innovation and creativity when AI is used to create, document, reactivate and conserve artworks and their archives.

Reactivating artworks in the context of this project will involve performing or exhibiting them again, drawing on their current documentation but also providing opportunities for further documentation.

The project will explore how AI can help to reactivate, document and archive complex artworks, including:

  • artworks created through AI
  • curating the exhibition of artworks, whether through historical records or their reactivations
  • preserving artworks for posterity

Resolving challenges

The project will explore how using AI could resolve what are now considered insurmountable challenges for archives and museums tasked with documenting and preserving often mutable, complex and hybrid artworks in their collections in perpetuity.

For instance, by paying close attention to how audiences engage with the work, the project will learn about the potential for AI to assist artists and audiences in navigating diverse accounts of joy, loss and regret.

Using AI to re-enact past recordings of artworks may enable interactions between members of the same community across different times, transcending temporal boundaries within that community.

On the other hand, when used by audiences of different communities, the same use of AI has the potential to bridge cultural and geographical boundaries.

Exploring sustainability and environmental resilience

Sustainable AI futures is led by Professor Samantha Walton at Bath Spa University.

This project interrogates how AI’s environmental impacts can be governed responsibly, examining different aspects of governance tools.

This project will develop tools to support responsible AI in relation to climate and environment, and investigate the social life of environmental governance tools, including:

  • how they work
  • how they are made
  • how to improve them
  • their potential for unintended consequences
  • their wider political, social, and ethical implications

Sustainable AI futures analyses AI environmental governance at three interconnected scales: policy and strategy, code and data, and material infrastructure.

At each level, the project will develop a toolkit to address gaps in the governance of AI’s environmental impacts.

Ultimately, Sustainable AI futures is creating tools for responsible AI that address both the environmental impacts of AI itself, and the use of AI to support sustainability.

Top image: Credit: Eleganza, E+ via Getty Images
