BRAID aims to bridge the divides between academic, industry, policy and regulatory work on responsible AI (artificial intelligence), integrating arts, humanities and social science research within a responsible AI ecosystem.
BRAID (previously ‘Enabling a responsible ecosystem’) is a six-year national research programme funded by UKRI Arts and Humanities Research Council (AHRC), led by programme directors Professor Ewa Luger and Professor Shannon Vallor at the University of Edinburgh in partnership with the Ada Lovelace Institute and BBC Research and Development.
BRAID seeks to use innovative arts and humanities research to help enrich, expand, and connect a mature and sustainable responsible AI ecosystem. ‘Responsible AI’ is the practice of developing and using AI technologies that address ethical concerns and minimise the risk of negative consequences.
Aims of the programme
- Learn lessons from the first wave of responsible AI work to identify the barriers and divides that stand in the way of a mature and effective responsible AI ecosystem.
- Bring the arts and humanities more fully and centrally into the responsible AI ecosystem, to invest responsible AI with more depth and breadth.
- Foster more honest and diverse public conversations about a positive, humane vision for AI, not just harm reduction.
- Widen access to responsible AI knowledge and practices in the UK and demonstrate their value in concrete contexts, through policy collaboration, fellowships and demonstrator projects.
BRAID scoping to embed responsible AI in context
Ten projects have been funded, totalling £2.2 million. The projects started on 1 February 2024 and run for six months.
The projects will define what responsible AI is across sectors such as education, policing and the creative industries. They will produce early-stage research and recommendations to inform future work in this area.
The projects illustrate how the UK is at the forefront of defining responsible AI and exploring how it can be embedded across key sectors.
Read more about the BRAID scoping projects.
BRAID fellowships
The fellowships aim to enable and support AI research and development that delivers solutions for the responsible use of AI.
The fellowships are structured so that researchers work directly with non-academic stakeholders from industry, the third sector and government on current responsible AI challenges. The aim is to develop new insights, drive responsible innovation and deliver impact in the field.
We have awarded 17 fellowships with partners including Microsoft, Diverse AI and the BBC.
Read more about the BRAID fellowships.
BRAID Responsible AI Demonstrators
The demonstrator funding opportunities launched in April 2024, with projects due to be funded for three years from January 2025.
The demonstrator projects will seek to address real-world challenges facing sectors, businesses, communities and publics in the responsible development and application of AI technologies. Each demonstrator will involve an intervention designed to advance responsible AI in a specific context.
The ambition is to demonstrate the transformative power of embedding responsible, human-centred approaches and thinking at the earliest stages of the AI research and development pipeline, and across the AI lifecycle.