The RAI UK skills programme is set up to ensure that research and best practices in responsible AI are used to strengthen the UK’s responsible AI skills base. It has four key pillars:
- national responsible AI skills frameworks
- upskilling or reskilling within or across industries or sectors, and schooling with a focus on responsible AI
- raising public awareness of responsible AI
- addressing equity and disadvantage in AI through education
We invite grant proposals that align with our goal to advance the UK’s AI strategy by expanding AI skills beyond technical practitioners and promoting the upskilling and reskilling of diverse citizens and businesses. Our focus is on fostering the development and transfer of skills among industry, government, educators and the general public.
Projects should deliver responsible AI skills resources, which will be made available openly to the RAI UK ecosystem. Resources may include anything from lesson plans and activities, workshops, organisational toolkits, professional development resources, visual aids or videos, to online learning platforms, courses or apps.
These resources should be based on empirical evidence, established or new methodologies, or experience. Projects should clearly articulate how and why the resources relate to the underpinning evidence, methodologies or experience.
In cases where evidence is lacking, or where it is not clear what an appropriate pedagogical strategy or effective learning resource might be, projects may need to include a stronger research element.
Your proposed RAI project should respond to skills challenges from at least one of RAI UK’s responsible AI challenge themes, and it should align with at least one of the skills programme’s four pillars.
The three RAI UK themes with example skills challenges are as follows.
Responsible AI-powered organisations and economies
Organisational professional development programmes face the challenge of keeping up with rapid advances in AI. A systems-based approach to responsible AI can frame skills initiatives and collaboration across sectors, yet most training available is not framed in this way. Tailored training programmes, informed by an understanding of the consequences of AI deployment, may be needed for different levels of an organisation, particularly in high-stakes domains, but the extent of customisation required remains unclear.
Addressing harms, maximising benefits of AI
How AI is framed and applied introduces new opportunities, but also trade-offs for individuals, industries and societies, where the potential benefits may be outweighed by negative impacts across a wide range of issues.
This includes issues such as:
- privacy
- bias
- accessibility
- labour rights
- social justice
- sustainability of people, organisations and the environment
This introduces the need for new interdisciplinary and critical thinking skills for deployment and oversight to ensure:
- AI safety, for example ensuring the system functions as intended with regard to ethical, policy, legal and technical aspects
- AI security, for example ensuring the system is robust to malicious interference
- reduction of negative environmental impacts
- reduction of misuse
Moreover, addressing the educational needs of underrepresented groups is essential, yet these groups may have specific needs that are not well understood. What needs to be in place so that AI works for the benefit of people and societies while harms are minimised?
AI law, governance, and regulation in an international context
The UK needs to meet the challenges of when and how to govern and regulate AI within the international digital economy. In the National AI Strategy, the UK government emphasised its desire to encourage startups and small and medium-sized enterprises (SMEs) to adopt AI, while acknowledging the growing need to comply with AI regulation. Diverse AI policies across the world create a pressing need to upskill government, regulators, industry, SMEs and the public for effective AI governance and leadership. Related challenges include:
- addressing issues around knowledge exchange under export controls
- providing inclusive AI education
- accessing security-sensitive expertise
In the specific context of education, generative AI requires careful oversight to balance learning and teaching gains with academic integrity.
Funding available
Funding is available for between three and eight grants. RAI UK will fund up to £100,000 at 80% of full economic cost.
Duration
Successful projects must begin by 1 September 2024 and last between six and 12 months.