Aim
The UK’s ability to produce novel AI decision support solutions is vital for responding to the challenges facing the national security and defence sectors. The previous UK government set out some of these challenges in the Integrated Review Refresh 2023: Responding to a more contested and volatile world. Since its publication, the global landscape has become even more complex.
Consequently, UKRI and EPSRC, in collaboration with National Security Technology and Innovation Exchange and UK government partners, are inviting applicants to attend a joint sandpit to deliver new, innovative, multidisciplinary and transformative approaches to AI decision support for national security and defence.
A collaborative sandpit approach has been chosen to generate research applications that:
- take into account the needs of UK defence and security stakeholders from across government
- form new collaborations between researchers, innovators and government users of research (stakeholders) in diverse research areas
- create new and transformative research ideas for using AI to support decision making, and allow researchers to pitch projects for funding to test and de-risk novel ideas
- address key research challenges that are identified and described at the sandpit
- can be led by researchers who have not worked in this sector before
No prior involvement with the defence or security sector is required, but it is our intention that participants at the sandpit will remain engaged with stakeholders from the defence and security sectors and be inspired to form longer term collaborations.
Sandpit
The sandpit will be an intensive, inclusive, interactive and creative environment, supporting a diverse group of participants from a range of disciplines and backgrounds in UKRI’s remit to work together.
We recognise the value of enabling collaboration across disciplines that may not usually come together to address the challenges being tackled. The unique opportunity provided by this sandpit is that attendees will have access to government stakeholders, driving the research towards real-world scenarios.
The sandpit will be overseen by a director, who will be supported by a team of mentors. The director, mentors and a small number of stakeholders will attend the sandpit but will not be eligible to receive research funding. Instead, their role will be to assist participants in defining and exploring challenges in this area. The director and mentors will act as independent reviewers, making a funding recommendation on the emergent projects.
The sandpit process can be broken down into several stages:
- defining the scope of any research to address the UK’s defence and security challenges
- cultivating a common language and terminologies amongst people from a diverse range of backgrounds and disciplines
- sharing understandings of the challenges, and the expertise brought by the participants to the sandpit, and perspectives from relevant stakeholders
- immersing participants in collaborative thinking processes and ideas sharing to construct innovative approaches
- capturing the outputs in the form of highly innovative research projects
- making a funding decision on those projects at the sandpit using ‘real-time’ peer review
Scope
Recent advances in AI technologies hold exciting potential to enhance decision support in the national security and defence sectors. In an increasingly contested and volatile world, exploiting AI for decision support is expected to yield new intelligence insights beyond the capability of human analysts, by identifying threats and emerging risks and boosting productivity.
In defence and national security contexts, leaders are tasked with making high-stakes decisions. These decisions require processing and analysing vast amounts of information from diverse sources, which can range from satellite imagery to cybersecurity data to social media analysis. Traditionally, human analysts have handled this information, but as the volume and complexity of the data have increased, it has become harder to sift through it manually in a timely manner. It is therefore critical to enable effective interaction between humans and AI systems.
For example:
- military commanders may need to quickly assess battlefield conditions, including troop movements, weather, terrain, and enemy activities
- government analysts must analyse vast amounts of radio frequency signals and network traffic in real time to identify and mitigate potential threats, such as unauthorised communications, jamming attempts, or hostile activities
- cyber defence teams must identify, in real time, suspicious activity on a network that might signal a breach or ongoing attack
In these scenarios, assurance is required that AI systems are reliable, transparent, and accurate when supporting decision making. It is imperative that we understand how AI arrives at its conclusions, which means the AI must provide explainable insights, showing not just the outcome but the reasoning behind it as well as validation of sources. Additionally, in this context we must have confidence that AI systems are free from biases that could skew analysis, especially in high-stakes environments where inaccurate outputs could have significant consequences.
This sandpit therefore seeks to create new capabilities that deliver responsible, ethical and trustworthy technologies for decision support using AI, with the ability to identify and prioritise risks in data. The sandpit will enable understanding of the real-world context in which the interventions may be used. This will be achieved by bringing researchers, innovators, and problem owners from across a range of disciplines together in new collaborations for application driven research and innovation.
Participants at the sandpit will be introduced to a number of defence and security scenarios by users of technology from across government and will be encouraged to approach problems in an interdisciplinary manner. For that reason, we encourage applications from a range of disciplines including but not limited to:
- AI technologies
- behavioural sciences
- mathematical sciences
- human factors
- natural language processing
- statistics
- linguistics
- computer sciences
- signal, wireless and network processing
- ethics
- audio-visual analysis
- engineering
- high performance computing
- modelling and simulation
- digital twinning
- cybersecurity
- psychology
- digital forensics
- sociology
- legal studies
- responsible research and innovation
Funding available
It is expected that three projects will be funded, sharing up to £3 million of total funding at 80% full economic cost.
Accommodation will be provided during the residential component of the sandpit. However, participants must make their own travel arrangements. Travel and subsistence costs will be reimbursed.
Since this sandpit is partially residential, EPSRC, in line with UKRI policy, will cover the costs of any additional childcare or caring responsibilities deemed necessary during this period, where employers cannot help.
Trusted Research and Innovation (TR&I)
UKRI is committed to ensuring that effective international collaboration in research and innovation takes place with integrity and within strong ethical frameworks. TR&I is a UKRI work programme designed to help protect all those working in our thriving and collaborative international sector by enabling partnerships to be as open as possible, and as secure as necessary. Our TR&I principles set out UKRI’s expectations of organisations funded by UKRI in relation to due diligence for international collaboration.
As such, applicants for UKRI funding may be asked to demonstrate how their proposed projects will comply with our approach and expectations towards TR&I, identifying potential risks and the relevant controls they will put in place to help proportionately reduce these risks. See further guidance and information about TR&I, including where you can find additional support.