Can deepfakes undermine justice?

The UK’s role in tackling a growing threat to human rights evidence.

In a digital age where anyone with a smartphone can record history in videos, photos and audio, user-generated evidence has become a crucial tool in legal adjudication and the fight for justice.

Across the world, these digital traces of truth are increasingly indispensable in holding perpetrators to account. This type of evidence has been used to expose mass human rights violations, war crimes and police brutality and supported trials in international and domestic courts.

Yet, as technology advances, so does the sophistication of digital deception known as deepfakes. Hyper-realistic but entirely fabricated images, videos or audio recordings created using machine learning have become a mainstream issue, raising critical questions about the trustworthiness of what we see and hear online.

As the quality of deepfakes improves, they are also becoming harder to detect, blurring the line between reality and fiction in ways that could have serious implications for justice.

The UK is playing a pivotal role in addressing this challenge through the TRUE project, an ambitious interdisciplinary research initiative that seeks to understand the impact of deepfakes on the credibility of user-generated evidence in human rights accountability processes.

Led by a team of experts across law, psychology, and linguistics, TRUE is the first systematic effort to explore whether the rise of deepfakes has eroded trust in this vital form of evidence.

The deepfake dilemma

Professor Yvonne McDermott Rees, Principal Investigator, explains:

Deepfakes are more than just a technological curiosity.

Manipulated media can be used to create convincing but false narratives, potentially exonerating guilty parties or implicating innocent ones. The implications are particularly worrying in the context of mass atrocity trials, where user-generated evidence is playing an increasingly important role in prosecuting war crimes, crimes against humanity, and genocide.

The concern is not just that deepfakes can create convincing false narratives that could be used to fabricate evidence, but that the mere possibility of their existence could sow doubt about the authenticity of legitimate evidence.

This phenomenon, known as ‘plausible deniability’, can be exploited by those seeking to discredit genuine footage, undermining the epistemic value of user-generated content in legal proceedings. If the public and legal professionals begin to mistrust all digital evidence, it could significantly weaken the ability of courts and human rights bodies to deliver justice.

The TRUE project: a UK-led initiative

Recognising the urgent need to address these challenges, the TRUE project is spearheading a comprehensive investigation into how deepfakes are impacting trust in user-generated evidence. It will do this through three interlinked investigation clusters.

The first cluster focuses on case law analysis, compiling a database of criminal trials where user-generated evidence has been introduced to prosecute atrocity crimes. This database will allow researchers to assess how often deepfakes or concerns about them have been raised in court, and how these issues have influenced the outcomes of trials.

The second cluster involves experimental studies with both laypeople and legal professionals. By examining how different groups perceive the trustworthiness of user-generated evidence, including their susceptibility to deepfake-induced doubts, the project aims to identify the factors that most significantly influence these perceptions.

The third cluster will conduct jury simulation exercises to delve deeper into the deliberative processes of jurors. By analysing the language used during deliberations, the TRUE team hopes to uncover how concerns about deepfakes are articulated and whether they play a decisive role in shaping verdicts.

Professor McDermott Rees adds:

Our research shows that human rights documenters, investigators, legal professionals, and judges will need to consider the challenges posed by the proliferation of synthetic media if we want to safeguard the credibility of user-generated evidence in the new hybrid AI-Human Media Ecosystem.

Guiding the way forward

In addition to its research activities, the TRUE project has developed, in collaboration with partners, a world-leading guide to assist judges and fact-finders in evaluating this kind of evidence.

This guide, translated into multiple languages including Arabic, French, Spanish, and Ukrainian, provides crucial insights into assessing the authenticity of digital images, analysing metadata, and understanding the broader context of the evidence.
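
By way of illustration, the short Python sketch below shows two of the most basic checks involved in this kind of assessment: hashing a file so investigators can later show it is unchanged since collection, and reading its embedded EXIF metadata. This is a minimal, hypothetical example (the file name is invented and the Pillow library is assumed), not the guide's own methodology; EXIF data in particular is easy to strip or forge, so it supports rather than proves authenticity.

```python
# Minimal sketch of two basic digital-evidence checks.
# Assumes the Pillow library is installed (pip install Pillow);
# "evidence.jpg" is a hypothetical file name.
import hashlib

from PIL import Image, ExifTags

path = "evidence.jpg"  # hypothetical example file

# A cryptographic hash lets investigators later demonstrate the file
# is byte-for-byte unchanged since it was collected (chain of custody).
with open(path, "rb") as f:
    sha256 = hashlib.sha256(f.read()).hexdigest()
print(f"SHA-256: {sha256}")

# EXIF metadata may record the capture device, timestamp and location,
# but it can be removed or edited, so it is corroborative at best.
with Image.open(path) as img:
    exif = img.getexif()
    for tag_id, value in exif.items():
        tag = ExifTags.TAGS.get(tag_id, tag_id)
        print(f"{tag}: {value}")
```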

By equipping legal professionals with the tools they need to critically assess digital evidence, the TRUE project is helping to safeguard the integrity of human rights accountability processes.

Ukrainian Supreme Court Judge Nataliya Antonyuk says:

As a result of your research, members of the judiciary in Ukraine now have a clear view of the advantages and disadvantages of [user-generated evidence], and how such evidence can be preserved, authenticated, verified, presented, and evaluated.

The project also convened a conference at the Honourable Society of the Inner Temple in London, which was attended by over 100 legal practitioners from around the world.

In a post-conference survey, 100% of respondents agreed or strongly agreed with the proposition that they ‘now have a better understanding of the key benefits and challenges of open source and user-generated evidence’ and ‘of the factors that should be taken into account in the assessment of user-generated evidence’.

The team have also worked closely with investigative human rights organisations to develop methodologies that help ensure the evidence they gather can one day be used in court.

Catriona Murdoch, Partner at Global Rights Compliance, said:

[This research] enabled us to further consider how the admissibility and weight of open-source information might be challenged in court, and how our methodologies might address those issues in advance. Since 2023, Yvonne has been working closely with us on the development of our Open-Source Investigative Methodology for our Starvation Mobile Justice Team.

A global challenge with a UK response

As deepfakes become more prevalent and more sophisticated, the need for robust, interdisciplinary research like that conducted by TRUE is increasingly apparent.

In a world where the line between truth and deception is becoming ever more blurred, ensuring the credibility of user-generated evidence is crucial. The stakes are high, not just for the victims of human rights abuses but for the very concept of justice itself. The UK’s leadership in the TRUE project offers hope that, even in the face of sophisticated digital manipulation, the truth can still prevail.

In the coming years, to ensure that the world is ready for the potential impacts of artificial intelligence on the law of evidence, the team will continue to disseminate their research findings to:

  • journalists
  • human rights investigators
  • governments
  • lawyers
  • judges

TRUE is funded by Horizon Europe, with UK participation supported by the UK government under the Horizon Europe guarantee.

Top image credit: Delmaine Donson, E+ via Getty Images
