Tracking hate on social media

A digital platform that identifies and monitors hateful comments on social media is helping address online hate speech and real-world hate crime.

Police-recorded hate crimes in England and Wales are at their highest levels since records began, and online hate speech is spreading particularly rapidly. A survey by Ofcom in 2020, for example, found that half of 12 to 15-year-olds had seen hateful content online, up from a third in 2016.

Addressing the problem

Hate on social media can translate into real-world harm, so the problem requires action both on and offline. HateLab, Cardiff University’s Economic and Social Research Council (ESRC)-funded global hub for data and insight into hate crime and speech, is addressing the problem on both fronts.

Led by Professor Matthew Williams, the hub has conducted research to measure online hate and its impact in the real world. The team has developed a digital dashboard, known as the Harms Evaluation & Response Observatory (HERO), that uses artificial intelligence to detect and monitor online hate speech in real time across multiple platforms.
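In simplified form, the detection step resembles a standard text-classification task. The sketch below is purely illustrative and is not HateLab's code: the tiny labelled dataset, the choice of a TF-IDF and logistic-regression model, and the score_posts helper are all assumptions made for demonstration.

```python
# Minimal sketch of the kind of text-classification step a dashboard like HERO
# might perform. Illustration only: HateLab's actual models, features and
# training data are not described in the article.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: 1 = hateful, 0 = not hateful.
texts = [
    "go back to where you came from",
    "great match last night, well played",
    "people like you don't belong here",
    "lovely weather in Cardiff today",
]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

def score_posts(posts):
    """Return the estimated probability that each post is hateful."""
    return model.predict_proba(posts)[:, 1]

print(score_posts(["you lot should leave this country"]))
```

In practice a system of this kind would use far larger training sets and more capable language models, but the pipeline shape, turning raw posts into features and scoring them continuously, is the same.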

Collecting and analysing data

Collection and analysis of this data has generated vital evidence to inform national policy, such as Welsh Government responses to emerging community tensions. It has been used to improve operational decision-making by police forces, enabling more targeted responses to potential threats.

HateLab technologies have also helped shape anti-hate campaigns for international events, including the Women’s Euros 2022 and Men’s World Cup 2022.

A spin-out company, Nisien.ai, was launched in 2023 to commercialise the technology, enabling a wider range of businesses and organisations to detect and mitigate online harm.

About the project

In 2012, Professor Williams conducted research for the Welsh Government into rising hate crime in Wales. The findings revealed increasing numbers of people reporting victimisation on social media.

Professor Williams explains:

This was a relatively novel phenomenon at the time, and it inspired us to research online hate. How was hate on the streets being engineered to function online, and did its impact differ? What was the nature and scale of the problem? Could social media data be used to predict crime rates on the streets?

With ESRC funding, Professor Williams established the research hub HateLab to provide insights into the nature and dynamics of hate speech, both on and offline, and how it might be effectively countered.

Professor Williams says:

For the first time, we could trace hate online in real time because every instance was recorded on social media sites.

We could follow breadcrumb trails of where something was posted, what time, who was sharing and how far it spread. It provided such a rich and extensive body of information that we needed new methods to collect and analyse the data.

From left to right: Professor Pete Burnap, Sefa Ozalp (research student), Professor Matthew Williams. Credit: Professor Matthew Williams

Further ESRC funding

In 2017, the HateLab team used further ESRC funding to work with computer scientists and develop a digital dashboard, now known as HERO.

HateLab research established a link between online hate and real-world harm through a study of hate crime around EU Exit. It revealed that spikes in online hate, such as anti-immigrant rhetoric, were closely followed by hate crimes on the streets in the same areas. The team believed harnessing such data could provide valuable insights to many organisations.
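One simple way to illustrate this kind of lagged relationship is to correlate daily counts of online hate with offline offences recorded a few days later. The sketch below is not the published study's method, and the figures are invented; it only shows the shape of the analysis.

```python
# Illustrative sketch of a lagged comparison between online hate spikes and
# subsequent offline offences. The series and numbers are hypothetical; the
# HateLab study used its own data and statistical models.
import pandas as pd

# Hypothetical daily counts for one area.
dates = pd.date_range("2017-06-01", periods=10, freq="D")
online_hate = pd.Series([5, 7, 40, 35, 12, 9, 8, 30, 28, 10], index=dates)
offline_crimes = pd.Series([1, 1, 2, 6, 5, 2, 1, 1, 5, 4], index=dates)

# Correlate online hate on day t with offences recorded `lag` days later.
for lag in range(0, 4):
    corr = online_hate.corr(offline_crimes.shift(-lag))
    print(f"lag {lag} days: correlation = {corr:.2f}")
```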

Professor Williams explains:

Governments can use the information to inform policy or detect where community tensions might be stirred up. It can help police with operational decisions, such as where and when to deploy resources. Charities were also interested, as they could use the data to better protect their communities.

Pilot schemes were established with Welsh Government, the National Online Hate Crime Hub (run by the National Police Chiefs’ Council), and LGBTQIA+ anti-violence charity Galop. The requirements from this diverse group of partners were used to co-creatively build and test the HERO platform.

Impact of the project

Following the pilot schemes, HateLab’s technologies and services have been adopted by a number of businesses and organisations, helping identify and mitigate online hate. A spin-out company, Nisien.ai, was established in 2023 to commercialise the technologies.

Pre-empting crime outbreaks

HERO has been integrated into the UK-wide National Online Hate Crime Hub. The hub, the point of contact for all victims of online hate crime, used the platform to monitor hate speech around terror attacks and key moments of the EU Exit process.

Enabling staff to better understand the dynamics of hate speech propagation has led to improved response times, better support for victims and more effective allocation of resources.

The hub continues to use HERO to produce intelligence reports for police, senior civil servants and MPs.

Paul Giannasi, Director of the National Online Hate Crime Hub, says HateLab’s data and insights “transformed the way my team of police officers and civilian staff at the National Online Hate Crime Hub monitor and tackle online hate crime” and that HERO “fundamentally changed the way we monitor the spread of hate speech during national events.”

Monitoring social cohesion

The Welsh Government’s Community Cohesion teams monitor community tensions in their respective regions, working with partners to mitigate issues as they arise. HERO improved monitoring of anti-migrant content related to the settlement of Ukrainian refugees and provided data that fed into national threat assessments on extremist activity.

A spokesperson from the Welsh Government’s Inclusion and Cohesion Team says:

Social media users have become more savvy in the way they direct abuse, and often do not use openly hateful language, instead choosing to use more coded words… The search function on HateLab provides a way of homing in on these terms… and provides us with a method of widening our searches and be more dynamic to developing terms or words.
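A configurable term search of the kind the spokesperson describes can, at its simplest, be a list of analyst-supplied terms matched against incoming posts. The sketch below is hypothetical: the coded_terms list is a placeholder and the matching_posts helper is not part of HateLab's software.

```python
# Minimal, illustrative coded-term search. The terms below are made-up
# placeholders; real monitoring relies on terms identified by analysts.
import re

coded_terms = ["example-coded-term", "another-coded-phrase"]  # hypothetical
pattern = re.compile("|".join(re.escape(t) for t in coded_terms), re.IGNORECASE)

def matching_posts(posts):
    """Return the posts that contain any of the configured coded terms."""
    return [p for p in posts if pattern.search(p)]

print(matching_posts(["an innocuous post", "a post using another-coded-phrase"]))
```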

Facilitating anti-hate interventions

LGBTQIA+ anti-violence charity Galop used HateLab technology to monitor homophobic and transphobic comments online, such as during the ‘Monkeypox’ outbreak. The charity was then able to deploy counter-narratives to dispel myths and untruths.

HateLab is currently developing automated tools that use generative artificial intelligence to counter hate speech with tailored responses.
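A counter-speech tool of this kind typically wraps a flagged post in a carefully worded prompt before passing it to a generative model. The sketch below is an assumption about how such a step might look: build_counter_prompt and the generate stub are hypothetical and do not describe HateLab's tooling or any particular model API.

```python
# Minimal sketch of prompting a generative model for a counter-narrative.
# Both functions are hypothetical placeholders.
def build_counter_prompt(hateful_post: str) -> str:
    """Assemble a prompt asking a language model for a calm, factual reply."""
    return (
        "Write a short, respectful reply that challenges the claim below, "
        "corrects any misinformation and avoids insults.\n\n"
        f"Post: {hateful_post}"
    )

def generate(prompt: str) -> str:
    # Placeholder: in practice this would call whichever language-model API
    # the organisation uses.
    return "(model-generated counter-response)"

print(generate(build_counter_prompt("Migrants are the cause of all crime here.")))
```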

Reducing hate around sporting events

HateLab provided data to EE and BT to inform their ‘Hope United’ campaigns for the Women’s Euros and Men’s World Cup in 2022. The Women’s Euros campaign featured Hope United football shirts that reflected the online abuse received by players on their social media accounts.

Will MacNeil, Design Director of creative agency The Mill, says:

[HateLab’s] tracking of hate across the tournament allowed us to confidently talk about the levels of misogynistic hate our Hope United players received during the Euros and reflect that in reactive press, digital out of home and social, which ran over the finals weekend.

Building on this technology, Nisien.ai has recently developed HERO Panoptic to identify, anticipate and respond to online threats, reducing the impact of hate on athletes, clubs and fans.

Top image: Credit: Kenneth Cheung, iStock Unreleased via Getty Images
