Online Hate Speech: How Big is the Problem, and What Can be Done?

By Edward Crook on November 14th 2016

One 2015 Deloitte study found that the average American checks his or her phone 46 times every day; reports in the UK range as high as 85 times daily.

For many, social media opens up new communities, perspectives and support. This is at the core of Ditch The Label, one of the world’s largest anti-bullying charities. DTL is a digital charity using online networks to offer guidance to those affected by bullying. With as many as 28% of young people on Twitter experiencing bullying on the network, there’s a growing need for this type of work.

In this DTL study, we tracked millions of tweets to better understand hate speech, and cyberbullying more broadly, in the US and the UK.

Gathering our data

We define hate speech as expression or incitement of hatred based on prejudice; in this study, prejudice based on race, gender identity and expression, or sexual orientation. There are of course many other types of hate speech that go beyond the current scope but warrant further research.

First, we searched for derogatory and insensitive language. Many keywords are context dependent, so complex queries and iterations were needed to ensure data accuracy. We also searched for neutral conversation about racism, homophobia and transphobia, to measure levels of hate speech relative to constructive debate.
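The kind of two-sided classification described above can be illustrated with a minimal sketch. This is not the study's actual methodology; the keyword lists here are placeholders, and the real queries were far more complex and context-aware.

```python
import re

# Hypothetical keyword lists for illustration only; the study's real
# queries handled context-dependent terms with much more nuance.
DEROGATORY_PATTERNS = [r"\bexample_slur\b"]  # placeholder pattern
DEBATE_PATTERNS = [r"\bracism\b", r"\bhomophobia\b", r"\btransphobia\b"]

def classify_tweet(text: str) -> str:
    """Label a tweet as 'derogatory', 'debate', or 'other'."""
    lowered = text.lower()
    if any(re.search(p, lowered) for p in DEROGATORY_PATTERNS):
        return "derogatory"
    if any(re.search(p, lowered) for p in DEBATE_PATTERNS):
        return "debate"
    return "other"

print(classify_tweet("We need to talk about racism in sport."))  # debate
```

Comparing the volume of "derogatory" matches against "debate" matches is what yields ratios like the 8:1 figure discussed below for homophobia.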

Racial prejudice

There’s no shortage of racially-charged language online. Our study tracked over 7.6 million tweets containing racial insults, compared with only 7 million tweets about issues of racism. It doesn’t make for easy reading.

However, there are also signs of progress. Since 2015, constructive debate has begun to overtake derogatory language, suggesting that the network is increasingly used to raise awareness. The Black Lives Matter movement has contributed to this on both sides of the Atlantic.

Males are more likely to discuss racism on Twitter, but there is now a larger female voice in the debate compared with 2012.

Discrimination research has long pointed to ingroup and outgroup bias (that is, an ‘us, them’ mentality) as both cause and symptom.

We found that when filtering racial insults to those containing both first and third person plural pronouns (e.g. ‘us’ and ‘them’), men contributed 63% (compared with 59% overall), and the UK contributed 9% compared with 6% overall. This suggests that the ‘us, them’ mentality is disproportionately strong among males, and that in the UK it may pose a particular barrier to tackling discrimination.
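The pronoun co-occurrence filter described above can be sketched in a few lines. This is an illustrative reconstruction, not the study's actual code; the pronoun lists are assumptions.

```python
import re

# Assumed pronoun sets; the study's actual filter may differ.
FIRST_PERSON_PLURAL = {"we", "us", "our", "ours"}
THIRD_PERSON_PLURAL = {"they", "them", "their", "theirs"}

def has_us_them_framing(text: str) -> bool:
    """True when a tweet contains both a first and a third person
    plural pronoun, a rough proxy for 'us, them' framing."""
    tokens = set(re.findall(r"[a-z']+", text.lower()))
    return bool(tokens & FIRST_PERSON_PLURAL) and bool(tokens & THIRD_PERSON_PLURAL)

tweets = [
    "they will never be like us",
    "we should talk about this",
]
filtered = [t for t in tweets if has_us_them_framing(t)]
print(filtered)  # ['they will never be like us']
```

Author demographics (gender, region) could then be aggregated over the filtered subset to produce comparisons like those above.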

Homophobia

In our first DTL gender study we found direct links between masculinity norms and homophobia. Though derogatory language does continue online, on Twitter we found that debate about homophobia outweighed insults 8:1. Key events fuelled this debate, including the Sochi 2014 Winter Olympics and the more recent Orlando shootings.

Homophobic language is used more often by males and is more common in the US than in the UK.

Derogatory language is also more common among sports fans and, concerningly, among those who list family and parenting in their bios. Our study points to a need for more positive gay role models within sports communities. There is also further work to be done to tackle the transference of discrimination between generations within family contexts; educators play a pivotal role in this process.

Transphobia

Of our three topics, transphobia has seen the steepest progress online over the past four years.

This may reflect an increased awareness of, and need to be vocal about, issues affecting the transgender community. Over time, more authors have begun commemorating days such as International Day Against Homophobia and Transphobia and Transgender Day of Remembrance.

Debate surrounding transgender issues also included shows such as Orange Is the New Black and Law & Order: SVU. The depiction of transgender roles proved controversial, but it did fuel debate about the discrimination still faced by many transgender people.

Cyberbullying

In the second half of our study we looked at online bullying through a broader lens (not limited to specific discrimination areas).

What we found supports current Twitter advice: responding to bullying on Twitter often only escalates the conflict. Recipient replies had a negative outcome in 44% of cases, compared with only 3% positive outcomes.

We also found that bullying follows a wide range of conversation topics online, lending weight to the claim that bullying reflects the person bullying, rather than the person being bullied.

What can be done?

Understanding the size and nature of the problem is the first step. But what can be done to tackle levels of hate speech in online networks?

While social media can shape how we view one another, it is also a reflection of our existing views and prejudices. For this reason, censorship risks pushing the conversation underground. In other words, with censorship, prejudice would lie dormant, not diminished.

A more effective path, I would argue, is to encourage more discussion about prejudice, equipping authors with the skills to recognize it in both others and themselves.

Specific demographics, based on region, gender and interests, are overrepresented in online hate speech. The research aim is not to vilify these groups, but rather to invite further questioning: why are these groups more prone to hate speech and what preventative measures can be put in place?

Finally, discrimination in all three groups relies in part on ingroup-outgroup dynamics (the ‘us-them’ mentality). Challenging this requires positive role models within these groups who are able to raise awareness and demystify the outgroup.

For a more in-depth view of the research, click here for our interactive viz or here for the full report.


Edward Crook

@Brandwatch

Edward is Brandwatch Research Manager for NA & LATAM. Based in New York, he’s fascinated by networks, linguistics and #socialforgood