
Uncovered:
Online Hate Speech
in the Covid Era

A social media data study analyzing millions of data points to understand how hate speech has evolved online between 2019 and 2021


Brandwatch and Ditch the Label teamed up to analyze 263 million conversations to understand how hate speech has evolved online in the US and UK between 2019 and 2021.

From the anti-Asian sentiment that has spiked since the pandemic began, to the racism that both prompted the Black Lives Matter protests and flared in reaction to them, to the transphobia and homophobia sparked by celebrity comments and attacks, we have seen a devastating surge in internet-based hate speech in recent years.

974 Days | 263 million mentions of online hate speech

We found that:

Instances of and discussions around online hate speech have increased 38% since the beginning of the pandemic in March 2020.

Online discussions around violent threats have increased by 22% since the start of the pandemic. This increase in violent rhetoric correlated with big events, such as violent attacks against people in Asian communities and the Black Lives Matter protests in the summer of 2020.

In both the US and the UK, reported incidents of hate crimes correlated with online discussions around hate speech.

About the report

Following our 2016 Cyberbullying and Hate Speech report and the 2019 Transphobia Study, we’ve used social data from multiple online sources, including social media sites, forums, and blogs, to track and measure the rates of discussion around and use of online hate speech, as well as conversations about cancel culture.

Liam Hackett
Dr. Liam Hackett
CEO of Ditch the Label

“Throughout the pandemic, Ditch the Label has been at the helm of helping young people navigate a range of unique challenges. Anecdotally, we found that increasing amounts of our service users were reporting online hate speech and trolling, so we had suspected an uplift in cases, however we lacked the data to objectively define the relationship between the pandemic and the rate of online hate. This report shines a vital and sobering light on the very real and devastating experiences of millions worldwide, as they battle not only their own personal struggles, but navigate through alarming rates of online toxicity and abuse.

It is clear from this report that online hate speech has reached an all-time high and, to some communities, is at an unbearable extreme. It is my hope that this vital piece of research will illuminate the true extent of online hate to positively influence societal behaviors and policy to better protect people online.”

Notes on the methodology

For the purposes of this report we have focused on online hate relating to sexual orientation, gender and gender identity, and race and ethnicity. It’s important to acknowledge that not all manners of hate speech are covered within the scope of this analysis.

This report analyzes US and UK data from forums, blogs, and several social media sites from the beginning of 2019 to mid-2021. To protect victims of online hate, we have paraphrased any examples to ensure they’re not searchable.

On gender breakdowns found in the data (e.g., “Men were two times more likely to post about cancel culture than women were.”), Brandwatch uses a curated database of almost 45,000 names to estimate the gender of an author. This is not a perfect methodology, but it has proven accurate enough to help analysts model broad trends.
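As an illustration only (Brandwatch's actual name database and matching logic are not public), a minimal sketch of how name-based gender estimation can support broad trend analysis might look like this; the names, labels, and helper function below are hypothetical:

```python
# Illustrative sketch of name-based gender estimation, not Brandwatch's implementation.
# The lookup table here stands in for a curated database of ~45,000 names.

NAME_GENDER = {
    "james": "male",
    "liam": "male",
    "maria": "female",
    "sophia": "female",
}

def estimate_gender(author_display_name: str) -> str:
    """Return 'male', 'female', or 'unknown' based on the author's first name."""
    stripped = author_display_name.strip()
    first_name = stripped.split()[0].lower() if stripped else ""
    return NAME_GENDER.get(first_name, "unknown")

# Aggregating over many authors to model a broad trend
authors = ["Maria Lopez", "James Smith", "Alex Chen", "Sophia Rossi"]
counts = {"male": 0, "female": 0, "unknown": 0}
for name in authors:
    counts[estimate_gender(name)] += 1
print(counts)  # {'male': 1, 'female': 2, 'unknown': 1}
```

Because many authors cannot be matched and remain "unknown", this kind of approach only supports broad, directional comparisons rather than precise gender splits.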

Trigger warning and content warning

This report contains content that some audiences may find upsetting and triggering. Analysis touches on topics including transphobia, racism, homophobia, sexism, misogyny, and violent threats against a wide range of identities. Please be aware that some of the data presented in this report is uncensored in places. We recommend that anybody below the age of 16 has parental consent before exploring this report.

How has hate speech trended?

Examples of and discussions about hate speech surged during the summer of 2020, with a reignited focus on Black Lives Matter and racism against Asian people as the pandemic raged on. We plotted all of our data against key events that happened in the two-and-a-half year timeline between 2019 and mid-2021.

US
  • April 2019: Transgender military ban comes into effect
  • July 2019: Congresswomen told to go “back to where they came from”
  • March 2020: WHO declares COVID-19 a pandemic
  • March 2020: Influential voices refer to COVID-19 as the "Wuhan Flu" or "China Flu"
  • May 2020: George Floyd murdered
  • June 2020: Black Lives Matter protests
  • November 2020: US general election
  • January 2021: Insurrection at the US Capitol
  • March 2021: Murder of eight people, including six Asian women, at Atlanta-area spas and beauty parlors

UK
  • January 2019: An MP called a Nazi outside Parliament
  • February-April 2019: Focus on the impact Brexit has had on anti-immigrant sentiment
  • December 2019: UK general election
  • March 2020: WHO declares COVID-19 a pandemic
  • June 2020: Black Lives Matter protests
  • March 2021: Murder of Sarah Everard by an off-duty Metropolitan Police Service officer
  • August 2021: Mass shooting in Plymouth carried out by an individual who engaged with the incel movement

Has online hate speech conversation correlated with reports of hate crimes?

In both the US and the UK, incidents of reported hate crimes correlated with online discussions of and examples of hate speech.

Prior to the start of the pandemic, volumes of both online hate speech discussion and reported hate crimes remained fairly stable. However, rates of hate speech and reported hate crimes increased in both the UK and US in 2020.

In the US, the peak in reported hate crime incidents appears to have occurred before the peak in online hate speech discussion and examples of hate speech.

In the UK, by contrast, the hate speech conversation spiked first, followed by a steadier rise in hate crimes. The fact that reported hate crimes and online discussion around hate speech rose roughly in tandem in both countries suggests a troubling link between online words and 'real-world' action, something that has been explored in previous studies.
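As a rough illustration of the kind of comparison involved (the report does not publish its underlying series or its correlation method), the relationship between monthly reported hate crimes and monthly hate speech mentions could be summarized with a Pearson correlation; the figures below are placeholders, not the report's data:

```python
# Illustrative sketch: correlate monthly reported hate crimes with monthly
# online hate speech mention volumes. All numbers below are placeholders.
from statistics import correlation  # Python 3.10+

hate_crime_reports = [610, 598, 640, 720, 910, 1050, 980, 870]   # monthly reported incidents (hypothetical)
hate_speech_mentions = [1.9, 1.8, 2.1, 2.6, 3.4, 3.9, 3.5, 3.1]  # monthly mentions in millions (hypothetical)

r = correlation(hate_crime_reports, hate_speech_mentions)
print(f"Pearson r = {r:.2f}")  # values near +1 mean the two series rise and fall together
```

A high correlation on its own shows the two series move together; it does not establish which drives the other.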

Which types of hate speech are most common?

There were 23 million references to violent threats between 2019 and mid-2021, and the volume has increased since the pandemic began.

Hate speech can take many forms: violent threats, references to violent events, slurs, epithets, tropes, and hateful imagery or symbols. Discussions about violence and threats online saw a 22% increase following the start of the pandemic and the resurgence of the Black Lives Matter movement in the summer of 2020. Perpetrators threatened minority groups and individuals online while victims discussed their experiences of being threatened.

How does hate speech manifest online?

[Chart: How hate speech manifests online, broken down by violent threats, slurs/tropes, and images. Data gathered using Brandwatch Consumer Research across social media sites, forums, and blogs | 2019 - mid-2021]

Analysis

Slurs / tropes: The most common form of online hate speech we found was the use of slurs and tropes. Victims of hate speech posted about the circumstances in which they were called hateful names or slurs and how unsafe and deeply hurt they felt following those incidents. Perpetrators of hate speech attacked others using racist, sexist, and homophobic terms. Following the onset of the pandemic, new slurs, especially ones targeting the Asian community, began circulating.

Violent threats: Violent threats were the second most common form of online hate speech. Violent threats or discussions of violent threats were most likely to occur on forums.

Images: Images and symbols of hate drove the least online discussion compared to other forms, but saw the largest increase in volume since the pandemic began (+28%). People posted about Black Lives Matter signs being defaced with swastikas and other racist symbols. Others shared images of Asian people with messages about COVID-19 or telling them to return to where they came from.

Geographical differences: Larger shares of the online hate speech discussion in the US were related to hateful imagery or violent threats. In the UK, only 4% of hate speech discussions related to hateful imagery, while in the US the share was 8%.

How has the discussion about racist and ethnicity-based hate speech evolved?

There were 50.1 million discussions about or examples of racist or ethnicity-based hate speech between 2019 and mid-2021.

Between 2019 and mid-2021, on average there was a new post about race or ethnicity-based hate speech every 1.7 seconds.
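As a quick sanity check of that rate, assuming the 974-day window and 50.1 million posts cited earlier in this report apply to this figure:

```python
# Sanity check of the "one post every 1.7 seconds" figure, using the
# 974-day window and 50.1 million posts cited in this report.
posts = 50_100_000
window_seconds = 974 * 24 * 60 * 60  # 974 days expressed in seconds
print(window_seconds / posts)  # ~1.68 seconds between posts, on average
```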

Discussions about and examples of racist hate speech peaked during the Black Lives Matter protests in the summer of 2020. People used racial slurs to refer to those participating in the protests, while supporters of the movement often shared instances where they had been the victim of racist hate speech. Ethnicity-based hate speech was spurred primarily by the onset of the pandemic, as anti-Asian hate became significantly more common and widespread (we’ll dive into this more in the section below).

Victims of racist / ethnicity-based hate speech

With the surge of discussion about racism and racial inequities, individuals often took to the internet to post about the experiences they had being the victims of hate speech. Some told stories about being yelled at in public or mocked in work or school environments. Many described how those memories had lingered with them for their entire lives and left them feeling unsafe or unwelcome. Many of these posts were shared with messages about raising awareness of racism in society, and people often expressed their devastation or shared similar experiences in response to these posts.

Examples of reports of hate speech:

WARNING:
Uncensored Examples

“Yesterday I was racially abused by a group of teenagers. One of the boys yelled out and called me a “p***” as I was walking past them. As I walked past I asked him to say it again louder to my face and he ignored me.”

“An actual living nazi is harassing me online because I have been posting about #BLM. He firmly believes all this white supremacy bullshit and has called me the N word daily now. I tried to block him and he just started messaging me from another account.”

“The first time I was called a n***** I was 9 years old. I still remember that day.”

“I feel compelled to share my experience after the discussions about the NHS and racism. I had a patient call me a racial slur my first day working at the nurses desk. Not a single staff member acknowledged that it happened so I just left. If you are shocked by racism alive and well today, you aren’t paying attention.”

“So many kids called me towel head in school and bullied me for being Persian. I hated every minute of going to school with them.”

Perpetrators of racist / ethnicity-based hate speech:

Some people have unfortunately taken to the internet as a place to spew hatred and intolerance. Many of these posts appeared in arguments between individuals, where one person used another’s race or ethnic background as a reason not to believe or listen to them.

Examples of hate speech:

WARNING:
Uncensored Examples

“Dont disrespect me!! Youre just a c*** and a b*tch.”

“You need to shut up and leave the US NOW! You just demean this country and you don’t belong here. Go home to Somalia!”

“IM TELLING YOU THIS. YOU ARE JUST A CURRY SMELLING P***!”

“I wouldn’t hire a jew even if they made me, so shut up you jew b*tch!”

“The white race defeted you s**** because you were weak. We enslaved n****** because they were dumb. We won land from the indians because they were sissies. Its why I pay people to cut my damn lawn.”

“Tell me how it feels being a part of the ugliest and least desirable race in this world, you n****”

How has anti-Asian hate evolved since the start of the pandemic?

There have been 5.5 million examples of or discussions about online hate speech against Asian communities since 2019. Anti-Asian hate speech increased by 2,770% in 2020 compared to 2019.

Anti-Asian sentiment and hate speech peaked with the announcement of the COVID-19 pandemic in March 2020. References to the virus as the "China virus" or "kung flu" have been connected both to discussions about the use of those terms and to increased hate speech against Asian people. Incidents of violence, threats, and attacks on Asian, Asian-American, and British Asian people increased over the summer of 2020, which brought a focus to anti-Asian hate and the #StopAsianHate hashtag.

The focus on anti-Asian hate dropped in the fall of 2020 and winter of 2021. Following the murder of six Asian women and two others in Atlanta in March 2021, a renewed wave of online support for the Asian-American community emerged in response to the hate crimes and hate speech. Celebrities such as Sandra Oh, Daniel Dae Kim, and George Takei all called for increased awareness, support, and an end to the violent rhetoric and attacks on Asian communities.

Alongside the growth in hate speech, we also saw a 3,101% increase in posts that Brandwatch categorized as fearful when people discussed anti-Asian hate. References to violent threats against Asian communities or individuals represented 34% of the discussion about anti-Asian hate speech.

The trajectory of anti-Asian hate online since the pandemic demonstrates how quickly hate speech evolves. Many of the terms and slurs now levied at Asian people, both online and in person, did not exist before the COVID-19 pandemic. These terms have been used millions of times since then, including by leaders and authority figures whose use has helped legitimize them.

Has there been an increase in gender and gender-identity-based hate speech?

There were 110 million discussions about or examples of gender and gender-identity-based hate speech between 2019 and mid-2021.

There was a significant increase in conversation around gender-identity-based hate speech around the start of the pandemic in March 2020 – it’s now up 28% since the start of the pandemic.

In the US and UK, discussion about gender-based hate speech also increased during the summer of 2020, and the build-up to the US presidential election seemed to drive much of this. People often attacked others on social media with gender-based slurs or attacks based on their political beliefs. In the UK, the cause of the increase in gender-based hate speech over the summer of 2020 is less clear, but instances of people telling others to “man up” and “be a man” contributed to this rise.

Slurs against women, including terms such as “b*tch”, “c*nt”, and “sl*t”, were prevalent in online discussion. Women spoke about being called these terms and often discussed hearing them on the street or in public settings such as the gym or stores. Some comments were in response to what women were wearing, which sparked conversations about how what someone wears does not justify attacks on them. In online conversations, some men seemed to occasionally rely on these gender-based attacks when arguing or disagreeing with women on the internet.

Transphobic hate speech discussion has increased by 10% since the start of 2019.

In the 2.5 years prior to 2019, there were 3.2 million posts including or discussing transphobic hate speech; since 2019, that figure has risen to 3.5 million. Consistent with previous reports, the most common transphobic slurs were “tranny” and “shemale”, with over 1.5 million and 1.6 million uses online since 2019 respectively. Akin to gender-based hate speech in general, transgender-based hate speech increased in the lead-up to the 2020 election, with slurs being used as insults in unrelated political conversations. These terms were also used to attack cisgender women who people believed had masculine qualities, or cisgender men who demonstrated more effeminate characteristics.

Phrases like "act like a man" or "be a man" were often attached to insinuations about the victim's gender identity or sexual orientation. These attacks were also often tied to political disagreements and seemed to increase in the build-up to the 2020 presidential election.

Non-binary-based hate speech discussion increased by 260% between 2019 and mid-2021. Much of this conversation was driven by an increase in the use of the term “enby” to attack those who identify as non-binary, though, like other terms, it can also be used as a self-label without negative connotations. Some of the non-binary hate speech discussion overlapped with the transgender hate speech discussion, as people used terms attacking both groups in single posts. One of the common attacks levied against the non-binary community was asking why non-gendered pronouns should be used in conversation when the attacker felt a gendered term applied.

How has hate speech based on sexual orientation shifted?

There were 9.3 million discussions about and instances of hate speech regarding sexual orientation between 2019 and mid-2021. Queerphobia, biphobia, and acephobia have all seen an increase in conversation in this time period.

We also found that men were two times more likely than women to post examples of or discussions about hate speech in the context of sexual orientation.

Analysis

Homophobia: Homophobia made up the largest percentage of discussion around and examples of hate speech about sexual orientation (79%). There was a 2% rise in homophobic hate speech between 2019 and mid-2021. One of the largest drivers of discussion about homophobia was an incident involving Jussie Smollett, an actor who allegedly staged a hate crime attack based on his sexual orientation and race in January 2019. The incident sparked conversations about the status of homophobia and racism in the US. It also drove discussion about whether, because of the allegedly staged nature of the attack, victims of real hate crimes would be believed by others.

Lesbophobia: Lesbophobia did not see a large shift driven by the pandemic or any of the events of the 2.5-year period we studied. Much of the hate speech and abuse took the form of women being called a dyke, butch, Kiki, or other words suggesting sexual innuendo. Some lesbian women described their experiences of being harassed online or feeling unsafe in public because of comments they had received or threats made against them by others.

Queerphobia: Queerphobia saw the largest increase in discussion, growing by 51%. Some of this was driven by conversations about Black Lives Matter including discussions about Black people who identify as queer being an at-risk population for victims of violence and abuse. One common theme was people being unfamiliar with what the term “queer” meant and the history of the term being used as a slur. Some questioned whether “queer” was the proper term to describe others without knowing how they identified.

Biphobia: There was a 9% increase in discussions about biphobia over the 2.5 year period studied. Celebrities coming out as bisexual encouraged dialogue with a focus on acceptance, despite hate and bullying levied at them.

Acephobia: There was a 14% increase in discussions about acephobia since 2019. Perpetrators of hate speech often questioned whether asexuality was real and questioned whether those identifying as asexual had just not met the right person yet.

What’s the impact of celebrities, leaders, and influencers?

Celebrities drove many of the spikes in conversation about hate speech. Whether they used hate speech themselves or were the victim of it, people often used those circumstances to drum up conversation about the topic.

Much of the conversation about hate speech and, unfortunately, the use of hate speech was driven by celebrities, leaders, and influencers. According to the data, the use of slurs or violent language against different groups by leaders and influencers seems to prompt a spike in the use of similar words or phrases by others.

An example of this is the set of terms used to describe COVID-19 that emerged at the beginning of the pandemic. In the week President Trump first referred to COVID-19 as the “China Virus”, there were 151,716 mentions of the term online, up from 1,024 references the week prior. Since that first use, there have been over 1.2 million posts using the phrase.
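For scale, the week-over-week jump cited above works out as follows (both figures are from this report):

```python
# Week-over-week jump in "China Virus" mentions cited above.
before, after = 1_024, 151_716
print(after / before)                   # ~148x the previous week's volume
print((after - before) / before * 100)  # ~14,716% increase
```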

There also were instances where celebrities came forward as victims of hate speech, whether online or in person. Some tried to shine a light on the topic of online harassment and misogyny, homophobia, or racism. In other instances, people rallied behind celebrities who had publicly been the victim of hate speech at the hands of other celebrities or politicians.

How has the debate over cancel culture fared on the internet?

Discussions about cancel culture have increased by 242% since the start of the pandemic.

Conversations about cancel culture surged in the wake of the Black Lives Matter movement and the murder of George Floyd. Many supporters of the movement called for the removal of statues, while those opposing their removal often referred to this as the “epitome of cancel culture” and used heritage as a key justification for keeping them.

Many discussions about different celebrities and debates over whether they should be cancelled bubbled up in the time frame we studied, including:
  • Jimmy Fallon for participating in SNL sketches that featured blackface
  • JK Rowling following comments that sparked controversy in the transgender community
  • Ellen DeGeneres after accusations surfaced of a toxic work environment at her show
  • Hilaria Baldwin after she was accused of pretending to be Spanish
  • Piers Morgan for his comments about Meghan Markle, her mental health, and her accusations about racism in Buckingham Palace

The debate about the existence and value of cancel culture has continued to rage online. Supporters of cancel culture argued that it holds people accountable for their actions and teaches lessons about the consequences of certain behaviors. People upset by cancel culture took to the internet to voice frustrations that it can be unforgiving, deter free speech, and have real-life impacts on people's careers, relationships, and mental health.

Other Findings:
Men were two times more likely to post about cancel culture than women were.

Amidst the cancel culture conversation there was also a focus on pile-ons, where a larger group of individuals attacks or argues with a single individual or smaller group. Much of the focus was on how interconnected pile-ons are with cancel culture and how detrimental both can be. Pile-on conversation featured in discussions around the death of Caroline Flack, a TV presenter who died by suicide in February 2020. Flack had been at the center of episodes of harassment and bullying on social media, including pile-ons in reaction to aspects of her private life that came under public scrutiny.

Example mentions:
“It is so terrifying that social media has turned into a place where we shame one another and find others' mistakes to call them out to the world. We all make mistakes and demonizing them is not helpful or kind. Cancel culture has gone way past condoning criminals and has started targeting people who never had the opportunity to learn from things they have done in the past that need to be corrected.”

“Cancel culture is one of the worst parts about the internet. I can see how in some cases people need to be cancelled, but the vast majority of time the best thing to do to respond to someone saying or doing something inappropriate is to be calm, cool, and collected and to focus on educating them. That will help them learn rather than rushing to ruin their lives.”

To get support on any of the issues highlighted in this report or to find out more about and support the vital work of Ditch the Label, please visit www.DitchtheLabel.org.
