Three Examples of Fake News and How They Spread
By Joshua Boyd
Published March 21st, 2019
Fake news isn’t always obviously fake. In fact, the most successful fake news will always look pretty plausible.
It’s fake news week on the Brandwatch blog, and I wanted to pick out a few examples of fake news to see if I could track how it was shared using network analysis.
Here we’ll look at different kinds of fake news, and the different ways it’s generated interest and credibility among some audiences.
Here’s a classic example of fake news for you.
Recently a site named NPC News published the following story:
AOC opposes Daylight Savings Time because “the extra hour of sunlight drastically speeds up climate change”
The story was quickly picked up by Snopes, which fact-checked it and, unsurprisingly, confirmed it wasn't true.
Of course, the fact that it wasn’t true didn’t stop people from sharing the story. According to BuzzSumo, the story has had more than 21k engagements on Facebook and was shared 68 times on Twitter.
Using Brandwatch Analytics we took a look at the people sharing the story on Twitter to see what the conversation looked like.
Here’s a network analysis showing shares and retweets. Arrows indicate retweets, node size represents the number of retweets an account has received, and accounts that generated shares are highlighted in blue.
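Brandwatch's own pipeline isn't public, but the kind of retweet network described above can be sketched in a few lines with networkx. The account names and retweet records below are entirely hypothetical, chosen only to illustrate the structure: each edge points from a retweeter to the account being retweeted, and in-degree serves as the "node size" (retweets received).

```python
import networkx as nx

# Hypothetical retweet records: (retweeter, original_poster)
retweets = [
    ("fan_1", "big_account"), ("fan_2", "big_account"),
    ("fan_3", "big_account"), ("fan_1", "npc_news"),
    ("fan_4", "npc_news"),
]

G = nx.DiGraph()
for retweeter, source in retweets:
    # Arrow points at the account being retweeted, matching the visualization
    G.add_edge(retweeter, source)

# Node-size proxy: how many retweets each account received
retweets_received = {n: G.in_degree(n) for n in G.nodes}
top = max(retweets_received, key=retweets_received.get)
print(top, retweets_received[top])  # → big_account 3
```

Even on this toy graph, the pattern from the article shows up: the account with the largest in-degree need not be the original source of the story.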
It would be natural to assume that the account generating the most retweets was the source itself, but NPC News is represented by the second largest blue account, over on the left.
The account that’s been most successful in sharing this story is a Trump supporter with a strong network.
They generated the most retweets and plenty of comments, ranging from questioning the article’s truth to believing it and insulting the politician.
Of course, while some people questioned the truth of the article, others shared and added their own criticisms of AOC.
The above goes to show how easy it is to get caught up in misinformation that fits in with the perceptions you already hold. It also shows that the source of fake news isn’t very powerful until people share and believe it. It’s often other influential voices that get the misinformation out there.
Here’s a random entry from Snopes:
Despite this claim having been debunked years ago, sites are still publishing posts with headlines attesting to the power of rosemary to improve your memory by 75%.
There might be some truth in rosemary helping with your memory, but improving it by 75% is a big claim. That said, many sites still cite this figure when talking about the amazing powers of rosemary.
I took a look at articles that include “rosemary” and “memory” in their titles, and found the ones that specifically mention the 75% improvement using BuzzSumo. I then looked at the sites that backlink to these articles.
Here’s a network of articles published in the last year. The nodes in blue are articles that other sites link back to, and the nodes in pink are the articles linking to those in blue.
What we’re seeing here isn’t just the sharing of fake news, but also the interconnectivity of those sites.
One blog about how rosemary can have miraculous effects on memory can be linked to by multiple others. This helps the site climb the ranks in Google and lends legitimacy to their claims.
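The ranking effect described here is essentially what link-analysis algorithms like PageRank measure: a page that many others link to accumulates authority. A minimal sketch with networkx, using made-up article names, shows how a heavily backlinked post rises above its neighbors:

```python
import networkx as nx

# Hypothetical backlink graph: edge A -> B means article A links to article B
G = nx.DiGraph([
    ("blog_a", "rosemary_hub"), ("blog_b", "rosemary_hub"),
    ("blog_c", "rosemary_hub"), ("blog_a", "blog_b"),
    ("blog_d", "skeptic_post"),
])

# PageRank rewards pages with many (and well-connected) inbound links
rank = nx.pagerank(G, alpha=0.85)
print(max(rank, key=rank.get))  # → rosemary_hub
```

The article with three inbound links scores highest regardless of whether its claims are true, which is exactly how a network of mutually linking sites can lend a bogus health claim apparent legitimacy.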
This is a pretty simple example, but it demonstrates how fake news can gain credibility.
I heard this story first through word of mouth, when a friend read it out loud directly from one of his social media feeds as he cackled with laughter.
Here’s the basic premise that various sites reported:
A man from Waterbury in Connecticut faces divorce after his wife found out that he was not actually deaf and had been faking it for more than 62 years to avoid having to listen to her.
At first listen, it’s kind of funny. But it doesn’t make sense (he would still be able to hear his wife, right?) and, more importantly, it’s not true.
Here’s the Snopes verdict:
So why was the joke article so popular? It gathered more than 100k engagements on social media according to BuzzSumo.
Looking at the Twitter conversation from the beginning of March up to the 17th, it was mainly men who shared the article.
Nagging wives with mischievous husbands is a fairly tired trope, but there’s obviously still some comedy value in it for the above sharers.
In our three examples we’ve shown a political example, a health example, and a jokey example. All of the stories were false, and yet they were all shared widely by people and organizations who want to believe them.
In each example there is something about the story that appeals to a certain group of people, be it those who are anti-AOC, pro-alternative remedies, or fond of funny stories that reinforce stereotypes.
None of the examples are so completely ridiculous that they could never be true (although many readers will find them highly suspect).
So, the common theme is that all of the fake news here has at least some level of plausibility for some people – namely, those who shared it approvingly.
With our visualizations we have tried to show the relationships fake news has with people and organizations. First, we saw that influencers can spread fake news far beyond what its sources could reach on their own. Second, we showed how networks of re-affirmation can lend fake news credibility. And in the final example we showed how fake news often appeals to particular groups of people (in this case, men!).
All of our examples are fairly small scale, but they each represent a different side of fake news and the ways it can be used to push particular narratives. If you want to read more about fake news, check out our current series – Fake News Week – on the Brandwatch blog.