Published March 22nd 2019
Fake News, misinformation, disinformation – or whatever else you’d like to call it – has posed challenges for individuals, government bodies, companies, and brands alike. Here I’ll discuss some communication theory, technology, and what companies or brands might want to think about as we navigate this landscape.
While I won’t spend much time defining my fake news terms here, it is an important part of the discussion. Here’s a paper that resulted in a fake news typology including: news satire, news parody, fabrication, manipulation, advertising, and propaganda. I’ll generally be referring to the most commonly discussed kind of fake news: deception of an intentional variety.
It’s also important, but not a focus here, to remember that fake news didn’t just start, despite its prominence in the public discourse and ease of distribution online. Here’s some history from Smithsonian.com. That said, according to the Edelman Trust Barometer 2019 (PDF), 73% of us “worry about false information or fake news being used as a weapon.”
How do our models of communication work when it comes to fake news?
I recently read (well, listened to) a book called The Information: A History, a Theory, a Flood by James Gleick and it got me thinking about our models and theories of communication and the technology that has defined them.
Claude Shannon, a mathematician at Bell Laboratories, and Warren Weaver are credited with creating our first mass-adopted communication model in 1949. The Shannon-Weaver Model was built with telecommunications in mind, and the figure used to describe it on Wikipedia is so good I had to include it here:
(Check out the file history for the image here for a good laugh.)
Here’s one with labels:
Shannon and Weaver’s model essentially says that there is a source or sender that creates a message, a transmitter that encodes the message into a signal, a channel that delivers the signal, and a receiver who receives and decodes the message in some destination. Importantly, there is also noise in this model – noise being disturbances, of any kind, that prevent the intended message from reaching the receiver.
For example, as you are reading this, I am the information source, my brain and computer are working together to encode it for transmission, the internet is our channel, and you are the receiver who is decoding it (and, presumably, you are somewhere). Noise could be your dog barking at you for not paying enough attention to her, the work email from your boss that just popped up in the upper right corner of your computer screen, or that WhatsApp notification that just flashed up on your phone.
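To make the model concrete, here’s a toy sketch of it in Python. This is purely illustrative – the function names (`encode`, `channel`, `decode`) and the bit-flipping noise are my own simplification of Shannon and Weaver’s far more general idea, not anything from the model’s formal treatment.

```python
import random

def encode(message: str) -> list[int]:
    """Transmitter: encode the message into a signal (here, a stream of bits)."""
    return [int(b) for ch in message.encode("utf-8")
            for b in format(ch, "08b")]

def channel(signal: list[int], noise_level: float, rng: random.Random) -> list[int]:
    """Channel: noise flips each bit with probability noise_level."""
    return [bit ^ 1 if rng.random() < noise_level else bit for bit in signal]

def decode(signal: list[int]) -> str:
    """Receiver: decode the signal back into text, replacing undecodable bytes."""
    data = bytes(int("".join(map(str, signal[i:i + 8])), 2)
                 for i in range(0, len(signal), 8))
    return data.decode("utf-8", errors="replace")

rng = random.Random(42)
sent = "hello receiver"
received = decode(channel(encode(sent), noise_level=0.02, rng=rng))
# With nonzero noise, the received message may no longer match what was sent.
print(sent == received)
```

With the noise level at zero the message always survives intact; turn the noise up and the receiver starts decoding something other than what the source intended – which is exactly the opening fake news exploits.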
After a quick bit of commentary we’ll be done with theory, I promise. This model is great and has helped us humans communicate and create technology for a long time, but as with any model, it has its limitations and gaps. Here are some that matter particularly in the context of fake news:
I bring all of this up to say that each step in this model and most other ones that I’ve seen present opportunities for fake news to have an impact. If I were to make one of those attention-grabbing figures it would look something like this:
Fake news isn’t new, so this really isn’t new either. But the pace, reach, and technology of distribution have changed.
I’ll home in on the three gaps above in the context of media or online channels and fake news.
We won’t be able to address all possible routes by which fake news impacts our communication model, so let’s be prepared. Here are a few ideas for you to consider.
It’s critical that you understand how others see you and your brand – particularly when it comes to fake news.
A good place to start is to take an inventory of the feedback or signals you’re receiving from customers, consumers generally, and online – This allows you to develop a deeper understanding of the way the world perceives you. Take stock of how fake news is or isn’t impacting your brand, and where your vulnerabilities might be. With this you can sort out how to allocate your limited resources – whether it’s dealing with direct fake news impact, better defining your value with customers, engaging people IRL, creating content, or other efforts.
As you think about how to best allocate your resources in the context of fake news consider what sources people are telling us they trust, customize to your business, and adjust based on additional data. For example (PDF): While many place blame on traditional media for fueling fake news, according to the most recent Edelman Trust Barometer, people have become more engaged with it this year and trust it more than other sources.
While it’s difficult to predict and then prepare for a scenario in which you or your brand are on the wrong end of a fake news firestorm, by investing now you’ll be better prepared to manage it.
Define your brand’s internal mental model for fake news and its potential impact on you – This will help you “speak the same language” internally in the event of a crisis.
Develop a shared understanding of impact and how you’ll measure it (this is a really, really hard problem – don’t underestimate it). This means you and your leadership should agree on the metrics, insights, and analysis that answer difficult and abstract questions like, “Does this fake news matter?”, “How bad is it?”, or “Is now the time to take action?”.
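As a toy illustration of what “agreeing on the metrics” might look like once written down, here’s a hypothetical severity score. Every name, weight, and threshold below is an assumption of mine for the sake of the sketch – the whole point of the paragraph above is that your leadership has to pick these values, not borrow them.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    mentions: int            # mentions of the story observed this period
    baseline_mentions: int   # typical mentions in a comparable quiet period
    sentiment_shift: float   # change in average sentiment, from -1 to 1
    estimated_reach: int     # estimated audience the story may have reached

def impact_score(s: Signal) -> float:
    """Combine agreed-upon metrics into a single 0..1 severity score.

    The weights (0.4 / 0.4 / 0.2) and normalization constants are
    placeholders; real values come out of the leadership discussion.
    """
    volume = min(s.mentions / max(s.baseline_mentions, 1) / 10, 1.0)
    sentiment = max(-s.sentiment_shift, 0.0)  # only negative shifts add severity
    reach = min(s.estimated_reach / 1_000_000, 1.0)
    return round(0.4 * volume + 0.4 * sentiment + 0.2 * reach, 3)

def needs_action(s: Signal, threshold: float = 0.5) -> bool:
    """Answer 'is now the time to take action?' against an agreed threshold."""
    return impact_score(s) >= threshold
```

Even a crude score like this forces the hard conversations early: which metrics count, how they trade off, and where the action threshold sits – before a crisis, not during one.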
Invest in the data, technologies, and tools that enable you to detect and measure fake news’ impact on your brand. This means everything from social media, to market research, to customer feedback, and more.
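What “detection” looks like in practice varies widely; here’s a deliberately minimal sketch of a first-pass filter over a stream of mentions. The function name and the phrase-matching approach are my own illustration – real tooling layers classifiers, network analysis, and human review on top of anything this simple.

```python
def flag_suspect_mentions(mentions: list[str], claim_phrases: list[str]) -> list[str]:
    """Return the mentions that repeat known false-claim phrases.

    A crude, case-insensitive substring match – useful only as a triage
    step to route candidate mentions to human reviewers.
    """
    flagged = []
    for mention in mentions:
        text = mention.lower()
        if any(phrase.lower() in text for phrase in claim_phrases):
            flagged.append(mention)
    return flagged

mentions = [
    "Brand X is recalling every product they ever made!!",
    "Just had a great coffee from Brand X",
]
claims = ["recalling every product"]
print(flag_suspect_mentions(mentions, claims))
```

The value isn’t in the string matching; it’s in having any pipeline at all that turns raw social and customer-feedback data into a reviewable queue.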
Your people are what make that data and technology useful – constantly invest in them. Give them feedback. Hold them accountable. Let them hold you accountable.
If you’ve defined your terms and invested in your people and technologies, you’re ready to wrap them up in a process that connects your various teams in order to efficiently make decisions and allocate resources in the event of a fake news impact.
This process will need to be flexible enough to allow for a variety of fake news scenarios (and others), but clear enough to quickly assemble decision-makers. The success of fake news is at least partially determined by its ability to re-direct resources and attention, so if you’re making better decisions about your resources in a fake news crisis, you’re already better managing it.
You might take it one step further and take my crudely modified communications model figure from above (yes, that “attention-grabbing” one) and use it to red team each point in your process – This will only make it more robust.
Reminder: This process shouldn’t only be reactive; if you’re identifying knowledge gaps now you can start filling them now by creating content, engaging audiences, communicating, and building trust.
Here’s the doom and gloom part of the story – technology is changing rapidly and it’s hard to keep up! Well, that wasn’t so bad.
But, with new technology producing DeepFake mashups of Steve Buscemi and Jennifer Lawrence and scary good synthesized audio of President Obama – we have reason to shake a little in our boots. To Buscemi’s response on The Late Show with Stephen Colbert, “It makes me sad that somebody spent that much time on that.”, I say, it’s only going to get easier and faster.
Not to add too many layers of scary to this, but as TechCrunch put it, OpenAI built a text generator so good, it’s considered too dangerous to release. So now we have computer generated audio, video, and text that is quickly becoming indistinguishable from the human-made kind. I’ll let your imagination fill in the gaps here when it comes to your company.
While this presents a future market for technology and companies working on detecting these kinds of fakes or verifying the “real-ness” of human generated content, we’re not there yet, and preparing our processes to manage a crisis will be our best bet for now.
Thanks to Andy Schaul for writing for us during Fake News Week. You can find him on Twitter here.