
Post-Truth and Cognitive Bias Part I: Bias, Bias Everywhere

‘May you live in interesting times’ is said to be a Chinese curse condemning the recipient to live through times of turmoil – a phrase that members of the political class seem to have recently taken to heart. On further research it isn’t a Chinese curse at all, but it still feels very apt given that Oxford Dictionaries’ word of the year is ‘post-truth’.

This new adjective describes situations in which objective facts are less influential in shaping public opinion than appeals to emotion or personal belief. Instead of debating objective truth, if something feels true then, for many, it may as well be. The media was taken aback, questioning how this could possibly happen. After all, how could a species whose name means “wise man” in Latin fail to take facts into account when making big decisions, say in elections or referenda? Psychologists, however, have known for quite some time that humans aren’t that wise, and perhaps this post-truth world we find ourselves in isn’t so surprising.

Wise people should be objective and rational. They should parse information in an unbiased manner, weigh the pros and cons and come to a decision. Humans, on the other hand, are filled with biases that can shape their decisions without them being aware of it. It isn’t entirely clear what causes someone to hold a particular bias, but biases are typically shaped by experiences in childhood, adolescence and early adulthood.

When faced with a decision we often turn to the media for the information needed to form our opinions. Language, however, can easily be manipulated to change the way we respond to an issue. The psychologists Daniel Kahneman and Amos Tversky pioneered this idea of “framing”. As part of their research they asked a group of students the following question:

Imagine the US is preparing for the outbreak of an unusual disease, which is expected to kill 600 people. Two programs for combating the disease have been proposed. Assume that the exact consequences of each program are:

If program A is adopted, 200 people will be saved.

If program B is adopted, there is a 1/3 probability that 600 people will be saved and a 2/3 probability that no one will be saved.

Which of the two programs would you favour?

In this scenario, 72% of respondents chose the risk-averse program A. The certainty of saving 200 lives appeared more attractive than the risk of no one being saved. A second group was asked the same question but given the following possible outcomes:

If program C is adopted, 400 people will die.

If program D is adopted, there is a 1/3 probability that no one will die and a 2/3 probability that 600 people will die.

When presented with these choices, 78% of the group decided to take the risk with program D. Yet programs A and C are identical, as are programs B and D – as the quick sum below shows, the only difference is the framing. The first pair is framed in terms of saving lives, the second in terms of losing them. The experiment was repeated with different groups of people, including doctors, and the same trend was found: framing language in terms of losing life made the group engage in more risk-taking behaviour. It doesn’t take much imagination to see that politicians could easily exploit this effect. £350 million a week for the NHS, anyone?
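To spell out that equivalence, here is a quick back-of-envelope check using the probabilities above (my own sum, not part of the original study’s write-up):

Expected number saved: A = 200; B = (1/3 × 600) + (2/3 × 0) = 200

Expected number of deaths: C = 400; D = (1/3 × 0) + (2/3 × 600) = 400

Program A’s 200 saved out of 600 is exactly program C’s 400 dead, and B and D describe the same gamble; only the wording changes.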

Our ability to misinterpret information isn’t limited to rhetorical sleight of hand – bias can colour our numerical skills too. During the EU referendum, the Online Privacy Foundation surveyed over 11,000 leave and remain voters on social media. When first presented with data describing the effectiveness of a skin cream, both groups were equally likely to interpret it correctly. When the same data were shown in the context of immigration and crime rates, however, their numeracy skills abandoned them: if the statistics didn’t support a voter’s view, their ability to interpret them correctly dropped – in some age groups by as much as 50%.

A family connection with a party can influence our voting intentions – a motivation that could be described as “post-truth”, but one as old as the vote itself. These connections can grow into part of someone’s identity, influencing opinions on ostensibly unrelated subjects, a bias known as motivated reasoning. A recent Pew survey showed that Republicans are significantly less likely to believe that scientists know climate change is occurring, that human activity drives it, or that those scientists report their findings accurately. The psychologist Stephan Lewandowsky investigated this idea further by looking for a link between conservative ideologies and climate change denial. His study found a strong link between belief in the free market and climate change denial, leading him to theorise that the objections were born of a rejection of the need for governments to regulate businesses’ polluting practices. When political ideology shapes scientific opinion in this way, it is known as politically motivated reasoning.

This strong personal party identification can override a seemingly rational decision to change one’s vote. Political researchers Edzia Carvalho and Kristi Winters conducted a series of interviews with members of the UK electorate in the run-up to the 2010 general election, covering “Cleggmania”, a period of surging popularity for the Liberal Democrat leader Nick Clegg. They identified a group of previous Labour and Conservative voters who, swayed by his performance, were sincerely considering voting for the Liberal Democrats. On election day, however, confronted with the ballot, their pre-existing party identification made the act of voting for a different party uncomfortable. Rather than go against that identification, these voters stayed with their prior party of choice.

These are just a handful of examples of how our inherent biases can influence our decision-making. In my opinion, it is largely the combination of these biases and their impact on voting in high-profile elections and referenda that has produced the post-truth world we find ourselves in. However unhelpful bias may seem, it serves a purpose: it acts as a mental shortcut, helping us come to a decision when we have little information about a topic. Our hunter-gatherer ancestors found such shortcuts useful when deciding whether to keep foraging in an area being depleted of fruit or to gamble on moving to another part of the forest where richer pickings might, or might not, be on offer.

These shortcuts are less useful for the more complicated decisions we face today. We live in an incredibly complex world filled with uncertainty, and it’s no wonder we fall back on biases to help us make sense of it all. So how do we overcome them? This is a key question we need to answer if we are to engage the world in meaningful and constructive discussion, at a time when liberals and conservatives are becoming equally averse to engaging with each other’s points of view.

Keir Birchall

Featured image courtesy of UK Parliament via Flickr

