  Evans realised that Wikipedia is the perfect place to investigate this question (I’m going to use ‘Evans’ as a shorthand for the team of researchers he led). Wikipedia is a remarkable feat of teamwork. Each page is overseen by an ad hoc community of volunteer editors. Behind every topic there is a ‘talk page’, which anyone can open up to observe what goes on behind the scenes of the page you see. In the talk page, editors debate proposed additions and deletions, and engage in elaborate arguments, as they try to persuade each other of what should be included on the public-facing page. Some teams produce better-quality pages than others. We know this because Wikipedia assigns a grade for quality to each page, based on how readable, accurate, comprehensive and well-sourced it is.

  Evans used machine learning to identify the political leanings of hundreds of thousands of editors – whether they were ‘red’ or ‘blue’ – based on their edits of political pages. He was then able to identify the political make-up of thousands of editorial teams, including those working on pages relating to politics and social issues. Some articles were run by highly polarised teams made up of red and blue editors, others by editors who were more in alignment with each other. Here’s what Evans discovered: the more polarised the team, the better the quality of the page.

  Ideologically polarised teams were more competitive – they had more arguments than more homogeneous or ‘moderate’ teams. But their arguments improved the quality of the resulting page. The conversations they had on the talk pages were longer, because neither side was willing to give in on a point without a struggle. These longer arguments generated better-quality content, as assumptions were unearthed and arguments honed. Editors working on one page told the researchers, ‘We have to admit that the position that was echoed at the end of the argument was much stronger and balanced.’ That ‘have to’ is important: the begrudging way that each side came to an agreement made the answer they arrived at stronger than it otherwise would have been. As Evans puts it, ‘If they too easily updated their opinion, then they wouldn’t have been motivated to find counter-factual and counter-data arguments that fuel the conversation.’

  Egoism – the need to be seen to be right – and tribalism – the desire to see our group win – are usually portrayed only as enemies of good disagreement. Understandably so, because in most cases they are. The productive competition between Wikipedia’s editing teams, however, suggests that even tribalism can bear intellectual fruit, providing the participants share a common goal and agree on rules of conduct (of which more later). The best thing to do with our tendency to make self-centred arguments is to harness it.

  What connects Socrates, Buffett and the Wikipedians is their understanding of a profound truth about human cognition: our intelligence is interactive.

  * * *

  Since Socrates’ time, the ability to reason has been heralded as humanity’s supreme attribute, the thing that sets us apart from other species. This raises a tricky question. If reasoning towards the truth is humanity’s superpower, why is everyone so bad at it?

  If you were asked to help people arrive at more accurate beliefs and better decisions, you’d probably start by improving their ability to spot their own errors. After all, nobody can be sure they’re right about anything until they’ve fully considered why they might be wrong. But we are all generally very bad at this. We cling to our opinions even in the face of evidence to the contrary. If I believe the world is going to hell in a handcart, I’ll only notice bad news and screen out the good. If I’ve decided that a politician is brilliant, I’ll only notice her achievements and ignore her screw-ups. Once I’ve decided that the Moon landings were a hoax, I’ll seek out YouTube videos that agree with me and wave away the counter-evidence.

  Psychologists have now established beyond doubt that people are more likely to notice and consider evidence that confirms what they believe, and to discount anything that suggests the opposite. Humans have an instinctive aversion to the possibility they are wrong; they deploy their powers of reason to persuade themselves that they are right even when they’re not. Armed with a hypothesis, we bend the world around it. This characteristic, known as ‘confirmation bias’, appears to be a serious problem for our species. It makes us more likely to deceive ourselves and to believe the lies of others, and less likely to see anyone else’s point of view. ‘If one were to attempt to identify a single problematic aspect of human reasoning that deserves attention above all others, the confirmation bias would have to be among the candidates for consideration,’ says Raymond Nickerson, a psychologist at Tufts University. Cleverness is no cure for this problem; studies have found that intelligent and educated people are just better at persuading themselves they’re right, since they are more skilled at generating self-justifying arguments.

  This presents a puzzle. Why has evolution endowed us with a tool that is at once incredibly sophisticated and yet so faulty that, if you purchased it from a shop, you would send it back? A pair of evolutionary psychologists called Hugo Mercier and Dan Sperber have offered an intriguing answer to this question. If our reasoning capacity is so bad at helping individuals figure out the truth, they say, that’s because truth-seeking isn’t its function. Instead, reason evolved to help people argue.

  Homo sapiens is an intensely collaborative species. Smaller and less powerful than other species – weedy, compared to our Neanderthal cousins – humans have nevertheless managed to dominate almost any environment they have set foot in, mainly because they are so good at banding together to get stuff done. To that end, humans have evolved a finely tuned set of abilities for dealing with other humans. In Mercier and Sperber’s view, reasoning is one of those social skills. Reason evolved to help people do things with other people – to hunt down prey, make a fire, build a bridge. The giving of and asking for reasons enabled individuals to influence others and win them to their side, and it also had the effect of making people accountable for their own actions (‘OK, let me explain why I took more than my share of mammoth meat . . .’). The point of being able to come up with reasons is to present them to others in support of your argument, or to knock down someone else’s – that is, to argue.

  It’s not hard to see why those with superior reasoning ability would have been more likely to survive and pass on their genes. The ability to give and examine reasons turns disagreements that might have become violent, even fatally so, into arguments. If I want to start a fire and you want to build a shelter, we can exchange reasons for and against doing so, rather than fighting about it. Those who were particularly skilled at taking part in this back-and-forth would be better at heading off threats, and able to display their competence to the group, winning allies and impressing potential mates.

  The giving and asking of reasons is an important way for people to establish the kind of relationship that will enable them to collaborate. For you to trust me as someone with whom you can do business (literally or metaphorically), I can’t just say I want something, or that I disagree. I need to explain why, and I expect the same from you. The only people we don’t expect this from are small children, who, when asked to justify their wants, tend simply to say ‘BECAUSE I WANT IT’. Teaching children to say something more persuasive after ‘because’ is a vital part of their socialisation. Parents can encourage that by modelling it. When you have a disagreement with your child, try to give them reasons why you want them to do something, even when all you really want to say is ‘BECAUSE I SAY SO’.

  Mercier and Sperber are ‘interactionists’, as opposed to ‘intellectualist’ thinkers. For intellectualists, the purpose of reason is to enable individuals to gain knowledge of the world. But as we’ve seen, reason often seems to be used to entrench whatever we want to believe, regardless of whether it is true or not. In the interactionist view, reason didn’t evolve to help individuals reach truths, but to facilitate communication and co-operation. In other words, reasoning is inherently social, and makes us smarter only when we practise it with other people in a process of argument. Socrates was on to something.

  The myth of the rational individual who can think his (and it usually has been ‘his’) way through any problem in magnificent isolation is powerful but misleading. For a start, while humans have accumulated a vast store of collective knowledge, each of us alone knows surprisingly little, certainly less than we imagine. In 2002, the psychologists Frank Keil and Leonid Rozenblit asked people to rate their own understanding of how zips work. The respondents answered confidently – after all, they used zips all the time. But when asked to explain how a zip works, they failed dismally. Similar results were found when people were asked to describe climate change and the economy. We know a lot less than we think we do about the world around us. Cognitive scientists call this ‘the illusion of explanatory depth’, or just ‘the knowledge illusion’.

  What enabled humans to conquer the planet is not that we ‘think for ourselves’; it is instead our unrivalled ability to think in groups. There is nothing we do, from getting dressed, to using a computer, that does not rely on the knowledge of other people. Each of us is plugged into a vast network of knowledge, passed down from the dead and shared among the living. The more open and fluid your local network, the smarter you can get. Open disagreement is one of the main ways we have of raiding other people’s expertise while donating our own to the common pool.

  However, as Socrates knew, disagreements only generate truth under certain conditions, one of them being what Mercier and Sperber call a ‘division of cognitive labour’. In the ideal discussion, each individual focuses mainly on the search for reasons for their preferred solution, while the rest of the group critically evaluates those reasons. Everyone throws up their own hypotheses, which are then tested by everyone else. That’s a much more efficient process than having each individual try to come up with and evaluate all the different arguments on both sides of the issue, and it’s likely to lead to better decisions.

  This solves the puzzle of why evolution bequeathed us confirmation bias. In the context of a well-functioning group discussion, confirmation bias is a feature, not a bug, though only if we use it as nature intended. Think about what it’s like when someone contradicts you. You feel motivated to think of all the reasons you’re right, and to cite them in your support, at least if it’s something you care about or when it’s important to be seen to be correct (this is why Mercier and Sperber prefer the term ‘myside bias’ to ‘confirmation bias’: it only kicks in when your identity or status is threatened). That’s an emotional response as much as a cognitive one. Some people might advise you to put your emotions to one side and evaluate arguments purely rationally. But by allowing your emotion to drive your search for good arguments, you’re actually doing something productive: contributing new information and new ways of thinking about the problem to the group.

  You might be doing it for selfish or narrow reasons – maybe you want to justify yourself and prove how smart you are. Even then, you’ll help the group generate a diversity of viewpoints, as people strive to put their reasons forward. Since everyone has an incentive to knock down competing arguments, the weakest arguments get dismissed while the strongest arguments survive, bolstered with more evidence and better reasons. The result is a much deeper and more rigorous thought process than any one of you could have carried out alone. That’s exactly how the Wikipedia editing process works, according to James Evans’s study. It’s how Warren Buffett designs the decision-making process for investment. It’s the principle that underlies Socratic dialogue.

  Looked at through the interactionist lens, confirmation bias isn’t something to eliminate; it’s something to harness. Under the right conditions, it raises the collective intelligence of a group. What are those conditions? First, the group must disagree openly, with each individual feeling genuinely compelled, and able, to put their best case forward. Second, and most fundamentally, the members of the group must have a common interest – in the truth, or the right decision. If each member is only defending their own position, or trying to get one-up on everyone else, then the weaker arguments don’t get eliminated and the group won’t make progress. When each person takes a strong position and at the same time allows themself to be swayed by better arguments, the group moves forward.

  Confirmation bias, like conflict itself, is curvilinear, operating on an inverted U-curve. A lot of it is bad; so is none of it. I’ve sat around tables at work where most people don’t express a strong point of view and simply accept whatever the most confident person in the room says. The result is a lifeless discussion in which the dominant view isn’t tested or developed. Just as in romantic relationships, you can be left wondering how committed those people are to whatever project they are pursuing. You might also, of course, wonder whether the leaders of the company have made it clear that they do not wish to be disagreed with, and that dissenters will be penalised.

  I’ve also sat at tables where different individuals fight their corner, sometimes even slightly beyond the point at which it seems reasonable to do so. Those discussions can be rumbustious and uncomfortable at times, but they are generally higher quality and, when conducted respectfully, they can bring the members of a team closer together. Having said that, individuals who never back down from their viewpoint waste everyone’s time. There are a lot of annoying people on the far side of the inverted U – and a lot of fruitless debates. You should bring your whole, passionate, biased self to the table, but you also must judge when to separate yourself from the argument you’ve been pursuing.

  The chemistry of disagreement is inherently unstable. It’s always threatening to move towards one extreme or the other. Self-assertion becomes aggression, conviction becomes stubbornness, the urge to conform becomes the instinct to herd. Over centuries, we’ve developed processes and institutions to stabilise this volatility and provide the right conditions for productive disagreement. Foremost among them is the institution of modern science. But even among scientists, bias can get out of hand.

  * * *

  Four hundred years ago, Francis Bacon warned against what we now call confirmation bias: ‘The human understanding when it has once adopted an opinion . . . draws all things else to support and agree with it.’ In order to solve this problem Bacon formulated what became known as the scientific method. He instructed scholars to test their theories against real-world observation, so that they could ‘analyse nature by proper rejection and exclusion’. Following Bacon, science developed into a discipline, and a community with a division of cognitive labour. Scientists publish research on the topics they care about and try to build a case for their theory. Their work is peer-reviewed and examined by other experts in their field. Scientists try to knock down each other’s arguments, at the same time as they learn from each other. Science makes the most of reason’s social nature.

  Much as we celebrate great individual scientists, it is scientists as a group who make progress. Confirmation bias runs amok when an individual is isolated from people who disagree with them, no matter how brilliant their mind. Isaac Newton spent the last decades of his life immersed in a futile quest to turn base metals into gold. If that work didn’t lead anywhere, it was at least partly because he did it alone, without collaborators or reviewers. When he published his groundbreaking work in physics, by contrast, Newton was drawing on the published work of others (‘standing on the shoulders of giants’, as he put it), and doing so in the knowledge that mathematicians and astronomers across Europe would pounce on any weak arguments.

  For the most part, this system has worked very well, leading to the huge advances in medicine and technology that define modernity. It is when science’s participants forget how to disagree well that things can get bent out of shape, as the story of John Yudkin illustrates.

  In the early 1980s, Western governments, after consulting with the world’s top nutrition scientists, told us to change the way we ate. If we wanted to stay healthy, they said, we needed to cut back on foods rich in saturated fats and cholesterol. By and large, we did as we were told. Steak and sausages were replaced with pasta and rice, butter with margarine and vegetable oils, and eggs and toast with muesli and low-fat yoghurt.

  Instead of becoming healthier, however, we grew fatter and sicker. In the decades that followed, a public health catastrophe unfolded. Obesity, which until then had been relatively stable, rose dramatically, as did the incidence of related diseases, like diabetes. In recent years, the advice has changed. Although we’re still advised to moderate our fat consumption, we are told to beware of another enemy of our health, one that is just as bad, if not worse: sugar.

  It would be natural to believe that this sharp change of emphasis came about because nutrition science advanced and new discoveries were made. Not true. The scientific evidence was there all along. It was overlooked because nutrition scientists had forgotten how to disagree with each other and allowed confirmation bias to run riot.

  John Yudkin’s book Pure, White and Deadly was published in 1972. It warned the world that the real threat to people’s health was not fat, but sugar. ‘If only a small fraction of what we know about the effects of sugar were to be revealed in relation to any other material used as a food additive,’ wrote Yudkin, ‘that material would promptly be banned.’

  Yudkin, a professor of nutrition at Queen Elizabeth College in London, noted that refined sugar has been a major part of Western diets for only 300 years; in evolutionary terms, it is as if we have, just this second, taken our first dose of it. Saturated fats, by contrast, are so intimately bound up with our evolution that they are abundantly present in breast milk. To Yudkin’s thinking, it seemed more likely that the recent innovation, rather than the prehistoric staple, was making people sick. He also believed that the evidence that fat is bad for us was relatively weak. He argued that sugar was the more likely cause of obesity, heart disease and diabetes. In the 1960s, the debate over whether sugar or fat was more harmful was a lively one. But by the time Yudkin wrote his book, the commanding heights of his field had been seized by proponents of the fat hypothesis, and most nutrition scientists had signed up to a new consensus: a healthy diet is a low-fat diet. Yudkin led a diminishing band of dissenters.