How should cognitive mechanisms in general go about producing sound beliefs? Part of the answer, it seems, is that they should be free of bias. Biases have a bad press, in part because a common definition is “inclination or prejudice for or against one person or group, especially in a way considered to be unfair.” However, psychologists often use the term differently, to mean a systematic tendency to commit a specific kind of mistake. These mistakes need not have any moral, social, or political overtones.
A first kind of bias simply stems from the processing costs of cognition. Costs can be reduced by using heuristics, that is, cognitive shortcuts that are generally reliable but that in some cases lead to error. A good example of this is the “availability heuristic” studied by Tversky and Kahneman. It consists in using the ease with which an event comes to mind to guess its actual frequency. For instance, in one experiment participants were asked whether the letter R occurs more frequently in first or third position in English words. Most people answered that R occurs more often in first position, when actually it occurs more often in third position. The heuristic the participants used was to try to recall words beginning with R (like “river”) and words with R in third position (like “bored”) and assume that the ease with which the two kinds of words came to mind reflected their actual frequency. In the particular case of the letter R (and of seven other consonants of the English alphabet), the availability heuristic happens to be misleading. In the case of the other thirteen consonants of the English alphabet, the same heuristic would give the right answer.
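To see what answering such a question without the heuristic would involve, here is a minimal sketch of the brute-force approach: tallying letter positions across a word list. The handful of words below is a hypothetical sample chosen only to show the procedure; real position frequencies would require counting over a large English corpus.

```python
# A brute-force tally of letter positions, i.e., the effortful strategy the
# availability heuristic lets us skip. The word list is a tiny, hypothetical
# sample used only to illustrate the procedure; it says nothing about the
# true frequencies in English.

def position_counts(words, letter):
    """Return how many words have `letter` in first position and in third position."""
    first = sum(1 for w in words if len(w) >= 1 and w[0] == letter)
    third = sum(1 for w in words if len(w) >= 3 and w[2] == letter)
    return first, third

sample = ["river", "road", "rain", "bored", "care", "word", "fort", "tire"]
first, third = position_counts(sample, "r")
print(f"'r' in first position: {first}, 'r' in third position: {third}")
```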
Although the availability heuristic can be described as biased—it does lead to systematic errors—its usefulness is clear when one considers the alternative: trying to count all the words one knows that have R as the first or third letter. While the heuristic can be made to look bad, we would be much more worried about a participant who actually engaged in this painstaking process just to answer a psychology quiz. Moreover, the psychologist Gerd Gigerenzer and his colleagues have shown that in many cases using such heuristics not only is less effortful but also gives better results than using more complex strategies. Heuristics, Gigerenzer argues, are not “quick-and-dirty”; they are “fast-and-frugal” ways of thinking that are remarkably reliable.
A second kind of bias arises because not all errors are created equal. More effort should be made to avoid severely detrimental errors, and less to avoid relatively innocuous mistakes.
Here is a simple example illustrating how an imbalance in the cost of mistakes can give rise to adaptive biases. Bumblebees have cognitive mechanisms aimed at avoiding predators. Among their predators are crab spiders, small arachnids that catch the bees when they forage for nectar. Some crab spiders camouflage themselves by adopting the color of the flowers they rest on: they are cryptic. To learn more about the way bumblebees avoid cryptic predators, Thomas Ings and Lars Chittka created little robot spiders. All the robots rested on yellow flowers, but some of them were white (noncryptic) while others were yellow (cryptic). To simulate the predation risk, Ings and Chittka built little pincers that held the bees captive for two seconds when they landed on a flower with a “spider.”
In the first phase of the experiments, two groups of bumblebees, one facing cryptic spiders and the other facing noncryptic spiders, had multiple opportunities to visit the flowers and to learn which kind of predators they were dealing with. Surprisingly, both groups of bumblebees very quickly learned to avoid the flowers with the spiders—even when the spiders were cryptic.
Still, the camouflage was not without effect: to achieve the same ability to detect spiders, the bumblebees facing camouflaged spiders had to spend nearly twice as long inspecting each flower. This illustrates the cost of forming accurate representations of one’s environment: time spent inspecting could not be spent harvesting nectar.
There is also an asymmetry in the costs of the two possible mistakes: landing on a flower that carries a spider (high cost) versus needlessly avoiding a spider-free flower (low cost). This asymmetry, too, shaped the bumblebees’ behavior. By the second day of the experiment, the bees had learned about the predation risks in their environment. Instead of spending forever (well, 0.85 seconds) inspecting every flower to make sure it carried no spider, the bumblebees facing the cryptic spiders settled for a higher rate of “false alarms”: they were more likely than the other bees to avoid flowers on which, in fact, there were no spiders.
This experiment illustrates the exquisite ways in which even relatively simple cognitive systems adjust the time and energy they spend on a cognitive task (ascertaining the presence of a spider on a flower) to the difficulty of the task on the one hand, and to the relative cost of a false negative (assuming there is no spider when there is) and of a false positive (assuming there is a spider when there is not) on the other hand. This difference in cost results in a bias: making more false positive than false negative errors.
This bias, however, is beneficial.
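The logic behind this bias can be made explicit with a simple expected-cost comparison. The sketch below uses made-up numbers (a missed spider assumed to cost twenty times more than a needless detour); it is not a model of the actual experiment, only an illustration of why asymmetric error costs push a forager toward tolerating many false alarms.

```python
# A toy expected-cost comparison (hypothetical numbers, not data from the
# bumblebee experiment): when a false negative (landing on a flower that
# hides a spider) costs far more than a false positive (skipping a
# spider-free flower), the break-even probability for avoidance is low,
# so frequent "false alarms" become the sensible policy.

def should_avoid(p_spider, cost_false_negative, cost_false_positive):
    """Avoid the flower when the expected cost of landing exceeds the cost of skipping it."""
    return p_spider * cost_false_negative > (1 - p_spider) * cost_false_positive

C_FN, C_FP = 20.0, 1.0               # assume being caught is 20x worse than a detour
threshold = C_FP / (C_FP + C_FN)     # break-even probability, here about 0.048

print(f"Avoid any flower judged more than {threshold:.1%} likely to hide a spider")
print(should_avoid(0.10, C_FN, C_FP))  # True: even a 10% hunch triggers avoidance
```

With these assumed costs, a flower needs only about a one-in-twenty chance of hiding a spider before avoiding it pays off, which is why erring on the side of false positives is the better trade.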
Is Bias Always Bad? by Manuel Fraga is licensed under a Creative Commons Attribution 4.0 International License.