by Pieter van den Berg
We all know humans are social animals. So it makes sense that much of our psychology is geared towards analyzing social situations and choosing appropriate actions for them. This social psychology has to work in a complex and messy world. Every social interaction is different, involving different parties with their own desires and interests, and different possible actions to choose from. To make matters worse, we have to operate under considerable uncertainty: it is often impossible to accurately predict how our actions will turn out.
How has evolution equipped us with an effective psychology for navigating social decision making in this complex and uncertain world? This is not an easy question. Social evolution is intricate, because the fitness consequences of social behaviors depend on what others in the population are doing. A cooperative individual may be successful in a population full of trustworthy interaction partners, but that same individual may fare much worse when surrounded by cheaters. To gain insight into how evolution shapes social behaviors despite these intricacies, scholars have often relied on highly simplified models of social evolution, reducing the messy social world to just a single type of interaction such as (most famously) the Prisoner’s Dilemma Game. The idea is that if we understand how evolution hypothetically shapes behavior in that very specific situation, we will gain an understanding of how evolution shaped parts of our social psychology.
But does evolution really tailor behavior separately for each specific social circumstance that may arise? This seems unlikely – especially given that individuals often don’t even know all the specifics of the situations they find themselves in. By now, a realization has settled in that the human mind is not a smooth optimization machine, but often works in ways that seem unsophisticated or crude. This is exemplified in what are referred to as ‘cognitive biases’ – evidence that people are inconsistent, or that they systematically deviate from acting in their own rational interest. Such biases can be the result of a psychological machinery that is not perfectly attuned to the situation at hand, but rather operates by using ‘heuristics’: simple behavioral strategies that work well across a range of situations, but that can ‘misfire’ in some specific circumstances.
Do people apply such rough rules of thumb in social situations, leading to behavior that is suboptimal or inconsistent? A few recent evolutionary modelling studies suggest that we might expect them to. Bear & Rand have shown that evolution can produce a psychology where individuals ‘intuitively’ invest in cooperative relationships, even if that relationship will not extend far enough into the future to make that investment worthwhile. In 2018, I developed a model to investigate how individuals are selected to behave in a ‘messy’ world with many different types of social interactions and some uncertainty about which situation they are in. This model showed that evolution predictably leads to the emergence of simple heuristic strategies that often cooperate, even in situations where it is guaranteed to lead to bad outcomes. These ‘social heuristics’ even evolved when individuals had only intermediate uncertainty about the social interactions they engaged in, and could implement more sophisticated strategies (tailoring behavior to specific circumstances) without any extra cost.
For a new study that was just published in Evolution and Human Behavior, we conducted a decision making experiment in which we confronted participants with a situation similar to the one faced by the individuals in our evolutionary simulation model. Through software specially designed for this study, our test subjects were repeatedly paired with other participants to engage in a social interaction. At the core, the social interactions were simple: both participants in a pair had to simultaneously choose whether or not to ‘help’ the other. Helping provided a benefit to the interaction partner (in points that were later converted to real money), but it also had a consequence for the helper herself. This consequence of helping varied between situations: sometimes it was directly beneficial to the helper, but in other situations it carried a crippling cost. Between different experimental treatments, we varied how much uncertainty the participants had about the consequences of helping. This ranged from no uncertainty at all, via partial uncertainty (participants were told that the consequence of helping lay in some range), to complete uncertainty.
Our results show that in the treatments with more uncertainty, participants helped their interaction partners more often than participants with little or no uncertainty did. The reason behind this can be found in social heuristics. Most participants interpreted the uncertainty range they were given in a very simple way: they just acted as if the real consequence of helping was given by the center of the uncertainty range. For example, a participant who was told that the consequence of helping would be anywhere between a cost of 5 and 15 points acted the same way as a participant who was certain that it was 10 points. This is a simple solution to dealing with uncertainty: if you do not know which of many possible scenarios is going to unfold, just choose one that seems typical or likely and act as if you are certain that this is what is going to happen.
The social heuristic our participants were using was effective in turning a situation of uncertainty into a situation of virtual certainty. But, as a side-effect, it also led them to cooperate more. To see why, consider a simplified version in which there are just three possibilities, each equally likely: the cost of helping is either 5, 10 or 15 points. Let’s say you would be willing to help at a cost of 5 or 10 points, but not 15. If you’re like our participants, you will reduce your uncertainty by assuming you are in the most typical situation: you will just act as if you are sure that the cost is 10 points, and so you will help. This means you are helping more than you would have without uncertainty, because then you would only have helped in two thirds of the cases. Our experimental results can be explained in a similar way: because participants who had to deal with uncertainty used a heuristic, they ended up helping more than their counterparts who faced no uncertainty.
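The arithmetic of this simplified example can be sketched in a few lines of code. This is only an illustration of the midpoint heuristic described above, not the analysis from the study itself; the threshold of 10 points and the three equally likely costs are taken from the example in the previous paragraph.

```python
def helps(cost, threshold=10):
    """Certainty case: help if and only if the known cost does not exceed the threshold."""
    return cost <= threshold

def helps_under_uncertainty(possible_costs, threshold=10):
    """Midpoint heuristic: act as if the true cost is the center of the range."""
    midpoint = (min(possible_costs) + max(possible_costs)) / 2
    return helps(midpoint, threshold)

# With full information, the agent helps in only 2 of the 3 equally likely cases:
certain_choices = [helps(c) for c in (5, 10, 15)]        # [True, True, False]

# Under uncertainty, the midpoint (10) sits right at the threshold, so it always helps:
uncertain_choice = helps_under_uncertainty((5, 10, 15))  # True
```

Comparing the two cases makes the side-effect visible: the same decision rule, fed the midpoint instead of the true cost, helps in 3 out of 3 situations rather than 2 out of 3.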
So what do these results teach us about the evolution of human social behavior? They confirm that human minds are not calculators aimed at optimizing behavior in every possible scenario that might arise, but that their cognitive solutions have been shaped by a world of considerable complexity and uncertainty. The way our minds deal with this uncertainty is not to painstakingly account for it, but to reduce it to something more manageable, at a limited cost in consistency and optimality. Such heuristics can probably play out in many ways, but our experiment shows that they can lead to a higher willingness to cooperate with others.