HBES Elections 2023

The Human Behavior and Evolution Society (HBES) is currently conducting elections for all offices.

Votes will be collected from April 15 to May 15, 2023.

We kindly ask you to cast your vote and use the opportunity to check/update your account.

1) HBES 2023 Election:

To cast your vote:

i) Go to the HBES website at http://www.hbes.com

ii) Login with your credentials (“Login” menu top right). In case you have forgotten your password, please reset it (“Forgot password?”).

iii) Once you are logged in, click on “2023 HBES Elections” in the pale yellow bar at the top of the page. This will open the ballot page.

iv) Please cast your vote, either by selecting from nominated candidates, or by providing names of alternative candidates.

v) Once you have indicated your vote, click on “Submit your vote” at the bottom of the page.

Please note that only one vote from a member is permitted; i.e., once you have logged in to the website and submitted your vote, you won’t be able to vote again.

2) HBES Account information:

Please use the opportunity to check – and, if necessary, update – your account information while you are logged in to the HBES Website.

You can access your HBES account by clicking on “Update Profile” in the top menu.

Check that your information is complete and up to date. Submit any changes by clicking the “Submit” button at the bottom of the page.

Thank you!

How the mind decides to help and harm: Welfare tradeoffs among US and Argentine students and members of the Shuar and Tsimane of the Amazon

– by Andrew W. Delton. Photo credit: Arnulfo Cari Ista. Co-author Adrian Jaeggi leads a member of the Tsimane community through a task measuring helping.

“It is interesting to contemplate an entangled bank, clothed with many plants of many kinds, with birds singing on the bushes, with various insects flitting about, and with worms crawling through the damp earth” (Darwin, 1859, Origin of Species). As we stand beside a river and gaze at such a bank, images enter our minds unbidden. We need not cogitate to see the warblers and the dragonflies, the churn of the water, or the moss-covered rocks. Our perception of the scene feels unmediated.

This, we know, is an illusion. Vision arises from a fiendishly complicated set of adaptations, including eyes for capturing photons bouncing off nearby objects and computations performed by the nervous system for turning this raw sensory data into a useful picture of the world.

Feelings, too, enter our minds unbidden. When our child comes crying to us with a scraped knee, warmth rises in our breast and we want to console her. When a friend needs us to cover the check, we are only too happy to do so. When an enemy commits a faux pas, a bolt of cold glee takes hold—this accidental gift will help us burnish our reputation at his expense.

Desires to help or harm also feel unmediated. But, as with vision, are these desires also created by complex and largely unconscious computations? Our team, including Daniel Sznycer, Adrian Jaeggi, Julian Lim, and other collaborators, designed a series of studies to find out.

We studied how people decide to help or harm, specifically whether they do so with precision. We suspected that the felt simplicity of these desires is actually created by computations involving precise variables. These variables encode how much a person is willing to trade off their own welfare to help or hurt another person. Given their function, we call them welfare tradeoff ratios.

In our studies your task was to decide whether to give one sum of money to a specific other person (perhaps your best friend or an acquaintance) or to keep a different sum of money for yourself. For instance, would you take $54 for yourself or give your friend $37? Probably you’d take the $54. But what if you would get only $46? Or $39? Or $31? Now you might switch to giving the $37 to your friend.

We had participants make many decisions (in some cases up to 60). Across decisions, we varied exactly how much money was at stake for both people. In most, you decided whether to pass up money for yourself to give to the other person—helping. More rarely you decided whether to pay to prevent the other person from getting money—harming.

To measure precision, we examined how consistently participants chose who got what. For instance, if you would forgo $46 to give your friend $37, it would be inconsistent to later keep $31 for yourself rather than give your friend $37. Why pass up a large amount only to keep a smaller sum? If precise variables for welfare tradeoffs create desires to help or harm, we predicted that people would make few inconsistent decisions. Were we correct?
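Before turning to the results, here is a minimal sketch, in Python, of how a single welfare tradeoff ratio could generate choices like these and how inconsistent pairs of choices can be flagged. This is our illustration, not the authors’ materials: the decision rule, the WTR value, and the variable names are assumptions (the dollar amounts are reused from the example above).

```python
import random

# Minimal sketch (not the authors' code): a single welfare tradeoff
# ratio (WTR) produces one clean switch point in the task described
# above. All names and numbers are illustrative.

def choose(self_amount, other_amount, wtr):
    """Keep the money whenever your own payoff exceeds the other
    person's payoff weighted by your WTR toward them."""
    return "keep" if self_amount > wtr * other_amount else "give"

def count_inconsistencies(decisions):
    """Flag pairs of choices that violate switch-point logic: forgoing
    a larger self amount to give the other person $X, yet keeping a
    smaller self amount rather than give the same $X."""
    errors = 0
    for self_a, other_a, choice_a in decisions:
        for self_b, other_b, choice_b in decisions:
            if (other_a == other_b and choice_a == "give"
                    and choice_b == "keep" and self_b < self_a):
                errors += 1
    return errors

# The example from the text: the friend stands to gain $37 while the
# decision-maker's own payoff varies across trials.
offers = [54, 46, 39, 31]
wtr = 1.1  # hypothetical WTR: weights the friend's welfare just above one's own

decisions = [(s, 37, choose(s, 37, wtr)) for s in offers]
print(decisions)  # keeps at $54 and $46, gives at $39 and $31: one switch point
print("inconsistencies:", count_inconsistencies(decisions))  # 0

# A random responder has no switch point and is usually flagged.
random_choices = [(s, 37, random.choice(["keep", "give"])) for s in offers]
print("random inconsistencies:", count_inconsistencies(random_choices))
```

The point of the sketch: one stable WTR implies a single switch point, so no pair of choices can be inconsistent, whereas random responding almost always produces flaggable pairs.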

First, we studied university students in the United States and Argentina. As suspected, they were very consistent when deciding to help or harm. We measured consistency in two ways. One way was strict, requiring that people make no inconsistent choices at all in a set of many decisions. On this measure, if people were responding randomly, then they would be consistent only 1% of the time. In fact, our students were consistent 70% of the time or more. Our second measure was forgiving and did not require perfection. Here, random choices would lead to consistency about 70% of the time. In fact, on this measure, students were consistent 94% of the time or more.

We also found that people were more generous with friends than acquaintances. This didn’t surprise us. But it shows the students understood our task and took it seriously.

We found what we expected. Yet students are strange. They have undergone years of formal schooling in math. They live in advanced, industrial democracies. Perhaps something about their evolutionarily unprecedented lives caused them to be so precise—rather than, as we hypothesized, a universal psychology for making welfare tradeoffs.

To find out, we conducted similar studies among small-scale communities of people who forage and farm: the Shuar of the Ecuadorian Amazon and the Tsimane of the Bolivian Amazon. Did we find similar results among people who lead very different lives from university students?

Yes. On our strict measure of consistency, Shuar participants were between 25% and 41% consistent. This is lower than among students but to be expected: we gave members of the Shuar more decisions, making the task much harder. (Had they responded randomly, they would have been consistent a mere 0.05% of the time; recall that for students the number was 1%.) On the more forgiving measure, members of the Shuar were consistent 85% of the time (random responding would have been consistent 68% of the time).

Unlike the students, Shuar participants were not more or less likely to help or harm different categories of people—whether siblings, friends, or acquaintances. This surprised us but it could make sense given recent violence in the area; for protection, members of the Shuar might have been interested in shoring up connections even with distant associates.

Finally, we studied members of the Tsimane. The method we used with them did not allow us to calculate consistency. But they had no problem making many tradeoffs between themselves and other people. And like the students, the Tsimane participants were more generous with close others than distant others (for instance, community members versus outsiders).

Altogether, in three of three tests of quantitative consistency, university students from the US and Argentina and members of the Shuar made tradeoffs with precision. Members of the Tsimane found similar decisions easy and intuitive.

Desires to help or harm appear to be generated by precise variables in the mind.

Read the article: Delton, A.W., Jaeggi, A.V., Lim, J., Sznycer, D., Gurven, M., Robertson, T.E., Sugiyama, L.S., Cosmides, L., & Tooby, J. (in press). Cognitive foundations for helping and harming others: making welfare tradeoffs in industrialized and small-scale societies. Evolution and Human Behavior.

High fidelity: How forager myth transmission rules scaffold faithful transmission of accumulated cultural knowledge

– by Michelle Scalise Sugiyama

For most of our species’ existence, culturally shared knowledge has been stored in memory and transmitted orally. This presents an impediment to the emergence of cumulative culture: how to encode, retrieve, and transmit accumulated knowledge in a portable, readily accessible format that resists corruption. Ethnologists have long observed that, in hunter-gatherer cultures, narrative is used for this purpose. Widely characterized as teachings by their Indigenous proprietors, forager oral traditions are known to encode a broad range of ecological knowledge. However, this strategy substitutes one memorization task for another: if a story is misremembered, the knowledge it encodes faces the same fate. Thus, the effectiveness of storytelling as an information management system hinges on the production of high-fidelity copies from telling to telling, raising the question of how this is accomplished.

Research on the formal properties of oral narrative, both prose and verse, shows that these traditions are maintained through the application of multiple mnemonic strategies, many of which exploit cognitive biases. Formal and genre constraints further facilitate recall by limiting the performer’s options as a tale or poem is being unraveled from memory. To take a modern example, the coarse subject matter, humorous tone, laconic style, distinctive meter, and AABBA rhyme scheme of the limerick sharply delimit the sentiment, syllables, and words that can follow the line, “There once was a man from Nantucket.” However, research indicates that exploitation of cognitive biases does not guarantee full retention. A series of four experiments using folk stories found that subjects recalled minimally counterintuitive items better than mundane items in the stories. If these findings are representative of oral story transmission in general, we would expect “mundane” content (i.e., generalizable knowledge) to be lost over time, but this prediction is belied by the plethora of ecological knowledge encoded in forager narrative. How do oral cultures manage this?

Our study tested the hypothesis that, to meet the challenge of high-fidelity replication, foraging peoples have developed myth transmission rules. Our inspiration was the following description of Klamath and Modoc storytelling:

“Myth narration occurred principally on winter nights and informal sanctions prohibited myth narration at other times. For example, the Modoc believed that telling myths during the day would cause one to be bitten by a rattlesnake. . . . Klamath myth narration ideally ceased at the end of winter. . . . telling myths after this time would purportedly delay the long-awaited arrival of spring. . . . Only adults were permitted to narrate. . . . However, participation as a listener was unrestricted and generally included the entire winter household. . . . Myths preferably were not mixed with other oral tradition forms. . . . Narrators were not permitted to deviate greatly from local versions. If narrators diverged excessively, listeners would interrupt and engage in debate until the correct version was decided upon.” (306-307)

These restrictions limit myth narration to winter nights, confining transmission to large blocks of leisure time when people are gathered together. This practice minimizes distractions, enabling listeners to give full attention to the story, and “copies” the story to several minds simultaneously. Adult-only narration increases the probability that myths are told by persons who know them thoroughly, while mixed-age audience composition ensures that older generations are present to check for accuracy and younger generations are present to learn the myths. Finally, fear of negative sanctions discourages people from breaking these rules. Collectively, these restrictions increase the chances that the “right” version of a myth—and the knowledge it encodes—gets copied to the next generation.

To test our hypothesis, we searched the forager ethnographic record for descriptions of oral storytelling, which we analyzed for the presence of eight rule types: (1) transmission by the most proficient storytellers, (2) under low-distraction conditions, with (3) multiple individuals and (4) generations in attendance, and the application of measures that (5) prevent, identify, and/or correct mistakes, (6) maintain audience attention, (7) negatively sanction rule violations, and/or (8) incentivize rule compliance. Although our sample was heavily biased toward North American foragers, we found descriptions for 80 different cultures, distributed across six continents and diverse biomes.

All of the predicted rule types were present on at least three continents, and seven types were present on at least four. Myth recitation was largely the prerogative of older adults and preferentially occurred during periods of low economic activity. Most tellings occurred in the context of formal or informal social gatherings, with mixed-age audiences the norm. Rules regulating narrator performance occurred in 50 cultures, and included the use of prompting (e.g., call-and-response), repetition, song, and other forms of ostensive communication to engage and sustain audience attention. Listeners, in turn, were commonly expected to signal periodically that they were still awake, and to interrupt and correct the narrator if mistakes were made. Evidence of sanctions was limited to Africa and the Americas, and consisted largely of a belief that misfortune would follow rule transgressions.

These findings point to additional factors at play in the emergence of cumulative culture. Symbolic behaviors (e.g., myth, song, dance, visual art, games, names) and rules surrounding their performance have been largely unexplored as systems that support high-fidelity encoding and transmission of generalizable knowledge. Re-conceptualizing these behaviors as information technologies may help us better understand how evolved cognitive capacities, ecological constraints, and human inventions interact to produce the ratchets that make cumulative culture possible.

Read the original article: Scalise Sugiyama, M., & Reilly, K.J. (in press). Cross-cultural forager myth transmission rules: implications for the emergence of cumulative culture. Evolution and Human Behavior.

Do Harsh Environments Trigger Early Puberty? Using Historical Data to Test Evolutionary Hypotheses

– by Tony Volk

In 1993, Jay Belsky and his colleagues noticed that girls who lacked fathers and/or grew up in harsher environments also seemed to reach puberty early. In one of the first evolutionary developmental hypotheses since Bowlby, they proposed that girls might mature faster in harsh environments as a way of reproducing before dying in those environments. These sorts of trade-offs are the domain of life history theory (LHT).

Psychologists have been studying LHT for 30 years, and recent summaries of the pubertal LHT data showed a small, but statistically significant, relationship between growing up in a harsh environment and early puberty (for both boys and girls). While this work relied on correlational data (which cannot distinguish cause from effect), it nevertheless seemed like a promising example of a successful evolutionary developmental hypothesis. However, there was a problem. Data from evolutionary anthropologists, who mostly looked at poorer countries, didn’t support this LHT pubertal relationship. A survey of evolutionary researchers showed that this was one of the biggest points of disagreement between evolutionary psychologists and anthropologists. I decided to try to break the tie by looking at historical (and hunter-gatherer) data.

Why does historical behavior matter if we’re interested in explaining behavior today? The reality of evolution is that it works in a forward direction. It solves today’s problems tomorrow by filtering which genes get passed into future generations. Thus, in order to understand the adaptations we have today, we have to look into the past to understand what problems they solved for our ancestors. For example, we crave sugar and salt today not because that’s adaptive in the modern world (it’s not!), but because it was adaptive in the past when those valuable resources were scarce. This is important because modern environments pose many new challenges while eliminating many older challenges. Similarly, modern hunter-gatherers (who are not necessarily the same as past hunter-gatherers) can shed some light on the kinds of challenges and opportunities that humans may have faced in the past, when most people lived as hunter-gatherers. To piece together the best picture of what harsh environments had to do with LHT pubertal acceleration in our evolutionary past, I gathered a variety of cross-cultural data: historical texts and records, skeletal remains, forensic and medical science, archaeological artifacts, DNA lineages, and hunter-gatherer data.

To start with, the past experience of “harsh” was very different. In earlier research I had shown that almost half of all past humans died before puberty. That’s seriously harsh! Growing up, those who survived would have regularly witnessed others dying of disease, hunger, or violence. In contrast, children in developed countries have a 1-2% chance of dying before puberty. That’s a lot lower! This reduction, which I view as perhaps humanity’s greatest achievement, was a challenge for psychologists who wanted to measure harsh environments in modern, developed countries where disease, hunger, and violent deaths are far less common. Indeed, most psychologists use indirect cues that range from father absence (as a proxy for low paternal care), to moving frequently (a source of uncertainty), to not eating out in nice places or having old sneakers (indicating low resources). The reality is that these modern cues just don’t map onto the highly visible and valid cues of the past. What’s more, in the past, harsher environments invariably meant fewer resources. The historical poor had no free schooling, medicine, housing, or food. Past children born in harsher environments were more likely to die of starvation, disease, or violence: three closely intertwined causes of mortality.

So how could a child in harsher environments speed up their puberty? Puberty costs hundreds of thousands of calories. Preparing the female body for pregnancy and delivery costs hundreds of thousands more, with expensive lactation to follow. In the face of war, poverty, and/or disease, where did these extra calories come from? To paraphrase an old commercial, “Where’s the beef?” The reality was that there wasn’t any extra food for the historically poor and/or powerless. They could trade off adult size for earlier development, but that would only put them at risk for worse pregnancy and infant outcomes, as well as at a severe competitive disadvantage against any adults who did not sacrifice ultimate growth for speed. In contrast, in the past, the rich and powerful could afford extra food, health advantages, and better security. In essence, they could afford pubertally accelerated life histories. So did they?

Yes, they did. Faced with less harsh (and less unpredictable) environments, historical and hunter-gatherer elites accelerated their growth and menarche, reproduced earlier thanks to greater calories and looser social rules, enjoyed increased fertility due to energetic and behavioral choices, and co-opted other adults to care for their larger broods (by paying or enslaving them). In women, perhaps the clearest example is the wealthy’s use of wet nurses. This saved historically wealthy mothers calories and allowed for shorter interbirth intervals while imposing energetic and reproductive costs on the poorer wet nurses. In men, history shows that wealth was often translated into polygyny, sometimes achieving incredible levels of reproductive success that left a genetic footprint on entire populations (e.g., 1% of the world, or 8% of Asians, are descendants of Genghis Khan and his sons).

To summarize, historical (and hunter-gatherer) data consistently showed the opposite of the pattern we see in modern populations. Across history, cultures, and geography, the wealthy and powerful translated their status and resources into earlier puberty and increased fertility, while the poor lived in harsher environments and had delayed puberty and reduced fertility. This casts doubt on the causal feasibility of the modern patterns being an adaptation to the past. I believe that these data not only shed light on this one aspect of life history theory, but also show how important understanding historical data is for evolutionary research. We are adapted to solve yesterday’s problems, so for anyone interested in the evolutionary origins of human behavior today, my advice is to start by examining our evolutionary past.

Read the original article: Volk, A.A. (2023). Historical and hunter-gatherer perspectives on fast-slow life history strategies. Evolution and Human Behavior, 44(2), 99-109.

Around the World, Who Spends Time on Their Looks and Why? A 93-country study with 93,000 participants.

– by Marta Kowal & Laith Al-Shawaf

Around the world, who spends more time on appearance enhancement, and why?

Recently, psychologist Marta Kowal led an international team of hundreds of researchers in an attempt to answer these questions. The study took place in 93 countries and involved 93,158 participants.

Kowal and colleagues defined “enhancing one’s physical attractiveness” as one of the following activities: applying makeup or using other cosmetics, grooming one’s hair, putting effort into clothing style, caring for body hygiene for the express purpose of enhancing attractiveness, exercising, following a specific diet, or any other activity geared toward improving one’s physical attractiveness.

The results show that 99% of the sample spent more than 10 minutes a day caring for their physical appearance. On average, people in the study spent a whopping four hours a day enhancing their appearance.

Kowal and her colleagues found that beauty-enhancing behavior varied across both genders and countries. On average, women spent 23 minutes more than men improving their physical appearance per day. The five countries where improving one’s looks was the most time-consuming activity were Tunisia, Thailand, Ghana, Morocco, and Nigeria. The five countries where people spent the least time improving their looks were Nepal, Switzerland, Finland, Denmark, and Norway.

Kowal and her team also tested which country- and individual-level variables most strongly predicted beauty-enhancing behaviors. The results show that time spent on social media was the strongest predictor of time spent improving one’s looks. (But keep in mind that due to the correlational nature of the data, all the regular caveats about causality apply.)

The second strongest predictor was belief in traditional gender roles. The more someone adhered to traditional gender roles, the more time they spent caring for their physical appearance. Interestingly, this finding applied to both women and men.

Romantic relationship status emerged as another important predictor. The study found that people currently dating spent the most time enhancing their attractiveness. Self-declared singles devoted far less time to such activities, as did married individuals.

Finally, age was also an important predictor. In this 93-country study, the youngest and the oldest participants devoted the largest amount of time to enhancing their appearance, with middle-aged people being the least prone to engaging in such behaviors. However, the authors point out that there’s a lot that we still don’t know—is it the case that the youngest and the oldest people have more free time than busy middle-aged individuals, who are preoccupied with their children, professional careers, and aging parents? Or are other factors driving the results? More research is needed to answer this and other follow-up questions.

Overall, the researchers found that attractiveness-enhancing behaviors were universal in their 93-country sample, with 99% of people spending more than 10 minutes per day. Other key study takeaways include the finding that time spent on social media was the number one predictor of time invested in enhancing one’s looks, and the fact that the predictors of attractiveness-enhancing behaviors were largely the same across countries.

(Note: this post originally appeared on Laith Al-Shawaf’s Psychology Today blog.)

Read the original article: Kowal, M., and 187 co-authors. (2023). Predictors of enhancing human physical attractiveness: data from 93 countries. Evolution and Human Behavior, 43(6), 455-474.

What do evolutionary researchers really believe about human psychology and behavior?

– by Daniel Kruger, Maryanne Fisher, & Catherine Salmon

Research in evolutionary psychology attracts considerable attention, from both enthusiasts and critics. You might ask, why study what evolutionary-minded researchers believe? Why would anyone be interested in knowing such details? There are several important reasons. For one, we repeatedly see articles in journals, as well as in the popular press, that misrepresent or misconstrue what most evolutionary scholars actually believe. As a result, many evolutionary researchers have devoted considerable effort to pointing out errors in people’s conceptions of what an evolutionary approach to human behavior entails, and they seem to end up having to do so repeatedly.

Second, those who utilize an evolutionary perspective in their research are often viewed uniformly by those who use different approaches. The field itself is not monolithic in belief: there are competing theoretical models, and phenomena that are accepted to greater and lesser degrees. There have even been debates over what the focus of evolutionary research should be: studying actual behavior, counting offspring, or identifying design features of the mind. Several topics studied under an evolutionary umbrella are contentious or controversial, both within and outside the field. The perception of homogeneity by others should not be surprising, as it is well established that members of an in-group see the uniqueness of individual members while ignoring the variability across members of other groups (the “outgroup homogeneity effect”).

Third, evolutionary theory continually advances, and different models or beliefs may rise or fall in popularity over time. Assessing beliefs at multiple points in time would allow us to see how theories become established and how beliefs change, or become more sophisticated, over time. Documenting the heterogeneity in scholars’ views allows us to clarify the level of belief on specific topics, as well as demonstrate the overall variability in beliefs within the field.

We investigated the extent of belief in several key and contested aspects of human psychology and behavior in a broad sample of nearly 600 evolutionary-informed scholars. This study was part of a larger project, the Survey of Evolutionary Scholars on the State of Human Evolutionary Science, which is an international collaboration to assess the state of the field. Results indicate there are both core beliefs shared among evolutionary scholars, as well as phenomena accepted by varying proportions of scholars. The misperception that everyone who approaches the study of human behavior from an evolutionary perspective holds the same views was challenged by variation in agreement across items. There are also differences in the prevalence of beliefs between those trained in Anthropology and Psychology.

Nearly all participants believed that developmental environments substantially shape human adult psychology and behavior, refuting accusations of genetic determinism. Nearly all also believed that there are differences in human psychology and behavior between the sexes resulting from sexual selection, and that there are individual differences in human psychology and behavior resulting from different genotypes. These concepts are currently controversial in mainstream social science.

About three-quarters of participants believed that there are within-person differences in psychology and behavior across the menstrual cycle, an area which generates considerable debate and sometimes contradictory findings. Three-fifths believed that the human mind consists of domain-specific, context-sensitive modules, another focus of criticisms from outside and even from within the field. Psychologists were more likely to believe in menstrual cycle effects and mental modularity than Anthropologists were.

About half of participants believed that behavioral and cognitive aspects of human life history vary along a unified fast-slow continuum. Life histories represent investments in important aspects of survival and reproduction – growth and development, acquiring reproductive partners, taking care of offspring, etc. Biologists tend to examine differences in life histories between species, whereas psychological research has focused on life history variation among humans, especially in relation to experiences in childhood. Anthropologists tend to use biodemographic measures of life history (e.g., pubertal timing, age at first birth, number of children), whereas psychologists have used psychometric life history assessments (e.g., self-report surveys). Initial survey measures assumed that human life history varied along one continuous dimension, whereas more recent work has indicated that human life history may have multiple, though related, dimensions. Previous work has also shown that the scientific literature on life history is separated by field, suggesting that this work lacks a common focus.

Only about 40% of participants believed that group-level selection has substantially contributed to human evolution. Belief in group selection has waxed and waned over the decades, with the rise of increasingly complex models such as multi-level selection. Natural selection depends on variation, and academic progress is facilitated by tests of competing hypotheses from different theoretical models or research programs. The extent of specific beliefs may change over time, as research accumulates additional evidence to support or refute specific claims.

Also, it is important to note that as many as a third of participants answered “Don’t know” for some of the items. Participants commented on these topics in open-ended items, some saying that they really did not know enough about an area, others noting they had complex beliefs or that their beliefs depended on the way a construct was defined. Some indicated that they had seen mixed support, and thus there was not enough evidence to decide either way.

Overall, the paper clarifies the actual positions of evolutionary researchers, which should help reduce some misunderstandings, and it shows that there are competing perspectives even among those who identify as evolutionists. Scientific progress is facilitated when critics have an accurate understanding and can direct arguments and research at the beliefs that are actually held.

Read the original paper: Kruger, D.J., Fisher, M.L., & Salmon, C. (2022). What do evolutionary researchers believe about human psychology and behavior? Evolution and Human Behavior, 44(1), 11-18.

The reputational costs of retaliation: Why withdrawing cooperation is better than punishing a non-reciprocator

– by Sakura Arai

Two-person cooperation is ubiquitous in human society. You and your roommate take turns cooking dinner. You water your neighbors’ garden while they are on vacation, and in return, they feed your pet while you are away. But what if your partner fails to return the favor?

Inflicting a cost on those who fail to reciprocate—punishment—is one solution. Think of fines for littering, parking violations, or overdue books. When three or more people cooperate to achieve a common goal and share the resulting benefits (such as clean streets and libraries), punishment does sustain cooperation. But punishing can be counterproductive when two people are trading favors—especially in a biological market where there is competition for good cooperative partners.

Imagine how you could “punish” your partner. You could serve spoiled food to the roommate who keeps “forgetting” to make dinner, or salt your neighbors’ garden when they fail to feed your pet. But would these malicious actions change their minds and encourage them to start reciprocating again? To make matters worse, other people may think that you are a bad cooperator—and vengeful too. They may not want you as a cooperative partner.

There is an alternative to punishing: you can simply withdraw cooperation from your non-reciprocating partner. This communicates the same point—you don’t like the way you were treated. Plus, withdrawing may save face. It’s possible that their excuses are true and they actually couldn’t return the favor due to injury, mistakes, or bad luck. By conveying the message without directly harming your partner, you may appear forgiving and even considerate.

In group cooperation, withdrawing cooperation has disadvantages that do not exist in two-person cooperation. Withdrawing cooperation from a free rider simultaneously penalizes members of the group who are good contributors. It may also entail abandoning the entire group project and the benefits that come with it. Neither is the case in two-person cooperation, especially when alternative partners are available.

In two-person cooperation, a non-reciprocator can be sanctioned by withdrawing cooperation or by punishing. And your reputation may suffer if you continue to do favors for partners who do not reciprocate. You may appear to be a pushover and easy to take advantage of. Then your partner will certainly keep exploiting you, and so might other people. In terms of reputation, you may be better off sanctioning than just continuing to help.

So, for your reputation, what should you do when your partner fails to reciprocate? We asked over 400 US residents, as third-party observers, what they would think of someone who took one of three responses: punish, withdraw cooperation, or neither (keep cooperating).

Here’s a short scenario we presented to participants: Imagine two people, Alex and Casey, interacting with each other through an economic game. There are two roles in this game: giver and receiver. The giver is given $5 and then decides either to share the $5 with the receiver or to take $5 from the receiver. Alex and Casey played this game for three rounds (without knowing in advance how many rounds there would be). In round 1, Alex was the giver and Casey was the receiver; Alex gave Casey $5. In round 2, Casey became the giver and gave Alex $0. That is, Alex cooperated with Casey in round 1, but Casey did not reciprocate in round 2.

In round 3, Alex became the giver again. Participants learned that Alex made one of three responses:

  • Punish: Alex took $5 from Casey
  • Withdraw cooperation: Alex gave $0 to Casey
  • No sanction (keep cooperating): Alex gave $5 to Casey.

We then asked participants to rate Alex on 24 adjectives: cooperative, generous, aggressive, vengeful, incompetent, gullible, etc. The goal was to see what reputations (plural intended) they inferred from each response. Did people see Alex as mean and vengeful? How generous and trustworthy did Alex appear? Did she seem gullible or exploitable?

Withdrawing cooperation always had better reputational consequences than punishing. When punishing Casey’s failure to reciprocate, Alex was evaluated as less cooperative—an average of related adjectives such as generous, trustworthy, likable, considerate—and more vengeful—mean, aggressive, unforgiving—than when withdrawing. Moreover, people found the punisher less preferable as a potential cooperation partner than the withdrawer.

What inferences did people make when Alex did not sanction Casey at all? Participants thought she was highly cooperative and desirable as a partner. But she was also seen as easier to exploit—more exploitable, gullible, incompetent—than when she withdrew cooperation or engaged in restorative punishment (thus recouping her investment in Casey). These two negative sanctions were equally effective ways for Alex to enhance her reputation as difficult to exploit.

Restorative and costly punishment were different, however. In a follow-up study, punishment was costly: To inflict a $5 cost on Casey, Alex had to pay $5. This made her seem as exploitable as the non-sanctioner. Both lost an extra $5: The punisher paid to retaliate and the non-sanctioner paid to keep helping Casey, a partner who took without giving in return. People inferred Alex was easier to take advantage of in both cases, compared to simply withdrawing further cooperation.
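To spell out the arithmetic behind these comparisons, here is a small sketch of Alex’s and Casey’s round-3 outcomes under each response. This is our illustration, not the paper’s payoff accounting: it assumes the giver keeps the $5 endowment unless it is shared, that restorative “taking” transfers $5 from receiver to giver, and that costly punishment charges Alex $5 to remove $5 from Casey.

```python
# Round-3 outcomes under the simplest reading of the game's payoffs.
# Hypothetical sketch; the paper's exact accounting may differ.
# Assumptions: the giver starts the round with $5; "share" hands that
# $5 to the receiver; restorative "take" moves $5 from the receiver to
# the giver; costly punishment (follow-up study) charges the giver $5
# to remove $5 from the receiver.

responses = {
    # response:                (change for Alex, change for Casey)
    "keep cooperating":        (0,   +5),  # Alex hands over her $5
    "withdraw cooperation":    (+5,   0),  # Alex keeps her $5
    "restorative punishment":  (+10, -5),  # keeps $5 AND takes $5 back
    "costly punishment":       (0,   -5),  # keeps $5 but pays $5 to punish
}

for response, (alex, casey) in responses.items():
    print(f"{response:24s} Alex {alex:+d}, Casey {casey:+d}")

# Relative to withdrawing, both costly punishment and continued
# cooperation leave Alex $5 worse off: the "extra $5" that made her
# look as exploitable as the non-sanctioner.
```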

A reputation as more difficult to exploit may prevent others from mistreating you. But this reputation can be gained by withdrawing cooperation or restorative punishment: negative sanctions that do not entail extra costs for you. And both kinds of punishment—restorative and costly—produce reputational costs: Compared to withdrawing cooperation, punishing makes you appear more vengeful, less cooperative, and less desirable as a partner. So, in two-person cooperation, withdrawing cooperation may be the best option when your partner fails to return the favor. Our studies show that investigating reputations—multiple aspects of reputation—can shed new light on the functions of motivations to sanction those who don’t give back.

Read the article here: Arai, S., Tooby, J., & Cosmides, L. (2023). Why punish cheaters? Those who withdraw cooperation enjoy better reputations than punishers, but both are viewed as difficult to exploit. Evolution and Human Behavior, 44(1), 50-59.

Beware the foe who feels no pain

– by Wilson Merrell, PhD Candidate in Social Psychology at the University of Michigan

Insensitivity to pain is a valuable asset when it comes to physical altercations. From ancestral conflict between warring coalitions to boxers in a 12-round fight, individuals who are relatively insensitive to pain are better able to persist, and more likely to succeed, in their respective conflicts compared to individuals who are sensitive to pain. What are the implications of such a tactical advantage when it comes to sizing up potential antagonists? This was the primary question my co-authors and I set out to answer in our recent paper published in Evolution and Human Behavior. Theoretically, we turned to the Formidability Representation Hypothesis, which posits that the various assets and liabilities a target possesses are summarized into a single size- and strength-based representation. Because assessments of size and strength have historically and phylogenetically been primary determinants of fighting ability, this summary facilitates decisions about whether to fight or flee in the face of physical conflict.

This hypothesis has a host of empirical support. For example, people armed with a weapon, something that would make them a more formidable adversary, are represented as larger and stronger than people who are unarmed. Conversely, someone suffering from a broken leg is a less formidable adversary and, correspondingly, is represented as smaller and weaker than a fully healthy person. Looking at pain insensitivity through the lens of the Formidability Representation Hypothesis leads to a relatively simple prediction: people who are insensitive to pain, and thus more able to persist and succeed in physical conflict, will be represented as physically larger and stronger than people who are sensitive to pain.

Testing the link between pain insensitivity and physical size

We tested this causal link in our first two studies with a sample of just over 650 U.S.-based participants from online crowd work platforms. Participants read about a man who was either insensitive to pain (someone who doesn’t feel pain strongly when he gets a shot at the doctor’s office, stubs his toe, or has a paper cut) or sensitive to pain (someone who feels pain strongly during all of those events). We next asked them how physically formidable they thought this man was—they judged his height, muscularity, and overall size. As we expected, the man who was insensitive to pain was judged to be taller, more muscular, and overall larger than the man who was sensitive to pain. In line with other predictions derived from the Formidability Representation Hypothesis related to mental representations of potential foes, the pain-insensitive man was also deemed to be more aggressive, higher status, and a bigger risk-taker than the pain-sensitive man.

Building on this foundation, we next tested the reverse relationship between pain sensitivity and physical characteristics: would more formidable people be judged as more insensitive to pain? We reasoned the answer would be “yes” given the tendency to minimize costly errors—confronted with a formidable foe, it is safer to erroneously assume they are insensitive to pain than to erroneously assume they are sensitive to pain. Our final study tested this reasoning with an additional U.S. sample of around 300 people from an online platform. Here, participants viewed a man holding an object that could be used as a weapon (like a kitchen knife—enhancing his formidability), or an object that could not be construed as a weapon (like a spatula—decreasing his formidability). Men holding knives or garden shears were judged to feel less pain when they experienced things like hitting their head on a piece of furniture than men holding spatulas or watering cans. Independently, and in replication of previously demonstrated results, the former were also seen as angrier than the latter.

What’s next?

This intimate connection between pain insensitivity and physical formidability suggests a host of future theoretical directions. In cultures that face high levels of physical conflict, stoicism in the face of painful rituals may act as a formidability signal that bolsters one’s reputation, especially if permanent physical evidence of the ritual exists. Our results may also provide an additional level of explanation that could help advance understanding of contemporary societal inequities. For instance, situations with high levels of group-based inequality, like the healthcare and criminal justice systems, are also social contexts where individuals in power are often tasked with making decisions based on assessments of pain, size, and threat. Consider that Black men in the United States are stereotyped as larger (exacerbating excessive use of force from police) and more insensitive to pain (leading to systematic undertreatment for pain) than White men. Our findings suggest that these harmful judgments about pain insensitivity and size could compound one another. Future research would benefit from examining mental representations of pain insensitivity, physical size, and perceived threat in these specific contexts to mitigate harmful social outcomes.

Read the original paper: Fessler, D.M.T., Merrell, W., Holbrook, C., & Ackerman, J. (2023). Beware the foe who feels no pain: associations between relative formidability and pain sensitivity in three U.S. online studies. Evolution and Human Behavior, 44(1), 1-10.

(Photo credit: Hermes Rivera)

Preschoolers Consider Opportunity Cost and Familiarity When Helping the Victim of a Moral Transgression

– by Kristy J. J. Lee and Peipei Setoh

Kindness and compassion are often espoused as virtues to be cultivated from an early age. Yet, from the benefactor’s perspective, helpful behavior incurs a cost to the self, namely the forgone opportunity to put one’s resources to alternative uses that improve personal wellbeing. For example, donating money to a charity entails forgoing the opportunity to buy a much-coveted personal item; volunteering at a food bank during the holidays entails forgoing the opportunity to rest and recharge or spend time with loved ones. While adults regularly think about opportunity cost, less is known about whether children similarly assess opportunity cost when deciding to help others. Do children help less when the opportunity cost of helping is high? Does the opportunity cost of helping matter less when the beneficiary is a familiar person?

To answer these questions, researchers at Nanyang Technological University conducted a study with 120 five- and six-year-olds in Singapore. The study examined children’s helping behavior toward the victim of a moral transgression. Children were randomly assigned to one of four conditions that varied on Cost of Helping (High-Cost/Low-Cost) and Victim Familiarity (Familiar Victim/Unfamiliar Victim).

In the High-Cost condition, children were promised an attractive reward of stickers if they completed a coloring task within time constraints. Therefore, the time and energy spent on helping the victim could instead be used to earn a reward from a productive task. In the Low-Cost condition, children were not required to complete any task. Helping the victim was not particularly costly because children had plenty of time and energy to spare. In the Familiar Victim condition, children interacted with the victim actress prior to the moral scenario. In the Unfamiliar Victim condition, children had no prior contact with the victim actress.

Next, children witnessed an actress destroy another actress’s tower of blocks and responded to the victim’s pleas for help in rebuilding her tower. Prompts ranged from generic expressions of dismay (“Oh no, my tower is destroyed…”) to increasingly explicit appeals for help (“Will you help me to rebuild the tower?”). Children’s helping behavior was then assessed. If a child helped to pick up the blocks and rebuild the tower, this counted as helping behavior. If a child refused outright to help or showed inaction despite the prompts, this counted as not helping.

Children helped most in the Low-Cost + Familiar Victim condition (86.67%), where there were no competing demands on their time and the victim was a familiar person, thereby increasing their motivation to help. Helping rates in the other conditions, where barriers to helping included high opportunity cost and/or a lack of familiarity with the victim, were substantially lower (High-Cost + Familiar Victim: 36.67%; High-Cost + Unfamiliar Victim: 46.67%; Low-Cost + Unfamiliar Victim: 63.33%). Non-helpers occasionally cited reasons such as “I’m busy!” or “I’m coloring!” Some children also expressed that they were only willing to help after completing the task at hand: “Later! I’ll help you after I finish coloring.” This points to children’s awareness that helping the victim could cost them the opportunity to complete their task within the time constraints and earn a reward.

As it turns out, children think about the same questions that adults often ask themselves when deciding to help others: What do I stand to lose? Can I expect something in return? Indeed, as observed in the present study, children’s decisions to help appear to be guided by two factors: “I’ll help… if I know you and have time to spare!”

Read the original paper: Lee, K. J. J., & Setoh, P. (2022). Early prosociality is conditional on opportunity cost and familiarity with the target. Evolution and Human Behavior, 44(1), 39–49. https://doi.org/10.1016/j.evolhumbehav.2022.10.003

Grandmaternal allomothering may include the prenatal period

– by Delaney Knorr

Evolutionary theory has long supported the idea that, among humans, allomothers are critical players in improving offspring fitness. Allomothers are kin, or less commonly non-kin, who help out the mother-offspring dyad. Often allomothers help by providing food, child care, or various forms of social support that increase offspring fitness. But from an evolutionary perspective, when do allomothers start helping the mother-child dyad? Much of the anthropological research has focused on weaning as a critical point of intervention for allomothers because this is a vulnerable life stage for the offspring. However, there are many vulnerable periods of development. Recently, looking to earlier life stages, scholars have shown that allomothers play a critical role in breastfeeding by teaching mothers how to properly breastfeed and emotionally supporting them to continue to do so. Additionally, even earlier, having allomothers around to emotionally support the mother during childbirth and physically aid in catching the baby is critical for offspring fitness. What about even before the birth: do allomothers assist during pregnancy? We offer one of the first perspectives to consider a prenatal allomaternal effect.

Pregnancy is a critical time of development when the mother and fetus are vulnerable to stress. Indeed, the developmental origins of health and disease framework has established connections between stressors and stress experienced during pregnancy, adverse birth outcomes, and long-term health disparities. Thus, evolutionary biologists may expect allomothers to start investing in mother-child well-being in utero for these benefits to offspring fitness.

In this paper, we take an evolutionary perspective to ask how allomothers may help during pregnancy. We focus on (soon-to-be) grandmothers, as they have (1) clear inclusive fitness benefits, (2) reproductive experiences related to those of the (soon-to-be) mother, and (3) greater reproductive expertise than other kin categories. We also focus on three relationship characteristics: geographic proximity, emotional social support, and communication levels. Geographic proximity has often been shown to be an important relationship characteristic in previous studies on allomothers. Those studies were usually conducted in societies where there was no way to provide most kinds of support without being geographically close by. Thus, geographic proximity is a good proxy for help that is done in person, like chores or food provisioning. However, our study investigates kinds of support that need not be delivered in person today, such as communication and emotional social support.

Our study makes use of survey data from 216 pregnant women living in Southern California. These women all identified as Latina (a diverse ethnic category referring to Latin American heritage), were of various socio-economic backgrounds (e.g., food security and education levels), and represented the full range of trimesters. Latinas living in the U.S. tend to live in three-generation homes more commonly than other groups. Additionally, the cross-cultural importance of family among this group is a helpful context in which to ask about experiences of family. We discuss possible cultural explanations in the full paper.

We asked how each grandmother influenced three independent measures of maternal mental health: depression, anxiety, and pregnancy-related anxiety. Each model tested a different relationship characteristic for both the maternal grandmother and the paternal grandmother, while controlling for the effects of the father’s relationship characteristics.
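As a rough sketch of this analytic setup (one model per mental health outcome and relationship characteristic, with the father’s corresponding characteristic as a control), the structure might look like the following in Python. The file name, column names, and choice of ordinary least squares here are our assumptions, not the authors’ code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical sketch of the model structure described above; the
# study's actual variables, covariates, and estimator may differ.
df = pd.read_csv("pregnancy_survey.csv")  # hypothetical data file

# One model per (mental health outcome x relationship characteristic):
# here, depression regressed on emotional support from the maternal
# grandmother (mgm) and paternal grandmother (pgm), controlling for
# emotional support from the father.
model = smf.ols(
    "depression ~ mgm_emotional_support + pgm_emotional_support"
    " + father_emotional_support",
    data=df,
).fit()
print(model.summary())
```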

Our findings show that greater levels of emotional support from, and communication with, maternal grandmothers were significantly associated with lower levels of depression, above and beyond the effects of fathers in the same categories. While our study only looks at maternal mental health, other studies have found that depression is tied to low birth weight and preterm birth, which in turn have been associated with various morbidities and mortality.

We would expect from a developmental origins of health and disease perspective that both grandmothers may be interested in offsetting maternal stress, for the sake of the offspring. Instead, we find (consistent with other allomother literature) that maternal grandmothers are statistically more influential allomothers than paternal grandmothers. This may be due to the long-term mother/daughter relationships that are distinct from mother-in-law/daughter-in-law ones, or perhaps due to other evolutionary explanations discussed in the full paper.

Our results also add to the growing evidence that geographic proximity itself is not always a critical component of grandmaternal allomothering. This finding also suggests some feasible implications for public health. Funding call minutes, phones, and internet connections to increase a family’s ability to stay in contact with each other, when living across borders or when visitation is otherwise not possible, could positively contribute to perinatal mental health.

We suggest that grandmaternal allomothering includes the prenatal period. We observe that social support and communication with maternal but not paternal grandmothers are associated with mental health benefits for mothers. More work is needed to connect this prenatal grandmaternal influence to offspring postnatal outcomes. By including measures of grandmaternal instrumental support and infant outcomes, future work could also further our understanding of grandmaternal involvement in the context of fetal programming.

Read the original paper: Knorr and Fox (2023). An evolutionary perspective on the association between grandmother-mother relationships and maternal mental health among a cohort of pregnant Latina women. Evolution and Human Behavior, 44(1), 30-38.