Some “Psychological Weapons” Infants and Young Children Have to Get Others to Love Them

– by Carlos Hernández Blasi and David F. Bjorklund

It would not be much of an exaggeration to say that a major theme in evolution is “save the children,” especially for a slow-developing species that invests much in few offspring, such as humans. However, although a child’s survival is in the best interest of both parents and children, ancient parents could not be indiscriminate about how much care and attention they gave to any one child. For most of our species’ existence, infant mortality rates were high, with nearly half of all children dying before reaching puberty. It is therefore important for infants and young children to endear themselves to adults, particularly their parents, to ensure they get the care they need to survive and thrive. To do so, infants and children have developed methods of communication that change over the course of development, from cries and facial expressions to more sophisticated vocal and cognitive cues. In short, children have evolved a set of “psychological weapons” to attract adults to them and increase the chances that they will receive the care they need to grow up and become adult members of their community.

Infants enter the world with some perceptual, motor, and communication systems that serve to promote their interaction with the people who care for them. For example, although newborns’ eyesight is poor, they see most clearly objects that are about 10 inches in front of them, which is about the distance between mother’s and baby’s faces when nursing. Infants’ cries convey their physical and emotional states to adults, and they come into the world with a number of reflexes that, in the right contexts, promote closeness, such as the sucking and grasping reflexes. Babies also possess some physical facial characteristics that are very appealing to adults, even to those adults who profess not to like babies all that much. These include a large head relative to body size, large eyes relative to head size, a flat nose, high forehead, and rounded cheeks. The Nobel laureate Konrad Lorenz referred to these features as Kindchenschema, or baby schema. Nearly 80 years of research has shown that adults and even children generally view babies who possess high levels of “baby schema” as cute and respond affectionately toward them. Thus, although in some sense infants are perceptually, motorically, and cognitively immature, from an evolutionary perspective they can be seen as being quite smart, having some features that get adults to pay attention to them and perhaps to care for and love them.

However, babies grow up, and they lose the special “cuteness” afforded by the baby schema. Yet, compared to other mammals, human children are dependent on their caregivers for a remarkably long time, and it would make sense for natural selection to provide children (and adults) with other mechanisms that promote care. This has been the focus of our research for more than a decade, examining some features of early childhood (essentially the preschool age, between about two and six years) that may increase attention to and caring for children beyond infancy. Anthropologists tell us that this is the age when in many traditional societies children are weaned and start to spend more time with people outside the family. Children in this period of life are certainly more autonomous than they were as infants, but they are still unable to feed or otherwise fend for themselves. Might preschool-age children also have some “smart” features that, like the baby schema of infancy, promote their surviving and thriving?

In our first studies, we found that one potential “psychological weapon” preschool children use to keep adults tuned to (and informed about) them involves some of the often humorous things they say, reflecting a form of what we called cognitive immaturity. For example, when adults and older adolescents listened to a young child offering a magical or supernatural explanation, such as “The sun is not out today because it’s mad” or “The high mountains are for long walks, and the short mountains are for the short ones,” they viewed these statements not only as funny but as endearing, and as signals that these children still likely needed caring and support. However, not all expressions of cognitive immaturity were viewed positively by adults and older adolescents. When the same children made immature statements about more mundane topics, such as “I will remember all 20 cards!” (something typically beyond their cognitive skills) or “I couldn’t prevent looking into the box for a while, and I lost the treat!” (exhibiting their difficulty regulating their actions), adults and older adolescents did not react the same way they did to immature supernatural thinking. In fact, we found that children expressing immature natural thinking typically do not make a positive impression; rather, adults feel a bit overwhelmed, if not bothered, by this type of cognitive immaturity. In other studies, we found that the typically immature voices of preschool children, regardless of speech content, evoked a similar effect to children’s funny, supernatural thinking, triggering positive impressions in adults and adolescents generally and blocking negative feelings toward them. We also found that preschool children’s faces continue to prompt positive feelings in adults and adolescents, but faces were not as powerful as either children’s voices or their verbalized thoughts at conveying information about children’s intelligence or vulnerability.

Finally, in our most recent study, we found that, overall, when pitted against one another, young children’s voices prevail over young children’s thinking in conveying to adults both positive affect and reliable information about their degree of vulnerability. In contrast, young children’s thinking is apparently more relevant than their voices for informing potential caregivers about their intelligence level – but only when children verbalize magical or supernatural explanations – and for making negative impressions – but only when they verbalize more realistic or natural narrations. In sum, our studies show that, though young children are still highly dependent on others in terms of “nurture,” they are actually very smart by “nature,” displaying different, possibly evolved cues that keep them connected to those who can help them survive and thrive in their early development.

Read the original article: Hernández Blasi, C., Bjorklund, D. F., Agut, S., Nomdedeu, F. L., & Martínez, M. Á. (2024). Children’s evolved cues to promote caregiving: Are voices more powerful than thoughts in signaling young children’s attributes and needs to adults? Evolution and Human Behavior, 45(5), 106609.

The Perils of Group-Living

– by Robin Dunbar

Solving the problem of how to live in large, demographically stable groups is probably the single most important achievement of anthropoid primates, and especially so of humans. Group-living does not come for free. A variety of centrifugal forces constantly threaten to destabilise groups. One of these is the infamous public goods dilemma. Being willing to live in a group with others involves – necessitates, in fact – an implicit promise not to cheat on the deal. I have to allow you a fair share of the benefits, just as you have to allow me a fair share. However, as we all know, it always pays me to take a bit more than my fair share because, by doing so, I gain a modest but significant fitness advantage over you – at your expense, of course!

If access to benefits is a function of physical strength, then it will always pay those who can get away with it to use a little extra force to extract a few extra benefits. The issue for everyone else is how much exploitation should you be prepared to put up with as the price of living in the group – given that the opportunity cost (or, as economists used to call it with a lot more literary imagination, your regret) is to live alone and miss out on all the genuine benefits of group life?

The Norse world of medieval Iceland was the archetypal example of the problem. With no formal political structures (it was the Wild West of the early medieval period), it lacked judges and police to enforce good behaviour. The result was that violence was endemic and very disruptive. It spawned psychopaths with fearsome reputations. One such individual was Egil Skallagrimsson, who made himself very rich by shamelessly helping himself to a lot of other people’s land and property (and occasionally wives). If necessary, he simply killed you. Such behaviour often generated vendettas that dragged on for years. In one such case, a third of all the adult males in the community were killed in a vendetta that lasted a generation. Of 23 families, four lost all their males; only 11 survived without losing any. Analysis of data from hunter-gatherer societies indicates that the proportion of all deaths that are due to homicide increases linearly with living group size, such that in bands of just 50 people half of all deaths are due to homicide (Dunbar 2022). Why do small-scale societies put up with this?

In one sense, of course, they don’t. As I have shown (Dunbar 2022), what they do is introduce social institutions that allow them to manage violent behaviour, especially among the young males. These include marital arrangements that increase the number of people who can lean on badly behaved individuals, charismatic leaders (whose friendly advice we heed out of respect), communal feasts (where we bond) and, especially, men’s clubs (where boys who fall out are made to sit down together to make peace – without, by the way, actually talking about it, just by bonding).

This is all very well, of course, but it ignores the elephant in the room: bad behaviour pays. This raises the intriguing question of whether there is a significant selection factor favouring bad behaviour, one that promotes whatever genes might be involved despite the costs to the rest of the community.

Many studies have tried to determine whether violence pays by determining whether males who murder gain more wives and/or reproduce more often. Broadly speaking, they do. But, as the ecologist David Lack (of Lack’s Principle) reminded us back in the 1950s, pumping out babies doesn’t necessarily translate into loads of grandchildren. As often as not, having too many babies results in many dying, such that fitness is lower.

At this point, the medieval Icelanders come to our rescue, because they left us an amazing literary record of their daily lives – the family sagas (or histories) that recount in considerable detail the goings-on in the various communities and families, not to mention who married (or otherwise) whom and what offspring they had. Using these records, the Viking Age historian Anna Wallette (of Lund University, Sweden) and I have been able to place all the family pedigrees into a single interlinked database of over 1200 males, for a great many of whom we can trace their ancestry, siblings and descendants over 3-5 generations. We used a sample of 13 known killers (who had killed 1-19 other men) and 31 non-killers to test whether killers had higher fitness than socially matched non-murderers – not just in terms of their own descendants but also in terms of the fitness of their collateral male relatives. In other words, for the first time, we were able to test the hypothesis that violence pays by examining inclusive fitness.
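
To give a sense of what such a test involves, here is a toy sketch of how an inclusive-fitness tally might be computed from pedigree counts. The weighting convention and the numbers are our own illustrative assumptions, not the scoring used in the paper:

```python
# Toy illustration (a simplification, not the paper's exact scoring):
# tally an inclusive-fitness score from pedigree counts. Own children count
# fully; a brother's children count at half value, because relatedness to a
# niece or nephew (0.25) is half the relatedness to one's own child (0.5).

def inclusive_fitness(own_offspring, brothers_offspring):
    return own_offspring + 0.5 * sum(brothers_offspring)

# Hypothetical pedigree entries for a killer and a socially matched non-killer.
killer = inclusive_fitness(own_offspring=6, brothers_offspring=[4, 5])
non_killer = inclusive_fitness(own_offspring=3, brothers_offspring=[2, 1])
print(killer, non_killer)  # 10.5 vs 4.5
```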

The results (Dunbar & Wallette 2024) revealed that, despite having a 40% higher risk of being killed themselves than the average non-killer, killers had twice as many wives and offspring as matched non-murderers (men who were never recorded as murdering anyone) and nearly four times the inclusive fitness through their male siblings – provided they themselves survived to die in their own beds. More importantly, the killers’ brothers benefitted enormously: if the killer survived, the brothers’ inclusive fitness was around three times that of the brothers of a non-killer.

In other words, violence does pay. This does not, however, mean runaway selection in favour of ever more violent individuals. Too many violent individuals would cause the collapse of communal life. Since our capacity to survive and reproduce depends on the group, this would negatively impact our fitness. In the end, it is a balance of the costs and benefits. Humans seem to be especially good at finding social controls that allow us to live in unusually large groups – not because we are naturally altruistic, but because we are good at finding workable solutions. Viking Iceland offers us a glimpse of how bad it can become when the social controls are absent.

Read the original article: Dunbar, R.I.M., & Wallette, A. (2024). Are there fitness benefits to violence? The case of medieval Iceland. Evolution & Human Behavior, 45, 106614.

Time or Resources? For Allocare, It Depends on the Environment

– by Elic Weitzel, Kurt Wilson, & Rich Sosis (Image credit: John Shaver)

Alloparental care—investment in offspring that are not one’s own—takes many diverse forms. Alloparenting can look like an older sister watching over a younger brother while their parents are out. It can look like an aunt or uncle helping to pay for their nieces’ and nephews’ college education. It can look like a grandmother providing her grandchildren with food that she collected, or it can look like a teenage next-door neighbor being paid to babysit for an evening.

These various types of allocare all accomplish a key outcome, providing for another’s offspring, but they each have different costs and benefits. Some involve directly provisioning children with resources of some sort, such as food or money, which once given cannot readily be shared or regained. Giving away resources means that the alloparent had to pay first the cost of obtaining the resource and then the cost of not consuming or using it themselves. Other alloparenting activities involve expenditures of time rather than resources, such as supervising or playing with children. These forms of care do not require material resources to be transferred, but the opportunity costs of spending time in allocare can be consequential. Time spent supervising a group of children is time not spent doing anything else.

In thinking about these different forms of care, we theorized that care could be divided roughly into two types. The first is care that entails an additive cost structure, in which an alloparent must pay an equal cost for each additional child they care for. These children receive the benefits of care directly in proportion to the costs paid by the alloparent: a 1:1 ratio. Generally speaking, this most commonly reflects resource investment. For example, when a wealthy aunt pays for $10,000 worth of her nephew’s college education, that is $10,000 the aunt no longer has. The second type of care involves a declining marginal cost structure, in which each additional child cared for costs less than the previous one (at least up to a point). This is generally exemplified by time investments, such as when an older child cares for their younger sibling. If they were to care for two younger siblings simultaneously, they would certainly incur additional costs, but not twice the cost of supervising just one. The second sibling can be supervised more cheaply since the older sibling is already spending the time watching the first child. In contrast, if our hypothetical wealthy aunt were to cover two nephews’ college tuitions, she must pay the full amount again for the second.
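
As a rough sketch, the two cost structures can be written as simple functions of the number of children cared for. The functional forms and parameter values below are illustrative assumptions of ours, not those of the published model:

```python
# Illustrative cost functions (assumed forms and parameters, not the published model's).

def additive_cost(n_children, cost_per_child=1.0):
    """Resource investment: each additional child costs the same amount."""
    return cost_per_child * n_children

def declining_marginal_cost(n_children, first_child_cost=1.0, discount=0.5):
    """Time investment: each additional child costs less than the previous one."""
    return sum(first_child_cost * discount**i for i in range(n_children))

for n in range(1, 5):
    print(n, additive_cost(n), round(declining_marginal_cost(n), 2))
# Additive totals grow 1.0, 2.0, 3.0, 4.0; declining-marginal totals grow 1.0, 1.5, 1.75, 1.88.
```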

Children require some blend of time and resources invested in them to survive and thrive, but presumably there exist situations to which each of these types of allocare is better suited. To explore this, we constructed a type of computer simulation called an agent-based model. An agent-based model functions like a video game with many different characters, all programmed to do certain things. The outcome of the interaction of all the different agents making their own simple decisions becomes the focus of study, because these interactions can cause quite complex phenomena to emerge. In our model, we designed four types of characters, or agents, that each behave differently, following the care strategies and cost structures described above. One type does not alloparent at all, but only provides care to their own children with an additive cost structure, or resource investment. The second type also only provides care to their own children but does so in line with a time investment strategy characterized by declining marginal costs. The third and fourth types of agents both parent and alloparent, providing care characterized by additive and declining marginal cost structures, respectively.

In our model, we gave each of these agents a non-specific “currency,” which can be thought of either as time or resources, and let them go about their lives. These agents age, reproduce, care for children, and die. As they do, we track how much currency they keep, how much they spend, who they share that currency with, and—importantly—how many times they reproduce and how many of their children reach reproductive maturity. We evaluated the evolutionary success of each care strategy by looking at the number of children born to members of that agent type who survive to adulthood themselves.
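
For readers who have not worked with agent-based models, the sketch below shows what one highly simplified round of such a simulation might look like. The agent types mirror the four strategies described above, but the care rule, cost parameters, and endowments are illustrative assumptions on our part; the published model is considerably richer (agents age, reproduce, and die over many time steps):

```python
import random

# A stripped-down, illustrative round of an alloparenting simulation
# (not the published model; parameters and rules are assumptions).
STRATEGIES = ["parent_additive", "parent_declining",
              "allo_additive", "allo_declining"]

class Agent:
    def __init__(self, strategy, currency):
        self.strategy = strategy        # care strategy from STRATEGIES
        self.currency = currency        # generic time/resource budget
        self.children_raised = 0

    def care_cost(self, n_children):
        # Additive: constant cost per child; declining: each extra child is cheaper.
        if "additive" in self.strategy:
            return 1.0 * n_children
        return sum(0.5 ** i for i in range(n_children))

    def provide_care(self, own_children, others_children):
        n = own_children
        if self.strategy.startswith("allo"):
            n += others_children        # alloparents also care for others' children
        cost = self.care_cost(n)
        if cost <= self.currency:       # care succeeds only if the agent can pay for it
            self.currency -= cost
            self.children_raised += own_children

# Compare one round in an abundant vs. a scarce environment.
for endowment in (10.0, 2.0):
    agents = [Agent(s, endowment) for s in STRATEGIES]
    for a in agents:
        a.provide_care(own_children=2, others_children=random.randint(1, 3))
    print(endowment, [(a.strategy, a.children_raised, round(a.currency, 2)) for a in agents])
```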

What we found was that the cost structures of allocare matter differently in different socioecological situations. When we provided agents with abundant currency (think of a resource-rich environment), the two allocare strategies outperformed the two parenting-only strategies, producing an average of 4.75 surviving offspring versus the parenting-only strategies’ average of 3.75. Agents who provided allocare performed equivalently well in these simulations regardless of whether they paid additive or declining marginal costs. In other words, when resources are abundant, the type of care didn’t matter; what mattered was that agents were providing some form of allocare. However, when we reduced the amount of currency we provided to agents, simulating contexts of scarcity, only agents who provided allocare with declining marginal costs (time investment) did well, while the other three types struggled to effectively birth and raise children. This is because time-investment allocare, with its declining marginal cost structure, is characterized by what is known as an economy of scale: an economic situation in which scaling up production allows you to lower your costs. By providing only care that has declining marginal costs, alloparents are able to gain evolutionary fitness benefits by producing more surviving offspring. When an agent alloparents for someone else, or someone else alloparents their children, neither is paying a 1:1 ratio of resources to children; each saves some resources, which better ensures survival.

While this model is necessarily less complex than the real world, the results from this set of simulations allow us to make several predictions about real-world alloparenting. Based on these insights from our model, we expect that time investments—watching, supervising, or teaching children—will be more common in socioecological contexts of scarcity. In contrast, contexts of abundant time and resources might lead to more varied manifestations of allocare. We also predict that time-investment allocare with declining marginal costs will be more universal than resource-investment allocare with additive costs, as it is adaptive over a wider set of socioecological conditions. More work is needed, but available ethnographic observations appear to support this prediction, with many anthropologists noting that time-investment care, such as children’s playgroups, is particularly common across societies and ecological contexts.

There remains much work to be done investigating the costs and benefits of different forms of allocare. Our model is, after all, only a model. Real-world anthropological data must now be marshalled to evaluate the predictions we outline. Future work can also adjust our model in various ways to explore related questions, such as the reality that children often require both forms of allocare or require different forms at different points in their development. As we move forward, combining the power of computational modeling with ethnographic observation provides a uniquely powerful approach to testing complex theoretical ideas that will better help us explain the commonalities and differences in alloparenting strategies across cultures.

Read the original article: Weitzel, E.M., Wilson, K.M., Spake, L., Schaffnit, S.B., Lynch, R., Sear, R., Shaver, J.H., Shenk, M.K., & Sosis, R. (2024). Cost structures and socioecological conditions impact the fitness outcomes of human alloparental care in agent-based model simulations. Evolution & Human Behavior, 45(5), 106613.

Maternal Depression: A Catalyst for Cooperation?

Image: woman in Uganda

– by Alessandra Cassar

Women around the world experience maternal depression, particularly around the time of pregnancy and childbirth. With around 10-15% of mothers in high-income countries and up to 25% in low- and middle-income countries experiencing depression during or after pregnancy, it is essential that we understand its causes and effects. But from an evolutionary perspective, why would such a costly condition even exist, if it affects not only the mother and her infant but potentially the entire family?

Patricia Schneider, Chukwuemeka Ugwu, and I decided to explore one specific theory rooted in evolutionary psychology: that maternal depression may not be solely a dysfunction but an evolved mechanism. We tested whether maternal depression could act as an unconscious bargaining strategy for mothers who had exhausted all other strategies to elicit help and support from their social network, particularly the baby’s father. This idea is controversial but offers a potential explanation for why a condition so detrimental could have persisted over evolutionary time.

Evolutionary Perspectives on Depression. Several evolutionary hypotheses suggest that depression might have evolved as an adaptive response. Depression could enable individuals to obtain help from others, encourage cognitive changes to solve social problems, or prevent individuals from engaging in risky behaviors. In other words, while depression is painful, it might serve important functions, especially when faced with adverse social situations. This idea gains more traction when we look at maternal depression, where a mother’s wellbeing directly impacts her offspring’s chances of survival.

Perinatal depression, occurring during pregnancy or shortly after birth, presents a particular challenge. Symptoms like sadness, fatigue, and loss of interest in usual activities make it hard for mothers to care for their infants. This has led some researchers to propose that maternal depression might serve as a signal—a costly, honest cry for help—that triggers increased investment from others, particularly those closest to the mother (and genetically closest to the infant).

The Bargaining Model Hypothesis. The bargaining model of maternal depression advanced by Edward Hagen proposes that depression acts as an unconscious strategy for a mother to elicit support. In evolutionary terms, a mother who experiences adversity—such as a lack of support from the baby’s father or social group—could “bargain” for more help by stopping activities that are beneficial to the baby and even to herself. This behavior, costly to everyone, herself included, would signal, through her visible depressive symptoms, that she cannot continue to care for the baby on her own and she truly needs help. This could encourage the father or others to step in and provide the necessary assistance, ensuring the survival of the offspring.

Our Work. To test this hypothesis, we conducted a study in Uganda involving nearly 300 women around the time of giving birth. Our focus was on whether women showing signs of depression received more cooperation from their social network—specifically, their spouse, kin, and other close individuals—compared to those who did not display such symptoms. We used a quasi-experimental method called regression discontinuity design (RDD) to estimate the potentially positive causal effect of perinatal depression on cooperation within a mother’s social group, despite an expected negative relationship between the two. The negative relationship between social support and maternal well-being is bi-directional: lack of support can cause depression, while depression may reduce support as depressed individuals tend to isolate themselves. This well-known relationship can be seen in the downward-sloping predicted lines in Figure 1. However, the bargaining hypothesis suggests that depressive symptoms may also increase cooperation from others, adding a positive effect of depression on support. But how can we test for that, given that one cannot randomize who is depressed and who is not? Our method rests on the assumption that, around the threshold for depression advised by psychiatrists, scoring a few points below or a few points above is essentially random. In that case, we can estimate whether there is a positive “jump” in support between the women right below and those right above the threshold.
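
For readers unfamiliar with the method, the sketch below shows the kind of regression behind such an estimate, using simulated data. The variable names, simulated numbers, and specification details are ours for illustration; see the paper and Figure 1 for the actual analysis, which centers the depression index at the clinical threshold (EPDS = 10):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300

# Simulated illustration of a regression-discontinuity estimate.
# depression: index centered at the clinical threshold (EPDS = 10 maps to 0).
depression = rng.uniform(-8, 8, size=n)
above = (depression >= 0).astype(float)               # 1 if at or above the threshold
# Support declines with depression overall, but jumps up at the threshold.
support = 3.5 - 0.10 * depression + 0.8 * above + rng.normal(0, 0.5, n)

# Linear regression with a structural break at 0:
#   support = b0 + b1*depression + b2*above + b3*(depression*above)
X = np.column_stack([np.ones(n), depression, above, depression * above])
coefs, *_ = np.linalg.lstsq(X, support, rcond=None)
print("estimated jump at the threshold:", round(coefs[2], 2))   # close to the true 0.8
```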

Key Findings. Our findings provide some evidence in favor of the bargaining hypothesis. Comparing women just below and just above the threshold for potential depression, we observed that mothers who were at the threshold of displaying depressive symptoms did indeed receive more help, particularly from their spouse (see the positive jump/discontinuity at the threshold in the graphs of Figure 1), maternal grandparents, and a few other close kin. These results suggest that, at least in some cases, maternal depression might trigger increased cooperation from key individuals in a mother’s network.

Interestingly, non-kin such as neighbors and friends, while generally supportive, did not seem to react as strongly to a mother’s depressive symptoms. This aligns with the idea that depression’s signaling function might be more effective among genetic relatives, whose fitness is directly tied to the survival of the child.

Figure caption: Discontinuity plot: Spouse. Each graph represents the predicted line resulting from a linear regression of the corresponding support type on the depression index with a structural break at 0 (equivalent to EPDS=10), and a 95% confidence interval. The vertical axis shows the respondent’s assessment of how often (Always=5, Most of the times=4, Some of the time=3, Rarely=2, Never=1) the individual named above typically: watched children for her (Watch), helped take care of her or the baby in case of sickness (Care), gave guidance about taking care of the baby (Guide), talked with the respondent (Talk), and would give money in case of need (Money). See the original EHB article for other relationships.

The Role of the Spouse: A Critical Relationship. Among all relationships, the one with the baby’s father appeared to be the most important. Women with more support from a loving and helpful partner were at significantly lower risk for depression. Conversely, conflicting or controlling relationships elevated the risk of depression. The baby’s father not only provided the most help but also responded most strongly to his partner’s early signs of depression. This was especially true if the relationship with the mother was one of conflict (a paradox at first, but precisely what is predicted by the bargaining idea). This could highlight the evolutionary significance of pair bonding and cooperative parenting.

In ancestral environments, where raising a child often required contributions from both parents (and others), depression could have evolved as a strategy for mothers to secure greater investment from their partner, ensuring the survival of their offspring. Our findings support this idea, suggesting that perinatal depression may indeed catalyze cooperation from fathers and close kin.

Implications for Mental Health and Public Health Policy. While our study provides some evidence for the bargaining model, it’s important to emphasize that the findings do not imply that maternal depression is beneficial or should be left untreated. Depression is a painful condition that requires serious attention and care. However, understanding its potential evolutionary roots could open new avenues for treatment, particularly in the realm of family and marital therapy.

If maternal depression serves as a signal of need within a social group, then interventions that focus on improving relationships and increasing support from family members could be particularly effective. For example, recent research has shown that psychotherapy interventions, such as cognitive behavioral therapy (CBT), can have positive and lasting effects on women’s mental health and their children’s well-being. Our findings suggest that incorporating family therapy, especially to address conflicts with a spouse or other family members, could further enhance these treatments.

Conclusion: Depression as a Call for Cooperation. Maternal depression remains a complex and multifaceted condition. While much of the focus has been on its negative consequences, our research adds to the growing body of evidence suggesting that it may also serve an adaptive function. By acting as a catalyst for increased cooperation, maternal depression might help mothers in challenging environments secure the support they need to care for their offspring.

Understanding depression through an evolutionary lens doesn’t diminish the suffering it causes but offers a deeper perspective on why such a costly condition might exist. As we continue to explore the relationship between mental health and social support, this knowledge could pave the way for more effective treatments that not only address the symptoms of depression but also the underlying social dynamics that contribute to its persistence.

Read the original article: Cassar, A., Schneider, P.H., & Ugwu, C. (2024). Maternal depression as a catalyst for cooperation: evidence from Uganda. Evolution & Human Behavior, 45(4), 106575.

 

An issue of EHB

E&HB call for papers for two special issues

– by Deb Lieberman, Editor-in-Chief of Evolution & Human Behavior

I am pleased to announce a formal call for research papers to be part of two upcoming special issues/sections in Evolution & Human Behavior (EHB).

The first special issue is focused on evolved adaptations that function in the domains of physical contact, contagion, and intimacy. There have been discussions/round tables on the topic of the evolution of kissing and close contact (I recall HBES presentations on this a while back). In addition to some of the papers arising from these discussions/roundtables, I would like to invite the larger HBES community to submit their work on this topic.

The second call for papers is on the topic of personality, individual differences, and clinical psychology. While all papers that fall under this umbrella are welcome (which encompasses a lot, I know), I’d like to specifically encourage papers examining evolutionary origins, function, individual differences, neural correlates, development, cross-cultural patterns, and measurement of dark triad personality styles—narcissism, psychopathy, and Machiavellianism.

If you are interested in contributing to either collection, please send a proposal email to EHBeditor@proton.me by the new deadline of Dec 1st and provide a short paragraph describing your proposed contribution. Papers themselves are due April 30th, 2025; papers submitted without a proposal may still be considered until this date. Research papers will be given priority over discussions or theoretical contributions; however, a comprehensive review of either literature would be most welcome. To prevent duplicated efforts and to allow for the possibility of joint authorship on research/review ventures, please let me know your interests in your email. Last, HBES members will be given priority over non-members should I get swarmed with proposals, so please renew (https://www.hbes.com/membership-join/).

Warm regards,

Deb Lieberman

“Who’s going to do the dishes?” Lessons from hunter-gatherers

– by Angarika Deb & Christophe Heintz (photo credit: ChatGPT)

Just yesterday, my partner and I got into an argument about who has done the vacuuming around the house the last couple of times. And now that we are expecting some guests, who should be the one to do it? Household arguments about undone tasks (or about having done more than one’s fair share!) are a staple part of life. The actual arguments might look different from one couple to the next: some might vigorously argue aloud, providing continuous information to one another about how they are making up their minds and why; others might silently and implicitly negotiate, leaving some chores clearly unfinished to prod the other partner into picking up their slack. But underneath these, there’s a shared commonality: they’re all bargaining problems.

We can fit household bargaining onto a Nash bargaining model: imagine two partners dividing a shared pool of resources that is valuable to both but limited in amount, like leisure time. Partners can demand to split the total leisure time 50/50, or even ⅔ and ⅓. Both of these splits are compatible, and the house can still keep running. But if both ask for ⅔ of the total available leisure time, something in the house remains undone. The interesting question is: who gets to ask for the greater share? We suggested that each partner makes these decisions based on the fallback options they have in case their demands are incompatible with their partner’s and they risk ending the relationship.
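
To make the logic concrete: under the simplest assumptions (linear utility over leisure), the Nash bargaining solution gives each partner their fallback payoff plus half of the remaining surplus, so the partner with the better fallback ends up with the larger share. The numbers below are made up for illustration:

```python
def nash_split(total_leisure, fallback_a, fallback_b):
    """Symmetric Nash bargaining with linear utility: each partner receives
    their fallback plus half of the surplus left over after both fallbacks."""
    surplus = total_leisure - fallback_a - fallback_b
    return fallback_a + surplus / 2, fallback_b + surplus / 2

# Made-up numbers: 12 hours of joint leisure to divide per day.
print(nash_split(12, fallback_a=3, fallback_b=3))  # (6.0, 6.0): equal fallbacks, equal split
print(nash_split(12, fallback_a=5, fallback_b=3))  # (7.0, 5.0): better fallback, larger share
```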

This bargaining process with fallback options helps explain widespread and stable gender inequalities: women systematically have worse fallback options than their male counterparts. However, what constitutes these fallback options varies from one society to the next. Here, we studied households from two hunter-gatherer communities, the Mbendjele BaYaka based in Congo and the Agta from the Philippines, who have remarkable equality between men and women, socially, politically and domestically. We published our findings in Evolution and Human Behavior.

The lifestyles of these two groups – and other immediate-return hunter-gatherers like them – are quite distinct from our industrialised economies and settled way of life: they operate in environments with lower levels of food security; lead a nomadic existence; have very few material possessions and little individual ownership of goods; and all individuals – including children – enjoy considerable autonomy over their movement and lives. Their fallback options are based mainly on their social capital—i.e., how many friends and helpers they have—since they own few material possessions. If their marriage were to dissolve or if they were to get into a serious fight with their spouse, they could then rely upon these friends. The bargaining model we outlined above predicted that the better one’s fallback options, the greater their bargaining power and share of leisure time within the household. To find out whether that was indeed the case in these hunter-gatherers, we calculated each person’s daily average leisure hours: we observed everyone from 6 AM to 6 PM, noting down what they were doing every hour, for weeks. We also observed them giving gifts to each other and studied how they share food in their daily lives, to establish each person’s social capital.

Our results were surprising, and contrary to what we initially predicted: individuals with higher social capital did not enjoy a higher proportion of leisure time than their spouse. Why would this be? From our ethnographic knowledge, we speculated that this was likely due to the assertive egalitarianism present in these two hunter-gatherer groups: when groups have strong norms enforcing equality between individuals, these norms can end up shaping the fallback options available to individuals and thus modulate any individual-level power dynamics that might arise. No single individual, socially wealthy as they may be, has the opportunity to dominate their partner into doing more work in the house and picking up their slack.

In line with this, we found that across households, despite differences in social capital, both partners had equal amounts of leisure time, for both the BaYaka and the Agta. This was remarkable, given what we know of industrialised societies, where women usually shoulder a higher burden of household tasks. Decades of studying households across most countries have revealed that women put in a substantially larger share of the time spent on household chores than men – usually 70%, but sometimes as much as 90% – even when they are employed in full-time or part-time jobs.

If you are a feminist, like the authors of this blog, it is encouraging to find that there are human societies where men and women not only have equal political status but also operate on egalitarian terms in the household. Our current work provides evidence that differences in one’s social capital need not convert into individual-level power differences, and it suggests the potentially important role of social norms in shaping household behaviour. Our future work will test this more directly. The gender equality documented here is a promising sign that we in industrialised societies could achieve the same, if the right kind of group-level practices and norms – such as bilocal residence after marriage, equal political voice for men and women, involvement in subsistence activities, and so on – are adopted.

Read the original article here: Deb, A., Saunders, D., Major-Smith, D., Dyble, M., Page, A.E., Salali, G.D., Migliano, A.B., Heintz, C., & Chaudhary, N. (2024). Bargaining between the sexes: outside options and leisure time in hunter-gatherer households. Evolution & Human Behavior, 45(4), 106589.

The face of a hunter: When judging a book by its cover makes sense

– by Adar Eisenbruch

Photo credits: Hadza hunter (top) by Kristopher Smith; Tsimane hunter (middle) by Michael Gurven

Many of us were taught as children not to judge a book by its cover, meaning not to make assumptions about people based on their appearance. Yet we do it anyway. For example, people whose faces look “competent” – i.e., they look like they will be good at what they do – are no better at running a company than others, but they are nonetheless more likely to get hired as a CEO.

If judging people by their faces is irrational, why do people do it so persistently? Probably because we’ve evolved to. There are many cases in which preferences evolved because they were beneficial to our ancestors (e.g. the desire to eat as much sweet food as possible), but they produce bad outcomes today (e.g. health problems). How we judge other people’s faces might fall into this category, too. Someone’s face might not predict who’d be a good corporate executive, but it can tell you about other traits that were more relevant to our ancestors, like how much they like children or how good a fighter they are.

My colleagues Kris Smith, Chris von Rueden, Cliff Workman, Coren Apicella and I recently combined data from Tsimane foragers from Bolivia and Hadza hunter-gatherers from Tanzania – for whom knowing who in their community is a better or worse hunter is a matter of great importance and interest – with data from American couch potatoes (or to be more polite, online participants in a sedentary, agricultural, post-industrial population) to discover another area in which face perception is accurate. First, Tsimane and Hadza individuals judged the men in their communities on hunting skill. Then, headshots of those Tsimane and Hadza men who had been evaluated were shown to the American participants, who were asked to judge them on “ancestral productivity.” Ancestral productivity refers to how good a hunter-gatherer a person would be (e.g. ability to hunt, gather, make tools, survive the elements). Previous research has shown that American undergraduates (for whom ancestral productivity has no obvious relevance) want to be friends with and are more generous towards individuals they perceive as high in ancestral productivity.

We found a positive correlation between the peer evaluations of hunting ability and the Americans’ perceptions of ancestral productivity based on just one face photo. This means that the men who Americans thought looked like more productive hunter-gatherers actually were the better hunters, at least according to their peers.

Could this be caused by both the peer informants and the American participants picking up on something visible in the target men, like attractiveness, and inferring productivity from that? In other words, could this be an example of the “halo effect” that social psychologists are familiar with? Probably not. There’s evidence that the halo effect doesn’t operate among the Hadza the way it does among Americans, and several studies of forager societies have found that peer judgements of hunting ability track objectively measured hunting returns. In other words, when you ask foragers who in their community is a good hunter, they know what they’re talking about.

Could we have found this positive correlation because the American participants happened to be experienced hunters and outdoors enthusiasts who may have learned what a good hunter looks like? No. We asked them about their hunting experience and other outdoor skills, and we verified that the sample was not stacked with Eagle Scouts and archery instructors.

A better explanation is that humans have evolved to evaluate each other on hunting ability. Our ancestors depended on each other for collaborative hunting and food sharing, and they chose their social partners on those bases. They had to be able to tell how good a hunter someone was – quickly, easily, from just a look if that’s all the information they had. Individuals who could accurately perceive hunting skill in others would have had better hunting partners and more reliable food sharing relationships. This means more calories available to themselves and their kin, and therefore more descendants. Played out over evolutionary time, this created the ability to perceive hunting skill from the faces of others, an ability that is still present even in people for whom it has no contemporary utility.

For this to work, there must be some observable traits in the face that correlated with hunting ability ancestrally. In other words, there must be some way(s) in which good hunters look different from bad hunters. What are those cues? We don’t know. We tested some of the usual suspects of face metrics (e.g. facial width-to-height ratio, symmetry), but none were a good explanation. This is an open question.

So far, we’ve only discussed the results for men’s faces. But one of the Hadza datasets also included women’s faces, which had been evaluated on gathering (rather than hunting) ability by their campmates. So can Americans also tell which women are better gatherers? No. In fact, quite the opposite. The better gatherers (based on peer evaluations) were perceived as less ancestrally productive by our online participants.

Why are people not only unable to judge female ancestral productivity from the face, but actually misjudge it? In our data, it seems to be due to age. The American participants perceived older women as less productive, even though their peers reported that they’re better gatherers. Perhaps it was less important for our ancestors to evaluate women’s gathering skill than men’s hunting skill, so we did not evolve a corresponding ability for judging women’s faces. Perhaps there are stereotypes in the US (but not among the Hadza) about older women’s abilities that influenced our participants. Perhaps both, and there are other possible explanations as well. There’s clearly more research to do here.

To return to not judging a book by its cover: I’ve always thought that was a weird saying, because you can actually tell a lot about a book from its cover. Scary stories usually have a picture of misty woods or a font that looks like dripping blood; Moby Dick always has a whale on it. The fact that humans can perceive men’s hunting ability from their faces, and we are socially attracted to those high in hunting ability, might help explain some of the modern cases in which people seem to be misled by others’ looks. In effect, people might be choosing CEOs and congresspeople by relying on the same facial features that our ancestors used to choose hunting partners and campmates. It turns out that you can judge people by their looks, if you know the right questions to ask.

Read the original article here: Eisenbruch, A.B., Smith, K.M., Workman, C.I., von Rueden, C., & Apicella, C.L. (2024). US adults accurately assess Hadza and Tsimane men’s hunting ability from a single face photograph. Evolution & Human Behavior, 45(4), 106598.

Why women cheat: mate-switching vs. dual-mating

– by Macken Murphy

Socially monogamous birds, like humans, often have “affairs.” In some species, these liaisons seem to serve a dual mating strategy. The females prioritize good looks in extra-pair mates (e.g., more complex ornamentation) and good parenting in primary partners (extra-pair males generally don’t invest in young). Good looks in males are thought to signal genetic benefits, providing the females with more robust—or, at least, more attractive—offspring. And so, many ornithologists take aesthetic differences between extra-pair and primary mates in these species as evidence that the females use extra-pair mating to make a combo deal: good investment at home and “good genes” from outside.

Evolutionary psychologists would have told a similar story about humans two decades ago, just with different evidence. It was popular to argue that women’s infidelity evolved due to the competitive advantage female ancestors gained from conceiving with more attractive affair partners and then raising their affair partner’s child with their more invested primary partners. A flashy series of studies from the previous several years had suggested that women’s behavior and preferences changed around ovulation, prioritizing cues to “good genes” when conception was likely. While the reasoning was less direct than that found in avian species, the initial confirmation of this clever prediction lent credence to the underlying dual-mating hypothesis. If women prize different traits at ovulation compared to the rest of the month, perhaps they prize different partners as well, and recruit them towards different ends.

However, the golden age of ovulatory shifts was brief. Failed replications, skepticism about methods, and insinuations of p-hacking cast doubt on the original experiments. Then, newer, more rigorous research often found null results. This anti-climax prompted some scholars to wonder: What if the problem lies deeper, with dual-mating itself?

In 2017, David Buss and his colleagues proposed an alternative primary explanation for women’s infidelity: mate-switching. Drawing on evidence that women who cheat are more likely to be in love with their affair partners and less likely to be satisfied in their relationships than men who cheat, they argued that infidelity primarily helps women assay and seduce replacements. To paraphrase Buss, you wouldn’t quit your job before finding a new one, so why would you dump your mate before getting a better one?

(This hypothesis also has precedent in birds: One small study of cockatiels found that female extra-pair copulation led to mate-switching in all cases, and these re-pairings were “trade-ups” in that they had higher expected reproductive success.)

However, though it was a persuasive article, its key evidence is open to multiple interpretations, and much of the empirical support for mate-switching is based on data from women who hadn’t had affairs. And, really, it’s perfectly coherent for women’s infidelity to serve a dual-mating strategy without shifting strategies around ovulation. Humans exhibit notable sexual stability across the cycle, so the ovulatory shift sub-hypothesis may not have been the best test of human strategic dualism to begin with.

A more straightforward test of dual-mating than looking for periovulatory changes would be to follow the avian research and test if affair partners are more handsome than primary partners. This pattern would suggest dual-mating since better-looking men are generally accepted to offer more genetic benefits, even if the only benefit they offer is better-looking offspring. On the other side, a similar empirical test of the mate-switching hypothesis—or, at least, its trading-up utility—would be to check whether women prefer their affair partners to their primary partners overall.

Finally, one way to test these hypotheses against each other would be to check if women see their primary partners or affair partners as better dad material. Parental attractiveness ratings are a clean way to pit these hypotheses against each other, as they clearly have dueling predictions. Dual-mating argues the primary partner is the intended father figure for offspring, and mate-switching argues it’s the affair partner. So, if women’s affairs mainly serve a dual-mating strategy, the primary partner should be the better dad. If women’s affairs primarily aid a switch to a new mate, the affair partner should be viewed as the better dad.

Given the utility and similarity of these tests, we decided to conduct them as a package. I, along with primatologist Dr. Caroline Phillips and psychologist Dr. Khandis Blake, recruited a multinational sample of 254 people who had affairs and—in a pre-registered study with open data and materials—had them rate their affair partner and their primary partner in terms of their mate value, their parental attractiveness, and their physical attractiveness.

If mate-switching drove most women’s infidelity, affair partners should have been rated as higher in mate value—but they were not. Women rated their primary partners and affair partners almost exactly equal in overall desirability. Instead, our human results followed the exact structure one would expect from strategic dualists: Affair partners were more physically attractive than primary partners and primary partners were more parentally attractive than affair partners. Our result was the best-case scenario for dual-mating and provides evidence that psychological adaptations to acquiring genetic benefits underlie women’s infidelity.

Figure caption: Interaction plot comparing men’s and women’s ratings of their primary partners and their affair partners in terms of physical attractiveness and parental attractiveness. Both women and men rated affair partners higher than their primary partners on physical attractiveness but much lower on parental attractiveness.

However, dual-mating cannot explain all infidelity. Women in our study reported utilizing extra-pair mating as a means to various ends, including, sometimes, switching mates. Further, the primary motivation for infidelity likely varies based on ecological factors (e.g., women may primarily use infidelity to obtain additional resources in resource-scarce environments). We believe the prevalence of dual-mating in our ancestry likely varied predictably with ecology, and so our results should not be extrapolated across all contexts.

Lastly, it’s conspicuous that men followed the same pattern, cheating up in terms of looks and down in terms of parenting. This gender similarity was a bit surprising to us at first. However, since men’s affairs are broadly accepted to have evolved through producing more offspring, men who cheat, too, are “just conceiving” with affair partners and co-parenting with primary partners. Therefore, it might make sense that they prioritize the conceptive benefits signaled by good looks (e.g., fertility) in affair partners and motherly qualities in primary partners.

While we look forward to further tests of dual-mating’s relevance to human affairs, for now, our results suggest that women—and, surprisingly, men—follow a pattern common in birds: better parenting at home and better looks on the side.

Read the original article here: Murphy, M., Phillips, C.A., Blake, K.R. (2024). Why women cheat: testing evolutionary hypotheses for female infidelity in a multinational sample. Evolution & Human Behavior, 45(5), 106595.

Dominant vs prestigious leaders: Do children from more egalitarian and hierarchical societies differ in their preferences?

– by Maija-Eliina Sequeira, Narges Afshordi, & Anni Kajanus

Social hierarchies are an inherent part of human social life, and children learn to recognise and navigate them from infancy, suggesting a universal tendency to do so. But how does our environment shape our preferences for who to learn from, or who should lead? In our recent article, we asked: How and when do children learn to recognise different forms of high status? And (how) does this vary cross-culturally? We were particularly interested in how levels of societal inequality might shape how children think about social status, since inequality has been linked to dominant leaders appearing more appealing.

Prior research has distinguished between two bases of high social rank, prestige and dominance. While dominance-based hierarchies are found across many species, prestige seems to be more specific to humans and connected to the importance of cultural learning and cooperation. To efficiently acquire remarkably complex cultural knowledge and skills, humans must know, from an early age, who to learn from. We orient toward prestigious individuals: those who are admired and emulated by others and who presumably have the skills to succeed in our particular environment. Unlike dominant individuals, those high in prestige tend to be amicable and to have influence, rather than coercive power, over others. While children recognize both dominance and prestige as forms of high status, preference for prestige seems to increase with age, as does aversion toward dominance. Some US-based studies have also shown that prestigious leaders are preferred over dominant leaders.

But does this vary across cultures? We collected and analysed data from children aged 4-11 years in three very different socio-cultural contexts: Colombia, Finland and the US. Societal inequality is relatively low in Finland and relatively high in Colombia, compared to global averages, and we therefore supposed that children in Colombia might show more of a preference for dominance compared to children in Finland, with those in the US in between.

In the study, the children first watched two sets of cartoons where a subordinate character – called Dimo – interacted with both a dominant-type and a prestigious-type character. We then asked children a series of questions designed to identify:

  1. Do children recognise dominance and prestige as signals of high status?
  2. Do they distinguish between dominance and prestige?
  3. Do they choose to learn from a dominant or a prestigious character?
  4. Do they prefer to assign leadership to a dominant or a prestigious character?

Finally, we showed children a new image of two characters with subordinate and dominant body language and asked them which one they would be and why, to determine whether they self-identified more with a dominant or a subordinate character and to understand their reasoning. Across the different sets of questions, we were interested in identifying shared tendencies and developmental changes across the three groups as well as any cross-cultural differences.

Recognising and distinguishing between dominance and prestige

Children in all three contexts recognised and differentiated between dominance and prestige. As we expected, they got better at doing so with age; our youngest children (four-year-olds) could identify dominance and prestige as signals of high status but did not distinguish between them. By five years of age, children were distinguishing between them in the direction we expected, saying that Dimo would prefer and sit next to the prestigious character and fear the dominant character.

Interestingly, there were cross-cultural differences in two of the distinguishing questions. Children in Colombia were less likely than children in both Finland and the USA to say that Dimo feared the dominant character and would sit next to the prestigious character.

Learning novel names for novel objects

We showed children a novel object and explained that the dominant character called it one invented name (e.g., ‘modi’) and the prestigious character another (e.g., ‘kapi’). We then asked them what they thought it was called. Overall, children were more likely to give the name provided by the prestigious character, and this tendency increased with age. We found no evidence of cultural differences in who children chose to learn from.

Assigning leadership

Children in all three contexts also tended to choose the prestigious character as a leader across three leadership tasks, and older children did so more than younger ones, again suggesting a shift towards prestige with age. In the leadership questions, we also found cross-cultural differences in children’s answers: children in Finland were more likely than children in Colombia to choose the prestigious character as a leader across the three tasks.

Self-identification

Finally, children identified with the subordinate character more than the dominant character in the image. This self-identification with the subordinate increased with age and was stronger in Finland than in Colombia.

Interpreting the results

We used our familiarity with the field sites and findings from ethnographic fieldwork when designing the study and interpreting statistical analyses. Overall, we found a shared tendency across the three contexts to favour prestige, and an increase in this preference with age. There was therefore a shared developmental shift between 4 and 11 years towards choosing to learn from, and assigning leadership to, the prestigious character, lending support to evolutionary models of social learning.

We also found cross-cultural differences in children’s answers, in the expected direction: children in Finland showed a stronger preference for prestige than those in Colombia. Ethnographic data from Colombia and Finland highlighted differences between these contexts, such as the relative normalisation of authoritarian parenting and dominant-type interactions in Colombia vs. their almost complete absence in children’s lives in Finland, where children were actively taught to avoid displays of dominance. We suggest that while dominance is seen as inherently negative in Finland, this is not necessarily the case in Colombia, and so Colombian children do not develop such a strong aversion to – or fear of – dominance.

The results draw attention to both the importance of conducting developmental research with children in a diverse range of societies, and the value of interdisciplinary approaches that consider child development as a process that occurs within a cultural context.

Read the original article here: Sequeira, M.-E., Afshordi, N., & Kajanus, A. (2024). Prestige and dominance in egalitarian and hierarchical societies: children in Finland favor prestige more than children in Colombia or the USA. Evolution & Human Behavior, 45(4), 106591.

Are heritable individual differences explained by balancing selection or mutation-selection-drift balance?

– by Brendan Zietsch

A key question for evolutionary psychologists is: what selection pressures have shaped human traits, and how do those traits vary and covary across individuals? Recent genomics studies have revealed a wealth of evidence that sheds light on these questions. In my paper, “Genomic findings and their implications for the evolutionary social sciences”, I aimed to bring together these findings while explaining the conceptual and technical background that is often assumed knowledge for reading the primary reports. I also outlined what I see as the implications of these findings for psychological life history theory and for our interpretation of individual differences more generally.

The key question that genomic studies can answer is which form of selection has shaped genetic variation in human traits: negative selection or balancing selection. Negative (or purifying) selection removes harmful variants and depletes genetic variation; variation is then maintained by a balance between this depletion and the constant influx of new genetic variation from mutations. Balancing selection, on the other hand, refers to forms of selection that actively maintain genetic variation. It can occur when the relationship between trait value and individual fitness varies over time or place (fluctuating selection) or sex (sexually antagonistic selection), when it depends on the rarity of the trait in the population (negative frequency-dependent selection), or when an allele’s effect on fitness depends on the other allele at the same locus (heterozygote advantage). So, the question: is the genetic variation in traits today shaped by a history of negative selection or balancing selection?
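To make the contrast concrete, two standard population-genetics results (textbook formulas, not spelled out in the article) illustrate the two regimes. Under mutation-selection balance, a deleterious allele arising at mutation rate μ and reducing heterozygote fitness by hs settles at a very low equilibrium frequency, whereas heterozygote advantage (one form of balancing selection) holds an allele at an intermediate frequency:

$$\hat{q}_{\text{mutation-selection}} \approx \frac{\mu}{hs}, \qquad \hat{p}_{\text{heterozygote advantage}} = \frac{t}{s+t},$$

where, in the second case, the genotype fitnesses are 1 - s, 1, and 1 - t. Because mutation rates are tiny relative to realistic selection coefficients, the first expression keeps variants very rare, while the second actively keeps them common; it is this difference in expected allele frequencies, and hence in genetic architecture, that the genomic analyses described below exploit.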

Genomics studies can be evolutionarily informative because they reveal the genetic architecture of human traits. Roughly, genetic architecture refers to the character of the genetic variation that underlies trait variation, especially the number of genetic variants that contribute to heritable variation and how the frequencies of those variants relate to their effect sizes. Negative selection and balancing selection produce different genetic architectures (see below). Therefore, from the genetic architecture of traits, we can make inferences about which form of selection has shaped each trait.

Certain features of genetic architecture are consistent across many traits that have been subject to genomic analysis, including traits that are of interest to evolutionary psychologists: life history traits such as age at puberty; morphological traits like waist-to-hip ratio, BMI, and height; cognitive-based traits like educational attainment; personality traits like neuroticism; and mental disorders like schizophrenia.

One common feature is that the heritability of such traits is spread evenly across thousands of genetic variants. Under no selection, or under balancing selection, we would expect that, although many variants might influence a trait, a small number of these would account for most of the trait variance. That is because we know that traits are influenced by rare variants with large effect sizes, and there is no reason, other than negative selection, why the same should not be true of common variants as well. As a mathematical necessity, in that case, a relatively small number of common variants with large effects would account for most of the trait variance. Instead, we see that any one variant accounts for only a tiny percentage of trait heritability, which is exactly what we would expect under negative selection.
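To see why this is a mathematical necessity, it helps to write down the standard additive-model expression (the worked numbers below are my own illustrations, not figures from the paper). A biallelic variant with allele frequency p and effect β on a standardized trait contributes

$$\sigma^2_{\text{variant}} = 2p(1-p)\beta^2$$

to the trait’s variance. A hypothetical common variant with p = 0.3 and β = 0.5 would contribute 2(0.3)(0.7)(0.5)² ≈ 0.10 of the variance on its own, so a handful of such variants would exhaust a heritability of, say, 0.5. The effects actually estimated in genome-wide association studies are closer to β ≈ 0.02, contributing roughly 0.0002 each, which is why thousands of variants are needed to add up to the observed heritability.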

Another feature shared among traits is that both common and rare variants contribute substantially to trait heritability. Several lines of evidence suggest that rare variants contribute disproportionately to trait variance, relative to what is expected under neutrality (no selection) or balancing selection, where virtually all the variation is expected to be accounted for by common variants. This pattern is expected under negative selection, because selection is less effective at removing rare deleterious alleles than common ones. Modelling shows that balancing selection can, in general, only maintain variation at intermediate (i.e., common) frequencies.

A third feature of genetic architecture observed across traits is that variants’ effect sizes are negatively associated with their minor allele frequency. Rarer variants tend to have larger effects than common variants, which invariably have tiny effects. The only known explanation for this pattern is that selection against harmful variants (i.e. negative selection) eliminates any common variants with large (or even moderate) effects, whereas rarer variants, being less visible to selection, can persist in the population at low frequency even when their effects are larger.
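The logic of these two features can be illustrated with a toy forward simulation (a sketch of my own, with arbitrary parameter values, not an analysis from the paper). It assumes a Wright-Fisher population in which the strength of selection against each new trait-affecting mutation is proportional to the size of its effect; comparing that run with a drift-only run shows both the negative frequency-effect size relationship and the disproportionate contribution of rare variants to trait variance:

```python
# Toy Wright-Fisher sketch (illustrative only; all parameter values are arbitrary).
import numpy as np

def simulate(selection_coupled_to_effect, N=1000, gens=1500, new_per_gen=10, seed=1):
    rng = np.random.default_rng(seed)
    freqs, betas = [], []
    for _ in range(gens):
        # New trait-affecting mutations enter at frequency 1/(2N); effect magnitudes
        # are drawn from an (arbitrary) exponential distribution.
        for _ in range(new_per_gen):
            freqs.append(1.0 / (2 * N))
            betas.append(rng.exponential(0.1))
        kept_f, kept_b = [], []
        for p, b in zip(freqs, betas):
            # Assumption: selection against an allele scales with its effect size;
            # in the drift-only run, s = 0 for every allele.
            s = min(0.5 * b, 0.9) if selection_coupled_to_effect else 0.0
            p = p * (1 - s) / (1 - s * p)          # deterministic change from selection
            p = rng.binomial(2 * N, p) / (2 * N)   # binomial sampling = genetic drift
            if 0.0 < p < 1.0:                      # keep only segregating alleles
                kept_f.append(p)
                kept_b.append(b)
        freqs, betas = kept_f, kept_b
    p, b = np.array(freqs), np.array(betas)
    var = 2 * p * (1 - p) * b**2                   # per-variant additive variance
    maf = np.minimum(p, 1 - p)
    return {
        "corr(effect size, MAF)": round(float(np.corrcoef(b, maf)[0, 1]), 2),
        "variance share from MAF < 1%": round(float(var[maf < 0.01].sum() / var.sum()), 2),
    }

print("with negative selection:", simulate(True))
print("drift only:             ", simulate(False))
```

With selection switched on, large-effect alleles are held at very low frequencies, so effect size and frequency are negatively correlated and most of the additive variance 2p(1-p)β² comes from rare variants; with drift alone, common variants dominate.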

A fourth feature is that younger alleles (i.e. those that arose by mutation more recently) explain more heritability per locus. This is expected under negative selection: deleterious alleles that have not been around as long have had less time to be eliminated by natural selection. Under balancing selection we would expect the opposite, since balancing selection maintains trait-affecting variation for longer than drift alone would, so older alleles should account for more of the heritability.

These observations constitute pervasive evidence that the genetic variation in complex traits has been shaped by negative selection, and they provide no evidence that it has been shaped by balancing selection. This conclusion is backed by formal tests for negative and balancing selection, which aggregate evidence across significantly trait-associated variants identified in genome-wide association studies. These tests reveal that traits of interest to evolutionary psychologists show significant evidence of having been shaped by negative selection and, if anything, significant evidence in the direction opposite to that predicted by balancing selection.

Overall, these findings mean we should not reach for balancing selection as an explanation of individual differences, as has been very common in the evolutionary social sciences. Balancing selection has been argued to have maintained a plethora of individual differences including promiscuous and monogamous individuals, cheaters and cooperators, progressives and conservatives, risk takers and hesitators, long-term planners and short-term opportunists, and aggressive hawks and peaceful doves. Indeed, various authors have argued that variation in personality traits in general is maintained by balancing selection. Genomics findings suggest that such explanations are highly unlikely.

The findings also have implications for psychological life history theory, insofar as proponents have argued that genetic covariation among traits is aligned along a fast-slow life history dimension due to balancing selection. If balancing selection has not shaped genetic (co)variation in traits, as the evidence suggests, then this claim does not get off the ground. In the paper I also discuss implications for dimensional theories of personality variation. In short, I argue that if personality variation is the result of a mess of countless genetic variants across the whole genome, many of which are rare in the population or even private to the individual, variation in personality probably does not have a simple dimensional structure (e.g. the Big Five). Rather, individuals probably vary in every way possible. The Big Five may just reflect the dimensions of variation that matter to perceivers. We are most interested in a relatively narrow segment of all the ways people vary – we have words for (and make personality questionnaire items primarily about) the Big Five personality factors because these are relevant to our social and self-perceptions. But we don’t have words relating to blink rate, for example, even though it is a behaviour that is socially visible (though usually unnoticed) and varies widely between individuals. The same applies to countless other ways individuals vary that are not socially relevant or important, so these do not make it into our personality models.

In all, the wealth of recent genomic findings gives strong insights into the history of selection on the traits we are interested in as evolutionary psychologists, as well as pointing to surprising new ways of interpreting individual differences.

Read the original article here: Zietsch, B.P. (2024). Genomic findings and their implications for the evolutionary social sciences. Evolution & Human Behavior, 45(4), 106596.