Thursday, December 31, 2009

Review of 2009

Before I make my predictions for 2010, let me review the ones I made for 2009:

The Neanderthal genome will be fully sequenced. There will be no evidence of interbreeding with modern humans (although proponents of the multiregional model will remain unconvinced). By comparing this genome with ours, we may reconstruct the genome of archaic humans who lived almost a million years ago and who were ancestral to Neanderthals and modern humans.

The Neanderthal genome was not fully sequenced this year. Because much of the reconstructed DNA turned out to be contamination from modern humans, the researchers had to start over. So far, there is no sign of Neanderthal admixture in the modern European genome, including the European variant of the microcephalin gene (which some had thought to have been of Neanderthal origin).

Meanwhile, work will begin on sequencing the genome of early modern humans (10,000 – 40,000 years ago). This project should ultimately prove to be more interesting by showing us how much modern humans have evolved during their relatively short existence. We will probably find out that John Hawks erred on the low side in concluding that natural selection had changed 7% of the human genome over the past 40,000 years.

The work has begun, at least for European populations, but it’s raising more questions than it answers. The big question is the fate of the hunter-gatherers who inhabited Upper Paleolithic and Mesolithic Europe. Were they ancestral to present-day Europeans? Or were they replaced by a farming population from the Middle East? To date, the data indicate little genetic continuity between late hunter-gatherers and early farmers. On the other hand, the same data show little continuity between early farmers and present-day Europeans.

Perhaps founder effects are muddling the picture. Whenever some hunter-gatherers adopted agriculture, they and their descendants probably expanded considerably at the expense of neighboring groups. Early farmers would thus come from a very unrepresentative sample of late hunter-gatherers. The transition may have been just as bumpy between early farmers and later populations. Thus, the lack of genetic continuity could be more apparent than real.

In any case, paleo-geneticists should be looking not only at junk DNA but also at genes that have real-world effects. When did ancestral Europeans come to look the way they do now? Was it a gradual process that began when modern humans arrived in Europe some 35,000 years ago? Or did it happen rapidly?

We should also examine genes that regulate growth of brain tissue (like the aforementioned microcephalin variant). How intelligent were Europeans 30,000 years ago, 15,000 years ago, 5,000 years ago?

With the 150th anniversary of The Origin of Species, much will appear in 2009 about Charles Darwin and his life. We already know how he came up with his theory of evolution (Darwin salted away almost everything he wrote), although a few questions remain unanswered. What would he have done if he had lived longer? What did he have in mind for future projects?

Nothing really new has turned up. In 2009, historians questioned the view that Darwin deliberately held back his theory for two decades out of fear of a religious backlash. In fact, he was more afraid of not being taken seriously. He wished to build up his academic reputation before tackling the issue of evolution:

To use the modern jargon, he had to have the papers on the table to prove his worth. … Evolutionary ideas had emerged sensationally in the anonymously published Vestiges of the Natural History of Creation (1844), which could be pooh-poohed by the orthodox scientists of the day as a mishmash of half-baked ideas from the pen of a pseudo-scientist (it was subsequently revealed as the work of the antiquarian and publisher Robert Chambers). Darwin would not want to be taken lightly or dismissed as a mountebank: he was evidently a realist when it came to his career. (Fortey, 2009)

The Second Great Depression will not begin in 2009. In any case, what scares me is not the prospect of a sudden drop in the standard of living. Rather, it’s that of a gradual decline to almost half its current value. That scenario is scarier and likelier. And it’s probably already started. For the past fifteen years, median wages have stagnated despite decent economic growth. What will happen when growth stays in the 0-2% range?

The current recession is drawing to a close, but this should be no cause for self-congratulation. The coming decade will likely bring us stagflation, i.e., stagnant or declining incomes combined with rising prices, especially for the basics of life (food, housing, oil, and most other commodities).

The cause? Globalization. The playing field is being leveled, and we’re competing more and more with the rest of the world for the basics of life. With free circulation of capital, goods, and labor—and that seems to be where our elites wish to go—incomes everywhere will tend to gravitate to the same level. There will still be income disparity within each country, perhaps even more (as in Latin America), but the average income level will be similar around the world, except for those countries that opt out of the globalist project.

Things might not be so bad for us if the world’s resources could be increased indefinitely. Our incomes could then stagnate while everyone else catches up (unlikely, though, since much of the world lacks the necessary social capital and political stability). In any case, even that scenario seems unrealistic. Demand is starting to outstrip supply for a number of key commodities, and not just oil. This is what the Club of Rome predicted back in 1972 in its report The Limits to Growth. Unfortunately, this pessimism seemed to be proven wrong when commodity prices fell into a long-term slump after the 1982 recession.

I suspect that warnings about peak oil are going unheeded now because similar warnings were made over thirty years ago. Just more of the same doom and gloom. Or so many think.


Endersby, J. (2009). Creative designs? How Darwin's Origin caused the Victorian crisis of faith, and other myths, TLS 7.3.16.

Fortey, R.A. (2009). How Darwin evolved, TLS 6.1.1.

Wednesday, December 16, 2009

The Montreal massacre. Part II

In debating the causes of the Montreal Massacre, we must confront the psychological similarities between Marc Lépine and his father. Both seem to have had low thresholds for ideation and expression of violence. Was this quick temper passed down from father to son? Or are the similarities only fortuitous?

If these psychological characteristics were passed down from father to son, the transmission could have happened in only one of two ways. The preferred explanation is some kind of role-model conditioning, i.e., Marc Lépine looked to his father as a model for future behavior. But this seems unlikely. Marc hated his father. He repeatedly said so and reproached his mother for not doing enough to protect him from his father’s rage. Nor did he try to renew contact with his father after his parents broke up.

We’re thus left with some kind of unconscious conditioning that occurred before the age of 7 (when his parents broke up) and then began to express itself after puberty. But this too seems unlikely. For one thing, it implies a kind of infancy determinism that is no longer widely accepted in child psychology (Harris, 1998; Kagan, 1996; Kagan, 1998). As Jerome Kagan (1996) points out:

If orphans who spent their first years in a Nazi concentration camp can become productive adults (Moskovitz, 1983) and if young children made homeless by war can learn adaptive strategies after being adopted by nurturing families (Rathbun, DeVirgilio, & Waldfogel, 1958; Winick, Meyer, & Harris, 1975), then one can question the belief that the majority of insecurely attached 1-year-olds are at high risk for later psychological problems. Even the behavioral differences among animals in laboratory settings are not very stable from infancy to reproductive maturity: "The findings offer meager support for the idea that significant features of social interactions at maturity are fixed by experiences in early development" (Cairns & Hood, 1983, p. 353). This conclusion affirms a discovery, now more than 20 years old, that even the stereotyped, bizarre social behaviors of 6-month-old isolated macaques can be altered by placing them with younger female monkeys over a 26-week period (Suomi & Harlow, 1972). These facts are also in accord with data on humans. Werner and Smith (1982), who followed a large sample of children from infancy to early childhood, concluded, "As we watched these children grow from babyhood to adulthood, we could not help but respect the self-righting tendencies within them that produced normal development under all but the most persistently adverse circumstances" (p. 159).

For another thing, children behaviorally resemble their parents even if removed from parental influence shortly after birth. When adopted children are compared with their biological parents, we see moderate to high heritability in transmission of male aggressiveness (Baker et al., 2007; Barker et al., 2009; Rhee and Waldman, 2002). Furthermore, the non-genetic factors seem largely unrelated to parental influence, being either peer pressure outside the home or developmental accidents before and after birth (Harris, 1998).

Thus, to explain the psychological similarities between Marc Lépine and his father, the likeliest cause is genetic transmission. Either that or the similarities are just fortuitous.

The latter possibility should not be ruled out. When Monique Lépine sued for divorce, Rachid Gharbi strenuously denied her claims that he had abused their son. So perhaps we’re hearing just one side of the story. Remember, these proceedings happened before the liberalization of Canada’s divorce laws. Monique had to provide evidence of abuse to get her divorce.

To find out Rachid Gharbi’s side of the story, I turned to an article by an Algerian author on the “Impact of parental upbringing on child development.” The author’s first point is that Algerian upbringing is sex-specific. In boys, violent behavior is accepted and even encouraged:

In Algerian society for example, children are raised according to their sex. A boy usually receives an authoritarian and severe type of upbringing that will prepare him to become aware of the responsibilities that await him in adulthood, notably responsibility for his family and for the elderly. This is why a mother will allow her son to fight in the street and will scarcely be alarmed if the boy has a fall or if she sees a bruise. The boy of an Algerian family is accustomed from an early age to being hit hard without whimpering too much. People orient him more toward combat sports and group games in order to arm him with courage and endurance—virtues deemed to be manly. (Assous, 2005)

The purpose of this upbringing is not to suppress male aggressiveness but rather to channel it in the right direction: loyalty to the family and to tradition:

It is true that, in Arab and Muslim culture, parents are encouraged to discipline a child and to teach him obedience and submission by first using methods of communication and patience, but in the case of rebellion and especially of non-respect of Islamic laws it is recommended to use corporal punishment (e.g., a child who does not practice prayer is reprimanded as early as 10 years of age). (Assous, 2005)

Parental control is especially problematic after puberty:

Thus, during adolescence for example, the child will become more and more difficult to control. This behavioral disorder evidently pushes the parents to display more ill treatment in their authority and the schoolteachers to be more severe. (Assous, 2005)

The author presents an analysis of this corporal punishment, on the basis of cases brought to the notice of hospital authorities:

- it is directed much more at boys than at girls, by a ratio of almost 3 to 1

- it is inflicted (in order of importance) by schoolteachers, parents, neighbors, and other relatives

- it usually involves the use of blunt, non-cutting objects: a belt, a pipe, or a wire

- it is directed (in order of importance) at the head, arms or legs, belly, and chest

- the injuries (in order of importance) are multiple fractures, bruises, burns, scratches, and bites

The author goes on to note:

In Algerian society, even today, the absolute authority of parents over their children is seldom called into question by adults if it is exercised judiciously and without apparent adverse effects, and even though it often happens that more or less serious incidents cannot be avoided by parents in the grip of an intense anger that they cannot manage to control. (Assous, 2005)

Clearly, Rachid Gharbi and Monique Lépine had different notions of how young boys should be brought up. Monique came from a cultural background where corporal punishment is a last resort and usually takes the form of spanking. The preferred form of punishment is shaming: the boy is made to realize that he has done something wrong. At that point, his sense of shame will do the rest. If the boy has no sense of shame, he is considered to be abnormal, if not mentally ill.

In contrast, Algerian parents use shaming mainly to control girls. For boys, it seems to be at best a secondary or even tertiary means of control, the main ones being the threat and use of corporal punishment.

Why is child discipline so different in Algeria? The reason seems to be that violence is much more pervasive there. The average Algerian male is more ready and willing to use violence preemptively or in self-defense. Social peace is maintained largely by an implicit balance of terror: violence is deterred by the threat of retaliation—if not by the victim, then by a male relative. Evidently, the balance cannot always be maintained...

This aspect of Algerian life is described by Frantz Fanon in Les damnés de la terre:

It’s a fact, the magistrates will tell you, that four fifths of the cases heard involve assault and battery. The crime rate in Algeria is one of the highest in the world, they claim. There are no petty delinquents. When the Algerian, and this applies to all North Africans, puts himself on the wrong side of the law, he always goes to extremes. (Fanon, 2004, p. 222)

The act of violence itself shows less restraint and the precipitating causes seem banal:

Autopsies undeniably establish this fact: the killer gives the impression he wanted to kill an incalculable number of times given the equal deadliness of the wounds inflicted.

… Very often the magistrates and police officers are stunned by the motives for the murder: a gesture, an allusion, an ambiguous remark, a quarrel over the ownership of an olive tree or an animal that has strayed a few feet. The search for the cause, which is expected to justify and pin down the murder, in some cases a double or triple murder, turns up a hopelessly trivial motive. Hence the frequent impression that the community is hiding the real motives.
(Fanon, 2004, p. 222)

The reason for this state of affairs ultimately goes back to the recentness of central authority in Algeria. Before the French conquest in the 19th century, each family depended on its male members to defend its interests. There were law courts, but they had no power to enforce their rulings. It was up to the aggrieved party to do the enforcement.

This situation was not unique. In fact, it was typical of all human societies and remains so in many parts of the world today. It changed only with the rise of central authority and its monopoly on the use of violence. With this change, the State put an end to the worst sort of tyranny: the daily fear of being assaulted or even killed, not by a foreign invader but by someone in your own town or village. The violent male went from hero to zero.

Initially, people complied with the new order by changing their behavior within the limits of phenotypic plasticity. The result was a more peaceful society where violent males were less often imitated, celebrated, and accommodated. This shift in the mean phenotype then contributed to a slower but similar shift in the mean genotype, by creating an environment that favored the reproduction of certain individuals at the expense of others. There was thus Baldwinian selection for individuals less predisposed to violence and more predisposed to submissiveness.

This process is described by the historical economist Gregory Clark with respect to England. Once central authority had become established, male homicide fell steadily from the twelfth century to the early nineteenth. Meanwhile, there was a parallel decline in blood sports and other forms of exhibitionist violence (cock fighting, bear and bull baiting, public executions) that nonetheless remained legal throughout this period. Clark ascribes this behavioral change to the reproductive success of upper- and middle-class individuals who differed statistically in their predispositions from the much larger lower class, including predispositions to violence. Although initially a small minority in medieval England, such individuals grew in number and their descendants gradually replaced the lower class through downward mobility. By the nineteenth century, their lineages accounted for most of the English population (Clark, 2007, pp. 124-129, 182-183; Clark, 2009).
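A toy calculation can illustrate the dynamic Clark describes: a small subpopulation with a modest reproductive advantage can, through steady differential growth, come to account for most of a population within a couple dozen generations (roughly the span from the twelfth to the nineteenth century). The numbers below are purely hypothetical, chosen for illustration; they are not Clark's estimates.

```python
# Toy model of lineage replacement through differential reproduction.
# All figures (10% starting share, 12% per-generation advantage) are
# hypothetical, not taken from Clark (2007, 2009).

def minority_fraction(p0, advantage, generations):
    """Fraction of the population descended from the favored group
    after a given number of generations of differential growth."""
    w = 1.0 + advantage  # relative per-generation growth factor
    grown = p0 * w ** generations
    return grown / (grown + (1.0 - p0))

# The favored lineages pass 50% of the population within ~25 generations.
for t in (0, 10, 25):
    print(t, round(minority_fraction(0.10, 0.12, t), 2))
```

The point of the sketch is only that exponential compounding, not any dramatic event, suffices to turn a minority into a majority over the centuries Clark discusses.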


Assous, A. (2005). L’impact de l’éducation parentale sur le développement de l’enfant, Hawwa, 3(3), 354-369.

Baker, L.A., K.C. Jacobson, A. Raine, D.I. Lozano, and S. Bezdjian. (2007). Genetic and environmental bases of childhood antisocial behavior: a multi-informant twin study, Journal of Abnormal Psychology, 116, 219-235.

Barker, E.D., H. Larson, E. Viding, B. Maughan, F. Rijsdijk, N. Fontaine, and R. Plomin. (2009). Common genetic but specific environmental influences for aggressive and deceitful behaviors in preadolescent males, Journal of Psychopathology and Behavioral Assessment, 31, 299-308.

Clark, G. (2009). The indicted and the wealthy: surnames, reproductive success, genetic selection and social class in pre-industrial England.

Clark, G. (2007). A Farewell to Alms. A Brief Economic History of the World, Princeton University Press, Princeton and Oxford.

Fanon, F. (2004). The Wretched of the Earth, New York: Grove Press.

Harris, J. R. (1998). The nurture assumption: Why children turn out the way they do, Free Press.

Kagan, J. (1998). Three Seductive Ideas, Cambridge, Harvard University Press.

Kagan, J. (1996). Three pleasing ideas, American Psychologist, 51, 901-908.

Rhee, S.H., and I.D. Waldman. (2002). Genetic and environmental influences on antisocial behavior: A meta-analysis of twin and adoption studies, Psychological Bulletin, 128, 490-529.

Thursday, December 10, 2009

The Montreal massacre ... 20 years later

On December 6, 1989, a 25-year-old man walked into Montreal’s École polytechnique and murdered fourteen women. The event is still being debated … twenty years later.

We know the immediate cause. The murderer, Marc Lépine, felt that places like the École polytechnique were training women to take jobs that had been mainly held by men like himself. In April 1989 he had met with a university admissions officer and complained about how women were taking over the job market. Earlier still, he had spoken out to men about his dislike of feminists, career women, and women in traditionally male occupations such as the police, saying that women should remain at home and care for their families. This resentment may have been exacerbated by his inability to find a girlfriend. He was generally ill at ease around women, tending to boss them around and showing off his knowledge in front of them.

For many people, the debate stops there. Marc Lépine resented women, especially ‘feminists’, and this resentment led to the Montreal massacre. For others, such resentment does not in itself explain what happened. Lépine’s personal history points to a longstanding tendency toward asociality, short-temperedness, and ideation of violent behavior, particularly after he reached puberty in the late 1970s:

Late 1970s – When his sister made fun of him for not having a girlfriend, he fantasized about her death and made a mock grave for her.

September 1981 – He applied to join the Canadian Armed Forces as an officer cadet but was rejected during the interview process because he seemed antisocial and unable to accept authority.

1982-84 – At junior college, classmates saw him as nervous, hyperactive, and immature.

1987 – He lost his job at a hospital because of aggressive behavior, disrespect of superiors, and carelessness in his work. He was enraged at his dismissal, and at the time described a plan to go on a murderous rampage and then commit suicide. His friends said he was unpredictable and would fly into rages when frustrated.

Some people trace this behavioral pattern to his early childhood, specifically as the son of a Catholic French-Canadian mother, Monique Lépine, and a Muslim Algerian father, Rachid Liass Gharbi. The latter’s psychological profile looks very similar to Marc Lépine’s:

Gharbi was an authoritarian, possessive and jealous man, frequently violent towards his wife and his children. Gharbi had contempt for women and believed that they were only intended to serve men. He required his wife to act as his personal secretary, slapping her if she made any errors in typing, and forcing her to retype documents in spite of the cries of their toddler. He was also neglectful and abusive towards his children, particularly his son, and discouraged any tenderness, as he considered it spoiling. In 1970, following an incident in which Gharbi struck his son so hard that the marks on his face were visible a week later, his mother decided to leave. (Marc Lépine - Wikipedia)

His mother had divorced his father over the issue of abuse, which had extended to the children. Beaten by his father, Rachid Liass Gharbi, for such minor problems as singing too loudly or failing to greet him in the morning, Lépine had learned to fear him.

"He was a brutal man," Monique Lépine told the court, "who did not seem to have any control over his emotions... It was always a physical gesture, a violent gesture, and always right in the face." Monique's sister confirmed these details to the judge, although Gharbi protested that they were not true. Nevertheless, the judge awarded custody to Monique. Still, young Gamil was not free of the man until he was 7 years old, and the exposure for that long to Gharbi's temper and beliefs had a strong influence. The boy so hated him that when he was 13, he changed his name to Marc Lépine.
(Ramsland, 2004)

The hypothesis here is that Gharbi exerted a profound influence on his son’s future psychological development. Some right-wing bloggers go so far as to suggest that Marc Lépine himself became a Muslim, as evidenced by the beard he grew as a young man. This is unlikely for several reasons:

– He was baptized a Roman Catholic and received no religious instruction. His mother describes him as "a confirmed atheist all his life."

– He had no contact with his father past the age of 7.

– At the age of 14, he legally changed his name from Gamil Rodrigue Liass Gharbi to Marc Lépine. This was motivated partly by hatred of his father and partly by a desire to avoid being treated as an Arab at school.

– His suicide note contains no Islamic references. In fact, his use of several Latin expressions (Ad Patres, Casus Belli, Alea Jacta Est) suggests he still felt some connection with Roman Catholicism.

– As for his beard, he grew it to cover up his acne.

O.K., so Marc Lépine was not a crypto-Muslim. But maybe he unconsciously imitated his father’s behavior. We often hear this kind of argument at trials where a violent offender is shown to have had an equally violent father. The offender should thus be judged more leniently, given his poor role-model.

In Marc Lépine’s case, this kind of argument is at the limit of credibility. Remember, Gharbi was a hated parental figure who had left Lépine’s life when the boy was 7. More to the point, we see the same psychological similarity between parents and their children even when the children are taken away shortly after birth and put up for adoption:

… compared to genetic children, American adoptees have a higher overall risk of contact with mental health professionals, specifically for eating disorders, learning disabilities, personality disorders and attention deficit hyperactivity disorder … They also have lower achievement and more problems in school, abuse drugs and alcohol more, and fight with or lie to parents more than genetic children …

… Adoptees may be genetically predisposed to negative outcomes at higher rates than the general population. Genetic factors clearly contribute to alcohol and drug addiction, as well as to some mental disorders like attention deficit hyperactivity disorder and schizophrenia …. An association between nonviolent criminality has been found between European adoptees and their genetic parents … Furthermore, research with Swedish adoptees suggests 55-60% of their educational performance is explained by genetic factors, and that the number of years of school adoptees complete is significantly related to how many years their genetic mothers completed
(Gibson, 2009).

On the specific issue of male aggressiveness, we see moderate to high heritability when adopted children are compared with their biological parents. A heritability of 40% is suggested by a meta-analysis of 51 twin and adoption studies (Rhee & Waldman, 2002). A later twin study indicates a heritability of 96%, the subjects being 9-10 year-olds from diverse ethnic backgrounds (Baker et al., 2007). This higher figure is due to the closer ages of the subjects and the use of a panel of evaluators to rate each of them. According to the latest twin study, heritability is 40% when the twins have different evaluators and 69% when they have the same evaluator (Barker et al., 2009).
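As a rough sketch of where such figures come from, the classic Falconer method estimates heritability from the gap between identical-twin and fraternal-twin correlations. The correlation values below are hypothetical, chosen only to reproduce a 40% estimate like the one in the Rhee and Waldman meta-analysis; they are not taken from any of the cited studies.

```python
# Falconer's method for twin studies (a simplified sketch).

def falconer_heritability(r_mz, r_dz):
    """Broad heritability estimate: h2 = 2 * (r_MZ - r_DZ),
    where r_MZ and r_DZ are the phenotypic correlations between
    identical (monozygotic) and fraternal (dizygotic) twins."""
    return 2.0 * (r_mz - r_dz)

def shared_environment(r_mz, r_dz):
    """Shared-environment component: c2 = 2 * r_DZ - r_MZ."""
    return 2.0 * r_dz - r_mz

# Hypothetical twin correlations for an aggression measure
h2 = falconer_heritability(0.60, 0.40)
c2 = shared_environment(0.60, 0.40)
print(round(h2, 2), round(c2, 2))  # 0.4 0.2
```

The sketch also shows why evaluator effects matter: anything that inflates the identical-twin correlation relative to the fraternal one (such as the same rater judging both twins) pushes the heritability estimate upward.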

This is not to say that the Montreal massacre was genetically inevitable. If Lépine had found a girlfriend who would have put up with him, he would probably have become a man like his father, but nothing more serious. The tragedy of December 6, 1989 resulted from three interacting factors: 1) a latent predisposition to violence, probably in the form of low thresholds for ideation and expression of violent behavior; 2) lack of close friends, especially female friends; and 3) an enabling ideology.


Baker, L.A., K.C. Jacobson, A. Raine, D.I. Lozano, and S. Bezdjian. (2007). Genetic and environmental bases of childhood antisocial behavior: a multi-informant twin study, Journal of Abnormal Psychology, 116, 219-235.

Barker, E.D., H. Larson, E. Viding, B. Maughan, F. Rijsdijk, N. Fontaine, and R. Plomin. (2009). Common genetic but specific environmental influences for aggressive and deceitful behaviors in preadolescent males, Journal of Psychopathology and Behavioral Assessment, early view.

Gibson, K. (2009). Differential parental investment in families with both adopted and genetic children, Evolution and Human Behavior, 30, 184-189.

Lépine, Monique & H. Gagné. (2008). Aftermath, Viking.

Lépine, Marc. (1989). Lettre de Marc Lépine,

Marc Lépine - Wikipedia

Ramsland, K. (2004), Gendercide – The Montreal Massacre.

Report of Coroner’s Investigation

Rhee, S.H., and I.D. Waldman. (2002). Genetic and environmental influences on antisocial behavior: A meta-analysis of twin and adoption studies, Psychological Bulletin, 128, 490-529.

Thursday, December 3, 2009

Recent evolution of complex behavior

I once knew an African student who told me that his language had no words for “good” or “evil”. When the missionaries translated their materials into his language, they had to write “Jesus is beautiful” instead of “Jesus is good.”

This sort of semantic evolution has occurred in all human languages. People have expressed new concepts by recycling older ones. Typically, these older concepts refer to primary sensations that are largely hardwired, i.e., ‘beautiful’, ‘cold’, ‘hot’, and so on.

Interestingly, it looks as if some of this hardwiring is being reused when we imagine evolutionarily recent concepts. For instance, when we recognize someone as friendly, we use neural pathways that are associated with the recognition of warmth.

During the autumn of 2006, a series of volunteers arrived at Yale University's psychology building. Each was greeted in the lobby by a researcher, who accompanied them up to the fourth floor. In the elevator, the researcher casually asked the volunteer to hold the drink she was carrying while she noted down their name. The subjects did not know it, but the experiment began the moment they took the cup. Once in the lab, the 40 or so volunteers read a description of a fictitious person and then answered questions about the character. Those who had held an iced coffee, rather than a hot one, rated the imaginary figure as less warm and friendly, even though each volunteer had read the same description. Answers to other questions about the figure, such as whether the character appeared honest, were unaffected by the type of drink. (Williams & Bargh, 2008).

Similarly, we have developed the ability to form moral judgments by building upon a mental algorithm that serves to judge cleanliness:

In one recent study, Simone Schnall at the University of Plymouth, UK, and colleagues showed half their volunteers a neutral film and the other half the toilet scene from the film Trainspotting. (The uninitiated need only use their imagination here: the clip features what is described as the "worst toilet in Scotland".) Those who viewed the Trainspotting clip subsequently made more severe judgements about unethical acts such as cannibalism than volunteers who had viewed the neutral scene. Exposing subjects to a fart smell and placing them in a filthy room had a similar effect (Schnall et al., 2008).

Chen-Bo Zhong of the University of Toronto in Canada and Katie Liljenquist, now at Brigham Young University in Provo, Utah, asked volunteers to read a first-person account of either an ethical act or an act of sabotage. They then had to rate the desirability of various household objects, including soap, toothpaste, CD cases and chocolate bars. Those who had read the sabotage story showed a greater preference for the cleaning products than those who had not.

A self-perception of physical cleanliness thus seems linked to one of moral cleanliness. Moreover, as with Pontius Pilate, the mere act of washing your hands reduces feelings of moral responsibility.

… In another part of their study, Zhong's team asked volunteers to recall an unethical deed from their past. Under the guise of a health and safety precaution, he then gave half the subjects antiseptic wipes to clean their hands. The participants were then asked if they would take part in another experiment, this time to help out a desperate graduate student. Only 40 per cent of the subjects who had cleaned their hands volunteered, compared with almost three-quarters of those who hadn't. (Zhong & Liljenquist, 2006)

These studies suggest an answer to a question that has bothered me. How is it that complex and relatively recent social behaviors often have moderate to high heritabilities? Surely such hardwiring would have taken an impossibly long time to evolve? Just think of all the genes involved…

Well, the answer is surprisingly simple. Natural selection has simply altered a hardwired algorithm that already exists. There’s no need to build it all from scratch. You just jerry-rig what you already have.


Mangan’s Miscellany is back! In fact, he’s back at his old address and at a new one.


Giles, J. (2009). Icy stares and dirty minds: Hitch-hiking emotions, New Scientist, 2725, 15 September 2009.

Schnall, S., J. Haidt, G.L. Clore, & A.H. Jordan. (2008). Disgust as embodied moral judgment, Personality and Social Psychology Bulletin, 34, 1096-1109.

Williams, L.E. & J.A. Bargh. (2008). Experiencing physical warmth promotes interpersonal warmth, Science, 322, 606-607.

Zhong, C.B. & K. Liljenquist. (2006). Washing away your sins: Threatened morality and physical cleansing, Science, 313, 1451-1452.

Thursday, November 26, 2009

Does civilization select against intelligence?

"We know the brain has been evolving in human populations quite recently," said paleoanthropologist John Hawks at the University of Wisconsin at Madison.

Surprisingly, based on skull measurements, the human brain appears to have been shrinking over the last 5,000 or so years.

"When it comes to recent evolutionary changes, we currently maybe have the least specific details with regard the brain, but we do know from archaeological data that pretty much everywhere we can measure - Europe, China, South Africa, Australia - that brains have shrunk about 150 cubic centimeters, off a mean of about 1,350. That's roughly 10 percent," Hawks said.

"As to why is it shrinking, perhaps in big societies, as opposed to hunter-gatherer lifestyles, we can rely on other people for more things, can specialize our behavior to a greater extent, and maybe not need our brains as much," he added. (source)

It’s usually assumed that humans have steadily increased in intellectual capacity. But what if this trend reversed with the advent of civilization? As societies grow more complex, perhaps the average human has not had to know so much. He or she can ‘delegate’ tasks (not that such delegation is always voluntary). Perhaps civilization has made us dumber, not smarter.

Yes, the ‘great civilizations’ have made major contributions to the arts and sciences, typically through upper-class patronage of creative individuals—who would otherwise have had to worry about their next meal. The downside, however, is that this emancipation of creativity requires a much larger number of helots. The latter also specialize in their own way—in doing the grunt work that others consider beneath them.

In the ancient world, intellectual life—the debating, pondering, and creating of new ideas—was confined to a small powerless minority, too few in number to generate the critical mass that makes intellectual ferment possible. There were no conferences, no academic journals, and no scientific associations. For the most part, there were only isolated individuals who felt estranged from the world around them. The more renowned ones had disciples in their entourage. But that was it.

This situation contrasts with that of Western Europe and then North America from the 17th century to the 20th. That intellectual ferment was broad-based. It took place within a large swath of the population that could understand the ideas being generated, and that could argue the pros and cons. It was this democratization of intellectual activity that made the West so exceptional.

I’m increasingly convinced that extreme social stratification—i.e., the creation of a small class of intellectuals and a much larger helot class—is inimical to true scientific progress. The intellectuals are too few in number, and too dependent on the system, to make any real contribution.

Thursday, November 19, 2009

Skin bleaching

Since the mid-20th century, ‘skin bleaching’ has become more and more common among dark-skinned populations. It involves lightening skin color by means of topical preparations that contain hydroquinone, cortisone, or mercury. These products are effective, but prolonged use may damage the skin by making the epidermis thinner and by breaking down collagen fibers. Despite being condemned by the medical profession and increasingly restricted by governments, they can easily be obtained in various places: hair-stylist salons, subway stations, African public markets, etc.

Skin bleachers seem to be used the most in South Asia and its diaspora, followed by sub-Saharan Africa and its diaspora (West Indies, Brazil, United States, Western Europe, etc.), the Philippines, and elsewhere in Asia. The market is mainly young and female. Thus, the rate of use is 61.4% among Surinamese women of Indian origin less than 26 years old, as compared to 13.1% among young Surinamese women of other origins (Javanese, African, etc.) (Menke, 2002).

In Africa, the rate of use is 25% in Bamako, Mali, up to 52% in Dakar, Senegal, up to 35% in Pretoria, South Africa, and up to 77% in Lagos, Nigeria (Ntambwe, 2004). The practice has become so widespread that it has been nicknamed maquillage – ‘make-up’ (Ondongo, 1984). According to one African specialist, men encourage it by considering light-skinned women to be more attractive, intelligent, moral, desirable, and chaste. In contrast, dark-skinned women are said to look mean, evil, stupid, and untrustworthy (Ntambwe, 2004). This opinion is consistent with the results of a survey among Ghanaian women. Most of the respondents thought that men prefer light skin in a woman: “Sometimes if you really want to marry a particular man, you have to bleach” (Fokuo, 2009).

In Jamaica, users do not seem motivated by shame of their Black identity. Surveys show them having as much racial self-esteem as non-users. The motivation is more to make one’s face ‘cool’, to imitate one’s peers, to look pretty and attract a partner, and to feel good about oneself. There is also the influence of popular culture, such as Eurocentric beauty contests and singers who glorify women with light brown skin. In the dance-hall song Browning, Buju Banton says he loves his light-skinned girlfriend, his ‘browning’, more than his car, his motorbike, and his money. In Bleach On, Captain Barkey tells girls to keep on bleaching their skin (Charles, 2009).

Strangely, these products have become increasingly popular among South Asians, Africans, and West Indians over the past half-century, yet the same period has seen these peoples regain much of their cultural independence. In advertising, magazines, and TV serials, one now sees many more women from the local population than before.

Actually, it’s not so strange. Back when the local media recycled images of women from Western sources, the female audience had trouble identifying with them; there was a gap between the two. Because these images are now adapted to the local reality, they project a stronger normative influence on local women, who are keener to imitate them. These women, however, are still darker-skinned than the somatic norm being projected. This is especially so with Indian ‘Bollywood’ films but is also the case with serial dramas in Latin America and the Arab world.


Charles, C.A.D. (2009). Skin bleachers’ representations of skin color in Jamaica, Journal of Black Studies, 40, 153-170.

Charles, C.A.D. (2003). Skin bleaching, self-hate, and Black identity in Jamaica, Journal of Black Studies, 33, 711-728.

Fokuo, J. Konadu. (2009). The lighter side of marriage: Skin bleaching in post-colonial Ghana, Research Review NS, 25(1), 47-66.

Menke, J. (2002). Skin bleaching in multi-ethnic and multicolored societies. The case of Suriname, paper presented to the CSA Conference, Nassau, Bahamas, May 27 – June 1, 2002, Coping with Challenges, Contending with Change.

Ntambwe, M. (2004). 'Mirror mirror on the wall, who is the FAIREST of them all?' Science in Africa, March.

Ondongo, J. (1984). Noir ou blanc ? Le vécu du double dans la pratique du « maquillage » chez les Noirs, Nouvelle Revue d’Ethnopsychiatrie, 2, 37-65.

Thursday, November 12, 2009

Lévi-Strauss and gene-culture co-evolution

With the recent death of Claude Lévi-Strauss, there has been an outpouring of praise for his contributions to anthropology, notably the struggle for a more politically conscious anthropology and the shift from biological determinism to cultural determinism. This praise tells us more about the praisers than about Lévi-Strauss himself. In truth, he scarcely resembled the image presented in most of his obituaries, having denounced in his book Tristes tropiques the intrusion of a “utopian spirit” into his field.

Sheltered by a legalistic and formalistic rationalism, we similarly build an image of the world and society where all problems can be settled by a courtroom approach whose logic is artful maneuvering, and we do not realize that the universe is no longer composed of what we are talking about.

Nor was he a complete cultural determinist. Like many thinkers of his generation, he felt that culture has contributed just as much as biology to differences among human populations. This is not, however, the same as believing that biology has created only skin-deep differences. He made this clear in a speech at our university in 1979:

… I would not feel truly anthropologist or structuralist if I did not accept that all questions should be discussed, and the question of the respective share of nature and nurture in human culture seems to me one of the most important ones we can and ought to ask ourselves. This issue has been made sterile for years and years by the false categorizations of physical anthropology related to the belief in the existence of human races.

However, we must not forget that, as anthropologists, the aspects of the question that will always appeal to us will be much less the genetic determination of culture or cultures than the cultural determination of genetics. By this I mean that a culture always will be made much less by its members’ gene pool than it will contribute to shaping and altering this gene pool.

The selection pressure of culture—the fact that it favors certain types of individuals rather than others through its forms of organization, its ideas of morality, and its aesthetic values—can do infinitely more to alter a gene pool than the gene pool can do to shape a culture, all the more so because a culture’s rate of change can certainly be much faster than the phenomena of genetic drift. (Lévi-Strauss, 1979, p. 24-25)

He is clearly referring here to the concept of gene-culture co-evolution. But just what are these genetic traits that cultures have shaped differently in different human populations? He doesn’t seem to mean minor physiological processes, like an improved ability to digest milk or carbohydrates. In fact, he seems to be referring to mental and behavioral traits, especially when he mentions ‘ideas of morality’. Is he saying that there has been selection for differences in moral capacity among human populations?

And if cultures have shaped different gene pools differently, wouldn’t these gene pools be ‘races’? Did Lévi-Strauss think through this line of thought? Perhaps in denying the race concept he was simply making the kind of ritual denunciation that most anthropologists make … and only half-believe.

It is probably too late to find out what he really meant. This is not a line of thought that he seems to have pursued in his other publications, at least none I am aware of.


Lévi-Strauss, C. (1985). Claude Lévi-Strauss à l’université Laval, Québec (septembre 1979), prepared by Yvan Simonis, Documents de recherche no. 4, Laboratoire de recherches anthropologiques, Département d’anthropologie, Faculté des Sciences sociales, Université Laval.

Lévi-Strauss, C. (1955). Tristes tropiques, Paris: Plon.

Thursday, November 5, 2009

Was Roman Britain multiracial?

Historians often assume that the Romans changed Britain politically but not demographically. The indigenous elites adopted Roman culture while the mass of the population remained Celtic. When the Anglo-Saxons arrived in the fifth century, much of this population fled to Wales and Cornwall, where they would retain their language and traditions. Meanwhile, those who remained behind were obliterated through a process of ethnic cleansing and coerced assimilation.

This historical account may be false. First, the Roman occupation seems to have brought profound demographic change. This has been suspected for some time on the basis of unusual burial objects and epigraphic inscriptions that record the presence of individuals from throughout the Roman Empire. Now, after analyzing remains from two burial grounds near Roman York, a research team has concluded that the buried individuals had diverse geographic origins (Leach et al., 2009). In particular, the craniometric data revealed many individuals of sub-Saharan or Egyptian origin. At the ‘Trentholme Drive’ burial ground, 66% clustered most closely with Europeans, 23% with sub-Saharan Africans, and 11% with Egyptians. At the ‘Railway’ burial ground, the proportions were 53% European, 32% sub-Saharan, and 15% Egyptian.

York was a legionary fortress, so these individuals may have been legionaries. There are, in fact, epigraphic references to African soldiers and even a written account of one in a history of the Emperor Septimius Severus (146-211 AD) (Scriptores Historiae Augustae, p. 425).

On another occasion, when he was returning to his nearest quarters from an inspection of the wall at Luguvallium (Carlisle) in Britain, at a time when he had not only proved victorious but had concluded a perpetual peace, just as he was wondering what omen would present itself, an Ethiopian soldier, who was famous among buffoons and always a notable jester, met him with a garland of cypress-boughs. And when Severus in a rage ordered that the man be removed from his sight, the Ethiopian by way of jest cried, it is said, “You have been all things, now, O conqueror, be a god.”

Why were these Africans so far from home? In the case of the Egyptians, Rome thought it unwise to station soldiers among people of the same ethnic background. The temptation would be strong to side with the locals if a rebellion occurred. In the case of the sub-Saharan Africans, they were recruited into the army for the same reason that Germanic barbarians were recruited: Rome could not meet its manpower requirements solely from within its empire. There was also a perception that the Romans had become soft and that barbarians made better soldiers.

Finally, Rome, like many multi-national empires, had a policy of moving people around in order to promote a common identity and to eliminate ethnic distinctiveness. The Assyrians had perfected this policy with their mass deportations of conquered peoples, such as the Israelites, and their replacement by other peoples. The Roman authorities used their army to this end. They wished to create an atomized society where regionalism or ethnicity could not mobilize resistance to imperial rule.

It is likely that these legionaries had a major demographic impact wherever they were stationed, especially if we include the many officials, petty functionaries, traders, and others who came in their wake. Much of Roman Britain thus seems to have been Romanized in culture and multiethnic in origin.

This, in turn, calls for a few other reinterpretations. Wales and Cornwall are not Celtic-speaking today because they took in Romano-British refugees fleeing Anglo-Saxon invaders. They were simply those parts of Britain that had remained Celtic in language, culture, and population. The rest—present-day England—had long since become heavily Romanized and cosmopolitan.

Nor do we have to postulate a process of ethnic cleansing and coerced assimilation to explain the extinction of Roman Britain in the 5th and 6th centuries. As Seccombe (1992) points out, the Roman Empire suffered from negative population growth. Not enough people married and had children to offset relatively high mortality among infants and young adults. In breaking down local collective identities, whether ethnic or regional, the Empire had created an atomized and increasingly anonymous society without the carrots and sticks that tightly knit societies use to push individuals down the path of family formation.

Once Rome had pulled its troops out of Britain in the early 5th century, there was no longer an inflow of people to offset the demographic deficit. The local population fell into decline, and the decline accelerated in the 6th century when plagues killed three out of every ten people. The Romano-British needed no help from the Anglo-Saxons to die out. They did it largely on their own.


Leach, S., M. Lewis, C. Chenery, G. Müldner, & H. Eckardt. (2009). Migration and diversity in Roman Britain: A multidisciplinary approach to the identification of immigrants in Roman York, England, American Journal of Physical Anthropology, 140, 546-561.

Scriptores Historiae Augustae, Septimius Severus, 22:4-6, transl. D. Magie (1922-1932), Vol. 1, London: Heinemann.

Seccombe, W. (1992). A Millennium of Family Change. London: Verso.

Thursday, October 29, 2009

Are we all Middle Easterners now?

Dienekes is arguing that Middle Eastern farmers demographically replaced Europe’s original population between 8,000 and 3,000 years ago. In his view, this argument is proven by two recent papers that show no genetic continuity between Europe’s late hunter-gatherers and early farmers. The continent’s current gene pool thus seems to owe very little to the original Upper Paleolithic and Mesolithic inhabitants. So goes his argument.

This argument raises one obvious problem. It implies that the physical characteristics of Europeans, especially northern Europeans, arose recently and over a short time.

How short? As late as 7500 years ago, hunter-fisher-gatherers still inhabited Europe above a line running from the Netherlands to the Black Sea. The line then gradually moved north, reaching northern Germany about 5500 BP and the eastern and northern agricultural areas of Scandinavia around 4300 BP. This leaves very little time for the evolution of the northern European phenotype, i.e., lightening of the skin to pinkish-white and diversification of hair and eye color into a wide range of hues. This phenotype is attested by historical records going back over two thousand years, so we’re left with a time window of less than five thousand years.

Is that enough time for so much phenotypic change? Perhaps, but the selection pressures would have to be very strong.

Let’s turn to the first of the two papers. Bramanti et al. (2009) compared mtDNA sequences from late hunter-gatherers and early farmers who had lived in northern and central Europe (Lithuania, Poland, Russia, Germany). There was no evidence of genetic continuity between the two populations.

But this paper raises several other points:

1. Modern Europeans are almost as distant genetically from the early farmers as they are from the late hunter-gatherers. To be ancestral to modern Europeans, these farmers and their descendants would need a very low female population size (less than 3,000 individuals). As the authors admit, this figure is well below current archaeological estimates (124,000 individuals).

2. The sample sizes are very small for the early farmers (25 individuals) and the late hunter-gatherers (20 individuals).

3. The sample of late hunter-gatherers covers a much longer time frame (15,400 – 4300 BP) than does the sample of early farmers (7650 - 7400 BP).

In sum, the authors have tried to describe the gene pool of late European hunter-gatherers with data from 20 individuals spread over four countries and over some 11,000 years.

Can such a sample be representative? Doubtful. Besides the smallness of the sample, the late hunter-gatherers were not a homogeneous population. By their time, Europe had completely changed ecologically. Open tundra had given way to forest and it was no longer possible to hunt wandering herds of reindeer. Hunter-gatherers now lived in smaller and more localized groups. Each group would have had its own genetic profile as a result of genetic drift and founder effects.

Even if these 20 individuals fairly represented late hunter-gatherers, the genetic continuity hypothesis is not disproved by genetic differences between them and early farmers. Undoubtedly, some hunter-gatherers adopted farming earlier than others and thus contributed more to the early farmer gene pool. Others never adopted farming and thus contributed nothing. Founder effects would have been considerable.

There are thus two serious problems with Bramanti et al. (2009):

1. The sample of late hunter-gatherers is too small and too scattered over space and time to be representative of the late hunter-gatherer gene pool;

2. The genetic continuity hypothesis does not assume that the early farmer gene pool was a representative cross-section of the late hunter-gatherer gene pool.

Let’s turn to the other paper. Malmström et al. (2009) retrieved mtDNA from 19 late hunter-gatherers and 3 early farmers who lived in southern Scandinavia. The late hunter-gatherers show no genetic continuity with the early farmers or with modern Scandinavians but they do show genetic continuity with modern Baltic populations (i.e., Latvians). This seems consistent with archaeological evidence that the eastern Baltic was a refugium for Europe’s last hunter-gatherers. Indeed, the inland boundaries of Latvia, Lithuania, and Old Prussia may hark back to a time when these people fished and sealed from coastal stations part of the year and then moved some distance inland to hunt game the rest of the year.

This study has the merit of being more narrowly focused in time and space. Like the other study, however, it suffers from very small sample sizes and the likelihood of founder effects. In fact, the early farmer sample is so small that genetic continuity with modern Scandinavians is unsure.

What now?

The challenge now will be to enlarge this sample of late hunter-gatherers. By ‘enlarge’, I don’t simply mean a larger sample. I also mean a larger number of geographic locations to be sampled. Late hunter-gatherers were a heterogeneous bunch. Some contributed a lot to the future gene pool. Others went extinct.

The ‘losers’ were small inland hunting bands with low population densities. They were less able to integrate agriculture into their nomadic way of life and also more likely to retreat in the face of much larger farming communities.

The ‘winners’ were semi-sedentary coastal groups with relatively high population densities. Because such groups depended more on fishing and sealing than on hunting and gathering, they could more readily integrate farming into their lifestyle, if only as a secondary subsistence activity. They were also more numerous and likelier to withstand encroachment by farming communities.


Bramanti, B., M.G. Thomas, W. Haak, M. Unterlaender, P. Jores, K. Tambets, I. Antanaitis-Jacobs, M.N. Haidle, R. Jankauskas, C.-J. Kind, F. Lueth, T. Terberger, J. Hiller, S. Matsumura, P. Forster, & J. Burger. (2009). Genetic discontinuity between local hunter-gatherers and Central Europe’s first farmers, Science, 326, 137-140.

Malmström, H., M.T.P. Gilbert, M.G. Thomas, M. Brandström, J. Storå, P. Molnar, P.K. Andersen, C. Bendixen, G. Holmlund, A. Götherström, & E. Willerslev (2009). Ancient DNA Reveals Lack of Continuity between Neolithic Hunter-Gatherers and Contemporary Scandinavians, Current Biology, doi:10.1016/j.cub.2009.09.017

Thursday, October 22, 2009

Face and gender recognition

It is told that an elder came to Scete with his son who was not yet weaned. The boy was raised in the monastery and did not know there were women. When he became a man, the demons represented images of women to him. He was astonished and informed his father. Now one day the two of them went to Egypt and, seeing some women, the young man told his father, “Father, those are the ones who would come and see me at Scete during the night!”

Sayings of the Fathers (Apophthegmata Patrum) 5th century (Regnault, 1966, p.73)

We seem to be born with the ability to recognize the human face. Even infants as young as 1 month old show a consistent, spontaneous preference for face-like stimuli over nonface-like patterns. Such recognition seems guided by an inborn representation of the main facial features, particularly the eyes and the mouth (Pascalis & Kelly, 2008). Brain-damaged subjects provide further evidence of a mental module that specifically processes facial images:

Associative visual agnosia does not always seem to affect the recognition of all types of stimuli equally. The selectivity in some cases of agnosia lends support to the view that there are specialized systems for recognizing particular types of stimuli. The best known example of this is prosopagnosia, the inability to recognize faces after brain damage. Prosopagnosics cannot recognize familiar people by their faces alone, and must rely on other cues for recognition such as a person’s voice, or distinctive clothing or hairstyles. The disorder can be so severe that even close friends and family members will not be recognized. Although many prosopagnosics have some degree of difficulty recognizing objects other than faces, in some cases the deficit appears strikingly selective for faces. (Farah, 1996)

If this mental representation is inborn, does it come in two forms, one for a female face and another for a male face? Or is it gender-neutral? By studying visual adaptation to facial images, Little et al. (2005) concluded that different neural populations process male and female faces. This difference seems to exist at the level of higher-level neurons that code for the entire face, rather than for specific characteristics (Bestelmeyer et al., 2008). These findings were partially replicated by Jaquet (2007), who found evidence for both common and sex-selective neurons.

Ramsey-Rennels and Langlois (2006) reviewed the literature on male and female face recognition by infants:

First, 3- to 4-month-olds have more difficulty discriminating among male faces and subsequently recognizing them than they do female faces (Quinn et al., 2002). Second, older infants are more skilled at categorizing female faces than they are at categorizing male faces: Whereas 10-month-olds easily recognize that a sex-ambiguous female face does not belong with a group of sex-typical female faces, they have more difficulty excluding a sex-ambiguous male face from a group of sex-typical male faces (data interpretation of Younger & Fearing, 1999, by Ramsey et al., 2005). In addition, there is a lag between when infants recognize that female voices are associated with female faces and when male voices are associated with male faces; infants reliably match female faces and voices at 9 months (Poulin-Dubois, Serbin, Kenyon, & Derbyshire, 1994) but do not reliably match male faces and voices until 18 months. Even at 18 months, infants are more accurate at matching female faces and voices than they are at matching male faces and voices (Poulin-Dubois, Serbin, & Derbyshire, 1998).

This evidence could be interpreted in two ways: a) infants better recognize female faces because they have more experience with mothers than with fathers; or b) female face recognition develops earlier than male face recognition because humans have evolved to recognize a female caregiver at an early age. To date, there has been no attempt to replicate the above findings with mother-absent/father-present infants. Quinn et al. (2002) found that such infants show a weak preference for male faces (59%), but there is no indication that they are better at recognizing male faces than female ones.


Bestelmeyer, P.E.G., B.C. Jones, L.M. DeBruine, A.C. Little, D.I. Perrett, A. Schneider, L.L.M. Welling, & C.A. Conway. (2008). Sex-contingent face aftereffects depend on perceptual category rather than structural encoding, Cognition, 107, 353-365.

Duchaine, B.C., G. Yovel, E.J. Butterworth, & K. Nakayama. (2006). Prosopagnosia as an impairment to face-specific mechanisms: Elimination of the alternative hypotheses in a developmental case, Cognitive Neuropsychology.

Farah, M.J. (1996). Is face recognition ‘special’? Evidence from neuropsychology, Behavioural Brain Research, 76, 181-189.

Jaquet, E. (2007). Perceptual aftereffects reveal dissociable adaptive coding of faces of different races and sexes, PhD thesis, School of Psychology, University of Western Australia.

Little, A.C., L.M. DeBruine, & B.C. Jones. (2005). Sex-contingent face aftereffects suggest distinct neural populations code male and female faces, Proceedings of the Royal Society of London, Series B, 272, 2283-2287.

Pascalis, O., & D.J. Kelly. (2008). Face processing, in M. Haith & J. Benson (eds.) Encyclopedia of Infant and Early Childhood Development, pp. 471-478, Elsevier.

Quinn, P.C., Yahr, J., Kuhn, A., Slater, A.M., & Pascalis, O. (2002). Representation of the gender of human faces by infants: A preference for female. Perception, 31, 1109–1121.

Ramsey-Rennels, J.L., & J.H. Langlois. (2006). Infants’ differential processing of female and male faces, Current Directions in Psychological Science, 15, 59-62.

Regnault, D.L. (1966). Les sentences des pères du désert. Les Apophtegmes des pères. Sarthe: Abbaye Saint-Pierre de Solesmes.

Thursday, October 15, 2009

Sexual selection and ancestral Europeans

I have argued that sexual selection has varied within our species in both intensity and direction (men selecting women or women selecting men) (Frost, 2006; Frost, 2008). In particular, it seems to have varied along a north-south gradient with men being more strongly selected in the tropical zone and women in the temperate and arctic zones. Women appear to have been most strongly selected among humans inhabiting ‘continental steppe-tundra’. This kind of environment creates the highest ratio of females to males among individuals willing to mate—by making it too costly for men to provision additional wives and by greatly raising male mortality over female mortality through long hunting distances.

Today, tundra is generally limited to discontinuous patches of land: arctic islands and coastlines, alpine areas above the tree line, etc. Yet it is only when tundra covers large land areas that it can support large herds of migrating herbivores. Such herds can in turn support a relatively large human population, but at the cost of high male mortality—because the men have to cover long distances to seek out and follow the wandering herds.

As late as 10,000 years ago, continental steppe-tundra covered an extensive land mass, particularly in Eurasia. It was thus one of the main adaptive landscapes of modern humans during their evolution outside Africa. In particular, it might explain the unusual physical appearance of Europeans, i.e., their feminized face shape and their complex of highly visible color traits (diverse palette of hair and eye colors, depigmentation of skin color to pinkish-white).

At this point, people ask: “But why would this sexual selection play out only in ice-age Europe? What about northern Asia? There must have been lots of steppe-tundra there as well.”

There was, but it lay much further north than in Europe and was less hospitable to humans. It was all the more inhospitable because it stretched further into the heart of Eurasia and away from the warming and moistening influence of the Atlantic. Thus, the Asian steppe-tundra never supported as many humans as did the European steppe-tundra. Indeed, it seems to have been devoid of human life at the height of the last ice age (Goebel, 1999, pp. 218, 222-223).

On a map of ice-age Eurasia, the steppe-tundra belt would look like a large blotch covering the plains of northern and eastern Europe plus a narrower strip running farther north across Asia. By a geographic accident—a large mass of ice covering Scandinavia—it had been pushed much further south in Europe than elsewhere. This was where the steppe-tundra could support substantial and continuous human settlement.

When making this argument, I usually stress the word ‘continuous.’ But the word ‘substantial’ is probably more important. The larger the population, the greater the chance that interesting variants will appear through mutation:

Small populations have limited variability at any one time and low absolute incidence of mutation, and they may be subject to genetic drift. They are also likely to be narrowly localized and so more subject to rapid extinction by a regional catastrophe. … Other things being equal, the larger the population the more potential variability, at least, it is likely to have and the larger its absolute rate of mutation will be. (Simpson, 1953, p. 297)


Frost, P. (2008). Sexual selection and human geographic variation, Special Issue: Proceedings of the 2nd Annual Meeting of the NorthEastern Evolutionary Psychology Society. Journal of Social, Evolutionary, and Cultural Psychology, 2(4), pp. 169-191.

Frost, P. (2006). European hair and eye color - A case of frequency-dependent sexual selection? Evolution and Human Behavior, 27, 85-103.

Goebel, T. (1999). Pleistocene human colonization of Siberia and peopling of the Americas: An ecological approach. Evolutionary Anthropology, 8, 208‑227.

Simpson, G.G. (1953). The Major Features of Evolution, New York: Columbia University Press.

Thursday, October 8, 2009

Facial skin color and sexual selection

The human mind seems to use facial color to determine whether a person is male or female. A man has a relatively dark facial color that contrasts only weakly with his lip and eye color. Conversely, a woman has a relatively light facial color that contrasts sharply with her lip and eye color (Russell, 2003; Russell, 2009; Russell, in press).

This kind of sex-recognition algorithm has been a channel for sexual selection in many species. When selecting a mate, an animal tends to choose the individuals most easily recognizable as the opposite sex. Over many generations, such selection will cause the relevant sex-specific cues to be accentuated (Manning, 1972, pp. 47-49).

The degree of accentuation will depend on the intensity of sexual selection and on whether males have been selecting females or females selecting males. Among ancestral humans, sexual selection seems to have varied in both intensity and direction along a north-south gradient (Frost, 2006; Frost, 2008). In the tropical zone, women gathered food year-round, so a second wife would cost little in terms of food provisioning. With more men becoming polygynous, fewer women were left unmated. The pressure of sexual selection was thus on men, with women being the ones who could pick and choose mates.

This situation reversed outside the tropical zone. First, polygyny was costlier because women could not gather food in winter. Second, male mortality exceeded female mortality because men had to hunt over longer distances. Together, these two trends resulted in too few men competing for too many women. This was particularly so on continental steppe-tundra, where women had almost no opportunities for food gathering and where men had to hunt wandering herds of herbivores over long distances. The pressure of sexual selection was thus on women, with men being the ones who could pick and choose mates.

Sexual selection and lightening of skin color

If light skin is perceived as a sign of femininity, sexual selection of women should tend to lighten female skin. This kind of selection became possible once ancestral humans had left the tropical zone. On the one hand, there was less natural selection for dark skin as a barrier to intense UV radiation. On the other, as explained above, there was stronger sexual selection of women because they outnumbered men on the mate market. Women should thus be increasingly lighter-skinned than men with increasing distance from the tropical zone, this sex difference being greatest among those humans that once inhabited the large expanses of continental steppe-tundra in northern and eastern Europe. Since most skin pigmentation genes are not sex-linked, selection for lighter-skinned women would also lighten mean skin color (i.e., both males and females). Thus, mean skin color should likewise lighten along the same north-south gradient.
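The last step of this argument — that selecting lighter-skinned women drags the male mean along — can be sketched with a toy simulation (my own illustration, not from the cited papers; locus number, population size, and selection strength are arbitrary). Because the hypothetical pigmentation loci are autosomal, sons inherit their mothers' 'light' alleles just as daughters do, and both sex means decline even though only females are selected.

```python
# Toy simulation: truncation selection on female skin color at autosomal
# loci lightens BOTH sexes, since sons inherit the same alleles.
import random

random.seed(1)
N_LOCI = 10          # hypothetical additive pigmentation loci
POP = 500            # individuals per sex

def make_individual():
    # genotype: two alleles per locus; 1 = 'dark' allele, 0 = 'light'
    return [[random.randint(0, 1) for _ in range(2)] for _ in range(N_LOCI)]

def darkness(ind):
    return sum(sum(locus) for locus in ind)

def child(mother, father):
    # each parent transmits one random allele per locus
    return [[random.choice(m), random.choice(f)] for m, f in zip(mother, father)]

females = [make_individual() for _ in range(POP)]
males = [make_individual() for _ in range(POP)]

for generation in range(15):
    # sexual selection acts on females only: the lightest half become mothers
    mothers = sorted(females, key=darkness)[:POP // 2]
    fathers = males  # all males mate -- no direct selection on male color
    females = [child(random.choice(mothers), random.choice(fathers)) for _ in range(POP)]
    males = [child(random.choice(mothers), random.choice(fathers)) for _ in range(POP)]

mean_female = sum(darkness(f) for f in females) / POP
mean_male = sum(darkness(m) for m in males) / POP
# both means fall well below the starting value of ~10 'dark' alleles,
# and the two sexes track each other closely
print(round(mean_female, 1), round(mean_male, 1))
```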

How do these predictions stack up against reality? They accurately describe geographic variation in mean skin color (Frost, 2008). But they poorly describe geographic variation in female depigmentation relative to male skin color. In fact, female skin reflectance exceeds male skin reflectance the most among humans at medium latitudes with medium skin color (Frost, 2007; Madrigal & Kelly, 2006). This may be a ‘ceiling effect’. Northern and eastern Europeans are close to the physiological limit of skin depigmentation. Their women cannot be much whiter than the mean skin color because they have, so to speak, very little headroom left—the mean skin color is already scrunched up against the ceiling of maximum skin whiteness.

Sexual selection and increase in facial color contrast

There seems to be similar geographic variation in the contrast between facial color and lip/eye color. This contrast is weakest among tropical humans. It is strongest, however, not among northern/eastern Europeans but among East Asians (Russell, in press). This is largely because East Asians have dark eyes and relatively light facial skin. The contrast effect is even stronger if we factor in their jet-black hair, which further sets off the lightness of the female face. Nonetheless, facial color contrast is no more sexually dimorphic among East Asians than it is among Europeans (Russell, in press).

Why would Europeans score lower than East Asians on facial color contrast? It may be that sexual selection for dark eyes and dark hair relaxed among ancestral Europeans once their facial skin had lightened to the point of becoming pinkish-white. At that point, the color contrast was more than sufficient. This, in turn, may have allowed rare-color preference to generate sexual selection for diverse hair and eye colors. This process may have then acquired a dynamic of its own that competed with the older dynamic of facial color contrast. Alternatively, rare-color preference may have always been a weak selection pressure that manifests itself only under conditions of intense sexual selection (Frost, 2006; Frost, 2008).


In sum, if we examine geographic variation in skin color and in facial color contrast, the pattern is largely consistent with increasingly intense sexual selection of women along a north-south gradient. This selection would have been minimal among tropical humans and maximal among arctic humans, particularly those that once lived on continental steppe-tundra—where polygyny was constrained the most and where male mortality exceeded female mortality the most. There are, however, deviations from the expected pattern that may be due to ceiling effects and release of sexual selection for rare hair and eye colors.

Among ancestral Europeans, this sexual selection seems to have unfolded in stages. It likely began c. 30,000 BP with the first penetration of the steppe-tundra belt by modern humans (southwestern France). This initial stage would correspond to certain physical changes that are common to Europeans and East Asians. Stage I ended with the onset of the glacial maximum (c. 20,000 BP), which blocked East-West gene flow by merging the Fenno-Scandian and Ural icecaps and by forming large glacial lakes along the Ob (Rogers, 1986; Crawford et al., 1997). Stages II and III would correspond to later physical changes that are specific to Europeans.

Stage I – head hair lengthens, face shape feminizes, skin lightens (30,000–20,000 BP ?)
Stage II – skin lightens to pinkish-white (20,000–15,000 BP ?)
Stage III – hair and eye color diversifies (15,000–10,000 BP ?)


Crawford, M.H., Williams, J.T., & Duggirala, R. (1997). Genetic structure of the indigenous populations of Siberia. American Journal of Physical Anthropology, 104, 177-192.

Frost, P. (2008). Sexual selection and human geographic variation, Special Issue: Proceedings of the 2nd Annual Meeting of the NorthEastern Evolutionary Psychology Society. Journal of Social, Evolutionary, and Cultural Psychology, 2(4), pp. 169-191.

Frost, P. (2007). Comment on Human skin-color sexual dimorphism: A test of the sexual selection hypothesis, American Journal of Physical Anthropology, 133, 779-781.

Frost, P. (2006). European hair and eye color - A case of frequency-dependent sexual selection? Evolution and Human Behavior, 27, 85-103.

Madrigal, L., & W. Kelly. (2006). Human skin-color sexual dimorphism: A test of the sexual selection hypothesis, American Journal of Physical Anthropology, 132, 470-482.

Manning, A. (1972). An Introduction to Animal Behaviour, 2nd edition, London: Edward Arnold.

Rogers, R.A. (1986). Language, human subspeciation, and Ice Age barriers in Northern Siberia. Canadian Journal of Anthropology, 5, 11‑22.

Russell, R. (in press). Why cosmetics work. In Adams, R., Ambady, N., Nakayama, K., & Shimojo, S. (Eds.) The Science of Social Vision. New York: Oxford University Press.

Russell, R. (2009). A sex difference in facial contrast and its exaggeration by cosmetics, Perception, 38, 1211-1219.

Russell, R. (2003). Sex, beauty, and the relative luminance of facial features, Perception, 32, 1093-1107.

Thursday, October 1, 2009

Facial color and sex recognition

Upper left: average of 22 Caucasian female faces
Upper right: average of 22 Caucasian male faces
Lower left: white pixels are where the female average is lighter than the male average
Lower right: white pixels are where the male average is lighter than the female average

To a large degree, we do not learn to recognize whether a human face is male or female (Bestelmeyer et al., 2008; Little et al., 2005). This mental task is mainly performed by a hardwired algorithm that uses certain visual cues, one of them being facial color (Frost, 1994). Men are more reddish-brown in complexion because their skin has more melanin and hemoglobin (Edwards & Duntley, 1939). Women are paler and show greater contrast between the color of their face and that of their lips and eyes (Russell, 2009). This algorithm is used not only for visual recognition but also for tasks apparently related to sexual attraction and social dominance (Feinman & Gill, 1978; Ioan et al., 2007).

Richard Russell has investigated the way we use facial color to identify male and female human faces. In one experiment, he morphed together 22 photos of Caucasian female faces and then 22 photos of Caucasian male faces. The photographed subjects were clean-shaven and wore no make-up. As we can see from the above composites, the visually average face is noticeably lighter when it is female than when it is male. There is also greater contrast between facial color and lip/eye color on the female face than on the male one.

Russell (in press) argues that the human mind uses lip and eye color as a benchmark for visual processing of facial color:
If female skin is lighter than male skin, but female eyes and lips are not lighter than male eyes and lips, there should be greater luminance contrast surrounding female eyes and lips than male eyes and lips. This would be important, because the visual system is sensitive to contrast rather than to absolute luminance differences. Indeed, luminance contrast is the cue to which most neurons in the early visual cortex respond. Moreover, contrast internal to the face would be robust to changes in illumination. The black ink of this text under direct mid-day sun reflects more light than does the white page under dim indoor lighting, yet in both contexts the text appears black and the page appears white because the contrast between the two is constant. In the same way, a sex difference in contrast could be a particularly robust cue for sex classification. If there is a sex difference in contrast it would also mean that the femaleness of the face could be increased by lightening the skin or by darkening the eyes and lips—either change would increase the contrast. (Russell, in press)
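Russell's point that contrast, unlike absolute luminance, survives changes in lighting can be shown with a two-line calculation (my own illustration of the general principle, using invented luminance values): scaling every surface by the same illumination factor leaves the Michelson contrast between skin and lips unchanged.

```python
# Michelson contrast is invariant when illumination scales all luminances
# equally -- the cue Russell describes is robust to lighting conditions.
# Luminance values below are invented for illustration.

def michelson(l_light, l_dark):
    return (l_light - l_dark) / (l_light + l_dark)

skin, lips = 80.0, 40.0          # arbitrary luminance units, dim indoor light
print(michelson(skin, lips))     # 0.333...

bright = 50.0                    # mid-day sun: every surface 50x brighter
print(michelson(skin * bright, lips * bright))  # still 0.333... -- unchanged
```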

He goes on to argue that this sex difference in facial color is reflected in the development of women’s cosmetics.

The received style of cosmetics involves darkening the eyes and lips while leaving the rest of the face largely unchanged. This is one of two patterns of cosmetic application that could increase facial contrast (the other being to significantly lighten the entire face, except for the eyes and lips). (Russell, in press)

This same pattern has appeared in a wide range of culture areas (ancient Egypt, Mesopotamia, South Asia, East Asia, Mesoamerica), in some cases independently of influence from other culture areas.


Bestelmeyer, P.E.G., B.C. Jones, L.M. DeBruine, A.C. Little, D.I. Perrett, A. Schneider, L.L.M. Welling, & C.A. Conway. (2008). Sex-contingent face aftereffects depend on perceptual category rather than structural encoding, Cognition, 107, 353-365.

Edwards, E.A., & Duntley, S.Q. (1939). The pigments and color of living human skin. American Journal of Anatomy, 65, 1-33.

Feinman, S. & G.W. Gill. (1978). Sex differences in physical attractiveness preferences, Journal of Social Psychology, 105, 43-52.

Frost, P. (1994). Preference for darker faces in photographs at different phases of the menstrual cycle: Preliminary assessment of evidence for a hormonal relationship, Perceptual and Motor Skills, 79, 507-514.

Ioan, S., Sandulache, M., Avramescu, S., Ilie, A., & Neacsu, A. (2007). Red is a distractor for men in competition. Evolution and Human Behavior, 28, 285-293.

Little, A.C., L.M. DeBruine, & B.C. Jones. (2005). Sex-contingent face aftereffects suggest distinct neural populations code male and female faces, Proceedings of the Royal Society of London, Series B, 272, 2283-2287.

Russell, R. (in press). Why cosmetics work. In Adams, R., Ambady, N., Nakayama, K., & Shimojo, S. (Eds.) The Science of Social Vision. New York: Oxford University Press.

Russell, R. (2009). A sex difference in facial contrast and its exaggeration by cosmetics, Perception, 38, 1211-1219.

Russell, R. (2003). Sex, beauty, and the relative luminance of facial features, Perception, 32, 1093-1107.

Russell, R. & P. Sinha. (2007). Real-world face recognition: The importance of surface reflectance properties, Perception, 36, 1368-1374.

Russell, R., P. Sinha, I. Biederman, & M. Nederhouser. (2006). Is pigmentation important for face recognition? Evidence from contrast negation. Perception, 35, 749-759.

Thursday, September 24, 2009

Female face shape and sexual selection

Denise Liberton, an anthropologist at Pennsylvania State University, has been studying variation in human facial features. At an upcoming meeting of the American Society of Human Genetics, she’ll be presenting a comparative study of European and West African facial morphology. The main thrust of her presentation is that the shape of the face has differentiated among human populations in part through a selective force that acts primarily on women—and not on both sexes.

We found that several pairwise distances differed between the sexes. For example, the distance from the brow to nasal bridge was found to be more than 5% larger in females than males. We then tested for an interaction between sex and genetic ancestry by testing for differences in the slopes of the ancestry association between males and females. Although the pattern differed slightly between samples, after Bonferroni correction many correlations were found to be the same in both sexes. However, females in all three samples had many additional significant correlations that were not seen in males, while males had very few correlations that were not found in females. The results of these analyses suggest that selection on females is driving the differentiation in facial features among populations. (Liberton et al., 2009)

What is this selective force that acts mainly on female morphology and carries male morphology along in its wake? I suspect Denise Liberton has sexual selection in mind. If so, this finding would support Darwin’s belief that “the races of man differ from each other and from their nearest allies, in certain characters which are of no service to them in their daily habits of life, and which it is extremely probable would have been modified through sexual selection” (Darwin, 1936 [1888], p. 908).

Darwin was puzzled not only by the considerable physical differences separating humans from apes, but also by the considerable physical differences among human populations (Darwin, 1936 [1888], p. 530-531). He concluded that sexual selection was “the most efficient” cause of this differentiation (Darwin, 1936 [1888], p. 908).

Yet sexual selection usually acts on males in other mammals. The females are the ones who normally do the selecting. This is because they must take time out from the mate market for pregnancy, breastfeeding, and infant care. Meanwhile, the males never really leave the mate market, with the result that too many of them are competing for a limited number of available females.

This mammalian ‘law’ has influenced much writing about sexual selection in humans. According to Naomi Wolf, author of The Beauty Myth, "for women to compete with women through 'beauty' is a reversal of the way in which natural selection affects all other mammals" (Wolf, 1990, p. 3). She points to indigenous peoples in sub-Saharan Africa, Australia, and New Guinea as proof that the original human state was one of males vying for the attention of females.

I could cite other writers, but the gist of their argument is always the same. Authentic human nature is represented today by indigenous tropical peoples. They are what we were. Therefore, human nature is about polygynous males who devote little time and energy to raising their progeny and a lot to seducing the limited number of females. Women are thus the ones who have been sexually selecting.

In this kind of argument, we means ‘people of non-tropical origin,’ particularly those of European descent. Yet this argument is clearly false. We have not been ‘them’ for a long time. Europeans have an evolutionary history going back some 35,000 years on their continent. And this was when and where they evolved their current physical appearance: the shape of their face; the color of their skin, hair, and eyes; the length and form of their head hair. To understand why Europeans look the way they do, we should understand how their environment of sexual selection differed from that of tropical humans.

Ancestral humans were exposed to pressures of sexual selection that varied along a north-south axis. In the tropical zone, women could gather food year-round, thus making the cost of a second wife relatively low. With so many being scooped up, female mates were a limited resource. Too many men had to compete for too few women. The pressure of sexual selection was thus on men, with women being the ones who could pick and choose mates.

This situation reversed as humans moved away from the tropical zone. First, it became costlier for a man to provide for a second wife because women contributed less to the family food supply, the longer winters reducing opportunities for food gathering. Second, male mortality increased relative to female mortality because men had to hunt over longer distances. Together, these two trends resulted in too few men competing for too many women. This was particularly so on continental steppe-tundra, where women had almost no opportunities for food gathering and where men had to hunt wandering herds of herbivores over long distances (Frost, 2006; Frost, 2008).

Because of a geographic accident, i.e., a glacial mass over Scandinavia, it was in Europe during the last ice age (25,000 to 10,000 years ago), specifically on the northern and eastern plains, that continental steppe-tundra reached furthest to the south and covered the most territory during the time of modern humans. And this was when and where Europeans came to look European. They did not change in physical appearance because of climatic adaptation. The cause was a change in the direction and intensity of sexual selection: men were now selecting women, and to a much greater degree than elsewhere.


Darwin, C. (1936) [1888]. The Descent of Man and Selection in relation to Sex. reprint of 2nd ed., The Modern Library, New York: Random House.

Frost, P. (2008). Sexual selection and human geographic variation, Special Issue: Proceedings of the 2nd Annual Meeting of the NorthEastern Evolutionary Psychology Society. Journal of Social, Evolutionary, and Cultural Psychology, 2(4), pp. 169-191.

Frost, P. (2006). European hair and eye color - A case of frequency-dependent sexual selection? Evolution and Human Behavior, 27, 85-103.

Liberton, D.K., K.A. Matthes, R. Pereira, T. Frudakis, D.A. Puts, & M.D. Shriver. (2009). Patterns of correlation between genetic ancestry and facial features suggest selection on females is driving differentiation. Poster #326, The American Society of Human Genetics, 59th annual meeting, October 20-24, 2009. Honolulu, Hawaii.

Wolf, N. (1990). The Beauty Myth. Toronto: Random House.

Thursday, September 17, 2009

Adoption and parental investment

It has long been known that children are likelier to be abused, neglected, or murdered by stepparents than by birth parents. This kind of genetic discrimination seems consistent with kin selection theory: parents are expected to care more for children related to them by kinship than for children related to them only legally and socially (Daly & Wilson, 1980).

If stepchildren are mistreated because they are not kin, we should see the same mistreatment of adopted children. To test this hypothesis, Gibson (2009) surveyed parents with at least one genetic and one adopted child over the age of 22, the idea being to compare the two groups of children for total parental investment. Contrary to expectation, the parents invested more in their adopted children than in their own:

This study categorically fails to support the hypothesis that parents bias investment toward genetically related children. Every case of significant differential investment was biased toward adoptees. Parents were more likely to provide preschool, private tutoring, summer school, cars, rent, personal loans and time with sports to adopted children (Gibson, 2009).

Why? One can imagine the parents making no distinction, but why would they discriminate against their own children? The answer seems to be that the adopted siblings made greater demands.

Adoptees were more likely than genetic offspring to have ever received public assistance, been divorced or been arrested. They also completed fewer years of schooling and were more likely to have ever required professional treatment for mental health, alcohol and drug issues.

… The current study may demonstrate cases where “the squeaky wheel gets the grease.” Summer school and private tutors are often remedial, and the fact that adopted children were more likely to receive them suggests they required them more often than genetic ones. The same can be said for rent, treatment and public assistance. Adoptees may have more difficulty establishing themselves relative to genetic children, and the fact that they divorce more often suggests they also have more difficulty staying established. Addiction and divorce may put adoptees in situations that require more parental investment. Parents provide more for adoptees not because they favor them, but because they need the help more often.
(Gibson, 2009)

For many behavioral traits, adoptees seem to differ genetically not only from their adoptive parents but also from the general population:

This supports other research showing that, compared to genetic children, American adoptees have a higher overall risk of contact with mental health professionals, specifically for eating disorders, learning disabilities, personality disorders and attention deficit hyperactivity disorder … They also have lower achievement and more problems in school, abuse drugs and alcohol more, and fight with or lie to parents more than genetic children …

… Adoptees may be genetically predisposed to negative outcomes at higher rates than the general population. Genetic factors clearly contribute to alcohol and drug addiction, as well as to some mental disorders like attention deficit hyperactivity disorder and schizophrenia …. An association for nonviolent criminality has been found between European adoptees and their genetic parents … Furthermore, research with Swedish adoptees suggests 55-60% of their educational performance is explained by genetic factors, and that the number of years of school adoptees complete is significantly related to how many years their genetic mothers completed ...
(Gibson, 2009).

All of this may explain why parents invest more in adopted children than in their own. But why do any parents adopt? Doesn’t such a decision, in itself, contradict kin selection theory?

The contradiction may be more apparent than real. Most adoptive parents have fertility problems and cannot have children on their own. Their only other option is to remain childless.

It may be that adopting fulfills a common instinct to reproduce and parents do it because it produces positive emotions. When people cannot have children biologically, adoption gives them a way to fulfill the “drive” to parent, maladaptive or not. (Gibson, 2009)


Daly, M., & M. Wilson. (1980). Discriminative parental solicitude: A biological perspective. Journal of Marriage and the Family, 2, 277-288.

Gibson, K. (2009). Differential parental investment in families with both adopted and genetic children, Evolution and Human Behavior, 30, 184-189.