Saturday, May 30, 2015

The monster in the mirror

Cyborg She, a love story about a female android and a shy young man (credit: Gaga Communications, for use in critical commentary)


Can humans and robots get along together? Actually, they already do in a wide range of applications from surgery to assembly lines. The question is more vexing when the robots are androids—human-like creatures that can recognize faces, understand questions, and behave as social, emotional, and affective beings. It is this aspect that troubles us the most, partly because it creates a power to manipulate and partly because it transgresses the boundary between human and nonhuman.

A manipulative female android appears in the recent British film Ex Machina. Ava exploits Caleb's sexual desire and sense of compassion, convincing him to help her escape from the research facility. She succeeds but leaves him behind, trapped in the building. This kind of negative portrayal runs through many sci-fi movies of the past four decades. In some, particularly the Terminator series (1984, 1991, 2003, 2009, 2015), androids are evil and seek to destroy mankind. In The Stepford Wives (1975), they are simply tools of wicked people: in a small town, the men conspire to murder their wives and replace them with lookalike android homemakers. In Westworld (1973), a Wild West theme park becomes a killing field when a gunslinger robot begins to take his role too seriously.

In other movies, the portrayal is more nuanced but still negative. Blade Runner (1982) assigns the human Rick Deckard the role of a bad good-guy who seeks out and kills android "replicants." Deckard hunts them down mercilessly, the only exception being Rachael, whom he rapes. Conversely, the replicants emerge as good bad-guys who show human mercy, particularly in the final scene when the last surviving one saves Deckard from death. This theme is further developed in A.I. (2001), where a couple adopt an android boy, named David, after their son falls victim to a rare virus and is placed in suspended animation. When their biological son is unexpectedly cured, and refuses to accept his new sibling, they decide to abandon David in a forest, much as some people get rid of unwanted pets. He meets another android, Gigolo Joe, who explains why David's love for his adoptive mother can never be reciprocated:

She loves what you do for her, as my customers love what it is I do for them. But she does not love you, David. She cannot love you. You are neither flesh nor blood. You are not a dog or a cat or a canary. You were designed and built specific, like the rest of us, and you are alone now only because they are tired of you, or they replaced you with a younger model or were displeased by something you said or broke.

In short, androids can love humans, but this love has a corrupting effect, making humans more callous and self-centered than ever. 

Some American and British movies have featured androids in unambiguously positive roles, like some of the droids in Star Wars (1977), Lisa in Weird Science (1985), Bishop in Aliens (1986), and Data in the TV series Star Trek: The Next Generation (1987-1994). Usually, however, androids are either villains or tragic heroes. One might conclude, therefore, that this dominant view is the logical one that emerges when thoughtful people weigh all the pros and cons.

And yet, we have the example of another cinematographic tradition where androids are viewed quite differently.

The Japanese exception?

Japan has diverged from Western countries in the way it depicts androids on screen. This is especially so in three productions that have appeared since the turn of the century:

Chobits (2002)

This TV series begins in the near future with Hideki, a young man who lives on a remote farm. He has never had a girlfriend and decides to go to a prep school in Tokyo, where he can meet other people his age. On arriving in the big city, he is surprised to see so many androids, called “persocoms.” The life-sized ones are expensive, but many of his college friends have mini-persocoms—small fairy-like creatures, a bit larger than Tinkerbell, who can take email messages, help with schoolwork, provide GPS directions, or simply sing and dance to keep your spirits up.

One night, walking home, he sees a girl's body in the trash piled alongside the curb. He takes a closer look, realizes it's a persocom, and takes it home, where he manages to turn it on. But the persocom—a strangely beautiful girl with large eyes and floor-length hair—can speak only one word and knows nothing about the world. Hideki tries to teach her how to live in society, but he too is socially inept, so other people have to step in to provide help and advice.

From time to time, we see the girl with a children's book that Hideki bought to teach her how to read. It is about a place called Empty Town where people remain secluded in their homes and refuse to venture outside. At the end of each episode, we see this town and a female figure wandering through its deserted streets.

Chobits seems to have been made principally for a mature male audience, while containing elements that normally appear in magazines for teen and pre-teen girls. This is not surprising, given that it was created overwhelmingly by female storyboarders and animators.

Cyborg She (2008)

Most of this movie is set in the present. There are obvious similarities with The Terminator (1984): an android arrives from the future in an electrical discharge; it has superhuman strength and, initially, no emotions; and near the end it must crawl around on its arms because it has lost the lower half of its body. But the similarities end there. The android is female and has come to befriend a shy young man, Kiro, who is spending his 20th birthday alone. She is, in fact, a creation of an older Kiro who wishes to change the course of his life. In this role, she saves him from a gunman who would otherwise leave him a cripple and, later, from a devastating earthquake. She also breaks his vicious circle of shyness/withdrawal, thus transforming him from a boy into a man.

The changes to Kiro are paralleled by changes to her. She develops feelings of jealousy and becomes conscious of her appearance; after being mutilated by a collapsing wall, she begs Kiro to leave, so that he will no longer see what she has become. In these final moments of her life, she tells Kiro that she can "feel his heart." The rest of the building then collapses on her, and when he later retrieves her remains from the rubble, he clings to them, overwhelmed by grief.

Q10 (2010)

This TV series features a timid boy called Heita who attends a private high school. He feels a chasm between himself and the world of love, preferring to be alone in places like the school's science lab. One day, however, he enters the lab and finds the inanimate body of an android girl. When he touches her teeth, she comes to life and asks him to give her a name. He chooses “Kyuuto” because her serial number is Q10 … and because she’s cute.

She follows Heita everywhere, and the principal tries to head off a potential scandal by enrolling her at the school and making the boy her caretaker. Heita tells his science teacher that he doesn't want the job and asks her to turn the android off, but she simply smiles and says there is no going back. The rest of the series recounts the weird love that develops between Heita and Kyuuto.

A common theme

You may have noticed a common theme: male shyness. It's nothing new in Japanese society. Indeed, it seems to prevail in all societies where the father invests much time and energy in providing for his wife and children. In exchange, he wants to be sure that the children are his own. So monogamy is the rule, and something is needed to keep the same man and woman together.

In such a context, male shyness deters men from sexual adventurism, i.e., wandering from one woman to another. Of course, the shyness must not be so strong that it leaves a man with no mate at all. This is not a problem in traditional societies, where intermediaries can step in and help the process along. It becomes a major problem, however, in modern societies where each man is expected to be a sexual entrepreneur.

Male shyness is becoming pathological in today’s Japan. The pathology even has a name: hikikomori—acute withdrawal from all social relationships outside the family. Numbers are hard to come by, but such people may number over a million in Japan alone, with 70-80% of them being men (Furlong, 2008). These figures are really the tip of the iceberg, since many men can lead seemingly normal lives while having no intimate relationships.

A form of therapy?

When the Japanese talk about future uses of androids, they invariably talk about elder care or home maintenance. It is really only in movies and manga comics that the subject of loving relationships is explored, and this is where we see the greatest difference between Japanese and Westerners. The latter seem pessimistic, seeing such love as manipulative or corrupting. In contrast, the Japanese see it as beneficial, even therapeutic.

Who is right? Some insight may be gleaned from research on love dolls, which occupy an early stage of the trajectory that leads to affective androids. In a study of 61 love doll owners, Valverde (2012) found them to be no different from the general population in terms of psychosexual functioning and life satisfaction. In contrast, the rate of depression was much higher among individuals who did not own a love doll but were planning to buy one. It seems likely, then, that the dolls are enabling these men to achieve a healthier psychological state. We will probably see a similar therapeutic effect with affective androids. 

But will this psychological improvement help such men move on to real human relationships? After all, many of them will simply be too unattractive, too socially marginal, or too lacking in personality to make the transition. Others may prefer androids to real women. This point comes up in Chobits when a woman tells Hideki that she feels jealous of his android and its perfect beauty.

One thing is sure. No android, no matter how lifelike, can procreate. When Hideki is walking with a friend by a lake, he is warned that an android can never be as good as a real human. We then see a woman in a boat, with two young children. This fact also explains the convoluted ending of Cyborg She. There can be no happy ending until Kiro's life path is fully rectified, and this can happen only when he becomes a husband and father. Through a series of unusual events, the android's memory is transferred to a similar-looking woman who then travels back in time to meet Kiro after the earthquake. 

Although we will soon have androids that can recognize individual humans and respond to them affectively, there are no procreative models on the drawing board. This limitation will have to be recognized before we begin to use them for therapeutic purposes.

Two different paths

Why does Japan have a more positive attitude toward androids in particular and robots in general? Most observers put it down to the animist roots of the country’s religion, Shinto, which teaches that everything has a spirit, be it the sun, the moon, mountains, trees, or even man-made objects (Mims, 2010). In contrast, Christianity teaches that only humans have souls, so there is no moral difference between swatting a fly and killing an android. When Deckard rapes Rachael, he is merely masturbating. She loves him, but her love can only have a corrupting effect because humans of Christian heritage feel no need to reciprocate. 

This cultural explanation isn’t perfect. For one thing, the divergence between Japan and the West is less obvious the farther back in time you go (Anon, 2013). Before the 1970s, robots were generally likeable characters on the American big screen or small screen, from the Tin Man of The Wizard of Oz (1939) to the robot of Lost in Space (1965-1968). There was even a romance genre: in the seventh episode of The Twilight Zone (1959), a female android saves a man from the loneliness of solitary confinement.

The change of attitude among cineastes seems to have happened during the 1970s. Perhaps not coincidentally, the same decade saw a parallel change of attitude in the business community. Previously, with the West moving toward an increasingly high-wage economy, automation and robotization were considered inevitable, since there would be nobody available to do low-paying jobs. This attitude changed during the 1970s with the growing possibilities for outsourcing of high-wage manufacturing jobs to low-wage countries and, conversely, insourcing of low-wage workers into industries that could not outsource abroad (construction, services, etc.). This easier access to cheap labor made the business community less interested in robots, so much so that robotics research has largely retreated to military applications. There is very little research into the use of robots as caregivers or helpmates.

This new economic reality has spawned a strange form of Japan-bashing in the press, as in this Washington Post story:

There are critics who describe the robot cure for an aging society as little more than high-tech quackery. They say that robots are a politically expedient palliative that allows politicians and corporate leaders to avoid wrenchingly difficult social issues, such as Japan's deep-seated aversion to immigration, its chronic shortage of affordable day care and Japanese women's increasing rejection of motherhood.

"Robots can be useful, but they cannot come close to overcoming the problem of population decline," said Hidenori Sakanaka, former head of the Tokyo Immigration Bureau and now director of the Japan Immigration Policy Institute, a research group in Tokyo. "The government would do much better spending its money to recruit, educate and nurture immigrants," he said. (Harden, 2008)

Of course, this kind of argument could be stood on its head. Aren’t we using immigration as a means to evade the challenges of caring for an aging population and robotizing low-paying jobs out of existence? 


It is no longer fashionable to believe that economics can influence culture and ideology. Yet there seems to be some linkage between the growing indifference toward robots in our business community and the growing hostility toward them in our popular culture. In Japan, major corporations like Honda strive to rally popular opinion in favor of robotics. In the West, big business plays no such role and, if anything, has to justify its relative indifference. There is thus no organized faction that can push back against anti-robotic views when and if they arise.

So we will fail in robotics because we’re not trying very hard to succeed. This is one of those basic rules of life: if you don’t try, not much is going to happen.

But will the Japanese succeed? I cannot say for sure. I can only say there is a lot of pent-up demand for personal robots, especially androids with affective capabilities. Modern society is creating loneliness on a massive scale with its war on “irrational” and “repressive” forms of sociality—like the family and the ethny. I remember doing fieldwork among elderly people on Île aux Coudres and expecting no end of trouble with my stupid questions about attitudes toward skin colour in a traditional mono-ethnic environment. I needn’t have worried. The interviewees showed an unusual degree of interest in my questions and would talk for hours on end. Then I discovered these people typically went for days—sometimes weeks—with no human contact at all. And then others would tell me that so-and-so next door had committed suicide, not because of terminal illness but because of terminal loneliness.

Mark my words. When cyber-Tinkerbells start appearing in stores, people will come in droves to snatch them up like there’s no tomorrow. And many will also be snatching up the life-sized equivalents—even if they cost as much as a Lamborghini.


Anon. (2013). Debunked: Japan's "Special Relationship with Robots", Home Japan.

Chobits (2002). Japanese TV series, directed by Morio Asaka, 26 episodes.

Cyborg She (2008). Japanese film, directed and written by Kwak Jae-yong.

Furlong, A. (2008). The Japanese hikikomori phenomenon: acute social withdrawal among young people, The Sociological Review, 56, 309-325.

Harden, B. (2008). Demographic crisis, robotic cure? Washington Post, January 7.

Mims, C. (2010). Why Japanese Love Robots (And Americans Fear Them), MIT Technology Review, October 12.

Q10 (2010). Japanese TV series, directed by Kariyama Shunsuke and Sakuma Noriyoshi, 9 episodes.

Valverde, S.H. (2012). The modern sex doll-owner: a descriptive analysis, master's thesis, Department of Psychology, California State Polytechnic University.

Saturday, May 23, 2015

Birth of a word

Memorial service for Walther Rathenau (Wikicommons - German Federal Archives). His assassination introduced a new word into French and, shortly after, into English.

A reader has written me about my last post:

It is extremely unlikely that "racism" is an attempt at translating something like Völkismus. Between Hitler's preference for Rasse (race) over Völk and the fact that the Nazis drew on authors like Chamberlain (whose antisemitism would also tend towards privileging Rasse over Völk) and Gobineau (who wrote in French), there is no support to be found for a derivation that would make "racism" appear to be related to the less virulent of the two strains of German nationalism (the romantic-idealistic one which relished being able to point at linguistic differentiation - like Völk vs. populus/people/peuple - and speculating about vague semantic correlates thereof). 

The simple fact of the matter is that "racism" is not any kind of translation but just a combination of a widely used term with a lexologically highly productive suffix. Critical use of "racism" basically starts in the 1920s with Théophile Simar. And Hirschfeld, whose book Racism secured wider currency for the term, clearly wanted to espouse an anthropological concept just as much as Boas et. al. did, although he didn't offer any detailed discussion beyond his roundabout rejection of traditional ideas. BTW, Hirschfeld lectured in the U.S. in 1931. While he wrote his German manuscript in 1933/1934, he may well have employed the term "racism" years earlier.

The best authority on this subject is probably Pierre-André Taguieff, who seems to have read everything about racism, racialism, or colorism. He found that continuous use of the word “racism” began in the 1920s, initially in French and shortly after in English. There is little doubt about the historical context:

In a book published late in 1922, Relations between Germany and France, the Germanist historian Henri Lichtenberger introduced the adjective racist in order to characterize the "extremist," "activist," and "fanatical" elements in the circles of the German national and nationalist right as they had just recently been manifested by the assassination in Berlin, on June 24, 1922, of Walther Rathenau:

The right indignantly condemned Rathenau's murder and denied any connection with the murderers. A campaign was even planned to expel from the Nationalist party the agitators of the extreme right known as "Germanists" or "racists," a group (deutschvölkische) whose foremost leaders are Wulle, Henning and von Graefe, and whose secret inspirer is supposed to be Ludendorff.

[...] The context of the term's appearance is significant: the description of the behavior of the "German nationals" and more precisely the "activist," "extreme right" fraction. The adjective racist is clearly presented as a French equivalent of the German word völkische, and always placed in quotation marks. [...] The term, having only just appeared, is already charged with criminalizing connotations.

In 1925, in his reference book L'Allemagne contemporaine, Edmond Vermeil expressly reintroduced the adjective racist to translate the "untranslatable" German term völkische and suggested the identification, which became trivial in the 1930s of (German) racism with nationalist anti-Semitism or with the anti-Jewish tendencies of the nationalist movement in Germany in the 1920s:

It is in this way that the National German Party has little by little split into two camps. The "racist" (völkische) extreme right has separated from the party. Racism claims to reinforce nationalism, to struggle on the inside against all that is not German and on the outside for all those who bear the name German. [...] (Taguieff, 2001, pp. 88-89)

The term “racist” thus began as an awkward translation of the German völkische to describe ultranationalist parties. Initially, the noun "racism" did not exist, just as there was no corresponding noun in German. It first appeared in 1925, and in 1927 the French historian Marie de Roux used it to contrast his country’s nationalism, based on universal human rights, with radical German nationalism, which recognized no existence for human rights beyond that of the Volk that created them. "Racism [...] is the most acute form of this subjective nationalism," he wrote. The racist rejects universal principles. He does not seek to give the best of his culture to "the treasure of world culture." Instead, the racist says: "The particular way of thinking in my country, the way of feeling that belongs to it, is the absolute truth, the universal truth, and I will not rest or pause before I have ordered the world by law, that of my birth place" (Taguieff, 2001, pp. 91-94).

This was the original meaning of "racism," and it may seem far removed from the current meaning. Or maybe not. No matter how we use the word, the Nazi connotation is always there, sometimes lingering in the background, sometimes in plain view.


The noun "racism" was derived in French from an awkward translation of the German adjective völkische. Unlike the original source word, however, it has always had negative and even criminal connotations. It encapsulated everything that was going wrong with German nationalism in a single word and, as such, aggravated a worsening political climate that ultimately led to the Second World War.

When that war ended, the word "racism" wasn't decommissioned. It found a new use in a postwar context of decolonization, civil rights, and Cold War rivalry. Gradually, it took on a life of its own, convincing many people—even today—that the struggle against the Nazis never ended. They're still out there! 

It would be funny if the consequences weren't so tragic. Our obsession with long-dead Nazis is blinding us to current realities. In Europe, there have been many cases of Jews being assaulted and murdered because they are Jews. These crimes are greeted with indignation about how Europe is returning to its bad ways, and yet in almost every case the assailant turns out to be of immigrant origin, usually North African or sub-Saharan African. At that point, nothing more is said. One can almost hear the mental confusion.


Frost, P. (2013). More thoughts. The evolution of a word, Evo and Proud, May 18.

Taguieff, P-A. (2001). The Force of Prejudice: On Racism and its Doubles, University of Minnesota Press.

Saturday, May 16, 2015

Age of reason

Rally in Sydney (Wikicommons). Antiracists see themselves as open-minded individuals at war with hardline ideologues.


The interwar years gave antiracism a new lease on life, thus reversing a long decline that had begun in the late 19th century. This reversal was driven largely by two events: the acrimonious debate over U.S. immigration in the mid-1920s and Hitler's rise to power in the early 1930s. Many people, especially academics, were convinced of the need for an uncompromising war on "racism"—a word just entering use as a synonym for Nazism.

The war on racism began in the social sciences, especially through the efforts of John B. Watson in psychology and the Boasian triad in anthropology (Franz Boas, Ruth Benedict, Margaret Mead). After initially holding a more balanced view, these social scientists began to argue that genes contribute little to differences in behavior and mental makeup, especially between human populations.

In addition to the political context, there was also the broader cultural setting. The 1920s brought a flowering of African and African-American influences on popular culture, as seen in the Harlem Renaissance, the emergence of jazz, and the infatuation with art nègre. African Americans were viewed no longer as an embarrassment but as a source of excitement and novelty. In this role, black singers, musicians, and artists would lead the way in mobilizing mainstream support for the war on racism, such as Marian Anderson in her concert at the Lincoln Memorial and Paul Robeson through his political activism.

Would things have turned out differently if the immigration debate of the 1920s had been less acrimonious or if Hitler had not come to power? The most widespread answer seems to be "no"—sooner or later, men and women of reason would have broken free of the ideological straitjacket imposed by racism, social Darwinism, and hereditarianism. Franz Boas said as much in an interview he gave in 1936: "I will try to clean up some of the nonsense that is being spread about race those days. I think the question is particularly important for this country, too; as here also people are going crazy" (JTA, 1942).

How true is this view? Was the war on racism a healthy reaction to a mad ideology?

First, the word "racism" scarcely existed in its current sense back then. Continuous use dates from the 1920s and initially referred to the integral "blood and soil" nationalism that was becoming popular, especially in Germany, the word "racist" itself being perhaps a translation of the German Völkisch. Its use in a broader sense is largely postwar and has rarely been positive or even neutral. It's an insult. The racist must be re-educated and, if necessary, eliminated.

If the racist is no longer an ignorant person but rather a villain, and if he is defined by his impulses or negative passions (hate, aggressive intolerance, etc.), then the evil is in him, and his case seems hopeless. The antiracist's task is no longer to lead the "racist" towards goodness, but rather to isolate him as a carrier of evil. The "racist" must be singled out and stigmatized. (Taguieff, 2013)

The term "social Darwinism" likewise came into use well after the period when it was supposedly dominant:

Bannister (1988) and Bellomy (1984) established that "social Darwinism" was all but unknown to English-speaking readers before the Progressive Era. Hodgson's (2004) bibliometric analysis identified a mere eleven instances of "social Darwinism" in the Anglophone literature (as represented by the JSTOR database) before 1916. Before 1916 "social Darwinism" had almost no currency whatsoever [...].

"Social Darwinism" did not acquire much greater currency between 1916 and 1943; a mere 49 articles and reviews employ the term. (Leonard, 2009)

The term did not become commonplace until 1944 with the publication of Social Darwinism in American Thought by Richard Hofstadter. Since then it has appeared 4,258 times in the academic literature. Like "racism," it has seldom been used positively or neutrally:

"Social Darwinism" had always been an epithet. From its very beginnings, reminds Bellomy (1984, p. 2), "social Darwinism" has been "heavily polemical, reserved for ideas with which a writer disagreed." (Leonard, 2009).

The term "hereditarianism" likewise entered common use long after its supposed golden age. According to Google Scholar, "hereditarian" and "hereditarianism" appear 0 times in the literature between 1890 and 1900, 6 times between 1900 and 1910, 8 times between 1910 and 1920, 18 times between 1920 and 1930, and 52 times between 1930 and 1940. In most cases, these terms seem to have been used pejoratively.

Thus, all three words entered common use when the beliefs they described were no longer dominant. More to the point, these words were more often used by opponents than by proponents, sometimes exclusively so.

Of course, an ideology doesn't need a name to exist. Many people engaged in racial thinking without bothering to label it. As Barkan (1992, p. xi) observes: “Prior to that time [the interwar years] social differentiation based upon real or assumed racial distinctions was thought to be part of the natural order.” It is difficult, however, to describe such thinking as an ideology, in the sense of a belief-system that seeks obedience to certain views and to a vision of what-must-be-done. William McDougall (1871-1938) was a prominent figure in psychology and is now described as a "scientific racist," yet his views showed little of the stridency we normally associate with ideology:

Racial qualities both physical and mental are extremely stable and persistent, and if the experience of each generation is in any manner or degree transmitted as modifications of the racial qualities, it is only in very slight degree, so as to produce any moulding effect only very slowly and in the course of generations.

I would submit the principle that, although differences of racial mental qualities are relatively small, so small as to be indistinguishable with certainty in individuals, they are yet of great importance for the life of nations, because they exert throughout many generations a constant bias upon the development of their culture and their institutions. (Mathews, 1925, p. 151)

Similarly, the sociologist William Graham Sumner (1840-1910) is described today as a "social Darwinist," even though the term was never applied to him during his lifetime or long after. He did believe in the struggle for existence: "Before the tribunal of nature a man has no more right to life than a rattlesnake; he has no more right to liberty than any wild beast; his right to pursuit of happiness is nothing but a license to maintain the struggle for existence..." (Sumner, 1913, p. 234). He saw such struggle, however, as an unfortunate constraint and not as a normative value. Efforts to abolish it would simply transfer it to other people:

The real misery of mankind is the struggle for existence; why not "declare" that there ought not to be any struggle for existence, and that there shall not be any more? Let it be decreed that existence is a natural right, and let it be secured in that way. If we attempt to execute this plan, it is plain that we shall not abolish the struggle for existence; we shall only bring it about that some men must fight that struggle for others. (Sumner, 1913, p. 222).

Yet his belief in the struggle for existence was not associated with imperialism and “might makes right.” Indeed, he considered imperialism a betrayal of America's traditions and opposed the Spanish-American War and America’s subsequent annexation of the Philippines. A class of plutocrats would, he felt, come into being and foment imperialist wars in the hope of securing government subsidies and contracts (Wikipedia, 2015).

Herbert John Fleure (1877-1969), a geographer and anthropologist, is similarly described today as a "scientific racist" who saw racial differentiation taking place even at the micro level of small communities:

[...] Fleure accepted the reality of racial differentiation even in Europe, where all the populations exhibit types of diverse origins living and maintaining those type characters side by side in spite of intermarriage and of absence of any consciousness of diversity. These various types, each with mental aptitudes and limitations that are in some degree correlated with their physique, make diverse contributions to the life of each people. (Barkan, 1992, p. 60)

Nonetheless, he condemned the "folly" of confusing such differentiation with language and nation states (Barkan, 1992, pp. 60-64). He also became a strong opponent of Nazism and attacked anti-Semitism in his lectures and articles (Kushner, 2008).

I could give other examples, but why bother? There was a spectrum of racial thinking that encompassed a wide range of scholars, many of whom were sympathetic to the plight of minorities. This variability is hardly surprising, given that racial thinking of one sort or another was typical of most educated people who came of age before the 1930s. Indeed, we are regularly treated to the discovery that some respected person, like Winston Churchill or Albert Schweitzer, was, in fact, a racist. This historical reality is embarrassing not just because the people in question are still role models, but also because it undermines the notion that antiracism freed us from an ideological straitjacket.


Words like "racism," "social Darwinism," and "hereditarianism" create the impression that a single monolithic ideology prevailed before the triumph of antiracism. Actually, the truth was almost the reverse. There was initially a wide spectrum of beliefs, as is normally the case before one belief pushes out its rivals and imposes its vision of reality. Antiracism triumphed because it was more ideological than its rivals; it possessed a unity of purpose that enabled it to neutralize one potential opponent after another. Often, the latter were unaware of this adversarial relationship and assumed they were dealing with a friendly ally.

History could have played out differently. Initially a tool in the struggle against Nazi Germany, antiracism became critically dependent on a postwar context of decolonization and Cold War rivalry. Without this favorable context, it would have had much more trouble seizing the moral high ground and locking down normal discourse. Its revival would have likely stalled at some point.

A world without antiracism could have still brought in laws against discrimination, particularly for the basics of life like housing and employment. But such efforts would have been driven not by ideology but by a pragmatic wish to create a livable society, like modern-day Singapore. In this alternate world, rational people would act rationally. They would not, for instance, be blindly sticking to antiracist principles—and insisting that everyone else do likewise—in the face of the demographic tsunami now sweeping out of Africa.

Social scientists in particular would be acting more rationally. They would not have to assume human sameness and arrange the facts accordingly. They would not face the same pressure to ignore embarrassing data, to choose the less likely explanation, and to keep quiet until ... until when? They would be free to work within the earlier, and more fruitful, paradigm that viewed human differences as a product of genes, culture, and gene-culture interaction. 

Such a paradigm could have absorbed findings on learning and conditioned reflexes, perhaps even better than the one we have now. Indeed, the current paradigm has trouble explaining why the effects of conditioning disappear at different rates, depending on what one has been conditioned to do. For instance, people lose a conditioned fear of snakes and spiders much more slowly than a conditioned fear of electrical outlets, even though the latter are more dangerous in current environments (Cook et al., 1986; Ohman et al., 1986). Conditioning, like learning in general, seems to interact not with a blank slate, but rather with pre-existing mental algorithms that have modifiable and non-modifiable sections.
Of course, this is not how history played out. We are living under an ideology that claims to be an anti-ideology while demanding the sort of conformity normally found in totalitarian societies. In the past, this contradiction largely went unnoticed, perhaps because the full extent of the antiracist project remained poorly known. Or perhaps people chose not to know. Increasingly, however, even the pretence of not knowing is becoming difficult. As French philosopher Alain Finkielkraut wrote, "the lofty idea of the 'war on racism' is gradually turning into a hideously false ideology. And this anti-racism will be for the 21st century what communism was for the 20th century" (Caldwell, 2009).


Barkan, E. (1992). The Retreat of Scientific Racism: Changing Concepts of Race in Britain and the United States Between the World Wars, Cambridge University Press.

Caldwell, C. (2009). Reflections on the Revolution in Europe, Penguin.

JTA (1942). Dr. Franz Boas, Debunker of Nazi Racial Theories, Dies in New York, December 23 

Kushner, T. (2008). H. J. Fleure: a paradigm for inter-war race thinking in Britain, Patterns of Prejudice, 42 

Leonard, T.C. (2009). Origins of the myth of social Darwinism: The ambiguous legacy of Richard Hofstadter's Social Darwinism in American Thought, Journal of Economic Behavior & Organization, 71, 37-51

Mathews, B. (1925). The Clash of Colour. A Study in the Problem of Race. London: Edinburgh House Press.

Ohman et al. (1986). Face the Beast and Fear the Face: Animal and Social Fears as Prototypes for Evolutionary Analyses of Emotion, Psychophysiology, 23, 123-145.

Sumner, W.G. (1913). Earth-hunger and other essays, ed. Albert Galloway Keller, New Haven: Yale University Press. 

Taguieff, P-A. (2013). Dictionnaire historique et critique du racisme, Paris: PUF.

Wikipedia (2015). William Graham Sumner

Saturday, May 9, 2015

Behaviorism and the revival of antiracism

John B. Watson conditioning a child to fear Santa Claus. With a properly controlled environment, he believed, children could be conditioned to think and behave in any way desired


After peaking in the mid-19th century, antiracism fell into decline in the U.S., remaining dominant only in the Northeast. By the 1930s, however, it was clearly reviving, largely through the efforts of the anthropologist Franz Boas and his students.

But a timid revival had already begun during the previous two decades. In the political arena, the NAACP had been founded in 1910 under the aegis of WASP and, later, Jewish benefactors. In academia, the 1920s saw a growing belief in the plasticity of human nature, largely through the behaviorist school of psychology.

The founder of behaviorism was an unlikely antiracist. A white southerner who had been twice arrested in high school for fighting with African American boys, John B. Watson (1878-1958) initially held a balanced view on the relative importance of nature vs. nurture. His book Psychology from the Standpoint of a Behaviorist (1919) contained two chapters on "unlearned behavior". The first chapter is summarized as follows:

In this chapter, we examine man as a reacting organism, and specifically some of the reactions which belong to his hereditary equipment. Human action as a whole can be divided into hereditary modes of response (emotional and instinctive), and acquired modes of response (habit). Each of these two broad divisions is capable of many subdivisions. It is obvious both from the standpoint of common-sense and of laboratory experimentation that the hereditary and acquired forms of activity begin to overlap early in life. Emotional reactions become wholly separated from the stimuli that originally called them out (transfer), and the instinctive positive reaction tendencies displayed by the child soon become overlaid with the organized habits of the adult.

By the mid-1920s, however, he had largely abandoned this balanced view and embraced a much more radical environmentalism, as seen in Behaviorism (1924):

Our conclusion, then, is that we have no real evidence of the inheritance of traits. I would feel perfectly confident in the ultimately favorable outcome of a healthy, well-formed baby born of a long line of crooks, murderers and thieves, and prostitutes. (Watson, 1924, p. 82)

[...] Give me a dozen healthy infants, well-formed, and my own specified world to bring them up in and I'll guarantee to take any one at random and train him to become any type of specialist I might select—doctor, lawyer, artist, merchant-chief, and yes, even beggar-man and thief, regardless of his talents, penchants, tendencies, abilities, vocations, and race of his ancestors. I am going beyond my facts and I admit it, but so have the advocates of the contrary and they have been doing it for many thousands of years. (Watson, 1924, p. 82)

Everything we have been in the habit of calling "instinct" today is a result largely of training—belongs to man's learned behavior. As a corollary from this I wish to draw the conclusion that there is no such thing as an inheritance of capacity, talent, temperament, mental constitution, and characteristics. These things again depend on training that goes on mainly in the cradle. (Watson, 1924, p. 74).

Why the shift to extreme environmentalism? It was not a product of ongoing academic research. In fact, Watson was no longer in academia, having lost his position in 1920 at Johns Hopkins University after an affair with a graduate student. At the age of 42, he had to start a new career as an executive at a New York advertising agency. Some writers attribute this ideological shift to his move from academia to advertising:

Todd (1994) noted that after Watson lost his academic post at Johns Hopkins, he abandoned scientific restraint in favor of significantly increased stridency and extremism, such that there were "two Watsons—a pre-1920, academic Watson and a post-1920, postacademic Watson" (p. 167). Logue (1994) argued that Watson's shift from an even-handed consideration of heredity and environment to a position of bombast and extreme environmentalism was motivated by the need to make money and the desire to stay in the limelight after he left academia. (Rakos, 2013)

There was another reason: the acrimonious debate in the mid-1920s over immigration, particularly over whether the United States was receiving immigrants of dubious quality. Rakos (2013) points to Watson's increasingly harsh words on eugenics and the political background: "It is probably no coincidence that only in the 1924 edition of the book—published in the same year that Congress passed the restrictive Johnson-Lodge Immigration Act—did Watson express his belief that behaviorism can promote social harmony in a world being transformed by industrialization and the movement of peoples across the globe."  

Eugenics is mentioned, negatively, in his 1924 book:

But you say: "Is there nothing in heredity - is there nothing in eugenics - [...] has there been no progress in human evolution?" Let us examine a few of the questions you are now bursting to utter.

Certainly black parents will bear black children [...]. Certainly the yellow-skinned Chinese parents will bear a yellow skinned offspring. Certainly Caucasian parents will bear white children. But these differences are relatively slight. They are due among other things to differences in the amount and kind of pigments in the skin. I defy anyone to take these infants at birth, study their behavior, and mark off differences in behavior that will characterize white from black and white or black from yellow. There will be differences in behavior but the burden of proof is upon the individual be he biologist or eugenicist who claims that these racial differences are greater than the individual differences. (Watson, 1924, p. 76)

You will probably say that I am flying in the face of the known facts of eugenics and experimental evolution—that the geneticists have proven that many of the behavior characteristics of the parents are handed down to the offspring—they will cite mathematical ability, musical ability, and many, many other types. My reply is that the geneticists are working under the banner of the old "faculty" psychology. One need not give very much weight to any of their present conclusions. (Watson, 1924, p. 79) 


Antiracism did not revive during the interwar years because of new data. Watson's shift to radical environmentalism took place a half-decade after his departure from academia. It was as an advertising executive, and as a crusader against the 1924 Immigration Act, that he entered the "environmentalist" phase of his life. This phase, though poor in actual research, was rich in countless newspaper and magazine articles that would spread his behaviorist gospel to a mass audience.

The same could be said for Franz Boas. He, too, made his shift to radical antiracism when he was already semi-retired and well into his 70s. Although this phase of his life produced very little research, it saw the publication of many books and articles for the general public. As with Watson, the influence of external political events was decisive, specifically the rise of Nazism in the early 1930s.

In both cases, biographers have tried to explain this ideological shift by projecting it backward in time to earlier research. Boas' antiracism is often ascribed to an early study that purported to show differences in cranial form between European immigrants and their children (Boas, 1912). Yet Boas himself was reluctant to draw any conclusions at the time, merely saying we should "await further evidence before committing ourselves to theories that cannot be proven." Later reanalysis found no change in skull shape once age had been taken into account (Fergus, 2003). More to the point, Boas continued over the next two decades to cite differences in skull size as evidence for black-white differences in mental makeup (Frost, 2015).

Watson's radical environmentalism has likewise been explained by his Little Albert Experiment in 1920, an attempt to condition a fear response in an 11-month-old child. Aside from the small sample size (one child) and the lack of any replication, it is difficult to see how this finding could justify his later sweeping pronouncements on environmentalism. There were admittedly other experiments, but they came to an abrupt end with his dismissal from Johns Hopkins, and little is known about their long-term effects:

Watson tested his theories on how to condition children to express fear, love, or rage—emotions Watson conjectured were the basic elements of human nature. Among other techniques, he dropped (and caught) infants to generate fear and suggested that stimulation of the genital area would create feelings of love. In another chilling project, Watson boasted to Goodnow in summer 1920 that the National Research Council had approved a children's hospital he proposed that would include rooms for his infant psychology experiments. He planned to spend weekends working at the "Washington infant laboratory." (Simpson, 2000)

Watson did apply behaviorism to the upbringing of his own children. The results were disappointing. His first marriage produced a daughter who made multiple suicide attempts and a son who sponged off his father. His second marriage produced two sons, one of whom committed suicide (Anon, 2005). His granddaughter similarly suffered from her behaviorist upbringing and denounced it in her memoir Breaking the Silence. Towards the end of his life Watson regretted much of his child-rearing advice (Simpson, 2000).


Anon (2005). The long dark night of behaviorism, Psych 101 Revisited, September 6

Boas, F. (1912). Changes in the Bodily Form of Descendants of Immigrants, American Anthropologist, 14, 530-562. 

Fergus, C. (2003). Boas, Bones, and Race, May 4, Penn State News 

Frost, P. (2015). More on the younger Franz Boas, Evo and Proud, April 18 

Rakos, R.F. (2013). John B. Watson's 1913 "Behaviorist Manifesto": Setting the stage for behaviorism's social action legacy, Revista Mexicana de analisis de la conducta, 39(2)

Simpson, J.C. (2000). It's All in the Upbringing, Johns Hopkins Magazine, April

Watson, J.B. (1919). Psychology from the Standpoint of a Behaviorist.

Watson, J. B. (1924). Behaviorism. New York: People's Institute.

Saturday, May 2, 2015

Impressions of Russia

The Battle for Sevastopol, now showing in Russian theatres
The young man shook his head. “No, I can’t say I’m pro-Putin. There’s too much corruption in Russia, with too much money going to the wrong people. We should become more Western. Instead, we’re moving in the other direction.”

Finally, I thought, a liberal critic of Putin. The young man continued. “Here it’s not too bad, but in Moscow you can see the change. They’re all over. Please, don’t get me wrong, I don’t hate anyone, but I feel uncomfortable when there are so many of them. Sometimes, I wonder whether I’m still in Russia.”


Much had changed since my last visit ten years ago. Driving into the city of Voronezh from the airport, I could see entirely new neighborhoods, supermarkets, office buildings, and the like. In 2003, there was only one shopping mall in the whole city, and it was nothing special. Now, there were malls as huge as any in Toronto. Things had likewise improved for some of our old friends and acquaintances. A few had moved up into the growing middle class, including one couple who showed us their new palatial home on the outskirts.

Yet the bulk of the population seemed no better off, and in some ways worse off. Ten years ago, jobs were there for the taking. The pay may have been lousy, but it was money. Now, the competition is intense even for those jobs. An unemployed man told me: “It’s hard to find work now. Employers will hire immigrants because they work for much less and won’t complain. And there are a lot of them now, mainly from Central Asia, but also from places all over.”

Sour grapes? Perhaps. But it’s consistent with what a Quebec building contractor had told me earlier. “I no longer bother with Russian construction projects because there’s always a Russian company that will put in an absurdly low bid. The only way he can stay within budget is by hiring illegal immigrants. Everyone knows it, but nothing is ever done to stop it.”


I wasn’t surprised to see Ukrainian refugees in a big city like Voronezh, but it was surprising to see so many in remote farming villages. And each refugee family had a horror story to tell. It’s one thing to hear these stories from professional journalists; it’s another to hear them from ordinary people who aren’t being paid to say what they say. This is an underappreciated factor in the growing anger among Russians against the Ukrainian government.

After all that’s happened, I don’t see how eastern Ukraine will ever accept being ruled by Kiev. It’s like a marriage that has crossed the line between verbal abuse and physical violence.


We were standing outside a fast food kiosk. “I just don’t get it,” said my wife. “Prices are almost as high here as in Canada, yet the wages are a lot lower. How do people manage to survive?”

A young man overheard her. “The people who don’t survive are the ones you don’t get to see.”


Postwar housing projects cover most of the city. They are now aging badly, and North Americans wouldn’t hesitate to call them “slums.” We like to think that slums cause crime, broken homes, and stunted mental development. Yet, here, you can walk about in safety, families are usually intact, and the children are studying hard to become engineers, scientists, ballet dancers, or what have you.


We were sitting in a restaurant with two young Russians, a lawyer and a university teacher. “Will there be war?” said one, looking worried. I tried to be reassuring, saying no one wanted war. But I wasn’t sure myself.

There was another question. “But do the Americans know what they’re getting into?” I shook my head. Few people in the West know much about Russia, and what little they do know is worse than useless.


Hitler said it would be like kicking in the door of a rotten building. That’s how it seemed at first. And then the war dragged on and on, grinding down one German division after another. If—God forbid—war comes again, we’ll probably see the same pattern. Without a higher purpose, the average Russian man often retreats into indolence, alcoholism, and self-destructive behavior. Give him that purpose, and he will fight for it with almost superhuman power.

One of my professors ascribed it to the yearly cycle of traditional farm life. For most of the year, the muzhik slept a lot and whiled away his days in aimlessness. But when it came time to plough the fields or bring in the harvest, he had to pull out all the stops and work continuously from dawn to dusk.


It’s the 70th anniversary of victory in the Great Patriotic War, and reminders can be seen everywhere. There has been a spate of new war movies, including one about the Battle for Sevastopol. It’s hard not to see references to the current conflict.