Astrogator's Logs

New Words, New Worlds

Archive for the 'Biology & Culture' Category

Sex by Choice: the Highest Compliment

Monday, March 5th, 2012

Anyone with a functioning cortex has known that Rush Limbaugh is a vile slug since the moment he uttered his first nasty lie. His recent comments about accomplished, brave law student Sandra Fluke are not surprising, nor is his stone-ignorant equation of contraception with frequency of intercourse: he must have confused responsible sex with his own frantic consumption of Viagra – now there’s unnaturally induced sex on demand! However, Limbaugh is not the disease, merely its symptom. The belated, lukewarm bleatings and hedgings from the Republican “leadership” and from his advertisers are telling, as is their obfuscation of the fact that contraception is already covered by health insurance; the sole difference is the existence of a co-pay.

In the last year or so, we have seen the exclusion of women from decisions that affect them almost exclusively, attempts to defund Planned Parenthood, to define miscarriage as murder, to add invasive, needless sonograms to the already enormous difficulties of getting an abortion. The freak show parade that is this year’s Republican presidential lineup is banging the tin drum of “returning to family values” — aka female poverty and powerlessness, probably because all of them have little knowledge of and interest in education, the environment, the economy, international diplomacy or anything of value to anyone beyond Ponzi-scheme millionaires who live in gated communities with private security. The US is going the way of Wahhabi Saudi Arabia – perhaps a fitting trajectory, since the country seems unwilling or unable to curb its fossil fuel consumption.

The open war on women declared by the Republican Party shows how the Teabaggers and Jesufascists have hijacked rational, civil discourse in favor of a punitive primitivism that denies basic human decency and is steadily encroaching on hard-won women’s rights. It is no surprise that most foes of contraception are fundies of Abrahamic religions, which are disasters for women in any case. However, make no mistake about it: contraception has nothing to do with freedom of religion. The kernel of this sickening backlash is the wish to deny women autonomy. Nothing changed the dynamics of gender interactions like contraception. For the first time in human history, women could reliably regulate the outcome of sexual congress. It removed the specter of unwanted pregnancy – and with that, women could enjoy sex as uninhibitedly as men, finally undoing the predator/prey equation so beloved of evo-psycho Tarzanists the world over.

Ironically, the exercise of contraception, which makes joyful sex possible, is uniquely human. The only partial exception may be our bonobo cousins, who use sex as social glue (often, nota bene, initiated by the female members of the group). Contrary to the corrosive lies of benighted fundies, most animals do not choose sex. They go into heat and mate compulsively. In some cases, females exercise mate choice; in others, mating pairs form monogamous bonds. But only humans incorporate sex into their repertoire of chosen pleasures, whether they’re fertile or not. So contrary to the idiotic natterings that “sex on demand” is animal-like, exercising sexual choice is in fact the highest compliment for the activity. It transforms the act from instinct, compulsion or random outcome into something treasured, something freely chosen – which, again contrary to the fundies’ nonsense, makes it far more meaningful and powerful than the joyless autopilot version. It is the opposite of prostitution, which is undertaken as a profession and requires control and forgoing of spontaneous pleasure by its practitioners – not that Limbaugh et al are clear on complex concepts.

This is what contraception made possible, and what is at stake here. If people want human women to become truly animal-like, they should recall that most mammals do not recognize paternity, the most common family unit is a female with sub-adult offspring and female mammals routinely abort or kill offspring when they deem the circumstances unpropitious for raising a brood. And if they think that contraception is murder, they can return to the good old days when masturbation was in a similar category. However, all this hypocrisy and twisting of facts really attempts to cloud the core issue: women as equals. By targeting this, the Jesufascists and their ilk across all nations and religions are playing on the primitive fears of men, especially at times of instability and unrest, when it’s far easier to turn on Others than to act constructively for a better collective future. As James Tiptree Jr. (Alice Sheldon) famously had a protagonist state in The Women Men Don’t See:

“Women have no rights, except what men allow us. Men run the world. When the next real crisis upsets them, our so-called rights will vanish like—like that smoke. We’ll be back where we always were: property. And whatever has gone wrong will be blamed on our freedom, like the fall of Rome was. You’ll see.”

Contrary to Freud’s notorious question, the recurrent problem of civilization, as prevalent today as in ancient Sumer, is how to define male roles which satisfy male egos without wreaking terminal havoc. Women still have essentially no power – Tiptree’s dictum still obtains, even in the First World. I personally believe that our societal problems will persist as long as women are not treated as fully human, including the right to be sexual beings by choice. Resorting to medical excuses in support of available contraception, nice as it is, diverts attention from the central, irreducible issue of women’s basic autonomy and fundamental rights as full humans. The various attempts to improve women’s status, ever subject to setbacks and backlashes, are our marks of successful struggle to attain our full species potential. If we cannot solve this thorny and persistent problem, we may still survive — we have thus far. However, I doubt that we’ll ever truly thrive, no matter what technological levels we achieve.

Won’t ANYONE Think of the Sexbots?!

Monday, February 20th, 2012

From 2008 to 2009 I was a fellow (gratis) of the Institute for Ethics and Emerging Technologies (IEET). IEET is the public face of a non-libertarian branch of transhumanism that considers itself left-leaning progressive – by US standards, that is. In 2009 I left IEET, because the willful ignorance of biology and the evopsycho blather got to me (as did the fact that there were no non-white non-males in any position of power there). A month ago, they contacted me to ask if I would like to have more of my essays reprinted on their site, and if I’d answer a questionnaire.

The quality of the comments on the IEET site made me decide not to publish there. On the other hand, the questionnaire gave rise to thoughts, especially in light of the steady erosion of women’s status across the globe. I won’t quote lengthy specifics; they’re all around us, from US congressmen trying to pass laws that classify miscarriage as murder to the resurgence of religious fundamentalism and its relentless seepage into mainstream politics. Here’s the list of questions, which I call The OK, the Bad and the Funny:

The Future of Feminism

  1. How do you think “sex selection” is going to impact women? In China and India, millions of female fetuses have been aborted… do you think this will continue as sex selection becomes more widespread? Or will it even out – or even favor girls? I’ve read that sex selectors choose girls over boys in places like South Korea and Japan…
  2. Women are advancing quickly in political and business positions. They are now 60% of college students, even in graduate programs. Do you think this trend will continue, enabling women to become the dominant gender in many parts of the world?
  3. Getting pregnant no longer requires a male mate. Do you predict a gradual or sudden decline of marriage, for this reason? Do you predict more single mothers-by-choice? An increasingly wide variety of family groups?
  4. Will men become irrelevant, if propagation can occur between two women via parthenogenesis? Is this something that is scientifically possible in the near future?
  5. Do you think the word “feminism” is going to be dated in 40-50 years, because we’ll be moving towards a genderless society? That there will be numerous possible gender options, that are easily changeable? Or will there be “women” for at least the next 100 years?
  6. How do you think social institutions will change, as nations become “feminized” due to increasing female presence in power positions? Will increasing women’s power affect education? International relations? Economies? Democracy? The environment?
  7. I have noticed that women don’t seem as interested in cryonics, or life extension. Am I right about that? Why is that? Do you think women are as intrigued by immortality as men? If not, how will progress in life extension proceed in a future where men are declining in influence?
  8. Do you think male and female sex robots will be prevalent in the future? Will they dominate sexuality? Or will only men be interested in them? Will prostitution, and the sex industry in general, be impacted, or replaced by sexbots?
  9. Men presently – generally – have greater physical size and strength than women. Will this change in the future, via various enhancements and augmentations? Will the two genders become physically equal in all aspects? If so, how will this change the dynamics between them?
  10. Genetically, it appears that men are likely to be “outliers” – to be at the far extremes in either intelligence or stupidity. Do you see this changing in the future, via genetic engineering? Do you see women eventually winning 1/2 the Nobel Prizes every year, or even more, for example?
  11. There are still cultures that practice customs like Female Genital Mutilation and Arranged Marriages and Honor Killings. Do you see those misogynistic practices ending soon? Or will they be tolerated for several more decades, because many are disinclined to assert Western ideals on traditional cultures? How will notions about “religion” change as women gain in power?
  12. Do you see other technologies looming ahead that will deeply impact women? A male birth control pill, for example – how would that change society?
  13. Finally, do you see women entering science and tech in larger numbers? Do you think they have different interests in these fields? Do you think they have goals and inventions and purposes they want to accomplish, that differ from male goals/inventions/purposes?

The attentive reader will notice several overarching attributes.  Beyond the gender essentialism, half of the questions are “But… what about the men??” including the angst about family configurations, control of reproduction and, not least, sex robots – clearly, a burning issue. Also telling is that the questionnaire is titled “The Future of Feminism” rather than “The Future of Women”. Insofar as feminism is the simple yet radical notion that women are fully human and should be treated as such, it is frankly stunning that many people, and not just Anders Breivik or the Taliban, are virulently hostile to feminism. This speaks volumes about the prevailing assumptions of the first globally linked human civilization and its likeliest future direction: women’s prospects look ever bleaker as the global economy circles the drain, since women are shoved back into chattelship, illiteracy and poverty whenever there’s a downturn or an upheaval. Variations of Margaret Atwood’s The Handmaid’s Tale have replayed in many places within my lifetime.

“Feminizing” keeps popping up in the questionnaire as well. It looks like the IEET definition of the term is “if at least one woman is present in X” regardless of the final configuration. As far as I’m concerned, for any X to be feminized it requires more than 50% female representation — and last I looked, men still own and run just about everything on this planet. Equally importantly, “feminizing” that makes a difference also requires the high female representation to be across the board (not stuffed into the lower echelons, as research techs in science or gofers in corporations and governments). Last but decidedly not least, it requires that X not be devalued in prestige, authority and compensation because it is XX-heavy. Consider math in Japan: during the shogunate, merchants were classified below peasants, and samurai did not soil themselves with money affairs – their wives handled all that – so no prestige attached to numbers. Or medicine in the USSR: most doctors were women, and the perks and pay scale of the profession were correspondingly low.

Yet another characteristic of the questionnaire is conflation of cause and effect. As one example, everyone knows that many more women should have won Nobels but they were pushed aside by ambitious men with influential mentors and devoted wives. Lise Meitner, Rosalind Franklin, Chien-Shiung Wu, Jocelyn Bell, Lynn Margulis, Jane Goodall, Susan Berget – to name just the very tip of the iceberg. Additionally, intelligence is far more complex than the isolated genius cliché propagated by the IEET questionnaire. As for idiotic suggestions by evopsychos — as a representative example, the contention that men bequeath genes “for brain size” from the Y chromosome, thereby making men routinely more intelligent than women: beyond contributions to spermatogenesis, the next major function determined by Y-linked genes is ear hair.

Here’s another example of muddled causation: when women see that most men who sign up for cryonic preservation share the physical and emotional attributes of the male cast in The Big Bang Theory, it’s no wonder they elect not to join these men in their thermos jar dreams – or in eternal post-rapture bliss (cryonics’ current zero chance of success strictly aside).

I could go on at considerable length but I won’t beat up on the questionnaire too much: it did try to include sciency questions, basic and unfocused as they were. Some of the questions amused me in a sad way, making me conclude that the movement might best be dubbed “transhumorism”. Nevertheless, the concerns mirrored in it do not bode well for the future of humanity. In the end, we will get the fate we deserve as a species. But I’ll say this much: feminism will become irrelevant when questionnaires like this become irrelevant.

Images: 1st, the cast of The Big Bang Theory (as well as an encapsulation of the dynamics); 2nd, self-explanatory; 3rd, the sex dolls of First Androids.

The Persistent Neoteny of Science Fiction

Thursday, December 29th, 2011

“Science fiction writers, I am sorry to say, really do not know anything. We can’t talk about science, because our knowledge of it is limited and unofficial, and usually our fiction is dreadful.”

Philip K. Dick

When Margaret Atwood stated that she does not write science fiction (SF) but speculative literature, many SF denizens reacted with what can only be called tantrums, even though Atwood defined what she means by SF. Her definition reflects a wide-ranging writer’s wish not to be pigeonholed and herded into tight enclosures inhabited by fundies and, granted, is narrower than is common: it includes what I call Leaden Era-style SF that sacrifices complex narratives and characters to gizmology and Big Ideas.

By defining SF in this fashion, Atwood made an important point: Big Ideas are the refuge of the lazy and untalented; works that purport to be about Big Ideas are invariably a tiny step above tracts. Now before anyone starts bruising my brain with encomia of Huxley, Asimov, Stephenson or Stross, let’s parse the meaning of “a story of ideas”. Like the anthropic principle, the term has a weak and a strong version. And as with the anthropic principle, the weak version is a tautology whereas the strong version is an article of, well, religious faith.

The weak version is a tautology for the simplest of reasons: all stories are stories of ideas. Even terminally dumb, stale Hollywood movies are stories of ideas. Over there, if the filmmakers don’t bother with decent worldbuilding, dialogue or characters, the film is called high concept (high as in tinny). Other disciplines call this approach a gimmick.

The strong version is similar to supremacist religious faiths, because it turns what discerning judgment and common sense classify as deficiencies into desirable attributes (Orwell would recognize this syndrome instantly). Can’t manage a coherent plot, convincing characters, original or believable worlds, well-turned sentences? Such cheap tricks are for heretics who read books written in pagan tongues! Acolytes of the True Faith… write Novels of Ideas! This dogma is often accompanied by its traditional mate, exceptionalism – as in “My god is better than yours.” Namely, the notion that SF is intrinsically “better” than mainstream literary fiction because… it looks to the future, rather than lingering in the oh-so-prosaic present… it deals with Big Questions rather than the trivial dilemmas of ordinary humans… or equivalent arguments of similar weight.

I’ve already discussed the fact that contemporary SF no longer even pretends to deal with real science or scientific extrapolation. As I said elsewhere, I think that the real division in literature, as in all art, is not between genre and mainstream, but between craft and hackery. Any body of work that relies on recycled recipes and sequels is hackery, whether this is genre or mainstream (as just one example of the latter, try to read Updike past the middle of his career). Beyond these strictures, however, SF/F suffers from a peculiar affliction: persistent neoteny, aka superannuated childishness. Most SF/F reads like stuff written by and for teenagers – even works that are ostensibly directed towards full-fledged adults.

Now before the predictable shrieks of “Elitist!” erupt, let me clarify something. Adult is not a synonym for opaque, inaccessible or precious. The best SF is in many ways entirely middlebrow, as limpid and flowing as spring water while it still explores interesting ideas and radiates sense of wonder without showing off about either attribute. A few short story examples: Alice Sheldon/James Tiptree’s A Momentary Taste of Being; Ted Chiang’s Story of Your Life; Ursula Le Guin’s A Fisherman of the Inland Sea; Joan Vinge’s Eyes of Amber. Some novel-length ones: Melissa Scott’s Dreamships; Roger Zelazny’s Jack of Shadows; C. J. Cherryh’s Downbelow Station; Donald Kingsbury’s Courtship Rite. Given this list, one source of the juvenile feel of most SF becomes obvious: fear of emotions, especially love in all its guises, including the sexual kind (the real thing, in its full messiness and glory, not the emetic glop that usurps the territory in much genre writing, including romance).

SF seems to hew to the long-disproved tenet that complex emotions inhibit critical thinking and are best left to non-alpha-males, along with doing the laundry. Some of this comes from the Calvinist prudery towards sex, the converse glorification of violence and the contempt for sensual richness and intellectual subtlety that is endemic in Anglo-Saxon cultures. Coupled to that is the fact that many SF readers (some of whom go on to become SF writers) can only attain “dominance” in Dungeons & Dragons or World of Warcraft. This state of Peter-Pan-craving-comfort-food-and-comfort-porn makes many of them firm believers in girl cooties. By equating articulate emotions with femaleness, they apparently fail to understand that complex emotions are co-extensive with high-level cognition.

Biologists, except for the Tarzanist branch of the evo-psycho crowd, know full well by now that cortical emotions enable people to make decisions. Emotions are an inextricable part of the indivisible unit that is the body/brain/mind, and humans cannot function well without the constant feedback loops of these complex circuits. We know this from the work of António Damasio and his successors in connection with people who suffer neurological insults. People with damage to that evolutionary newcomer, the prefrontal cortex, often perform at high (even genius) levels in various intelligence and language tests – but they display gross defects in planning, judgment and social behavior. To adopt such a stance by choice is not a smart strategy even for hard-core social Darwinists, who can be found in disproportionate numbers at SF conventions and presses.

To be fair, cortical emotions may indeed inhibit something: shooting reflexes, needed in arcade games and any circumstance where unthinking execution of orders is desirable. So Galactic Emperors won’t do well as either real-life rulers or fictional characters if all they can feel and express are the so-called Four Fs that pass for sophistication in much of contemporary SF and fantasy, from the latest efforts of Iain Banks to Joe Abercrombie.

Practically speaking, what can a person do besides groan when faced with another Story of Ideas? My solution is to edit an anthology of the type of SF I’d like to read: mythic space opera, written by and for full adults. If I succeed and my stamina holds, this may turn into a semi-regular event, perhaps even a small press. So keep your telescopes trained on this constellation.

Note: This is part of a lengthening series on the tangled web of interactions between science, SF and fiction. Previous rounds: Why SF needs…

…science (or at least knowledge of the scientific process): SF Goes McDonald’s — Less Taste, More Gristle
…empathy: Storytelling, Empathy and the Whiny Solipsist’s Disingenuous Angst
…literacy: Jade Masks, Lead Balloons and Tin Ears
…storytelling: To the Hard Members of the Truthy SF Club

Images: 1st, Bill Watterson’s Calvin, who knows all about tantrums; 2nd, Dork Vader, an exemplar of those who tantrumize at Atwood; 3rd, shorthand vision of my projected anthology.

Opining about SF and Fantasy

Friday, November 25th, 2011

Charles Tan, the Bibliophile Stalker, interviewed me for SF Signal.  The result just appeared on SFS.  Here’s an excerpt:

“Do you have a preference for SF or fantasy?

I will read anything that is not a hack job; what matters is the quality of the storytelling, not the mode.  Of course, I have preferences, just as I like dark chocolate and goat cheese: I gravitate to stories that are hybrid and/or hard to categorize; I still have a soft spot for well-written space opera (with C. J. Cherryh and C. S. Friedman at the top of that list) and particular types of alternative history fantasy (for example, Jacqueline Carey’s Renaissance Europe, in which Celtic Ireland and Minoan Crete are still going strong); I tend to dislike cyberpunk, though I loved Melissa Scott; and steampunk makes me break out in hives, automata in particular – to say nothing of corsets.  At my age and exposure, I can tell from reading the first and last page if it’s worth my while to read the whole work.”

ETA: Paul Gilster of Centauri Dreams caught all the echoes of the interview in his recent entry Reflections on a Mythic Voyager.  I originally had a link to it, but removed it when his post got deluged by comments from nerdtrolls “explaining” why Paul was terribly, terribly wrong in liking the words of a dark female ethnic, no matter what objective credentials she has.

Cool Cat by Ali Spagnola

Who Will Be Companions to Female Kings?

Sunday, November 20th, 2011

“I wish to be as God made me.”
— Emily Brontë

My father’s nameday happened to fall during my last visit to Hellas: Saint George, whose day often falls during Easter celebrations. His DVD player had been hit by lightning, so my sister and I bought him a replacement as a gift. To check the new gizmo, we used a DVD of Jane Campion’s The Piano.

I saw The Piano when it first appeared, twenty years ago. It made an indelible impression on me and has been in and out of my thoughts ever since. For one, it was filmed in Aotearoa, one of my three Valinors – the other two being Hellas and Alba, particularly their islands. Aotearoa became home to one of the few bona fide seafaring cultures on the planet, and I still pinch myself to ensure that The Lord of the Rings really was made in that wondrous place, rather than on Hollywood plywood sets. For another, The Piano is a witch’s brew of potent myths: part Wuthering Heights, part Demetra/Persephone, part Lilith, Lucifer and Adam – but primarily it’s a tale of what happens to an uncompromising creator who happens to be a woman.

The hero of The Piano is brought to mesmerizing life by Holly Hunter, who deserved the slew of awards she got, including the Oscar. Fierce Ada McGrath, a Victorian mail-order bride with a fatherless daughter, has decided to communicate with the world solely through a self-invented sign language, the occasional terse written message… and, most crucially, her piano. The two men she encounters in New Zealand, the stunning landscape she finds herself in, her precocious child (who turns from devotee to traitor, played by a young Anna Paquin at her fey best) all influence her life dramatically; but for Ada, the crux of her existence is the piano.

Unlike a “proper” lady, Ada does not merely tickle its ivories. She’s a composer and virtuoso player of immense talent, and of course there is no possible place for her among either the Scots colonialists or the Maori natives. Nobody comprehends her, not even her ensorcelled child or her besotted lover, even though they love her in large part because of her uniqueness. There is no place or companion in the world, then or now, for a woman of implacable will and focus who will not compromise or yield in her determination to pursue her vocation.

If Ada were a man, she could choose the roles highlighted by the two male characters. She could embrace her own culture wholeheartedly like her husband, or attempt to “go native” like her lover. But for a woman, either choice would mean submission to a rigid, confined existence. Westerners are more familiar with Victorian women’s suffocated lives, yet Maori women fared little better in a culture as hierarchical and despotic as any of its European counterparts. In either case, Ada would have to renounce her music – an outcome equivalent to losing her life.

So there are two endings to the film, though the second one is truer: after a forcible mutilation, Ada almost joins her piano in the depths of the Pacific. On the surface of things, she decides to stay with the living – now a tamed hausfrau teaching colonial children their piano scales, a tiger pulling a plow, a queen ant who has shed her wings; but in her dreams, which are more real than her awake moments, she is underwater, swaying above her now-silent piano like a strand of kelp.

Something similar happens to another hero of an Aotearoa-based film: Paikea Apirana (Keisha Castle-Hughes) in Niki Caro’s Whale Rider, adapted from Witi Ihimaera’s book. Pai is clearly destined to be the next chief: she has the bloodlines, the talent and charisma, the stamina and will. In mythical terms, not only did her father (himself a maverick) name her after the legendary Maori ancestor who came from Hawaiki riding a whale but she also bears a second soul: that of her twin brother, dead at birth along with her mother. But… Pai has the wrong equipment between her legs. This makes her useless to her grandfather, who considers Pai’s efforts “defiling to the tradition” and tries to train anyone except his granddaughter for the position of chief.

Paikea tries every possible way to reach the core of the crusty old jerk. Finally, when he emphatically rejects her by not coming to a school presentation she has dedicated to him, Paikea in her great desolation calls to the sea. A pod of whales follows her voice, and they beach themselves to reach her. They are slowly suffocating and the rescue efforts of the community prove fruitless, until Pai straddles the pod leader and spurs it back into the water, making the rest follow.

Like The Piano, Whale Rider has two endings: in the happy one, Pai’s grandfather finally embraces his granddaughter as the next chief; her sculptor father finishes his abandoned waka and the entire community joyfully launches the canoe, with Paikea presiding over a crew of rowing warriors, women as well as men. But this is only possible (barely) because Paikea’s demonstration of prowess is so spectacular and so public that failure to acknowledge it would be disrespecting the ancestors. The real ending of the film comes earlier, with Pai on the whale as it slowly submerges, taking her with it into the depths. Her grandfather is right: like Ada, Paikea has no real place in her culture. If she cannot conform, she must die or, at best, live as a lonely outcast: the witch by the forest clearing; the madwoman in the attic; the Yorkshire parson’s three “touched” daughters, circling the kitchen table reciting passages of their novels to each other.

This is the near-universal fate of women who defy the roles set for them and demand real concessions to their talent. People may love talented women, often because of their gift; but the all-too-common aftermath consists of testing whether these women will give up their vocation as proof of love, abjure or dilute it as a way of fitting into the glass slipper or the red-hot iron shoe. There are no meet companions for women like Ada; just as Tsars immured their daughters in the terem because they had no companions of their rank, female creators must live alone and die early – or live long enough to curse the despised talent that devours them like fire consumes faggots.

Persistent, the Yorkshire parson’s daughters kept “scribbling” and paid to get their books published: Wuthering Heights, Jane Eyre, Agnes Grey. The one who survived long enough to bend to conventionality stopped writing after her marriage; she died soon afterwards, during a troubled pregnancy imposed by her husband to seal her surrender. As the least conventional of these three sisters intuited, only elementals can be mates to such women: the Moon; the Heath; the Wind; the Sky; the Sea.

Elizabeth Lynn reached the same conclusion in The Woman Who Loved the Moon.  So did Diane Duane in Lior and the Sea. The story, part of Duane’s Middle Kingdoms universe, appeared in Moonsinger’s Friends, a collection in honor of André Norton. In it, unlike in Le Guin’s Earthsea, female wizards wield rods of power without celibacy strictures and with nary a ripple in the society. Lior is a very powerful Rodmistress who has chosen to practice in a small fishing village even though she could be the equivalent of an Archmage. She has willing and happy bedmates of both genders, yet none is her equal; none comprehends her magic.

In her aloneness, Lior talks with the Sea. And just as the whales hear Paikea, the Sea hears Lior, and learns to love her. It sends avatars to woo her: first a beautiful horse; then an equally beautiful man. Lior finally has a true companion who fills the chambers of her soul. But in the end, the Sea must return to its element. Faced with her possible futures, Lior decides to join the Sea — just like Ada and Paikea in their far likelier bleak alternative endings:

“But few who say so have stood on that jeweled beach by night and heard the Sea whisper again and again, as if to another self:

“You dared…”

Images and sounds: Ada (Holly Hunter) and Flora (Anna Paquin) McGrath, The Piano; Paikea Apirana (Keisha Castle-Hughes), Whale Rider; Ada’s signature theme by Michael Nyman.

Skin Deep

Tuesday, October 25th, 2011

One of my readers brought James Howard Kunstler to my attention – yet another cranky prophet-wannabe with no credentials in the domains he discusses.  A few years ago, he opined on tattoos thusly: “Tattooing has traditionally been a marginal activity among civilized people, the calling card of cannibals, sailors, and whores. The appropriate place for it is on the margins, in the back alleys, the skid rows. The mainstreaming of tattoos is a harbinger of social dysfunction.”  You see, social collapse has nothing to do with predatory banks, preemptive invasions, punitive theocracies, unequal distribution of wealth.  No sir!  Tattooing is the true cause of the impending apocalypse.  Along with rock music, long-haired men and women in the professional work force.

I have mentioned a few times that my father’s side were seamen – captains and engineers in the merchant marine.  They lived the hard lives of sailors, probably softened by their relative status within the iron hierarchy of ships.  Several died away from home, including my grandfather and two of my father’s brothers who died in their twenties of TB, a perennial scourge of the profession back then.  We don’t even know where some are buried.  In their brief shore leaves, I suspect many of them got tattoos.  I recall seeing a shadow under the thin fabric of my eldest uncle’s summer shirt, but I didn’t get to ask him before death took him.

This August, I got a tattoo.  I asked my friend Heather Oliver (whose artwork graces this site and my stories) to create a design for me.  She rose to the challenge magnificently.  By this act I wanted to honor my father’s line, now going extinct (I’m the sole twig left of that once-great tree); to mark the narrow escape from my first brush with cancer; and to remind myself that I should try to finish and publish my stories before the Hunter stoops on me for the final time.

Tattooing means different things across people and cultures – but it’s interesting to consider that outside the West, tattooing done willingly was often a status symbol, from the Scythians to the Maori.  To a large extent, it is also considered a rite of passage and/or a signal of entry to a soldier-like fraternity, whether this is the army, a criminal organization, a prison group or the Knights of St. John; in this guise, the practice has been associated with masculine “bravery” (since it involves pain) and group identity.

These aspects of the process are highlighted in one of the best SF novels, Donald Kingsbury’s Courtship Rite: an Earth ship has ended up on a planet whose lifeforms are poisonous, forcing the human settlers into carefully regulated cannibalism – although they have retained enough technology to engineer some foodstuffs.  Children are raised communally and watched for signs of a talent.  When one is discerned, they receive their first tattoo, become members of an extended family and acquire human status (aka: they’re no longer potential food).

In my own stories, the Koredháni people, who consciously decided to adapt themselves to their new planetary home, are matrilineal and polyandrous because of a dearth of women (they also hail from the Minoans, who seem to have had at least one of these tendencies).  The second night after a handfasting, the co-husbands give the newcomer a tattoo (using nanotech, which is prominent in their living arrangements).  The design, chosen by the newcomer, almost invariably marks his previous allegiance or provenance, so that memory of his lineage is kept alive even after he is part of his wife’s hearth.

On the individual level, people often get a tattoo to decorate a scar – a gesture of defiance against the ravages of illness.  Also, recent technological advances have made possible the use of tattoos as medical monitors: glucose meters for diabetics, for example, removing the need for constant needle jabs.  The flip side of all this neat stuff, of course, is forcible tattooing, which predated the Nazi concentration camps: slaves and soldiers were routinely tattooed in the Roman empire to prevent them from running off. It was deemed more humane than branding.  Its Hellenic name “stigma” (dotting) led to the term stigmatize, with its known connotations.

For me, it’s interesting to think that tattoos, despite their vaunted “permanence”, are among the first of our parts to disappear when our bodies rest in fire, water or earth – unless we have the luck of the young Pazyryk warrior priestess who merited six horses in her journey to the other world.  Her kurgan was filled with water which then froze.  So when Natalia Polosmak opened the tomb in 1993, its occupant emerged almost entirely intact, from her wild silk blouse to her gold-inlaid felt headdress… and the ravishing soot tattoo on her shoulder.  She was a shaman; and in the end, tattoos are talismans: a way of reconnecting with what we sundered from when we became (perhaps too) self-aware.

Images: 1st, Deena Metzger’s famous self-portrait; 2nd, a recreation of the Pazyryk shaman’s tattoo; 3rd, Candleflame Sprite; design by Heather D. Oliver, execution by Deirdre Doyle.

If They Come, It Might Get Built

Monday, October 3rd, 2011

Sic itur ad astra (“Thus you shall go to the stars.”)
— Apollo, in Virgil’s Aeneid

Last Friday, several hundred people from a wide cross-section of the sciences and humanities converged on Orlando, Florida, to participate in the DARPA-sponsored 100-Year Starship symposium.  As the name suggests, this was a preliminary gathering to discuss the challenges facing a multi-generation starship, from propulsion systems to adapting to extraterrestrial homes.

I was one of the invited speakers.  I won’t have the leeway of long decompression, as I must immediately submerge for a grant.  However, I think it’s important to say a few words about the experience and purpose of that gathering.  Given the current paralysis of NASA, activities like this are sorely needed to keep even a tiny momentum forward on the technologies and mindsets that will make it possible to launch long-term crewed ships.

Open to the public, the event lasted two and a half days, the half being summations.  Content-wise, half was about the usual preoccupations: propulsion systems, starship technologies, habitats.  The other half covered equally important but usually neglected domains: biology, society, ethics, communicating the vision.  The talks were brief – we were each given 20 minutes total – and varied from the very broad to the very specific.  The presentations that I attended were overall high quality (though I personally thought “exotic science” should have been folded into the SF panels); so were the questions and discussions that followed them.  The age distribution was encouraging and there were many women in the audience, of which more anon.

Some aspects of the symposium did dismay me.  Structurally, the six or seven simultaneous tracks (with their inevitable time slippages) not only made it hard to go to specific talks but also pretty much ensured that the engineers would go to the propulsion talks, whereas the historians would attend those about ethics.  The diversity quotient was low, to put it mildly: a sea of pale faces, almost all Anglophones.  Most tracks listed heavily to the XY side.  This was particularly egregious in the two SF author panels, which sported a single woman among nine men – none with a biological background but heavy on physicists and AI gurus.  It was also odd to see long biosketches of the SF authors but none of the presenters in the official brochure.

Most disquieting, I sensed that there is still no firm sense of limits and limitations.  This persistence of triumphalism may doom the effort: if we launch starships, whether of exploration or settlement, they won’t be conquerors; their crews will be worse off than the Polynesians on their catamarans, the losses will be heavy, and their state at planetfall won’t resemble anything depicted in Hollywood SF.  Joanna Russ showed this well in We Who Are About To…  So did Chelsea Quinn Yarbro in Dead in Irons.  But neither story got the fame it deserves.

On the personal side, I had the pleasure of seeing old friends and finally seeing in the flesh friends whom I had only met virtually.  I was gratified to have the room overflow during my talk.  My greatest shock of happiness was to have Jill Tarter, the legend of SETI, the inspiration for Ellie Arroway in Contact, not only attend my talk but also ask me a question afterwards.

I hope there is sustained follow-up to this, because the domain needs it sorely.  Like building a great cathedral, it will take generations of steady yet focused effort to build a functional starship.  It will also require a significant shift of our outlook if we want to have any chance of success.  Both the effort and its outcome will change us irrevocably.  I will leave you with three snippets of my talk (the long version will appear in the Journal of the British Interplanetary Society):

“An alternative title to this talk is ‘Distant Campfires’. A Native American myth said that the stars are distant campfires, where our ancestors are waiting for us to join them in storytelling and potlatch feasts.  Reaching and inhabiting other planets is often considered an extension of human exploration and occupation of Earth but the analogy is useful only as a metaphor. To live under strange skies will require courage, ingenuity and stamina – but above all, it will require a hard look at our assumptions, including what it means to be human.”

.

“In effect, by sending out long-term planetary expeditions, we will create aliens more surely than by leaving trash on an uninhabited planet.  Our first alien encounter, beyond Earth just as it was on Earth, may be with ourselves viewed through the distorting mirror of divergent evolution.”

.

“If we seek our future among the stars, we must change for the journey – and for the destination.  Until now, we have participated in our evolution and that of our ecosphere opportunistically, leaving outcomes to chance, whim or short-term expedience.  In our venture outwards, we’ll have to overcome taboos and self-manage this evolution, as we seek to adapt to the new, alien worlds which our descendants will inhabit.

One part of us won’t change, though: if we ever succeed in making our home on earths other than our own, we will still look up and see patterns in the stars of the new night skies.  But we will also know, each time we look up, that we’re looking at distant campfires around which all our relatives are gathered.”

Images: 1st, sunset, September 27, 2011, Sarasota, Florida (photo, Athena Andreadis); 2nd, Spaceborn (artist, Eleni Tsami)

Are Textbooks Science Fiction?

Tuesday, September 27th, 2011

by Joan Slonczewski

Today I have the great pleasure of hosting my friend Joan Slonczewski, who will discuss how textbooks can fire the imagination of future scientists.  Dr. Slonczewski is Professor of Biology at Kenyon College, where she teaches, does research in microbiology, and writes a leading undergraduate textbook, Microbiology: An Evolving Science (W. W. Norton). She is also an SF author well-known for incorporating real science in her fiction, as highlighted by her justly famous A Door into Ocean.  Her recent SF novel, The Highest Frontier (Tor/Macmillan), shows a college in a space habitat financed by a tribal casino and protected from alien invasion by Homeworld Security.

When the film Avatar opened, it drew many critiques based on science. The planet Pandora could not exist around a gas giant; the neural-linked ecosystem would have no predators; and the Na’vi should have six limbs, like other Pandoran fauna. The greatest flaw was that the Na’vi have breasts, although their class of creatures is not mammalian. Non-mammals having breasts would be an error unthinkable in real science.

Yet I wonder what might happen if an introductory textbook in biology were to receive scrutiny similar to that of Avatar.  If non-mammals should not be shown with breasts, does it follow that true mammals, named for the mammary gland, should indeed show breasts? The typical textbook section on “mammalian diversity” shows scarcely a mammary gland. One would never guess that we drink milk from cattle, mares, camels, and reindeer. The more modern books do show prominent breasts on a human. In other words, a view of life surprisingly similar to Avatar.

I first saw the fictional aspect of textbooks from the viewpoint of a science fiction author writing a college text, Microbiology: An Evolving Science (W. W. Norton).  As a fiction author – my book A Door into Ocean won the Campbell Award – I well know the dilemma of “hard SF,” which aims to invent a future world of gadgets that don’t yet exist based on science that actually does. Even “hard” science fiction often dodges inconvenient points about exceeding the speed of light, breathing the air on any planet where the starship lands, and mating with the seductive native “aliens.”

A textbook, I thought, would be different. My coauthor John Foster would correct what I wrote, and our publisher provided a throng of editors and expert reviewers. The art budget paid for stunning visuals from a first-rate graphic arts firm whose artists actually check details in the primary literature.

Our early illusions about textual perfection fell away in the light of reviewer comments based on errors entrenched in other books, and editorial “corrections” that often made clearer English but muddier science. But the art process was what really made me think of fiction. Early on we chose a “palette” in which color conveys information: DNA was purple, RNA was blue, proteins red, yellow, or green. And cell interiors, with their nucleus, mitochondria, and so on, offered a rainbow of colors from lilac to salmon. Our color-coded figures are more than informative; they are gorgeously attractive, so much so that prospective adopters have been known to caress them on the page.

But DNA is not “really” purple, and RNA is not really blue. Chloroplasts are indeed green, as typically shown, but mitochondria are not red aside from a few of their iron-bearing proteins. And what of individual atoms as ray-traced blue and red balls and sticks? This aspect of science art goes beyond fiction – it is fantasy.

Despite their limitations, the visuals in a textbook do illustrate: they form a pattern in the reader’s mind, a pattern that deepens understanding of a concept. This aim of illustration is actually shared by the best science fiction. Frank Herbert’s Dune illustrates how water scarcity drives an ecosystem. Octavia Butler’s Lilith’s Brood illustrates how organisms trade genetic identity for survival.

So if textbook art is “fictional,” what needs to be “correct”? The mental patterns formed by the text and art need to be honest, to spark genuine insights that lead to understanding. A cell’s nucleus is not “really” lavender in color, but the colored shape draws attention to the nucleus as a compartment enclosing the precious DNA. By contrast, an image depicting the nuclear contents as spilling out of the cell would not yield insight, but confusion. A troubling new group of textbooks aims to sow such confusion – books with titles like Exploring Creation with Biology and The Lies of Evolution.  Such books aim to inoculate “inquiring preteens” against the founding principles of biology, geology, and cosmology.

If deliberate confusion is the worst sin of any book, the next worst sin is boredom. Teachers can make students read the most boring book – but will they stay awake?

A key decision we made for Microbiology: An Evolving Science was to tell stories. We told how Bangladeshi women taught the world to fight cholera. How life began out of atoms formed by stars that died long before our own sun was born. How a high school boy testified at the Scopes trial that humans evolved from microbes. How Louis Pasteur as a student discovered mirror symmetry in biomolecules – a tool that astrobiologists may use to reveal life on other worlds.

A textbook, like science fiction, should raise questions. Is there microbial life on Mars – and what might it look like? Textbooks should take the reader to new places where we’ve never been – and perhaps could never go, such as the interior of a cell, the electron cloud of an atom, or a planet where people have three sexes.  Like science fiction, a textbook should inspire people to learn more about real science, and even become scientists.  After Jurassic Park came out, some scientists felt embarrassed by the book’s technical flaws and its portrayal of money-mad dinosaur cloners.  But so many students came to Kenyon College wanting to clone dinosaurs that we founded a new program in molecular biology.

My latest work of fiction, The Highest Frontier, has already drawn complaints. The space elevator won’t work; the casino-financed satellite can’t be built; and the aliens could not really evolve like viruses. Let’s hope at least the book inspires students to pursue virology.

Athena’s coda:  Readers of this blog know the reasons why I detest Avatar, which go beyond its sloppy science; so do some attendees of Readercon 2010, because Joan and I had a lively exchange about it in a panel.  Even so, I entirely agree with what Joan says here, as attested by The Double Helix: Why Science Needs Science Fiction.

In my essays and talks, I have repeatedly used A Door into Ocean as an example of outstanding “hard” SF that does not trumpet its hardness and also contains the additional layers and questioning of consequences that make it compelling fiction.

I also had the privilege of reading the penultimate version of The Highest Frontier.  The novel is an unusual combination of space opera and grounded near-future extrapolation — and Harry Potter aficionados would love it if they found out about it (unfortunately unlikely, given the proliferation of unlinked subgenre ponds in speculative fiction).  It’s fascinating to compare and contrast it with Morgan Locke’s Up Against It, also from Tor.  Both are set in beleaguered space habitats where cooperative problem-solving is the only viable option; both literally brim with interesting concepts, vivid characters and exciting thought experiments.

The two novels are proof of three things: women can write stellar hard SF; scenarios for a long-term human presence in space that ignore biology (very broadly defined) are doomed; and I need not despair of finding SF works that engage me… provided that authors as talented as these continue to be published against least-common-denominator tides.

Images: 1st, a Sharer of Shora (from A Door into Ocean) as envisioned by Rowan Williams; 2nd, Slonczewski’s microbiology textbook opens with the NASA Phoenix lander and asks, “Is there life on Mars?”; 3rd, a glimpse of the habitat in The Highest Frontier.

High Frontiers and Cheap Snarks

Friday, September 23rd, 2011

Note: This article has a coda about the CERN neutrino results, which came out while I was writing it.

Two seemingly disparate but actually related items came up in the news recently. One was the discovery of a planet circling a close binary star system given the placeholder name of Kepler 16. The other was the publication of a viral protein’s crystal structure.

What, I hear you ask, do these have in common? Well, for one, both projects used crowdsourcing (now going by the PR-friendly term “citizen science”). The other commonality was the anti-scientist hype: the media trumpeted gleefully that non-scientists are more prescient and clever than scientists. In contrast to plodding experts, prophetic film directors (“OMG, Tatooine!”) and intrepid gamers simply vault over obstacles and gracefully yet squarely hit the target. Kinda like Luke Skywalker homing in on the tiny dot of vulnerability in the planet-sized Death Star with little flying experience and eyes wide shut because, ya know, the Force is with him.

Let’s parse the circumbinary planet first. Close to half the stars are in binary configurations, and about half of these have accretion disks. Hence, the likelihood of planets in such systems is very high. Astrophysicists’ models have shown that a planet can stably orbit either around one member of a widely separated pair or around a very tight pair. The first discovery of a planet circling a close binary dates from 1993 (or 2003, if one counts the final confirmation of the original observation). What makes the Kepler 16 system a first is that its planet appears to be smaller than Jupiter. As for prescience, beyond the astrophysicists’ theoretical calculations, Isaac Asimov had written Nightfall and Chesley Bonestell had painted Double Star long before Tatooine was even a solitary neuronal firing in George Lucas’ brain.

So now on to the crystal structure that “had stumped scientists for years but was solved by gamers in a few days”. To begin with, this was not the first crowdsourced science project, as touted. The honors for that must go to SETI@home, launched in 1999. There have been many others since, across disciplines. Beyond that, the people in the protein folding contest used a program developed by scientists (Foldit) and half of the dozen or so participating “gamers” were biologists themselves. Crucially, they were given NMR and X-ray diffraction data to constrain and guide their steps. Finally, the result (a model, which means it’s still hypothetical) primarily aided crystallographers in placing heavy metal elements so as to get well-formed crystals, whose X-ray patterns gave the real, definitive proof of the structure.

Parenthetically, protein folding is a topic of perennial fascination to both creationists and believers in the strong anthropic principle. Many non-biologists, including physicists who blithely delve into biology in popsci books, are fond of intoning ad nauseam that amino acid strings would take billyuns and billyuns of years to fold correctly – hence god/intelligent design/a privileged universe/fine tuning of constants. In fact, with one exception that I can think of, proteins that have been unraveled into amino acid strings never re/fold at all (nor do they fold efficiently or correctly in programs like Rosetta, which presume a complete lack of folding). Proteins fold as they get made, while they emerge from the ribosome. So they fold locally to achieve partial energy minima (so-called secondary structure) and these partly folded structures quickly coalesce into the final tertiary structure. On the technical side, making protein crystals is a difficult, delicate art – the biological equivalent of glass blowing. Like coaxing cells into growing, it’s part craft, part experience-based knowledge so deep that it becomes instinct.
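For readers who like to see the arithmetic behind the “billyuns and billyuns of years” claim, here is a toy sketch in Python – my own back-of-envelope addition, not part of the original argument. It uses the standard Levinthal-paradox conventions (roughly three backbone conformations per residue, about 10^13 conformational samples per second), which are illustrative assumptions rather than measurements; the function names are mine:

```python
# Back-of-envelope contrast between brute-force conformational search
# (the Levinthal paradox) and sequential, local folding at the ribosome.
# The constants below are the usual textbook assumptions, not measured values.

SECONDS_PER_YEAR = 3.15e7

def random_search_years(residues, states_per_residue=3, samples_per_sec=1e13):
    """Years needed to enumerate every conformation of a fully unfolded chain."""
    conformations = states_per_residue ** residues
    return conformations / samples_per_sec / SECONDS_PER_YEAR

def cotranslational_trials(residues, states_per_residue=3):
    """Rough trial count if each residue settles into a local energy minimum
    as it emerges from the ribosome: the search is additive, not multiplicative."""
    return states_per_residue * residues

if __name__ == "__main__":
    n = 100  # a smallish protein
    print(f"Brute-force search: ~{random_search_years(n):.1e} years")   # ~1.6e27 years
    print(f"Sequential local folding: ~{cotranslational_trials(n)} trials")
```

The point of the contrast: the astronomical number belongs to a search no real protein ever performs. Folding as the chain emerges turns a multiplicative search space into a roughly additive one, which is why the creationist arithmetic proves nothing.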

Involving many people in parsing scientific data is a tremendous idea: it gets non-scientists familiar with the concepts, process and vocabulary of science, it can accelerate portions of the analysis, and it helps forge a sense of collective purpose and achievement. The great success of crowdsourcing highlights the unique human ability to notice anomalies instead of undeviatingly following protocols as computers do. This human attribute, not so incidentally, is one of the strongest arguments for sending crewed exploration teams to places like Mars.

At the same time, scientists are not stodgy techs in lab coats (whose wearing is almost entirely confined to movies; in real life, MDs are far likelier to sport such togs). To be a good scientist, let alone a great one, you must possess not only knowledge, rigor and stamina but also imagination and the informed, trained intuition that enables you to recognize patterns as well as deviations from them (aka “the prepared mind”). And distributed data churning won’t replace trained experimentation and thinking any time soon – or later.

Anglo-Saxon cultures have a strong anti-intellectual streak. Some of it is the lingering mystique of the British gentleman dilettante; some is the American obsession with self-determination. Yet the same people who treat scientists like class enemies and jeer at their painstaking mindsets and work habits follow woo gurus – from homeopaths to investment advisors to Teabagger televangelists – with unsurprising outcomes.

If people really think that they can do science better than trained scientists, I invite them to apply this reasoning to other domains and have the next person they meet on the street do their root canals or wire their house for electricity. Those who participate in citizen science are praiseworthy, citizens in the full sense of the word. Nothing but good can come from the practice – except for the demagogic triumphalism of those journalists whose self-satisfied ignorance vitiates every hard-won gain achieved by the scientist/layperson partnerships.

It’s a natural human reaction to ridicule what one fears and/or doesn’t understand, though adults are supposed to mature beyond this juvenile tendency. The question then becomes why science, whose record is far better than that of just about any other human endeavor, has become a bugaboo rather than a vision and an integral part of this culture. It’s a question well worth remembering when all the GOP presidential candidates fall all over themselves to deny evolution – and one of them might lead what is still struggling to remain the most powerful country on this planet.

Coda: The news that CERN’s OPERA project recorded anomalous neutrino speeds got its share of “Roll over, Einstein” smartass quotes although, thankfully, the hype didn’t reach the proportions of NASA’s “arsenic” bacteria. Neutrinos are literally the changelings of the particle clan, but the claim is far from proven and the paper has still not been peer-reviewed.

If it proves true, it won’t give us hyperdrives nor invalidate relativity. What it will do is place relativity in an even larger frame, as Einsteinian theory did to its Newtonian counterpart. It may also (finally!) give us a way to experimentally test string theory… and, just maybe, open the path to creating a fast information transmitter like the Hainish ansible, proving that “soft” SF writers like Le Guin may be better predictors of the future than the sciency practitioners of “hard” SF.

Images: 1st, Double Star by Chesley Bonestell (used as a Life cover in 1954); 2nd, a schematic of a protein structure; 3rd, Neutrinos by ever-sharp xkcd.

Safe Exoticism, Part 2: Culture

Friday, September 2nd, 2011

Note: This 2-part article is an expanded version of the talk I gave at Readercon 2011.

Part 1: Science

Recently, I read a round table discussion at the World SF blog whose participants were international women SF/F writers.  The focus was, shall we say, intersectional invisibility.  One item that came up was the persistence of normalizing to Anglo standards.

Also recently I started Patrick Leigh Fermor’s Mani travelogue.  In the prologue I ran into the following sentence: “There is not much here about his wartime service in Crete, where for two years in the mountains he organized the resistance to the Nazi occupation.”  In other words, for those who read this introduction (or Anthony Lane’s and David Mason’s swooning accounts of Fermor), the Cretans became sidekicks in their own country, in their own struggle – like the Arabs in T. E. Lawrence’s memoirs.

There are two asides to this.  Fermor’s best-known exploit, the Kreipe kidnapping, conferred no strategic or tactical advantage, although the German reprisals were very real: they slaughtered and burned the village of Anóghia, the home of bard Níkos Ksiloúris.  Like most of its kind, the action served to maintain Allied control over the “unruly” native resistance.  Additionally, Fermor was frequently airlifted to Cairo, to decompress and receive his wages.  The Cretans were not invited along.  They remained in Crete, subject to said reprisals.  But Fermor was British gentry.  It was his version of reality that got heard, became canon history and granted him fame and fortune.

In Part 1, I said that if I wrote about New Orleans, readers and critics would be on me like a brick avalanche.  I followed the recent conniptions of the British SF contingent over Connie Willis’ depiction of WWII London.  She got terms wrong, she got details wrong, blah blah blah.  Care to know how many things Greg Benford got wrong about Bronze Age and contemporary Mycenae in Artifact?  Care to know what I think of Neil Gaiman’s “There is nothing uniquely Greek about the Odyssey”?  For that matter, you hear endless hymns about Ian McDonald’s books – until you discuss Brasyl with a Brazilian or Cyberabad Days with an Indian.

Myths, and history that has receded into legend, reach us already as palimpsests.  When The Iliad became standardized, the events it recited were already half a millennium old.  Such stories bear all kinds of revisionist tellings, and the more resonant they are the more ways they can be re/told.  If you want to see a really outstanding retelling of Oedípus Rex from Iocáste’s point of view, watch Denis Villeneuve’s film Incendies, based on Wajdi Mouawad’s play Scorched.  However, whenever people embed stories in a culture they haven’t lived in and don’t know intimately, I’m wary.  This, incidentally, is true across genres.  For example, I can’t quite trust Martin Cruz Smith’s Russia, although Arkady Renko is a truly stellar creation.  If you read John Fowles’ The Magus side by side with his French Lieutenant’s Woman, the disparity in authenticity is palpable.  Marguerite Yourcenar knew Hellás; Mary Renault, not so much.

There is nothing wrong with writers using cultures other than their own, especially if they’re good storytellers with sensitive antennae.  But when such works are taken for the real thing, the real thing often gets devalued or rejected outright, just as real science gets rejected in SF in favor of notions that are false or obsolete and often duller than the real thing.  It’s like people used to canned orange juice disdaining the freshly squeezed stuff because it contains pulp.  Or like John Ruskin, who formed his opinion of women’s bodies from classical statues and was then struck impotent when he discovered that real women possess pubic hair.

There’s another equivalence between science and non-Anglo cultures in speculative fiction.  Namely, the devil’s in the details.  You need to have absorbed enough of your subject’s essence to know what counts, what needs to be included for verisimilitude.  You may get the large picture right by conscientious research; you may get by with bluffing – but small things give away the game even when the bigger items pass cursory inspection.  The diminutive of Konstantin in Russian is not Kostyn, it’s Kostya.  Hellenic names have vocative endings that differ from the nominative.  The real thing is both more familiar and more alien than it appears in stories written by cultural tourists.  And often it’s the small touches that transport you inside another culture.

When outsiders get things right, they get saluted as honorary members of the culture they chose to depict and deserve the accolade.  Outsiders can sometimes discern things in a culture that embedded insiders cannot see.  Mark Mazower wrote riveting histories of Salonica and my people’s resistance during WWII that I recommend to everyone, including Hellenes.  Roderick Beaton and Paul Preuss wrote absorbing novels set in Crete that are inseparable from their setting (Ariadne’s Children and Secret Passages).  And Ellen Frye’s The Other Sappho may have dated considerably in terms of its outlook – but you can tell that Frye lived in Hellás for a long time and spoke idiomatic Hellenic, whereas Rachel Swirsky’s A Memory of Wind suffers from a generic setting despite its considerable other merits.

Then we have the interesting transpositions, like Jack McDevitt’s A Talent for War.  If you don’t know he’s loosely retelling the wars of the Hellenic city-states against the Persians, you enjoy the story just fine.  But if you do know, the underdrone adds emotional resonance.  By knowing Hellenic history past the surface, McDevitt got something else right almost inadvertently: Christopher Sim is a parallel-universe portrait of Áris Velouchiótis, the most famous WWII resistance leader in Hellás.  On the other hand, Ian Sales turned Euripides’ careful psychological setup into wet cement in Thicker than Water, his SF retelling of Ifighénia in Tavrís (to say nothing of the name changes, with Orris and Pyle for Oréstis and Pyládhis winning the tin ear award).

Previously, the costs and intrinsic distortions of translation stood between stories of other cultures told by their own members and Anglophone readership.  With SF/F writers of other nations increasingly writing in more-than-fluent English, this is no longer the case.  The double-visioned exiles that camp outside the gates of SF/F might be just what the genre needs to shake it out of its self-satisfied monoculture stupor.  The best-known exemplar of this is Isak Dinesen (Karen Blixen), whose bewitching stories have never gone out of print, though her Kenyan memoirs have their share of noble savage/colonial glamor problems.  Of course, one swallow does not bring the spring: reading one author per culture won’t result in major shifts; singletons cannot serve as blanket representatives of their culture — they remain individuals with unique context-colored viewpoints.

I think we should encourage cross-fertilization or, to use a biological term, back-breeding to the original stock.  We need to listen to the voices from outside the dominant culture, if we don’t want speculative fiction to harden into drab parochial moulds.  We need to taste the real thing, even if it burns our tongues.  Burt Lancaster (but for the accent) was a memorable Don Fabrizio in the film version of Giuseppe di Lampedusa’s Il Gattopardo; but Ghassan Massoud swept the floor with his Anglo co-stars as Salahu’d-Din in Kingdom of Heaven.  Although, to be thorough, Salahu’d-Din was a Kurd.  So he might have had blue or gray eyes.

Images: 1st, Peter O’Toole in another quintessence of palatable exoticism, David Lean’s Lawrence of Arabia;  2nd, Memoirs of Hadrian by Marguerite Yourcenar; 3rd, Lubna Azabal as Nawal Marwan in Villeneuve’s Incendies.

Related entries:

Iskander, Khan Tengri

Being Part of Everyone’s Furniture; Or: Appropriate Away!

A (Mail)coat of Many Colors: The Songs of the Byzantine Border Guards

Evgenía Fakínou: The Unknown Archmage of Magic Realism

Added note:  Almost concurrently, Aliette de Bodard and Cora Buhlert discuss aspects of the same issue.  The synchronicity suggests that the time may be ripe for a change!

Safe Exoticism, Part 1: Science

Wednesday, August 31st, 2011

Note: This 2-part article is an expanded version of the talk I gave at Readercon 2011.

I originally planned to discuss how writers of SF need to balance knowledge of the scientific process, as well as some concrete knowledge of science, with writing engaging plots and vivid characters. But the more I thought about it, the more I realized that this discussion runs a parallel course with another; namely, depiction of non-Anglo cultures in Anglophone SF/F.

Though the two topics appear totally disparate, science in SF and non-Anglo cultures in SF/F often share the core characteristic of safe exoticism; that is, something which passes as daring but in fact reinforces common stereotypes and/or is chosen so as to avoid discomfort or deeper examination. A perfect example of both paradigms operating in the same frame and undergoing mutual reinforcement is Frank Herbert’s Dune. This is why we get sciency or outright lousy science in SF and why Russians, Brazilians, Thais, Indians and Turks written by armchair internationalists are digestible for Anglophone readers whereas stories by real “natives” get routinely rejected as too alien. This is also why farang films that attain popularity in the US are instantly remade by Hollywood in tapioca versions of the originals.

Before I go further, let me make a few things clear. I am staunchly against the worn workshop dictum of “Write only what you know.” I think it is inevitable for cultures (and I use that term loosely and broadly) to cross-interact, cross-pollinate, cross-fertilize. I myself have seesawed between two very different cultures all my adult life. I enjoy depictions of cultures and characters that are truly outside the box, emphasis on truly. At the same time, I guarantee you that if I wrote a story embedded in New Orleans of any era and published it under my own culturally very identifiable name, its reception would be problematic. Ditto if I wrote a story using real cutting-edge biology.

These caveats do not apply to secondary worlds, which give writers more leeway. Such work is judged by how original and three-dimensional it is.  So if a writer succeeds in making thinly disguised historical material duller than it was in reality, that’s a problem. That’s one reason why Jacqueline Carey’s Renaissance Minoan Crete enthralled me, whereas Guy Gavriel Kay’s Byzantium annoyed me. I will also leave aside stories in which science is essentially cool-gizmos window dressing. However, use of a particular culture is in itself a framing device, and science is rarely there solely for the magical outs it gives the author: it’s often used to promote a world view. And when we have active politics against evolution and in favor of several kinds of essentialism, this is something we must keep not too far back in our minds.

So let me riff on science first. I’ll restrict myself to biology, since I don’t think that knowledge of one scientific domain automatically confers knowledge in all the rest. Here are a few hoary chestnuts that are still in routine use (the list is by no means exhaustive):

— Genes determining high-order behavior, so that you can instill virtue or Mozartian composing ability with simple, neat, trouble-free cut-n-pastes (ETA: this trope includes clones, who are rarely shown to be influenced by their many unique contexts). It runs parallel with optimizing for a function, which usually breaks down to women bred for sex and men bred for slaughter. However, evolution being what it is, all organisms are jury-rigged and all optimizations of this sort result in instant dead-ending. Octavia Butler tackled this well in The Evening and the Morning and the Night.

— The reductionist, incorrect concept of selfish genes. This is often coupled with the “women are from Venus, men are from Mars” evo-psycho nonsense, with concepts like “alpha male rape genes” and “female wired-for-coyness brains”. Not surprisingly, these play well with the libertarian cyberpunk contingent as well as the Viagra-powered epic fantasy cohort.

— Lamarckian evolution, aka instant effortless morphing, which includes acquiring stigmata from VR; this of course is endemic in film and TV SF, with X-Men and The Matrix leading the pack – though Star Trek was equally guilty.

— Its cousin, fast speciation (Greg Bear’s cringeworthy Darwin’s Radio springs to mind; two decent portrayals, despite their age, are Poul Anderson’s The Man Who Counts and The Winter of the World).  Next to this is rapid adaptation, though some SF standouts managed to finesse this (Joan Slonczewski’s A Door into Ocean, Donald Kingsbury’s Courtship Rite).

— The related domain of single-note, un-integrated ecosystems (what I call “pulling a Cameron”). As I mentioned before, Dune is a perfect exemplar though it’s one of too many; an interesting if flawed one is Mary Doria Russell’s The Sparrow. Not surprisingly, those that portray enclosed human-generated systems come closest to successful complexity (Morgan Locke’s Up Against It, Alex Jablokov’s River of Dust).

— Quantum consciousness and quantum entanglement past the particle scale. The former, Roger Penrose’s support notwithstanding, is too silly to enlarge upon, though I have to give Elizabeth Bear props for creative chutzpah in Undertow.

— Immortality by uploading, which might as well be called by its real name: soul and/or design-by-god – as Battlestar Galumphica at least had the courage to do. As I discussed elsewhere, this is dualism of the hoariest sort and boring in the bargain.

— Uplifted animals and intelligent robots/AIs that are not only functional but also think/feel/act like humans. This paradigm, perhaps forgivable given our need for companionship, was once again brought to the forefront by the Planet of the Apes reboot, but rogue id stand-ins have run rampant across the SF landscape ever since it came into existence.

These concepts are as wrong as the geocentric universe, but the core problems lie elsewhere. For one, SF is way behind the curve on much of biology, which means that stories could be far more interesting if they were au courant. Nanobots already exist; they’re called enzymes. Our genes are multi-cooperative networks that are “read” at several levels; our neurons, ditto. I have yet to encounter a single SF story that takes advantage of the plasticity (and potential for error) of alternative splicing or epigenetics, of the left/right brain hemisphere asymmetries, or of the different processing of languages acquired in different developmental windows.

For another, many of the concepts I listed are tailor-made for current versions of triumphalism and false hierarchies that are subtler than their Leaden Age predecessors but just as pernicious. For example, they advance the notion that bodies are passive, empty chassis which it is all right to abuse and mutilate and in which it’s possible to custom-drop brains (Richard Morgan’s otherwise interesting Takeshi Kovacs trilogy is a prime example). Perhaps taking their cue from real-life US phenomena (the Teabaggers, the IMF and its minions, Peter Thiel…) many contemporary SF stories take place in neo-feudal, atomized universes run amuck, in which there seems to be no common weal: no neighborhoods, no schools, no people getting together to form a chamber music ensemble, play soccer in an alley, build a telescope. In their more benign manifestations, like Iain Banks’ Culture, they attempt to equalize disparities by positing infinite resources. But they hew to disproved paradigms and routinely conflate biological with social Darwinism, to the detriment of SF.

Mindsets informed by these holdovers won’t help us understand aliens of any kind or launch self-enclosed sustainable starships, let alone manage to stay humane and high-tech at the same time. Because, let’s face it: only the long generation ships will get us past LEO. FTLs, wormholes, warp drives… none of these will carry us across the sea of stars. It will be the slow boats to Tau Ceti, like the Polynesian catamarans across the Pacific.

You may have noticed that many of the examples that I used as good science have additional out-of-the-box characteristics. Which brings us to safe exoticism on the humanist side.

Part 2: Culture

Images: 1st, Bunsen and his hapless assistant, Beaker (The Muppet Show); 2nd, the distilled quintessence of safe exoticism: Yul Brynner in The King and I.

Related entries:

SF Goes McDonald’s: Less Taste, More Gristle

To the Hard Members of the Truthy SF Club

Miranda Wrongs: Reading too Much into the Genome

Ghost in the Shell: Why Our Brains Will Never Live in the Matrix

“Are We Not (as Good as) Men?”

Tuesday, August 23rd, 2011

– paraphrasing The Sayer of the Law from H. G. Wells’ The Island of Dr. Moreau

When franchises get stale, Hollywood does reboots — invariably a prequel that tells an origin story retrofitted to segue into already-made sequels either straight up (Batman, X-Men) or in multi-universe alternatives (Star Trek). Given the iconic status of the Planet of the Apes original, a similar effort was a matter of time and CGI.

In Rise of the Planet of the Apes, we get the origin story with nods to the original: throwaway references to the loss of crewed starship Icarus on its way to Mars; a glimpse of Charlton Heston; the future ape liberator playing with a Lego Statue of Liberty. As Hollywood “science” goes, it’s almost thoughtful, even borderline believable. The idea that the virus that uplifts apes is lethal to humans is of course way too pat, but it lends plausibility to the eventual ape dominion without resorting to the idiotic Ewok-slings-overcome-Stormtrooper-missiles mode. On the other hand, the instant rise to human-level feats of sophistication is ridiculous (of which more anon), to say nothing of being able to sail through thick glass panes unscathed.

The director pulled out all the stops to make us root for the cousins we oppress: the humans are so bland they blend with the background, the bad guys mistreat the apes with callous glee… and the hero, the cognitively enhanced chimpanzee Caesar (brought to disquieting verisimilitude of life by Andy Serkis), not only fights solely in defense of his chosen family… but to underline his messianic purity he has neither sex drive nor genitals. This kink underlines the high tolerance of US culture for violence compared to its instant vapors over any kind of sex; however, since Project Nim partly foundered on this particular shoal, perhaps it was a wise decision.

As it transpires, Caesar is exposed to little temptation to distract him from his pilgrimage: there are no female hominids in the film, except for the maternal vessel who undergoes the obligatory death as soon as she produces the hero, and a cardboard cutout helpmate there to mouth the variants of “There are some things we weren’t meant to do” — and as assurance that the human protagonist is not gay, despite his nurturing proclivities. Mind you, the lack of a mother and her female alliances would make Caesar (augmented cortex notwithstanding) a permanent outcast among his fellows, who determine status matrilinearly given the lack of defined paternity.

Loyal to human tropes, Caesar goes from Charly to Che through the stations-of-the-cross character development arc so beloved of Campbell/ites. Nevertheless, we care what happens to him because Serkis made him compelling and literally soulful. Plus, of course, Caesar’s cause is patently just. The film is half Spartacus turning his unruly gladiators into a disciplined army, half Moses taking his people home — decorated with the usual swirls of hubris, unintended consequences, justice, equality, compassion, identity and empathy for the Other.

Needless to say, this reboot revived the topic of animal uplift, a perennial favorite of SF (and transhumanist “science” which is really a branch of SF, if not fantasy). Human interactions with animals have been integral to all cultures. Myths are strewn with talking animal allies, from Puss in Boots to A Boy and His Dog. Beyond their obvious practical and symbolic uses, mammals in particular are the nexus of both our notions of exceptionalism and our ardent wish for companionship. Our fraught relationship with animals also mirrors preoccupations of respective eras. In Wells’ Victorian England, The Island of Dr. Moreau struggled with vivisection whereas Linebarger’s Instrumentality Underpeople and the original Planet of the Apes focused on racism (plus, in the latter, the specter of nuclear annihilation). Today’s discussions of animal uplift are really a discussion over whether our terrible stewardship can turn benign — or at least neutral — before our inexorable spread damages the planet’s biosphere past recovery.

When SF posits sentient mammal-like aliens, it usually opts for predators high in human totem poles (Anderson’s eagle-like Ythrians, Cherryh’s leonine Hani). On the other hand, SF’s foremost uplift candidates are elephants, cetaceans – and, of course, bonobos and chimpanzees. All four species share attributes that make them theoretically plausible future companions: social living, so they need to use complex communication; relative longevity, so they can transmit knowledge down the generations; tool use; and unmistakable signs of self-awareness.

Uplift essentially means giving animals human capabilities – primary among them high executive functions and language. One common misconception seems to be that if we give language to near-cousins, they will end up becoming hairy humans. Along those lines, in Rise chimpanzees, gorillas and orangutans are instantly compatible linguistically, emotionally, mentally and socially. In fact, chimpanzees are far closer to us than they are to the other two ape species (with orangutans being the most distant). So although this pan-panism serves the plot and prefigures the species-specific occupations shown in the Ape pre/sequels, real-life chances of such coordination, even with augmentation, are frankly nil.
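For scale, here are rough divergence times (rounded ballparks from the primate literature; treat them as order-of-magnitude figures, not precise dates) — a chimpanzee’s last common ancestor with us is millions of years more recent than its last common ancestor with an orangutan:

    # Approximate divergence times in millions of years ago (Mya);
    # rounded ballpark figures, not precise estimates.
    divergence_mya = {
        "human vs chimpanzee/bonobo": 7,
        "human+chimp vs gorilla": 9,
        "African apes vs orangutan": 14,
    }
    for pair, mya in divergence_mya.items():
        print(f"{pair}: ~{mya} Mya")

On this scale, chimpanzees really are closer to us than they are to orangutans — which is the crux of the pan-panism problem.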

There is, however, a larger obstacle. Even if a “smart bomb” could give instant language processing capability, it would still not confer the ability to enunciate clearly, which is determined by the configuration of the various mouth/jaw/throat parts. Ditto for bipedal locomotion. Uplift caused by intervention at whatever level (gene therapy, brain wiring, grafts) cannot bring about coordinated changes across the organism unless we enter the fantasy domain of shapeshifting. This means that a Lamarckian shift in brain wiring will almost certainly result in a seriously suboptimal configuration unlikely to thrive individually or collectively. This could be addressed by singlet custom generation, as is shown for reynards in Crowley’s Beasts, but it would make such specimens hothouse flowers unlikely to propagate unaided, much less become dominant.

In this connection, choosing to give Caesar speech was an erosion of his uniqueness. Of course, if bereft of our kind of speech he would not be able to give gruff Hestonian commands to his army: they would be reliant on line of sight and semaphoring equivalents. However, sticking to complex signed language (which bonobos at least appear capable of, if they acquire it within the same developmental time window as human infants) would keep Caesar and his people uncanny and alien, underlining the irreducible fact of their non-human sentience.

Which brings us to the second fundamental issue of uplift. Even if we succeed in giving animals speech and higher executive functions, they will not be like us. They won’t think, feel, react as we do. They will be true aliens. There is nothing wrong with that, and such congress might give us a preview of aliens beyond earth, should SETI ever receive a signal. However, given how humans treat even other humans (and possibly how Cro-Magnons treated Neanderthals), it is unlikely we’ll let uplifted animals go very far past pet, slave or trophy status. In this, at least, Caesar’s orangutan councillor is right: “Human no like smart ape,” no matter how piously we discuss the ethics of animal treatment and our responsibilities as technology wielders.

Images: top, Caesar; bottom, Neanderthal reconstruction (Kennis & Kennis, National Geographic). What gazes out of those eyes is — was — human.

The Unknown Archmage of Magic Realism

Thursday, August 11th, 2011

Between long hours at the lab and a bout of lingering illness, I have let the blog go quiet for a while.

However, I haven’t been totally inactive. A year ago, I wrote an essay about the fact that writers feel free to use Hellenic contexts (myths, history, location), blithely assuming they know my culture well enough to do so convincingly. In that essay, I also stated that Hellás may be home to the best magic realist alive right now: Evgenía Fakínou. As a follow-up, I wrote a brief introduction to her work, which appeared today at the SFF Portal helmed by Val Grimm. Here is the conclusion:

“Fakínou’s books are full of vision quests, awakenings, boundary crossings. All have open endings, with their protagonists poised at thresholds on the last page. At the same time, they make their readers whole by reclaiming a past that might have led to an alternative future. Fakínou is a windwalker, a weaver of spider silk. I’m sorry she is not world-famous, but even sorrier for the dreamers who will never get a chance to lose – and find – themselves in her work.”

Image: Astradhení (Starbinder), Evgenía Fakínou’s first novel.

The Death Rattle of the Space Shuttle

Monday, July 25th, 2011

I get out of my car,
step into the night,
and look up at the sky.
And there’s something
bright, traveling fast.
Look at it go!
Just look at it go!

Kate Bush, Hello Earth

[The haunting a cappella chorus comes from a Georgian folk song, Tsin Tskaro (By the Spring)]

I read the various eulogies, qualified and otherwise, on the occasion of the space shuttle’s retirement.  Personally, I do not mourn the shuttle’s extinction, because it never came alive: not as engineering, not as science, not as a vision.

Originally conceived as a reusable vehicle that would lift and land on its own, the shuttle was crippled from the get-go.  Instead of being an asset for space exploration, it became a liability – an expensive and meaningless one, at that.  Its humiliating raison d’être was to bob in low earth orbit, becoming a toy for millionaire tourists by giving them a few seconds of weightlessness.  The space stations it serviced were harnessed into doing time-filling experiments that did not advance science one iota (with the notable exception of the Hubble), while most of their occupants’ time was spent scraping fungus off walls.  It managed to kill more astronauts than the entire Apollo program.  The expense of the shuttle launches crippled other worthwhile or promising NASA programs, and its timid, pious politics overshadowed any serious advances in crewed space missions.

In the past, I had lively discussions with Robert Zubrin about missions to Mars (and Hellenic mythology… during which I discovered that he, like me, loves the Minoans).  We may have disagreed on approach and details, but on this he and I are in total agreement: NASA has long floated adrift, directionless and purposeless.  Individual NASA subprograms (primarily all the robotic missions), carried on in the agency’s periphery, have been wildly successful.  But the days when launches fired the imagination of future scientists are long gone.

It’s true that the Apollo missions were an expression of dominance, adjuncts to the cold war.  It’s also true that sending a crewed mission to Mars is an incredibly hard undertaking.  However, such an attempt — even if it fails — will address a multitude of issues: it will ask the tough question of how we can engineer sustainable self-enclosed systems (including the biological component, which NASA has swept under the rug as scientifically and politically thorny); it will allow us to definitively decide if Mars ever harbored life; it will once again give NASA – and the increasingly polarized US polity – a focus and a worthwhile purpose.

I’m familiar with all the counterarguments about space exploration in general and crewed missions in particular: these funds could be better used alleviating human misery on earth; private industry will eventually take up the slack; robotic missions are much more efficient; humans will never go into space in their current form, better if we wait for the inevitable uploading come the Singularity.

In reality, funds for space exploration are less than drops in the ocean of national spending, and persistent social problems won’t be solved by such measly sums; private industry will never go past low orbit casinos (if that); as I explained elsewhere, we in our present form will never, ever get our brains/minds into silicon containers; and we will run out of resources long before such a technology is even on our event horizon, so waiting for gods… er, AI overlords won’t avail us.

Barring an unambiguous ETI signal, the deepest, best reason for crewed missions is not science. I recognize the dangers of using the term frontier, with all its colonialist, triumphalist baggage. Bravado aside, we will never conquer space. At best, we will traverse it like the Polynesians in their catamarans under the sea of stars. But space exploration — more specifically, a long-term crewed expedition to Mars with the express purpose to unequivocally answer the question of Martian life — will give a legitimate and worthy outlet to our ingenuity, our urge to explore and our desire for knowledge, which is not that high up in the hierarchy of needs nor the monopoly of elites. People know this in their very marrow – and have shown it by thronging around the transmissions of space missions that mattered.

It’s up to NASA to once again try rallying people around a vision that counts.  Freed of the burden of the shuttle, perhaps it can do so, thereby undergoing a literal renaissance.

“We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one that we are willing to accept, one we are unwilling to postpone, and one which we intend to win.”

John Fitzgerald Kennedy, September 1962

Images: Pat Rawlings, Beyond; Randy Halverson, Plains Milky Way; European Space Agency, High Aurora.

Ghost in the Shell: Why Our Brains Will Never Live in the Matrix

Thursday, June 23rd, 2011

Introductory note: Through Paul Graham Raven of Futurismic, I found out that Charles Stross recently expressed doubts about the Singularity, god-like AIs and mind uploading.  Being the incorrigible curious cat (this will kill me yet), I checked out the post.  All seemed more or less copacetic, until I hit this statement: “Uploading … is not obviously impossible unless you are a crude mind/body dualist. // Uploading implicitly refutes the doctrine of the existence of an immortal soul.”

Clearly the time has come for me to reprint my mind uploading article, which first appeared at H+ magazine in October 2009. Consider it a recapitulation of basic facts.

When surveying the goals of transhumanists, I found it striking how heavily they favor conventional engineering. This seems inefficient and inelegant, since such engineering reproduces slowly, clumsily and imperfectly, what biological systems have fine-tuned for eons — from nanobots (enzymes and miRNAs) to virtual reality (lucid dreaming). An exemplar of this mindset was an article about memory chips. In it, the primary researcher made two statements that fall in the “not even wrong” category: “Brain cells are nothing but leaky bags of salt solution,” and “I don’t need a grand theory of the mind to fix what is essentially a signal-processing problem.”

And it came to me in a flash that most transhumanists are uncomfortable with biology and would rather bypass it altogether for two reasons, each exemplified by these sentences. The first is that biological systems are squishy — they exude blood, sweat and tears, which are deemed proper only for women and weaklings. The second is that, unlike silicon systems, biological software is inseparable from hardware. And therein lies the major stumbling block to personal immortality.

The analogy du siècle equates the human brain with a computer — a vast, complex one performing dizzying feats of parallel processing, but still a computer. However, that is incorrect for several crucial reasons, which bear directly upon mind portability. A human is not born as a tabula rasa, but with a brain that’s already wired and functioning as a mind. Furthermore, the brain forms as the embryo develops. It cannot be inserted after the fact, like an engine in a car chassis or software programs in an empty computer box.

Theoretically speaking, how could we manage to live forever while remaining recognizably ourselves to us? One way is to ensure that the brain remains fully functional indefinitely. Another is to move the brain into a new and/or indestructible “container”, whether carbon, silicon, metal or a combination thereof. Not surprisingly, these notions have received extensive play in science fiction, from the messianic angst of The Matrix to Richard Morgan’s Takeshi Kovacs trilogy.

To give you the punch line up front, the first alternative may eventually become feasible but the second one is intrinsically impossible. Recall that a particular mind is an emergent property (an artifact, if you prefer the term) of its specific brain – nothing more, but also nothing less. Unless the transfer of a mind retains the brain, there will be no continuity of consciousness. Regardless of what the post-transfer identity may think, the original mind with its associated brain and body will still die – and be aware of the death process. Furthermore, the newly minted person/ality will start diverging from the original the moment it gains consciousness. This is an excellent way to leave a clone-like descendant, but not to become immortal.

What I just mentioned essentially takes care of all versions of mind uploading, if by uploading we mean recreation of an individual brain by physical transfer rather than a simulation that passes Searle’s Chinese room test. However, even if we ever attain the infinite technical and financial resources required to scan a brain/mind 1) non-destructively and 2) at a resolution that will indeed recreate the original, additional obstacles still loom.

To place a brain into another biological body, à la Mary Shelley’s Frankenstein, could arise as the endpoint extension of appropriating blood, sperm, ova, wombs or other organs in a heavily stratified society. Besides being de facto murder of the original occupant, it would also require that the incoming brain be completely intact, as well as able to rewire for all physical and mental functions. After electrochemical activity ceases in the brain, neuronal integrity deteriorates in a matter of seconds. The slightest delay in preserving the tissue seriously skews in vitro research results, which tells you how well this method would work in maintaining details of the original’s personality.

To recreate a brain/mind in silico, whether in a cyborg body or a computer frame, is equally problematic. Large portions of the brain process and interpret signals from the body and the environment. Without a body, these functions will flail around and can result in the brain, well, losing its mind. Without corrective “pingbacks” from the environment that are filtered by the body, the brain can easily misjudge to the point of hallucination, as seen in phenomena like phantom limb pain or fibromyalgia.

Additionally, without context we may lose the ability for empathy, as is shown in Bacigalupi’s disturbing story People of Sand and Slag. Empathy is as instrumental to high-order intelligence as it is to survival: without it, we are at best idiot savants, at worst psychotic killers. Of course, someone can argue that the entire universe can be recreated in VR. At that point, we’re in god territory … except that even if some of us manage to live the perfect Second Life, there’s still the danger of someone unplugging the computer or deleting the noomorphs. So there go the Star Trek transporters, there go the Battlestar Galactica Cylon resurrection tanks.

Let’s now discuss the possible: in situ replacement. Many people argue that replacing brain cells is not a threat to identity because we change cells rapidly and routinely during our lives — and that in fact this is imperative if we’re to remain capable of learning throughout our lifespan.

It’s true that our somatic cells recycle, each type on a slightly different timetable, but there are two prominent exceptions. The germ cells are one, which is why both genders – not just women – are progressively likelier to have children with congenital problems as they age. Our neurons are another. We’re born with as many of these as we’re ever going to have and we lose them steadily during our life. There is a tiny bit of novel neurogenesis in the olfactory system and possibly in the hippocampus, but the rest of our 100 billion microprocessors neither multiply nor divide. What changes are the neuronal processes (axons and dendrites) and their contacts with each other and with other cells (synapses).

These tiny processes make and unmake us as individuals. We are capable of learning as long as we live, though with decreasing ease and speed, because our axons and synapses are plastic as long as the neurons that generate them last. But although many functions of the brain are diffuse, they are organized in localized clusters (which can differ from person to person, sometimes radically). Removal of a large portion of a brain structure results in irreversible deficits unless it happens in very early infancy. We know this from watching people go through transient or permanent personality and ability changes after head trauma, stroke, extensive brain surgery or during the agonizing process of various neurodegenerative diseases, dementia in particular.

However, intrepid immortaleers need not give up. There’s real hope on the horizon for renewing a brain and other body parts: embryonic stem cells (ESCs, which I discussed recently). Depending on the stage of isolation, ESCs are truly totipotent – something, incidentally, not true of adult stem cells, which can only differentiate into a small set of related cell types. If neuronal precursors can be introduced to the right spot and coaxed to survive, differentiate and form synapses, we will gain the ability to extend the lifespan of a brain and its mind.

It will take an enormous amount of fine-tuning to induce ESCs to do the right thing. Each step that I casually listed in the previous sentence (localized introduction, persistence, differentiation, synaptogenesis) is still barely achievable in the lab with isolated cell cultures, let alone the brain of a living human. Primary neurons live about three weeks in the dish, even though they are fed better than most children in developing countries – and if cultured as precursors, they never attain full differentiation. The ordeals of Christopher Reeve and Stephen Hawking illustrate how hard it is to solve even “simple” problems of either grey or white brain matter.

The technical hurdles will eventually be solved. A larger obstacle is that each round of ESC replacement will have to be very slow and small-scale, to fulfill the requirement of continuous consciousness and guarantee the recreation of pre-existing neuronal and synaptic networks. As a result, renewal of large brain swaths will require such a lengthy lifespan that the replacements may never catch up. Not surprisingly, the efforts in this direction have begun with such neurodegenerative diseases as Parkinson’s, whose causes are not only well defined but also highly localized: the dopaminergic neurons in the substantia nigra.
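To see why the catch-up problem bites, here is a toy calculation — every number in it is an assumption invented for illustration, not a projection:

    # Toy arithmetic for the pacing problem of slow, small-scale ESC renewal.
    # All figures below are assumptions for illustration only.
    NEURONS = 1e11            # ballpark neuron count from this post
    PER_ROUND = 1e6           # assumed neurons safely replaceable per round
    ROUNDS_PER_YEAR = 4       # assumed pace that preserves continuous consciousness

    years = NEURONS / (PER_ROUND * ROUNDS_PER_YEAR)
    print(f"~{years:,.0f} years to renew the whole brain")   # ~25,000 years

Even granting wildly optimistic per-round figures, whole-brain renewal lands on geological waiting times — hence the focus on small, well-defined targets like the substantia nigra.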

Renewing the hippocampus or cortex of an Alzheimer’s sufferer is several orders of magnitude more complicated – and in stark contrast to the “black box” assumption of the memory chip researcher, we will need to know exactly what and where to repair. To go through the literally mind-altering feats shown in Whedon’s Dollhouse would be the brain equivalent of insect metamorphosis: it would take a very long time – and the person undergoing the procedure would resemble Terri Schiavo at best, if not the interior of a pupating larva.

Dollhouse got one fact right: if such rewiring is too extensive or too fast, the person will have no memory of their prior life, desirable or otherwise. But as is typical in Hollywood science (an oxymoron, but we’ll let it stand), it got a more crucial fact wrong: such a person is unlikely to function like a fully aware human or even a physically well-coordinated one for a significant length of time – because her brain pathways will need to be validated by physical and mental feedback before they stabilize. Many people never recover full physical or mental capacity after prolonged periods of anesthesia. Having brain replacement would rank way higher in the trauma scale.

The most common ecological, social and ethical argument against individual quasi-eternal life is that the resulting overcrowding will mean certain and unpleasant death by other means unless we are able to access extra-terrestrial resources. Also, those who visualize infinite lifespan invariably think of it in connection with themselves and those whom they like – choosing to ignore that others will also be around forever, from genocidal maniacs to cult followers, to say nothing of annoying in-laws or predatory bosses. At the same time, long lifespan will almost certainly be a requirement for long-term crewed space expeditions, although such longevity will have to be augmented by sophisticated molecular repair of somatic and germ mutations caused by cosmic radiation. So if we want eternal life, we had better first have the Elysian fields and chariots of the gods that go with it.

Images: Echo (Eliza Dushku) gets a new personality inserted in Dollhouse; any port in a storm — Spock (Leonard Nimoy) transfers his essential self to McCoy (DeForest Kelley) for safekeeping in The Wrath of Khan; the resurrected Zoe Graystone (Alessandra Torresani) gets an instant memory upgrade in Caprica; Jake Sully (Sam Worthington) checks out his conveniently empty Na’vi receptacle in Avatar.

Why I Won’t Be Taking the Joanna Russ Pledge

Thursday, June 16th, 2011

A Geology Lesson

Here, the sea strains to climb up on the land
and the wind blows dust in a single direction.
The trees bend themselves all one way
and volcanoes explode often.
Why is this? Many years back
a woman of strong purpose
passed through this section
and everything else tried to follow.

— Judy Grahn, from She Who

Between the physical death of Joanna Russ and the latest endless lists and discussions about women’s visibility and recognition in SF/F, well-meaning people have come up with the Russ Pledge. Namely, a pledge to acknowledge and promote women’s work.

As recent history has shown, Twitter notices don’t start revolutions, let alone sustain them. Even if they did, I won’t be taking the Russ pledge for the simplest of reasons. I have been implementing it for the last forty-plus years. It’s not a cute button on my lapel. It’s not a talking point in my public persona. I cannot take it off when I take off my clothes. It’s not an option. It’s an integral component of my bone marrow that has shaped my personal and professional life.

Long before her death, Russ had been marginalized for being too prickly, a prominent target of the “tone” argument. Even many women found her uncomfortable — she might annoy the Powers that Be and compromise paltry gains. As if good behavior brought acceptance to boys’ treehouses. As if she didn’t threaten the status quo by her mere existence, let alone her uncompromising stories, essays and reviews. Most people know of The Female Man and How to Suppress Women’s Writing, if only by rumor, but the rest of her opus is just as radical. If you want to have your preconceptions soothed by feel-good feminism, Russ is not your woman.

It’s not surprising that eventually she burned out (“chronic fatigue syndrome”), like most people in equivalent circumstances. She kept showcasing true aliens — women as autonomous beings with agency! — and asking questions outside the box. She kept pointing out that even if you have been “promoted” from field hand to house servant you can still be sold down the river. An uncomfortable reminder for those who keep clinging to the hope of “change from within”, the illusion that being abjectly nice to the ensconced gatekeepers and kicking the more disenfranchised below will ensure decent treatment, or even survival.

Joanna Russ paved the way for all who walk the path of real change not merely with words, but with her body. Like the women in folk ballads who got buried alive so that bridges would stand, she deserves more than pious twitterings now that she’s safely dead. I recognize the good intentions of those who promote this pledge in her name. But enough already with “mistress lists” and their ilk. If people want to really do something, I suggest (and mind you, this is a suggestion, not the forcible penectomy some obviously consider it to be) that they read women’s books. Publish them for real money, as in pro-rate presses – pathetic as pro rates are, these days. Review them in major outlets. Nominate them for prestigious awards. Hire them as editors, columnists and reviewers (not slush readers or gofers) in major venues, in more than token numbers.  Teach them in courses.

Unconscious bias is a well-documented phenomenon and is alive and thriving even (especially) in self-labeled “progressive” communities. Women have shown up the arguments for intrinsic inferiority by winning open chairs in orchestras when performing behind curtains and winning major literary awards when hiding behind pseudonyms. But this does not change the dominant culture. And it does not make up for the oceans of creative talent that got lost — suppressed or squandered in anger or hopelessness.

I will let Russ herself have the last word. It’s striking how ageless she remains:

“Leaning her silly, beautiful, drunken head on my shoulder, she said, “Oh, Esther, I don’t want to be a feminist. I don’t enjoy it. It’s no fun.”

“I know,” I said. “I don’t either.” People think you decide to be a “radical,” for God’s sake, like deciding to be a librarian or a ship’s chandler. You “make up your mind,” you “commit yourself” (sounds like a mental hospital, doesn’t it?).

I said Don’t worry, we could be buried together and have engraved on our tombstone the awful truth, which some day somebody will understand:

WE WUZ PUSHED.”

from On Strike Against God

Miranda Wrongs: Reading Too Much into the Genome

Friday, June 10th, 2011

Introductory note: My micro-bio in several venues reads “Scientist by day, writer by night.” It occurred to me that for lo these many years I have discussed science, scientific thinking and process, space exploration, social issues, history and culture, books, films and games… but I have never told my readers exactly what I do during the day.

What I do during the day (and a good part of the night) is investigate a process called alternative splicing and its repercussions on brain function. I will unpack this in future posts (interspersed among the usual musings on other topics), and I hope that this excursion may give a glimpse of how complex biology is across scales.

To start us off, I reprint an article commissioned by R. U. Sirius that first appeared in H+ Magazine in April 2010. An academic variant of this article appeared in Politics and the Life Sciences in response to Mark Walker’s “Genetic Virtue” proposal.

“We meant it for the best.” – Dr. Caron speaking of the Miranda settlers, in Whedon’s Serenity

When the sequence of the human genome was declared essentially complete in 2003, all biologists (except perhaps Craig Venter) heaved a sigh of gladness that the data were all on one website, publicly available, well-annotated and carefully cross-linked. Some may have hoisted a glass of champagne. Then they went back to their benches. They knew, if nobody else did, that the work was just beginning. Having the sequence was the equivalent of sounding out the text of an alphabet whose meaning was still undeciphered. For the linguistically inclined, think of Etruscan.

The media, with a few laudable exceptions, touted this as “we now know how genes work” and many science fiction authors duly incorporated it into their opuses. So did people with plans for improving humanity. Namely, there are initiatives that seriously propose that such attributes as virtue, intelligence, specific physical and mental abilities or, for that matter, a “happy personality” can (and should) be tweaked by selection in utero or engineering of the genes that determine these traits. The usual parties put forth the predictable pro and con arguments, and many articles get published in journals, magazines and blogs.

This is excellent for the career prospects and bank accounts of philosophers, political scientists, biotech entrepreneurs, politicians and would-be prophets. However, biologists know that all this is a parlor game equivalent to determining the number of angels dancing on the top of a pin. The reason for this is simple: there are no genes for virtue, intelligence, happiness or any complex behavioral trait. This becomes obvious by the number of human genes: the final count hovers around 20-25,000, less than twice as many as the number in worms and flies. It’s also obvious by the fact that cloned animals don’t look and act like their prototypes, Cc being the most famous example.

Genes encode catalytic, structural and regulatory proteins and RNAs. They do not encode the nervous system; even less do they encode complex behavior. At the level of the organism, they code for susceptibilities and tendencies — that is, with a few important exceptions, they are probabilistic rather than deterministic. And although many diseases develop from malfunctions of single genes, this does not indicate that single genes are responsible for any complex attribute. Instead they’re the equivalent of screws or belts, whose loss can stop a car but does not make it run.

No reputable biologist suggests that genes are not decisively involved in outcomes. But the constant harping on trait heritability “in spite of environment” is a straw man. Its main prop, the twin studies, is far less robust than commonly presented — especially when we take into account that identical twins often know each other before separation and, even when adopted, are likely to grow up in very similar environments (to say nothing of the data cherry-picking for publication). The nature/nurture debate has been largely resolved by the gene/environment (GxE) interplay model, a non-reductive approximation closer to reality. Genes never work in isolation but as complex, intricately regulated cooperative networks and they are in constant, dynamic dialogue with the environment — from diet to natal language. That is why second-generation immigrants invariably display the body morphology and disease susceptibilities of their adopted culture, although they have inherited the genes of their natal one.
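In quantitative-genetics shorthand, the GxE point can be stated as the standard decomposition of phenotypic variance (textbook notation, not specific to any study mentioned here):

$$V_P \;=\; V_G + V_E + 2\,\mathrm{Cov}(G,E) + V_{G \times E}$$

Heritability estimates that drop the covariance and interaction terms silently fold them into V_G — which is precisely the weakness of the twin studies noted above.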

Furthermore, there’s significant redundancy in the genome. Knockouts of even important single genes in model organisms often have practically no phenotype (or a very subtle one) because related genes take up the slack. The “selfish gene” concept as presented by reductionists of all stripes is arrant nonsense. To stay with the car analogy, it’s the equivalent of a single screw rotating in vacuum by itself. It doth not even a cart make, let alone the universe-spanning starship that is our brain/mind.

About half of our genes contribute directly to brain function; the rest do so indirectly, since brain function depends crucially on signal processing and body feedback. This makes the brain/mind a bona fide complex (though knowable) system. This attribute underlines the intrinsic infeasibility of instilling virtue, intelligence or good taste in clothes by changing single genes. If genetic programs were as fixed, simple and one-to-one mapped as reductionists would like, we would have answered most questions about brain function within months of reading the human genome. As a pertinent example, studies indicate that the six extended genomic regions defined by SNP (single nucleotide polymorphism) analysis as contributing the most to IQ — itself a population-sorting tool rather than a real indicator of intelligence — influence IQ by a paltry 1%.
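To give that paltry figure some texture, here is a minimal simulation (sample size, allele frequency and effect sizes are all assumptions chosen for illustration, not published estimates): even six loci with non-trivial effects explain only a sliver of trait variance once everything else — environment, interactions, measurement noise — is folded in.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000                           # simulated individuals
    k = 6                                 # assumed trait-associated regions
    geno = rng.binomial(2, 0.5, (n, k))   # 0/1/2 allele counts per region
    betas = np.full(k, 0.05)              # assumed small per-locus effects

    genetic = geno @ betas                # additive genetic contribution
    noise = rng.normal(0.0, 1.0, n)       # everything else: environment, GxE, error
    trait = genetic + noise               # the measured "IQ-like" score

    r2 = np.var(genetic) / np.var(trait)
    print(f"variance explained by all {k} regions: {r2:.1%}")   # on the order of 1%

The arithmetic behind the output: the genetic variance is 6 × 0.05² × 0.5 ≈ 0.0075 against a residual variance of 1, so the loci explain under one percent — no matter how large the sample grows.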

The attempts to map complex behaviors for the convenience and justification of social policies began as soon as societies stratified. To list a few recent examples, in the last decades we’ve had the false XYY “aggression” connection, the issue of gay men’s hypothalamus size, and the sloppy and dangerous (but incredibly lucrative) generalizations about brain serotonin and “nurturing” genes. Traditional breeding experiments (cattle, horses, cats, dogs, royal families) have an in-built functional test: the progeny selected in this fashion must be robust enough to be born, survive and reproduce. In the cases where these criteria were flouted, we got such results as vision and hearing impairments (Persian and Siamese cats), mental instability (several dog breeds), physical fragility and Alexei Romanov.

I will leave aside the enormous and still largely unmet technical challenge of such implementation, which is light years distant from casual notes that airily prescribe “just add tetracycline to the inducible vector that carries your gene” or “inject artificial chromosomes or siRNAs.” I play with all these beasties in the lab, and can barely get them to behave in homogeneous cell lines. Because most cognitive problems arise not from huge genomic errors but from small shifts in ratios of “wild-type” (non-mutated) proteins which affect brain architecture before or after birth, approximate engineering solutions will be death sentences. Moreover, the proposals usually advocate that such changes be done in somatic cells, not the germline (which would make them permanent). This means intervention during fetal development or even later — a far more difficult undertaking than germline alteration. The individual fine-tuning required for this in turn brings up differential resource access (and no, I don’t believe that nanotech will give us unlimited resources).

Let’s now discuss the improvement touted in “enhancement” of any complex trait. All organisms are jury-rigged across scales: that is, the decisive criterion for an adaptive change (from a hemoglobin variant to a hip-bone angle) is function, rather than elegance. Many details are accidental outcomes of an initial chance configuration — the literally inverted organization of the vertebrate eye is a prime example. Optimality is entirely context-dependent. If an organism or function is perfected for one set of circumstances, it immediately becomes suboptimal for all others. That is the reason why gene alleles for cystic fibrosis and sickle cell anemia persisted: they conferred heterozygotic resistance to cholera and malaria, respectively. Even if it were possible to instill virtue or musicality (or even the inclination for them), fixing them would decrease individual and collective fitness. Furthermore, the desired state for all complex behaviors is fluid and relative.
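The persistence of those alleles follows from textbook balancing selection. As a sketch in standard notation (s and t are assumed fitness costs of the two homozygotes relative to the heterozygote, whose fitness is set to 1), the “deleterious” allele settles at a stable equilibrium frequency:

$$w_{AA} = 1 - s, \quad w_{Aa} = 1, \quad w_{aa} = 1 - t \;\Longrightarrow\; \hat{q}_a = \frac{s}{s + t}$$

With illustrative numbers — say malaria costs non-carriers s = 0.1 while sickle cell disease costs homozygotes t = 0.8 — the allele stabilizes near 11%, in the neighborhood of frequencies actually observed in malaria-endemic regions.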

The concept that pressing the button of a single gene can change any complex behavior is entirely unsupported by biological evidence at any scale: molecular, cellular, organismic. Because interactions between gene products are complex, dynamic and pleiotropic, such intervention can cause significant harm even if implemented with full knowledge of genomic interactions — which at this point is not even partially available. It is far more feasible to correct an error than to “enhance” an already functioning brain. Furthermore, unlike a car or a computer, brain hardware and software are inextricably intertwined and cannot be decoupled or deactivated during modification.

If such a scenario is optional, it will introduce extreme de facto or de jure inequalities. If it is mandatory, beyond the obvious fact that it will require massive coercion, it will also result in the equivalent of monocultures, which is the surest way to extinction regardless of how resourceful or dominant a species is. And no matter how benevolent the motives of the proponents of such schemes are, all utopian implementations, without exception, degenerate into slaughterhouses and concentration camps.

The proposals to augment “virtue” or “intelligence” fall solidly into the linear progress model advanced by monotheistic religions, which takes for granted that humans are in a fallen state and need to achieve an idealized perfection. For the religiously orthodox, this exemplar is a god; for the transhumanists, it’s often a post-singularity AI. In reality, humans are a work in continuous evolution both biologically and culturally and will almost certainly become extinct if they enter any type of stasis, no matter how “perfect.”

But higher level arguments aside, the foundation stone of all such discussions remains unaltered and unalterable: any proposal to modulate complex traits by changing single genes is like preparing a Mars expedition based on the Ptolemaic view of the universe.

Images: The “in-valid” (non-enhanced) protagonist of GATTACA; M. C. Escher, Drawing Hands; Musical Genes cartoon by Polyp; Rainbow nuzzles her clone Cc (“carbon copy”) at Texas A&M University.

What’s Sex Got to Do with It?

Tuesday, May 17th, 2011

(sung to Tina Turner’s à propos catchy tune)

Two events unfolded simultaneously in the last few days: Arnold Schwarzenegger’s admission that he left a household servant with a “love child” and Dominique Strauss-Kahn’s arrest for attempting to rape a hotel maid. Before that, we had the almost weekly litany of celebrity/tycoon/politician/figurehead caught with barely-of-age girl(s)/boy(s). In a sadly familiar refrain, an ostensibly liberal commentator said:

“…we know that powerful men do stupid, self-destructive things for sexual reasons every single day. If we’re looking for a science-based explanation, it probably has more to do with evolutionarily induced alpha-male reproductive mandates than any rational weighing of pros and cons.”

Now I hate to break it to self-labeled liberal men, but neither love nor sex has anything to do with sexual coercion, and Kanazawa-style Tarzanism trying to pass for “evolutionary science” won’t cut it. Everyone with a functioning frontal cortex knows by now that rape is totally decoupled from reproduction. The term “love child”, repeated ad nauseam by the media, is obscene in this context.

Leaving love aside, such encounters are not about sex either. For one, coerced sex is always lousy; for another, no reproductive mandate is involved, as the gang rapes of invading armies show. What such encounters are about, of course, is entitlement, power and control: the prerogative of men in privileged positions to use others (women in particular) as toilet paper with no consequences to themselves short of the indulgent “He’s such a ladies’ man…” and its extension: “This was a trap. Such men don’t need to rape. Women fling themselves in droves at alpha males!”

As I keep having to point out, there are no biological alpha males in humans, no matter what Evo-Psycho prophet-wannabes preach under the false mantra of “Real science is not PC, let the chips fall where they may”. Gorillas have them. Baboons have them, with variations among subgroups. Our closest relatives, bonobos and chimpanzees, don’t. What they have are shifting power alliances for both genders (differing in detail in each species). They also have maternally based status, because paternity is undefined and females choose their partners. Humans have so-called “alpha males” only culturally, and only since the hoarding of surplus goods made pyramidal societies possible.

The repercussions of such behavior highlight another point. Men of this type basically tell the world “I dare you to stop my incredibly important work to listen to the grievances of a thrall. What is the life and reputation of a minimum-wage African immigrant woman compared to the mighty deeds I (think I can) perform?” Those who argue that the personal should be separate from the political choose to ignore the fact that the mindset that deems a maid part of the furniture thinks the same of most of humanity — Larry Summers is a perfect example of this. In fact, you can predict how a man will behave in just about any situation once you see how he treats his female partner. This makes the treatment of nations by the IMF and its ilk much less mysterious, if no less destructive.

Contrary to the wet dreams of dorks aspiring to “alpha malehood”, women generally will only interact with such specimens under duress. They’re far more popular with men who (like to) think that being a real man means “to crush your enemies, see them driven before you, and to hear the lamentation of their women.” Civilization came into existence and has precariously survived in spite of such men, not because of them. If we hope to ever truly thrive, we will have to eradicate cultural alpha-malehood as thoroughly as we did smallpox — and figure out how we can inculcate snachismo as the default behavioral model instead.

Images: Top, Malcolm McDowell as Caligula in the 1979 eponymous film; bottom, Biotest’s “Alpha Male” pills.

The Quantum Choice: You Can Have either Sex or Immortality

Tuesday, March 29th, 2011

Note: A long-term study from Northwestern University (not yet in PubMed) has linked participation of young adults in religious activities to obesity in later life. Overhanging waistlines in First World societies undoubtedly contribute to the degenerative illnesses of lengthened lifespan. But it’s important to keep in mind that fat fulfills critical functions. This article, which looks at the other side of the coin, was commissioned by R. U. Sirius of Mondo 2000 fame and first appeared in H+ Magazine in September 2009.

Because of the four-plus centuries of Ottoman occupation, the folklore of all Balkan nations shares a Trickster figure named Hodja (based on 13th century Konyan Sufi wandering philosopher Nasreddin). In one of the countless stories involving him, Hodja has a donkey that’s very useful in carting firewood, water, etc.  The problem is that he eats expensive hay. So Hodja starts decreasing the amount of hay he feeds the donkey. The donkey stolidly continues doing the chores and Hodja, encouraged by the results, further decreases the feed until it’s down to nothing. The donkey continues for a few days, then keels over. Hodja grumbles, “Damnable beast! Just when I had him trained!”

Whenever I hear about longevity by caloric restriction, I immediately think of this story.

But to turn to real science, what is the basis for caloric restriction as a method of prolonging life? The answer is: not humans. The basis is that feeding several organisms, including mice and rhesus monkeys, near-starvation diets appears (emphasis on the appears) to roughly double their lifespan. Ergo, reasons your average hopeful transhumanist, the same could happen to me if only I had the discipline and time to do the same — plus the money, of course, for all the supplements and vitamins that such a regime absolutely requires, to say nothing of the expense of such boutique items as digital balances.

I will say a few words first about such beasties as flies (Drosophila melanogaster) and worms (Caenorhabditis elegans) before I climb the evolutionary ladder. Many organisms in other branches of the evolutionary tree have two “quantum” modes: survival or reproduction. For example, many invertebrates are programmed to die immediately after reproduction, occasionally becoming food for their progeny. In some cases, their digestive tracts literally disintegrate after they release their fertilized eggs. Conversely, feeding an infertile worker bee royal jelly turns her into a fully functioning queen. The general principle behind caloric restriction is that it essentially flips the organism’s switch from reproductive to survival mode.

Most vertebrates from reptiles onward face a less stark choice. Because either or both parents are required to lavish care on offspring, vertebrate reproduction is not an automatic death sentence. So let’s segue to humans. Due to their unique birth details, human children literally require the vaunted village to raise them — parents, grandparents, first degree relatives, the lot. At the same time, it doesn’t take scientific research to notice that when calories and/or body fat fall below a certain minimum, girls and women stop ovulating. It also takes just living in a context of famine, whether chosen or enforced, to notice the effects of starvation on people, from lethargy and fatigue to wasted muscles, brittle bones and immune system suppression, crowned with irritability, depression, cognitive impairment and overall diminished social affect.

Ah, says the sophisticated caloric restriction advocate, but much of this comes from imbalances in the diet — missing vitamins, minerals, etc. Well, yes and no. Let me give a few examples.

All vitamins except B and C are lipid-soluble. If we don’t have enough fat, our body can’t absorb them. So the excess ends up in odd places where it may in fact be toxic — hence the orange carotenoid-induced tint that is a common telltale sign of many caloric restriction devotees. Furthermore, if we have inadequate body fat, not only are we infertile, infection-prone and slow to heal due to lack of necessary hormones and cholesterol; our homeostatic mechanisms (such as temperature regulation) also flag. And because caloric restriction forces the body to use up muscle protein and leaches bones of minerals, practitioners can end up with weakened hearts and bone fractures.

Speaking of fat, the brain has no energy reserves. It runs exclusively on glucose. When starved of glucose, it starts doing odd things, including the release of stress chemicals. This, in turn, can induce anything from false euphoria to hallucinations. This phenomenon is well known from anorexics and diabetics entering hypoglycemia, but also from shamans, desert prophets and members of cultures that undertook vision quests, which invariably included prolonged fasting.  So caloric restriction may make its practitioners feel euphoric. But just as people feel they have comprehended the universe while under the influence of psychoactive drugs, so does this practice impair judgment and related executive functions.

So what about those glowing reports which purport to have demonstrated that caloric restriction doubles the lifespans of mice and rhesus monkeys, as well as giving them glossy pelts? Surely we can put up with a bit of mental confusion, even failing erections, in exchange for a longer life, as long as it’s of high quality — otherwise we’ll end up like poor Tithonus, who was granted immortality but not youth and dwindled into a shriveled husk before the gods in their whimsical mercy turned him into a cicada. And it does seem that caloric restriction decreases such banes of extended human lifespan as diabetes and atherosclerosis. Well, there’s something interesting going on, all right, but not what people (like to) think.

In biology, details are crucial and mice are not humans. In Eldorado Desperadoes: Of Mice and Men, I explained at length why non-human studies are proof of principle at best, irrelevant at worst. Laboratory mice and monkeys are bred to reproduce early and rapidly. They’re fed rich diets and lead inactive lives — the equivalent of couch potatoes. The caloric restriction studies have essentially returned the animals to the normal levels of nutrition that they would attain in the wild. Indeed, caloric restriction of wild mice does not extend their lives, and when caloric levels fall below about 50%, both lab and wild mice promptly keel over, like Hodja’s donkey. In the rhesus studies, lifespans appeared extended only when the investigators counted a subset of the deaths in the animal group they tested.

On the molecular level, much attention has been paid to sirtuin activators, resveratrol chief among them. Sirtuins are a class of proteins that regulate several cell processes, including aspects of DNA repair, cell cycle and metabolism. This means they’re de facto pleiotropic, which should give would-be life extenders pause. As for resveratrol, it doesn’t even extend life in mice — so the longer lives of the red-wine loving French result from other causes, almost certainly including their less sedentary habits and their universal and sane health coverage. That won’t stop ambitious entrepreneurs from setting up startups that test sirtuin activators and their ilk, but I predict they will be as effective as leptin and its relatives were for non-genetic obesity.

This brings to mind the important and often overlooked fact that genes and phenotypes never act in isolation. An allele or behavior that is beneficial in one context becomes deleterious in another. When longer-lived mutants and wild-type equivalents are placed in different environments, all longevity mutations result in adaptive disadvantages (some obvious, some subtle) that make the mutant strain disappear within a few generations regardless of the environment specifics.
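How fast is “a few generations”? A toy calculation in Python, with assumed numbers and the standard single-locus selection recursion, makes the point:

# Toy model: a longevity allele whose carriers pay a modest reproductive
# cost s relative to wild type. Standard recursion for allele frequency q:
# q' = q * (1 - s) / (1 - s * q).
def allele_trajectory(q0: float, s: float, generations: int) -> list:
    freqs = [q0]
    for _ in range(generations):
        q = freqs[-1]
        freqs.append(q * (1 - s) / (1 - s * q))
    return freqs

# A 5% fitness cost erodes an allele that starts in 10% of the population:
print(f"{allele_trajectory(0.10, 0.05, 50)[-1]:.4f}")  # ~0.0085 after 50 generations

Even a five percent reproductive disadvantage, far milder than what most longevity mutants actually display, drives the allele to well under one percent within fifty generations.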

Similarly, caloric restriction in an upper-middle-class context in the US may be possible, if unpleasant. But it’s a death sentence for a subsistence farmer in Bangladesh, who may need to build up and retain her weight in anticipation of a famine. For women in particular, who are prone to both anorexia and osteoporosis, caloric restriction is dangerous — hovering as it does near keeling-over territory. As for isolated, inbred groups that have more than their share of centenarians, their genes are far more responsible for their lifespan than their diet — as is the fact that they invariably lead lives of moderate but sustained physical activity surrounded by extended families, as long as they are relatively dominant within their family and community.

Human lifespan has already nearly tripled, courtesy of vaccines, antibiotics, clean water and use of soap during childbirth. It is unlikely that we will be able to extend it much further. Extrapolations indicate that caloric restriction will not lengthen our lives by more than 3% (a pitiful return for such herculean efforts) and that we can get the same result from reasonable eating habits combined with exercise. Recent, careful studies have established that moderately overweight people are the longest-lived, whereas extra-lean people live as long as do obese ones.
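To put that three percent in perspective: assuming an eighty-year baseline, 0.03 × 80 comes to roughly 2.4 extra years, bought with decades of weighing lettuce leaves on digital balances.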

So what can you really do to extend your life? Well, as is the case with many other quality-of-life attributes, you should choose your parents carefully. Favorable alleles at the loci that govern susceptibility to degenerative age-related diseases (diabetes, heart disease, hypertension, dementia) are a great help — as is high income in a developed country with first-rate medical services, which will ensure excellent lifelong nutrition and enough leisure time and/or devoted underlings to make it possible to attend to suchlike things.

Blastocysts Feel No Pain

Monday, March 14th, 2011

In 2010, the recipient of the Medicine Nobel was Robert Edwards, who perfected in vitro fertilization (IVF) techniques for human eggs in partnership with Patrick Steptoe. Their efforts culminated in the birth of Louise Brown in 1978, followed by several million such births since. The choice was somewhat peculiar, because this was an important technical advance rather than an increase in basic understanding (which also highlights the oddity of not having a Nobel in Biology). That said, the gap between the achievement and its recognition was unusually long. This has been true of others who defied some kind of orthodoxy – Barbara McClintock is a poster case.

In Edwards’ case, the orthodoxy barrier was conventional. Namely, IVF separates sex from procreation as decisively as contraception does. Whereas contraception allows sex without procreation (as do masturbation and most lovemaking permutations), IVF allows conception minus orgasms and also decouples ejaculation from fatherhood. Sure enough, a Vatican representative voiced his institution’s categorical disapproval of this particular bestowal. However, IVF has detractors even among the non-rabidly religious. The major reason is its residue: unused blastocysts, which are routinely discarded unless they’re used as a source of embryonic stem cells.

Around the same time that Edwards received the Nobel, US opponents of embryonic stem cell research filed a lawsuit contending that this “so far fruitless” research siphoned off funds from “productive” adult stem cell research. The judge in the case handed down a decision that amounted to a ban of all embryonic stem cell work and the case has been a legal and political football ever since. The brouhaha has highlighted two questions: what good are stem cells? And what is the standing of blastocysts?

Let me get the latter out of the way first. Since IVF blastocysts are eventually discarded if not used, most dilemmas associated with them reek of hypocrisy and the transparent desire to curtail women’s autonomy. A 5-day blastocyst consists of about 200 cells arising from a zygote that has not yet implanted. If it implants, roughly 50 of these eventually become the embryo; the rest turn into the placenta. A blastocyst is a potential human as much as an acorn is a potential oak – perhaps even less, given how much it needs to attain viability. Equally importantly, blastocysts don’t feel pain. For that you need a nervous system that can process sensory input. In humans, this happens roughly near the end of the second trimester – which is one reason why extremely premature babies often have severe neurological problems.

This won’t change the mind of anyone who believes that a zygote is “ensouled” at conception, but if we continue along this axis (very similar to much punitive fundamentalist reasoning), we will end up declaring miscarriage a crime. This is precisely what several US state legislatures are currently attempting to do, with the “Protect Life Act” riding pillion, bringing us squarely into Handmaid’s Tale territory. It is well known by now that something like forty percent of all conceptions end in early miscarriages, many of them unnoticed or noticed only as heavier-than-usual monthly bleeding. A miscarriage almost invariably means there is something seriously wrong with the embryo or the embryo/placenta interaction. Forcing such pregnancies to continue would result in a significant increase in deaths and permanent disabilities of both women and children.

The “instant ensoulment” stance is equivalent to the theories that postulated a fully formed homunculus inside each sperm and deemed women passive yet culpable vessels. It is also noteworthy that the concern of compulsory-pregnancy advocates stops at the moment of birth. Across eras, girls have been routinely killed at all ages by exposure, starvation, poisoning, beatings; boys suffered this fate only if they were badly deformed in cultures or castes that demanded physical perfection.

Let’s now focus on the scientific side. By definition, stem cells must have the capacity to propagate indefinitely in an undifferentiated state and the potential to become most cell types (pluripotent). Only embryonic stem cells (ESCs) have these attributes. Somatic adult stem cells (ASCs), usually derived from skin or bone marrow, are few, cannot divide indefinitely and can only differentiate into subtypes of their original cellular family (multipotent). In particular, it’s virtually impossible to turn them into neurons, a crucial requirement if we are to face the steadily growing specter of neurodegenerative diseases and brain or spinal cord damage from accidents and strokes.

Biologists have discovered yet another way to create quasi-ESCs: reprogrammed adult cells, aka induced pluripotent stem cells (iPS). However, it comes as no surprise that iPS cells have recently been found to harbor far larger numbers of mutations than ESCs. To generate iPS cells, you need to jangle differentiated cells into de-differentiating and resuming division. The chemical path is brute-force – think chemotherapy for cells and you get an inkling. The alternative is to introduce an activated oncogene, usually via a viral vector. By definition, oncogenes promote cell division, which raises the very real prospect of tumors. Too, viral vectors introduce a host of uncontrolled variables that have so far precluded fine control.

ESCs are not tampered with in this fashion, although long-term propagation can cause epi/genetic changes on its own. Additionally, recent advances have allowed researchers to dispense with mouse feeder cells for culturing ESCs. These carried the danger of transmitting undesirable entities, from inappropriate transcription factors to viruses. On the other hand, ASC grafts from one’s own tissues are less likely to be rejected (though xeno-ASCs are even likelier than ESCs to be tagged as foreign and destroyed by the recipient’s immune system).

Studies of all three kinds of stem cells have helped us decipher mechanisms of both development and disease. This research allowed us to discover how to enable cells to remain undifferentiated and how to coax them toward a desired differentiation path. Stem cells can also be used to test drugs (human lines are better indicators of outcomes than mice) and eventually generate tissue for cell-based therapies of birth defects, Alzheimer’s, Parkinson’s, Huntington’s, ALS, spinal cord injury, stroke, diabetes, heart disease, cancer, burns, arthritis… the list is long. Cell-based therapies have advantages over “naked” gene delivery, because genes delivered in cells retain the regulatory signals and larger epi/genetic contexts crucial for long-term survival, integration and function.

People argue that ASCs (particularly the hematopoietic precursors used in bone marrow transplants) have been far more useful than ESCs, whose use is still potential. However, they usually fail to note that ASCs have been in clinical use since the late fifties, whereas human ESCs were first isolated in 1998 by James Thomson’s group in Wisconsin. Add to that the various politically or religiously motivated embargoes, and it’s a wonder that our understanding of ESCs has advanced as much as it has.

Despite fulminations to the contrary, women never make reproductive decisions lightly since their repercussions are irreversible, life-long and often determine their fate. Becoming a human is a process that is incomplete even at birth, since most brain wiring happens postnatally. Demagoguery may be useful to lawyers, politicians and control-obsessed fanatics. But in the end, two things are true: actual humans are (should be) much more important than potential ones – and this includes women, not just the children they bear and rear; and embryonic stem cells, because of their unique properties, may be the only path to alleviating enormous amounts of suffering for actual humans.

Best FAQ source: http://stemcells.nih.gov/info/basics/