Remembering John Berger

I was saddened to learn of John Berger’s death this morning (3 Jan 2017). As a young art student, I re-read his books until they fell apart, and among my favourites was a collection of his early reviews called Permanent Red, a phrase that embodied his life-long political commitment. The following quote encapsulates that passion:

“The poverty of our century is unlike that of any other. It is not, as poverty was before, the result of natural scarcity, but of a set of priorities imposed upon the rest of the world by the rich. Consequently, the modern poor are not pitied … but written off as trash. The twentieth-century consumer economy has produced the first culture for which a beggar is a reminder of nothing”.

For me, Berger will be a permanent reminder that we don’t have to let the rich set the world’s priorities; we can struggle to ensure that compassion and a commitment to social and economic justice guide us.

The past isn’t such a foreign country after all (sadly)

For my current research, I’m reading every issue of the American Popular Science Monthly from the early decades of the twentieth century. Despite its title, PSM was rather a serious magazine, about 100 pages a month with lots of long, technical articles on everything from how street cars (trams) work to the role of mosquitoes in spreading disease. In the October 1911 issue, amongst the usual adverts for typewriters, microscopes and (for some unaccountable reason) chocolate, I found an article called “The race fibre of the Chinese”, which demonstrates (depressingly) that the past is not such a foreign country after all.

Immigration was something of an obsession for the PSM (as it was for many Americans at this time) and, predictably, there were lots of articles about the declining native birth rate and the threat of “degeneration” if too many inferior people were admitted to the USA. However, the PSM tended to take a fairly liberal line, arguing that many of the people arriving via Ellis Island (Jews were singled out, but so were other groups) were of higher than average intelligence, and if efforts could be made to address the poverty that was caused by discrimination, they would be a valuable addition to America’s stock.

The Chinese article (by Professor Edward Alsworth Ross of the University of Wisconsin) begins by noting that “Out of ten children born among us three, normally the weakest three, will fail to grow up. Out of ten children born in China these weakest three will die and probably five more besides. The difference is owing to the hardships that infant life meets with among the Chinese”. This was, however, not a plea for increasing charity or foreign aid. On the contrary. Ross argued that these horrifying statistics explained why the Chinese were exceptionally tough and hardy, capable of very hard physical labour.

One might therefore assume that the Chinese ought to be admitted as very welcome immigrants, but Ross argued that the key to Chinese success was not any innate superiority. Far from it: “the competition of white laborers and yellow is not so simple a test of human worth as some may imagine”, since if each is provided with equally good working conditions (including a good diet), the white man will triumph. However:

Under bad conditions the yellow man can best the white man, because he can better endure spoiled food, poor clothing, foul air, noise, heat, dirt, discomfort and microbes. Reilly can outdo Ah San, but Ah San can underlive Reilly. Ah San can not take away Reilly’s job as being a better workman; but, because he can live and do some work at a wage on which Reilly can not keep himself fit to work at all, three or four Ah Sans can take Reilly’s job from him.

Pity the poor old, stereotyped Irish labourer; even his proverbial willingness to live on nothing but potatoes won’t save his job from the clichéd Chinese hordes, with their teeming, inscrutable ability to live off nothing but fresh air.

But “Reilly” can at least take comfort from the fact that “Ah San’s” days of dominance are numbered: “with the coming in of western sanitation, the terrible selective process by which Chinese toughness has been built up will come to an end”. This is a novel variant on what I’ve come to think of as the “paradox of civilisation” argument for degeneration: human progress is the result of natural selection – the fittest survive and conquer the unfit – but that progress is also their undoing.

(It is, of course, no coincidence that natural selection emerged as a scientific theory in Britain at a time when it possessed the world’s largest empire. Indeed, much of Darwin’s success can be traced to the fact that many of his fellow countrymen chose to read him as having said that they were the fittest and thus had survived. Britain’s right to rule the waves was guaranteed by a law of nature.)

However, the paradox is that as humans progress they become civilized, their living standards rise and life becomes too soft. Worst of all (according to the prophets of degenerative doom), civilised people become more compassionate, and thus take care of the weak, the sick, the mentally ill and others who would once have perished in the struggle for existence. Since these inferiors lack the moral fibre and foresight to avoid having hordes of children they cannot afford to feed, they proliferate at the expense of their hard-working and prudent superiors. Evolution goes into reverse and degeneration ensues. (This argument, of course, provided the main foundation for the eugenics movement.)

However, Professor Ross – who was a supporter of eugenics – offered a new twist on what was then a well-established argument. He thought that the hardy Chinese would eventually be undone by their success at stealing other people’s jobs; their living standards would rise and they would be weakened by civilisation itself – the fatal lure of “western sanitation” would make them soft and fussy about their living and working conditions, and hence their unfair advantage in the labour market would be undermined. However, he admitted that “It will take some generations of exposure to the relaxing effects of drains, ventilation, doctors, district nurses, food inspectors, pure water, open spaces and out-of-door sports to eradicate the peculiar vitality which the yellow race has acquired”.

And what, you might wonder, is Professor Ross’ conclusion from his argument? Should America have been shipping nurses, flush toilets and baseball bats to China as quickly as possible to accelerate the process of decay, or should it be denying these advantages to its own working population before they could get even softer? Neither. Ross argues that “During the interim the chief effect of freely admitting coolies to the labor markets of the west would be the substitution of low wages, bad living conditions and the increase of the yellow race for high wages, good living conditions and the increase of the white race”. So, naturally, immigration restrictions were vital. What is really depressing is that Ross was considered a ‘progressive’ eugenicist, because his racism was aimed at preserving American workers’ living standards.

Isn’t it a good thing that such self-evidently racist, inhumane and ridiculous arguments no longer have any sway with Americans, or anyone else…

A Fine Reader (of PDF files in particular)

Like most academics, I rely increasingly on online resources and there are now millions of pages of old journals, magazines, newspapers, etc. accessible online. In most cases, the files are stored in PDF (Adobe Acrobat) format, and optical character recognition (OCR) has been used to turn the scanned images into searchable text. Much time is saved, but a couple of problems plague me (and, I assume, many other users of these sites). Firstly, the OCR text is seldom proof-read and can be of very low quality (as a rule with OCR, the lower the quality of the original image, the lower the accuracy of the final text – and a lot of the resources I use have texts scanned from old printed materials or from microfilms of old printed materials). Secondly, in many cases when you download a PDF file to read or refer to later, the OCR text isn’t part of the file – all you get is the image.
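
(A small aside for the more technically inclined: before re-OCRing a download, it can be worth checking whether the PDF contains any text layer at all. Here’s a minimal Python sketch using the open-source PyMuPDF library – the file name is made up, and this obviously isn’t part of any archive’s own tooling:)

    import fitz  # PyMuPDF: pip install pymupdf

    def has_text_layer(pdf_path, min_chars=20):
        # True if any page yields more than a trivial amount of text
        with fitz.open(pdf_path) as doc:
            return any(len(page.get_text("text").strip()) >= min_chars
                       for page in doc)

    print(has_text_layer("downloaded_article.pdf"))  # hypothetical file name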

I use Adobe Acrobat Pro to create, edit and read PDF files (I know it’s expensive, but the education pricing makes it affordable). It has its own inbuilt OCR capability, but it doesn’t seem very accurate and there’s no easy way to edit the images that comprise the PDF (for example, to improve their contrast and thus the accuracy of the scanning). So, recently I’ve gone back to a program that I’ve not used in years, ABBYY FineReader 12 (again, not cheap, but having bought it in the past, I was able to buy an upgrade at the educational price).

As with most decent software, there’s a free demo version so that you can test it (and it runs on Windows and Mac). After playing with it for a few days, I’ve found it’s extremely accurate, even with old, fuzzy texts, and it has a couple of nice features that Acrobat Pro lacks. Firstly, it has a built-in image editor, so if your scanned image has dark edges, or other marks that confuse the OCR, you can delete them before you start. You can also eliminate big white borders (a pain when you’re trying to view your PDF at “page width” and want the text nice and readable). Even more usefully, FineReader lets you adjust settings in great detail; you can, for example, boost the contrast in the image, edit the levels (to eliminate a grey background), or deskew the image – and then choose whether to apply the edit to all pages, the current page, or a selection. And FineReader can also pre-process all or some of the images automatically and – unlike Acrobat Pro – you have quite a lot of control over what it does when pre-processing (you can select options such as “reduce noise” or “whiten background”, for example). The result, in my tests so far, is close to 100% accuracy for most of the PDFs I’ve converted (and of course you have the option to verify and correct the text before saving, if you want to). And it’s fast: on my PC, a 20-page PDF file is converted in under 10 seconds.
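
(For the curious: FineReader’s internals are proprietary, so I can’t show you what it actually does, but the general recipe – clean the image up, then recognise it – is easy to sketch with open-source tools. The following rough Python example uses Pillow and Tesseract; the file name and settings are purely illustrative, and real pre-processing would need tuning for each document:)

    from PIL import Image, ImageEnhance, ImageFilter  # pip install pillow
    import pytesseract  # pip install pytesseract (plus the Tesseract binary)

    def preprocess(img):
        # Grayscale, boost contrast and reduce speckle noise before OCR
        img = img.convert("L")                          # grayscale
        img = ImageEnhance.Contrast(img).enhance(2.0)   # boost contrast
        return img.filter(ImageFilter.MedianFilter(3))  # mild de-noising

    page = Image.open("scanned_page.png")  # stand-in for one scanned page
    print(pytesseract.image_to_string(preprocess(page)))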

And, of course, FineReader can do all the normal stuff an OCR package does, like scan pages directly to the format of your choice (Word, Excel, PDF, etc.).

So, if you need to do this kind of thing with documents, I recommend this very highly; over the course of the book I’m currently researching, I expect it to save me hundreds of hours.

UPDATE [15 September 2016]

I recently hit a problem with FineReader; whenever I tried to start the program I got an error that said “ABBYY licensing service is unavailable. The RPC server is unavailable.” I contacted ABBYY’s online help and after a couple of very quick emails they were able to solve the problem (you open Windows Services, select FineReader, and change the startup type from “Automatic” to “Automatic (delayed start)”). I was very impressed with the speed and efficiency with which ABBYY’s technicians resolved the issue. If you’re facing the same problem, there is more information on their website.
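
(If you prefer a command line to clicking through the Services dialog, the same change can be scripted. This sketch shells out to Windows’ sc.exe from Python; the service name is my assumption – check the real one in services.msc first – and it needs to be run from an administrator prompt:)

    import subprocess

    # Hypothetical service name - check yours in services.msc before running
    SERVICE = "ABBYY.Licensing.FineReader.Professional.12.0"

    # "delayed-auto" is sc.exe's equivalent of "Automatic (delayed start)";
    # sc.exe's syntax wants "start=" and the value as separate tokens
    subprocess.run(["sc", "config", SERVICE, "start=", "delayed-auto"], check=True)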

End of the affair (or how I learned to love an iPad)

My Surface Pro 3 got stolen last year and I decided to wait for version 4 to replace it. After reading various reports of initial teething troubles, I waited until the first major firmware update (a couple of weeks ago) before taking the plunge.

I won’t bore you with the details of the 48 hours of hell it took me to get it working; just be glad you weren’t there. When I finally got it going, I found the battery life was simply awful: 2.5 hours on a full charge, without doing anything very demanding (no games, no video editing, no mega downloads). I also found that the handwriting recognition (which is what I use it for most) worked less well in Windows 10 than it had with Windows 8.1 (for example, the text entry box is now one line instead of two and cannot be resized). And, since handwriting recognition is part of Windows, there are never likely to be many third-party handwriting apps (not that there are many Windows apps of any kind).

So, much to my surprise, I found myself reading reviews of the new iPad Pro…

I have never been much of a fan of Apple anything, but I must say it was nice to be able to go to an Apple Store and just play with an iPad for as long as I wanted. It was the feel of the Apple “Pencil” that I wanted to experience. (They couldn’t call it a stylus, of course: Steve Jobs would rise from his grave, and what would Tim Cook do for a living then?) Despite the silly name, the Pencil felt very nice to write with: a slightly softer tip, with a little more friction, would be even nicer, but it seemed good.

So, I took the plunge and have now been an iPad user for 3 whole days. And, so far, I am very happy. The key to my happiness is a free utility called MyScript Stylus. I have been using the Android version on my Samsung Galaxy Note 4 for a while and it’s very useful, but on the iPad’s massive screen it is an absolute delight. It’s fast and accurate and, apart from a slight “tappity tap” noise as I write, it’s a real pleasure to write with. Some apps seem to dislike it (Chrome hates it and Safari is unsure), but OneNote – which I use most – has not had a single problem so far (touch wood). I suspect that if I were an Evernote user and got the Penultimate add-on I would be even happier, but we will never know, as I prefer OneNote. I hope that future versions (or rivals) will make it even better (editing text is a bit cumbersome, for example), but it’s OK for now.

And the iPad Pro’s battery lasts much, much longer than the Surface’s does. I haven’t been able to verify Apple’s claim of 10 hours, but I have managed about 6 without getting near the end of the battery life. And it really is a pleasure to watch TV on it: great picture and impressive sound. (Shame Arsenal couldn’t manage a goal, but I suppose Apple can’t be blamed for that.)

I will try to update this once I’ve been using the iPad for longer.

Fame at last?

It’s a moment every serious evolutionist dreams of – the first time you get misrepresented on a creationist website – and it finally happened to me recently, when the site Creation Evolution Headlines (CEH) used my recent review of Niles Eldredge’s book Eternal Ephemera (in the magazine Science, 17 April 2015) to add a little confusion to a non-existent “debate” around evolution. (I can’t tell you who wrote it, I’m afraid, because CEH’s contributor preferred not to put their name to their opinions.)

Eldredge’s book is not about whether evolution happened (as CEH tries to pretend); it’s about how it happened, what combination of factors best explains the phenomena we see around us. There’s no doubt in his mind (or in mine) that evolution has happened and the modern theory of evolution is the best explanation we currently have of the diversity of life on Earth. However, evolutionary biologists recognise that complex scientific processes have complex causes and there are often debates within science about which causes are (or have been) the most significant. In the case of evolution, there’s some debate about whether species are changing continuously or remain stable for long periods once they’ve evolved. I am not a biologist, and don’t pretend to be qualified to comment, but it’s an interesting historical issue because different ideas about the mechanism and pace of evolution have dominated at different historical times; I reviewed Eldredge’s book because it looks at that history.

Picture of Lamarck
Jean Baptiste Pierre Antoine de Monet, Chevalier de Lamarck
(1744–1829)

Eldredge notes that before Darwin, there were two main approaches to evolution. The first came from the Frenchman, Jean Baptiste Pierre Antoine de Monet, Chevalier de Lamarck (1744–1829) – better known simply as Lamarck – who argued that species had not all been created at the same time, but had changed and developed over time. He assumed that each living thing began as the simplest possible form of life and then developed into a series of increasingly complex organisms. In Lamarck’s scheme, species do not split into multiple new species; each follows its own evolutionary path. Some species look more evolved than others because they started their evolutionary development earlier and thus have moved further up their particular evolutionary escalator. One of the interesting implications of Lamarck’s view is that there are really no such things as “species”, since all living things are changing continuously (albeit too slowly for us to see the changes as they happen); the groups of more-or-less similar creatures that science names as species are really just snapshots of something that is always on its way to becoming something else. The fossils they leave behind embody this; they record an arbitrary moment in the organism’s progress. (And for Lamarck, evolution was very much a record of progress; organisms didn’t just change, they improved.) The reason we don’t find lots of fossils of creatures that are clearly in-between known species (usually known as intermediate forms) is that fossils are rare; most dead creatures never become fossils, so of course there are lots of gaps. Lamarck argued that if everything were fossilised, we would have a continuous sequence of organisms that blend seamlessly into an unbroken series of links.

Picture of Brocchi
Giambattista (or Giovanni Battista) Brocchi (1772–1826)

One of the most interesting things about Eldredge’s book is that it highlights the existence of an alternative model of evolution (also before Darwin), developed by the Italian geologist Giambattista Brocchi (1772–1826), who saw species as very like individuals: they are born and eventually they die. They may give birth to new species (a process that biologists now call speciation), but they do not usually change steadily and gradually over time. For Brocchi, the lack of intermediate forms is not a result of poor fossilisation; such forms are incredibly rare because species are usually stable. It was only in the most unlikely circumstances – such as a small group of organisms becoming isolated from the rest of their species and having to survive in radically new circumstances – that a new species would arise. In Brocchi’s view, the appearance of new species was a very uncertain business – genuinely random factors were involved – whereas Lamarck’s stately mechanism guaranteed progressive development, onward and upward to ever-more-perfect forms.

Brocchi’s alternative model never really caught on in his day and his work is now all but forgotten, but one of the many interesting historical facts that Eldredge highlights is that Charles Darwin himself was dubious about Lamarck’s version of evolution and – for a fairly brief period during and after the voyage of the Beagle – more interested in Brocchi’s. However, by the time On the Origin of Species (1859) appeared, no trace of Brocchi’s ideas remained. One of the historical ironies that Eldredge highlights is that most histories of evolution see Lamarck as the bad guy, the one who got evolution wrong and whose ideas Darwinism eventually managed to remove, yet conventional Darwinism looks rather “Lamarckian” in some respects, not least because it has always emphasised slow, gradual change. Darwin himself emphasised the imperfection of the fossil record as explaining the lack of intermediate forms:

We should not be able to recognise a species as the parent of any one or more species if we were to examine them ever so closely, unless we likewise possessed many of the intermediate links between their past or parent and present states; and these many links we could hardly ever expect to discover, owing to the imperfection of the geological record. (Origin of Species, p. 464)

Or, as he put it even more succinctly, “The crust of the earth with its embedded remains must not be looked at as a well-filled museum, but as a poor collection made at hazard and at rare intervals” (p.487). Ever since Darwin, this view has largely prevailed and it is, as Eldredge notes, strikingly similar to Lamarck’s view (although there are, of course, many differences between Darwin’s and Lamarck’s views on many other points).

Eldredge thinks Brocchi was closer to the truth than Lamarck because Brocchi’s view is similar to his own theory of punctuated equilibria (developed with the late Stephen Jay Gould). “Punk Eek”, as it’s sometimes known, argues (as Brocchi did) that the fossil record is fairly accurate; species are stable throughout their lifetimes, and new species arise during relatively brief periods of speciation, usually in geographically isolated populations. There is no innate tendency for species to change, much less for them to progress; chance dominates. So Eldredge is left with a puzzle: why did Darwin once agree with him, and why did he change his mind? Why did the “Brocchian” strand in Darwin’s thinking disappear, and why has it remained marginal in evolutionary circles ever since?

(As an aside, Punctuated Equilibria has never become a widely held view among evolutionists. I’m not qualified to judge why, but I’ve often wondered whether that’s because it’s not clear what the consequences of adopting it would be: if Gould and Eldredge are right, the fossils remain the same and the fact of evolution is untouched, and it is not obvious that it would make any significant change to the day-to-day practices of palaeontologists, taxonomists and evolutionary biologists. New scientific theories tend to be successful because they prompt new experiments, new questions, and new ways of solving problems. It’s not clear to me that Punk Eek does any of those things, but that’s probably just a reflection of my profound ignorance of palaeontology.)

As a historian, I don’t find Eldredge’s puzzle at all puzzling. As I noted, Lamarck saw evolution as progress and so did virtually every nineteenth-century evolutionary thinker. Just think about fossils; the recent ones are more like modern species (that have survived), less like earlier ones (that are extinct). What’s that if not a record of progress? Modern evolutionary biologists generally reject the very idea of progress. They describe evolution as a process of successfully adapting to a specific evolutionary niche; if your niche disappears, so do you, and it may be sheer dumb luck that deprives you of your niche (think of the dinosaurs; what possible adaptation could prepare you for a massive meteorite impact?). However, I’m not aware of any nineteenth-century evolutionist who thought in those terms, least of all Darwin. As his well-known conclusion to the Origin argued:

Thus, from the war of nature, from famine and death, the most exalted object which we are capable of conceiving, namely, the production of the higher animals, directly follows. There is grandeur in this view of life, with its several powers, having been originally breathed into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved. (p.490)

Cartoon: “Man is But a Worm”

Note that “higher” (better) animals have developed from a “simple beginning”, and they are “exalted”, “beautiful” and “wonderful”. It’s all-but-impossible not to read this as a claim for progress. And the tense of the last sentence is, I would argue, very revealing if you are trying to understand why so many of Darwin’s contemporaries accepted the fact of evolution (and they often did, despite the religious controversy and persistent doubts about whether natural selection was sufficient to achieve everything Darwin claimed). Many of Darwin’s fellow gentlemen of science were impressed with his dignified and modest tone, the great weight of evidence, and by Darwin’s personal respectability. But all readers of the Origin were left with a vision of potentially limitless future progress: “have been, and are being, evolved”. The wonders we see around us (including ourselves) are no more than a taste of the beauty and wonder to come. Many of Darwin’s contemporaries read Darwin (quite plausibly, I would say) as saying that progress was a law of nature, and his book’s full title gave them further reason to hope: On the Origin of Species by means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life. “Races” could and did mean “varieties of animals or plants”, but of course it could also mean “varieties of human being” and many of Darwin’s readers concluded that the British ruled the world’s largest empire because they were superior to all other “races”; the fittest had not only survived, but come to dominate all others. There’s a famous cartoon of Darwin in later years that mocks him for publishing an entire book on earthworms (it was to be his last book). Look at the wildly implausible evolutionary sequence that surrounds the elderly, bearded sage; its pinnacle, its “most exalted object”, is an English gentleman in a top hat. There was not just grandeur in this view of life, but considerable self-satisfaction too.

Whether or not Darwin intended his book to be read that way is a topic for another day (but if you’re really interested in my view, you could read my introduction to the Cambridge University Press edition of the Origin, 2009). However, there’s no doubt that Darwin borrowed the key metaphors of his argument from the world around him. Like many of his contemporaries, he saw the world’s first industrialised, capitalist economy as one in which competition was producing continuous, gradual progress (especially for wealthy gentlemen, like Darwin, with substantial investments in the railways). Darwin effectively saw nature as a perfectly efficient free market at a time when Victorian capitalism was gradually placing the fruits of industrialisation, in the form of cheap, mass-produced consumer goods, within many people’s reach. The Victorian middle classes assumed that this progress, built on competition within and between nations, was why Britain had avoided the revolutionary upheavals that beset their less-successful competitors. Under these circumstances, it’s difficult to imagine Darwin formulating a radically Brocchian view of evolution, a chancier, more random process that instead of guaranteeing progress, promised only eventual extinction. But even if he had, it’s impossible to see his fellow gentlemen of science accepting it. This, in essence, is my critique of Eldredge’s history; he’s too steeped in the nitty-gritty of Darwin’s theorising and tends to ignore its broad socio-cultural context. (If you’re interested, you can read my Review of Eldredge here.)

So, you may be wondering: what does any of this have to do with Creationism? The answer, of course, is nothing. Yet CEH quotes my criticisms of the way Eldredge uses history and concludes with the supposedly shocking revelation “So are there current evolutionary debates? Endersby just admitted as much.” Well, duh, of course there are current evolutionary debates; evolution is a science, not a faith, and it doesn’t deal in eternal, unchallengeable truths, but in evidence and open debate about the meanings of that evidence. CEH announces that “It appears Endersby would prefer to put a bandage on that old sore spot by questioning Eldredge’s credibility”. Actually, I’ve done nothing of the sort; I said quite specifically that I doubted whether the historical argument added anything to the credibility of Eldredge’s view of evolution, but it was up to biologists to judge the usefulness of Punctuated Equilibria itself.

Nevertheless, CEH continues:

What [Endersby] has done, though, may be more damaging. He has agreed, in some detail, that evolutionary theory itself has evolved since the pre-Darwinian speculations, all through the 19th and 20th centuries, till the present day. But if evolutionary theory evolves, it could evolve into its converse in the future.

It is, of course, in the nature of scientific theories that they change over time; they are always provisional hypotheses, subject to debate and always potentially open to disproof (otherwise it wouldn’t be science). Evolutionary theory has changed (several times) since Darwin mainly because scientists have tested it and tried to improve it. Some of the changes, like the addition of genetics (which didn’t exist in Darwin’s day), have been adopted by almost all biologists; others are still being discussed. Could evolution “evolve into its converse in the future” (by which I assume CEH means, could a religious view displace it)? It’s impossible to say with complete certainty, of course, but it’s most unlikely; a mature scientific theory like evolution by natural selection has withstood over 150 years of critical testing and close scrutiny by some very smart people. And, as a result, the scientific community’s confidence in its correctness has grown, but that confidence could never reach 100% so there’s a small (and diminishing) chance that some new piece of evidence could force scientists to dramatically revise or even abandon current theories of evolution. However, what I will predict (with considerable confidence) is that if that were to happen, the modern theory of evolution would be replaced by an alternative scientific theory, based on rigorous evidence, not by an unsubstantiated faith in the literal truth of the Bible (or in any of the world’s other creation myths).

Naturally, I don’t expect anyone who believes the CEH view of the world to accept any of this, because their beliefs aren’t founded on testable evidence. I have no objection to that; I just wish they’d acknowledge it, stop pretending that they are interested in science, evidence or facts – and stop lying about what other people have or haven’t said about evolution.


Orchid-powered chocolate

Vanilla planifolia
According to a sixteenth-century Spanish manuscript, the Mexicah (Aztecs) drank a tasty and nutritious drink they called chocólatl. However, the writer, Francisco Hernández, cautioned that “they say that it puts on extraordinary amounts of weight if it is used frequently”.

(If only I had read this sooner; I could have been spared many tedious hours at Weight Watchers.)

Hernández discovered chocólatl during a seven-year expedition to explore the natural history of the colony of New Spain (present day Mexico). He had been sent there by Philip II of Spain in 1570 and spent his years in the New World exploring, cataloguing, drawing and collecting. Unusually, he not only took the trouble to learn the indigenous language, Náhuatl, but also translated his own work back into it, so that the local people who had helped him to gather information could also read what he had written.

Hernández returned to Spain in February 1577 with thirty-eight volumes of notes, plus drawings and specimens. Sadly, very little of this was published in his lifetime and much was destroyed when the Spanish Royal Library burned down after his death. Fortunately, a few manuscript copies of his work circulated and some extracts were published, so we now have good editions of much of what he wrote.

Vanilla (from Hernández Rerum Medicarum Novae Hispaniae Thesaurus, 1651)
Among the many strange and wonderful new plants Hernández described was one that the local people called tlilxóchitl (pictured, from the first printed edition of Hernández’s work), the flower we now call vanilla, which was the first New World orchid to be described in Europe (although, as we shall see, it wasn’t recognised as an orchid until much later). As with many of the plants he described, Hernández (who was Philip II’s doctor) was interested in the medicinal properties of tlilxóchitl. According to which version of his texts you read, vanilla could be mixed with several other things (from other flowers to “the tail of the opossum”) and the resulting mixture, “introduced into the uterus, is an excellent remedy for sterility”. It was also good for stomach ache and poisons, “expels flatulence” and (thankfully, given the previous use) was highly aromatic.

Success with women?

However, one important property of vanilla is missing from this account; in the centuries that followed, it was widely believed that vanilla was an aphrodisiac. This story can be traced to one of the earliest accounts of chocólatl, Bernal Díaz del Castillo’s True History of the Conquest of New Spain (Historia verdadera de la conquista de la Nueva España). Díaz del Castillo, who was a foot-soldier in Cortés’ army, recorded that the Aztec emperor, Moctezuma II, drank chocolate flavoured with vanilla and that his servants “brought him cups of fine gold, with a certain beverage made of cacao, that they said was for success with women”.

Two things are unclear about this. Firstly, it’s not clear whether vanilla or chocolate was considered the aphrodisiac, or if it was the combination of the two. When Hernández described chocólatl he recorded that “the property of the drink composed is to excite the sexual appetite”, which suggests that the mixture was the key, not the vanilla on its own.
tlilxóchitl (from Badianus Manuscript)
Secondly, it’s not clear whether the aphrodisiac power of vanilla was really an indigenous belief. The very earliest written record of vanilla is in a Latin text called the Libellus de medicinalibus Indorum herbis (The book of Indian medicinal herbs), which includes this picture of tlilxóchitl, on the left – the earliest known image of vanilla. The book is more often known as the Badianus Manuscript (1552) and was written by two indigenous men who had been educated by Spanish monks after the conquest. The main author – Martinus de la Cruz – was a native doctor, so the book is the oldest surviving record of Aztec (Mexicah) plant lore, explaining how the indigenous people of Mexico used plants before the Spanish conquest. Among the remedies it describes is the use of tlilxóchitl, one of several flowers that should be:

ground to powder; and when pulverised put them into the chalice of the well-known and very fragrant flower huacalxochitl [“basket-flower”, probably a species of Xanthosoma, a relative of Arums], so that they may catch and inhale the very redolent odour of this flower. Finally, take the much praised flower yolloxochitl [“heart-flower”, the Mexican magnolia, Talauma mexicana], hollow it out suitably, then put in the hole this health-bringing powder, and hang the little receptacle around the neck.

However, the recipe is for something called the “traveller’s safeguard” and there’s no hint that it had aphrodisiac properties.

Astonishing lovers

It would seem that if the Aztecs believed vanilla to be an aphrodisiac at all (which is not clear), it was only when mixed with chocolate. Yet, by the eighteenth century, the aphrodisiac properties of vanilla were widely accepted in Europe. For example, it is claimed that the German physician Bezaar Zimmerman published a treatise entitled “On experiences” (1762), in which he asserted that “no fewer than 342 impotent men by drinking vanilla decoctions, have changed into astonishing lovers of at least as many women” (although, as with so many orchid stories, I’ve been unable to trace a source for this one).

So, when and how did the idea that vanilla was an aphrodisiac gain currency? I’m still searching, but it’s interesting that orchids have been believed to have aphrodisiac properties since ancient Greek times (their name comes from the Greek word orkhis, a testicle, because the species the Greeks knew mostly had paired bulbs that looked like testicles). However, at the time vanilla was discovered by Europeans, nobody knew it was an orchid; the tropical orchids (most of which don’t have the paired bulbs) weren’t grouped together with the European ones until the seventeenth century. And tlilxóchitl (under its new Latin name, Vanilla planifolia) joined the orchid group quite late; for many years, Europeans only knew it by the black pods which gave it its Náhuatl name tlilxóchitl (from tlil or tlilli meaning black, and xóchitl or súchil = flower). [If you want to know how to pronounce Náhuatl words, there’s a Pronunciation and Spelling Guide online.]

Europeans mistranslated the name tlilxóchitl as “black flower” and for many years, believed the vanilla flowers were black (they’re in fact a creamy white); it was the black seed pods, from which vanilla essence is still made, that were the source of the Náhuatl name. Because European scientific names were for many years based on the form of the plant’s flower and no European had seen a vanilla flower (or at least, hadn’t connected one to the little black pods), it was impossible to recognise vanilla as an orchid.

So, did vanilla become a supposed aphrodisiac in Europe because it had finally been recognised as an orchid and orchids traditionally had that power? Or did the indigenous people of central America have that belief, and pass it on to their conquerors? And, if it was the latter, could the common aphrodisiac properties of vanilla and the European orchids have helped European naturalists to recognise that the orchids were a natural family of flowers? I’m still not sure.

In the meantime, I’m going to have some vanilla ice-cream, sprinkled with chocolate. Just in case it’s all true.

Almost in love (with Microsoft’s Surface Pro 2)

I write for a living. When I’m not writing books and articles, I am writing notes for future books and articles, or I am writing emails and reports, or comments on my students’ work. And, for the last 20 years, I have been doing almost all of that on a computer, mostly a desktop PC. Like everyone else in the world (with the possible exception of members of an isolated tribe in the New Guinea highlands who have yet to see a white person, other than Lady Gaga), I have been reading reports of the death of the PC for the last couple of years. Apparently sales are plummeting because everyone is buying tablets.

Now, I’m a man who likes a nice gadget and my curiosity usually outweighs both my common sense and my concern for the environment. Also, I have a new role at work, which involves going to a lot more meetings. I make notes in these meetings, using a pen and paper (younger readers, if any, may wish to research those terms), which is quick and easy, and doesn’t distract me too much from what people are actually saying in the meeting. The problems begin when I try to find my notes later so as to remember what I had promised to do in respect of the various “action points” that arose.

At times my aching eyes, back, neck and arms lead me to suspect that I spend too much time sitting at a desk in front of the PC. So I try to vary things. For example, when I’m reading for research I sit in a nice comfortable armchair, or stretch out on the couch, with a book or a print-out of an article that I’ve downloaded, a pen, and a pad of Post-it notes. Much nicer. But when I’m done reading I have to go back to the computer and type up my notes, otherwise I will eventually lose the book or the Post-it notes will fall out. And there are other problems. I was re-reading my copy of J.B.S. Haldane’s Daedalus, or, Science and the Future (1924) in preparation for a class a few weeks ago. At one point, Haldane comments that “To light a lamp as a source of light is about as wasteful of energy as to burn down one’s house to roast one’s pork”. Attached to this is a Post-it note on which I’d written “roast pork – Elia (Charles Lamb) – also mentioned by J.S. Huxley”. Well, as Robert Browning almost said, when I wrote that only God and Jim Endersby knew what it meant, but now only God knows. In slightly over five decades of fairly steady (albeit uneven) use, my brain seems to have developed a few bald patches, where it no longer grips as it once did. Colleagues comment approvingly on how quickly I reply to emails, but the truth is that if I don’t do things immediately, I forget to do them at all. I rely increasingly on the computer to remind me of things – names and dates and my friends’ children’s birthdays. And my assumption is that this need will become greater in time.

So, a few weeks ago, after much research, I bought myself a tablet computer and ended up with a Microsoft Surface Pro 2 (slightly to my surprise). Anyone who does a lot of writing with a computer and wonders how much of it you might do with a tablet (and why you might want to switch) may be interested to know how I arrived at the choice, and what I think the pros and cons of this tablet (and, to some extent, tablets in general) are.

What, no iPad?

We already have an iPad in the house; the kids love it, especially because they can make movies on it; iMovie and Edumotion (a very simple stop-motion animation program) are wonderfully easy to use and the business of making visual content is genuinely intuitive, especially for the “pointer” generation (see Jennifer Egan, A Visit from the Goon Squad. Great book.) However, the iPad is visual; when it comes to words, it is mainly for consuming rather than creating them. Personally, I find the little on-screen keyboards on gadgets like this an absolute pain. You could, of course, buy a little tiny (and fearsomely expensive) Apple keyboard, which gives you a little tiny (and fearsomely expensive) laptop that is woefully underpowered and awkward to use. No thank you. I have a laptop and when I want to use a laptop I prefer a 13” screen and a full-sized keyboard. However, the problem with laptops is that the screen is too close to the keyboard, which exacerbates the neck-ache problem. This is a technology, like the PC itself, that bends you into an odd shape so that you can use it.

Take Note(s)

Apart from my “real” PCs (laptop, home and work desktops), the gadget that I use most is my phone, a Galaxy Note II, which is expensive – especially for someone who rarely makes phone calls. But I use it every day as a diary and to check emails. It synchronises all my appointments and my address book with my other computers and the screen is big enough that I can actually read it (my eyes, like the rest of me, are nearly 53 years old, and my relatively new varifocals make reading most phones a challenge).

However, the really exciting feature of the Note is that it has a stylus. Steve Jobs (“Most overrated individual in history?”, discuss.) once famously said that “if you see a stylus, they blew it”, a comment aimed at the old enemy, Microsoft, who introduced their tablets and “pen computing” platform back in 2000.[1] When it comes to styluses, I beg to differ (and when I’m as rich and famous as Steve Jobs was, no doubt people will actually care what I think). I simply cannot write emails or texts on a phone-sized screen using an on-screen keyboard (not, at least, at anything like the speed I can think, even though that’s slower than it used to be). But I can write with the Note’s stylus and it does a pretty good job of turning my horrible, illegible scrawl into recognisable text. (Especially if you replace the installed Samsung handwriting recognition app with MyScript Stylus which, even though it’s still in beta, does an even better job.) Thanks to the Note and its stylus I can sit on the couch, read my emails and write short answers, instead of having to go upstairs, switch on the PC, wait, and then spend even more of my day sitting at a desk.

The only problem with the Note is that the screen is too small for extended writing. For a phone that I mostly use as a diary, it’s a sensible compromise, but it made me think that maybe I needed something larger.

Taking the tablets

My first thought was simply to buy a bigger Note; Samsung make 8” and 10” versions of it. I played with both (thanks to PC World, almost the only shop in Britain that has working versions of the gadgets it sells on display, so you can actually try them). I was not persuaded. The 8” is too small and the screen resolution of the 10.1” seemed a bit too low (everything looked fuzzy). Samsung announced a new 2014 edition a couple of months ago. It took me weeks to find one and I was disappointed: it was fairly expensive, and the handwriting recognition didn’t seem to work as well as it does on the Note II (maybe it just needed some tweaking, and of course in the shop I couldn’t install MyScript Stylus). But the bigger problem was that the tablet (like the phone) runs Android.

There is nothing wrong with Android, as an operating system for phones at least. It works fine and it does what it’s supposed to. My problem is that it’s an open platform and everyone does their own thing with it, so Samsung’s version of Android is not only different to everyone else’s but varies slightly between their various gadgets. Also, Samsung are not primarily in the software business. As a result, they have no real interest in making apps/programs that will run on all platforms, nor in regularly updating and improving them. Once you’ve bought one of their gadgets, you are of no interest to them until you’re ready to buy another one. My other problem with Android is that I really rely on two bits of Microsoft software: OneNote and Outlook. I won’t explain here why I prefer OneNote to Evernote (some other time perhaps); I’ve tried both and OneNote wins for me. One of many reasons for that preference is that it’s really successfully integrated with Outlook; I can move items back and forth between the two easily and I find it simpler to organise my work that way. Outlook doesn’t exist at all on Android; I use a program called Touchdown instead, which is pretty good but lacks the tight integration with OneNote (and between its various components) that I’ve come to rely on. OneNote does exist on Android, and it’s usable, but it’s missing half the features of the full-fat Windows version. Given that Microsoft want everyone to buy and use Windows, they don’t have much interest in making really good Android apps. Neither Google (who make Android), Samsung nor Microsoft is to blame for this; it’s the remorseless logic of late capitalism, but I can’t wait for the revolution before I sort my tablet needs out. (Although it might be cheaper if I did.)

iPad and other alternatives

Both Outlook and OneNote run on some Apple gadgets, but you can’t get the full versions on iOS, which is what the iPad runs, and the iPad doesn’t have a stylus. You can, of course, buy a stylus for an iPad, but that would be a capacitive stylus, which means it’s basically a big, fat, artificial finger. The Samsung Note’s stylus is an active digitiser. You can learn the difference (and why it matters) from a nice clear post on Michael Linberger’s blog, but basically an active digitiser is more accurate; it will detect your palm (and ignore it), so that you can rest your hand on the touch screen while you’re writing; it has a fine tip that produces nice, fine lines; and – if you’re at all artistic – they’re usually pressure-sensitive, so if you have a suitable application you can press harder to get heavier lines, etc. For me, though, it’s the accuracy that matters: effective handwriting recognition relies on it.

So, by this stage I realised that I wanted a tablet with an active digitiser and a 10” (or better) high-resolution screen, and I wanted to run Outlook and OneNote, which meant I would have to buy a Windows tablet. That left me with four choices: Microsoft’s Surface; Sony’s Tap 11; Lenovo’s ThinkPad Tablet 2; or the newly announced Dell Venue Pro 11.

Quick reasons for rejecting all non-Microsoft choices:

  • The Sony is as expensive as the Surface but has slightly worse specifications, comes with a detached (and largely useless) keyboard, and is rather flimsy. Reports of the active digitiser are mixed.
  • The ThinkPad is too old, has a low-res screen, and there’s no sign of a new model.
  • The Dell looks very attractive but isn’t yet available in the UK and I was in a hurry. It may be worth checking out.

So, that left Microsoft. There are two models: the cheaper is the Surface 2 (which runs what was called Windows RT, but Microsoft is now so ashamed of it that it seems to be OSWAN, the Operating System Without A Name). Much has been written about the pros and cons of the Surface 2 and its anonymous operating system (most of it fairly negative), but for me it was a non-starter because it doesn’t have the active digitiser.

Finally, the Surface Pro 2

I am not going to write a full review; there are lots online and I read dozens before I made my decision; I found the ones on Techradar and PC Pro the most helpful. However, I will highlight the best and worst features from my perspective:

The Good

The digitiser is an absolute delight. Almost 20 years of typing on computers has destroyed my handwriting (and it was never great, as my primary school teacher, Mrs Worcester, used to constantly point out; she had an enlarged photograph of Queen Elizabeth I’s stupendous italic handwriting on her classroom wall to inspire us. Didn’t work for me. Perhaps my lifelong aversion to our monarchy put me off.) I can no longer read my own writing, but more importantly, I cannot quickly search piles of hand-written notebooks (especially after my kids have “enhanced” them in various creative ways); as a result, I often cannot find the crucial note I need. (Thinking back to undergraduate days, that was why I spent two days at the Sydney Workers Educational Association learning to touch type in the first place.) The Surface recognises my handwriting, and it did so straight out of the box, but better still it “learns” over time, so the number of corrections gradually reduces. You can also spend time training it to improve its accuracy faster.

Size, weight and battery life are all acceptable, but not wonderful. I can comfortably rest it on a lap or knee while writing and it doesn’t get too hot. It feels robust and solid enough to carry around and while I wish it was lighter, it’s only half the weight of my ultrabook (a Dell XPS 13, so it’s pretty svelte).

The best thing, however, is how you can use it. When I take it to meetings, I can pre-load the agenda and minutes onto the tablet beforehand (less paper to recycle afterwards), then make notes of any key points during the meeting. I do all this in OneNote. Thanks to the magic of SkyDrive (and the fact that I work in a modern university with wifi in almost every room), by the time I get back to my office my notes have all been synchronised with my desktop PC and I can look through the things I agreed to do, and just drag and drop an item onto my Outlook task list, or turn it into an email. No retyping, no searching my notes, no losing the printed agenda on which I had scrawled “must email Bill about this”.

There is also something minor, but – to me – rather lovely about the way the tablet affects my body language. When people use a laptop in a meeting, I feel as though they are not quite there: they’re “hiding” behind the upright screen, not listening to me (and possibly checking their emails or playing Solitaire). But a tablet can be almost flat on the desk, like a real pad of paper, which makes you look (and feel) as if you’re really part of the discussion. A small thing, but I like it.

My second main use for the tablet is annotating and commenting on PDF files or other documents. I do this more and more, partly to save paper and partly to be able to find my notes. I like to write on students’ work, for example. But if I print it out and write on it, they can’t read my writing, and I have to get the physical bit of paper to them, which leaves me with no copy of what I wrote. If I work on the electronic version (which is how most student work arrives these days), I can annotate it (and handwriting recognition means they can read it), keep a copy and email the comments to the students. For PDF files, I use Adobe’s Acrobat Pro XI (again, I’ve tried other things but this is best, as long as you’re only paying the educational price). For Word files, I just use the built-in comment and “track changes” features. Neither is perfect, but they are each a big improvement on the hand-written alternative. Similarly with research notes; I can save them and search them at a later date. And it means I can take any number of documents with me on a train, plane or to a café. Read them. Make notes. And once I’m back in range of wifi my notes are all synchronised and available on my other computers.

My third main use for the Surface is to make notes when I am reading an actual physical book. Instead of having to prop the book up on a stand by the PC and give myself a crick in the neck twisting from book to screen and back, or use great handfuls of Post-it notes, I can now sit in an armchair, book in hand, tablet beside me and use OneNote to make notes as I go along. Not only can I read and search my notes, and copy key points into my documents, but I am already saving a lot on Post-it notes. Obviously, since the Surface costs £799 I will have to save a lot of Post-it notes before the gadget pays for itself. And then there’s the electricity. And the broadband. But as I plan to live to be about 150 and to read a couple of books a day, getting through at least 100 Post-it notes per book, I figure I will come out ahead. Just.

And, of course, the Surface also does email and web browsing, reads ebooks, etc. And apparently you can even play Solitaire on it. Not that I would know, of course.

The Bad

So much for the good news. The not-so-good is that the Surface is a classic Microsoft product, which means it’s trying to be all things to all people, which really means that you have to learn to do everything the Microsoft way. (Not for nothing is the company HQ’s official address “1 Microsoft Way”.) The Surface runs Windows 8.1 (pictured) which, in true Microsoft style, is both a dessert topping and a floor wax. You and I might think that it would be sensible to make two versions of Windows: a desktop version for large, non-touch screens that is designed to be used with a keyboard and mouse (in which, for example, programs are represented by small icons, so that you fit lots of them on your big screen), and a separate version for tablets that is designed to work with a finger or stylus (in which, for example, programs are represented as nice big squares that are easy to prod with your finger). Well, you and I might think that, but that’s why we will never work for Microsoft. At 1 Microsoft Way they follow the One Microsoft Way, which says that not only does Windows have to run on everything, but it has to be the same version of Windows on everything. I don’t understand why, but perhaps that’s why I’m not a multi-millionaire like Bill Gates.

(Windows 8.1 also embodies the other key part of the Microsoft philosophy, which is “If it ain’t broke, break it”, and then 3 months later, release an update that makes the product almost – but not quite – as good as it was before they “improved” it in the first place. But I digress…)

Sadly, the one-size-fits-all philosophy infects the rest of the Surface too. In many ways, it’s a beautifully designed piece of technology, but it is trying to be both a full-blown laptop and a tablet, and so it is a rather unhappy compromise. With the attached Type Cover (another £100, pictured) you have a rather unsatisfactory (and small, and overpriced) laptop. Without the keyboard you have a lovely tablet, but it’s a bit too heavy and hot, etc., and – depending on how you use it – it will only run for about 5-6 hours on its battery.

What is more irritating is that the stylus integration and handwriting recognition could have been even better if Microsoft had focussed on making a perfect tablet instead of a slightly uncomfortable hybrid. For example, the handwriting recognition relies on a pop-up panel, but it takes up too much of the screen and it isn’t adjustable. At all. It should be possible to make it smaller (and semi-transparent) so that it doesn’t – for example – completely cover the web form you’re trying to complete. Nor does the pop-up appear automatically whenever you click a text box that requires text input. How hard could that be? And, while I’m whining, why does the OneNote app (i.e. the version that is supposedly designed for tablets) not allow you simply to write on it and have your writing recognised? Instead, when you write, you get a picture of your handwriting, without even the option of converting it to text. This is seriously dumb. As a long-term user of Microsoft products, I often wonder if anybody in the company actually uses them.

The ugly?

Yet, despite my complaints, I am almost in love with my Surface. Why? Because, unlike almost all the gadgets I own, or have ever owned, I feel it is adapting to me, rather than my having to adapt myself (and my aching back) to the technology. This gadget actually makes life easier, for me at least. Obviously, it’s a device that is still evolving. I dream of a Surface 3 that weighs about 300g less, is about half as thick, runs for 10-12 hours on its battery, and has a 12” screen. It would also have the option of built-in 4G mobile networking. And a slot in which to store the stylus. Oh, and it would run “Windows Tablet” which combines the best features of Windows 8.1 with the best of Windows Phone. And please could mine be purple (the whole “any colour as long as it’s black” thing is so 1908).

But, in the meantime, the gadget and I are pretty happy together.

“Bottleborn monsters” and utopian futures

I teach a course called Century of the Gene, which looks at the cultural impact(s) of biology in the Twentieth Century. As part of my homework for revising the course, I’ve been reading an old collection of SF stories called Bio-Futures, edited by Pamela Sargent.[1]

I have been particularly intrigued by one called “Emancipation: a romance of the times to come”, by Thomas M. Disch (an author whose name is vaguely familiar, but I can’t think of anything else by him that I’ve read). It deals with themes that I think of as characteristic of feminist SF from the Seventies, and that are rather unusual to find in a male author of this period (or, indeed, of any period), but perhaps my surprise simply indicates that I need to read more.

Emancipation

  • [Spoiler alert: if you like to read a story before reading about it, go and find a copy before going further, since I am about to give away plot details.]

The story is narrated by Boz, a bored house-husband living uptown in a future New York. As the story unfolds, we gradually get glimpses of a very sexually liberated society in which masturbating to “erotic” movies on television, homosexuality and bisexuality are all totally accepted. (Rather entertainingly, the society’s euphemisms for gay, straight and bi are Republican, Democrat and Independent; at a party, Boz is asked “how are you registered?” by a handsome “Republican” man who’s trying to pick him up.) More surprisingly, children are taught about sex by “hygiene demonstrators”, like Boz’s wife Milly, who earns her living having sex in schools (thankfully, the story does not specify whether the students are participants or observers, nor what ages they are). There’s no jealousy and the story is, especially by 1970s SF standards, sexually explicit (it includes a rare, detailed, male’s-eye view of why Boz enjoys cunnilingus so much, as well as several uses of a word for the female genitalia that I cannot bear to type, much less say, which is one reason this story won’t be going on my syllabus).

And yet, in this future where there is no guilt, no inhibitions and no jealousy, Boz is unsatisfied and he and Milly fight constantly. At one point, he leaves and goes home to his mother for a few days and, through the older woman’s complaints about the way the world is changing, we get glimpses of how this future came about and how recently it has evolved. The handsome Republican, with whom Boz goes home but for whom he cannot get it up, advises Boz and Milly to go to see a counsellor, who tells them the solution to their problems is to have a child. Milly – the driven career woman – realises that that is what she’s always wanted, she just didn’t know it. So far, so predictable. But we then discover that in these “times to come”, all babies are gestated in artificial wombs (ectogenesis) and, while his daughter is growing “in her bottle of brown glass, as pretty as a water lily”, Boz has implants and hormone treatments so that he will be ready to breastfeed her while Milly carries on working. At first, he has some doubts:

“Every hour of that first month was an identity crisis. A moment in front of a mirror could send Boz off into fits of painful laughter or precipitate him into hours of gloom”.

Yet, when breastfeeding he experiences the most intense pleasure, emotional and physical, that he has ever known. The story ends with the family on the balcony of their apartment, Boz and Milly completely happy with their daughter and relationship.

According to Bio-Futures, “Emancipation” was first published in 1974, but the Internet Speculative Fiction Database (I love this site, as only a geek can) says it first appeared in 1971, in New Dimensions 1: Fourteen Original Science Fiction Stories (ed. Robert Silverberg). Either way, the date is interesting because the themes of the story are so similar to those in Marge Piercy’s Woman on the Edge of Time (1976), which I teach as part of Century of the Gene.

Woman on the Edge of Time

  • [Same spoiler alert]

Piercy’s novel, a recognised classic of SF that is (understandably) much loved by feminists, concerns Consuelo (Connie) Ramos, a Latina woman from the Seventies who, like many women of the period, has become involved in what Phyllis Chesler (then Professor of Psychology and Women’s Studies at City University of New York) described as a “career” as a psychiatric patient (Women and Madness, 1972; Piercy acknowledged Chesler in her novel).

At the beginning of the novel, Connie is once again imprisoned in a mental hospital, where she is subjected to abuse, enforced administration of sedatives and other drugs, and (eventually) experimental brain surgery. During this ordeal, she is contacted by Luciente, a time traveller from the future, via a form of mental telepathy. Connie gradually learns how to visit the future, where she gets to know the village of Mattapoisett and experiences a utopia of complete sexual equality (including complete tolerance of all varieties of sexuality), where there is neither economic inequality nor racism. Connie is initially appalled by the seemingly primitive lifestyle of the future; everyone lives in small villages and spends much of their time on agriculture (at one point she contemptuously derides Mattapoisett as a “podunk utopia”), but the society’s egalitarianism gradually wins her over.

However, Connie is briefly repelled when she discovers the biological/technological basis of this new society:

“He pressed a panel and a door slid aside, revealing seven human babies joggling slowly upside down, each in a sac of its own inside a larger fluid receptacle.

“Connie gaped, her stomach also turning slowly upside down. All in a sluggish row, babies bobbed. Mother the machine. Like fish in the aquarium at Coney Island. Their eyes were closed. One very dark female was kicking. Another, a pink male, she could clearly see by the oversize penis, was crying. Languidly they drifted in a blind school”.

Connie, whose own child has been taken into care, weeps:

“She hated them, the bland bottleborn monsters of the future, born without pain, multi-colored like a litter of puppies without the stigmata of race and sex”.

[“Stigmata” is an interesting word here; stigma comes from the Greek, στίγμα “mark made by a pointed instrument”, originally a brand applied to slaves. According to the OED, its earliest use in English (1596) referred to circumcision as “impressing a painefull stigma, or caracter in Gods peculiar people”. But perhaps that’s a topic for another day…]

Connie is initially repelled by these “bottleborn monsters”, especially by the revelation that each of these babies has three “mothers”; some of them are male, others female, but none are genetically related to the child. The implications of this use of bio-technology to break the genetic/parental link become obvious when one of the men picks up a crying baby, and:

“He sat down with the baby on a soft padded bench by the windows and unbuttoned his shirt. Then she felt sick.

“He had breasts. Not large ones. Small breasts, like a flat-chested woman temporarily swollen with milk. Then with his red beard, his face of a sunburnt forty-five-year-old man, stern-visaged, long-nosed, thin lipped, he began to nurse. The baby stopped wailing and began to suck greedily”.

Luciente explains the decision to use ectogenesis:

“It was part of women’s long revolution. When we were breaking all the old hierarchies. Finally there was that one thing we had to give up too, the only power we ever had, in return for no more power for anyone. The original production: the power to give birth. Cause as long as we were biologically enchained, we’d never be equal. And males never would be humanized to be loving and tender. So we all became mothers. Every child has three. To break the nuclear bonding.”

Babies in bottles and breastfeeding men are only two of the similarities: Luciente is confusingly androgynous (Connie initially thinks she’s a man, largely because she moves and sits with so much self-confidence), as are Boz and Milly in “Emancipation”. Their counsellor explains that automation has destroyed the need for male physical strength:

“What this meant, in psychological terms, was that men no longer needed the kind of uptight, aggressive character structure, any more than they needed the bulky, Greek-wrestler physiques that went along with that kind of character. Even as sexual plumage that kind of body became unfashionable. Girls began to prefer slender, short ectomorphs. The ideal couples were those, like the two of you as a matter of fact, who mirrored each other”.

Ectogenesis, breaking the link between women and reproduction, is part of a wider erasure of the differences between men and women, one that each of these authors approves of, either explicitly or implicitly.

The Dialectic of Sex

The rationale for ectogenesis in “Emancipation” is not explained, but presumably – like Piercy’s use of the fictive device – it was inspired by Shulamith Firestone’s The Dialectic of Sex: the case for feminist revolution (1970). Firestone (who died in 2012) proclaimed that “pregnancy is barbaric” and argued that ectogenesis (and other technologies, particularly reproductive ones) would be a key part of the solution to gender inequality; once women were no longer defined or limited by their biology, they would be free to be full citizens of society, pursuing every opportunity on an equal basis with men. She argued for what she called “cybernetic communism” (which is a pretty good description of Mattapoisett; despite their rejection of consumerism, small wrist-worn computers called Kenners are another advanced technology the inhabitants rely on). Piercy’s novel may well have been inspired by Firestone’s observation that:

“We haven’t even a literary image of this future society; there is not even a utopian feminist literature yet in existence”.

That, of course, isn’t quite true. Firestone clearly hadn’t read Ursula K. LeGuin’s The Left Hand of Darkness (1969), but perhaps she’d finished writing her book before LeGuin’s came out. And she can be forgiven for not having read Charlotte Perkins Gilman’s classic Herland (1915), a “lost world”-style utopia inhabited only by women who reproduce asexually (i.e. by parthenogenesis, like some insects). Herland first appeared in serial form in a short-lived magazine called The Forerunner (which Gilman edited, published and largely wrote) and would not be re-published as a book until the late Seventies. There had been a few earlier feminist utopias (such as Mary E. Bradley’s Mizora: a prophecy, 1880–81), but all were long out of print. However, second-wave feminism in the Seventies led to many more (such as Joanna Russ’ The Female Man, 1975, and of course Piercy’s works).

SF has often provided a means for writers to imagine an alternative future that was better than their own, but it was comparatively unusual for men to write post-gender utopias in the Seventies (and there haven’t been very many since). The only other example I can think of (and it may not even qualify) is a short story called “Manikins” (1976) by John Varley, about Barbara Endicott who, like Connie Ramos, is a mental patient. The story is narrated by Evelyn Burroughs, a young female psychologist (Endicott won’t even speak to men), who is trying to find out what is “wrong” with the patient. Endicott is clearly crazy, since she believes that humans – and, it would seem, all the Earth’s other organisms – are all female, and that men are alien parasites who have infected women (semen passes the infection from generation to generation) in order to reproduce themselves. The offspring of these alien cuckoos are too big for the female body, hence the pain and danger of childbirth (so that’s why “pregnancy is barbaric”). Endicott argues that if women never had sex with men, they would revert to reproducing asexually (parthenogenetically, like the women in Herland), producing tiny female offspring easily and without pain. The story ends with an enigmatic, haunting vision (in Burroughs’s mind, perhaps?) of just such a parthenogenetic birth, which may be her own, or perhaps her daughter’s, or perhaps that of some past or future (race?) of women.

It is hard to say (it’s a pretty short story, just 13 pages) how “Manikins” is meant to be taken by the reader, which – I would argue – is what makes it a good story.[2] However, it’s clearly another rare example of a world without men, but it feels remote, perhaps impossibly remote, from the world we live in now. The same is true of Piercy’s novel (and Gilman’s), whereas Disch’s imagined future feels much closer. Boz and Milly are city dwellers looking out across a NY skyline that hasn’t changed from our day (in fact, given that the story was written in 1971, the future envisaged is roughly now). There are no SF trimmings or technologies (apart from ectogenesis) and familiar trademarks like Boeing and Pepsi get a mention. Married couples and nuclear families are still (in part at least) the norm, and there is certainly economic inequality (Boz’s mother lives downtown, which is still characterised by the poverty Boz has escaped by marrying Milly and moving uptown). So, if this is a utopia at all, maybe it’s just for a few: those with the money to live uptown. By contrast, Piercy envisions a much more radical change, in every sense, but perhaps that’s why I found Disch’s story curiously compelling. It feels like a world we might actually be living in before too long. Whether such a world would be utopian is much harder to imagine, but these are interesting stories to think about.


[1] Bio-Futures is out of print, but I found it on the excellent Biblio website, and – no – I don’t have any affiliation with Biblio. BTW, Sargent, in addition to being an SF author herself, also edited two collections of SF by women, Women of Wonder (1975) and More Women of Wonder (1976).

[2] As I am not in the English Literature business (although many of my best friends are), I feel entitled to give rather simple-minded readings of stories and to use straightforward adjectives like “good”, but feel free to add more sophisticated ones.

Selling stuff with DNA

I am intrigued by the way the double-helix structure of DNA is used to signify all kinds of vague “isn’t this scientific/genetic/biological/hi-tech” kinds of messages.[1]

Here’s an example (a Guinot ad) I saw in a shop window in Philadelphia a couple of weeks ago (hence the reflections, including me and my summer hat). Is the double helix in this image doing anything different from the attractive young lady?

According to Guinot’s website, their sun cream was “The first sun protection concept which incorporates DNA molecules, in addition to sun filters. Their role is to absorb and neutralise the UV rays, which would otherwise have damaged the DNA of skin cells”. I have no scientific qualifications, but this sounds as if the “DNA molecules” in the sun cream are the right size and shape to cover those in your skin, like a set of double-helical umbrellas. Or maybe some really advanced scientific research has gone into this…


[1] For many more fascinating examples, see M. Susan Lindee and Dorothy Nelkin’s book, The DNA Mystique: The Gene as a Cultural Icon (2004).