Monday, May 30, 2011

Power words and such

One of the hats I wear is that of chronicler of words pertaining to homosexual behavior and culture (see my Homolexis Glossary). This activity reflects my long-term interest in languages and linguistics.

Recently I have noticed the growing popularity of a particular syntactic gambit, that is, the habitual use of nouns as adjectives to the point that they almost amount to prefixes.

Of course, the rules of English grammar permit nouns to function easily as adjectives, as in the compounds "nation building" and "child support." In Romance languages one cannot do this; cf. the impermissible Italian expressions nazione costruzione (which must be costruzione di nazioni) and aiuto bambino (which must be aiuto di bambini or aiuto infantile).

However, the implicit rules of the English language easily permit noun-noun couplings of this type, as I have noted. Some new coinages reflect changes in actual practice, as in the compounds "day baseball" and "adult bookstore."

Lately though I have noticed two prefatory nouns of this type which have become so common--so infectious, one might say--that they are almost viral. I refer to “power” and “mercy.”

For some time we have been hearing about power breakfasts, power foods, and power walking. More recently we have had the power lesbian (a prosperous, well-connected lesbian who sleeps only with other power lesbians), and the more jocular expressions power ass and power coma (when one sleeps for a long time). A power shit is a particularly intense evacuation. A power douche is a truly unpleasant person, a douchebag to the nth degree. However, it is OK to be a power nerd if you remain popular, even while liking such things as Pokemon and Star Wars.

Particularly productive in this way is the word mercy. The expressions mercy killing and mercy mission have long been familiar. In sports, the mercy rule provides that a match be ended early if one side is so far ahead that continuing would serve no purpose.

Some of the more recent usages have to do with sex: mercy date (going out with a plain person to make that person feel better), mercy grope (allowing a brief touch from someone who seems to need the contact), and mercy f*ck (self-explanatory). Sometimes a college professor will give a mercy pass to a student so lazy that he clearly deserves an F. In the drug culture, one gives a mercy hit to some schlemiel who is too poor to buy his own stuff.


Saturday, May 28, 2011

Workerism and gay scholarship

Workerism is a term applied to several trends in left-wing thought and politics, especially anarchism and Marxism. In principle it stems from the recognition that in emerging capitalist economies a key economic role has been played by blue-collar workers, often termed simply the working class. For their part, white-collar employees are not usually regarded as workers in this sense. With the spread of the information-based economy and the decline of industrialism, the role of the working class in the older acceptation has necessarily diminished.

The corollary, especially on the Marxist left, is the idealization or glorification of manual workers, who are assumed to have acquired through experience--the school of hard knocks--some special wisdom that is denied to the intellectuals who idolize them. Yet much of this idolization is simply pro forma, as left-leaning intellectuals continue to communicate in jargon that is incomprehensible to the working class.

The glorification of the working class is the mirror image of the mainstream adulation of the aristocracy. Neither view seems well founded, even though both persist.

At one time, Socialist Realism ranked as one manifestation of cultural workerism. Now that that trend is gone, the notion occasionally pops up here and there.

A recent example is the spin leftist gay writers are currently giving to the work of the important gay historian Allan Bérubé, who died in 2007 at the age of sixty-one. A posthumous collection of his writings has just been published by the University of North Carolina Press: My Desire for History: Essays in Gay, Community, and Labor History, edited by John D’Emilio and Estelle B. Freedman.

Bérubé made his mark some twenty years ago with Coming Out Under Fire, a pioneering study of gay men in World War II. Subsequently, he received a “genius award” from the MacArthur Foundation, an honor he well deserved.

However, I don’t think that Bérubé’s memory is enhanced by his left-leaning friends who claim that he was raised in an insalubrious trailer in New Jersey. Entirely self-taught (according to this view), he is an icon of “community history.” This appears to be a new wrinkle on workerism.

In reality, Bérubé lived in the trailer with his family only briefly. Assisted by a scholarship, he went to a fancy prep school in Massachusetts. He attended the University of Chicago, only to drop out just before getting his degree to work against the war in Vietnam.

I knew Allan Bérubé. He emphatically did not have a “working-class” accent or habits. He was, as far as I could tell, nonideological. (He did pronounce his surname "Beeruby," though.)

Of course, Bérubé did not teach in a college or university: he was an independent scholar. However, that status has characterized many, if not most, of the founders of gay history (see my account). One need only think of Heinrich Hoessli, a milliner, or K.H. Kertbeny, a journalist. During the twentieth century important pioneering work was done by Donald Webster Cory (Edward Sagarin) and Jim Kepner, to name just two examples. Eventually, Cory-Sagarin did get a Ph.D., but only after he had published his magnum opus, The Homosexual in America. Kepner never went to college at all.

In this way Bérubé continued a long, very substantial tradition. For this reason, his important contribution should not be distorted by enrolling him posthumously in the ranks of workerist heroes.

UPDATE (June 9). I have read the essay collection. Unfortunately, it is rather slight. After his major book came out, he never quite seemed to find his footing as a scholar.


Wednesday, May 25, 2011

The gay left question

In 1968, a year before Stonewall, I became a gay activist by joining the Mattachine Society of New York. A few years later I gravitated to other organizations that I found to be more effective. Fueled by well-justified opposition to the folly of the Vietnam War, a mood of insurrection was alive in the land. Together with other groups that had been repressed, gay people were “revolting.” And rightly so, we felt.

Some participants, however, who came to be called the gay left, drew broader conclusions, encouraged by the turn to the left found in many newly independent third-world countries. The name of the first militant organization, the Gay Liberation Front (GLF), was modeled on that of the FLN, the Front de Libération Nationale in Algeria. The core belief of the gay left in those days was that the “cosmetic” changes proposed by liberals were not enough; there must be a top-to-bottom renovation of society as a whole. Socialist revolution, in short.

Others of us, aware of the repressive nature of authoritarian regimes in Eastern Europe and elsewhere that called themselves socialist, were not so sure. For many the officially sponsored homophobia of the Castro regime in Cuba was a turning point.

Still, the 1970s were the springtime of the gay left; it has never regained that vigor since. And of course there was the larger context. Gradually, the fortunes of the left declined globally, as the actual practice in countries of “actually existing socialism” was seen to diverge so widely from the ideal. While this decline must be acknowledged as a fact, it is not necessarily a reason for rejoicing, because when we needed the left to mount a vigorous opposition to our disastrous foreign wars, it had become too feeble to be of much help.

Now the debate about the gay left has been reignited by a vigorous defense stemming from the Boston-based writer Michael Bronski, whose book “A Queer History of the United States” has just been released and has caused a good deal of buzz. (My copy is on order, and I will offer a further report when I have read it.)

In the meantime my friend Andrew Sullivan has gone ballistic at his “Daily Dish” site. A self-described conservative (though very much a conservative with sanity), Sullivan has a somewhat peculiar definition of the gay left. In a nutshell, he thinks that it is mainly defined by opposition to gay marriage. Here is something he wrote this morning:

“By "left" I do not mean gay liberals, like, say, the HRC [Human Rights Campaign, headquartered in Washington, DC]. They opposed marriage rights for so long for pragmatic and tactical reasons--because it embarrassed their Democratic Party paymasters. By left, I mean those who opposed the push for military service and marriage rights from the get-go as a surrender to bourgeois conservatism. They wanted all gays to have no choice but to be associated with the New Left and, like many ideologues, spent a great deal of energy purging and demonizing those gays who dissented.

“Much of the gay left, mercifully, has now abandoned their stance (but not Michael Bronski, it appears). I wish I could claim some credit but most of it goes to George W Bush, who unified the gay movement around marriage rights in a way no gay writer or leader could. But among those who once virulently [sic] opposed gay civil equality in these areas [were] leftists like Bronski, Paula Ettelbrick, Peter Tatchell, Richard Goldstein, Michael Warner, and a whole slew of others whom the late and great gay journalist, Randy Shilts, called the Lavender Fascists.”

Sullivan goes on to quote the liberal Evan Wolfson, whom he rightly acknowledges as the real hero of the marriage equality movement:

"'[Marriage equality] was the subject of big divisions within the movement, within the legal groups and within Lambda,'' he says, noting there were two distinct approaches from opponents. ''There was the ideological opposition, and the strategic or tactical or timing opposition... That was the biggest dividing line, the biggest source of arguing amongst a group that might quibble or haggle over a particular legal idea but basically agreed over a whole range of things,'' says Wolfson. ''The one thing that people would argue about more than any other was marriage.''

"'Nobody was going to challenge that we needed to get rid of sodomy laws," Paula Ettelbrick explains. "No one was going to challenge that we needed antidiscrimination laws to deal with everything from HIV to sexual orientation.'' But marriage ''was hotly debated.'' She adds, ''I think it was a really important part of our movement that's seldom been fully addressed, to tell you the truth.'' ...

The account continues: “A ''defense of sexual freedom'' was provided during the debate by people like Michael Warner, who countered Sullivan's book, Virtually Normal, with his own book published in 2000, The Trouble With Normal. ''At a time when the largest gay organizations are pushing for same-sex marriage," Warner writes in his preface, "I argue that this strategy is a mistake and represents a widespread loss of vision in the movement.'"

From all this Andrew Sullivan draws the following conclusion--or perhaps I should say he adds up two and two and gets five. “This is what and who I mean by the gay left. . . . It was once extremely powerful and to oppose its victimology argument and its insistence that all gays be corralled into one far-left political position was to go through a political wood-chipper. I know it seems bizarre today, and without Bush, the left might have retained more power for longer. But it was the defeat of the arguments of the gay left that allowed for the emergence of a movement for civil equality in marriage and military service. Bronski's attempt to rewrite history represents the final gasp of that dead end.”

Dead end? I don’t really think so, but to avoid that fate the left, including the gay left, must do some hard thinking.

For the record I should say that I do not regard gay marriage as a major desideratum, though I think that those who wish such a state of matrimony should be able to have it. However, John D’Emilio and other gay writers on the left are correct when they say that the mistakes that were made in the early stages of the push for gay marriage triggered the greatest outburst of antigay legislation since the days of Oscar Wilde. This year, in fact, marks the melancholy fifteenth anniversary of the odious “Defense of Marriage Act.” We are still struggling with these effects.


Tuesday, May 24, 2011

To assimilate is great--or is it?

For some time now a dispute has been simmering between two factions in the gay world (or as some would have it, the queer world). 

1) There is what is sometimes termed the gay-radical approach. Continuing the gay-liberation tradition that began with Stonewall in 1969, the proponents of this view strongly affirm gay and lesbian distinctiveness or exceptionalism. These folks hold that the expressivity--sometimes extending to "outrageousness"--found in many g/l people is not simply a product of the long-standing obloquy and discrimination imposed by the host society. In no way are these traits to be dismissed as pathology. Instead, they serve to preserve a heritage that is vibrant and indispensable. For participants, the behavior functions as a survival mechanism. But that is not its only value. For its own good, society needs individuals who will shake things up from time to time. That is our role, and it is a positive one.

Harry Hay, the founder of the modern American gay movement, used to say that there is such a thing as a “gay window,” a particular way that homosexuals have of viewing things. This perspective stems from the fact that we must always seek to combine the way we perceive things with the standard opinions found in the larger society. (Hay was probably riffing on a similar thought W. E. B. Du Bois had voiced a hundred years ago about black people.)

2) Opposing this approach is the view of those observers who are sometimes labeled assimilationists: they hold that we are (or soon will be) just like everyone else. We must shun marginality, which is now dated and dysfunctional. As Andrew Sullivan put it, the ultimate goal of gay organizations is simply to disappear, once they have done their work.

Presently, I will cite Johann Hari, who puts the case for this second view more eloquently than I can--perhaps because I have difficulty signing off on it.

For some, the distinction between the two factions is encapsulated in the word "queer"--does one accept the q-word or reject it? Many who have embraced the term seem to understand it in sense no. 1, holding that it aptly characterizes the transgressive enterprise to which we must commit ourselves, body and soul. However, matters are not so straightforward, for the queer concept also harbors assimilationist overtones. It points the way for us to merge into a larger entity, a great congress of outcasts, as it were. Yet the linkage may be even broader. I remember once hearing Lisa Duggan say that everyone is queer, in the sense that we all have some eccentricity or personal distinctiveness. It could be an oddity of speech or a mild phobia, but also a gift for performance or a way of doing things a little differently.

At all events the differences between the two factions have been well characterized by Johann Hari, a British gay journalist, in a piece published in the online magazine Slate a few days ago. (The paragraphs I am about to quote are from a review of a new book on Queer History in America by a Boston scholar, Michael Bronski, who is very much a supporter of the first position sketched above; Mr. Hari is not.) Here is Hari:

"My view—since reading Andrew Sullivan's masterpiece "Virtually Normal" when I was a teenager—is that the point of the gay rights struggle is to show that homosexuality is a trivial and meaningless difference. Gay people want what straight people want. I am the same as my heterosexual siblings in all meaningful ways, so I should be treated the same under the law, and accorded all public rights and responsibilities. The ultimate goal of the gay rights movement is to make homosexuality as uninteresting—and unworthy of comment—as left-handedness.

"That's not Bronski's view. As he has made more stridently clear in his previous books, he believes that gay people are essentially different from straight people. Why is his book called a "Queer History" and not a "Gay History"? It seems to be because the word "queer" is more marginal, more edgy, more challenging to ordinary Americans. He believes that while the persecution in this 500-year history was bad, the marginality was not. Gay people are marginal not because of persecution but because they have a historical cause—to challenge "how gender and sexuality are viewed in normative culture."

"Their role is to show that monogamy, and gender boundaries, and ideas like marriage throttle the free libidinal impulses of humanity. So instead of arguing for the right to get married, gay people should have been arguing for the abolition of marriage, monogamy, and much more besides. " 'Just like you' is not what all Americans want," Bronski writes. "Historically, 'just like you' is the great American lie." He swipes at the movement for gay marriage, and Sullivan in particular, as an elaborate revival of the old social purity movements—with the kicker that gays are doing it to themselves. (It's easy to forget that when Sullivan first made the case for gay marriage, his events were picketed by gay people spitting this argument into his face.)

"When Bronski argues this case, his prose—which is normally clear—becomes oddly murky and awkward, and he may not agree with every word of my summary: This is the best I can figure out his position. He does finally explicitly say that the gay movement should have fought instead to "eliminate" all concept of marriage under the law, a cause that would have kept gay people marginalized for centuries, if not forever. Of course some gay people hold revolutionary views against the social structures of marriage and the family—and so do some straight people. But they are small minorities in both groups. If you want to set yourself against these trends in the culture, that's fine. Just don't equate it with your homosexuality. When Bronski suggests gay marriage "works against another unrealized American ideal: individual freedom and autonomy," he is bizarrely missing the point. Nobody is saying gay people have to get married—only that it should be a legal option if they want it. If you disagree with marriage, don't get married. Whose freedom does that restrict?

"It's bizarre that Bronski—after a rousing historical rebuttal to the right-wing attempt to write gays out of American history—ends up agreeing with Rick Santorum, Glenn Beck, and Michele Bachmann that gay people are inherently subversive and revolutionary, longing for the basic institutions of the heterosexual world to be torn down. There's a whole Gay Pride parade of people marching through Bronski's book who show it isn't so—from the residents of Merrymount proudly carrying their giant phallus, to Deborah Sampson Gannett dressed in her military uniform as Robert Shurtliff, to the men in Physique Pictorial in their little posing pouches. They didn't choose marginality and exclusion. They were forced onto the margins. It would be a betrayal of them—not a fulfillment—to choose to stay there, angrily raging, when American society is on the brink of letting them into its core institutions, on the basis of equality, at long last."

A curious feature of this debate is that advocates for both sides say that they are not trying to coerce anyone to do or be anything. They are simply asking that we each have the liberty to be true to our own nature.

I suppose that these libertarian claims are true, up to a point. Yet each side demonstrates, from time to time, a heavy dose of judgmentalism. Supporters of the distinctiveness view like to see themselves as brave outsiders who have the courage to know themselves and to choose to march to a different drummer, even though there may be, and often are, economic repercussions in the form of a reduced standard of living. These people like to portray their assimilationist opponents as sell-outs, cynics who have agreed to curtail their true natures in exchange for a "place at the table," the right to consume as much as they wish, and the chance to bask in the plaudits of straight society. For their part, those who hold the second view tend to see the supporters of gay exceptionalism as self-indulgent creatures who refuse to grow up. This resistance is seen in their opposition, or indifference, to gay marriage. In what may be a stereotype, the assimilationists cite the use of drugs in the first group as evidence of a flight from adult responsibility. So it happens that while neither side has the power to force the other to adopt its stance, each nonetheless seeks to mobilize social pressures to move the other in its direction.

Where, then, do I stand in this divide? My sense is that over many generations gay men and lesbians have built up a large store of cultural capital. This is manifested in works of fine art and literature, but also in the playful camp spirit displayed by many ordinary gay men and lesbians. In order to preserve this heritage we need gay archives, libraries, and museums. But ours is also a living tradition. Currently, I would estimate that there are at least twenty plays and musicals running in New York City that are strongly imbued with the gay sensibility.

That sensibility is real, and it must be preserved.


Sunday, May 22, 2011

Our three faiths--or more?

Nowadays, the expression “Judeo-Christian” has come under fire as glib and unhistorical. As someone who has dealt extensively with the origins and conflicts among the Abrahamic religions, I tend to agree with this criticism.

Yet a new book shows that the concept arose by way of an important American social experiment in tolerance. The book is Kevin M. Schultz, Tri-Faith America: How Catholics and Jews Held Postwar America to Its Protestant Promise (Oxford University Press; for my knowledge of the book and its findings I rely on a recent review by Adam Kirsch in Tablet).

As Schultz shows, an important and salutary change came about in the 1930s and 1940s, thanks in large measure to the concerted effort of the National Conference of Christians and Jews, a lobbying and educational group founded in 1927. In fact, the group sought not simply to create better understanding between Christians and Jews, but also to foster good will between Protestants and Catholics—groups which had long been at loggerheads in the United States. As Kirsch notes, “from any reasonable point of view, Catholics posed a much greater challenge to the hegemony of American Protestants than Jews ever could: At mid-century, the population was estimated to be two-thirds Protestant, one-quarter Catholic, and 3 percent Jewish. To many Protestants, moreover, Catholics were inherently unsuited to democracy, because of their obedience to the Church and their communal clannishness. Not until the election of John F. Kennedy in 1960 would this kind of hostility be wholly put to rest.”

Looking at the matter strictly from the viewpoint of self-interest, Protestants had a good deal to lose. It is therefore greatly to their credit that important Protestant leaders put the weight of the Establishment behind the “tri-faith” vision and against long-standing prejudice and bigotry. Again as Kirsch observes, “the NCCJ had its origins as a reaction to the rise of the Ku Klux Klan, with its anti-Catholic and anti-Semitic hatreds, and took new urgency from the rise of Nazism in 1930s Europe. Its most popular programs were the so-called Tolerance Trios, in which a priest, minister, and rabbi would tour the country conducting public discussions.”

An early obstacle to the effort was President Franklin Roosevelt, who once opined that the United States was “a Protestant country, and the Catholics and Jews are here under sufferance.” With the beginning of World War II he changed his mind--or at least his policies--encouraging Tolerance Trios to minister to the troops. Some of the efforts were a bit kitschy, as in a rally where a Jewish melody was followed by a performance of “Onward, Christian Soldiers.” Brotherhood was the keynote, but subsequent experience showed that this goal was harder to achieve than many expected.

The trend received support from a prominent Jewish intellectual, Will Herberg, whose 1955 book "Protestant-Catholic-Jew" “affirmed the arrival of Tri-Faith America,” according to Schultz. Yet Herberg was no mere cheerleader, for he warned of the shallowness of a “religiousness without religion … a way of sociability or ‘belonging’ rather than a way of reorienting life to God.”

Kirsch, the reviewer, makes an important final point. “But the real test for the tri-faith model, which Schultz barely addresses in his book, will be the assimilation of new religious groups into the “Judeo-Christian” model—above all, Muslims. From Ground Zero to Orange County, the last year witnessed a series of revolting demonstrations of anti-Muslim prejudice in the United States, reminiscent of the kind of bigotry that Jews and Catholics once faced. Tri-Faith America shows that our religious diversity has been a process of mutual accommodation: As “foreign” religions become less dogmatic and distinctive, Americans stop seeing them as alien or threatening. With luck, the same benevolent process will allow us, a few generations from now, to talk blithely of America’s Judeo-Christian-Islamic heritage.”

As things are going at present, this forecast strikes me as Pollyannaish. But maybe not: the history of the tri-faith concept is essentially the story of just such an unlikely hope being realized. Long before the emergence of the modern expression "Abrahamic religions," a major model appeared that recognized the affinity of Judaism, Christianity, and Islam. This is the trope of the Three Rings. (I quote from a section of my Abrahamicalia site.)

The Three Rings concept appears in several medieval texts, notably Giovanni Boccaccio’s Decameron (I, 3), a book written ca. 1350. The gist of Boccaccio's Tale of the Three Rings is as follows. The great Muslim leader Saladin summoned Melchizedek, a wealthy Jew, to his palace. The sultan posed an alarming question: “Which of the three great religions is the truly authentic one--Judaism, Christianity, or Islam?" Melchizedek paused before answering. “That is an excellent question, my lord. I can best explain my views on the subject with the following story. Once there was a wealthy man whose most cherished possession was a precious ring. He bequeathed this ring to one of his sons, and with this talisman the latter took his place as the head of the family. Succeeding generations followed this tradition, with the principal heir always inheriting the prized ring from his father. And yet the ring finally came into the possession of a man who had three sons, each the equal of the others in obedience, virtue, and worthiness. Unwilling to favor one son over the others, the father had a jeweler make two perfect copies of the valued ring, and he bequeathed a ring to each son. Following the father's death, each son laid claim to the deceased man's title and estate, proffering his ring as proof. Alas, a careful inspection of the three rings failed to reveal which was the authentic one, so the three sons' claims remained unresolved.”

The same is true, Melchizedek suggested, with the three great religions, Judaism, Christianity, and Islam. The adherents of each firmly believe themselves to be the sole legitimate heirs of God's truth. The question of which one is right must remain in abeyance.

Note that the original ring was not a "magic ring" that could confer invisibility or grant wishes, but a kind of title to the family fortune. It is the symbolism of the ring (and rings) that is important in this context.

A remarkable feature of the parable is that it assumes that the three rival faiths are equal in dignity, in accordance with the identical appearance of the rings. As a rule, adherents of each religion recognize the kinship only grudgingly, serving at best as a prelude to denigrating their rivals’ case. Over the centuries, Jews have tended to regard Christianity (and later Islam) as usurpers. Christians have remained confident that their own faith superseded its Judaic predecessor, while regarding Islam as a heretical aberration. For their part, Muslims believed in a dual supersessionism: since they had become hopelessly corrupted with the passage of time, both Judaism and Christianity could rank only as inadequate approximations of the true faith.

In modern times the ring parable came to enjoy new life during the Enlightenment. The German dramatist Gotthold Ephraim Lessing was responsible for bringing it back. Lessing’s play “Nathan the Wise” (Nathan der Weise; 1779) is a plea for religious tolerance. Set in Jerusalem during the Third Crusade, the play describes how the wise Jewish merchant Nathan, the enlightened sultan Saladin, and a certain Templar Knight seek to bridge the chasms separating Judaism, Christianity, and Islam.

The play’s centerpiece is the ring parable: Nathan volunteers it when Saladin challenges him to say which religion is true.

Initially the German writer's presentation follows Boccaccio's story line. However, according to Lessing the original ring had a secret power to make its wearer beloved of God and men. The father with the three obedient sons duly had two copies made, giving each son a ring. After the brothers quarreled over who owned the true ring, a learned judge admonished them that there was no way to know. In fact, all three rings may be fakes, the real one having vanished long ago. If that was so, none of the existing rings was imbued with the secret power of winning the favor of God and men. But there was no reason for despair. The judge advised that, even granting that one's ring was a fake, each son could live in such an exemplary fashion that it seemed that the ring's power was working. Undoubtedly, Lessing, a religious skeptic, was putting his own spin on the story. He hints that the lost archetypal ring was the emblem of the true religion. But that primordial faith is gone, so we must make do with what we have.

Be that as it may, whether we adopt Boccaccio's version (real rings) or Lessing's version (fake rings), the lesson of the parable is the same: the similarity of the three rings symbolizes the kinship of Judaism, Christianity, and Islam. Therein lies a significant problem, though, for despite all their commonalities, the three religions show substantial, even glaring differences. They are not identical in the way that the sameness of the rings implies. Another drawback is that, as inert physical objects, the rings must always remain the same; by contrast, all living religions change and evolve.

There is yet another issue, which goes to the scope of the investigation. Assuming that such an inquiry can be meaningfully conducted, the question "which religion is true?" calls for a much broader approach. One would have to include the claims of Buddhism, Hinduism, Jainism, Daoism, animism, and others. That means adopting the perspective of the modern discipline of comparative religion.

The matter is indeed complex.


Saturday, May 21, 2011

Rapture fantasies

Today, the 21st, is the day the Rapture is supposed to take place. Preliminary reports from Australia (one day ahead of us) are negative, but one never knows!

In my studies of medieval theology, I never came across the idea of the Rapture in the current sense, though it has taken its place in recent decades as part of Dispensationalist apocalypticism, which does have a good medieval pedigree. The Scriptural basis is 1 Thessalonians 4:15-17.

One wag, of the liberal persuasion, has expressed the wish that it does come true, because with all the evil Republicans gone (or pretty much all), we will be able to advance speedily to universal medical care and an end to foreign wars. I suppose, but then we would have to listen to endless mandatory sermons on civic virtue by such self-righteous commentators as Lawrence O'Donnell and Donny Deutsch.

At one time eliminationism--the wish that one's opponents would simply disappear--was confined to the far Right. Now it is occurring among liberals.

Hasn't anyone heard of the advantages of persuasion?

UPDATE. Maybe the Rapture did occur, and we all just think that everything has continued as before. But actually we're in some sort of hologram, a little like the situation in the film The Matrix. Of course, those lucky enough to experience the Rapture know that it did occur. When they contemplate the rest of us, they are pleased to affirm the truism of Jean-Paul Sartre: "Hell is other people."


Thursday, May 19, 2011

Literary celebrity

The cultural historian Daniel Boorstin remarked that “[t]he celebrity is a person who is known for his well-knownness.” A case in point is the current darling of NYC’s bon ton crowd, Jon-Jon Goulian, who has just published a memoir entitled “The Man in the Gray Flannel Skirt.” According to the thumbnail sketch in the New York Times, Goulian is “a former baby sitter, law clerk, freelance personal trainer, and assistant at the New York Review of Books.” At the age of 42, he presents himself as “an androgynous man-child with hermit tendencies.”

So far, so trendy. Yet to become a literary celebrity, be it Ernest Hemingway, Norman Mailer, or Camille Paglia, one must be something more: a real writer--which, even though he has perpetrated a book, Mr. Goulian clearly is not.

In fact, the intersection of celebrity and literature was the subject of a stimulating presentation I attended last night at Book Culture, a Morningside Heights establishment that ranks as one of my favorite haunts of this kind. Jonathan Goldman spoke about his new book “Modernism Is the Literature of Celebrity.” In the book (which I have not read), Mr. Goldman writes about a number of figures; in the presentation he dealt mainly with Oscar Wilde, John Dos Passos, and Ernest Hemingway. [See the comment he kindly posted, infra.]

While the vibes at the event were most pleasant, I could not help but think that focusing on such English and American figures revealed parameters that were too narrow, too reflective of Anglophone chauvinism.

In fact Wilde was known for his French connection. He was reputed to be carrying a copy of Joris-Karl Huysmans’ “À rebours,” that bible of decadentism, under his arm when he was taken away from his last trial to be consigned to Reading Gaol. He had written “Salomé” in French, and was to die in a hotel on the Left Bank in Paris.

It would seem then that the immediate antecedents of Anglo-American literary celebrity lie in France. To this model I will return presently.

However, the first literary celebrity was undoubtedly Pietro Aretino (1492-1556), an Italian poet, playwright, and satirist who, from the safety of his perch in Venice, wielded immense influence on contemporary art and politics. To him is rightly ascribed the invention of modern literary pornography.

When Hanno the elephant, pet of Pope Leo X, died in 1516, Aretino penned a satirical pamphlet entitled "The Last Will and Testament of the Elephant Hanno." The fictitious will cleverly mocked the leading political and religious figures of Rome at the time, including Pope Leo X himself. In his career as a scandal monger, Aretino took up the tradition of the Roman pasquinade, as seen in the popular rhymes placed on the famous figure of the Pasquino.

Apart from both sacred and profane texts—a satire of high-flown Renaissance neo-Platonic dialogues is set in a brothel—and comedies such as La cortigiana and La talanta, Aretino is remembered above all for his letters, full of literary flattery that could turn to blackmail. After they had circulated widely in manuscript, he collected the letters, issuing them in several successive printed volumes to burnish his image. In so doing he won notoriety and many enemies--but such is the price of fame.

Aretino was a close friend of the great painter Titian, who painted his portrait at least three times. He is said to have died of suffocation from laughing too much.

During the eighteenth century, Voltaire is the outstanding example of literary celebrity. His case is, however, bound up with the larger issue of the self-promotion of the major figures of the Enlightenment, especially Denis Diderot and Jean-Jacques Rousseau.

However, the truly important exemplar for our own times, reflecting as he does modern techniques of journalism and publicity, was surely the poet Arthur Rimbaud (1854-1891). He produced his best work while in his teens, and gave up writing altogether at the age of 21. He then went to East Africa, where he became a gun runner.

The foundations of Rimbaud’s fame were laid by his connection with the eminent symbolist poet Paul Verlaine. Following the advice of a friend, the provincial Rimbaud sent Verlaine two letters containing several of his poems, including the hypnotic "Le Dormeur du Val" (The Sleeper in the Valley), in which certain facets of Nature are depicted and called upon to comfort an apparently sleeping soldier. Verlaine, who was intrigued by Rimbaud, sent a reply that stated, "Come, dear great soul. We await you; we desire you," along with a one-way ticket to Paris. In late September 1871 Rimbaud arrived, residing briefly in the older poet’s apartment. Verlaine, who was married to the seventeen-year-old and pregnant Mathilde Mauté, had recently left his job and taken up drinking. In later published recollections of his first sight of Rimbaud, Verlaine described him at the age of seventeen as having "the real head of a child, chubby and fresh, on a big, bony rather clumsy body of a still-growing adolescent, and whose voice, with a very strong Ardennes accent, that was almost a dialect, had highs and lows as if it were breaking."

Rimbaud and Verlaine began a torrid affair. While Verlaine had probably engaged in prior homosexual experiences, it remains uncertain whether the relationship was Rimbaud's first. During their time together they led a wild, Bohemian life spiced by absinthe and hashish, which they seemed to believe would enhance their poetic powers. Rimbaud’s youth and outrageous behavior scandalized the Parisian literary world.

The stormy relationship between Rimbaud and Verlaine eventually brought them to London in September 1872. During this time, Verlaine abandoned his wife and infant son (both of whom he had abused in his alcoholic rages). Rimbaud and Verlaine lived in appalling poverty in Bloomsbury and Camden Town, scraping a living together mostly from teaching, supplemented by an allowance from Verlaine's mother. Rimbaud spent his days in the Reading Room of the British Museum where "heating, lighting, pens and ink were free." All the while, the relationship between the two poets grew increasingly bitter.

By late June 1873, Verlaine had grown frustrated with the relationship and returned to Paris, where he quickly began to mourn Rimbaud's absence. On 8 July, he telegraphed Rimbaud, instructing him to come to the Hotel Liège in Brussels; Rimbaud complied at once. On the morning of July 10, Verlaine bought a revolver and ammunition. That afternoon, "in a drunken rage," Verlaine fired two shots at Rimbaud, one of them wounding the 18-year-old in the left wrist.

Dismissing the wound as superficial, Rimbaud did not initially seek to file charges against Verlaine, but Verlaine’s erratic behavior afterwards caused him to change his mind. This was a mistake, for after his arrest for attempted murder, Verlaine was subjected to a humiliating medico-legal examination. He was also interrogated with regard to both his intimate correspondence with Rimbaud and his wife's accusations about the nature of his relationship with Rimbaud. Verlaine was sentenced to two years in prison.

After traveling about for several years, Rimbaud set up shop in East Africa as a dealer in guns. As it happened, absence only enhanced the fame of the prodigy, and a kind of cottage industry grew up in France to promote what has been termed the “myth of Rimbaud.” He still fascinates, as this writer can personally attest.

This example shows that literary celebrity can involve scandal and notoriety. Yet it must also include literary quality, and that is a feature that Rimbaud and Verlaine share in abundance.

UPDATE. It may be that there are important dimensions of celebrity that are distinctly modern, going back a little more than one hundred and twenty years. These dimensions have to do with photography. The first episode has to do with unwanted publicity, yielding a kind of involuntary celebrity.

Between 1888 and 1890, Louis Brandeis (later a Justice in the US Supreme Court) and his law partner, Samuel Warren, wrote three scholarly articles published in the Harvard Law Review. The third, "The Right to Privacy," was the most important, with legal scholar Roscoe Pound saying it accomplished "nothing less than adding a chapter to our law."

Brandeis and Warren discussed "snapshot photography," a recent innovation in journalism that allowed newspapers to publish photographs and statements of individuals without obtaining their consent. They argued that private individuals were being continually injured and that the practice weakened the "moral standards of society as a whole." They wrote:

"That the individual shall have full protection in person and in property is a principle as old as the common law; but it has been found necessary from time to time to define anew the exact nature and extent of such protection. Political, social, and economic changes entail the recognition of new rights, and the common law, in its eternal youth, grows to meet the demands of society.

"The press is overstepping in every direction the obvious bounds of propriety and of decency. Gossip is no longer the resource of the idle and of the vicious, but has become a trade, which is pursued with industry as well as effrontery. To satisfy a prurient taste the details of sexual relations are spread broadcast in the columns of the daily papers....The intensity and complexity of life, attendant upon advancing civilization, have rendered necessary some retreat from the world, and man, under the refining influence of culture, has become more sensitive to publicity, so that solitude and privacy have become more essential to the individual; but modern enterprise and invention have, through invasions upon his privacy, subjected him to mental pain and distress, far greater than could be inflicted by mere bodily injury."

Legal historian Wayne McIntosh holds that "the privacy tort of Brandeis and Warren set the nation on a legal trajectory of such profound magnitude that it finally transcended its humble beginnings." State courts and legislatures quickly drew on Brandeis and Warren's work. In 1905 the Georgia Supreme Court recognized a right to privacy in a case involving photographs. By 1909, California, New York, Pennsylvania, Virginia, and Utah had passed statutes establishing the right.

But what about individuals who do not care about privacy, but instead insist on the right to celebrity? To their aid came Edward Bernays, who is generally recognized as the inventor of the profession of public relations. Bernays pioneered the use of the press release as a device for achieving publicity.

Through modern technology, it seems, prominent individuals could "sculpt" their reputations, though not always successfully. Brandeis and Warren notwithstanding, Mr. Strauss-Kahn has recently been subjected to photographic magnification of his notoriety.

Others seek such fame--as long as through PR they can assert control over the nature of the images that secure it.


Monday, May 16, 2011

Abrahamic violence revisited: a new book

The Scriptures of the Abrahamic Triad--the Hebrew Bible, the New Testament, and the Qur’an--have enormous intrinsic interest: historical, anthropological, literary, and ritualistic. Yet these are not the only reasons for studying them, for through the centuries they have functioned in various ways as guides for human behavior. For better and (oftentimes) for worse, this activism has continued down to the present.

At my related site, I have sought to explore the intersection of the intrinsic and extrinsic dimensions of the Abrahamic Scriptures. One of the most disturbing aspects of the path that leads from texts to behavior resides in the monotheistic tradition of violence.

This issue has been addressed anew in a useful book by Robert Eisen: The Peace and Violence of Judaism: From the Bible to Modern Judaism (Oxford University Press, 2011). Right away, Mr. Eisen deals with one of the thorniest issues in the Hebrew Bible, the genocide of the Canaanites and Amalekites as a function of God's election (“chosenness”) of the ancient Israelites. He follows this brief but horrifying recitation with a survey of efforts to mitigate or counter the perception of divine authorization for these efforts at elimination of entire peoples. This reflects his principle of “double reading,” whereby he finds evidence in the Hebrew Scriptures that points in both directions.

How then can one seek to excuse acts of genocide? One approach is to follow the Minimalists and say that the episodes of the acquisition of land depicted in Joshua and the other books did not actually occur. But if they did not occur, why would any people want to perpetuate such a horrible memory? Another approach is to contextualize: we must take the reports in the context of the harsh Near Eastern environment in which the ancient Israelites actually lived. Surely, though, a standard of morality requires that some acts are simply beyond the pale, and not to be explained by the excuse that "everybody was doing it." And then there is the response that the ancient Israelites were just exercising tit for tat--a response in reaction to what had been done to them. Yet this is the excuse of many bullies: I was bullied, so now I am going to bully someone else. To be blunt, much of this commentary seems to amount to rationalization, pure and simple.

After the initial discussion of the Hebrew Bible, Mr. Eisen turns to the issue in the early rabbis, the Kabbala, and modern Zionism, both secular and religious. It is this last phase, one that is occurring right now, that is most troubling with regard to the tendency to regard the Scriptures as a guide for action. The situation in Israel is complex, with many points of view expressed; I can only suggest that readers turn to Mr. Eisen, who seems to know this ground well.

To be sure, the book deals only with the Judaic tradition, but informed readers can easily apply the insights to the two successor faiths of Christianity and Islam, both of them just as problematic in this regard as the Judaic tradition.

This volume is carefully researched, clearly written, and well organized. In addition to the coverage of the main theme, there are a number of valuable collateral observations by Mr. Eisen. I recommend it wholeheartedly as a counterbalance to my own, more somber account.


Thursday, May 12, 2011

Annals of the Undead: Political Pilgrimages

In 1981 Paul Hollander published an important study, Political Pilgrims: Western Intellectuals in Search of the Good Society, recounting in considerable detail the long history of self-delusion that has characterized the approach of left-wing observers to totalitarian regimes. As this book is long, and now somewhat out of date, I will note some highlights here.

Lincoln Steffens was a muckraking journalist who made his reputation exposing the seamy side of American life. He was less critical about the USSR. Returning from a trip there in 1921, he made his famous remark about the new Soviet regime: "I have been over into the future, and it works." A decade later his enthusiasm for communism had soured, as seen in his memoirs.

Yet many others were prepared to write a blank check for the “Soviet experiment,” prominent among them the “useful idiots” of Lenin’s phrase, whose enthusiasm caused them to indulge in denial and self-censorship. The most notorious was Walter Duranty, long honored by his employer The New York Times, who covered up the evidence for the mass starvation Stalin imposed on Ukraine.

After World War II the tide finally turned. Yet the quest by political naifs for an ideal society did not cease; it merely changed its object. Mao’s China became the new cynosure, a phase that lasted until Nixon’s visit there in 1972. A few clung to Mao’s ally in Europe, Albania.

But there remained Cuba. Although it has long lost its luster, there are still some who defend that repressive, sclerotic regime, even though Fidel Castro has admitted that its economic model doesn’t work.

As Cuba’s shine faded, there came Nicaragua and, briefly, Venezuela. After the fall of the Shah in 1979, some looked to Iran, a habit that quickly withered as a puritanical religious regime took over. Even North Korea enjoyed a brief vogue.

These days, at last, the quest seems to have weakened, though there are a few brave, or foolhardy, souls who have praised Qaddafi’s Green Revolution. Even when muted, it seems, political pilgrimage goes on.


Tuesday, May 10, 2011

A problem in ancient Roman history

I have been retired from my teaching job at Hunter College (CUNY) for five years now. This concluding phase of my earthly existence has offered me the privilege of reexamining certain problems or issues that emerged in my lectures, but where time constraints prevented me from exploring them fully.

Some of these issues remain, to one degree or another, enigmatic. Such is the case with the instance I am discussing herein, which may strike some readers as arcane and scholastic, but which has, nonetheless, a contemporary resonance.

The problem stems from ancient Roman history--not the history of the Roman Empire established by Augustus Caesar in 27 BCE, but of the previous epoch. According to tradition Rome was founded in 753 BCE, meaning that this vast period lasted for more than seven centuries. (Modern archaeology has suggested that the traditional date is not far from the truth.) At first, the political system, founded by Romulus, was monarchical. Once the last king Tarquinius Superbus was expelled (about 500 BCE), the “res publica” (or public thing) came into being.

In many respects the new institutions reflected the peculiar demography (or at least the conceptual demography) of the emerging city-state. For the Romans were not one people, but two. The two peoples were the Patricians and the Plebeians. Membership in these groups was hereditary. In principle intermarriage was forbidden: Patrician parents had Patrician children, and Plebeian parents had Plebeian children.

To the Patricians were reserved certain religious and political posts. Most importantly, only the Patricians could serve as “patres conscripti,” members of the Roman Senate, which stood out as a visible symbol of their hegemony.

This is an extraordinary binarism, whose full implications have not, as far as I can see, been fully explored by historians of ancient Rome. (See, however, Gary Forsythe, A Critical History of Early Rome, Berkeley: University of California Press, 2005; and Kurt A. Raaflaub, Social Struggles in Archaic Rome: New Perspectives on the Conflict of the Orders, new ed., Oxford: Blackwell, 2005.)

As Livy and other ancient historians and chroniclers indicate, the social dichotomy had grave consequences. There were several occasions on which the Plebeians, goaded by debt and other encumbrances, became openly rebellious. In the tactic known as the Secessio Plebis they retired to the Aventine Hill until their demands, or at least some of them, were met. One solution was to provide a new pair of officers, the tribunes, to offset the original executives, the Consuls, who tended to be Patricians.

As far as we can tell there were no identifiable physical characteristics that would serve to distinguish one’s status in this dual system. One theory is that the Patricians descended from the primordial inhabitants of Rome, while the Plebeians stemmed from immigrants from nearby territories. At all events, there was nothing remotely comparable to a racial distinction. Both groups were proficient in the Latin language.

How can one explain this extraordinary duality, the binarism that fostered the cohabitation of two distinct population entities in the relatively restricted territory of the early Roman Republic?

A simple solution presents itself: the Patricians were the rich, the Plebeians were the poor. Still, expressed in such stark terms this class analysis does not work. To be sure, many Patricians were well off. Yet others, residing in the countryside, were ordinary farmers of modest means. As regards the Plebeians, most were relatively poor, but there were rich Plebeians as well. It was the latter who probably felt most keenly the stigma of disenfranchisement.

We must seek answers elsewhere. The only useful model I have come up with stems from the discipline of anthropology, which has detected many tribal societies with a dual organization. For example, the inhabitants of a village might be divided into two distinct groups, say, the clan of the eagles and the clan of the tigers. Sometimes the territory of the village would be divided into two sections, north and south or west and east. In other cases, the individuals would dwell side by side. This social organization is sometimes termed a moiety.

In this perspective the Roman Republic would represent an archaic survival of a pattern that had otherwise disappeared in the Mediterranean world.

It is interesting that other aspects of Roman Republican life were pervaded by dualism. According to tradition the city had been founded by the twins Romulus and Remus. There were originally two consuls and two tribunes of the people. Geographically, the city of Rome was divided into two parts by the Tiber River. And so forth.

I conclude by noting some instances of Roman influence in the American foundation. Since the Founders were trained in Latin it was natural that they would appeal to Roman precedent. So we have the Senate, which meets in the Capitol, where the architecture is distinctively Roman. Many towns founded after the Revolution have pertinent names, such as Rome, NY; Syracuse, NY; and Naples, FL. (There are similar influences in the French Revolution.)

It is true that we had no direct equivalent of the two dominant groups of ancient Rome. During the Revolution, however, many colonists had remained loyal to the British crown. After 1783 many of these people had to emigrate. Perhaps the Loyalists were our equivalent of the Plebeians. If so, of course, they were less successful, as they repaired not to the Aventine, but to Canada, Great Britain, and the West Indies, most of them never to return.

Speaking of Canada, there is another comparison that may be relevant, and that is the concept of the Two Nations. As is well known, Canada has escaped, at least for the present, the fate of dividing into two separate countries. Yet bilingualism tends to create tensions that may lead to the threat of national dissolution. For a number of years, Belgium, divided between Flemings and Walloons, has teetered on the verge of breaking up. In Spain, there is continuing tension between the Catalans and the Castilian speakers, though for the present the Spanish state seems to be holding firm. In both cases, the larger patterns of merger with the European Community provide a helpful counterpart.

In the United States, too, there are similar possibilities for discord. I am referring to the growing tendency to recognize the Spanish language as on a par with English. To be sure, most children of Spanish-speaking immigrants to the US do learn English. Yet combined with economic grievances, the presumed distinction between Anglos and Hispanics may yet hold the possibility of engendering national mischief.

NOTE. Towards the end of the nineteenth century some leading Viennese artists, and the critics who supported them, sought to separate themselves from the stodgy official art establishment. They called their new trend "Sezession," a clear allusion to the withdrawal of the Roman Plebs to its redoubt on the Aventine.

Emerging on the site of the Roman Vindobona, Vienna happily preserved a number of classical reminiscences. In a more tragic sense, the Austro-Hungarian empire was a multicultural affair, in which the dominant German-speaking minority (not unlike the Latin speakers of the Roman Empire) sought with increasing difficulty to impose its primacy over the subject peoples.


Saturday, May 07, 2011

Is Osama bin Laden having the last laugh?

Now that Osama bin Laden is dead, who won? The answer, most would say, is that we did. Not only did we slay the charismatic leader of Al Qaeda, we recovered a big cache of documents that will help us to find and destroy his henchmen.

Not so fast, though, because in a key sense it is Osama bin Laden who may have won. So at least argues Daveed Gartenstein-Ross [G-R], a counterterrorism expert who specializes in al-Qaeda. (Until this morning I had never heard of Gartenstein-Ross; I owe my knowledge of his views to a characteristically brilliant column by Ezra Klein in the Washington Post.)

According to G-R, Bin Laden’s real goal was to bankrupt the United States. Looking at the gloomy economic news and the gridlock in Washington, it is hard to deny that that colossal disaster has actually come upon us. It is well known that Bin Laden cut his teeth in Afghanistan when it was seeking to escape the Soviet yoke. The US organized that campaign. (Yet we did not enlist Osama bin Laden in our ranks, as Michael Moore misleadingly maintains; he was acting independently.) The Reagan strategy had been to bankrupt the USSR, and the debacle in Afghanistan made a major contribution to achieving that goal.

As Klein remarks, “superpowers fall because their economies crumble, not because they’re beaten on the battlefield. [Moreover,] superpowers are so allergic to losing that they’ll bankrupt themselves trying to conquer a mass of rocks and sand. This was bin Laden’s plan for the United States, too.”

In an article in the magazine Foreign Policy, G-R argued “[h]e has compared the United States to the Soviet Union on numerous occasions — and these comparisons have been explicitly economic. . . . For example, in October 2004 bin Laden said that just as the Arab fighters and Afghan mujaheddin had destroyed Russia economically, al Qaeda was now doing the same to the United States, ‘continuing this policy in bleeding America to the point of bankruptcy.’ ”

As Klein pertinently notes, “Nobel laureate Joseph Stiglitz estimates that the price tag on the Iraq War alone will surpass $3 trillion. Afghanistan likely amounts to another trillion or two. Add in the build-up in homeland security spending since 9/11 and you’re looking at another trillion. And don’t forget the indirect costs of all this turmoil: The Federal Reserve, worried about a fear-induced recession, slashed interest rates after the attack on the World Trade Center, and then kept them low to combat skyrocketing oil prices, a byproduct of the war in Iraq. That decade of loose monetary policy may well have contributed to the credit bubble that crashed the economy in 2007 and 2008.”

Just a minute, though. Did Bin Laden actually foresee these developments? America could not have reached this parlous state without the collaboration of an unindicted co-conspirator, George W. Bush. It was Bush who lowered taxes while initiating an unnecessary war in Iraq, supposedly as part of a grand, post-9/11 strategy of Middle Eastern transformation. In this decision, the Israel Lobby, surely in no sense an ally of Bin Laden, played a major part.

Have we learned anything from all this? Apparently not, to judge by our bungling intervention in the Libya quagmire.


Monday, May 02, 2011

Origins of a famous phrase

Most of the turgid writings attributed to Karl Marx cannot be read with any literary pleasure. The one sterling exception is the Communist Manifesto, first published in German in 1848 under the joint authorship of Karl Marx and Friedrich Engels.

English-speaking readers have often been struck by one phrase in the first part of the manifesto: “All that is solid melts into air.” Ostensibly, these words capture the duality of capitalism: a seeming permanence that is nonetheless destined for total dissolution.

It will come as a surprise to learn that that ringing phrase was not due to either Marx or Engels. It was introduced by the English lawyer Samuel Moore in his somewhat free English version of 1888. Here is the original German text: “Alles Ständische und Stehende verdampft, alles Heilige wird entweiht, und die Menschen sind endlich gezwungen, ihre Lebensstellung, ihre gegenseitigen Beziehungen mit nüchternen Augen anzusehen.” Evidently Moore was paraphrasing the verb “verdampft” in the first clause; it means “evaporates.”

The reason for the effectiveness of the Moore version is that it is a riff on Prospero’s speech in Shakespeare’s Tempest (Act IV, scene 1):

 “Our revels now are ended. These our actors,
As I foretold you, were all spirits, and
Are melted into air, into thin air:
And, like the baseless fabric of this vision,
The cloud-capp’d towers, the gorgeous palaces,
The solemn temples, the great globe itself,
Yea, all which it inherit, shall dissolve,
And, like this insubstantial pageant faded,
Leave not a rack behind. We are such stuff
As dreams are made on; and our little life
Is rounded with a sleep.”

The allusion does not seem entirely appropriate, for Prospero is describing the illusions conjured up by Shakespeare’s dramatic art, which have never been real. Yet for Marx and Engels the oppressions of capitalism are all too hideously real.

With its three authors--Engels wrote the draft, Marx edited it, and Moore gave it its enduring English dress--the Communist Manifesto is a powerful piece of rhetoric. Yet it may be less successful as analysis than the tedious texts Marx wrote all by himself.

NOTE. Here is the whole paragraph in the familiar Moore version: “The bourgeoisie cannot exist without constantly revolutionising the instruments of production, and thereby the relations of production, and with them the whole relations of society. Conservation of the old modes of production in unaltered form, was, on the contrary, the first condition of existence for all earlier industrial classes. Constant revolutionising of production, uninterrupted disturbance of all social conditions, everlasting uncertainty and agitation distinguish the bourgeois epoch from all earlier ones. All fixed, fast-frozen relations, with their train of ancient and venerable prejudices and opinions, are swept away, all new-formed ones become antiquated before they can ossify. All that is solid melts into air, all that is holy is profaned, and man is at last compelled to face with sober senses his real conditions of life, and his relations with his kind.”