Wednesday, October 27, 2004


Conventional wisdom holds that the Hebrew Bible and the New Testament share a common position on homosexual conduct: they are against it. This condemnation is patent in certain “clobber texts,” such as Leviticus 18:22 and 20:13, and Romans 1:26-27.

Not so, say some gay Christians and Jews, who are revisionists in this matter. Rightly understood, they claim, the Bible does not condemn same-sex behavior as such. The venom that has disturbed those who would reconcile their orientation with their faith is not, so the revisionists claim, truly toxic.

This detoxification effort began a half century ago with a book by the Anglican canon D.S. Bailey entitled Homosexuality and the Western Christian Tradition (1955). Among other points he claimed that the Sodom story in Genesis 19 is not about homosexual rape. Other gay-friendly scholars have added to the stock of exclusions, so that it is now widely accepted in some quarters that the Bible has what amounts to a clean bill of health. Even if it does not actually permit homosexuality, the texts are problematic enough for us to conclude that the Bible doesn’t condemn the behavior. Needless to say, this program of reinterpretation finds little favor among traditional Christians and Jews, who stand by their conviction that “it means what it says.”

My own view is that while the revisions have achieved some paring down, a core of antihomosexual condemnation remains. Detoxification has only been partial. Still today, then, to be a Bible-based Christian or Jew, on the one hand, and a practicing gay man or lesbian, on the other, is an exercise in cognitive dissonance.

How is the detoxification achieved? There are three main gambits. 1) Subject the texts to a close philological reading. This technique yields the claim that the words that seem to refer to homosexual conduct did not have that meaning in ancient times; or that the syntax of the assertion has been misunderstood. 2) Assert that the condemnation refers to some special form of same-sex conduct, which does not exist today; or represents some distortion (such as unloving promiscuity). 3) Insist that the supposed clobber texts belong to a social world that has long since vanished. “Modern” Christians and Jews need pay no more attention to these outdated strictures than they do to dietary laws or the ban on mixed fabrics.

The last consideration sometimes takes a lapidary form: we are no more bound to eschew same-sex conduct than we are to avoid eating shellfish. I fear, though, that having sexual relations is not on the same plane as a trip to Red Lobster.

This linkage of prohibitions from the book of Leviticus reflects the idea that these prohibitions are random, just a kind of ragbag of disparate peeves. In fact they belong to several distinct categories: exclusion of impure species from one’s diet; avoidance of mixtures; and sexual transgressions—to name the three main ones. Since the monograph of the anthropologist Mary Douglas, Purity and Danger (1966), it has been recognized that the first category, the uneatables (including shellfish), depends upon a kind of ecological determinism. “Proper” sea creatures are those with fins and scales. Creatures living in the sea that do not have these features have crossed a boundary. They should be one thing; in fact they are another. The dietary prohibitions reflect an insistence on maintaining boundaries. The ban reflects a basic human propensity for placing things in neat categories, shunning the muzzy thinking that comes from eliding appropriate distinctions.

However, same-sex conduct between males is not the result of such a pattern of thinking, where one thing is confused with another. If anything, assuming that this is the only relevant standard, homosexual behavior should be approved, since like is consorting with like.

Clearly then other considerations are at play. And in fact, the dietary rules belong to the first (the so-called “P part”) of Leviticus, while the ban on gay sex stems from the Holiness Code. These two textual collections have different origins. In the Holiness Code, Leviticus 20:13 calls for the death of any man who lies with a male as one lies with a woman. This is part of a larger set of transgressions that merit capital punishment, most of them dealing with sex.

As far as I can see, the prohibition against male same-sex behavior is the only one that ranks as an abomination while at the same time meriting the death penalty. This is a far cry from eating shellfish, for which no penalty is specified. Comparing the two may give comfort to religious gays, but others will regard it as ludicrous and flippant.

Another seemingly telling parallel is with the ban on wearing fabrics made of two different materials in Leviticus 19. Scholars believe that this may be a metaphor for Hebrew mixing with gentiles. At all events, it has to do with mixing two different things—not two things of the same kind (two males). And no penalty is mentioned for such a fashion transgression.

Eating, clothing, and sex belong to different realms. The biblical writers treat them as such.

Closely examined, such parallels do nothing to efface the deplorable slur on male-homosexual behavior as both an abomination and a capital crime. When gay apologists seek to reinterpret such passages, they forfeit credibility—except among others committed to the total detoxification enterprise. Unable to convince the majority of exegetes, gay Christians and Jews who adopt such arguments segregate themselves into another minority, an interpretive community at odds with the mainstream. In the end, however, there cannot be two truths on these matters.

I turn to some broader remarks. Leviticus is of course one of the books of the Pentateuch. Since the middle of the 19th century it has been recognized that those texts are not unitary, but composite--an amalgam of four separate strands (known to scholars as J, E, D, and P).

Rereading the texts for the first time after many years, it struck me that there is a more fundamental division, with Genesis and Exodus forming one whole, the other three books another. Whatever one may think of the theology, Genesis and Exodus contain some of the most enthralling stories ever compiled. Who can resist the fascination of the stories of Noah and Abraham, of Moses leading the children of Israel in the Exodus? To be sure, an influential school of minimalist exegetes holds that all these stories are pure myth. But they are wonderful stories all the same.

When we cross into Leviticus (forming with Numbers and Deuteronomy the concluding triad of the Pentateuch), the atmosphere changes. We are immersed in a frightful miasma of intolerance; ethnic cleansing; preoccupation with blood, semen and other bodily fluids; and death for homosexuals and other deviants. While many passages are ostensibly concerned with maintaining ritual purity, the reality is that the reader is conducted through a slough of filth. Apparently James Frazer and William Robertson Smith had a similar reaction in their day.

No wonder modern believers tend to skip over these grim recitals. Understandably so. The Bible should only be honored selectively. That means, however, acknowledging that some parts of it are simply worthless and must be set aside. One should resist the temptation to claim that the entire Bible is fine: it’s just that parts of it have been “misunderstood.” In many cases the meaning is all too clear—and it is detestable.

Recently a new version of the Pentateuch by the literary critic Robert Alter has been garnering praise. The novel wordings of this effort reflect a contestable theory of translation derived from German-Jewish sources. In accordance with that theory, known to specialists as word translation, the lexemes (individual words) and syntax of the Hebrew must be imitated as exactly as possible. This produces results that are sometimes striking, but miss the central point. Translations should not render words but meanings. Moreover, as the title of his version “The Five Books of Moses” suggests, Alter is willfully ignorant of the advances of two centuries of Bible scholarship. His so-called “Commentary” is a miscellany of rabbinical embroidery. Alter throws these testimonia together in a haphazard way, so that not even a proper impression of the rabbinical tradition is conveyed.

At the opposite pole “The Jewish Study Bible,” edited by Berlin and Brettler (Oxford, 2003) is a wonderful resource, dealing honestly with the issues posed by the critical school. This volume should be the cornerstone of anyone’s biblical library.

Friday, October 22, 2004

Latin America held back by its stunted legacy

My first direct exposure to a Latin American country occurred when I visited Mexico in 1968. Naturally I was struck by the poverty. However, I met an American woman there who said that building factories was solving the problem—she had been the partial owner of one. Well, today factories proliferate around Mexico City, creating a tremendous amount of pollution, but the misery remains. Over the years the gap in the standard of living between North America and the lands south of the Rio Grande has widened, fostering vast streams of illegal immigration.

The years after World War II saw the recovery of the lands that had achieved economic advance, but had been devastated by the conflict. France, Germany, Italy, Japan, and the Netherlands are prime examples. This almost miraculous result was due in some measure to US aid, but primarily it came about because of the reservoirs of human capital available in those lands. There were educated people who knew how to make things work. Now these countries, those of the G7 group and the smaller ones associated with them, have achieved remarkably high and uniform levels of prosperity. Yet between these high-achieving countries and the rest of the world, so we were told forty years ago, lay a dichotomy likely to remain permanent. Except for Japan, the Third World was condemned to grinding poverty. However, this prediction has not proved to be the case—at least not uniformly. First, South Korea, Taiwan, Hong Kong, and Singapore moved forward to prosperity. Now mainland China and perhaps India are joining them.

So Third World status cannot provide the sole explanation for the developmental lag in Latin America.

Conventional wisdom offers a number of reasons for this underdevelopment, prominent among them selfish domination by the US. However, Cuba’s escape from the clutches of the colossus of the North did not liberate its economic energies. Later in this essay we will briefly revisit the reasons for the popularity of this pseudo-explanation.

Now comes a book with a startling new thesis. In their “Fabricantes de miseria” (Barcelona, 1998) Plinio Apuleyo Mendoza, Carlos Alberto Montaner, and Alvaro Vargas Llosa suggest that the Hispanic heritage itself may be at fault. They point out that of all the types of society exported from Western Europe only that stemming from Spain has left its daughter societies in a seemingly permanent state of misery. That is, the US, Canada (including Quebec), Australia and New Zealand all have a high standard of living. Most of these countries have substantial bodies of immigrants from other European lands, including Ireland, Germany, the Netherlands and Italy. All have prospered. Sub-Saharan Africa, colonized by the British, French, and Portuguese, may seem an exception. Actually it is not, for most colonies in Africa were simply exploited by the imperial powers, not settled. The one that was settled—and where the settlers have mostly remained—South Africa, is now doing reasonably well.

Not so the former Spanish possessions. In earlier times, there was a tendency to dismiss this approach as “Spain-bashing,” which took the form of the so-called Black Legend. The time has come to ask, though, whether this legend was really so legendary.

A possible intermediate case, lusophone Brazil, will not be treated here. In terms of development Brazil occupies a gray area. Sometimes it seems poised to assume First World status; sometimes it falls disappointingly behind. If Brazil were included, one should address the issue of the cultural deficit of Ibero-America. This enlargement would not substantially alter the argument offered here.

A revealing parallel is with the former Spanish colony of the Philippines. While its Asian neighbors to the north and west forge ahead, that country suffers from Latin American-type misery.

What then are the factors that make up the bill of particulars of the Stunted Legacy bequeathed by Spain?

1. The idea of limpieza de sangre—purity of blood. This criterion emerged in the state of cultural impoverishment that ensued after the expulsion of the Jews and Moors from the peninsula (1492ff.). Converts were allowed to remain. However, their descendants were regarded as racially tainted, their blood impure (no limpia).
2. The “culture of poverty” which emerged in the Asturias, the hardscrabble lands in the north of the peninsula that were the cradle of the Christian Spain of the Reconquista.
3. The failure of Spain to develop a native merchant class. Italians and Flemings, who excelled in banking and commerce, handled much of the trade of the peninsula. The fact that for long periods parts of Italy and the Low Countries were Spanish possessions facilitated this takeover.
4. The Inquisition and the Counterreformation. These developments affected all of Catholic Europe, but nowhere so balefully as in Spain. For that reason we speak of a distinctive Spanish Inquisition, an institution fully active in the New World.
5. Held back by these factors, Spanish intellectuals never developed their own substantial version of the Enlightenment (even though there was some encouragement under the Bourbon kings in the 18th century). There is no Spanish Voltaire, no Spanish Swift, and no Spanish Kant. Indeed Spanish literature does not seem to form an organic whole. It has only two high points—the Siglo de Oro (of the 16th and 17th centuries) and the modernism of Unamuno, Lorca and company. No wonder that many Latin American intellectuals prefer to do their reading in French and English.
6. The preceding tendencies all contributed to the idea that Spain was “not part of Europe.” Europe ended at the Pyrenees. This separation was affirmed in the authoritarian Spain of the caudillo Francisco Franco. By contrast, one of the accomplishments of today’s Spain is finally to join Europe. Yet while Spain languished outside of Europe, it could not bequeath European civilization to its daughter societies. It could only devastate what many already had—advanced Pre-Columbian civilizations.
7. In her transatlantic possessions Spain imposed restrictive trade policies, which favored the mother country and kept down interregional exchange. An elite corps of Peninsulares tightly controlled everything. Bureaucrats and nobles came out from Spain, insulating themselves from those they governed. Despite the advantages of climate, to this day Mexico has trouble developing viticulture. In colonial times all the wine had to be imported from Spain. The continuing problem of the stranglehold of dirigisme, centralized government authority, has been well highlighted by the Peruvian economist Hernando de Soto.
8. An exclusionist domination of the criollos (whites born in the Americas) followed the expulsion of the peninsulares in the third decade of the 19th century. Thus only the very topmost smidgen of the white upper crust was scraped off. In a result very different from that in the US, the mass of the people were in no way empowered. They were left in a series of castes, with mestizos (those of mixed blood) generally at the top, then various groups of Amerindians and Africans, who were left to contend for places at the bottom, their numbers notwithstanding.

The above is a formidable indictment. No doubt it will be faulted as politically incorrect. But in view of the obstinacy of the problem of Latin American underdevelopment, no stone must be left unturned.

In conclusion, we turn to some competing theories that have long held the field.

1. The thesis that Third World countries have been kept in poverty by having to serve as suppliers of raw materials, accepting high-priced industrial goods in exchange. The remedy, much favored in the 1940s and 50s, was to impose high tariffs on industrial goods in order to encourage “import substitution.” However, the locally made refrigerators and automobiles were generally inferior. Besides, as the Asian dragons have shown, what is needed is to produce some high-quality product that other countries will be willing to buy.
2. The false notion that foreign investment is holding these countries back. This is refuted by the history of the United States itself, which grew rapidly in the 19th century owing to European investment.
3. The US solely to blame. With a dulling frequency, this rationalization emanates from guilty Latin American whites of the ruling class who need a scapegoat for the mismanagement their group has imposed for so many generations. It is true that there has often been collaboration between local oligarchies and Yankee interlopers. However, US influence in Europe and Asia has not held those nations down. Despite its popularity in some circles, this blame-the-gringos claim does not hold up.
4. Leftist theories that capitalism itself is to blame. This notion, still popular in some quarters, fails to explain why some countries, indeed an increasing number of them, have prospered mightily under capitalism. Why not Latin America as well?
5. A prideful sense that there is a precious quality of Latin civilization, Arielismo (to use the term coined by the Uruguayan writer Rodó), that must be preserved in the face of vulgar Yankee hucksterism and consumerism. But is this an either-or? Why not have both—a sense of courtesy together with progress? The Japanese certainly manage it.
6. Now for the final taboo: there are too many Indians and too many blacks in these countries to make modern societies. In view of the effects of artificially induced Indian inferiorization for the benefit of the conquistadores, not to mention the illiteracy and general abjection imposed on these groups for so many generations, this explanation should not be embraced without exhausting the others. Moreover, if the magic of whiteness (a new version of limpieza de sangre) were the key, the European-sourced peoples of the southern cone (Argentina, Chile, and Uruguay) should have surged mightily forward. Yet in 2003 the gross domestic product per head of Argentina was a mere $7,700, not that much higher than racially mixed Mexico, at $5,860 per head. Hence the argument of this paper, that the Hispanic legacy is the common factor.

Tuesday, October 19, 2004


The death of Jacques Derrida in Paris on October 7 has evoked a blitz of commentary, divided fairly evenly between supporters and skeptics. Although I would call myself more a skeptic than a believer, some of the negative responses are superficial, viz. “He is unnecessarily obscure” and (sotto voce) “He’s French; so how can he be good?”

In the 1960s, like many academics, I had great hopes for structuralism as a unifying method in the humanities, which seemed undertheorized. When I heard that Derrida had revised and (in considerable measure) overturned this trend, my interest was naturally piqued. I read some of his early, foundational books in French and English, but gradually lost interest, as his followers seemed to have conscripted his ideas into a kind of Church of Postmodernism. The fact that some of my students embraced a “lite” version, Derridada, did not serve to revive my enthusiasm.

Mark C. Taylor, an admirer, asserts in a NY Times op-ed (October 14) that Derrida ranks with Martin Heidegger and Ludwig Wittgenstein as one of the 20th century’s three most important philosophers. Reading this, I flashed back to my undergraduate days at UCLA half a century ago. How my friends in the philosophy department there would have been horrified by this outcome! They believed that what we now term the Analytic tradition, rooted in Frege and Russell and brought to maturity in the Logical Positivism of the Vienna Circle, had driven competitors from the field. They had only contempt for such “fuzzy-minded” continentals as Nietzsche, Heidegger, and Sartre. Interestingly enough, at this stage the Analytic thinkers believed that they had created an advanced species of philosophy that condemned most of the earlier history of the field to obsolescence. Above all, out with metaphysics! Ethics and aesthetics were merely subjective, and as such extraphilosophical—in essence “meaningless.” Only the hard precision of logical statements would do.

Ironically, Derrida returned the favor, dismissing most previous philosophy, including the Analytic trend, as disastrously implicated in the “myth of presence.” Now there is an understandable yearning among the young—and not only there—to short-circuit the hard work of confronting the historical deposit, with all of its crabbed language and cunning twists and turns. We can skip the old rubbish, and join the Revolution! At all events, a mystery remains to be explained by historians of systems of thought as to why the Analytic trend, seemingly so triumphant, suffered the dismal fate of being so thoroughly supplanted. Many tenured old fogeys in our philosophy departments still profess it, to be sure. But time has passed the Analytic preoccupation by. In that sense Taylor is basically correct.

And yet Derrida’s ideas are not as new as his admirers believe. More light is needed on the precursors of Derrida’s idea of the radical indeterminacy of all human utterance. Here are two suggestions.

1) In 1930 a young Englishman, William Empson, published his book Seven Types of Ambiguity. Impressed both by the complexities of 17th-century Metaphysical verse and the contemporary work of T. S. Eliot, Empson made a map, a tentative one it is true, of ways to detect verbal complexity and contradiction. Supported by the work of Empson's friend and mentor, I. A. Richards, these ideas flowed into the New Criticism of our English departments.

2) A generation later a more formidable contribution came from the political philosopher Leo Strauss. Working initially from Jewish and freethinking Muslim philosophers of the medieval Islamic world, Strauss found that the orthodox message--what they seemed to be saying on the surface--was belied by seeming asides in which they abjured these views. Close reading can smoke out these revealing contradictions. Yet those who are not Straussians wonder whether this method of detecting hidden truths may sometimes get out of hand.

Still Strauss and his followers emphasized careful parsing of texts, in the original if possible. Derrida seems less careful. His readings of Heidegger, for example, seem to reflect an inadequate knowledge of German. My German too may be inadequate for this daunting task—but I do not profess to offer authoritative criticism of the German magus. In other instances, as in Derrida’s brilliant detection of the ambiguity of a text by Mallarmé, the discovery remains on the level of an aperçu. It cannot be generalized to explicate the whole body of the poet’s work.

Admirers speak of Derrida’s far-reaching influence on contemporary art, architecture, and politics. It is true that postmodern architects have often paid homage to him. However, when we visit a supposedly Derridean enterprise, such as Bernard Tschumi’s intriguing Parc de la Villette in Paris, it is hard to detect any direct inspiration. It is not enough to point out that the architect sought the philosopher’s help, claiming that it was essential. Was Derrida’s intervention in fact formative? If we had not heard this, how could we detect that source from the buildings? Perhaps Derrida's help was what some term a green umbrella, that is, something the creator thinks is essential but really isn't.

In his later years Derrida advanced meritorious political views, as on Czechoslovakia, the Balkans, and gay marriage. He was a person of admirable decency. But how were these positions products of his theory? One might say that they rather represent the triumph of common sense over theory.

Those who hold that Derrida was simply a charlatan are mistaken. However, there remains the key problem of the cost-benefit ratio. Extracting the yield from Derrida’s labored texts (some very much in need of editing) can be exasperating. Apart from making one qualified for certain academic posts, is the time justified? Given a choice, I think I’d rather watch a movie.

It is a commonplace that the reputation of major figures suffers a decline after their death. Some like Sartre recover, at least in part. Others like T. S. Eliot continue to sink beneath the weight of justified objections. It remains to be seen whether Derrida will share the fate of Sartre—or Eliot.

Sunday, October 17, 2004

The current political polarization

The November 4 issue of the NY Review of Books contains a symposium in which fourteen NYR writers respond on “The Election and America’s Future.” Several of them, including Mark Danner, Thomas Powers, and Brian Urquhart, have written perceptively on current affairs in the pages of the magazine. Yet the conclusions of all fourteen are uniform: no intelligent person could vote for Bush. This is the kind of collective narcissism one finds at countless dinner parties on the Upper West Side of Manhattan where I live. The symposium might be a tape recording of such a gathering. In the atmosphere of mutual congratulation that prevails in these contexts, there is no effort to consider, even for a moment, what the other side might be thinking, and how voters, imbued with this thinking, will pull the lever for Bush.

For my part I hope that Kerry wins. But this victory, if it comes, will not be achieved with the help of the smug arrogance displayed in this NYR symposium. It looks as if the NYR brain trust is setting itself up for a Pauline Kael moment. Kael, it will be remembered, was astonished when she found that Nixon won the election. “No one I know voted for him,” she noted.

Some of the hubris of the opposite side is shown in a quotation from Mark McKinnon, a Bush media consultant, responding to a question by the liberal journalist Ron Suskind, as reported in the NY Times Magazine (Oct. 17). “You think [Bush’s] an idiot, don’t you? … [A]ll of you do, up and down the West Coast, the East Coast … Let me clue you in. We don’t care. You see, you’re outnumbered 2 to 1 by folks in the big, wide middle of America who don’t read The New York Times or Washington Post or the LA Times. And you know what they like? They like the way he walks and the way he points, the way he exudes confidence. They have faith in him. And when you attack him for his malaprops [sic], his jumbled syntax, it’s good for us. Because you know what these folks don’t like? They don’t like you.”

Yet Suskind, the liberal target of these harsh remarks, is undeterred. He calmly refers to his group—those inhabiting the Blue zone attacked by McKinnon--as “reality-based.” Huh? I live among these people on the Upper West Side of Manhattan. I share their hope, though not much else, that Bush will be defeated. But I am not delusional enough to think that only the views of my neighbors are “reality-based.”

Of course much crude ranting comes from Republicans, especially on cable TV. But it is a huge tactical error to think that conservative views are simply redneck mouthings, unsupported by ideas. Arguably there are more ideas on the Republican side than the Democratic one.

It is not hard to survey this ideology. It started twenty-five years ago when George Will provided a conservative rationale for big government in his “statecraft as soulcraft.” More recently this idea of aggressive government intervention has gone global in National Greatness Conservatism, as advocated by David Brooks, William Kristol, and others. These ideas are not limited to a narrow circle of neo-cons, though of course 9/11 has helped to lend them urgency.

For my own part, these statist views send chills to the very center of my libertarian soul. I hope that they do not prevail. But we are summoned to analyze and refute this ideology, not to pretend that it doesn’t exist. All too often, to cite Schopenhauer, traditional liberalism has taken the limits of its horizon for the limits of the world. If an alternative to the current ideology is to prevail we must take it seriously, abandoning the comforts of a cocoon of collective narcissism.

Tuesday, October 12, 2004

Mozart's symphonies: weighing their significance

When I casually remarked that Mozart wrote only 41 symphonies, a friend rightly corrected me. According to one authoritative enumeration, the Austrian composer is now known to have written 68 surviving ones (a few others may have perished).

The ostensibly canonical roster of 41 stems from the edition of Mozart’s publisher Breitkopf & Härtel (1879-82). Yet by 1910 fourteen additional symphonies had been recovered (sometimes known, despite their early dates, as 42-55). Now rejected as spurious are 2, 3, 48, 49, 51, 52, and 53. So, subtracting 7 from 55, we get 48.

But that is not the end of the story. More recent work has identified twenty more symphonies. The musicologist Neal Zaslaw has analyzed and consolidated these findings in a major book of 1989 entitled “Mozart’s Symphonies.” Zaslaw also collaborated with the Academy of Ancient Music on their complete recording, available on nineteen CDs. That he is no wild expansionist, but one whose opinion must be accorded full weight, is shown by the fact that Zaslaw is one of the editors of the forthcoming edition of the Köchel catalogue, the bible of Mozart’s works.
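For readers keeping score, the running tally described above can be sketched in a few lines. (This is just a restatement of the arithmetic in the text, not an authoritative count; the category names are my own labels.)

```python
# Tally of Mozart's surviving symphonies, following the enumeration above.
canonical = 41                          # Breitkopf & Härtel numbering (1879-82)
recovered_by_1910 = 14                  # the symphonies numbered 42-55
spurious = {2, 3, 48, 49, 51, 52, 53}   # now rejected as inauthentic

subtotal = canonical + recovered_by_1910 - len(spurious)  # 55 - 7 = 48
more_recent_finds = 20                  # identified by later scholarship
total = subtotal + more_recent_finds

print(subtotal, total)  # 48 68
```

The final figure of 68 agrees with the enumeration cited at the start of this entry.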

Yet how much do these discoveries really matter? Mozart first started his symphonic career when he was 8½. Although a prodigy, he was not that remarkable. Moreover, the symphony itself was then in its childhood. Etymologically the word symphony simply means playing in unison. In those days, many performers were not adept enough to attempt complex works. It was safer to write some straightforward oom-pah piece they could negotiate. There was often only one rehearsal or none. Symphonies could serve as overtures, entr’actes, or just background music. Many manuscripts were lost, because they were regarded as pièces d’occasion—no need to save them.

More consistent in quality (relatively speaking) and active over a longer period, Haydn is probably justly regarded as the father of the symphony. Again the record is patchy, though. In short, we should refrain from retrojecting the qualities of Mozart’s last six and Haydn’s London symphonies back through the entire series of either composer.

So how many of the Mozart symphonies are still worth listening to? As someone who has long admired Wolfgang Amadeus (though less so now than formerly), I would judge the following worth listening to more than once: 25, 29, 31 (Paris), 35 (Haffner), 36 (Linz), 38 (Prague), 39, 40, 41.

Specialists, and some others, will want them all (as in the AAM CD set). But I think that there is a difference between archaeology and enduring contributions to world culture. Most of the 68 are not enduring contributions in this sense. To be sure, we can be glad that (unlike the case of Aristotle's lost treatise on comedy) we have the full collection and can judge for ourselves.

Today many rightly express concern for the fate of classical music, where audiences seem to be dwindling day by day. Sometimes razzmatazz, such as the Met’s new production of The Magic Flute, brings the crowds back—but for how long? I do not know how to measure what contribution archaeological piety has made to this decline, but it may account for some of it.

When I was in college I revered W. A. Mozart more than any other person of the past or present. I was horrified when I saw the novelist Kingsley Amis (that ignoramus) refer to him as “filthy old Mozart.” Looking back on those days, I suppose that my youthful hero-worship reflected a need for a stabilizing force to hold my chaos in check.

Mozart’s best instrumental works display a cosmic dimension, a sense of order, balance, and completeness. Together with Haydn’s comparable achievements, they set the pattern for “absolute music,” a medium of sound alone that did not require the help of sung words or a printed program. Still, this advance was accomplished only in the major compositions of each: our understanding is not enhanced by a fetishistic concentration on a large mass of indifferent ‘prentice work. Even Zaslaw seems tacitly to acknowledge this distinction.

This new autonomy of sound was also the great contribution of Germany to music. To the orientation of Haydn and Mozart, which can rightly be termed cosmic, Beethoven added a huge accretion of subjectivity—his colossal ego. This modification is not to everyone’s taste. But arguably it set the stage for all later music. The mighty heritage of Beethoven has stretched even to rock and roll. So classical music unknowingly generated the seeds for its own supersession. “Roll over, Beethoven,” indeed.

Thank goodness, though, we have the Mozart symphonies on CD. Even all 68, if you wish.

Wednesday, October 06, 2004

Chauncey on gay marriage

George Chauncey, professor of American history at the University of Chicago, has been widely admired for his 1994 book “Gay New York,” which covered the field up to the year 1940. Soon to appear is a sequel with a broader remit, “The Making of a Modern Gay World (1935-1975).” In between these two landmark works, Chauncey has found time to produce a shorter volume on a subject of great current interest, “Why Marriage? The History Shaping Today’s Debate Over Gay Equality.” The time frame of this book is about fifty years, though his analysis of the institution of marriage as such delves back into the 19th century.

Not previously noted for his interventions on the subject of gay marriage, Chauncey can claim a detached point of view. This does not mean that he endorses any of the unviable arguments against gay marriage emanating from “traditional values” defenders. But he does not take sides among the several factions arguing for same-sex marriage. And of course some gays and lesbians, mainly on the left, continue to be leery of marriage. Here he makes a shrewd point: the fact that gays differ over marriage shows that there is no gay agenda. Chapter 3, on the historical variability of marriage, is excellent. Chauncey indicates that some anthropologists have been so struck by this variability that they deny that there is any single thing called marriage. This means that both traditionalists, who deplore the profanation of their parochial concept of marriage, and gay social-policy types, who assume the stability of marriage for their own purposes, are on shaky ground.

Presumably reflecting his historian's credo, Chauncey gives little attention to the contributions of individuals, so that Andrew Sullivan, Jonathan Rauch, and Evan Wolfson rate only passing mention. He thinks that historical forces are the main element.

However, his approach to the central problem is unsatisfactory. On the one hand, he holds that the migration of gay marriage from the periphery to the center of attention (a trajectory that has taken just ten years) is a remarkable phenomenon. I agree. On the other hand, he claims that he can easily explain this development. Well, if it is a remarkable phenomenon, it is not easily explained, even by means of a prophecy after the event. If it is easily explained, the amazing “legs” of the issue cease to be problematic. In short, Chauncey has framed the problem, but has not advanced very far towards its solution.

Though fluent, the book rarely probes deeply. Instead of gesturing towards the ineffable wisdom of the historian’s stance, Chauncey needs to do some hard empirical work to find the relevant data. In fact he misses most of the whole first act of the drama, which prefigured that which was to come in our own day.

In 1952 ONE Magazine was founded in southern California as a publications counterpart of the Mattachine Society, our first stable and serious gay rights organization. Early on, in August of 1953, the monthly published an exploratory article, “Homosexual Marriage?” Matters hung fire until 1961. In January of that year ONE, Inc., the organization, convened a summit meeting to discuss a homosexual bill of rights. According to the position paper, plank No. 3 read as follows: “Marriages between homosexual members of the same sex should be recognized and provided for by law and should have exactly the same status and confer the same benefits and responsibilities as heterosexual marriages. This would include tax exemptions, joint ‘husband-and-wife’ ownership, and so on.”

This proposal led to a heated controversy. Generally, the southern California delegates were in favor, while the northern California ones (including a delegation from The Ladder, the lesbian magazine) were against. A flurry of publications ensued. A pulp journalist, R.E.L. Masters, publicized the matter in his 1962 exposé “The Homosexual Revolution.” The last notice in the series seems to have been a 1963 article in ONE Magazine. This terminal article is the only item in this ten-year development that Chauncey has noticed. In this way the California movement laid the foundations for what was to become a mass movement for gay marriage. This movement did not stem (as some have suggested) from the grass roots, that is, from a few obscure gay and lesbian couples. The idea was hatched by an avant-garde of intellectuals, centered in southern California.

The art historian Jonathan Katz has rightly singled out a paradox. The fifties were in some ways the most homophobic decade this country has witnessed. Not only was same-sex behavior illegal in every state, but gays were denied federal employment and widely subject to entrapment by vice squads, which were intensely proactive. Yet, as Katz points out, this era also shaped such luminaries of American culture as Tennessee Williams, Gore Vidal, and Andy Warhol. Among the creative responses, we must now include the debate on gay marriage.

The futility of political junkiedom

We used to hear a lot about the digital divide. Yet as more and more people develop the computer skills they need and gain access to computers (in the public library, if necessary), that controversy has faded.

The divide that now troubles me is the one between political junkies, our chattering class, and the rest. These days we have access to news—actually “news analysis”—on cable TV 24/7. And that is not the only source of saturation coverage. The most frequently visited blogs seem to be those spouting partisan political commentary. But many, especially young people, have pretty much tuned out—except for attending to Jay Leno, Jon Stewart, and the satirical paper The Onion.

Bruce Ackerman and others think that what needs to be done is to shift more people from column B to column A—get them interested in political debate, by financial incentives if necessary. Of course this is a perennial issue for the googoo (“good government”) folks, heightened by growing indifference among the mass of the people.

I say just the opposite. I think that we need to pay less, rather than more, attention to the minutiae of political junkiedom.

Never before have ordinary citizens had less power to influence the actual course of government. The major positions of Bush and Kerry are virtually indistinguishable. They are both hawks and spendthrifts. For most of us there is not even the possibility of simulating a choice. The outcome of the vote in most states—the blue bloc and the red bloc—is known in advance. Some say that as few as 40,000 undecideds in the swing states will determine who will occupy the White House. All but a few congressional seats are now “safe,” meaning that it doesn’t matter whom we vote for, as the incumbent is sure to be returned to office.

Contrast this exclusion from almost any role in what will determine our fate with the staggering amount of actual information—much of it minutiae—we can now acquire.

Of course it is not all trivial. Early last year, sitting in my pajamas in my NYC apartment, I made up a list of 20 spurious reasons advanced by our government and the neo-cons for invading Iraq, and sent it to friends. Subsequent events have proved exactly that: they were spurious. Yet respected pundits are now complaining, “I was deceived.” They should ask themselves how they could have been so dumb as to back the Iraq war in the first place. Chalk up another failure for the “best and the brightest.” But do they even deserve that accolade? Could it be that the politician/pundit class in this country is by and large incompetent?

Of course I could create a political blog and gain some tiny bit of influence. But why turn one’s life over to this stuff? I prefer to follow the example of Machiavelli and converse in the courts of the ancients. Nowadays, of course, the ancients include anyone who died before the end of the 20th century. That broad field suits me just fine.