Thursday, December 30, 2004

Christmas retrospect

For several days now some friends and I have locked horns in a lively discussion of the origins of Christmas. In the larger world, the role of this holiday has become a matter of public controversy, owing to the view of some Christians that the event is being downgraded and relativized.

It is generally acknowledged that no one can determine the actual day of Christ’s birth. Several candidates enjoyed popularity in various parts of the late Roman world, some in the spring. The most popular choice, though, was January 6, Epiphany. Yet the Roman church adopted December 25. Why?

A currently popular view regards Christmas as a hijacking of the Saturnalia, a raucous pagan event that began on December 17 and ran from three to seven days thereafter—but never, it seems, reached as far as the 25th. In addition, some have suggested the winter solstice as a source, but that is fixed at December 22. Near misses don’t qualify, for the Romans insisted on precision in these calendrical matters. The reason for this emphasis is that astrology, then widely accepted, required determining not just the actual day of one’s birth, but the hour. (By the way, has anyone ever calculated Jesus’s horoscope based on the several dates proposed for his birth?)

To make a long story short, Christmas in fact coincides with an observance established by the emperor Aurelian in AD 274: December 25 was fixed as the birthday of the Unconquered Sun (Sol Invictus).

Late antiquity saw the rise of a contentious welter of religions. Of Middle Eastern origin, the Unconquered Sun came to enjoy wide appeal because of its lack of specificity. While it connoted potency, the Sol Invictus otherwise had a kind of neutrality that gave it appeal to a number of competing religious factions. It was cosmic, not anthropomorphic, at least not necessarily so. For traditional pagans Sol Invictus was identical with Apollo, originally a Greek import. The Mithraists saw it as a manifestation of their Mithras Helios. Christians could honor the solar deity as a metaphor for the "Sun of Righteousness," that is, Jesus Christ. Interestingly, the soil underneath the basilica of St. Peter’s has yielded a mosaic, apparently of the early 4th century, showing Christ as a sun god riding a chariot.

While the Roman Church, and eventually the entire Latin West, adopted December 25 to mark the Nativity, the eastern holiday of Epiphany was retained as well. Today the 6th of January is observed in Hispanic countries as the day of the Three Kings (the Magi), when gifts are exchanged. In this way, the old Roman observance of New Year’s Day, the first of January, was bracketed by Christmas, on the one hand, and Epiphany, on the other. They were two bookends, as it were, enclosing the older date for the beginning of the civil year. (For the Church, Christmas was the beginning of the year.) The combination attests a widely ramifying process: retention of traditional holidays—provided that their pagan character was not overt—while mingling them with the new.

As part of this inquiry I looked into one of the major sources for late Roman festivals, the Calendar of 354. This richly illustrated volume, made for a cultivated Christian named Valentinus, is actually a composite reference book recording the public religious festivals in Rome (roughly the first half), together with Christian parallels (the second half). While this combination may at first sight seem schizophrenic, or at best a shotgun marriage, it actually accords well with an era of transition. Valentinus, the book’s owner, wished to have a record of the festivals of his ancestors, as well as the holy observances of his own faith. Many of the old festivals were falling into desuetude in his own day, and new deities, more acceptable to Christians and those adhering to other salvific religions, came in, favored because of their relative neutrality. These included Roma Aeterna, a personification of the city; Salus, or public safety; and the aforementioned Sol Invictus.

The original of the Calendar of 354, our best source for these matters, has been lost. Yet it has been reconstructed by several generations of classical scholars. The results of these labors have been summed up by Michele Renee Salzman in her fine monograph, On Roman Time (Berkeley, 1990).

Now for a fast-forward. Somewhat analogously to the transitional picture recorded by the Calendar, we can observe changes in our own practice. In 1954 Armistice Day, devised to commemorate the end of World War I on November 11, was renamed Veterans Day. Washington’s and Lincoln’s birthdays have been rolled together as Presidents’ Day, while a new holiday has appeared to honor Martin Luther King.

Today controversy surrounds Christmas. For some time it has had two rivals, Hanukkah and Kwanzaa. In many of the "blue" (liberal) states it is no longer fashionable to say "Merry Christmas"—one should call out "Happy Holidays" instead. In some cases Nativity scenes and Christmas carols have been banned from public observance, ostensibly on grounds of separation of church and state.

Christmas is a national holiday in the United States. Yet in its origin it is a religious holiday. In a sense we have come full circle, back to the duality of late Roman times. The day of the Unconquered Sun was a holiday in the perfected version of the official (pagan) calendar. Yet Roman Christians could also accept this figure as the avatar of their own founder. Hence Christmas as we know it.

Like everything else in human culture, holidays evolve. As in 4th-century Rome, these changes can occasion controversy, with some urging radical change and others defending the status quo. Whatever the case, it seems that Christmas will be with us a good deal longer.

Sunday, December 19, 2004

Emblems of national identity

Art historians tend to shy away from the simpler, stylized imagery of daily life. At one time semiotics, a promising discipline, took on the burden of studying business logos, traffic signs, and patriotic emblems. Now that semiotics is languishing in the doldrums, these designs have settled uneasily in the discipline of "visual studies." And good riddance, we think. For these jingles for the eye--memes, if you will--lack the complexity of True Art.

Still, as in the case of heraldry, such emblems have sometimes been constituent elements of more complex works of art. Moreover, the study of such motifs can benefit from the sophisticated techniques of analysis stemming from orthodox art history.

During the Middle Ages such regalia as the orb, crown, and coronation robes served to signal the potency of particular regimes and ruling houses, such as those of the Holy Roman Empire.

A sequence of emblems in France affords a representative overview of this development. During the Middle Ages the fleur-de-lis and the oriflamme were closely associated with the ruling dynasty. (In some cases, such symbols may be ambivalent, as when the fleur-de-lis, rechristened the giglio, symbolizes the city of Florence.) The Gallic cockerel stems from a gibe first made by foreigners. According to the French constitution the only officially recognized symbol of the Republic is the tricolor flag, with its red, white, and blue bands. However, la Marianne, embodied in recent times by celebrities, is the human personification of the nation.

To those of us residing on these shores the current symbols of the United States are most familiar. The historical record provides many less well-known examples. Some of these appear in a massive new tome (851 pp.) by David Hackett Fischer, "Liberty and Freedom: A Visual History of America’s Founding Ideas" (Oxford U Press, 2004). A respected historian, Fischer in his earlier book "Albion’s Seed" convincingly traced individual strands of the American character to four different waves of immigration from the British Isles.

Fischer’s new book covers such important emblems of the struggle for independence as the Liberty Tree and the “Don’t Tread on Me” serpent. A number of interesting permutations of Uncle Sam and Lady Liberty (before and after Bartholdi’s colossus) enrich the volume.

Given the current immature state of scholarship in this field, no such book could be complete. Still, there are some odd omissions. There seems to be no discussion of the Great Seal of the United States, which since 1935 has adorned every dollar bill. The Seal’s reverse shows an unfinished pyramid. At its apex is an eye within a triangle, surrounded by a golden glory. As we know it, the pyramid of the Great Seal reflects the standard Egyptian type with straight sloping sides. Yet when the design first appeared in 1782, the pyramid had 13 steps, a design more typical of Mesopotamia (the ziggurat)—ironically, the country we have invaded and occupied. In the latter part of the 19th century, when there were many more than thirteen states, this design was no longer suitable. Then it received straight sloping sides. It seems that more research must be undertaken to clarify this pervasive emblem. The matter is not simply resolved by archly declaring it to be a Masonic symbol, and leaving it at that.

In the coverage of recent times, Fischer seems to lose his way. The symbols of the Civil Rights era and the Women’s movement are fairly well handled. Yet there is little on the rich imagery of the hippie and psychedelic movements, generating endless variations on Art Nouveau themes. The symbols of the gay and lesbian movement are scanted. There is only one tiny illustration of the rainbow flag, with no explanation. For the last twenty years or so gays have flown this flag in many US and foreign cities. Few realize, however, that the design was purloined (inadvertently or deliberately) from the emblem designed by the Frenchman Charles Gide in the 1920s for the world Cooperative Movement. From that source, it also migrated to Peru, where the rainbow flag is proudly displayed as the emblem of the highland people of Cuzco and environs.

Even though they may be destined to remain of little interest to art historians (and indeed to cultural and political historians), such emblems have protean qualities of survival and self-transformation. They have also played a vital role in shaping national identity in many lands.

Tuesday, December 14, 2004

Primacy of Egyptian art and architecture

"Black Athena: The Afroasiatic Roots of Classical Civilization" (1987-91), a major polemic by Martin Bernal, has elicited much controversy. Despite the misleading title, this publication is not a contribution to Afrocentrism, as the term is usually understood. Instead the author seeks to restore what he terms the "ancient model," which posits the massive indebtedness of ancient Greece to Egypt. He holds that this view was dominant in ancient Greece itself, and prevailed in Europe until the late 18th century, when rising Eurocentrism sidelined it.
Against Bernal, Mary R. Lefkowitz has orchestrated a torrent of criticism--in her own book "Not Out of Africa," and in an edited volume, "Black Athena Revisited," gathering a whole raft of scholars to smite Bernal’s work. In the eyes of some this massive assault demolished Bernal. Yet he was not to go down so easily. In a collection of essays and reviews, "Black Athena Writes Back" (2001), Bernal vigorously rebutted his opponents. The upshot is this: some of what Bernal says is true, and some of it isn’t. Yet that which is the case suffices to refute the conventional wisdom of the Hellenophile miraculists, who assert that the Greek "miracle" was a case of parthenogenesis, emerging without any help from the older peoples dwelling to the east and south. Apart from its fantastic etymologies, "Black Athena"'s major defect (in my view) is neglect of the ancient Near Eastern sources from Sumer, Assyria, and Syria-Palestine (as pointed out by Walter Burkert and others).

Bernal postponed the subject of artistic relations for a later volume. It appears that this sequel will not be appearing. If so, this will be a pity, as the case for artistic indebtedness is a substantial one. It is not impossible, though, to divine some of the points Bernal might make in his putative supplement. The following outline of the Egyptian legacy in art and architecture looks beyond Greece and Rome to modern Europe.

1. Ashlar masonry. The Third Dynasty funerary precinct of Zoser at Saqqara (ca. 2630-2611 BCE) is the first major architectural enterprise to be executed in stone throughout. Moreover, the fine limestone blocks are in ashlar masonry. That is, they are parallelepipeds, six-faced regular solids of standard sizes, laid in regular horizontal courses. Before this, such monuments were of mud brick. Imitating the regularity of the six-faced bricks (top, bottom, and four sides), the Saqqara limestone blocks are examples of skeuomorphism, a learned term for the migration of a form native to one medium into another. Once invented, ashlar masonry had a great future. One thinks of the walls of Greek monumental buildings, not to mention countless stately banks, libraries, and governmental buildings of our own time—all executed in ashlar masonry. It is sometimes assumed, by the way, that standardization is a product of our own industrial age. However, standardized bricks and limestone blocks long preceded it.
2. Modularity. The invention of ashlar is probably the first instance of the principle of modularity—the regular "scansion" of space using architectural means. A kind of negative version appears in the regular bays of Egyptian temples and hypostyle halls.
3. Columnar architecture. The Saqqara complex shows several types of engaged (attached) columns. Later, the columns are freestanding, surmounted by capitals, and marshaled into rows (colonnades). Indebted to Egypt, columnar architecture was fundamental in ancient Greece, Rome, and the Renaissance.
4. Pyramids. As is well known, the Egyptians perfected the pyramid as a geometrical monument with five smooth faces (counting the base). There is a long history of replication of pyramids, culminating in I.M. Pei’s glass examples in Paris and Washington, D.C., as well as the Luxor Hotel in Las Vegas. However, the pyramid embodies the broader theme of elementalism. As architects from Ledoux to Le Corbusier have shown, beauty and authority stem from dramatically simple forms.
5. The hypostyle hall. As seen at the Karnak temple, this is a large pillared hall in which the central section, the nave, is higher than the two wings on either side. Light floods into the structure’s middle from the clerestory at the top of the nave. This principle recurs in Roman basilicas, and again in Christian churches, including many modern cathedrals.
6. Orthogonal city planning. Groups of Old Kingdom mastabas are distributed according to a gridiron plan. Like most early towns everywhere, most Egyptian cities were apparently “organic” (higgledy-piggledy) in planning. However, the Middle Kingdom town of Kahun, created anew to accommodate workers, reflects a system of right angles. Broadly speaking this is the pattern found in Greek “Hippodamean” cities, Roman towns, and many American cities. Regardless of whether there is a direct connection, the Egyptians pioneered the orthogonal principle.
7. Monumental sculpture. Beginning in the Third Dynasty the Egyptians created canons of monumental sculpture, life-size or nearly life-size pieces that follow well-defined patterns of arrangement. In this way they invented the s t a t u e, as distinct from the “figurines” and rough “idols” formerly dominant.
8. The nude. During the Fifth Dynasty the Egyptians introduced nude male figures in the tombs. These are shown striding, with the left foot forward. Sporadically recurring, the form was purloined by the Greeks for their k o u r o s.
9. The bust. This is an abbreviated human being, a type of sculpture showing only the head and shoulders. The earliest surviving example seems to be the Old Kingdom Ankh-haf in Boston. There is a charming wooden example in the Tut treasure—and of course the world-famous Nefertiti in Berlin. The Romans produced busts of revered ancestors. And busts proliferated in the European baroque.
10. The sphinx. Egyptian sphinxes (atypical examples of animal-human hybrids) generally represent rulers. In Greece the form was always female. Modern artists like Elihu Vedder and František Kupka have quoted the form as a token of inscrutability.
11. The frame. Early relief carvings, such as the Wadji stele in the Louvre and the wooden Hesira panels, fix the frame situation by raising the surface outside the picture area so as to create a uniform boundary. Later, the Egyptians developed wall paintings that clearly suggest beaded frames. Simulated frames occur in Pompeiian painting, while real three-dimensional examples enclose European canvases from the Renaissance to the present.
12. Illustrated books. In their papyri the Egyptians invented the practice of interspersing pictures amidst columns of text. The illuminated books of Byzantium inherited this practice. It lives on in our art books, with their dialogue of picture and text.
13. Comic papyri of animals simulating human conduct. A striking example is the strip of the lion and the gazelle in the British Museum. These images show that the ancient Egyptians had a sense of humor. Yet such depictions are not just humorous but embody social commentary. Cats peacefully look after mice and geese, while a gazelle must ponder how to cater to her lion master. Mickey Mouse and Donald Duck came much later.
14. Abstract art. During the Amarna period (ca. 1372-54 BCE) the old anthropomorphic and theriomorphic (human and animal) forms of deities were discarded in favor of a circular rendering, the concave disk standing for the Aten, the solar principle. Modern abstraction, also rejecting the depiction of living beings, has also favored circles and disks. Among the abstract artists exploiting the disk form are Robert Delaunay, Theo van Doesburg, and Kenneth Noland.
15. Gender variation. During the Old Kingdom and Middle Kingdom human figures complied with an established gender contrast. Men were robustly muscled, their buff upper torso revealed by the standard kilt. Women were slender, graceful, and lissom, generally wearing a slight slip-like garment. During the New Kingdom major changes became evident. Queen Hatshepsut (r. 1479-1457 BCE) ruled as a man. Her statues sometimes reflect her birth gender (her feminine side) and sometimes her masculine status, with pronounced features and a false beard. With his shrunken upper torso and pear-shaped middle section, the Amarna pharaoh Akhenaten (r. 1353-1337) shows a pronounced gender ambiguity. Recent scholarship holds that his wife Nefertiti may have assumed the male identity of Smenkhare, so as to be co-ruler with her husband during his last years. The depictions of Smenkhare are notably androgynous.

In their number and variety, these "firsts" speak for themselves. To be sure, there were many firsts in the rival civilization of Mesopotamia, but rarely in art. Egyptian primacy in this realm suggests the following stark conclusion. In all of Western art there are two main sequences, BE (Before Egyptian, i.e. prehistoric) and E plus (Egyptian and after).

Thursday, December 09, 2004

Two Italian cinema classics revisited

My attitude to movies has been a complex one. When I was a kid television was at first nonexistent and then trivial and fledgling. Everyone went to the movies two or three times a week. There were special prices for children and I reveled in cartoons. Later, as my culture-vulture tendencies increased, I came to look down on current Hollywood films as major supports for the mindless “togetherism,” conformity, and consumerism I perceived as blighting America. I did have a love for the early silents—D.W. Griffith, Chaplin, and above all the German expressionist cinema. Coming at the end of that tradition was “The Blue Angel,” with Marlene Dietrich and Emil Jannings, a masterpiece whose full range I will attempt to explore on another occasion.

Only in the 1960s did I become engaged with contemporary films, in the work of Antonioni, Bergman, and Fellini. I was less keen on Godard.

Today I carefully ration my visits to current films. Too many of them reflect the “car-crash” aesthetic—lots of violence, whirling camera work, and overly loud music. The principal alternative is only slightly less palatable, the sentimental fare sometimes termed "chick flicks." Instead of these things, I go back to the classics.

As part of a recent retrospective of Italian films, I saw two classics of the sixties, Antonioni’s "La Notte" (The Night) and Bertolucci’s "Before the Revolution."

"La Notte" has been saddled with the reputation of being about alienation and noncommunication. Citing these issues (which were indeed preoccupations of the era) fails to do the film justice. In reality the self-disciplined upper-middle-class lifestyle of the characters reflects a utopian vision, a world of the near future (as seen from 1961) that will blend European sophistication with American efficiency. Of course this outlook emerged in the years before Watts, May 1968, the Brigate Rosse, and so on. The sleek modern architecture of Milan (or those parts we are shown) sets the mood for a Le Corbusier city of cleanliness, propriety, and efficiency.

The chic La Notte folk are always well dressed and their hair is impeccably groomed. No one is overweight or sloppy in appearance. Voices must not be raised; outright arguments must be shunned. It is better to take evasive action, as the dashing intellectual Pontano (Marcello Mastroianni) does, than engage in any direct confrontation. He does not even refuse the tycoon Gherardini’s job offer, though it is repugnant to him. A flat turndown would be, well, so gauche. Even the servants adhere to the code of respectability and discretion. Although liquor is available, no one gets drunk. At first the disappearance of Pontano’s wife Lidia (Jeanne Moreau) for the afternoon seems an attempt to get free (she even encounters the offer of a maison de passe!), but it turns out not to be. Instead, she agrees to meet her social obligations by attending the evening soiree at the Gherardinis’ lavish villa.

In short this is a world in which those who wish to get ahead must conform. If so, the rewards are there, including the last one--a "sweet death" in an ultramodern hospital (the destiny of Pontano’s ailing mentor Tommaso).

A few chinks afford a glimpse of another world. A nymphomaniac being treated in the hospital lures Pontano into a brief embrace. Two fighting toughs duke it out in the rundown suburb of Sesto. A nightclub features two exotic dancers, African Americans whose “daring” clinches are carefully calculated. We learn of the rich family’s junket to America to witness a hurricane (off screen). At the soiree the viewer encounters a middle-aged woman, Resy, who yields to a crying jag. Everyone ignores the unfortunate Resy (apparently a discarded mistress of the tycoon), except for the interlocutor who seeks to comfort her on the settee. All these things are eruptions of a world of irrationality and primitivism that must be banished.

That these eruptions represent a primitive world, once more pervasive but now ready to be discarded, is shown by Lidia’s revelation that she and her husband once lived in the dilapidated suburb of Sesto. That was then—now is now.

With its almost Corbusian vision of a world of progress, this film contrasts with a product of the young Turk of the era, Bernardo Bertolucci. His "Before the Revolution" (1964) deals prophetically with sexual anomaly (the hero has an affair with his aunt) and the prospect of political upheaval. The prophecies did not take long to be borne out. Four years later came the uprisings in France and in US universities. The gay rebellion occurred at Stonewall a year later.

In the long run, though, the vision of "La Notte" seems the more enduring one--as we struggle to cope with cell-phones, iPods, DVDs, and who knows what is coming next.

Wednesday, December 08, 2004

The end of art?

The last half-century has seen a series of proclamations of closure, typically cued by the introductory words “end of.” During the 1960s Daniel Bell and others wrote of the end of ideology (an unlucky prediction since the spread of the New Left was then in the process of reviving ideology). More recently, Francis Fukuyama’s concept of the end of history provoked a flurry of interest. There has been a widespread sense that we have reached the end of the book as a medium of information. Well maybe we stand at the end of the "Gutenberg Era," but books are still thriving, though somewhat less than formerly. One music critic sounded a peremptory doom: the end of the classical CD was scheduled for 2004 (yet this debacle doesn’t seem to have happened).

Many of these statements are rhetorical exaggerations, designed either to accelerate the trend (as with those who hoped that ideology could be hastened to its grave) or, conversely, to sound the alarm, hoping to reverse or at least retard the threatened decline. Sometimes the point seems to be simple provocation, as with the volume entitled “The End of Racism.” When there genuinely is a complete end, as with the sodomy laws in the US in 2003, there doesn’t seem to be a felt need to appeal to "end of" rhetoric.

I turn now to my main subject: assertions of the "end of art."

Aesthetic reactionaries constitute the common-garden variety of those who speak of the end of art (that is, of the visual arts of painting and sculpture). These censorious individuals decry the prevalence of modern art as a sign of decline and debasement, spelling the end of anything worthy of being called art. And perhaps (with a side-glance at the fall of the Roman Empire) today’s purported travesties offer premonitions of the end of civilization itself. These commentators, then, do not deny that many are now producing objects that could be called art—they simply claim that no good art is being produced. By signaling this debasement, they hope to spark a return of "standards."

Recently, though, there has arisen a school of critics who admire modern art, even its most challenging innovations, while simultaneously holding that art has come to an end, perhaps in the 1980s. This school asserts that good "art," some very good, is being produced today—but that it is not ART.

Before examining the current form of these arguments, a look at two earlier figures will be useful.

The first history of art that has come down to us is the one embedded in the Natural History of Pliny the Elder (first century CE). Himself a Roman, Pliny was concerned primarily with the development of Greek sculpture and painting from the fifth century BCE onwards. For two centuries all went well. Then, after a flowering of art about the time of the 121st Olympiad (296-293 BCE), "deinde cessavit ars"—art stopped (34.52). But not permanently, for it recovered at the time of the 156th Olympiad (156-153 BCE). Given Pliny’s neo-classical taste, it seems likely that this blank century or so witnessed not the total cessation of art, but rather the prevalence of a type of art that Pliny did not like: the Hellenistic baroque. Accordingly, Pliny’s scheme is a model for aesthetic reactionaries, who think that art, as far as they are concerned, is dead now, but may make a comeback.
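Pliny’s Olympiad dates can be checked by simple arithmetic: on the conventional reckoning, the first Olympiad began in 776 BCE and each subsequent one followed at a four-year interval. A minimal sketch (the function name is my own invention, not anything in Pliny):

```python
def olympiad_years(n):
    """Return the (start, end) years BCE spanned by the n-th Olympiad,
    assuming the conventional epoch of 776 BCE for the first Olympiad
    and a four-year cycle."""
    start = 776 - 4 * (n - 1)
    return start, start - 3

# Pliny's landmarks: art "stopped" after the 121st Olympiad
# and revived in the 156th.
print(olympiad_years(121))  # (296, 293) BCE
print(olympiad_years(156))  # (156, 153) BCE
```

Both results match the parenthetical date ranges Pliny’s modern editors supply.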

The German philosopher G. W. F. Hegel (1770-1831) is often regarded as maintaining the end of art, a demise apparent in his own times two hundred years ago. This claim stems from his vast Lectures on Aesthetics, which were collected and edited posthumously, largely from student notes. For this reason they contain infelicities and perhaps even contradictions that the writer might not have authorized. At all events, as far as one can reconstruct his views from this somewhat opaque medium, Hegel held the view (common among intellectuals of his own day) that Greek art had achieved an unsurpassable excellence that cannot be replicated. In Hegelian terms this art constitutes the reflection of absolute spirit. Art of this quality has indeed ended. But less perfect forms of art have soldiered on until our own day.

At the end of the Lectures Hegel presents a different view. Art (comprising the five branches of architecture, sculpture, painting, music, and poetry) is like a giant garland, with all the arts contributing strands. That garland, it seems, is now fully complete, and nothing more is to be done. In this sense the art enterprise is a great task undertaken by humanity millennia ago. In that respect it is not unlike short-term, mundane efforts, such as cleaning out the garage or persuading a loved one to get married. One day we recognize that the task has been accomplished.

There are indeed many endeavors that, though protracted, have a definite beginning and a definite end. Consider the acquisition of the Ph.D. degree. This has four main steps: 1) application and acceptance to a degree-granting university; 2) completion of required course work; 3) passing the relevant examinations; 4) production and defense of a written document, the dissertation. With the accomplishment of stage four the Ph.D. effort achieves closure. May it not be that humanity’s art effort, though stretching over many millennia, is now finished? Perhaps the human collective now has its art Ph.D.

Knowing nothing of prehistoric art (the major discoveries were to occur after his death), Hegel assumed that art began in the high civilizations of the ancient Middle East, especially Egypt. Thus there was an era b e f o r e art, for prehistory was preart. Why not then posit an era a f t e r art? As we shall see, this triadic sequence (absence, flowering, absence) has furnished a template for our own time.

In 1984 the American philosopher and art critic Arthur Danto had a kind of epiphany. After seeing Andy Warhol’s Brillo boxes at the Stable Gallery in New York, he produced an essay, “The End of Art.” Seemingly, Warhol had shown that there was no difference between art objects and objects of daily use. When the artist transported the boxes into a gallery, they became art. At the same time such acts abolished art. Danto’s reasoning was indebted to Hegel. With Warhol art had become philosophical, yet in so doing it had ceased to be art. A few years later he qualified this claim by suggesting that we have entered an era of “deep pluralism.” Art, by ceasing to move in a discernible direction, had lost an essential feature of being art. Despite the implicit qualification, Danto was unwilling to abandon his conceit that art had ended in 1984, even though objects continue to appear that look very much like art. Appearances, it seems, can be deceiving.

Almost a century ago, Virginia Woolf, impressed by Roger Fry’s great post-impressionist show, posited that in 1910 "human character" had fundamentally changed. Still, most people looked and thought much the same. One is reminded of the sosia delusion, a rare mental disorder in which the victim concludes that those near and dear to him have been spirited away and replaced by imposters, who mimic the behavior of those they have displaced.

In 1983 Danto’s friend the German art historian Hans Belting published an essay "Ende der Kunstgeschichte?" Although his manner was less categorical than Danto's, Belting suggested that the collapse of traditional methods of conceiving of art history also said something about the dissolution of art as it had been known.

Trained as a medievalist, Belting advanced another thesis in his book Likeness and Presence (1990). He held that prior to the fifteenth century there had been no art. Any aesthetic qualities that might be discerned in the art-like icons produced before that time were subsumed in the category of the devotional. Art came into existence only when the devotional emphasis yielded to an aesthetic one.

Combining their assertions, we have the Belting-Danto thesis. Art as we know it is a relatively transient convention of Euro-American civilization. It began towards the end of the fifteenth century and concluded in 1984, or thereabouts. Art is not a human universal, but a contingent phenomenon that is socially constructed.

These ideas of beginnings—and consequently of possible conclusions—have been something of a French specialty. A good many years ago, Denis de Rougemont posited that romantic love first appeared in the 12th century with the poetry of the troubadours. After a long run, the new convention of “hooking up” among college students may indicate that the reign of romantic love is over. Philippe Ariès held that the concept of childhood had emerged only in the early modern period (the 16th and 17th centuries). More recently, Michel Foucault held that homosexuality began only in 1869 (when the term was coined). Before that, there were same-sex acts, but no homosexuality. Some queer theorists hold that homosexuality is over now; in its place is a more flexible field in which individuals are free to be sexual, but not homo-, hetero-, or bisexual.

Is there any advantage in continuing to hold that art is finished? The perception may signify that, for the moment, art has become somewhat stale and unexciting. But why should this condition (even if it is correctly described) last? As Pliny the Elder suggested, art may make a comeback. In fact it almost certainly will.

Thursday, December 02, 2004

Atlantic permutations

As a teenager half a century ago I aspired to leave the crass commercialism of America forever. Fleeing our “air-conditioned nightmare,” I would reside in Europe’s venerable arcadia, soaking myself in old-world culture. Three countries competed for my allegiance: Britain, France, and Italy. After sojourns in New York City (which for a while I regarded as a mere starting-off place), I did get to live in Italy for two years and, after an interval, in London for four. Needless to say, these expatriate stays had a chastening effect.

So it was that in 1967 I decided to come back to the USA, as that was where the action was. The civil rights and antiwar movements were well under way; women's rights and gay rights were waiting in the wings. Still I kept going back across the pond on visits to Europe, where most of the art works I taught were located. There was also a Europe of my mind, implanted by my superb teachers in graduate school, almost all of them products of Central Europe. Theirs was the Transatlantic Migration: these scholars had migrated in the reverse direction, and under much more pressing circumstances--the rise of Fascism and Nazism in the context of the world Depression. This migration has few parallels; perhaps the closest is the flight of Byzantine scholars to Italy after the fall of Constantinople in 1453.

However much these demigods put down roots here (and in England too), they could not get Europe out of their system. That system meant proficiency in a number of modern languages, together with a grounding in Greek and Latin. In addition there was an intangible essence, one that cannot be duplicated. Later in these pages I will attempt reconsiderations of some of these figures, including Erich Auerbach, Gerhart Ladner, Karl Mannheim, Erwin Panofsky, and Leo Spitzer.

This historic development led to the emergence of an Atlantic community after World War II. It seemed to me that within the broader horizons there was a kind of hexagon: New York, London, Paris, Rome, Berlin, Amsterdam. Daunting as it sounds, I held that one must keep up with all of them.

Some said, mockingly, that this idea of multipolar Atlanticism was a mere artifact of NATO: we were the children of NATO. There is something to this, as I myself was supported for two years by a Fulbright grant in England.

But first the Soviet Union disappeared; then the US sought to go it alone. We became alienated from “Old Europe.” How can this process of mutual alienation be reversed? This problem merits further thought.

POSTSCRIPT. The disdain felt by my youthful self for my native land may seem hopelessly callow and ungrateful. Yet listen to the litany of deficits Henry James noted as marking the United States in the middle of the 19th century: "No State, in the European sense of the word, and indeed barely a specific national name. No sovereign, no court, no personal loyalty, no aristocracy, no church, no clergy, no army, no diplomatic services, no country gentlemen, no palaces, no castles, nor manors, nor old country-houses, nor parsonages, nor thatched cottages, nor ivied ruins; no cathedrals, nor abbeys, nor little Norman churches, no great Universities nor public schools--no Oxford, nor Eton, nor Harrow; no literature, no novels, no museums, no pictures, no political society, no sporting class--no Epsom, nor Ascot!"

Clearly a good deal has changed since HJ made up this bill of indictment, embroidering on a gentler complaint by Nathaniel Hawthorne. The changes are not all for the better. "No State" sounds almost like paradise.