The last time we see someone becomes the final frame of reference we have of that person. It’s shocking and almost cruel for nature to deny us the truth of one’s appearance only to then spring it upon us years later, unveiling the effects of time. On the other hand, not seeing that person again “preserves” us, as if in a jar, in a preferred state, and we rest knowing that at least in someone’s mind we’re eternally young. We’re both spared the humbling truth.

Time is a trickster that way, since it is a mental construct in the first place. When I watched old films as a child, and again as an adolescent, it was more or less the same phenomenon, only in reverse. The actors were never young but “older.” They were “the first” heroes. They projected strong personalities, knew who they were, where they were going, and where to find the truth.

This is why today watching those same films again presents an emotional dilemma. Suddenly I realize that those same heroes, in the film and in real life, are now younger than me. It alters the calculus of cause and effect, between impressions made then and now. That juxtaposition makes finding the truth messy. To believe in those heroes requires that I regress to my childhood state of mind again, which is actually easy to do – for a moment. But then I can’t avoid realizing that they’re different now, because I’m different. I’m revisiting a naive childhood daydream. Incredibly, a subtle feeling of betrayal overtakes me, of being lied to and tricked at a young impressionable age. These are the same “heroes” but younger and bewilderingly transparent. They’re ciphers, chameleons, and wind-up dolls that change like the wind, given life by screenwriters and scripts.

The only way to return the experience to its pure innocence, to resuscitate those heroes, is to watch the film through that fifty-five-year-old lens. I become fourteen again, and John Wayne, Spencer Tracy, Ronald Colman, Katharine Hepburn, and others are instantly who they were, strong, formidable, and bigger than life. And in fact I often want to be re-inspired that way, especially today, since real heroes, even anti-heroes, are so hard to find.

Sadly, the same legerdemain works its hand in Washington politics. I flip on the TV and watch our “august” experts, the so-called pros filling our three branches of government going about their business. As a child these were always gray-haired, established “men” mostly who had proven their mettle and who stood for the highest principles of justice and liberty. But now most of them were born after 1949 – many after 1960 (some even after 1970). Gradually those personas have re-morphed from dignified and preeminent statesmen into rookie-dilettante-actor-opportunists who are there to learn all “the moves” about legal subterfuge. They have (stunningly) enormous egos and huge personalities, and despite their law degrees and book tours are not much more than players in a huge parlor game. Worse yet, they’ve managed to fool citizens just long enough to get what they really wanted – notoriety, fame, status, egregious salaries, lavish lifestyles, and handsome retirements. They’re “celebs” first and custodians of justice last. The skills they work so hard to perfect are those involving the art of lying (legally) and avoiding criminal accountability. At the end of the day it’s all about them. – By the way, these are now “legitimate careers,” the principles we teach children. Along with Wall Street, it’s the “American Way.”

This is why I have a personal problem whenever I hear the message “we need new faces in Washington, a new generation to meet new challenges.” At one level I understand it and agree. At another level it confirms in my mind just how precariously the world hangs by the thread of gross inexperience, immaturity, ineptitude, and pure guesswork. It’s astonishing that the world has survived this long. When we were young we didn’t trust anyone over thirty. Today I rarely trust anyone under fifty-five.

I look back at my own family – my father, mother, maternal and paternal grandparents; and then my teachers and neighbors I had while growing up. These were people I feared and respected, the voices who planted the most indelible lessons into my consciousness – for better or worse. This perhaps is “the moment” which stuns me the most, stops me in my tracks and fills me with either deep regret and anger or appreciation. – And now they’re all so young.

At 69, I take virtually everything with a jaundiced eye and knee-jerk skepticism (an old man’s normal reactions, I fear). If I’ve learned anything it’s that most “professionals” are people who are simply too young, notwithstanding their telegenic smiles and sales pitches painstakingly learned by rote. They sound and look like androids. They wear their degrees on their foreheads like prairie grouse in mating season, trying to impress us with tremendous knowledge, skill, and cutting-edge technology. But there’s an enormous disconnect between the overture and the aria.

Knowledge is really nothing more than collected information. It means nothing by itself, something they wouldn’t know. What’s important is what they do with it – which becomes a measure of wisdom, a totally different thing from knowledge. It’s like the difference between belief and faith: belief comes from the Middle English lief, which means “to hope for.” To believe something is actually to hope that it’s true. This has nothing to do with faith, which is actually the “letting go” of preferred beliefs. Wisdom is a detachment from, a divestment from, how one hopes things transpire. – It’s a matter of apples and oranges.

One more analogy which separates those unworthy of “believing in”: Too many of us recklessly toss around the phrase “conscious awareness.” But again – “apples and oranges.” One has nothing to do with the other. Awareness comes from the Old English gewær, meaning “to sense danger.” Animals have awareness. It’s where we get the derivative terms “beware” and “wary.” Consciousness comes from the Latin scire cum, meaning “to know with.” To have consciousness means that “we know that we know” – a distinctly more evolved (exclusively “human” is no longer true) endowment.

I personally know lots of people with awareness but very little (if any) consciousness. I have also known people with lots of knowledge and endless academic credentials to prove it, but with absolutely no wisdom. There are PhDs out there who are literally some of the dumbest people on earth. Take them out of their academic element, a world of safe abstractions, lofty formulas and hypotheticals, place them in a world which requires character and decision-making and they’re absolutely lost. Right before your eyes you see them shrink into a kind of thumb-sucking infancy. Alas, these are the types of people who run for office and buck for leadership roles. They think they can do it all with a religious commandment, a mathematical principle, or a general rule of thumb. Genuinely stupid people.

At what (feels like) is becoming the receding end of a lifetime, one long enough to see patterns coming full circle, the whole thing leaves me just a little queasy and disheveled. All things considered, and knowing what I know now about heroes and role models, saints and crooks, the fact is there is no final truth to anything, no purveyors of wisdom, no savior-hero who is going to deliver us from evil. The only truth is my own. I am my own role model, (anti-)hero, fool, and villain.

If you were to ask what I believe the truth to be (politically, philosophically, spiritually) from all the available information “out there,” I would flatly say “I don’t know.” Who’s telling the truth and who’s lying? What are the facts, what are the myths? – I am truly lost. I don’t know who or what to believe. But this somehow magically takes me to another place, a better place, in fact to the fallout usually resting on the flip-side of tragedy and irony. It tells me to extricate from the burlesque of the human theater, to stop leaning on the “knowledge” and “beliefs” from “out there,” and to trust myself. I’m usually headed for trouble when I listen to people. I am my own best teacher, finally, at 69.

We sit on eggshells, powder-kegs, empty slogans, lofty promises, and great melodrama every single day. And then we toss it all up in the air, as if “up for grabs” – in hopes that tomorrow will still be here. It’s because nobody knows anything. We’re all adult-children (some with PhDs) still watching black & white TV, taking our cues from Hollywood heroes.

Maybe this, after all, is the critical lesson about getting old: Nothing is as it seems, nobody has the final answers to anything, no one is smarter than anyone else, and, as the old saying goes, “In Life there are no experienced Sailors.” Never has that expression been more meaningful to me than now.

Just a few thoughts on my birthday.

© 2018 Richard Hiatt




To me, psychologically, the principal difference between the East and West, even Eastern and Western Europe, becomes a question of ego. In “socialist land” children are brought up believing that personal meaning is all about fitting into a larger whole, a larger consciousness. The West conveniently interprets this to mean loss of identity, that one merely becomes an automaton, a cog in a machine’s wheel. – The images we convey to ourselves of factory workers becoming the wheels and gears they operate, of faceless and starving peasants on collective farms, are still amazingly vivid years after the Cold War.

While the West busies itself putting down that whole “collectivism” idea, it now grapples with its own ideological dilemma, of elevating the ego while simultaneously discovering that it must also fit into larger (environmental, social) paradigms. It tries teaching this in its art, its music, and so forth, but it’s a rough go when it comes to changing real consciousness. We’ve always been taught that a strong ego (being special, “rugged individualism,” entrepreneurialism, “standing out”) determines one’s success or failure in the world. We’re taught that “I have to be a somebody” lest I become a “nobody.” The message is black & white. It also explains the epidemic of clinical depression in the West and its virtual absence in the East.

The socialist child finds this all rather puzzling and yet feels no need to announce it, no need to make a public statement about it. (S)He seems to be taught that consciousness is about subtlety, trust, and self-awareness. There’s no comparison when it comes to the maturity and self-discipline of children in Eastern versus Western cultures.

The bottom line here is about personal happiness and fulfillment. Both East and West don their best smiles as if trying to show the other who wins this argument. But the West wins only in the race to advertise it, to show itself off every moment, turning up the volume and announcing “boy are we right – and you’re wrong!!” Even Western Europe finds America’s brashness, volume, arrogance, and swagger so distasteful that it almost goes into hiding and covers its ears in embarrassment for its chief ally. America simply protesteth too much – every day.

I take personal interest whenever I see European artists, politicians, and/or average citizens visiting American talk-shows. They show conspicuous culture shock, brain paralysis, as they drown in a deluge of volume, glitz, music, and celebrity swagger. They keep a “stiff upper lip” and a congenial smile, but the body-language never lies. They’re simply overwhelmed and (especially the British) respond in tones so low that America has to “turn itself down” just to hear them. Then “we” turn around and call them “contained,” reserved, high-brow, intellectual, haughty, and stuck up. We’re like loud drunken sailors on Saturday night physically shaking our one-night stands and yelling at them to “put out” more.

It just reminds me of the terrible dilemma the British had when America showed up at its backdoor during World War II. They needed America to defeat fascism and had no choice but to show unending appreciation and hospitality. But they sure as hell didn’t like us – and understandably so. “We” showed up with our swagger, chewing gum, big-band sounds, and with money to spare (and spend) on their women. All the while their men were stretched to their limits on the German front-lines. Never reported, there were plenty of bar brawls over this. The British shared a mantra: “They’re over-paid, over-sexed, and over here.”

All this is to segue into another big problem unique to the West, specifically America. Individualism (ego), self-importance, personal impressions, etc. also bleed deeply into religion and death. Martyrdom is by no means unique to America. Every culture and government has its pantheon of martyrs. But having them isn’t the issue. The problem is in our confusion over what it is, what to expect from it, hence our interpretations of meaning and purpose.

Not to get ahead of ourselves, but probably the most dramatic divide between East and West concerns the application of literal self-sacrifice – the kamikaze pilot in World War II or the terrorist bomber today who blows himself up in the city square. “We” simply don’t understand it, nor can we allow ourselves to accept it in light of our ideological confusion. So we declare it “abhorrent.” What may contribute to this, though not the central cause, is again the problem of “selfhood.” “We” simply can’t imagine ourselves not existing. In war it’s always the “other guy” who dies. “We” (if not the self, then the soul) must remain eternal, and we write that condition into our morality and religion. This I think is a factor in our disbelief over enemies who crash planes and ignite themselves in public places. We immediately condemn both and place them in categories ranging anywhere from insanity/lunacy/madness to barbarism, radical fundamentalism, anarchy, and simple savagery. Conversely, they see “our hangups” linked directly to self-importance and materialism. Their puzzlement is just as real and strong.

Each culture has its definition of martyrdom, but the West has a singularly difficult problem finding clarity because of its ambiguities and confusion surrounding death, eternity, and sacrifice – over which it simply places the veil of darkness. “We” therefore entertain many forms of martyrdom, mixed and confused. Its recognition is eternally up for debate even today, and it almost always comes down to a matter of politics, media presentation, and lasting impressions. Ultimately it comes down to a simple problem of definition.

In his book Fools, Martyrs, Traitors Lacey Baldwin Smith asks the critical questions: “Are all martyrs heroes and all dead heroes martyrs?” “The trick is to turn death from a conclusion into a beginning, to make it count…. So much depends on the debate over definition, the judgment of history, the cooperation of the executioners, the vagaries of timing and circumstance, and the question of motive.”

Christians for instance, “recognize ‘red’ and ‘white’ martyrs – those who gave their lives as opposed to those who suffered desperately for their faith.” And then there’s the added complication of the link between martyr and traitor which, says Smith, share two common characteristics: “Both are either alienated from or rejected by [society], and both are failures in the eyes of officialdom.” There is therefore the oxymoronic-sounding idea of the “noble traitor.” Martyrdom is an act of defiance or condemnation and hence a unique “display of individuality … sanctified by death.”

In the end there are as many kinds of martyrdom as there are (or have been) martyrs. When we peruse Western history we find that each case more or less set its own standard and meaning. Socrates, Jesus, Joan of Arc, Thomas More, Thomas Becket, John Brown, and Mahatma Gandhi, to name a few, already create a template of many variances. At the end of his book Smith mentions “introverted martyrs tortured with doubt and oppressed with guilt,” “extroverted martyrs, those athletes of faith and warriors of truth, who regarded themselves as instruments of God’s ultimate design,” “conceited and ambitious martyrs who did the right thing for the wrong reason, seeking power, paradise, and self-esteem,” “accidental martyrs who through chance and circumstance stumbled upon death,” “useless and silly martyrs who squandered their lives for a chimera,” “false martyrs and fabricated martyrs, products of society’s need for heroic symbols,” “traitor-martyrs who may be any or all of the above,” and finally the “unrecorded martyrs … whose mark on history is like the air we breathe – there but unobserved.”

To this list (some introverted, some not, mostly “accidental” and/or “unrecorded”) I would add the “reluctant martyr”: those who were perhaps acutely aware of society’s proclivity for stereotypes and its facile rush to find “heroes” – yet who were simply immersed in their work and never meant to die for anything. In this category I would place Oscar Wilde, Emma Goldman, Rosa Parks, Madame Curie, Jackie Robinson, and Rachel Corrie.

A question of “selflessness,” to me, again is what accentuates the prime difference in martyrdom as seen in the West versus the East. Take for instance Thomas Becket as opposed to Rachel Corrie as opposed to Mother Teresa as opposed to Oscar Wilde (I choose Wilde here simply because he’s one of my favorite “victims” of circumstance, time and place).

Becket was a swamp of contradictions, and history is still debating his status as a martyr. A young, intelligent, clever opportunist/lawyer (some say “Becket” was a derisive name meaning “beak-nosed”), he was born amidst the notorious conflict of the “two swords” between church and state. In the 12th century everyone was forced to serve both the pope and king – an idea doomed not to last long. Thomas had the advantage of an education; he schmoozed with aristocrats and landed a lucky job as an accountant for a banker, then in the household of the archbishop, where he learned the art of debate and legal legerdemain from the best lawyers around.

At thirty-two Becket is appointed archdeacon of Canterbury by King Henry II, and he proceeds to turn the “moral” side of the crown into a secularized and bureaucratized office. He is not only efficient at this but, by some accounts, neurotic and “pathologically concerned” with himself. He lives like a prince, extravagantly, with impressive and expensive entourages, while collecting revenues of ungodly sums – most of it channeled into his own appetite for dramatic entrees, pageantry and pomp. From this point on Henry begins regretting the appointment, and a life-long feud begins between himself and Becket. The monarch is furious that his “vassal, friend, and servant” could be so faithless, and Becket hides behind his new position to defend his right of sovereignty over the crown.

Then Becket shocks everyone. He suddenly capitulates to Henry and to the crown’s Constitutions of Clarendon. He imposes strict penance, fasting, and flagellation upon himself. Some say this is a turning point in his life towards martyrdom. He allegedly never forgives himself for his previous sins and indulgences. But at the same time he also refuses many of Henry’s conditions and punishments. He thinks that self-flagellation is enough and that Henry’s added censures and penalties are a direct attack on the Church. He sees himself and the Church as one and the same. He announces himself as the standard-bearer for Christ. Becket is exiled, seen as “Christ’s outlaw” (and later is the first person in history to die both as a martyr and a traitor).

Then, another alleged metamorphosis. Becket chooses to live as an ascetic – in poverty, wearing peasant hair shirts, sleeping on hard wooden beds, and subjecting himself to daily flagellation. He’s trying to purge himself of guilt and humiliation, not to mention carnal desires. But he still refuses to relinquish full authority to the crown. Several attempts are made by Henry to compromise (since he was being taxed elsewhere by war, and the pope himself had other political concerns filling his docket). But every effort to compromise is ruined by Becket’s stipulations putting the Church over the crown. He also refuses to surrender all the monies he collected as lord chancellor. In fact Becket retorts that Henry should pay him instead for all the revenues lost when forced into exile.

Henry even lifts the Constitutions of Clarendon and grants the Church its former liberties, but Becket jinxes it again by requiring Henry to submit to a series of contractual phrases, like the “kiss of peace” and “saving the honor of God.” Henry balks. – In the end Becket withdraws these phrases, Henry once again returns the Church’s previous liberties, and it looks as if the “two swords” might find reconciliation after all. But no specific terms are ever finalized. These are two proud and stubborn men never finding peace or trust between themselves.

Finally, Henry sends four knights to “arrest” Thomas, not to kill him, in the cathedral at Canterbury. But Becket stands his ground refusing to leave. And here, quoting Smith, “lies the rub: did Becket play-act at martyrdom, imitating the early martyrs, refusing escape, provoking his assailants, fearful lest delay should deprive him of his coveted crown, or was the script written for him by biographers determined to turn a man who possessed more than his share of human failings into a spiritual hero?” Hagiographers, says Smith, are “in the business of manufacturing martyrs” and dramatic willful death would make Becket “the most immaculate soldier of Christ.” But here is a man who is “haughty, rapacious, violent and cruel” who “wanted to be more than a king.” When the four knights cut him down in front of the altar for resisting arrest it’s fairly clear that he is dying for personal pride, not for Christ or faith in the Church. All his pronouncements about living as an ascetic are efforts to instill a public image.

The dilemma for Christian martyrdom from then on involves the issue of using immoderate means to achieve “moderation in all things.” Smith says Becket is “self-absorbed” and “lacked the gift of empathy for others; some might say he lacked the gift of charity.” In the years to follow history decides not to be so kind to Becket. His light fades. There are no more “miracles” in his name, his shrine is destroyed, and his death is “degraded to a minor incident in the history of a four-century long struggle between the kingdom of God and the kingdom of men…”

Compare this long-drawn-out story to the 2003 murder/death of a virtually unknown crusader for Palestinian rights in the Gaza Strip. A 23-year-old woman named Rachel Corrie stands in front of an Israeli bulldozer to protest the illegal bulldozing of Palestinian homes. A more horrific death is hardly imaginable; Corrie does not anticipate being crushed, nor does she wish to die for this cause. The official Israeli response is that the driver “did not see her” standing in the way. Appeals to the Israeli courts (the last one in 2015) are systematically denied, including financial restitution (amounting to “one dollar”).

Corrie is not just an “unrecorded” crusader for justice. She is by all indicators a “reluctant” one as well who, it would seem, meets the requirements of martyrdom. She has no designs on notoriety, fame, position, or profiting from her efforts. She only has in mind the rights of the disenfranchised in Gaza on March 16th, 2003. She does not ask for a martyr’s signet and would most likely reject the very idea of one. – But the point here is this: It seems that those who most disavow and reject such affiliations are those most deserving of them. The reverse is just as true for those who, like Becket, “lacked the gift of charity,” were self-absorbed, and hid behind an institution for self-advancement.

Compare Corrie to Agnes Bojaxhiu of Skopje, Macedonia, alias Mother Teresa – so-called defender of “charity for all.” Immediately, something stinks in the support of “everlasting charity.” Charity is supposed to be a temporary fix to help people in need. But when turned into a permanent institution and collecting huge sums of money while doing it, it becomes a crutch for the poor and actually ensures the “everlasting” imbalances of wealth distribution. Many, including even some at the Vatican, thought she wasn’t as much a friend of the poor as she was a friend of poverty. Christopher Hitchens said, “She praised poverty and disease and suffering as gifts from on high, and told people to accept these gifts joyfully. She was adamantly opposed to the only policy that has ever alleviated poverty in any country – that is, the empowerment of women, and the extension of their control over their own fertility.” She once announced that the greatest danger to world peace was abortion and that abortion and contraception were morally equivalent.

Teresa’s own Calcutta clinic was kept primitive, a place for people to “die,” since medical treatment was virtually nonexistent. At the same time, when she became ill, she was flown to a first-class private clinic in California. She befriended people like Charles Keating who was convicted of fraud, racketeering and conspiracy, and many other crooks who donated large sums that were actually stolen from the poor. She held firm at a referendum in Ireland condemning divorce and remarriage. At the same time she gave Princess Diana her blessings for getting divorced since her marriage was “such an unhappy one.” Father O’Connor said, “[A] lot of people in the church will tell you that she was indeed a very difficult woman.” And when the church set her up in a “nice little place” in Washington, she had everything modern stripped out of it “right down to the Formica.” And yet, “nothing but absolute austerity for the poor and the sick.”

The Church used to require two miracles to canonize someone. Now it’s just one. And it confessed to having “fast-tracked” Teresa’s. Traditionally, hearings on sainthood couldn’t be held until five years after the person’s death. But Pope John Paul II wanted to personally announce hers before he died. Hence, the near-frantic search for a miracle. And as luck would have it, they found a young Hindu woman with cancer who had apparently prayed to Teresa – and “miraculously” it was gone. Good enough!! The only stipulation after that was that doctors had to certify the cure was medically inexplicable, leaving only a supernatural explanation. Again, no problem. Lastly, it had to be decided that the cure was directly attributable to her, and that it occurred after she had died. The “after death” scenario is critical because it ensures that there will be no danger of hucksterism, no witchcraft, and above all, no need for proof or disproof of the miracle itself. – “Fast-tracked,” indeed.

Compare Teresa to a real victim of minority abuse and discrimination, who kept his dignity despite Edwardian jailers and late-Victorian piety, vilified as the exemplar of moral decadence. Oscar Wilde’s plays ceased to be performed and his writings fell out of print. Finally released from prison (for homosexuality), he was a broken man, exiled, nearly friendless, and virtually homeless. More than any other I can think of, he was the embodiment of someone who lived a hundred years “before his time” artistically, intellectually, and morally, which meant he suffered the strictures and censures of an acutely ignorant, intolerant, phobic, and hypocritical society. Extroverted, gregarious, hospitable, gracious, and highly sensitive, he lampooned the bourgeoisie and the indulgences of the upper caste (prefiguring Fitzgerald, Dorothy Parker, and others), all the while caricaturing himself as a central player in that milieu.

“The best way of keeping my word is to never give it.” “America is the only place which went from barbarism to decadence without creating a civilization in between.” “If we are always guided by other people’s thoughts, what’s the point of having our own?” “If the skeletons in your closet are going to rattle, they might as well dance.” “There are only two tragedies: one is not getting what you want, the other is getting it.” “We all straddle the abyss. If we don’t look down, how would we ever know who we are?” (all taken from just one play, Lady Windermere’s Fan – made into the film, A Good Woman).

The publishers of The Wit and Wisdom of Oscar Wilde remarked, “[B]ehind the aesthete’s facade and the superbly crafted witticisms, he concealed great warmth of character and generosity of spirit, an insouciant gallantry in adversity, and a profound understanding of human life and human vanities.” Dorothy Parker said, “If, with the literate, I am impelled to try an epigram, I never seek to take the credit; we all assume that Oscar said it.”

To peruse all these styles of martyrdom is to compare states and levels of selflessness – versus – ego. They all indicate vast stretches of gray in between those extremes. To borrow the old Christian metaphor, the “redder” martyrdom gets, the closer it touches the shoals of American ideology. The “whiter” it gets, the further East it drifts – if not literally, then through its consciousness.

Martyrdom could be the perfect lesson to those of us in America in how to at least understand the principle of self-sacrifice as an act of selflessness. To be sure, the terrorist who explodes himself is NOT necessarily selfless; in fact he probably does it to impress Allah and in hopes of great rewards in the afterlife (concubines and camels). Perhaps the kamikaze pilot had the same designs and expectations from his “god-emperor.” Who knows? But the fact is, their sense of self-importance and self-preservation were (and are) in a very different place than ours. The “socialist” individual lives for the commonwealth and sees his happiness in terms of the greater whole. “Individuality” (which is the centerpiece of market capitalist ideology) for him is greedy, reprehensible, and immoral – the very opposite of the words echoed by Gordon Gekko in the film Wall Street, for Americans the most praised, practiced, exalted, and shameful mantra of all: “greed is good.”

For better or worse, the fact is Americans are beginning to discover that the human species can no longer afford to think and live as separate and independent beings. We are not just a social species but a global species inextricably linked to each other. Survival depends on that humbling concession. We have lots to take away from the history of martyrdom, from those deserving and undeserving of that honorific. We’re finding that many so anointed are not so deserving after all, and those unknown, forgotten, and/or unjustly punished probably deserve it more than anyone.

“No man dies for what he knows to be true. Men die for what they want to be true, for what some terror in their hearts tells them is not true.” – Oscar Wilde.

© 2018 Richard Hiatt



Have you noticed that within the past twenty years or so market capitalism has latched onto a new symbol and visual aid: Everything from garden hoses to fly swatters to men’s razors to flashlights to cigarette lighters to on- and off-road vehicles to sunglasses to beef jerky to frozen foods to security systems to hats/trousers/boots/underwear to car wax to sporting gear – has made “all things military” its chief selling point. We don’t even have to mention assault weapons to drive this home. Suffice it to say, if you want to sell it, just camouflage it in khaki-olive drab, make it look like a gun, function like a gun, and tell customers that it’s “protective gear for rugged types on the go!”

During and after the Vietnam War “all things military” as a concept had embarrassed itself because of America’s so-called “first TV war.” We saw the truth behind the rhetoric for the first time (we were 20 years behind Europeans on this) and the “uniform” began suffering a horrible reputation. Military morale was at its lowest point ever. It had a long row to hoe to regain respect. By the early 1980s films were coming out, like Stripes starring Bill Murray and Harold Ramis (of SNL and Ghostbusters fame), showing recruits as a bunch of stupid morons and the Army as fodder for a good lampoon. The film was a success and is still aired albeit in the wee hours of late-night.

By the late 1980s America’s conservative class had collected enough evidence to attempt a comeback. You could almost liken it to a final blitzkrieg through Bastogne. All the imperfections, mistakes, and shortcomings of the 1960s’ student revolution had been shored up and made tactically ready for redeployment by the growing phalanx of neo-cons. People like William Bennett, Norman Podhoretz (his monthly Commentary), and William Kristol (the Weekly Standard) had been fighting liberalism since the 1960s, comparing it to everything from Satan to communism. Ever since then they had set out to stage a right-wing counterrevolution.

Christianity was suffering a similar setback in morale. Evangelicals needed a place to find moral renewal. The 1960s and 70s were a time of “lost souls and devil worship” which, said William Bennett, “should never be allowed to happen again.” They saw the military as a sharp contrast to long-haired hippies strung out on drugs, “decadent” music, and alternative “cultish” religions. They regarded servicemen as a bulwark of “private morality and evangelistic engagement.” They called them the “sons of the Church” and “responsible, moral, Christ-like leaders.” Money started coming into the armed services in the form of childcare centers, facilities for children, and housing for married couples.

The military of course saw the political benefits of this in helping to recharge the image of the common soldier – from “baby killer” to hero. Themes of duty, honor and country quickly became the mantra for both camps (even if “duty” meant something different for the church). It didn’t matter. Both recognized a calling to the most pressing concerns: an evil world needing salvation, too much free speech, dissent, and religious diversity needing restraint, the infinite “unknown” (at any level, within and without) as the devil’s playground, and seeing holy war (an American jihad) as a noble crusade led by Jesus Christ.

Billy Graham was bestowed with the US Military Academy’s Sylvanus Thayer Award in 1972, and military veterans and top brass were invited into Christian camps to give speeches on duty and honor. The sight of a clean, crisp military uniform and Marine “razor-cut” in church (flags and the cross together) evoked the quintessential image of American virtue. A “good soldier” meant a “good Christian,” and vice versa. Graham said “the men and women who believe in duty, honor, and country … have a strong faith in God.” And from there on the march for a newly sanitized Christian-Military image was on.

This became a manifesto of “propositions” for a new neoconservative agenda. The principles were clear and simple: “evil” was incarnate; reason and diplomacy were useless when dealing with hostiles; “armed might” was the only answer to foreign and domestic criminals; civil disobedience (dissent) and terrorism were synonymous terms; America was “rightfully” the only superior world power; America needed a renewed appreciation for “authority”; and decisive action was needed immediately to establish a “new nationalism” (i.e., nativism). – Today, “making America great again” is just a continuation of that mantra, mostly as a rally flag to remind the already indoctrinated of their moral and civic duty.

The operative term here was “authority” (as in authoritarian) – government officials, clergy, parents (the good ol’ despotic father), and law enforcement. Neo-cons loved mixing metaphors like democracy with “traditional values,” force with “heroism,” imperial ambition with “reform,” while nursing oxymora like “moral clarity” and “social cleansing.” The religious right chimed in with cleverly hyphenated watchwords like “pro-life,” “pro-family,” “pro-morality,” and “pro-American.”

By the 1990s a second generation of neo-cons spearheaded by William Kristol picked up the torch. Eventually “the uniform,” regardless of who wore it or for what purpose, demanded fawning genuflections by a humbled and appreciative citizenry. We were witnessing the presence of angels and saints in the military, police force, fire fighters, postal carriers, boy scouts, bus drivers, sewage employees, airplane mechanics, fast-food inspectors, bug exterminators – anyone in a uniform or in vehicles mounted with emergency flashers. Any hint of inveighing against said individuals landed one in an undesirable database, in a category along with “unpatriotic” liberals, welfare druggies, and suspiciously Muslim, non-English speaking kaffiyeh-wearing foreigners. You were suspect and “un-American” – not unlike today if refusing to stand for the National Anthem.

Norman Podhoretz and William Kristol saw utopia (synonymous with democracy) and censorship (synonymous with free press) as critical stepping stones to a “great America.” It was an Orwellian (arguably Spenglerian) dreamscape – “evil and decay” meticulously balanced by essential interventions of Newspeak, doublethink, and a Ministry of Truth. Meanwhile William Bennett along with Rush Limbaugh and other media celebs were freely confessing their “moral right” to move into the top 2 percent tax bracket. They “must have been” divinely chosen for a higher purpose.

This continued throughout the 1990s. And today the offspring of that “second generation” have already been raised in a climate so thick with those injunctions that (from the perspective of where it all started – the 1960s) they live in a virtual boot camp and don’t know it – in styles, popular rhetoric, in the habits of moral, political and religious thinking. We/They are once again in the Land of Reagan, the Citadel on the Hill, a snow-white Disneyland postcard with Hallmark poems, where “God’s chosen” enjoy freedom (and incomes), while everyone else does not. They self-righteously and arrogantly abuse passages like Romans 8:31: “If God is for us, who could be against us?”

Where I currently reside (Colorado Springs) I sometimes sense that the city actually thinks John Wayne and Ronald Reagan are still out on the prairie, east of town, fighting heathens at Fort Apache. Watching Army personnel donning the uniforms of 19th-century cavalrymen on horseback for ritual ceremonies more or less “gives it away.” Colorado Springs is home to three military bases and James Dobson’s Focus on the Family. – Reagan (“the Gipper”) plays taps every night.

The media and Wall Street definitely have not shirked their allegiances to this ideology – nor to mammon. The advertising industry is awash in everything “macho,” patriotic, all-American, hallowed and solemn, forever grateful, and draped in red, white, and blue. The freedoms we used to have (regarding religion, devotions, free expression) have been reduced to jaundiced expectations of solemn tributes to the flag, and fawning, even “awe-struck,” fealty to anyone in uniform. Returning jobs to America doesn’t have as much to do with the economy as it does with fueling a nostalgic/nativist sentiment for “all things American” – an absurdity if you acknowledge that there is little left which is American-made anyway, thanks to outsourcing (a real American invention). Even Harley-Davidson isn’t an American product anymore.

And now Trump is sending America’s message through yet another cleansing – by attempting to resurrect white supremacy, hostility to immigrants (brown people in general), and a strident resistance to the old notions of America as a melting pot of diversity. America is now “whiter” than it’s ever been since before Civil Rights. Whenever Trump, Pence, Kristol, Limbaugh, and others mention “family values” and “tradition,” what they really have in mind (consciously or not) is an America as it was before the term “rights” ever entered the national lexicon. It was a place seen in “black & white” (like on TV), pre-civil rights, pre-women’s rights, pre-children’s rights, pre-gay rights, pre-workers’ rights, pre-non-Christian/pagan rights – where the man owned his castle and everyone else feared his mighty swift hand. Again, “black & white” – when the world was simple and easy (for white middle-class males) and the script for good and evil, right and wrong, could be taken right out of the Scofield Reference Bible – or William Bennett’s The Book of Virtues.

Hollywood has never helped in stemming the problem of racism or sexism (a la Harvey Weinstein). Since the beginning of filmmaking it has cottoned to the role of simply responding to public and political trending. If the nation went “far right” it simply followed along since profit was its sole concern. In fact, during the George W. Bush administration it struck a deal with Washington. Washington knew of the regulations and fees movie companies were subjected to, and it offered a deal: “You wave the flag, and we’ll waive the regs.” And ever since we’ve been bombarded with films and TV shows emphasizing themes of “valor and virtue” of men and women in uniform fighting every war conceivable — on drugs, crime, immigration, foreign and domestic terrorism, family violence, illiteracy, disease, smoking, and pornography. Themes are rubber-stamped. Only the faces change.

A quick inventory since the 1980s makes the point: Law & Order, The FBI Files, 48 Hours, CSI, Cold Case Files, Dog the Bounty Hunter, NCIS, Raising the Bar, Without a Trace, Justice Files, Disorderly Conduct: Video on Patrol, Criminal Minds, JAG, NYPD Blue, Walker, Texas Ranger, Madam Secretary, and on and on. Each season we’re pummeled with new shows with the same themes, with lots of violence but very little substance. And if the reminder of what’s “important” isn’t made clear enough, then COPS is there as a booster shot – a constant reminder of our “daily heroes” (always polite, patient, sensitive, objective, caring, reverent, clean-cut, loyal, courteous, kind, thrifty, friendly and clean) out facing danger in the streets “for you,” martyring it up for “truth, justice, and the American Way.” The culprit caught on screen is of course always a “born loser,” drugged out, degenerate, irresponsible, uneducated, totally unkempt, marginally psychopathic, anti-social, self-absorbed, lost, and abusive.

Which brings us back to the moment. The ironic gift of smart-phone technology is that pedestrian life is now inconveniently “live.” And the truth which belies the “official” truth is being summoned forward. “The law” is having to face its dark side – finally – and facades and careers are falling. The curtain which has concealed a frightening, imposing, imperious Oz is more and more exposing a bumbling side-show buffoon and confidence man. The power of the badge protected by America’s symbols of strength and virtue has shriveled to the level of the street itself and to the rules forced on everyone else.

When I see virtually everything today being promoted by either a military cliché, a Hollywood image of “manhood,” painted in “stars and stripes,” pest-control equipment looking like machine-guns spitting out “bullets” of salt, weed-killer commercials played out like gunslingers from a Clint Eastwood film – even sex sold in khaki-patterned, olive-drab camouflage (rubbers are Trojans, “personal lubricants” are marketed like products for the Indy 500) – I pause and take a deep breath. Then I remind myself just how indoctrinated we’ve become by the “boot camp” world.

Perspective seems irrevocably lost. We have no clue as to what America just might have looked like once not so long ago – as a “civilian” society. We’ve forgotten the ideals most Americans aspired to, for themselves and their children – literacy, open-mindedness, multi-culturalism, egalitarianism, gender-equality, environmental responsibility and stewardship, world peace, anti-criminalization (healing and rehabilitation), mutual trust, spiritual consciousness, compassion, and connectivity in a much larger global community as equals. – Alas, for most Americans today, being “civilian” just means being out of uniform.

Need we remind ourselves that our culture originally aspired not to be eternally “dug in” and unmovable, forever fearing and demonizing unknowns, but to constantly grow like a double helix circling around but always winding upwards? This is a concept which still eludes conservatives. They prefer the circle to connect at both ends, like the ancient ouroboros eating its tail. They take comfort in kissing their own derrieres. At least derrieres are familiar and (relatively) safe.

Lewis Lapham once wrote, “Democracy allies itself with change and proceeds on the assumption that nobody knows enough, that nothing is final, that the old order (whether of men or institutions) will be carried offstage every twenty years. The multiplicity of its voices and forms assumes a ceaseless making and remaking of laws and customs as well as equations and matinee idols.” – Somewhere in that intelligence there’s the eventual need to remind ourselves of who we are, of what we wish to become, and at the very least to downgrade “all things military” and redirect an absurdly inflated military budget to more exalted needs like education, the environment, and healthcare. War readiness is supposed to be about protecting what is most redeemable about us and valuable to us. It’s a means to a greater end, not an end unto itself. It exists so we can live beyond it, to have a “civil” life, as civilians. It’s not there to remind us of evil every day (much of which it creates and funds) in order to keep us eternally afraid and phobic. We are not defined by the profiteers of war and paranoia – gun merchants and military contractors. And yet, as we speak, America’s “number one” export in sales and profits is military weaponry to whomever happens to be the highest bidder. This is currently “who we are.” It’s how the world sees us. It’s actually how we see ourselves, and we’re inured to that image.

“Civilian” is not a word denoting weakness, nor is it just a temporary pause in political time meant to shore up reserves for another war. The uniform is NOT supposed to be America’s normal attire. Nor are its mentality, vocabulary, mindset, and lifestyle. It’s supposed to be a temporary (mental, physical) space, not unlike elected “servants of the people” who are supposed to serve one or two terms and then go home. The military base and boot-camp mentality is supposed to remain at the periphery of that. This is supposed to be how an evolved, industrial society functions.

Many in the military have families, and they obviously take issue with the charge that “civilian” means weakness, also something peripheral and temporary. They will say they return to civilian life all the time. But they really don’t. Just a change of clothes doesn’t make it so. They also miss the point – or actually make it. If asked (while barbequing in the backyard and swinging their kids), they would most likely say they support a strong, domineering, omnipresent, lurking, surveilling, all-powerful, and absurdly over-funded military industrial complex. They also mold their civilian time in the context of “shore leave” and “on call” standby status, even long after retirement from the military. The point is, this is “normal” to them, the way all Americans should live. They know nothing else.

For the most extreme (militant) types, as in Colorado Springs, whatever they do as civilians must first always have a loud military crescendo. Schools, religion, and social events are draped in bunting, anointed in amber waves and purple mountain majesties, pauses of remembrance, strains from the Battle Hymn of the Republic, and/or benedictions of John Wayne riding into the sunset (scenes of John Wayne from Sands of Iwo Jima are popular license plates, not to mention alliterations like “Guns, Guts, and God”). Holy Martyrs and “Onward Christian Soldiers,” one and all.

These are the people who live inside that “circle” whose tail connects at both ends. It goes round and round the same old sandbagged terrain, protecting itself from all things unknown – that is, things/people not yet subdued, exploited, converted, and/or killed (to “save” them). It is the circle of perpetual fear. They lend the impression of mice feverishly spinning on a treadmill.

This all may sound completely exaggerated and blown out of proportion. But I think not. The next time you go out to buy something, chat with a neighbor, listen to the news, drive down the road, attend church, or eat at a drive-in, slow down a moment and just “listen” to what engulfs you. The next toothbrush you buy your child may just have the face of The Duke embossed on its handle. And when pressed by small fingers it will light up and say “Hey there, pilgrim!” – Gotta start those little ones out early, ya know, make sure they know right from wrong, good from evil, that God’s on our side, and the guys in the white hats are always right.

© 2018 Richard Hiatt



Someone once said that you have to be anguished to be a writer. Orwell said no man would write a book unless he was driven by some kind of demon. As a creative inlet-outlet you grapple with the “catechisms of complacency” in everyday life, specifically with status quos. Status quos mean death to artists. Writing earns its mettle when realizing just how looming the walls of resistance are to forward movement, to breaking out of comfortable spaces. And only when it is commercially successful is it given an accolade for opening channels and confronting the human temptation to stay in shallow waters.

I suppose it’s true for any artist but, as a writer, if your only building blocks are words, nothing visual, auditory, gestural, you realize your limitations. Your masonry only builds one type of creation. But that limitation also stimulates the writer to work harder. * Every medium wants to be superior to its fellow media. It wants to stand as that singular effort that cracked the code of the human mystery and broke it wide open. These are covetous, territorial, and jealous entities. The muses would agree.

You keep writing, soon almost obsessive-compulsively, as if trying to find an elusive question – and, once it’s found, laboring over its answer, perhaps forever. The process of writing is always unfinished, and it eventually turns on itself. One’s style tries to transcend “style” as well as easy categories, and only if it keeps up with itself does it tread water. But even then it finds another dilemma: no ground to stand on, and writers feel like victims of their own writing. They circle around a sense of non-being, of death (again), as the muse replaces them. We can write about it, describe it, but when it happens it has no name. It detaches from everything. It invents its own language. The French poet Edmond Jabès said “We will die a bartered death at the feet of our words.” He also said:

With me, the need to make a book is an obsession. But the edifice that you try to create disintegrates. One ends up emptied and almost nonexistent. I think writing can lead to suicide. For one continually finds oneself face to face with oneself, as if one had not said anything. For me, the actual act of talking is like the opening of a wound. The lips open and what will come out? Writing forces you to close your mouth, to be silent and to confront something which wants to exist in place of you and at your expense.

On top of self-immolation, creative writing bears yet another burden: to find truth while not interrupting it with words. But one also needs to exist while not existing. It’s the need to express while “being expressed” upon, to control while being controlled. It causes pleasure and anguish interchangeably. To merely facilitate is a devil’s dance around a beam of light, and to use words for it actually kills the light. But then also to keep it unsaid leaves only darkness. So, of what use are you? Why are you here? Are you to merely witness and then let it go? Is everything already created?

When Arthur Miller wrote Death of a Salesman, he confessed to being totally unprepared for the public’s reaction. He didn’t know what to make of it because, initially, neither did the public. The play had taken on a life of its own to which he was unable to respond. He managed to disappear behind the play and for years said nothing. He couldn’t even tell if he wrote it or not, whether to feel good about it, or not; hence, neither could the audience upon seeing it the first time. It was too close to the bone, particularly for men. “That was me up there,” said many who remained in their seats when the curtain fell. They slumped forward and many wept. There was only silence. It took someone to stand up and applaud to remind everyone that they had just seen something remarkable. And from that moment the applause never ended. It was the perfect modern tragedy.

The play basically wrote itself, but it didn’t. This was that danse macabre Miller had to do for the rest of his life. For most of his later years he stayed away from it unless/until the public demanded its return, which it always did. He was forced to attend countless revival performances realizing over and over again that the mesalliance with his demon had not changed. And so he just let it be what it had to be – until the end of his life.

J.D. Salinger’s life was incredibly similar after The Catcher in the Rye. It was a demon which haunted not just Salinger but notorious types who blamed Holden Caulfield for their “notoriety.” Mark David Chapman had the book stuffed in his pocket while shooting John Lennon. John Hinckley had Caulfield’s words on his lips while explaining his attempt to kill Reagan. Robert Bardo had the book practically in hand when murdering actress Rebecca Schaeffer. Caulfield was (still is) the symbol of youth failing to understand the adult world, calling it “phony,” “screwed up,” and not anything to ever trust.

That was one albatross Salinger had to carry his entire life. Apart from that, the book was actually about his childhood which was trustworthy, but darkened then by age, warfare (vowing to never discuss it), failed relationships and publishing. It was the juxtaposition of both sides, the death of innocence, and nostalgia for the return of innocence. Caulfield’s running away to New York from a military boarding school was Salinger leaving his youth and exploring the adult world through the lens of a 16-year-old projection of himself.

Being accused of catalyzing an entire generation’s rebelliousness was the downside which played itself out the most. But The Catcher in the Rye was actually written in 1951 and caused problems well before the sixties. It then caught its second wind a decade later when “justifications” for rebellion were in very high demand. The book was censored in schools and libraries for its language and Salinger became a target as a troublemaker. And yet the more it was banned the more demand it had among young readers. Not unlike those (mostly men) watching Death of a Salesman the first time, kids heard Caulfield and repeated the mantra, “That’s me!” But Salinger didn’t wait for the 1960s to vanish from public life. He was already disappearing after its publication in the 1950s.

Salinger kept writing and submitting to publishers until 1965 (his last published work appearing in The New Yorker). But time and again editors kept altering his work, changing titles, words and punctuation, sometimes even without his knowledge. Again, the adult world was not to trust. His live-in girlfriend at the time, Joyce Maynard, said “His books were like children who were not safe if you sent them out into the world.” For the rest of his life, though still writing for himself, he chose never to publish again (hating publishers). He allegedly kept 15 unpublished books at his home in a huge bank vault he had air-lifted to his property. He rejected all interviews for the remainder of his life and lived in seclusion. But even in seclusion he struggled with legal battles over unwanted biographies, memoirs, and copyright infringements.

In the end, one book was enough. Everything after that (even Franny and Zooey published in 1961) was derivative of the seminal message of Caulfield’s that the world was “phony.” Caulfield was bigger than Salinger himself, more real, ageless, and eternally available for newer and newer generations of the dispossessed. Salinger had no choice but to step aside and disappear. The author was essentially dead. Salinger was Caulfield, and yet he wasn’t. As earlier mentioned, the creative writer expressed AND was “expressed upon.” He made the supreme sacrifice for an inner child who had an inquisitive mind and a stubborn intolerance for bullshit.

Is death itself a creative act? Is finality a creative effort? Do we actually strive to end striving? Or do we seek a place which simply no longer needs words? Does that mean we, bearers of the word, simply disappear? Is that what the creative process (art) is really about – a kind of death wish? Does it seek its own end time, its own dehumanization?

Camus said this about the political-philosophical works (and “insanity”) of the Marquis de Sade:

Sade died in order to fire the imagination …. But that is not all: His success … can be explained by a dream which he has in common with contemporary sensibility: the demand for total liberty and the dehumanization carried out coldly ….

Sartre was another exemplar of this dilemma. He felt he had become landlocked with his own (French) generation. At the same time he felt estranged from the young post-war generation of the 1950s-60s. It was the quandary of being who he was while attempting to (convincingly) embrace the milieu of Maoist philosophy, New Thought spirituality, and political revolution. At one level he succeeded. But on another he confessed to feeling like a clown, and many of the generation in front of him felt he was no longer relevant. He had a following who practically deified him overnight, and yet every word uttered, every sentence written, was subject to radical self-scrutiny.

In other words, it was the difference between existing as some icon – versus extricating himself from his social status simply in order to live according to his essence, to a rhythm belonging to a different time and sensibility, and as a writer to become invisible (to die) through a time-sensitive alchemy and knowledge – something his youthful followers would never understand.

How strange it must have been, not just for Sartre, but for his comrades (Beauvoir, Merleau-Ponty, Camus, Husserl, Jaspers) who lived and wrote during the war. They carried with them an entirely different, deeply sobering, sensibility while simultaneously experiencing cult-like celebrity status and hero-worship after the war. Not wanting to pop the bubble of youthful ambition and idealism, at the same time they could only write what they knew about dehumanization. There was the creative process naively reaching for sanitized interpretations of meaning; then there was the more personal, intimate, ugly, and deeply wounded depersonalization one could only understand from war and death. It was a literary generation-gap.

It seems that those writers had no alternative but to seek more novelistic ways of dealing with this dilemma. Some succeeded, some did not. Roland Barthes, for instance, was very uncomfortable with himself at the end of his life, feeling he had, quoting Jean-Paul Aron, “enclosed himself in a dead-end system.” He died in a state of despair. Aron had plenty to say about this collision of generations:

I am not against modernity, and I know that each epoch produces its own truth. But I am against these impostures of modernity, that’s to say, these ‘new’ systems of thought, which become like commodities. But it’s true of France. At the moment we have new products, or books, that glut the market, each one destroying the validity of the previous ‘new’ book. This vertiginous production of books has nothing to do with modernity.

Speaking of Aron and Barthes, both delved deeply into the subject of Structuralism. Ever since Saussure’s linguistics in 1916, structuralist theory has been about defending grammatical/syntactic rules, codes, and conventions in order to explain how one “makes sense” of what he reads, of his culture and the world. Structuralism provides explicit interpretations of texts which govern objective meaning. All well and good. Each culture has its own conventions and rules. – But even in this case creative writing is prone to what I already explained above – the “catechisms of complacency” and status quos. It strives to subordinate itself to “the process” so that words (rules) don’t get in the way.

In that sense it transforms into what Structuralists hate most – its evil cousin – Post-Structuralism (taken up in the 1960s) which emphasizes the very ambiguities of meaning, signs and signifiers. The author himself is stripped of his work (as were Miller and Salinger). “The ‘self,’” according to M.H. Abrams, “is declared to be a construct that is itself the product of the workings of the system….” Barthes himself said, “As institution, the author is dead.”

Aron attributed Structuralism’s survival to separate cultures, geography, and the fear of “letting go”:

Nowadays, thank goodness, the French have no time for all that, and the only place where structuralism is taken seriously is in American universities, and a little in English universities….

When the Americans stumbled on semiology, and structuralism, they were delighted, for it offered them a method of thinking which dispensed with the need to live or feel. They were reassured to find a discipline that affirmed that language had an existence of its own, independent from life. I also think that Americans are wary of delving into their feelings, and structuralism gave them the opportunity to opt out.

It should be noted that Barthes eventually abandoned Structuralism. He went for the jouissance (bliss) of multiple codes and rules which said that the reader was the producer of his own meanings. This was the introduction of deconstruction and post-structural theory in the late 1960s.

To sum it up with even more ambiguity, Abrams said this: “Structuralism replaces the author by the reader as the central agency in criticism, but the traditional reader … is replaced by the impersonal activity of ‘reading,’ and what is read is not a work imbued with meanings, but ecriture, writing.” – He could have just as easily said that Structuralism’s focus is strictly impersonal. Rules and codes and conventions impose their own meanings and existence and, echoing Aron, it allows people to “opt out” of their own.

And so round and round we go. And we return to the question, Is death a creative act? Is writing all about the ultimate disappearance of the writer? Is it a suicide, a wish to be done writing? Or is it the opposite: a need to find new horizons as a raison d’être?

A writer’s “anguish” is indeed part seducer, part executioner. It’s the hero’s journey he’s fated to take knowing the only way out is “through.” It reminds me of the ancient Duat of the Pharaohs, a journey into the pyramid – first downward (with steps) before hearing a “mighty noise” above and finding the “Mountain of the Ascent” (Ra). There he finds himself sailing towards Aten, the “Imperishable Star.” He’s now in the boat (Eye of Horus) ready for his celestial trip across the sky. – Six steps down, six steps up. His quest is for oblivion.

Sparing us any New Age “garnish, tinsel, and frippery,” the point is simply that writing is an odyssey of persecutions, deaths and resurrections. It’s the messiah in search of an executioner, the Rood Beam and caduceus. Yet he never truly dies. He sails with quill and parchment in hand to witness the human story, his own story.

© 2018 Richard Hiatt

*A point not lost on writers during the Occupation. In his And the Show Went On: Cultural Life in Nazi-Occupied Paris, Alan Riding said, “Of all the French artists forced to live under Nazi rule, it was inevitable that writers should take the clearest stances – and assume the greatest risks…. France’s writers had long presumed a right to opine on politics and, particularly since the 1930s, the public had grown used to hearing them hold forth.”



Have you ever watched gravel descend from a hill and fall into an alluvial fan, or sifted it as if sifting for gold? The large pebbles remain topside while the finer grains filter below. Every dimension and weight has its appointed place in a beautifully crafted vertical hierarchy authored by nature and gravity.

Pulling all human events together, at every level, in every form imaginable, I think this is what our species is doing, or trying to do – sifting out a newly refined sense of Self. It’s trying to separate wheat from chaff, garbage and hypocrisy from a more refined (silty) “essence.” The manner in which we tilt our sifters calibrates and instructs the exact degree of chaos and suffering we craft for ourselves. What survives, what distills out of the rubble of hard rock is who and what we are.

There’s an enormous alembic process going on in this manner inside the human community. Some use the metaphor of multiple “publics” beginning to crystallize – a cultural reference which has, since Diogenes, grappled with the dilemmas of separateness and alienation, of participation and nonparticipation. A public is whatever one wishes to call a gathering of people with a context and a purpose. It could be strictly “impersonal” or, as Arendt put it, “the world itself, insofar as it is common to all ….”

In the context of the prevailing (political, economic, environmental) winds today, in the gross inequities of wealth and poverty, suffering and egregious privilege, “publics” aren’t just idling by and watching the show. They seem to be pursuing a meaning outside (and within) themselves. They’re constantly stripping themselves down in order to niche themselves into a larger, undefinable matrix while sifting out old debris. They’re discovering, first, that what concerns us about “the other” is precisely our own otherness; “the other” is the outline to someone else’s inline, the ground to a particular figure; secondly, that communication/language is not rooted in different cultures but in nature.

The content of language isn’t what’s important. What’s important is that we’re simply, instinctively, desperately, reaching out to one another with a language seemingly devoid of content, as we usually think of content. The language is primordial, silent, cellular. We’re learning that our separateness is an evasion of sorts, a prevarication, a tragic distortion of who we are. We’re learning that, when separate, we are incapable of the truth. – Nature’s irony here is that legitimate separateness can only be attained by everyone all at once. Being of a single mind is the only truth.

Put yet another way, civilization is unconsciously pursuing what Zygmunt Bauman calls “liquid modernity” (his book’s title). Let’s begin with an analogy: A public presentation or theme brings people of all kinds together under a roof, into a theater for a common purpose. Disparate interests, passions, and beliefs during the day are, as it were, “checked at the door.” For two hours all eyes are set on one stage, or one screen. And almost magically, everyone laughs and cries together, shares the same silence and ultimately the same emotions of approval and disapproval.

After the performance everyone revisits the cloakroom and picks up their respective personas, beliefs, passions, and idiosyncrasies “dissolving into the variegated crowd filling the city streets from which they emerged a few hours earlier.” – An unmentioned presupposition is planted here: that “art” is what effectively dissolves all human differences and inspires us in concert to seek something higher, something richly bonding, something actually forgotten and recognized again.

Bauman offers another premise before proceeding: the place most fitting to witness and study this “liquidity” is the city – “where strangers are likely to meet in their capacity [as] strangers…. [a place which] befits strangers.” It’s where, quoting Richard Sennett, “Wearing a mask is the essence of civility.” It’s an agreement to mutually withdraw from the “true self.” “Images of communal solidarity are forged in order that men can avoid dealing with each other,” says Sennett. The brief and meaningless encounter (interaction, dialogue), to be seen and not heard (at length), is the primary rule.

From here (and putting family and religious groups aside) Bauman says there are two kinds of public spaces: one is where “nothing mitigates.” It wears a “monotonous emptiness.” It’s the kind of space meant to be temporary and which discourages staying. It’s strictly pedestrian (terminals, bus stops, depots, public restrooms). The second kind discourages the same kinds of prolonged interaction but encourages frequent visits nonetheless – what George Ritzer calls the “temples of consumption” (concert halls, stadiums, restaurants, shopping malls). Both spaces deal with the “otherness of others” in their own ways.

There is one more space mentioned which is slightly more abstruse and difficult to notice: “non-places.” These are the “unseen” places “empty of meaning” because they exist only in our minds. They are basically “emotional” memories. These are anonymous hotel rooms, airports, public spaces which most people prefer not to remember – but do. They are obviously different for everyone, yet we recollect basically the same smells, shapes, and sounds. What they do share in common is not having to “negotiate differences… because there is no one to negotiate with.” Users have no need to “earmark them” because they leave only a “ghostly presence.” – I personally liken them to the pauses, black holes, and otherwise silent voids in between the sounds and images of meaning. We all have them. City residents have a rich cache of these particular (daily) memories.

The only reason I include this third space of Bauman’s here is that it may be a spectral “non-place” we’re all attempting to bring forward in some way – to make real. It has a function, and I think society is, albeit unconsciously, attempting to delineate, amplify, and legitimize those surreal voids. We want to reconcile them with the two other spaces mentioned above. We may be attempting to transpose them, in the manner of a figure-ground reversal, camera obscura, shadow and light – as a new space for community replete with its own instructive wisdom.

Within this resides a deep psychology – that of ferreting out the intricacies of “belonging” while establishing the absence of differences. Bauman mentions two strategies introduced by Levi-Strauss: one is anthropoemic (that of segregation, separating good from evil, right from wrong, barring contact and dialogue). In this atmosphere “belonging” is exclusionary. The other is anthropophagic and is inclusionary (integration, assimilation, mutual recognition). “If the first strategy was aimed at the exile or annihilation of the others, the second was aimed at the suspension or annihilation of their otherness.”

The city requires a kind of civility – “the ability to interact with strangers without holding their strangeness against them.” But at the same time, it learns quickly that distance breeds isolation, and as Sennett again points out, “cries for law and order are greatest when the communities are most isolated,” when people/groups are “cut off.” This lesson is learned by all income, political and social groups simultaneously. No one is excluded. It’s one experience they all share in common. What they all experience after that is what might be called the “human equation” – a desire to move to a better place, despite the rules of income, class, race, and ideology.

Many city-dwellers may vehemently deny this, but urbanites learn faster than ruralites – simply because they must. The mere fact of living so close together, sharing such limited space, forces an interdisciplinary sharpness which is a city hallmark – called adaptation. They must adapt. They must be vigilant, wary, and able to negotiate with the unexpected every day. All this goes away, is alien and unwelcome, in the rural community. People get intellectually and emotionally lazy, or should I say “unresponsive,” in the country.

Bauman doesn’t address it directly but alludes to another point as well with regard to Levi-Strauss (above). There exists a “response to the existential uncertainty rooted in the new fragility or fluidity of social bonds.” Later on he describes it as a “lightness of being.” “Time-distance … is shrinking … [they’ve] lost much of their meaning.” “There are only ‘moments’ – points without dimensions.” Perhaps without knowing, we’re approaching that third “non-space,” unseen and empty, and trying to fill it with something transcendentally human.

Much later on again, when discussing “security at a price” he says there must always be a balance between “freedom and security.” But in the end what humans may be striving for in that empty space (in those voids and pauses) is the knowledge that “freedom and security may grow together… each may grow only if growing together with the other.” – This, I feel, is what a nascent community is striving for without knowing it. The prime motivator for this is nothing but desperation, the desire to survive, at a moment when our species could in fact end.

Another way of saying this is in terms of “unity versus difference.” Our unity is our differences, much like, in a true democracy, unum is our pluribus, and vice versa. “The most promising kind of unity is one which is achieved … by confrontation, debate, negotiation, and compromise between values, preferences and chosen ways of life … of many and different … members of the polis.”

In his Afterthought, Bauman says “Society is truly autonomous once it knows … that there are no ‘assured’ meanings, that it lives on the surface of chaos, that it itself is a chaos seeking a form, but a form that is never fixed once for all.” To this I would add the reverse – a form seeking chaos (by way of formlessness, new beginnings). Our new modernity is “liquid” in just this manner.

This is a new existential moment for our community, both locally and globally. It’s taking on the nature and habits of laws historically rejected; that is, those of hologrammatic instinct/memory, self-recognition in “otherness,” and disparate values becoming human values. As Cicero said, “We were born to unite … with the human race.” It seems that what we want more than anything is “the other” to validate ourselves. This is our social inheritance, our communal nature. The rules and structures of separate publics are juggling the new imaginings of emptiness, time-space voids, freedom and security, unity and difference, in ways which are finally breaking through to a higher dimension. — The rocks are breaking up and sifting into more refined forms.

I see this happening most of all because of global attrition, of exhaustion, for trying to remain separate (and special) for too long. I see the world as tired out. I see the earth as “spent” for trying to provide and sustain us. I see all human and earthly resources exhausted much like the aftermath of world war. I see the human family ready to put down their shields and just wanting to help, to belong, to live together (despite our “installed” leaders). The city has become the world, the world the individual, everyone unique yet one body-mind.

No one can explain it or fully understand it, but that no longer matters. We are where we are. Maybe we don’t understand it because it’s still too big for us to get our minds around. It’s telling us to join a much larger social consciousness (much like the massive student demonstrations against assault weapons). It’s a fascinating overture, but it’s also an ultimatum, because we have no choice. We’re out of time.

© 2018 Richard Hiatt



A traveler is visiting a place he knew long ago and comes across a familiar scent. He asks a local what it is. The native says, “It’s jasmine.” He then reflects, “So I had known it all those years! To me it had been a word in a book, a word to play with, something removed from the dull vegetation I knew…. But the word and the flower had been separate in my mind for too long. They did not come together.” (V.S. Naipaul).

There’s a lesson I have to keep learning: You can never go back to something as remembered. First, it never was as you think it was in the first place. It was always oblivious to your needs and desires, and, like the universe itself, had absolutely no investment in your projection of it. It had, as it were, “its own will” irrespective of, unrelated to, yours. Secondly, time has changed you as well. And either consciously or subconsciously it has altered those desires, needs, and anticipations, not to mention your general impression of everything. – Both realities just illustrate the truth about linear time; that everything is in constant flux, and, paraphrasing Heraclitus, “you can’t put your foot in the same river twice.”

What happens then is a violation of sorts, an unfair imposition placed upon the place in question. It’s forced to become a fantasyland, a mimicry of itself, parody, caricature, and a cartoonish simplification. It becomes theater, a stage for one’s fantasies, and a testing ground to prove that one’s memory is in fact real. And since it is accosted by countless memories (hence expectations) it must play the role of chameleon within its own parameters. The Rocky Mountains for instance (and the “Old West” associated with them) make a general historical impression, but as a “market” they must please everyone at the same time.

Alas, in most cases, perhaps because of too many expectations, perhaps of its own “will” irrespective of those expectations, the visitor is invariably disappointed, even let down and betrayed – especially if his expectation is intractable and firm. And for myself, if going away to some place is meant to resurrect some specific (nostalgic) feeling or memory, it makes sense that I’m setting myself up for a big letdown.

It’s a question worth pondering, especially in a place like the American West which prides itself, markets itself, identifies itself, with the theme of “preservation” and always being as it once was. “Nothing’s changed. Come and revisit the past and relive the Old West.” This essentially becomes a contract in the business of feeling and memory, between visitor and place, an agreement to transport the present backwards in time, into an atmosphere minus all the impertinences and scales of modernity. Which makes it an emotional breach of contract when it fails, protected only by small-print loopholes of psychology and marketing.

Upon one’s arrival, of course, everything is not as promised, not even remotely. His first encounter is a phalanx of forces sure to ruin the mystique of what’s allegedly “preserved” – crowds, lines, noise, all the intrusions of modernity (cellphones, radios), not to mention the often confusing and colliding expectations brought up by everyone else. A kind of mental gridlock takes over, and emotions run high. They imbricate, collide, blend, interfere, and sometimes even supplement each other. A cacophony of sound – white noise – overwhelms everything, which includes sugar-tripping toddlers having tantrums.

But the point is this: The onus of disappointment is placed on us to change it, to reinvent the moment, to make it interesting again despite the nightmare ambuscade of “people.” The place/source which made the original overture absolves itself of any and all outcomes. It knows that we traveled all this way and want to feel the price of admission was worth it. But the fact is, the experience is completely different from what was presented, and one realizes almost instantly that he’s been had: there is no past, nothing to revisit, even if the physical geography is original and genuine.

The most disturbing reminder of this, at this “violent” intersection, is witnessing what is colloquially known as the “buzz kill” – someone doing something which violates the plan, something totally out of context and oblivious to place and time: a parent punishing his child, a lover’s quarrel, a child vomiting on the sidewalk, a dog getting run over, a senior’s group singing hymns, people talking on smartphones, wide-screen TVs tuned in to football games, concession stands selling hot-dogs and commemorative hats. Meanwhile, you’re trying to recollect the reason you came here in the first place – something based on a “promise and a vision” to escape the very things now confronting you. You stand there motionless, silent, forced to rationalize and compromise your sensibilities with theirs. You must also go “above and beyond” your natural reaction and be “sociable,” show courtesy and calm diplomacy. Depression sets in.

Everything, even the so-called original buildings and streets, is nothing more than movie props, picture postcard scenes that never existed in the first place, at least in “the Old West.” Any so-called expectation other than what plays in this multiplex theater is again your burden. The whole thing is reduced to a kind of pornography: one pays for a show, and whatever fantasy he garners from it is his own doing. It’s up to him. But he never “gets laid.” He leaves disappointed, let down, and the concierge asks, “well, what did you expect?!”

Which is the question: What did we expect? It’s a self-inflicted wound. It’s a mutually contradicting Catch-22: the promise/guarantee of a preserved past, a frozen moment, and a satiated feeling, while at the same time requiring a “reboot” (by us) to fit it into a different lens entirely – of commercialism, politics and protocols. It’s the present tense pasted over by sepia tones and phony promises – computer-enhanced, airbrushed, sanitized, “government-inspected, mother approved.” The projection room flips the switch and the show begins.

I dare say that the whole Rocky Mountain experience has become a fenced-in amusement park, in many places without the fences, but still a replica of a replica – spectral and vacuous. Real time is replaced by the Kodak Moment. He doesn’t see the Rocky Mountains; he sees the “Rocky Mountains!” He doesn’t see the West; he sees “the West!” – both framed by a TV and movie screen. In fact it’s not “real” until presented two-dimensionally, safely removed by the illusion of distance and time. The point is to reduce the whole thing to the safety and comfort of one’s living room – with junk food within reach.

The “experience” as marketed is the wet dream of developers, investors, and the lovers of privatization. At the expense of wildlife habitat and sensitive ecosystems, it’s been turned into a playground for monied interests, rendered safe for kids and small dogs, just a step away from boutiques, eateries, and gas stations. And most of all (most importantly for “Americans”) a day’s ticket promises safety from the worst of all fears – boredom. The mountains have been made into one huge smorgasbord of entertainment for the whole family. “You’re all welcome. You’re like family here. Just bring your pocketbooks. We’ll furnish the balloons and ice cream.”

If going places to escape into “the future,” into unknowns, is one’s goal, that’s one thing. He then has a better than fifty-fifty chance of finding his experience at least entertaining. But if it means going the opposite direction, a “return” to something allegedly preserved, he’s doomed to the depths of his naivete. And again, it begs the question: Why would one even want to visit a marketed, commercialized, environmentally ruined, and usually over-crowded (spoiled) fantasy, where he’s expected to settle for enactments and imitations – a sufficient enough insult? Why not just relegate everything “past” to a Disneyland tent-show or a Hollywood movie set?

The replica has replaced the terrain, the postcard the experience, the menu the food, and America now maintains a comic reverence for invented traditions – folklore, fairy tale, and even history. Everything is derivative, remixed, nothing uncopied. The world is Baudrillard’s “simulacra: imitations without originals.” The “originals” themselves died long ago, tossed into a trash heap of irrelevance because they’re boring and made no money. John Ford’s famous Hollywood quote: “When the legend becomes fact, print the legend.” – Americans love fairy tales, fables, myths, and melodrama. Historical fact and authenticity are usually dull and monotonous, so embellishment, exaggeration, and even total fabrication (when needed) take over to meet the requisitions of “entertainment.”

The bottom line seems to be this: anything “revisited” is an oxymoron. If the idea is to renew or revitalize the present by retrieving some chosen memory locked away in one’s mind, it invariably fails. The product ends up with nothing of relevance to say because it’s nearly always based on an artificial past. Still, we keep doing it. It’s also why modern tragedy seems consistently “more tragic” than ancient tragedy: because modern tragedy is founded on lies, propaganda, and fabrications the likes of which we’ve never seen. Modern war and its victims are the perfect example.

In the end, finally arriving back home from the trip, one doesn’t reach for explanations. He just feels regret and exhaustion. He’s learned yet one more thing he vows not to do again. It’s one less fantasy that never had any chance of recrudescing into something real. His atavisms are concocted and self-inflicted, hence self-deserved, and he hopefully knows it by now. The whole thing just reflected his desperate need for something different, something real and substantive (for once), and for escape from (mostly stupid, violent, and half-made) worlds making and remaking themselves every day.

Naipaul’s cryptomnesic experience with jasmine is, quoting Edward Said, “like reading Wordsworth without ever having seen a daffodil.” It crystallizes the alienation of memory and wish-fulfillment from things that never were. But this is our choice. The alternative would be actually visiting where jasmine grows and submitting oneself to that experience, fated again to inevitably wear down our expectations. – Better to savor (preserve) the daffodil in the safety of one’s imagination and keep it there.

Every day I see people with their boats, skis, ATVs, and motor-homes – all the accoutrements required to live out some fantasy promised them for the price of a ticket, registration, or membership. And they labor so incredibly hard just to meet that wish fulfillment halfway – just “halfway.” It’s exhausting just to watch. I always wonder if they feel satisfied. And if they were to ever answer the question, I’d wonder if they were cognizant of their own beguilement. Why do people watch pornography?

I therefore end up in the same place every time – home, in front of a book, or in my backyard. “Traveling” has become a decision sandwiched between a physical place and a metaphor. And I opt more and more for the metaphor. It’s not just a cheaper and quieter travel plan; I can journey to my heart’s content in the safety of my thoughts. And most importantly, the Rocky Mountains, though limited mostly to distant views (through binoculars) and vague remembrances, never disappoint. In my mind, “when the legend becomes fact, [I] print the legend.” It’s much more fun and rewarding. In those now extremely rare cases when I do go to the mountains, I know now to leave all expectations behind. I expect the unexpected, and usually get it.

© 2018 Richard Hiatt



I’m currently captured by a painting by Raphael Kirchner, entitled Harlequin (1916). It’s not just a harlequin but a harlequin hoisting a woman on his shoulder. In fact, the harlequin’s face isn’t even seen. It’s concealed behind the woman’s smile and body, which is the center of attention. He’s the silent, invisible, monochromatic masculine to the prismatic feminine – either the naively serenading romantic (or the unscrupulous Lothario) to the virgin countess (or the alluring siren and temptress).

The harlequin’s history precedes him. In the visual arts, poetry, fiction, stage and screen he’s an archetype, born in the 16th century. He starts out as a stock character in pantomime theater, in commedia dell’arte. At times he’s Pierrot, the sad clown who is jilted by his lover, Columbina (Colombine). Sometimes he’s Pedrolino, the comic servant standing in the wings giving advice to confused and lost friends. At other times he’s Harlequin himself stealing Pierrot’s lover.

He shows up everywhere in Europe and eventually America. He’s the subject of paintings by the Romantics, Symbolists, and Modernists. He’s a stock character in all the fine arts of Spain, Germany, Denmark, England, France, and Italy. He’s probably most famous as the alter-ego in the plays of Shakespeare. And the most recent and well-known stage performance is/was probably that given by David Bowie.

In Kirchner’s depiction, he appears to be “Pierrot’s Love” taken by Harlequin. But to me, other energies fill this painting as well. The harlequin merely supports here. He remains invisible yet invaluably strong, the foundation to a smiling, genial, youthful coquette. She has a particular focus and interest in mind, and it remains unknown whether he knows or even cares. His focus is to simply anchor, to ensure her balance in whatever her furtive eyes convey to the observer.

Colombine dons an outfit familiar to (belonging to?) his own closet, and he responds with joyful approval by lifting her up, as if celebrating an unexpected bond. Whether she wears it for him, for someone else, or simply for herself is her secret. She holds a mask in her left hand, briefly removed to show herself, ready to reemploy in a calculated masquerade – or – is it the harlequin’s mask she holds to help him along? The red shoes befit the times – designed for bourgeois parlor games yet priced for proletarian hard work – stylish but familiar to the streets.

Kirchner was born alongside the fin de siecle – he in 1876, the turn of the century in 1888 (when it really began) – literally twelve years apart, but emotionally/aesthetically at the same time. Little is known about him, except that he grew up witnessing the times: the emergent stressors of industrialization, mass-production, urban migration, consumerism, and mass-marketing. The fin de siecle was more a rushed confluence, an imbrication of past and future. It phased in just as much as it phased out – as do most all movements, eras, periods, and epochs.

Kirchner’s work in art nouveau coincided with the excitement of that transitioning moment in time, a fledgling period of new and strange energies. One such energy was “modernism” itself, a phenomenon whose official beginning is still contested today. Consider: Virginia Woolf claimed it was ushered in with the discovery of “self-consciousness” in the arts – in 1910 when “all human relations shifted.” If this is true, then modernism could very well be traced to Flaubert, the “first self-conscious novelist” (who worried about everything). He also heralded the concept of “abstraction” in literature.

Others say modernism began with the death of King Edward and the first Post-Impressionist Exhibition. D.H. Lawrence said the old world ended in 1915 (with the new war) along with Yeats, Joyce, Pound, Gide, Valery, Mann, Proust, and others. Others point to just after the war – 1922, the year of Ulysses and The Waste Land, Brecht’s first play, Woolf’s Jacob’s Room, Proust’s Sodom and Gomorrah, and others. While still others claim it found itself in the inter-war period when modern psychology, African sculpture, American detective stories, Russian music, and German innovation all converged.

Flaubert aside, four other writers – Woolf, Eliot, Forster, and Lawrence – looked at each other in 1922 and realized they had moved the tectonic plates in literature. They were then followed by the “lost generation” of mostly American writers and painters in Paris – Hemingway, MacLeish, Stein, Cummings, Fitzgerald, dos Passos, Crane, and others who took self-consciousness to various logical (self-destructive) collisions.

Kirchner felt the wave of self-consciousness expressed through absurd creations, random methods, parody, fiction, aleatory (“chance”) art, surrealism, erotica, and “new” psychology. He came away with the distinct impression that history’s transitions were, putting it mildly, very messy.

Enter art nouveau right in the middle of all this – the fin de siecle AND modernism – with its whiplash curves and undulations, flames, waves, and flowing hair of (almost exclusively) female figures and purely decorative motifs. Harlequin generates a carnival-like ambiance, but one immersed at intervals in dark and macabre themes. This painting is a dance of dualities: On the surface, a celebratory smile and two animated spirits; underneath, something hidden, concealed, secret, and dark (deeply self-conscious). This is the harlequin’s signature.

Harlequin is a burlesque of curtain calls, pantomimes, and marionettes. He is no one in particular and “someone” only for a moment. A juggler of energies and ideas (balancing the feminine), he simultaneously escapes all trappings and entanglements while flying through public spaces. He plays with “presentations” and then discards them. Everything is temporary, colorful but translucent.

But he is also about attempted harmonies (again, the feminine), exquisite conundrums, abstruse arabesques, and simply weird serendipities. His constructions are about the unfurling of things, mysteries and movements, meanderings, zigzags, and most of all constant de-centering. In this sense, while exuding self-consciousness, he also challenges it. He plays with the circuitry of the imagination which clusters together in words and ideas – capriccio – always pointing to margins. Eternally poetic, never vulnerable to “targets” and “traps,” because he dances around them, eluding specificity, apportionment, detail, the hard and factual. His “tactics” reside in having none. He “wins” over the negative, the cold and hard personality, by using a wardrobe that rotates on a wheel. He’s the chameleon, the fool and the trickster… and we still don’t see his face.

Therefore, to me, Kirchner’s Harlequin is much more than just a woman and a clown, or even a stealer of women from a jilted lover. It’s about time and place, the warnings/misgivings of a new century – risk-taking, achievements and progressions tempered by unparalleled and unprecedented suffering to come. These figures sense the ominous approach of history. They move through the air with brave faces while concealing apprehension. He steps lively but gingerly, cautiously. His hands grasp her tightly and he looks before he steps.

The doffed mask could just as well announce the new fin de siecle into the 21st century, where nothing is secret anymore. Everything revealed and alarmingly exposed, the smile becomes nervous, still seductive and inviting, but afraid. She doesn’t know the face(s) she’s looking at (that is, “us” — are we the new Pierrot?) and clings to the danse gitane beneath her. He promises to be there, to support her, but he still doesn’t show himself. He feels her apprehension and is afraid while having to be strong. Her ground begins to shake as he stumbles into uncharted, unstable spaces. The animated and blithe spirit, the confident and bold humor of a century ago, is now tested, and one wagers as to which century, which emotional universe, will prevail over the other.

These are the faces we see today, donning the past, facing the present, looking wearily into the future, while wearing the veneer of confidence. The modern “ship of fools” carries either the harlequinade of knowing, the masquerade of enlightened theater (marqueed for tonight’s instruction and performance) – or – the mask of human tragedy, the dark and demonic concealed in a clown’s costume (with rippling coulrophobia). The unknown is the direction our hero is taking us – stage left. His eyes are our eyes. He navigates as we navigate. He (and we) must “balance” all our information and hope we don’t trip on a misplaced step along the catwalk.

© 2018 Richard Hiatt