It is a common and age-old task for fiction to imagine life in the future. Although the phrase “the future” immediately calls to mind modern works of science fiction, narratives that imagine alternative worlds don’t belong solely to that genre or this time. More broadly, works of future fiction have been around since before the novel itself. Thomas More’s Utopia (1516) and Tommaso Campanella’s City of the Sun (1602) are two pioneers in future writing and were written before the modern novel came into being — and these forerunners follow the model devised by Plato in The Republic (around 380 BC). Since then, writers and philosophers have written numerous works of future fiction, some of which have even been outpaced by the dogged sands of time. Though almost none of the predictions in these works have come to pass, we continue to read them and write new books that speculate about tomorrow and beyond — Gary Shteyngart’s screaming, heartfelt satire, Super Sad True Love Story (2010), comes to mind. While visions of the future have run the gamut from Big Brother to Big Otter, one thing remains abidingly clear: writing about the future is an enduring pursuit of universal interest. Thus, when well executed, future fiction is literature in the sublime.
It’s interesting how much credence we give to works about the future considering the epistemic fact that the future is impossible to predict. The skeptic David Hume makes a convincing argument that we do not know if the sun will rise tomorrow, let alone the organization of society one, ten, or one hundred years from now. And yet, after reading the words “future fiction” in the above paragraph, I’m sure the names of other works that do just that have sprung into your head: Brave New World, Anthem, Nineteen Eighty-Four, Fahrenheit 451… the list is long. Despite the senselessness of the task, it seems we cannot resist imagining the future through alternative worlds. Maybe this is proof that we are romantic beings, or hopeless fools — personally, I think it’s likely that we’re both. At any rate, we do read books about the future; this much is clear. We read them, study them, enshrine them, canonize them, and teach them in our schools. I don’t think it was unique to my education that I read all the books in the short list above before graduating from high school — though I suppose this should come as no surprise. Future fiction always furnishes us with a new perspective on the present. In this regard, it is by its very nature instructional.
What are we to make, then, of the U.S. National Intelligence Council’s (NIC) Global Trends Report, “Global Trends 2030: Alternative Worlds”? Published in December, the report “identifies key drivers and developments likely to shape world events a couple of decades into the future.” In other words, the report tries, like so many works of fiction, to extrapolate current trends to predict the future. The NIC publishes the report every four years, after the president is elected but before inauguration day. Its purpose is twofold: to inform policy makers, namely the president, of the current state of the world and to show them where — it appears — things are headed so that they can make informed policy decisions. The NIC’s website states that the last edition of the report, Global Trends 2025, was read and consulted by leaders worldwide.
The NIC bases these Global Trends reports on information gathered by the U.S. intelligence network, think tanks, independent research laboratories, the opinions of scientists, experts in industry, academics, NASA, Silicon Valley moguls, and foreign collaborators, then has the world’s most powerful consulting firm create data models based on the findings. In short, they use the finest data, methods, and minds available. Period. We may call these 24-karat predictions because I suspect they cost as much to produce. The predictions in the first published report, however, turned out to be very wrong.
In an article for The Atlantic, Joshua Foust reminded us that the 1997 report predicted
…by 2010, North Korea would be transformed into a normal state and tensions on the peninsula would be eliminated; the western world would see unending 2-percent growth in personal income; and precision weapons would make conflicts smaller and less costly.
Obviously, none of this happened. Instead, something resembling the opposite state of affairs obtained. How should we respond? The fact that the greatest minds of our age, furnished with all the information gathered by the greatest intelligence network of all time and outfitted with the most advanced predictive modeling systems ever created, cannot infer much — if anything — about what will happen in the future should be a sobering reminder of the limits of human knowledge. Hume is laughing from his tomb on Calton Hill. And two words — Financial crisis!! — should remind you that this is hardly news. Next question: can we blame the NIC for the report’s inaccuracies? No — by their own admission, the report is speculative: “We do not seek to predict the future.” It is worth noting, however, that they have taken down the links to Global Trends 2010, published in 1997. I can only guess as to the reasons, but I imagine it’s because world leaders would use the newest report for a doormat if they knew how far past predictions have strayed from reality.
What’s especially interesting about Global Trends 2030 is that it culminates in a section called “Alternative Worlds.” After 107 pages of snazzy infographics on globalized workflows, widespread aging, and the advantages and disadvantages of a multipolar global economy, the experts depict four possible futures using “scenario narratives.” The introduction to the Alternative Worlds section states:
…we have fictionalized the scenario narratives to encourage all of us to think more creatively about the future. We have intentionally built in discontinuities, which will have a huge impact in inflecting otherwise straight linear projections of known trends. We hope that a better understanding of the dynamics, potential inflection points, and possible surprises will better equip decision makers to avoid the traps and enhance possible opportunities for positive developments.
Consider the line between fact and fiction officially blurred. On the page, the report fictionalizes the four worlds in brief written accounts complete with fabricated back-stories. One is a paper given at a Davos meeting, another, an address by a noted archaeologist. My favorite scenario, called “Gini-Out-of-the-Bottle,” comes in a paper that the “2028 Editor of the New Marxist Review” selected — after “sifting through piles” of “thousands of submissions” — as the winner of an essay competition held in honor of Marx’s 210th birthday. Titled “Marx Updated for the 21st Century,” the essay is printed on hammer-and-pick stationery (no joke).
Ignore, for a moment, the anachronism of someone sifting through thousands of paper essay submissions in the year 2030, and absorb the chimera that is “Alternative Worlds.” This is scary and this is cool. A government agency is handing the president a report in which they’ve “fictionalized” the future and they expect him to make policy decisions with their projections in mind. Our tax dollars are literally creating speculative fiction. And while the authors of these scenarios could learn a lot about character development from an intro to creative writing class, what they’ve written is future fiction. Global Trends 2030 shows all the major characteristics that I’ve identified — imagined empires, utopian and dystopian societies, an instructional purpose, widespread readership and appreciation despite past failures; it’s all there.
I’ve struggled while writing this post to decide whether Global Trends 2030 should be treated as a farce or as some proof that fiction about the future has tangible value. On the one hand, I want to laugh when the NIC states that they’ve disrupted their own projections with “discontinuities” to encourage us to think nonlinearly about the future (so you took your best guess, formulated by experts and supercomputers, and arbitrarily messed it up to remind us that shit happens?). On the other hand, the fact that a government agency tasked with advising policy makers about the future chose to use fictional scenarios — rather than CG-augmented “dramatic representations” of cable television fame — to help readers process 100+ pages of analytic reasoning and data tables makes a powerful argument for the value of fiction and fantasy alike. I’ll leave you to draw your own conclusions about the report. I will say, however, that fiction does seem particularly suited to portray the future. As Churchill famously said, “The empires of the future are the empires of the mind,” and what are works of fiction if not empires of the mind built with ink and paper?
From another perspective, the Global Trends Report is merely one symptom of larger societal mores. To put it bluntly: we are obsessed with the future. Earlier, I said that writing about the future was an enduring pursuit of universal interest. This seems to be even truer in the wake of the Global Trends Reports. And while early works of future fiction prove that we’ve always been interested in the future, soon after the Industrial Revolution, this interest became a fascination. Allow me to identify some Global Trends: the acceleration of technological progress that began during the Industrial Revolution—and which has only hastened since the advent of the microchip, the personal computer, and the Internet—has enabled man to see significant change during his lifetime. Concurrently, global conflict during the World Wars spread terror and destruction and evinced the fragility of life. It is only natural, then, for us to wonder what will come after we are gone because we now realize just how exciting or terrible the future can be and just how quickly the fate of mankind can change.
If the first half of the 20th century roused our concern for tomorrow, the postmodern era has steered that concern toward mania. 24-hour news networks now provide updates in real time on unfolding crises and events, while satellites, our watchtowers in low Earth orbit, remind us that we are each a small part of global change. We are constantly told that we live in dire times or that we are standing on the precipice of some new catastrophe. It makes sense, then, that we now pay futurists — experts whose sole job is to predict the future — to tell us about tomorrow and have even founded Future Studies programs in universities. The soothsayers of Shakespeare’s day are alive and well, and have traded their staves for PowerPoint clickers.
In the world of literature, this blooming passion for the future can be seen in the proliferation of science fiction, a once distinct genre that now bleeds into everything. The New Yorker’s June 2012 issue was devoted, for the first time ever, solely to science fiction; David Foster Wallace’s too-popular Infinite Jest (1996) takes place in a near future with an advanced entertainment network that bears an eerie resemblance to Netflix, a service that would become popular five years later; and author George Saunders continues to write literary fiction that takes place in some postmodern twilight zone that can be described as, if nothing else, futuristic-ish. Furthermore, we continue to revere authors for their prescience — sometimes, only after it’s validated. In his September 2012 essay in the Los Angeles Review of Books, Cornel Bonca wrote on Don DeLillo’s Cosmopolis (2003), a book that was largely dismissed by critics after its publication but which has since, as Bonca explains, gained credit for its grim depiction of a wayward multi-billionaire asset manager who drives through Manhattan to get a haircut only to be impeded by anti-capitalist riots, prefiguring the 2007 economic downturn and the Occupy movement. This vindication compels Bonca to retroactively declare: “Don DeLillo has once again taken on the mantle of artist-prophet.”
Now, let me ask a novel question: if future fiction is the literary indigo of our age, what will the future of fiction hold? Will novelists like Gary Shteyngart become futurists while futurists like Michio Kaku become novelists (Kaku’s “nonfiction” book Physics of the Future (2011) concludes, like the Global Trends Report, with a fictitious vision of the future: “A Day In The Life In 2100”) in some sort of literary singularity? Already, this conflation has occurred for writer/futurist Arthur C. Clarke, author of 2001: A Space Odyssey (1968), among others. I don’t have answers to these questions — though if given four years, a platoon of data-table-waving experts, and a supercomputer, I’m sure I could scrape something together. Through all of this, though, one thing remains certain: writing about the future will have an important future in our lives, as it has had an important past.
James David Lamon is a liberal arts grad now finding himself, professionally. He lives and writes in Austin. Follow him on Twitter @JamesDavidLamon.
Published in 1989 by Doubleday Canada, Whale Music by Paul Quarrington sets up an age-old problem: artistic autonomy versus the interests of the marketplace. The novel examines the life of musical savant Desmond Howell, who reflects on both his once commercially successful rock’n’roll career with The Howl Brothers and its fallout, made explicit through Desmond’s discussion of his frequent substance abuse, his failed marriage, the untimely death of his brother and bandmate Danny, and his reclusive withdrawal into his seaside mansion.
Desmond dedicates his time to the composition of what he believes is his magnum opus, the titular “Whale Music.” But Desmond’s record executives say that he isn’t producing commercial, and therefore profitable, music. As a result, the tension between Desmond’s desire to attain cultural capital and the execs’ desire for economic capital becomes untenable, particularly as repeated visits from outside individuals (save his Torontonian houseguest Claire) increasingly intrude upon the autonomous creative universe that he desperately wants to maintain. Desmond acknowledges this disparity between his desires and those of the execs in the novel’s opening pages:
I must work on the Whale Music. The Whale Music is very important to me. It’s the only thing that’s important to me. Don’t try to stop me from working on it like…countless record executives have tried to do. The record execs say the Whale Music isn’t commercial. I say it’s not my fault if whales don’t have any money.
Using a comic tone to communicate rather tragic circumstances — for instance, his labors in music (over which he has only partial control) are Desmond’s sole means of fulfillment — the novel evokes a style akin to that of John Kennedy Toole. And like Toole’s A Confederacy of Dunces, which won the Pulitzer in 1981, Whale Music also received a major literary award, Canada’s prestigious Governor General’s Award, in 1989, placing Quarrington in the company of former winners Alice Munro and Margaret Atwood. Five years later, in 1994, the novel was adapted into a feature-length film, with Quarrington on board as the film’s screenwriter. The film was nominated for Best Picture and Best Adaptation at the 1994 Genie Awards, with Maury Chaykin, who played Desmond Howell, taking home the award for Best Actor.
However, instead of building on its initial success, Whale Music has fallen somewhat by the literary wayside. Once deemed “the greatest rock’n’roll novel ever written” by Penthouse, the novel is now rarely mentioned by literary scholars and commentators. While some of Quarrington’s other works, such as King Leary, which is often called “the great hockey novel,” have received considerable attention from critics, writing on Whale Music rarely ventures beyond succinct summaries or reviews. These tend to focus more on illuminating the novel’s plot structure than its critical or social function. (One of the longest summaries comes from Quarrington’s website.)
For example, Brian Busby’s Character Parts: Who’s Really Who in CanLit (2004) gives Whale Music some genuinely useful scholarly attention, but even this is brief: Busby includes a short, two-paragraph description of the parallel between Desmond Howell and the real-life individual upon whom he is allegedly based, Brian Wilson of The Beach Boys. Many other reviews or interviews with Quarrington center on his battle with lung cancer, the disease that, sadly, ended his life in early 2010.
It is a shame that Whale Music is not widely read today. It is humorous and poignant and offers an original insight into the life of an artist and how art is produced. It also beautifully captures the first-person subjectivity of the artist at a time of personal crisis. The protagonist himself is incredibly endearing, someone whom the reader cannot help but like.
However, these very strengths may have led to the novel’s current marginality. Quill & Quire writer Scott MacDonald, for instance, argues that using comedy to communicate sad, troublesome ideas does not sell well in Canada. And so it goes, he suggests, that Whale Music has significantly faded from view. And it seems unsurprising that a Canadian novel that does not do well in its own literary marketplace will not be a success across the border.
On a grander scale, it appears that the literary marketplace has not been particularly welcoming to narratives about fictional rock’n’roll stars in the last several decades. Rather, a plethora of musician autobiographies and biographies have emerged — such as the critically acclaimed Just Kids by Patti Smith (winner of the 2010 National Book Award), Life by Keith Richards, Robin D. G. Kelley’s Thelonious Monk, and Pete Townshend’s Who I Am, to name a few — that seem to fully satiate the reading public’s appetite for an insider’s look into the life experience of musicians. Some recent novels, such as Dana Spiotta’s Stone Arabia, do indeed feature a fictional rocker, but that novel is as focused on the musician’s familial circumstances as it is on his professional ones.
In a way, this movement away from fictional rock’n’roll characters parallels the essence of Desmond’s plight. Just as there isn’t as much of a commercial space for Desmond’s highly technical, abstract “Whale Music” (much of which Desmond discusses using music theory terminology), the space within the literary marketplace for fictional rock’n’roll narratives has given way to the highly profitable nonfiction rock’n’roll narratives. What’s more, these nonfiction narratives seem to mainly focus on providing a more personal account of how an individual came to make music (e.g., through backstories, childhood experience, and the like), rather than focusing on the structure and production of the music itself. Exceptions certainly emerge, however, such as David Byrne’s How Music Works, which was recently featured in a great piece written for this blog.
None of this is to say that such a shift in focus is a mistake or that there is no artistic merit in memoirs. But I fear that peripheral novels like Whale Music will eventually slip into full-fledged obscurity should they remain unmentioned. The reading public would certainly be at a loss should such a wonderful work as Whale Music, in the words of Mr. Townshend, fade away.
The Rockaways, NY
You have a choice to make, and your choices are infinite. Does this inspire or terrify you? The desirability of having greater choice is hard to argue with. If you get to choose whom to marry, instead of being promised to a husband when you’re barely of age, you get to utilize the full range of your human capacities for love and empathy. Even more quotidian decisions, like choosing among Baskin-Robbins’ 31 flavors, are improved with greater choice. If you innately love rocky road more than any other ice-cream flavor, a rocky road scoop will give you more utility than a simplified decision between chocolate and vanilla ever could. Here, the positive returns to more choice seem obvious.
More utility is better, declaims traditional economics. If “utility” is social science’s fancy word for happiness, then you might have to be insane to want fewer choices, less utility, and depressed happiness. So economists assume every Homo sapiens is also a Homo economicus, a rational decision maker whose only end is maximizing her utility. Utility and the rationality of actors are so foundational to modern economics that any economist questioning their legitimacy is automatically considered a pariah.
Orthodox economists have within their treatises constructed armies of these rational decision makers who, in my imagining, march lockstep down the aisles of your local Walmart, mechanically exchanging a swipe of their credit card for merchandise whenever price is lower than their would-be utility. I do not begrudge economists their simplifying assumptions. The world is a big, messy place, and it is not for economists to capture all the nuance. For that, man had to invent poetry and, later, the novel. And if our world were ruled by an equal number of economists and poets, then the magic through which economists distill billions of humans’ desires and intents into indifference curves and utility-possibility frontiers would be no problem.
If our policymakers could cite Homer and Austen as easily as they do Milton Friedman, no one would need worry about the dehumanizing effects of collapsing the machinery of human desires into several equations. In such a world, every cost-benefit analysis of a new regulation would come with a literature review section that cites Tolstoy, Shakespeare, and Shelley. If we were allowed to consult experts on the human condition when making policy decisions affecting the world in which humans live, we wouldn’t have to fret so much about the danger of treating America’s 350 million individuals as if they all ran on the same decision-making, utility-maximizing software. But ours is not that world, and so the metrics by which we judge public policy come to be about totalling up utility, about costs in dollars and benefits as they accrue to GDP.
Economists have sold us on their vision of a world where many choices exist, and the role of public policy is to solve for the choice that keeps inflation in check or grows GDP or insert the favorable performance of your favorite economic indicator here. Cass Sunstein, until this past August head of the White House agency most responsible for judging regulations, wrote that “in an executive order issued in January 2011, the administration doubled down on cost-benefit analysis…. Obama made an unprecedented commitment to quantification of both costs and benefits.” Where Mr. Sunstein’s congratulatory Bloomberg editorial was titled “The Stunning Triumph of Cost-Benefit Analysis,” I think the rest of us should be a little skeptical.
Allowing public policy to be exclusively the domain of economists is dangerous. In assuming that choice is good, more choice is better, and that there’s only one way of finding the best outcome (that is, quantifying costs and benefits till the end of time), we lose something essential in our decision making.
Take, for example, climate change. For an empathetic human watching the wiping out of New Orleans, Staten Island, and Haiti, it’s obvious that only two choices exist: either we spend the billions necessary to heal our planet and protect our towns from the inevitable battering to come, or we dawdle over our spreadsheets and figures until the entire world is drowned and there are no economists left to run the numbers. Economists do serve a purpose, and their methods can lead us to more or less cost-effective fixes. The danger comes, however, when we adopt the habit of infinitely quantifying and cost-comparing — useful when writing a budget — and gloss it onto our entire world. Maybe there are 1,000 valid ways of financing the search for renewable energy, but we don’t have 1,000 choices about whether or not to act on climate change. Mr. Sunstein’s spreadsheets may account for the lost future productivity from the thousands of lives lost in natural disasters, but I’d wager that they have no cells for entering in the loss and guilt we as humans feel when watching entire neighborhoods, cities and nations violently torn apart and massacred in ways that we might have prevented.
Human desires and intents are more complicated than what can be captured by quantifying costs and benefits. And sometimes our real choices are much simpler than the dozens of counter-factuals an economist might want to run to find the optimal course of action. It’s a dream to think of the powers that be allowing some dissenting non-economist voices in our public policy debates, but if ever they do, those voices might remind us of the full humanity of the “public” in public policy.
My past is nothing if not littered with embarrassing stories of how I’ve made a fool of myself trying to emulate various artists and fictional personas. There was the time my teenage hopes of becoming an all-singing, all-dancing Bohemian, à la Jonathan Larson’s Rent, were dashed after discovering there were seemingly more Starbucks than destitute artists in the East Village; the time I shellacked my eyelids with Elmer’s glitter glue to look more like David Bowie and gave myself a stye; or the night I tried to convince myself I liked gimlets after reading Raymond Chandler’s The Long Goodbye, only to end the evening staring down at a lime green-tinged bowl of toilet water. But despite all these transformative failures, I still find myself completely unable (or perhaps, unwilling) to relinquish the not-so-sporadic desire to make myself up into one specific cultural icon:
And before you ask, no, I do not mean Anna Karenina, the tragic Tolstoy heroine most recently portrayed onscreen by a lip-quivering Keira Knightley (and written about relentlessly by Full Stop).
I mean Anna Karina, the French-Danish actress, singer, and novelist, primarily known for her leading performances in many films of the French New Wave, and also for her rocky romance with director Jean-Luc Godard. Now, I realize that French films of the ’60s are not everybody’s cup of tea (or black coffee mixed with ennui and stale smoke, as the case may be), but I’m still always a bit shocked at how overlooked Anna is as a popular style icon and screen presence. More often than not, the mention of her name draws either blank stares or Tolstoy-induced confusion. Never mind that Zooey Deschanel as we know her would be a mere pile of twee fabric scraps and false eyelashes without Anna’s influence.
Celebrity chef Gizzi Erskine once referred to Anna Karina as “a dirty version of Audrey Hepburn,” a description I consider to be delightfully apt. After all, both Anna and Audrey were cinema stars of the doe-eyed, gamine variety, but while most of Audrey’s characters at least aspired to be society girl-types with slim cigarette holders and lovely, couture frocks, the majority of Anna’s characters were nearly always slumming it — making ends meet by working as striptease artists, or living their lives on the run from strange Algerian gunmen. And yet, somehow, even with the tousled hair and the cheap cigarettes, Anna always managed to look every bit as dazzling as Audrey, if not more so, simply by virtue of the fact that she knew how to match a gun with a pleated skirt, and would probably recite Marx with superb accuracy if you asked nicely and offered her a light.
I suspect the nature of the roles Anna played in her heyday is a large part of the reason why her status as an international screen symbol is less cemented than I feel it ought to be. True, much of it can also be attributed to the imbalance between Hollywood cinema and foreign arthouse fare, but even if that weren’t a contributing factor, it still remains likely that most young female spectators would prefer to mentally align themselves with a moped-riding princess than with a tragic political dissident — something which is a shame, not only because it works against Anna getting the recognition she deserves, but also because revolutionism is just so much sexier than aristocratism.
Thankfully, the call for Anna to get more recognition outside of French film-loving circles isn’t limited to my rantings and dress-up habits alone. Just last month, Seattle-based hip-hop duo Blue Scholars released the video for their song “Anna Karina”, a musical homage declaring Anna’s rightful place in the pantheon of cinematic icons. The lyrics proclaim:
Now whatever way she talks
heads turn away in a shade of dark
In a way, many say she was made for art
If the world is a stage then she played her part
Nobody knows it yet, but no disrespect
’cause you deserve much more than this
And all through it all, f*ck them all
it’s your life to live, ‘cause you’re a star
It’s a very fitting tribute for an actress whose image gets used to showcase the accomplishments of those behind the camera far more often than it’s ever used to honor her own iconic significance.
So, despite all of my comically ill-fated attempts to embody the art and artists I love, I plan to keep my fingers crossed and hold fast to my collection of all things Anna Karina-inspired. I’m keenly aware that, to most people, my cache of vintage dresses, plaid skirts, and oversized wool sweaters probably looks more like a horrible mashup of life-sized Blythe Doll outfits and second-hand Catholic school uniforms, but to me, it’s a very small way of celebrating Anna. Because it’s my life to live, and in it, Anna’s a star.
This semester, my film class assigned several essays by André Bazin, including “Adaptation, or the Cinema as Digest,” which argues that film versions of novels should aim for what he calls equivalence in meaning of the forms. The very best adaptation manages to “transform the voltage of the novel,” converting its energy into a power that crackles over the lens. Yet even middling vulgarizations of great literary works do not bring some great evil into the world. Lite, abridged movie versions could serve as a gateway to the original novels.
I could apply what I learned to my reaction to Anna Karenina. It would be the virtuous thing to do. But it’s hard, in the days (nay, hours) following finals, to fit his essay over the framework of extra-curricular joys. My more analytical neural clusters have split, maybe for good, from my pleasure receptors. The only thing I brought with me into the theater was the memory of the movies I’d loved more than the texts from which they sprang, like The Princess Bride and The Lord of the Rings. Films like that make the worlds seem so much bigger on the inside than they had looked from the outside.
After what felt like a particularly long and sludgy week, a few friends and I (we who live on the wild side) said, “Fuck it, we’re going to see Anna Karenina.” We shoved aside the papers and course packets and ran off to the movies. I enjoy a Joe Wright literary adaptation, but one of my friends declared his movies too “sleepy” for her. They seem so sleek with prestige, so stamped and Oscar-ready. But a sleepy, soaped-up costume drama was all we wanted. We wanted to go to a nice big dark room and settle our eyes on something pretty, something that went down easy.
Seven years ago, Wright made Pride and Prejudice. Running a slim 127 minutes to the 1995 BBC miniseries’ 300 minutes, his adaptation was lighter, crisper, and—some argued—freer. Those older versions were pale, stiff, indoor sorts of movies, with low budgets and a set they were chained to. Wright’s Pride and Prejudice opened itself to the outdoors, to vistas from high cliffs, to pigs lumbering through the Bennet household. Lizzie and Darcy were shoved into the rain for their pivotal quarrel, were drawn into a sunlit meadow for their reconciliation; even their final scene of domestic bliss takes place on a porch. Exterior shots brought naturalism, and with naturalism came a renewed sense of romance. There is something about dappled sunshine and Keira Knightley’s dimples that pairs together sweetly. It’s a dreaminess, even a fizziness, totally foreign to Austen. It drove some people up a wall. It went straight for my heart.
But the descriptor “atmospheric” would prove to be a double-edged sword with the release of Atonement two years later. Response to the film was tepid, and what drew the most criticism was the very cinematic sensibility which had warmed people to Wright’s previous work. Remember the long shots of Dunkirk at sunrise, the extravagant beauty of the war-torn French countryside? It was a gloss totally at odds with the horror of war. It was nice to look at, but strangely deadening. (The one spark of life was the imagined reunion, when Cecilia—Knightley again, thinner, more severe—whispers to Robbie, “Come back to me.”) Atonement was somehow too beautiful to fully realize the psychological depth of the original novel. Exterior had been substituted for interior, rather than expressing it. The movie just didn’t think hard enough.
With a screenplay by Tom Stoppard, Anna Karenina couldn’t help but be a more cerebral movie. It’s nowhere near as monumental as Tolstoy’s novel, but would we want such a thing to try and drag itself across the screen? Stoppard takes whichever elements please him and uses them to assemble a fascinating Fabergé egg of a movie, an object to marvel at. Compressions in time express themselves in impossible spaces. Anna Karenina is as tightly wound as an old-fashioned clock, as carefully constructed as a music box. Its gears are the relentless churn of the train wheels—and I really do mean relentless, the doom starts early and re-surfaces often—its inner compartments continually unfold to reveal themselves: the stairway from the princess’ ballroom leads to Karenin’s study; Levin throws open the doors of the palace and strides out into his beloved estate. Spinning in the center is Knightley, reincarnated as a Russian, looking positively skeletal and bizarrely young. When she hugs her son to her, she seems like a favorite older sister, not like a mother. Her caprices are baffling and illegible. (Sidelined, wasted, stands Michelle Dockery: she is a thinking sort of woman, someone to whom love could only come unexpectedly. The Anna of my heart.)
This film version of Anna Karenina, written by a playwright, is fascinated by the theatricality of Russian society, of nobles who hold out their arms as servants whisk around, dressing them. It plays with the idea of performance and audience, public and private. The film opens with a red curtain rising onto the scene. Needless to say, it does not have a straight-shooting kind of script, and plays a few more tricks than Pride and Prejudice or Atonement ever dared to attempt. Yet this film, like Wright’s others, continues to grapple with the old problems of exteriors and interiors: the sumptuous and clever private rooms are not balanced with the pastoral estates. Not enough attention is paid to Levin and his forays into the fields to make them feel substantive, and Joe Wright is not exactly a purveyor of grit. After a long and exhausting day, Levin falls, perfectly arranged, onto a bale of hay to sleep.
The closing image of the film is a strange one, and seems perfectly emblematic of Wright’s problem of outside and inside: the Karenin children play in a field, as their father looks on. The grass is Technicolor lush, spreading across the floor of the theater we began the film in. None of us knew what to do with this moment, this image of domesticity which seemed so antithetical to the bureaucratic Alexei, the sunshine streaming through the wood, the over-bright artifice. It was a strange cross between the cerebral and the pleasurable, an overgrowth that still felt too neat and too contained.
As someone who delights in the minutiae of literary culture, I have been following quite intently the controversy surrounding Dalkey Archive Press and the bizarre job requisition posted to its website last month. And let me just say, my delight has begun to feel morbid.
A quick recap: in a search for new interns, Dalkey Archive Press, a nonprofit book publisher currently based out of Illinois, made public a job posting that included outlandishly brutish requirements of unpaid interns-to-be, such as a stipulation that they “not have any other commitments (personal or professional) that will interfere with their work at the Press (family obligations, writing, involvement with other organizations, degrees to be finished, holidays to be taken, weddings to attend in Rio, etc.).” (The posting, no longer viewable on the Press’s website, was republished here by The Stranger.) The backlash was swift and damning. In a statement made to The Irish Times, John O’Brien, founder of Dalkey Archive Press, defended the job requisition as “a modest proposal,” in reference to Jonathan Swift’s infamous satirical essay.
Not everyone accepts O’Brien’s explanation, and with good reason. To address O’Brien’s questionable defense, I would like to invoke Poe’s Law: in the absence of clear indicators, satire can be exceedingly difficult to distinguish from the real McCoy. While it doesn’t take any great benefit of the doubt to grant that the ad was intended to be satirical, I nonetheless find it difficult to differentiate satire from sincerity on a line-item basis. I suspect O’Brien himself would have a hard go of drawing such distinctions. I suspect a good deal of overlap between the sincere and the jesting. Or, as my partner put it the other day: “John O’Brien appears to be an exploitative domineering boss posing as an exploitative domineering boss.”
The thing about “A Modest Proposal” is that it was not, in fact, a proposal at all. Fact: Swift never intended for anyone to stew, roast, bake, or boil a single Irish child. The job posting in question, however, was still — wait for it! — an actual job posting. In his statement to The Irish Times, O’Brien himself admits that the advertisement was both “serious and not-serious at one and the same time.” So whether or not the tone of the advertisement was tongue-in-cheek is, ultimately, incidental to the fact that the content was more or less sincere. With that in mind, the posting’s “satire” begins to look sardonic, and, given the very harsh realities faced by interns that there is neither time nor space to expound upon here, O’Brien must excuse our stoicism: the butt of the joke is rarely wont to laugh along.
My intent is not to malign O’Brien or Dalkey Archive Press, which has a strong track record of discovering new writers, preventing literary classics from falling out of print, and otherwise championing literature in all the valiant ways that independent literary presses valiantly champion literature. (I am particularly enamored of the press’s special focus on forgotten, overlooked, and challenging experimental literature, its practice of keeping all five-hundred-plus titles in its massive back catalogue in continuous print, and, in general, its mission of making a business of publishing books the for-profit publishing industry deems unprofitable.)
Indie and nonprofit presses are labors of love, and while O’Brien certainly acted tactlessly and may or may not be an exploitative domineering boss (and I have no objection to the lampooning of any of that), it feels disingenuous to portray O’Brien or Dalkey Archive Press as engines of capitalist exploitation. Anyone who has ever staffed — or, for that matter, published in or otherwise contributed to — a literary publication knows that remuneration is generally not expected. (Full disclosure: I am writing this blog pro bono!)
In fact, my original aim was simply to call for Dalkey Archive Press to retract the advertisement, which they did well before this piece went to press, about a week or two after the job requisition first went up. I had also hoped that O’Brien would issue an apology for being so gauche, which he has (sort of) done in a new statement to The Spectator. I, for one, would have been willing to forgive him the gaffe of tone-deafness.
And yet, skimming the various bits of press coverage of the incident, especially the heated debates in the comments sections, reveals that discussions between interested parties concern not etiquette or clarity but the very real and troubling reliance of the publishing, media, and other creative industries upon unpaid interns.
With regard to the treatment of interns, O’Brien’s most recent statement smacks of entrenchment. He admits to being clumsy in the ad, but then goes on to defend not just the chosen rhetoric of the job posting — the thin veil of “satire” now cast by the wayside — but the odious perception of interns as necessary burdens borne by more experienced staff. Trading in disparaging stereotypes, O’Brien implies that there is a strong case for not paying interns that goes beyond the harsh economic realities facing nonprofit presses, that interns come to the Press with no useful skills and take three full years to develop the necessary abilities, and other such nonsense. O’Brien’s comments provide further proof that the job posting was far more serious than not and, while certainly clumsy and lacking tact, was undoubtedly engineered to convey the low status of Press interns to potential candidates. This purported clarification, like the job posting itself, gives at least — at least — the impression of disdain.
The clarification does what a clarification is supposed to do though, and all of this is now laid out for us in transparent and unencumbered form, making clear all the ways in which O’Brien misprizes his interns and would-be interns, or rather, his would-have-been interns. Dalkey Archive Press, in an act that feels a little like ex post facto justification and a little like misguided reprisal, has suspended the intern program, perhaps permanently, which is really just a lose-lose shame.
Many have declared the last few years of television a “renaissance”, with shows like Mad Men, Breaking Bad, and The Wire widely acclaimed and praised as literary, usually accompanied by the term Dickensian. While Dickens and other pre-television authors did publish works of fiction episodically in mass-media print publications as a form of popular entertainment, the assertion that a TV series may be “read” as literary narrative hinges on the perception of these shows as highbrow realism. Last year, Salman Rushdie told The Telegraph much the same: unlike mainstream movie-writing, American television is not “dumbed down” to entertain mass audiences but written with “the kind of control over plot and characterization previously enjoyed only by novelists.” Even recent college courses have taken television series as their primary subject matter, encouraging students to approach their favorite shows with the same critical rigor they would bring to a classic novel. The implicit contrast is with reality TV, but a reality TV show featuring competitive author performances could flip that construction on its head.
In order to film their Kickstarter-funded two-episode pilot, the creators of the television show Literary Death Match had to overcome the perception that literature is primarily sober in tone and complex in presentation, incompatible with scripted unreality television, and therefore untenable as popular entertainment. The series, created by Todd Zuniga, Elizabeth Koch, and Dennis DiClaudio in 2006, takes place in diverse locations around the world and features fiction writers trying to out-read each other with their own material before a panel of judges. Novelist Jonathan Lethem, comedian Tig Notaro, musician Moby, and actor Michael C. Hall all played referee in Hollywood last week outside a real boxing ring, where writers duked it out for high scores in quality and delivery. If this sounds as dull as a middle school spelling bee, consider that Literary Death Match purportedly “marries the literary and performative aspects of Def Poetry Jam, rapier-witted quips of American Idol’s judging (without any meanness), and the ridiculousness and hilarity of Double Dare.” Melissa Goldstein reports in The Daily Beast that “In the absence of confrontation, there were clever bells and whistles: sexy librarian–styled ring girls and elaborately quirky tiebreakers involving a game of ‘pin the mustache on Hemingway’ and a vegan cupcake toss.”
But comedic spectacle is only part of the appeal. As is par for the course with the medium, some of the show’s potential lies with sex appeal and a fabricated intimacy with the filmed subject. As Goldstein puts it,
“These people — the ones whose books and New Yorker essays you take to bed with you — are real-life people you just might want to take to bed with you. Or hang out with. Which, if you didn’t realize, are two of the most compelling motivations for watching people on reality television.”
Of course, Literary Death Match may never be broadcast on television. Even if Literary Death Match is picked up by production agencies, the program will most likely air on a network with less viewership (and thus less power of celebrity creation) than those which produce shows like The Voice. Even the notion that fiction writers might warrant a television audience, however small, outside of the talk-show format would be a boon for writers, though. J.K. Rowling, one of the most famous living writers, may be a household name — but even hers still fails to evoke tabloid interest the way Kardashian does. Not that tabloid interest is what writers want or should shoot for, but I’m sure most wouldn’t complain about a little more mainstream exposure.
So, let’s hope that Literary Death Match gets picked up. Not because it will turn writers into celebrities, but because literature is not just for realists. It’s magical, absurdist, experimental, even plebeian; it can be hilarious and irreverent and at odds with deceptively fluid notions of “highbrow” versus “lowbrow” entertainment. Perhaps Literary Death Match wouldn’t revolutionize reality TV the way literature as a symbolic language revolutionized our viewing of television dramas like The Sopranos and Treme, but there’s no reason the reality TV format couldn’t express diverse aspects of the literary community; after all, the live event has already proved popular. In any case, the worst we’d be left with is bookish eye-candy, outrageous action, and silly jokes: all things that even the most serious members of the literati should be able to get behind.
The United States has been engaged in a constant war for the past ten years with no sign of slowing down. The drone war is not exactly a secret, but it’s not a hot topic in the national conversation, either. The drone program is rarely mentioned by politicians and while some journalists in the mainstream media have done impressive work documenting and exposing the drone war, the day-to-day of targeted assassinations rarely makes the front page or the evening news. “That secretive program in which unmanned robots controlled from trailers in Nevada swoop down and deliver death to suspected terrorists on President Obama’s ‘kill list’ in lawless far away lands?” Americans seem to be thinking, “Yeah, I guess I know that exists.”
This obliviousness is, of course, by design. Among the benefits of the drone program for the military and intelligence services is how easily the general public can ignore it. The national security apparatus is able to defeat America’s enemies — killing around 3,000 people, including between 473 and 889 civilians, 176 of whom were children — while having a minimal impact on most Americans. With the exception of a very small handful of traumatized drone operators, most Americans don’t face any consequences of the drone program. Drone warfare allows Americans a near complete divorce of the state’s violence from human reality.
But recently online activists have undertaken attempts to use social media to rattle American complacency on the issue. On Dec. 11, Josh Begley, a web artist and NYU graduate student, started @dronestream, a Twitter account chronologically tweeting every reported US drone strike in Pakistan from 2002 until today. Begley told The Daily Beast that the project is “about the way stories are told on new social-media platforms.”
@dronestream’s page succeeds in creating a striking narrative through social media. Viewing the different strikes in a long stream turns the “national security” imperative into a story about assassination from above in Pakistan. Patterns emerge: Jan. 2010 was a particularly intensive month for strikes. Sites are often hit multiple times in a row. Ten is a typical number of people killed. Is the end of the month more prone to drone strikes than the beginning?
But Begley’s project is about more than experimental storytelling. It also serves to remind @dronestream’s 19,000+ followers of what is going on. The tweets jump out among the endless stream of news and commentary and online snark, a jolting reminder of the incidence of state violence. “The central question is about access and information,” Begley said. “Even if we have access to the data about drone strikes, do we really want to be interrupted by it? Do we really want a twitter feed that is going to show up in our other communications and annoy us with this heavy data?”
James Bridle, a technologist and artist, took on a similar project before Begley began his. In November, Bridle launched Dronestagram. With Dronestagram, Bridle takes information on US drone strikes in Pakistan, Somalia, and Yemen published by the Bureau of Investigative Journalism and uses it to locate the strike sites on Google Maps. Bridle then takes images of the strike locations and posts them to Instagram, sometimes applying the program’s nostalgic and dreamy filters to make photos look retro. The result is often haunting. An ambiguous little square of digital imagery — what is that? a farm? a hilltop? — that the viewer knows contains death.
Like @dronestream, Dronestagram is a simple reminder of the ongoing war, but it is also more. Bridle explains on his blog: “The political and practical possibilities of drone strikes are the consequence of invisible, distancing technologies, and a technologically-disengaged media and society. Foreign wars and foreign bodies have always counted for less, but the technology that was supposed to bring us closer together is used to obscure and obfuscate. We use military technologies like GPS and Kinect for work and play; they continue to be used militarily to maim and kill, ever further away and ever less visibly.”
The images — a small hut surrounded by a fence somewhere in the barren desert of northern Yemen, for example — help to turn the drone strikes into a concrete reality, something that can be seen and situated in real space. In doing so, Bridle helps to recontextualize drone strikes as something that occurs in material reality, precisely what Americans are supposed to ignore.
So is this kind of online activism effective? The efficacy is impossible to measure. The nature of social media means that while anyone can access Bridle and Begley’s messages, the reach is in reality much smaller. You have to sign up to follow @dronestream on Twitter and have your personal feed punctuated by the history of drone strikes from 2002 to 2012. The “average” person isn’t necessarily going to be affected by it. And it would be easy to dismiss Bridle and Begley’s projects as some kind of “clicktivism,” that simplest form of activism where a retweet to your 800 followers feels like a revolutionary act. Dronestagram and @dronestream won’t have the capacity for changing Americans’ relationship to the ongoing war that, say, hundreds of thousands of returning veterans or a looming draft did during Vietnam. But in a digital age of digital warfare fought (on one side, at least) on computer screens, a reaction on computer screens is only natural. Moreover, social media is where many Americans construct their understanding of global affairs and current events, making it a natural place to fight the ideational battle against the drone war.
National leaders, especially leaders of democracies, will tell you that if you want to lead a people into war, you had best have a pretty good reason for it. The world has seen its share of less-than-stellar casus belli, from a suspiciously fake-looking Polish attack on Germany on August 31, 1939, to fears of a gigantic communist domino setup in Southeast Asia, to the missing weapons of mass destruction in Iraq. But, over the last twenty years, some of the flimsiest provocations of all have led the populist Right to cast themselves as defenders in a series of great cultural wars that challenge the very future of America.
Casting policy in a military light is certainly not a new practice, of course, but today’s skirmishes are a far cry from the legislative crusades of the last century. With earlier examples like LBJ’s War on Poverty or Reagan’s War on Drugs, politicians from both parties generally agreed on the problems, if not the solutions. Presidents and legislators were the ones declaring war, and it was a point of pride to be engaged in a struggle against a universally recognized social ill.
Today, however, the political bickering has extended to the causes themselves. Instead of being used to fight a societal problem, the war motif becomes a playground game of “they started it,” with one party trying to score points by tarring the other side as aggressors. By their nature as a conservative party, Republicans fit more comfortably into the role of the entrenched defender of traditional mores, which is why right-leaning pundits are generally the ones trading in the imagery of cultural warfare.
And trade they do. Whether it’s Bill O’Reilly playing Cronkite as he tolls the body count of Christmas trees or Congressman Mike Kelly comparing the architects of Obamacare to the perpetrators of 9/11, conservatives are quick to draw the parallel between making policy and making war.
And, from the false equivalency files, the dizzying world of right-wing blogs brings us the War on Men. Contrary to the “War on Women” narrative of the last election, author Suzanne Venker says that it’s actually men who are the embattled sex. Bafflingly, Venker cites female competition in the workplace and other feminist goals as affronts not only to men everywhere, but to the very concept of manhood. The same feelings of persecution among a decidedly privileged majority have enjoyed play in various “common sense” conservative circles, from “white pride” movements to a push to name July “Heterosexual Awareness Month.”
The ignoble commonality among all these current congressional and cultural wars is their partisan nature. There is no War on the Deficit or War on Illiteracy being fought on the Hill, at least not by those names — the conflicts that are given the distinction of being called “wars” are not the most pressing problems, but the most divisive along party lines. While not surprising in the same legislative body where Senate minority leader Mitch McConnell introduced a bill that he opposed, then had to filibuster to keep it from passing, it is another troubling indicator of the widening chasm between parties in Congress.
The policy-as-warfare phenomenon’s partisan nature also indicates that, like McConnell with his DOA-by-design bill, many of these cultural warriors may not be entirely sincere in their intentions. While there are undoubtedly some true believers in the political sphere, the alarming discipline with which even the most moderate politicos toe the party line on these issues — see, for example, the stampede toward the far right in the Republican primary debates — hints that the force driving the narrative is not passion but point-scoring.
And while those in power celebrate their battlefield victories and mourn their losses, it is their constituents who must try to breathe in the toxic political atmosphere that such rhetoric creates. As in a real war, the real casualties fall not on the elites who wage it, but on the regular citizens who suffer through it. Empowered as foot soldiers in the culture wars, many take to writing angry letters to the newspaper, or self-righteously half-shouting “Merry Christmas” to incredulous store clerks, or cutting off friends and neighbors of differing opinions because they constitute a part of the “enemy.”
But among those reasonable observers caught in the middle, war weariness is mounting. Polls taken after November’s election showed that economic issues topped voters’ lists of the most important issues at 64%, with “Ethics/Moral/Religious decline” — i.e. the War on Religion — far behind at just 3%. And these voters know which party is responsible for the warfare rhetoric, too; out of nine contested states in the presidential race, only North Carolina sided with Mitt Romney. Societal war-mongers, take note: invoking the language of cultural warfare is quite simply an outdated tactic, and it ought to go the way of the horse and the bayonet.
Let’s start with three inarguable but potentially subjective truths. 1. Henry James’ The Portrait of a Lady is the wildly contested and ever elusive Great American Novel (I am sure of it). 2. Paul Thomas Anderson’s The Master is the best film of the year. 3. Béla Tarr’s Damnation is the most stunning cinematic depiction of near-apocalyptic desolation you’ll ever see committed to film. I have been challenged and haunted by all three of these. As I read and watched piles of tripe, and greedily nibbled on whatever treasures were thrown my way, it was these three that stuck out and continued to do so. Together, they form the whacked-out cultural triangle that made 2012 “My Year in Affect.” I am just going to take a bit of your time and point out why you simply must seek them out. You have to brace yourself though. All three, characterized by intimacy and interiority, are savage in their affect and practically ooze feeling. Moreover, they are narrative triumphs, which makes for a strange trifecta but, somehow, a fitting one.
In his preface to Portrait, James points out its origin pretty matter-of-factly: “‘The Portrait of a Lady’ was, like ‘Roderick Hudson,’ begun in Florence, during three months spent there in the spring of 1879.” In a marvelous coincidence, I began reading it in Florence. I finished it months later back in the States. I couldn’t read it for too long a time or over too much time. It had to be put down now and again; it was trenchant, and that’s an understatement. I know I am not alone in this, but to my mind, James has rendered not just one of world literature’s greatest characters but the preeminent American female character in Isabel Archer. If you picture the Last Supper, but swap apostles for literary characters, you’d have Isabel in the middle, Anna to her left, Emma to her right, and so forth down the table. But don’t let the power get diminished by the analogy. What makes Isabel’s trials and tribulations so much more difficult is their interiority and the lack of catharsis. I don’t want to go into spoilers, but Isabel’s story is shocking for what doesn’t happen, whereas Emma and Anna’s stories shock for what does. But, as a character in Tarr’s Damnation points out, “All stories end badly, because they are always stories of disintegration.” It’s unfortunate that Portrait is no exception. From its very beginning, something seems amiss, and the feeling only deepens. Its highs (happiness and occasional light moments) seem disingenuous and ominous. Its lows (depressions, fears, anxieties, and various ruinations) seem far more real because, well, they are. It’s hard to so forcefully recommend something without going into too much detail. This is one of those books that people say they’ve read but can never really talk about. Don’t be one of those people. Suffice it to say that no book has made a greater impression on me this year or in a very long time.
Early in Tarr’s film Damnation, a character warns another that “you should realize there’s order in the world and you can’t do anything to upset it.” You could picture the same sentiment coming out of Lancaster Dodd, Philip Seymour Hoffman’s mind-bendingly charismatic proto-Hubbardian shaman-cum-con man in The Master, as he speaks soothingly to disciple Freddy Quell, played with unquenchable fire by Joaquin Phoenix. In short, it’s a towering achievement from the most sure-footed American director working today. The cinematography and score are equally chilling and soaring; the writing is complex and magnificent, swinging pendulously between earthy and cerebral. Overall, though, it’s the acting that rattles the bones, squeezes the heart, and fries the brain of any viewer. Hoffman is scarily composed, with moments of sharp-tongued rage and swelling ego. Joaquin Phoenix is unhinged and melancholic, adrift, occasionally manic and eternally desirous of succor and counsel. Amy Adams is meek and unassuming. To elaborate on her character would be a grave disservice. A caveat: the places her character goes and what she manages with a bathroom sink should eclipse any possibility of another woman winning an Oscar, be she singing as she dies of consumption or suffering as a first lady. As with Portrait, the most unsettling aspect of The Master is its ability to elicit feeling and sympathy from its audience while exposing it to the most horrific faces of humanity. Both The Master and Portrait are extraordinary examples of what happens when that social order is flouted.
Finally, Tarr’s Damnation. It’s a film about a sad man dating a married woman. But it’s also about immorality and darkness and squalor. It’s about being alone in a city full of people, or being most alive when you’re alone. It’s about struggle and sacrifice, giving up and soldiering on, being oppressed by life and still somehow learning to live with it. Don’t let this steer you from it. It’s unquestionably powerful and unlike anything else I’ve seen. Its black-and-white cinematography pops and moves, deepens, lightens, widens, and narrows in ways so different from The Artist or anything you’d see on TCM. Tarr traffics in long takes that could, and often do, last for minutes at a time. He trains the camera on something until you see it the way he wants you to, and then rewards you by moving on. Halfway through the film, a character speaking quietly to another offers up a reminder that struck me: “I know you can step out of the story just as you can any story.” We could turn it off; we could tweet about it or post it. We could warn people against wasting their time. Instead, I say commit. These were the three defining cultural moments of 2012 for me. They are challenging and exacting, but just as rich in their rewards. In closing, it’s not too late to throw these on your Christmas list. Shift your perspective — I dare you.