Through a Glass, Darkly

Notes on Screens

Cassandra Nelson


"Rabbit rabbit." Photocollage by Zachary Bos. 1 viii 13.

When I think about what might be at stake in the increasing ubiquity of screens—at home and at work, in schools, facing the backseats of taxis and minivans, on the smartphones that may now, in a more or less socially acceptable way, be both taken to the toilet at a restaurant and left on the table when you return—I am always struck by the fact that until a few hundred years ago, none but the very rich had windows, or at least not windows made of glass. And also by the knowledge that windows, like an artist's vision, have always been linked with seeing, as the Old English word ēagþyrel or eyethurl (meaning, literally, "eye-hole") and the Old Norse word that supplanted it, vindauga ("wind-eye"), show.

Now we spend vast amounts of time staring at, rather than through, glass put to a very different use. Thirty-eight hours and twenty-three minutes per week on average for Americans, according to Nielsen's most recent report—and that's just time spent in front of the television. Computers, smartphones, and tablets are garnering a larger share of remaining eye-time every quarter, although, as it turns out, they're not cutting into our relationship with television at all: we're increasingly likely to be looking at multiple screens at once or using the web to watch content originally produced for TV. Instead, what's being steadily chipped away at is time spent entirely apart from screens. If you assume a third of each day goes to sleeping and a third to work, that leaves less than 2 hours and 31 minutes of screen-free leisure time per day—again, on average, across America—up for grabs.
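The arithmetic behind that figure, for anyone checking (a back-of-the-envelope sketch, assuming eight hours of sleep and eight hours of work every day of the week):

\[
7 \times 8\,\mathrm{h} - 38\,\mathrm{h}\,23\,\mathrm{m} = 17\,\mathrm{h}\,37\,\mathrm{m} \text{ per week}, \qquad \frac{17\,\mathrm{h}\,37\,\mathrm{m}}{7} = 2\,\mathrm{h}\,31\,\mathrm{m} \text{ per day}.
\]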

Even setting aside for the moment recent revelations that screens have been watching us quite a lot too, and reporting their findings to the federal government, there are so many unsettling implications to these statistics that it's hard to know where to begin. What we see or don't see, and how it's framed, unquestionably alters what we see or don't see after that, both with our eyes and with our mind's eye. So the sheer homogeneity of the situation is frightening, for a start. Yes, there are more and more channels, and more and more kinds of screens, but isn't that, as Adorno and Horkheimer suggested in 1944, simply a way of making sure that "something is provided for all so that none may escape"? There's invariably a loss of imagination in turning first to prepackaged content, whether it's falling back on a cliché to approximate your thoughts where a new and more exact phrase would express them, choosing to "react with an animated GIF" to something you saw on BuzzFeed (handy multiple-choice answers to save you the strain of knowing how you feel about even baby otters), or using MOOCs to deliver standardized content in college courses across the country. What if we were, collectively, to forget the diversity of what came before—how we reacted before GIFs, what we talked about before MOOCs? In some ways that might be the most frightening part: not change itself, but how soon afterwards it becomes difficult, or even impossible, to remember what was there before, whether it was a voice, a name, or a demolished building. Do you have any idea what windows were made of before glass? Cloth, apparently, or at other times wood, paper, animal hides, or—goodness knows how—flattened horns, mica, sheets of thinly sliced marble. In my lifetime cell phones and the Internet have made doorbells, payphones, and planning in advance obsolete; weirdly, someone still seems to be printing and delivering phonebooks. What did we as a nation do before we spent so much time staring at screens? Should someone write it down before we forget?

Sometimes I think about non-screen activities I enjoyed as a kid and wonder whether anything other than nostalgia is involved in my hoping that future generations will do the same. As an adult, I'm definitely not pulling my weight in terms of per capita television consumption, leaving someone somewhere with 71 hours of TV viewing per week to pick up the slack, and a lot of that leftover time is spent with books. While I admittedly have more of an investment in the written word than most—I'm doing a PhD in English, and I count letterpress printing as a hobby—it saddens me to think that in the years to come it will be harder and harder to explain to others why I'm so invested in words on pages. Don't get me wrong, books aren't dead or dying: more are printed and sold each year than the year before, and as for e-readers, if there's one thing that the history of the book up to this point has shown, it's that new technologies are far more likely to coexist alongside older forms than to displace them entirely. But habits of reading and thinking are surely changing. And it's hard to make a case for a quiet, delayed, demanding kind of satisfaction when loud, instant, undemanding pleasures abound.

I.

I first started thinking about screens in April, for a conference panel on poetry and Kindles (do they go together like oil and water? peanut butter and jelly? a fish and a bicycle, or what?). It was then that the idea of windows as a sort of incunabulum for e-readers came to me, in what I thought was a free-associative fancy, but now suspect was more likely the result of thirty years of marketing by Microsoft. Screens and windows are alike in cordoning off some aspect of experience and directing our gaze toward it, calling our attention to the thing on the other side of the glass. Books and paintings, notably, provide a similar kind of framing device—minus in most instances the glass, which is a not insignificant difference when you think of jewelry stores and museum cases and the signals they send, namely that the most valuable and desirable things are to be kept within sight but out of reach. Certainly, Theodore Dreiser suggests in Sister Carrie (1900) that the gleaming plate-glass windows lining Michigan Avenue—made possible by then-recent advances in steel and glass manufacturing, which arrived just in time for rebuilding after the Great Chicago Fire—help to fan the flames of his protagonist's nascent consumerism. And how quaint such windows seem in retrospect, in an age in which Things to Want would be delivered to Carrie Meeber's inbox hourly, all through the day and night.

Along with shine, screens and certain kinds of windows—those affixed to forms of transportation—share an element of speed. Railroad travel was intolerable to at least some of the first writers who rode upon it. Used to traveling by horse and carriage, and taking in the countryside around them as they went, they felt frustrated and above all bored by the way the increased pace of trains robbed them of the ability to focus on anything outside except at a distance, and reduced flowers, trees, and people to mere smudges of color. "All travelling becomes dull in exact proportion to its rapidity," John Ruskin declared, while Gustave Flaubert wrote to a friend, "I get so bored on the train that I howl with tedium after five minutes of it. One might think that it's a dog someone has forgotten in the compartment; not at all, it is M. Flaubert, groaning." But of course their discomfort was learned, not inherent, and future generations, whether writers or not, simply adjusted their modes of perception, or rather developed new and more fitting modes of perception, since they had never had any other modes to begin with. On the whole the human sensory apparatus acclimated itself so well to the changed pace that it comes as something of a shock today, in an age of widespread air travel, to learn that trains were not always a relatively slow, cozy, and engaging way to traverse a space.

So let us concede that perception will always adjust to technology, and that what makes a grown person howl with boredom or frustration in any age will seem as natural and pleasant as breathing to an eight-year-old. That's not to say that every change is necessarily good, just that as a species we'll cease to find it jarring after a while.

If we flash forward a few decades to W. E. B. Du Bois' The Souls of Black Folk (1903), we begin to discern some of the lasting costs, both perceptual-cognitive and human, of increasing speed. What is lost is duration and depth. The "car-window sociologist," writes Du Bois, is someone "who seeks to understand and know the South by devoting the few leisure hours of a holiday trip to unraveling the snarl of centuries," and who thereby rushes to reductive and highly questionable conclusions. Catching a glimpse of others' lives in passing, as readers know from the rest of Du Bois' book, is not at all the same thing as living with them for months and years—working, sleeping, eating, singing, and mourning alongside them. The catch is that the car-window sociologist, unlike Flaubert, feels no frustration and no boredom, nothing that might warn him something is amiss, that he has not seen it all, that there are greater depths of understanding than the one he has achieved.

And screens, as Nicholas Carr has shown in The Shallows: What the Internet Is Doing to Our Brains (2011), are turning us all into car-window sociologists, to some extent. Like Marshall McLuhan before him, Carr calls attention to the way in which every medium is accompanied by its own intellectual ethos. The printed codex is comparatively demanding: it requires that a reader's attention be narrowly focused, directed in a linear way, and kept up for quite a while—just you and your attention span, working your way through sentence after sentence, for hundreds of pages, hours at a time. The screen, by contrast, asks very little. Content can be skimmed or even skipped with little or no consequence. When the going gets tough, the reader gets going. According to a recent piece in Slate, shocking numbers of people don't scroll down even once when reading an article online.

This kind of "shallow" reading is nothing short of apostasy to the language arts teacher and literary critic (and the link between it and screens is one reason why I shudder every time I see "Print" specified in an MLA-style bibliography, happily eroding the idea of the codex as the book's default form). The reading that we do, and try to instruct our students to do, involves more than a glancing encounter with a text. And the most difficult moments in that text—instances of what Wolfgang Iser called indeterminacy, which force the reader to make sense of ambiguity, and to come up with multiple interpretations, analyze their validity, and argue on behalf of their persuasiveness—these are the moments when a reader must buckle down, go back and read again, think harder. To skip ahead, as one might to the next article on a website, would be to miss what makes literature literature. It takes time to read closely and sink into a world made of words, time to reflect on it afterwards and formulate a response, and truly astonishing amounts of time to find the right new words, and put them in the right order, so that you can share your response with others.

An investment in these kinds of close reading practices is surely what brought my fellow panel attendees and me to the Association of Literary Scholars, Critics, and Writers conference at the University of Georgia this spring. Not just because our livelihoods depend upon it, but because much of our satisfaction in life does too. There is a real pleasure and lasting contentment that comes from finishing arduous things. Literary types don't have a monopoly on such joys—artists, athletes, parents know them well—but the process is one that can't be understood secondhand: you actually have to put in the effort to get back the satisfaction. And the relationship between the two is proportional. As one reviewer said of making it through Samuel Beckett's posthumously published Dream of Fair to Middling Women (a dense and allusive novel, to say the least): "It's uphill all the way, but then so was Calvary, and the view from the top redeems the pains taken."

The uphill battle of education and sustained concentration is thankfully less like the hill at Calvary and more like the mountain in Dante's Purgatory: "Ever at the beginning below it is toilsome, but the higher one goes the less it wearies. Therefore, when it shall seem to you so pleasant that the going up is as easy for you as going downstream in a boat, then will you be at the end of this path: hope there to rest your weariness; no more I answer, and this I know for true."

A person in the habit of abandoning prose before the end of the third paragraph would have no way of intuiting that reading and writing get exponentially easier with practice. I could only try to tell them, like Virgil, that I know it to be true.

II.

On the far side of a window, we find one thing. Maybe reality is too strong a word for some, but we can at least say that it's the physical world, nearby and in the present moment. And on the far side of a screen? It might be a mediated version of reality (potentially quite far away in space or time), or something invented and unreal. It might be a mix of the two, something invented and unreal that masquerades as life. This is the charge David Foster Wallace famously brought against television in his 1993 essay "E Unibus Pluram," and one that Jonathan Safran Foer recently revived in an op-ed about social media and other communication technologies in The New York Times. Email and text messaging, Foer writes, were created not as "improvements upon face-to-face communication" but as "acceptable if diminished substitutes for it. But then a funny thing happened: we began to prefer the diminished substitutes."

Here too screens prove less demanding than the alternative, in this case, a living, breathing, in-the-flesh or even on-the-phone human being. It just seems easier somehow to send our thoughts through various screens, even when the lack of voice and intonation introduces the potential for a thousand new difficulties.

The world on the far side of a screen is not physical in the way that the world outside your window is physical. It does have physical components, silicon chips and liquid crystals and who knows what else, but these are very much behind the scenes, so that an image of a sugar gum tree online does not reveal its materiality in the way that, say, a painting of a sugar gum tree reveals its brushstrokes or a description of a sugar gum tree is immediately perceived to be made of words. N. Katherine Hayles calls this phenomenon "virtuality." In her book How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics, she argues that various technologies developed around the time of the Second World War have led to a widespread, if not necessarily conscious, belief that information can exist as "pattern" rather than "presence," and thus that it can move from one material form to another unchanged and can even exist wholly independently of matter. The slogan "Information wants to be free" captures this sense of virtuality somewhat. "Free" in this context primarily refers to cost, but it also implies that information desires to be released from whatever form it's being held in, to be out there, everywhere, in the ether, on the Internet, where everyone can access it.

Translators, editors, book historians, and librarians—anyone familiar with what might be called last-generation information technology—will quickly see the error in this reasoning. The truth is that unless information exists in some material form, no one can access it. That's why no one will ever again read the lost plays of Aeschylus, or all but two of the Ernest Hemingway stories written before 1922. The former were destroyed along with the Library of Alexandria in antiquity; the latter disappeared when the trunk carrying them was left momentarily unattended in a Paris train station. I was taught never to say never as far as declaring something "not extant" (because the second you do, it will turn up in an attic), but the hope in that case is that legible copies do still exist, somewhere, not that the stories and plays will miraculously reconstitute themselves out of ashes and mulch. Or ones and zeroes. Cloud computing may be marketed as data heaven, an almost metaphysical place in the sky where information is always and forever safe, but it's really a bunch of server banks here on earth, at risk of damage from thunderstorms and tornadoes and fire. Even in 2013, information that does not exist in some material form—whether it's coded in neural circuits, set in lead type, written on vellum, saved to a hard drive, or chiseled in stone—will be lost, irrevocably.

Moreover, the transfer of information from one medium to another, or even from one copy to another within a particular medium, is never flawless. Human memory is flawed, the publishing process too; scribes make one set of errors, typesetters another, typists a third, computers a fourth. I feel fortunate not to have to read Shakespeare in translation, because it simply wouldn't be the same in another language, no matter how skilled the translator. Conversely, I am aware that the best of Russian literature will always be kept from me at one remove; the ink will be printed, or pixels arranged, in the shapes of the English alphabet, rather than the Cyrillic letters in which they were written; this physical difference will affect meaning. At the panel on Kindles and poetry, much ado was rightly made about line endings. Poets take great care and consideration in choosing where to end a line, but those decisions are liable to be lost when reading their poems on a screen, especially if it's a small format like a tablet or a smartphone, or if the reader can adjust the text size. Prose can emerge scrambled on a screen too—italics can be dropped, fonts altered or changed to hieroglyphics, every apostrophe replaced with a percent sign—or it might come through fine on one browser or e-reader, but not on another. It's all very confusing and much harder to keep tabs on than, say, a set of galley proofs.
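Much of that scrambling comes down to encoding: the same stored bytes read under two different assumptions about what they mean. A minimal sketch in Python (the string and encodings here are my own illustration, not anything from the panel):

    # The same bytes, decoded under two different assumptions.
    text = "poet\u2019s line"          # a curly apostrophe, U+2019
    raw = text.encode("utf-8")         # the bytes a file or e-book actually stores

    print(raw.decode("utf-8"))         # poet's line   (decoded as intended)
    print(raw.decode("cp1252"))        # poetâ€™s line (the familiar mojibake)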

Here too there is a danger in the way that screens obscure materiality. Bookmaking is a complex process, but even a child can get the gist of how it's done. Charlotte Brontë and Flannery O'Connor produced very clever and very fine books as children, with just paper, ink, and thread. Kids today could never play at e-book making, only at e-book consuming. The technology behind tablets, laptops, smartphones, and television is far more opaque, which means that the companies delivering content—the middlemen between authors and readers—retain a lot more power than Penguin does, for instance, after I have purchased a paperback from them. If something goes wrong with my phone or laptop, I'm basically helpless (perhaps more helpless than most), and so have on occasion given Apple money twice for the same song, sometimes grudgingly, sometimes gladly or lazily. In this way, I wonder whether the content on or behind screens—a library of e-books, say—is one's own, or merely leased. At a larger scale, those bricks-and-mortar libraries that have begun to discard their collections after digitization are making themselves dependent on Google's continued existence and benevolent rule.

Neither this power dynamic nor, presumably, its earning potential is lost on corporations. It's an irresistibly convenient arrangement for consumers at the moment, but also an unprecedented one. Time will tell if we've struck a rather Faustian bargain.

III.

The rub, when it comes, might be related to privacy; we have seen glimpses of that already in recent weeks. But I'm also thinking more literally, or literarily, of Christopher Marlowe's Doctor Faustus. Wendell Berry invoked the play in a 2008 essay in Harper's, "Faustian Economics: Hell Hath No Limits," in which he argued that Americans are apparently unwilling to seriously consider the possibility of looming economic and environmental collapse, or at least to make any changes that might avert it. Instead, we as a nation seem content to go on simply "consuming, spending, wasting, and driving, as before," confident that science and the free market will find a solution and that "what we call the American Way of Life will prove somehow indestructible." This thinking, according to Berry, is predicated in part on "an assumed limitlessness," a willful refusal "to see that limitlessness is a godly trait" and that humans are finite beings.

Today, limitlessness is practically guaranteed with the purchase of an iPhone from Sprint—see their "I Am Unlimited" commercials on YouTube—and there's no trick that Mephistopheles performs at Faustus' bidding that modern society hasn't made routine. Grapes were $3.99 at my local supermarket this January, on sale for $1.99; presumably, they could have been purchased online and delivered. Google Earth is a decent approximation of the dragons that Faustus rides so high "into the air / That, looking down, the earth appeared to me / No bigger than my hand in quantity." The Internet can easily be made to conjure up a parade of the Seven Deadly Sins or stunning but untouchable likenesses of Alexander the Great and Helen of Troy.

And perhaps it does resemble omniscience, the way we can see one friend's newborn baby on Facebook, and another's fishing trip, and a livestream from Taksim Square, all at once, but it doesn't change the fact that we're each still just in the one place, in front of our computers or cell phones, and can no sooner touch or affect anything on the other side of the screen than if we were a ghost. That such disparate scenes, and much else besides, should come and go from one blank space does lend the screen an appearance of infinity, and for all I know about the inner workings of a phone or computer, Mephistopheles himself may as well be bringing them to me.

There's something wonderfully inert about a book, in comparison—the way that meaning and magic are fixed in a set number of words on a set number of pages, with no power button and no backlit glow—so that when a book springs to life, the reader's role in the process is never in doubt. Closing the last page leaves one with a sense of perfection, however fleeting. An experience has been completed, a connection with an author, alive or long dead, has been made. So much of what's found on a screen only ever leaves one wanting more. Again, words from Sister Carrie spring to mind: "In your rocking-chair, by your window dreaming, shall you long, alone. In your rocking-chair, by your window, shall you dream such happiness as you may never feel."

Works Cited

  • Alighieri, Dante. Purgatorio. Translated, with a commentary, by Charles S. Singleton. Princeton: Princeton University Press, 1973.
  • Berry, Wendell. "Faustian Economics: Hell Hath No Limits." Harper's Magazine (May 2008): 35–42.
  • Carr, Nicholas. The Shallows: What the Internet Is Doing to Our Brains. New York: W. W. Norton, 2011.
  • Dreiser, Theodore. Sister Carrie. Edited by Donald Pizer. New York: W. W. Norton, 2006.
  • Du Bois, W. E. B. The Souls of Black Folk. New York: Penguin, 1996.
  • Foer, Jonathan Safran. "How Not to Be Alone." The New York Times, 9 June 2013, SR12. Available online at http://www.nytimes.com/2013/06/09/opinion/sunday/how-not-to-be-alone.html.
  • Hayles, N. Katherine. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press, 1999.
  • Horkheimer, Max, and Theodor W. Adorno. Dialectic of Enlightenment. Translated by John Cumming. New York: Continuum, 1982.
  • Iser, Wolfgang. The Act of Reading: A Theory of Aesthetic Response. Baltimore: Johns Hopkins University Press, 1978.
  • Knowlson, James. Damned to Fame: The Life of Samuel Beckett. New York: Simon and Schuster, 1996.
  • "A Look Across Screens: The Cross-Platform Report," Q1 2013, The Nielson Company (10 June 2013). Available online at http://www.nielsen.com/us/en/reports/2013/the-cross-platform-report--a-look-across-screens.html.
  • Manjoo, Farhad. "You Won't Finish This Article: Why People Online Don't Read to the End." Slate, 6 June 2013. Available online at http://www.slate.com/articles/technology/technology/2013/06/how_people_read_online_why_you_won_t_finish_this_article.html.
  • Marlowe, Christopher. Doctor Faustus. Edited by Roma Gill. London: Ernest Benn Limited, 1965.
  • McLuhan, Marshall. Understanding Media: The Extensions of Man. Introduced by Lewis H. Lapham. Cambridge, Mass.: MIT Press, 1994.
  • Schivelbusch, Wolfgang. The Railway Journey: The Industrialization of Time and Space in the 19th Century. Berkeley: University of California Press, 1986.
  • Wallace, David Foster. "E Unibus Pluram: Television and U.S. Fiction." Review of Contemporary Fiction 13:2 (1993): 151–194.

CASSANDRA NELSON is a PhD candidate in English at Harvard University, writing a dissertation about religion and screen media in postwar American fiction. A version of her first chapter, on Flannery O'Connor and film, is forthcoming in Literary Imagination. She holds an MA from the Editorial Institute at Boston University, and prepared an edition of Samuel Beckett's More Pricks than Kicks for Faber & Faber in 2010.