The following is an excerpt from The Word Pretty by Elisa Gabbert (Black Ocean, 2018), published with permission of Black Ocean.

I habitually mention my husband in essays. We eat at the same table, read on the same couch; we talk things out, and his voice becomes a part of my thinking. John has a beautiful voice, a radio voice—people mention it, in taxis, at pharmacies. They say “You sound like Tom Ashbrook.” Voice is important, the way spelling is important (Allen is more beautiful than Alan, Aerin more beautiful than Erin). John listened to Christopher Isherwood’s A Single Man on audio book, read by the English actor Simon Prebble, and loved it. I read an edition printed after the movie came out, with a photo of Colin Firth on the cover, and was unmoved. Could it be because my inner voice has no British accent?

Lately, John’s old Connecticut accent—studiously unlearned in acting classes at college—has begun to resurface, on certain words. Wonderful becomes wonduhful. Understand becomes unduhstand. It’s because he is losing his hearing. My voice doesn’t sound like my voice to him. It is amplified but also distorted—on their highest settings, the hearing aids make voices sound monotone, robotic. They must, too, make me shrill—he sometimes reads overreaction and annoyance into my voice when my tone feels perfectly neutral to me.

He can hear my voice better than anyone else’s, aside from his mother’s. Familiarity is a factor, but we also speak in the right register—it’s easier for him to hear women than men. I also know to look at him, not to cover my mouth. On bad days, or in noisy environments, he needs to read lips. (Mumbling is an issue. Beards are an issue.) But lip reading is of limited use; both sounds and lip movements are more ambiguous than we realize. We know, from videos with edited soundtracks, that seeing the wrong mouth shape can make you “hear” a different sound; when someone’s mouth makes the “pa” shape you hear a “pa,” even if the soundtrack is saying “fa.” It’s called the McGurk effect. I suspect this works the other way too, and hearing the wrong sound can change what you “see.”

John may eventually need cochlear implants, which entail a period of cognitive rewiring where you can’t hear anything at all. Should we learn sign language? It might help us around the house, but nobody else we know speaks in sign. Will speech recognition get good enough, fast enough? There’s software called Dragon that works strikingly well for real-time dictation, but it has to learn your voice patterns first—it wouldn’t work in a classroom (John teaches) or for strangers. If it did, the software could be combined with a wearable device to essentially provide subtitles for the world.

Cochlear implants don’t replicate natural hearing, the way glasses or laser surgery restore 20/20 vision. You have to retrain the brain to understand language, and the results are unpredictable, though they tend to be better if you’re younger, more educated, more determined. Technology is ageist. (The same goes for hearing aids, which don’t work that great out of the box; brain plasticity is a big assist.) You learn to assign frequencies to meaning again, but the sounds are not the same. A friend of ours, a poet, told us that his mother’s response to cochlear implants was considered exceptionally strong. When he asked her what his speech sounded like to her now, she said, “I hear bells, and the bells have words inside them.”

Elisa Gabbert is a poet and essayist and the author of four collections, most recently The Word Pretty; L’Heure Bleue, or the Judy Poems; and The Self Unstable (all from Black Ocean). Her writing has appeared in the New Yorker, the New York Times Magazine, the Guardian Long Read, A Public Space, Boston Review, the Paris Review Daily, Pacific Standard, Guernica, the Harvard Review, Real Life, and many other venues. She is currently writing a book about disasters, forthcoming from FSG Originals.
