Innovations in Hearing Technology: A Journey from Silence to Clarity

A little over thirty years ago, my friend David Howorth, then in his mid-forties, faced an unexpected and life-altering event: he lost all hearing in his left ear, a condition known as single-sided deafness. "It happened literally overnight," he recounted, still grappling with the abruptness of the change. His physician was equally baffled, saying, "We really don't understand why." At that time, Howorth was working as a litigator in the bustling Portland, Oregon, office of a large law firm. His family had relocated there from New York after an alarming incident in which one of his daughters pricked her finger on a discarded syringe while climbing on the rocks in Prospect Park.
Although Howorth's hearing loss did not impact his professional life ("In a courtroom, you can get along fine with one ear"), it transformed many personal aspects of his everyday experience. The human brain locates sound sources by analyzing the minuscule differences between the times sounds reach each ear; owls and bats rely on the same ability to hunt prey they cannot see. With only one ear functioning, Howorth found himself disoriented when someone called his name on a busy sidewalk. In social gatherings, he often pretended to follow conversations, nodding along even when unsure of the topic. "I was hesitant to join in for fear of being off-point or, worse, repeating what someone else had just said," he reflected. At dinner parties, his wife, Martha, would always sit on his left, sparing him the awkwardness of explaining to strangers why he sometimes failed to respond.
Tragically, Martha passed away in 2016. Without her by his side to facilitate communication, Howorth realized that his remaining ear was also deteriorating. It was then that he was fitted with hearing aids for the first time. The devices were specifically designed for individuals with his type of hearing loss, with a unit for each ear: the one on his non-functioning side contained a microphone that transmitted sounds wirelessly to the functioning ear. Even with this technology, though, Howorth struggled in multi-speaker environments. "I went to a bar with my brothers, and I was amazed," he said of his first experience with the devices. "One of them was talking to me across the table, and I could hear him." But the feeling of amazement was fleeting. Conversations became muddled, and identifying sound sources remained a challenge, since everything seemed to emanate from the same point.
Then, one morning in 2023, Howorth experienced another devastating shock: his right ear had stopped functioning as well. In pursuit of answers, he traveled to the Shea Clinic in Memphis, one of the foremost facilities for hearing disorders. There, doctors administered steroid injections of dexamethasone directly into his middle ear through the eardrum. Steroids are the conventional treatment for sudden deafness, but their effectiveness varies widely. Unfortunately for Howorth, they provided no relief.
After resigning himself to the fact that he might never regain hearing in his right ear, he underwent a cochlear implant procedure. A professor of otolaryngology at Harvard Medical School once described cochlear implants as "undeniably the finest biological prosthesis that we have today, for anybody, in terms of restoration of function." The technology dates back to research begun in the nineteen-fifties and has made significant strides over the decades. However, it is a common misconception that the implants restore normal hearing. Instead, they bypass the intricate sensory structures of the cochlea, using simple electrodes to stimulate the auditory nerve directly. Many recipients learn to interpret these signals as recognizable sounds, especially if they receive the implant in infancy; others face considerable challenges.
Today, Howorth has advanced hearing aids that he can adjust in unison with his cochlear implant via a smartphone application. Yet even with these devices functioning at their best, he struggles to comprehend much. "When I pee, it sounds like a roomful of people making conversation," he noted wryly, adding that it often sounds less intelligible than actual conversations among groups of people. Music presents an even greater challenge. Rush Limbaugh, a fellow cochlear-implant recipient, said that violins and orchestras sounded to him like fingernails on a chalkboard; although Howorth hesitates to use the same analogy, he recognizes the unpleasantness of the experience: "You do want to say, 'Make it stop!'"
Despite these challenges, Howorth feels he now navigates many situations better than he did with just one fully functioning ear. The key to this newfound clarity lies in a free voice-to-text application on his phone called Google Live Transcribe & Notification. When someone speaks to him, he can read their words on his screen and engage in conversation as though he had heard them clearly. He is part of a weekly lunch group with a handful of men in their seventies and eighties, and during these gatherings he places his phone at the center of the table, allowing him to participate fully in the discussions. Although Live Transcribe is not infallible (at one point, it misinterpreted a comment from a retired history professor as "I have a dick"), it is impressively accurate and often punctuates and capitalizes better than some English majors. The app can also vibrate or flash alerts for alarms, crying babies, beeping appliances, and other significant noises, and it functions, with varying levels of effectiveness, in eighty different languages. A few years after Martha's passing, Howorth remarried, and his current wife, Sally, has only known him with hearing loss. At a party they attended together, Howorth used Live Transcribe, and he later learned from Sally that it was the first time she had seen him in a social setting where he didn't appear aloof and unengaged.
A researcher I interviewed back in 2018 remarked, "There is no better time in all of human history to be a person with hearing loss," a sentiment echoed by numerous experts, who pointed to a plethora of advancements: over-the-counter hearing devices, enhancements to traditional hearing aids and cochlear implants, and drugs and gene therapies currently under development. While these innovations continue to unfold, for Howorth and many others like him the true breakthrough has been the ability to subtitle life itself. "It's transcription that has made the difference," he said, crediting the tech industry's monumental investments in artificial intelligence. Google's Live Transcribe relies on a vast repository of speech and text samples, the source of which remains something of a mystery.
Reflecting on the evolution of the technology, I recall my own experience with voice-to-text software, specifically Dragon NaturallySpeaking, which I purchased years ago. Initially intrigued, I found it cumbersome: it required extensive voice training and often produced transcriptions so inaccurate that correcting them took longer than typing the text manually. Today, numerous alternatives exist, including modern iterations of Dragon. The dictation feature in Microsoft Word is so efficient that a writer I know relies on it more than on his keyboard. Howorth and I occasionally play bridge with friends online, chatting via Zoom as we play. If I were unaware of his hearing challenges, I would never guess: Zoom's captioning feature displays everything said by the players, complete with speaker identification, allowing him to respond without any noticeable delay.
The advent of sound in film marked a significant turning point for the hearing-impaired community. Before the arrival of talkies in the late nineteen-twenties, silent films offered accessible entertainment, with dialogue conveyed through printed title cards. The integration of sound presented new challenges for viewers who relied on visual cues. In 1958, Congress established the Captioned Films for the Deaf program, akin to the Talking Books for the Blind initiative. Subtitles for television came later: the first captioned TV broadcast took place in 1971, when Boston's WGBH aired an experimental captioned episode of The French Chef, with Julia Child. Subsequent successful trials led to the establishment of the National Captioning Institute (N.C.I.) in 1979, which aimed to produce more captioned content. Notably, the first live network broadcast with real-time captioning was the 1982 Academy Awards on ABC, where stenographers transcribed ad-libbed remarks as they were spoken, supplementing the scripted portions of the program.
While many of N.C.I.'s original captioners were court reporters working part time, the growing demand for captioning in the early two-thousands prompted experimentation with automated speech recognition. The software could not yet convert on-screen dialogue directly, so captioners trained it to recognize their own voices and then acted as simultaneous interpreters, listening to the dialogue and repeating it into a microphone connected to a computer. They became known as voice writers.
Meredith Patterson, now the president of N.C.I., began her career there in 2003, becoming one of the first voice writers. She noted that while the software excelled at complex vocabulary, it faltered on smaller words, which speakers often fail to articulate clearly. To compensate, Patterson and her colleagues devised verbal shortcuts for punctuation and verbal tags to differentiate similar-sounding words. A sharp memory was also crucial, particularly when summarizing fast-paced information. The hiring process at N.C.I. resembled that for air traffic controllers, a reflection of how demanding the role was.
Today, N.C.I. continues to employ voice writers and stenographers, but the majority of captioning is now automated, a shift spurred by the COVID-19 pandemic, which brought a marked increase in virtual interactions and, consequently, a greater demand for captions. N.C.I. now provides services not only to television networks but also to educational institutions, corporations, and other clients, while the rapid advancement of AI technology has significantly improved transcription accuracy.
In December, I spent an evening with Cristi Alberino and Ari Shell, both in their fifties and both severely hearing-impaired. We met at Alberino's home in West Hartford, Connecticut. As board members of the organization Hear Here Hartford, they are deeply invested in advocating for the hearing-impaired community. Alberino, who began losing her hearing in graduate school, and Shell, who was uncertain about his hearing from a young age, both wear powerful hearing aids and are skilled lip readers. Shell recounted how, as a child, he would sometimes watch television with the sound muted at night, confident that he could follow the plot despite the silence.
Alberino noted that the pandemic introduced significant challenges for people with hearing loss, since masks muffled voices and hindered lipreading. Yet she also recognized the profound benefits it brought. As a consultant in Connecticut's Department of Education, she found that working from home fundamentally transformed her daily routine. "Ten years ago, we moved from a building with separate offices into a large open space with two hundred and fifty people and white noise," she explained, describing the cacophony that made communication difficult. The pandemic allowed her to escape that chaos, providing a quiet room where she could join captioned meetings on Microsoft Teams, which she hailed as "the single greatest thing ever invented." The platform enabled her to read colleagues' comments and respond without strain, alleviating the exhaustion she had previously felt from concentrating so intensely during meetings.
Conversing with my ninety-six-year-old mother via Zoom often presents its own challenges. Her preference for positioning the camera away from her face limits my view of her expressions, making our interactions feel somewhat distant. Transcription utilities can create similar barriers: someone focused on reading words on a screen may struggle to maintain eye contact. Howorth described a related difficulty at a meeting with financial advisers, where he could not tell who was speaking; his cochlear implant does little to help him differentiate voices, and that uncertainty compounded the challenge of understanding.
A response to exactly these challenges was devised by Madhav Lavakare, a Yale senior who was inspired by a classmate with hearing loss. Recognizing that traditional hearing aids amplified noise without necessarily aiding comprehension, Lavakare envisioned eyeglasses capable of displaying real-time transcriptions of speech while allowing wearers to keep their normal field of vision. Initially knowing little about optics, he took apart a family movie projector to understand how it worked and began building a prototype. Over time, he refined the invention with the help of volunteers, focusing on the project full time for two years. Now twenty-three, he has returned to Yale, and he recently demonstrated his prototype for me over lunch. The device looks like a pair of ordinary eyeglasses; I tried it on over my own glasses, and lines of translucent green text immediately materialized in the air between us, capturing our dialogue in real time. "Holy shit," I exclaimed, a reaction that was instantly transcribed. Using settings on his phone, Lavakare could toggle transcription on and off and even identify individual speakers. Despite the noise of the restaurant, the glasses filtered out the surrounding chatter, and our conversation remained clear and uninterrupted.