Imagine a common movie scene: a hero confronts a villain. Captioning such a moment would at first glance seem as basic as transcribing the dialogue. But consider the choices involved: How do you convey the sarcasm in a comeback? Do you include a henchman’s muttering in the background? Does the villain emit a scream, a grunt, or a howl as he goes down? And how do you note a gunshot without spoiling the scene?
These are the choices closed captioners face every day. Captioners must decide whether and how to describe background noises, accents, laughter, musical cues, and even silences. When captioners describe a sound—or choose to ignore it—they impose subjective interpretation on otherwise neutral noise, creating meaning that does not necessarily exist in the soundtrack or the script.
Reading Sounds treats closed captioning as a potent source of meaning in rhetorical analysis. Through nine engrossing chapters, Sean Zdenek demonstrates how the choices captioners make affect the way deaf and hard of hearing viewers experience media. He draws on hundreds of real-life examples, as well as interviews with both professional captioners and regular viewers of closed captioning. Zdenek’s analysis is an illuminating look at how we make the audible visible, one that proves that better standards for closed captioning create a better entertainment experience for all viewers.