As the lines between subtitles and captions continue to blur, perhaps no distinction has become more confusing than the one between subtitles for the d/Deaf and hard of hearing (SDH) and closed captions (CC).
Both subtitles and captions are timed text files synchronized to media content, allowing the text to be viewed at the same time the words are being spoken. Captions and subtitles can be open (always visible, burned into the picture) or closed (toggled on or off by the viewer).
Closed captions are designed for d/Deaf and hard-of-hearing audiences. They communicate all audio information, including sound effects, speaker IDs, and non-speech elements. They originated in the 1970s and are required by law for most video programming in the United States and Canada.
SDH subtitles often emulate closed captions on media that cannot carry them, such as video delivered over digital connections like HDMI or through OTT platforms. Because many streaming platforms, such as Netflix, cannot support standard broadcast Line 21 closed captions, demand has grown for English SDH subtitles styled to resemble FCC-compliant closed captions.
Meanwhile, rapid developments in streaming content and the globalization of media have shaken up the popular nomenclature around the world, leaving viewers and users of these accessibility services wondering how SDH and CC actually differ.
Both SDH subtitles and closed captions support on-screen placement. Both are typically positioned at the bottom center of the screen and moved to the top when necessary to avoid lower-third graphics. Some styles of CC also use horizontal placement to indicate speaker changes.
Caption placement is usually set by the captioner and cannot be adjusted by the viewer unless the captions are formatted to the CEA-708 standard. Under FCC rules, captions must be positioned so that they do not cover important lower-third graphics.
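To make placement concrete, here is a minimal sketch using the browser's WebVTT API, assuming a page that already contains a <video> element; the cue text and timings are invented for illustration.

```typescript
// Minimal sketch of caption placement via the browser's WebVTT API.
// Assumes a page with a <video> element; cue text and timings are invented.
const video = document.querySelector("video")!;
const track = video.addTextTrack("captions", "English (CC)", "en");

// Conventional default placement: bottom center.
const bottom = new VTTCue(0, 4, "So, where do we start?");
bottom.line = -1;        // count lines from the bottom edge of the video
bottom.align = "center";

// Moved to the top so a lower-third graphic stays visible.
const top = new VTTCue(4, 8, "[tense music]");
top.snapToLines = false; // interpret `line` as a percentage
top.line = 10;           // 10% down from the top edge
top.align = "center";

track.addCue(bottom);
track.addCue(top);
track.mode = "showing";  // render the cues
```

The same settings can also be written directly in a .vtt file by appending cue settings such as line: and align: to a cue's timing line.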
Streaming services that follow this trend include Netflix and Amazon.

Encoding

The move from analog television to high-definition (HD) media over the last 20 years had major implications for the encoding of closed captions and subtitles.
HD disc media, like Blu-ray, does not support traditional closed captioning but is compatible with SDH subtitles. The same goes for some streaming services and OTT platforms, where SDH formats are increasingly used because those platforms cannot carry traditional Line 21 broadcast closed captions. That said, some classic captioning formats, like SCC, have proven versatile across television and digital formats.
Apple TV+ is one such platform, offering a wide array of accessibility choices for viewers on select programming. Depending on the program, a viewer may find themselves choosing between CC and SDH. So why offer both?
The answer varies by platform, but by offering both options, viewers can choose the format they prefer. In situations where no distinction is made between CC and SDH, the files may be considered one and the same.
Like many media accessibility services, CC and SDH are nuanced, and it is tricky to declare either one a single solution designed for a single purpose. In the greater scheme of timed text, either option offered by a television network or streaming platform will provide an accessible experience for viewers.
Closed captioning (CC) and subtitling are both processes of displaying text on a television, video screen, or other visual display to provide additional or interpretive information. Both are typically used to transcribe the audio portion of a program as it occurs (either verbatim or in edited form), sometimes including descriptions of non-speech elements. Other uses include providing a textual translation of a presentation's primary audio into another language, usually burned in (or "open") to the video and unselectable.
HTML5 defines subtitles as a "transcription or translation of the dialogue when sound is available but not understood" by the viewer (for example, dialogue in a foreign language) and captions as a "transcription or translation of the dialogue, sound effects, relevant musical cues, and other relevant audio information when sound is unavailable or not clearly audible" (for example, when audio is muted or the viewer is deaf or hard of hearing).[1]
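As a rough illustration of that HTML5 distinction, the sketch below attaches both track kinds to a video element; the file names and labels are hypothetical placeholders, not real assets.

```typescript
// Sketch of HTML5's subtitles-vs-captions distinction via <track> kinds.
// File names and labels are hypothetical placeholders.
const video = document.createElement("video");
video.src = "movie.mp4";
video.controls = true;

const subtitles = document.createElement("track");
subtitles.kind = "subtitles";   // sound audible but not understood (e.g., foreign dialogue)
subtitles.srclang = "es";
subtitles.label = "Español";
subtitles.src = "movie.es.vtt";

const captions = document.createElement("track");
captions.kind = "captions";     // sound unavailable or unclear; includes effects and music cues
captions.srclang = "en";
captions.label = "English (CC)";
captions.src = "movie.en-cc.vtt";
captions.default = true;

video.append(subtitles, captions);
document.body.append(video);
```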
Closed captioning was first demonstrated in the United States at the First National Conference on Television for the Hearing Impaired at the University of Tennessee in Knoxville, Tennessee, in December 1971.[2] A second demonstration of closed captioning was held at Gallaudet College (now Gallaudet University) on February 15, 1972, where ABC and the National Bureau of Standards demonstrated closed captions embedded within a normal broadcast of The Mod Squad. At the same time in the UK, the BBC was demonstrating its Ceefax text-based broadcast service, which it was already using as a foundation for the development of a closed caption production system. The BBC worked with Professor Alan Newell of the University of Southampton, who had been developing prototypes in the late 1960s.
The closed captioning system was successfully encoded and broadcast in 1973 with the cooperation of PBS station WETA.[2] As a result of these tests, the FCC in 1976 set aside line 21 for the transmission of closed captions. PBS engineers then developed the caption editing consoles that would be used to caption prerecorded programs.
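For a sense of what line 21 data carries, here is a hedged sketch of decoding a single EIA-608 byte pair; the function names are ours, and the character mapping is simplified (real EIA-608 deviates from ASCII for a handful of codes).

```typescript
// Hedged sketch: line 21 carries two bytes per field per frame, each a
// 7-bit value with an odd-parity bit in the most significant bit.
function oddParityOk(byte: number): boolean {
  let ones = 0;
  for (let i = 0; i < 8; i++) ones += (byte >> i) & 1;
  return ones % 2 === 1;
}

// Returns the decoded text of one byte pair, or null for errors/control codes.
function decodePair(b1: number, b2: number): string | null {
  if (!oddParityOk(b1) || !oddParityOk(b2)) return null; // transmission error
  const c1 = b1 & 0x7f; // strip the parity bit
  const c2 = b2 & 0x7f;
  if (c1 < 0x20) return null; // control pair (placement, color, mode), not text
  return String.fromCharCode(c1) + (c2 >= 0x20 ? String.fromCharCode(c2) : "");
}
```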
Real-time captioning, a process for captioning live broadcasts, was developed by the National Captioning Institute in 1982.[2] In real-time captioning, stenotype operators who are able to type at speeds of over 225 words per minute provide captions for live television programs, allowing the viewer to see the captions within two to three seconds of the words being spoken.
Major US producers of captions are WGBH-TV, VITAC, CaptionMax, and the National Captioning Institute. In the UK and Australasia, Ai-Media, Red Bee Media, itfc, and Independent Media Support are the major vendors.
Improvements in speech recognition technology mean that live captioning may be fully or partially automated. BBC Sport broadcasts use a "respeaker": a trained human who repeats the running commentary (with careful enunciation and some simplification and markup) as input to the automated text generation system. This is generally reliable, though errors are not unknown.[3]
The first use of regularly scheduled closed captioning on American television occurred on March 16, 1980.[4] Sears had developed and sold the Telecaption adapter, a decoding unit that could be connected to a standard television set. The first programs seen with captioning were a Disney's Wonderful World presentation of the film Son of Flubber on NBC, an ABC Sunday Night Movie airing of Semi-Tough, and Masterpiece Theatre on PBS.[5]
Until the passage of the Television Decoder Circuitry Act of 1990, television captioning was performed by a set-top box manufactured by Sanyo Electric and marketed by the National Captioning Institute (NCI). (At that time a set-top decoder cost about as much as a TV set itself, approximately $200.) Through discussions with the manufacturer it was established that the appropriate circuitry integrated into the television set would be less expensive than the stand-alone box, and Ronald May, then a Sanyo employee, provided the expert witness testimony on behalf of Sanyo and Gallaudet University in support of the passage of the bill. On January 23, 1991, the Television Decoder Circuitry Act of 1990 was passed by Congress.[2] This Act gave the Federal Communications Commission (FCC) power to enact rules on the implementation of closed captioning. It required all analog television receivers with screens 13 inches or larger, whether sold or manufactured, to have the ability to display closed captioning by July 1, 1993.[6]
The Federal Communications Commission requires all providers of programs to caption material which has audio in English or Spanish, with certain exceptions specified in Section 79.1(d) of the commission's rules. These exceptions apply to new networks; programs in languages other than English or Spanish; networks having to spend over 2% of income on captioning; networks having less than US$3,000,000 in revenue; and certain local programs; among other exceptions.[7] Those who are not covered by the exceptions may apply for a hardship waiver.[8]
The Telecommunications Act of 1996 expanded on the Decoder Circuitry Act to place the same requirements on digital television receivers by July 1, 2002.[9] All TV programming distributors in the U.S. are required to provide closed captions for Spanish-language video programming as of January 1, 2010.[10]
A bill, H.R. 3101, the Twenty-First Century Communications and Video Accessibility Act of 2010, was passed by the United States House of Representatives in July 2010.[11] A similar bill, S. 3304, with the same name, was passed by the United States Senate on August 5, 2010, by the House of Representatives on September 28, 2010, and was signed by President Barack Obama on October 8, 2010. The Act requires, in part, that remotes for ATSC-decoding set-top boxes have a button to turn the closed captioning in the output signal on or off. It also requires broadcasters to provide captioning for television programs redistributed on the Internet.[12]
On February 20, 2014, the FCC unanimously approved the implementation of quality standards for closed captioning,[13] addressing accuracy, timing, completeness, and placement. This was the first time the FCC had addressed quality issues in captions.
In 2015, a law was passed in Hawaii requiring two screenings a week of each movie with captions on the screen. In 2022 a law took effect in New York City requiring movie theaters to offer captions on the screen for up to four showtimes per movie each week, including weekends and Friday nights.[14]