PowerPoint for Microsoft 365 can transcribe your words as you present and display them on-screen as captions in the same language you are speaking, or as subtitles translated to another language. Captions can help accommodate audience members who are deaf or hard of hearing, and translated subtitles can help those who are more familiar with another language.
You can choose which language you want to speak while presenting, and which language the caption/subtitle text should be shown in (i.e., whether you want it translated). You can select the specific microphone you want to use (if more than one microphone is connected to your device), the position where the subtitles appear on the screen (bottom or top, and overlaid or separate from the slide), and other display options.
Use Spoken Language to see the voice languages that PowerPoint can recognize, and select the one you want. This is the language that you will be speaking while presenting. (By default, this is set to match your Office editing language.)
Use Subtitle Language to see which languages PowerPoint can display on-screen as captions or subtitles, and select the one you want. This is the language of the text that will be shown to your audience. By default, this will be the same language as your Spoken Language, but it can be a different language, meaning that translation will occur.
In the Subtitle Settings menu, set the desired position of the captions or subtitles. They can appear over the top or bottom margin of the slide (overlaid), or they can appear above the top or below the bottom of the slide (docked). The default setting is Below Slide.
If you're in the middle of giving a presentation and want to turn the feature on or off, click the Toggle Subtitles button from Slide Show View or Presenter View, on the toolbar below the main slide:
To have subtitles always start up when a Slide Show presentation starts, from the ribbon you can navigate to Slide Show > Always Use Subtitles to turn this feature on for all presentations. (By default, it's off.) Then, in Slide Show and Presenter View, a live transcription of your words will appear on-screen.
Use Spoken Language to see the voice languages that PowerPoint can recognize, and select the one you want. This is the language that you will be speaking while presenting. (By default, this will be set to the language corresponding to your Office language.)
You can choose which language you want to speak while presenting, and which language the caption/subtitle text should be shown in (i.e., if you want it to be translated). You can also select whether subtitles appear at the top or bottom of the screen.
Use Spoken Language to see the voice languages that PowerPoint can recognize, and select the one you want. This is the language that you will be speaking while presenting. (By default, this will be set to the language corresponding to the locale of your web browser.)
Use Subtitle Language to see which languages PowerPoint can display on-screen as captions or subtitles, and select the one you want. This is the language of the text that will be shown to your audience. (By default, this will be the same language as your Spoken Language, but it can be a different language, meaning that translation will occur.)
Several spoken languages are supported as voice input to live captions & subtitles in PowerPoint for Microsoft 365. The languages marked as Preview are offered in advance of full support, and generally will have somewhat lower accuracy, which will improve over time.
PowerPoint live captions & subtitles is one of the cloud-enhanced features in Microsoft 365 and is powered by Microsoft Speech Services. Your speech utterances will be sent to Microsoft to provide you with this service. For more information, see Make Office Work Smarter for You.
Microsoft wants to provide the best possible experience for all our customers. If you have a disability or questions related to accessibility, please contact the Microsoft Disability Answer Desk for technical assistance. The Disability Answer Desk support team is trained in using many popular assistive technologies and can offer assistance in English, Spanish, French, and American Sign Language. Please go to the Microsoft Disability Answer Desk site to find out the contact details for your region.
Captions (subtitles) are available on videos where the owner has added them, and on some videos where YouTube automatically adds them. You can change the default settings for captions on your computer or mobile device.
Some people use the terms closed captions and subtitles interchangeably, since both are text versions of the audio in a video. However, there are key differences between the two. Getting these services mixed up can lead to problems, especially if you require a service like closed captioning for a film or live event.
Closed captions are created to allow people who are deaf or hard of hearing to experience the video, so they also include background sounds and speaker changes. On the other hand, subtitles assume that the viewer can hear the audio and as a result do not contain the background sounds or notifications for speaker changes.
Closed captioning (CC) and subtitling both include displaying text on a television, video screen, or other visual display to provide access to the audio track in a different form. Beyond this similarity, the two are very different services.
Closed captions are a textual representation of the audio within a media file. They make video accessible to deaf and hard-of-hearing viewers by providing a time-synchronized text track as a supplement to, or as a substitute for, the audio.
While the text within a closed caption file consists predominantly of speech, captions also include non-speech elements like speaker IDs and sound effects that are critical to understanding the plot of the video.
Unlike captions, subtitles do not include the non-speech elements of the audio (like sounds or speaker identifications). Subtitles are also not considered an appropriate accommodation for deaf and hard of hearing viewers.
The easiest way to create open captions is to hire a professional captioning company that offers open caption encoding. Open caption encoding can be tricky to do yourself. It can be time-consuming and often requires expensive video software.
Closed caption quality matters because closed captions are meant to be an equivalent alternative to video for individuals with hearing loss. When closed captions are inaccurate, they are inaccessible.
Studies have shown that even a 95% accuracy rate is sometimes insufficient to accurately convey complex material. For a typical sentence length of 8 words, a 95% word accuracy rate means there will be an error, on average, every 2.5 sentences.
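The arithmetic behind that claim can be checked directly; this short sketch uses only the numbers given above (95% word accuracy, a typical 8-word sentence):

```python
# Back-of-the-envelope check: at 95% word accuracy,
# 1 word in 20 is wrong on average.
word_accuracy = 0.95
words_per_sentence = 8  # typical sentence length from the paragraph above

words_per_error = 1 / (1 - word_accuracy)          # how many words between errors
sentences_per_error = words_per_error / words_per_sentence

print(round(words_per_error))         # 20 words between errors
print(round(sentences_per_error, 1))  # 2.5 sentences between errors
```

So even a seemingly high accuracy rate still produces an error every two to three sentences of typical length.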
Knowing how a captioning vendor measures its accuracy rate is important. For example, some closed captioning vendors treat punctuation errors as subjective, even though an em dash, period, or comma can make all the difference to the meaning of a sentence.
Regarding accuracy, the FCC states that closed captions must match the spoken words in the audio to the fullest extent possible. This includes preserving any slang or accents in the content and adding non-speech elements. For live captioning, some leniency does apply.
WCAG 2.0 has three levels of compliance: Level A, AA, and AAA. Level A is the easiest to meet, while Level AAA is the hardest. Most web accessibility laws require compliance with Level A and/or AA.
Lastly, an integration or an API workflow is a way to automate the process of adding closed captions. Essentially, you create a link between your captioning vendor and your video platform, allowing the vendor to automatically post finished captions back to the original video file.
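As a rough illustration of what such a workflow involves, the sketch below builds a caption-job request for a hypothetical vendor API. The endpoint, field names, and `build_caption_job` helper are all illustrative assumptions, not any real vendor's API; the general pattern is: submit the media, let the vendor process it, and receive finished captions at a callback URL.

```python
import json

# Hypothetical captioning-vendor endpoint (placeholder, not a real service).
API_BASE = "https://api.example-captioning.com/v1"

def build_caption_job(video_url: str, callback_url: str, language: str = "en") -> dict:
    """Build the request body for a hypothetical caption-job submission."""
    return {
        "media_url": video_url,        # where the vendor fetches the video
        "language": language,          # spoken language of the audio
        "callback_url": callback_url,  # vendor POSTs finished captions here
        "formats": ["srt", "vtt"],     # caption file formats to return
    }

job = build_caption_job(
    "https://videos.example.com/lecture-01.mp4",
    "https://videos.example.com/webhooks/captions",
)
print(json.dumps(job, indent=2))
```

In a real integration, this payload would be sent to the vendor's job-submission endpoint, and the callback handler would attach the returned caption file to the original video.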
Did you know that more than 500 million hours of video are watched on YouTube each day? YouTube is pretty much the king of video content on the internet. In fact, every 60 seconds, 72 hours of video are uploaded to the platform.
Always be careful with YouTube closed captioning and be sure to edit the final closed caption file before publishing. If you upload poor-quality captions, Google will flag your content as spam and penalize you in search results.
Adding technology into the mix can cut your time by more than half. On average, a trained transcriptionist can take four to five hours to transcribe one hour of audio or video content from scratch. For an untrained novice, this can take much longer.
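The time savings are easy to estimate from the figures above; the numbers in this sketch (a 4.5x midpoint of the four-to-five-hour estimate, and a conservative "half the time" for technology-assisted work) are taken from the paragraph, not from any measured benchmark:

```python
# Rough estimate of transcription effort for a given amount of audio.
hours_of_audio = 2.0
manual_ratio = 4.5               # midpoint of the 4-5 hours-per-hour estimate
assisted_ratio = manual_ratio / 2  # "more than half" -> at most half the time

print(hours_of_audio * manual_ratio)    # 9.0 hours transcribing from scratch
print(hours_of_audio * assisted_ratio)  # 4.5 hours with technology assisting
```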