---
id: closed-captions
title: Closed captions
description: How to add closed captions to your calls
---

The Stream API supports adding real-time closed captions (subtitles for participants) to your calls. This guide shows you how to implement this feature on the client side.

## Call and call type settings

The closed caption feature can be controlled with the following options:

- `available`: the feature is available for your call and can be enabled.
- `disabled`: the feature is not available for your call. In this case, it's a good idea to hide any closed-caption-related UI elements.
- `auto-on`: the feature is available and will be enabled automatically once the user is connected to the call.

This setting can be configured at the call or call type level.

You can check the current value like this:

```typescript
console.log(call.state.settings?.transcription.closed_caption_mode);
```
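
As a sketch of how your UI might react to this setting (assuming the mode values match the option names listed above):

```typescript
const mode = call.state.settings?.transcription.closed_caption_mode;

switch (mode) {
  case 'disabled':
    // Hide all closed-caption-related UI elements
    break;
  case 'available':
    // Show a control that lets the user turn captions on
    break;
  case 'auto-on':
    // Captions start automatically; show the caption area right away
    break;
}
```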

## Closed caption events

If closed captions are enabled for a given call, you'll receive the captions in `call.closed_caption` events. Below you can find an example payload:

```
{
  "type": "call.closed_caption",
  "created_at": "2024-09-25T12:22:25.067005915Z",
  "call_cid": "default:test",
  "closed_caption": {
    "text": "Thank you, guys, for listening.",
    // When the speaker started speaking
    "start_time": "2024-09-25T12:22:21.310735726Z",
    // When the speaker finished saying the caption
    "end_time": "2024-09-25T12:22:24.310735726Z",
    "speaker_id": "zitaszuperagetstreamio"
  }
}
```
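
As a minimal sketch, a handler that simply logs incoming captions could look like this (the next section shows a complete display implementation):

```typescript
import { ClosedCaptionEvent } from '@stream-io/video-client';

call.on('call.closed_caption', (event: ClosedCaptionEvent) => {
  const { speaker_id, text } = event.closed_caption;
  console.log(`${speaker_id}: ${text}`);
});
```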

## Displaying the captions

When displaying closed captions, there are two goals to balance: captions should appear in (near) real time (a sentence from 30 seconds ago has little use in a conversation), and each caption should stay visible long enough for participants to read it.

Below is an example implementation:

```typescript
import {
  Call,
  CallClosedCaption,
  ClosedCaptionEvent,
} from '@stream-io/video-client';

// The captions queue
let captions: (CallClosedCaption & { speaker_name?: string })[] = [];
// The maximum number of captions visible on the screen at once
const numberOfCaptionsVisible = 2;
// A single caption can stay visible on the screen for this duration.
// This is the maximum: new captions can push a caption off the screen sooner.
const captionTimeoutMs = 2700;

// Subscribe to call.closed_caption events
// (`call` is the Call instance you've already joined)
call.on('call.closed_caption', (event: ClosedCaptionEvent) => {
  const caption = event.closed_caption;
  // It's possible to receive the same caption twice, so filter out duplicates
  const isDuplicate = captions.some(
    (c) =>
      c.speaker_id === caption.speaker_id &&
      c.start_time === caption.start_time,
  );
  if (!isDuplicate) {
    // Look up the speaker's name based on the user id
    const speaker = call.state.participants.find(
      (p) => p.userId === caption.speaker_id,
    );
    const speakerName = speaker?.name || speaker?.userId;
    // Add the caption to the queue
    captions.push({ ...caption, speaker_name: speakerName });
    // Update the UI
    updateDisplayedCaptions();
    // Remove the caption after the maximum visibility duration
    // (unless a newer caption has already pushed it off the screen)
    setTimeout(() => {
      captions = captions.slice(1);
      updateDisplayedCaptions();
    }, captionTimeoutMs);
  }
});

const updateDisplayedCaptions = () => {
  // The default implementation shows the last two captions
  const displayedCaptions = captions.slice(-1 * numberOfCaptionsVisible);
  const captionsHTML = displayedCaptions
    .map((c) => `<b>${c.speaker_name}:</b> ${c.text}`)
    .join('<br>');
  // Update the UI, e.g. by rendering `captionsHTML` into your captions container
};
```

:::note
Since the closed caption event contains `start_time` and `end_time` fields, you can subtract the two to know how long it took the speaker to say the caption. You can then use this duration to control how long the text is visible on the screen. This keeps the captions as real-time as possible, but short durations might not leave participants enough time to read the text.
:::
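
For example, a minimal sketch of duration-based timing (the `minVisibleMs` floor is a hypothetical value, not part of the API):

```typescript
import { CallClosedCaption } from '@stream-io/video-client';

// Hypothetical lower bound so short utterances stay readable
const minVisibleMs = 1500;

const getCaptionTimeoutMs = (caption: CallClosedCaption): number => {
  // How long the speaker took to say the caption, in milliseconds
  const durationMs =
    new Date(caption.end_time).getTime() -
    new Date(caption.start_time).getTime();
  return Math.max(durationMs, minVisibleMs);
};
```

You could pass this per-caption value to the `setTimeout` call in the example above instead of the fixed `captionTimeoutMs`.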

## See it in action

To see it all in action, check out our TypeScript sample application on [GitHub](https://github.com/GetStream/stream-video-js/tree/main/sample-apps/client/ts-quickstart) or in [Codesandbox](https://codesandbox.io/p/sandbox/eloquent-glitter-99th3v).