[[!meta title="Captioning tips"]]
[[!meta copyright="Copyright © 2021, 2022 Sacha Chua"]]

Captions are great for making videos (especially technical ones!)
easier to understand and search.

If you see a talk that you'd like to caption, feel free to download it
and start working on it with your favourite subtitle editor. Let me
know what you pick by e-mailing me at <sacha@sachachua.com> so that I
can update the backstage index and try to avoid duplication of work. [Find talks that need captions here](https://emacsconf.org/help_with_main_captions). You can also help by [adding chapter markers to Q&A sessions](https://emacsconf.org/help_with_chapter_markers).

You're welcome to work with captions using your favourite tool. We've
been using <https://github.com/sachac/subed> to caption things as VTT
or SRT in Emacs, often starting with autogenerated captions from
OpenAI Whisper or WhisperX (the .vtt file backstage).
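
If you want to regenerate that rough starting point yourself, the
openai-whisper command-line tool can write WebVTT directly. This is
just a sketch; the filename, model size, and language below are
placeholders:

```sh
# Produce rough WebVTT captions with OpenAI Whisper.
# The filename, model, and language are only examples.
whisper the-talk.webm --model small --language en --output_format vtt
```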

We'll be posting VTT files so that they can be included by the HTML5
video player (demo: <https://emacsconf.org/2021/talks/news/>), so if
you use a different tool that produces another format, any format that
can be converted into that one (like SRT or ASS) is fine. `subed` has
a `subed-convert` command that might be useful for turning WebVTT
files into tab-separated values (TSV) and back again, if you prefer a
more concise format.
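
For example, if your tool saves SRT, `ffmpeg` can usually convert it
to WebVTT based on the output filename's extension (the filenames here
are just placeholders):

```sh
# Convert SRT captions to WebVTT; ffmpeg picks the format from the extension.
ffmpeg -i captions.srt captions.vtt
```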

You can e-mail me the subtitles when you're done, and then I can merge
them into the video.

You might find it easier to start with the autogenerated captions and
then refer to the video or any resources provided by the speaker in
order to figure out spelling. Sometimes speakers provide pretty
complete scripts, which is great, but they also tend to add extra
words when they actually present.

# Edit the VTT to fix misrecognized words

The first step is to edit misrecognized words. VTT files are plain text, so
you can edit them with regular `text-mode` if you want to. If you're
editing subtitles within Emacs,
[subed](https://github.com/sachac/subed) can conveniently synchronize
video playback with subtitle editing, which makes it easier to figure
out technical words. subed tries to load the video based on the
filename, but if it can't find it, you can use `C-c C-v`
(`subed-mpv-find-media`) to play a file or `C-c C-u` to play a URL.

Look for misrecognized words and edit them. We also like to change
things to follow Emacs keybinding conventions (C-c instead of Control C). We sometimes spell out
acronyms on first use or add extra information in brackets. The
captions will be used in a transcript as well, so you can add
punctuation, remove filler words, and try to make it read better.

Sometimes you may want to tweak how the captions are split. You can
use `M-j` (`subed-jump-to-current-subtitle`) to jump to the caption if
you're not already on it, listen for the right spot, and maybe use
`M-SPC` to toggle playback. Use `M-.` (`subed-split-subtitle`) to
split a caption at the current MPV playing position and `M-m`
(`subed-merge-with-next`) to merge a subtitle with the next one.

If you don't understand a word or phrase, add two
question marks (`[??]`) and move on. We'll ask the
speakers to review the subtitles and can sort that
out then.

If there are multiple speakers, you can indicate switches between speakers
with a `[speaker-name]:` tag, or just leave it plain.
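
To illustrate, here's an invented snippet that puts these conventions
together (the timestamps, speaker names, and wording are all made up):

```text
00:01:02.000 --> 00:01:05.000
[speaker-1]: Then I press C-c C-e to export it,

00:01:05.001 --> 00:01:08.000
[speaker-2]: Right, and that uses the [??] backend.
```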


<video src="https://media.emacsconf.org/editing.webm" controls=""></video>

Once you've gotten the hang of things, it might take anywhere from 1x
to 4x the video time to edit captions.

# Subtitle timing

Times don't need to be very precise. If you notice
that the times are way out of whack and it's getting
in the way of your subtitling, we can adjust the
times using the
[aeneas forced alignment tool](https://www.readbeyond.it/aeneas/)
and `subed-align`.

## Splitting and merging subtitles

If you want to split and merge subtitles, you can
use `M-.` (`subed-split-subtitle`) and `M-m`
(`subed-merge-dwim`). If the playback position is
in the current subtitle, splitting will use the
playback position. If it isn't, it will guess an
appropriate time based on characters per second
for the current subtitle.

## Splitting with word-level timing data

If there is a `.json` or `.srv2` file with
word-level timing data, you can load it with
`subed-word-data-load-from-file` from
`subed-word-data.el` in the subed package. You can
then split with the usual `M-.`
(`subed-split-subtitle`), and it should use
word-level timestamps when available.

# Playing your subtitles together with the video

MPV should automatically load subtitle files if
they're in the same directory as the video. To
load a specific subtitle file in MPV, you can use
the `--sub-file=` or `--sub-files=` command-line
argument.
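
For example (the filenames here are just placeholders):

```sh
# Play a video with a specific caption file loaded.
mpv --sub-file=the-talk.vtt the-talk.webm
```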

If you're using subed, the video should autoplay if it's named the
same as your subtitle file. If not, you can use `C-c C-v`
(`subed-mpv-play-from-file`) to load the video file. You can toggle
looping over the current subtitle with `C-c C-l`
(`subed-toggle-loop-over-current-subtitle`), synchronizing player to
point with `C-c ,` (`subed-toggle-sync-player-to-point`), and
synchronizing point to player with `C-c .`
(`subed-toggle-sync-point-to-player`).

# Starting from a script

Some talks don't have autogenerated captions, or you may prefer to
start from scratch. Whenever the speaker has provided a script, you
can use that as a starting point. One way is to start by making a VTT
file with one subtitle spanning the whole video, like this:

```text
WEBVTT

00:00:00.000 --> 00:39:07.000
If the speaker provided a script, I usually put the whole script in this one subtitle.
```

If you're using subed, you can move point to a good stopping point for
a phrase, use `M-SPC` to toggle playback, and then use `M-.`
(`subed-split-subtitle`) when the player reaches that point. If it's
too fast, use `M-j` to repeat the current subtitle.

# Starting from scratch

One option is to send us a text file with just the text transcript in it 
and not worry about the timestamps. We can figure out the timing using
[aeneas for forced alignment](https://www.readbeyond.it/aeneas/). 
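
For reference, a forced-alignment run looks roughly like this. The
filenames are placeholders, and the configuration string is worth
double-checking against the aeneas documentation:

```sh
# Align a plain-text transcript (one phrase per line) with the audio
# and write out timed subtitles. Filenames and settings are examples.
python -m aeneas.tools.execute_task \
    the-talk.wav \
    the-talk.txt \
    "task_language=eng|is_text_type=plain|os_task_file_format=srt" \
    the-talk.srt
```

The resulting SRT can then be converted to VTT as described above.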

If you want to try timing as you go, you might
find it easier to start by making a VTT file with
one subtitle spanning the whole video (either
using the video duration or a very large
duration), like this:

```text
WEBVTT

00:00:00.000 --> 24:00:00.000
```

Use `C-c C-p` (`subed-toggle-pause-while-typing`)
to automatically pause when typing. Then start
playback with `M-SPC` and type, using `M-.`
(`subed-split-subtitle`) to split after a
reasonable length for a subtitle. If it's too
fast, use `M-j` to repeat the current subtitle or
adjust `subed-mpv-playback-speed`.

# Chapter markers

In addition to the captions, you may also want to add chapter markers.
An easy way to do that is to add a comment of the form `NOTE Chapter heading`
before the subtitle that starts the chapter. For example:

```text
...
00:05:13.880 --> 00:05:20.119
So yeah, like that's currently the problem.

NOTE Embeddings

00:05:20.120 --> 00:05:23.399
So I want to talk about embeddings.
...
```

We can then extract those with
`emacsconf-subed-make-chapter-file-based-on-comments`.

For an example of how chapter markers allow people to quickly navigate
videos, see <https://emacsconf.org/2021/talks/bindat/>.

Please let us know if you need any help!

Sacha <sacha@sachachua.com>