[[!meta title="Captioning tips"]]
[[!meta copyright="Copyright © 2021 Sacha Chua"]]
Captions are great for making videos (especially technical ones!)
easier to understand and search.

If you see a talk that you'd like to caption, feel free to download it
and start working on it with your favourite subtitle editor. Let me
know which talk you pick by e-mailing me at <sacha@sachachua.com> so
that I can update the index and try to avoid duplication of work.
[Find talks that need captions here](https://emacsconf.org/help_with_main_captions).
We've been using <https://github.com/sachac/subed> to caption things
as VTT or SRT in Emacs, often starting with autogenerated captions
from YouTube (the .vtt or .srt file), but you're welcome to make
captions using your favourite tool.
We'll be posting VTT files so that they can be included by the HTML5 video
player (demo: <https://emacsconf.org/2021/talks/news/>). If you use a
different tool that produces another format (such as SRT or ASS), that's
fine as long as it can be converted to VTT. You can e-mail me the
subtitles when you're done, and then I can merge them into the video.

# Formatting tips
I generally find it easier to start with the autogenerated captions
and then refer to any resources provided by the speaker in order to
figure out spelling. Sometimes speakers provide pretty complete
scripts, which is great, but speakers also tend to add extra words when they actually present. I
tried uploading the scripts to YouTube in order to get YouTube to
automatically align the text, but then the timing information wasn't
granular enough for easy splitting, so correcting the autogenerated
captions myself seemed to be easier. I use some code in my
[subed configuration](https://sachachua.com/dotemacs/#subed) (see
`my-subed-fix-common-error` and `my-subed-common-edits`) to help with
capitalization and commonly misrecognized words.
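
The actual code in my config is a bit long, but the core idea is just a
list of replacements applied to the buffer. Here's a rough sketch along
those lines; the names and the example words below are made up for
illustration, not the real `my-subed-common-edits` code:

```elisp
;; A rough sketch of the idea, not the actual code from my config:
;; scan the buffer for commonly misrecognized words and replace them.
(defvar my-caption-common-edits
  '(("emax" . "Emacs")
    ("e max" . "Emacs")
    ("org mode" . "Org Mode"))
  "Commonly misrecognized words (made-up examples) and their replacements.")

(defun my-caption-fix-common-edits ()
  "Replace commonly misrecognized words in the current buffer."
  (interactive)
  (save-excursion
    (dolist (edit my-caption-common-edits)
      (goto-char (point-min))
      (while (re-search-forward
              (concat "\\b" (regexp-quote (car edit)) "\\b") nil t)
        (replace-match (cdr edit) t t)))))
```
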
Please keep captions to one line each so that they can be displayed
without wrapping, as we plan to broadcast by resizing the video and
displaying open captions below. Maybe 50 characters max? Since the
captions are also displayed as text on the talk pages, you can omit
filler words. We've also been trying to break captions at reasonable
points (ex: phrases).
For example, instead of:

- so i'm going to talk today about a
- fun rewrite i did of uh of the bindat
- package

I would probably edit it to be more like:

- So I'm going to talk today
- about a fun rewrite I did
- of the bindat package.
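
If you want a quick way to spot captions that are still too long, a
little helper like this might help. It's a hypothetical function, not
part of subed; it just scans the buffer for long lines that aren't cue
timings:

```elisp
;; A hypothetical helper, not part of subed: list caption lines longer
;; than MAX-LENGTH characters so you know where to re-split.
(defun my-caption-find-long-lines (&optional max-length)
  "List caption lines longer than MAX-LENGTH characters (50 by default)."
  (interactive)
  (let ((max-length (or max-length 50)))
    (save-excursion
      (goto-char (point-min))
      (while (not (eobp))
        (let ((line (buffer-substring-no-properties
                     (line-beginning-position) (line-end-position))))
          ;; Skip cue timing lines like 00:00:05.000 --> 00:00:07.000.
          (when (and (> (length line) max-length)
                     (not (string-match-p "-->" line)))
            (message "Line %d is %d characters: %s"
                     (line-number-at-pos) (length line) line)))
        (forward-line 1)))))
```
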
# Editing autogenerated captions
If you want to take advantage of the autogenerated captions and the
word-level timing data from YouTube, you can start with the VTT file
for the video you want, then use `my-caption-load-word-data` from
<https://sachachua.com/dotemacs/#word-level> to load the srv2 file
(also attached), and then use `my-caption-split` to split using the
word timing data if possible. You can bind this to a keystroke with
something like `M-x local-set-key M-' my-caption-split`.
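
If you'd rather put that in your configuration instead of calling
`local-set-key` by hand, something like this sketch should work,
assuming subed is installed and provides `subed-mode-map`, and that
you've loaded `my-caption-split` from the link above:

```elisp
;; A minimal sketch: bind the splitting command in subed buffers.
;; my-caption-split comes from https://sachachua.com/dotemacs/#word-level
;; and this assumes subed provides subed-mode-map.
(with-eval-after-load 'subed
  (define-key subed-mode-map (kbd "M-'") #'my-caption-split))
```
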
# Starting from a script
Some talks don't have autogenerated captions because YouTube didn't
produce any. Whenever the speaker has provided a script, you can use
that as a starting point. I generally start by making a VTT file with
one subtitle spanning the whole video, like this:
```text
WEBVTT

00:00:00.000 --> 00:39:07.000
If the speaker provided a script, I usually put the script under this heading.
```
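
If you end up doing this for several talks, a small helper like this
hypothetical one can wrap a plain-text script in a single cue; just
give it an end time that's at least as long as the video:

```elisp
(require 'subr-x)  ; for string-trim

;; A hypothetical helper, not part of subed: wrap a plain-text script
;; in a single VTT cue covering the whole video.
(defun my-script-to-vtt (script-file vtt-file duration)
  "Write SCRIPT-FILE as one cue from 00:00:00.000 to DURATION in VTT-FILE.
DURATION is a timestamp string such as \"00:39:07.000\".  Blank lines
are collapsed because a blank line would end the cue early."
  (with-temp-file vtt-file
    (insert "WEBVTT\n\n00:00:00.000 --> " duration "\n")
    (insert (with-temp-buffer
              (insert-file-contents script-file)
              (goto-char (point-min))
              (while (re-search-forward "\n\n+" nil t)
                (replace-match "\n"))
              (string-trim (buffer-string))))
    (insert "\n")))
```

Then you can open the resulting file with subed and split it up as described next.
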
I move point to a good stopping point for a phrase, toggle playback
with `M-SPC`, and then use `M-.` (`subed-split-subtitle`) when the
player reaches that point. If it's too fast, I use `M-j` to repeat the
current subtitle.

# Starting from scratch
Sometimes there are no autogenerated captions and there's no script.
Then I guess we just have to type it by hand.
I generally start by making a VTT file with
one subtitle spanning the whole video, like this:
```text
WEBVTT

00:00:00.000 --> 00:39:07.000
```
Then I start playback and type, using `M-.` (`subed-split-subtitle`)
to split after I've typed a reasonable length for a subtitle. If it's
too fast, I use `M-j` to repeat the current subtitle.
Please let me know if you need any help!
Sacha <sacha@sachachua.com>