Diffstat (limited to 'captioning.md')
-rw-r--r--  captioning.md  62
1 file changed, 38 insertions(+), 24 deletions(-)
diff --git a/captioning.md b/captioning.md
index 353f28ba..ed9e6890 100644
--- a/captioning.md
+++ b/captioning.md
@@ -1,5 +1,5 @@
[[!meta title="Captioning tips"]]
-[[!meta copyright="Copyright © 2021 Sacha Chua"]]
+[[!meta copyright="Copyright © 2021, 2022 Sacha Chua"]]
Captions are great for making videos (especially technical ones!)
easier to understand and search.
@@ -22,15 +22,13 @@ subtitles when you're done, and then I can merge it into the video.
# Formatting tips
-I generally find it easier to start with the autogenerated captions
+You might find it easier to start with the autogenerated captions
and then refer to any resources provided by the speaker in order to
figure out spelling. Sometimes speakers provide pretty complete
-scripts, which is great, but they also tend to add extra words. I
-tried uploading the scripts to YouTube in order to get YouTube to
-automatically align the text, but then the timing information wasn't
-granular enough for easy splitting, so correcting the autogenerated
-captions myself seemed to be easier. I use some code in my
-[subed configuration](https://sachachua.com/dotemacs/#subed) (see
+scripts, which is great, but they also tend to add extra words.
+
+Emacs being Emacs, you can use some code (such as
+[this example subed configuration](https://sachachua.com/dotemacs/#subed)'s
`my-subed-fix-common-error` and `my-subed-common-edits`) to help with
capitalization and commonly misrecognized words.
@@ -38,8 +36,8 @@ Please keep captions to one line each so that they can be displayed
without wrapping, as we plan to broadcast by resizing the video and
displaying open captions below. Maybe 50 characters max? Since the
captions are also displayed as text on the talk pages, you can omit
-filler words. We've also been trying to break captions at reasonable
-points (ex: phrases).
+filler words. Split the captions at natural pausing points (ex:
+phrases) so that they're displayed nicely.
For example, instead of:
@@ -47,12 +45,23 @@ For example, instead of:
- fun rewrite i did of uh of the bindat
- package
-I would probably edit it to be more like:
+you can edit it to be more like:
- So I'm going to talk today
- about a fun rewrite I did
- of the bindat package.
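As a rough illustration of that one-line, ~50-character guideline, a greedy splitting pass could look something like this (a sketch, not part of subed; the helper name is made up):

```python
import re

MAX_CHARS = 50  # rough limit so open captions display without wrapping

def split_into_captions(text, max_chars=MAX_CHARS):
    """Greedily pack words into caption lines, starting a new line at
    phrase boundaries (punctuation) and whenever max_chars would be
    exceeded. A hypothetical helper for illustration only."""
    # Break on phrase boundaries first: punctuation followed by whitespace.
    phrases = re.split(r"(?<=[,.;:?!])\s+", text.strip())
    captions = []
    for phrase in phrases:
        current = ""
        for word in phrase.split():
            if current and len(current) + 1 + len(word) <= max_chars:
                current += " " + word
            else:
                if current:
                    captions.append(current)
                current = word
        if current:
            captions.append(current)
    return captions

lines = split_into_captions(
    "So I'm going to talk today about a fun rewrite I did of the bindat package."
)
for line in lines:
    print(line)
```

A real pass would still want a human eye on the results, since a greedy split can land mid-phrase when a phrase runs longer than the limit.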
+If you don't understand a word or phrase, add two question marks (??)
+and move on. We'll ask the speakers to review the subtitles and can
+sort that out then.
+
+If there are multiple speakers, indicate switches between speakers
+with a `[speaker-name]:` tag.
+
+During questions and answers, please introduce the question with a
+`[question]:` tag. When the speaker answers, use a `[speaker-name]:`
+tag to make clear who is talking.
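Put together, a Q&A exchange might look like this (timestamps, names, and wording here are made up for illustration):

```text
00:20:01.000 --> 00:20:04.000
[question]: Does this work with multiple speakers?

00:20:05.000 --> 00:20:08.000
[speaker-name]: Yes, just tag each switch between speakers.
```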
+
# Editing autogenerated captions
If you want to take advantage of the autogenerated captions and the
@@ -67,7 +76,7 @@ something like `M-x local-set-key M-' my-caption-split`.
Some talks don't have autogenerated captions because YouTube didn't
produce any. Whenever the speaker has provided a script, you can use
-that as a starting point. I generally start by making a VTT file with
+that as a starting point. One approach is to make a VTT file with
one subtitle spanning the whole video, like this:
```text
@@ -77,18 +86,23 @@ WEBVTT
If the speaker provided a script, I usually put the script under this heading.
```
-I move to the point to a good stopping point for a phrase, toggle
-playing with `M-SPC`, and then `M-.` (`subed-split-subtitle`) when the
-player reaches that point. If it's too fast, I use `M-j` to repeat the
-current subtitle.
+If you're using subed, you can move the point to a good stopping
+point for a phrase, toggle playing with `M-SPC`, and then press `M-.`
+(`subed-split-subtitle`) when the player reaches that point. If it's
+too fast, use `M-j` to repeat the current subtitle.
# Starting from scratch
-Sometimes there are no autogenerated captions and there's no script.
-Then I guess we just have to type it by hand.
+Sometimes there are no autogenerated captions and there's no script,
+so we have to start from scratch.
-I generally start by making a VTT file with
-one subtitle spanning the whole video, like this:
+You can send us a text file with just the text transcript in it and
+not worry about the timestamps. We can figure out the timing using
+[aeneas for forced alignment](https://www.readbeyond.it/aeneas/).
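For reference, a typical aeneas invocation looks something like the following (a sketch only; it assumes aeneas is installed, and the file names are placeholders — `transcript.txt` would have one caption-sized line per subtitle):

```sh
# Align transcript.txt against the talk's audio and emit timed VTT captions.
python -m aeneas.tools.execute_task \
  talk.mp3 transcript.txt \
  "task_language=eng|is_text_type=plain|os_task_file_format=vtt" \
  captions.vtt
```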
+
+If you want to try timing as you go, you might find it easier to start
+by making a VTT file with one subtitle spanning the whole video, like
+this:
```text
WEBVTT
@@ -96,10 +110,10 @@ WEBVTT
00:00:00.000 --> 00:39:07.000
```
-Then I start playback and type, using `M-.` (`subed-split-subtitle`)
-to split after I've typed a reasonable length for a subtitle. If it's
-too fast, I use `M-j` to repeat the current subtitle.
+Then start playback and type, using `M-.` (`subed-split-subtitle`) to
+split after a reasonable length for a subtitle. If it's too fast, use
+`M-j` to repeat the current subtitle.
-Please let me know if you need any help!
+Please let us know if you need any help!
Sacha <sacha@sachachua.com>