Diffstat (limited to '2022/organizers-notebook')
-rw-r--r--  2022/organizers-notebook/index.org  |  87
1 file changed, 58 insertions(+), 29 deletions(-)
diff --git a/2022/organizers-notebook/index.org b/2022/organizers-notebook/index.org
index 213c4f2b..e722c41f 100644
--- a/2022/organizers-notebook/index.org
+++ b/2022/organizers-notebook/index.org
@@ -163,28 +163,6 @@ P.S. please direct all replies to this post either to myself or to the
emacsconf-discuss list, so as to help avoid generating extra off-topic
chatter in the other lists cc'd in this message; thank you.
-** TODO Write volunteer update 2022-10-23 :update:
-:PROPERTIES:
-:CUSTOM_ID: volunteer-2022-10-23
-:END:
-
-- set up web-based upload, nudging speakers
-- Backstage area now open with three talks, info sent to speakers and captioning volunteers, jai sent in the first edited captions
-- created BBB rooms and added them to conf.org
-- dto signed up for shifts
-- playbook drafts
- - https://emacsconf.org/2022/volunteer/irc
- - https://emacsconf.org/2022/volunteer/caption
- - https://emacsconf.org/2022/volunteer/pad
- - https://emacsconf.org/2022/volunteer/checkin
- - https://emacsconf.org/2022/volunteer/host
-- tested streaming to gen and dev streams, viewing from watch pages
-
-requests:
-- html/css/js for watch pages
-next week:
-- caption workflow
-
* Projects and other long-running tasks
:PROPERTIES:
:CUSTOM_ID: projects
@@ -367,17 +345,29 @@ It looks like OpenAI needs a little less editing in terms of
capitalization and punctuation, but it produces longer captions
(likely a 30-second sliding window). I'll try to upload both YT and
OpenAI captions so that people can decide what they like.
-*** TODO Investigate more granular timestamps for the output from OpenAPI Whisper
-https://stackoverflow.com/questions/73822353/how-can-i-get-word-level-timestamps-in-openais-whisper-asr
-*** TODO Compare large, medium, and small models
+*** DONE Compare large, medium, and small models
+CLOSED: [2022-10-23 Sun 08:32]
12 threads
-| Large | |
-| Medium | 2:03 | Shorter subtitles
-| Small | 0:40 |
+Original file duration: 21:16 (about 21 minutes)
+| Model | Time (h:mm) | × talk length | Notes |
+| [[https://media.emacsconf.org/2022/backstage/emacsconf-2022-sqlite--using-sqlite-as-a-data-source-a-framework-and-an-example--andrew-hyatt--large.vtt][Large]] | 2:49 | 8 | |
+| [[https://media.emacsconf.org/2022/backstage/emacsconf-2022-sqlite--using-sqlite-as-a-data-source-a-framework-and-an-example--andrew-hyatt--medium.vtt][Medium]] | 2:03 | 5.9 | |
+| [[https://media.emacsconf.org/2022/backstage/emacsconf-2022-sqlite--using-sqlite-as-a-data-source-a-framework-and-an-example--andrew-hyatt--small.vtt][Small]] | 0:40 | 2 | More run-on sentences |
-Large and medium might do better on a system with a GPU
+Large and medium might do better on a system with a GPU. I'll default to the small model for now.
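+
+For reference, here's a minimal sketch of how the timings above could be reproduced with the openai-whisper Python package; the audio filename is a placeholder, not the actual backstage file.
+
+#+begin_src python
+# Rough timing comparison of Whisper model sizes.
+# Assumes: pip install openai-whisper; AUDIO is a hypothetical local copy of the talk audio.
+import time
+import whisper
+
+AUDIO = "sqlite-talk.wav"
+
+for size in ("small", "medium", "large"):
+    model = whisper.load_model(size)   # downloads the model on first use
+    start = time.time()
+    result = model.transcribe(AUDIO)   # returns text plus timestamped segments
+    elapsed = time.time() - start
+    print(f"{size}: {elapsed / 60:.1f} min, {len(result['segments'])} segments")
+#+end_src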
+*** DONE Commit subed-tsv so that people can try a cleaner output
+CLOSED: [2022-10-23 Sun 09:59]
+:PROPERTIES:
+:Effort: 1:00
+:QUANTIFIED: Emacs
+:END:
+:LOGBOOK:
+CLOCK: [2022-10-23 Sun 08:32]--[2022-10-23 Sun 09:59] => 1:27
+:END:
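+As a rough illustration of the format involved (an assumption based on Audacity's label export: start, end, and text separated by tabs), a Whisper result could be dumped to something subed-tsv can open like this:
+
+#+begin_src python
+# Hypothetical helper: write Whisper segments as Audacity-style labels
+# (start<TAB>end<TAB>text). The exact column layout subed-tsv expects is an
+# assumption based on Audacity's label export format.
+def write_labels(result, path):
+    with open(path, "w") as f:
+        for seg in result["segments"]:
+            f.write(f"{seg['start']:.3f}\t{seg['end']:.3f}\t{seg['text'].strip()}\n")
+#+end_src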
+*** TODO Investigate more granular timestamps for the output from OpenAI Whisper
+https://stackoverflow.com/questions/73822353/how-can-i-get-word-level-timestamps-in-openais-whisper-asr
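+
+A possible starting point, assuming a Whisper release that accepts word_timestamps=True in transcribe() (earlier releases need one of the workarounds from the thread above); this is an untested sketch:
+
+#+begin_src python
+# Sketch: word-level timestamps from Whisper, assuming the word_timestamps
+# option is available in the installed release.
+import whisper
+
+model = whisper.load_model("small")
+result = model.transcribe("sqlite-talk.wav", word_timestamps=True)
+for seg in result["segments"]:
+    for word in seg.get("words", []):
+        print(f"{word['start']:7.2f} {word['end']:7.2f} {word['word']}")
+#+end_src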
*** DONE Upload srv2 from YouTube for word-level
CLOSED: [2022-10-22 Sat 23:16]
:PROPERTIES:
@@ -2876,6 +2866,45 @@ CLOSED: [2022-10-22 Sat 09:27]
- org-reveal config
- SIL fonts choice
+** DONE Write volunteer update 2022-10-23 :update:
+CLOSED: [2022-10-23 Sun 10:22]
+:PROPERTIES:
+:CUSTOM_ID: volunteer-2022-10-23
+:TO: emacsconf-org@gnu.org
+:END:
+
+Hello, folks! Here's the weekly update on what's happening backstage
+for EmacsConf 2022 in case you notice something that you want to help
+out with. =)
+
+- We've e-mailed the speakers instructions for uploading their files through either a web browser or an FTP client, and three speakers have already done so! Those talks are now available in the backstage area (https://media.emacsconf.org/2022/backstage/), along with the first set of edited captions (thanks Jai Vetrivelan!). If you don't have the username and password for the backstage area and you would like to access it, please e-mail me and I'll send you the details.
+- We've created a BBB room for each speaker's live Q&A session. The URLs are in conf.org in the private repository if you need them.
+- We've drafted some documentation for different volunteer roles. If you'd like to volunteer as a captioner, check-in person (hmm, reception?), Etherpad scribe, IRC monitor, or host, please check out the appropriate link and let me know if I need to add anything to the docs:
+ - https://emacsconf.org/2022/volunteer/caption
+ - https://emacsconf.org/2022/volunteer/irc
+ - https://emacsconf.org/2022/volunteer/pad
+ - https://emacsconf.org/2022/volunteer/checkin
+ - https://emacsconf.org/2022/volunteer/host
+- Thanks to David O'Toole for signing up for some IRC shifts! If you would like to volunteer for a shift, check out https://emacsconf.org/2022/organizers-notebook/#shifts .
+- We've updated our streaming configuration for the General and Development tracks, and have started testing them using mpv and the watch pages. Videos aren't currently streaming, but you can check out the layout of the watch pages at:
+ - https://emacsconf.org/2022/watch/gen/
+ - https://live.emacsconf.org/2022/watch/gen/
+ - https://emacsconf.org/2022/watch/dev/
+ - https://live.emacsconf.org/2022/watch/dev/
+ These pages could probably be a lot prettier and easier to use. If you have some ideas for improving them or if you'd like to work on the HTML/CSS/JS, we'd love your help!
+- There are now Q&A waiting rooms with friendly URLs so that it's easier for people to join the live Q&A when the host decides it's okay to let everyone in. They're linked on the watch pages (along with the pads) and they'll be linked from the talk pages once we're ready to share them.
+- zaeph has been busy tweaking the ffmpeg workflow for reencoding and normalizing videos. Thanks to Ry P. for sharing the res.emacsconf.org server with us - we've been using it for all the processing that our laptops can't handle.
+- We experimented with using the OpenAI Whisper speech-to-text toolkit to create the auto-generated captions that captioning volunteers can edit. Looks promising! If you'd like to compare the performance between small, medium, and large models, you can look at the VTT files for the sqlite talk in the backstage area. I've also added support for tab-separated values (like Audacity label exports) and a subed-convert command to subed.el, which might give us a more concise format to work with. I'll work on getting word-level timing data so that our captioning workflow can be even easier.
+
+Next week, we hope to:
+
+- improve the prerec and captioning workflows
+- get more captions underway
+
+Lots of good stuff happening!
+
+Sacha Chua
+
* Communications
:PROPERTIES:
:CUSTOM_ID: comms