path: root/2022/organizers-notebook
Diffstat (limited to '')
-rw-r--r--  2022/organizers-notebook.md        41
-rw-r--r--  2022/organizers-notebook/index.org  9
2 files changed, 43 insertions, 7 deletions
diff --git a/2022/organizers-notebook.md b/2022/organizers-notebook.md
index 630104dc..3e1bb166 100644
--- a/2022/organizers-notebook.md
+++ b/2022/organizers-notebook.md
@@ -527,6 +527,35 @@ Considerations:
Remember to update <../prepare.md> with the new incantation.
+#### Incantation from last year
+
+ Q=32
+ ffmpeg -y -i "$1" -c:v libvpx-vp9 -b:v 0 -crf $Q -aq-mode 2 -an -tile-columns 0 -tile-rows 0 -frame-parallel 0 -cpu-used 8 -auto-alt-ref 1 -lag-in-frames 25 -g 240 -pass 1 -f webm -threads 8 /dev/null &&
+ ffmpeg -y -i "$1" -c:v libvpx-vp9 -b:v 0 -crf $Q -c:a copy -tile-columns 2 -tile-rows 2 -frame-parallel 0 -cpu-used -5 -auto-alt-ref 1 -lag-in-frames 25 -pass 2 -g 240 -threads 8 "$2"
+
+
+#### New candidate
+
+Changelog:
+
+- Disable adaptive quantization by dropping `-aq-mode 2` (libvpx defaults to aq-mode 0; TODO: compare samples)
+- Add `-row-mt 1`, which is needed for `tile-rows` to take effect (2×2 tiles are enough for 720p)
+- Also use tiles for first pass
+- Remove `-frame-parallel 0` because it’s disabled by default (see [Notes on encoding settings · Kagami/webm.py Wiki](https://github.com/Kagami/webm.py/wiki/Notes-on-encoding-settings))
+- Put the number of CPUs in a variable and use it for both `cpu-used` and `threads`
+- Stick to default for `auto-alt-ref`
+- Stick to default for `lag-in-frames`
+
+ Q=32
+ CPU=8
+ ffmpeg -y -i "$1" -c:v libvpx-vp9 -b:v 0 -crf $Q -an -row-mt 1 -tile-columns 2 -tile-rows 2 -cpu-used $CPU -g 240 -pass 1 -f webm -threads $CPU /dev/null &&
+ ffmpeg -y -i "$1" -c:v libvpx-vp9 -b:v 0 -crf $Q -c:a copy -row-mt 1 -tile-columns 2 -tile-rows 2 -cpu-used $CPU -pass 2 -g 240 -threads $CPU "$2"
+
+Other considerations:
+
+- We might want to tweak the keyframe interval (`-g`, currently 240 frames).
+
+
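The two-pass pipeline above could be wrapped in a small helper so the shared flags live in one place. This is only a sketch: the `build_pass` function name and the dry-run behaviour (echoing the commands instead of running them) are my own, not from the notebook.

```shell
#!/bin/sh
# Sketch: assemble the two ffmpeg invocations for the "new candidate"
# two-pass VP9 encode. Echoes the commands instead of executing them,
# so the flag set can be sanity-checked without running ffmpeg.
Q=32
CPU=8

build_pass() {
    # $1 = input file, $2 = pass number, $3 = output file (pass 2 only)
    if [ "$2" = 1 ]; then
        audio="-an"; out="-pass 1 -f webm /dev/null"
    else
        audio="-c:a copy"; out="-pass 2 $3"
    fi
    echo ffmpeg -y -i "$1" -c:v libvpx-vp9 -b:v 0 -crf $Q $audio \
        -row-mt 1 -tile-columns 2 -tile-rows 2 \
        -cpu-used $CPU -g 240 $out -threads $CPU
}

build_pass "$1" 1
build_pass "$1" 2 "$2"
```

Replacing `echo ffmpeg` with `ffmpeg` (and chaining the passes with `&&`) would turn the dry run into the real encode.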
### TODO Figure out workflow for handling submitted prerecs
We need time after the prerecs get submitted to:
@@ -562,11 +591,15 @@ Where should we host this?
[Ansible notes](#ansible)
Consider if we need extra scaling beyond being on a beefy live0?
-<https://mclear.co.uk/2021/09/08/deploying-etherpad-at-scale-in-one-minute/>
-<https://github.com/ether/etherpad-load-test>
+
+- Scale calculator: <https://scale.etherpad.org/>
+ - assuming 3 concurrent authors, 200 lurkers per pad, 3 concurrent pads
+ - 1 core, 4 GB RAM, bandwidth: 14.688 Mb/s
+- <https://mclear.co.uk/2021/09/08/deploying-etherpad-at-scale-in-one-minute/>
+- <https://github.com/ether/etherpad-load-test>
etherpad-load-test: 1GB nanode, 42 clients connected (11 authors, 31 lurkers)
-Will need to try this again with a bigger node
+Will need to try this again when we resize the nodes. The extra memory alone will probably be enough, and CPU use from the Node.js process shouldn't step on the streaming, but we're not sure.
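As a rough sanity check before resizing, the load-test data point above (42 clients on a 1 GB nanode) can be extrapolated. The linear-in-RAM model and the function name here are assumptions for illustration, not measurements; CPU or bandwidth may well become the bottleneck first.

```python
# Naive capacity extrapolation from the etherpad-load-test data point:
# a 1 GB nanode handled 42 connected clients (11 authors, 31 lurkers).
# Assumes client capacity scales linearly with RAM, which is optimistic.

MEASURED_RAM_GB = 1
MEASURED_CLIENTS = 42

def estimated_clients(ram_gb: float) -> int:
    """Linear estimate of concurrent clients for a node with ram_gb of RAM."""
    return int(MEASURED_CLIENTS * ram_gb / MEASURED_RAM_GB)

if __name__ == "__main__":
    for ram in (1, 2, 4, 8):
        print(f"{ram} GB -> ~{estimated_clients(ram)} clients")
```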
### DONE Use the API to create pages based on all the slugs
@@ -1997,7 +2030,7 @@ Probably focus on grabbing the audio first and seeing what's worth keeping
Make a table of the form
-<table id="orga97a7f5" border="2" cellspacing="0" cellpadding="6" rules="groups" frame="hsides">
+<table id="org6a1378f" border="2" cellspacing="0" cellpadding="6" rules="groups" frame="hsides">
<colgroup>
diff --git a/2022/organizers-notebook/index.org b/2022/organizers-notebook/index.org
index 2a5ab409..b29004ed 100644
--- a/2022/organizers-notebook/index.org
+++ b/2022/organizers-notebook/index.org
@@ -303,11 +303,14 @@ Where should we host this?
[[#ansible][Ansible notes]]
Consider if we need extra scaling beyond being on a beefy live0?
-https://mclear.co.uk/2021/09/08/deploying-etherpad-at-scale-in-one-minute/
-https://github.com/ether/etherpad-load-test
+- Scale calculator: https://scale.etherpad.org/
+ - assuming 3 concurrent authors, 200 lurkers per pad, 3 concurrent pads
+ - 1 core, 4 GB RAM, bandwidth: 14.688 Mb/s
+- https://mclear.co.uk/2021/09/08/deploying-etherpad-at-scale-in-one-minute/
+- https://github.com/ether/etherpad-load-test
etherpad-load-test: 1GB nanode, 42 clients connected (11 authors, 31 lurkers)
-Will need to try this again with a bigger node
+Will need to try this again when we resize the nodes. The extra memory alone will probably be enough, and CPU use from the Node.js process shouldn't step on the streaming, but we're not sure.
*** DONE Use the API to create pages based on all the slugs
CLOSED: [2022-10-11 Tue 20:41]