Diffstat (limited to '2022/captions/emacsconf-2022-lspbridge--lspbridge-a-smooth-as-butter-asynchronous-lsp-client--andy-stewart-matthew-zeng--main.vtt')
-rw-r--r-- 2022/captions/emacsconf-2022-lspbridge--lspbridge-a-smooth-as-butter-asynchronous-lsp-client--andy-stewart-matthew-zeng--main.vtt | 1002
1 files changed, 1002 insertions, 0 deletions
diff --git a/2022/captions/emacsconf-2022-lspbridge--lspbridge-a-smooth-as-butter-asynchronous-lsp-client--andy-stewart-matthew-zeng--main.vtt b/2022/captions/emacsconf-2022-lspbridge--lspbridge-a-smooth-as-butter-asynchronous-lsp-client--andy-stewart-matthew-zeng--main.vtt
new file mode 100644
index 00000000..622d0b40
--- /dev/null
+++ b/2022/captions/emacsconf-2022-lspbridge--lspbridge-a-smooth-as-butter-asynchronous-lsp-client--andy-stewart-matthew-zeng--main.vtt
@@ -0,0 +1,1002 @@
+WEBVTT captioned by matthew
+
+NOTE Opening
+
+00:00:00.000 --> 00:00:04.639
+Good morning folks, I'm Matthew.
+
+00:00:04.640 --> 00:00:07.399
+Welcome to another year of EmacsConf.
+
+00:00:07.400 --> 00:00:10.319
+It's looking fantastic this year.
+
+00:00:10.320 --> 00:00:13.559
+Firstly, I have to apologize for my voice
+
+00:00:13.560 --> 00:00:15.879
+and occasional cough today.
+
+00:00:15.880 --> 00:00:18.039
+I am currently recovering from a cold,
+
+00:00:18.040 --> 00:00:21.159
+hopefully it's not Covid or flu,
+
+00:00:21.160 --> 00:00:24.719
+so please bear with me today.
+
+00:00:24.720 --> 00:00:27.919
+Actually, this talk was supposed to be brought to you
+
+00:00:27.920 --> 00:00:31.559
+by Manatee Lazycat, the author of lsp-bridge.
+
+00:00:31.560 --> 00:00:36.079
+But verbal English isn't Lazycat's strongest skill,
+
+00:00:36.080 --> 00:00:38.599
+and we are good friends as we maintain
+
+00:00:38.600 --> 00:00:40.999
+the Emacs Application Framework together,
+
+00:00:41.000 --> 00:00:45.999
+so here I am today presenting to you this package.
+
+00:00:46.000 --> 00:00:48.479
+Welcome to my talk on lsp-bridge:
+
+00:00:48.480 --> 00:00:50.320
+a smooth-as-butter asynchronous LSP client.
+
+NOTE What is LSP?
+
+00:00:50.321 --> 00:00:57.200
+What is LSP?
+
+00:00:57.201 --> 00:01:01.159
+The first question is, what is LSP?
+
+00:01:01.160 --> 00:01:03.199
+For anyone here who doesn't know,
+
+00:01:03.200 --> 00:01:06.799
+LSP stands for Language Server Protocol.
+
+00:01:06.800 --> 00:01:09.719
+It is a protocol defined by Microsoft
+
+00:01:09.720 --> 00:01:13.399
+that provides smart features like autocomplete,
+
+00:01:13.400 --> 00:01:17.599
+go to definition, documentation, etc.,
+
+00:01:17.600 --> 00:01:23.439
+that can be implemented across different editors and IDEs.
+
+00:01:23.440 --> 00:01:25.559
+It was initially created
+
+00:01:25.560 --> 00:01:28.399
+for their Visual Studio Code product,
+
+00:01:28.400 --> 00:01:33.919
+then publicly shared with everyone.
+
+00:01:33.920 --> 00:01:35.999
+So there are language servers out there
+
+00:01:36.000 --> 00:01:38.119
+that implement this protocol,
+
+00:01:38.120 --> 00:01:41.239
+and editors need to implement the same protocol
+
+00:01:41.240 --> 00:01:43.119
+to talk to the language servers
+
+00:01:43.120 --> 00:01:46.799
+in order to retrieve necessary information.
+
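+NOTE A minimal sketch of what a single request looks like on the wire,
+assuming the standard LSP framing (a JSON-RPC 2.0 body behind a
+Content-Length header); the file path and cursor position below are
+made up purely for illustration.
+import json
+request = {
+    "jsonrpc": "2.0",
+    "id": 1,
+    "method": "textDocument/completion",
+    "params": {
+        "textDocument": {"uri": "file:///tmp/example.py"},
+        "position": {"line": 10, "character": 4},
+    },
+}
+body = json.dumps(request)
+message = f"Content-Length: {len(body.encode('utf-8'))}\r\n\r\n{body}"
+# The client writes `message` to the language server's stdin and reads a
+# framed JSON response (for example, the completion candidates) from stdout.
+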
+00:01:46.800 --> 00:01:53.159
+Emacs already has two LSP clients, lsp-mode and eglot;
+
+00:01:53.160 --> 00:01:57.319
+both implement the protocol and both are very good.
+
+NOTE Why another LSP client?
+
+00:02:00.440 --> 00:02:03.199
+Now comes the second question, of course:
+
+00:02:03.200 --> 00:02:09.519
+given lsp-mode and eglot, why another LSP client?
+
+00:02:09.520 --> 00:02:12.359
+I used to use lsp-mode all the time,
+
+00:02:12.360 --> 00:02:15.999
+I have to say I really appreciate Ivan Yonchovski
+
+00:02:16.000 --> 00:02:20.159
+and the team's efforts. Also, I'd like to congratulate eglot
+
+00:02:20.160 --> 00:02:27.439
+for making it into Emacs 29! These are fantastic packages;
+
+00:02:27.440 --> 00:02:30.999
+they are very mature and robust.
+
+NOTE
+
+00:02:31.000 --> 00:02:35.119
+However, with all due respect, both implementations
+
+00:02:35.120 --> 00:02:36.719
+are fundamentally limited
+
+00:02:36.720 --> 00:02:39.639
+by the single-threaded nature of Emacs,
+
+00:02:39.640 --> 00:02:43.639
+which is the fault of neither lsp-mode nor eglot.
+
+NOTE
+
+00:02:46.000 --> 00:02:47.959
+Although in recent years there have been
+
+00:02:47.960 --> 00:02:51.799
+improvements to Emacs core such as native JSON support,
+
+00:02:51.800 --> 00:02:55.319
+there are still scenarios where Emacs clogs up
+
+00:02:55.320 --> 00:02:59.359
+for a brief second when processing large amounts of data,
+
+00:02:59.360 --> 00:03:03.399
+as Emacs processes everything in a single thread.
+
+00:03:03.400 --> 00:03:08.439
+This problem is especially apparent in some LSP servers
+
+00:03:08.440 --> 00:03:11.839
+that send back tens of thousands of JSON objects
+
+00:03:11.840 --> 00:03:15.199
+with every single key press.
+
+NOTE
+
+00:03:15.200 --> 00:03:17.559
+Additionally, the large amount of data
+
+00:03:17.560 --> 00:03:21.279
+sent by the LSP server, such as the completion candidates,
+
+00:03:21.280 --> 00:03:23.959
+the diagnostics and documentation,
+
+00:03:23.960 --> 00:03:27.359
+is temporarily stored in Emacs memory,
+
+00:03:27.360 --> 00:03:31.159
+which will trigger garbage collection very frequently,
+
+00:03:31.160 --> 00:03:34.159
+and this also causes a stuttering user experience.
+
+00:03:34.160 --> 00:03:37.279
+Increasing the gc-cons-threshold helps,
+
+00:03:37.280 --> 00:03:43.759
+but doesn't eliminate the problem.
+
+NOTE
+
+00:03:43.760 --> 00:03:45.559
+For something like LSP,
+
+00:03:45.560 --> 00:03:48.319
+the language servers need time to compute,
+
+00:03:48.320 --> 00:03:52.359
+and Emacs needs capacity to process and filter
+
+00:03:52.360 --> 00:03:55.799
+all the data coming from the language servers.
+
+00:03:55.800 --> 00:03:59.399
+A large codebase project with a slow language server
+
+00:03:59.400 --> 00:04:02.439
+that sends tens of thousands of JSON objects
+
+00:04:02.440 --> 00:04:06.519
+will significantly increase the time needed to process it.
+
+00:04:06.520 --> 00:04:08.079
+Since we don't have multi-threading,
+
+00:04:08.080 --> 00:04:12.719
+the single thread originally allocated for, say,
+
+00:04:12.720 --> 00:04:17.279
+handling user input will be used to process all the data,
+
+00:04:17.280 --> 00:04:22.719
+not to mention the garbage collection along the way.
+
+NOTE
+
+00:04:22.720 --> 00:04:26.239
+The unfortunate truth is that the size of the codebase
+
+00:04:26.240 --> 00:04:28.919
+and the efficiency of the language server
+
+00:04:28.920 --> 00:04:31.759
+are completely out of Emacs' control;
+
+00:04:31.760 --> 00:04:38.519
+they are also out of both lsp-mode's and eglot's control.
+
+NOTE
+
+00:04:38.520 --> 00:04:40.279
+If there's an LSP client
+
+00:04:40.280 --> 00:04:42.279
+that can completely eliminate stuttering
+
+00:04:42.280 --> 00:04:44.999
+and provide seamless feedback,
+
+00:04:45.000 --> 00:04:50.279
+that would be great, wouldn't it?
+
+NOTE What is seamless input feedback?
+
+00:04:50.280 --> 00:04:53.839
+However, we're talking about speed rather vaguely right now:
+
+00:04:53.840 --> 00:04:56.399
+what is considered fast?
+
+00:04:56.400 --> 00:04:58.359
+What is considered seamless?
+
+00:04:58.360 --> 00:05:01.479
+What do we really mean when we say
+
+00:05:01.480 --> 00:05:05.239
+the current LSP implementations are slow?
+
+00:05:05.240 --> 00:05:12.559
+Let's first look at the problem fundamentally.
+
+NOTE
+
+00:05:12.560 --> 00:05:17.679
+We interact with Emacs through a keyboard,
+
+00:05:17.680 --> 00:05:22.719
+so what we perceive as fast and smooth feedback
+
+00:05:22.720 --> 00:05:25.999
+completely depends on how long it takes
+
+00:05:26.000 --> 00:05:29.359
+for a keyboard input to be displayed in the Emacs buffer.
+
+00:05:29.360 --> 00:05:32.919
+From a pure graphical perspective,
+
+00:05:32.920 --> 00:05:36.519
+we need a minimum of 24 frames per second,
+
+00:05:36.520 --> 00:05:39.079
+the standard in the media industry,
+
+00:05:39.080 --> 00:05:42.359
+for us humans to perceive something as seamless.
+
+00:05:42.360 --> 00:05:46.999
+Say we need 25 frames per second. This means,
+
+00:05:47.000 --> 00:05:50.399
+if we divide 1000 milliseconds by 25,
+
+00:05:50.400 --> 00:05:54.759
+we only have approximately a 40-millisecond window
+
+00:05:54.760 --> 00:05:57.919
+to spare for the response time.
+
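+NOTE The frame-budget arithmetic above, written out as a formula:
+\frac{1000\ \text{ms}}{25\ \text{frames}} = 40\ \text{ms per frame}
+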
+00:05:57.920 --> 00:06:01.679
+Even if we relax the constraint a bit more,
+
+00:06:01.680 --> 00:06:06.679
+on average a typist takes about 100 to 200 milliseconds
+
+00:06:06.680 --> 00:06:09.159
+between typing each character,
+
+00:06:09.160 --> 00:06:12.599
+so as long as we see a response within this timeframe,
+
+00:06:12.600 --> 00:06:19.559
+it is tolerable. However, using a slow language server
+
+00:06:19.560 --> 00:06:22.279
+on a large codebase easily exceeds
+
+00:06:22.280 --> 00:06:24.679
+the hundred millisecond mark,
+
+00:06:24.680 --> 00:06:27.479
+and sometimes takes more than 200 milliseconds,
+
+00:06:27.480 --> 00:06:32.039
+and inevitably will cause an inconsistent delay
+
+00:06:32.040 --> 00:06:33.199
+for the end user.
+
+NOTE
+
+00:06:33.200 --> 00:06:37.959
+At this point, someone might want to point out
+
+00:06:37.960 --> 00:06:41.079
+that nobody is gonna type at the maximum pace all the time.
+
+00:06:41.080 --> 00:06:45.039
+That's right. Frankly speaking, most of my time
+
+00:06:45.040 --> 00:06:47.639
+spent programming is not spent writing code,
+
+00:06:47.640 --> 00:06:49.039
+but staring at the screen
+
+00:06:49.040 --> 00:06:51.279
+thinking about how to write the code.
+
+00:06:51.280 --> 00:06:55.599
+However, when we do actually type,
+
+00:06:55.600 --> 00:07:00.359
+maybe only a sentence, a variable name, a keyword,
+
+00:07:00.360 --> 00:07:03.039
+or just performing keybinding shortcuts,
+
+00:07:03.040 --> 00:07:08.479
+that's when we want to see our input feedback immediately.
+
+00:07:08.480 --> 00:07:10.479
+We've already spent so much time
+
+00:07:10.480 --> 00:07:12.159
+thinking about how to write,
+
+00:07:12.160 --> 00:07:16.479
+we don't want to waste any more time waiting for Emacs
+
+00:07:16.480 --> 00:07:19.559
+to process and show us what we've written
+
+00:07:19.560 --> 00:07:27.679
+half a second ago. Otherwise the frustration will build up.
+
+NOTE EAF showed a possibility
+
+00:07:28.400 --> 00:07:31.999
+In the past two years of EmacsConf, I've talked about
+
+00:07:32.000 --> 00:07:35.399
+the Emacs Application Framework, a project that extended
+
+00:07:35.400 --> 00:07:39.839
+Emacs Lisp to the Python, Qt, and JavaScript ecosystems.
+
+00:07:39.840 --> 00:07:43.759
+The EAF project specializes in improving
+
+00:07:43.760 --> 00:07:47.439
+the graphical and multimedia capabilities of Emacs
+
+00:07:47.440 --> 00:07:51.759
+through other languages, and it was a great success.
+
+00:07:51.760 --> 00:07:55.759
+It demonstrated the endless possibilities of Emacs
+
+00:07:55.760 --> 00:08:00.159
+by embracing the strengths in other ecosystems.
+
+00:08:00.160 --> 00:08:04.239
+If anyone is interested in more information on EAF,
+
+00:08:04.240 --> 00:08:08.519
+please see the EAF repo and refer to my talks
+
+00:08:08.520 --> 00:08:12.959
+from EmacsConf 2020 and 2021.
+
+00:08:12.960 --> 00:08:12.960
+
+
+00:08:12.960 --> 00:08:16.239
+The EAF project was created by Manatee Lazycat as well,
+
+00:08:16.240 --> 00:08:19.999
+so he thought that if there were a way to design
+
+00:08:20.000 --> 00:08:22.759
+an LSP client similar to EAF
+
+00:08:22.760 --> 00:08:25.759
+that takes advantage of Python's multi-threading,
+
+00:08:25.760 --> 00:08:27.839
+it would be able to solve our problem.
+
+00:08:27.840 --> 00:08:32.399
+Conveniently, EAF had already done most of the groundwork
+
+00:08:32.400 --> 00:08:34.359
+and demonstrated the possibility
+
+00:08:34.360 --> 00:08:42.159
+of Elisp and Python cooperating effectively using Emacs RPC.
+
+NOTE LSP Bridge Objectives
+
+00:08:42.160 --> 00:08:45.039
+LSP Bridge has several goals in mind.
+
+00:08:45.040 --> 00:08:50.159
+Firstly, performance is the number one priority.
+
+00:08:50.160 --> 00:08:55.839
+Secondly, use Python multi-threading to bypass
+
+00:08:55.840 --> 00:08:59.239
+the aforementioned bottlenecks of a single-threaded Emacs.
+
+00:08:59.240 --> 00:09:04.519
+Thirdly, provide a simple solution that requires
+
+00:09:04.520 --> 00:09:07.519
+minimal setup for someone who just wants to have
+
+00:09:07.520 --> 00:09:10.079
+a fast autocomplete system in Emacs.
+
+00:09:10.080 --> 00:09:15.999
+This means LSP Bridge does not intend to
+
+00:09:16.000 --> 00:09:21.439
+and will not implement the entire LSP protocol,
+
+00:09:21.440 --> 00:09:23.639
+which is a vastly different approach
+
+00:09:23.640 --> 00:09:25.759
+from a solution like lsp-mode;
+
+00:09:25.760 --> 00:09:28.479
+we do not want to compete this way.
+
+00:09:28.480 --> 00:09:33.559
+We also believe some of the LSP Protocol features
+
+00:09:33.560 --> 00:09:37.759
+are unnecessary, or we already have better solutions
+
+00:09:37.760 --> 00:09:38.959
+in the Emacs ecosystem,
+
+00:09:38.960 --> 00:09:42.679
+such as tree-sitter for syntax highlighting.
+
+00:09:42.680 --> 00:09:44.959
+So we will not reinvent the wheel.
+
+00:09:44.960 --> 00:09:50.279
+Ultimately, we want to provide the fastest, butter-smooth
+
+00:09:50.280 --> 00:09:53.679
+and most performant LSP client out of the box.
+
+NOTE Design.
+
+00:09:53.680 --> 00:09:54.560
+Design.
+
+00:09:54.561 --> 00:10:01.239
+Now let's look at the design architecture diagram.
+
+00:10:01.240 --> 00:10:04.639
+As you can see, it is split into
+
+00:10:04.640 --> 00:10:07.079
+the top half and bottom half.
+
+00:10:07.080 --> 00:10:10.559
+The top half is the design for the single-file model,
+
+00:10:10.560 --> 00:10:13.359
+and the bottom half is for the project model.
+
+00:10:13.360 --> 00:10:18.159
+We make this distinction because we don't want a new user
+
+00:10:18.160 --> 00:10:22.599
+to be troubled with choosing a project root directory
+
+00:10:22.600 --> 00:10:25.199
+as their first impression of LSP
+
+00:10:25.200 --> 00:10:27.279
+before they even start writing code.
+
+00:10:27.280 --> 00:10:27.280
+
+
+00:10:27.280 --> 00:10:30.479
+From a new user's perspective,
+
+00:10:30.480 --> 00:10:32.959
+they've just installed this package,
+
+00:10:32.960 --> 00:10:35.159
+and all they are expecting
+
+00:10:35.160 --> 00:10:37.679
+is to use a smart autocomplete system;
+
+00:10:37.680 --> 00:10:41.519
+what does a root directory even mean in this context?
+
+00:10:41.520 --> 00:10:44.119
+So we make the decision for them
+
+00:10:44.120 --> 00:10:48.199
+based on whether this file is part of a git repository.
+
+00:10:48.200 --> 00:10:56.719
+Oftentimes we write code in a standalone file;
+
+00:10:56.720 --> 00:10:59.919
+this is extremely common for scripting languages
+
+00:10:59.920 --> 00:11:03.319
+like Bash or Python. So in the single-file model,
+
+00:11:03.320 --> 00:11:07.159
+LSP Bridge will start a dedicated LSP server
+
+00:11:07.160 --> 00:11:10.319
+for this particular file based on file type,
+
+00:11:10.320 --> 00:11:13.479
+and every file corresponds to an LSP server,
+
+00:11:13.480 --> 00:11:17.839
+so the servers don't interfere with one another.
+
+00:11:17.840 --> 00:11:23.719
+The project model will have every file of the same type
+
+00:11:23.720 --> 00:11:25.919
+under the same project share one server.
+
+00:11:25.920 --> 00:11:30.439
+We believe this is a positive trade-off for user experience.
+
+00:11:30.440 --> 00:11:30.440
+
+
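+NOTE A minimal sketch of the single-file vs. project decision described
+above, assuming a simple walk up the directory tree looking for a .git
+folder; the function names are illustrative, not lsp-bridge's actual API.
+import os
+def find_project_root(file_path):
+    # Walk upward from the file's directory; a .git folder marks a project.
+    directory = os.path.dirname(os.path.abspath(file_path))
+    while True:
+        if os.path.isdir(os.path.join(directory, ".git")):
+            return directory
+        parent = os.path.dirname(directory)
+        if parent == directory:   # reached the filesystem root
+            return None           # not in a git repo: use the single-file model
+        directory = parent
+def server_key(file_path, language):
+    root = find_project_root(file_path)
+    # Project model: one server per (project, language) pair.
+    # Single-file model: a dedicated server per file.
+    return (root, language) if root else (file_path, language)
+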
+00:11:30.440 --> 00:11:36.599
+LSP Bridge internally implements two main threads:
+
+00:11:36.600 --> 00:11:40.399
+one is the Request Thread, the other is the Response Thread.
+
+00:11:40.400 --> 00:11:45.279
+The Request Thread is used to handle all the requests
+
+00:11:45.280 --> 00:11:48.679
+coming from Emacs; it does not answer immediately.
+
+00:11:48.680 --> 00:11:52.839
+This is important because Emacs doesn't need to wait
+
+00:11:52.840 --> 00:11:54.679
+for any response for any reason;
+
+00:11:54.680 --> 00:11:58.159
+even if the server is buggy or has died,
+
+00:11:58.160 --> 00:12:01.159
+it shouldn't matter to the performance of Emacs.
+
+00:12:01.160 --> 00:12:04.039
+The Response Thread is used to handle
+
+00:12:04.040 --> 00:12:06.559
+the responses coming from the LSP servers.
+
+00:12:06.560 --> 00:12:11.239
+After retrieving a response, regardless of the JSON size,
+
+00:12:11.240 --> 00:12:14.439
+it is sent to its own thread for computation,
+
+00:12:14.440 --> 00:12:17.079
+such as candidate filtering and renaming.
+
+00:12:17.080 --> 00:12:19.999
+Once the computation is finished,
+
+00:12:20.000 --> 00:12:23.639
+it will determine whether this information has expired;
+
+00:12:23.640 --> 00:12:26.399
+if not, it pushes it to Emacs.
+
+00:12:26.400 --> 00:12:26.400
+
+
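+NOTE A minimal sketch of the two threads described above, using Python's
+standard threading and queue modules; the ticker-based expiry check and the
+helper functions are illustrative stand-ins, not lsp-bridge's actual code.
+import threading, queue
+requests = queue.Queue()    # filled from Emacs via RPC; Emacs never waits on it
+responses = queue.Queue()   # filled from the language server's output
+ticker = 0                  # bumped per request, used to drop stale responses
+def send_to_server(method, params, request_id):
+    pass                    # placeholder: write a framed JSON-RPC message
+def push_to_emacs(result):
+    pass                    # placeholder: eval a small Elisp form over RPC
+def request_worker():
+    global ticker
+    while True:
+        method, params = requests.get()
+        ticker += 1
+        send_to_server(method, params, request_id=ticker)
+def response_worker():
+    while True:
+        request_id, items = responses.get()
+        labels = [item["label"] for item in items]   # heavy filtering stays here
+        if request_id == ticker:                     # only the newest result matters
+            push_to_emacs(labels)
+threading.Thread(target=request_worker, daemon=True).start()
+threading.Thread(target=response_worker, daemon=True).start()
+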
+00:12:26.400 --> 00:12:31.559
+On the Emacs side, when Emacs receives the LSP information,
+
+00:12:31.560 --> 00:12:34.639
+it only needs to determine the course of action,
+
+00:12:34.640 --> 00:12:39.159
+either popping up completion, jumping to definition,
+
+00:12:39.160 --> 00:12:44.799
+renaming, showing references, or showing documentation.
+
+00:12:44.800 --> 00:12:49.119
+You see, from a user's perspective, all LSP Bridge is doing
+
+00:12:49.120 --> 00:12:52.279
+is these 5 things; the user doesn't need to care about
+
+00:12:52.280 --> 00:12:54.559
+anything else, like the complicated
+
+00:12:54.560 --> 00:12:56.479
+Language Server Protocol.
+
+00:12:56.480 --> 00:12:56.480
+
+
+00:12:56.480 --> 00:13:02.439
+The Python side caches heavy data,
+
+00:13:02.440 --> 00:13:06.279
+such as candidate documentation and diagnostics.
+
+00:13:06.280 --> 00:13:11.079
+We process as much server data as possible in Python,
+
+00:13:11.080 --> 00:13:15.759
+and pass as little data as possible to Emacs,
+
+00:13:15.760 --> 00:13:18.159
+so it doesn't clog the Emacs thread
+
+00:13:18.160 --> 00:13:19.799
+and triggers garbage collection.
+
+00:13:19.800 --> 00:13:19.800
+
+
+00:13:19.800 --> 00:13:24.319
+This design is critical, because all Emacs needs to do
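+NOTE A minimal sketch of the caching idea described above: keep the heavy
+fields on the Python side and hand Emacs only the candidate labels; the
+dictionary layout is illustrative, not lsp-bridge's actual data model.
+completion_cache = {}
+def handle_completion_items(items):
+    labels = []
+    for item in items:
+        label = item.get("label", "")
+        # Documentation and other heavy fields stay cached in Python;
+        # Emacs can ask for them later, one candidate at a time.
+        completion_cache[label] = {
+            "documentation": item.get("documentation"),
+            "detail": item.get("detail"),
+        }
+        labels.append(label)
+    return labels   # only this small list of strings crosses over to Emacs
+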
+
+00:13:24.320 --> 00:13:27.039
+is send LSP requests to LSP Bridge;
+
+00:13:27.040 --> 00:13:29.439
+it doesn't wait for a response,
+
+00:13:29.440 --> 00:13:32.999
+it simply knows what to do *when* there is a response.
+
+00:13:33.000 --> 00:13:37.159
+So the user's input immediately appears in the buffer
+
+00:13:37.160 --> 00:13:39.559
+well within the 40-millisecond window,
+
+00:13:39.560 --> 00:13:45.199
+and in the meantime, the user can continue to type
+
+00:13:45.200 --> 00:13:48.199
+if they don't need help from LSP right away;
+
+00:13:48.200 --> 00:13:51.279
+this fundamentally resolves the stuttering problem.
+
+NOTE ACM - Asynchronous Completion Menu
+
+00:13:51.280 --> 00:13:59.079
+Now I want to talk about acm-mode,
+
+00:13:59.080 --> 00:14:09.599
+which stands for asynchronous completion menu;
+
+00:14:09.600 --> 00:14:12.479
+it is a completion framework
+
+00:14:12.480 --> 00:14:15.039
+that is currently bundled with LSP Bridge,
+
+00:14:15.040 --> 00:14:17.279
+designed to accommodate
+
+00:14:17.280 --> 00:14:20.399
+the asynchronous nature of LSP servers.
+
+00:14:20.400 --> 00:14:26.919
+It is a replacement for the built-in capf,
+
+00:14:26.920 --> 00:14:30.359
+short for completion-at-point-functions,
+
+00:14:30.360 --> 00:14:32.519
+used almost everywhere,
+
+00:14:32.520 --> 00:14:35.759
+including company-mode and corfu-mode.
+
+00:14:35.760 --> 00:14:40.839
+Yes, we unfortunately reinvented a very fundamental wheel.
+
+00:14:40.840 --> 00:14:44.279
+No, it wasn't an easy decision.
+
+00:14:44.280 --> 00:14:47.879
+However we still believe it's worth it.
+
+00:14:47.880 --> 00:14:53.359
+LSP Bridge initially used company-mode,
+
+00:14:53.360 --> 00:14:56.119
+then moved on to corfu-mode for a while,
+
+00:14:56.120 --> 00:14:58.999
+but eventually Lazycat determined
+
+00:14:59.000 --> 00:15:00.719
+that it is much more painful to write
+
+00:15:00.720 --> 00:15:05.679
+a lot of workaround code to force LSP Bridge
+
+00:15:05.680 --> 00:15:09.959
+to handle capf nicely than to just fork Corfu,
+
+00:15:09.960 --> 00:15:11.999
+remove all the capf code,
+
+00:15:12.000 --> 00:15:15.239
+and write a new completion framework from what remained.
+
+00:15:15.240 --> 00:15:15.240
+
+
+00:15:15.240 --> 00:15:20.719
+Performance-wise, capf requires Emacs to store
+
+00:15:20.720 --> 00:15:23.119
+the entire candidate list
+
+00:15:23.120 --> 00:15:27.159
+when looking up candidate annotations.
+
+00:15:27.160 --> 00:15:30.639
+It needs to search through the entire candidate list first,
+
+00:15:30.640 --> 00:15:32.599
+then use the candidate as a key
+
+00:15:32.600 --> 00:15:34.799
+to search for the actual information.
+
+00:15:34.800 --> 00:15:38.919
+This entire process will be repeated every time
+
+00:15:38.920 --> 00:15:40.679
+the completion menu is drawn.
+
+00:15:40.680 --> 00:15:45.199
+This is a truly intensive computing task for Emacs to handle.
+
+00:15:45.200 --> 00:15:50.519
+On top of that, the existing capf frameworks assume
+
+00:15:50.520 --> 00:15:54.279
+the candidate list, which is retrieved from the LSP server,
+
+00:15:54.280 --> 00:15:56.839
+to be ready and finalized in place
+
+00:15:56.840 --> 00:15:58.719
+when the completion popup occurs.
+
+00:15:58.720 --> 00:16:02.119
+However, given the design of LSP Bridge,
+
+00:16:02.120 --> 00:16:05.919
+Emacs will not sit there and wait for the server response;
+
+00:16:05.920 --> 00:16:10.439
+instead, the Response Thread may feed Emacs data
+
+00:16:10.440 --> 00:16:14.919
+whenever it's ready. This makes it almost impossible for capf
+
+00:16:14.920 --> 00:16:21.919
+to form a finalized candidate list during popup.
+
+00:16:21.920 --> 00:16:21.920
+
+
+00:16:21.920 --> 00:16:26.079
+The complete reasons regarding why capf is incompatible
+
+00:16:26.080 --> 00:16:28.679
+with the asynchronous nature of LSP servers
+
+00:16:28.680 --> 00:16:32.479
+are very complicated and deserve their own talk.
+
+00:16:32.480 --> 00:16:37.079
+Lazycat wrote an entire blog post detailing his reasoning,
+
+00:16:37.080 --> 00:16:40.999
+while Corfu's author Daniel Mendler, a.k.a. minad,
+
+00:16:41.000 --> 00:16:44.239
+has also done his own investigations and experiments,
+
+00:16:44.240 --> 00:16:47.239
+and reached the same conclusion.
+
+00:16:47.240 --> 00:16:50.919
+For anyone interested, I've pasted the links
+
+00:16:50.920 --> 00:16:52.759
+to the corresponding posts here.
+
+00:16:52.760 --> 00:16:57.399
+Therefore, keep in mind that LSP Bridge
+
+00:16:57.400 --> 00:16:59.919
+only works nicely with acm-mode,
+
+00:16:59.920 --> 00:17:03.359
+so please disable other completion frameworks
+
+00:17:03.360 --> 00:17:07.159
+like company and corfu before trying LSP Bridge.
+
+NOTE LSP Bridge + ACM -> Multi-Backend Completion Framework
+
+00:17:07.160 --> 00:17:14.919
+By designing ACM with asynchronous server responses in mind,
+
+00:17:14.920 --> 00:17:18.759
+we unlock the LSP Bridge project's potential
+
+00:17:18.760 --> 00:17:22.199
+to provide completions from almost any backend.
+
+00:17:22.200 --> 00:17:25.679
+ACM has blended all the backends together,
+
+00:17:25.680 --> 00:17:28.799
+and configured a priority to display
+
+00:17:28.800 --> 00:17:32.839
+important completion results like LSP before other backends.
+
+00:17:32.840 --> 00:17:38.559
+It can autocomplete using LSP, TabNine, Elisp symbols, yasnippets,
+
+00:17:38.560 --> 00:17:41.039
+even English dictionaries and much more.
+
+00:17:41.040 --> 00:17:43.959
+As long as you have the backends installed,
+
+00:17:43.960 --> 00:17:46.319
+they all work out-of-the-box!
+
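+NOTE A minimal sketch of the priority blending described above, assuming each
+backend simply returns a list of candidate strings; the backend names and
+their ordering are illustrative, not acm's actual configuration.
+BACKEND_PRIORITY = ["lsp", "tabnine", "elisp", "yasnippet", "english-dict"]
+def blend_candidates(results_by_backend, limit=30):
+    # Walk the backends in priority order so LSP results are shown first,
+    # then append candidates from the other backends until the menu is full.
+    blended = []
+    for backend in BACKEND_PRIORITY:
+        for candidate in results_by_backend.get(backend, []):
+            if candidate not in blended:
+                blended.append(candidate)
+            if len(blended) >= limit:
+                return blended
+    return blended
+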
+NOTE Today and future. Join us!
+
+00:17:46.320 --> 00:17:55.239
+Although LSP Bridge is a relatively new package
+
+00:17:55.240 --> 00:18:00.039
+at just over 7 months old, it is already a success!
+
+00:18:00.040 --> 00:18:06.599
+As of December 2022, we have 67 contributors
+
+00:18:06.600 --> 00:18:08.439
+making more than 1000 commits,
+
+00:18:08.440 --> 00:18:12.679
+and we have reached more than 600 stars on GitHub!
+
+00:18:12.680 --> 00:18:16.359
+LSP Bridge is easily extensible;
+
+00:18:16.360 --> 00:18:18.879
+developing a new language backend is very simple too,
+
+00:18:18.880 --> 00:18:20.639
+feel free to join us!
+
+00:18:20.640 --> 00:18:25.599
+LSP Bridge is another successful example
+
+00:18:25.600 --> 00:18:29.919
+of extending Emacs Lisp with Python, and just like EAF,
+
+00:18:29.920 --> 00:18:33.639
+it demonstrated the potential Emacs can achieve
+
+00:18:33.640 --> 00:18:37.039
+when we jump out of the Lisp-only world
+
+00:18:37.040 --> 00:18:39.199
+and embrace other ecosystems.
+
+00:18:39.200 --> 00:18:43.479
+Recently Lazycat created a package called blink-search
+
+00:18:43.480 --> 00:18:45.679
+that leverages similar ideas
+
+00:18:45.680 --> 00:18:48.919
+but as an asynchronous search framework,
+
+00:18:48.920 --> 00:18:51.239
+as well as a package called deno-bridge
+
+00:18:51.240 --> 00:18:53.119
+that extends Emacs Lisp
+
+00:18:53.120 --> 00:18:56.439
+with the Deno JavaScript/TypeScript runtime.
+
+00:18:56.440 --> 00:18:57.559
+Please check them out,
+
+00:18:57.560 --> 00:19:05.199
+and consider joining the development too!
+
+NOTE Thanks
+
+00:19:05.200 --> 00:19:08.599
+This is the entirety of my presentation. Thanks for joining!
+
+00:19:08.600 --> 00:19:11.319
+Lazycat and I will be available
+
+00:19:11.320 --> 00:19:20.240
+to answer questions on IRC and Etherpad.