author     Sacha Chua <sacha@sachachua.com>  2025-12-28 19:30:54 -0500
committer  Sacha Chua <sacha@sachachua.com>  2025-12-28 19:30:54 -0500
commit     5173e42c5a3e707f5e0edaa31a11001e1282f8bb (patch)
tree       95b7ce5585d836194ee2552762dab00ec2ee1027 /2025/talks
parent     ede786c08c144000f2cea1199ba0e7130bf79272 (diff)
add sat dev discussions
Diffstat (limited to '2025/talks')
-rw-r--r--  2025/talks/bookclub-tapas.md   352
-rw-r--r--  2025/talks/gardening.md         28
-rw-r--r--  2025/talks/graphics.md          20
-rw-r--r--  2025/talks/hyperboleqa.md      362
-rw-r--r--  2025/talks/juicemacs.md         97
-rw-r--r--  2025/talks/llm.md               70
-rw-r--r--  2025/talks/private-ai.md        87
-rw-r--r--  2025/talks/schemacs.md         215
-rw-r--r--  2025/talks/sun-close.md         23
-rw-r--r--  2025/talks/swanky.md           109
-rw-r--r--  2025/talks/zettelkasten.md       6
11 files changed, 1365 insertions, 4 deletions
diff --git a/2025/talks/bookclub-tapas.md b/2025/talks/bookclub-tapas.md
index 5d0574b1..3f8f877c 100644
--- a/2025/talks/bookclub-tapas.md
+++ b/2025/talks/bookclub-tapas.md
@@ -64,6 +64,358 @@ out of it as well. I'll be laying out what it is, how I found it, why Emacs
makes an awesome environment for it, and how you can get started with it
too!
+## Discussion / notes
+
+- Q: I missed the beginning of the talk... did you show examples of
+ files in bookclub style? that seems to be related to what I've been
+ doing, but coming from different influences...
+ - [https://github.com/ElephantErgonomics/Squint](https://github.com/ElephantErgonomics/Squint)
+ - A: Yes, I provided a link; I've just posted the link to the
+ repo again. It should serve as a full example of what a fairly
+ standard book club file looks like. If anyone has specific
+ questions about anything in particular, or any place in this
+ file they would like to see me go over and narrate live, I would
+ be super happy to do that. I have the more or less complete book
+ club file for Squint pulled up here. I have my vision laid out,
+ which holds my initial goal; the background and the vision
+ combine to lay out what my general goal is.
+- Q: The product of a Tapa like squint.org would be pure GOLD for an
+ agent like Claude Code - have you experimented with providing an
+ agent with the final output and letting it chew through todos?
+ - A: That's a really excellent question. I actually only
+ recently got into Claude in particular. I played quite a bit
+ with GPT and a lot of 8-billion-parameter local models, and I
+ was never super impressed: it always felt like I was wrangling
+ to get them on the same page, whether because of sycophancy or
+ simply not having enough parameters to understand the context of
+ what's going on. Claude has completely changed my perception of
+ what an LLM can do; it makes autonomy not seem like a total
+ fever dream. I have definitely been curious about how an LLM
+ would react to book club files. I've been daydreaming a little
+ about having it generate scratch artifacts or suggest changes to
+ the format. The goal and the hope for all of this is that we're
+ being verbose about our thinking anyway; that's more or less how
+ deep reasoning works by default.
+ - I actually think that I totally agree. It would be a great fit.
+ I have yet to personally do it, because I've always been just a
+ little bit wary about, like, you know... Well, if I'm writing
+ a program, I want to write it, you know? People often talk
+ about, like, you know, oh, I just want to hand off the boring
+ parts to Claude. But the thing is, if I'm writing in Elisp, I
+ find the whole thing to be kind of fun. I'd be super interested
+ in, you know, just sort of as a point of exercise, seeing what
+ it's capable of. Because I think, I really do think that this
+ would be kind of an ideal environment. It is kind of close to,
+ you know, native-ish, how LLMs think. There's also, like, you
+ know, of course, the, um, the privacy angle. I don't
+ necessarily want to provide a whole bunch of code verbatim that
+ I intend to GPL3. But I believe that Claude kind of has a better
+ policy in terms of what does and does not become training data.
+ I'll have to look into Claude in particular because I feel like
+ that would be my target for it. But yeah, I think that's
+ definitely onto something. I've definitely thought about this.
+ I've definitely been really curious about this.
+- Q: Do you think every Tapa should have its own Bookclub file as
+ well? Or would you rather keep just one bookclub file at the top of
+ the project?
+ - So I think that I definitely would advise that each Tapa have
+ its own book club file. The reason is that, for me personally,
+ the way my brain works, "out of sight, out of mind" is very
+ literal. I find that... sorry, I just saw that I got an email.
+ Case in point, right? Out of sight, out of mind. I'm definitely
+ quite ADHD, and it works to my advantage because it provides all
+ sorts of versatility. This is another great advantage of book
+ club: if you have an ADHD mind like I do, where you love jumping
+ around and working on all sorts of different pieces
+ simultaneously, and you don't like sitting down and doing the
+ same thing all day unless it really latches onto you, you can
+ pivot, and it really rewards the fact that you can pivot. So I
+ find that to be really excellent.
+ -  But to go back to the original question, I would definitely
+ recommend, at least in my circumstance, I find it to be
+ incredibly useful to have each tapa be its own book club file
+ rather than to have a unified file that holds all of your tapas.
+ You can definitely do this, especially if you're using org to
+ organize it hierarchically. It's just sort of a matter of
+ preference and style at that point. So long as you're making a
+ clear distinction between your tapas, that's the main thing
+ that I would recommend no matter what, because the whole hope
+ that I have is that you have a sort of separation of focus
+ between the different you know, the different focuses of your
+ different tapas, they really should ideally feel like different
+ programs so that you're not, you know, getting over yourself,
+ getting ahead of yourself. 
+ - I think that, you know, on that basis, I would probably default
+ to recommending that tapas have their own separate book club
+ files, because ideally they should kind of be different sort of
+ independent but related thoughts. But at the same time, I mean,
+ like, you know, this is coming from someone who like has a
+ billion small, like, you know, I had one giant org file for a
+ long time and then realized that really didn't work for me. So
+ now I have a billion tiny ones. So depending upon how you feel
+ about, you know, should I have one really big org file or a
+ bunch of really little org files? I feel like that more or less
+ gives your answer. I think it's whatever works best for you. I
+ know that far and away what works best for me is having separate
+ files. No matter what, you should have separation of concept
+ though. But however you do that is, you know, is best your
+ judgment call.
+ - ([Sacha]: Oooh, if you're jumping around a lot, C-u
+ org-refile is great for that, set it up with your agenda
+ files)
+ - Thank you! Makes sense! :-)
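+
+For anyone who wants to try Sacha's tip, here is a minimal sketch of
+the setup, assuming your Tapa/bookclub files are already in
+`org-agenda-files` (the `:maxlevel` and completion settings below are
+just example choices):
+
+```elisp
+(setq org-refile-targets '((org-agenda-files :maxlevel . 3))
+      org-refile-use-outline-path 'file  ; complete on file/heading paths
+      org-outline-path-complete-in-steps nil)
+
+;; With this in place, `C-u C-c C-w' (a prefix argument to `org-refile')
+;; jumps to any heading in those files instead of refiling, which makes
+;; hopping between Tapas quick.
+```
+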
+- Q: How do you build habits when it comes to documentation? I tend to
+ produce lots of documentation in one go, then effectively "forget"
+ to do it for long periods of time, and end up playing catch-up, which
+ results in a loss of precision, as you alluded to in your talk. In a
+ work setting, when something is on fire or priorities change, it can
+ be hard to keep discipline. Would love your thoughts, thanks!
+ - A: Yes, absolutely. So far, what I've been doing is that I
+ haven't been making a conscious priority of writing
+ documentation at all, and if that sounds contradictory to the
+ talk, that is correct. What I mean is that when I'm writing
+ code, when I'm writing drafts of my functions, the way I
+ approach it, the way I really emphasize approaching it, is that
+ I want to focus first and foremost on just writing down what my
+ internal monologue is for what I'm doing on that pass through
+ the file. So my documentation is ultimately a by-product of the
+ fact that I am writing down what I'm doing as I'm doing it; I'm
+ more or less just mashing out the stream of consciousness of
+ what's going on inside my head as it's happening.
+ - So let's go ahead and take a look back at the macro. Really,
+ this is kind of cheating, because mostly I would consider it to
+ be self-documenting, but we all know that that in and of itself
+ is a slippery slope. I could believe this would be
+ self-documenting if it were a three-liner; it is not, which also
+ goes to show me that it needs to be split into its own Tapa. I
+ intend to write a Tapa that's a sort of macro builder that
+ automatically does the gensyms for you, something along the
+ lines of that Common Lisp facility that does automatic gensym
+ binding; I can't quite remember what it's called. (A sketch of
+ that kind of helper appears after this answer.) A prior version
+ of this talk had me live-coding that, but it ended up
+ distracting from what I wanted to nail down and focus on.
+ - But really what I do is, let me see here if I can find some
+ sort of... Yeah, so in my research section I lay out what the
+ quirks of all this are. I think my development focuses contain a
+ little bit of what could ultimately be considered documentation.
+ As I'm looking through all of this, I'm realizing that there is
+ material here that amounts to documentation, but it's all a
+ little ad hoc. In part, the design of this particular Tapa is
+ going to end up simple enough that a docstring is sufficient
+ documentation, but that is not the case currently.
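+
+The Common Lisp facility being reached for above is most likely
+`with-gensyms` (popularized by On Lisp and shipped in libraries such as
+Alexandria). As a rough Emacs Lisp sketch of that macro-builder idea,
+something along these lines would work; the `my/` names are
+hypothetical and not part of Squint:
+
+```elisp
+(defmacro my/with-gensyms (names &rest body)
+  "Bind each symbol in NAMES to a fresh uninterned symbol around BODY.
+Use this inside macro definitions to avoid variable capture."
+  (declare (indent 1))
+  `(let ,(mapcar (lambda (name) `(,name (gensym (symbol-name name))))
+                 names)
+     ,@body))
+
+;; Hypothetical example: a swap macro whose temporary variable cannot
+;; collide with the caller's variables.
+(defmacro my/swap (a b)
+  "Exchange the values of places A and B."
+  (my/with-gensyms (tmp)
+    `(let ((,tmp ,a))
+       (setf ,a ,b)
+       (setf ,b ,tmp))))
+```
+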
+- Q: How do you write examples and tests? I think that you mentioned
+ that during the talk, but I couldn't find them on a very quick look
+ at your org file in the squint repo...
+ - A: My use of the word "test" was a little bit creative: it's
+ my validation of the code that I've written. I tend to try to
+ write really small functions and to have really aggressive
+ validation by making sure that when I chain functions in the
+ REPL, each step produces results that are immediately and
+ self-evidently verifiable. Now, this isn't a great excuse not to
+ use a test suite, but it's gotten me pretty far.
+ - What I mean by tests is that in the research sections, I've
+ created a test in the sense that I have created a highly
+ representative case of the way the program ultimately ought to
+ behave. In doing so, I created a small embedded domain-specific
+ language that I have termed Animal Houses. Animal Houses is a
+ markup language with rather simple rules; what you see here is
+ the entirety of the spec for it. It's not a formal grammar or
+ anything, but it is more or less the breadth of everything that
+ needs to be known about how Animal Houses works. I created
+ Animal Houses because it is an ideal and incredibly simple
+ setting for as-needed tests of how Squint ultimately ought to
+ work in practice. So when I'm doing research, I take the text of
+ Animal Houses and insert it into a buffer; I just create an
+ analog buffer, which I called awoo.
+ - And then in my research sections, I will write step-by-step
+ instructions on how to go about a REPL-driven investigation
+ using Animal Houses. For example: does Squint pass LABEL to
+ :with-restriction: correctly? The tests conducted here indicate
+ that it does not. Then I link to a development focus that
+ effectively acts as my bug report, or rather my bug listing, for
+ this particular problem that I've identified. I lay out some
+ criteria for using the REPL, I identify what I believe is the
+ quarantined area where the bug lives, and then the test is that
+ I narrate, step by step, how I reproduce the circumstances
+ around the bug until I ultimately narrow all the way in and
+ arrive at a conclusion.
+ - Yeah, this is the sample text for Animal Houses. This is the
+ spec: not a formal grammar, but more or less the whole of the
+ spec you need to write a parser for Animal Houses. Most of the
+ tests around Squint involve writing ad-hoc parsers for Animal
+ Houses. With it in its own buffer, I find it's an excellent way
+ of testing in an ad-hoc, REPL-driven manner: I just write
+ regular expressions that pull out the pieces of the buffer that
+ represent the different fields and data types associated with
+ the animals and the houses to which they belong. (A tiny sketch
+ of that style follows this answer.) And when I'm engaging in
+ research, my research section is ultimately just me laying it
+ out: I'm thinking to myself, is this working right? I feel like
+ there's something here, something in this area. I ask myself
+ what I'm looking for, then nail down how I'm going to look for
+ it. The research section is the process of working with the REPL
+ to pin down what exactly is going on and come to a conclusion.
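+
+Since the Animal Houses format itself isn't reproduced here, the
+following is only a generic sketch of that ad-hoc, REPL-driven style:
+put some stand-in markup into a scratch buffer and interactively pull
+fields out of it with a throwaway regexp. The buffer name, the markup,
+and the field names are all invented for illustration:
+
+```elisp
+;; Stand-in sample text (not the real Animal Houses spec).
+(with-current-buffer (get-buffer-create "*awoo*")
+  (erase-buffer)
+  (insert "house: burrow\n  animal: rabbit\n  animal: mole\n"
+          "house: nest\n  animal: sparrow\n"))
+
+;; A throwaway "parser" evaluated step by step from the REPL:
+;; collect (HOUSE . ANIMALS) pairs and eyeball the result.
+(with-current-buffer "*awoo*"
+  (goto-char (point-min))
+  (let (result house)
+    (while (re-search-forward
+            "^\\(?:house: \\(.+\\)\\|  animal: \\(.+\\)\\)$" nil t)
+      (if (match-string 1)
+          (push (setq house (cons (match-string 1) nil)) result)
+        (push (match-string 2) (cdr house))))
+    (mapcar (lambda (h) (cons (car h) (nreverse (cdr h))))
+            (nreverse result))))
+;; => (("burrow" "rabbit" "mole") ("nest" "sparrow"))
+```
+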
+- Q: what is the largest project in terms of team size you had the
+ chance to consult and introduce the Bookclub Tapas concept and what
+ have been your experiences with these setups (implying larger
+ applications / solutions a company is working on)?
+ - A: It's been interesting. The largest so far is two people, in
+ a couple of different circumstances. One is the pair of us
+ working in a startup context. We both have rather technical
+ backgrounds; I'm definitely on the software-engineering end, and
+ my partner has done all sorts of different engineering, but none
+ of it specifically in software. What's been really cool,
+ especially as we've built up Tapas and made clear distinctions
+ about what they ought to do, is that even without
+ software-engineering experience he's able to pick up the ball
+ and run with it, because it's all laid out in a way that's so
+ intuitive. He's able to collaborate with me, run with these
+ ideas, and really go for it almost as quickly as I can, just
+ because we've set up a structure where all of the different
+ pieces have really intuitive, intrinsic, straightforward roles.
+ That's something exciting in itself that I didn't really go over
+ in the talk: from a managerial perspective, this is an excellent
+ way of understanding the whole context of what the software
+ stack looks like. It makes things more intuitive for developers,
+ sure, but it makes them more intuitive for everyone; on that
+ basis I can't imagine a better way to work with clients at this
+ point. The other circumstance was working with someone who had
+ hired me on contract. I would use this to go back and forth with
+ them, get a solid idea of scope and function, and do
+ pre-planning as we went into more specifics on what the overall
+ shape of the project ought to be and how it all ought to be laid
+ out. So there's a lot of really exciting flexibility there that
+ I think is really cool.
+- Q: people will also be curious about the mechanics of collaboration:
+ other person uses Emacs and Org? Shipping things back and forth via
+ git / version control? CRDT?
+ - A: Screensharing; I'm stepping through the buffer by hand.
+ Asking the other person to use Emacs+Org is a bit much. I love
+ the idea of crdt and would love to use it with someone someday.
+ It would also be nice to have people thumb through individual
+ Tapas in the stack.
+ - Note: (ah, maybe Org publishing for ease of reference)
+ - Maybe a read-only version of the Org, making screensharing
+ a little bit neater.
+- Q: Have you experimented with something like whisper.el for doing
+ speech-to-text as you think out loud into your Bookclub? Might also
+ be fun to hook it up to Org-capture to link to whatever you're
+ looking at and then save it to your file
+ - A: Have I experimented with whisper.el for speech-to-text as I
+ think out loud into my book club? Now I am going to. I love that
+ idea. That is awesome.
+ - [Sacha]: Even with... I only have a CPU, no GPU on mine, it
+ does capture things a lot faster. And because it actually saves
+ the recording to a WAV, or I guess you can configure it, in case
+ it doesn't recognize something well, you can go back and check
+ it. That's nice. I like that more than a straight speech-text
+ thing. I've been mulling over the idea of having a keystroke
+ save into a background buffer so that even when I'm looking at
+ something else, I can dictate into my equivalent of the book
+ club file. 
+ - [Maddie]: Yes, yes, yes, absolutely. So you can be scrolling
+ through documentation on one screen and musing to yourself: is
+ this supposed to work this way? I see this function; it sounds
+ like what I'm looking for, and it's named like what I'm looking
+ for, but I don't know if the types are quite right or what it's
+ taking in. You can reason through all of this without even
+ writing into the buffer that you're working with. That's
+ actually so cool.
+ - [Sacha]: Or you can tie it into the org capture process so
+ that it can pick up an annotation automatically. Sorry,
+ annotation is the link to the thing, whatever you're looking
+ at. 
+ - [Maddie]: Oh, that's super cool. I haven't hooked any of this
+ up to Org Capture at all, but I really love that idea in and of
+ itself. (A small capture-template sketch follows this thread.)
+ - [Sacha]: Org capture will give you a lot of capture options.
+ You can capture to your currently clocked in heading. So then it
+ just files your note in the right place automatically. 
+ - [Maddie]: Absolutely. I love that. Let me see. I'm actually
+ like writing a note to try that out. I'm definitely going to
+ have to do that. Like the flexibility of that in particular
+ sounds just perfect.
+ - Also related:
+ [https://newsletter.pragmaticengineer.com/p/san-francisco-is-back?open=false#%C2%A7wispr-flow-a-new-modality-for-programming](https://newsletter.pragmaticengineer.com/p/san-francisco-is-back?open=false#%C2%A7wispr-flow-a-new-modality-for-programming)
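+
+A hedged sketch of what the org-capture side of this could look like: a
+template that files dictated notes under the currently clocked heading
+and records a link (%a) to whatever was being looked at. The template
+key and the pairing with whisper.el are assumptions, not something
+shown in the talk:
+
+```elisp
+(require 'org-capture)
+
+;; Hypothetical template ("d" is an arbitrary key): file the note as a
+;; child of the currently clocked heading, keeping a link (%a) back to
+;; whatever buffer/location was active when the capture started.
+(add-to-list 'org-capture-templates
+             '("d" "Dictated bookclub note" entry (clock)
+               "* Dictated note %U\n%?\n\nContext: %a\n"
+               :empty-lines 1))
+
+;; One possible flow: M-x whisper-run (from whisper.el) to dictate into
+;; its transcription buffer, then `C-c c d' to file the text with the
+;; annotation pointing back at what you were reading.
+```
+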
+- Q: I guess a major pro is that it has less friction, as people can
+ do a lot (maybe not everything) in the BookClub Tapas file instead
+ of having to log into gazillions of different systems, each one of
+ them keeping a portion of the information. Did I get that point of
+ view right from your description of the collaboration between you
+ and your teammate(s)?
+ - A:
+
+- i appreciate how easy this is to follow
+- i think i'm already getting an idea of how this comes together
+- Important caveat on this callout: The Emacs community has been really great about this, but this is a pain point of software development as a whole 😛
+- I don't think I've ever written really excellent documentation...
+- i don't think i've ever written even decent documentation
+- I don't know if I have ever written excellent documentation but I do actually enjoy writing it. But partly because I do tend to approach projects the way you are describing in this talk! And I like having a name for this process!
+- one of my ways of writing a mix of tests, examples and documentation is here: https://anggtwu.net/eepitch.html#test-blocks
+- modern world, no time to read or write it anymore
+- A: I'm so glad to see that people are noticing bits and pieces of this that they're already doing! 😊 I definitely found a lot of what I arrived at in things that people were already doing. My hope was to formalize, name, and pull it all together 😊
+- "Clean Code" by Robert C. Martin ("Uncle Bob") is also worth a look, along with his views on documentation and so on.
+ - A: Clean Code is definitely a big inspiration for me, and I would highly recommend just about anyone read it. I don't think all of it is *perfect*, but *all* of it is worth considering
+- i definitely think this has a good balance between complexity and simplicity
+- Thank you! 👏
+- Great talk!
+- Excellent work!
+- great talk
+- Interesting concept! Now I'm thinking about how can I adopt it
+- I definitely noticed the utility of this process for ADHD
+- Would be very cool to record buffer information to effectively bookmark the context for that stream of thought
+- https://newsletter.pragmaticengineer.com/i/177384640/wispr-flow-a-new-modality-for-programming "In the office, every desk is fitted with a $70 BOYA Gooseneck microphone, into which devs whisper"
[[!inline pages="internal(2025/info/bookclub-tapas-after)" raw="yes"]]
diff --git a/2025/talks/gardening.md b/2025/talks/gardening.md
index a7b8c2a2..e8882c7f 100644
--- a/2025/talks/gardening.md
+++ b/2025/talks/gardening.md
@@ -47,6 +47,34 @@ developer and Vi user during university life, now moved to the dark side
of agile coaching as Scrum Master.
Started learning Emacs by chance since I wanted to try it since ages.
+## Discussion / notes
+
+- Q: Have you faced any major problems while using emacs in Windows?
+ - A: Hello, thank you for the message and sorry for the delay,
+ messy days... No major issues till now, but I'm basically
+ using it just like any other text editor, mainly for my Org Mode
+ garden.
+- Q: What do you run when you want to publish content from your org
+ files to your web page on codeberg?
+ - A: Hello, thank you for the message and sorry for the delay,
+ messy days... Basically, as I wrote here
+ [https://marcoxbresciani.codeberg.page/digital-garden.html#garden-my-garden](https://marcoxbresciani.codeberg.page/digital-garden.html#garden-my-garden)
+ I open my publish.el file and evaluate it (M-x ev-b). Then I
+ usually (but it's just a habit) switch to the index.org file
+ and run the export-publish-project command (C-x C-e P p) that
+ will automagically generate all the needed HTML files for the
+ org files that have been updated or changed since last time.
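+
+For readers who have not set up Org publishing before, a publish.el
+along these lines is the usual shape; the paths and project name are
+placeholders, not Marco's actual configuration, and the standard
+export-dispatcher binding for publishing the current project is
+normally `C-c C-e P p`:
+
+```elisp
+;; Minimal publish.el sketch: Org sources in ~/garden/org, generated
+;; HTML in ~/garden/html. All names and paths are placeholders.
+(require 'ox-publish)
+
+(setq org-publish-project-alist
+      '(("garden"
+         :base-directory "~/garden/org"
+         :base-extension "org"
+         :publishing-directory "~/garden/html"
+         :publishing-function org-html-publish-to-html
+         :recursive t
+         :auto-sitemap t
+         :sitemap-filename "index.org")))
+
+;; Evaluate this buffer (M-x eval-buffer), visit index.org, then run
+;; the export dispatcher's publish command; only Org files changed
+;; since the last run are re-exported (Org keeps a timestamp cache).
+```
+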
+- [https://marcoxbresciani.codeberg.page/](https://marcoxbresciani.codeberg.page/)
+ - [https://codeberg.org/marcoXbresciani/pages](https://codeberg.org/marcoXbresciani/pages)
+- Lovely talk, thanks! =)
+- Ok, loved the humour in this, and yes an `outlook' email in this
+ day and age, SIN!! :P
+ - It is fine, the way to freedom is a ladder.
+- Nice talk
+- Thank you every one. Hope this message will reach you. Sorry messy
+ days...
+- I'm loving this talk. I admit that I was influenced by his accent, but the ideas are great
+- i like his philosophy on customization/configuration. (make it your own)
[[!inline pages="internal(2025/info/gardening-after)" raw="yes"]]
diff --git a/2025/talks/graphics.md b/2025/talks/graphics.md
index 34ba2314..78409015 100644
--- a/2025/talks/graphics.md
+++ b/2025/talks/graphics.md
@@ -13,9 +13,29 @@ Emanuel Berg (he/him) - Pronunciation: Swenglish, IRC: lacni, <https://dataswamp
Modern graphics with Emacs with hardware/software acceleration
+This video has no narration.
+
- <https://dataswamp.org/~incal/tmp/greeting.webm>
- <https://dataswamp.org/~incal/tmp/kitty-vt.webm> shows the software can be used from a -nw Emacs
+- didn't expect to see demos today :|
+- I was reminded of the demo scene by this presentation. https://scene.org/
+- does the demo show its code at some point?
+ - It doesn't :/
+ - https://dataswamp.org/~incal/bad-www/index.html
+- this has more than when I previewed it last. Wow!
+- So how to download the source from https://dataswamp.org/~incal/bad-el/src/ ?
+- maybe he likes to be mysterious
+
+- Q: So how do you download the source from that site?
+ - A: There is a link to src/. Maybe I can do a tarball for y'all;
+ if so, I'll put it in that dir.
+ - For wget, use -e robots=off if robots.txt is a problem.
+- Q: How do you get into demomaking?
+ - A: never thought of it that way. is it different from other
+ programming?
+
+
[[!inline pages="internal(2025/info/graphics-after)" raw="yes"]]
[[!inline pages="internal(2025/info/graphics-nav)" raw="yes"]]
diff --git a/2025/talks/hyperboleqa.md b/2025/talks/hyperboleqa.md
index 31f02953..764cb174 100644
--- a/2025/talks/hyperboleqa.md
+++ b/2025/talks/hyperboleqa.md
@@ -62,6 +62,368 @@ day someone said I look 28-years-old, so neither I nor Hyperbole feel
that old. We have gained some perspective through the years, so maybe
I can help you learn something new or see something in a new way.
+## Discussion / notes
+
+- Q: I'm excited to hear your opinion on the current state of using
+ MCP and AI for PKM and PIEs, since they take a lot of the burden off
+ us and ease a lot of the process. Where does Hyperbole stand for the
+ coming days?
+ - A: We haven't yet done anything specific for MCP-based modes
+ but Hyperbole is a toolchest of capabilities for interlinking
+ information across Emacs.  You can use existing link types or
+ create your own with just a few lines of code.
+ - AI is obviously on everybody's mind. We haven't done a lot of
+ integration with any of the popular AI engines, but I think as
+ you'll see through this Q&A session, Hyperbole's function is
+ really to interlink your information everywhere throughout
+ Emacs. And so, whether you're using a chatbot in a specific
+ buffer, you can use hyperbole implicit links, implicit buttons
+ to activate different actions there as well. So sometimes it
+ takes a bit of customization, a small amount of two to seven
+ lines of code to do that. As we get to working with more of
+ these engines, we'll build that into the core part of
+ Hyperbole. But right now, that's left as an extension for users
+ who are heavily using MCP or other protocols right now. We have,
+ for example, integrated with LSP servers for coding, with that
+ interface going through xref, basically using the single Action
+ Key, which is bound to M-RET. You can jump to any of your source
+ definitions from any reference in almost any language in use
+ today, so you can extrapolate from that how this might work with
+ AI as well. And I think you'll see later, when we talk about
+ HyWiki, that we're now enabling plain wiki words to be buttons in
+ Hyperbole. Those could be part of your chat with an AI, and you
+ just click on one and jump right to all your references
+ associated with that terminology.
+- Q: As a normal user who codes and takes notes, I really want to
+ deep-dive and learn Hyperbole, but I always end up winding back to
+ Embark and Org mode being the better system. To me, Hyperbole looks
+ like an over-engineered (or over-configured) system whose individual
+ parts other packages do well. And outside Emacs there is no system
+ supporting Hyperbole, nor any usability.
+ - A: Listen to this Q&A session and take it one bite at a time. 
+ Across time, you will see how the parts of Hyperbole integrate
+ together and why they are all there.
+ - Right, Hyperbole is large, but there are reasons behind that:
+ we're just trying to link all your information in Emacs. For
+ example, you can take any Lisp expression, even a variable like
+ Hyperbole's hyperb:dir, and when I hit the Action Key, M-RET, I
+ see the value of that variable in the minibuffer. I could just
+ as well take any other expression, take the outer parens off,
+ change them to angle brackets, and now that's a live
+ hyperbutton. It could be in a comment in a programming buffer;
+ in this case it's in a Koutliner buffer, which is an
+ auto-numbered outliner that's part of Hyperbole. So let's try
+ this and press M-RET: it ran occur and found all the occurrences
+ of buttons, and I could jump to any of those lines directly by
+ hitting M-RET in that buffer as well. So all your text, all of
+ what we call implicit links, becomes live in Hyperbole. You
+ didn't have to learn much: if you know a little Lisp, or how to
+ type any expression, you just change the outer brackets and all
+ of a sudden you have hyperbuttons. (A couple of tiny
+ action-button examples appear at the end of this answer.) So you
+ can learn Hyperbole a little bit at a time; although it seems
+ daunting at first, because it has so much functionality and a
+ very large and rich architecture, we teach people one piece at a
+ time.
+ - So just to continue on that a little bit, implicit buttons are
+ buttons that exist just from the text pattern in the buffer. So
+ you saw an example of changing Lisp into implicit buttons right
+ there. I could do keystrokes. I can just type them out in my
+ buffer and surround them with braces. So here's something,
+ let's see, this is actually a command in the K Outliner to jump
+ to the cell numbered four. So let's just do that. And it took
+ me right there, right? So I'm just pressing M-RET to activate
+ these buttons. Similarly, any sort of, this is a complex
+ example, but any path name I can surround with double quotes,
+ and it's a live hyperbutton. In this case, I want to jump to a
+ path name called readme.md, but it's in a directory that's
+ specified by an actual list variable. And then I want to go
+ directly to a headline within that file called Hyperbole manual.
+ And within that headline, I wanna go to the eighth line relative
+ to that. So all I have to do, M-RET again, and boom, I'm in
+ that, I'm directly linked to that. And Hyperbole has ways that
+ you can just split your windows like this and create that
+ reference in the source buffer right there. You just press a few
+ keys and it'll embed that link. We'll see that a little later.
+ - Another example, so all of these buttons, if I just show you
+ here, you can press C-h A anytime. and it will show you exactly
+ what M-RET will do in that context. In this case, it's an
+ implicit button, and it shows you even where the button starts
+ and ends, what type of action it will run, it's a link to a
+ file line, and then what arguments it takes. So Hyperbole
+ extracts all this meta information just from the text in your
+ buffer and displays it to you conveniently so you can know
+ before you ever touch a hyper button if it will do something
+ that you want it to do. Here we have a fairly advanced button
+ that's very simple to do. You just specify a bug in Emacs that
+ you want to reference to. Notice no delimiters, just bug pound,
+ whatever, M-RET. And I'm in Gnus reading the conversation for
+ that bug. And I can just, you know, move through all the
+ conversation. I can quit out of there and go back to where I
+ was. So very, very easy to use these implicit buttons because
+ they're already there throughout your Emacs buffers. I
+ described the C-h A, what that does. And there's other types of
+ buttons that we can get into as questions go on, but you can
+ create your own explicit buttons that have a little slightly
+ different delimiter than you see in the implicit buttons. And
+ this one I just put in here to show you: if you use it, it goes
+ to the Hyperbole to-do list, which is an Org buffer. But I wanted
+ to show that, similarly, we have implicit buttons for the TODO
+ keywords in the file. When we hit M-RET, it just changes the
+ state of that TODO, and I can cycle through those. Even better,
+ with a prefix argument, if I have multiple sequences of TODOs
+ (because there's Bob and Mats who maintain Hyperbole), I can
+ shift to Bob's TODOs with C-u M-RET and then cycle through those
+ states. So it's very, very easy to use, and something that I
+ think is a little bit more difficult to do in Org without it.
+ - So that's an explicit button where I had to actually say I want
+ to create this button, and I had to specify what type it is. If
+ I show you the information there again, you see it has a little
+ different type called a keyboard key, which runs just the key
+ sequence. So you're starting to see already that explicit
+ buttons have a type that's connected to an action that an
+ implicit button can do as well. So all of this ties back
+ together.
+ - And finally, there's a homepage that Hyperbole has, a personal
+ homepage that you have. You hit C-h h, which is our mini-buffer
+ menu, and then you hit what is it, b for button file and then p
+ for personal file. And that just brings you to basically a set
+ of links that you can create buttons in any format you want.
+ There's no structure that you see here. But the nice thing is
+ that all of these buttons that have these names, as we call
+ them, with the delimiters here, can be referenced now as what we
+ call global buttons wherever you are in Emacs. So I'm in a
+ separate buffer here and say I want to jump to that to-do button
+ that's labeled td on line 10 down there. No matter what I have
+ on screen, I can hit C-h h g for global button, a for activate,
+ and then it gives me a list of those. So I know it's td, I just
+ put td in. Okay, that's a path link problem I have, but when I
+ fix the link, it would go to it. So you can create buttons that
+ you can access in any mode, anywhere, and just give them quick
+ names, and it's very easy. So that kind of gives you an idea of
+ how you can get very productive with hyperbole with just a few
+ simple techniques. 
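+
+To make the angle-bracket idea from earlier in this answer concrete,
+here are two lines of the kind of text that becomes live once Hyperbole
+is loaded; pressing M-RET (the Action Key) with point on either of them
+runs the enclosed expression, and C-h A describes what a press would do
+first. The variable and the search string are arbitrary examples:
+
+```text
+<fill-column>       press M-RET here to display the value of fill-column
+<occur "defib">     press M-RET here to run occur over the current buffer
+```
+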
+- Q: I've been using "activities.el" and "Bufferlo" to save
+ dedicated workspaces (open buffers, window positions) in tabs and
+ frames for tasks/projects across Emacs sessions. Could I do
+ something similar with Hyperbole?
+ - A: We plan to have Hyperbole activities.el integration in about
+ another month, so stay tuned for that.  In the meantime, there
+ is the Win/ minibuffer menu, that lets you save window and frame
+ configurations by name or onto a window config ring similar to
+ the kill-ring.
+ - Yes, you can. And activities is a nice package from alphapapa.
+ We've actually been working with it lately. So we're probably
+ in the next month or so we'll have a specific integration to
+ activities built into Hyperbole. But right now, we don't. But
+ of course, you can call any of its functions or key bindings
+ using the techniques that I just showed you earlier. But what we
+ do have built in if you go to the menu again. and you see C-h h,
+ and then there's a w, Windows, WinConfig menu, and there's two
+ types of window configurations that you can save here. They are,
+ right now, they're per Emacs session. They're not stored
+ beyond that, but we'll probably add that in as well, or we'll
+ use activities for that. So the two types are: you can either
+ just save a window configuration in a frame... actually, it
+ stores the frame configuration to a ring, just like the kill
+ ring. So you have the three commands at the right: you can save
+ with s, you can pop one off the ring with p, or you can just
+ yank and keep cycling through with y, and it will restore the
+ frame configuration that you saved. Similarly, you can just do
+ it by name: you say "I want to add a name", give it a name, say
+ winc, and it stores it there, and then you can get back to it by
+ name as well. So it's fairly easy to use and, again, integrated
+ into the same simple menu system.
+- Q: How well do Hyperbole and org-mode work together? Is there any
+ kind of integration?
+ - A: Hyperbole is very well integrated with Org mode and most
+ Hyperbole capabilities are live within Org mode buffers.  We did
+ an EmacsConf talk in an earlier year about the integration. 
+ Find it here: 
+ [https://emacsconf.org/2022/talks/hyperorg/](https://emacsconf.org/2022/talks/hyperorg/)
+ - How well do hyperbole and org mode work together? Is there any
+ kind of integration? Yes, in fact, that's really good. I'll
+ just mention something. Let me go back to my homepage. I just
+ stored that here. So we gave a talk at an earlier Emacs
+ conference right here on org and hyperbole integration. So
+ that's a good one to go back to. And I believe it's in the
+ files included with Hyperbole as well. So you can learn various
+ techniques of how the action key helps you in org. It does
+ special things in tables. And there's some nice support for,
+ for example, working with code blocks. Let me see where that is.
+ Okay, so right back here. So you can run them with the action
+ key. You can refresh the output and do things like that. So
+ again, if I just hit C-h A, it'll tell me that it's in smart
+ org, and it'll give me all the different contexts that that
+ operates within. So there's a lot that it does in here. And you
+ can see it would point on the dir value of a code block
+ definition that will actually display a summary and all sorts of
+ functionality. So the integration is quite tight. And one of the
+ things we do since M-RET is used in org, we have a customization
+ setting, c then o. And you have these three settings where you
+ can say, I want M-RET to... I want hyperbole to control that
+ and everything that the action key does I want to happen, or I
+ only want hyperbole to control when I'm over a hyperbole
+ implicit/explicit button, or I want org to control that key and
+ never use hyperbole. So you just set that once, it's persistent
+ across sessions and you're good to go. And again, it's built
+ right into the menus.
+ - But even following that we've the latest addition to hyperbole
+ is something, and this is the first time we're really showing
+ it publicly, is the Hywiki, which is a new subsystem as we call
+ it, and this is I think the best wiki capability in Emacs. Now
+ what it does is it automatically highlights... Let me turn it
+ on. I have to turn on hywiki mode. And you see those wiki words
+ now got highlighted, so any any wiki word which is the
+ capitalized alpha word you know, so you can have multiple
+ capitals in there and it'll get recognized, can be used as a
+ wiki word. So for example when I just type HyWiki here, it
+ automatically recognizes it, and you see it turned it into a
+ hyperlink button, which again, C-h a will tell me exactly what
+ it does there. But I can just hit the action key, M-RET, and
+ it'll display my hywiki.org file. All wiki pages are org files.
+ So we're using that for the wikis, and you have You can export
+ an entire wiki using essentially the org export capability with
+ a little extra set of features that we've added in, but let's
+ say, even better. You see I have this heading here, so let me
+ just change this. You go back here, and I'll say go to heading,
+ so you just put a pound on it, and now that whole thing is a
+ reference to a specific org section. Notice there's no org IDs
+ here. There's nothing other than the text that you're seeing.
+ There's not even a delimiter. So we have automatic implicit
+ hyperbuttons being added in any buffer; it could be a comment in
+ a programming buffer, and you don't have to add anything at all.
+ I'll show you how to create a new page in a minute. But you see
+ I can link to any org section without any IDs. And then I can
+ also do like org-roam does, but without the indexing or database
+ that it requires. I can scan over all of my wiki files and
+ headings. find a match really quickly. So we can get into some
+ of that a little later as well. But, you know, very convenient.
+ There's nothing that you change on org to do this. So how do I
+ create a wiki word? Well, let's say I wanted, you know, wiki
+ word for me. So that's already, that was a wiki word, but now
+ this is a new one. So you see it doesn't highlight because I
+ haven't created a wiki page yet. So all I hit is the action
+ key, and boom. Now it created it as a new wiki word. It created
+ the .org file. If I don't edit this file, it won't save it,
+ and it won't become a wiki word, in case it was an accident. But
+ let's just say I want to save it; I add a heading, and that's
+ it. I'm just in Org mode. Now, anytime that HyWiki mode is
+ active, in essentially any buffer, I can type that word out and
+ it'll recognize it. Notice so that's not a wiki word. So it's
+ highlighting and it's unhighlighting right as I type. So,
+ again, you can embed these as org links in org. There's a
+ special format like this, HyWiki word that you can make an org
+ link if I was in org mode, just like that. So there's all sorts
+ of compatibility, but basically it's just words, and HyWiki
+ takes care of the rest for you. So there's a directory where
+ all these, it's HyWiki, hywiki, ~/hywiki is the default place
+ where all these would be found, and there's a menu now in
+ hyperbole for hywiki, h, and you can see, it has a lot of
+ capabilities. But I can say, b, go into the directory of all the
+ files, just pull them up, and any of these you'll see... Let
+ me give you one like this. Okay. So you can see the other wiki
+ words being highlighted in here. It's very fast too. There's
+ almost no delay for anything, and yet very flexible, and you
+ have this ability where you could type emacs#section-1-2 and if
+ you didn't have delimiters around it, but you can put any
+ delimiters like double quotes or parentheses, and then it'll
+ match without you having to change the header at all with the
+ spaces included, and all of those will get recognized. I don't
+ know if the section exists right there. So anyway a lot of
+ capability you can see that here where I did the hy... it
+ actually highlights as an org link because it is an org link,
+ and it'll operate just like any other org link even though
+ it's a hywiki word link as well. So very powerful stuff and
+ totally integrated with Org Mode throughout. Great. 
+- Q: Are there any talks from this year's emacsconf that discussed
+ things that would work well with Hyperbole?
+ - A: Had to work yesterday so I haven't followed the talks. Pick
+ your favorite mode/type of information. Can Hyperbole work with
+ that? The answer is yes.
+ - Demo of how to create an implicit button type.  See
+ documentation here:
+ [https://rswgnu.github.io/hyperbole/man/hyperbole.html#Creating-Types](https://rswgnu.github.io/hyperbole/man/hyperbole.html#Creating-Types)
+ - No. Unfortunately, I had to work yesterday, so I haven't been
+ following the conference as much as I do. Maybe somebody else
+ could comment on that. But I think, you know, again, it's like
+ pick your favorite mode, pick your favorite type of information.
+ Can hyperbole work with that? You know, the answer is almost
+ always yes. So, you know, if I show you just a little bit, if I
+ show you some of these implicit button types, just so you know
+ the amount of code involved to create a type. So here's like a
+ mail, recognizing an email address as a button. It's a little
+ long, so that it creates a lot of things, but you know it's
+ less than 15 lines of code for that. Path names are complicated,
+ so that's a longer one, but let's look at... here's one
+ recognizing a bibliography entry. So it can be between two and
+ 20 lines of code to create an entirely new button type. You
+ create it once, just like a defun, except it's done with this
+ macro, defib (define implicit button type), and you add it to
+ the set of types; then it's part of your hyperlinking system
+ forever. (A rough sketch follows this answer.) So say you got dumped with
+ 5,000 documents that were in this weird text format, and they
+ all had cross-references among them, but it was, again, using a
+ weird format. You could just write your own little type for
+ that, and then those 5,000 documents are hyperlinked for you
+ every time you're browsing them in Emacs automatically. So we
+ do that all the time, create small things, but all of these are
+ built into Hyperbole. Markdown links, texinfo links, all of
+ that's automatic. I could even be in a shell mode, and I just
+ say ls, and these are hyperlinks that Hyperbole understands,
+ right? It just jumps right to the file. So grep -n, you know,
+ looking at any line numbers, you don't have to remember all
+ these different commands anymore. You just hit M-RET, and
+ Hyperbole does the right thing in all these different contexts,
+ including following cross-references in code. So I would say
+ that's your answer. Most things that people are talking about,
+ we've already probably integrated with Hyperbole or with a
+ little bit of custom coding. You can do it. 
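+
+As a rough illustration of the "two to twenty lines" point above, here
+is the general shape of a small custom implicit button type, modeled on
+the ones shipped with Hyperbole (hibtypes.el is the authoritative
+reference). The type name, the ISSUE-1234 pattern, and the
+issue-tracker URL are all hypothetical:
+
+```elisp
+;; Sketch: make text like "ISSUE-1234" anywhere in a buffer into a
+;; button that M-RET follows to a (made-up) issue tracker URL.
+(defib issue-reference ()
+  "Display the web page for an issue reference like ISSUE-1234 at point."
+  (save-excursion
+    (skip-chars-backward "-A-Z0-9")
+    (when (looking-at "ISSUE-\\([0-9]+\\)")
+      ;; Tell Hyperbole where the button's label starts and ends.
+      (ibut:label-set (match-string 0) (match-beginning 0) (match-end 0))
+      ;; Invoke a Hyperbole action type to display the URL.
+      (hact 'www-url (concat "https://issues.example.org/"
+                             (match-string 1))))))
+```
+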
+- Comment: Interesting, but the many different link formats make
+ reading and analyzing my notes much harder and less usable outside
+ Emacs.
+ - Well, the different formats you mention, like angle brackets or
+ curly braces, are just so that you can utilize many different
+ types of buttons; if you just want to use key sequences, there's
+ only one markup format. With Org, you've got the square brackets,
+ which are consistent, but in order to have different types you
+ have to type a prefix name, like you see the HY prefix for the
+ HyWiki buttons in Org mode. So I
+ think the trade-off is pretty much the same, but Hyperbole
+ always, always works to minimize the amount of markup. Markdown
+ is pretty simple. A lot of people like that. But I think you'll
+ find in hyperbole texts, they read just like regular language. I
+ mean, the delimiters are fairly invisible. So I'm not sure what
+ the issue is there. And again, you can choose your own. You can
+ make your own types with your own delimiters. There's even a
+ custom macro that instead of using that defib where you have to
+ type out Lisp code, you can use regular expressions. And in one
+ line, you can define your own type of button with its own
+ delimiters. So, you know, depending on what works well for your
+ eyes, you can make Hyperbole adapt quite well to that. And
+ again, if you start using the HyWiki, there's literally zero
+ markup on that. So you're just reading text, and when you want
+ something hyperlinked, it's like a glossary or a dictionary
+ entry. It's just there, and it's just highlighted in the text.
+ So I don't see much barrier to using it with many different
+ types of documents.
+- Q: Is there any doc on Hyperbole's design and architecture?
+ - A: See this very interesting AI-generated document with a bunch
+ of diagrams covering Hyperbole's architecture: 
+ [https://deepwiki.com/rswgnu/hyperbole/1-gnu-hyperbole-overview](https://deepwiki.com/rswgnu/hyperbole/1-gnu-hyperbole-overview)
+
+- interesting but the many different link formats makes
+ reading/analysing my notes much harder, and less usable outside
+ emacs
+- Hyperbole is designed to minimize the markup necessary on
+ hyperbuttons and with HyWikiWords there is literally no markup.  Org
+ has uniform link delimiters but requires different link prefixes to
+ embed different link types.  Hyperbole uses different delimiters for
+ different types instead, but they are always easy to read and not
+ heavyweight like some Org markup is, e.g. drawers and IDs.
[[!inline pages="internal(2025/info/hyperboleqa-after)" raw="yes"]]
diff --git a/2025/talks/juicemacs.md b/2025/talks/juicemacs.md
index d724ad33..1a5f8c79 100644
--- a/2025/talks/juicemacs.md
+++ b/2025/talks/juicemacs.md
@@ -66,6 +66,103 @@ during the journey, including how three interpreters out of four (or
more?) in Emacs are implemented in Juicemacs and how speculative
compilation can make some optimizations possible.
+## Discussion / notes
+
+- Q: Sorry for the explain-for-CL-user question - is what juicemacs is
+ doing analogous to issuing a new declamation then doing something
+ like (funcall (eval (function-lambda-expression
+ #'my-sbcl-function))) ?
+ - (Thanks, I had been confused about JIT a little bit)
+ - A: I actually know very little about CL (the benchmarks come
+ from an article linked in elisp-benchmarks). Personally, I think
+ the difference between Juicemacs and a CL implementation like
+ SBCL is that most CL runtimes I know of compile code ahead of
+ time, even in the REPL, while Juicemacs is a JIT runtime and
+ tries to gather statistics before compiling things (mostly
+ through Truffle). For function calls, Juicemacs has several
+ optimizations, like assuming unchanged function definitions (and
+ recompiling when one changes), and caching functions produced by
+ `#'(lambda () ...)` constructs.
+- Q: What's the inspiration behind the name Juicemacs?
+ - A: Since it is in Java, I wanted the name to begin with 'J'.
+ Since juice is humorously not solid, I chose that name :)
+- Q: Do you think the GC of Juicemacs will have similarities with the
+ GC iterations of GNU Emacs (such as IGC)?
+ - A: I am very much looking forward to IGC but haven't tried it
+ yet. The difference between IGC (using MPS under the hood) and
+ JVM GCs is that the MPS used by Emacs is conservative (GC term)
+ and not "precise" (also GC term), in that it guesses what
+ machine words on the stack are actual objects, but otherwise it
+ should be a very competent GC.
+- Q: Just reading the blog - your experiments with Emacs are so
+ extensive :D how'd you get started? Have you experience writing
+ text editors?
+ - A: Thanks! Currently, Juicemacs is mostly about an ELisp runtime
+ though and has very little actual "text editor" things. (It
+ has proper elisp buffers, but not display in any way - no
+ GUI/TUI). And with a functional elisp runtime, an actual editor
+ can (1) be fairly easy if you ignore a bunch of things mentioned
+ in the blog, (2) or very very hard if you want to replicate
+ Emacs. And, sadly, no, I don't have experience previously
+ writing text editors, so my current plan is to delegate
+ rendering to GTK (and Pango for low-level rendering to support
+ overlays) (and other programmers by using a proper IPC protocol
+ :-) ), and delegate editing to ELisp (which is also what Emacs
+ does).
+ - For getting started, I don't know? I've already started
+ experimenting with a bit more things. Basically it is all about
+ replicating and learning from other implementations, like Emacs
+ and GtkTextView (and GraalJs for JIT compilers) (oh! and VS
+ Code, I just borrowed their intervalTree.ts for overlays): it's
+ all there, written by competent people. And all you need is
+ maybe read them all? ;P (Experimenting with Emacs and learning
+ about it (and crashing it sometimes) is very fun, by the way.)
+ - Here is a little devlog:
+ [https://juice.zulipchat.com/#narrow/channel/535506-gui/topic/Tiny.20progress.3A.20GUI.20dev.20log/with/562157361](https://juice.zulipchat.com/#narrow/channel/535506-gui/topic/Tiny.20progress.3A.20GUI.20dev.20log/with/562157361)
+ , in which I am trying to get a GUI (and after that maybe an
+ editor) working.
+ - (original question-asker) -> So: read read read, smash smash
+ smash. Love it :D and zulip is a nice idea for organising public
+ engagement with a project, looks v cool. Thanks for sharing!
+- Q: <emarsden> GraalVM is able to run C code via LLVM; I wonder
+ whether it would be feasible to use some of the existing Emacs code
+ in that way
+ - A: (Came across this interesting question on #emacs IRC.)
+ Actually, Java has added FFI (or "Foreign Function and Memory,
+ FFM) API very recently, so we can directly run (dynamically
+ linked) C code without needing LLVM or GraalVM in Java. However,
+ the thing I'm afraid of is that, Emacs is very complicated, and
+ it seems rather impossible to just use "some" of its existing
+ code without incorporating a whole GNU Emacs... (For example,
+ even passing a cons list from Java to C code is very hard, since
+ we need to now care about ABI compatibility. Not to mention that
+ in many places Emacs just assumes it is run with its built-in
+ GC.) But, speaking of FFM, I think we can make very good use of
+ it to support Emacs dynamic modules (right now I'm also
+ listening to the Emacs PDF reader talk, which is also a dynamic
+ module). As far as I know, the API interface of Emacs dynamic
+ modules is very clearly decoupled from Emacs internals (when
+ reading /usr/include/emacs-module.h, which seems to make its
+ `struct emacs_value` opaque, meaning that we can use any
+ pointer, or even Java object references in that pointer), and we
+ should be able to support any dynamic modules out there with
+ FFM.
+- [https://codeberg.org/gudzpoz/Juicemacs](https://codeberg.org/gudzpoz/Juicemacs)
+- The blog article is now online:
+ [https://kyo.iroiro.party/en/posts/juicemacs-exploring-jit-for-elisp/](https://kyo.iroiro.party/en/posts/juicemacs-exploring-jit-for-elisp/)
+- Wonderful explanations!  Thank you for taking the time to share your
+ wisdom.
+- A very impressive project! Thanks for presenting it
+- [https://juice.zulipchat.com](https://juice.zulipchat.com)
+- Very exciting project!
+
+- yet another exciting emacs clone lets goo!!
+- I really want to end up with a formal specification of emacs lisp one day
+- juicemacs elisp is surprisingly far along
+- your jit compiled elisp is impressive
+- Thanks, I know a little more about JIT now !
+- Kana here! Thanks! Please feel free to ask any question. I'm also trying out a Zulip chat server for maybe lengthier discussions: https://juice.zulipchat.com .
[[!inline pages="internal(2025/info/juicemacs-after)" raw="yes"]]
diff --git a/2025/talks/llm.md b/2025/talks/llm.md
index f9b5633a..402d1910 100644
--- a/2025/talks/llm.md
+++ b/2025/talks/llm.md
@@ -33,7 +33,77 @@ websocket, vecdb, ekg, and more). LLMs have already transformed how many
people write and edit text. This talk explores the major workflows that
have developed and examines what these mean for Emacs.
+## Discussion / notes
+
+- Q: My biggest question with AI code editors trying to integrate with
+ Emacs is -- are the AI code editors able to read unsaved buffers
+ and not just saved files?
+ - A: A great thing that I did not mention is that if you have unsaved buffers, and when you're actually editing most buffers are unsaved, you really need something tightly integrated with Emacs to deal with that. Things like Copilot and gptel, which I demonstrated, and things like Ellama, will all work with unsaved buffers, because the input they work from is the buffer as opposed to a file. Things like Claude Code, Gemini Code, et cetera are working with files; they have no idea what is going on with your buffers. It could be that you can solve this problem with MCP, which gives the coding agent a way to see something in particular, in this case Emacs and the state of your buffers, but I don't think that's a particularly great solution if that's how you want to work. If you're in the Claude Code kind of world, where things are happening basically through a terminal, you typically would not be doing a mix of things; you work in one place or the other, maybe switching between them, but not both at the same time. You tend to fall into either editing outside the editor or editing inside the editor, and I find myself switching between the two when I use those kinds of tools. [Host: David, let me interrupt you for just one moment to read out the question we're answering: my biggest question with AI code editors trying to integrate with Emacs is, are the AI code editors able to read unsaved buffers and not just saved files?] Yes, thank you for reminding me; I will read the questions from now on. But yes, those are my thoughts on the interesting question about unsaved buffers.
+- Q: Personally I don't agree with the comment you made about VS Code
+ usage dying out because I see companies/products pushing for
+ tightly-integrated VS-Code agents/products like Windsurf. Thoughts?
+ - A: It's really hard to be certain of anything; things are changing very fast and it's hard to predict the future. But the trend I see is that the outside-the-editor experience, where you just instruct a model what to do, keeps getting better, and as long as that keeps improving I think it will lessen the demand for tightly integrated editing experiences. A lot of people, especially in corporate environments, will use whatever makes them most productive, and right now it's not clear that will be the agent-based, command-line-centric way of doing things, but if the trend continues I think it probably will be. So we'll have to see. I don't think your opinion is unreasonable; I'm cautiously saying I think it's going to be the opposite, but let's reconvene in a year and see what happens.
+- Q: Do you have any thoughts about the environmental cost of using
+ LLMs - either the training of models we can download and use
+ locally, or the larger, commercial models used from the cloud?
+ - A: I'm on social media probably a little more than I should be, and I see a lot of concern there about the environmental costs of using LLMs. I've looked into this because I also care about keeping my personal environmental footprint down, and I don't want to blow that out of the water by using LLMs so much. I think the concerns are mostly overblown. In aggregate, data centers account for a few percent of total energy use in the US, and LLMs now account for something like 20% of data center usage, which is a lot, but those data centers are doing lots of things and all need cooling. Per prompt, the costs are relatively small: people have said online that it's a few bottles of water per prompt, but it's actually a small fraction of that. So the answer isn't nothing, but if you want the most bang for your environmental buck, the best thing you can do is probably take fewer flights and the like; personal LLM use is pretty marginal at the moment. We do need to think about the total cost of humanity using all of this, and if you look at water or energy use in total, it's really corporations driving it, so there's a lot of leverage in making that more efficient, as opposed to personal use. It's wise to be cautious, but I think it's okay, at least for personal use.
+- Q: I must say, I liked your conclusion, but I differ insofar as you
+ said that VS Code differs from Emacs because the former is not as
+ easy to adapt as the latter. Why should Microsoft not adapt VS Code
+ as we adapt Emacs for the new era of coding? And why would VS Code
+ be harder hit? Could you please elaborate on this point? Thx!
+ - A: Maybe I wasn't as sharp on this point as I could have been. The core of what I'm saying is that I believe there will be a trend away from editing, and if we are editing less, people will be in editors less: less in VS Code, and probably less in Emacs too. VS Code is extensible to some degree, but the people using Emacs have used it for a long time and are going to keep using it. I speak for myself, but I know a lot of people here are like this: we have a lot of momentum to keep doing things in Emacs, especially because we already do so much in it. We keep to-do lists with Org mode, some people read email, some people run shells in Emacs, and all of that makes Emacs a better environment if you want to do various editing-like things. Because Emacs can edit so many kinds of things, I think it will naturally stay a bit more useful than VS Code, which people really only use to edit code; if people find it less useful to edit code, VS Code will be harder hit than Emacs, because that's its whole reason for existing, it's in the name. Emacs can do so many things, and people use it for so many different things, that I think it will be a bit more resilient. As I said in the presentation, for those of us using Emacs, it's everywhere for us: not everyone is an "I live in Emacs" person, but whatever you use Emacs for, it's the thing you reach for to do that thing.
+- Q: Do you think that we are falling behind in productivity as Emacs
+ users? Compared to all these VSCode forks that have 1000 buttons and
+ textboxes everywhere (i.e. much richer UIs which are basically
+ webpages).
+ - A: I do think Emacs is falling behind in some ways; it's definitely showing its age a little, especially around richer UIs that are basically web pages. This is one of the big problems Emacs has: it uses a much more ancient way of doing UIs that is not particularly flexible or comfortable for any modern UI coder. You can get some amount of UI richness in Emacs, but it's pretty limited. If there's one thing I could wish for, it's that Emacs could be built on top of something like Atom was: a web framework that lets us write genuinely rich UIs in a modern style, using things like CSS, instead of what Emacs lets you do today. That is an advantage of VS Code and editors like it. That said, Emacs does a good job of eventually catching up to what people are doing in other editors; other editors often get there first, but there's a lot of momentum to keep Emacs fresh and modern. (Host: in the database world, with technologies like Apache Cassandra, we have the idea of eventual consistency; with Emacs we seem to have eventual feature parity: if a feature stays desirable long enough, Emacs will eventually grow it.) I hope that idea is correct, because I do want Emacs to continue to succeed, and personally I do not feel myself falling behind in productivity using it. That said, there are a lot of ways Emacs can and should improve on this front, and many of them are pretty fundamental, so I hope people pay attention to those lower-level Emacs things that would let packages do richer and better things.
+
+- Q: I've been using Claude Code extensively. I recently switched to
+ Agent Shell with Claude Code. Have you tried it, what are your
+ thoughts?
+ - A: I have tried agent-shell. I recorded this video about three months ago, before agent-shell existed; if it had existed, I probably would have demoed it too. Agent-shell is great in that it uses comint, which is how I think most Emacs users would prefer to interact with something like Claude Code or similar tools: you get a comint buffer, and on the back end it speaks a protocol to talk to Claude Code and other agents. The trade-off is that this has problems: for example, you don't get completion of slash commands, and in Claude Code you can get a visual representation of the context window, which, at least last time I tried, you couldn't do in agent-shell. It's progressing rapidly, but it's not as rich in functionality as using Claude Code directly. On the other hand, because it lets Emacs be Emacs and uses comint, it's a much better experience for actually giving instructions. To me, the sweet spot is still to do your editing in Org mode and then use Claude Code in its more shell-like, terminal-oriented form, so you don't have to type much there because your real typing happens in Org. But agent-shell is a great step forward, it's quite good to use, and I personally use it a lot.
+- Q: In terms of agent selection, what has your experience been with
+ different agents, and have you had any success with hosting your own
+ models and using open weights? 
+ - A: Many people have many different opinions on this. Most people I know would say Claude is probably the best for coding right now. Gemini can be hit and miss even with 3.0, but Claude is quite good, and 4.5 Opus is actually relatively cheap compared to the previous 4.1 Opus. There are other models out there, but most people just stick with Claude because it's reliable, it's very good, and nothing is obviously better. DeepSeek is pretty good as well, and much cheaper; I've had some good luck using it locally. The problem is that my day-to-day personal machine isn't powerful enough to run anything locally, and while my work machine is, I can spend my company's money at will on more powerful hosted models, so there isn't much incentive for me to run locally. I haven't heard of local models being incredible, but you can get reasonable quality with them, especially for relatively simple things. They also tend to be slower than hosted models, which simply have more horsepower and can churn through tokens a little quicker.
+- Q:   I'm reading angst in your thinking about AI/editing.  What are
+ you excited about?
+ - A: I think there are still possibilities. Yes, people are going in a relatively obvious direction with LLMs right now, but there are lots of clever opportunities to do things we couldn't have thought of: things that are useful in ways that aren't obvious yet, and I'm still excited about using them in ways that are genuinely helpful and different from the norm. I'll give an example, something I intend to post on Reddit in a few days: I have an extension to Eshell where you can prefix a command with "@" and just describe what you want to do, and it will substitute the command you were thinking of. I never remember how to find a file in a directory tree recursively; it's find, with a -print in there somewhere, and some smart people remember this, but I am not one of them. With something like this you just type out "find me this file" and it substitutes the correct command. There are a lot of little tweaks like that: the AI can be there for you when you want it and stay out of your way when you don't. This is where Emacs can really shine; it can take advantage of LLMs while staying true to its editing experience, because it's not forcing you to use LLMs all the time.
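+
+The "@" idea in the answer above can be sketched in a few lines of
+Emacs Lisp. This is a hedged illustration, not the speaker's actual
+package: `my/llm-suggest-command` is a hypothetical stub standing in
+for whatever LLM client you use, and wiring the function into Eshell
+input is left out.
+
+    (require 'subr-x)                     ; for string-trim
+
+    (defun my/llm-suggest-command (request)
+      "Return a shell command for the natural-language REQUEST (stub)."
+      ;; A real version would call an LLM client here.
+      (read-string (format "Command for %S: " request)))
+
+    (defun my/eshell-at-expand (input)
+      "If INPUT starts with \"@\", replace it with an LLM-suggested command."
+      (let ((trimmed (string-trim-left input)))
+        (if (string-prefix-p "@" trimmed)
+            (my/llm-suggest-command (string-trim (substring trimmed 1)))
+          input)))
+
+    ;; Example: (my/eshell-at-expand "@find foo.txt under this directory")
+    ;; might return "find . -name foo.txt".
+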
+- Q: Why does it matter to have a richer UI? All that is left is
+ basically writing and getting the results.
+ - A: This is perhaps a response to my complaining earlier about Emacs not having a richer UI, but I think it does matter for all sorts of things. It's hard to explain succinctly, because with UI I'd really have to show you. Take something simple like errors: with Flymake I have options to show the error inline by underlining things and adding a little message, but that message often doesn't appear quite right. Another example: I program in Python a lot, and I find it very hard unless I have the little vertical lines showing the indentation levels. There are a couple of packages that do that, and none of them do it particularly well, because Emacs at its base doesn't let you do it; you have to hack it in, there are lots of ways to mess it up, and while editing you'll regularly see little artifacts where it gets things wrong. And what if you want to do something like play a video inline? Arguably you should be able to do that, or almost anything, but right now you just can't. A lot of the reason is that Emacs wants to stay compatible with something like a TRS-80; that compatibility really is important, but I hope we can eventually get the best of both compatibility and more modern UIs, so that people with modern machines get modern UIs and everyone else either does without that functionality or falls back to a reasonable default.
+
+
+- Q: I have 45+ years editing, programming.   I'm not sure I can
+ think about things without thinking of buffers, editors etc.   Is
+ this a handicap/should we just have people with no experience with
+ code learn to prompt?
+ - A: I think experience only helps here - I don't trust people
+ with no experience creating code.  It's OK for one-shot type
+ apps where you don't care about maintainability, but for
+ serious code where you need to think about a lot of different
+ types of typically software-engineering concerns such as
+ latency, maintainability, scalability, etc, experience is a huge
+ boost.  We see this in the industry right now where junior
+ engineers are less desirable than senior engineers, because
+ senior engineers can just use LLMs more effectively.  So I think
+ dipping your toes into the water is well worth it.
+- I really like that sentiment. I like the idea that maybe we could
+ have a place for code that is "good enough", and have a place for
+ code where passion for the craft shines through
+- I think editors like Windsurf/Cursor/VS Code will stick around but
+ to Andrew's point, the editing portion of the app will shrink while
+ the agent interaction will take center stage
+- Thanks for answering the questions in such a clearly articulated
+ manner :)
+- Monster write up on energy usage:
+ [https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/](https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/) -
+ tldr; AI's energy use is small per query but exploding
+ overall---driven by opaque, power-hungry data centers.
+- interesting talk. I'll start asking it for everything: "but is it
+ editing?"
[[!inline pages="internal(2025/info/llm-after)" raw="yes"]]
diff --git a/2025/talks/private-ai.md b/2025/talks/private-ai.md
index 55660ac2..8f78c9e2 100644
--- a/2025/talks/private-ai.md
+++ b/2025/talks/private-ai.md
@@ -23,6 +23,93 @@ About the speaker:
AI is everywhere and everyone is trying to figure out how to use it better.  This talk will be a quick introduction to showing some
of the tools and techniques that a user can do to integrate AI privately and securely into their Emacs workflow.  The goal is to help people take the first steps on what will hopefully be a productive journey.
+## Discussion / notes
+
+- Q: Why is the David Bowie question a good one for testing a model?
+ e.g. does it fail in interesting ways?
+ - A:  Big fan, firstly; also Deepseek will tend to have errors and
+ I'm familiar with the data, so it's easy to spot hallucinations
+ - A: First off, I'm a huge fan of David Bowie. But it came down to the question teaching me a few things about how the models work, for example how many kids he had: DeepSeek, a very popular Chinese model a lot of people are using now, misidentifies him as having three daughters, when he actually had, I think, two sons and a daughter or something like that. And because his story spans about sixty years, the question gives good feedback; that's the main reason I ask it. Sea monkeys I picked just because it was obscure, and I used to have models write "hello world" in Forth because I thought that was an interesting test as well; it's just picking random things like that. One question I ask a lot of models is, what is the closest star to the Earth? Most of them will say Alpha Centauri or Proxima Centauri and not the Sun. I have a whole other talk where I just argue with the LLM, trying to say, hey, the Sun is a star, and it just wouldn't accept it.
+- Q: What specific tasks do you use local AI for?
+ - A: refactoring for example converting python 2 to python 3,
+ cybersecurity researching
+ - A: I like to load a lot of my code into it and have it do analysis. I was going through some code I have for pen testing and having it modify it for the newer version because, I hate to say this, it was written for Python 2 and I needed it updated for Python 3. The 2to3 tool did not do all of it, but the model was able to do the refactoring; that's part of my laziness. I also use local models for anything I don't want to hit the web, and that's a lot when you're doing cybersecurity research, with white papers and similar material; I've got a lot of that loaded into RAG in one model on my Open WebUI system.
+
+- Q: Have you used any small domain-specific LLMs?  What are the kinds
+ of tasks they specialize in, and how do I find and use them?
+ - A:  On the todo list but not something I have used very much yet
+- Q: Are the various models updated regularly?  Can you add your own
+ data to pre-built models? +1
+ - A:
+- Q-piggy-back: Will the models reach out to the web if they need to
+ for more info?
+ - A: haven't 
+- Q: What is your experience with RAG? Are you using it, and how has
+ it helped?
+ - A:
+- Q: Thoughts on running things on AWS/digital ocean instances, etc?
+ - A: prefer not to have the data leave home; AWS and DO work
+ okay, Oracle has some free offerings, but I tend to work locally
+ most often
+- Q: What has your experience been using AI for cyber security
+ applications? What do you usually use it for?
+ - A: For cybersecurity, what I've done is dump logs into it and have it do correlation. Keep in mind that the Llama file we were using for the David Bowie question and the hello-world examples is about six gig; how it fits the entire world into six gig, in terms of quantization, I still haven't figured out. I'm really interested in being able to take everything out of all my logs, dump it in there, and run intelligent queries against it. Microsoft has a project called Security Copilot that tries to do that in the cloud, but I want to work on doing that more locally. That's also one of my long-term goals.
+- Q: Is there a disparity where you go to paid models because they are
+ better and what problems would those be?
+ - A: I don't mind the paid models; I think they're good, but I don't think they're economically sustainable under their current system. If the 20 bucks a month I'm paying for Copilot goes up to 200 bucks, I'm not going to be as likely to use it. They do surprise me sometimes: Grok was refactoring some of my code and dropped an F-bomb in the comments, which I did not see coming, though the code I had gotten off GitHub before that had F-bombs in it, so it was just emulating the style. Would I want to turn that in as a pull request? I don't know. There's a lot of money going into these AIs, but in terms of getting a decent model, something like Llama 3.2 with your own data loaded into it can be pretty competitive. You won't get all the benefits, but you have more control over it, so it's a balancing act.
+
+
+- Q:  What's the largest (in parameter size) local model you've been
+ able to successfully run locally, and do you run into issues with
+ limited context window size?  The top tier paid models are up to
+ 200k now.
+ - A: By default the context size is, I think, 1024, but I've upped it to 8192 on this box, the Pangolin, because for some reason it's just working quite well. The largest models I've loaded have not been that huge. That's why I'm planning on breaking down and buying a Ryzen; actually, I'm going to buy an Intel Core Ultra 285H with 96 gig of RAM, and then I should be able to load a 70-billion-parameter model. How fast will it run? Slow as a dog, but it will be cool to be able to do it; it's an AI bragging-rights thing. Mostly I stick with the smaller models and the more heavily quantized ones, because they just tend to work better for me.
+- Q: Are thre "Free" as in FSF/open source issues with the data?
+ - A: Yes.  Where the data is coming from is a huge issue with AI
+ and will be an issue long term.
+ - A: Yes, where the data is coming from is a huge question with AI. It's astonishing that you can ask questions of models without knowing where the answers come from, and that is going to be one of the big long-term issues. People are working on figuring that out, but, I can't remember who it was, somebody was out torrenting books just to build them into their AI system; I think it might have been Meta. So there's a lot of that going on, and the open-source side of this is going to be tough. Some model makers have their own licenses, but where they're getting their data from, I'm not sure; that's a huge question and a talk in itself. If you train on your own RAG data, you know where it comes from and what its license is, but the rest is much harder to account for, even if you're using a smaller model.
+- Q:  Have you used local models capable of tool-calling?
+ - A: I'm scared of agentic. I'm going to be a slow adopter of that. I want to do it, but I just don't have the fortitude right now. I've had it give me the commands, but I still run the commands by hand. I'm looking into it, and once again it's on that list, but that's a big step for me.
+- Q: What scares you most about agentic tools? How would you think
+ about putting a sandbox around it if you adopt an agentic workflow?
+ - A: Air-gap; based on experience in the defense industry
+ - A: I would control what it's able to talk to and which machines; I would actually have it air-gapped. I work for a defense contractor, and we spend a lot of time dealing with air-gapped systems, because that's just the way it works out for us. Agentic is going to take a while to earn trust; I want to see more happening first. Humans screw up enough as it is, and the last thing we need is to multiply that by a thousand. So I would restrict what it can do: if I created a user for it and gave it permissions, I would lock down through sudo what that account is able to do. It's happening; I'm just going to be one of the laggards on this one. So: air gap, jail, extremely locked-down environments, and we're talking separate physical machines, not Docker.
+- Q: Tool calling can be read-only, such as giving models the ability
+ to search the web before answering your question. (No write access
+ or execute access) I'm interested to know if local models are any
+ good at calling tools, though.
+ - A: Yes, local models can do a lot of that; it's within their capabilities. If you load LM Studio, or Open WebUI with Ollama, you can do a lot of wonderful stuff with it; it's amazing. Open WebUI is actually what a lot of companies are using now to put their curated data behind, and it works well. I can confirm that from my own professional experience.
+- Q: Really interesting stuff, thank you for your talk :) Given that
+ large AI companies are openly stealing IP and copyright, thereby
+ eroding the authority of such law (and eroding truth itself as
+ well), can you see a future where IP & copyright flaw become
+ untenable and what sort of onwards effect might that have? Apologies
+ if this is outside of the scope of your talk
+ - A: I'm not a lawyer, but it is getting really complicated. I played with Sora a little, and it would generate someone where you go, oh, that's Jon Hamm, that's Christopher Walken; you start figuring out who they're modeling things after. Something is going to have to give. But, and this is my personal opinion, I'm not a lawyer and I do not have money, so don't sue me: the current administration is very pro-AI, and there's a great deal of lobbying by those groups, on both sides. It's going to be interesting to see what happens to copyright over the next 5-10 years; I just don't know how it keeps up without some adjustments.
+- [https://grothe.us/](https://grothe.us/)
+ <-- speaker's online presence
+- Thanks for your demo and for encouragement. I'll actually give it a
+ try.
+- I remember seeing the adverts for sea monkeys in old comic books as
+ a kid -- that was a blast from the past!
+- Super inspired! And very well done as a live prezi! :) 
+- respect his commitment to privacy
+- [https://aws.amazon.com/what-is/retrieval-augmented-generation/](https://aws.amazon.com/what-is/retrieval-augmented-generation/)
+ <- What is RAG?  (an explanation)
+
+- File size is not going to be the bottleneck, your RAM is. You're
+ going to need 16 GB of RAM to run the smallest local models and
+ ~512 GB RAM to run the largest ones.  You'll need a GPU with this
+ much memory (VRAM) if you want it to run fast.
+ - A: It also depends on how your memory is laid out. Take the Core Ultra 285H I plan to buy: it has 96 gig of memory, unified, so the GPU and the CPU share it, but they go over the same bus, so the overall bandwidth tends to be a bit less; on the other hand, you can load more into memory and do more without going back to disk. It's all a balancing act. If you look at Ziskind's website, he's done some great work on figuring out how big a model you can run and what you can do with it, and some of the results are not obvious: that MacBook Air, for the five minutes I can run the model, runs it faster than a lot of machines that should be faster, just because of the way the ARM cores and the unified memory work. Network Chuck also had a great video about building his own system with a couple of really powerful Nvidia cards, setting it up as a node, and putting a web UI on it. So there's a lot out there, but it's a process of learning how big your data is, which models you want to use, and how much information you need. You can even run models on a Raspberry Pi 5 if you want; they'll run slow, don't get me wrong, but it's possible.
+- Great talk/info.   Thanks.
+
+- it went very well!
+- (from the audience perspective)
+- Very interesting talk! Thanks!
+- AI, you are on notice: we want SBOMs, not f-bombs!
+- thanks for the presentation
[[!inline pages="internal(2025/info/private-ai-after)" raw="yes"]]
diff --git a/2025/talks/schemacs.md b/2025/talks/schemacs.md
index 62ca5321..1f13e06e 100644
--- a/2025/talks/schemacs.md
+++ b/2025/talks/schemacs.md
@@ -76,11 +76,224 @@ and submit a patch.
About the speaker:
-I am Ramin Honary, I am have been professional software
+I am Ramin Honary. I have been a professional software
engineer for 17 years and I have always had a passion for
functional programming languages, especially Haskell and
the Lisp family of languages.
+## Discussion / notes
+
+- Q: I think that Kiczales et al.'s metaobject protocol has a Scheme
+ implementation; does this mean Schemacs will be
+ metaobject-changeable in practice?
+ - A: I was not aware of that implementation, but I will look into
+ it. The MOP has not been necessary for building the GUI,
+ actually (apart from the fact that Guile-GI uses GOOPS to
+ organize the Gtk3 bindings). Pretty soon I will demonstrate the
+ React-like programming framework I have developed for the
+ Schemacs GUI.
+ - A: I don't need a meta-object protocol for Schemacs, at least
+ so far it hasn't been necessary, but may be something to look
+ into if it can be made cross-platform (for various R7RS
+ Schemes).
+- Q: How will the GUI display code be R7RS compliant? AFAIK there is
+ no dlopen in R7RS.
+ - A: To handle these platform-dependent concerns, I make heavy use
+ of the `cond-expand` macro. Basically any Scheme
+ implementation upon which I would like to run the Schemacs GUI
+ will have to have its own unique back-end code that is
+ loaded-in to the main Scheme program. `cond-expand` has
+ mechanisms for checking which Scheme implementation it is using,
+ so it is pretty easy to write code that can load-in different
+ back-ends for whatever platform you are using.
+- Q: Will it be possible to write multithreaded code for Schemacs?
+ - A: The GUI is inherently single-threaded, but SRFI-18 provides
+ multi-threading. So yes, there is multi-threading, and I do have
+ ways of evaluating Scheme code inside of the GUI thread so that
+ you can update the GUI. This is necessary for running external
+ processes and putting the results into buffers. But anyone
+ should be able to use the threading mechanism through the
+ ordinary SRFI-18 APIs.
+ [https://srfi.schemers.org/srfi-18/srfi-18.html](https://srfi.schemers.org/srfi-18/srfi-18.html)
+- Q: Do you think some of schemacs could be extracted into SRFIs since
+ you have made it portable between scheme implementations?
+ - A: Absolutely. I have considered making a SRFI for my
+ `(schemacs lens)` library. I would like to break up the
+ Schemacs into libraries and publish them on the Akku package
+ manager, or in the Guix repository. I am hopeful that some of
+ the libraries I have written will be useful for other Scheme
+ programmers.
+ [https://akkuscm.org/](https://akkuscm.org/)
+- Q: Is there a recommended scheme implementation or does it try to be
+ as portable as possible?
+ - A: (He said earlier that Guile was the only version that worked
+ so far.  He wants it to work for all R7RS though.) That's
+ right, Guile is the reference implementation, the GUI only works
+ on Guile, but Emacs Lisp works on Guile, Chibi, and Gauche. I
+ would like to support as many Schemes as possible. If you want
+ to get started with Scheme and you want to try Schemacs, I
+ recommend Guile.
+- Q: How would Schemacs deal with Emacs' (re)display architecture?
+ Would it be having its own display architecture? If so, how can it
+ be compatible with things like overlays, images, etc.? From what I
+ know, Emacs is extremely idiosyncratic here.
+ - A: That is all "to be determined." At some point we will have
+ to emulate the Emacs Lisp display architecture in Schemacs, but
+ for the time being Schemacs has its own completely different
+ display architecture.
+- Q: You were saying that you'd like to get "most" of the one
+ thousand three hundred and something Emacs packages done. Is there a
+ technical blocker to doing them all? Or just a problem of getting
+ enough people in to help and start writing scheme?
+ - A: just a matter of implementing enough of Emacs' built-in
+ functions; this relates to the bug we saw in the presentation
+ (stack dump); other people will have trouble contributing until
+ this is resolved because it does not handle closures correctly.
+ Once that is worked out it will be a matter of implementing
+ Emacs' C-based functions in scheme.  Don't have a way to be
+ sure but we probably do not need all of them implemented.
+- Q: What are you thoughts on Chicken Scheme? Would it be a good fit?
+ - A: I think it will be; tried this in prepping for the
+ presentation but ran into some issues; tried using the pattern
+ matcher from Alex Shinn; each implementation has a slightly
+ different take on macro-expansion for pattern matching; I would
+ definitely love help in this area. I will probably have to avoid
+ pattern matching to make it fully portable, or else implement my
+ own pattern matcher which I can be sure will work on all R7RS
+ Scheme implementations.
+- Q: Can this emacs lisp implementation be used by Guile's emacs lisp
+ "mode"?
+ - A: This was touched on last year; Emacs Lisp in guile is a
+ different implementation which is unfortunately quite incomplete,
+ it can't even run some of the GNU Emacs initialization code. 
+ When I first started I was using Guile Emacs Lisp's parser,
+ however it did not give source locations, and was not portable
+ to other Schemes, so I had to basically write everything for
+ Schemacs from the ground-up. If Andy Wingo is interested, we can
+ probably replace the existing Guile Emacs Lisp implementation
+ with Schemacs.
+- Q: I wonder if we could do some sort of programmatic analysis on
+ popular Emacs packages to see what list of functions they tend to
+ depend upon, follow function calls down to the lowest level
+ - A:  Yes, please do this for me! :D :D
+- Q: Shouldn't it be enough to just implement the builtin functions?
+ Most of the commands are written in Emacs Lisp, right?
+ - A: Yes, correct. That is the approach I am taking. My goal is to
+ get the Emacs Regression Test suite (ERT) system working in
+ Schemacs Emacs Lisp, then we can just use the reports generated
+ by the GNU Emacs regression tests to see what Emacs Lisp
+ functions we have left to implement in Schemacs.
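+
+For readers unfamiliar with ERT, here is a minimal example of the kind
+of regression test involved. This is ordinary GNU Emacs ERT usage, not
+Schemacs-specific code; the test name is made up for illustration.
+
+    (require 'ert)
+
+    (ert-deftest demo-string-match ()
+      "A C-level primitive such as `string-match' behaves as expected."
+      (should (= 3 (string-match "foo" "barfoo")))
+      ;; `match-string' reads the match data set by `string-match' above.
+      (should (equal "foo" (match-string 0 "barfoo"))))
+
+    ;; Run interactively with M-x ert, or in batch with:
+    ;;   emacs -Q -batch -l ert -l demo-test.el -f ert-run-tests-batch-and-exit
+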
+- Q:  Do you think there is an opportunity to use Racket?
+ - A: Yes, looking at getting Schemacs working on Chez, then it could
+ somehow move on to Racket; I haven't tried R7RS for Racket. Racket
+ works on Chez, and I would like to make Schemacs work on Chez,
+ but I won't be able to make use of Racket libraries. Alexis
+ King has written an R7RS language package for Racket, and I
+ haven't tried it yet, but it may be a good way to get Schemacs
+ to work in Racket.
+- Q: Tell us more about this show-stopping bug! How to squash it? Can
+ people help?
+ - A: Unfortunately, this is something I will have to do on my own
+ unless you happen to be a Scheme genius who can read and
+ understand all of my code so far in a short amount of time. It
+ has to do with how closures work. Closures were introduced with
+ Emacs 24, along with lexically scoped variables for Elisp.  When we
+ create and return a lambda that uses a variable declared outside
+ of the Lambda in a "let" binding, that variable resides on the
+ stack, so the Lambda must have a "note" that captures part of
+ the current stack and then later restores it when the Lambda is
+ executed. This is where the issue is: it is not capturing the
+ variables from the stack properly.  The plan is to do static
+ analysis of the Lambda and then store a reference to those
+ variables in the Lambda data structure.
+ - Q: How about using smaller test cases (instead of a full
+ Emacs loadup) to pinpoint the issue? When writing Juicemacs
+ I've gathered a few closure-related test cases
+ ([https://github.com/gudzpoz/Juicemacs/blob/ddc61c08632cfdd1a9f2bc10f63e61c5679d6592/elisp/src/test/java/party/iroiro/juicemacs/elisp/runtime/ELispBindingScopeTest.java#L12-L91](https://github.com/gudzpoz/Juicemacs/blob/ddc61c08632cfdd1a9f2bc10f63e61c5679d6592/elisp/src/test/java/party/iroiro/juicemacs/elisp/runtime/ELispBindingScopeTest.java#L12-L91)
+ , and some more test cases in a blog post:
+ [https://kyo.iroiro.party/en/posts/emacs-lisp-interpreter-with-graalvm-truffle/#creating-closures-in-a-loop](https://kyo.iroiro.party/en/posts/emacs-lisp-interpreter-with-graalvm-truffle/#creating-closures-in-a-loop)
+ ). Could they be useful? (I just tried to take a peek at
+ Schemacs' code, but I'm really not familiar with
+ Scheme...)
+ - By the way, Emacs comes with its own static analyzer in
+ elisp (cconv.el) that seems to select captured variables
+ from env cons lists in Emacs, which might be useful if
+ you're also using a cons/linked list of lexical bindings, I
+ guess?
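+
+For readers following along, the kind of small closure test case being
+discussed looks like this in plain Emacs Lisp (with `lexical-binding`
+enabled). These are generic examples of what a correct capture strategy
+has to handle, not code taken from Schemacs itself.
+
+    ;;; -*- lexical-binding: t; -*-
+
+    ;; The returned lambda must capture the let-bound `n' and keep it
+    ;; alive after the `let' has exited.
+    (defun make-counter ()
+      (let ((n 0))
+        (lambda () (setq n (1+ n)))))
+
+    (let ((a (make-counter))
+          (b (make-counter)))
+      (list (funcall a) (funcall a) (funcall b)))  ; => (1 2 1)
+
+    ;; Two closures over the *same* binding must share state.
+    (let* ((x 0)
+           (get (lambda () x))
+           (inc (lambda () (setq x (1+ x)))))
+      (funcall inc)
+      (funcall get))                               ; => 1
+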
+- Q: Are there performance concerns with implementing certain C
+ primitives in pure scheme?
+ - A: No :) I think it was Donald Knuth who said "Premature
+ optimization is the root of all evil." The graphical back-end
+ is usually written in C anyway (Gtk3), so the graphics is being
+ done in C. Besides that, Scheme compilers like Guile, Chez,
+ Gambit, and Chicken all have very good performance
+ characteristics. So for the time being, I don't think
+ performance is a major concern.
+- Q:  If this project is successful, are you worried about a possible
+ split in the community between Schemacs and GNU Emacs users?
+ - A: There seems to be a large call for a scheme based editor, so
+ the demand for this "split" is already there. There have been
+ attempts at rewriting Emacs in Scheme since the early 90s. And
+ there hasn't been a good, free-software, Scheme-based
+ programming environment like Emacs since Edwin on MIT Scheme. 
+ So Schemacs may cause some fragmentation but "a rising tide
+ raises all ships".  If I have time I would also like to
+ contribute some of what I learn from Schemacs back to GNU Emacs,
+ for example I would like to work on an interactive canvas
+ library based on the "Cairo" SVG rendering library. Cairo is
+ already built-in to Emacs, so I would like to maybe port my
+ Schemacs interactive canvas (still a work in progress) to GNU
+ Emacs when I have some time.
+- Q:  The dream of never even needing to change to the web browser -
+ would schemacs bring us closer to that?
+ - A: I hope so!  this is also a dream of mine!  I wanted to make
+ sure I have a good workable UI framework like React so we can
+ write proper GUIs for applications such as a Mastodon client, it
+ could be very nice to have a better GUI for this, or for Magit,
+ or Gnus. I would love to be able to do as much as possible in
+ Schemacs, e.g. social networking, public Git repos. That is a
+ goal of mine for this project.
+- Q: Anything specific other than minimalism that made you choose
+ Scheme over Common Lisp?
+ - A: Philosophical question :)  I love Haskell, and I once had a
+ conversation with William Byrd (author of "MiniKanren," who
+ studied under Dan Friedman at Indiana University) who
+ told me about why he didn't like Haskell and suggested looking
+ into Scheme. I like Haskell because it is a very pure
+ implementation of the "System-F" Lambda Calculus, and I like
+ Scheme because it's closer to the mathematical framework of the
+ Untyped Lambda Calculus, but Scheme is friendly (without the
+ strict type system), similar to Python. It provides a tiny
+ framework from which all of computer science can be explored. 
+ Excited to see what this tiny language can do. I like the idea
+ of starting from a tiny "kernel" language and using it to
+ build out all other algorithms and software. I think it is a
+ shame that there isn't much Scheme code out there, and I would
+ like to try to expand the Scheme software ecosystem.
+- [https://codeberg.org/ramin_hal9001/schemacs](https://codeberg.org/ramin_hal9001/schemacs)
+- [https://github.com/spk121/guile-gi](https://github.com/spk121/guile-gi) 
+ <-- that is the GUI back-end, by the way.
+- [https://gi.readthedocs.io/en/latest/](https://gi.readthedocs.io/en/latest/)  
+ <-- that is GObject Introspection, this is how the GUI bindings
+ work.
+- lol it feels like Ramin has been standing at the same place since
+ last year :P
+- Basically, yes, I haven't moved at all! Of course I have moved away
+ from that spot in the interim once or twice.
+- Awesome talk, I will surely try to contribute even though I don't
+ know stuff yet :)
+ - All are welcome, if you don't know anything, I'll be happy to
+ try and teach you!
+
+
+- amazing progress
+- nice talk.
+- I'm so excited for this project! Amazing update 😊
+- I wonder if we could do some sort of programmatic analysis on popular Emacs packages to see what list of functions they tend to depend upon, follow function calls down to the lowest level
+- would love to see that
+- That is probably a good idea (getting rid of the baggage)
+
+
[[!inline pages="internal(2025/info/schemacs-after)" raw="yes"]]
diff --git a/2025/talks/sun-close.md b/2025/talks/sun-close.md
index 3f029db3..3617a1d1 100644
--- a/2025/talks/sun-close.md
+++ b/2025/talks/sun-close.md
@@ -11,6 +11,29 @@
[[!inline pages="internal(2025/info/sun-close-before)" raw="yes"]]
+## Discussion / notes
+
+- Thank you for everything sachac and everyone else that volunteered or that gave a talk
+- It was very fun participating
+- thank you all for everything!!!
+- Thanks for another great emacsconf!
+- thank you!
+- Thanks everybody :-)
+- Thank you for all your amazing work!
+- Thank you sachac, corwin and everyone that made this possible. great emacsconf
+- 👏👏👏 Thanks to the EmacsConf org (pun intended ;) ) team and community and of course all emacs contributors and maintainers
+- Thanks to everybody involved with Emacsconf. Awesome conference as always
+- Fantastic conference!
+- Thank you so much for making it happen! :)
+- Reddit and IRC mentioned, but most is happening on the Emacs mailing lists :)
+- Emacs bugs mailing list is surprisingly interesting as well: lots of discussion there, on various details (and upcoming little features!), every single day.
+- i.e. Not your boring bug tracker. :)
+- The Emacs Carnival, perhaps? 🙂 I'm very curious about getting into the blogosphere around Emacs. I haven't done much digging there yet 🙂
+- thanks all for this nice Emacs weekend
+- See you all around, Happy Emacsconf! 😃😊
+- Excellent weekend. It went by so fast 😊
+- bye! good talks :D
+- thanks for an excellent weekend, amazing conf
diff --git a/2025/talks/swanky.md b/2025/talks/swanky.md
index de939730..be639835 100644
--- a/2025/talks/swanky.md
+++ b/2025/talks/swanky.md
@@ -40,6 +40,115 @@ About the speaker:
Python is eating the world. Emacs is eating my computing environment. I'm
attempting to get them working together.
+## Discussion / notes
+
+- Q: Does swanky-python work with Sly?
+ - A: It doesn't, Sly is great but I went with slime for a few
+ reasons:
+ - I wanted to use some cool stuff from slime-star
+ - I actually think there's good potential with slime's presentations that sly removed.
+ - The main feature of sly missing from slime is stickers.
+ slime-star provides something similar in being able to
+ recompile a function with an expression traced, but I
+ think for python it'll be better to integrate with dape
+ for debugging
+ - In recent years slime has been more actively maintained
+ in terms of bug fixes and such.
+- Q: Does this work with Hy?
+ ([https://hylang.org](https://hylang.org),
+ lisp syntax for Python)
+ - A: I actually first wrote this in Hy.
+ [https://codeberg.org/sczi/swanky-python/src/commit/6d8f4e0c8000c931746edd0fb442704dff853492](https://codeberg.org/sczi/swanky-python/src/commit/6d8f4e0c8000c931746edd0fb442704dff853492)
+ is the last commit before I switched back to python.
+ - Though even when the swanky python backend was written in Hy, it
+ was still targeted at editing python code, not Hy. Implementing
+ it in Hy just made the implementation a bit easier, as the slime
+ "protocol" is just slime sending lisp forms to the swank
+ backend to evaluate, so to write the backend in python we need
+ to implement a lisp interpreter (swank_eval in server.py), which
+ we already have in Hy.
+ - To make it work for editing Hy code would require some changes
+ on the backend, around evaling code, returning source locations,
+ and autocomplete. But most would stay the same, so I think it
+ could be made to support both without needing to fork a separate
+ project. I don't plan to use Hy or work on it. When writing
+ lisp I'd rather write CL, and when writing python I'd rather
+ use standard python syntax. But if someone wants to add Hy
+ support I'd be happy to merge it and assist a bit.
+- Q: Where can I find a list of Slime-like interfaces for other
+ languages?
+ - A: I don't know that a slime-like interface really exists for
+ any languages outside of the lisp and smalltalk family. I made a
+ list of some of those at
+ [https://codeberg.org/sczi/swanky-python/src/branch/main/Hacking.org#headline-63](https://codeberg.org/sczi/swanky-python/src/branch/main/Hacking.org#headline-63)
+- Q: Is there an IRC channel for swanky-python? If not, are you
+ interested in creating one?
+ - A: Good idea to have, I just made #swankypython on libera
+- Q: How would this integrate with python notebooks such as marimo?
+ - A: I've never used marimo, just jupyter, but it looks nicer so
+ I'd like to try it out sometime. The most basic integration
+ would be to just run swanky python within the notebook. That way
+ you would use the notebook as normal, but get the interactive
+ backtrace buffer in emacs on any exception, and be able to use
+ the inspector, nicer repl, etc. A more complete integration
+ would probably be based on emacs-jupyter but I haven't looked
+ into it yet.
+- Q: Why not org babel as well? +1 for org-babel with this, would be
+ awesome
+ - A: That'd be great and probably not much work at all. I just
+ tried evaling python code as a "lisp" block, since babel for
+ lisp calls slime-eval, and it dies with an exception because I
+ haven't implemented swank:eval-and-grab-output in swanky python
+ yet. Maybe all that's needed is to implement that and then
+ configure babel for python src blocks to use slime-eval rather
+ than running with org-babel-python-command.
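+
+As a rough sketch of that idea (an assumption about how it could be
+wired up, not something swanky-python supports today, since
+`swank:eval-and-grab-output` is not implemented on its backend yet),
+routing Python source blocks through SLIME might look roughly like
+this; the `my/` function name is made up for illustration.
+
+    (defun my/org-babel-execute-python-via-slime (body params)
+      "Evaluate BODY in the running swanky-python image via SLIME."
+      (ignore params)
+      ;; Standard swank returns (OUTPUT VALUE); take the printed value.
+      (cadr (slime-eval `(swank:eval-and-grab-output ,body))))
+
+    ;; One possible (untested) way to route python src blocks through it:
+    ;; (advice-add 'org-babel-execute:python :override
+    ;;             #'my/org-babel-execute-python-via-slime)
+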
+- Tangentially, did you see Kent Pitman's recent moves to introduce
+ his Common Lisp condition system to Python, e.g. about resuming
+ execution after an exception? He showed it on a recent Sunday
+ lispy gopher climate stream. Since you asked for contacts about
+ Lisp-style Python exception restarts, which he has worked on
+ recently, you might reach out to
+ [https://climatejustice.social/@kentpitman](https://climatejustice.social/@kentpitman).
+- I hadn't seen that, thanks, it's super interesting to hear the old
+ legends talk. Here's the link for anyone else:
+ [https://medium.com/@screwlisp/live-interview-with-kent-pitman-incoming-216092e24f44](https://medium.com/@screwlisp/live-interview-with-kent-pitman-incoming-216092e24f44)
+- But a condition system is a bit of a separate issue from the
+ exception restarts I'd like to have. A condition system can be
+ implemented without any changes to the runtime, in any language with
+ dynamic scope and first class functions. And dynamic scope can be
+ emulated in any language with global variables, so people have
+ implemented Common Lisp (CL) style condition systems as libraries
+ for many languages. If this was used universally in place of the
+ language's native exceptions, it would give the ability to drop
+ into a repl at the point of an otherwise uncaught exception, but not
+ the ability to restart execution from any stack frame. Smalltalk has
+ traditional exceptions and not a CL like condition system, but its
+ debugger does provide this ability, as do the JVM and V8
+ debuggers. In CL this ability (sldb-restart-frame in slime) isn't
+ provided by the condition system, but in SBCL for example by
+ internal functions in the sb-debug and sb-di packages.
+- It'd be interesting to experiment with a condition system in
+ Python, but what I'm more interested in is the ability on any
+ runtime error, to be able to fix the bug and restart execution from
+ any stack frame. 
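+
+To make the claim above concrete, here is a toy sketch in Emacs Lisp of
+a condition system built from nothing but dynamic scope and first-class
+functions. All the `my/` names are hypothetical and this is only an
+illustration, not a usable library; the key property is that the
+handler runs at the signal point, before the stack unwinds.
+
+    (defvar my/handlers nil
+      "Dynamically scoped stack of condition-handler functions.")
+
+    (defun my/signal (condition)
+      "Call the innermost handler on CONDITION; its return value is ours."
+      (if my/handlers
+          ;; Expose only the *outer* handlers while a handler runs, so
+          ;; it can decline by re-signalling.
+          (let ((handler (car my/handlers))
+                (my/handlers (cdr my/handlers)))
+            (funcall handler condition))
+        (error "Unhandled condition: %S" condition)))
+
+    (defmacro my/with-handler (handler &rest body)
+      "Run BODY with HANDLER pushed onto the dynamic handler stack."
+      (declare (indent 1))
+      `(let ((my/handlers (cons ,handler my/handlers)))
+         ,@body))
+
+    ;; The handler chooses a recovery value and computation continues
+    ;; from the point of the signal, without unwinding the stack.
+    (my/with-handler (lambda (c) (plist-get c :use-value))
+      (+ 1 (my/signal '(:type bad-input :use-value 41))))   ; => 42
+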
+- Amazing work!
+- [https://slime.common-lisp.dev/doc/html/](https://slime.common-lisp.dev/doc/html/)
+ (anyone who doesn't have this bookmarked)
+- This is really cool, I am amazed how much functionality you have
+ implemented! I hope I can start using this in my day job!
+- Very very impressive. I will definitely try to use this in my
+ workflow. I love the Lisp development style.
+- very impressive. I am also working on a Python IDE with a python
+ process and a webview to host the python runtime and display the
+ IDE, but I am very far behind in terms of features. I just made the
+ reload system work and the code(AST)->html renderer
+- Neat, if you publish it send me a link!
+- Definitely going to give it a try! I've been missing interactive
+ development since learning Python many years ago, even before I knew
+ Common Lisp existed; it's one of the primary reasons why Common Lisp
+ replaced Python as my go-to language
+- Such a package alone would automatically make Emacs a much better option than something like PyCharm.
+- I found it very funny how he showed M-x doctor. But very interesting talk!
[[!inline pages="internal(2025/info/swanky-after)" raw="yes"]]
diff --git a/2025/talks/zettelkasten.md b/2025/talks/zettelkasten.md
index 0ee51214..b4177971 100644
--- a/2025/talks/zettelkasten.md
+++ b/2025/talks/zettelkasten.md
@@ -271,8 +271,10 @@ there's a connection that you just can't spell out, yet?
I give my best during the writing stage, which is 'cagey' and
taking more effort, too, for the benefit of my research in the
long run.
- - Makes sense. Thanks! I'll give another shot to zettelkasten and
+ - (audience): Makes sense. Thanks! I'll give another shot to zettelkasten and
rewatch your talk!
+ - (audience): the point of zettelkasten is to see varying differences between various notes and make interesting connections. so the only 'cage' is to just write notes, the creativity will happen when you see the interesting connections.
+ - (audience): as luhmann put it: "without noticing differences, one cannot think"
- Q: How does denote compare to org-roam?
- A: Denote is smaller, allows finding notes, but gets out of your
way otherwise.
@@ -401,8 +403,6 @@ there's a connection that you just can't spell out, yet?
- "so is brushing your teeth everyday"
- Q: does christian make videos too
- I have seen them somewhere
-- the point of zettelkasten is to see varying differences between various notes and make interesting connections. so the only 'cage' is to just write notes, the creativity will happen when you see the interesting connections.
- - as luhmann put it: "without noticing differences, one cannot think"
- so far this talk has been very good
- off topic, but I really dig there overall room aesthetics
- https://github.com/yibie/org-supertag