[[!meta title="Blee-LCNT: An Emacs-centered content production and self-publication framework"]] [[!meta copyright="Copyright © 2025 Mohsen BANAN"]] [[!inline pages="internal(2025/info/blee-lcnt-nav)" raw="yes"]] # Blee-LCNT: An Emacs-centered content production and self-publication framework Mohsen BANAN (he/him) - Pronunciation: MO-HH-SS-EN, [[!inline pages="internal(2025/info/blee-lcnt-before)" raw="yes"]] In a sense this is yet another talk about how you can use Emacs to produce fancy presentations like this or write complex books and self-publish them. But our approach is fundamentally different. Many talks at previous Emacs Conferences have described how Emacs and org-mode can be extended to facilitate content production by adding more to Emacs. Our approach is that of putting a smaller Emacs at the core of something bigger. That something bigger is an autonomy oriented digital ecosystem called "ByStar" which is uniformly built with a layer on top of Debian called BISOS (ByStar Internet Services OS). At Emacs Conf-2024 the title of my talk was "About Blee" – . Blee (ByStar Libre-Halaal Emacs Environment) is that smaller Emacs packaging that positions Emacs at the core of BISOS and ByStar. BISOS and Blee are intertwined and ByStar is about autonomy oriented unified platforms for developing and delivering both internet services and software-service continuums. This talk is about Content Production and Self-Publication capabilities of Blee and BISOS. Blee-LCNT is LaTeX centric. The original text is always in COMEEGA-LaTeX – LaTeX augmented by Org-Mode. This is the inverse direction of exporting LaTeX from Org-Mode. For typesetting, the LaTeX syntax is far more powerful than org-mode. And with COMEEGA-LaTeX, you can also benefit from all that org-mode offers. The scope of Blee-LCNT is all types of content from presentations to videos to books to name-tags and business cards. LaTeX to HTML translation is done with HeVeA. For presentation/screen-casting, the original text is then augmented in layers by images, audio voice-overs, screen captures, videos and captions. The Beamer LaTeX file is then processed by both LaTeX and HeVeA. LaTeX produced slides are then absorbed in html by HeVeA as images. HeVeA output is destined to be dispensed by Reveal.js. The video is then just a screen capture of the autoplay of reveal file. Viewing presentations in their original Reveal form makes for an even richer experience. All of this involves a whole lot of integration and scripting. But all of that has been done and you can get it all in one shot by just running one script. To get started with BISOS, Blee, and ByStar, visit . From a vanilla Debian 13 installation ("Fresh-Debian"), you can bootstrap BISOS and Blee (with Emacs-30) in one step by running the raw-bisos.sh script. It produces "Raw-BISOS" which includes "Raw-Blee". You can then add the LaTeX sources for your content as ByStar Portable Objects (BPO) to BISOS and process your content with Blee-LCNT. All of this and more has been documented in a book that was produced by Blee-LCNT itself. 
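To make the COMEEGA-LaTeX idea more concrete, here is a minimal, purely illustrative sketch. It is not taken from the Blee-LCNT sources; the file name, headings, and comment conventions are assumptions. It only illustrates the arrangement described above: the master document is ordinary Beamer LaTeX, Org-mode material rides along inside LaTeX comments (so pdflatex and HeVeA ignore it while Emacs can act on it), and per-slide voice-over text is carried in Beamer `\note{}` commands so it can later be gathered for captions.

```latex
% pres.tex -- illustrative COMEEGA-LaTeX sketch, not the actual Blee-LCNT layout.
%
% * Presentation overview            <- an Org-mode heading inside a LaTeX comment
%   Org headings, links, and blocks here are visible to Emacs/Blee,
%   but are plain comments to pdflatex and HeVeA (an assumption about COMEEGA).
\documentclass{beamer}

\begin{document}

\begin{frame}{Blee-LCNT at a glance}
  % ** This slide                     <- Org sub-heading, again inside a comment
  \begin{itemize}
    \item Original text in COMEEGA-LaTeX
    \item Slides via LaTeX, HTML via HeVeA, delivery via Reveal.js
  \end{itemize}
  % The voice-over for the slide lives in the LaTeX source itself:
  \note{Spoken narration for this slide; such notes can later be gathered
        and aligned into captions.}
\end{frame}

\end{document}
```

A file along these lines would then be run through both pdflatex (for the slides and the PDF) and HeVeA (for the HTML that Reveal.js dispenses), with the surrounding integration and scripting handled by Blee-LCNT.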
All of this and more has been documented in a book that was produced by Blee-LCNT itself. The title of that book is: *Nature of Polyexistentials: Basis for Abolishment of the Western Intellectual Property Rights Regime And Introduction of the Libre-Halaal ByStar Digital Ecosystem*.

- On Line US Edition: – Download:
- On Line International Edition: – Download: – DOI:
- US Edition Book Prints At Amazon:
- International Edition Book Prints in Iran:

I welcome your thoughts and feedback, especially if you experiment with Blee, BISOS, ByStar, and the model and concept of Libre-Halaal Polyexistentials.

## Discussion / notes

- Q: All the outputs and the inputs that you mentioned, where are they?
  - A: They are on GitHub, and this is in one of my slides; I mentioned the URL for it, and I'll show you that as well. The URL is <https://github.com/bxplpc/180068>, which is the handle for this talk. In there, you have all the PDFs and the HTMLs, a citation, a .bib input, and also the sources. If you go to the PDFs, you will see both the article presentation and the Beamer. Let's take a quick look at the Beamer, which is what you have seen. As far as the sources go, there are two primary files. This presentation file is the one that includes all the LaTeX packages; we might as well take a quick look. What's in there is primarily the `\usepackage` declarations, and then it dispatches to bodyPresArt, which is where the code is. I walked through this briefly. Notice here again that this is a mixture of LaTeX and Org. Each of the presentation slides is here. For example, my introduction is just a video that gets included, and the notes that I use, the voice-over, are also included in the LaTeX file.
  - (Sacha) It'll probably be easy to take those voice-over notes and then align them with a tool like Aeneas to make subtitles for your videos.
  - Exactly, and that is what I do. There is a way to gather them all as P-notes. All the P-notes come together in a single file, you feed that to Aeneas, and it aligns them. Then there is the work of using subed to get the right sort of line length on them. But you did all of that for me this year, Sacha. Thank you very much. It was just a matter of not having time; otherwise, I had planned to do it myself.
  - (Sacha) It's all right. It was very easy since he provided the full narration. I still need to tweak it sometimes, so I often use the waveforms in subed to find the right starting time and ending time for things. But it is so nice to have a presentation where you can experience it in different forms: as an article, as a video, as a post with links and everything. Very handy.
  - Right, and in case a teacher uses this for class lectures, the student profits in all sorts of ways. The article-presentation format is very useful for a student to add their own notes to, and so on. Exactly as you said, having multiple forms is great. Video has its place, Reveal has its place, PDF has its place, the article has its place. All of them work together.
- Q: What changes have you seen in the culture while developing all these things, like the libre-halaal system and now Blee-LCNT?
  - A: We learn from one another. What I'm doing may be considered just a stepwise increment, but the cultural input is that we really should start thinking about providing solutions as opposed to packages. The FOSS culture is really limited in its scope to packages; even when you think of something very large like Debian, it is a collection of packages. And it is still choice-oriented, as opposed to solution-oriented.
Are there any additional topics or questions? Otherwise, I'll just add a few additional concepts.

- I agree with the 'Solutions over Packages' phrase :)
- Thank you Mohsen
- Q: Really interesting stuff, thank you for your talk :) Given that large AI companies are openly stealing IP and copyright, thereby eroding the authority of such law (and eroding truth itself as well), can you see a future where IP & copyright law become untenable, and what sort of onward effect might that have? Apologies if this is outside of the scope of your talk.
  - A: So yeah, over the past two years, something huge has happened. What I see as a solution essentially comes down to a talk that was given maybe two years ago by someone at EmacsConf, and its label was attribution-based economics. In my thinking, intellectual property as a whole is invalid. But that means that, through something like the Affero GPL, you focus on attribution basing, proper attribution basing. If somebody has done some work, it should be clear, no matter what, that that work is theirs. Even prior to AI, we were already seeing this. We were seeing large GitHub repos with hundreds of authors, and it was utterly unclear who would own the whole thing. No single piece of it is of significance; what is of significance is the whole thing. So moving towards attribution-based economics is key. And then, once we do that, we accept AI as a reality. AI should still take attribution-based economics very seriously and conform to it. In other words, what is generated by the machine should not be claimed to be no one's, or the machine owners', the AI owners'. It should still clearly be attributed to the people who contributed to its creation. This all becomes very muddy, and I don't have a simple or clear answer to it. But the perimeters of the solution lie in the rejection of intellectual property, the replacement of intellectual property with attribution-based economics, and restrictions on AI use of content that is not properly attributed. It's a complicated topic, and I would simply say I haven't figured it out at all. I just have a perimeter, a set of concepts that can be used to drive it.

[[!inline pages="internal(2025/info/blee-lcnt-after)" raw="yes"]]

[[!inline pages="internal(2025/info/blee-lcnt-nav)" raw="yes"]]