path: root/2021/talks/imaginary.md
author    Sacha Chua <sacha@sachachua.com>    2021-12-02 09:30:32 -0500
committer Sacha Chua <sacha@sachachua.com>    2021-12-02 09:30:32 -0500
commit    8ce2aaa5433b7b3550148b945579eda611629d9f (patch)
tree      3cf0e78c29dc83942d342fcb4c4048fefb5c4211 /2021/talks/imaginary.md
parent    a744cc027439dde603e0a6f8f5467a0736fdb763 (diff)
Update wiki so far
Diffstat (limited to '2021/talks/imaginary.md')
-rw-r--r--  2021/talks/imaginary.md | 85
1 file changed, 85 insertions, 0 deletions
diff --git a/2021/talks/imaginary.md b/2021/talks/imaginary.md
index a5416c28..0f9a030c 100644
--- a/2021/talks/imaginary.md
+++ b/2021/talks/imaginary.md
@@ -33,6 +33,91 @@ GPL. Please keep an open mind.
IRC nick: libertyprime
+BBB:
+
+- libertyprime: What kinds of software is IP (imaginary programming) not suitable for?
+ - libertyprime: Good question. IP is great for things like mocking API calls, because you can imagine the API call output. It's great for code generation, where you can then do a macro-expand to generate code as you are programming. It's great for coming up with functions that might be difficult to write -- `(idefun color-of-watermelon)`, for example
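+ - (For illustration, a rough sketch of what such an imaginary definition might look like -- assuming `idefun` takes a name, an argument list and a docstring, as in the forms discussed later; the body is left for the language model to imagine:)
+   ```elisp
+   ;; Sketch only: no body is written; the language model imagines one.
+   (idefun color-of-watermelon ()
+     "Return the colour of a watermelon as a string.")
+   ;; Hypothetical call -- the return value is imagined, not computed:
+   (color-of-watermelon)   ; e.g. "green"
+   ```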
+- Hey libertyprime, where do we follow up to find out more?
+ - libertyprime: it's not really good for scaffolding code. I consider emacs to be 45 years of scaffolding to build imaginary functions around
+ - libertyprime: Because IP needs rigid complementary code.
+- So how does an IP user verify that the imagined code does what is intended?
+- I like the word 'imaginary' to describe the paradigm
+- libertyprime: How does an IP user verify that the imagined code does what is intended? Through a combination of 'validator functions', imaginary validation functions and language model fine-tuning. So you may also choose an underlying language model to use when running code. That model may have been trained to do the task you are giving it. If you're trying out the docker container you can run `pen config` or do `M-x pen-customize` to force the language model, or change it in the imagine-evaluating-emacs-lisp .prompt file
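+ - (Illustrative sketch only, not Pen.el's actual validator API: the idea is that an ordinary Elisp predicate can check an imagined result before it is trusted:)
+   ```elisp
+   ;; Illustration of the "validator" idea, not Pen.el's real API:
+   ;; check the imagined answer with a plain predicate before using it.
+   (defun my/checked-color-of-watermelon ()
+     (let ((guess (color-of-watermelon)))   ; imagined function from above
+       (if (member guess '("green" "red" "pink" "yellow"))
+           guess
+         (error "Imagined output %S failed validation" guess))))
+   ```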
+- libertyprime: Haha. The brilliance of emacs, and the reason this stuff is so easy to do with emacs, is that emacs provides intelligible modes and abstractions with which to build prompts. Otherwise you have an amorphous blob of a language model.
+- libertyprime: So the value is absolutely not in replacing Emacs entirely, as I've come to understand it, but in combining real and imaginary.
+- (wish i could give you back just a fraction of the time you saved just this one person here!)
+- I would love to see the result of imaginary major modes and keymaps
+- libertyprime, is the idea for the first draft of the gpt output to be final, or do you expect to edit some?
+- There seems to be a lot of jargon in this context, like validators, prompts, language models, etc. It's really hard for someone who doesn't already use these things to understand what these pieces are and how they fit together.
+ - well prompts seem to be the input you give to the language model, which it then generates a follow up to
+ - validators sounds like tests? language models are neural language models like GPT-3/j etc.
+ - libertyprime: <http://github.com/semiosis/glossaries-gh/blob/master/pe-prompt-engineering.txt>
+   <http://github.com/semiosis/glossaries-gh/blob/master/prompt-engineering.txt>
+   <http://github.com/semiosis/glossaries-gh/blob/master/pen.el.txt>
+ - libertyprime: Here are some glossaries for the subjects
+ - So like, a prompt would be "Marco!" and GPT-3 would of course say... "Polo"
+ - libertyprime: @alphapapa, I also have a much more mature prompt format README, here: <https://github.com/semiosis/prompts>
+ - libertyprime: which can explain 'validator', etc.
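+ - (A made-up illustration of how those pieces relate -- the real .prompt format lives in the prompts repo linked above; here it is just expressed as an Elisp plist:)
+   ```elisp
+   ;; Made-up illustration only.  A "prompt" is the text fed to the
+   ;; language model; a "validator" decides whether the generated
+   ;; continuation is usable.
+   (setq my/example-prompt
+         '(:title     "Colour of a thing"
+           :prompt    "The colour of <thing> is"
+           :validator (lambda (output)
+                        (string-match-p "\\`[a-z ]+\\'" output))))
+   ```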
+- aindilis: So uh... does GPT-3 know... everything? in every human and computer language? I don't understand its role exactly, or its limitations.
+ - GPT-3 knows a lot, but not all, from my experience. It's pretty scary, in a good way. I think libertyprime wants to keep it libre.
+ - libertyprime: the latest language models such as Codex are world language + codex, and they know everything at an abstract level, like a human does, in a way. So their depth may be superficial. They're pretty good knowledge aggregators.
+- so libertyprime can you just tab complete and it completes on like the previous sentence, region, buffer, etc?
+ - libertyprime: Yes, it has basic autocompletion functions (word, line, lines). I'm also making more interesting autocompletion functions, which do beam-search on downstream generations -- I'm calling it desirable-search. <http://github.com/semiosis/pen.el/blob/master/src/pen-example-config.el>
+ - libertyprime: There are some key binding definitions here which will work for the docker container
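+ - (The command names below are hypothetical placeholders, only to show the kind of bindings meant -- the real commands and bindings are in pen-example-config.el linked above:)
+   ```elisp
+   ;; Hypothetical names, for illustration only -- see
+   ;; pen-example-config.el for the real commands and bindings.
+   (global-set-key (kbd "C-c p w") #'pen-complete-word)    ; next word
+   (global-set-key (kbd "C-c p l") #'pen-complete-line)    ; rest of the line
+   (global-set-key (kbd "C-c p L") #'pen-complete-lines)   ; several lines
+   ```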
+- Does GPT-3 "know" how to transliterate from say public code written in JS / Other-Lang to elisp if you were trying to imaginary code similar function names?
+ - libertyprime: yes, it absolutely can. transpilation is one thing it is very good at. But more bizarrely, you can also transpile intermediary languages that are composed of multiple different languages, chimerically. For example, you can smash out your algorithm with a combination of elisp and bash and it will understand when it transpiles into a real language.
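+ - (For example, a chimeric sketch like `(defun count-matching-lines (pat file) (grep -c pat file))` mixes an Elisp defun with a shell command; one plausible real-Elisp target, written by hand here rather than generated by a model, might be:)
+   ```elisp
+   ;; Hand-written illustration of what a transpiled result could look
+   ;; like; not actual model output.
+   (defun my/count-matching-lines (pattern file)
+     "Count lines in FILE that match PATTERN, like `grep -c'."
+     (with-temp-buffer
+       (insert-file-contents file)
+       (goto-char (point-min))
+       (let ((n 0))
+         (while (re-search-forward pattern nil t)
+           (setq n (1+ n))
+           (forward-line 1))
+         n)))
+   ```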
+- How well does it actually work to write a function in a mishmash of Bash and Elisp? I can't imagine that working well in practice. There are too many semantic differences in the languages and implementations
+ - libertyprime: it's a very new sort of thing, but feels natural as you are doing it, to generate code. the results of generating code should most probably be looked at before running. that being said, you can also run 'ieval' around it to run it in inference. I think the takeaway should be that these models are getting better and better and show no signs yet of reducing quality of results or ability -- no sign yet
+- how does lexical binding affect things, if at all?
+- How about going from a CLOS/EIEIO style of OO to Java / C++ style? Or Erlang style of parameter pattern matching?
+- so IIUC GPT-3 is a service run on a remote system, right? And it's proprietary? How big is it in terms of bytes?
+ - libertyprime: yes, aggregated language models are not good in my opinion. GPT-3 is around 170 GB, approximately 1GB per billion parameters, IIUC
+ - libertyprime: There are libre models, and you can connect one to Pen.el to run the inference etc. My goal is to decentralise them though
+ - libertyprime: Because I don't think that 170GB is accessible enough. The issue is actually running the models though. You need a very large computer indeed for that
+ - libertyprime: I can do a customized demo if anyone wants
+- can someone here provide some sample input, and you run it and paste the result, just to give an idea of the quality? or do you already have samples online?
+- here's an idea for a demo... something like (idefun (string target-language) "Translate STRING from its source language into TARGET-LANGUAGE and output it to the echo area.")
+ - oops I forgot to name the function, was thinking of ilambda
+ - I have a feeling that such a large scope for the function will exceed the max output size of the model. maybe we work on a more realistic example?
+ - I was hoping the model would solve all the messy problems for me :)
+ - libertyprime: Oh crud. I hope I haven't broken ilambda. Lol I added support for 0 arguments, it makes it variadic. This will work
+ - doesn't seem like it quite understood the purpose but I can see the connection
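+ - (Putting the earlier pieces together, the named version of that demo idea would presumably look something like this -- again assuming `idefun` takes a name, an argument list and a docstring:)
+   ```elisp
+   ;; Sketch only: the named form of the demo idea above.
+   (idefun translate (string target-language)
+     "Translate STRING from its source language into TARGET-LANGUAGE and output it to the echo area.")
+   ;; Hypothetical call:
+   (translate "Bonjour tout le monde" "English")
+   ```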
+- what happens if you change target lang to "Elisp" >:)
+ - look at the echo area if you didn't notice it
+ - oh wait, I missed the echo area
+ - libertyprime: Yup, exactly, that will work too. One sec
+- can you run the function again or show "C-h e"? And can we see the resulting source code?
+ - libertyprime: translate python to elisp
+ - libertyprime: just with (idefun translate)
+ - libertyprime: No docstring, etc. or arguments.
+ - libertyprime: crud. It didn't work haha
+ - libertyprime: Sigh.
+- libertyprime: I need to fix the 2-ary argument thing. :S Really sorry I think I broke it
+- I'd like to see the generated (or "imagined") Elisp source code, assuming it does some HTTP API queries to do the translation and such
+- libertyprime: Yup, I can show that. It works much better when I use OpenAI Codex. Here are some generated functions
+- libertyprime: That's how it works under the hood. Then it cuts out the bit that you want
+- This reminds me of the classical AI paradigm of "generate and check."
+- libertyprime: Sigh. I really cry when demos break. Sorry. I demo'd the underlying prompt though. I broke ilambda, i think
+- I think I saw it generate a huge fibonacci function, is that still in your kill-ring?
+- okay, well thanks for demoing, the code is pretty stable though at this point right? this is just the norm with any demo.
+- I bet people would be glad to watch/read something later on if you want time to work on it.
+- libertyprime: <https://semiosis.github.io/cterm/> This is what I call the complex terminal. Essentially you prefix any terminal program with ct and you get autocompletion etc. for anything. It uses Emacs's term-mode
+- libertyprime: <https://semiosis.github.io/ii/> And this, ii, it's fully imaginary terminals, so you can import imaginary libraries, etc. and work with them.
+- libertyprime: <https://semiosis.github.io/apostrophe/> This one here imagines conversations with anyone that the model knows about. So I'm demoing having a 3-way conversation with Amber Heard, Tom Cruise and Chris Nolan.
+ - so you used GPT to generate a compliment, and now GPT generates the convo from that prompt?
+ - libertyprime: Yeah, so the best way to interact with these types of chatbots is to imagine the situation you are in beforehand. The initial phrases can be anything you can think of, really. "Why are you in the bathtub?", for example. But I tend to open with something like, "May I interrupt? What were you just working on?" So by choosing the prompt very carefully, you can tease out the information you require.
+- libertyprime: <https://semiosis.github.io/nlsh/> and this, which is a natural language shell
+- libertyprime: I also have a way to filter results semantically, with my semantic search prompt <http://github.com/semiosis/prompts/blob/master/prompts/textual-semantic-search-filter-2.prompt>
+- libertyprime: You can run all these prompts also from bash like so: pl "[\"It's cool. I used to dance zouk.\",\"I don't know.\",\"I'm not sure.\",\"I can't stop dancing to it.\",\"I think it's ok.\",\"It's cool but I prefer rock and roll.\",\"I don't know. It sounds good.\",\"Nice but a bit too fast\",\"Oh, I know zouk, you can teach it to me.\",\"Zouk is nice.\"]" | "penf" "-u" "pf-textual-semantic-search-filter/2" "positive response". That will pipe JSON results into Pen.el, and have it filtered. All prompting functions are also available as shell commands.
+- well I think this is the coolest thing I've seen in a long time. how do we follow up with you and get involved? run it etc?
+- libertyprime: hehe thanks aindilis: I'm on #emacs as libertyprime. Feel free to hit me up any time. Otherwise, the setup for Pen.el is fairly straightforward. If you have any issues demoing, I'd be very interested, so I can make Pen.el more reliable. I have a Discord server. I'll copy the link. One sec
+- Do you think you could run an IRC channel too?
+ - libertyprime: <https://discord.gg/sKR8h9NT>
+- Thanks a lot, very interesting and I am excited to learn more later!
+- yeah this talk was crazy good, ty!
+
+IRC:
+
- What Shane is saying right now reminds me a lot of the SICP opening words, about how programming, and computing ideas in general are all about dreams and magic. Creating an idealized solution from abstractions and building blocks.
- This also reminds me of the concept of Humane Tech. Technology, and frameworks that are inherently conducive to human curiosity, intelligence, and all the best traits. <https://github.com/humanetech-community/awesome-humane-tech>
- I think this is like semantic auto-complete on steroids, like tab completion of whatever you're typing, or translation of something you've written into code for instance.