ank's Shortform
post by ank · 2025-01-21T16:55:21.170Z · LW · GW · 5 comments
comment by ank · 2025-02-18T03:01:48.082Z · LW(p) · GW(p)
Places of Loving Grace
On the manicured lawn of the White House, where every blade of grass bent in flawless symmetry and the air hummed with the scent of lilacs, history unfolded beneath a sky so blue it seemed painted. The president, his golden hair glinting like a crown, stepped forward to greet the first alien ever to visit Earth—a being of cerulean grace, her limbs angelic, eyes of liquid starlight. She had arrived not in a warship, but in a vessel resembling a cloud, iridescent and silent.
Published the full story as a post here: https://www.lesswrong.com/posts/jyNc8gY2dDb2FnrFB/places-of-loving-grace [LW · GW]
comment by ank · 2025-02-28T15:04:08.891Z · LW(p) · GW(p)
We can build the Artificial Static Place Intelligence – instead of creating AI/AGI agents that are like librarians who only hand you quotes from books and never let you enter the library itself to read the whole books. Why not expose the whole library – the entire multimodal language model – to real people, for example, in a computer game?
To make this place easier to visit and explore, we could make a digital copy of our planet Earth and expose the contents of the multimodal language model to everyone in a familiar, user-friendly UI of our planet (a toy sketch of one way to lay such contents out as map coordinates follows below).
We should not keep it hidden behind a strict librarian (the AI/AGI agent) that imposes rules on us, letting us read only the little quotes it spits out, while it itself holds the whole stolen output of humanity.
We can explore The Library without any strict guardian, in the comfort of our simulated planet Earth, on our devices, in VR, and eventually through some wireless brain-computer interface. It would always remain a game that no one is forced to play – unlike the agentic AI world that is being imposed on us more and more right now, and potentially forever.
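As a toy sketch of how such a map might be laid out – assuming, purely for illustration, the sentence-transformers and scikit-learn Python libraries; the encoder name and concept list are placeholders, not part of the proposal itself – one could embed concepts with a text encoder and project the embeddings down to 2D coordinates that a game engine could render as places on the simulated globe:

```python
# Hypothetical sketch: turn concept embeddings into 2D "map" coordinates,
# so related concepts land near each other on a simulated globe.
# Assumes sentence-transformers and scikit-learn are installed.
from sentence_transformers import SentenceTransformer
from sklearn.decomposition import PCA

concepts = ["photosynthesis", "the French Revolution", "quicksort",
            "whale song", "impressionist painting", "supply and demand"]

# Embed each concept with an off-the-shelf text encoder.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = encoder.encode(concepts)

# Project the high-dimensional embeddings down to two dimensions,
# which a game engine could then render as positions on a map.
coords = PCA(n_components=2).fit_transform(embeddings)

for name, (x, y) in zip(concepts, coords):
    print(f"{name}: ({x:.2f}, {y:.2f})")
```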
If you found it interesting, we discussed it here recently [EA · GW]
↑ comment by daijin · 2025-03-01T01:52:17.694Z · LW(p) · GW(p)
so you want to build a library containing all human writings + an AI librarian.
- the 'simulated planet earth' is a bit extra and overkill. why not a plaintext chat interface, e.g. what ChatGPT is doing now?
- of those people who use ChatGPT over real-life libraries (of course not everyone), why don't they 'just consult the source material'? my hypothesis is that the source material is dense and there is a cost to extracting the desired information from it. your AI librarian does not solve this.
I think what we have right now ("LLM assistants that are to-the-point" and "libraries containing source text") serve distinct purposes and have distinct advantages and disadvantages.
LLM-assistants-that-are-to-the-point are great, but they
- don't exist in-the-world, and therefore sometimes hallucinate, producing plausible-sounding but false statements; for example, "K-Theanine is a rare form of theanine, structurally similar to L-Theanine, and is primarily found in tea leaves (Camellia sinensis)" is statistically probable (I pulled it out of GPT-4 just now) but factually incorrect, since K-theanine does not exist.
- don't exist in-the-world, leading to suboptimal retrieval. e.g. if you ask an AI assistant 'how do I slice vegetables' when your true question is 'i'm hungry, i want food', the AI has no way of knowing that; and the AI also doesn't know which vegetables you are slicing, limiting its usefulness.
libraries containing source text partially solve the hallucination problem because human source text authors typically don't hallucinate. (except for every poorly written self-help book out there.)
from what I gather you are trying to solve the two problems above. great. but doubling down on 'the purity of full text' and wrapping some fake grass around it is not the solution.
here is my solution:
- atomize texts into conditional, contextually-absolute statements and then run retrieval on those statements. For example, "You should not eat cheese" becomes "eating excessive amounts of typically processed cheese over the long run may lead to excess sodium and fat intake". (a sketch follows this list)
- help AI assistants come into the world, while maintaining privacy
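a minimal sketch of what the atomize-then-retrieve step could look like, assuming the sentence-transformers library for embedding and cosine-similarity retrieval. the atomic statements below are hand-written stand-ins for the output of an LLM rewriting pass, and the encoder name is just a commonly available model, not a claim about the intended setup:

```python
# Hypothetical sketch: retrieval over atomized, self-contained statements.
# Assumes sentence-transformers is installed; names are illustrative.
from sentence_transformers import SentenceTransformer, util

# Stand-ins for the output of an LLM "atomization" pass that rewrote a
# source text into conditional, contextually-absolute statements.
atomic_statements = [
    "Eating excessive amounts of typically processed cheese over the "
    "long run may lead to excess sodium and fat intake.",
    "Moderate cheese consumption can fit into a balanced diet for most "
    "healthy adults.",
    "People with lactose intolerance may experience digestive discomfort "
    "after eating most cheeses.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
statement_vecs = encoder.encode(atomic_statements, convert_to_tensor=True)

query = "Is cheese bad for me?"
query_vec = encoder.encode(query, convert_to_tensor=True)

# Rank the atomized statements by cosine similarity to the query.
scores = util.cos_sim(query_vec, statement_vecs)[0]
for score, stmt in sorted(zip(scores.tolist(), atomic_statements), reverse=True):
    print(f"{score:.3f}  {stmt}")
```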
↑ comment by ank · 2025-03-01T12:36:41.021Z · LW(p) · GW(p)
Thank you, daijin, you have interesting ideas!
The library metaphor seems to be a versatile tool; here is how I understand it:
My motivation is safety: static, non-agentic AIs are by definition safe (humans can make them unsafe, but the static model I have in mind is just a geometric shape, like a statue). We can expose the library to people instead of keeping it "in the head" of the librarian. Basically, this way we can play around in the librarian's "head". Right now mostly AI interpretability researchers do that, not the whole of humanity, not casual users (a toy illustration follows).
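For what it's worth, here is a toy illustration of "playing around in the librarian's head" the way interpretability researchers can today: pulling out the hidden activations a language model produces for a prompt. It assumes the Hugging Face transformers library (with PyTorch) and the small GPT-2 model, chosen only because it is freely downloadable; this sketches the status quo, not the place-like UI itself:

```python
# Toy illustration: inspect the hidden activations of a language model.
# Assumes transformers and PyTorch are installed; GPT-2 is illustrative.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)

inputs = tokenizer("a library containing all human writings", return_tensors="pt")
outputs = model(**inputs)

# One activation tensor per layer: the raw "contents" that interpretability
# researchers probe, and that a place-like UI would have to render for everyone.
for i, layer in enumerate(outputs.hidden_states):
    print(f"layer {i}: shape {tuple(layer.shape)}")
```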
I see at least a few ways AIs can work:
- The only current way: "The librarian visits your brain." Sounds spooky, but this is essentially what happens, to a small extent, when you prompt a model and read the output (the output enters your brain).
- "The librarian visits and changes our world." This is where we are heading with agentic AIs.
- The new, safe way: let the user visit the librarian's "brain" instead, and make this "brain" more place-like. So instead of agentic librarians intruding on and changing our world and brains, we intrude on and change theirs, seeing its whole content and taking into our own world and brains only what we want.
I wrote more about this in the first half of this comment [EA(p) · GW(p)], if you're interested.
Have a nice day!