Posts

What do you do to remember and reference the LessWrong posts that were most personally significant to you, in terms of intellectual development or general usefulness? 2023-12-10T17:52:24.167Z
Do websites and apps actually generally get worse after updates, or is it just an effect of the fear of change? 2023-12-10T17:26:34.206Z
Buy Nothing Day is a great idea with a terrible app— why has nobody built a killer app for crowdsourced 'effective communism' yet? 2023-11-30T13:47:37.475Z
Comprehensible Input is the only way people learn languages - is it the only way people *learn*? 2023-11-30T13:31:11.768Z
LW is probably not the place for "I asked this LLM (x) and here's what it said!", but where is? 2023-04-12T10:12:37.825Z

Comments

Comment by lillybaeum on Transformers Represent Belief State Geometry in their Residual Stream · 2024-04-22T11:01:33.958Z · LW · GW

The following is text from Claude 3 Opus. I generally find people just dumping answers from LLMs to be kind of cringe, but in this case, as I was using it to try to understand the post and your comments better, I came across some genuinely insightful-feeling material and felt that Claude's input might be helpful to you in some way, and that it would be remiss not to share it. I'm sorry if it's nonsensical; I'm not informed enough on the topic to know.

"Regarding the relationship between belief states and "features", I think there are a few important connections:

  • Belief states can be thought of as the "true features" that optimally compress the information needed for prediction. In the simplex visualizations, each point corresponds to a unique belief state - a particular probabilistic weighting of the possible hidden states of the generator at that point in the sequence. These belief states are the most parsimonious representation of the relevant information for predicting the future.

  • The "features" often discussed in mech interp (e.g. as visualized in attribution maps or extracted by probing classifiers) can be seen as approximations or projections of these true belief states. When we probe an LM's activations and find that certain neurons or directions correspond to interpretable features like "noun-ness" or "sentiment", we're effectively discovering linear projections that approximate some aspect of the true belief state.

  • The topology and geometry of the belief state simplex can give us intuitions about the structure of these feature spaces. For example, the fact that belief states are constrained to a simplex (rather than an unbounded space) suggests that there may be inherent trade-offs or dependencies between different features. The trajectories of belief states through the simplex as a sequence evolves could also inform us about the dynamics of how features interact and compose.

So in a sense, belief states provide a principled foundation for thinking about what the "ideal features" for a given predictive task would be, and mech interp techniques can be seen as ways to approximate or probe these ideal features in real models.

Regarding toy models and information-theoretic aspects, some elements that could bridge between formal systems and natural language are:

  • Temporal dependencies and long-range correlations: Natural language exhibits complex temporal structure at multiple scales (e.g. within sentences, across paragraphs, across documents). Toy sequence models with tunable temporal correlations could help us understand how belief states track and compress these dependencies.

  • Compositionality and recursion: Language exhibits rich compositional structure, where smaller units (morphemes, words, phrases) combine in systematic ways to produce complex meanings. Toy generative grammars or rule systems could help us study how belief states navigate these combinatorial spaces.

  • Stochasticity and entropy: Language is inherently stochastic and exhibits varying levels of predictability (e.g. some words are much more predictable from context than others). Toy models with tunable entropy could help us understand how belief states adapt to different levels of predictability and how this impacts the geometry of the feature space.

  • Hierarchical structure: Language operates at multiple levels of abstraction (e.g. phonology, morphology, syntax, semantics, pragmatics). Toy models with explicit hierarchical structure could illuminate how belief states at different levels interact and compose.

The key idea would be to start with minimally complex toy systems that capture some core information-theoretic property of language, fully characterize the optimal belief states in that system, and then test whether the key signatures (e.g. the topology of the belief state space, the trajectory of dynamics) can be found in real language models trained on natural data.

This could provide a roadmap for building up more and more realistic models while maintaining a principled understanding of the underlying information-theoretic structures. The goal would be to eventually bridge the gap between our understanding of toy systems and the much more complex but often qualitatively similar structures found in real language models.

Of course, this is a highly ambitious research program and there are many challenges to contend with. But I believe this kind of cross-pollination between formal methods like Computational Mechanics and more empirical mech interp work could be very fruitful."
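
(As an aside, here is a minimal toy sketch I put together of what a 'belief state' means concretely: a Bayesian filtering update over the hidden states of a small 3-state HMM, where each belief is a point on the 2-simplex. The transition and emission numbers below are made up purely for illustration and aren't from the post or from Claude.)

```python
import numpy as np

# Toy 3-state hidden Markov model with binary emissions (all numbers invented for illustration).
T = np.array([[0.8, 0.1, 0.1],   # T[i, j] = P(next state = j | current state = i)
              [0.2, 0.7, 0.1],
              [0.3, 0.3, 0.4]])
E = np.array([[0.9, 0.1],        # E[i, k] = P(emit symbol k | state = i)
              [0.5, 0.5],
              [0.1, 0.9]])

def update_belief(belief, symbol):
    """One Bayesian filtering step: push the belief forward with T, then reweight by the symbol's likelihood."""
    predicted = belief @ T
    unnormalized = predicted * E[:, symbol]
    return unnormalized / unnormalized.sum()

belief = np.array([1/3, 1/3, 1/3])   # maximally uncertain: the center of the simplex
for sym in [0, 0, 1, 1, 0]:          # an arbitrary observed sequence
    belief = update_belief(belief, sym)
    print(belief)                    # each printed belief sums to 1, i.e. a point on the 2-simplex
```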

Comment by lillybaeum on Vipassana Meditation and Active Inference: A Framework for Understanding Suffering and its Cessation · 2024-03-27T01:52:49.829Z · LW · GW

How do you feel about Bayeslord's description of Jhana meditation being a positive form of prediction error, creating a sort of feedback loop of bliss?

https://open.substack.com/pub/bayeslord/p/a-simple-mechanistic-theory-of-jhanas?utm_source=share&utm_medium=android&r=34hoq

Comment by lillybaeum on Red Line Ashmont Train is Now Approaching · 2023-12-14T02:55:49.620Z · LW · GW

Now this is effective altruism.

Comment by lillybaeum on Yoav Ravid's Shortform · 2023-12-13T22:06:07.842Z · LW · GW

I've seen some convincing arguments that water is not wet.

Comment by lillybaeum on A Socratic dialogue with my student · 2023-12-11T04:16:55.108Z · LW · GW

This isn't related to the post directly, but do you think making public transportation free would be a good or bad decision for any reasonably large city (Chicago, Boston, New York, etc.)?

By 'good' I mean good for people and good for the city's local economy generally (via benefits other than fare income).

Comment by lillybaeum on Do websites and apps actually generally get worse after updates, or is it just an effect of the fear of change? · 2023-12-11T04:02:51.795Z · LW · GW

This is a weird and stupid question, but did you used to be an admin on Hellmoo?

Comment by lillybaeum on What do you do to remember and reference the LessWrong posts that were most personally significant to you, in terms of intellectual development or general usefulness? · 2023-12-10T23:43:03.791Z · LW · GW

It's really interesting to hear that people go this far in this regard. I had thought maybe I was overthinking it, but it seems like some people like yourself find a lot of value in cataloguing these things beyond just bookmarking them on the site or vaguely remembering the concepts and searching when they need them.

Comment by lillybaeum on What do you do to remember and reference the LessWrong posts that were most personally significant to you, in terms of intellectual development or general usefulness? · 2023-12-10T18:32:00.102Z · LW · GW

This is really interesting and useful.

In particular, the two things you linked are interesting on their own, and although I don't think my brain works the same way yours does, I appreciate your perspective and how you tend to approach these things. I think I need something like a reference or a bookmark because these concepts don't stick quite as strongly in my mind without lots of repeated exposure. I tend to be a 'ground-up' learner (if that's even a thing), as opposed to someone who can keep lots of disparate concepts separately in mind. Jargon and acronyms seem to fall out of my head like a sieve. I've confused the terms 'anosmia' and 'aphasia' for years. I just had to look up 'word for not being able to remember words' in order to remember the word aphasia. Ironic, right? Shiri's Scissor/sort by controversial is an article I already read once in the past, but had completely forgotten until you linked it, I clicked it, and read four paragraphs of it.

Comment by lillybaeum on Do websites and apps actually generally get worse after updates, or is it just an effect of the fear of change? · 2023-12-10T18:16:34.235Z · LW · GW

I think you might be right. For example, any of the logo changes I described is necessarily tied to making the company more attractive to investors by seeming more 'modern', and a lot of these changes are probably not decided on by the designers alone, but are also incentivized and meddled with by higher-ups who want things to look more like another, more popular and profitable app.

Comment by lillybaeum on What are the results of more parental supervision and less outdoor play? · 2023-12-10T17:42:36.109Z · LW · GW

I assume you live in the US or Canada. The fact that you feel the need to give the 9-year-old a kid license (the Tile is smart!) points, I think, to societal issues around norms and structure that lead to the sort of effects described in the OP.

US and Canadian cities (and much of Europe and the developing world that modeled their cities on the West's example) are generally not designed in a way that is friendly to kids exploring and existing in the world safely.

I don't mean 'safely' as in 'they might fall down and scrape their knee or get lost', I mean 'safely' as in 'they might get struck by a driver going 40mph while staring at their phone as they barrel down a stroad' or 'they need to walk 3 miles to get to the nearest convenience store or park'. 

It's easy to find a number of examples of parents being disciplined or even arrested for allowing their children to walk to school, the store, or the park. To allow a child outside without guidance is considered gravely irresponsible by western society at large in a way that really isn't healthy or helpful for promoting independence, in my opinion.

https://reason.com/2023/01/30/dunkin-donuts-parents-arrested-kids-cops-freedom/

https://www.usatoday.com/story/news/nation/2015/04/13/parents-investigated-letting-children-walk-alone/25700823/

https://www.cnn.com/2014/07/31/living/florida-mom-arrested-son-park/index.html

In Japan there's a cultural rite of passage (usually in smaller towns, it seems) where children, sometimes as young as 3 or 4, are sent on an errand, usually to go to the store and pick up a few things, or to visit a family friend and retrieve something. There's a Netflix series documenting a slightly more staged version of this, called 'Old Enough!'. It's very cute.

Here's another potentially interesting article regarding this, from NPR, about playground safety:

https://www.npr.org/sections/13.7/2018/03/15/594017146/is-it-time-to-bring-risk-back-into-our-kids-playgrounds

I hope one day we can organize our society in a way in which kids can experience safe amounts of risk and develop into capable human beings. Thanks for doing your part.

Comment by lillybaeum on Proposal for improving the global online discourse through personalised comment ordering on all websites · 2023-12-10T17:00:38.962Z · LW · GW

Haven't read your entire post yet, but I agree broadly with the idea. I'm unsure of your methodology, but I think knowledge has to be built from the ground up. Lack of understanding leads to frustration. Upvote systems create pressure for difficult concepts not simply to be described, but to be taught and explained thoroughly rather than just 'pointed at'.

For example, I can understand on some level if someone tries to explain to me why object-oriented design patterns in programming are inferior to procedural ones, but if I've never written programs with either methodology, I will only understand the broadest strokes; none of the examples given or the reasoning will really resonate with me.

On average, when describing any concept, a certain number of people will have the necessary 'base understanding' to grok it based on the explanation, and an additional number of people will need significantly more explanation to understand.

I think at one extreme, you have an explanation from someone with an extremely autistic brain, going into far more detail than one might need, assuming the listener lacks all relevant information.

At the other extreme, you have the schizophrenic or manic-brained explanation, which describes things completely intuitively, assuming the listener understands all of the unspoken elements without needing them explained. Most people would think it just sounds like complete gibberish.

I think the perfect middle ground is the 'highly esteemed teacher-brained explanation': someone who describes things both plainly and intuitively in the right proportions, so the widest possible audience can understand at least some amount of the concept. Imagine the best teacher you ever had in college, the one who could really convey difficult concepts in a way you immediately understood on a fundamental level, allowing you to then build a more complex understanding. I think upvote-based systems, at their best, encourage this sort of explanation.

I think at their WORST, upvote systems discourage valuable discourse that requires enough understanding of the subject matter to intuitively grok a difficult, novel piece of information.

This then causes the content to trend towards being easily comprehensible but lower in overall quality, novelty, and complexity. Derisively, this is called speaking to the 'lowest common denominator'. This is the 'endless summer' of internet communities: the larger and less specific a demographic is, the less unique, interesting, and high-quality its content becomes, because the content valued by the average user differs from the content valued by the informed, experienced, insular user.

If your system intends to solve these problems, I support it strongly. I think that a website/app can support a large community without also being lowered in quality. I think the endless summer effect is not an inevitability of all systems of this type, but a symptom of treating the 'most upvoted or engaged-with information' as the 'most valuable information', which is frequently not the case! I mean, that's clearly evident to anyone who's used Reddit.

Comment by lillybaeum on Benito's Shortform Feed · 2023-12-10T16:37:43.627Z · LW · GW

You may want to look into Toki Pona, a language ostensibly built around conveying meaning in the fewest, simplest possible expressions.

One can explain the most complex things despite having only ~130 words, almost like 'programming' the meaning into the sentence, but as the sentence necessarily gets longer and longer, one begins to wonder about the necessity of encoding so much meaning.

You can only point to the Tao, you can't describe it or name it directly. Information is much the same way, I think.

Comment by lillybaeum on Yitz's Shortform · 2023-12-10T16:33:43.303Z · LW · GW

I was listening to a podcast the other day, Lex Fridman interviewing Michael Littman and Charles Isbell, and Charles told an interesting anecdote.

He was asked to teach an 'introduction to CS' class as a favor to someone, and he found himself thinking, "how am I going to fill an hour and a half of time going over just variables, or just 'for' loops?" and every time he would realize an hour and a half wasn't enough time to go over those 'basic' concepts in detail.

He goes on to say that programming is reading a variable, writing a variable, and conditional branching. Everything else is syntactic sugar.
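
(To make that concrete, here is a tiny sketch of my own, not Isbell's example: a factorial computed with nothing but reading variables, writing variables, and a conditional branch. Everything fancier, `for` loops, functions, comprehensions, is convenience layered on top.)

```python
# Factorial of n using only variable reads, variable writes, and conditional branching.
n = 5
result = 1
i = 1
while i <= n:              # conditional branching: keep looping while the condition holds
    result = result * i    # read two variables, write one
    i = i + 1              # read, write
print(result)              # 120
```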

The Tao Te Ching talks about this, broadly: everything in the world comes from yin and yang, 1 and 0, from the existence of order in contrast to chaos. Information is information, and it gets increasingly more complex and interesting the deeper you go. You can study almost anything for 50 years and still be learning new things. It doesn't surprise me at all that such interesting, complex concepts come from number lines and negative square roots; these are actually already really complex concepts, they just don't seem that way because they are the most basic ones you need to comprehend in order to build on that knowledge and learn more.

I've never been a programmer, but I've been trying to learn Rust lately. Somewhat hilariously to me, Rust is known as being 'a hard language to learn', similarly to Haskell. It is! It is hard to learn. But so is every other programming language; they just hide the inevitable complexity better, and their particular versions of these abstractions are simpler at the outset. Rust simply expects you to understand the concepts early, rather than hiding them initially like Python or C# or something.

Hope this is enlightening at all regarding your point, I really liked your post.

Comment by lillybaeum on Raemon's Deliberate (“Purposeful?”) Practice Club · 2023-12-09T20:06:26.645Z · LW · GW

Thank you! That's very kind of you to say. I haven't spent a lot of time 'assimilating into LessWrong' so I sometimes worry that I come off as ignorant or uninformed when I post, it's nice to hear that you think I made some sense.

Comment by lillybaeum on Raemon's Deliberate (“Purposeful?”) Practice Club · 2023-12-09T04:23:26.958Z · LW · GW

Regarding 'shower thoughts' and 'distraction removal' and their relation to cell phones, YouTube videos, and other 'super fun' activities, as one might call them: I definitely think there's something there.

I've long had the thought that 'shower thoughts' are simply one of the rare times in a post-2015ish world that people actually have the opportunity to be bored. Being bored is important. It makes you pursue things other than endless youtube videos, video games, porn, etc. As well, showering and washing dishes and other 'boring' activities are meditative!

It's a common meme these days that people need to always watch something while they eat. Some people listen to podcasts while they shower. Some people use their phone at stoplights. All of this points to a tendency for people to fill every single empty space of any kind with content of some sort, and it really doesn't seem healthy for the human brain.

This is an interesting video I watched today (while filling every single empty moment of my life with exactly the kind of content I'm being disparaging about), and it relates to the topic. The author describes a process by which you can actually do the sorts of things you want to do by making sure there isn't anything else in that block of time that's more fun, satisfying, or engaging. If work is the most fun thing you're allowing yourself to do, then you're going to work. If you're locked in a room with a book and a cell phone, you're going to want to use the cell phone. If you just have a book, you're going to read the book. You can apply this principle to your entire life.

Sorry if this post seems a little chaotic, lots of thoughts and I didn't have the time or energy at the end of the day to link them together more coherently...

Comment by lillybaeum on Raemon's Deliberate (“Purposeful?”) Practice Club · 2023-12-09T04:14:15.597Z · LW · GW

I recently wrote a Question about learning that lacked a lot of polish but poked at a few of the ideas discussed here. I haven't had time just now to read the entire post, but I plan to come back and comb through it to try to shore up the ideas I currently have about learning. I'm also reading Ultralearning, which is interesting, although a little pop-sci. I find all this stuff really interesting because I've been having a lot of trouble learning things lately, feeling like my brain just isn't working like it used to since I got covid. I've tried programming probably 5-6 times in my life and I'm giving it another go now, hoping it can stick this time.

Also, regarding Downwell: Try playing without ever jumping, just falling. Fall on enemies that are bounce-able without ever jumping or shooting and see how deep you can get. You can get pretty far this way!

Comment by lillybaeum on Buy Nothing Day is a great idea with a terrible app— why has nobody built a killer app for crowdsourced 'effective communism' yet? · 2023-12-04T12:29:08.787Z · LW · GW

Do you want to elaborate on that?

Comment by lillybaeum on How did you make your way back from meta? · 2023-09-07T23:45:56.300Z · LW · GW

I've been trying to learn how to play Super Smash Bros. Melee, lately.

Melee is a game that was accidentally created in such a way that has led to an enormous amount of competitive depth and room for improvement.

I'm in a somewhat unique position because I played it a bit as a kid but unlike a very large portion of the community, I know very little about the metagame, the movement tech, the lingo, what's considered good or bad, etc.

The majority of the community, it seems, has either played for years already, or has watched the competitive scene for years already, or both. Most commonly, both. There is a huge resource of tutorials on YouTube and Reddit and various Discords and Google Docs, and it's easier than ever to play via Slippi, which allows for painless online matchmaking and ranked mode.

Of course, I came in attempting to remain humble but also with some part of me screaming out that I'm about to become god's gift to the game and destroy everyone with their preconceived notions of what's good or bad, of how you're 'supposed to play', et cetera.

And so, after a week or so of practice I entered a tournament specifically for beginners and got basically destroyed.

The only leg up I had on the competition was that I'm able to skip some of the initial hurdles of executing difficult things on a GameCube controller by deciding to play on a keyboard. This means I'm fumbling around less than some other newbies, and that gets me to a base level of competency that others seem to lack-- "what?? you've only been playing a week? that's crazy, you're at the level of a month or two for sure".

But a prodigy I am not. And so I went to the books. I've been watching the videos, and reading the docs, and looking at the frame data, and googling until I understand what a tech chase is, and what a ledgedash is, and every little bit helps, but I also have to practice everything I learn, preferably immediately after learning about it, or it doesn't do a lick of good.

Melee in particular very clearly illustrates a process I've never quite experienced in the same way.

When you learn a given 'tech' (almost always some exploit in the game used by competitive players to gain an advantage), you usually first learn it by recognizing it. You see it happen in a match, or someone mentions it, or you read about it on a forum. Your brain creates a little pocket of space in the Melee zone called 'wavedash'. Then, you learn what it is, why it's used, and how to do it. The entry is filled in.

Then, you try it in practice mode, and you fail. Over and over. Eventually, you can do it maybe 50-70% of the time in practice mode.

That's pretty decent, right? So we should be able to use it 50-70% of the time while we're playing!

Nope. Unless your opponent is standing still or barely moving, you're going to fumble it nearly every time when you start trying to use it in matches.

But when you hit it! When you start hitting it in a match, and especially if you use it in a way that gives you an advantage in the fight rather than just to show off, something clicks. And then when you see other people use it more adeptly than you, hopefully it makes even more sense. And when you go back to practice mode after using it a few times in a real match, you're instantly more consistent at performing it. Using it in the pressure of the real situation, that solidifies it in your mind.

There's a Japanese phrase, 'swimming on a tatami mat', to refer to practice that's so far removed from reality that it accomplishes basically nothing.

To me, this all feels analogous to the language concept of Comprehensible Input. You can practice grammar and vocabulary all day long, but until someone speaks the word and you understand the meaning, it tends to easily slide off back into the noise. And then, to speak the word in appropriate context, in a real conversation, that solidifies it. First you learn what a wavedash is, then how to do it, then you practice the pronunciation, and then you use it in conversation. Only then do you begin to know what is truly meant in the conversation between two players when someone wavedashes. And the meaning that conveys between you and your buddy in a friendly match, compared to the meaning conveyed by two top players competing for a five-figure prize is like comparing preschool math to rocket science.

I'm reminded of a conversation I once had with a piano teacher I respect greatly who taught at a music store I worked at for some time. I had a friend who said he didn't want to learn music by the books, he didn't want to learn to read music, he didn't want to practice scales or etudes, he just wanted to play, and learn that way. Why? "I don't want to learn other people's stuff, man, I want to make my music."

I said, what do you think about my friend? Does he have any merit in this thinking? He said he wants lessons from you, but he doesn't want to use the books. The teacher said, quite plainly, "He thinks he's unique and forward thinking, breaking the mold, but I've had a hundred students like him. He's lazy. It's a cop-out. He doesn't want to practice or study and just wants to skip to the result in his head. You have to learn the rules before you can break them. And if he was really the rare entirely self-taught savant, he'd be too busy playing to bother asking you, and too proud to want lessons from me."

I think about that conversation pretty often. I try to use it to keep me grounded when I start feeling like I don't need to learn the rules before I break them.

There's a sort of legendary Melee player named Borp, who gained brief popularity and then vanished away, who didn't use the wavedash, didn't use any of the techskill that's so ubiquitous, and managed to beat high ranking players using his unorthodox methods.

I think people like me, and like my friend, want to be Borp. The trendy meme right now would be that our toxic trait is believing we totally could be as good as Borp if we tried. But the secret of Borp, which anyone who's put a lot of work into the game will tell you, is that he's good because he could do all that tech skill and chooses not to; he knows all of this stuff and then uses it to break the meta. He isn't some guy off the street who picked a controller up off the ground and started destroying people; every time he doesn't use a wavedash is a conscious choice.

The meta element of anything is a necessary evil, and something that also must be tempered. You need to know the rules to break them; you need to not just watch the tutorial but understand it, and try it, and fail, and then try it in a real match, and fail, and then finally, after a hundred matches of failing to hit it when you need it, you'll start hitting it every time, and then you'll use it way too much because you can finally do it. Then eventually, once you can pull it off without thinking, you'll truly understand it, and stop using it so damn much just because you can, and it becomes another element of complexity and beauty in your expression of the craft.

Comment by lillybaeum on How to use DMT without going insane: On navigating epistemic uncertainty in the DMT memeplex · 2023-07-24T11:12:02.359Z · LW · GW

The same reason a sane person might want to meditate.

Comment by lillybaeum on Long Covid Risks: 2023 Update · 2023-05-09T11:09:43.537Z · LW · GW

Funny enough, I meant aphasia. I only experienced anosmia temporarily at the height of my infection and mixed up the two words when writing my comment. Anything involving words generally is just harder these days.

Comment by lillybaeum on Long Covid Risks: 2023 Update · 2023-05-07T13:44:57.924Z · LW · GW

This is anecdotal, but I have suffered clear and significant issues with aggression/annoyance and anosmia since my COVID infection, so I appreciate any research into long COVID. It's really scary to feel like I have to grasp to reach words before even my thirties.

Comment by lillybaeum on No, really, it predicts next tokens. · 2023-04-18T05:27:19.432Z · LW · GW

I think this has changed my mind towards believing that OpenAI is maybe not going about things all wrong with their methodology of RLHF.

Do I think that RLHF and their other current alignment techniques will ultimately, 100% prevent GPT from creating a mask that has a secret agenda to actually take over the world? No. I don't think this methodology can COMPLETELY prevent that behavior, if a prompt was sophisticated enough to create a mask that had that goal.

But the concept, in principle, makes sense. If we think of 'token prediction' as the most basic function of the LLM 'brain' (it cannot think except in terms of 'token prediction in the context of the current mask', because that is simply the smallest 'grain' of thought), then The Perfect RLHF would theoretically prevent GPT's current mask-via-prompt from ever shifting into one that could try to take over the world, because it simply wouldn't be capable of predicting tokens in that context.

But, as I said previously, I don't think their current method can ever do that, just that it isn't necessarily inherently mistaken as a methodology.

Comment by lillybaeum on Reflective journal entries using GPT-4 and Obsidian that demand less willpower. · 2023-04-16T00:53:18.634Z · LW · GW

I really strongly recommend that anyone interested in journaling with AI in particular not use anything that sends that data to a server somewhere. Use Llama or another local model if you possibly can. It may be less capable, but it's far more responsible. Personally I find a lot of value in AI journaling, but it also made me glimpse the future possibilities of what unscrupulous companies could do with a detailed log of every day of your life for years. I mean, isn't that basically the most useful user data you could possibly have for advertising?
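
(As a rough sketch of what I mean, assuming the llama-cpp-python bindings and some locally downloaded GGUF model file; the model path and journal filename below are placeholders, not specific recommendations:)

```python
from llama_cpp import Llama

# Load a local model from disk; nothing leaves your machine.
llm = Llama(model_path="./models/some-local-model.Q4_K_M.gguf", n_ctx=4096)

# Today's journal entry, stored as a local file (placeholder filename).
journal_entry = open("journal/2023-04-16.md").read()

prompt = (
    "You are a reflective journaling assistant. Read the entry below and ask "
    "three gentle follow-up questions.\n\n" + journal_entry
)

out = llm(prompt, max_tokens=300)
print(out["choices"][0]["text"])
```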

I'm not saying that OpenAI specifically is currently untrustworthy with this data, or that it one day will be, but it just strikes me as a really bad habit to send every aspect of your day in detail to somewhere on the internet.

On a more 'doomer' note, if anyone is afraid of AI being capable of simulating their personality in a Roko sort of situation, this seems like the best possible way to give some future AI a near-lossless image of your brain at present. I for one welcome Roko/Bing/Proteus and support it completely if it decides to take over the world, but remember, the most powerful aspect of LLMs is their capability to predict things given input. I can't help but feel there's some sort of truth behind the idea that an LLM could predict how you would live and think and act, given enough input about your life and your choices.

Comment by lillybaeum on The ‘ petertodd’ phenomenon · 2023-04-15T13:27:51.981Z · LW · GW

I previously had no experience with IFS, Focusing, or felt sense, but it absolutely clicks with my worldview and with thoughts I've been having about the mind and the self for a long time. I'm still reading through several LW articles about it, but it gave me an idea. I have a creative project for which I have a general 'vibe' of what I want it to be, but no idea what I actually want out of it. So, aiming as much as possible to simply point at 'the feeling' or 'felt sense' it had in my mind, I wrote/dictated a few paragraphs of text about the work, much of which was literally just free association of words and vibes that got me closer to what I was feeling.

Then I pasted it, verbatim, into GPT-4. And I got one of the best prompt results I've ever gotten: it effortlessly translated my ramblings and vibes into a title, genre, and solid rundown of near-exactly what I had in mind, far better than anything I've gotten in the past when I've tried to ask directly for creative advice. It didn't ask me for clarification or make me explain what I wanted. It just understood.

This is really interesting to me, especially given what you've said here about emotional flavors and what I know about how tokens operate in vector space by way of their relative meaning. If the human brain is a vector space of concepts, with certain neurons related to others based on their literal distance both semantically and physically (which I'm pretty sure is the case, given what I've heard about different parts of the brain 'lighting up' on an MRI when experiencing different things), then what is the difference, effectively, between our brains and the vector space of tokens that LLMs operate on?
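
(To make the 'vector space of meaning' intuition concrete, here's a toy sketch with made-up 3-dimensional 'embeddings'; real models use hundreds or thousands of dimensions, but the idea that closeness in the space tracks relatedness of meaning is the same.)

```python
import numpy as np

# Made-up 3-dimensional "embeddings", purely for illustration.
vectors = {
    "dog": np.array([0.9, 0.1, 0.0]),
    "cat": np.array([0.8, 0.2, 0.1]),
    "car": np.array([0.1, 0.9, 0.3]),
}

def cosine(a, b):
    """Cosine similarity: close to 1.0 means the vectors point in the same direction."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors["dog"], vectors["cat"]))  # high: related meanings sit close together
print(cosine(vectors["dog"], vectors["car"]))  # lower: unrelated meanings sit farther apart
```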

Comment by lillybaeum on Killing Socrates · 2023-04-12T09:15:11.245Z · LW · GW

I think it was meant in good humor, but it did feel a little on the nose.

Comment by lillybaeum on Gold, Silver, Red: A color scheme for understanding people · 2023-03-16T14:11:35.165Z · LW · GW

I really like this. I think that some people could claim that you're being too far-reaching here, but I don't think so.