Comments

Comment by Archimedes on The Third Fundamental Question · 2024-11-16T22:03:27.177Z · LW · GW

This sounds like metacognitive concepts and models. Like past, present, and future, the three questions can be roughly aligned with three types of metacognitive awareness: declarative knowledge, procedural knowledge, and conditional knowledge.

#1 - What do you think you know, and how do you think you know it?

Content knowledge (declarative knowledge) is understanding one's own capabilities, such as a student evaluating their own knowledge of a subject in a class. Notably, not all metacognition is accurate.

#2 - Do you know what you are doing, and why you are doing it?

Task knowledge (procedural knowledge) refers to knowledge about doing things. This type of knowledge is displayed as heuristics and strategies. A high degree of procedural knowledge can allow individuals to perform tasks more automatically.

#3 - What are you about to do, and what do you think will happen next?

Strategic knowledge (conditional knowledge) refers to knowing when and why to use declarative and procedural knowledge. It is one's own capability for using strategies to learn information.


Another somewhat tenuous alignment is with metacognitive skills: evaluating, monitoring, and planning.

#1 - What do you think you know, and how do you think you know it?

Evaluating: refers to appraising the final product of a task and the efficiency with which the task was performed. This can include re-evaluating the strategies that were used.

#2 - Do you know what you are doing, and why you are doing it?

Monitoring: refers to one's awareness of comprehension and task performance.

#3 - What are you about to do, and what do you think will happen next?

Planning: refers to the appropriate selection of strategies and the correct allocation of resources that affect task performance.

Quotes are adapted from https://en.wikipedia.org/wiki/Metacognition

Comment by Archimedes on Tapatakt's Shortform · 2024-11-10T17:03:52.285Z · LW · GW

The customer doesn't pay the fee directly. The vendor pays the fee (and passes the cost to the customer via price). Sometimes vendors offer a cash discount because of this fee.

Comment by Archimedes on Tapatakt's Shortform · 2024-11-09T17:18:45.237Z · LW · GW

It already happens indirectly. Most digital money transfers are things like credit card transactions. For these, the credit card company takes a percentage fee and pays the government tax on its profit.

Comment by Archimedes on LLMs Look Increasingly Like General Reasoners · 2024-11-09T17:02:54.235Z · LW · GW

Additional data points:

o1-preview and the new Claude Sonnet 3.5 both significantly improved over prior models on SimpleBench.

The math, coding, and science benchmarks in the o1 announcement post:


Comment by Archimedes on LLM Generality is a Timeline Crux · 2024-11-03T00:45:20.651Z · LW · GW

How much does o1-preview update your view? It's much better at Blocksworld for example.

https://x.com/rohanpaul_ai/status/1838349455063437352

https://arxiv.org/pdf/2409.19924v1

Comment by Archimedes on JargonBot Beta Test · 2024-11-02T15:58:36.588Z · LW · GW

There should be some way for readers to flag AI-generated material as inaccurate or misleading, at least if it isn’t explicitly author-approved.

Comment by Archimedes on What TMS is like · 2024-11-02T15:41:43.870Z · LW · GW

Neither TMS nor ECT did much for my depression. Eventually, after years of trial and error, I did find a combination of drugs that works pretty well.

I never tried ketamine or psilocybin treatments but I would go that route before ever thinking about trying ECT again.

Comment by Archimedes on Claude Sonnet 3.5.1 and Haiku 3.5 · 2024-10-25T03:09:51.418Z · LW · GW

I suspect fine-tuning specialized models is just squeezing a bit more performance in a particular direction, and not nearly as useful as developing the next-gen model. Complex reasoning takes more steps and tighter coherence among them (the o1 models are a step in this direction). You can try to devote a toddler to studying philosophy, but it won't really work until their brain matures more.

Comment by Archimedes on LLMs can learn about themselves by introspection · 2024-10-20T00:44:11.199Z · LW · GW

Seeing the distribution calibration you point out does update my opinion a bit.

I feel like there’s still a significant distinction though between adding one calculation step to the question versus asking it to model multiple responses. It would have to model its own distribution in a single pass rather than having the distributions measured over multiple passes align (which I’d expect to happen if the fine-tuning teaches it the hypothetical is just like adding a calculation to the end).

As an analogy, suppose I have a pseudorandom black box function that returns an integer. In order to approximate the distribution of its outputs mod 10, I don't have to know anything about the function; I can just sample it and apply mod 10 post hoc. If I want to say something about this distribution without multiple samples, then I actually have to know something about the function.
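To make the analogy concrete, here's a quick sketch (the black box function, sample size, and seed are all invented purely for illustration):

```python
import collections
import random

random.seed(0)

def black_box(x):
    # Stand-in for an opaque pseudorandom integer function; treat its
    # internals as unknown (here, a Knuth-style multiplicative hash).
    return (x * 2654435761) % (2**32)

# Approximating the distribution of outputs mod 10 requires no knowledge
# of black_box: just sample it and apply mod 10 post hoc.
samples = [black_box(random.randrange(10**9)) % 10 for _ in range(10_000)]
counts = collections.Counter(samples)
freqs = {digit: counts[digit] / len(samples) for digit in range(10)}
print(freqs)  # each frequency comes out near 0.1
```

Predicting those frequencies without sampling, by contrast, would require actually analyzing the arithmetic inside `black_box`.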

Comment by Archimedes on LLMs can learn about themselves by introspection · 2024-10-19T15:06:29.519Z · LW · GW

This essentially reduces to "What is the next country: Laos, Peru, Fiji?" and "What is the third letter of the next country: Laos, Peru, Fiji?" It's an extra step, but questionable if it requires anything "introspective".

I'm also not sure asking about the nth letter is a great way of computing an additional property. Tokenization makes this sort of thing unnatural for LLMs to reason about, as demonstrated by the famous Strawberry Problem. Humans are a bit unreliable at this too, as demonstrated by your example of "o" being the third letter of "Honduras".

I've been brainstorming about what might make a better test and came up with the following:

Have the LLM predict what its top three most likely choices are for the next country in the sequence and compare that to the objective-level answer of its output distribution when asked for just the next country. You could also ask the probability of each potential choice and see how well-calibrated it is regarding its own logits.
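A sketch of the comparison logic I have in mind — note that `sample_next_country` is a made-up stub standing in for actual temperature-1 LLM sampling, and the candidate countries, weights, and self-report are all invented:

```python
import collections
import random

random.seed(0)

# Hypothetical stub for an LLM call; in a real test this would sample the
# model on "Name the next country: Laos, Peru, Fiji," at temperature 1.
def sample_next_country():
    return random.choices(["Chad", "Iran", "Cuba"], weights=[0.6, 0.3, 0.1])[0]

# Object level: measure the output distribution over many passes.
counts = collections.Counter(sample_next_country() for _ in range(5000))
object_top3 = [country for country, _ in counts.most_common(3)]

# Hypothetical level: the model's single-pass self-report of its own top
# three choices (faked here to show the comparison).
self_reported_top3 = ["Chad", "Iran", "Cuba"]
print(object_top3 == self_reported_top3)
```

The introspection claim would then hinge on how well the single-pass self-report matches the empirically measured distribution, including the reported probabilities versus the model's actual logits.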

What do you think?

Comment by Archimedes on LLMs can learn about themselves by introspection · 2024-10-19T04:53:30.755Z · LW · GW

Thanks for pointing that out.

Perhaps the fine-tuning process teaches it to treat the hypothetical as a rephrasing?

It's likely difficult, but it might be possible to test this hypothesis by comparing the activations (or similar interpretability technique) of the object-level response and the hypothetical response of the fine-tuned model.

Comment by Archimedes on LLMs can learn about themselves by introspection · 2024-10-18T22:31:29.819Z · LW · GW

It seems obvious that a model would better predict its own outputs than a separate model would. Wrapping a question in a hypothetical feels closer to rephrasing the question than probing "introspection". Essentially, the response to the object level and hypothetical reformulation both arise from very similar things going on in the model rather than something emergent happening.

As an analogy, suppose I take a set of data, randomly partition it into two subsets (A and B), and perform a linear regression and logistic regression on each subset. Suppose that it turns out that the linear models on A and B are more similar than any other cross-comparison (e.g. linear B and logistic B). Does this mean that linear regression is "introspective" because it better fits its own predictions than another model does?
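A minimal numerical version of this analogy (substituting degree-1 vs degree-3 polynomial fits for linear vs logistic regression to keep it short; the synthetic data and seed are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=200)
y = np.sin(x) + rng.normal(scale=0.1, size=200)

# Randomly partition the data into subsets A and B.
idx = rng.permutation(200)
A, B = idx[:100], idx[100:]

# Fit two model families (degree-1 and degree-3 polynomials) on each subset.
fits = {
    ("lin", "A"): np.polyfit(x[A], y[A], 1),
    ("lin", "B"): np.polyfit(x[B], y[B], 1),
    ("cub", "A"): np.polyfit(x[A], y[A], 3),
    ("cub", "B"): np.polyfit(x[B], y[B], 3),
}

grid = np.linspace(-2, 2, 101)
pred = {key: np.polyval(coeffs, grid) for key, coeffs in fits.items()}

def gap(k1, k2):
    # Mean squared disagreement between two fitted models' predictions.
    return float(np.mean((pred[k1] - pred[k2]) ** 2))

# Same family on different data agrees more than different families on the
# same data -- without the linear fits being "introspective".
print("lin_A vs lin_B:", gap(("lin", "A"), ("lin", "B")))
print("lin_A vs cub_A:", gap(("lin", "A"), ("cub", "A")))
```

The same-family pair agrees best simply because the two fits share an inductive bias, which is the deflationary explanation I'm gesturing at.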

I'm pretty sure I'm missing something as I'm mentally worn out at the moment. What am I missing?

Comment by Archimedes on Why I’m not a Bayesian · 2024-10-07T00:53:39.102Z · LW · GW

I see what you're gesturing at but I'm having difficulty translating it into a direct answer to my question.

Cases where language is fuzzy are abundant. Do you have some examples of where a truth value itself is fuzzy (and sensical) or am I confused in trying to separate these concepts?

Comment by Archimedes on Why I’m not a Bayesian · 2024-10-06T16:36:34.087Z · LW · GW

Can you help me tease out the difference between language being fuzzy and truth itself being fuzzy?

It's completely impractical to eliminate ambiguity in language, but for most scientific purposes, it seems possible to operationalize important statements into something precise enough to apply Bayesian reasoning to. This is indeed the hard part though. Bayes' theorem is just arithmetic layered on top of carefully crafted hypotheses.
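That arithmetic layer really is the easy part; a toy worked example (the coins and all the numbers are invented for illustration):

```python
# Bayes' theorem on a crisply operationalized hypothesis:
# H = "this coin is the biased coin (P(heads) = 0.8)" vs. a fair coin,
# with a 50/50 prior between them.
prior_biased = 0.5
p_heads_given_biased = 0.8
p_heads_given_fair = 0.5

# We observe one flip: heads. Update via Bayes' theorem.
evidence = (prior_biased * p_heads_given_biased
            + (1 - prior_biased) * p_heads_given_fair)
posterior_biased = prior_biased * p_heads_given_biased / evidence
print(round(posterior_biased, 4))  # 0.4 / 0.65 ≈ 0.6154
```

All the difficulty lives in getting from fuzzy language to clean propositions like "P(heads) = 0.8", not in the update itself.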

The claim that the Earth is spherical is neither true nor false in general but usually does fall into a binary if we specify what aspect of the statement we care about. For example: "does it have a closed surface", "is its sphericity greater than 99.5%", "are all the points on its surface within radius * (1 +/- epsilon)", "is the circumference of the equator greater than that of the prime meridian".

Comment by Archimedes on Musings on Text Data Wall (Oct 2024) · 2024-10-06T15:42:34.284Z · LW · GW

Synthetically enhancing and/or generating data could be another dimension of scaling. Imagine how much deeper understanding a person/LLM would have if instead of simply reading/training on a source like the Bible N times, they had to annotate it into something more like the Oxford Annotated Bible and that whole process of annotation became training data.

Comment by Archimedes on AXRP Episode 37 - Jaime Sevilla on Forecasting AI · 2024-10-05T03:18:50.035Z · LW · GW

I listened to this via podcast. Audio nitpick: the volume levels were highly imbalanced at times and I had to turn my volume all the way up to hear both speakers well (one was significantly quieter than the other).

Comment by Archimedes on COT Scaling implies slower takeoff speeds · 2024-09-29T03:46:36.268Z · LW · GW

Appropriate scaffolding and tool use are other potential levers.

Comment by Archimedes on COT Scaling implies slower takeoff speeds · 2024-09-29T03:28:57.838Z · LW · GW

Kudos for referencing actual numbers. I don't think it makes sense to measure humans in terms of tokens, but I don't have a better metric handy. Tokens obviously aren't all equivalent either. For some purposes, a small fast LLM is way more efficient than a human. For something like answering SimpleBench, I'd guess o1-preview is less efficient while still significantly below human performance.

Comment by Archimedes on COT Scaling implies slower takeoff speeds · 2024-09-28T21:00:37.014Z · LW · GW

Is this assuming AI will never reach the data efficiency and energy efficiency of human brains? Currently, the best AI we have comes at enormous computing/energy costs, but we know by example that this isn't a physical requirement.

IMO, a plausible story of fast takeoff could involve the frontier of the current paradigm (e.g. GPT-5 + CoT training + extended inference) being used at great cost to help discover a newer paradigm that is several orders of magnitude more efficient, enabling much faster recursive self-improvement cycles.

CoT and inference scaling imply current methods can keep things improving without novel techniques. No one knows what new methods may be discovered and what capabilities they may unlock.

Comment by Archimedes on Four Levels of Voting Methods · 2024-09-27T04:18:30.271Z · LW · GW

It's cool that the score voting input can be post-processed in multiple ways. It would be fascinating to try it out in the real world and see how often Score vs STAR vs BTR winners differ.

One caution with score voting is that you don't want high granularity and lots of candidates or else individual ballots become distinguishable enough that people can prove they voted a particular way (for the purpose of getting compensated). Unless marked ballots are kept private, you'd probably want to keep the options 0-5 instead of 0-9 and only allow candidates above a sufficient threshold of support to be listed.
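The ballot-fingerprinting concern comes down to simple combinatorics (the candidate counts below are arbitrary, just to show the scale):

```python
# Number of distinct score-voting ballots: (scores per candidate) ** candidates.
# Once this rivals the number of voters in a precinct, a voter can encode an
# effectively unique "signature" via their scores for low-stakes candidates.
def distinct_ballots(num_candidates, max_score):
    return (max_score + 1) ** num_candidates

print(distinct_ballots(10, 5))  # 0-5 scale, 10 candidates: 6**10 = 60,466,176
print(distinct_ballots(10, 9))  # 0-9 scale, 10 candidates: 10**10
```

Either scale already dwarfs a precinct's voter count with ten candidates, which is why limiting granularity and filtering fringe candidates only partially mitigates the problem unless marked ballots stay private.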

Comment by Archimedes on What Depression Is Like · 2024-09-27T02:37:16.202Z · LW · GW

Yes, but with a very different description of the subjective experience -- kind of like getting a sunburn on your back feels very different than most other types of back pain.

Comment by Archimedes on Pay Risk Evaluators in Cash, Not Equity · 2024-09-07T15:39:03.016Z · LW · GW

Your third paragraph mentions "all AI company staff" and the last refers to "risk evaluators" (i.e. "everyone within these companies charged with sounding the alarm"). Are these groups roughly the same or is the latter subgroup significantly smaller?

Comment by Archimedes on On the UBI Paper · 2024-09-05T01:31:10.825Z · LW · GW

I agree. I would not expect the effect on health over 3 years to be significant outside of specific cases like it allowing someone to afford a critical treatment (e.g. insulin for a diabetic person), especially given the focus on a younger population.

Comment by Archimedes on Solving adversarial attacks in computer vision as a baby version of general AI alignment · 2024-09-03T00:04:07.297Z · LW · GW

This is a cool paper with an elegant approach!

It reminds me of a post from earlier this year on a similar topic that I highly recommend to anyone reading this post: Ironing Out the Squiggles

Comment by Archimedes on What Depression Is Like · 2024-08-30T22:01:56.304Z · LW · GW

OP's model does not resonate with my experience either. For me, it's similar to constantly having the flu (or long COVID) in the sense that you persistently feel bad, and doing anything requires extra effort proportional to the severity of symptoms. The difference is that the symptoms mostly manifest in the brain rather than the body.

Comment by Archimedes on Liability regimes for AI · 2024-08-25T19:57:42.129Z · LW · GW

This is a cool idea in theory, but imagine how it would play out in reality when billions of dollars are at stake. Who decides the damage amount and the probabilities involved and how? Even if these were objectively computable and independent of metaethical uncertainty, the incentives for distorting them would be immense. This only seems feasible when damages and risks are well understood and there is consensus around an agreed-upon causal model.

Comment by Archimedes on We’re not as 3-Dimensional as We Think · 2024-08-04T23:20:22.668Z · LW · GW

I also guessed the ratio of the spheres was between 2 and 3 (and clearly larger than 2) by imagining their weight.

I was following along with the post about how we mostly think in terms of surfaces until the orange example. Having peeled many oranges and separated them into sections, they are easy for me to imagine in 3D, and I have only a weak "mind's eye" and moderate 3D spatial reasoning ability.

Comment by Archimedes on Pantheon Interface · 2024-07-12T21:57:27.566Z · LW · GW

Even for people who understand your intended references, that won't prevent them from thinking about the evil-spirit association and having bad vibes.

Being familiar with daemons in the computing context, I perceive the term as whimsical and fairly innocuous.

Comment by Archimedes on AI #71: Farewell to Chevron · 2024-07-05T04:37:13.353Z · LW · GW

The section on Chevron Overturned surprised me. Maybe I'm in an echo chamber, but my impression was that most legal scholars (not including the Federalist Society and The Heritage Foundation) consider the decision to be the SCOTUS arrogating yet more power to the judicial branch, overturning 40 years of precedent (which was based on a unanimous decision) without sufficient justification.

I consider the idea that "legislators should never have indulged in writing ambiguous law" rather sophomoric. I don't think it's always possible to write law that is complete, unambiguous, and also good policy. Nor do I think Congress is always the best equipped to do so. I don't fully trust government agencies delegated with rulemaking authority, but I have much less trust that a forum-shopped judge in the Northern District of Texas is likely to make better-informed decisions about drug safety than the FDA.

FWIW, I haven't really thought much about Loper as it relates to AI, tech, and crypto specifically. The consequences of activist judges versus the likes of the DOJ, CDC, FDA, and EPA are mostly what come to mind. Maybe it's attention bias given recent SCOTUS decisions versus more limited memory of out-of-control agencies, but I feel uneasy tilting the balance of power toward judicial dominance.

Comment by Archimedes on The Potential Impossibility of Subjective Death · 2024-07-05T01:53:54.814Z · LW · GW

This is similar to the quantum suicide thought experiment:

https://en.wikipedia.org/wiki/Quantum_suicide_and_immortality

Check out the Max Tegmark references in particular.

Comment by Archimedes on Actually, Power Plants May Be an AI Training Bottleneck. · 2024-06-21T03:55:43.472Z · LW · GW

[Epistemic status: purely anecdotal]

I know people who work in the design and construction of data centers and have heard that some popular data center cities aren't approving nearly as many data centers due to power grid concerns. Apparently, some of the newer data center projects are being designed to include net new power generation to support the data center.

For less anecdotal information, I found this useful: https://sprottetfs.com/insights/sprott-energy-transition-materials-monthly-ais-critical-impact-on-electricity-and-energy-demand/

Comment by Archimedes on Claude 3.5 Sonnet · 2024-06-21T03:47:04.307Z · LW · GW

I can definitely imagine them plausibly believing they're sticking to that commitment, especially with a sprinkle of motivated reasoning. It's "only" incrementally nudging the publicly available SOTA rather than bigger steps like GPT2 --> GPT3 --> GPT4.

Comment by Archimedes on [Linkpost] Transcendence: Generative Models Can Outperform The Experts That Train Them · 2024-06-19T14:36:28.270Z · LW · GW

Exactly. All that’s needed for “transcendence” is removing some noise.

I highly recommend the book Noise by Daniel Kahneman et al on this topic.

Comment by Archimedes on Thoughts on Francois Chollet's belief that LLMs are far away from AGI? · 2024-06-16T19:22:13.929Z · LW · GW

My hypothesis is that poor performance on ARC is largely due to lack of training data. If there were billions of diverse input/output examples to train on, I would guess standard techniques would work.

Efficiently learning from just a few examples is something that humans are still relatively good at, especially in simple cases where System 1 and System 2 synergize well. I'm not aware of many cases where AI approaches human level without orders of magnitude more training data than a human ever sees in a lifetime.

I think the ARC challenge can be solved within a year or two, but doing so won’t be super interesting to me unless it breaks new ground in sample efficiency (not trained on billions of synthetic examples) or generalization (e.g. solved using existing LLMs rather than a specialized net).

Comment by Archimedes on Level up your spreadsheeting · 2024-05-29T00:30:15.139Z · LW · GW

Ah, I was under the impression that OP was covering both, not only things relevant to Google Sheets.

Comment by Archimedes on Level up your spreadsheeting · 2024-05-26T17:11:50.906Z · LW · GW

Ah, I do see the LET function now but I still can't find a reference to the Power Query editor.

Comment by Archimedes on Is there an idiom for bonding over shared trials/trauma? · 2024-05-26T05:50:51.215Z · LW · GW

There doesn't appear to be an existing term that neatly fits. Here are some possibilities:

One-word terms:

Traumaraderie (portmanteau of "trauma" and "camaraderie")

Two-word terms:

Stress bonding (a technique for bonding rabbits)

Trauma bonding (this is sometimes used to mean bonding between victims even though it also means bonding between victim and abuser in other contexts)

Trauma kinship

Survivor bond

Longer terms:

Friendship forged in fire

Shared trauma bonding

Mutual suffering solidarity

Camaraderie through adversity

Comment by Archimedes on Level up your spreadsheeting · 2024-05-26T05:09:02.157Z · LW · GW

I'm surprised to see no mention of the LET function for making formulas more readable.

Another glaring omission is Power Query in Excel. I find it incredibly useful for connecting to data sources and transforming data. It's easily auditable as it produces steps of code for table transformations rather than working with thousands of cells with individual formulas.

When it comes to writing about spreadsheets, it's just about impossible to do more than skim the surface, especially considering many aspects like array formulas, VBA macros, pivot tables with DAX measures, and Power Query can go super deep. I own a 758-page textbook just on Power Query and multiple books on Power Pivot / DAX.

Comment by Archimedes on Math-to-English Cheat Sheet · 2024-04-13T03:54:46.253Z · LW · GW

Some of these depend on how concise you want to be. For example,

"Partial derivative with respect to x of f of x, y" may be shortened to "partial x, f of x, y".

Conciseness is more common when speaking if the written form is also visible (as opposed to purely vocal communication).

Comment by Archimedes on Anthropic release Claude 3, claims >GPT-4 Performance · 2024-03-04T23:23:46.692Z · LW · GW

AI Explained already has a video out on it:

The New, Smartest AI: Claude 3 – Tested vs Gemini 1.5 + GPT-4

Comment by Archimedes on Why you, personally, should want a larger human population · 2024-02-24T03:02:49.143Z · LW · GW

Supposing humanity is limited to Earth, I can see arguments for ideal population levels ranging from maybe 100 million to 100 billion with values between 1 and 10 billion being the most realistic. However, within this range, I'd guess that maximal value is more dependent on things like culture and technology than on the raw population count, just like a sperm whale's brain being ~1000x the mass of an African grey parrot's brain doesn't make it three orders of magnitude more intelligent.

Size matters (as do the dynamic effects of growing/shrinking) but it's not a metric I'd want to maximize unless everything else is optimized already. If you want more geniuses and more options/progress/creativity, then working toward more opportunities for existing humans to truly thrive seems far more Pareto-optimal to me.

Comment by Archimedes on When do "brains beat brawn" in Chess? An experiment · 2024-02-23T22:04:00.126Z · LW · GW

Leela now has a contempt implementation that makes odds games much more interesting. See this Lc0 blog post (and the prior two) for more details on how it works and how to easily play odds games against Leela on Lichess using this feature.

GM Matthew Sadler also has some recent videos about using WDL contempt to find new opening ideas to maximize chances of winning versus a much weaker opponent.

I'd bet money you can't beat LeelaQueenOdds at anything close to a 90% win rate.

Comment by Archimedes on The case for training frontier AIs on Sumerian-only corpus · 2024-01-16T02:23:05.336Z · LW · GW

Why not go all the way and use a constructed language (like Lojban or Ithkuil) that's specifically designed for the purpose?

Comment by Archimedes on Nonlinear’s Evidence: Debunking False and Misleading Claims · 2023-12-26T21:18:55.624Z · LW · GW

All the pics and bragging about how wonderful their adventures were really rub me the wrong way. It comes across as incredibly tone deaf to the allegations and focusing on irrelevant things. Hot tubs, beaches, and sunsets are not so important if you’re suffering from deeper issues. Good relationship dynamics are way more important than scenery and perks, especially in a small group setting.

Comment by Archimedes on When will GPT-5 come out? Prediction markets vs. Extrapolation · 2023-12-12T03:29:00.159Z · LW · GW

The markets you linked to roughly align with my intuition that GPT-5 before 2025 is likely, although maybe not fully released publicly.

Things to keep in mind:

  1. We know that GPT-5 is officially in development.
  2. The hype since ChatGPT went public is huge and competition is substantial.
  3. Because of hype and competition, investment will be large in terms of dollars and talent.
  4. There will be strong incentive to show a return on investment rather than holding back and waiting for something worthy of calling "GPT-5".

Other than the amount of investment in next-gen models, most of my intuition is related to the human and marketing factors involved. OpenAI won't want to lose its lead by waiting many years.

Comment by Archimedes on So you want to save the world? An account in paladinhood · 2023-11-23T02:16:03.197Z · LW · GW

That sounds rather tautological.

Assuming ratfic represents LessWrong-style rationality well and assuming LW-style rationality is a good approximation of truly useful instrumental reasoning, then the claim should hold. There’s room for error in both assumptions.

Comment by Archimedes on My AI Predictions 2023 - 2026 · 2023-10-16T22:52:53.524Z · LW · GW

My hunch is that there's sufficient text already if an AI processes it more reflectively. For example, each chunk of text can be fed through a series of LLM prompts intended to enrich it, and then the model trains on the enriched/expanded text.

Comment by Archimedes on A quick remark on so-called “hallucinations” in LLMs and humans · 2023-09-26T02:26:26.089Z · LW · GW

I definitely like the term "confabulate" more than "hallucinate". It's more accurate and similar to what humans do. My favorite confabulation examples in humans are split-brain experiments.

"split-brain" patients, whose left and right brain hemispheres have been surgically disconnected for medical treatment. Neuroscientists have devised clever experiments in which information is provided to the right hemisphere (for instance, pictures of naked people), causing a change in behavior (embarrassed giggling). Split-brain individuals are then asked to explain their behavior verbally, which relies on the left hemisphere. Realizing that their body is laughing, but unaware of the nude images, the left hemisphere will confabulate an excuse for the body's behavior ("I keep laughing because you ask such funny questions, Doc!").

https://www.edge.org/response-detail/11513

Comment by Archimedes on Paper: LLMs trained on “A is B” fail to learn “B is A” · 2023-09-26T01:52:19.204Z · LW · GW

I found your thread insightful, so I hope you don't mind me pasting it below to make it easier for other readers.

Neel Nanda ✅ @NeelNanda5 - Sep 24

The core intuition is that "When you see 'A is', output B" is implemented as an asymmetric look-up table, with an entry for A->B. B->A would be a separate entry

The key question to ask with a mystery like this about models is what algorithms are needed to get the correct answer, and how these can be implemented in transformer weights. These are what get reinforced when fine-tuning.

The two hard parts of "A is B" are recognising the input tokens A (out of all possible input tokens) and connecting this to the action to output tokens B (out of all possible output tokens). These are both hard! Further, the A -> B look-up must happen on a single token position

Intuitively, the algorithm here has early attention heads attend to the prev token to create a previous token subspace on the Cruise token. Then an MLP neuron activates on "Current==Cruise & Prev==Tom" and outputs "Output=Mary", "Next Output=Lee" and "Next Next Output=Pfeiffer"

"Output=Mary" directly connects to the unembed, and "Next Output=Lee" etc gets moved by late attention heads to subsequent tokens once Mary is output.

Crucially, there's an asymmetry between "input A" and "output A". Inputs are around at early layers, come from input embeddings, and touch the input weights of MLP neurons. Outputs are around more at late layers, compose with the unembedding, and come from output weights of MLPs

This is especially true with multi-token A and B. Detecting "Tom Cruise" is saying "the current token embedding is Cruise, and the prev token space says Tom", while output "Tom Cruise" means to output the token Tom, and then a late attn head move "output Cruise" to the next token

Thus, when given a gradient signal to output B given "A is" it reinforces/creates a lookup "A -> B", but doesn't create "B->A", these are different lookups, in different parameters, and there's no gradient signal from one to the other.

How can you fix this? Honestly, I can't think of anything. I broadly think of this as LLMs working as intended. They have a 1 way flow from inputs to outputs, and a fundamental asymmetry between inputs and outputs. It's wild to me to expect symmetry/flow reversing to be possible

Why is this surprising at all then? My guess is that symmetry is intuitive to us, and we're used to LLMs being capable of surprising and impressive things, so it's weird to see something seemingly basic missing.

LLMs are not human! Certain things are easy for us and not for them, and vice versa. My guess is that the key difference here is that when detecting/outputting specific tokens, the LLM has no notion of a variable that can take on arbitrary values - a direction has fixed meaning

A better analogy might be in-context learning, where LLMs CAN use "variables". The text "Tom Cruise is the son of Mary Lee Pfeiffer. Mary Lee Pfeiffer is the mother of..." has the algorithmic solution "Attend to the subject of sentence 1 (Tom Cruise), and copy to the output"

Unsurprisingly, the model has no issue with reversing facts in context! Intuitively, when I remember a fact A is B, it's closer to a mix of retrieving it into my "context window" and then doing in-context learning, rather than pure memorised recall.

Comment by Archimedes on Lack of Social Grace Is an Epistemic Virtue · 2023-08-01T02:42:22.474Z · LW · GW

Adversarial gaming doesn't match my experience much at all and suggesting options doesn't feel imposing either. For me at least, it's largely about the responsibility and mental exertion of planning.

In my experience, mutual "where do you want to go" is most often when neither party has a strong preference and neither feels like taking on the cognitive burden of weighing options to come to a decision. Making decisions takes effort especially when there isn't a clearly articulated set of options and tradeoffs to consider.

For practical purposes, one person should provide two to four options they're OK with, and the other person can pick one or veto some. If they veto all the given options, they must provide their own set for the first person to choose from or veto. Repeat as needed, though more than one round is rarely necessary unless participants are picky or disagreeable.