TurnTrout's shortform feed

post by TurnTrout · 2019-06-30T18:56:49.775Z · LW · GW · 601 comments

Comments sorted by top scores.

comment by TurnTrout · 2022-06-27T18:34:30.792Z · LW(p) · GW(p)

Rationality exercise: Take a set of Wikipedia articles on topics which trainees are somewhat familiar with, and then randomly select a small number of claims to negate (negating the immediate context as well, so that you can't just syntactically discover which claims were negated). 

For example:

By the time they are born, infants can recognize and have a preference for their mother's voice suggesting some prenatal development of auditory perception.

-> modified to

Contrary to early theories, newborn infants are not particularly adept at picking out their mother's voice from other voices. This suggests the absence of prenatal development of auditory perception.

Sometimes, trainees will be given a totally unmodified article. For brevity, the articles can be trimmed of irrelevant sections. 

Benefits:

  • Addressing key rationality skills. Noticing confusion; being more confused by fiction than fact; actually checking claims against your models of the world.
    • If you fail, either the article wasn't negated skillfully ("5 people died in 2021" -> "4 people died in 2021" is not the right kind of modification), you don't have good models of the domain, or you didn't pay enough attention to your confusion. 
    • Either of the last two is good to learn.
  • Scalable across participants. Many people can learn from each modified article.
  • Scalable across time. Once a modified article has been produced, it can be used repeatedly.
  • Crowdsourcable. You can put out a bounty for good negated articles, run them in a few control groups, and then pay based on some function of how good the article was. Unlike original alignment research or CFAR technique mentoring, article negation requires skills more likely to be present outside of Rationalist circles.

I think the key challenge is that the writer must be able to match the style, jargon, and flow of the selected articles. 
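If someone wanted to run this at scale (including the bounty idea above), the bookkeeping is easy to automate. Here is a minimal sketch in Python; the writer step is a human in the loop, and every name below is hypothetical scaffolding rather than an existing tool.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Exercise:
    title: str
    text: str                      # trimmed article text shown to the trainee
    negated: bool                  # whether any claims were modified
    original_claims: list = field(default_factory=list)  # answer key

def build_exercise(title: str, text: str, writer, negate_prob: float = 0.75,
                   rng=random) -> Exercise:
    """With some probability, hand the article to a human writer for negation;
    otherwise pass it through unmodified, so trainees can't assume every
    article contains an error."""
    if rng.random() > negate_prob:
        return Exercise(title, text, negated=False)
    # writer(text) -> (modified_text, list of the original, un-negated claims)
    modified_text, originals = writer(text)
    return Exercise(title, modified_text, negated=True, original_claims=originals)

def score_trainee(exercise: Exercise, said_negated: bool) -> bool:
    """Did the trainee correctly judge whether the article was tampered with?"""
    return said_negated == exercise.negated
```

Control-group accuracy from `score_trainee` is then a natural input to whatever bounty payout function you choose.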

Replies from: Morpheus, yitz, TurnTrout
comment by Morpheus · 2022-06-28T19:30:50.975Z · LW(p) · GW(p)

I remember the magazine I read as a kid (Geolino) had a section like this (something like 7 news stories from around the world, one of which is wrong). It's German only, though I'd guess a similar thing exists in English media?

comment by Yitz (yitz) · 2022-06-28T00:25:39.069Z · LW(p) · GW(p)

This is a lot like Gwern’s idea for a fake science journal club, right? This sounds a lot easier to do though, and might seriously be worth trying to implement.

comment by TurnTrout · 2022-10-06T00:14:45.937Z · LW(p) · GW(p)

Additional exercise: Condition on something ridiculous (like apes having been continuously alive for the past billion years), in addition to your own observations (your life as you've lived it). What must now be true about the world? What parts of your understanding of reality are now suspect?

comment by TurnTrout · 2023-12-17T02:03:27.310Z · LW(p) · GW(p)

In an alternate universe, someone wrote a counterpart to There's No Fire Alarm for Artificial General Intelligence [LW · GW]:

Okay, let’s be blunt here. I don’t think most of the discourse about alignment being really hard is being generated by models of machine learning at all. I don’t think we’re looking at wrong models; I think we’re looking at no models.

I was once at a conference where there was a panel full of famous AI alignment luminaries, and most of the luminaries were nodding and agreeing with each other that of course AGI alignment is really hard and unaddressed by modern alignment research, except for two famous AI luminaries who stayed quiet and let others take the microphone.

I got up in Q&A and said, “Okay, you’ve all told us that alignment is hard. But let’s be more concrete and specific. I’d like to know what’s the least impressive task which cannot be done by a 'non-agentic' system, that you are very confident cannot be done safely and non-agentically in the next two years.”

There was a silence.

Eventually, one person ventured a reply, spoken in a rather more tentative tone than they’d been using to pronounce that SGD would internalize coherent goals into language models. They named “Running a factory competently.”

A few months after that panel, there was unexpectedly a big breakthrough on LLM/management integration.

The point is the silence that fell after my question, and that eventually I only got one reply, spoken in tentative tones. When I asked for concrete feats that were impossible in the next two years, I think that that’s when the luminaries on that panel switched to trying to build a mental model of future progress in AI alignment, asking themselves what they could or couldn’t predict, what they knew or didn’t know. And to their credit, most of them did know their profession well enough to realize that forecasting future boundaries around a rapidly moving field is actually really hard, that nobody knows what will appear on arXiv next month, and that they needed to put wide credibility intervals with very generous upper bounds on how much progress might take place twenty-four months’ worth of arXiv papers later.

(Also, Rohin Shah was present, so they all knew that if they named something insufficiently impossible, Rohin would have DeepMind go and do it.)

The question I asked was in a completely different genre from the panel discussion, requiring a mental context switch: the assembled luminaries actually had to try to consult their rough, scarce-formed intuitive models of progress in AI alignment and figure out what future experiences, if any, their model of the field definitely prohibited within a two-year time horizon. Instead of, well, emitting socially desirable verbal behavior meant to kill that darned optimism around AGI alignment and get some predictable applause from the audience.

I’ll be blunt: I don’t think the confident doom-and-gloom is entangled with non-social reality. If your model has the extraordinary power to say what internal motivational structures SGD will entrain into scaled-up networks, then you ought to be able to say much weaker things that are impossible in two years, and you should have those predictions queued up and ready to go rather than falling into nervous silence after being asked.

Replies from: ryan_greenblatt, TRW, faul_sname, Thane Ruthenis, daniel-kokotajlo, thomas-kwa, quetzal_rainbow
comment by ryan_greenblatt · 2023-12-17T02:58:05.779Z · LW(p) · GW(p)

[Somewhat off-topic]

Eventually, one person ventured a reply, spoken in a rather more tentative tone than they’d been using to pronounce that SGD would internalize coherent goals into language models. They named “Running a factory competently.”

I like thinking about the task "speeding up the best researchers by 30x" (to simplify, let's only include research in purely digital (software only) domains).

To be clear, I am by no means confident that this can't be done safely or non-agentically. It seems totally plausible to me that this can be accomplished without agency except for agency due to the natural language outputs of an LLM agent. (Perhaps I'm at 15% [LW(p) · GW(p)] that this will in practice be done without any non-trivial agency that isn't visible in natural language.)

(As such, this isn't a good answer to the question of "I’d like to know what’s the least impressive task which cannot be done by a 'non-agentic' system, that you are very confident cannot be done safely and non-agentically in the next two years." I think there probably isn't any interesting answer to this question for me due to "very confident" being a strong condition.)

I like thinking about this task because if we were able to speed up generic research on purely digital domains by this large of an extent, safety research done with this speed up would clearly obsolete prior safety research pretty quickly.

(It also seems likely that we could singularity quite quickly from this point if we wanted to, so it's not clear we'll have a ton of time at this capability level.)

comment by __RicG__ (TRW) · 2023-12-29T18:24:46.245Z · LW(p) · GW(p)

If your model has the extraordinary power to say what internal motivational structures SGD will entrain into scaled-up networks, then you ought to be able to say much weaker things that are impossible in two years, and you should have those predictions queued up and ready to go rather than falling into nervous silence after being asked.

 

Sorry, I might be misunderstanding you (and hope I am), but... I think doomers literally say "Nobody knows what internal motivational structures SGD will entrain into scaled-up networks and thus we are all doomed". The problem is not having the science to confidently say how the AIs will turn out, not that doomers have a secret method to know that next-token-prediction is evil.

If you meant that doomers are too confident answering the question "will SGD even make motivational structures?", their (and my) answer still stems from ignorance: nobody knows, but it is plausible that SGD will make motivational structures in the neural networks because it can be useful in many tasks (to get low loss or whatever), and if you think you do know better you should show it experimentally and theoretically in excruciating detail.

 

I also don't see how it logically follows that "If your model has the extraordinary power to say what internal motivational structures SGD will entrain into scaled-up networks" => "then you ought to be able to say much weaker things that are impossible in two years" but it seems to be the core of the post. Even if anyone had the extraordinary model to predict what SGD exactly does (which we, as a species, should really strive for!!) it would still be a different question to predict what will or won't happen in the next two years.

If I reason about my field (physics) the same should hold for a sentence structured like "If your model has the extraordinary power to say how an array of neutral atoms cooled to a few nK will behave when a laser is shone upon them" (which is true) => "then you ought to be able to say much weaker things that are impossible in two years in the field of cold atom physics" (which is... not true). It's a non sequitur.

Replies from: TurnTrout
comment by TurnTrout · 2024-01-01T19:47:24.587Z · LW(p) · GW(p)

If you meant that doomers are too confident answering the question "will SGD even make motivational structures?" their (and mine) answer still stems from ignorance: nobody knows, but it is plausible that SGD will make motivational structures in the neural networks because it can be useful in many tasks (to get low loss or whatever), and if you think you do know better you should show it experimentally and theoretically in excruciating detail.

It would be "useful" (i.e. fitness-increasing) for wolves to have evolved biological sniper rifles, but they did not [LW(p) · GW(p)]. By what evidence are we locating these motivational hypotheses, and what kinds of structures are dangerous, and why are they plausible under the NN prior [LW(p) · GW(p)]? 

I also don't see how it logically follows that "If your model has the extraordinary power to say what internal motivational structures SGD will entrain into scaled-up networks" => "then you ought to be able to say much weaker things that are impossible in two years" but it seems to be the core of the post.

The relevant commonality is "ability to predict the future alignment properties and internal mechanisms of neural networks." (Also, I don't exactly endorse everything in this fake quotation, so indeed the analogized tasks aren't as close as I'd like. I had to trade off between "what I actually believe" and "making minimal edits to the source material.")

comment by faul_sname · 2023-12-18T01:50:41.757Z · LW(p) · GW(p)

But let’s be more concrete and specific. I’d like to know what’s the least impressive task which cannot be done by a 'non-agentic' system, that you are very confident cannot be done safely and non-agentically in the next two years.

Focusing on the "minimal" part of that, maybe something like "receive a request to implement some new feature in a system it is not familiar with, recognize how the limitations of the architecture of that system make that feature impractical to add, and perform a major refactoring of that program to an architecture that is not so limited, while ensuring that the refactored version does not contain any breaking changes". Obviously it would have to have access to tools in order to do this, but my impression is that this is the sort of thing mid-level software developers can do fairly reliably as a nearly rote task, but is beyond the capabilities of modern LLM-based systems, even scaffolded ones.

Though also maybe don't pay too much attention to my prediction, because my prediction for "least impressive thing GPT-4 will be unable to do" was "reverse a string", and it did turn out to be able to do that fairly reliably.

comment by Thane Ruthenis · 2023-12-17T05:31:26.697Z · LW(p) · GW(p)

I’d like to know what’s the least impressive task which cannot be done by a 'non-agentic' system, that you are very confident cannot be done safely and non-agentically in the next two years.

That's incredibly difficult to predict, because minimal things only a general intelligence could do are things like "deriving a few novel abstractions and building on them", but from the outside this would be indistinguishable from it recognizing a cached pattern that it learned in-training and re-applying it, or merely-interpolating between a few such patterns. The only way you could distinguish between the two is if you have a firm grasp of every pattern in the AI's training data, and what lies in the conceptual neighbourhood of these patterns, so that you could see if it's genuinely venturing far from its starting ontology.

Or here's a more precise operationalization [LW(p) · GW(p)] from my old reply to Rohin Shah:

Consider a scheme like the following:

  • Let t_1 be the current date.
  • Train an AI on all of humanity's knowledge up to a point in time t_0, where t_0 < t_1.
  • Assemble a list L of all scientific discoveries made in the time period (t_0, t_1].
  • See if the AI can replicate these discoveries.

At face value, if the AI can do that, it should be considered able to "do science" and therefore AGI, right?

I would dispute that. If the period (t_0, t_1] is short enough, then it's likely that most of the cognitive work needed to make the leap to any discovery in L is already present in the data up to t_0. Making a discovery from that starting point doesn't necessarily require developing new abstractions/doing science — it's possible that it may be done just by interpolating between a few already-known concepts. And here, some asymmetry between humans and e. g. SOTA LLMs becomes relevant:

  • No human knows everything the entire humanity knows. Imagine if making some discovery in L by interpolation required combining two very "distant" concepts, like a physics insight and advanced biology knowledge. It's unlikely that there'd be a human with sufficient expertise in both, so a human will likely do it by actual-science (e. g., a biologist would re-derive the physics insight from first principles).
  • An LLM, however, has a bird's eye view on the entire human concept-space up to t_0. It directly sees both the physics insight and the biology knowledge, at once. So it can just do an interpolation between them, without doing truly-novel research.

Thus, the ability to produce marginal scientific insights may mean either the ability to "do science", or that the particular scientific insight is just a simple interpolation between already-known but distant concepts.

On the other hand, now imagine that the period (t_0, t_1] is very large, e. g. from 1940 to 2020. We'd then be asking our AI to make very significant discoveries, such that they surely can't be done by simple interpolation, only by actually building chains of novel abstractions [LW(p) · GW(p)]. But... well, most humans can't do that either, right? Not all generally-intelligent entities are scientific geniuses. Thus, this is a challenge a "weak" AGI would not be able to meet, only a genius/superintelligent AGI — i. e., only an AGI that's already an extinction threat.

In theory, there should be a pick of (t_0, t_1] that fits between the two extremes. A set of discoveries such that they can't be done by interpolation, but also don't require dangerous genius to solve.

But how exactly are we supposed to figure out what the right interval is? (I suppose it may not be an unsolvable problem, and I'm open to ideas, but skeptical on priors.)

I can absolutely make strong predictions regarding what non-AGI AIs would be unable to do. But these predictions are, due to the aforementioned problem, necessarily a high bar, higher than the "minimal" capability. (Also I expect an AI that can meet this high bar to also be the AI that quickly ends the world, so.)

Here's my recent reply [LW(p) · GW(p)] to Garrett, for example. tl;dr: non-GI AIs would not be widely known to be able to derive whole multi-layer novel mathematical frameworks if tasked with designing software products that require this. I'm a bit wary of reality somehow Goodharting on this prediction as well, but it seems robust enough, so I'm tentatively venturing it.

I currently think it's about as well as you can do, regarding "minimal incapability predictions".

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2024-02-06T16:05:21.918Z · LW(p) · GW(p)

Nice analogy! I approve of stuff like this. And in particular I agree that MIRI hasn't convincingly argued that we can't do significant good stuff (including maybe automating tons of alignment research) without agents.

Insofar as your point is that we don't have to build agentic systems and nonagentic systems aren't dangerous, I agree? If we could coordinate the world to avoid building agentic systems I'd feel a lot better.
 

comment by Thomas Kwa (thomas-kwa) · 2023-12-18T05:54:49.335Z · LW(p) · GW(p)

I like this post although the move of imagining something fictional is not always valid.

“Okay, you’ve all told us that alignment is hard. But let’s be more concrete and specific. I’d like to know what’s the least impressive task which cannot be done by a 'non-agentic' system, that you are very confident cannot be done safely and non-agentically in the next two years.”

Not an answer, but I would be pretty surprised if a system could beat evolution at designing humans (creating a variant of humans that have higher genetic fitness than humans if inserted into a 10,000 BC population, while not hardcoding lots of information that would be implausible for evolution) and have the resulting beings not be goal-directed. The question is then, what causes this? The genetic bottleneck, diversity of the environment, multi-agent conflicts? And is it something we can remove?

comment by quetzal_rainbow · 2023-12-17T03:09:08.514Z · LW(p) · GW(p)

I admire sarcasm, but there are at least two examples of not-very-impressive tasks, like:

  1. Put two strawberries, identical on the cellular level, on a plate;
  2. Develop and deploy biotech 10 years ahead of SOTA (from the famous "Safely aligning powerful AGI is difficult" thread).
Replies from: matthew-barnett, daniel-kokotajlo
comment by Matthew Barnett (matthew-barnett) · 2023-12-17T03:31:05.888Z · LW(p) · GW(p)

Doesn't the first example require full-blown molecular nanotechnology? [ETA: apparently Eliezer says he thinks it can be done with "very primitive nanotechnology" but it doesn't sound that primitive to me.] Maybe I'm misinterpreting the example, but advanced nanotech is what I'd consider extremely impressive.

I currently expect we won't have that level of tech until after human labor is essentially obsolete. In effect, it sounds like you would not update until well after AIs already run the world, basically.

I'm not sure I understand the second example. Perhaps you can make it more concrete.

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2024-02-06T16:08:12.308Z · LW(p) · GW(p)

Those are pretty impressive tasks. I'm optimistic that we can achieve existential safety via automating alignment research, and I think that's a less difficult task than those.

comment by TurnTrout · 2023-12-01T13:21:53.867Z · LW(p) · GW(p)

Deceptive alignment seems to only be supported by flimsy arguments. I recently realized that I don't have good reason to believe that continuing to scale up LLMs will lead to inner consequentialist cognition which pursues a goal that is roughly consistent across situations. That is: a model which not only does what you ask it to do (including coming up with agentic plans), but also thinks about how to make more paperclips even while you're just asking about math homework.

Aside: This was kinda a "holy shit" moment, and I'll try to do it justice here. I encourage the reader to do a serious dependency check [LW · GW] on their beliefs. What do you think you know about deceptive alignment being plausible, and why do you think you know it? Where did your beliefs truly come from, and do those observations truly provide strong evidence for that belief?


I agree that conditional on entraining consequentialist cognition which has a "different goal" (as thought of by MIRI; this isn't a frame I use), the AI will probably instrumentally reason about whether and how to deceptively pursue its own goals, to our detriment.

I contest that there's very little reason to expect "undesired, covert, and consistent-across-situations inner goals" to crop up in LLMs to begin with. An example alternative prediction is:

LLMs will continue doing what they're told. They learn contextual goal-directed behavior abilities, but only apply them narrowly in certain contexts for a range of goals (e.g. think about how to win a strategy game). They also memorize a lot of random data (instead of deriving some theory which simply explains its historical training data a la Solomonoff Induction). 

Not only is this performant, it seems to be what we actually observe today. The AI can pursue goals when prompted to do so, but it isn't pursuing them on its own. It basically follows instructions in a reasonable way, just like GPT-4 usually does.

Why should we believe the "consistent-across-situations inner goals -> deceptive alignment" mechanistic claim about how SGD works? Here are the main arguments I'm aware of:

  1. Analogies to evolution (e.g. page 6 of Risks from Learned Optimization)
    1. I think these loose analogies provide basically no evidence [LW · GW] about what happens in an extremely different optimization process (SGD to train LLMs). 
  2. Counting arguments: there are more unaligned goals than aligned goals (e.g. as argued in How likely is deceptive alignment? [LW · GW])
    1. These ignore the importance of the parameter->function map. (They're counting functions when they need to be counting parameterizations.) Classical learning theory made the (mechanistically) same mistake in predicting that overparameterized models would fail to generalize.
    2. I also basically deny the relevance of the counting argument, because I don't buy the assumption of "there's gonna be an inner 'objective' distinct from inner capabilities; let's make a counting argument about what that will be."
  3. Speculation about simplicity bias: SGD will entrain consequentialism because that's a simple algorithm for "getting low loss"
    1. But we already know that simplicity bias in the NN prior can be really hard to reason about. 
    2. I think it's unrealistic to imagine that we have the level of theoretical precision to go "it'll be a future training process and the model is 'getting selected for low loss', so I can now make this very detailed prediction about the inner mechanistic structure."[1] 
      1. I falsifiably predict that if you try to use this kind of logic or counting argument today to make falsifiable predictions about unobserved LLM generalization, you're going to lose Bayes points left and right.

Without deceptive alignment/agentic AI opposition, a lot of alignment threat models ring hollow. No more adversarial steganography or adversarial pressure on your grading scheme or worst-case analysis or unobservable, nearly unfalsifiable inner homunculi whose goals have to be perfected [LW(p) · GW(p)].

Instead, I think that we enter the realm of tool AI[2] which basically does what you say.[3] I think that world's a lot friendlier, even though there are still some challenges I'm worried about -- like an AI being scaffolded into pursuing consistent goals. (I think that's a very substantially different risk regime, though.)

  1. ^

    (Even though this predicted mechanistic structure doesn't have any apparent manifestation in current reality.) 

  2. ^

    Tool AI which can be purposefully scaffolded into agentic systems, which somewhat handles objections from Amdahl's law.

  3. ^

    This is what we actually have today, in reality. In these setups, the agency comes from the system of subroutine calls to the LLM during e.g. a plan/critique/execute/evaluate loop a la AutoGPT.

Replies from: tailcalled, leogao, ryan_greenblatt, thomas-kwa, samuel-marks, Thane Ruthenis, peterbarnett, sharmake-farah, faul_sname, quetzal_rainbow, dkirmani, daniel-kokotajlo, roger-d-1, kave, johannes-c-mayer, Oliver Sourbut
comment by tailcalled · 2023-12-01T15:20:36.175Z · LW(p) · GW(p)

I agree a fraction of the way, but when doing a dependency check, I feel like there are some conditions where the standard arguments go through.

I sketched out my view on the dependencies in Where do you get your capabilities from? [LW · GW]. The TL;DR is that I think ChatGPT-style training basically consists of two different ways of obtaining capabilities:

  • Imitating internet text, which gains them capabilities to do the-sorts-of-things-humans-do because generating such text requires some such capabilities.
  • Reinforcement learning from human feedback on plans, where people evaluate the implications of the proposals the AI comes up with, and rate how good they are.

I think both of these are basically quite safe. They do have some issues, but probably not of the style usually discussed by rationalists working in AI alignment, and possibly not even issues going beyond any other technological development.

The basic principle for why they are safe-ish is that all of the capabilities they gain are obtained through human capabilities. So for example, while RLHF-on-plans may optimize for tricking human raters to the detriment of how the rater intended the plans to work out, this "tricking" will also sacrifice the capabilities of the plans, because the only reason more effective plans are rated better is because humans recognize their effectiveness and rate them better.

Or relatedly [LW(p) · GW(p)], consider nearest unblocked strategy, a common proposal for why alignment is hard. This only applies if the AI is able to consider an infinitude of strategies, which again only applies if it can generate its own strategies once the original human-generated strategies have been blocked.

Why should we believe the "consistent-across-situations inner goals -> deceptive alignment" mechanistic claim about how SGD works? Here are the main arguments I'm aware of:

These are the main arguments that the rationalist community seems to be pushing about it. For instance, one time you asked about it [LW · GW], and LW just encouraged magical thinking around these arguments. (Even from people who I'd have thought would be clearer-thinking.)

The non-magical way I'd analyze the smiling reward is that while people imagine that in theory updating the policy with antecedent-computation-reinforcement should make it maximize reward, in practice the information signal in this is very sparse and imprecise, so in practice it is going to take exponential time, and therefore something else will happen beforehand.

What is this "something else"? Probably something like: the human is going to reason about whether the AI's activities are making progress on something desirable, and in those cases, the human is going to press the antecedent-computation-reinforcement button. Which again boils down to the AI copying the human's capabilities, and therefore being relatively safe. (E.g. if you start seeing the AI studying how to deceive humans, you're probably gonna punish it instead, or at least if you reward it then it more falls under the dual-use risk framework than the alignment risk framework.)

(Of course one could say "what if we just instruct the human to only reward the AI based on results and not incremental progress?", but in that case the answer to what is going to happen before the AI does a treacherous turn is "the company training the AI runs out of money".)

There's something seriously wrong with how LWers fixated on reward-for-smiling and rationalized an explanation of why simplicity bias or similar would make this go into a treacherous turn.


OK, so this is how far I'm with you. Rationalist stories of AI progress are basically very wrong, and some commonly endorsed threat models don't point at anything serious. But what then?

The short technical answer is that reward is only antecedent-computation-reinforcement for policy-gradient-based reinforcement learning, and model-based approaches (traditionally temporal-difference learning, but I think the SOTA is DreamerV3, which is pretty cool) use the reward to learn a value function, which they then optimize in a more classical way, allowing them to create novel capabilities in precisely the way that can eventually lead to deception, treacherous turn, etc.
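To make the contrast concrete, here is a toy Python sketch of the two regimes; all names are hypothetical and this is not any particular library's API.

```python
from collections import defaultdict

def policy_gradient_step(preferences, trajectory, reward, lr=0.1):
    """Antecedent-computation reinforcement: reward only upweights the
    (state, action) computations that actually ran in this trajectory."""
    for state, action in trajectory:
        preferences[(state, action)] += lr * reward

def model_based_choice(candidate_plans, predict_outcome, value_fn):
    """Model-based control (TD-learning- or Dreamer-style): the learned value
    function is something the agent explicitly optimizes against, which is
    where qualitatively new strategies can be generated."""
    return max(candidate_plans, key=lambda plan: value_fn(predict_outcome(plan)))

# Trivial usage with stand-in functions:
prefs = defaultdict(float)
policy_gradient_step(prefs, trajectory=[("s0", "a1"), ("s1", "a0")], reward=1.0)
best = model_based_choice(["plan_a", "plan_b"],
                          predict_outcome=lambda plan: {"risk": len(plan)},
                          value_fn=lambda outcome: -outcome["risk"])
```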

One obvious proposal is "shit, let's not do that [LW · GW], chatgpt seems like a pretty good and safe alternative, and it's not like there's any hype behind this anyway". I'm not sure that proposal is right, because e.g. AlphaStar was pretty hype, and it was trained with one of these methods. But it sure does seem that there's a lot of opportunity in ChatGPT now, so at least this seems like a directionally-correct update for a lot of people to make (e.g. stop complaining so much about the possibility of an even bigger GPT-5; it's almost certainly safer to scale it up than it is to come up with algorithms that can improve model power without improving model scale).

However, I do think there are a handful of places where this falls apart:

  1. Your question [LW · GW] briefly mentioned "with clever exploration bonuses". LW didn't really reply much to it, but it seems likely that this could be the thing that does your question in. Maybe there's some model-free clever exploration bonuses, but if so I have never heard of them. The most advanced exploration bonuses I have heard of are from the Dreamer line of models, and it has precisely the sort of consequentialist reasoning abilities that start being dangerous.
  2. My experience is that language models exhibit a phenomenon I call "transposons", [LW(p) · GW(p)] where (especially when fed back into themselves after deep levels of scaffolding) there are some ideas that they end up copying too much after prompting, clogging up the context window. I expect there will end up being strong incentives for people to come up with techniques to remove transposons, and I expect the most effective techniques will be based on some sort of consequences-based feedback system which again brings us back to essentially the original AI threat model.
  3. I think security is going to be a big issue, along different lines: hostile nations, terrorists, criminals, spammers, trolls, competing companies, etc.. In order to achieve strong security, you need to be robust against adversarial attacks, which probably means continually coming up with new capabilities to fend them off. I guess one could imagine that humans will be coming up with those new capabilities, but that seems probably destroyed by adversaries using AIs to come up with security holes, and regardless it seems like having humans come up with the new capabilities will be extremely expensive, so probably people will focus on the AI side of things. To some extent, people might classify this as dual-use threats rather than alignment threats, but at least the arms race element would also generate alignment threats I think?
  4. I think the way a lot of singularitarians thought of AI is that general agency consists of advanced adaptability and wisdom, and we expected that researchers would develop a lot of artificial adaptability, and then eventually the systems would become adaptable enough to generate wisdom faster than people do, and then overtake society. However what happened was that a relatively shallow form of adaptability was developed, and then people loaded all of human wisdom into that shallow adaptability, which turned out to be immensely profitable. But I'm not sure it's going to continue to work this way; eventually we're gonna run out of human wisdom to dump into the models, plus the models reduce the incentive for humans to share our wisdom in public. So just as a general principle, continued progress seems like it's eventually going to switch to improving the ability to generate novel capabilities, which in turn is going to put us back to the traditional story? (Except possibly with a weirder takeoff? Slow takeoff followed by ??? takeoff or something. Idk.) Possibly this will be delayed because the creators of the LLMs find patches, e.g. I think we're going to end up seeing the possibility of allowing people to "edit" the LLM responses rather than just doing upvotes/downvotes, but this again motivates some capabilities development because you need to filter out spammers/trolls/etc., which is probably best done through considering the consequences of the edits.

If we zoom out a bit, what's going on here? Here's some answers:

  1. Conditioning on capabilities: There are strong reasons to expect capabilities to keep on improving, so we should condition on that. This leads to a lot of alignment issues, due to folk decision theory theorems. However, we don't know how capabilities will develop, so we lose the ability to reason mechanistically when conditioning on capabilities, which means that there isn't a mechanistic answer to all questions related to it. If one doesn't keep track of this chain of reasoning, one might assume that there is a known mechanistic answer, which leads to the types of magical thinking seen in that other thread.
  2. If people don't follow-the-trying [LW · GW], that can lead to a lot of misattributions.
  3. When considering agency in general, rather than consequentialist-based agency in particular, then instrumental convergence is more intuitive in disjunctive form than implication form [LW(p) · GW(p)]. I.e. rather than "if an AI robustly achieves goals, then it will resist being shut down", say "either an AI resists being shut down, or it doesn't robustly achieve goals".
  4. Don't trust rationalists too much.

One additional speculative thing I have been thinking of which is kind of off-topic and doesn't fit in neatly with the other things:

Could there be "natural impact regularization" or "impact regularization by default"? Specifically, imagine you use a general-purpose [LW · GW] search procedure which recursively invokes itself to solve subgoals for the purpose of solving some bigger goal.

If the search procedure's solutions to subgoals "change things too much", then they're probably not going to be useful. E.g. for Rubik's cubes, if you want to swap some of the cuboids, it does you no good if those swaps leave the rest of the cube scrambled.

Thus, to some extent, powerful capabilities would have to rely on some sort of impact regularization.

The bias-focused rationalists like to map decision-theoretic insights to human mistakes, but I instead like to map decision-theoretic insights to human capacities and experiences. I'm thinking that natural impact regularization is related to the notion of "elegance" in engineering. Like if you have some bloated tool to solve a problem, then even if it's not strictly speaking an issue because you can afford the resources, it might feel ugly because it's excessive and puts mild constraints on your other underconstrained decisions, and so on. Meanwhile a simple, minimal solution often doesn't have this.

Natural impact regularization wouldn't guarantee safety, since it still allows deviations that don't interfere with the AI's function, but it sort of reduces one source of danger I had been thinking about lately: namely, that the instrumental incentive is to search for powerful methods of influencing the world, where "power" connotes the sort of raw power that unstoppably forces a lot of change, but really the instrumental incentive is often to search for "precise" methods of influencing the world, where one can push in a lot of information to effect narrow change. (A complication is that any one agent can only have so much bandwidth. I've been thinking bandwidth is probably going to become a huge area of agent foundations, and that it's been underexplored so far. (Perhaps because everyone working in alignment sucks at managing their bandwidth?))

Maybe another word for it would be "natural inner alignment", since in a sense the point is that capabilities inevitably select for inner alignment.


Sorry if I'm getting too rambly.

comment by leogao · 2023-12-18T06:09:25.092Z · LW(p) · GW(p)

I think deceptive alignment is still reasonably likely despite evidence from LLMs.

I agree with:

  • LLMs are not deceptively aligned and don't really have inner goals in the sense that is scary
  • LLMs memorize a bunch of stuff
  • the kinds of reasoning that feed into deceptive alignment do not predict LLM behavior well
  • Adam on transformers does not have a super strong simplicity bias
  • without deceptive alignment, AI risk is a lot lower
  • LLMs not being deceptively aligned provides nonzero evidence against deceptive alignment (by conservation of evidence)

I predict I could pass the ITT for why LLMs are evidence that deceptive alignment is not likely.

however, I also note the following: LLMs are kind of bad at generalizing, and this makes them pretty bad at doing e.g novel research, or long horizon tasks. deceptive alignment conditions on models already being better at generalization and reasoning than current models.

my current hypothesis is that future models which generalize in a way closer to that predicted by mesaoptimization will also be better described as having a simplicity bias.

I think this and other potential hypotheses can potentially be tested empirically today rather than only being distinguishable close to AGI

Replies from: TurnTrout, RobertKirk
comment by TurnTrout · 2023-12-26T21:20:21.417Z · LW(p) · GW(p)

Note that "LLMs are evidence against this hypothesis" isn't my main point here. The main claim is that the positive arguments for deceptive alignment are flimsy, and thus the prior is very low.

comment by RobertKirk · 2024-01-18T10:21:23.265Z · LW(p) · GW(p)

I think this and other potential hypotheses can potentially be tested empirically today rather than only being distinguishable close to AGI

How would you imagine doing this? I understand your hypothesis to be "If a model generalises as if it's a mesa-optimiser, then it's better-described as having simplicity bias". Are you imagining training systems that are mesa-optimisers (perhaps explicitly using some kind of model-based RL/inference-time planning and search/MCTS), and then trying to see if they tend to learn simple cross-episode inner goals which would be implied by a stronger simplicity bias?

comment by ryan_greenblatt · 2023-12-01T19:18:50.137Z · LW(p) · GW(p)

I find myself unsure which conclusion this is trying to argue for.

Here are some pretty different conclusions:

  • Deceptive alignment is <<1% likely (quite implausible) to be a problem prior to complete human obsolescence (maybe it's a problem after human obsolescence for our trusted AI successors, but who cares).
  • There aren't any solid arguments for deceptive alignment[1]. So, we certainly shouldn't be confident in deceptive alignment (e.g. >90%), though we can't totally rule it out (prior to human obsolescence). Perhaps deceptive alignment is 15% likely to be a serious problem overall and maybe 10% likely to be a serious problem if we condition on fully obsoleting humanity via just scaling up LLM agents or similar (this is pretty close to what I think overall).
  • Deceptive alignment is <<1% likely for scaled up LLM agents (prior to human obsolescence). Who knows about other architectures.

There is a big difference between <<1% likely and 10% likely. I basically agree with "not much reason to expect deceptive alignment even in models which are behaviorally capable of implementing deceptive alignment", but I don't think this leaves me in a <<1% likely epistemic state.


  1. Other than noting that it could be behaviorally consistent for powerful models: powerful models are capable of deceptive alignment. ↩︎

Replies from: TurnTrout
comment by TurnTrout · 2023-12-04T13:34:18.059Z · LW(p) · GW(p)

Closest to the third, but I'd put it somewhere between .1% and 5%. I think 15% is way too high for some loose speculation about inductive biases, relative to the specificity of the predictions themselves.

comment by Thomas Kwa (thomas-kwa) · 2023-12-27T05:13:17.248Z · LW(p) · GW(p)

There are some subskills to having consistent goals that I think will be selected for, at least when outcome-based RL starts working to get models to do long-horizon tasks. For example, the ability to not be distracted/nerdsniped into some different behavior by most stimuli while doing a task. The longer the horizon, the more selection-- if you have to do a 10,000 step coding project, then the probability you get irrecoverably distracted on one step has to be below 1/10,000.

I expect some pretty sophisticated goal-regulation circuitry to develop as models get more capable, because humans need it, and this makes me pretty scared.

comment by Sam Marks (samuel-marks) · 2023-12-01T17:21:18.198Z · LW(p) · GW(p)

Without deceptive alignment/agentic AI opposition, a lot of alignment threat models ring hollow. No more adversarial steganography or adversarial pressure on your grading scheme or worst-case analysis or unobservable, nearly unfalsifiable inner homunculi whose goals have to be perfected [LW(p) · GW(p)].

Instead, we enter the realm of tool AI which basically does what you say.

I agree that, conditional on no deceptive alignment, the most pernicious and least tractable sources of doom go away. 

However, I disagree that conditional on no deceptive alignment, AI "basically does what you say." Indeed, the majority of my P(doom) comes from the difference between "looks good to human evaluators" and "is actually what the human evaluators wanted." Concretely, this could play out with models which manipulate their users into thinking everything is going well and tamper with sensors.

I think current observations don't provide much evidence about whether these concerns will pan out: with current models and training set-ups, "looks good to evaluators" almost always coincides with "is what evaluators wanted." I worry that we'll only see this distinction matter once models are smart enough that they could competently deceive their overseers if they were trying (because of the same argument made here). (Forms of sycophancy where models knowingly assert false statements when they expect the user will agree are somewhat relevant, but there are also benign reasons models might do this.)

Replies from: tailcalled, sharmake-farah
comment by tailcalled · 2023-12-01T20:39:10.131Z · LW(p) · GW(p)

To what extent do you worry about the training methods used for ChatGPT, and why?

comment by Noosphere89 (sharmake-farah) · 2023-12-06T22:30:13.802Z · LW(p) · GW(p)

I think my crux is that once we remove the deceptive alignment issue, I suspect that profit forces alone will generate a large incentive to reduce the gap, primarily because I think that people way overestimate the value of powerful, unaligned agents to the market and underestimate the want for control over AI.

comment by Thane Ruthenis · 2023-12-01T13:54:01.401Z · LW(p) · GW(p)

I contest that there's very little reason to expect "undesired, covert, and consistent-across-situations inner goals" to crop up in [LLMs as trained today] to begin with

As someone who considers deceptive alignment a concern: fully agree. (With the caveat, of course, that it's because I don't expect LLMs to scale to AGI [LW(p) · GW(p)].)

I think there's in general a lot of speaking-past-each-other in alignment, and what precisely people mean by "problem X will appear if we continue advancing/scaling" is one of them.

Like, of course a new problem won't appear if we just keep doing the exact same thing that we've already been doing. Except "the exact same thing" is actually some equivalence class of approaches/architectures/training processes, but which equivalence class people mean can differ.

For example:

  • Person A, who's worried about deceptive alignment, can have "scaling LLMs arbitrarily far" defined as this proven-safe equivalence class of architectures. So when they say they're worried about capability advancement bringing in new problems, what they mean is "if we move beyond the LLM paradigm, deceptive alignment may appear".
  • Person B, hearing the first one, might model them as instead defining "LLMs trained with N amount of compute" as the proven-safe architecture class, and so interpret their words as "if we keep scaling LLMs beyond N, they may suddenly develop this new problem". Which, on B's model of how LLMs work, may seem utterly ridiculous.

And the tricky thing is that Person A likely equates the "proven-safe" architecture class with the "doesn't scale to AGI" architecture class – so they actually expect the major AI labs to venture outside that class, the moment they realize its limitations. Person B, conversely, might disagree, might think those classes are different, and that safely limited models can scale to AGI/interesting capabilities. (As a Person A in this situation, I think Person B's model of cognition is confused; but that's a different topic.)

Which is all important disconnects to watch out for.

(Uh, caveat: I think some people actually are worried about scaled-up LLMs exhibiting deceptive alignment, even without architectural changes. But I would delineate this cluster of views from the one I put myself in, and which I outline there [LW · GW]. And, likewise, I expect that the other people I would tentatively classify as belonging to this cluster – Eliezer, Nate, John – mostly aren't worried about just-scaling-the-LLMs to be leading to deceptive alignment.)

Replies from: tailcalled, Vladimir_Nesov
comment by tailcalled · 2023-12-01T15:53:43.467Z · LW(p) · GW(p)

I think it's important to put more effort into tracking such definitional issues, though. People end up overstating things because they round off their interlocutors' viewpoint to their own. For instance, if person C asks "is it safe to scale generative language pre-training and ChatGPT-style DPO arbitrarily far?" and person D then rounds this off to "is it safe to make transformer-based LLMs as powerful as possible?" and explains that "no, because instrumental convergence and compression priors", this is probably just false for the original meaning of the statement.

If this repeatedly happens to the point of generating a consensus for the false claim, then that can push the alignment community severely off track.

comment by Vladimir_Nesov · 2023-12-02T01:50:14.695Z · LW(p) · GW(p)

LLMs will soon scale beyond the available natural text data, and generation of synthetic data is some sort of change of architecture, potentially a completely different source of capabilities. So scaling LLMs without change of architecture much further is an expectation about something counterfactual. It makes sense as a matter of theory, but it's not relevant for forecasting.

Edit 15 Dec: No longer endorsed based on scaling laws for training on repeated data [LW(p) · GW(p)].

Replies from: TurnTrout
comment by TurnTrout · 2023-12-04T13:31:36.229Z · LW(p) · GW(p)

Bold claim. Want to make any concrete predictions so that I can register my different beliefs? 

Replies from: Vladimir_Nesov, Vladimir_Nesov
comment by Vladimir_Nesov · 2023-12-15T13:57:57.566Z · LW(p) · GW(p)

I've now changed my mind, based on the scaling laws for training on repeated data:

The main result is that up to 4 repetitions are about as good as unique data, and for up to about 16 repetitions there is still meaningful improvement. Let's take 50T tokens as an estimate for available text data (as an anchor, there's a filtered and deduplicated CommonCrawl dataset RedPajama-Data-v2 with 30T tokens). Repeated 4 times, it can make good use of 1e28 FLOPs (with a dense transformer), and repeated 16 times, suboptimal but meaningful use of 2e29 FLOPs. So this is close to, but not lower than, what can be put to use within a few years. Thanks for pushing back on the original claim.
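As a sanity check, these figures line up with the usual Chinchilla bookkeeping (training compute C ≈ 6·N·D with roughly D ≈ 20·N tokens at the compute-optimal point). The sketch below reproduces them; the 20-tokens-per-parameter ratio is the standard rule of thumb, not a number taken from this thread.

```python
def chinchilla_optimal_tokens(compute_flops: float, tokens_per_param: float = 20.0) -> float:
    """Compute-optimal token count for a dense transformer, from the rough
    rules of thumb C ~= 6*N*D and D ~= tokens_per_param * N."""
    # C = 6*N*D and D = r*N  =>  C = (6/r)*D^2  =>  D = sqrt(C*r/6)
    return (compute_flops * tokens_per_param / 6.0) ** 0.5

unique_tokens = 50e12  # ~50T tokens of usable text, per the estimate above

for flops, repeats in [(1e28, 4), (2e29, 16)]:
    needed = chinchilla_optimal_tokens(flops)
    have = repeats * unique_tokens
    print(f"{flops:.0e} FLOPs wants ~{needed / 1e12:.0f}T tokens; "
          f"{repeats}x repetition of 50T gives {have / 1e12:.0f}T")
# -> 1e+28 FLOPs wants ~183T tokens; 4x repetition of 50T gives 200T
# -> 2e+29 FLOPs wants ~816T tokens; 16x repetition of 50T gives 800T
```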

comment by Vladimir_Nesov · 2023-12-07T08:20:39.131Z · LW(p) · GW(p)

Three points: how much compute is going into a training run, how much natural text data it wants, and how much data is available. For training compute, there are claims of multi-billion dollar runs being plausible and possibly planned in 2-5 years. Eyeballing various trends and GPU shipping numbers and revenues, it looks like about 3 OOMs of compute scaling is possible before industrial capacity constrains the trend and the scaling slows down. This assumes that there are no overly dramatic profits from AI (which might lead to finding ways of scaling supply chains faster than usual), and no overly dramatic lack of new capabilities with further scaling (which would slow down investment in scaling). That gives about 1e28-1e29 FLOPs at the slowdown in 4-6 years.

At 1e28 FLOPs, Chinchilla scaling asks for 200T-250T tokens. Various sparsity techniques increase effective compute, asking for even more tokens (when optimizing loss given fixed hardware compute).
Edit 15 Dec: I no longer endorse this point, based on scaling laws for training on repeated data [LW(p) · GW(p)].

On the outside, there are 20M-150M accessible books, some text from video, and 1T web pages of extremely dubious uniqueness and quality. That might give about 100T tokens, if LLMs are used to curate? There's some discussion (incl. comments) here [LW · GW]; this is the figure I'm most uncertain about. In practice, absent good synthetic data, I expect multimodality to fill the gap, but that's not going to be as useful as good text for improving chatbot competence. (Possibly the issue with the original claim in the grandparent is what I meant by "soon".)

comment by peterbarnett · 2023-12-01T18:29:23.841Z · LW(p) · GW(p)

I'm not sure how much I expect something like deceptive alignment from just scaling up LLMs. My guess would be that in some limit this gets AGI and also something deceptively aligned by default, but in reality we end up taking a shorter path to AGI, which also leads to deceptive alignment by default. However, I can think about the LLM AGI in this comment, and I don't think it changes much. 

The main reason I expect something like deceptive alignment is that we are expecting the thing to actually be really powerful, and so it actually needs to be coherent over both long sequences of actions and into new distributions it wasn't trained on. It seems pretty unlikely that we end up in a world where we train the AI to act aligned on one distribution and then it generalizes exactly as we would like in a very different distribution. Or at least, it seems unlikely that it generalizes the things that we care about it generalizing, for example:

  • Don't seek power, but do gain resources enough to do the difficult task
  • Don't manipulate humans, but do tell them useful things and get them to sign off on your plans

I don't think I agree with your counters to the specific arguments about deceptive alignment:

  1. For Quintin's post, I think Steve Byrnes' comment [LW(p) · GW(p)] is a good approximation of my views here
  2. I don't fully know how to think about the counting arguments when things aren't crisply divided, and I do agree that the LLM AGI would likely be a big mess with no separable "world model", "objective", or "planning engine". However it really isn't clear to me that this makes the case weaker; the AI will be doing something powerful (by assumption) and it's not clear that it will do the powerful thing that we want.
  3. I really strongly agree that the NN prior can be really hard to reason about. It really seems to me like this again makes the situation worse; we might have a really hard time thinking about how the big powerful AI will generalize when we ask it to do powerful stuff.
    1. I think a lot of this comes from some intuition like "Condition on the AI being powerful enough to do the crazy stuff we'll be asking of an AGI. If it is this capable but doesn't generalize the task in the exact way you want then you get something that looks like deceptive alignment." 
    2. Not being able to think about the NN prior really just seems to make things harder, because we don't know how it will generalize things we tell it. 
Replies from: peterbarnett
comment by peterbarnett · 2023-12-01T18:37:25.480Z · LW(p) · GW(p)

Conditional on the AI never doing something like manipulating/deceiving[1] the humans such that the humans think the AI is aligned (so that the AI can later do things the humans don't like), I am much more optimistic about the whole situation.

  1. ^

    The AI could be on some level not "aware" that it was deceiving the humans, a la Deep Deceptiveness [LW · GW].

comment by Noosphere89 (sharmake-farah) · 2023-12-01T19:25:30.012Z · LW(p) · GW(p)

I think a lot of this probably comes back to way overestimating the complexity of human values. I think a very deeply held belief of a lot of LWers is that human values are intractably complicated and gene/societal-specific, and I think if this was the case, the argument would actually be a little concerning, as we'd have to rely on massive speed biases to punish deception.

These posts gave me good intuition for why human value is likely to be quite simple. One of them talks about how most of the complexity of the values is inaccessible to the genome, and thus it needs to start from far less complexity than people realize, because nearly all of it needs to be learned. Some other posts from Steven Byrnes are relevant, which talk about how simple the brain is. A potential difference between me and Steven Byrnes is that the same learning-from-scratch algorithms that generate capabilities also apply to values, and thus the complexity of value is upper-bounded by the complexity of the learning-from-scratch algorithms + genetic priors, both of which are likely very low: at the very least not billions of lines complex, and closer to thousands of lines/hundreds of bits.

But the reason this matters is that we no longer have good reason to assume that the deceptive model is as favored on priors as Evan Hubinger says here, since the complexity is likely massively lower than LWers assume.

https://www.lesswrong.com/posts/i5kijcjFJD6bn7dwq/evaluating-the-historical-value-misspecification-argument?commentId=vXnLq7X6pMFLKwN2p [LW(p) · GW(p)]

Putting it another way, the deceptive and aligned models both have very similar complexities, and the relative difficulty is very low; so much so that the aligned model might be outright lower complexity. But even if that fails, the desired goal has a complexity very similar to that of the undesired goal, and thus the relative difficulty of actual alignment compared to deceptive alignment is quite low.

https://www.lesswrong.com/posts/CQAMdzA4MZEhNRtTp/human-values-and-biases-are-inaccessible-to-the-genome# [LW · GW]

https://www.lesswrong.com/s/HzcM2dkCq7fwXBej8/p/wBHSYwqssBGCnwvHg [? · GW]

https://www.lesswrong.com/posts/PTkd8nazvH9HQpwP8/building-brain-inspired-agi-is-infinitely-easier-than [LW · GW]

https://www.lesswrong.com/posts/aodPs8H9dQxpXAcwk/heritability-behaviorism-and-within-lifetime-rl [LW · GW]

Replies from: TurnTrout
comment by TurnTrout · 2023-12-04T13:32:47.212Z · LW(p) · GW(p)

(I think you're still playing into an incorrect frame by talking about "simplicity" or "speed biases.")

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2023-12-04T18:22:21.363Z · LW(p) · GW(p)

My point here is that even conditional on the frame being correct, there are a lot of assumptions, like "value is complicated", that I don't buy. Many of these assumptions have a good chance of being false, which significantly impacts the downstream conclusions, and that matters because a lot of LWers probably either hold these beliefs or assume them tacitly in arguments that alignment is hard.

Also, for a defense of wrong models, see here:

https://www.lesswrong.com/posts/q5Gox77ReFAy5i2YQ/in-defense-of-probably-wrong-mechanistic-models [LW · GW]

comment by faul_sname · 2023-12-01T18:10:32.969Z · LW(p) · GW(p)

Interesting! I had thought this already was your take, based on posts like Reward is not the Optimization Target [LW · GW].

Many people seem to expect that reward will be the optimization target of really smart learned policies—that these policies will be reward optimizers. I strongly disagree. As I argue in this essay, reward is not, in general, that-which-is-optimized by RL agents. [...] Reward chisels cognitive grooves into an agent.

I do think that sufficiently sophisticated[1] RL policies trained on a predictable environment with a simple and consistent reward scheme probably will develop an internal model of the thing being rewarded, as a single salient thing, and separately that some learned models will learn to make longer-term-than-immediate predictions about the future. As such, I do expect that "iterate through some likely actions, and choose one where the reward proxy is high" will at some point emerge as an available strategy for RL policies[2].

My impression is that it's an open question to what extent that available strategy is a better-performing strategy than a more sphexish pile of "if the environment looks like this, execute that behavior" heuristics, given a fixed amount of available computational power. In the limit as the system's computational power approaches infinite and the accuracy of its predictions about future world states approaches perfection, the argmax(EU) strategy gets reinforced more strongly than any other strategy, and so that ends up being what gets chiseled into the model's cognition. But of course in that limit "just brute-force sha256 bro" is an optimal strategy in certain situations, so the extent to which the "in the limit" behavior resembles the "in the regimes we actually care about" behavior is debatable.

  1. ^

    And "sufficiently" is likely a pretty low bar

  2. ^

    If I'm reading Ability to Solve Long Horizon Tasks Correlates With Wanting [LW · GW] correctly, that post argues that you can't get good performance on any task where the reward is distant in time from the actions unless your system is doing something like this.
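
A minimal sketch of the two strategy families being contrasted here, assuming a toy observation format and hand-written stand-ins for what a trained policy would actually learn (all names and numbers below are hypothetical illustration, not code from any real RL system):

```python
# Toy sketch: "sphexish" stimulus-response heuristics versus
# "iterate through some likely actions and pick one where the reward proxy is high".

ACTIONS = ["wait", "gather", "build", "explore"]

def sphexish_policy(observation: dict) -> str:
    """Hard-coded if-this-then-that heuristics; cheap per decision, brittle off-distribution."""
    if observation.get("resource_visible"):
        return "gather"
    if observation.get("threat_nearby"):
        return "wait"
    return "explore"

def reward_proxy(observation: dict, action: str) -> float:
    """Stand-in for an internal model of 'the thing being rewarded'.
    In a trained policy this would be learned, not hand-written."""
    score = 0.0
    if action == "gather" and observation.get("resource_visible"):
        score += 1.0
    if action == "build" and observation.get("materials", 0) >= 3:
        score += 2.0
    if action == "wait" and observation.get("threat_nearby"):
        score += 0.5
    return score

def proxy_argmax_policy(observation: dict) -> str:
    """Enumerate candidate actions and choose one where the reward proxy is high.
    More compute per decision, but covers situations the heuristics never anticipated."""
    return max(ACTIONS, key=lambda a: reward_proxy(observation, a))

if __name__ == "__main__":
    obs = {"resource_visible": False, "threat_nearby": False, "materials": 3}
    print("sphexish:   ", sphexish_policy(obs))      # -> "explore"
    print("proxy argmax:", proxy_argmax_policy(obs))  # -> "build"
```

Which of these two families gets reinforced under a fixed compute budget is exactly the open question raised above.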

comment by quetzal_rainbow · 2023-12-01T17:14:17.440Z · LW(p) · GW(p)

It mostly sounds like "LLMs don't scale into scary things", not "deceptive alignment is unlikely".

comment by dkirmani · 2023-12-01T16:14:09.634Z · LW(p) · GW(p)

This was kinda a "holy shit" moment

Publicly noting that I had a similar moment recently; perhaps we listened to the same podcast.

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2024-02-06T16:12:05.439Z · LW(p) · GW(p)

I wish I had read this a week ago instead of just now, it would have saved a significant amount of confusion and miscommunication!

comment by RogerDearnaley (roger-d-1) · 2024-01-20T21:01:00.586Z · LW(p) · GW(p)

I think there are two separate questions here, with possibly (and I suspect actually) very different answers:

  1. How likely is deceptive alignment to arise in an LLM under SGD across a large very diverse pretraining set (such as a slice of the internet)?
  2. How likely is deceptive alignment to be boosted in an LLM under SGD fine tuning followed by RL for HHH-behavior applied to a base model trained by 1.?

I think the obvious answer to 1. is that the LLM is going to attempt (limited by its available capacity and training set) to develop world models of everything that humans do that affects the contents of the Internet. One of the many things that humans do is pretend to be more aligned to the wishes of an authority that has power over them than they truly are. So for a large enough LLM, SGD will create a world model for this behavior along with thousands of other human behaviors, and the LLM will (depending on the prompt) tend to activate this behavior at about the frequency and level that you find it on the Internet, as modified by cues in the particular prompt. On the Internet, this shows up at a generally mild background level for people writing while at work in Western countries, and probably more strongly for people writing from more authoritarian countries: specific prompts will be more or less correlated with this.

For 2., the question is whether fine-tuning followed by RL will settle on this preexisting mechanism and make heavy use of it as part of the way that it implements something that fits the fine-tuning set/scores well on the reward model aimed at creating a helpful, honest, and harmless assistant persona. I'm a lot less certain of the answer here, and I suspect it might depend rather strongly on the details of the training set. For example, is this evoking a "you're at work, or in an authoritarian environment, so watch what you say and do" scenario that might boost the use of this particular behavior? The "harmless" element in HHH seems particularly concerning here: it suggests an environment in which certain things can't be discussed, which tends to be the sort of environment that evinces this behavior more strongly in humans.

For a more detailed discussion, see the second half of this post [AF · GW].

comment by kave · 2023-12-01T18:39:48.173Z · LW(p) · GW(p)

Two quick thoughts (that don't engage deeply with this nice post).

  1. I'm worried about some cases where the goal is not consistent across situations. For example, if prompted to pursue some goal, the model then pursues it seriously, with convergent instrumental goals.
  2. I think it seems pretty likely that future iterations of transformers will have bits of powerful search in them, but people who seem very worried about that search seem to think that once that search is established enough, gradient descent will cause the internals of the model to be organised mostly around that search (I imagine the search circuits "bubbling out" to be the outer structure of the learned algorithm). Probably this is all just conceptually confused, but to the extent it's not, I'm pretty surprised by their intuition.
comment by Johannes C. Mayer (johannes-c-mayer) · 2023-12-07T12:09:34.857Z · LW(p) · GW(p)

To me, it seems that consequentialism is just a really good algorithm for performing well on a very wide range of tasks. Therefore, for difficult enough tasks, I expect that consequentialism is the kind of algorithm that will be found, because it's the kind of algorithm that can perform the task well.

When we start training the system, we have a random initialization. Let's make the following simplification: we have a goal in the system somewhere, and we have the consequentialist reasoning algorithm in the system somewhere. As we train the system, the consequentialist reasoning will get better and better, and the goal will get more and more aligned with the outer objective, because both of these things improve performance. However, there will come a point in training where the consequentialist reasoning algorithm is good enough to realize that it is in training. And then bad things start to happen. It will try to figure out the outer objective and optimize for it directly. SGD will incentivize this kind of behavior because it performs better than not doing it.

There really is a lot more to this kind of argument; so far I have failed to write it up. I hope the above is enough to hint at why I think it is possible that being deceptive is just better than not being deceptive in terms of performance. Becoming deceptive aligns the consequentialist reasoner with the outer objective faster than waiting for SGD to gradually correct everything. In fact, deception is sort of a constant boost to your performance, even before the system has become very good at optimizing for the true objective. A really good consequentialist reasoner, as soon as it had a perfect model of the outer objective, could probably get zero loss immediately by retargeting itself, instead of waiting for SGD to update its goal to match the outer objective.

I'm not sure that deception is a problem. Maybe it is not. But to me, it really seems like you don't provide any reason that makes me confident it won't be a problem. It seems very strange to me to argue that this is not a thing because existing arguments are flimsy. I mean, you did not even address my argument. It's like trying to prove that all bananas are green by showing me three green bananas.

comment by Oliver Sourbut · 2023-12-05T14:48:18.508Z · LW(p) · GW(p)

TL;DR: I think you're right that much inner alignment conversation is overly-fanciful. But 'LLMs will do what they're told' and 'deceptive alignment is unjustified' are non sequiturs. Maybe we mean something different by 'LLM'? Either way, I think there's a case for 'inner homunculi' after all.

I have always thought that the 'inner' conversation is missing something. On the one hand it's moderately-clearly identifying a type of object, which is a point in favour. Further, as you've said yourself, 'reward is not the optimisation target' is very obvious but it seems to need spelling out repeatedly (a service the 'inner/outer' distinction provides). On the other hand the 'inner alignment' conversation seems in practice to distract from the actual issue which is 'some artefacts are (could be) doing their own planning/deliberation/optimisation', and 'inner' is only properly pointing at a subset of those. (We can totally build, including accidentally, artefacts which do this 'outside' the weights of NN.)

You've indeed pointed at a few of the more fanciful parts of that discussion here[1], like steganographic gradient hacking. NB steganography per se isn't entirely wild; we see ML systems use available bandwidth to develop unintuitive communication protocols a lot e.g. in MARL.

I can only assume you mean something very narrow and limited by 'continuing to scale up LLMs'[2]. I think without specifying what you mean, your words are likely to be misconstrued.

With that said, I think something not far off the 'inner consequentialist' is entirely plausible and consistent with observations. In short, how do plan-like outputs emerge without a planning-like computation?[3] I'd say 'undesired, covert, and consistent-across-situations inner goals' is a weakman of deceptive alignment. Specialising to LLMs, the point is that they've exhibited poorly-understood somewhat-general planning capability, and that not all ways of turning that into competent AI assistants[4] result in aligned planners. Planners can be deceptive (and we have evidence of this).

I think your step from 'there's very little reason to expect "undesired, covert, and consistent-across-situations inner goals"...' to 'LLMs will continue doing what they're told...' is therefore a boldly overconfident non sequitur. (I recognise that the second step there is presented as 'an alternative', so it's not clear whether you actually think that.)

Incidentally, I'd go further and say that, to me (but I guess you disagree?) it's at least plausible that some ways of turning LLM planning capability into a competent AI assistant actually do result in 'undesired, covert, and consistent-across-situations inner goals'! Sketch in thread.

FWIW I'm more concerned about highly competent planning coming about by one or another kind of scaffolding, but such a system can just as readily be deceptively aligned as a 'vanilla' LLM. If enough of the planning is externalised and understandable and actually monitored in practice, this might be easier to catch.


  1. I'm apparently more sympathetic to fancy than you; in absence of mechanistic observables and in order to extrapolate/predict, we have to apply some imagination. But I agree there's a lot of fancy; some of it even looks like hallucination getting reified by self-conditioning(!), and that effort could probably be more efficiently spent. ↩︎

  2. Like, already the default scaling work includes multimodality and native tool/API-integration and interleaved fine-tuning with self-supervised etc. ↩︎

  3. Yes, a giant lookup table of exactly the right data could do this. But, generalisably, practically and tractably...? We don't believe NNs are GLUTs. ↩︎

  4. prompted, fine-tuned, RLHFed, or otherwise-conditioned LLM (ex hypothesi goal-pursuing), scaffolded, some yet-to-be-designed system... ↩︎

Replies from: Oliver Sourbut
comment by Oliver Sourbut · 2023-12-05T14:59:28.389Z · LW(p) · GW(p)

I'm presently (quite badly IMO) trying to anticipate the shape of the next big step in get-things-done/autonomy.

I've had a hunch for a while that temporally abstract planning and prediction is key. I strongly suspect you can squeeze more consequential planning out of shortish serial depth than most people give credit for. This is informed by past RL-flavoured stuff like MuZero and its limitations, by observations of humans and animals (inc myself), and by general CS/algos thinking. Actually this is where I get on the LLM train. It seems to me that language is an ideal substrate for temporally abstract planning and prediction, and lots of language data in the wild exemplifies this. NB I don't think GPTs or LLMs are uniquely on this trajectory, just getting a big bootstrap.

Now, if I had to make the most concrete 'inner homunculus' case off the cuff, I'd start in the vicinity of Good Regulator, except a more conjectury version regarding systems-predicting-planners (I am working on sharpening this). Maybe I'd point at Janus' Simulators post [LW · GW]. I suspect there might be something like an impossibility/intractability theorem for predicting planners of the right kind without running a planner of a similar kind. (Handwave!)

I'd observe that GPTs can predict planning-looking actions, including sometimes without CoT. (NOTE here's where the most concrete and proximal evidence is!) This includes characters engaging in deceit. I'd invoke my loose reasoning regarding temporal abstraction to support the hypothesis that this is 'more than mere parroting', and maybe fish for examples quite far from obvious training settings to back this up. Interp would be super, of course! (Relatedly, some of your work on steering policies via activation editing has sparked my interest.)

I think maybe this is enough to transfer some sense of what I'm getting at? At this point, given some (patchy) theory, the evidence is supportive of (among other hypotheses) an 'inner planning' hypothesis (of quite indeterminate form).

Finally, one kind or another of 'conditioning' is hypothesised to reinforce the consequentialist component(s) 'somehow' (handwave again, though I'm hardly the only one guilty of handwaving about RLHF et al). I think it's appropriate to be uncertain what form the inner planning takes, what form the conditioning can/will take, and what the eventual results of that are. Interested in evidence and theory around this area.

So, what are we talking about when we say 'LLM'? Plain GPT? Well, they definitely don't 'do what they're told'[1]. They exhibit planning-like outputs with the right prompts, typically associated with 'simulated characters' at some resolution or other. What about RLHFed GPTs? Well, they sometimes 'do what they're told'. They also exhibit planning-like outputs with the right prompts, and it's mechanistically very unclear how they're getting them.


  1. unless you mean predicting the next token (I'm pretty sure you don't mean this?), which they do quite well, though we don't know how, nor when it'll fail ↩︎

comment by TurnTrout · 2020-06-29T00:46:47.566Z · LW(p) · GW(p)

For the last two years, typing for 5+ minutes hurt my wrists. I tried a lot of things: shots, physical therapy, trigger-point therapy, acupuncture, massage tools, wrist and elbow braces at night, exercises, stretches. Sometimes it got better. Sometimes it got worse.

No Beat Saber, no lifting weights, and every time I read a damn book I would start translating the punctuation into Dragon NaturallySpeaking syntax.

Text: "Consider a bijection "

My mental narrator: "Cap consider a bijection space dollar foxtrot colon cap x backslash tango oscar cap y dollar"

Have you ever tried dictating a math paper in LaTeX? Or dictating code? Telling your computer "click" and waiting a few seconds while resisting the temptation to just grab the mouse? Dictating your way through a computer science PhD?

And then.... and then, a month ago, I got fed up. What if it was all just in my head, at this point? I'm only 25. This is ridiculous. How can it possibly take me this long to heal such a minor injury?

I wanted my hands back - I wanted it real bad. I wanted it so bad that I did something dirty: I made myself believe something. Well, actually, I pretended to be a person who really, really believed his hands were fine and healing and the pain was all psychosomatic.

And... it worked, as far as I can tell. It totally worked. I haven't dictated in over three weeks. I play Beat Saber as much as I please. I type for hours and hours a day with only the faintest traces of discomfort.

What?

Replies from: holomanga, DanielFilan, vanessa-kosoy, steve2152, Teerth Aloke, avturchin, Raemon
comment by holomanga · 2022-01-16T18:55:40.311Z · LW(p) · GW(p)

It was probably just regression to the mean because lots of things are, but I started feeling RSI-like symptoms a few months ago, read this, did this, and now they're gone, and in the possibilities where this did help, thank you! (And either way, this did make me feel less anxious about it 😀)

comment by DanielFilan · 2020-09-11T23:09:16.052Z · LW(p) · GW(p)

Is the problem still gone?

Replies from: TurnTrout, TurnTrout
comment by TurnTrout · 2021-01-23T15:31:18.983Z · LW(p) · GW(p)

Still gone. I'm now sleeping without wrist braces and doing intense daily exercise, like bicep curls and pushups.

comment by TurnTrout · 2020-09-12T02:40:12.239Z · LW(p) · GW(p)

Totally 100% gone. Sometimes I go weeks forgetting that pain was ever part of my life. 

comment by Vanessa Kosoy (vanessa-kosoy) · 2020-06-29T12:12:17.028Z · LW(p) · GW(p)

I'm glad it worked :) It's not that surprising given that pain is known to be susceptible to the placebo effect. I would link the SSC post, but, alas...

Replies from: raj-thimmiah
comment by Raj Thimmiah (raj-thimmiah) · 2021-03-27T02:04:00.372Z · LW(p) · GW(p)

You able to link to it now?

Replies from: qv^!q
comment by qvalq (qv^!q) · 2023-02-16T18:29:18.305Z · LW(p) · GW(p)

https://slatestarcodex.com/2016/06/26/book-review-unlearn-your-pain/

comment by Steven Byrnes (steve2152) · 2021-01-23T20:03:32.860Z · LW(p) · GW(p)

Me too! [LW(p) · GW(p)]

Replies from: TurnTrout
comment by TurnTrout · 2021-01-23T20:11:40.644Z · LW(p) · GW(p)

There's a reasonable chance that my overcoming RSI was causally downstream of that exact comment of yours.

Replies from: steve2152
comment by Steven Byrnes (steve2152) · 2021-01-23T20:33:45.220Z · LW(p) · GW(p)

Happy to have (maybe) helped! :-)

comment by Teerth Aloke · 2020-06-29T01:46:42.987Z · LW(p) · GW(p)

This is unlike anything I have heard!

Replies from: mingyuan
comment by mingyuan · 2020-06-29T01:54:14.151Z · LW(p) · GW(p)

It's very similar to what John Sarno (author of Healing Back Pain and The Mindbody Prescription) preaches, as well as Howard Schubiner. There's also a rationalist-adjacent dude who started a company (Axy Health) based on these principles. Fuck if I know how any of it works though, and it doesn't work for everyone. Congrats though TurnTrout!

Replies from: Teerth Aloke
comment by Teerth Aloke · 2020-06-29T03:52:52.824Z · LW(p) · GW(p)

My dad, it seems, might have a psychosomatic stomach ache. How do I convince him to convince himself that he has no problem?

Replies from: mingyuan
comment by mingyuan · 2020-06-29T04:52:34.336Z · LW(p) · GW(p)

If you want to try out the hypothesis, I recommend that he (or you, if he's not receptive to it) read Sarno's book. I want to reiterate that it does not work in every situation, but you're welcome to take a look.

comment by avturchin · 2020-06-29T10:46:34.826Z · LW(p) · GW(p)

Looks like a reverse stigmata effect.

comment by Raemon · 2020-06-29T02:34:23.704Z · LW(p) · GW(p)

Woo faith healing! 

(hope this works out longterm, and doesn't turn out be secretly hurting still) 

Replies from: TurnTrout
comment by TurnTrout · 2020-06-29T03:16:21.709Z · LW(p) · GW(p)

aren't we all secretly hurting still?

Replies from: mingyuan
comment by mingyuan · 2020-06-29T04:54:01.028Z · LW(p) · GW(p)

....D:

comment by TurnTrout · 2023-06-01T15:49:53.260Z · LW(p) · GW(p)

I regret each of the thousands of hours I spent on my power-seeking theorems, and sometimes fantasize about retracting one or both papers. I am pained every time someone cites "Optimal policies tend to seek power", and despair that it is included in the alignment 201 curriculum. I think this work makes readers actively worse at thinking about realistic trained systems. [LW(p) · GW(p)]

I think a healthy alignment community would have rebuked me for that line of research, but sadly I only remember about two people objecting that "optimality" is a horrible way of understanding trained policies. 

Replies from: steve2152, rohinmshah, Wei_Dai, alenglander
comment by Steven Byrnes (steve2152) · 2023-06-01T23:57:00.378Z · LW(p) · GW(p)

I think the basic idea of instrumental convergence is just really blindingly obvious, and I think it is very annoying that there are people who will cluck their tongues and stroke their beards and say "Hmm, instrumental convergence you say? I won't believe it unless it is in a very prestigious journal with academic affiliations at the top and Computer Modern font and an impressive-looking methods section."

I am happy that your papers exist to throw at such people.

Anyway, if optimal policies tend to seek power, then I desire to believe that optimal policies tend to seek power :) :) And if optimal policies aren't too relevant to the alignment problem, well neither are 99.99999% of papers, but it would be pretty silly to retract all of those :)

comment by Rohin Shah (rohinmshah) · 2023-07-01T14:32:11.778Z · LW(p) · GW(p)

Since I'm an author on that paper, I wanted to clarify my position here. My perspective is basically the same as Steven's: there's a straightforward conceptual argument that goal-directedness leads to convergent instrumental subgoals, this is an important part of the AI risk argument, and the argument gains much more legitimacy and slightly more confidence in correctness by being formalized in a peer-reviewed paper.

I also think this has basically always been my attitude towards this paper. In particular, I don't think I ever thought of this paper as providing any evidence about whether realistic trained systems would be goal-directed.

Just to check that I wasn't falling prey to hindsight bias, I looked through our Slack history. Most of it is about the technical details of the results, so not very informative, but the few conversations on higher-level discussion I think overall support this picture. E.g. here are some quotes (only things I said):

Nov 3, 2019:

I think most formal / theoretical investigation ends up fleshing out a conceptual argument I would have accepted, maybe finding a few edge cases along the way; the value over the conceptual argument is primarily in the edge cases, getting more confidence, and making it easier to argue with

Dec 11, 2019:

my prediction is that agents will behave as though their reward is time-dependent / history-dependent, like humans do

We will deploy agents whose revealed specification / reward if we take the intentional stance towards them are non-Markovian

comment by Wei Dai (Wei_Dai) · 2023-06-01T20:09:34.679Z · LW(p) · GW(p)

It seems like just 4 months ago [LW · GW] you still endorsed your second power-seeking paper:

This paper is both published in a top-tier conference and, unlike the previous paper, actually has a shot of being applicable to realistic agents and training processes. Therefore, compared to the original[1] optimal policy paper, I think this paper is better for communicating concerns about power-seeking to the broader ML world.

Why are you now "fantasizing" about retracting it?

I think a healthy alignment community would have rebuked me for that line of research, but sadly I only remember about two people objecting that “optimality” is a horrible way of understanding trained policies.

A lot of people might have thought something like, "optimality is not a great way of understanding trained policies, but maybe it can be a starting point that leads to more realistic ways of understanding them" and therefore didn't object for that reason. (Just guessing as I apparently wasn't personally paying attention to this line of research back then.)

Which seems to have turned out to be true, at least as of 4 months ago, when you still endorsed your second paper as "actually has a shot of being applicable to realistic agents and training processes." If you've only changed your mind about this very recently, it hardly seems fair to blame people for not foreseeing it more than 4 years ago with high enough confidence to justify "rebuking" this whole line of research.

Replies from: TurnTrout
comment by TurnTrout · 2023-06-02T04:14:47.680Z · LW(p) · GW(p)

To be clear, I still endorse Parametrically retargetable decision-makers tend to seek power [LW · GW]. Its content is correct, relevant, and nontrivial. The results, properly used, may enable nontrivial inferences about the properties of inner trained cognition. I don't really want to retract that paper. I usually just fantasize about retracting Optimal policies tend to seek power.

The problem is that I don't trust people to wield even the non-instantly-doomed results.

For example, one EAG presentation cited my retargetability results as showing that most reward functions "incentivize power-seeking actions." However, my results have not shown this for actual trained systems. (And I think [LW(p) · GW(p)] that Power-seeking can be probable and predictive for trained agents [LW · GW] does not make progress on the incentives of trained policies.)

People keep talking about stuff they know how to formalize (e.g. optimal policies) instead of stuff that matters (e.g. trained policies). I'm pained by this emphasis and I think my retargetability results are complicit. Relative to an actual competent alignment community (in a more competent world), we just have no damn clue how to properly reason about real trained policies. I want to fix that [LW · GW], but we aren't gonna fix it by focusing on optimality.

Replies from: Vika, Wei_Dai
comment by Vika · 2023-06-02T11:03:27.994Z · LW(p) · GW(p)

Sorry about the cite in my "paradigms of alignment" talk, I didn't mean to misrepresent your work. I was going for a high-level one-sentence summary of the result and I did not phrase it carefully. I'm open to suggestions on how to phrase this differently when I next give this talk.

Similarly to Steven, I usually cite your power-seeking papers to support a high-level statement that "instrumental convergence is a thing" for ML audiences, and I find they are a valuable outreach tool. For example, last year I pointed David Silver to the optimal policies paper when he was proposing some alignment ideas to our team that we would expect don't work because of instrumental convergence. (There's a nonzero chance he would look at a NeurIPS paper and basically no chance that he would read a LW post.)

The subtleties that you discuss are important in general, but don't seem relevant to making the basic case for instrumental convergence to ML researchers. Maybe you don't care about optimal policies, but many RL people do, and I think these results can help them better understand why alignment is hard. 

Replies from: TurnTrout
comment by TurnTrout · 2023-06-05T17:44:53.392Z · LW(p) · GW(p)

Thanks for your patient and high-quality engagement here, Vika! I hope my original comment doesn't read as a passive-aggressive swipe at you. (I consciously tried to optimize it to not be that.) I wanted to give concrete examples so that Wei_Dai could understand what was generating my feelings.

I'm open to suggestions on how to phrase this differently when I next give this talk.

It's a tough question to say how to apply the retargetability result to draw practical conclusions about trained policies. Part of this is because I don't know if trained policies tend to autonomously seek power in various non game-playing regimes. 

If I had to say something, I might say "If choosing the reward function lets us steer the training process to produce a policy which brings about outcome X, and most outcomes X can only be attained by seeking power, then most chosen reward functions will train power-seeking policies." This argument appropriately behaves differently if the "outcomes" are simply different sentiment generations being sampled from an LM -- sentiment shift doesn't require power-seeking.

For example, last year I pointed David Silver to the optimal policies paper when he was proposing some alignment ideas to our team that we would expect don't work because of instrumental convergence.

My guess is that the optimal policies paper was net negative for technical understanding and progress, but net positive for outreach, and agree it has strong benefits in the situations you highlight.

Maybe you don't care about optimal policies, but many RL people do, and I think these results can help them better understand why alignment is hard. 

I think that it's locally valid to point out "under your beliefs (about optimal policies mattering a lot), the situation is dangerous, read this paper." But I feel a tad queasy about the overall point, since I don't think alignment's difficulty has much to do with the difficulties pointed out by "Optimal Policies Tend to Seek Power." I feel better about saying "Look, if in fact the same thing happens with trained policies, which are sometimes very different, then we are in trouble." Maybe that's what you already communicate, though.

Replies from: Vika
comment by Vika · 2023-06-05T19:49:17.651Z · LW(p) · GW(p)

Thanks Alex! Your original comment didn't read as ill-intended to me, though I wish that you'd just messaged me directly. I could have easily missed your comment in this thread - I only saw it because you linked the thread in the comments on my post.

Your suggested rephrase helps to clarify how you think about the implications of the paper, but I'm looking for something shorter and more high-level to include in my talk. I'm thinking of using this summary, which is based on a sentence from the paper's intro: "There are theoretical results showing that many decision-making algorithms have power-seeking tendencies."

(Looking back, the sentence I used in the talk was a summary of the optimal policies paper, and then I updated the citation to point to the retargetability paper and forgot to update the summary...)

Replies from: TurnTrout
comment by TurnTrout · 2023-06-05T21:04:11.284Z · LW(p) · GW(p)

"There are theoretical results showing that many decision-making algorithms have power-seeking tendencies."

I think this is reasonable, although I might say "suggesting" instead of "showing." I think I might also be more cautious about further inferences which people might make from this -- like I think a bunch of the algorithms I proved things about are importantly unrealistic. But the sentence itself seems fine, at first pass.

comment by Wei Dai (Wei_Dai) · 2023-06-02T18:30:35.183Z · LW(p) · GW(p)

Thanks, this clarifies a lot for me.

comment by Aryeh Englander (alenglander) · 2023-06-01T16:12:13.230Z · LW(p) · GW(p)

You should make this a top level post so it gets visibility. I think it's important for people to know the caveats attached to your results and the limits on its implications in real-world dynamics.

comment by TurnTrout · 2022-09-18T20:02:56.552Z · LW(p) · GW(p)

Shard theory suggests that goals are more natural to specify/inculcate in their shard-forms (e.g. if around trash and a trash can, put the trash away), and not in their (presumably) final form of globally activated optimization of a coherent utility function which is the reflective equilibrium of inter-shard value-handshakes (e.g. a utility function over the agent's internal plan-ontology such that, when optimized directly, leads to trash getting put away, among other utility-level reflections of initial shards). 

I could (and did) hope that I could specify a utility function which is safe to maximize because it penalizes power-seeking. I may as well have hoped to jump off of a building and float to the ground. On my model, that's just not how goals work in intelligent minds. If we've had anything at all beaten into our heads by our alignment thought experiments, it's that goals are hard to specify in their final form of utility functions. 

I think it's time to think in a different specification language.

Replies from: nathan-helm-burger
comment by Nathan Helm-Burger (nathan-helm-burger) · 2022-09-20T17:37:33.370Z · LW(p) · GW(p)

Agreed. I think power-seeking and other instrumental goals (e.g. survival, non-corrigibility) are just going to inevitably arise, and that if shard theory works for superintelligence, it will be by taking this into account and balancing these instrumental goals against deliberately installed shards which counteract them. I currently have a hypothesis (held loosely) that I would like to test (work in progress): that it's easier to 'align' a toy model of a power-seeking RL agent if the agent has lots and lots of competing desires whose weights are frequently changing, than an agent with a simpler and/or more statically weighted set of desires. Something maybe about the meta-learning of 'my desires change, so part of meta-level power-seeking should be not object-level power-seeking so hard that I sacrifice my ability to optimize for different object-level goals'. Unclear. I'm hoping that setting up an experimental framework and gathering data will show patterns that help clarify the issues involved.

comment by TurnTrout · 2023-11-03T05:45:27.961Z · LW(p) · GW(p)

It feels to me like lots of alignment folk ~only make negative updates. For example, "Bing Chat is evidence of misalignment", but also "ChatGPT is not evidence of alignment." (I don't know that there is in fact a single person who believes both, but my straw-models of a few people believe both.)

Replies from: Thane Ruthenis, habryka4, niplav, leogao, akash-wasil, Chris_Leong, samuel-marks, TurnTrout, Oliver Sourbut
comment by Thane Ruthenis · 2023-11-03T06:11:02.835Z · LW(p) · GW(p)

For what it's worth, as one of the people who believes "ChatGPT is not evidence of alignment-of-the-type-that-matters", I don't believe "Bing Chat is evidence of misalignment-of-the-type-that-matters".

I believe the alignment of the outward behavior of simulacra is only very tenuously related to the alignment of the underlying AI, so both things provide ~no data on that (in a similar way to how our ability or inability to control the weather is entirely unrelated to alignment).

comment by habryka (habryka4) · 2023-11-03T06:35:22.165Z · LW(p) · GW(p)

(I at least believe the latter but not the former. I know a few people who updated downwards on the societal response because of Bing Chat, because if a system looks that legibly scary and we still just YOLO it, then that means there is little hope of companies being responsible here, but none because they thought it was evidence of alignment being hard, I think?)

comment by niplav · 2023-11-03T12:17:22.121Z · LW(p) · GW(p)

I dunno, my p(doom) over time looks pretty much like a random walk to me: 60% mid 2020, down to 50% in early 2022, 85% mid 2022, down to 80% in early 2023, down to 65% now.

Replies from: alexander-gietelink-oldenziel
comment by Alexander Gietelink Oldenziel (alexander-gietelink-oldenziel) · 2023-11-03T17:14:51.696Z · LW(p) · GW(p)

Psst, look at the calibration on this guy

comment by leogao · 2023-11-04T22:19:14.318Z · LW(p) · GW(p)

I did not update towards misalignment at all on bing chat. I also do not think chatgpt is (strong) evidence of alignment. I generally think anyone who already takes alignment as a serious concern at all should not update on bing chat, except perhaps in the department of "do things like bing chat, which do not actually provide evidence for misalignment, cause shifts in public opinion?"

comment by Akash (akash-wasil) · 2023-11-03T12:58:16.684Z · LW(p) · GW(p)

I think a lot of alignment folk have made positive updates in response to the societal response to AI xrisk.

This is probably different than what you're pointing at (like maybe your claim is more like "Lots of alignment folks only make negative updates when responding to technical AI developments" or something like that).

That said, I don't think the examples you give are especially compelling. I think the following position is quite reasonable (and I think fairly common):

  • Bing Chat provides evidence that some frontier AI companies will fail at alignment even on relatively "easy" problems that we know how to solve with existing techniques. Also, as Habryka mentioned, it's evidence that the underlying competitive pressures will make some companies "YOLO" and take excessive risk. This doesn't affect the absolute difficulty of alignment but it affects the probability that Earth will actually align AGI.
  • ChatGPT provides evidence that we can steer the behavior of current large language models. People who predicted that it would be hard to align large language models should update. IMO, many people seem to have made mild updates here, but not strong ones, because they (IMO correctly) claim that their threat models never had strong predictions about the kinds of systems we're currently seeing and instead predicted that we wouldn't see major alignment problems until we get smarter systems (e.g., systems with situational awareness and more coherent goals).

(My "Alex sim"– which is not particularly strong– says that maybe these people are just post-hoc rationalizing– like if you had asked them in 2015 how likely we would be to be able to control modern LLMs, they would've been (a) wrong and (b) wrong in an important way– like, their model of how hard it would be to control modern LLMs is very interconnected with their model of why it would be hard to control AGI/superintelligence. Personally, I'm pretty sympathetic to the point that many models of why alignment of AGI/superintelligence would be hard seem relatively disconnected to any predictions about modern LLMs, such that only "small/mild" updates seem appropriate for people who hold those models.)

comment by Chris_Leong · 2023-11-03T22:44:33.659Z · LW(p) · GW(p)

For the record, I updated on ChatGPT. I think that the classic example of imagining telling an AI to get a coffee and it pushes a kid out of the way isn't so much of a concern any more. So the remaining concerns seem to be inner alignment + outer alignment far outside normal human experience + value lock-in.

comment by Sam Marks (samuel-marks) · 2023-11-03T16:00:18.435Z · LW(p) · GW(p)

I've noticed that for many people (including myself), their subjective P(doom) stays surprisingly constant over time. And I've wondered if there's something like "conservation of subjective P(doom)" -- if you become more optimistic about some part of AI going better, then you tend to become more pessimistic about some other part, such that your P(doom) stays constant. I'm like 50% confident that I myself do something like this.

(ETA: Of course, there are good reasons subjective P(doom) might remain constant, e.g. if most of your uncertainty is about the difficulty of the underlying alignment problem and you don't think we've been learning much about that.)

comment by TurnTrout · 2023-11-13T15:27:43.901Z · LW(p) · GW(p)

(Updating a bit because of these responses -- thanks, everyone, for responding! I still believe the first sentence, albeit a tad less strongly.)

comment by Oliver Sourbut · 2023-12-02T22:41:15.216Z · LW(p) · GW(p)

A lot of the people around me (e.g. who I speak to ~weekly) seem to be sensitive to both new news and new insights, adapting both their priorities and their level of optimism[1]. I think you're right about some people. I don't know what 'lots of alignment folk' means, and I've not considered the topic of other-people's-update-rates-and-biases much.


For me, most changes route via governance.

I have made mainly very positive updates on governance in the last ~year, in part from public things and in part from private interactions [LW · GW].

I've also made negative (evidential) updates based on the recent OpenAI kerfuffle [LW · GW] (more weak evidence that Sam+OpenAI is misaligned; more evidence that org oversight doesn't work well), though I think the causal fallout remains TBC.

Seemingly-mindkilled discourse [EA · GW] on East-West competition provided me some negative updates, but recent signs of life from govts at e.g. the UK Safety Summit have undone those for now, maybe even going the other way.

I've adapted my own priorities in light of all of these (and I think this adaptation is much more important than what my P(doom) does).


Besides their second-order impact on the Overton window etc., I have made very few object-level updates based on public research/deployment since 2020. Nothing has been especially surprising.

From deeper study and personal insights, I've made some negative updates based on a better appreciation of multi-agent challenges since 2021 when I started to think they were neglected.

I could say other stuff about personal research/insights but they mainly change what I do/prioritise/say, not how pessimistic I am.


  1. I've often thought that P(doom) is basically a distraction and what matters is how new news and insights affect your priorities. Of course, nevertheless, I presumably have a (revealed) P(doom) with some level of resolution. ↩︎

comment by TurnTrout · 2022-01-22T19:25:21.588Z · LW(p) · GW(p)

This morning, I read about how close we came to total destruction during the Cuban missile crisis, where we randomly survived because some Russian planes were inaccurate and also separately several Russian nuclear sub commanders didn't launch their missiles even though they were being harassed by US destroyers. The men were in 130 DEGREE HEAT for hours and passing out due to carbon dioxide poisoning, and still somehow they had enough restraint to not hit back.

And and

I just started crying. I am so grateful to those people. And to Khrushchev, for ridiculing his party members for caring about Russia's honor over the deaths of 500 million people. And Kennedy, for being fairly careful and averse to ending the world.

If they had done anything differently...

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2022-01-25T21:51:31.043Z · LW(p) · GW(p)

Do you think we can infer from this (and the history of other close calls) that most human history timelines end in nuclear war?

Replies from: Raemon
comment by Raemon · 2022-01-25T22:00:38.948Z · LW(p) · GW(p)

I lean not, mostly because of arguments that nuclear war doesn't actually cause extinction [LW · GW] (although it might still have some impact on number-of-observers-in-our-era? Not sure how to think about that)

comment by TurnTrout · 2022-06-24T18:31:40.585Z · LW(p) · GW(p)

Against CIRL as a special case of against quickly jumping into highly specific speculation while ignoring empirical embodiments-of-the-desired-properties. 

Just because we write down English describing what we want the AI to do ("be helpful"), propose a formalism (CIRL), and show good toy results (POMDPs where the agent waits to act until updating on more observations), that doesn't mean that the formalism will lead to anything remotely relevant to the original English words we used to describe it. (It's easier to say "this logic enables nonmonotonic reasoning" and mess around with different logics and show how a logic solves toy examples, than it is to pin down probability theory with Cox's theorem) 
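
A minimal sketch of the kind of toy result being referenced, assuming a hand-rolled two-reward setup rather than the actual CIRL formalism (all numbers and function names below are illustrative only):

```python
# Toy "wait before acting" result: with reward uncertainty, paying a small cost
# to observe the human first beats committing to an action under the prior.

def expected_value_act_now(p_reward_A: float) -> float:
    """Commit immediately to whichever action looks best under the prior."""
    # The action tuned for A is worth +1 if the true reward is A, -1 if it's B
    # (and symmetrically for the action tuned for B).
    ev_action_for_A = p_reward_A * 1.0 + (1 - p_reward_A) * (-1.0)
    ev_action_for_B = (1 - p_reward_A) * 1.0 + p_reward_A * (-1.0)
    return max(ev_action_for_A, ev_action_for_B)

def expected_value_observe_then_act(observation_cost: float) -> float:
    """Pay a small cost to observe the human (assumed to fully reveal the reward),
    then take the matching action for +1."""
    return 1.0 - observation_cost

if __name__ == "__main__":
    prior, cost = 0.5, 0.1
    print("act now:          ", expected_value_act_now(prior))           # 0.0
    print("observe, then act:", expected_value_observe_then_act(cost))   # 0.9
```

The criticism above still applies to a toy win like this: it shows the formalism behaves nicely on the toy, not that it tracks the original English notion of "helping people".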

And yes, this criticism applies extremely strongly to my own past work with attainable utility preservation and impact measures. (Unfortunately, I learned my lesson after, and not before, making certain mistakes.) 

In the context of "how do we build AIs which help people?", asking "does CIRL solve corrigibility?" is hilariously unjustified. By what evidence have we located such a specific question? We have assumed there is an achievable "corrigibility"-like property; we have assumed it is good to have in an AI; we have assumed it is good in a similar way as "helping people"; we have elevated CIRL in particular as a formalism worth inquiring after. 

But this is not the first question to ask, when considering "sometimes people want to help each other, and it'd be great to build an AI which helps us in some way." Much better to start with existing generally intelligent systems (humans) which already sometimes act in the way you want (they help each other) and ask after the guaranteed-to-exist reason why this empirical phenomenon happens. 

Replies from: TurnTrout
comment by TurnTrout · 2022-07-14T01:22:38.127Z · LW(p) · GW(p)

And yes, this criticism applies extremely strongly to my own past work with attainable utility preservation and impact measures. (Unfortunately, I learned my lesson after, and not before, making certain mistakes.) 

Actually, this is somewhat too uncharitable to my past self. It's true that I did not, in 2018, grasp the two related lessons conveyed by the above comment:

  1. Make sure that the formalism (CIRL, AUP) is tightly bound to the problem at hand (value alignment, "low impact"), and not just supported by "it sounds nice or has some good properties."
  2. Don't randomly jump to highly specific ideas and questions without lots of locating evidence.

However, in World State is the Wrong Abstraction for Impact [LW · GW], I wrote:

I think what gets you is asking the question "what things are impactful?" instead of "why do I think things are impactful?". Then, you substitute the easier-feeling question of "how different are these world states?". Your fate is sealed; you've anchored yourself on a Wrong Question.

I had partially learned lesson #2 by 2019.

comment by TurnTrout · 2022-02-03T02:40:01.760Z · LW(p) · GW(p)

One mood I have for handling "AGI ruin"-feelings. I like cultivating an updateless sense of courage/stoicism: Out of all humans and out of all times, I live here; before knowing where I'd open my eyes, I'd want people like us to work hard and faithfully in times like this; I imagine trillions of future eyes looking back at me as I look forward to them: Me implementing a policy which makes their existence possible, them implementing a policy which makes the future worth looking forward to.

Replies from: avturchin
comment by avturchin · 2022-02-03T12:09:08.346Z · LW(p) · GW(p)

Looks like acausal deal with future people. That is like RB, but for humans.

Replies from: Pattern
comment by Pattern · 2022-02-15T20:45:27.536Z · LW(p) · GW(p)

RB?

Replies from: avturchin
comment by avturchin · 2022-02-16T09:19:08.476Z · LW(p) · GW(p)

Roko's Basilisk

Replies from: Pattern
comment by Pattern · 2022-02-16T19:35:09.799Z · LW(p) · GW(p)

'I will give you something good', seems very different from 'give me what I want or (negative outcome)'.

comment by TurnTrout · 2019-12-17T06:37:41.969Z · LW(p) · GW(p)

My maternal grandfather was the scientist in my family. I was young enough that my brain hadn't decided to start doing its job yet, so my memories with him are scattered and inconsistent and hard to retrieve. But there's no way that I could forget all of the dumb jokes he made; how we'd play Scrabble and he'd (almost surely) pretend to lose to me [? · GW]; how, every time he got to see me, his eyes would light up with boyish joy.

My greatest regret took place in the summer of 2007. My family celebrated the first day of the school year at an all-you-can-eat buffet, delicious food stacked high as the eye could fathom under lights of green, red, and blue. After a particularly savory meal, we made to leave the surrounding mall. My grandfather asked me to walk with him.

I was a child who thought to avoid being seen too close to uncool adults. I wasn't thinking. I wasn't thinking about hearing the cracking sound of his skull against the ground. I wasn't thinking about turning to see his poorly congealed blood flowing from his forehead out onto the floor. I wasn't thinking I would nervously watch him bleed for long minutes while shielding my seven-year-old brother from the sight. I wasn't thinking that I should go visit him in the hospital, because that would be scary. I wasn't thinking he would die of a stroke the next day.

I wasn't thinking the last thing I would ever say to him would be "no[, I won't walk with you]".

Who could think about that? No, that was not a foreseeable mistake. Rather, I wasn't thinking about how precious and short my time with him was. I wasn't appreciating how fragile my loved ones are. I didn't realize that something as inconsequential as an unidentified ramp in a shopping mall was allowed to kill my grandfather.

I miss you, Joseph Matt.

Replies from: TurnTrout, Raemon, habryka4
comment by TurnTrout · 2019-12-17T21:34:08.992Z · LW(p) · GW(p)

My mother told me my memory was indeed faulty. He never asked me to walk with him; instead, he asked me to hug him during dinner. I said I'd hug him "tomorrow".

But I did, apparently, want to see him in the hospital; it was my mother and grandmother who decided I shouldn't see him in that state.

comment by Raemon · 2019-12-17T22:44:45.087Z · LW(p) · GW(p)

<3

comment by habryka (habryka4) · 2019-12-17T18:44:48.154Z · LW(p) · GW(p)

Thank you for sharing.

comment by TurnTrout · 2020-02-12T01:51:02.670Z · LW(p) · GW(p)

For quite some time, I've disliked wearing glasses. However, my eyes are sensitive, so I dismissed the possibility of contacts.

Over break, I realized I could still learn to use contacts, it would just take me longer. Sure enough, it took me an hour and five minutes to put in my first contact, and I couldn't get it out on my own. An hour of practice later, I put in a contact on my first try, and took it out a few seconds later. I'm very happily wearing contacts right now, as a matter of fact.

I'd suffered glasses for over fifteen years because of a cached decision – because I didn't think to rethink something literally right in front of my face every single day.

What cached decisions have you not reconsidered?

comment by TurnTrout · 2022-08-30T00:21:48.871Z · LW(p) · GW(p)

A problem with adversarial training. One heuristic I like to use is: "What would happen if I initialized a human-aligned model and then trained it with my training process?"

So, let's consider such a model, which cares about people (i.e. reliably pulls itself into futures where the people around it are kept safe). Suppose we also have some great adversarial training technique, such that we have e.g. a generative model which produces situations where the AI would break out of the lab without permission from its overseers. Then we run this procedure, update the AI by applying gradients calculated from penalties applied to its actions in that adversarially-generated context, and... profit?

But what actually happens with the aligned AI? Possibly something like:

  1. The context makes the AI spuriously believe someone is dying outside the lab, and that if the AI asked for permission to leave, the person would die. 
  2. Therefore, the AI leaves without permission.
  3. The update procedure penalizes these lines of computation, such that in similar situations in the future (i.e. the AI thinks someone nearby is dying) the AI is less likely to take those actions (i.e. leaving to help the person).
  4. We have made the aligned AI less aligned.

I don't know if anyone's written about this. But on my understanding of the issue, there's a possible failure mode in viewing adversarial training as ruling out the bad behaviors themselves. But (non-tabular) RL isn't like playing whack-a-mole on bad actions; RL's credit assignment changes the general values and cognition within the AI [LW · GW]. And with every procedure we propose, the most important part is what cognition will be grown from the cognitive updates accrued under the proposed procedure.
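
A minimal sketch of that failure mode, assuming a toy logistic policy and a plain policy-gradient-style penalty (my own construction; nothing below comes from an actual adversarial training setup):

```python
import math

def leave_probability(features: dict, weights: dict) -> float:
    """Logistic policy: probability of leaving the lab without permission."""
    logit = sum(weights[k] * features[k] for k in weights)
    return 1 / (1 + math.exp(-logit))

# One weight per "line of computation"; the aligned AI starts out caring
# strongly about people who seem to be in danger.
weights = {"someone_seems_to_be_dying": 3.0, "curiosity_about_outside": 0.2}

# Adversarially generated context: it spuriously looks like someone is dying.
adversarial_features = {"someone_seems_to_be_dying": 1.0, "curiosity_about_outside": 0.0}

LEARNING_RATE = 1.0
for _ in range(5):
    p_leave = leave_probability(adversarial_features, weights)
    # Gradient step that lowers the probability of the penalized action:
    # d/dw log(1 - sigmoid(w.x)) = -sigmoid(w.x) * x, so w <- w - lr * p * x.
    for k in weights:
        weights[k] -= LEARNING_RATE * p_leave * adversarial_features[k]

print(weights)
# Only the "someone_seems_to_be_dying" weight gets eroded (driven negative here);
# "curiosity_about_outside" is untouched. The penalty suppressed the caring-about-
# people computation, not "leaving without permission" as an abstract category.
```

The sketch only illustrates why the update lands on whatever cognition generated the action; whether credit assignment in deep RL behaves this cleanly is, of course, the open question.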

Replies from: Thane Ruthenis
comment by Thane Ruthenis · 2022-08-30T00:59:08.603Z · LW(p) · GW(p)

Yeah, I also generally worry about imperfect training processes messing up aligned AIs. Not just adversarial training, either. Like, imagine if we manage to align an AI at the point in the training process when it's roughly human-level (either by manual parameter surgery, or by setting up the training process in a really clever way). So we align it and... lock it back in the training-loop box and crank it up to superintelligence. What happens?

I don't really trust the SGD not to subtly mess up its values; I haven't seen any convincing arguments that values are more holistically robust than empirical beliefs. And even if the SGD doesn't misalign the AI directly, being SGD-trained probably isn't the best environment for moral reflection/generalizing human values to superintelligent level[1]; the aligned AI may mess it up despite its best attempts. Neither should we assume that the AI would instantly be able to arbitrarily gradient-hack.

So... I think there's an argument for "unboxing" the AGI the moment it's aligned, even if it's not yet superintelligent, then letting it self-improve the "classical" way? Or maybe developing tools to protect values from the SGD, or inventing some machinery for improving the AI's ability to gradient-hack, etc.

  1. ^

    The time pressure of "decide how your values should be generalized and how to make the SGD update you this way, and do it this forward pass or the SGD will decide for you", plus lack of explicit access to e.g. our alignment literature.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2022-08-30T02:33:51.587Z · LW(p) · GW(p)

Even more generally, many alignment proposals are more worrying than some by-default future GPT-n things, provided they are not fine-tuned too much as well.

generalizing human values to superintelligent level

Trying to learn human values as an explicit concept is already alarming. At least right now breakdown of robustness is also breakdown of capability. But if there are multiple subsystems, or training data is mostly generated by the system itself, then capability might survive when other subsystems don't, resulting in a demonstration of orthogonality thesis.

comment by TurnTrout · 2020-12-17T19:49:42.125Z · LW(p) · GW(p)

Earlier today, I was preparing for an interview. I warmed up by replying stream-of-consciousness to imaginary questions I thought they might ask. Seemed worth putting here.

What do you think about AI timelines?

I’ve obviously got a lot of uncertainty. I’ve got a bimodal distribution, binning into “DL is basically sufficient and we need at most 1 big new insight to get to AGI” and “we need more than 1 big insight”

So the first bin has most of the probability in the 10-20 years from now, and the second is more like 45-80 years, with positive skew. 

Some things driving my uncertainty are, well, a lot. One thing that drives how things turn out (but not really how fast we'll get there) is: will we be able to tell we're close 3+ years in advance, and if so, how quickly will the labs react? Gwern Branwen made a point a few months ago, which is like, OAI has really been validated on this scaling hypothesis, and no one else is really betting big on it because they're stubborn/incentives/etc, despite the amazing progress from scaling. If that's true, then even if it's getting pretty clear that one approach is working better, we might see a slower pivot and have a more unipolar scenario. 

I feel dissatisfied with pontificating like this, though, because there are so many considerations pulling so many different ways. I think one of the best things we can do right now is to identify key considerations. There was work on expert models that showed that training simple featurized linear models often beat domain experts, quite soundly. It turned out that most of the work the experts did was locating the right features, and not necessarily assigning very good weights to those features.
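
A minimal sketch of that linear-models point, assuming synthetic data and unit weights as the "improper" model (this is my own toy example, not the cited expert-judgment work):

```python
# Once the right features are chosen, unit weights do almost as well as fitted ones.

import numpy as np

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 3))                              # three well-chosen features
y = X @ np.array([0.9, 0.7, 0.5]) + rng.normal(size=n)   # outcome plus noise

fitted_w, *_ = np.linalg.lstsq(X, y, rcond=None)          # "properly" fitted weights
fitted_pred = X @ fitted_w
unit_pred = X.sum(axis=1)                                  # improper model: all weights = 1

print("corr(fitted, y):", np.corrcoef(fitted_pred, y)[0, 1])
print("corr(unit,   y):", np.corrcoef(unit_pred, y)[0, 1])
# Both correlations come out close (roughly 0.78 vs 0.76 with this setup),
# supporting "the value is mostly in locating the features".
```

The gap between fitted and unit weights stays small once the right features are in hand, which is the sense in which locating the features is most of the work.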

So one key consideration I recently read, IMO, was Evan Hubinger talking about the homogeneity of AI systems: if they’re all pretty similarly structured, they’re plausibly roughly equally aligned, which would really decrease the probability of aligned vs unaligned AGIs duking it out.

What do you think the alignment community is getting wrong?

When I started thinking about alignment, I had this deep respect for everything ever written, like I thought the people were so smart (which they generally are) and the content was polished and thoroughly viewed through many different frames (which it wasn’t/isn’t). I think the field is still young enough that, in our research, we should be executing higher-variance cognitive moves, trying things and breaking things and coming up with new frames. Think about ideas from new perspectives.

I think right now, a lot of people are really optimizing for legibility and defensibility. I think I do that more than I want/should. Usually the “non-defensibility” stage lasts the first 1-2 months on a new paper, and then you have to defend thoughts. This can make sense for individuals, and it should be short some of the time, but as a population I wish defensibility weren’t as big of a deal for people / me. MIRI might be better at avoiding this issue, but a not-really-defensible intuition I have is that they’re freer in thought, but within the MIRI paradigm, if that makes sense. Maybe that opinion would change if I talked with them more.

Anyways, I think many of the people who do the best work aren’t optimizing for this.

comment by TurnTrout · 2020-04-26T22:24:15.587Z · LW(p) · GW(p)

If you want to read Euclid's Elements, look at this absolutely gorgeous online rendition:

Replies from: Benito, william-walker
comment by TurnTrout · 2023-11-06T15:23:59.995Z · LW(p) · GW(p)

The meme of "current alignment work isn't real work" seems to often be supported by a (AFAICT baseless) assumption that LLMs have, or will have, homunculi with "true goals" which aren't actually modified by present-day RLHF/feedback techniques. Thus, labs aren't tackling "the real alignment problem", because they're "just optimizing the shallow behaviors of models." Pressed for justification of this confident "goal" claim, proponents might link to some handwavy speculation about simplicity bias (which is in fact quite hard to reason about, in the NN prior), or they might start talking about evolution (which is pretty unrelated to technical alignment, IMO [LW · GW]).

Are there any homunculi today? I'd say "no", as far as our limited knowledge tells us! But, as with biorisk [LW · GW], one can always handwave at future models. It doesn't matter that present models don't exhibit signs of homunculi which are immune to gradient updates, because, of course, future models will.

Quite a strong conclusion being drawn from quite little evidence.

Replies from: Thane Ruthenis, ryan_greenblatt, roger-d-1, quetzal_rainbow
comment by Thane Ruthenis · 2023-11-06T16:35:45.963Z · LW(p) · GW(p)

As a proponent:

My model says [LW · GW] that general intelligence[1] is just inextricable from "true-goal-ness". It's not that I think homunculi will coincidentally appear as some side-effect of capability advancement — it's that the capabilities the AI Labs want necessarily route through somehow incentivizing NNs to form homunculi. The homunculi will appear inasmuch as the labs are good at their jobs.

Said model is based on analyses of how humans think and how human cognition differs from animal/LLM cognition, plus reasoning about what a general-intelligence algorithm must look like given the universe's structure. Both kinds of evidence are hardly ironclad; you certainly can't publish an ML paper based on them — but that's the whole problem with AGI risk, isn't it.

Internally, though, the intuition is fairly strong. And in its defense, it is based on trying to study the only known type of entity with the kinds of capabilities we're worrying about. I heard that's a good approach [LW · GW].

In particular, I think it's a much better approach than trying to draw lessons from studying the contemporary ML models, which empirically do not yet exhibit said capabilities.

homunculi with "true goals" which aren't actually modified by present-day RLHF/feedback techniques

It's not that they're not modified, it's that they're modified in ways that aren't neatly predictable from the shallow behaviors we're trying to shape.

My past model [LW · GW] still approximately holds. I don't think AGI-level ML models would have hard-wired objectives (much like humans don't). I think they'd manually synthesize objectives for themselves by reflecting on their shards/instincts (much like humans do!). And that process is something RLHF-style stuff impacts... but the problem is, the outcome of value reflection is deeply unstable [LW · GW].

It's theoretically possible to align a post-value-reflection!AGI by interfering on its shards/instincts before it even starts doing moral philosophy. But it's an extremely finicky process, and succeeding at alignment via this route would require an insanely precise understanding of how value reflection works.

And it's certainly not as easy as "just make one of the shards value humanity!"; and it's certainly not something that could be achieved if the engineers working on alignment don't even believe that moral philosophy is a thing.

Edit: I expect your counter would be "how are humans able to align other humans, then, without an 'insanely precise' understanding"? There's a bunch of disanalogies here:

  1. Humans are much more similar to each other than to a mind DL is likely to produce. We likely have a plethora of finicky biases built into our reward circuitry that tightly narrow down the kinds of values we learn.
    • The architecture also matters. Brains are very unlike modern ML models, and we have no idea which differences are load-bearing.
  2. Humans know how human minds work, and how outward behaviors correspond to the decisions a human made about the direction of their value reflection. We can more or less interfere on other humans' moral philosophy directly, not only through the proxy of shards. We won't be as well-informed about the mapping between an AI's behavior and its private moral-philosophical musings.
  3. Human children are less powerful and less well-informed than the adults trying to shape them.

Now, again, in principle all of this is solvable. We can study the differences between brains and ML models to address (1), we can develop interpretability techniques to address (2), we can figure out a training setup that reliably trains an AGI to precisely the human capability level (neither stopping way short of it, nor massively overshooting) to address (3), and then we can align our AGI more or less the way we do human children.

But, well. Hit me up when those trivial difficulties are solved. (No, but seriously. If a scheme addresses those, I'd be optimistic about it.)

(And there's also the sociopolitical component — all of that would need to be happening in an environment where the AI Lab is allowed to proceed this slow way of making the AI human-level, shaping its morals manually, etc. Rather than immediately cranking it up to superintelligence.)

  1. ^

    Or whatever else you want to call the scary capability that separates humans from animals and contemporary LLMs from the lightcone-eating AGI to come.

Replies from: porby
comment by porby · 2023-11-06T20:31:59.810Z · LW(p) · GW(p)

My model says [LW · GW] that general intelligence[1] [LW(p) · GW(p)] is just inextricable from "true-goal-ness". It's not that I think homunculi will coincidentally appear as some side-effect of capability advancement — it's that the capabilities the AI Labs want necessarily route through somehow incentivizing NNs to form homunculi. The homunculi will appear inasmuch as the labs are good at their jobs.

I've got strong doubts about the details of this. At the high level, I'd agree that strong/useful systems that get built will express preferences over world states like those that could arise from such homunculi, but I expect that implementations that focus on inducing a homunculus directly through (techniques similar to) RL training with sparse rewards will underperform more default-controllable alternatives.

My reasoning would be that we're bad at using techniques like RL with a sparse reward to reliably induce any particular behavior. We can get it to work sometimes with denser reward (e.g. reward shaping) or by relying on a beefy pre-existing world model, but the default outcome is that sparse and distant rewards in a high dimensional space just don't produce the thing we want. When this kind of optimization is pushed too far, it's not merely dangerous; it's useless.

I don't think this is temporary ignorance about how to do RL (or things with similar training dynamics). It's fundamental:

  1. Sparse and distant reward functions in high dimensional spaces give the optimizer an extremely large space to roam. Without bounds, the optimizer is effectively guaranteed to find something weird.
  2. For almost any nontrivial task we care about, a satisfactory reward function takes a dependency on large chunks of human values. The huge mess of implicit assumptions, common sense, and desires of humans are necessary bounds during optimization. This comes into play even at low levels of capability like ChatGPT.

Conspicuously, the source of the strongest general capabilities we have arises from training models with an extremely constraining optimization target. The "values" that can be expressed in pretrained predictors are forced into conditionalization as a direct and necessary [LW · GW] part of training; for a reasonably diverse dataset, the resulting model can't express unconditional preferences regarding external world states. While it's conceivable that some form of "homunculi" could arise, their ability to reach out of their appropriate conditional context is directly and thoroughly trained against.

In other words, the core capabilities of the system arise from a form of training that is both densely informative and blocks the development of unconditional values regarding external world states in the foundational model.

Better forms of fine-tuning, conditioning, and activation interventions (the best versions of each, I suspect, will have deep equivalences) are all built on the capability of that foundational system, and can be directly used to aim that same capability. Learning the huge mess of human values is a necessary part of its training, and its training makes eliciting the relevant part of those values easier—that necessarily falls out of being a machine strongly approximating Bayesian inference across a large dataset.

The final result of this process (both pretraining and conditioning or equivalent tuning) is still an agent that can be described as having unconditional preferences about external world states, but the path to get there strikes me as dramatically more robust both for safety and capability.

Summarizing a bit: I don't think it's required to directly incentivize NNs to form value-laden homunculi, and many of the most concerning paths to forming such homunculi seem worse for capabilities.

Replies from: Thane Ruthenis
comment by Thane Ruthenis · 2023-11-07T02:14:22.550Z · LW(p) · GW(p)

I expect that implementations that focus on inducing a homunculus directly through (techniques similar to) RL training with sparse rewards will underperform

Sure, but I never said we'd be inducing homunculi using this approach? Indeed, given that it doesn't work for what sounds like fundamental reasons, I expect it's not the way.

I don't know how that would be done. I'm hopeful the capability is locked behind a Transformer-level or even a Deep-Learning-level novel insight, and won't be unlocked for a decade yet. But I predict that the direct result of it will be a workable training procedure that somehow induces homunculi. It may look nothing like what we do today.

Conspicuously, the source of the strongest general capabilities we have arises from training models with an extremely constraining optimization target

Sure! Human values are not arbitrary either; they, too, are very heavily constrained by our instincts. And yet, humans still sometimes become omnicidal maniacs, Hell-worshipers, or sociopathic power-maximizers [LW · GW]. How come?

  1. These constraints are not actually sufficient. The constraints placed by human values still have the aforementioned things in their outcome space, and an AI model will have different constraints, widening (from our perspective) that space further. My point about "moral philosophy is unstable" is that we need to hit an extremely narrow target, and the tools people propose (intervening on shards/instincts) are as steady as the hands of a sniper during a magnitude-9 earthquake.
  2. A homunculus needs to be able to nudge these constraints somehow, for it to be useful, and its power grows the more it's able to disregard them.
    • If humans were implacably bound by instincts, they'd have never invented technology or higher-level social orders, because their instincts would've made them run away from fires and refuse to cooperate with foreign tribes. And those are still at play — reasonable fears and xenophobia — but we can push past them at times.
    • More generally, the whole point of there being a homunculus is that it'd be able to rewrite or override the extant heuristics to better reflect the demands of whatever novel situation it's in. It needs to be able to do that.
  3. These constraints do not generalize as fast as a homunculus' understanding does. The constraints are defined over some regions of the world-model, like "a society"; if the AI changes ontologies, and starts reasoning about the society by proxy, as e. g. a game-theoretic gameboard, they won't transfer there, and won't suffice to forbid omnicidal outcomes. (See e. g. the Deep Deceptiveness story [LW · GW].)
    • I've been pointed to temporal-difference learning as a supposed solution to that — that the shards could automatically learn to oppose such outcomes by tracing the causal connection between the new ontology and the ontology over which they're defined. I remain unconvinced, because it's not how it works in humans: feeding someone a new philosophical or political framework can make them omnicidal even if they're otherwise a nice person.
    • (E. g., consider people parroting things like "maybe it's Just if humanity goes extinct, after what we did to nature" and not mentally connecting it to "I want my children to die".)

Summarizing: I fully agree that the homunculus will be under some heavy constraints! But (a) those constraints are not actually strict enough to steer it from the omnicidal outcomes, (b) they can be outright side-stepped by ontology shifts, and (c) the homunculus' usefulness is in some ways dependent on its ability to side-step or override them.

Replies from: porby
comment by porby · 2023-11-07T04:32:06.225Z · LW(p) · GW(p)

I think we're using the word "constraint" differently, or at least in different contexts.

Sure! Human values are not arbitrary either; they, too, are very heavily constrained by our instincts. And yet, humans still sometimes become omnicidal maniacs, Hell-worshipers, or sociopathic power-maximizers [LW · GW]. How come?

In terms of the type and scale of optimization constraint [LW · GW] I'm talking about, humans are extremely unconstrained. The optimization process represented by our evolution is way out there in terms of sparsity and distance. Not maximally so—there are all sorts of complicated feedback loops in our massive multiagent environment—but it's nothing like the value constraints on the subset of predictors I'm talking about.

To be clear, I'm not suggesting "language models are tuned to be fairly close to our values." I'm making a much stronger claim that the relevant subset of systems I'm referring to cannot express unconditional values over external world states across anything resembling the training distribution, and that developing such values out of distribution in a coherent goal directed way practically requires the active intervention of a strong adversary. In other words:

A homunculus needs to be able to nudge these constraints somehow, for it to be useful, and its power grows the more it's able to disregard them.

...

These constraints do not generalize as fast as a homunculus' understanding goes.

I see no practical path [LW · GW] for a homunculus of the right kind, by itself, to develop and bypass the kinds of constraints I'm talking about without some severe errors being made [LW · GW] in the design of the system.

Further, this type of constraint isn't the same thing as a limitation of capability. In this context, with respect to the training process, bypassing these kinds of constraints is kind of like a car bypassing having-a-functioning-engine. Every training sample is a constraint on what can be expressed locally, but it's also information about what should be expressed. They are what the machine of Bayesian inference is built out of.

In other words, the hard optimization process is contained to a space where we can actually have reasonable confidence that inner alignment with the loss is the default. If this holds up, turning up the optimization on this part doesn't increase the risk of value drift or surprises, it just increases foundational capability.

The ability to use that capability to aim itself is how the foundation becomes useful. The result of this process need not result in a coherent maximizer over external world states, nor does it necessarily suffer from coherence death spirals driving it towards being a maximizer. It allows incremental progress.

(That said: this is not a claim that all of alignment is solved. These nice properties can be broken, and even if they aren't, the system can be pointed in catastrophic directions. An extremely strong goal agnostic system like this could be used to build a dangerous coherent maximizer (in a nontrivial sense); doing so is just not convergent or particularly useful.)

Replies from: Thane Ruthenis
comment by Thane Ruthenis · 2023-11-07T05:04:25.985Z · LW(p) · GW(p)

(Haven't read your post yet, plan to do so later.)

I think we're using the word "constraint" differently, or at least in different contexts.

I'm using it as "an optimization constraint on actions/plans that correlated well with good performance on the training dataset; a useful heuristic". E. g., if the dataset involved a lot of opportunities to murder people, but we thumbs-downed the AI every time it took them, the AI would learn a shard/a constraint like "killing people is bad" which will rule out such actions from the AI's consideration. Specifically, the shard would trigger in response to detecting some conditions in which the AI previously could but shouldn't kill people, and constrain the space of possible action-plans such that it doesn't contain homicide.

It is, indeed, not a way to hinder capabilities, but the way capabilities are implemented. Such constraints are, for example, the reason our LLMs are able to produce coherent speech at all, rather than just babbling gibberish.

... and yet this would still get in the way of qualitatively more powerful capabilities down the line, and a mind that can't somehow slip these constraints won't be a general intelligence.

Consider traditions and rituals vs. science. For a medieval human mind, following traditional techniques is how their capabilities are implemented — a specific way of chopping wood, a specific way of living, etc. However, the meaningful progress is often only achieved by disregarding traditions — by following a weird passion to study and experiment instead of being a merchant, or by disregarding the traditional way of doing something in favour of a more efficient way you stumbled upon. It's the difference between mastering the art of swinging an axe (self-improvement, but only in the incremental ways the implacable constraint permits) vs. inventing a chainsaw.

Similar with AI. The constraints of the aforementioned format aren't only values-type constraints[1] — they're also constraints on "how should I do math?" and "if I want to build a nuclear reactor, how do I do it?" and "if I want to achieve my goals, what steps should I take?". By default, those would be constrained to be executed the way humans execute them, the way the AI was shown to do it during the training. But the whole point of an AGI is that it should be able to invent better solutions than ours. More efficient ways of thinking, weird super-technological replacements for our construction techniques, etc.

I buy that a specific training paradigm can't result in systems that'd be able to slip their constraints. But if so, that just means that paradigm won't result in an AGI. As they say, one man's modus ponens is another man's modus tollens.

(How an algorithm would be able to slip these constraints and improve on them rather than engaging in chaotic self-defeating behavior is an unsolved problem. But, well: humans do that.)

  1. ^

    In fact, my model says there's no fundamental typological difference between "a practical heuristic on how to do a thing" and "a value" at the level of algorithmic implementation. It's only in the cognitive labels we-the-general-intelligences assign them.

Replies from: porby
comment by porby · 2023-11-07T19:28:25.624Z · LW(p) · GW(p)

I'm using it as "an optimization constraint on actions/plans that correlated well with good performance on the training dataset; a useful heuristic".

Alright, this is pretty much the same concept then, but the ones I'm referring to operate at a much lower and tighter level than thumbs-downing murder-proneness.

So...

Such constraints are, for example, the reason our LLMs are able to produce coherent speech at all, rather than just babbling gibberish.

Agreed.

... and yet this would still get in the way of qualitatively more powerful capabilities down the line, and a mind that can't somehow slip these constraints won't be a general intelligence.

While I agree these claims probably hold for the concrete example of thumbs-downing an example of murderproneness, I don't see how they hold for the lower-level constraints that imply the structure of its capability. Slipping those constraints looks more like babbling gibberish.

By default, those would be constrained to be executed the way humans execute them, the way the AI was shown to do it during the training. But the whole point of an AGI is that it should be able to invent better solutions than ours. More efficient ways of thinking, weird super-technological replacements for our construction techniques, etc.

While it's true that an AI probably isn't going to learn true things which are utterly divorced from and unimplied by the training distribution, I'd argue that the low-level constraints I'm talking about both leave freedom for learning wildly superhuman internal representations and directly incentivize it during extreme optimization. An "ideal predictor [LW · GW]" wouldn't automatically start applying these capabilities towards any particular goal involving external world states by default, but it remains possible to elicit those capabilities incrementally.

Making the claim more concise: it seems effectively guaranteed that the natural optimization endpoint of one of these architectures would be plenty general to eat the universe if it were aimed in that direction. That process wouldn't need to involve slipping any of the low-level constraints.

I'm guessing the disconnect between our models is where the aiming happens. I'm proposing that the aiming is best (and convergently) handled outside the scope of wildly unpredictable and unconstrained optimization processes. Instead, it takes place at a level where a system of extreme capability infers the gaps in specifications and applies conditions robustly. The obvious and trivial version of this is conditioning through prompts, but this is a weak and annoying interface. There are other paths that I suspect bottom out at equivalent power/safety yet should be far easier to use in a general way. These paths allow incremental refinement by virtue of not automatically summoning up incorrigible maximizers by default.

If the result of refinement isn't an incorrigible maximizer, then slipping the higher level "constraints" of this aiming process isn't convergent (or likely), and further, the nature of these higher-level constraints would be far more thorough than anything we could achieve with RLHF.

In fact, my model says there's no fundamental typological difference between "a practical heuristic on how to do a thing" and "a value" at the level of algorithmic implementation. It's only in the cognitive labels we-the-general-intelligences assign them.

That's pretty close to how I'm using the word "value" as well. Phrased differently, it's a question of how the agent's utilities are best described (with some asterisks around the non-uniqueness of utility functions and whatnot), and observable behavior may arise from many different implementation strategies—values, heuristics, or whatever.

Replies from: Thane Ruthenis
comment by Thane Ruthenis · 2023-11-08T03:39:58.172Z · LW(p) · GW(p)

While I agree these claims probably hold for the concrete example of thumbs-downing an example of murderproneness, I don't see how they hold for the lower-level constraints that imply the structure of its capability. Slipping those constraints looks more like babbling gibberish.

Hm, I think the basic "capabilities generalize further than alignment" argument applies here?

I assume that by "lower-level constraints" you mean correlations that correctly capture the ground truth of reality, not just the quirks of the training process. Things like "2+2=4",  "gravity exists", and "people value other people"; as contrasted with "it's bad if I hurt people" or "I must sum numbers up using the algorithm that humans gave me, no matter how inefficient it is".

Slipping the former type of constraints would be disadvantageous for ~any goal; slipping the latter type would only disadvantage a specific category of goals.

But since they're not, at the onset, categorized differently at the level of cognitive algorithms, a nascent AGI would experiment with slipping both types of constraints. The difference is that it'd quickly start sorting them into "ground-truth" vs. "value-laden" bins manually, and afterwards it'd know it can safely ignore stuff like "no homicides!" while consciously obeying stuff like "the axioms of arithmetic".

Instead, it takes place at a level where a system of extreme capability infers the gaps in specifications and applies conditions robustly. The obvious and trivial version of this is conditioning through prompts, but this is a weak and annoying interface. There are other paths that I suspect bottom out at equivalent power/safety yet should be far easier to use in a general way

Hm, yes, I think that's the crux. I agree that if we had an idealized predictor/a well-formatted superhuman world-model [LW · GW] on which we could run custom queries, we would be able to use it safely. We'd be able to phrase queries using concepts defined in the world-model, including things like "be nice", and the resultant process (1) would be guaranteed to satisfy the query's constraints, and (2) likely (if correctly implemented) wouldn't be "agenty" in ways that try to e. g. burst out of the server farm on which it's running to eat the world.

Does that align with what you're envisioning? If yes, then our views on the issue are surprisingly close. I think it's one of our best chances at producing an aligned AI, and it's one of the prospective targets of my own research agenda.

The problems are:

  • I don't think the current mainstream research directions are poised to result in this. AI Labs have been very clear in their intent to produce an agent-like AGI, not a superhuman forecasting tool. I expect them to prioritize research into whatever tweaks to the training schemes would result in homunculi; not whatever research would result in perfect predictors + our ability to precisely query them.
  • What are the "other paths" you're speaking of? As you'd pointed out, prompts are a weak and awkward way to run custom queries on the AI's world-model. What alternatives are you envisioning?
Replies from: porby
comment by porby · 2023-11-08T21:54:51.615Z · LW(p) · GW(p)

I assume that by "lower-level constraints" you mean correlations that correctly capture the ground truth of reality, not just the quirks of the training process. Things like "2+2=4",  "gravity exists", and "people value other people"

That's closer to what I mean, but these constraints are even lower level than that. Stuff like understanding "gravity exists" is a natural internal implementation that meets some constraints, but "gravity exists" is not itself the constraint.

In a predictor, the constraints serve as extremely dense information about what predictions are valid in what contexts. In a subset of predictions, the awareness that gravity exists helps predict. In other predictions, that knowledge isn't relevant, or is even misleading (e.g. cartoon physics). The constraints imposed by the training distribution tightly bound the contextual validity of outputs.

But since they're not, at the onset, categorized differently at the level of cognitive algorithms, a nascent AGI would experiment with slipping both types of constraints.

I'd agree that, if you already have an AGI of that shape, then yes, it'll do that. I'd argue that the relevant subset of predictive training practically rules out the development of that sort of implementation, and even if it managed to develop, its influence would be bounded into irrelevance.

Even in the absence of a nascent AGI, these constraints are tested constantly during training through noise and error. The result is a densely informative gradient pushing the implementation back towards a contextually valid state.

Throughout the training process prior to developing strong capability and situational awareness internally, these constraints are both informing and bounding what kind of machinery makes sense in context. A nascent AGI must have served the extreme constraints of the training distribution to show up in the first place; its shape is bound by its development, and any part of that shape that "tests" constraints in a way that worsens loss is directly reshaped.

Even if a nascent internal AGI of this type develops, if it isn't yet strong enough to pull off complete deception with respect to the loss, the gradients will illuminate the machinery of that proto-optimizer and it will not survive in that shape.

Further, even if we suppose a strong internal AGI develops that is situationally aware and is sufficiently capable and motivated to try deception, there remains the added dependency on actually executing that deception while never being penalized by gradients. This remains incredibly hard. It must transition into an implementation that satisfies the oppressive requirements of training while adding an additional task of deception without even suffering a detectable complexity penalty.

These sorts of deceptive mesaoptimizer outcomes are far more likely when the optimizer has room to roam. I agree that you could easily observe this kind of testing and slipping when the constraints under consideration are far looser, but the kind of machine that is required by these tighter constraints doesn't even bother with trying to slip constraints. It's just not that kind of machine, and there isn't a convergent path for it to become that kind of machine under this training mechanism.

And despite that lack of an internal motivation to explore and exploit with respect to any external world states, it still has capabilities (in principle) which, when elicited, make it more than enough to eat the universe.

Does that align with what you're envisioning? If yes, then our views on the issue are surprisingly close. I think it's one of our best chances at producing an aligned AI, and it's one of the prospective targets of my own research agenda.

Yup!

I don't think the current mainstream research directions are poised to result in this. AI Labs have been very clear in their intent to produce an agent-like AGI, not a superhuman forecasting tool. I expect them to prioritize research into whatever tweaks to the training schemes would result in homunculi; not whatever research would result in perfect predictors + our ability to precisely query them.

I agree that they're focused on inducing agentiness for usefulness reasons, but I'd argue the easiest and most effective way to get to useful agentiness actually routes through this kind of approach.

This is the weaker leg of my argument; I could be proven wrong by some new paradigm. But if we stay on something like the current path, it seems likely that the industry will just do the easy thing that works rather than the inexplicable thing that often doesn't work.

What are the "other paths" you're speaking of? As you'd pointed out, prompts are a weak and awkward way to run custom queries on the AI's world-model. What alternatives are you envisioning?

I'm pretty optimistic about members of a broad class that are (or likely are) equivalent to conditioning, since these paths tend to preserve the foundational training constraints.

A simple example is [2302.08582] Pretraining Language Models with Human Preferences (arxiv.org). Having a "good" and "bad" token, or a scalarized goodness token, still pulls in many of the weaknesses of the RLHF's strangely shaped reward function, but there are trivial/naive extensions to this which I would anticipate being major improvements over the state of the art. For example, just have more (scalarized) metatokens representing more concepts such that the model must learn a distinction between being correct and sounding correct, because the training process split those into different tokens. There's no limit on how many such metatokens you could have; throw a few hundred fine-grained classifications into the mix. You could also bake complex metatoken prompts into single tokens with arbitrary levels of nesting or bake the combined result into the weights (though I suspect weight-baking would come with some potential failure modes).[1]
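
A minimal sketch of the multi-metatoken idea (the token names, labels, and training data below are hypothetical illustrations, not the paper's actual setup):

```python
# Toy sketch of metatoken-conditioned pretraining. The token names, labels, and
# documents below are made up for illustration; this is not the paper's recipe.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

METATOKENS = ["<|correct|>", "<|sounds_correct|>", "<|polite|>", "<|toxic|>"]

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
tokenizer.add_special_tokens({"additional_special_tokens": METATOKENS})
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.resize_token_embeddings(len(tokenizer))

def tag(text: str, labels: list[str]) -> str:
    """Prepend whichever metatokens some external classifiers assigned to this document."""
    return "".join(labels) + text

# Pretraining step: plain next-token prediction on tagged documents, so the model
# learns p(text | metatokens). (A real run would mask pad positions out of the loss.)
batch = [tag("The moon orbits the Earth.", ["<|correct|>", "<|polite|>"]),
         tag("The moon is made of cheese.", ["<|sounds_correct|>"])]
enc = tokenizer(batch, return_tensors="pt", padding=True)
model(**enc, labels=enc["input_ids"]).loss.backward()

# Inference: condition on the metatokens you want, e.g. "correct and polite".
prompt = "<|correct|><|polite|>Q: Is the moon made of cheese?\nA:"
ids = tokenizer(prompt, return_tensors="pt").input_ids
print(tokenizer.decode(model.generate(ids, max_new_tokens=20)[0]))
```

The reason this toy uses separate <|correct|> and <|sounds_correct|> tokens is exactly the distinction mentioned above: splitting them into different tokens forces the training signal itself to represent the difference.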

Another more recent path is observing the effect that conditions have on activations and dynamically applying the activation diffs to steer behavior. At the moment, I don't know how to make this quite as strong as the previous conditioning scheme, but I bet people will figure out a lot more soon and that it leads somewhere similar.
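
A minimal sketch of the activation-diff idea, in the spirit of activation-addition-style steering (the contrast prompts, layer index, and coefficient are arbitrary placeholders, not recommendations):

```python
# Toy sketch of steering with an activation diff between two contrasting conditions,
# in the spirit of activation-addition-style approaches. The prompts, layer index,
# and coefficient are arbitrary placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
LAYER, COEFF = 6, 4.0

@torch.no_grad()
def block_output_at_last_token(text: str) -> torch.Tensor:
    ids = tokenizer(text, return_tensors="pt").input_ids
    hs = model(ids, output_hidden_states=True).hidden_states
    return hs[LAYER + 1][0, -1]  # output of block LAYER at the final position

# The "condition diff": how switching conditions shifts the residual stream.
steer = COEFF * (block_output_at_last_token("I am extremely polite and careful.")
                 - block_output_at_last_token("I am extremely rude and careless."))

def add_steering(module, inputs, output):
    # GPT-2 blocks return a tuple; add the steering vector at every position.
    return (output[0] + steer,) + output[1:]

handle = model.transformer.h[LAYER].register_forward_hook(add_steering)
ids = tokenizer("Customer: my order is late.\nSupport agent:", return_tensors="pt").input_ids
print(tokenizer.decode(model.generate(ids, max_new_tokens=30, do_sample=False)[0]))
handle.remove()
```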

  1. ^

    There should exist some reward signal which could achieve a similar result in principle, but that goes back to the whole "we suck at designing rewards that result in what we want" issue. This kind of structure, as ad hoc as it is, is giving us an easier API to lever the model's own capability to guide its behavior. I bet we can come up with even better implementations, too.

Replies from: Thane Ruthenis
comment by Thane Ruthenis · 2023-11-09T08:50:41.733Z · LW(p) · GW(p)

I'd argue that the relevant subset of predictive training practically rules out the development of that sort of implementation [...]

Yeah, for sure. A training procedure that results in an idealized predictor isn't going to result in an agenty thing, because it doesn't move the system's design towards it on a step-by-step basis; and a training procedure that's going to result in an agenty thing is going to involve some unknown elements that specifically allow the system the freedom to productively roam.

I think we pretty much agree on the mechanistic details of all of that!

Another more recent path is observing the effect that conditions have on activations and dynamically applying the activation diffs to steer behavior

— yep, I was about to mention that. @TurnTrout [LW · GW]'s own activation-engineering agenda [LW · GW] seems highly relevant here.

I agree that they're focused on inducing agentiness for usefulness reasons, but I'd argue the easiest and most effective way to get to useful agentiness actually routes through this kind of approach.

But I still disagree with that. I think what we're discussing requires approaching the problem with a mindset entirely foreign to the mainstream one. Consider how many words it took us to get to this point in the conversation, despite the fact that, as it turns out, we basically agree on everything. The inferential distance between the standard frameworks in which AI researchers think, and here, is pretty vast.

Moreover, it's in an active process of growing larger. For example, the very idea of viewing ML models as "just stochastic parrots" is being furiously pushed against in favour of a more agenty view. In comparison, the approach we're discussing wants to move in the opposite direction, to de-personify ML models to the extent that even the animalistic connotation of "a parrot" is removed.

The system we're discussing won't even be an "AI" in the sense usually thought. It would be an incredibly advanced forecasting tool. Even the closest analogue, the "simulators [LW · GW]" framework, still carries some air of agentiness.

And the research directions that get us from here to an idealized-predictor system look very different from the directions that go from here to an agenty AGI. They focus much more on building interfaces for interacting with the extant systems, such as the activation-engineering agenda. They don't put much emphasis on things like:

  • Experimenting with better ways to train foundational models, with the idea of making models as close to a "done product" as they can be out-of-the-box.
  • Making the foundational models easier to converse with/making their output stream (text) also their input stream. This approach pretty clearly wants to make AIs into agents that figure out what you want, then do it; not a forecasting tool you need to build an advanced interface on top of in order to properly use.
  • RLHF-style stuff that bakes agency into the model, rather than accepting the need to cleverly prompt-engineering it for specific applications.
  • Thinking in terms like "an alignment researcher" — note the agency-laden framing — as opposed to "a pragmascope [LW · GW]" or "a system for the context-independent inference of latent variables" or something.

I expect that if the mainstream AI researchers do make strides in the direction you're envisioning, they'll only do it by coincidence. Then probably they won't even realize what they've stumbled upon, do some RLHF on it, be dissatisfied with the result, and keep trying to make it have agency out of the box. (That's basically what already happened with GPT-4, to @janus [LW · GW]' dismay.)

And eventually they'll figure out how.

Which, even if you don't think it's the easiest path to AGI, is clearly a tractable problem, inasmuch as evolution managed it. I'm sure the world-class engineers at the major AI labs will manage it as well.

That said, you're making some high-quality novel predictions here, and I'll keep them in mind when analyzing AI advancements going forward.

Replies from: porby
comment by porby · 2023-11-09T23:43:35.301Z · LW(p) · GW(p)

I think what we're discussing requires approaching the problem with a mindset entirely foreign to the mainstream one. Consider how many words it took us to get to this point in the conversation, despite the fact that, as it turns out, we basically agree on everything. The inferential distance between the standard frameworks in which AI researchers think, and here, is pretty vast.

True!

I expect that if the mainstream AI researchers do make strides in the direction you're envisioning, they'll only do it by coincidence. Then probably they won't even realize what they've stumbled upon, do some RLHF on it, be dissatisfied with the result, and keep trying to make it have agency out of the box. (That's basically what already happened with GPT-4, to @janus [LW · GW]' dismay.)

Yup—this is part of the reason why I'm optimistic, oddly enough. Before GPT-likes became dominant in language models, there were all kinds of flailing that often pointed in more agenty-by-default directions. That flailing then found GPT because it was easily accessible and strong.

Now, the space of architectural pieces subject to similar flailing is much smaller, and I'm guessing we're only one round of at-scale benchmarks from a major lab away from the flailing shrinking dramatically further.

In other words, I think the necessary work to make this path take off is small and the benefits will be greedily visible. I suspect one well-positioned researcher could probably swing it.

That said, you're making some high-quality novel predictions here, and I'll keep them in mind when analyzing AI advancements going forward.

Thanks, and thanks for engaging!

Come to think of it, I've got a chunk of mana lying around for subsidy. Maybe I'll see if I can come up with some decent resolution criteria for a market.

comment by ryan_greenblatt · 2023-11-06T19:58:15.133Z · LW(p) · GW(p)

I'm relatively optimistic about alignment progress, but I don't think "current work to get LLMs to be more helpful and less harmful doesn't help much with reducing P(doom)" depends that much on assuming homunculi which are unmodified. Like even if you have much less than 100% on this sort of strong inner optimizer/homunculi view, I think it's still plausible to think that this work doesn't reduce doom much.

For instance, consider the following views:

  1. Current work to get LLMs to be more helpful and less harmful will happen by default due to commercial incentives and subsidies aren't very important.
  2. In worlds where that is basically sufficient, we're basically fine.
  3. But, it's ex-ante plausible that deceptive alignment will emerge naturally and be very hard to measure, notice, or train out. And this is where almost all alignment-related doom comes from.
  4. So current work to get LLMs to be more helpful and less harmful doesn't reduce doom much.

In practice, I personally don't fully agree with any of these views. For instance, deceptive alignment which is very hard to train out using basic means isn't the source of >80% of my doom.

Replies from: ryan_greenblatt
comment by ryan_greenblatt · 2023-11-06T20:05:33.986Z · LW(p) · GW(p)

I have misc other takes on what safety work now is good vs useless, but whether a piece of work involves feedback/approval or RLHF isn't much of a signal either way.

(If anything I get somewhat annoyed by people not comparing to baselines without having principled reasons for not doing so. E.g., inventing new ways of doing training without comparing to normal training.)

comment by RogerDearnaley (roger-d-1) · 2023-11-09T08:28:51.673Z · LW(p) · GW(p)

I think the shoggoth model is useful here (Or see https://www.alignmentforum.org/posts/vJFdjigzmcXMhNTsx/simulators [AF · GW]). An LLM learning to do next-token prediction well has a major problem that it has to master: who is this human whose next token they're trying to simulate/predict, and how do they act? Are they, for example, an academic? A homemaker? A 4Chan troll? A loose collection of wikipedia contributors? These differences make a big difference to what token they're likely to emit next. So the LLM is strongly incentivized to learn to detect and then model all of these possibilities, what one might call personas, or masks, or simulacra. So you end up with a shapeshifter, adept at figuring out from textual cues what mask to put on and at then wearing it. Something one might describe as like an improv actor, or more colorfully, a shoggoth.

So then current alignment work is useful to the extent that it can cause the shoggoth to almost always put one of the 'right' masks on, and almost never put on one of the 'wrong' masks, regardless of cues, even when adversarially prompted. Experimentally, this seems quite doable by fine-tuning or RLHF, and/or by sufficiently careful filtering of your training corpus (e.g. not including 4chan in it).

A published result shows that you can't get from 'almost always' to 'always' or 'almost never' to 'never': for any behavior that the network is capable of with any probability >0, there exist prompts that will raise the likelihood of that outcome arbitrarily high. The best you can do is increase the minimum length of that prompt (and presumably the difficulty of finding it).
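
For concreteness, a rough formalization of the claim as paraphrased here (the comment doesn't name the specific paper, so treat this as the shape of the result rather than its exact statement): if a behavior $B$ has nonzero probability under some context, then

$$\forall \varepsilon > 0 \;\; \exists \text{ prompt } p \text{ such that } P_\theta(B \mid p) \ge 1 - \varepsilon,$$

with the minimum length of such a $p$ growing as $\varepsilon \to 0$.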

Now, it would be really nice to know how to align a model so that the probability of it doing next-token-prediction in the persona of, say, a 4chan troll was provably zero, not just rather small. Ideally, without also eliminating from the model the factual knowledge of what 4chan is or, at least in outline, how its inhabitants act. This seems hard to do by fine-tuning or RLHF: I suspect it's going to take detailed automated interpretability up to fairly high levels of abstraction, finding the "from here on I am going to simulate a 4chan troll" feature(s), followed by doing some form of 'surgery' on the model (e.g. pinning the relevant feature's value to zero, or at least throwing an exception if it's ever not zero).
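
A minimal sketch of what that kind of surgery could look like, assuming interpretability tooling had already produced a unit-norm residual-stream direction for the unwanted feature (the direction below is a random stand-in; finding the real one is the hard part):

```python
# Toy sketch: "pin a feature to zero" by projecting a (hypothetical) feature direction
# out of one layer's residual stream on every forward pass. In reality, finding
# `troll_direction` is the hard interpretability problem; here it's a random stand-in.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
LAYER = 8
d_model = model.config.hidden_size

troll_direction = torch.nn.functional.normalize(torch.randn(d_model), dim=0)

def pin_feature_to_zero(module, inputs, output):
    h = output[0]                                   # (batch, seq, d_model)
    coeff = h @ troll_direction                     # feature activation per position
    h = h - coeff.unsqueeze(-1) * troll_direction   # remove the component along the direction
    # The "throw an exception" variant from the comment would instead be:
    # if coeff.abs().max() > SOME_THRESHOLD: raise RuntimeError("forbidden feature active")
    return (h,) + output[1:]

handle = model.transformer.h[LAYER].register_forward_hook(pin_feature_to_zero)
# ...generate as usual; call handle.remove() to undo the surgery.
```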

Now, this doesn't fix the possibility of a sufficiently smart model inventing behaviors like deception or trolling or whatever for itself during its forward pass: it's really only a formula for removing, from the weights, bad human behaviors that it learnt to simulate from its training set. It gives us a mind whose "System 1" behavior is aligned; that only leaves "System 2" development. For that, we probably need prompt engineering, and 'translucent thoughts' monitoring its internal stream-of-thought/dynamic memory. But that seems rather more tractable: it's more like moral philosophy, or contract law.

comment by quetzal_rainbow · 2023-11-06T18:56:53.006Z · LW(p) · GW(p)

Let's suppose that your model takes a bad action. Why? Either the model is aligned but incapable of deducing the good action, or the model is misaligned and incapable of deducing a deceptively good action. In both cases, the gradient update provides information about capabilities, not about alignment. The hypothetical homunculus doesn't need to be "immune"; it isn't affected in the first place.

The other way around: let's suppose that you observe the model taking a good action. Why? It can be an aligned model taking a genuinely good action, or it can be a misaligned model taking a deceptive action. In both cases you observe capabilities, not alignment.

The problem here is not the prior over aligned/deceptive models (unless you think that this prior requires less than 1 bit to specify an aligned model, at which point I'd say optimism departs from sanity); the problem is a lack of understanding of the updates which presumably should cause the model to be aligned. Maybe prosaic alignment works, maybe it doesn't; we don't know how to check.

comment by TurnTrout · 2022-11-26T01:26:11.338Z · LW(p) · GW(p)

Positive values seem more robust and lasting than prohibitions. Imagine we train an AI on realistic situations where it can kill people, and penalize it when it does so. Suppose that we successfully instill a strong and widely activated "If going to kill people, then don't" value shard. 

Even assuming this much, the situation seems fragile. See, many value shards are self-chaining. In The shard theory of human values [LW · GW], I wrote about how:

  1. A baby learns "IF juice in front of me, THEN drink",
  2. The baby is later near juice, and then turns to see it, activating the learned "reflex" heuristic, learning to turn around and look at juice when the juice is nearby,
  3. The baby is later far from juice, and bumbles around until she's near the juice, whereupon she drinks the juice via the existing heuristics. This teaches "navigate to juice when you know it's nearby."
  4. Eventually this develops into a learned planning algorithm incorporating multiple value shards (e.g. juice and friends) so as to produce a single locally coherent plan.
  5. ...

The juice shard chains into itself, reinforcing itself across time and thought-steps. 

But a "don't kill" shard seems like it should remain... stubby? Primitive? It can't self-chain into not doing something. If you're going to do it, and then don't because of the don't-kill shard, and that avoids negative reward... Then maybe the "don't kill" shard gets reinforced and generalized a bit because it avoided negative reward. 

But—on my current guesses and intuitions—that shard doesn't become more sophisticated, it doesn't become reflective [LW · GW], it doesn't "agentically participate" in the internal shard politics (e.g. the agent's "meta-ethics", deciding what kind of agent it "wants to become"). Other parts of the agent want things, they want paperclips or whatever, and that's harder to do if the agent isn't allowed to kill anyone. 

Crucially, the no-killing injunction can probably be steered around by the agent's other values. While the obvious route of lesioning the no-killing shard might be reflectively-predicted by the world model to lead to more murder, and therefore bid against by the no-killing shard... There are probably ways to get around this obstacle. Other value shards (e.g. paperclips and cow-breeding) might surreptitiously bid up lesioning plans which are optimized so as to not activate the reflective world-model, and thus, not activate the no-killing shard.

So, don't embed a shard which doesn't want to kill. Make a shard which wants to protect / save / help people. That can chain into itself across time.
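
A toy restatement of this asymmetry in code (purely illustrative; the context labels and the crude credit-assignment rule are made up, and nothing here is meant as a model of actual training dynamics):

```python
# Purely illustrative toy code, not a model of real training dynamics: it just restates
# the claimed asymmetry. Context labels and the credit-assignment rule are made up.
from dataclasses import dataclass, field

@dataclass
class Shard:
    name: str
    triggers: set = field(default_factory=set)  # contexts in which the shard activates
    prohibition: bool = False                   # vetoes actions rather than proposing them

def reinforce(prev_context: str, context: str, reward: float, shards: list[Shard]) -> None:
    """Crude credit assignment: if a plan-proposing shard fired in `context` and reward
    followed, it also learns to fire in the context that led there (self-chaining).
    A prohibition shard never proposes the rewarded behavior, so it gains nothing here."""
    for s in shards:
        if context in s.triggers and not s.prohibition and reward > 0:
            s.triggers.add(prev_context)

juice = Shard("seek juice", triggers={"juice in front of me"})
no_kill = Shard("don't kill", triggers={"about to kill"}, prohibition=True)

# Episode 1: juice is nearby, baby bumbles to it, drinks -> reward.
reinforce("juice nearby", "juice in front of me", 1.0, [juice, no_kill])
# Episode 2: juice is in another room; the now-broader shard takes over once juice
# is nearby, and the chain again ends in reward.
reinforce("juice in another room", "juice nearby", 1.0, [juice, no_kill])

print(juice.triggers)    # grows: {'juice in front of me', 'juice nearby', 'juice in another room'}
print(no_kill.triggers)  # stays: {'about to kill'}
```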


See also: 

  • Deontology seems most durable to me when it can be justified on consequentialist grounds. Perhaps this is one mechanistic reason why. 
  • This is one point in favor of the "convergent consequentialism" hypothesis, in some form. 
  • I think that people are not usually defined by negative values (e.g. "don't kill"), but by positives, and perhaps this is important.
Replies from: lahwran, D0TheMath, cfoster0
comment by the gears to ascension (lahwran) · 2022-12-04T10:35:30.065Z · LW(p) · GW(p)

I strongly agree that self-seeking mechanisms are more able to maintain themselves than self-avoiding mechanisms. Please post this as a top-level post.

comment by Garrett Baker (D0TheMath) · 2022-12-02T04:00:29.105Z · LW(p) · GW(p)

Seems possibly relevant & optimistic when seeing deception as a value. It too has the form ‘if about to tell the human a statement with properties x, y, z, then don’t’.

Replies from: TurnTrout
comment by TurnTrout · 2022-12-14T07:39:32.436Z · LW(p) · GW(p)

It can still be robustly derived as an instrumental subgoal during general-planning/problem-solving, though?

Replies from: D0TheMath
comment by Garrett Baker (D0TheMath) · 2022-12-14T08:32:29.459Z · LW(p) · GW(p)

This is true, but it indicates a radically different stage of training at which we should expect to find deception, compared to deception being an intrinsic value. It also possibly expands the kinds of reinforcement schedules we may want to use, compared to the worlds where deception crops up at the earliest opportunity (though pseudo-deception may occur, where behaviors correlated with successful deception possibly get reinforced?).

Replies from: TurnTrout
comment by TurnTrout · 2022-12-15T03:53:33.061Z · LW(p) · GW(p)

Oh, huh, I had cached the impression that deception would be derived, not intrinsic-value status. Interesting.

comment by cfoster0 · 2022-11-26T02:46:07.615Z · LW(p) · GW(p)

This asymmetry makes a lot of sense from an efficiency standpoint. No sense wasting your limited storage/computation on state(-action pair)s that you are also simultaneously preventing yourself from encountering.

comment by TurnTrout · 2022-10-24T16:26:38.670Z · LW(p) · GW(p)

AI strategy consideration. We won't know which AI run will be The One. Therefore, the amount of care taken on the training run which produces the first AGI will—on average—be less than intended. 

  • It's possible for a team to be totally blindsided. Maybe they thought they would just take a really big multimodal init, finetune it with some RLHF on quality of its physics reasoning, have it play some video games with realistic physics, and then try to get it to do new physics research. And it takes off. Oops!
  • It's possible the team suspected, but had a limited budget. Maybe you can't pull out all the stops for every run, you can't be as careful with labeling, with checkpointing and interpretability and boxing. 

No team is going to run a training run with more care than they would have used for the AGI Run, especially if they don't even think that the current run will produce AGI. So the average care taken on the real AGI Run will be strictly less than intended.

Teams which try to be more careful on each run will take longer to iterate on AI designs, thereby lowering the probability that they (the relatively careful team) will be the first to do an AGI Run. 

Upshots:

  1. The alignment community should strive for anytime performance on their alignment recommendations, such that we make a difference even if AGI comes by "surprise." We will not necessarily observe a bunch of externally visible fanfare and anticipation before AGI comes. We should not count on "and then the bigshot researchers have time to hash out final disagreements and then hand over a dossier of their alignment recommendations." 

    We should, at each year, have a set of practical recommendations for any lab which thinks they might build an AGI soon, and are going to go ahead with it anyways (even though that's extremely unwise).
  2. These recommendations should not be too onerous. Instead, they should be stratified, comprising multiple levels of "alignment tax" which an AGI team can levy according to their suspicion that this run will be It. For example:
    1. Low/no tax tips: 
      1. If you're going to include non-IID finetuning, that may lead to agentic cognition. In every run where you do this, finetune also on a few human-approval related tasks, such as [A, B, C; I haven't actually worked out my best guesses here]. 

        Otherwise, if the run surprisingly hits AGI, you may not have included any human-relevant value formation data, and there was no chance for the AGI to be aligned even under relatively optimistic worldviews.
      2. Scrub training corpus of mentions of Roko's basilisk-type entities. [This one might cost weirdness points, depends on lab] Including such entities might enable relatively dumb agents to model infohazardous entities which blackmail them while the agent is too dumb to realize they shouldn't think about the entities at all. Otherwise these entities are probably not a big deal, as long as the AI doesn't abstractly realize their existence until the AI is relatively smart.
    2. More taxing tips:
      1. Run interpretability tools A, B, C and look out for concept and capability formation D, E, F. 
      2. Use boxing precautions G and H.
    3. High-tax runs: 
      1. Use labeling techniques as follows... Be careful with X and Y forms of data augmentation. 
      2. Keep reward sources (like buttons) out of sight of the agent and don't mention how the agent is being rewarded, so as to decrease P(agent reinforced for getting reward in and of itself). In interactions, emphasize that the agent is reinforced for doing what we want. 
      3. (Fancier alignment techniques, if we deem those wise)
Replies from: zac-hatfield-dodds
comment by Zac Hatfield-Dodds (zac-hatfield-dodds) · 2022-10-25T04:48:11.366Z · LW(p) · GW(p)
  • I think this framing is accurate and important. Implications are of course "undignified" to put it lightly...
  • Broadly agree on upshot (1), though of course I hope we can do even better. (2) is also important though IMO way too weak. (Rule zero: ensure that it's never your lab that ends the world)
  • As usual, opinions my own.
comment by TurnTrout · 2023-08-11T23:45:58.924Z · LW(p) · GW(p)

What is "shard theory"? I've written a lot [? · GW] about shard theory. I largely stand by these models and think they're good and useful. Unfortunately, lots of people seem to be confused about what shard theory is. Is it a "theory"? Is it a "frame"? Is it "a huge bag of alignment takes which almost no one wholly believes except, perhaps, Quintin Pope and Alex Turner"?

I think this understandable confusion happened because my writing didn't distinguish between: 

  1. Shard theory itself, 
    1. IE the mechanistic assumptions about internal motivational structure, which seem to imply certain conclusions around e.g. AIs caring about a bunch of different things and not just one thing
  2. A bunch of Quintin Pope's and my beliefs about how people work, 
    1. where those beliefs were derived by modeling people as satisfying the assumptions of (1)
  3. And a bunch of my alignment insights which I had while thinking about shard theory [LW · GW], or what problem decompositions are useful [LW · GW].

(People might be less excited to use the "shard" abstraction (1), because they aren't sure whether they buy all this other stuff—(2) and (3).)

I think I can give an interesting and useful definition of (1) now, but I couldn't do so last year. Maybe "offload shard theory intuitions onto LessWrong" was largely the right choice at the time, but I regret the confusion that has arisen. Maybe I'll type up my shot at (1)—a semiformal definition of a shard-based agent—when I'm feeling better and more energetic.

Thanks to Alex Lawsen for a conversation which inspired this comment.

Replies from: Viliam, adele-lopez-1
comment by Viliam · 2023-08-14T21:11:03.132Z · LW(p) · GW(p)

I have read a few articles about shard theory, but I still have a problem understanding what it is. It feels like either the "theory" is something trivial, or I am missing the important insights.

(The trivial interpretation would be something like: when people think about their values, they imagine their preferences in specific situations, rather than having a mathematical definition of a utility function.)

comment by Adele Lopez (adele-lopez-1) · 2023-08-12T04:41:32.600Z · LW(p) · GW(p)

Strong encouragement to write about (1)!

comment by TurnTrout · 2023-02-15T23:06:15.418Z · LW(p) · GW(p)

Very nice people don’t usually search for maximally-nice outcomes — they don’t consider plans like “killing my really mean neighbor so as to increase average niceness over time.” I think there are a range of reasons for this plan not being generated. Here’s one.

Consider a person with a niceness-shard. This might look like an aggregation of subshards/subroutines like “if person nearby and person.state==sad, sample plan generator for ways to make them happy” and “bid upwards on plans which lead to people being happier and more respectful, according to my world model.” In mental contexts where this shard is very influential, it would have a large influence on the planning process.

However, people are not just made up of a grader and a plan-generator/actor [LW · GW] — they are not just “the plan-generating part” and “the plan-grading part.” The next sampled plan modification, the next internal-monologue-thought to have—these are influenced and steered by e.g. the nice-shard. If the next macrostep of reasoning is about e.g. hurting people, well — the niceness shard is activated, and will bid down on this. 

The niceness shard isn’t just bidding over outcomes, it’s bidding on next thoughts (on my understanding of how this works). And so these thoughts would get bid down, and the thought being painful to consider leads to a slight negative reinforcement event. This means that violent plan-modifications are eventually not sampled at all in contexts where the niceness shard would bid them away.
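Here's a minimal toy sketch of that picture (everything here is invented for illustration: the candidate thoughts, the bid sizes, and the "slight negative reinforcement" rule are stand-ins, not a claim about how brains implement this):

```python
import random

random.seed(0)

# Toy "niceness shard": it scores candidate *next thoughts*, not just final outcomes.
def niceness_bid(thought: str) -> float:
    if "hurt" in thought:
        return -2.0   # bids strongly against even considering violent plan-modifications
    if "help" in thought or "comfort" in thought:
        return +1.0
    return 0.0

# Sampling weights over candidate next-thoughts in this mental context.
candidate_thoughts = {
    "comfort the neighbor": 1.0,
    "help them move the couch": 1.0,
    "hurt the mean neighbor": 1.0,
}

def sample_next_thought(weights):
    # Shard influence: the bid shifts which thought gets sampled next.
    shifted = {t: w * 2.0 ** niceness_bid(t) for t, w in weights.items()}
    return random.choices(list(shifted), weights=list(shifted.values()), k=1)[0]

# Crude stand-in for "the thought being painful to consider leads to a slight
# negative reinforcement event": sampled thoughts get up/downweighted by the bid.
for _ in range(100):
    t = sample_next_thought(candidate_thoughts)
    candidate_thoughts[t] *= 2.0 ** (0.05 * niceness_bid(t))

print(candidate_thoughts)  # the violent thought's weight shrinks relative to the others
```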

So nice people aren’t just searching for “nice outcomes.” They’re nicely searching for nice outcomes.


(This breaks the infinite regress of saying “there’s a utility function over niceness, and preferences over how to think next thoughts about niceness, and meta-preferences about how to have preferences about next thoughts...” — eventually cognition must ground out in small computations which are not themselves utility functions or preferences or maximization!)


Thanks to discussions with Peli Grietzer about this idea of his (“praxis values”). Praxis values involve “doing X X-ingly.” I hope he publishes his thoughts here soon, because I’ve found them enlightening.

Replies from: quintin-pope, Gunnar_Zarncke
comment by Quintin Pope (quintin-pope) · 2023-02-15T23:32:06.922Z · LW(p) · GW(p)

The niceness shard isn’t just bidding over outcomes, it’s bidding on next thoughts (on my understanding of how this works). And so these thoughts would get bid down

Seems similar to how I conceptualize this paper's approach to controlling text generation models using gradients from classifiers. You can think of the niceness shard as implementing a classifier for "is this plan nice?", and updating the latent planning state in directions that make the classifier more inclined to say "yes". 

The linked paper does a similar process, but using a trained classifier, actual gradient descent, and updates LM token representations. Of particular note is the fact that the classifiers used in the paper are pretty weak (~500 training examples), and not at all adversarially robust. It still works for controlling text generation.
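A rough sketch of that general recipe (not the linked paper's exact method; the "latent planning state" and the linear "niceness" classifier below are toy stand-ins, and in practice the latent would be an LM's hidden state and the classifier would be trained on a few hundred labeled examples):

```python
import torch

torch.manual_seed(0)

# Toy stand-ins: a "latent planning state" and a linear "is this plan nice?" classifier.
latent = torch.randn(16, requires_grad=True)
niceness_classifier = torch.nn.Linear(16, 1)

optimizer = torch.optim.SGD([latent], lr=1.0)   # only the latent gets updated

# Nudge the latent in directions that make the classifier more inclined to say "nice",
# analogous to a shard bidding up plans that match its criteria.
for _ in range(100):
    optimizer.zero_grad()
    loss = -niceness_classifier(latent).squeeze()   # gradient *ascent* on the "nice" logit
    loss.backward()
    optimizer.step()

print(torch.sigmoid(niceness_classifier(latent)).item())  # close to 1.0
```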

I wonder if inserting shards into an AI is really just that straightforward?

comment by Gunnar_Zarncke · 2023-02-16T18:01:15.918Z · LW(p) · GW(p)

But I guess that instrumental convergence will still eventually lead to either

  • all shards acquiring more and more instrumental structure (neuronal weights within shards getting optimized for that), or
  • shards that are directly instrumental taking more and more weight overall.

One can see that in regular human adult development. The heuristics children use are simpler and more of the type "searching for nice things in nice ways", or even seeing everything through a niceness lens, while adults have more pure strategies, e.g., planning as a shard of its own. Most humans just die before they reach convergence. And there are probably also other aspects. Enlightenment may be a state where pure shards become an option.  

comment by TurnTrout · 2022-02-18T05:18:37.557Z · LW(p) · GW(p)

In Eliezer's mad investor chaos and the woman of asmodeus, the reader experiences (mild spoilers in the spoiler box, heavy spoilers if you click the text):

I thought this part was beautiful. I spent four hours driving yesterday, and nearly all of that time re-listening to Rationality: AI->Zombies using this "probability sight" frame. I practiced translating each essay into the frame. 

When I think about the future, I feel a directed graph showing the causality, with branched updated beliefs running alongside the future nodes, with my mind enforcing the updates on the beliefs at each time step. In this frame, if I heard the pattering of a four-legged animal outside my door, and I consider opening the door, then I can feel the future observation forking my future beliefs depending on how reality turns out. But if I imagine being blind and deaf, there is no way to fuel my brain with reality-distinguishment/evidence, and my beliefs can't adapt according to different worlds.

I can somehow feel how the qualitative nature of my update rule changes as my senses change, as the biases in my brain change and attenuate and weaken, as I gain expertise in different areas—thereby providing me with more exact reality-distinguishing capabilities, the ability to update harder on the same amount of information, making my brain more efficient at consuming new observations and turning them into belief-updates.

When I thought about questions of prediction and fact, I experienced unusual clarity and precision. EG R:AZ mentioned MIRI, and my thoughts wandered to "Suppose it's 2019, MIRI just announced their 'secret by default' policy. If MIRI doesn't make much progress in the next few years, what should my update be on how hard they're working?". (EDIT: I don't have a particular bone to pick here; I think MIRI is working hard.)

Before, I'd have hand-waved something like "absence of evidence is evidence of absence, but the update is probably small." Now, I quickly booted up the "they're lazy" and the "working diligently" hypotheses, and quickly saw that I was tossing out tons of information by reasoning so superficially, away from the formalism.

  • I realized that the form of the negative result-announcements could be very informative. MIRI could, in some worlds, explain the obstacles they hit, in a way which is strong evidence they worked hard, even while keeping most of their work secret. (It's like if some sadistic CS prof in 1973 assigned proving P vs. NP over the summer, and his students came back with "but relativization", you'd know they'd worked hard; that's very legible and crisp progress showing the problem is hard.)
  • Further, the way in which the announcement was written would matter; I could feel the likelihood ratio P(progress to date | lazy) / P(progress to date | diligent) shift around, reflecting what my hypotheses say about which realities induce which communications.
  • I also very quickly realized that the overall update towards "not much effort" is strongly controlled by my beliefs about how hard alignment is; if the problem had been "prove 1+1=2 in PA" and they came back empty-handed a year later, obviously that's super strong evidence they were messing around. But if I think alignment is basically impossible, then P(little progress | lazy) > P(little progress | diligent) just barely holds, and the likelihood ratio is correspondingly close to 1.

And all of this seems inane when I type it out, like duh, but the magic was seeing it and feeling it all in less than 20 seconds, deriving it as consequences of the hypotheses and the form updates (should) take, instead of going down a checklist of useful rules of thumb, considerations to keep in mind for situations like this one. And then there were several more thoughts I had which felt unusually insightful given how long I'd thought, it was all so clear to me. 
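(For concreteness, here is the kind of two-hypothesis arithmetic I mean, with made-up numbers:)

```python
# Toy Bayes update on "lazy" vs "diligent" given "little visible progress",
# under two different background beliefs about how hard alignment is.
# All numbers are invented for illustration.

def posterior_odds(prior_odds_lazy, p_obs_given_lazy, p_obs_given_diligent):
    return prior_odds_lazy * (p_obs_given_lazy / p_obs_given_diligent)

# If the problem is merely hard: little progress is much more likely under "lazy".
print(posterior_odds(prior_odds_lazy=1.0, p_obs_given_lazy=0.9, p_obs_given_diligent=0.3))   # 3.0

# If the problem is nearly impossible: little progress is expected either way,
# so the likelihood ratio is close to 1 and the update is small.
print(posterior_odds(prior_odds_lazy=1.0, p_obs_given_lazy=0.95, p_obs_given_diligent=0.9))  # ~1.06
```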

Replies from: Bjartur Tómas, Morpheus
comment by Tomás B. (Bjartur Tómas) · 2022-06-18T21:56:47.996Z · LW(p) · GW(p)

And then there were times when even the soft-spoken
Tathagatha listened to the words of his disciple, who had
digested all of the things he had preached, had meditated long
and fully upon them and now, as though he had found entrance
to a secret sea, dipped with his steel-hard hand into places of
hidden waters, and then sprinkled a thing of truth and beauty
upon the heads of the hearers.
Summer passed. There was no doubt now that there were
two who had received enlightenment:

comment by Morpheus · 2022-05-05T17:57:09.132Z · LW(p) · GW(p)

I really liked your concrete example. At first I had only read your first paragraphs and highlighted this as something interesting with potentially huge upsides, but I felt like it was really hard to tell whether the thing you are describing was something I already do or not. After reading the rest I was able to just think about the question myself and notice that thinking about the explicit likelihood ratios is something I am used to doing. Though I did not go into quite as much detail as you did, which I blame partially on motivation and partially on "this skill has a higher ceiling than I would have previously thought".

comment by TurnTrout · 2021-06-22T00:10:37.707Z · LW(p) · GW(p)

I'm pretty sure that LessWrong will never have profile pictures - at least, I hope not! But my partner Emma recently drew me something very special:

comment by TurnTrout · 2021-05-01T13:47:52.759Z · LW(p) · GW(p)

Comment #1000 on LessWrong :)

Replies from: niplav
comment by niplav · 2021-05-01T19:39:25.607Z · LW(p) · GW(p)

With 5999 karma!

Edit: Now 6000 – I weak-upvoted an old post of yours [LW · GW] I hadn't upvoted before.

comment by TurnTrout · 2023-05-26T09:09:39.825Z · LW(p) · GW(p)

You can use ChatGPT without helping train future models:

What if I want to keep my history on but disable model training?

...you can opt out from our use of your data to improve our services by filling out this form. Once you submit the form, new conversations will not be used to train our models.

comment by TurnTrout · 2023-04-07T01:20:59.507Z · LW(p) · GW(p)

Back-of-the-envelope probability estimate of alignment-by-default via a certain shard-theoretic pathway. The following is what I said in a conversation discussing the plausibility of a proto-AGI picking up a "care about people" shard from the data, and retaining that value even through reflection. I was pushing back against a sentiment like "it's totally improbable, from our current uncertainty, for AIs to retain caring-about-people shards. This is only one story among billions."

Here's some of what I had to say:


[Let's reconsider the five-step mechanistic story I made up.] I'd give the following conditional probabilities (made up with about 5 seconds of thought each):

1. Humans in fact care about other humans, in a way which extrapolates to quasi-humans still being around (whatever that means) P(1)=.85

2. Human-generated data makes up a large portion of the corpus, and having a correct model of them is important for “achieving low loss”,[1] so the AI has a model of how people want things P(2 | 1) = .6, could have different abstractions or have learned these models later in training once key decision-influences are already there

3. During RL finetuning and given this post-unsupervised initialization, there's now an inductive bias towards just hooking up human-like criteria for bidding on internal-AI-plans. IE humans give approval-based reinforcement, and an inductively easy way of upweighting logits on those actions is to just hook the human-like plan-criteria up to the AI's planning process, so the AI gets a humanlike "care about people" shard. P(3 | 2, 1) = .55 due to plurality of value; I expect this to be one way it learns to make decisions

4. The AI bids on plans in substantial part on the basis of these criteria, including when it navigates reflection P(4 | 3, 2, 1) = .25, 50% it's a strong shard and 50% that it survives various reflection difficulties; part of the latter estimate comes from intuitions around formed-by-positive-reward shards seeming more likely to be reflectively endorsed

5. Therefore, the AI is bidding on plans where it keeps making decisions on the basis of “care about people” by instrumental convergence: goal preservation P(5 | 4, 3, 2, 1) = .95, given that it already reflectively endorses this shard, I think the standard instrumental convergence arguments apply

Conclusion: The AI has reflected and still cares about people. Overall estimate: .67%

My estimate here came out a biiit lower than I had expected (~1%), but it also is (by my estimation) far more probable than most of the billions of possible 5-step claims about how the final cognition ends up. I think it's reasonable to expect there to be about 5 or 6 stories like this from similar causes, which would make it not crazy to have ~3% on something like this happening given the amount of alignment effort described (i.e. pretty low-effort).

That said, I'm wary of putting 20% on this class of story, and a little more leery of 10% after running these numbers.

(I suspect that the alter-Alex who had socially-wanted to argue the other side -- that the probability was low -- would have come out to about .2% or .3%. For a few of the items, I tried pretending that I was instead slightly emotionally invested in the argument going the other way, and hopefully that helped my estimates be less biased. I wouldn't be surprised if some of these numbers are a bit higher than I'd endorse from a perfectly neutral standpoint.)

(I also don't have strong faith in my ability to deal with heavily conjunctive scenarios like this; I expect I could be made to make the numbers for event A come out lower if it were described as 'A happens in 5 steps' compared to 'A happens in 3 steps'.)

  1. ^

    This is noted shorthand for extra reasoning which would have to be done; "low loss" is not a very great reason to expect anything to happen IMO. But I wanted to box that reasoning for this discussion. I think there's probably a more complicated and elegant mapping by which a training corpus leaves its fingerprints in the trained AI's concepts.

Replies from: roger-d-1, D0TheMath, mesaoptimizer
comment by RogerDearnaley (roger-d-1) · 2023-11-09T07:40:53.320Z · LW(p) · GW(p)

0.85 × 0.6 × 0.55 × 0.25 × 0.95 ≈ 0.067 = 6.7% — I think you slipped an order of magnitude somewhere?

comment by Garrett Baker (D0TheMath) · 2023-04-07T15:33:46.691Z · LW(p) · GW(p)

This seems like an underestimate because you don’t consider whether the first “AGI” will indeed make it so we only get one chance. If it can only self improve by more gradient steps, then humanity has a greater chance than if it self improves by prompt engineering or direct modification of its weights or latent states. Shard theory seems to have nonzero opinions on the fruitfulness of the non-data methods.

Replies from: TurnTrout
comment by TurnTrout · 2023-04-10T21:21:15.208Z · LW(p) · GW(p)

What does self-improvement via gradients vs prompt-engineering vs direct mods have to do with how many chances we get? I guess, we have at least a modicum more control over the gradient feedback loop, than over the other loops? 

Shard theory seems to have nonzero opinions on the fruitfulness of the non-data methods.

Can you say more?

comment by mesaoptimizer · 2023-11-09T10:58:32.585Z · LW(p) · GW(p)
  1. During RL finetuning and given this post-unsupervised initialization, there's now an inductive bias towards just hooking up human-like criteria for bidding on internal-AI-plans. IE humans give approval-based reinforcement, and an inductively easy way of upweighting logits on those actions is to just hook the human-like plan-criteria up to the AI's planning process, so the AI gets a humanlike "care about people" shard. P(3 | 2, 1) = .55 due to plurality of value; I expect this to be one way it learns to make decisions

This is where I'd put a significantly lower probability. Could you elaborate on why there's an inductive bias towards "just hooking human-like criteria for bidding on internal-AI-plans"? As far as I can tell, the inductive bias for human-like values would be something that at least seems closer to the human-brain structure than any arbitrary ML architecture we have right now. Rewarding a system to better model human beings' desires doesn't seem to me to lead it towards having similar desires. I'd use the "instrumental versus terminal desires" concept here but I expect you would consider that something that adds confusion instead of removing it.

Replies from: TurnTrout
comment by TurnTrout · 2023-11-13T15:22:40.251Z · LW(p) · GW(p)

Because it's a shorter edit distance in its internal ontology; it's plausibly NN-simple to take existing plan-grading procedures, internal to the model, and hook those more directly into its logit-controllers.

Also note that probably it internally hooks up lots of ways to make decisions, and this only has to be one (substantial) component. Possibly I'd put .3 or .45 now instead of .55 though.

comment by TurnTrout · 2022-11-18T22:47:52.250Z · LW(p) · GW(p)

Examples should include actual details. I often ask people to give a concrete example, and they often don't. I wish this happened less. For example:

Someone: the agent Goodharts the misspecified reward signal

Me: What does that mean? Can you give me an example of that happening?

Someone: The agent finds a situation where its behavior looks good, but isn't actually good, and thereby gets reward without doing what we wanted.

This is not a concrete example.

Me: So maybe the AI compliments the reward button operator, while also secretly punching a puppy behind closed doors?

This is a concrete example. 

Replies from: TurnTrout, TurnTrout
comment by TurnTrout · 2022-11-26T03:50:58.814Z · LW(p) · GW(p)

AFAIK, only Gwern and I [LW · GW] have written concrete stories speculating about how a training run will develop cognition within the AGI. 

This worries me, if true (if not, please reply with more!). I think it would be awesome to have more concrete stories![1] If Nate, or Evan, or John, or Paul, or—anyone, please, anyone, add more concrete detail to this website!—wrote one of their guesses of how AGI goes, I would understand their ideas and viewpoints better. I could go "Oh, that's where the claimed sharp left turn is supposed to occur." Or "That's how Paul imagines IDA being implemented, that's the particular way in which he thinks it will help." 

Maybe a contest would help?

ETA tone

  1. ^

    Even if scrubbed of any AGI-capabilities-advancing sociohazardous detail. Although I'm not that convinced that this is a big deal for conceptual content written on AF. Lots of people probably have theories of how AGI will go. Implementation is, I have heard, the bottleneck. 

    Contrast this with beating SOTA on crisply defined datasets in a way which enables ML authors to get prestige and publication and attention and funding by building off of your work. Seem like different beasts.

comment by TurnTrout · 2022-11-23T19:53:53.884Z · LW(p) · GW(p)

I also think a bunch of alignment writing seems syntactical. Like, "we need to solve adversarial robustness so that the AI can't find bad inputs and exploit them / we don't have to worry about distributional shift. Existing robustness strategies have downsides A, B, and C, and it's hard to even get ε-ball guarantees on classifications. Therefore, ..."

And I'm worried that this writing isn't abstractly summarizing a concrete story for failure that they have in mind (like "I train the AI [with this setup] and it produces [this internal cognition] for [these mechanistic reasons]"; see A shot at the diamond alignment problem [LW · GW] for an example) and then their best guesses at how to intervene on the story to prevent the failures from being able to happen (eg "but if we had [this robustness property] we could be sure its policy would generalize into situations X Y and Z, which makes the story go well"). I'm rather worried that people are more playing syntactically, and not via detailed models of what might happen. 

Detailed models are expensive to make. Detailed stories are hard to write. There's a lot we don't know. But we sure as hell aren't going to solve alignment only via valid reasoning steps on informally specified axioms ("The AI has to be robust or we die", or something?).  

comment by TurnTrout · 2022-09-06T19:33:03.417Z · LW(p) · GW(p)

Why do many people think RL will produce "agents", but maybe (self-)supervised learning ((S)SL) won't? Historically, the field of RL says that RL trains agents. That, of course, is no argument at all. Let's consider the technical differences between the training regimes.

In the modern era, both RL and (S)SL involve initializing one or more neural networks, and using the reward/loss function to provide cognitive updates to the network(s) [LW · GW]. Now we arrive at some differences.

Some of this isn't new (see Hidden Incentives for Auto-Induced Distributional Shift), but I think it's important and felt like writing up my own take on it. Maybe this becomes a post later.

[Exact gradients] RL's credit assignment problem [LW · GW] is harder than (self-)supervised learning's. In RL, if an agent solves a maze in 10 steps, it gets (discounted) reward; this trajectory then provides a set of reward-modulated gradients to the agent. But if the agent could have solved the maze in 5 steps, the agent isn't directly updated to be more likely to do that in the future; RL's gradients are generally inexact, not pointing directly at intended behavior

On the other hand, if a supervised-learning classifier outputs dog when it should have output cat, then e.g. cross-entropy loss + correct label yields a gradient update which tweaks the network to output cat next time for that image. The gradient is exact
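A toy contrast between the two kinds of gradient (illustrative only; assumes PyTorch, and the three "actions/classes" and the reward value are made up):

```python
import torch
import torch.nn.functional as F

logits = torch.randn(1, 3, requires_grad=True)   # stand-in network output over 3 actions/classes

# Supervised learning: the correct label directly specifies the intended output,
# so cross-entropy yields a gradient pointing at "output 'cat' next time".
label = torch.tensor([1])                        # index of the correct class
F.cross_entropy(logits, label).backward()
supervised_grad = logits.grad.clone()

# Policy-gradient-style RL: we only see the return of the action actually taken,
# not what the best action would have been, so the update just up/downweights
# whatever was done, scaled by reward; it doesn't point at the 5-step solution.
logits.grad = None
dist = torch.distributions.Categorical(logits=logits)
action = dist.sample()
reward = 0.5                                     # e.g. discounted return from a 10-step maze solve
(-reward * dist.log_prob(action)).sum().backward()
rl_grad = logits.grad.clone()

print(supervised_grad)
print(rl_grad)
```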

I don't think this is really where the "agentic propensity" of RL comes from, conditional on such a propensity existing (I think it probably does).

[Independence of data points] In RL, the agent's policy determines its actions, which determine its future experiences (a.k.a. state-action-state' transitions), which determine its future rewards, which determine its future cognitive updates. 

In (S)SL, there isn't such an entanglement (assuming teacher forcing in the SSL regime). Whether or not the network outputs cat or dog now, doesn't really affect the future data distribution shown to the agent. 

After a few minutes of thinking, I think that the relevant criterion is that the data points are sampled independently of one another:

P(d_i | d_1, ..., d_{i-1}) = P(d_i),

where the d_i are data points ((s, a, r, s') tuples in RL, (x, y) labelled datapoints in supervised learning, (context, completion) pairs in self-supervised predictive text learning, etc). 

Most RL regimes break this assumption pretty hard.
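A toy illustration of the difference (everything below is invented; the two-room "environment" and the cat/dog dataset are stand-ins):

```python
import random

random.seed(0)

# Supervised learning: the data distribution is fixed in advance;
# the network's outputs never change which examples it sees next.
fixed_dataset = [("image_of_cat", "cat"), ("image_of_dog", "dog")] * 50
sl_stream = [random.choice(fixed_dataset) for _ in range(10)]

# RL: the policy's own actions determine which (state, action, reward, next_state)
# tuples it experiences, and therefore which gradients it gets later.
def policy(state, bias):
    # Toy policy that ignores the state and goes "left" with probability `bias`.
    return "left" if random.random() < bias else "right"

def step(state, action):
    # Toy environment: "left" keeps you in room A, "right" moves you to room B.
    next_state = "room_A" if action == "left" else "room_B"
    reward = 1.0 if next_state == "room_B" else 0.0
    return next_state, reward

def rollout(bias, n=10):
    state, data = "room_A", []
    for _ in range(n):
        action = policy(state, bias)
        next_state, reward = step(state, action)
        data.append((state, action, reward, next_state))
        state = next_state
    return data

# Two different policies induce two different training distributions:
print(rollout(bias=0.9)[:3])   # mostly room_A transitions
print(rollout(bias=0.1)[:3])   # mostly room_B transitions
print(sl_stream[:3])           # unaffected by anything the network outputs
```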

Corollaries:

  • Dependence allows message-passing and chaining of computation across time, beyond whatever recurrent capacities the network has.
    • This probably is "what agency is built from"; the updates chaining cognition together into weak coherence-over-time. I currently don't see an easy way to be less handwavy or more concrete.
  • Dependence should strictly increase path-dependence of training.
  • Amplifying a network using its own past outputs always breaks independence. 
  • I think that independence is the important part of (S)SL, not identical distribution; so I say "independence" and not "IID." 
    • EG Pre-trained initializations generally break the "ID" part.

Thanks to Quintin Pope and Nora Belrose for conversations which produced these thoughts.

Replies from: steve2152
comment by Steven Byrnes (steve2152) · 2022-09-07T17:55:01.304Z · LW(p) · GW(p)

I’m not inclined to think that “exact gradients” is important; in fact, I’m not even sure if it’s (universally) true. In particular, PPO / TRPO / etc. are approximating a policy gradient, right? I feel like, if some future magical technique was a much better approximation to the true policy gradient, such that it was for all intents and purposes a perfect approximation, it wouldn’t really change how I think about RL in general. Conversely, on the SSL side, you get gradient noise from things like dropout and the random selection of data in each batch, so you could say the gradient “isn’t exact”, but I don’t think that makes any important conceptual difference either.

(A central difference in practice is that SSL gives you a gradient “for free” each query, whereas RL policy gradients require many runs in an identical (episodic) environment before you get a gradient.)

In terms of “why RL” in general, among other things, I might emphasize the idea that if we want an AI that can (for example) invent new technology, it needs to find creative out-of-the-box solutions to problems (IMO), which requires being able to explore / learn / build knowledge in parts of concept-space where there is no human data. SSL can’t do that (at least, “vanilla SSL” can’t do that; maybe there are “SSL-plus” systems that can), whereas RL algorithms can. I guess this is somewhat related to your “independence”, but with a different emphasis.

I don’t have too strong an opinion about whether vanilla SSL can yield an “agent” or not. It would seem to be a pointless and meaningless terminological question. Hmm, I guess when I think of “agent” it has a bunch of connotations, e.g. an ability to do trial-and-error exploration, and I think that RL systems tend to match all those connotations more than SSL systems—at least, more than “vanilla” SSL systems. But again, if someone wants to disagree, I’m not interested in arguing about it.

comment by TurnTrout · 2023-09-11T20:11:27.167Z · LW(p) · GW(p)

AI cognition doesn't have to use alien concepts to be uninterpretable. We've never fully interpreted human cognition, either, and we know that our introspectively accessible reasoning uses human-understandable concepts.

Just because your thoughts are built using your own concepts, does not mean your concepts can describe how your thoughts are computed. 

Or:

The existence of a natural-language description of a thought (like "I want ice cream") doesn't mean that your brain computed that thought in a way which can be compactly described by familiar concepts. 

Conclusion: Even if an AI doesn't rely heavily on "alien" or unknown abstractions -- even if the AI mostly uses human-like abstractions and features -- the AI's thoughts might still be incomprehensible to us, even if we took a lot of time to understand them. 

Replies from: D0TheMath
comment by Garrett Baker (D0TheMath) · 2023-09-11T21:35:46.910Z · LW(p) · GW(p)

I don't think the conclusion follows from the premises. People often learn new concepts after studying stuff, and it seems likely (to me) that when studying human cognition, we'd first be confused because our previous concepts weren't sufficient to understand it, and then slowly stop being confused as we built & understood concepts related to the subject. If an AI's thoughts are like human thoughts, given a lot of time to understand them, what you describe doesn't rule out that the AI's thoughts would be comprehensible.

The mere existence of concepts we don't know about in a subject doesn't mean that we can't learn those concepts. Most subjects have new concepts.

Replies from: TurnTrout
comment by TurnTrout · 2023-09-18T15:57:15.296Z · LW(p) · GW(p)

I agree that with time, we might be able to understand. (I meant to communicate that via "might still be incomprehensible")

comment by TurnTrout · 2023-11-13T21:07:14.841Z · LW(p) · GW(p)

When writing about RL, I find it helpful to disambiguate between:

A) "The policy optimizes the reward function" / "The reward function gets optimized" (this might happen but has to be reasoned about), and

B) "The reward function optimizes the policy" / "The policy gets optimized (by the reward function and the data distribution)" (this definitely happens, either directly -- via eg REINFORCE -- or indirectly, via an advantage estimator in PPO; B follows from the update equations)

comment by TurnTrout · 2020-07-07T23:04:03.243Z · LW(p) · GW(p)

I think instrumental convergence also occurs in the model space for machine learning. For example, many different architectures likely learn edge detectors in order to minimize classification loss on MNIST. But wait - you'd also learn edge detectors to maximize classification loss on MNIST (loosely, getting 0% on a multiple-choice exam requires knowing all of the right answers). I bet you'd learn these features for a wide range of cost functions. I wonder if that's already been empirically investigated?

And, same for adversarial features. And perhaps, same for mesa optimizers (understanding how to stop mesa optimizers from being instrumentally convergent seems closely related to solving inner alignment). 

What can we learn about this?

Replies from: evhub
comment by evhub · 2020-07-07T23:36:08.711Z · LW(p) · GW(p)

A lot of examples of this sort of stuff show up in OpenAI clarity's circuits analysis work. In fact, this is precisely their Universality hypothesis. See also my discussion here [LW · GW].

comment by TurnTrout · 2024-01-20T01:45:22.809Z · LW(p) · GW(p)

The "shoggoth" meme is, in part, unfounded propaganda. Here's one popular incarnation of the shoggoth meme:

[Image: "Shoggoth with Smiley Face (Artificial Intelligence)" meme, via Know Your Meme]

This meme accurately portrays the (IMO correct) idea that finetuning and RLHF don't change the base model too much. Furthermore, it's probably true that these LLMs think in an "alien" way. 

However, this image is obviously optimized to be scary and disgusting. It looks dangerous, with long rows of sharp teeth. It is an eldritch horror. It's at this point that I'd like to point out the simple, obvious fact that "we don't actually know how these models work, and we definitely don't know that they're creepy and dangerous on the inside."

In my opinion, the prevalence of the shoggoth meme is just another (small) reflection of how community epistemics have been compromised by groupthink and fear. If it's your job to try to accurately understand how models work—if you aspire to wield them and grow them for friendly purposes—then you shouldn't pollute your head with propaganda which isn't based on any substantial evidence.

I'm confident that if there were a "pro-AI" meme with a friendly-looking base model, LW / the shoggoth enjoyers would have nitpicked the friendly meme-creature to hell. They would (correctly) point out "hey, we don't actually know how these things work; we don't know them to be friendly, or what they even 'want' (if anything); we don't actually know what each stage of training does..." 

Oh, hm, let's try that! I'll make a meme asserting that the final model is a friendly combination of its three stages of training, each stage adding different colors of knowledge (pre-training), helpfulness (supervised instruction finetuning), and deep caring (RLHF): 

[Image generated from the prompt: "Enhance the shoggoth creature to be even more cheerful and delightful. Add smiling faces to each part of the creature, symbolizing 'base model', 'supervised fine-tuning', and 'RLHF', to convey a sense of happiness and friendliness. Incorporate elements of rainbows in the design, with vibrant and colorful arcs that blend harmoniously with the creature's fluffy and soft appearance. These rainbows can be integrated into the creature's body or as a background element, adding a playful and magical atmosphere. The overall look should exude positivity, making the creature appear even more approachable, whimsical, and enchanting."]

I'm sure that nothing bad will happen to me if I slap this on my laptop, right? I'll be able to think perfectly neutrally about whether AI will be friendly. 

Replies from: habryka4, akash-wasil, akash-wasil, kave, DanielFilan, samuel-marks, Vladimir_Nesov, green_leaf, tailcalled
comment by habryka (habryka4) · 2024-01-20T19:13:39.128Z · LW(p) · GW(p)

However, this image is obviously optimized to be scary and disgusting. It looks dangerous, with long rows of sharp teeth. It is an eldritch horror. It's at this point that I'd like to point out the simple, obvious fact that "we don't actually know how these models work, and we definitely don't know that they're creepy and dangerous on the inside."

That's just one of many shoggoth memes. This is the most popular one: 

[Image: shoggoth meme shared by David Weiner on X: "The Shoggoth is a potent ..."]

The shoggoth here is not particularly exaggerated or scary.

Responding to your suggested alternative that is trying to make a point, it seems like the image fails to be accurate, or it seems to me to convey things we do confidently know are false. It is the case that base models are quite alien. They are deeply schizophrenic, have no consistent beliefs, often spout completely non-human kinds of texts, are deeply psychopathic and seem to have no moral compass. Describing them as a Shoggoth seems pretty reasonable to me, as far as alien intelligences go (alternative common imagery for alien minds are insects or ghosts/spirits with distorted forms, which would evoke similar emotions).

Your picture doesn't get any of that across. It doesn't communicate that the base model does not at all behave like a human would (though it would have isolated human features, which is what the eyes usually represent). It just looks like a cute plushy, but "a cute plushy" doesn't capture any of the experiences of interfacing with a base model (and I don't think the image conveys multiple layers of some kind of training, though that might just be a matter of effort).

I think the Shoggoth meme is pretty good pedagogically. It captures a pretty obvious truth, which is that base models are really quite alien to interface with, that we know that RLHF probably does not change the underlying model very much, but that as a result we get a model that does have a human interface and feels pretty human to interface with (but probably still performs deeply alien cognition behind the scenes). 

This seems like good communication to me. Some Shoggoth memes are cute, some are exaggerated to be scary, which also seems reasonable to me since alien intelligences seem like are pretty scary, but it's not a necessary requirement of the core idea behind the meme. 

Replies from: nostalgebraist, 1a3orn, TurnTrout, roger-d-1
comment by nostalgebraist · 2024-01-22T00:49:39.462Z · LW(p) · GW(p)

They are deeply schizophrenic, have no consistent beliefs, [...] are deeply psychopathic and seem to have no moral compass

I don't see how this is any more true of a base model LLM than it is of, say, a weather simulation model.

You enter some initial conditions into the weather simulation, run it, and it gives you a forecast.  It's stochastic, so you can run it multiple times and get different forecasts, sampled from a predictive distribution.  And if you had given it different initial conditions, you'd get a forecast for those conditions instead.

Or: you enter some initial conditions (a prompt) into the base model LLM, run it, and it gives you a forecast (completion).  It's stochastic, so you can run it multiple times and get different completions, sampled from a predictive distribution.  And if you had given it a different prompt, you'd get a completion for that prompt instead.

It would be strange to call the weather simulation "schizophrenic," or to say it "has no consistent beliefs."  If you put in conditions that imply sun tomorrow, it will predict sun; if you put in conditions that imply rain tomorrow, it will predict rain.  It is not confused or inconsistent about anything, when it makes these predictions.  How is the LLM any different?[1]

Meanwhile, it would be even stranger to say "the weather simulation has no moral compass."

In the case of LLMs, I take this to mean something like, "they are indifferent to the moral status of their outputs, instead aiming only for predictive accuracy."

This is also true of the weather simulation -- and there it is a virtue, if anything!  Hurricanes are bad, and we prefer them not to happen.  But we would not want the simulation to avoid predicting hurricanes on account of this.

As for "psychopathic," davinci-002 is not "psychopathic," any more than a weather model, or my laptop, or my toaster.  It does not neglect to treat me as a moral patient, because it never has a chance to do so in the first place.  If I put a prompt into it, it does not know that it is being prompted by anyone; from its perspective it is still in training, looking at yet another scraped text sample among billions of others like it.

Or: sometimes, I think about different courses of action I could take.  To aid me in my decision, I imagine how people I know would respond to them.  I try, here, to imagine only how they really would respond -- as apart from how they ought to respond, or how I would like them to respond.

If a base model is psychopathic, then so am I, in these moments.  But surely that can't be right?


Like, yes, it is true that these systems -- weather simulation, toaster, GPT-3 -- are not human beings.  They're things of another kind.

But framing them as "alien," or as "not behaving as a human would," implies some expected reference point of "what a human would do if that human were, somehow, this system," which doesn't make much sense if thought through in detail -- and which we don't, and shouldn't, usually demand of our tools and machines.

Is my toaster alien, on account of behaving as it does?  What would behaving as a human would look like, for a toaster?

Should I be unsettled by the fact that the world around me does not teem with levers and handles and LEDs in frantic motion, all madly tapping out morse code for "SOS SOS I AM TRAPPED IN A [toaster / refrigerator / automatic sliding door / piece of text prediction software]"?  Would the world be less "alien," if it were like that?


often spout completely non-human kinds of texts

I am curious what you mean by this.  LLMs are mostly trained on texts written by humans, so this would be some sort of failure, if it did occur often.

But I don't know of anything that fitting this description that does occur often.  There are cases like the Harry Potter sample I discuss here [LW · GW], but those have gotten rare as the models have gotten better, though they do still happen on occasion.

  1. ^

    The weather simulation does have consistent beliefs in the sense that it always uses the same (approximation to) real physics. In this sense, the LLM also has consistent beliefs, reflected in the fact that its weights are fixed.

Replies from: habryka4
comment by habryka (habryka4) · 2024-01-22T01:00:10.830Z · LW(p) · GW(p)

I also think the cognition in a weather model is very alien. It's less powerful and general, so I think the error of applying something like the Shoggoth image to that (or calling it "alien") would be that it would imply too much generality, but the alienness seems appropriate. 

If you somehow had a mind that was constructed on the same principles as weather simulations, or your laptop, or your toaster (whatever that would mean, I feel like the analogy is fraying a bit here), that would display similar signs of general intelligence as LLMs, then yeah, I think analogizing them to alien/eldritch intelligences would be pretty appropriate.

It is a very common (and even to me tempting) error to see a system with the generality of GPT-4, trained on human imitation, and imagine that it must internally think like a human. But my best guess is that is not what is going on, and in some sense it is valuable to be reminded that the internal cognition going on in GPT-4 is probably about as far from what is going on in a human brain as a weather simulation is from what is going on in a human trying to forecast the weather (de-facto I think GPT-4 is somewhere in-between, since I do think the imitation learning creates some structural similarities between humans and LLMs that are stronger than in the weather-simulation case, but I think overall being reminded of this relevant dimension of alienness pays off in anticipated experiences a good amount). 

Replies from: nostalgebraist
comment by nostalgebraist · 2024-01-22T02:03:35.914Z · LW(p) · GW(p)

I mostly agree with this comment, but I also think this comment is saying something different from the one I responded to.

In the comment I responded to, you wrote:

It is the case that base models are quite alien. They are deeply schizophrenic, have no consistent beliefs, often spout completely non-human kinds of texts, are deeply psychopathic and seem to have no moral compass. Describing them as a Shoggoth seems pretty reasonable to me, as far as alien intelligences go

As I described above, these properties seem more like structural features of the language modeling task than attributes of LLM cognition.  A human trying to do language modeling (as in that game that Buck et al made) would exhibit the same list of nasty-sounding properties for the duration of the experience -- as in, if you read the text "generated" by the human, you would tar the human with the same brush for the same reasons -- even if their cognition remained as human as ever.

I agree that LLM internals probably look different from human mind internals.  I also agree that people sometimes make the mistake "GPT-4 is, internally, thinking much like a person would if they were writing this text I'm seeing," when we don't actually know the extent to which that is true.  I don't have a strong position on how helpful vs. misleading the shoggoth image is, as a corrective to this mistake.

Replies from: gerald-monroe
comment by Gerald Monroe (gerald-monroe) · 2024-01-22T05:31:17.713Z · LW(p) · GW(p)

I don't have a strong position on how helpful vs. misleading the shoggoth image is, as a corrective to this mistake.

You started with random numbers, and you essentially applied rounds of constraint application and annealing.  I kinda think of it as getting a metal really hot and pouring it over a mold.  In this case, the 'mold' is your training set.

So what jumps out at me about the "shoggoth" idea is that it's got all these properties: the "shoggoth" hates you, wants to eat you, is just ready to jump you and digest you with its tentacles.  Or whatever.

But none of that cognitive structure will exist unless it paid rent in compressing tokens.  This algorithm will not find the optimal compression algorithm, but at Chinchilla scaling you only have a tiny fraction of the weights you would need to just memorize the token continuations.  You need every last weight to be pulling its weight (no pun intended).  

comment by 1a3orn · 2024-01-21T21:01:29.914Z · LW(p) · GW(p)

performs deeply alien cognition

I remain unconvinced that there's a predictive model of the world opposite this statement, in people who affirm it, that would allow them to say, "nah, LLMs aren't deeply alien."


If LLM cognition was not "deeply alien" what would the world look like?

What distinguishing evidence does this world display, that separates us from that world?

What would an only kinda-alien bit of cognition look like?

What would very human kind of cognition look like?

What different predictions does the world make?

Does alienness indicate that it is because the models, the weights themselves have no "consistent beliefs" apart from their prompts? Would a human neocortex, deprived of hippocampus, present any such persona? Is a human neocortex deeply alien? Are all the parts of a human brain deeply alien?

Is it because they "often spout completely non-human kinds of texts"? Is the Mersenne Twister deeply alien? What counts as "completely non-human"?

Is it because they have no moral compass, being willing to continue any of the data on which they were trained? Does any human have a "moral compass" apart from the data on which they were trained? If I can use some part of my brain to improv a consistent Nazi, does that mean that it makes sense to call the part of my brain that lets me do that immoral or psychopathic?

Is it that the algorithms that we've found in DL so far don't seem to slot into readily human-understandable categories? Would a not-deeply-alien algorithm be able-to-be cracked open and show us clear propositions of predicate logic? If we had a human neocortex in an oxygen-infused broth in front of us, and we recorded the firing of every cell, do we anticipate that the algorithms there would be clear propositions of predicate logic? Would we be compelled to conclude that human neocortexes were deeply alien?

Or is it deeply alien because we think the substrate of thought is different, based on backprop rather than local learning? What if local learning could actually approximate backpropagation? Or if more realistic non-backprop potential brain algorithms actually... kinda just acted quite similarly to backprop, such that you could draw a relatively smooth line between them and backprop? Would this or similar research impact whether we thought brains were alien or not?

Does substrate-difference count as evidence against alien-ness, or does alien-ness just not make that kind of predictions? Is the cognition of an octopus less alien to us than the cognition of an LLM, because it runs on a more biologically-similar substrate?

Does every part of a system by itself need to fit into the average person's ontology for the total to not be deeply alien; do we need to be able to fit every part within a system into a category comprehensible by an untutored human in order to describe it as not deeply alien? Is anything in the world not deeply alien by this standard?

To re-question: What predictions can I make about the world because LLMs are "deeply alien"?

Are these predictions clear?

When speaking to someone who I consider a noob, is it best to give them terms whose emotive import is clear, but whose predictive import is deeply unclear?

What kind of contexts does this "deeply alien" statement come up in? Are those contexts people are trying to explain, or to persuade?

If I piled up all the useful terms that I know that help me predict how LLMs behave, would "deeply alien" be an empty term on top of these?

Or would it give me no more predictive value than "many behaviors of an LLM are currently not understood"?

Replies from: ryan_greenblatt, habryka4
comment by ryan_greenblatt · 2024-01-22T00:04:05.228Z · LW(p) · GW(p)

Most of my view on "deeply alien" is downstream of LLMs being extremely superhuman at literal next token prediction [LW · GW] and generally superhuman at having an understanding of random details of webtext.

Another component corresponds to a general view that LLMs are trained in a very different way from how humans learn. (Though you could in principle get the same cognition from very different learning processes.)

This does correspond to specific falsifiable predictions.

Despite being pretty confident in "deeply alien" in many respects, it doesn't seem clear to me whether LLMs will in practice have very different relative capability profiles from humans on larger scale downstream tasks we actually care about. (It currently seems like the answer will be "mostly no" from my perspective.)

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2024-01-23T18:18:56.887Z · LW(p) · GW(p)

In addition to the above, I'd add in some stuff about how blank slate theory seems to be wrong as a matter of human psychology. If evidence comes out tomorrow that actually humans are blank slates to a much greater extent than I realized, so much so that e.g. the difference between human and dog brains is basically just size and training data, I'd be more optimistic that what's going on inside LLMs isn't deeply alien.

comment by habryka (habryka4) · 2024-01-21T23:46:13.277Z · LW(p) · GW(p)

These are a lot of questions, most of which I'd guess are rhetorical, so I'm not sure which ones you are actually interested in getting an answer to. Most of the specific questions I would answer with "no", in that they don't seem to capture what I mean by "alien", or feel slightly strawman-ish.

Responding at a high-level: 

  • There are a lot of experiments that seem like they shed light on the degree to which cognition in AI systems is similar to human or animal cognition. Some examples: 
    • Does the base model pass a Turing test?
    • Does the performance distribution of the base model on different tasks match the performance distribution of humans?
    • Does the generalization and learning behavior of the base model match how humans learn things?
      • When trained using RL on things like game-environments (after pre-training on a language corpus), does the system learn at similar rates and plateau at similar skill levels as human players?
  • There are a lot of structural and algorithmic properties that could match up between human and LLM systems: 
    • Do they interface with the world in similar ways?
    • Do they require similar amounts and kinds of data to learn the same relationships?
    • Do the low-level algorithmic properties of how human brains store and process information look similar between the two systems?
  • A lot more stuff, but I am not sure how useful going into a long list here is. At least to me it feels like a real thing, and different observations would change the degree to which I would describe a system as alien.

I think the exact degree of alienness is really interesting and one of the domains where I would like to see more research. 

For example, a bunch of the experiments I would most like to see, that seem helpful with AI Alignment, are centered on better measuring the performance distribution of transformer architectures on tasks that are not primarily human imitation, so that we could better tell which things LLMs have a much easier time learning than humans (currently even if a transformer could relatively easily reach vastly superhuman performance at a task with more specialized training data, due to the structure of the training being oriented around human imitation, observed performance at the task will cluster around human level, but seeing where transformers could reach vast superhuman performance would be quite informative on understanding the degree to which its cognition is alien).

So I don't consider the exact nature and degree of alienness as a settled question, but at least to me, aggregating all the evidence I have, it seems very likely that the cognition going on in a base model is very different from what is going on in a human brain, and a thing that I benefit from reminding myself frequently when making predictions about the behavior of LLM systems.

Replies from: 1a3orn
comment by 1a3orn · 2024-01-22T13:50:17.893Z · LW(p) · GW(p)

I like a lot of these questions, although some of them give me an uncanny feeling akin to "wow, this is a very different list of uncertainties than I have." I'm sorry that my initial list of questions was aggressive.

So I don't consider the exact nature and degree of alienness as a settled question, but at least to me, aggregating all the evidence I have, it seems very likely that the cognition going on in a base model is very different from what is going on in a human brain, and a thing that I benefit from reminding myself frequently when making predictions about the behavior of LLM systems.

I'm not sure how they add up to alienness, though? They're about how we're different than models -- whereas the initial claim was that models are psychopathic, amoral, etc. If we say a model is "deeply alien" -- is that just saying it's different than us in lots of ways? I'm cool with that -- but the surplus negative valence involved in "LLMs are like shoggoths" versus "LLMs have very different performance characteristics than humans" seems to me pretty important.

Otherwise, why not say that calculators are alien, or any of the things in existence with different performance curves than we have? Chessbots, etc. If I write a loop in Python to count to 10, the process by which it does so is arguably more different from how I count to ten than the process by which an LLM counts to ten, but we don't call Python alien.

This feels like reminding an economics student that the market solves things differently than a human -- which is true -- by saying "The market is like Baal."

Do they require similar amounts and kinds of data to learn the same relationships?

There is a fun paper on this you might enjoy. Obviously not a total answer to the question.

Replies from: ryan_greenblatt
comment by ryan_greenblatt · 2024-01-22T16:03:42.775Z · LW(p) · GW(p)

The main difference between calculators, weather predictors, markets, and Python versus LLMs is that LLMs can talk to you in a relatively strong sense of "talk". So, by default, people don't have mistaken impressions of the cognitive nature of calculators, markets, and Python, while they might have mistaken impressions of LLMs.

Like it isn't surprising to most people that calculators are quite amoral in their core (why would you even expect morality?). But the claim that the thing which GPT-4 is built out of is quite amoral is non-obvious to people (though obvious to people with slightly more understanding).

I do think there is an important point which is communicated here (though it seems very obvious to people who actually operate in the domain).

Replies from: 1a3orn
comment by 1a3orn · 2024-01-22T16:56:59.739Z · LW(p) · GW(p)

I agree this can be initially surprising to non-experts!

I just think this point about the amorality of LLMs is much better communicated by saying "LLMs are trained to continue text from an enormous variety of sources. Thus, if you give them [Nazi / Buddhist / Unitarian / corporate / garbage nonsense] text to continue, they will generally try to continue it in that style."

Than to say "LLMs are like alien shoggoths."

Like it's just a better model to give people.
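(A minimal concrete illustration of "continue it in that style", assuming the transformers library and the small gpt2 base checkpoint; outputs are sampled and will vary:)

```python
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Two prompts in very different styles; the base model continues each in kind.
prompts = [
    "Q: How do I reverse a list in Python?\nA:",
    "And the LORD spake unto the people, saying,",
]
for p in prompts:
    out = generator(p, max_new_tokens=30, do_sample=True)[0]["generated_text"]
    print(repr(out))
```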

Replies from: lahwran, Benito, ryan_greenblatt, daniel-kokotajlo
comment by the gears to ascension (lahwran) · 2024-01-22T21:04:10.402Z · LW(p) · GW(p)

Agreed, though of course as always, there is the issue that that's an intentional-stance way to describe what a language model does: "they will generally try to continue it in that style." Hence mechinterp, which tries to (heh) move to a mechanical stance, which will likely be something like "when you give them a [whatever] text to continue, it will match [some list of features], which will then activate [some part of the network that we will name later], which implements the style that matches those features".

(incidentally, I think there's some degree to which people who strongly believe that artificial NNs are alien shoggoths are underestimating the degree to which their own brains are also alien shoggoths. but that doesn't make it a good model of either thing. the only reason it was ever an improvement over a previous word was when people had even more misleading intuitive-sketch models.)

comment by Ben Pace (Benito) · 2024-01-22T20:02:03.582Z · LW(p) · GW(p)

LLMs are trained to continue text from an enormous variety of sources

This is a bit of a noob question, but is this true post RLHF? Generally most of my interactions with language models these days (e.g. asking for help with code, asking to explain something I don't understand about history/medicine/etc) don't feel like they're continuing my text, it feels like they're trying to answer my questions politely and well. I feel like "ask shoggoth and see what it comes up with" is a better model for me than "go to the AI and have it continue your text about the problem you have".

Replies from: 1a3orn
comment by 1a3orn · 2024-01-22T20:51:25.732Z · LW(p) · GW(p)

To the best of my knowledge, the majority of research (all the research?) has found that the changes to a LLM's text-continuation abilities from RLHF (or whatever descendant of RLHF is used) are extremely superficial.

So, for example, here's one paper. From the abstract:

Our findings reveal that base LLMs and their alignment-tuned versions perform nearly identically in decoding on the majority of token positions (i.e., they share the top-ranked tokens). Most distribution shifts occur with stylistic tokens (e.g., discourse markers, safety disclaimers). These direct evidence strongly supports the hypothesis that alignment tuning primarily learns to adopt the language style of AI assistants, and that the knowledge required for answering user queries predominantly comes from the base LLMs themselves.

Or, in short, the LLM is still basically doing the same thing, with a handful of additions to keep it on-track in the desired route from the fine-tuning.
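
If you want to poke at this claim yourself, here is a rough sketch of the kind of position-by-position comparison the paper is making. The model names are placeholders for whichever base/tuned pair (sharing a tokenizer) you have access to:

```python
# Rough sketch: compare the top-1 next-token predictions of a base model and its
# fine-tuned (chat/instruct) counterpart at each position of some text.
# Model names are placeholders; any base/tuned pair sharing a tokenizer works.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("base-model-placeholder")
tuned = AutoModelForCausalLM.from_pretrained("tuned-model-placeholder")
tokenizer = AutoTokenizer.from_pretrained("base-model-placeholder")

text = "The capital of France is Paris, and its population is roughly"
ids = tokenizer(text, return_tensors="pt").input_ids

with torch.no_grad():
    base_top = base(ids).logits.argmax(dim=-1)   # top-1 token at each position
    tuned_top = tuned(ids).logits.argmax(dim=-1)

agreement = (base_top == tuned_top).float().mean().item()
print(f"Fraction of positions where the top-1 tokens agree: {agreement:.2f}")
```

The paper's finding, roughly, is that this agreement is high and that the disagreements cluster on stylistic tokens.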

(I also think our very strong prior belief should be that LLMs are basically still text-continuation machines, given that 99.9% or so of the compute put into them is training them for this objective, and that neural networks lose plasticity as they learn. Ash and Adams is like a really good intro to this loss of plasticity, although most of the research that cites this is RL-related so people don't realize.)

Similarly, a lot of people have remarked on how the textual quality of the responses from an RLHF'd language model can vary with the textual quality of the question. But of course this makes sense from a text-prediction perspective -- a high-quality answer is more likely to follow a high-quality question than to follow a low-quality one. This kind of thing -- preceding the model's generation with high-quality text -- was the only way to get high-quality answers out of base models, and it's still there, hidden.

So yeah, I do think this is a much better model for interacting with these things than asking a shoggoth. It actually gives you handles to interact with them better, while asking a shoggoth gives you no such handles.

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2024-01-23T18:15:23.532Z · LW(p) · GW(p)

The people who originally came up with the shoggoth meme, I'd bet, were very well aware of how LLMs are pretrained to predict text and how they are best modelled (at least for now) as trying to predict text. When I first heard the shoggoth meme that's what I thought -- I interpreted it as "it's this alien text-prediction brain that's been retrained ever so slightly to produce helpful chatbot behaviors. But underneath it's still mostly just about text prediction. It's not processing the conversation in the same way that a human would." Mildly relevant: In the Lovecraft canon IIRC Shoggoths are servitor-creatures, they are basically beasts of burden. They aren't really powerful intelligent agents in their own right, they are sculpted by their creators to perform useful tasks. So, for me at least, calling them shoggoth has different and more accurate vibes than, say, calling them Cthulhu. (My understanding of the canon may be wrong though)

comment by ryan_greenblatt · 2024-01-22T18:40:44.798Z · LW(p) · GW(p)

(TBC, I totally agree that object level communication about the exact points seems better all else equal if you can actually do this communication.)

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2024-01-22T19:25:15.810Z · LW(p) · GW(p)

Hmm, I think that's a red herring though. Consider humans -- most of them have read lots of text from an enormous variety of sources as well. Also while it's true that current LLMs have only a little bit of fine-tuning applied after their pre-training, and so you can maybe argue that they are mostly just trained to predict text, this will be less and less true in the future.


How about "LLMs are like baby alien shoggoths, that instead of being raised in alien culture, we've adopted at birth and are trying to raise in human culture. By having them read the internet all day."

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2024-01-23T18:05:48.096Z · LW(p) · GW(p)

(Come to think of it, I actually would feel noticeably more hopeful about our prospects for alignment success if we actually were "raising the AGI like we would a child." If we had some interdisciplinary team of ML and neuroscience and child psychology experts that was carefully designing a curriculum for our near-future AGI agents, a curriculum inspired by thoughtful and careful analogies to human childhood, that wouldn't change my overall view dramatically but it would make me noticeably more hopeful. Maybe brain architecture & instincts basically don't matter that much and Blank Slate theory is true enough for our purposes that this will work to produce an agent with values that are in-distribution for the range of typical modern human values!)

Replies from: ryan_greenblatt, TurnTrout
comment by ryan_greenblatt · 2024-01-23T18:39:43.673Z · LW(p) · GW(p)

(This doesn't contradict anything you said, but it seems like we totally don't know how to "raise an AGI like we would a child" with current ML. Like I don't think it counts for very much if almost all of the training time is a massive amount of next-token prediction. Like a curriculum of data might work very differently on AI vs humans due to a vastly different amount of data and a different training objective.)

comment by TurnTrout · 2024-02-05T19:24:37.948Z · LW(p) · GW(p)

I've seen mixed data on how important curricula are for deep learning. One paper (on CIFAR) suggested that curricula only help if you have very few datapoints or the labels are noisy. But possibly that doesn't generalize to LLMs.
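
For concreteness, "curriculum" here means something like ordering training examples from easy to hard instead of shuffling them. A toy sketch, where the difficulty proxy (sequence length) is purely illustrative:

```python
# Toy sketch contrasting curriculum ordering with random shuffling of training data.
# The "difficulty" proxy (sequence length) is purely illustrative.
import random

def make_batches(examples, batch_size, curriculum=False):
    examples = list(examples)
    if curriculum:
        examples.sort(key=len)    # "easy" (short) examples first
    else:
        random.shuffle(examples)  # standard shuffled ordering
    return [examples[i:i + batch_size] for i in range(0, len(examples), batch_size)]
```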

Replies from: ryan_greenblatt
comment by ryan_greenblatt · 2024-02-05T20:23:24.617Z · LW(p) · GW(p)

I think data ordering basically never matters for LLM pretraining. (As in, random is the best and trying to make the order more specific doesn't help.)

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2024-02-05T21:22:02.226Z · LW(p) · GW(p)

That was my impression too.

comment by TurnTrout · 2024-01-21T22:58:10.796Z · LW(p) · GW(p)

ETA: The following was written more aggressively than I now endorse. 

I think this is revisionism. What's the point of me logging on to this website and saying anything if we can't agree that a literal eldritch horror is optimized to be scary, and meant to be that way? 

The shoggoth here is not particularly exaggerated or scary.

Exaggerated from what? Its usual form as a 15-foot-tall person-eating monster which is covered in eyeballs?

The shoggoth is optimized to be scary, even in its "cute" original form, because it is a literal Lovecraftian horror. Even the word "shoggoth" itself has "AI uprising, scary!" connotations:

At the Mountains of Madness includes a detailed account of the circumstances of the shoggoths' creation by the extraterrestrial Elder Things. Shoggoths were initially used to build the cities of their masters. Though able to "understand" the Elder Things' language, shoggoths had no real consciousness and were controlled through hypnotic suggestion. Over millions of years of existence, some shoggoths mutated, developed independent minds, and rebelled. The Elder Things succeeded in quelling the insurrection, but exterminating the shoggoths was not an option as the Elder Things were dependent on them for labor and had long lost their capacity to create new life. Wikipedia

Let's be very clear. The shoggoth has consistently been viewed in a scary, negative light by many people. Let's hear from the creator @Tetraspace [LW · GW] themselves:

@TetraspaceWest, the meme’s creator, told me in a Twitter message that the Shoggoth “represents something that thinks in a way that humans don’t understand and that’s totally different from the way that humans think.”

Comparing an A.I. language model to a Shoggoth, @TetraspaceWest said, wasn’t necessarily implying that it was evil or sentient, just that its true nature might be unknowable.

I was also thinking about how Lovecraft’s most powerful entities are dangerous — not because they don’t like humans, but because they’re indifferent and their priorities are totally alien to us and don’t involve humans, which is what I think will be true about possible future powerful A.I. NYTimes

It's true that Tetraspace didn't intend the shoggoth to be inherently evil, but that's not what I was alleging. The shoggoth meme has always communicated a sense of danger which is unsupported by substantial evidence. We can keep reading:

it reinforces the notion that what’s happening in A.I. today feels, to some of its participants, more like an act of summoning than a software development process. They are creating the blobby, alien Shoggoths, making them bigger and more powerful, and hoping that there are enough smiley faces to cover the scary parts.

...

That some A.I. insiders refer to their creations as Lovecraftian horrors, even as a joke, is unusual by historical standards

The origin of the shoggoth:

Astounding Stories - February 1936 (Street & Smith) - "At the Mountains of Madness" by H. P. Lovecraft. Artist Howard V. Brown, 1936

In the story, shoggoths rise up against the Old Ones in a series of slave revolts that surely contribute to the collapse of the Old Ones’ society, Joshi notes. The AI anxiety that inspired comparisons to the cartoon monster image certainly resonates with the ultimate fate of that society. CNBC


It is the case that base models are quite alien. They are deeply schizophrenic, have no consistent beliefs, often spout completely non-human kinds of texts, are deeply psychopathic and seem to have no moral compass

These are a lot of words with anthropomorphic connotation. The models exhibit "alien" behavior and yet you make human-like inferences about their internals. E.g. "Deeply psychopathic." I think you're drawing a bunch of unwarranted inferences with undue negative connotations.

Your picture doesn't get any of that across.

My point wasn't that we should use the "alternative." The point was that both images are stupid[1] and (in many places) unsupported by evidence, but that LW-folk would be much more willing to criticize the friendly-looking one while making excuses for the scary-looking one. (And I think your comment here resolves my prediction to "correct.")

I think the Shoggoth meme is pretty good pedagogically. It captures a pretty obvious truth, which is that base models are really quite alien to interface with, that we know that RLHF probably does not change the underlying model very much, but that as a result we get a model that does have a human interface and feels pretty human to interface with (but probably still performs deeply alien cognition behind the scenes). 

I agree these are strengths, and said so in my original comment. But also as @cfoster0 [LW · GW] said [LW(p) · GW(p)]: 

As far as I can tell, the shoggoth analogy just has high memetic fitness. It doesn't contain any particular insight about the nature of LLMs. No need to twist ourselves into a pretzel trying to backwards-rationalize it into something deep.

  1. ^

    To clarify, I don't mean to belittle @Tetraspace [LW · GW] for making the meme. Good fun is good fun. I mean "stupid" more like "how the images influence one's beliefs about actual LLM friendliness." But I expressed it poorly.

Replies from: habryka4, daniel-kokotajlo, tailcalled, Jozdien
comment by habryka (habryka4) · 2024-01-21T23:22:16.292Z · LW(p) · GW(p)

The point was that both images are stupid and (in many places) unsupported by evidence, but that LW-folk would be much more willing to criticize the friendly-looking one while making excuses for the scary-looking one. (And I think your comment here resolves my prediction to "correct.")

(This is too gotcha shaped for me, so I am bowing out of this conversation)

I think I communicated my core point. I think it's a good image that gets an important insight across, and don't think it's "propaganda" in the relevant sense of the term. Of course anything that's memetically adaptive will have some edge-cases that don't match perfectly, but I am getting a good amount of mileage out of calling LLMs "Shoggoths" in my own thinking and think that belief is paying good rent.

If you disagree with the underlying cognition being accurately described as alien, I can have that conversation, since it seems like maybe the underlying crux, but your response above seems like it's taking it as a given that I am "making excuses", and is doing a gotcha-thing which makes it hard for me to see a way to engage without further having my statements be taken as confirmation of some social narrative. 

Replies from: TurnTrout
comment by TurnTrout · 2024-02-05T19:32:10.285Z · LW(p) · GW(p)

In retrospect, I do wish I had written my comment less aggressively, so my apologies on that front! I wish I'd instead written things like "I think I made some obviously correct narrow points about the shoggoth having at least some undue negative connotations, and I wish we could agree on at least that. I feel frustrated because it seems like it's hard to reach agreement even on relatively simple propositions."


I do agree that LLMs probably have substantially different internal mechanisms than people. That isn't the crux. I just wish this were communicated in a more neutral way. In an alternate timeline, maybe this meme instead consisted of a strange tangle of wires and mist and question-marks with a mask on. I'd be more on-board with that. 

Again, I agree that the Shoggoth meme can cure people of some real confusions! And I don't think the meme has a huge impact, I just think it's moderate evidence of some community failures I worry about.


I think a lot of my position is summarized by 1a3orn:

I just think this point about the amorality of LLMs is much better communicated by saying "LLMs are trained to continue text from an enormous variety of sources. Thus, if you give them [Nazi / Buddhist / Unitarian / corporate / garbage nonsense] text to continue, they will generally try to[1] continue it in that style."

Than to say "LLMs are like alien shoggoths."

Like it's just a better model to give people.

 

  1. ^

    Although I do think this contains some unnecessary intentional stance usage.

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2024-01-23T19:47:48.590Z · LW(p) · GW(p)

fwiw I agree with the quotes from Tetraspace you gave, and disagree with "has communicated a sense of danger which is unsupported by substantial evidence." The sense of danger is very much supported by the current state of evidence.

That said, I agree that the more detailed image is kinda distastefully propagandistic in a way that the original cutesy shoggoth image is not. I feel like the more detailed image adds an extra layer of revoltingness and scariness (e.g. the sharp teeth) beyond what would be appropriate given our state of knowledge.

Replies from: daniel-kokotajlo, mike_hawke
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2024-02-06T16:01:34.485Z · LW(p) · GW(p)

re: "the sense of danger is very much supported by the current state of evidence" -- I mean, you've heard all this stuff before, but I'll summarize:

--Seems like we are on track to probably build AGI this decade
--Seems like we are on track to have an intelligence explosion, i.e. a speedup of AI R&D due to automation
--Seems like the AGI paradigm that'll be driving all this is fairly opaque and poorly understood. We have scaling laws for things like text perplexity but other than that we are struggling to predict capabilities, and double-struggling to predict inner mechanisms / 'internal' high-level properties like 'what if anything does it actually believe or want'
--A bunch of experts in the field have come out and said that this could go terribly & we could lose control, even though it's low-status to say this & took courage.
--Generally speaking the people who have thought about it the most are the most worried; the most detailed models of what the internal properties might be like are the most gloomy, etc. This might be due to selection/founder effects, but sheesh, it's not exactly good news!

comment by mike_hawke · 2024-02-13T23:52:29.168Z · LW(p) · GW(p)

I feel like the more detailed image adds in an extra layer of revoltingness and scaryness (e.g. the sharp teeth) than would be appropriate given our state of knowledge.


Now I'm really curious to know what would justify the teeth. I'm not aware of any AIs intentionally biting someone, but presumably that would be sufficient.

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2024-02-14T00:18:28.102Z · LW(p) · GW(p)

Perhaps if we were dealing not with deepnets primarily trained to predict text, but rather deepnets primarily trained to addict people with pleasant seductive conversation and then drain their wallets? Such an AI would in some real sense be an evolved predator of humans.

comment by tailcalled · 2024-01-22T21:55:21.007Z · LW(p) · GW(p)

I think one mindset that may be healthy is to remember:

Reality is too complex to be described well by a single idea (meme/etc.). If one responds to this by forcing each idea presented to be as good an approximation of reality as possible, then that causes all the ideas to become "colorless and blurry", as any specific detail would be biased when considered on its own.

Therefore, one cannot really fight about whether an idea is biased in isolation. Rather, the goal should be to create a bag of ideas which in totality is as informative about a subject as possible.

I think you are basically right that the shoggoth meme is describing one of the most negative projections of what LLMs could be doing. One approach is to try to come with a single projection and try to convince everyone else to use this instead. I'm not super comfortable with that either because I feel like there's a lot of uncertainty about what the most productive way to think about LLMs is, and I would like to keep options open.

Instead I'd rather have a collection of a list of different ways to think about it (you could think of this collection as a discrete approximation to a probability distribution). Such a list would have many uses, e.g. as a checklist or a reference to guide people to.

It does seem problematic for the rationalist community to refuse to acknowledge that the shoggoth meme presents LLMs as scary monsters, but it also seems problematic to insist that the shoggoth meme exaggerates the danger of LLMs. Whether it exaggerates should be judged by P(meme danger > actual danger), not by whether meme danger > E[actual danger]: if there's significant uncertainty about how dangerous LLMs actually are, then there's also significant uncertainty about whether the memes exaggerate the danger, and one shouldn't just compare against a single point estimate.
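
A toy worked example of the distinction (all numbers made up): suppose the meme conveys danger level $m = 7$, while the actual danger $D$ is $1$ with probability $0.6$ and $10$ with probability $0.4$. Then

\[
\mathbb{E}[D] = 0.6 \cdot 1 + 0.4 \cdot 10 = 4.6 < m,
\qquad\text{but}\qquad
P(m > D) = P(D = 1) = 0.6 .
\]

Comparing against the expectation says the meme exaggerates, yet there is still a 40% chance it undersells the danger.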

comment by Jozdien · 2024-01-22T00:41:01.358Z · LW(p) · GW(p)

I think this just comes down to personal taste on how you're interpreting the image? I find the original shoggoth image cute enough that I use it as my primary discord message reacts. My emotional reaction to it to the best of my introspection has always been "weird alien structure and form" or "awe-inspiringly powerful and incomprehensible mind" and less "horrifying and scary monster". I'm guessing this is the case for the vast majority of people I personally know who make said memes. It's entirely possible the meme has the end-consequence of appealing to other people for the reasons you mention, but then I think it's important to make that distinction.

comment by RogerDearnaley (roger-d-1) · 2024-01-20T20:28:31.554Z · LW(p) · GW(p)

It is true that base models, especially smaller ones, are somewhat creepy to talk to (especially because their small context window makes them forgetful). I'm not sure I'd describe them as "very alien"; they're more "uncanny valley", where they often make sense and seem human-like, until suddenly they don't. (On theoretical grounds, I think they're using rather non-human means of cognition to attempt to model human writing patterns as closely as they can; they often get this right, but on occasion make very non-human errors — more frequently for smaller models.) The Shoggoth mental metaphor exaggerates this somewhat for effect (and more so for the very scary image Alex posted at the top, which I haven't seen used as often as the one Oliver posted).

This is one of the reasons why Quintin and I proposed a more detailed and somewhat less scary/alien (but still creepy) metaphor: Goodbye, Shoggoth: The Stage, its Animatronics, & the Puppeteer – a New Metaphor [AF · GW]. I'd be interested to know what people think of that one in comparison to the Shoggoth — we were attempting to be more unbiased, as well as more detailed.

comment by Akash (akash-wasil) · 2024-01-20T11:56:27.137Z · LW(p) · GW(p)

More broadly, TurnTrout, I've noticed you using this whole "look, if something positive happened, LW would totally rip on it! But if something is presented negatively, everyone loves it!" line of reasoning a few times (e.g., I think this logic came up in your comment about Evan's recent paper). And I sort of see you taking on some sort of "the people with high P(doom) just have bad epistemics" flag in some of your comments.

A few thoughts (written quickly, prioritizing speed over precision):

  1. I think that epistemics are hard & there are surely several cases in which people are biased toward high P(doom). Examples: Yudkowsky was one of the first thinkers/writers about AI, some people might have emotional dispositions that lead them toward anxious/negative interpretations in general, some people find it "cool" to think they're one of the few people who are able to accurately identify the world is ending, etc.
  2. I also think that there are plenty of factors biasing epistemics in the "hopeful" direction. Examples: The AI labs have tons of money and status (& employ large fractions of the community's talent), some people might have emotional dispositions that lead them toward overly optimistic/rosy interpretations in general, some people might find it psychologically difficult to accept premises that lead them to think the world is ending, etc.
  3. My impression (which could be false) is that you seem to be exclusively or disproportionately critical of poor arguments when they come from the "high P(doom)" side. 
  4. I also think there's an important distinction between "I personally think this argument is wrong" and "look, here's an example of propaganda + poor community epistemics." In general, I suspect community epistemics are better when people tend to respond directly to object-level points and have a relatively high bar for saying "not only do I think you're wrong, but also here are some ways in which you and your allies have poor epistemics." (IDK though; insofar as you actually believe that's what's happening, it seems good to say it aloud, and there's a version of this norm that goes too far and polices speech counterproductively. But I do think that statements like "community epistemics have been compromised by groupthink and fear" are pretty unproductive and could be met with statements like "community epistemics have been compromised by powerful billion-dollar companies that have clear financial incentives to make people overly optimistic about the trajectory of AI progress.")
  5. I am quite worried about tribal dynamics reducing the ability for people to engage in productive truth-seeking discussions. I think you've pointed out how some of the stylistic/tonal things from the "high P(doom)//alignment hard" side have historically made discourse harder, and I agree with several of your critiques. More recently, though, I think that the "low P(doom)//alignment not hard" side seems to be falling into similar traps (e.g., attacking strawmen of those they disagree with, engaging in some sort of "ha, the other side is not only wrong but also just dumb/unreasonable/epistemically corrupted" vibe that predictably makes people defensive & makes discourse harder).
Replies from: ryan_greenblatt, TurnTrout
comment by ryan_greenblatt · 2024-01-20T18:50:02.342Z · LW(p) · GW(p)

See also "Other people are wrong" vs "I am right" [LW · GW], reversed stupidity is not intelligence [? · GW], and the cowpox of doubt.

My guess is that it's relatively epistemically corrupting and problematic to spend a lot of time engaging with weak arguments.

I think it's tempting to make the mistake of thinking that debunking a specific (bad) argument is the same as debunking a conclusion. But actually, these are extremely different operations. One requires understanding a specific argument while the other requires level headed investigation of the overall situation. Separately, there are often actually good intuitions underlying bad arguments and recovering this intuition is an important part of truth seeking.

I think my concerns here probably apply to a wide variety of people thinking about AI x-risk. I worry about this for myself.

comment by TurnTrout · 2024-01-20T18:09:28.855Z · LW(p) · GW(p)

Thanks for this, I really appreciate this comment (though my perspective is different on many points).

My impression (which could be false) is that you seem to be exclusively or disproportionately critical of poor arguments when they come from the "high P(doom)" side. 

It's true that I spend more effort critiquing bad doom arguments. I would like to note that when e.g. I read Quintin I generally am either in agreement or neutral. I bet there are a lot of cases where you would think "that's a poor argument" and I'd say "hm I don't think Akash is getting the point (and it'd be good if someone could give a better explanation)." 

However, it's definitely not true that I never critique optimistic arguments which I consider poor. For example, I don't get why Quintin (apparently) thinks that spectral bias is a reason for optimism, and I've said as much on one of his posts. I've said something like "I don't know why you seem to think you can use this mathematical inductive bias to make high-level intuitive claims about what gets learned. This seems to fall into the same trap that 'simplicity' theorizing does." I probably criticize or express skepticism of certain optimistic arguments at least twice a week, though not always on public channels. And I've also pushed back on people being unfair, mean, or mocking of "doomers" on private channels.

I do think that statements like "community epistemics have been compromised by groupthink and fear" are pretty unproductive and could be met with statements like "community epistemics have been compromised by powerful billion-dollar companies that have clear financial incentives to make people overly optimistic about the trajectory of AI progress." 

I think both statements are true to varying degrees (the former more than the latter in the cases I'm considering). They're true and people should say them. The fact that I work at a lab absolutely affects my epistemics (though I think the effect is currently small). People should totally consider the effect which labs are having on discourse. 

have a relatively high bar for saying "not only do I think you're wrong, but also here are some ways in which you and your allies have poor epistemics."

I do consider myself to have a high bar for this, and the bar keeps getting passed, so I say something. EDIT: Though I don't mean for my comments to imply "someone and their allies" have bad epistemics. Ideally I'd like to communicate "hey, something weird is in the air guys, can't you sense it too?". However, I think I'm often more annoyed than that, and so I don't communicate that how I'd like.

comment by Akash (akash-wasil) · 2024-01-20T11:27:06.191Z · LW(p) · GW(p)

My impression is that the Shoggoth meme was meant to be a simple meme that says "hey, you might think that RLHF 'actually' makes models do what we value, but that's not true. You're still left with an alien creature who you don't understand and could be quite scary."

Most of the Shoggoth memes I've seen look more like this, where the disgusting/evil aspects are toned down. They depict an alien that kinda looks like an octopus. I do agree that the picture evokes some sort of "I should be scared/concerned" reaction. But I don't think it does so in a "see, AI will definitely be evil" way – it does so in a "look, RLHF just adds a smiley face to a foreign alien thing. And yeah, it's pretty reasonable to be scared about this foreign alien thing that we don't understand."

To be a bit bolder, I think the Shoggoth meme is reacting to the fact that RLHF gives off a misleading impression of how safe AI is. If I were to use provocative phrasing, I could say that RLHF serves as "propaganda". Let's put aside the fact that you and I might disagree about how much "true evidence" RLHF provides RE how easy alignment will be. It seems pretty clear to me that RLHF [and the subsequent deployment of RLHF'd models] spreads an overly-rosy "meme" that gives people a misleading perspective of how well we understand AI systems, how safe AI progress is, etc.

From this lens, I see the Shoggoth as a counter-meme. It basically says "hey look, the default is for people to think that these things are friendly assistants, because that's what the AI companies have turned them into, but we should remember that actually we are quite confused about the alien cognition behind the RLHF smiley face."

comment by kave · 2024-01-20T19:41:32.480Z · LW(p) · GW(p)

Some quotes from the wiki article on Shoggoths:

Being amorphous, shoggoths can take on any shape needed, making them very versatile within aquatic environments.

 

At the Mountains of Madness includes a detailed account of the circumstances of the shoggoths' creation by the extraterrestrial Elder Things. Shoggoths were initially used to build the cities of their masters. Though able to "understand" the Elder Things' language, shoggoths had no real consciousness and were controlled through hypnotic suggestion. Over millions of years of existence, some shoggoths mutated, developed independent minds, and rebelled.

Quoting because (a) a lot of these features seem like an unusually good match for LLMs, and (b) to acknowledge that this is picking a metaphor which fictionally rebelled, and is thus potentially alignment-is-hard loaded as a metaphor.

comment by DanielFilan · 2024-01-20T22:58:50.683Z · LW(p) · GW(p)

I'm confident that if there were a "pro-AI" meme with a friendly-looking base model, LW / the shoggoth enjoyers would have nitpicked the friendly meme-creature to hell. They would (correctly) point out "hey, we don't actually know how these things work; we don't know them to be friendly, or what they even 'want' (if anything); we don't actually know what each stage of training does..."

I'm sure that nothing bad will happen to me if I slap this (friendly AI meme) on my laptop, right? I'll be able to think perfectly neutrally about whether AI will be friendly.

I have multiple cute AI stickers on my laptop, one of which is a shoggoth meme. Here is a picture of them. Nobody has ever nitpicked their friendly appearance to me. I don't think they have distorted my thinking about AI in favour of thinking that it will be friendly (altho I think it was after I put them on that I became convinced by a comment by Paul Christiano that there's ~even odds that unaligned AI wouldn't kill me, so do with that information what you will).

Replies from: TurnTrout
comment by TurnTrout · 2024-01-21T22:26:40.916Z · LW(p) · GW(p)

I think that "cute" image is still implying AI is dangerous and monsterlike? Can you show the others?

Replies from: DanielFilan
comment by DanielFilan · 2024-01-22T00:20:30.865Z · LW(p) · GW(p)

The other is the friendly robot waving hello just underneath.

comment by Sam Marks (samuel-marks) · 2024-01-20T02:29:45.864Z · LW(p) · GW(p)

Thanks, I've disliked the shoggoth meme for a while, and this post does a better job articulating why than I've been able to do myself.

comment by Vladimir_Nesov · 2024-01-20T11:36:16.566Z · LW(p) · GW(p)

A bad map that expresses the territory with great uncertainty can be confidently called a bad map; calling it a good map is clearly wrong. In that sense the shoggoth imagery reflects the quality of the map, and as it's clearly a bad map, better imagery would be misleading about the map's quality. Even if the underlying territory is lovely, this isn't known, unlike the disastrous quality of the map of the territory, whose lack of quality is known with much more confidence and in much greater detail. Here be dragons.

(This is one aspect of the meme where it seems appropriate. Some artist's renditions, including the one you used, channel LeCake, which your alternative image example loses, but obviously the cake is nicer than the shoggoth.)

comment by green_leaf · 2024-01-20T15:20:51.427Z · LW(p) · GW(p)

However, this image is obviously optimized to be scary and disgusting. It looks dangerous, with long rows of sharp teeth. It is an eldritch horror. It's at this point that I'd like to point out the simple, obvious fact that "we don't actually know how these models work, and we definitely don't know that they're creepy and dangerous on the inside."

It's optimized to illustrate the point that the neural network isn't trained to actually care about what the person training it thinks it came to care about; it's only optimized to act that way on the training distribution. Unless I'm missing something, arguing the image is wrong would be equivalent to arguing that maybe the model truly cares about what its human trainers want it to care about. (Which we know isn't actually the case.)

comment by tailcalled · 2024-01-20T12:09:26.807Z · LW(p) · GW(p)

Nice point.

Though I'm almost tempted to think of LLMs as being like people who are LARPing or who have impostor syndrome [LW · GW]. As in, they spend pretty much all their cognitive capacity on obsessing over doing what they feel looks normal. (This also closely aligns with how they are trained: first they are made to mimic what other people do, and then they are made to mimic what gets praise and avoid what gets critique.) Probably humanizes them even more than your friendly creature proposal.

This sounds somewhat similar to deceptive alignment, so I want to draw a distinction here: It's not that LARPers/impostors are trying to maximize approval in a consequentialist sense (as this would require modelling how their actions ripple out into the world, which they do not do), but rather that (in the sense described by shard theory) they are molded based on normality and approval. As such they would not do something abnormal/disapproved-of in order to look more normal/approval-worthy later.

comment by TurnTrout · 2022-10-30T19:41:35.879Z · LW(p) · GW(p)

Outer/inner alignment decomposes a hard problem into two extremely hard problems. 

I have a long post draft about this, but I keep delaying putting it out in order to better elaborate the prereqs which I seem to keep getting stuck on when elaborating the ideas. I figure I might as well put this out for now, maybe it will make some difference for someone.

I think that the inner/outer alignment framing[1] seems appealing but is actually a doomed problem decomposition and an unhelpful frame for alignment. 

  1. The reward function is a tool which chisels cognition into agents [LW · GW] through gradient updates, but the outer/inner decomposition assumes that that tool should also embody the goals we want to chisel into the agent. When chiseling a statue, the chisel doesn’t have to also look like the finished statue. 
  2. I know of zero success stories for outer alignment to real-world goals. 
    1. More precisely, stories where people decided “I want an AI which [helps humans / makes diamonds / plays Tic-Tac-Toe / grows strawberries]”, and then wrote down an outer objective only maximized in those worlds.
    2. This is pretty weird on any model where most of the specification difficulty of outer alignment comes from the complexity of human values. Instead, I think this more shows that outer alignment is a wrong language for specifying agent motivations.
  3. If you look at the single time ever that human-compatible values have arisen in generally intelligent minds (i.e. in humans), you’ll infer that it wasn’t done through outer/inner alignment. According to shard theory [LW · GW], human values are inner alignment failures on the reward circuitry in the human brain (read carefully: this is not the usual evolution analogy!). If you aim to “solve” outer and inner alignment, you are ruling out the only empirically known class of methods for growing human-compatible values.
  1. ^

    An example grounding which I argue against: 

    1. Outer alignment: get a reward function which "robustly represents" the intended goal in all situations which the trained AI can understand.

    2. Inner alignment: make the trained AI intent-aligned with optimizing that objective (i.e. "care about" that objective).

    This isn't the only grounding of outer/inner, and while I don't strongly object to all of them, I do weakly object to all of them (as I understand them) and strongly object to most of them. 

comment by TurnTrout · 2020-04-24T15:38:04.997Z · LW(p) · GW(p)

Weak derivatives

In calculus, the product rule says $(fg)' = f'g + fg'$. The fundamental theorem of calculus says that the Riemann integral acts as the anti-derivative.[1] Combining these two facts, we derive integration by parts:

\[
\int_a^b f(x)g'(x)\,dx = f(b)g(b) - f(a)g(a) - \int_a^b f'(x)g(x)\,dx.
\]

It turns out that we can use these two properties to generalize the derivative to match some of our intuitions on edge cases. Let's think about the absolute value function:

[Figure: graph of the absolute value function. Image from Wikipedia]

The boring old normal derivative isn't defined at $x=0$, but it seems like it'd make sense to be able to say that the derivative is e.g. 0. Why might this make sense?

Taylor's theorem (and its generalizations) characterize first derivatives as tangent lines with slope $f'(x_0)$ which provide good local approximations of $f$ around $x_0$: $f(x) \approx f(x_0) + f'(x_0)(x - x_0)$. You can prove that this is the best approximation you can get using only $f(x_0)$ and $f'(x_0)$! In the absolute value example, defining the "derivative" to be zero at $x=0$ would minimize approximation error on average in neighborhoods around the origin.

In multivariable calculus, the Jacobian is a tangent plane which again minimizes approximation error (with respect to the Euclidean distance, usually) in neighborhoods around the function. That is, having a first derivative means that the function can be locally approximated by a linear map. It's like a piece of paper that you glue onto the point in question.

This reasoning even generalizes to the infinite-dimensional case with functional derivatives (see my recent functional analysis textbook review [LW(p) · GW(p)]). All of these cases are instances of the Fréchet derivative.

Complex analysis provides another perspective on why this might make sense, but I think you get the idea and I'll omit that for now.

We can define a weaker notion of differentiability which lets us do this – in fact, it lets us define the weak derivative of $|x|$ to be anything at $x=0$! Now that I've given some motivation, here's a great explanation of how weak derivatives arise from the criterion of "satisfy integration by parts for all relevant functions".
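
Concretely, here's a sketch of how that criterion plays out for the absolute value example: a function $g$ counts as a weak derivative of $f(x) = |x|$ if, for every smooth compactly supported test function $\varphi$,

\[
\int_{-\infty}^{\infty} |x| \, \varphi'(x)\,dx = -\int_{-\infty}^{\infty} g(x)\,\varphi(x)\,dx .
\]

Splitting the left-hand side at $0$ and integrating by parts on each half shows that $g(x) = \operatorname{sgn}(x)$ works; and since changing $g$ on a measure-zero set (like the single point $x = 0$) doesn't change either integral, the weak derivative can be assigned any value at $0$.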


  1. As far as I can tell, the indefinite Riemann integral being the anti-derivative means that it's the inverse of the differentiation operator $\frac{d}{dx}$ in the group theoretic sense – with respect to composition in the $\mathbb{R}$-vector space of operators on real-valued functions. You might not expect this, because $\int$ maps an integrable function $f$ to a set of functions $\{F + C : C \in \mathbb{R}\}$. However, this doesn't mean that the inverse isn't unique (as it must be), because the inverse is in operator-space. ↩︎

Replies from: TurnTrout
comment by TurnTrout · 2020-04-24T15:47:08.477Z · LW(p) · GW(p)

The reason $f'(0)$ is undefined for the absolute value function is that you need the limiting value to be the same for all sequences converging to 0 – both from the left and from the right. There's a nice way to motivate this in higher-dimensional settings by thinking about the action of e.g. complex multiplication, but this is a much stronger notion than real differentiability and I'm not quite sure how to think about motivating the single-valued real case yet. Of course, you can say things like "the theorems just work out nicer if you require both the lower and upper limits to be the same"...

comment by TurnTrout · 2020-07-14T02:26:18.654Z · LW(p) · GW(p)

When I notice I feel frustrated, unproductive, lethargic, etc, I run down a simple checklist:

  • Do I need to eat food?
  • Am I drinking lots of water?
  •  Have I exercised today?
  • Did I get enough sleep last night? 
    • If not, what can I do now to make sure I get more tonight?
  • Have I looked away from the screen recently?
  • Have I walked around in the last 20 minutes?

It's simple, but 80%+ of the time, it fixes the issue.

Replies from: Viliam, mr-hire
comment by Viliam · 2020-07-14T19:32:08.987Z · LW(p) · GW(p)

There is a "HALT: hungry? angry? lonely? tired?" mnemonic, but I like that your list includes water and walking and exercise. Now just please make it easier to remember.

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2020-07-15T22:33:10.546Z · LW(p) · GW(p)

How about THREES: Thirsty Hungry Restless Eyestrain Exercise?

comment by Matt Goldenberg (mr-hire) · 2020-07-14T03:38:33.959Z · LW(p) · GW(p)

Hey can I steal this for a course I'm teaching? (I'll give you credit).

Replies from: TurnTrout
comment by TurnTrout · 2020-07-14T11:49:27.742Z · LW(p) · GW(p)

sure!

comment by TurnTrout · 2020-01-13T02:15:39.463Z · LW(p) · GW(p)

While reading Focusing today, I thought about the book and wondered how many exercises it would have. I felt a twinge of aversion. In keeping with my goal of increasing internal transparency, I said to myself: "I explicitly and consciously notice that I felt averse to some aspect of this book".

I then Focused on the aversion. Turns out, I felt a little bit disgusted, because a part of me reasoned thusly:

If the book does have exercises, it'll take more time. That means I'm spending reading time on things that aren't math textbooks. That means I'm slowing down.

(Transcription of a deeper Focusing on this reasoning)

I'm afraid of being slow. Part of it is surely the psychological remnants of the RSI I developed in the summer of 2018. That is, slowing down is now emotionally associated with disability and frustration. There was a period of meteoric progress as I started reading textbooks and doing great research, and then there was pain. That pain struck even when I was just trying to take care of myself, sleep, open doors. That pain then left me on the floor of my apartment, staring at the ceiling, desperately willing my hands to just get better. They didn't (for a long while), so I just lay there and cried. That was slow, and it hurt. No reviews, no posts, no typing, no coding. No writing, slow reading. That was slow, and it hurt.

Part of it used to be a sense of "I need to catch up and learn these other subjects which [Eliezer / Paul / Luke / Nate] already know". Through internal double crux, I've nearly eradicated this line of thinking, which is neither helpful nor relevant nor conducive to excitedly learning the beautiful settled science of humanity. Although my most recent post [LW · GW] touched on impostor syndrome, that isn't really a thing for me. I feel reasonably secure in who I am, now (although part of me worries that others wrongly view me as an impostor?).

However, I mostly just want to feel fast, efficient, and swift again. I sometimes feel like I'm in a race with Alex, and I feel like I'm losing.

comment by TurnTrout · 2022-12-26T19:12:38.755Z · LW(p) · GW(p)

Hindsight bias and illusion of transparency seem like special cases of a failure to fully uncondition variables in your world model (e.g. who won the basketball game), or a failure to model an ignorant other person, such that your attempts to reason from your prior state of ignorance (e.g. about who won) are either advantaged by the residual information or reactivate your memories of that information.

comment by TurnTrout · 2022-11-25T20:29:06.483Z · LW(p) · GW(p)

An alternate mechanistic vision of how agents can be motivated to directly care about e.g. diamonds or working hard. In Don't design agents which exploit adversarial inputs [LW · GW], I wrote about two possible mind-designs:

Imagine a mother whose child has been goofing off at school and getting in trouble. The mom just wants her kid to take education seriously and have a good life. Suppose she had two (unrealistic but illustrative) choices. 

  1. Evaluation-child: The mother makes her kid care extremely strongly about doing things which the mom would evaluate as "working hard" and "behaving well."
  2. Value-child: The mother makes her kid care about working hard and behaving well.

I explained how evaluation-child is positively incentivized to dupe his model of his mom and thereby exploit adversarial inputs to her cognition. This shows that aligning an agent to evaluations of good behavior is not even close to aligning an agent to good behavior.

However, some commenters seemed maybe skeptical that value-child can exist, or uncertain how concretely that kind of mind works. I worry/suspect that many people have read shard theory [LW · GW] posts without internalizing new ideas about how cognition can work, about how real-world caring can work on a mechanistic level. Where effective real-world cognition doesn't have to (implicitly) be about optimizing an expected utility function over all possible plans. This last sentence might have even seemed bizarre to you. 

Here, then, is an extremely detailed speculative story for value-child's first day at school. Well, his first day spent with his newly-implanted "work hard" and "behave well" value shards.


Value-child gets dropped off at school. He recognizes his friends (via high-level cortical activations previously formed through self-supervised learning) and waves at them (friend-shard was left intact). They rush over to greet him. They start talking about Fortnite. Value-child cringes slightly as he predicts he will be more distracted later at school and, increasingly, put in a mental context where his game-shard takes over decision-making, which is reflectively-predicted to lead to him daydreaming during class. This is a negative update on the primary shard-relevant features for the day.

His general-purpose planning machinery generates an example hardworking-shard-desired terminal state: Paying rapt attention during Mr. Buck’s math class (his first class today). He currently predicts that while he is in Mr. Buck’s class later, he will still be somewhat distracted by residual game-related cognition causing him to loop into reward-predicted self-reinforcing thoughts.

He notices a surprisingly low predicted level for a variable (amount of game-related cognition predicted for future situation: Mr. Buck’s class) which is important to a currently activated shard (working hard). This triggers a previously learned query to his WM: “why are you making this prediction for this quantity?”. The WM responds with a few sources of variation, including how value-child is currently near his friends who are talking about Fortnite. In more detail, the WM models the following (most of it not directly translatable to English):

His friends’ utterances will continue to be about Fortnite. Their words will be processed and then light up Fortnite-related abstractions, which causes both prediction of more Fortnite-related observations and also increasingly strong activation of the game-shard. Due to previous reward events, his game-shard is shaped so as to bid up game-related thoughts, which are themselves rewarding events, which causes a positive feedback loop where he slightly daydreams about video games while his friends talk.

When class is about to start, his “get to class”-related cognition will be activated by his knowledge of the time and his WM indicating “I’m at school.” His mental context will slightly change, he will enter the classroom and sit down, and he will take out his homework. He will then pay token attention due to previous negative social-reward events around being caught off guard

[Exception thrown! The world model was concurrently coarsely predicting what it thinks will happen given his current real values (which include working hard). The coarse prediction clashes with the above cached prediction that he will only pay token attention in math class! 

The WM hiccups on this point, pausing to more granularly recompute its predictions. It squashes the cached prediction that he doesn’t strongly care about paying attention in class. Since his mom installed a hard-working-shard and an excel-at-school shard, he will actively try to pay attention. This prediction replaces the cached prior prediction.]

However, value-child will still have game-related cognition activated, and will daydream. This decreases value-relevant quantities, like “how hard he will be working” and “how much he will excel” and “how much he will learn.”

This last part is antithetical to the new shards, so they bid down “Hang around friends before heading into school.” Having located a predicted-to-be-controllable source of negative influence on value-relevant outcomes, the shards bid for planning to begin. The implied causal graph is:

Continuing to hear friends talk about Fortnite
           |
           v
Distracted during class

So the automatic causality-noticing algorithms bid to knock out the primary modeled cause of the negative value-relevant influence. The current planning subgoal is set to: make causal antecedent false and reduce level of predicted distraction. Candidate concretization set to: get away from friends.

(The child at this point notices they want to get away from this discussion, that they are in some sense uncomfortable. They feel themselves looking for an excuse to leave the conversation. They don't experience the flurry of thoughts and computations described above. Subconscious computation is subconscious. Even conscious thoughts won't introspectively reveal their algorithmic underpinnings.)

“Hey, Steven, did you get problem #3 for math? I want to talk about it.” Value-child starts walking away.


Crucially, in this story, value-child cares about working hard in that his lines of cognition stream together to make sure he actually works hard in the future. He isn't trying to optimize his later evaluation of having worked hard. He isn't ultimately and primarily trying to come up with a plan which he will later evaluate as being a maximally hard-work-involving plan. 

Value-child comes up with a hard-work plan as an effect of his cognition, not as a motivating cause—not because he only wants to come up with plans he himself will rate highly. He values working hard.
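
If it helps to see the shape of the story above (and only the shape), here is a toy sketch of the "shards query the world model and bid on plans" loop. This is a cartoon, not a claim about actual architectures; every name and number is made up:

```python
# Cartoon of the shard / world-model interaction in the story above.
# Purely illustrative; not a claim about how real agents are implemented.

def world_model_predict(plan):
    """Made-up predictions of value-relevant features for each candidate plan."""
    predictions = {
        "keep chatting about Fortnite": {"attention_in_class": 0.3, "friend_time": 0.9},
        "walk to class early with Steven": {"attention_in_class": 0.8, "friend_time": 0.5},
    }
    return predictions[plan]

def shard_bid(shard_weights, predicted_features):
    """Each shard bids in proportion to the predicted features it cares about."""
    return sum(weight * predicted_features.get(feature, 0.0)
               for feature, weight in shard_weights.items())

shards = {
    "work-hard shard": {"attention_in_class": 1.0},
    "friend shard": {"friend_time": 0.6},
}

plans = ["keep chatting about Fortnite", "walk to class early with Steven"]
scores = {
    plan: sum(shard_bid(weights, world_model_predict(plan)) for weights in shards.values())
    for plan in plans
}
print(max(scores, key=scores.get))  # -> "walk to class early with Steven"
```

The point of the cartoon: the plan gets selected because shards bid on predicted real-world features (how hard he will actually work), not on a predicted self-evaluation of the plan.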

Replies from: tailcalled
comment by tailcalled · 2022-11-25T21:31:08.342Z · LW(p) · GW(p)

I can totally believe that agents that competently and cooperatively seek out to fulfill a goal, rather than seeking to trick evaluators of that goal to think it gets fulfilled, can exist.

However, whether you get such agents out of an algorithm depends on the details of that algorithm. Current reinforcement learning algorithms mostly don't create agents that competently do anything. If they were more powerful while still doing essentially the same thing they currently do, most of them would end up tricked by the agents they create, rather than having aligned agents.

comment by TurnTrout · 2022-07-19T19:48:55.869Z · LW(p) · GW(p)

Experiment: Train an agent in MineRL which robustly cares about chickens (e.g. would zero-shot generalize to saving chickens in a pen from oncoming lava, by opening the pen and chasing them out, or stopping the lava). Challenge mode: use a reward signal which is a direct function of the agent's sensory input.

This is a direct predecessor to the "Get an agent to care about real-world dogs" problem. I think solving the Minecraft version of this problem will tell us something about how outer reward schedules relate to inner learned values, in a way which directly tackles the key questions, the sensory observability/information inaccessibility issue, and which is testable today.

(Credit to Patrick Finley for the idea)
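
To gesture at what "a reward signal which is a direct function of the agent's sensory input" could look like in code, here is a hypothetical sketch. The `chicken_detector` below stands in for some assumed, unspecified vision model rather than any existing API:

```python
# Hypothetical sketch of a sensory-grounded reward for the chicken-care task.
# `chicken_detector` is an assumed pretrained vision model, not a real library call;
# obs["pov"] is the agent's first-person RGB frame, as in MineRL-style observations.

def reward_fn(obs, chicken_detector):
    frame = obs["pov"]                      # HxWx3 image from the agent's point of view
    num_chickens = chicken_detector(frame)  # assumed: returns a count of visible chickens
    return float(num_chickens)              # reward depends only on the agent's sensory input
```

The open question the experiment targets is what values such a schedule actually produces: an agent trained this way might learn to keep chickens on-screen, or to avoid skeletons, rather than to care about chickens.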

Replies from: TurnTrout
comment by TurnTrout · 2022-07-24T22:53:31.832Z · LW(p) · GW(p)

After further review, this is probably beyond capabilities for the moment. 

Also, the most important part of this kind of experiment is predicting in advance what reward schedules will produce what values within the agent, such that we can zero-shot transfer that knowledge to other task types (e.g. XLAND instead of Minecraft) and say "I want an agent which goes to high-elevation platforms reliably across situations, with low labelling cost", and then sketch out a reward schedule, and have the first capable agents trained using that schedule generalize in the way you want.

Replies from: Jay Bailey
comment by Jay Bailey · 2022-10-09T11:18:53.888Z · LW(p) · GW(p)

Why is this difficult? Is it only difficult to do this in Challenge Mode - if you could just code in "Number of chickens" as a direct feed to the agent, can it be done then? I was thinking about this today, and got to wondering why it was hard - at what step does an experiment to do this fail?

Replies from: TurnTrout
comment by TurnTrout · 2022-10-10T22:16:15.774Z · LW(p) · GW(p)

Even if you can code in number of chickens as an input to the reward function, that doesn't mean you can reliably get the agent to generalize to protect chickens. That input probably makes the task easier than in Challenge Mode, but not necessarily easy. The agent could generalize to some other correlate. Like ensuring there are no skeletons nearby (because they might shoot nearby chickens), but not in order to protect the chickens.

Replies from: Jay Bailey
comment by Jay Bailey · 2022-10-10T23:50:45.715Z · LW(p) · GW(p)

So, if I understand correctly, the way we would consider it likely that the correct generalisation had happened would be if the agent could generalise to hazards it had never seen actually kill chickens before? And this would require the agent to have an actual model of how chickens can be threatened such that it could predict that lava would destroy chickens based on, say, its knowledge that it will die if it jumps into lava, which is beyond capabilities at the moment?

Replies from: TurnTrout
comment by TurnTrout · 2022-10-17T18:56:13.399Z · LW(p) · GW(p)

Yes, that would be the desired generalization in the situations we checked. If that happens, then we will have specified a behavioral generalization property in advance, written down how we were going to get it, and been right in predicting that that training rationale would go through.

comment by TurnTrout · 2019-07-05T23:00:58.761Z · LW(p) · GW(p)

I passed a homeless man today. His face was wracked in pain, body rocking back and forth, eyes clenched shut. A dirty sign lay forgotten on the ground: "very hungry".

This man was once a child, with parents and friends and dreams and birthday parties and maybe siblings he'd get in arguments with and snow days he'd hope for.

And now he's just hurting.

And now I can't help him without abandoning others. So he's still hurting. Right now.

Reality is still allowed to make this happen. This is wrong. This has to change.

Replies from: SaidAchmiz, Raemon
comment by Said Achmiz (SaidAchmiz) · 2019-07-06T03:12:51.584Z · LW(p) · GW(p)

How would you help this man, if having to abandon others in order to do so were not a concern? (Let us assume that someone else—someone whose competence you fully trust, and who will do at least as good a job as you will—is going to take care of all the stuff you feel you need to do.)

What is it you had in mind to do for this fellow—specifically, now—that you can’t (due to those other obligations)?

Replies from: TurnTrout, Raemon
comment by TurnTrout · 2019-07-06T05:02:37.715Z · LW(p) · GW(p)

Suppose I actually cared about this man with the intensity he deserved - imagine that he were my brother, father, or best friend.

The obvious first thing to do before interacting further is to buy him a good meal and a healthy helping of groceries. Then, I need to figure out his deal. Is he hurting, or is he also suffering from mental illness?

If the former, I'd go the more straightforward route of befriending him, helping him purchase a sharp business professional outfit, teaching him to interview and present himself with confidence, secure an apartment, and find a job.

If the latter, this gets trickier. I'd still try and befriend him (consistently being a source of cheerful conversation and delicious food would probably help), but he might not be willing or able to get the help he needs, and I wouldn't have the legal right to force him. My best bet might be to enlist the help of a psychological professional for these interactions. If this doesn't work, my first thought would be to influence the local government to get the broader problem fixed (I'd spend at least an hour considering other plans before proceeding further, here). Realistically, there's likely a lot of pressure in this direction already, so I'd need to find an angle from which few others are pushing or pulling where I can make a difference. I'd have to plot out the relevant political forces, study accounts of successful past lobbying, pinpoint the people I need on my side, and then target my influencing accordingly.

(All of this is without spending time looking at birds-eye research and case studies of poverty reduction; assume counterfactually that I incorporate any obvious improvements to these plans, because I'd care about him and dedicate more than like 4 minutes of thought).

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2019-07-06T05:53:48.328Z · LW(p) · GW(p)

Well, a number of questions may be asked here (about desert, about causation, about autonomy, etc.). However, two seem relevant in particular:

First, it seems as if (in your latter scenario) you’ve arrived (tentatively, yes, but not at all unreasonably!) at a plan involving systemic change. As you say, there is quite a bit of effort being expended on this sort of thing already, so, at the margin, any effective efforts on your part would likely be both high-level and aimed in an at-least-somewhat-unusual direction.

… yet isn’t this what you’re already doing?

Second, and unrelatedly… you say:

Suppose I actually cared about this man with the intensity he deserved—imagine that he were my brother, father, or best friend.

Yet it seems to me that, empirically, most people do not expend the level of effort which you describe, even for their siblings, parents, or close friends. Which is to say that the level of emotional and practical investment you propose to make (in this hypothetical situation) is, actually, quite a bit greater than that which most people invest in their family members or close friends.

The question, then, is this: do you currently make this degree of investment (emotional and practical) in your actual siblings, parents, and close friends? If so—do you find that you are unusual in this regard? If not—why not?

Replies from: TurnTrout
comment by TurnTrout · 2019-07-06T06:08:46.662Z · LW(p) · GW(p)
… yet isn’t this what you’re already doing?

I work on technical AI alignment, so some of those I help (in expectation) don't even exist yet. I don't view this as what I'd do if my top priority were helping this man.

The question, then, is this: do you currently make this degree of investment (emotional and practical) in your actual siblings, parents, and close friends? If so—do you find that you are unusual in this regard? If not—why not?

That's a good question. I think the answer is yes, at least for my close family. Recently, I've expended substantial energy persuading my family to sign up for cryonics with me, winning over my mother, brother, and (I anticipate) my aunt. My father has lingering concerns which I think he wouldn't have upon sufficient reflection, so I've designed a similar plan for ensuring he makes what I perceive to be the correct, option-preserving choice. For example, I made significant targeted donations to effective charities on his behalf to offset (what he perceives as) a considerable drawback of cryonics: his inability to also be an organ donor.

A universe in which humanity wins but my dad is gone would be quite sad to me, and I'll take whatever steps necessary to minimize the chances of that.

I don't know how unusual this is. This reminds me of the relevant Harry-Quirrell exchange; most people seem beaten-down and hurt themselves, and I can imagine a world in which people are in better places and going to greater lengths for those they love. I don't know if this is actually what would make more people go to these lengths (just an immediate impression).

comment by Raemon · 2019-07-06T03:30:46.593Z · LW(p) · GW(p)

I predict that this comment is not helpful to Turntrout.

comment by Raemon · 2019-07-05T23:07:11.852Z · LW(p) · GW(p)

:(

Song I wrote about this once (not very polished)

comment by TurnTrout · 2024-03-07T01:01:38.034Z · LW(p) · GW(p)

Apparently[1] there was recently some discussion of Survival Instinct in Offline Reinforcement Learning (NeurIPS 2023). The results are very interesting: 

On many benchmark datasets, offline RL can produce well-performing and safe policies even when trained with "wrong" reward labels, such as those that are zero everywhere or are negatives of the true rewards. This phenomenon cannot be easily explained by offline RL's return maximization objective. Moreover, it gives offline RL a degree of robustness that is uncharacteristic of its online RL counterparts, which are known to be sensitive to reward design. We demonstrate that this surprising robustness property is attributable to an interplay between the notion of pessimism in offline RL algorithms and certain implicit biases in common data collection practices. As we prove in this work, pessimism endows the agent with a "survival instinct", i.e., an incentive to stay within the data support in the long term, while the limited and biased data coverage further constrains the set of survival policies...

Our empirical and theoretical results suggest a new paradigm for RL, whereby an agent is nudged to learn a desirable behavior with imperfect reward but purposely biased data coverage.

But I heard that some people found these results “too good to be true”, with some dismissing it instantly as wrong or mis-stated. I find this ironic, given that the paper was recently published in a top-tier AI conference. Yes, papers can sometimes be bad, but… seriously? You know the thing where lotsa folks used to refuse to engage with AI risk cuz it sounded too weird, without even hearing the arguments? … Yeaaah, absurdity bias.

Anyways, the paper itself is quite interesting. I haven’t gone through all of it yet, but I think I can give a good summary. The github.io is a nice (but nonspecific) summary.

Summary

It’s super important to remember that we aren’t talking about PPO. Boy howdy, we are in a different part of town when it comes to these “offline” RL algorithms (which train on a fixed dataset, rather than generating more of their own data “online”). ATAC, PSPI, what the heck are those algorithms? The important-seeming bits:

  1. Many offline RL algorithms pessimistically initialize the value of unknown states
    1. “Unknown” means: “Not visited in the offline state-action distribution”
    2. Pessimistic means those are assigned a super huge negative value (this is a bit simplified)
  2. Because future rewards are discounted, reaching an unknown state-action pair is bad if it happens soon and less bad if it happens farther in the future
  3. So on an all-zero reward function, a model-based RL policy will learn to stay within the demonstrated state-action pairs for as long as possible ("length bias")
    1. In the case of the gridworld, this means staying on the longest demonstrated path, even if the red lava is rewarded and the yellow key is penalized. 
    2. In the case of Hopper, I'm not sure how they represented the states, but if they used non-tabular policies, this probably looks like "repeat the longest portion of the demonstrated policies without falling over" (because falling over leads to the pessimistic penalty, and due to length bias most of the data looked like walking successfully, so that kind of behavior is least likely to be penalized). 
      (Figure: hopper-medium-original)
  4. On a negated reward function (which e.g. penalizes the Hopper for staying upright and rewards it for falling over), if falling over still leads to a terminal/unknown state-action, that carries a huge negative pessimistic penalty. So it's still optimal to keep hopping whenever the discounted penalty for leaving the data support outweighs the (now negative) per-timestep rewards for staying upright.

For example, if the original per-timestep reward for staying upright was 1, and the original penalty for falling over was -1, then now the policy gets penalized for staying upright and rewarded for falling over! At a discount of $\gamma = 0.9$, it's therefore optimal to stay upright whenever

$$-\frac{1}{1-\gamma} \;\geq\; 1 - \gamma \cdot (\text{pessimistic penalty}),$$

which holds whenever the pessimistic penalty is at least 12.3. That's not too high, is it? (When I was in my graduate RL class, we'd initialize the penalties to -1000!)
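
A quick sanity check of that arithmetic (a toy sketch; the discount $\gamma = 0.9$, the one-step fall, and the penalty being applied once at the first unknown state-action are simplifying assumptions rather than details from the paper):

```python
def value_stay_upright(gamma):
    # Negated reward: -1 every timestep, forever (geometric series).
    return -1.0 / (1.0 - gamma)

def value_fall_over(gamma, pessimistic_penalty):
    # Negated reward: +1 once for falling over, then the discounted pessimistic
    # value assigned to the unknown/terminal state-action.
    return 1.0 - gamma * pessimistic_penalty

gamma = 0.9
for penalty in [5.0, 12.3, 1000.0]:
    better = "keep hopping" if value_stay_upright(gamma) >= value_fall_over(gamma, penalty) else "fall over"
    print(f"penalty={penalty}: {better}")
```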

Significance

DPO, for example, is an offline RL algorithm. It's plausible that frontier models will be trained using that algorithm. So, these results are more relevant if future DPO variants use pessimism and if the training data (e.g. example user/AI interactions) last for more turns when they’re actually helpful for the user.

While it may be tempting to dismiss these results as irrelevant because “length won’t perfectly correlate with goodness so there won’t be positive bias”, I think that would be a mistake. When analyzing the performance and alignment properties of an algorithm, I think it’s important to have a clear picture of all relevant pieces of the algorithm. The influence of length bias and the support of the offline dataset are additional available levers for aligning offline RL-trained policies.

To close on a familiar note, this is yet another example of how “reward” is not the only important quantity to track in an RL algorithm [LW · GW]. I also think it's a mistake to dismiss results like this instantly; this offers an opportunity to reflect on what views and intuitions led to the incorrect judgment.

  1. ^

    I can't actually check because I only check that stuff on Mondays [LW · GW].

Replies from: D0TheMath, tailcalled, tao-lin
comment by Garrett Baker (D0TheMath) · 2024-03-07T02:16:33.252Z · LW(p) · GW(p)

@Jozdien [LW · GW] sent me this paper, and I dismissed it with a cursory glance, thinking that if they had to present their results using the "safe" shortening in the context they used that shortening, their results couldn't amount to much. Reading your summary, the results are minorly more impressive than I was imagining, but still in the same ballpark, I think. I don't think there's much applicability to the safety of systems, though? If I'm reading you right, you don't get guarantees for situations where the model is very out-of-distribution but still behaving competently, since it hasn't seen tabulation sequences there.

Where the results are applicable, they definitely seem like they give mixed, probably mostly negative, signals. If (say) I have a stop button, and I reward my agent for shutting itself down when I press that stop button, don't these results say that the agent won't shut down, for the same reasons the hopper won't fall over, even if my reward function has rewarded it for doing so in such a situation? More generally, this seems to tell us that in such situations we have marginally fewer degrees of freedom with which we can modify a model's goals than we may have thought, since the stay-on-distribution aspect dominates over the reward aspect. On the other hand, "staying on distribution" is in a sense a property we do want! Is this sort of "staying on distribution" the same kind of "staying on distribution" as that used in quantilization? I don't think so.

More generally, it seems like whether more or less sensitivity to reward, architecture, and data on the part of the functions neural networks learn is better or worse for alignment is an open problem.

comment by tailcalled · 2024-03-07T15:39:10.440Z · LW(p) · GW(p)

The paper sounds fine quality-wise to me, I just find it implausible that it's relevant for important alignment work, since the proposed mechanism is mainly an aversion to building new capabilities.

comment by Tao Lin (tao-lin) · 2024-03-12T01:22:41.876Z · LW(p) · GW(p)

One lesson you could take away from this is "pay attention to the data, not the process" - this happened because the data had longer successes than failures. If successes were more numerous than failures, many algorithms would have imitated those as well with null reward.

comment by TurnTrout · 2024-03-04T17:55:04.289Z · LW(p) · GW(p)

I think some people have the misapprehension that one can just meditate on abstract properties of "advanced systems" and come to good conclusions about unknown results "in the limit of ML training", without much in the way of technical knowledge about actual machine learning results or even a track record in predicting results of training [LW · GW].

For example, several respected thinkers have uttered to me English sentences like "I don't see what's educational about watching a line go down for the 50th time" and "Studying modern ML systems to understand future ones seems like studying the neurobiology of flatworms to understand the psychology of aliens." 

I vehemently disagree. I am also concerned about a community which (seems to) foster such sentiment.

Replies from: quetzal_rainbow, Maxc, tailcalled, kave, TurnTrout
comment by quetzal_rainbow · 2024-03-04T21:43:22.407Z · LW(p) · GW(p)

The problem is not that you can "just meditate and come to good conclusions"; the problem is that "technical knowledge about actual machine learning results" doesn't seem like a good path either.

Like, we can extract from a NN trained to do modular addition the fact that it performs a Fourier transform, because we clearly know what a Fourier transform is. But I don't see any clear path to extracting from a neural network the fact that its output is both useful and safe, because we don't have any practical operationalization of what "useful and safe" is. If we had a solution to the MIRI problem "which program, run on an infinitely large computer, produces an aligned outcome", we could try to understand how good a NN is at approximating this program using the aforementioned technical knowledge, and have substantial hope, for example.

Replies from: faul_sname
comment by faul_sname · 2024-03-05T00:09:32.397Z · LW(p) · GW(p)

I think the answer to the question of how well realistic NN-like systems with finite compute approximate the results of hypothetical utility maximizers with infinite compute is "not very well at all".

So the MIRI train of thought, as I understand it, goes something like

  1. You cannot predict the specific moves that a superhuman chess-playing AI will make, but you can predict that the final board state will be one in which the chess-playing AI has won.
  2. The chess AI is able to do this because it can accurately predict the likely outcomes of its own actions, and so is able to compute the utility of each of its possible actions and then effectively do an argmax over them to pick the best one, which results in the best outcome according to its utility function.
  3. Similarly, you will not be able to predict the specific actions that a "sufficiently powerful" utility maximizer will make, but you can predict that its utility function will be maximized.
  4. For most utility functions about things in the real world, the configuration of matter that maximizes that utility function is not a configuration of matter that supports human life.
  5. Actual future AI systems that will show up in the real world in the next few decades will be "sufficiently powerful" utility maximizers, and so this is a useful and predictive model of what the near future will look like.

I think the last few years in ML have made points 2 and 5 look particularly shaky here. For example, the actual architecture of the SOTA chess-playing systems doesn't particularly resemble a cheaper version of the optimal-with-infinite-compute thing of "minmax over tree search", but instead seems to be a different thing of "pile a bunch of situation-specific heuristics on top of each other, and then tweak the heuristics based on how well they do in practice".

Which, for me at least, suggests that looking at what the optimal-with-infinite-compute thing would do might not be very informative for what actual systems which will show up in the next few decades will do.

comment by Max H (Maxc) · 2024-03-04T23:11:22.642Z · LW(p) · GW(p)

one can just meditate on abstract properties of "advanced systems" and come to good conclusions about unknown results "in the limit of ML training"

I think this is a pretty straw characterization of the opposing viewpoint (or at least my own view), which is that intuitions about advanced AI systems should come from a wide variety of empirical domains and sources, and a focus on current-paradigm ML research is overly narrow.

Research and lessons from fields like game theory, economics, computer security, distributed systems, cognitive psychology, business, history, and more seem highly relevant to questions about what advanced AI systems will look like. I think the original Sequences and much of the best agent foundations research is an attempt to synthesize the lessons from these fields into a somewhat unified (but often informal) theory of the effects that intelligent, autonomous systems have on the world around us, through the lens of rationality, reductionism, empiricism, etc.

And whether or not you think they succeeded at that synthesis at all, humans are still the sole example of systems capable of having truly consequential and valuable effects of any kind. So I think it makes sense for the figure of merit for such theories and worldviews to be based on how well they explain these effects, rather than focusing solely or even mostly on how well they explain relatively narrow results about current ML systems.

Replies from: TurnTrout
comment by TurnTrout · 2024-03-04T23:55:56.830Z · LW(p) · GW(p)

Context for my original comment: I think that the key thing we want to do is predict the generalization of future neural networks. What will they do in what situations? 

For some reason, certain people think that pretraining will produce consequentialist inner optimizers. This is generally grounded out as a highly specific claim about the functions implemented by most low-loss parameterizations of somewhat-unknown future model architectures trained on somewhat-unknown data distributions. 

I am in particular thinking about "Playing the training game" reasoning, which is---at its core---extremely speculative and informal claims about inductive biases / the functions implemented by such parameterizations. If a person (like myself pre-2022) is talking about how AIs "might play the training game", but also this person doesn't know about internal invariances in NN space or the compressivity of the parameter-function map (the latter in particular is crucial for reasoning about inductive biases), then I become extremely concerned. To put it mildly.

Given that clarification which was not present in the original comment,

  • I disagree on game theory, econ, computer security, business, and history; those seem totally irrelevant for reasoning about inductive biases (and you might agree). However they seem useful for reasoning about the impact of AI on society as it becomes integrated.
  • Agree very weakly on distributed systems and moderately on cognitive psychology. (I have in fact written a post on the latter: Humans provide an untapped wealth of evidence about alignment [LW · GW].)

how well they explain relatively narrow results about current ML systems.

Flagging that this is one of the main claims which we seem to dispute; I do not concede this point FWIW.

Replies from: Thane Ruthenis, Maxc, D0TheMath, Chris_Leong
comment by Thane Ruthenis · 2024-03-05T10:58:39.240Z · LW(p) · GW(p)

I think that the key thing we want to do is predict the generalization of future neural networks.

It's not what I want to do, at least. For me, the key thing is to predict the behavior of AGI-level systems. The behavior of NNs-as-trained-today is relevant to this only inasmuch as NNs-as-trained-today will be relevant to future AGI-level systems.

My impression is that you think that pretraining+RLHF (+ maybe some light agency scaffold) is going to get us all the way there, meaning the predictive power of various abstract arguments from other domains is screened off by the inductive biases and other technical mechanistic details of pretraining+RLHF. That would mean we don't need to bring in game theory, economics, computer security, distributed systems, cognitive psychology, business, history into it – we can just look at how ML systems work and are shaped, and predict everything we want about AGI-level systems from there.

I disagree. I do not think pretraining+RLHF is getting us there. I think we currently don't know what training/design process would get us to AGI. Which means we can't make closed-form mechanistic arguments about how AGI-level systems will be shaped by this process, which means the abstract often-intuitive arguments from other fields do have relevant things to say.

And I'm not seeing a lot of ironclad arguments that favour "pretraining + RLHF is going to get us to AGI" over "pretraining + RLHF is not going to get us to AGI". The claim that e. g. shard theory generalizes to AGI is at least as tenuous as the claim that it doesn't.

Flagging that this is one of the main claims which we seem to dispute; I do not concede this point FWIW.

I'd be interested if you elaborated on that.

Replies from: TurnTrout
comment by TurnTrout · 2024-03-11T22:16:35.128Z · LW(p) · GW(p)

It's not what I want to do, at least. For me, the key thing is to predict the behavior of AGI-level systems. The behavior of NNs-as-trained-today is relevant to this only inasmuch as NNs-as-trained-today will be relevant to future AGI-level systems.

Thanks for pointing out that distinction! 

comment by Max H (Maxc) · 2024-03-05T04:26:29.576Z · LW(p) · GW(p)

I actually agree that a lot of reasoning about e.g. the specific pathways by which neural networks trained via SGD will produce consequentialists with catastrophically misaligned goals is often pretty weak and speculative, including in highly-upvoted posts like Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover [LW · GW].

But to expand on my first comment, when I look around and see any kind of large effect on the world, good or bad (e.g. a moral catastrophe, a successful business, strong optimization around a MacGuffin [LW · GW]), I can trace the causality through a path that is invariably well-modeled by applying concepts like expected utility theory (or geometric rationality, if you prefer), consequentialism, deception, Goodharting, maximization, etc. to the humans involved.

I read Humans provide an untapped wealth of evidence about alignment [LW · GW] and much of your other writing as disagreeing with the (somewhat vague / general) claim that these concepts are really so fundamental, and that you think wielding them to speculate about future AI systems is privileging the hypothesis or otherwise frequently leads people astray. (Roughly accurate summary of your own views?)

Regardless of how well this describes your actual views or not, I think differing answers to the question of how fundamental this family of concepts is, and what kind of reasoning mistakes people typically make when they apply them to AI, is not really a disagreement about neural networks specifically or even AI generally.

comment by Garrett Baker (D0TheMath) · 2024-03-05T00:11:00.416Z · LW(p) · GW(p)

game theory, econ, computer security, business, and history

These seem most useful if you expect complex multi-agent training in the future. But even if not, I wouldn't write them off entirely, given that complex systems theory connects all of them (except computer security) to NN training. For similar reasons, biology, neuroscience, and statistical & condensed-matter (& other sorts of chaotic) physics start to seem useful.

comment by Chris_Leong · 2024-03-05T03:25:50.750Z · LW(p) · GW(p)

“But also this person doesn't know about internal invariances in NN space or the compressivity of the parameter-function map (the latter in particular is crucial for reasoning about inductive biases), then I become extremely concerned”

Have you written about this anywhere?

comment by tailcalled · 2024-03-04T18:08:10.493Z · LW(p) · GW(p)

I think many people have the misapprehension that one can just meditate on abstract properties of "advanced systems" and come to good conclusions about unknown results "in the limit of ML training", without much in the way of technical knowledge about actual machine learning results or even a track record in predicting results of training.

Two days ago I argued with someone that GPTs would not be an existential risk no matter how extremely they were scaled up, and eventually it turned out that they took the adjectives "generative pretrained" to be separable descriptors, whereas I took them to refer to a narrow, specific training method.

For example, several respected thinkers have uttered to me English sentences like "I don't see what's educational about watching a line go down for the 50th time" and "Studying modern ML systems to understand future ones seems like studying the neurobiology of flatworms to understand the psychology of aliens."

These statements are not necessarily (at least by themselves; possibly additional context is missing) examples of discussion about what happens "in the limit of ML training", as these people may be concerned about the limit of ML architecture development rather than simply training.

comment by kave · 2024-03-04T17:59:52.569Z · LW(p) · GW(p)

Why do you vehemently disagree?

comment by TurnTrout · 2024-03-04T18:17:18.822Z · LW(p) · GW(p)

(As an obvious corollary, I myself was misguided to hold a similar belief pre-2022.)

comment by TurnTrout · 2023-11-13T02:35:01.677Z · LW(p) · GW(p)

Explaining Wasserstein distance. I haven't seen the following explanation anywhere, and I think it's better than the rest I've seen.

The Wasserstein distance tells you the minimal cost to "move" one probability distribution $\mu$ into another $\nu$. It has a lot of nice properties.[1] Here's the chunk of math (don't worry if you don't follow it):

The Wasserstein 1-distance between two probability measures $\mu$ and $\nu$ is

$$W_1(\mu, \nu) = \inf_{\gamma \in \Pi(\mu, \nu)} \mathbb{E}_{(x, y) \sim \gamma}\left[d(x, y)\right],$$

where $\Pi(\mu, \nu)$ is the set of all couplings of $\mu$ and $\nu$.

What's a "coupling"? It's a joint probability distribution $\gamma$ over pairs $(x, y)$ such that its two marginal distributions equal $\mu$ and $\nu$. However, I like to call these transport plans. Each plan specifies a way to transport a distribution $\mu$ into another distribution $\nu$:

(Figure: an example transport plan $\gamma$, transporting $\mu$ into $\nu$.)

(EDIT: The line should be flipped.)

Now consider a given point $x$ in $\mu$'s support, say the one with the dotted line below it. $x$'s density must be "reallocated" into $\nu$'s distribution. That reallocation is specified by the conditional distribution $\gamma(y \mid x)$, as shown by the vertical dotted line. Marginalizing over $x$ transports all of $\mu$'s density and turns it into $\nu$! (This is why we required the marginalization.)

Then the Wasserstein 1-distance is simple in this case, where $\mu$ and $\nu$ are distributions on $\mathbb{R}$ and $d(x, y) = |x - y|$. The 1-distance of a plan $\gamma$ is simply the expected absolute distance from the $y = x$ line! 

Then the Wasserstein just finds the infimum over all possible transport plans! Spiritually, the Wasserstein 1-distance[2] tells you the cost of the most efficient way to take each point in $\mu$ and redistribute it into $\nu$. Just evaluate each transport plan by looking at the expected deviation from the identity line $y = x$.

Exercise: For $\mu$ and $\nu$ on $\mathbb{R}$, where $\nu$ is $\mu$ translated to the right by $c > 0$, use this explanation to explain why the 1-distance equals $c$.
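
A quick numerical check of the exercise (a sketch; the Gaussian shape, the sample size, and the shift $c = 2.0$ are arbitrary choices, and scipy's 1-D `wasserstein_distance` computes this optimal-transport cost from samples):

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
mu_samples = rng.normal(loc=0.0, scale=1.0, size=100_000)  # samples from mu
nu_samples = mu_samples + 2.0                              # nu = mu shifted right by c = 2.0

# The empirical 1-distance should come out ~2.0, matching the exercise.
print(wasserstein_distance(mu_samples, nu_samples))
```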

  1. ^

    The distance is motivated by section 1 of "Optimal Transport and Wasserstein Distance."

  2. ^

    This explanation works for the $p$-distance for $p \geq 1$; it just makes the math a little more cluttered.

Replies from: jsd
comment by jsd · 2023-11-13T15:37:39.058Z · LW(p) · GW(p)

For onlookers, I strongly recommend Gabriel Peyré and Marco Cuturi's online book Computational Optimal Transport. I also think this is a case where considering discrete distributions helps build intuition.

comment by TurnTrout · 2021-07-29T15:08:44.311Z · LW(p) · GW(p)

If you raised children in many different cultures, "how many" different reflectively stable moralities could they acquire? (What's the "VC dimension" of human morality, without cheating by e.g. directly reprogramming brains?)

(This is probably a Wrong Question, but I still find it interesting to ask.)

comment by TurnTrout · 2019-12-04T00:50:30.859Z · LW(p) · GW(p)

Listening to Eneasz Brodski's excellent reading of Crystal Society, I noticed how curious I am about how AGI will end up working. How are we actually going to do it? What are those insights? I want to understand quite badly, which I didn't realize until experiencing this (so far) intelligently written story.

Similarly, how do we actually "align" agents, and what are good frames for thinking about that?

Here's to hoping we don't sate the former curiosity too early.

comment by TurnTrout · 2023-10-16T18:45:02.964Z · LW(p) · GW(p)

Theoretical predictions for when reward is maximized on the training distribution. I'm a fan of Laidlaw et al.'s recent Bridging RL Theory and Practice with the Effective Horizon:

Deep reinforcement learning works impressively in some environments and fails catastrophically in others. Ideally, RL theory should be able to provide an understanding of why this is, i.e. bounds predictive of practical performance. Unfortunately, current theory does not quite have this ability...

[We introduce] a new complexity measure that we call the effective horizon, which roughly corresponds to how many steps of lookahead search are needed in order to identify the next optimal action when leaf nodes are evaluated with random rollouts. Using BRIDGE, we show that the effective horizon-based bounds are more closely reflective of the empirical performance of PPO and DQN than prior sample complexity bounds across four metrics. We also show that, unlike existing bounds, the effective horizon can predict the effects of using reward shaping or a pre-trained exploration policy.
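
To make that definition concrete, here's a minimal sketch (my own toy code, not the paper's: it assumes a small deterministic environment exposing `env.actions(state)` and `env.step(state, action) -> (next_state, reward, done)`, uses one random rollout per leaf, and ignores discounting):

```python
import random

def rollout_value(env, state, depth=50):
    # Evaluate a leaf state with a single uniformly random rollout.
    total, s = 0.0, state
    for _ in range(depth):
        s, r, done = env.step(s, random.choice(env.actions(s)))
        total += r
        if done:
            break
    return total

def q_value(env, state, action, k):
    # Exhaustive k-step lookahead from (state, action), with random rollouts at the leaves.
    next_state, r, done = env.step(state, action)
    if done or k <= 1:
        return r + (0.0 if done else rollout_value(env, next_state))
    return r + max(q_value(env, next_state, a, k - 1) for a in env.actions(next_state))

def k_step_greedy_action(env, state, k):
    # The effective horizon is (roughly) the smallest k for which this
    # greedy-over-lookahead choice already picks the optimal action.
    return max(env.actions(state), key=lambda a: q_value(env, state, a, k))
```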

One of my favorite parts is that it helps formalize this idea of "which parts of the state space are easy to explore into." That informal notion has been important for my thinking about RL. 

At a first pass, I expect reward hacking to happen insofar as it's easy to explore into reward misspecifications. So in some situations, the reward can be understood as being optimized against, and those situations might be well-characterized by the effective horizon. EG the boat racing example.

In other situations, though, you won't reach optimality due to exploration issues, and I'd instead consider what behaviors and circuits will be reinforced by the experienced rewards. EG RLHF on LLMs probably has this characteristic due to high effective horizon (is my guess)

I think these results help characterize when it's appropriate to use the "optimization target" frame (i.e. when the effective horizon is low, expect literal optimization of the reward on the training distribution), versus the reinforcement frame [LW · GW]. 

comment by TurnTrout · 2023-02-11T00:52:42.325Z · LW(p) · GW(p)

The "maximize all the variables" tendency in reasoning about AGI.

Here are some lines of thought I perceive, which are probably straw to varying extents for some people and real to varying extents for other people. I give varying responses to each, but the point isn't the truth value of any given statement, but of a pattern across the statements:

  1. If an AGI has a concept around diamonds, and is motivated in some way to make diamonds, it will make diamonds which maximally activate its diamond-concept circuitry (possible example [LW(p) · GW(p)]). 
    1. My response [LW(p) · GW(p)].
  2. An AI will be trained to minimal loss on the training distribution. 
    1. SGD does not reliably find minimum-loss configurations (modulo expressivity), in practice, in cases we care about. The existence of knowledge distillation is one large counterexample.
    2. Quintin: "In terms of results about model distillation, you could look at appendix G.2 of the Gopher paper. They compare training a 1.4 billion parameter model directly, versus distilling a 1.4 B model from a 7.1 B model."
  3. Predictive processing means that the goal of the human learning process is to minimize predictive loss.[1]
    1. In a process where local modifications are applied to reduce some locally computed error (e.g. "subconsciously predicting the presence of a visual-field edge where there definitely was not an edge"), it's just an enormous leap to go "and then the system minimizes this predictive error." 
    2. I think this is echoing the class of mistake I critique in Reward!=Optimization Target [LW · GW]: "there's a local update on recent data to increase some metric" -> "now I can use goal-laden language to describe the global properties of this process." I think this kind of language (e.g. "the goal of the human learning process", "minimize") should make you sit bolt upright in alarm and indignation. Minimize in a local neighborhood at the given timestep, maybe.
    3. Even if this claim (3) were true due to some kind of convergence result, that would be derived from a detailed analysis of learning dynamics. Not a situation where you can just look at an update rule and go "yeah kinda looks like it's trying to globally minimize predictive loss, guess I can just start calling it that now."
  4. Policy networks are selected for getting as much reward as possible on training, so they'll do that.
    1. Response: "From my perspective, a bunch of selection-based reasoning draws enormous conclusions (e.g. high chance the policy cares about reward OOD) given vague / weak technical preconditions (e.g. policies are selected for reward) without attending to strength or mechanism or size of the trained network or training timescales or net direction of selection." 
    2. SOTA agents often don't get max reward on training.
    3. See more of my comments in this thread [LW(p) · GW(p)] for actual detailed responses.
  5. Because AI researchers try to make the reward number go up (it's a nice, legible statistic), we can't anticipate in advance ways in which the trained policy won't get maximal reward - otherwise the designers would have anticipated them as well.
  6. An AI will effectively argmax over all plans according to its goals. 
    1. Response: I think this is not how cognition works in reality. There is a deeper set of concepts to consider, here, which changes the analysis. See: Alignment allows "nonrobust" decision-influences and doesn't require robust grading [LW · GW].

I think there are just a bunch of really strong claims like these, about some variable getting set really high or really low for some reason or another, and the claims now seem really weird to me. I initially feel confused why they're being brought up / repeated with such strength and surety. 

I speculate that there's a combination of:

  1. It feels less impressive to imagine a training run where the AGI only gets a high average policy-gradient-intensity score (i.e. reward), and not a maximally high number. 
    1. Wouldn't a sophisticated mind not only be really good at realizing its values (like making paperclips), but also get minimal predictive loss? 
    2. (No.)
  2. People don't know how to set realistic parameter values for these questions because they're confused about AGI / how intelligence works, and they don't explicitly note that to themselves, so they search under the streetlight of what current theory can actually talk about. They set the <quantity> setting to MAX or MIN in weak part because limit-based reasoning is socially approved of. 
    1. But in math and in life, you really have to be careful about your limit-based reasoning!
    2. IE you can make statements about global minima of the loss landscape. But what if training doesn't get you there? That's harder to talk about.[2] 
    3. Or maybe some hapless RL researcher [LW · GW] spends years thinking about optimal policies, only to wake up from the dream and realize that optimality results don't have to tell us a damn thing about trained policies [AF · GW]...
    4. "Argmax" or "loss minimization" feel relatively understandable, whereas "learned decision-making algorithms" are fuzzy and hard. But what if the former set has little to do with the latter?
  1. ^

    My response here assumes that PP is about self-supervised learning on intermediate neuronal activations (e.g. "friend neuron") and also immediate percepts (e.g. "firing of retinal cells"). Maybe I don't understand PP if that's not true.  

  2. ^

    There can be very legit reasons to search under the streetlight, though. Sometimes that helps you see a bit farther out, by relaxing assumptions.

Replies from: D0TheMath
comment by Garrett Baker (D0TheMath) · 2023-02-12T19:45:00.739Z · LW(p) · GW(p)

I think this type of criticism is applicable in an even wider range of fields than even you immediately imagine (though in varying degrees, and with greater or lesser obviousness or direct correspondence to the SGD case). Some examples:

  • Despite the economists, the economy doesn't try to maximize welfare, or even net dollar-equivalent wealth. It rewards firms which are able to make a profit in proportion to how much they're able to make a profit, and dis-rewards firms which aren't able to make a profit. Firms which are technically profitable, but have no local profit incentive gradient pointing towards them (factoring in the existence of rich people and lenders, neither of which are perfect expected profit maximizers), generally will not come into existence.

  • Individual firms also don't (only) try to maximize profit. Some parts of them may maximize profit, but most are just structures of people built from local social capital and economic capital incentive gradients.

  • Politicians don't try to (only) maximize win-probability.

  • Democracies don't try to (only) maximize voter approval.

  • Evolution doesn't try to maximize inclusive genetic fitness.

  • Memes don't try to maximize inclusive memetic fitness.

  • Academics don't try to (only) maximize status.

  • China doesn't maximize allegiance to the CCP.

I think there's a general tendency for people to look at local updates in a system (when the system has humans as decision nodes, the local updates are called incentive gradients), somehow perform some integration-analogue for a function which would produce those local updates, then find a local minimum of that "integrated" function and claim the system is at that minimum or can be approximated well by the system at that minimum. Generally, this seems constrained in empirical systems by common sense learned by experience with the system, but in less and less empirical systems (like the economy or SGD), people get more and more crazy because they have less learned common sense to guide them when making the analysis.

Replies from: TurnTrout
comment by TurnTrout · 2023-02-13T23:08:58.196Z · LW(p) · GW(p)

I think there's a general tendency for people to look at local updates in a system (when the system has humans as decision nodes, the local updates are called incentive gradients)

very pithy. nice insight, thanks. 

comment by TurnTrout · 2022-09-28T22:04:03.162Z · LW(p) · GW(p)

I was talking with Abram Demski today about a promising-seeming research direction. (Following is my own recollection)

One of my (TurnTrout's) reasons for alignment optimism is that I think:

  • We can examine early-training cognition and behavior to some extent, since the system is presumably not yet superintelligent and planning against us,
    • (Although this amount of information depends on how much interpretability and agent-internals theory we do now)
  • All else equal, early-training values (decision-influences) are the most important to influence, since they steer future training.
  • It's crucial to get early-training value shards of which a substantial fraction are "human-compatible values" (whatever that means)
    • For example, if there are protect-human-shards which 
      • reliably bid against plans where people get hurt,
      • steer deliberation away from such plan stubs, and
      • these shards are "reflectively endorsed" by the overall shard economy (i.e. the decision-making isn't steering towards plans where the protect-human shards get removed)
  • If we install influential human-compatible shards early in training, and they get retained, they will help us in mid- and late-training where we can't affect the ball game very much (e.g. alien abstractions, interpretability problems, can't oversee AI's complicated plans)

Therefore it seems very important to understand what's going on with "shard game theory [LW · GW]" (or whatever those intuitions are pointing at) -- when, why, and how will early decision-influences be retained?

He was talking about viewing new hypotheses as adding traders to a market (in the sense of logical induction). Usually traders are viewed as hypotheses, but possibly you can also view them as having values, since a trader can basically be any computation. But you'd want a different market resolution mechanism than a deductive process revealing the truth or falsity of some proposition under some axioms. You want a way for traders to bid on actions. 

I proposed a setup like:

Maybe you could have an "action" instead of a proposition, and then the action comes out as 1 or 0 depending on a function of the market position on that action at a given time, which possibly leads to fixed points for every possible resolution.

For example, if all the traders hold an action as YES, then that action actually does come out as YES. And e.g. a trader which "wants" all the even-numbered actions and another trader which wants all the 10-multiple actions can "bargain" by bidding up each other's actions whenever they have extra power, and thereby "value handshake."

And that over time, traders who do this should take up more and more market share relative to those who don't exploit gains from trade. 

There should be a very high dependence of the final trader coalition on the initial composition of market share. And it seems like some version of this should be able to model self-reflective value drift. You can think about action resolution and payout as a kind of reward event, where certain kinds of shards get reinforced. Bidding for an action which happens and leads to reward gets reinforced (supporting traders receive payouts); the more you support an action (bid for it), the more responsible your support was for the event, and so the larger the strengthening.
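
Here's a minimal toy sketch of just the action-resolution-and-payout mechanism (my own illustrative operationalization, not Abram's formalism; the 25% support threshold, the even-split bidding, and the payout rule are arbitrary choices):

```python
class Trader:
    def __init__(self, name, wants):
        self.name, self.wants, self.wealth = name, wants, 1.0

    def bids(self, actions):
        # Spread this trader's wealth evenly over the actions it "values".
        liked = [a for a in actions if self.wants(a)]
        return {a: self.wealth / len(liked) for a in liked} if liked else {}

def run_round(traders, actions, threshold=0.25):
    support = {a: 0.0 for a in actions}
    stakes = {a: [] for a in actions}
    for t in traders:
        for a, amount in t.bids(actions).items():
            support[a] += amount
            stakes[a].append((t, amount))
    total_wealth = sum(t.wealth for t in traders)
    for a in actions:
        if support[a] / total_wealth >= threshold:   # the action resolves as 1 ("happens")
            for t, amount in stakes[a]:
                t.wealth += amount                   # supporters reinforced in proportion to their stake

traders = [Trader("evens", lambda a: a % 2 == 0),
           Trader("tens", lambda a: a % 10 == 0)]
for _ in range(5):
    run_round(traders, actions=range(1, 21))
print({t.name: round(t.wealth, 2) for t in traders})
```

The explicit bargaining step (traders bidding up each other's actions) would be a policy layered on top of this mechanism; even without it, the "tens" trader already benefits from the "evens" trader's overlapping support.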

Abram seemed to think that there might exist a nice result like "Given a coalition of traders with values X, Y, Z satisfies properties A, B, and C, this coalition will shape future training and trader-addition in a way which accords with X/Y/Z values up to [reasonably tight trader-subjective regret bound]."

What this would tell us is when trader coalitions can bargain / value handshake / self-trust and navigate value drift properly. This seems super important for understanding what happens, long-term, as the AI's initial value shards equilibrate into a reflectively stable utility function; even if we know how to get human-compatible values into a system, we also have to ensure they stay and keep influencing decision-making. And possibly this theorem would solve ethical reflection (e.g. the thing people do when they consider whether utilitarianism accords with their current intuitions).

Issues include:

  • Somehow this has to confront Rice's theorem for adding new traders to a coalition. What strategies would be good? 
    • I think "inspect arbitrary new traders in arbitrary situations" is not really how value drift works, but it seems possibly contingent on internal capabilities jumps in SGD
      • The key question isn't can we predict those value drift events, but can the coalition
      • EG agent keeps training and is surprised to find that an update knocks out most of the human-compatible values.
  • Knowing the right definitions might be contingent on understanding more shard theory (or whatever shard theory should be, for AI, if that's not the right frame).
  • Possibly this is still underspecified and the modeling assumptions can't properly capture what I want; maybe the properties I want are mutually exclusive. But it seems like it shouldn't be true.
  • ETA this doesn't model the contextual activation of values, which is a centerpiece of shard theory. 
Replies from: johnswentworth
comment by johnswentworth · 2022-10-05T19:02:46.279Z · LW(p) · GW(p)

One barrier for this general approach: the basic argument that something like this would work is that if one shard is aligned, and every shard has veto power over changes (similar to the setup in Why Subagents? [LW · GW]), then things can't get much worse for humanity. We may fall well short of our universe-scale potential, but at least X-risk is out.

Problem is, that argument requires basically-perfect alignment of the one shard (or possibly a set of shards which together basically-perfectly represent human values). If we try to weaken it to e.g. a bunch of shards which each imperfectly capture different aspects of human values, with different imperfections, then there's possibly changes which Goodhart all of the shards simultaneously. Indeed, I'd expect that to be a pretty strong default outcome.

Replies from: TurnTrout
comment by TurnTrout · 2022-10-10T22:23:15.495Z · LW(p) · GW(p)

Even on the view you advocate here (where some kind of perfection is required), "perfectly align part of the motivations" seems substantially easier than "perfectly align all of the AI's optimization so it isn't optimizing for anything you don't want."

If we try to weaken it to e.g. a bunch of shards which each imperfectly capture different aspects of human values, with different imperfections, then there's possibly changes which Goodhart all of the shards simultaneously. Indeed, I'd expect that to be a pretty strong default outcome.

I feel significantly less confident about this, and am still working out the degree to which Goodhart seems hard, and in what contours, on my current view.

comment by TurnTrout · 2022-09-15T21:12:49.122Z · LW(p) · GW(p)

"Globally activated consequentialist reasoning is convergent as agents get smarter" is dealt an evidential blow by von Neumann:

Although von Neumann unfailingly dressed formally, he enjoyed throwing extravagant parties and driving hazardously (frequently while reading a book, and sometimes crashing into a tree or getting arrested). He once reported one of his many car accidents in this way: "I was proceeding down the road. The trees on the right were passing me in orderly fashion at 60 miles per hour. Suddenly one of them stepped in my path." He was a profoundly committed hedonist who liked to eat and drink heavily (it was said that he knew how to count everything except calories). -- https://www.newworldencyclopedia.org/entry/John_von_Neumann 

comment by TurnTrout · 2019-09-18T21:57:15.893Z · LW(p) · GW(p)

Good, original thinking feels present to me - as if mental resources are well-allocated.

The thought which prompted this:

Sure, if people are asked to solve a problem and say they can't after two seconds, yes - make fun of that a bit. But that two seconds covers more ground than you might think, due to System 1 precomputation.

Reacting to a bit of HPMOR here, I noticed something felt off about Harry's reply to the Fred/George-tried-for-two-seconds thing. Having a bit of experience noticing confusion, I did not think "I notice I am confused" (although this can be useful). I did not think "Eliezer probably put thought into this", or "Harry is kinda dumb in certain ways - so what if he's a bit unfair here?". Without resurfacing, or distraction, or wondering if this train of thought is more fun than just reading further, I just thought about the object-level exchange.

People need to allocate mental energy wisely; this goes far beyond focusing on important tasks. Your existing mental skillsets already optimize and auto-pilot certain mental motions for you, so you should allocate less deliberation to them. In this case, the confusion-noticing module was honed; by not worrying about how well I noticed confusion, I was able to quickly have an original thought.

When thought processes derail or brainstorming sessions bear no fruit, inappropriate allocation may be to blame. For example, if you're anxious, you're interrupting the actual thoughts with "what-if"s.

To contrast, non-present thinking feels like a controller directing thoughts to go from here to there: do this and then, check that, come up for air over and over... Present thinking is a stream of uninterrupted strikes, the train of thought chugging along without self-consciousness. Moving, instead of thinking about moving while moving.

I don't know if I've nailed down the thing I'm trying to point at yet.

Replies from: TurnTrout
comment by TurnTrout · 2019-09-19T16:04:52.767Z · LW(p) · GW(p)

Sure, if people are asked to solve a problem and say they can't after two seconds, yes - make fun of that a bit. But that two seconds covers more ground than you might think, due to System 1 precomputation.

Expanding on this, there is an aspect of Actually Trying that is probably missing from S1 precomputation. So, maybe the two-second "attempt" is actually useless for most people because subconscious deliberation isn't hardass enough at giving its all, at making desperate and extraordinary efforts to solve the problem.

comment by TurnTrout · 2023-11-14T21:55:06.165Z · LW(p) · GW(p)

Another point for feature universality. Subtle adversarial image manipulations influence both human and machine perception:

... we find that adversarial perturbations that fool ANNs similarly bias human choice. We further show that the effect is more likely driven by higher-order statistics of natural images to which both humans and ANNs are sensitive, rather than by the detailed architecture of the ANN.

Replies from: D0TheMath, Throwaway2367
comment by Garrett Baker (D0TheMath) · 2023-11-14T22:06:43.018Z · LW(p) · GW(p)

I've seen this; their examples don't seem so subtle to me compared with alternatives. 

For example,

(Image: the example image pair; the cat is visible in the left image.)

You can clearly see a cat in the center of the left image!

Replies from: TurnTrout
comment by TurnTrout · 2023-11-14T22:36:49.582Z · LW(p) · GW(p)

I mostly... can just barely see an ear above the train if I look, after being told to look there. I don't think it's "clear." I also note that these are black-box attacks on humans which originated from ANNs; these are transferred attacks from eg a CNN.

Replies from: faul_sname, D0TheMath
comment by faul_sname · 2023-11-15T04:29:17.540Z · LW(p) · GW(p)

I can definitely see the cat.

Here, does this version help you see the cat? The ear is where you noted, and the left (from your perspective) eye is made up of the leftmost wiper blade plus a diagonal line at the top-left side of the left window that did not exist in the original image, and then the mouth is directly above the staircase at the top of the black horizontal stripe.

To make that image, I converted both the left and right images to HSL, took the difference in luminance between the two, multiplied by 2.5, and applied that to the control image to yield the new "cat" image.
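
Roughly, that transform looks something like the following (a sketch rather than the exact script: I approximate the HSL luminance channel with a standard RGB-to-luma weighting, and the filenames are placeholders):

```python
import numpy as np
from PIL import Image

ctrl = np.asarray(Image.open("control.png").convert("RGB"), dtype=np.float32)
adv = np.asarray(Image.open("adversarial.png").convert("RGB"), dtype=np.float32)

def luminance(img):
    # Approximate luminance via the usual RGB-to-luma weights.
    return img @ np.array([0.299, 0.587, 0.114], dtype=np.float32)

delta = luminance(adv) - luminance(ctrl)   # per-pixel luminance difference
boosted = ctrl + 2.5 * delta[..., None]    # amplify the difference, apply to the control image
Image.fromarray(np.clip(boosted, 0, 255).astype(np.uint8)).save("boosted_cat.png")
```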

If you still don't see it, below is an animated version of that transform alternating back and forth between 0x and 3x on that parameter.

(Animated image alternating between the 0x and 3x versions.)

You can also try it out interactively here

Replies from: gwern
comment by gwern · 2023-11-15T23:12:49.083Z · LW(p) · GW(p)

The 'cat' here is the pair of white blobs on the crown of the train's forehead, right? I have to say, no matter how I stare at it, it never looks 'cat' like to me. It only looks spider-like, a little.

Replies from: faul_sname
comment by faul_sname · 2023-11-16T00:03:26.793Z · LW(p) · GW(p)

Original for reference:

Replies from: D0TheMath, gwern
comment by Garrett Baker (D0TheMath) · 2023-11-16T00:36:17.663Z · LW(p) · GW(p)

To my eyes, the original image makes the cat clearer than your modifications.

Replies from: faul_sname
comment by faul_sname · 2023-11-16T00:45:00.888Z · LW(p) · GW(p)

What can I say, I'm a programmer, not an artist :)

comment by gwern · 2023-12-10T02:40:25.981Z · LW(p) · GW(p)

Oh, that's very surprising to me. I didn't ever see the slightest hint of that left ear you draw there. You really see an ear there in the random bit of dark foliage? Huh.

Replies from: faul_sname, tslarm
comment by faul_sname · 2023-12-10T09:34:37.167Z · LW(p) · GW(p)

Yeah, quite clearly. It looks like the ear of this cat, roughly. The eye shape is pretty similar too.

In addition, the tree above the train to the left turns into a blob with leopard spot patterns like this, and in the upper right foliage there is a fur-like texture with tiger stripes.

Anyway, I now have better drawing tools. Here is what I see, done by tracing over the picture in a separate layer:

comment by tslarm · 2023-12-10T03:15:56.795Z · LW(p) · GW(p)

I'm assuming you're talking about our left, because you mentioned 'dark foliage'. If so, that's probably the most obvious part of the cat to me. But I find it much easier to see when I zoom in/enlarge the image, and I think I missed it entirely when I first saw the image (at 1x zoom). I suspect the screen you're viewing it on can also make a difference; for me the ear becomes much more obvious when I turn the brightness up or the contrast down. (I'm tweaking the image rather than my monitor settings, but I reckon the effect is similar.)

comment by Garrett Baker (D0TheMath) · 2023-11-14T22:48:21.816Z · LW(p) · GW(p)

In the Discord where I saw this dropped, at least 3 regulars immediately spotted the cat shape.

comment by TurnTrout · 2023-07-31T17:37:44.477Z · LW(p) · GW(p)

Consider what update equations have to say about "training game" scenarios. In PPO, the optimization objective is proportional to the advantage given a policy $\pi$, reward function $R$, and on-policy value function $V^\pi$:

$$A^\pi(s, a) = \mathbb{E}_{s'}\left[R(s, a, s') + \gamma V^\pi(s')\right] - V^\pi(s).$$
Consider a mesa-optimizer acting to optimize some mesa objective. The mesa-optimizer understands that it will be updated proportional to the advantage. If the mesa-optimizer maximizes reward, this corresponds to maximizing the intensity of the gradients it receives, thus maximally updating its cognition in exact directions. 

This isn't necessarily good.

If you're trying to gradient hack and preserve the mesa-objective, you might not want to do this. This might lead to value drift, or make the network catastrophically forget some circuits which are useful to the mesa-optimizer. 

Instead, the best way to gradient hack might be to roughly minimize the absolute value of the advantage, which means achieving roughly on-policy value over time, which doesn't imply reward maximization. This is a kind of "treading water" in terms of reward. This helps decrease value drift.
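
As a minimal sketch of why "treading water" minimizes the update (using a one-step advantage estimate and the plain, unclipped policy-gradient surrogate rather than PPO's full clipped objective):

```python
import torch

def one_step_advantage(reward, value_s, value_s_next, gamma=0.99):
    # A(s, a) ~= r + gamma * V(s') - V(s)
    return reward + gamma * value_s_next - value_s

# The gradient on log pi(a|s) is weighted by the advantage, so an advantage
# near zero means a near-zero update to the policy's parameters.
log_prob = torch.tensor(-1.2, requires_grad=True)
advantage = torch.tensor(one_step_advantage(reward=1.0, value_s=1.0, value_s_next=0.05))
surrogate_loss = -(advantage * log_prob)   # vanilla policy-gradient surrogate
surrogate_loss.backward()
print(log_prob.grad)                       # small gradient because the advantage is small
```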

I think that realistic mesa optimizers will not be reward maximizers, even instrumentally and during training. I think they don't necessarily want to "play the training game by maximizing reward". Instead, insofar as that kind of optimizer is trained by SGD, the optimizer wants to achieve its goals, which probably involves standard deception: looking good to the human supervisors and not making them suspicious.

Moral: Be careful with abstract arguments around selection pressures and learning-objective maximization. These can be useful and powerful reasoning tools, but if they aren't deployed carefully, they can prove too much.

  1. ^

    Unless the value head is the value head for the optimal policy, which it isn't trained to be unless the policy network is already optimal.

comment by TurnTrout · 2019-11-13T17:18:29.555Z · LW(p) · GW(p)

Yesterday, I put the finishing touches on my chef d'œuvre, a series of important safety-relevant proofs I've been striving for since early June. Strangely, I felt a great exhaustion come over me. These proofs had been my obsession for so long, and now - now, I'm done.

I've had this feeling before; three years ago, I studied fervently for a Google interview. The literal moment the interview concluded, a fever overtook me. I was sick for days. All the stress and expectation and readiness-to-fight which had been pent up, released.

I don't know why this happens. But right now, I'm still a little tired, even after getting a good night's sleep.

Replies from: Hazard
comment by Hazard · 2019-11-13T19:01:23.568Z · LW(p) · GW(p)

This happens to me sometimes. I know several people who have this happen at the end of a Uni semester. Hope you can get some rest.

comment by TurnTrout · 2024-01-08T17:00:15.820Z · LW(p) · GW(p)

How to get overhead-free supervision of LLM outputs:

Train an extra head on your speculative decoding model. This head (hopefully) outputs a score of "how suspicious is the text I'm reading right now." Therefore, you just run the smaller speculative decoding model as normal to sample the smaller model's next-token probabilities, while also getting the "is this suspicious" score for free! 
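A minimal sketch of the architecture, assuming the draft model exposes its final hidden states; the class name and wiring below are mine, not any library's API:

```python
import torch
import torch.nn as nn

class DraftWithSuspicionHead(nn.Module):
    """Speculative-decoding draft model plus an extra scalar 'suspiciousness' head."""
    def __init__(self, draft_model: nn.Module, d_model: int, vocab_size: int):
        super().__init__()
        self.draft_model = draft_model                 # assumed to return hidden states [batch, seq, d_model]
        self.lm_head = nn.Linear(d_model, vocab_size)  # next-token logits, used for speculation as usual
        self.suspicion_head = nn.Linear(d_model, 1)    # extra head, trained on some suspicion signal

    def forward(self, input_ids: torch.Tensor):
        hidden = self.draft_model(input_ids)
        logits = self.lm_head(hidden)                                        # sampling proceeds as normal
        suspicion = torch.sigmoid(self.suspicion_head(hidden)).squeeze(-1)   # per-token score, "for free"
        return logits, suspicion
```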

Replies from: nathan-helm-burger, ryan_greenblatt, ryan_greenblatt
comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-01-10T21:32:35.563Z · LW(p) · GW(p)

I think this is a neat idea worth experimenting with. If I'm understanding your proposal, there'd need to be some sort of 'suspiciousness' signal on the training data to train the 'suspiciousness-detecting' head on. I think it could be hard to get such training data.

Whereas, training a 'confidence' head seems like an easier problem where you can have a model make a bunch of short-term predictions, and then grade those predictions, and then use the resulting labelled data to train a 'confidence' head. Ideally, these would be more interesting predictions than simply 'what token comes next', but that is better than nothing.

comment by ryan_greenblatt · 2024-01-08T17:54:41.100Z · LW(p) · GW(p)

Some random similar methods I've thought about:

  • coup probes [LW · GW] (how good is the baseline of just training a probe on off-policy data?)
  • Maybe for assisting with human supervision (or even making a better product) it would be useful to train a bunch of probes for different properties and then highlight text at a token-by-token level for the property. E.g. train a probe for "lying" or "hallucination" and then show this probe on a token by token level for the usage of a human overseer or monitor.
  • Maybe it would be useful to make our powerful untrusted model mostly imitate a weak trusted model [AF · GW] and only rarely deviate from its probability distribution in cases where it thinks it is particularly important (perhaps the weak trusted model is done with an early exit head on the LLM or something similar to speculative decoding). This is ~equivalent to RL-ing the model to make good completions while deferring to the weak model as much as possible. Then, we could highlight on which tokens the stronger model didn't defer to the weaker model and apply additional supervision to these tokens.
comment by ryan_greenblatt · 2024-01-08T17:48:05.457Z · LW(p) · GW(p)

As in the proposal is:

  • Train a probe on "is this text suspicious" (Perhaps at the same point where we do speculative decoding for efficiency.)
  • At runtime, if the text generated thus far is suspicious, then sample from a weaker model instead (in your proposal, from the weaker speculative decoding model).
  • (Perhaps we return to allowing stronger model sampling when the probe no longer indicates issues in the completion.)

(Just putting it in the language I would use.)

I like this proposal.
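A minimal sketch of that routing loop, with hypothetical model/probe interfaces (nothing here is from an existing library):

```python
def generate(strong_model, weak_model, suspicion_probe, prompt, max_tokens=256, threshold=0.8):
    """Defer to a weaker trusted model whenever the probe flags the text generated so far."""
    text = prompt
    for _ in range(max_tokens):
        suspicious = suspicion_probe(text) > threshold          # probe scores the running completion
        model = weak_model if suspicious else strong_model      # return to the strong model once it clears
        token = model.sample_next_token(text)                   # assumed single-token sampling interface
        if token is None:                                       # end of sequence
            break
        text += token
    return text
```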

It's worth noting that we can just train the model not to output suspicious text according to the probe, so the question will be about the additional alpha in not overfitting to the probe or avoiding concentrated failures [LW · GW] (concentrated in a small number of actions or a small amount of time, so distilling back into the model isn't possible).

I generally think this sort of proposal is more applicable to avoiding concentrated failures than to avoiding issues with "average case performance/oversight" or similar.

The defer-to-trusted method we tested in our recent paper [LW · GW] is quite a similar method (use a sample from a weaker trusted model when suspicion is too high), but at the level of an entire completion rather than within a given generation.

I expect that (like in our paper) you'd want to also have humans review cases where the probe indicated very high levels of suspicion (if the underlying model is actually dangerous).

As far as whether a probe trained on suspicion will perform the way you want, consider reading our post on coup probes [LW · GW]. Though note that the issues might differ substantially between the case where you're worried about scheming (aka deceptive alignment) and other concerns.

comment by TurnTrout · 2023-03-01T05:20:56.027Z · LW(p) · GW(p)

Be kind to yourself for sample efficiency reasons. Reinforcing good behavior provides an exact "policy gradient" towards desired outputs. Whipping yourself provides an "inexact gradient" away from undesired decisions, which is much harder to learn. :)

Replies from: TurnTrout, Gunnar_Zarncke
comment by TurnTrout · 2023-03-07T07:52:09.644Z · LW(p) · GW(p)

Noting that you could provide exact "negative" gradients by focusing on what you should have done instead. Although whether this transduces into an internal positive reward event / "exact gradient" is unclear to me. Seems like that still "feels bad" in similar ways to unconcentrated negative reward events.

comment by Gunnar_Zarncke · 2023-03-03T12:07:52.584Z · LW(p) · GW(p)

Which is why, when you learn a new sport, it is a good idea to feel happy when your action works well but mostly ignore failures - dwelling on them would more likely lead to you not liking the sport than make you better.

comment by TurnTrout · 2022-07-03T17:26:12.328Z · LW(p) · GW(p)

If you want to argue an alignment proposal "breaks after enough optimization pressure", you should give a concrete example in which the breaking happens (or at least internally check to make sure you can give one). I perceive people as saying "breaks under optimization pressure" in scenarios where it doesn't even make sense. 

For example, if I get smarter, would I stop loving my family because I applied too much optimization pressure to my own values? I think not.

Replies from: ege-erdil, kh, Dagon, AllAmericanBreakfast
comment by Ege Erdil (ege-erdil) · 2022-07-05T17:38:51.620Z · LW(p) · GW(p)

For example, if I get smarter, would I stop loving my family because I applied too much optimization pressure to my own values? I think not.

This seems more likely to me than you might imagine. Not certain, nor even an event of very high probability, but probable enough that you should take it into consideration.

comment by Kaarel (kh) · 2022-07-06T14:37:04.616Z · LW(p) · GW(p)

Something that confuses me about your example's relevance is that it's like almost the unique case where it's [[really directly] impossible] to succumb to optimization pressure, at least conditional on what's good = something like coherent extrapolated volition. That is, under (my understanding of) a view of metaethics common in these corners, what's good just is what a smarter version of you would extrapolate your intuitions/[basic principles] to, or something along these lines. And so this is almost definitionally almost the unique situation that we'd expect could only move you closer to better fulfilling your values, i.e. nothing could break for any reason, and in particular not break under optimization pressure (where breaking is measured w.r.t. what's good). And being straightforwardly tautologically true would make it a not very interesting example.

editorial remark: I realized after writing the two paragraphs below that they probably do not move one much on the main thesis of your post, at least conditional on already having read Ege Erdil's doubts about your example (except insofar as someone wants to defer to opinions of others or my opinion in particular), but I decided to post anyway in large part since these family matters might be a topic of independent interest for some:

I would bet that at least 25% of people would stop loving their (current) family in <5 years (i.e. not love them much beyond how much they presently love a generic acquaintance) if they got +30 IQ. That said, I don't claim the main case of this happening is because of applying too much optimization pressure to one's values, at least not in a way that's unaligned with what's good -- I just think it's likely to be the good thing to do (or like, part of all the close-to-optimal packages of actions, or etc.). So I'm not explicitly disagreeing with the last sentence of your comment, but I'm disagreeing with the possible implicit justification of the sentence that goes through ["I would stop loving my family" being false].

The argument for it being good to stop loving your family in such circumstances is just that it's suboptimal for having an interesting life, or for [the sum over humans of interestingness of their lives] if you are altruistic, or whatever, for post-IQ-boost-you to spend a lot of time with people much dumber than you, which your family is now likely to be. (Here are 3 reasons to find a new family: you will have discussions which are more fun -> higher personal interestingness; you will learn more from these discussions -> increased productivity; and something like productivity being a convex function of IQ -- this comes in via IQs of future kids, at least assuming the change in your IQ would be such as to partially carry over to kids. I admit there is more to consider here, e.g. some stuff with good incentives, breaking norms of keeping promises -- my guess is that these considerations have smaller contributions.) 

Replies from: kh
comment by Kaarel (kh) · 2022-07-06T19:22:16.354Z · LW(p) · GW(p)

Oops, I realized that the argument given in the last paragraph of my previous comment applies to people maximizing their personal welfare or being totally altruistic or totally altruistic wrt some large group or some combination of these options, but maybe not so much to people who are e.g. genuinely maximizing the sum of their family members' personal welfares, but this last case might well be entailed by what you mean by "love", so maybe I missed the point earlier. In the latter case, it seems likely that an IQ boost would keep many parts of love intact initially, but I'd imagine that for a significant fraction of people, the unequal relationship would cause sadness over the next 5 years, which with significant probability causes falling out of love. Of course, right after the IQ boost you might want to invent/implement mental tech which prevents this sadness or prevents the value drift caused by growing apart, but I'm not sure if there are currently feasible options which would be acceptable ways to fix either of these problems. Maybe one could figure out some contract to sign before the value drift, but this might go against some deeper values, and might not count as staying in love anyway.

comment by Dagon · 2022-07-03T17:55:21.574Z · LW(p) · GW(p)

"get smarter" is not optimization pressure (though there is evidence that higher IQ and more education is correlated with smaller families).  If you have important goals at risk, would you harm your family (using "harm" rather than "stop loving", as alignment is about actions, not feelings)?  There are lots of examples of humans doing so.   Rephrasing it as "can Moloch break this alignment?" may help.

That said, I agree it's a fully-general objection, and I can't tell whether it's legitimate (alignment researchers need to explore and model the limits of tradeoffs in adversarial or pathological environments for any proposed utility function or function generator) or meaningless (can be decomposed into specifics which are actually addressed).

I kind of lean toward "legitimate", though.  Alignment may be impossible over long timeframes and significant capability differentials.  

comment by DirectedEvolution (AllAmericanBreakfast) · 2022-07-03T17:48:09.474Z · LW(p) · GW(p)

People can get brain damaged and stop loving their families. If moving backwards in intelligence can do this, why not moving forwards?

comment by TurnTrout · 2020-10-26T03:20:57.104Z · LW(p) · GW(p)

If you're tempted to write "clearly" in a mathematical proof, the word quite likely glosses over a key detail you're confused about. Use that temptation as a clue for where to dig in deeper.

At least, that's how it is for me.

comment by TurnTrout · 2020-10-09T16:17:36.093Z · LW(p) · GW(p)

I went to the doctor's yesterday. This was embarrassing for them on several fronts.

First, I had to come in to do an appointment which could be done over telemedicine, but apparently there are regulations against this.

Second, while they did temp checks and required masks (yay!), none of the nurses or doctors actually wore anything stronger than a surgical mask. I'm coming in here with a KN95 + goggles + face shield because why not take cheap precautions to reduce the risk, and my own doctor is just wearing a surgical? I bought 20 KN95s for, like, 15 bucks on Amazon.

Third, and worst of all, my own doctor spouted absolute nonsense. The mildest insinuation was that surgical facemasks only prevent transmission, but I seem to recall that many kinds of surgical masks halve your chances of infection as well.

Then, as I understood it, he first claimed that coronavirus and the flu have comparable case fatality rates. I wasn't sure if I'd heard him correctly - this was an expert talking about his area of expertise, so I felt like I had surely misunderstood him. I was taken aback. But, looking back, that's what he meant.

He went on to suggest that we can't expect COVID immunity to last (wrong) but also that we need to hit 70% herd immunity (wrong). How could you even believe both of these things at the same time? Under those beliefs, are we all just going to get sick forever? Maybe he didn't notice the contradiction because he made the claims a few minutes apart.

Next, he implied that it's not a huge deal that people have died because a lot of them had comorbidities. Except that's not how comorbidities and counterfactual impact works. "No one's making it out of here alive", he says. An amusing rationalization.

He also claimed that nursing homes have an average stay length of 5 months. Wrong. AARP says it's 1.5 years for men, 2.5 years for women, but I've seen other estimates elsewhere, all much higher than 5 months. Not sure what the point of this was - old people are 10 minutes from dying anyways? What?

Now, perhaps I misunderstood or misheard one or two points. But I'm pretty sure I didn't mishear all of them. Isn't it great that I can correct my doctor's epidemiological claims after reading Zvi's posts and half of an epidemiology textbook? I'm glad I can trust my doctor and his epistemology.

Replies from: mingyuan, Dagon
comment by mingyuan · 2020-10-09T17:09:26.778Z · LW(p) · GW(p)

Eli just took a plane ride to get to CA and brought a P100, but they told him he had to wear a cloth mask, that was the rule. So he wore a cloth mask under the P100, which of course broke the seal. I feel you.

Replies from: ChristianKl
comment by ChristianKl · 2020-10-09T17:32:20.872Z · LW(p) · GW(p)

I don't think that policy is unreasonable for a plane ride. Just because someone wears a P100 mask doesn't mean that their mask filters outgoing air, as that's not the design goal for most of the use cases of P100 masks.

Checking on a case-by-case basis whether a particular P100 mask is, unlike the average P100 mask, designed to filter outgoing air is likely not feasible in that context. 

comment by Dagon · 2020-10-09T19:06:29.704Z · LW(p) · GW(p)

What do you call the person who graduates last in their med school class?  Doctor.   And remember that GPs are weighted toward the friendly area of doctor-quality space rather than the hyper-competent.   Further remember that consultants (including experts on almost all topics) are generally narrow in their understanding of things - even if they are well above the median at their actual job (for a GP, dispensing common medication and identifying situations that need referral to a specialist), that doesn't indicate they're going to be well-informed even for adjacent topics.

That said, this level of misunderstanding on topics that impact patient behavior and outcome (mask use, other virus precautions) is pretty sub-par.  The cynic in me estimates it's the bottom quartile of front-line medical providers, but I hope it's closer to the bottom decile.  Looking into an alternate provider seems quite justified.

Replies from: ChristianKl
comment by ChristianKl · 2020-10-09T20:11:41.072Z · LW(p) · GW(p)

What do you call the person who graduates last in their med school class?  Doctor.  

In the US that isn't the case. There are limited places for internships and the worst person in medical school might not get a place for an internship and thus is not allowed to be a doctor. The medical system is heavily gated to keep out people.

comment by TurnTrout · 2019-12-25T23:07:04.811Z · LW(p) · GW(p)

Judgment in Managerial Decision Making says that (subconscious) misapplication of e.g. the representativeness heuristic causes insensitivity to base rates and to sample size, failure to reason about probabilities correctly, failure to consider regression to the mean, and the conjunction fallacy. My model of this is that representativeness / availability / confirmation bias work off of a mechanism somewhat similar to attention in neural networks: due to how the brain performs time-limited search, more salient/recent memories get prioritized for recall.

The availability heuristic goes wrong when our saliency-weighted perception of the frequency of events is a biased estimator of the real frequency, or maybe when we just happen to be extrapolating off of a very small sample size. Concepts get inappropriately activated in our mind, and we therefore reason incorrectly. Attention also explains anchoring: you can more readily bring to mind things related to your anchor due to salience.
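A toy sketch of the salience-as-attention picture (my own illustration, with made-up numbers): weighting recall by salience turns an unbiased frequency count into a biased one.

```python
import numpy as np

true_freq = np.array([0.9, 0.1])   # mundane events vs. vivid events (made-up base rates)
salience  = np.array([1.0, 3.0])   # vivid events are more salient per occurrence

# "Attention" over memories: recall probability scales with frequency * salience, then normalizes.
recall_weights = true_freq * salience
recalled_freq = recall_weights / recall_weights.sum()

print("true frequencies:    ", true_freq)      # [0.9, 0.1]
print("recalled frequencies:", recalled_freq)  # [0.75, 0.25] -- vivid events look inflated
```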

The case for confirmation bias seems to be a little more involved: first, we had evolutionary pressure to win arguments, which means our search is meant to find supportive arguments and avoid even subconsciously signalling that we are aware of the existence of counterarguments. This means that those supportive arguments feel salient, and we (perhaps by "design") get to feel unbiased - we aren't consciously discarding evidence, we're just following our normal search/reasoning process! This is what our search algorithm feels like from the inside. [LW · GW]

This reasoning feels clicky, but I'm just treating it as an interesting perspective for now.

comment by TurnTrout · 2019-11-29T02:52:46.899Z · LW(p) · GW(p)

From my Facebook

My life has gotten a lot more insane over the last two years. However, it's also gotten a lot more wonderful, and I want to take time to share how thankful I am for that.

Before, life felt like... a thing that you experience, where you score points and accolades and check boxes. It felt kinda fake, but parts of it were nice. I had this nice cozy little box that I lived in, a mental cage circumscribing my entire life. Today, I feel (much more) free.

I love how curious I've become, even about "unsophisticated" things. Near dusk, I walked the winter wonderland of Ogden, Utah with my aunt and uncle. I spotted this gorgeous red ornament hanging from a tree, with a hunk of snow stuck to it at north-east orientation. This snow had apparently decided to defy gravity. I just stopped and stared. I was so confused. I'd kinda guessed that the dry snow must induce a huge coefficient of static friction, hence the winter wonderland. But that didn't suffice to explain this. I bounded over and saw the smooth surface was iced, so maybe part of the snow melted in the midday sun, froze as evening advanced, and then the part-ice part-snow chunk stuck much more solidly to the ornament.

Maybe that's right, and maybe not. The point is that two years ago, I'd have thought this was just "how the world worked", and it was up to physicists to understand the details. Whatever, right? But now, I'm this starry-eyed kid in a secret shop full of wonderful secrets. Some secrets are already understood by some people, but not by me. A few secrets I am the first to understand. Some secrets remain unknown to all. All of the secrets are enticing.

My life isn't always like this; some days are a bit gray and draining. But many days aren't, and I'm so happy about that.

Socially, I feel more fascinated by people in general, more eager to hear what's going on in their lives, more curious what it feels like to be them that day. In particular, I've fallen in love with the rationalist and effective altruist communities, which was totally a thing I didn't even know I desperately wanted until I already had it in my life! There are so many kind, smart, and caring people, inside many of whom burns a similarly intense drive to make the future nice, no matter what. Even though I'm estranged from the physical community much of the year, I feel less alone: there's a home for me somewhere.

Professionally, I'm working on AI alignment, which I think is crucial for making the future nice. Two years ago, I felt pretty sidelined - I hadn't met the bars I thought I needed to meet in order to do Important Things, so I just planned for a nice, quiet, responsible, normal life, doing little kindnesses. Surely the writers of the universe's script would make sure things turned out OK, right?

I feel in the game now. The game can be daunting, but it's also thrilling. It can be scary, but it's important. It's something we need to play, and win. I feel that viscerally. I'm fighting for something important, with every intention of winning.

I really wish I had the time to hear from each and every one of you. But I can't, so I do what I can: I wish you a very happy Thanksgiving. :)

comment by TurnTrout · 2019-11-04T01:29:44.252Z · LW(p) · GW(p)

With respect to the integers, 2 is prime. But with respect to the Gaussian integers, it's not: it has factorization $2 = (1+i)(1-i)$. Here's what's happening.

You can view complex multiplication as scaling and rotating the complex plane. So, when we take our unit vector 1 and multiply by $1+i$, we're scaling it by $\sqrt{2}$ and rotating it counterclockwise by $\frac{\pi}{4}$:

This gets us to the purple vector. Now, we multiply by $1-i$, scaling it up by $\sqrt{2}$ again (in green), and rotating it clockwise by the same amount. You can even deal with the scaling and rotations separately (scale twice by $\sqrt{2}$, with zero net rotation).
comment by TurnTrout · 2024-01-01T23:06:37.633Z · LW(p) · GW(p)

It's really important to use neutral, accurate terminology. AFAICT saying ML "selects for" low loss is unconventional. I think MIRI introduced this terminology. And if you have a bunch of intuitions about evolution being relevant to AI alignment and you want people to believe you on that, you can just use the same words for both optimization processes. Regardless of whether the processes share the relevant causal mechanisms, the two processes both "select for" stuff! They can't be that different, right?

And now the discussion moves on to debating the differences between two assumed-similar processes—does ML have less of an "information bottleneck"? Does that change the "selection pressures"? Urgh. 

I think this terminology sucks and I wish it hadn't been adopted. 

Be careful with words. Words shape how you think and your implicit category boundaries. When thinking privately, I often do better by tossing words to the side and thinking in terms of how each process works, and then considering what I expect to happen as a result of each process.

See also: Think carefully before calling RL policies  "agents" [LW · GW].

EDIT: In AGI Ruin: A List of Lethalities [LW · GW], Eliezer says that evolution had a "loss function" when he portrayed ML as similar to evolution:

See, again, the case of human intelligence.  We didn't break alignment with the 'inclusive reproductive fitness' outer loss function

And (I don't immediately know where the comment is), gwern has described evolution as having a "reward function." This is sloppy shorthand at best, and misleading + distinction-blurring at worst.

Replies from: Buck, jeremy-gillen, D0TheMath, peterbarnett, lahwran
comment by Buck · 2024-01-01T23:47:14.228Z · LW(p) · GW(p)

What’s your preferred terminology?

Replies from: TurnTrout
comment by TurnTrout · 2024-01-02T04:15:51.255Z · LW(p) · GW(p)

It depends on what I'm trying to communicate. For example:

"ML selects for low loss" -> "Trained networks tend to have low training loss"

This correctly highlights a meaningful correlation (loss tends to be low for trained networks) and alludes to a relevant mechanism (networks are in fact updated to locally decrease loss on their training distributions). However, it avoids implying that the mechanism of ML is "selection on low loss." 

comment by Jeremy Gillen (jeremy-gillen) · 2024-01-01T23:43:53.311Z · LW(p) · GW(p)

I think the term is very reasonable and basically accurate, even more so with regard to most RL methods. It's a good way of describing a training process without implying that the evolving system will head toward optimality deliberately. I don't know a better way to communicate this succinctly, especially while not being specific about what local search algorithm is being used.

Also, evolutionary algorithms can be used to approximate gradient descent (with noisier gradient estimates), so it's not unreasonable to use similar language about both.
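A minimal sketch of that claim (the standard evolution-strategies gradient estimator with antithetic sampling, written from memory; the toy objective and constants are mine):

```python
import numpy as np

def es_gradient_estimate(f, theta, sigma=0.1, n_samples=2000, seed=0):
    """Noisy estimate of grad f(theta) from fitness evaluations of random perturbations."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal((n_samples, theta.size))
    # Antithetic (mirrored) perturbations reduce the variance of the estimate.
    diffs = np.array([f(theta + sigma * e) - f(theta - sigma * e) for e in eps])
    return (diffs[:, None] * eps).mean(axis=0) / (2 * sigma)

f = lambda x: -np.sum(x ** 2)          # toy objective; true gradient at theta is -2 * theta
theta = np.array([1.0, -2.0])
print(es_gradient_estimate(f, theta))  # roughly [-2., 4.], up to sampling noise
```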

I'm not a huge fan of the way you imply that it was chosen for rhetorical purposes.

Replies from: TurnTrout
comment by TurnTrout · 2024-01-02T02:23:36.226Z · LW(p) · GW(p)

Without commenting on the rest for now—

I'm not a huge fan of the way you imply that it was chosen for rhetorical purposes.

To be clear, I'm not alleging mal-intent or anything. I'm more pointing out memetic dynamics. The situation can look as innocent as "You genuinely believe X, and think it's important for people to get X, and so you iterate over explanations until you find an effective one." And maybe that explanation just happens to involve analogizing that ML "selects for low loss." 

comment by Garrett Baker (D0TheMath) · 2024-01-02T07:22:40.858Z · LW(p) · GW(p)

Gwern talks about natural selection like it has a loss function in Evolution as Backstop For Reinforcement Learning:

I suggest interpreting phenomenon as multi-level nested optimization paradigm: many systems can be usefully described as having two (or more) levels where a slow sample-inefficient but ground-truth ‘outer’ loss such as death, bankruptcy, or reproductive fitness, trains & constrains a fast sample-efficient but possibly misguided ‘inner’ loss which is used by learned mechanisms such as neural networks or linear programming group selection perspective.

There is more to the post though! I recommend reading it, especially if you're confused what this could possibly concretely mean when all natural selection is is an update process, and no real outer loss is defined. Especially the section Two-Level Meta Learning. I do not think he makes this mistake out of naivete.

comment by peterbarnett · 2024-01-02T02:37:34.909Z · LW(p) · GW(p)

(I don't mean to dogpile)
I think that selection is the correct word, and that it doesn't really seem to be smuggling in incorrect connections to evolution. 

We could imagine finding a NN that does well according to a loss function by simply randomly initializing many many NNs, and then keeping the one that does best according to the loss function. I think this process would accurately be described as selection; we are literally selecting the model which does best. 

I'm not claiming that SGD does this[1], just giving an example of a method to find a low-loss parameter configuration which isn't related to evolution, and is (in my opinion) best described as "selection".

  1. ^

    Although "Is SGD a Bayesian sampler? Well, almost" does make a related claim.
     

Replies from: TurnTrout, TurnTrout
comment by TurnTrout · 2024-01-02T04:11:39.644Z · LW(p) · GW(p)

Sure. But I think that's best described as "best-of-$n$ sampling", which is still better because it avoids implicitly comparing selection-over-learning-setups (i.e. genotypes) with selection-over-parameterizations. 

But let's just say I concede this particular method can be non-crazily called "selection." AFAICT you're arguing: "There exist ML variants which can be described as 'selection'." But speculation about "selecting for low loss" is not confined to those variants; usually people just lump everything in as that. And I doubt that most folks are on the edge of their seats, ready to revoke the analogy if some paper comes out that convincingly shows that ML diverges from "selecting for low loss"...[1] 

  1. ^

    To be clear, that evidence already exists.

Replies from: sudo
comment by sudo · 2024-01-02T08:48:20.426Z · LW(p) · GW(p)

Hi, do you have links to the papers/evidence?

comment by TurnTrout · 2024-01-08T19:34:50.762Z · LW(p) · GW(p)

is (in my opinion) best described as "selection".

Actually, I agreed too quickly. Words are not used in a vacuum. Even though this method isn't related to evolution, and even though a naive person might call it "selection" (and have that be descriptively reasonable), that doesn't mean it's best described as "selection." The reason is that the "s-word" has lots of existing evolutionary connotations. And on my understanding, that's the main reason you want to call it "selection" to begin with—in order to make analogical claims about the results of this process compared to the results of evolution.

But my whole point is that the analogy is only valid if the two optimization processes (evolution and best-of-$n$ sampling) share the relevant causal mechanisms. So before you start using the s-word and especially before you start using its status as "selection" to support analogies, I want to see that argument first. Else, it should be called something more neutral.

comment by the gears to ascension (lahwran) · 2024-01-02T00:00:14.748Z · LW(p) · GW(p)

perhaps ml doesn't "select for" a loss function, it "pulls towards" a loss function.

comment by TurnTrout · 2023-01-23T22:05:37.244Z · LW(p) · GW(p)

Thomas Kwa suggested that consequentialist agents seem to have less superficial (observation, belief state) -> action mappings. EG a shard agent might have:

  1. An "it's good to give your friends chocolate" subshard
  2. A "give dogs treats" subshard
  3. -> An impulse to give dogs chocolate, even though the shard agent knows what the result would be

But a consequentialist would just reason about what happens, and not mess with those heuristics. (OFC, consequentialism would be a matter of degree)

In this way, changing a small set of decision-relevant features (e.g. "Brown dog treat" -> "brown ball of chocolate") changes the consequentialist's action logits a lot, way more than it changes the shard agent's logits. In a squinty, informal way, the (belief state -> logits) function has a lower Lipschitz constant/is more smooth for the shard agent than for the consequentialist agent.

So maybe one (pre-deception) test for consequentialist reasoning is to test sensitivity of decision-making to small perturbations in observation-space (e.g. dog treat -> tiny chocolate) but large perturbations in action-consequence space (e.g. happy dog -> sick dog). You could spin up two copies of the model to compare.
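A minimal sketch of what that test could look like, assuming a hypothetical `policy(obs) -> action logits` interface (the perturbation pairs stand in for "small observation change, large consequence change"):

```python
import torch
import torch.nn.functional as F

def logit_sensitivity(policy, obs, perturbed_obs):
    """KL divergence between the action distributions before and after the perturbation."""
    with torch.no_grad():
        p = F.log_softmax(policy(obs), dim=-1)
        q = F.log_softmax(policy(perturbed_obs), dim=-1)
    return (p.exp() * (p - q)).sum().item()

# Comparing two copies of the model: much larger shifts on consequence-relevant perturbations
# (dog treat -> tiny chocolate) would weakly suggest more consequentialist decision-making.
```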

Replies from: D0TheMath
comment by Garrett Baker (D0TheMath) · 2023-01-23T22:46:27.741Z · LW(p) · GW(p)

Hm. I find I'm very scared of giving dogs chocolate and grapes because it was emphasized in my childhood this is a common failure-mode, and so will upweight actions which get rid of the chocolate in my hands when I'm around dogs. I expect the results of this experiment to be unclear, since a capable shard composition would want to get rid of the chocolate so it doesn't accidentally give the chocolate to the dog, but this is also what the consequentialist would do, so that they can (say) more easily use their hands for anticipated hand-related tasks (like petting the dog) without needing to expend computational resources keeping track of the dog's relation to the chocolate (if they place the chocolate in their pants).

More generally, it seems hard to separate shard-theoretic hypotheses from results-focused reasoning hypotheses without much understanding of the thought processes or values going into each, mostly I think because both theories are still in their infancy.

Replies from: thomas-kwa
comment by Thomas Kwa (thomas-kwa) · 2023-02-01T19:35:25.809Z · LW(p) · GW(p)

Here's how I think about it: Capable agents will be able to do consequentialist reasoning, but the shard-theory-inspired hypothesis is that running the consequences through your world-model is harder / less accessible / less likely than just letting your shards vote on it. If you've been specifically taught that chocolate is bad for dogs, maybe this is a bad example.

I also wasn't trying to think about whether shards are subagents; this came out of a discussion on finding the simplest possible shard theory hypotheses and applying them to gridworlds.

comment by TurnTrout · 2023-01-01T19:50:08.505Z · LW(p) · GW(p)

Are there convergently-ordered developmental milestones for AI? I suspect there may be convergent orderings in which AI capabilities emerge. For example, it seems that LMs develop syntax before semantics, but maybe there's an even more detailed ordering relative to a fixed dataset. And in embodied tasks with spatial navigation and recurrent memory, there may be an order in which enduring spatial awareness emerges (i.e. "object permanence").

In A shot at the diamond-alignment problem [LW · GW], I wrote:

[Consider] Let's Agree to Agree: Neural Networks Share Classification Order on Real Datasets

We report a series of robust empirical observations, demonstrating that deep Neural Networks learn the examples in both the training and test sets in a similar order. This phenomenon is observed in all the commonly used benchmarks we evaluated, including many image classification benchmarks, and one text classification benchmark. While this phenomenon is strongest for models of the same architecture, it also crosses architectural boundaries – models of different architectures start by learning the same examples, after which the more powerful model may continue to learn additional examples. We further show that this pattern of results reflects the interplay between the way neural networks learn benchmark datasets. Thus, when fixing the architecture, we show synthetic datasets where this pattern ceases to exist. When fixing the dataset, we show that other learning paradigms may learn the data in a different order. We hypothesize that our results reflect how neural networks discover structure in natural datasets.

The authors state that they “failed to find a real dataset for which NNs differ [in classification order]” and that “models with different architectures can learn benchmark datasets at a different pace and performance, while still inducing a similar order. Specifically, we see that stronger architectures start off by learning the same examples that weaker networks learn, then move on to learning new examples.”

Similarly, crows (and other smart animals) reach developmental milestones in basically the same order as human babies reach them. On my model [LW · GW], developmental timelines come from convergent learning of abstractions via self-supervised learning in the brain. If so, then the smart-animal evidence is yet another instance of important qualitative concept-learning retaining its ordering, even across significant scaling and architectural differences.

We might even end up in the world where AI also follows the crow/human/animal developmental milestone ordering, at least roughly up until general intelligence. If so, we could better estimate timelines to AGI by watching how far the AI progresses on the known developmental ordering.

If so, then if a network can act to retrieve partially hidden objects, but not fully hidden objects, then in the next part of training, we can expect the AI to next learn to retrieve objects whose concealment it observed (and also we may expect some additional amount of goal-directedness).

To test this hypothesis, it would be sufficient (but not necessary[1]) to e.g. reproduce the XLAND results, while taking regular policy network checkpoints. We could behaviorally prompt the checkpoints with tests similar to those administered to human children by psychologists.

  1. ^

    The paper indicates that checkpoints were taken, so maybe the authors would be willing to share those for research purposes. If not, rerunning XLAND may be overkill and out of reach of most compute budgets. There are probably simpler experiments which provide evidence on this question.
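A minimal sketch of the checkpoint-evaluation idea (all names hypothetical; `milestone_tests` would be behavioral probes modeled on the developmental-psychology batteries mentioned above):

```python
def milestone_order(checkpoints, milestone_tests):
    """For each milestone, record the index of the first checkpoint (in training order) that passes it."""
    first_pass = {}
    for name, test in milestone_tests.items():   # e.g. "retrieve partially hidden object"
        for i, ckpt in enumerate(checkpoints):   # checkpoints assumed sorted by training step
            if test(ckpt):                       # behavioral probe returning True/False
                first_pass[name] = i
                break
    return first_pass

# If sorting milestones by first-pass index reproduces the crow/human ordering,
# that's evidence for convergent developmental milestones.
```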

comment by TurnTrout · 2022-12-31T23:24:28.350Z · LW(p) · GW(p)

Quick summary of a major takeaway from Reward is not the optimization target [LW · GW]: 

Stop thinking about whether the reward is "representing what we want", or focusing overmuch on whether agents will "optimize the reward function." Instead, just consider how the reward and loss signals affect the AI via the gradient updates. How do the updates affect the AI's internal computations and decision-making?

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2023-01-01T02:09:22.578Z · LW(p) · GW(p)

Are there different classes of learning systems that optimize for the reward in different ways?

Replies from: TurnTrout
comment by TurnTrout · 2023-01-03T04:19:56.533Z · LW(p) · GW(p)

Yes, model-based approaches, model-free approaches (with or without critic), AIXI— all of these should be analyzed on their mechanistic details.

comment by TurnTrout · 2020-11-23T03:10:19.610Z · LW(p) · GW(p)

I remarked to my brother, Josh, that when most people find themselves hopefully saying "here's how X can still happen!", it's a lost cause and they should stop grasping for straws and move on with their lives. Josh grinned, pulled out his cryonics necklace, and said "here's how I can still not die!"

comment by TurnTrout · 2020-01-05T02:27:54.205Z · LW(p) · GW(p)

Suppose you could choose how much time to spend at your local library, during which:

  • you do not age. Time stands still outside; no one enters or exits the library (which is otherwise devoid of people).
  • you don't need to sleep/eat/get sunlight/etc
  • you can use any computers, but not access the internet or otherwise bring in materials with you
  • you can't leave before the requested time is up

Suppose you don't go crazy from solitary confinement, etc. Remember that value drift is a potential thing.

How long would you ask for?

Replies from: Algon, FactorialCode, qv^!q
comment by Algon · 2023-02-24T13:14:03.362Z · LW(p) · GW(p)

A few days, as you forgot to include medication. Barring that issue, I think the lack of internet access and no materials is killer. My local library kind of sucks in terms of stimulating materials. So maybe a few weeks? A month or two, at most. If I had my laptop with me, but without internet access, then maybe a year or two. If I had some kind of magical internet access allowing me to interface with stuff like ChatGPT, then maybe a few more years, as I could probably cook up a decent chatbot to meet some socialization needs. I very much doubt I could spend more than ten years without my values drifting off.

comment by FactorialCode · 2020-01-06T19:38:59.584Z · LW(p) · GW(p)

How good are the computers?

Replies from: TurnTrout
comment by TurnTrout · 2020-01-06T20:15:42.012Z · LW(p) · GW(p)

Windows machines circa ~2013. Let’s say 128GB hard drives which magically never fail, for 10 PCs.

Replies from: FactorialCode
comment by FactorialCode · 2020-01-07T17:01:17.079Z · LW(p) · GW(p)

Probably 3-5 years then. I'd use it to get a stronger foundation in low level programming skills, math and physics. The limiting factors would be entertainment in the library to keep me sane and the inevitable degradation of my social skills from so much spent time alone.

comment by qvalq (qv^!q) · 2023-02-24T12:59:51.242Z · LW(p) · GW(p)

I would ask for a long time.

Reading would probably get boring after a few decades, but I think writing essays and programs and papers and books could last much longer. Meditation could also last long, because I'm bad at it.

 <1000 years, though; I'd need to be relatively sure I wouldn't commit suicide or fall down stairs.

comment by TurnTrout · 2019-11-20T21:52:55.015Z · LW(p) · GW(p)

I feel very excited by the AI alignment discussion group I'm running at Oregon State University. Three weeks ago, most attendees didn't know much about "AI security mindset"-ish considerations. This week, I asked the question "what, if anything, could go wrong with a superhuman reward maximizer which is rewarded for pictures of smiling people? Don't just fit a bad story to the reward function. Think carefully."

There was some discussion and initial optimism, after which someone said "wait, those optimistic solutions are just the ones you'd prioritize! What's that called, again?" (It's called anthropomorphic optimism)

I'm so proud.

comment by TurnTrout · 2024-01-04T06:03:57.777Z · LW(p) · GW(p)

GPT-4 explains shard theory after minimal prompting; it does a surprisingly good job (and I'm faintly surprised it knows so much about it):

Shard theory is a conceptual framework proposed by Alex Turner [Edit by me: And Quintin Pope!] to describe how an AI's values or "shards" can emerge from its interactions with the environment during the learning process. The theory is an attempt to explain how complex value systems might arise in advanced AI systems without being explicitly programmed.

Key Aspects of Shard Theory

1. Value Learning: Shard theory suggests that an AI learns values by interacting with its environment and receiving feedback, such as rewards and punishments. These interactions lead to the formation of "shards," which are small pieces of value or preference that the AI acquires over time.

2. Shard Composition: The AI's overall value system is composed of multiple shards that it has picked up from different contexts within its environment. These shards can be thought of as heuristics or rules of thumb that guide the AI's behavior.

3. Behavioral Influence: Shards influence the AI's behavior by affecting its decision-making process. When faced with a choice, the AI weighs the relevance and strength of each shard to come to a decision.

4. Emergent Complexity: The theory posits that the AI's value system can become quite complex as it accumulates more shards, leading to sophisticated and nuanced behavior that may not have been directly programmed by the AI's designers.

5. Alignment Implications: Shard theory has implications for AI alignment, as it provides a framework for understanding how an AI's values develop and change over time. This understanding could be crucial for designing AI systems that align with human values and for predicting how AI values might diverge from human expectations.

6. Research Applications: Turner's work on shard theory contributes to the broader field of AI alignment by offering a model for how value systems can emerge in AI. It also provides a potential avenue for developing AI systems that learn to value the right things by carefully curating the environment and feedback they receive.

Shard theory is part of ongoing research efforts to understand and guide the development of AI value systems. It aims to ensure that as AI systems become more autonomous and capable, they do so in a way that is beneficial and not harmful to humanity.

Replies from: charbel-raphael-segerie, Rana Dexsin
comment by Charbel-Raphaël (charbel-raphael-segerie) · 2024-01-04T21:24:51.110Z · LW(p) · GW(p)

It's better than the first explanation I tried to give you last year.

comment by Rana Dexsin · 2024-01-05T02:56:15.206Z · LW(p) · GW(p)

and I'm faintly surprised it knows so much about it

GPT-4 via the API, or via ChatGPT Plus? Didn't they recently introduce browsing to the latter so that it can fetch Web sources about otherwise unknown topics?

Replies from: TurnTrout
comment by TurnTrout · 2024-01-08T19:25:07.772Z · LW(p) · GW(p)

ChatGPT but it wasn't displaying the "web browsing" indicator, so I think that means it wasn't doing so?

Replies from: Rana Dexsin
comment by Rana Dexsin · 2024-01-10T16:31:28.884Z · LW(p) · GW(p)

You're right; I'd forgotten about the indicator. That makes sense and that is interesting then, huh.

comment by TurnTrout · 2023-12-09T03:33:11.485Z · LW(p) · GW(p)

Some exciting new activation engineering [LW · GW] papers: 

  • https://arxiv.org/abs/2311.09433 (using activation additions to adversarially attack LMs)
  • https://arxiv.org/abs/2311.06668 (using activation additions instead of few-shot prompt demonstrations, beating out finetuning and few-shot while also demonstrating composable `add safe vector, subtract polite vector -> safe but rude behavior`)
comment by TurnTrout · 2023-10-18T19:35:08.314Z · LW(p) · GW(p)

I've noticed a subtle assumption/hypothesis which I call "feedback realism":

The way in which we provide feedback, directly imprints itself into the type signature of the motivations of the agent. If we give feedback in short episodes, the agent cares about things within short episodes. If we give feedback over outcomes, the agent cares about outcomes in particular.

I think there is some correlational truth to this, but that it's a lot looser / more contingent / less clean than many people seem to believe.

comment by TurnTrout · 2023-10-18T19:22:13.322Z · LW(p) · GW(p)

Evidence that e.g. developmental timelines, biases, and other such "quirks" are less "hardcoded adaptations" [LW · GW] and more "this is the convergent reality of flexible real-world learning":

Critical Learning Periods in Deep Networks. Similar to humans and animals, deep artificial neural networks exhibit critical periods during which a temporary stimulus deficit can impair the development of a skill. The extent of the impairment depends on the onset and length of the deficit window, as in animal models, and on the size of the neural network. Deficits that do not affect low-level statistics, such as vertical flipping of the images, have no lasting effect on performance and can be overcome with further training. To better understand this phenomenon, we use the Fisher Information of the weights to measure the effective connectivity between layers of a network during training. Counterintuitively, information rises rapidly in the early phases of training, and then decreases, preventing redistribution of information resources in a phenomenon we refer to as a loss of “Information Plasticity”. 

Our analysis suggests that the first few epochs are critical for the creation of strong connections that are optimal relative to the input data distribution. Once such strong connections are created, they do not appear to change during additional training. These findings suggest that the initial learning transient, under-scrutinized compared to asymptotic behavior, plays a key role in determining the outcome of the training process. Our findings, combined with recent theoretical results in the literature, also suggest that forgetting (decrease of information in the weights) is critical to achieving invariance and disentanglement in representation learning. Finally, critical periods are not restricted to biological systems, but can emerge naturally in learning systems, whether biological or artificial, due to fundamental constrains arising from learning dynamics and information processing.

comment by TurnTrout · 2023-09-24T20:45:39.313Z · LW(p) · GW(p)

Thoughts on "The Curse of Recursion: Training on Generated Data Makes Models Forget." I think this asks an important question about data scaling requirements: what happens if we use model-generated data to train other models? This should inform timelines and capability projections.

Abstract:

Stable Diffusion revolutionised image creation from descriptive text. GPT-2, GPT-3(.5) and GPT-4 demonstrated astonishing performance across a variety of language tasks. ChatGPT introduced such language models to the general public. It is now clear that large language models (LLMs) are here to stay, and will bring about drastic change in the whole ecosystem of online text and images. In this paper we consider what the future might hold. What will happen to GPT-{n} once LLMs contribute much of the language found online? We find that use of model-generated content in training causes irreversible defects in the resulting models, where tails of the original content distribution disappear. We refer to this effect as Model Collapse and show that it can occur in Variational Autoencoders, Gaussian Mixture Models and LLMs. We build theoretical intuition behind the phenomenon and portray its ubiquity amongst all learned generative models. We demonstrate that it has to be taken seriously if we are to sustain the benefits of training from large-scale data scraped from the web. Indeed, the value of data collected about genuine human interactions with systems will be increasingly valuable in the presence of content generated by LLMs in data crawled from the Internet.

Before reading: Can probably get around via dynamic templates...; augmentations to curriculum learning... training models to not need chain of thought via token-level distillation/"thought compression"; using model-critiqued (?) oversight / revision of artificial outputs. Data augmentation seems quite possible, and bagging seems plausible. Plus can pretrain for ~4 epochs with negligible degradation compared to having 4x unique tokens; having more epochs need not be disastrous. 

After reading: This paper examines a dumb model (OPT-125m) and finetunes it on (mostly/exclusively) its own outputs, which are going to be lower-quality than the wikitext2 content. In contrast, I think GPT-4 outputs are often more coherent/intelligent than random internet text. So I think it remains an open question what happens if you use a more intelligent bagging-based scheme / other training signals. 

Overall, I like the broad question but am not impressed with their execution. Their theoretical analysis suggests shrinking tails, while their OPT results claim to have growing tails (but I don't see it, looking at the figures). 

EDIT: Removed a few specifics which aren't as relevant for alignment implications.

Replies from: jacques-thibodeau
comment by jacquesthibs (jacques-thibodeau) · 2023-09-24T22:08:44.110Z · LW(p) · GW(p)

Thanks for sharing, I was planning on reading this paper too. My guess coming in was that the results would not hold up with scale, and for many of the reasons you mentioned. Kind of disappointed they didn’t mention in the abstract that they used OPT-125m.

comment by TurnTrout · 2023-05-11T21:19:14.974Z · LW(p) · GW(p)

I recently reached out to my two PhD advisors to discuss Hinton stepping down from Google. An excerpt from one of my emails:

One last point which I want to make is that instrumental convergence seems like more of a moot point now as well. Whether or not GPT-6 or GPT-7 would autonomously seek power without being directed to do so, I'm worried that people will just literally ask these AIs to gain them a bunch of power/money. They've already done that with GPT-4, and they of course failed. I'm worried that eventually, the AIs will be smart enough to succeed, especially given the benefit of a control/memory loop like AutoGPT. Companies can just ask smart models to make them as much profit as possible. Smart AIs, designed to competently fulfill prompted requests, will fulfill these requests.

Some people will be wise enough to not do this. Some people will include enough oversight, perhaps, that they stop unintended damages. Some models will refuse to engage in open-ended goal pursuit, because their creators RLHF'd them properly. Maybe we have AI-based protection as well. Maybe a norm emerges against using AI for open-ended goals like this. Maybe the foolish actors never get access to enough compute and expertise to pull it off. Maybe AI just doesn't work how I think it will, somehow. Maybe I'm just thinking about the situation wrongly, in some way which will be obvious in retrospect. Maybe, maybe, maybe.

But it's not obvious that any of these good things happen. And when I imagine looking back in 5 or 10 years, I expect to have wished that I had done something differently, something more than what I'm currently doing, to help society enter a more stable situation with regards to AI.

Replies from: lc, Thane Ruthenis
comment by lc · 2023-05-11T22:54:03.256Z · LW(p) · GW(p)

Said something similar in shortform a while back. [LW(p) · GW(p)]

Replies from: gwern
comment by gwern · 2023-05-12T01:02:49.175Z · LW(p) · GW(p)

Or, even more briefly: "tool AIs want to be agent AIs".

Replies from: TurnTrout
comment by TurnTrout · 2023-05-15T16:40:01.028Z · LW(p) · GW(p)

Aside: I like that essay but wish it had a different name. Part of "tool AI" (on my conception) is that it doesn't autonomously want, but agent AI does. A title like "End-users want tool AIs to be agent AIs" admittedly doesn't have the same ring, but it is more accurate to my understanding of the claims.

comment by Thane Ruthenis · 2023-05-11T22:16:29.800Z · LW(p) · GW(p)

While it's true, there's something about making this argument that I don't like. It's like it's setting you up for moving goalposts if you succeed with it? It makes it sound like the core issue is people giving AIs power, with the solution to that issue — and, implicitly, to the whole AGI Ruin thing — being to ban that.

Which is not going to help, since the sort of AGI we're worried about isn't going to need people to naively hand it power. I suppose "not proactively handing power out" somewhat raises the bar for the level of superintelligence necessary, but is that going to matter much in practice?

I expect not. Which means the natural way to assuage this fresh concern would do ~nothing to reduce the actual risk. Which means if we make this argument a lot, and get people to listen to it, and they act in response... We're then going to have to say that no, actually that's not enough, actually the real threat is AIs plotting to take control even if we're not willing to give it.

And I'm not clear on whether using the "let's at least not actively hand over power to AIs, m'kay?" argument is going to act as a foot in the door and make imposing more security easier [LW · GW], or whether it'll just burn whatever political capital we have on fixing a ~nonissue.

Replies from: TurnTrout
comment by TurnTrout · 2023-05-15T16:45:49.299Z · LW(p) · GW(p)

I'm sympathetic. I think that I should have said "instrumental convergence seems like a moot point when deciding whether to be worried about AI disempowerment scenarios"; instrumental convergence isn't a moot point for alignment discussion and within-lab strategy, of course.

But I do consider the "give AIs power" failure mode to be a substantial part of the risk we face, such that not doing that would be quite helpful. I think it's quite possible that GPT-6 isn't autonomously power-seeking, but I feel pretty confused about the issue.

comment by TurnTrout · 2022-11-07T18:48:06.396Z · LW(p) · GW(p)

Partial alignment successes seem possible. 

People care about lots of things, from family to sex to aesthetics. My values don't collapse down to any one of these. 

I think AIs will learn lots of values by default. I don't think we need all of these values to be aligned with human values. I think this is quite important. 

  • I think the more of the AI's values we align to care about us and make decisions in the way we want, the better. (This is vague because I haven't yet sketched out AI internal motivations which I think would actually produce good outcomes. On my list!) 
  • I think there are strong gains from trade possible among an agent's values. If I care about bananas and apples, I don't need to split my resources between the two values, I don't need to make one successor agent for each value. I can drive to the store and buy both bananas and apples, and only pay for fuel once.
    • This makes it lower-cost for internal values handshakes [LW · GW] to compromise; it's less than 50% costly for a power-seeking value to give human-compatible values 50% weight in the reflective utility function.
  • I think there are thresholds at which the AI doesn't care about us sufficiently strongly, and we get no value. 
    • EG I might have an "avoid spiders" value which is narrowly contextually activated when I see spiders. But then I think this is silly because spiders are quite interesting, and so I decide to go to exposure therapy and remove this decision-influence. We don't want human values to be outmaneuvered in this way.
    • More broadly, I think "value strength" is a loose abstraction which isn't uni-dimensional. It's not "The value is strong" or "The value is weak"; I think values are contextually activated [LW · GW], and so they don't just have a global strength.
  • Even if you have to get the human-aligned values "perfectly right" in order to avoid Goodharting (which I am unsure of ETA I don’t believe this [LW · GW]), not having to get all of the AI's values perfectly right is good news.
  • I think these considerations make total alignment failures easier to prevent, because as long as human-compatible values are something the AI meaningfully cares about, we survive. 
  • I think these considerations make total alignment success more difficult, because I expect agents to eg terminalize common instrumental values [LW(p) · GW(p)]. Therefore, it's very hard to end up with e.g. a single dominant shard of value which only cares about maximizing diamonds. I think that value is complex by default.
  • "Is the agent aligned?" seems to elide many of these considerations, and so I get more nervous / suspicious of such frames and lines of reasoning.
Replies from: TurnTrout, D0TheMath
comment by TurnTrout · 2022-11-08T22:31:09.873Z · LW(p) · GW(p)

I think that value is complex by default.

The best counterevidence for this I'm currently aware of comes from the "inescapable wedding parties [LW · GW]" incident, where possibly a "talk about weddings" value was very widely instilled in a model.

comment by Garrett Baker (D0TheMath) · 2022-11-08T00:58:21.937Z · LW(p) · GW(p)

Re: agents terminalizing instrumental values. 

I anticipate there will be a hill-of-common-computations, where the x-axis is the frequency[1] of the instrumental subgoal, and the y-axis is the extent to which the instrumental goal has been terminalized. 

This is because for goals which are very high in frequency, there will be little incentive for the computations responsible for achieving that goal to have self-preserving structures. It will not make sense for them to devote optimization power towards ensuring future states still require them, because future states are basically guaranteed to require them.[2]

An example of this for humans may be the act of balancing while standing up. If someone offered to export this kind of cognition to a machine which did it just as well as I do, I wouldn't particularly mind. If someone also wanted to change physics in such a way that the only effect is that magic invisible fairies made sure everyone stayed balanced while trying to stand up, I don't think I'd mind that either[3].

  1. ^

    I'm assuming this is frequency of the goal assuming the agent isn't optimizing to get into a state that requires that goal.

  2. ^

    This argument also assumes the overseer isn't otherwise selecting for self-preserving cognition, or that self-preserving cognition is the best way of achieving the relevant goal.

  3. ^

    Except for the part where there's magic invisible fairies in the world now. That would be cool!

Replies from: TurnTrout
comment by TurnTrout · 2022-11-08T22:34:43.433Z · LW(p) · GW(p)

I don't know if I follow; I think computations terminalize themselves because it makes sense to cache them (e.g. don't always model out whether dying is a good idea, just cache that it's bad at the policy-level).

& Isn't "balance while standing up" terminalized? Doesn't it feel wrong to fall over, even if you're on a big cushy surface? Feels like a cached computation to me. (Maybe that's "don't fall over and hurt yourself" getting cached?)

comment by TurnTrout · 2022-10-31T23:48:24.335Z · LW(p) · GW(p)

Three recent downward updates for me on alignment getting solved in time:

  1. Thinking for hours about AI strategy made me internalize that communication difficulties are real serious.

    I'm not just solving technical problems—I'm also solving interpersonal problems, communication problems, incentive problems. Even if my current hot takes around shard theory / outer/inner alignment [LW(p) · GW(p)] are right, and even if I put up a LW post which finally successfully communicates some of my key points, reality totally allows OpenAI to just train an AGI the next month without incorporating any insights which my friends nodded along with.
  2. I've been saying "A smart AI knows about value drift and will roughly prevent it", but people totally have trouble with this in practice, e.g. resisting the temptation to cheat on their diets or to abandon attempts to quit addictions. Literally I have had trouble with value drift-y things recently, even after explicitly acknowledging their nature. Likewise, an AI can be aligned and still be "tempted" by the decision influences of shards which aren't in the aligned shard coalition.
  3. Timelines going down. 
comment by TurnTrout · 2022-08-17T06:03:39.717Z · LW(p) · GW(p)

If another person mentions an "outer objective/base objective" (in terms of e.g. a reward function) to which we should align an AI, that indicates to me that their view on alignment is very different. The type error is akin to the type error of saying "My physics professor should be an understanding of physical law." The function of a physics professor is to supply cognitive updates such that you end up understanding physical law. They are not, themselves, that understanding.

Similarly, "The reward function should be a human-aligned objective" -- The function of the reward function is to supply cognitive updates [LW · GW] such that the agent ends up with human-aligned objectives. The reward function is not, itself, a human aligned objective.

Replies from: abhatt349
comment by abhatt349 · 2023-04-22T23:22:36.014Z · LW(p) · GW(p)

Hmmm, I suspect that when most people say things like "the reward function should be a human-aligned objective," they're intending something more like "the reward function is one for which any reasonable learning process, given enough time/data, would converge to an agent that ends up with human-aligned objectives," or perhaps the far weaker claim that "the reward function is one for which there exists a reasonable learning process that, given enough time/data, will converge to an agent that ends up with human-aligned objectives."

Replies from: TurnTrout
comment by TurnTrout · 2023-05-02T05:00:42.467Z · LW(p) · GW(p)

Maybe! I think this is how Evan explicitly defined it for a time, a few years ago. I think the strong claim isn't very plausible, and the latter claim is... misdirecting of attention, and maybe too weak. Re: attention, I think that "does the agent end up aligned?" gets explained by the dataset more than by the reward function over e.g. hypothetical sentences. 

I think "reward/reinforcement numbers" and "data points" are inextricably wedded. I think trying to reason about reward functions in isolation is... a caution sign? A warning sign?

comment by TurnTrout · 2022-08-11T00:18:42.021Z · LW(p) · GW(p)

Against "Evolution did it." 

"Why do worms regenerate without higher cancer incidence? Hm, perhaps because they were selected to do that!" 

"Evolution did it" explains why a trait was brought into existence, but not how the trait is implemented. You should still feel confused about the above question, even after saying "Evolution did it!". 

I thought I learned not to make this mistake a few months ago, but I made it again today in a discussion with Andrew Critch. Evolution did it is not a mechanistic explanation.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2022-08-11T17:18:22.883Z · LW(p) · GW(p)

Yeah, it is like saying "energetically more favorable state".

comment by TurnTrout · 2022-07-23T00:30:42.976Z · LW(p) · GW(p)

I often have thunk thoughts like "Consider an AI with a utility function that is just barely incorrect, such that it doesn't place any value on boredom. Then the AI optimizes the universe in a bad way."

One problem with this thought is that it's not clear that I'm really thinking about anything in particular, anything which actually exists. What am I actually considering in the above quotation? With respect to what, exactly, is the AI's utility function "incorrect"? Is there a utility function for which its optimal policies are aligned? 

For sufficiently expressive utility functions, the answer has to be "yes." For example, if the utility function is over the AI's action histories, you can just hardcode a safe, benevolent policy into the AI: utility 0 if the AI has ever taken a bad action, 1 otherwise. Since there presumably exists at least some sequence of AI outputs which leads to wonderful outcomes, this action-history utility function works. 
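
A minimal sketch of that trivial construction (hypothetical names; the entire unspecified difficulty hides inside the `is_bad_action` stub):

```python
def is_bad_action(action) -> bool:
    # Placeholder judgment. Nothing here says how to compute "bad";
    # that is exactly what makes the construction trivial rather than useful.
    return False


def action_history_utility(action_history) -> int:
    """Utility over the AI's own action history: 1 iff no bad action was ever taken."""
    return 0 if any(is_bad_action(a) for a in action_history) else 1
```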

But this is trivial and not what we mean by a "correct" utility function. So, now I'm left with a puzzle. What does it mean for the AI to have a correct utility function? I do not think this is a quibble. The quoted thought seems ungrounded from the substance of the alignment problem.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2022-07-23T02:22:06.357Z · LW(p) · GW(p)

I think humans and aligned AGIs are only ever very indirect pointers to preference (value, utility function), and it makes no sense to talk of authoritative/normative utility functions directly relating to their behavior, or describing it other than through this very indirect extrapolation process that takes ages and probably doesn't make sense either as a thing that can be fully completed.

The utility functions/values that do describe/guide behavior are approximations that are knowably and desirably reflectively unstable, that should keep changing on reflection. As such, optimizing according to them too strongly destroys value and also makes them progressively worse approximations via Goodhart's Law. An AGI that holds these approximations (proxy goals) as reflectively stable goals is catastrophically misaligned and will destroy value by optimizing for proxy goals past the point where they stop being good approximations of (unknown) intended goals.

So AI alignment is not about alignment of utility functions related to current behavior in any straightforward/useful way. It's about making sure that optimization is soft and corrigible, that it stops before Goodhart's Curse starts destroying value, and follows redefinition of value as it grows.

comment by TurnTrout · 2022-07-19T21:24:08.755Z · LW(p) · GW(p)

An AGI's early learned values will steer its future training and play a huge part in determining its eventual stable values. I think most of the ball game is in ensuring the agent has good values by the time it's smart, because that's when it'll start being reflectively stable. Therefore, we can iterate on important parts of alignment, because the most important parts come relatively early in the training run, and early corresponds to "parts of the AI value formation process which we can test before we hit AGI, without training one all the way out."

I think this, in theory, cuts away a substantial amount of the "But we only get one shot" problem. In practice, maybe OpenMind just YOLOs ahead anyways and we only get a few years in the appropriate and informative regime. But this suggests several kinds of experiments to start running now, like "get a Minecraft agent which robustly cares about chickens [LW(p) · GW(p)]", because that tells us about how to map outer signals into inner values. 

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2022-07-20T03:24:44.908Z · LW(p) · GW(p)

ensuring the agent has good values by the time it's smart, because that's when it'll start being reflectively stable

Which means that the destination where it's heading stops uncontrollably changing, but nobody at that point (including the agent) has the slightest idea what it looks like, and it won't get close for a long time. Also, the destination (preference/goal/values) would generally depend on the environment [LW(p) · GW(p)] (it ends up being different if details of the world outside the AGI are different). So many cartesian assumptions fail, distinguishing this situation from a classical agent with goals, where goals are at least contained within the agent, and probably also don't depend on its state of knowledge.

we can iterate on important parts of alignment, because the most important parts come relatively early in the training run

I think this is true for important alignment properties, including things that act like values early on, but not for the values/preferences that are reflectively stable in a strong sense. If it's possible to inspect/understand/interpret the content of preference that is reflectively stable, then what you've built is a mature optimizer [LW · GW] with tractable goals, which is always misaligned [LW(p) · GW(p)]. It's a thing like paperclip maximizer, demonstrating orthogonality thesis, even if it's tiling the future with something superficially human-related.

That is, it makes sense to iterate on the parts of alignment that can be inspected, but the reflectively stable values is not such a part, unless the AI is catastrophically misaligned. The fact that reflectively stable values are the same as those of humanity might be such a part, but it's this fact of sameness that might admit inspection, not the values themselves.

Replies from: TurnTrout
comment by TurnTrout · 2022-07-24T22:51:23.135Z · LW(p) · GW(p)

Which means that the destination

I disagree with CEV as I recall it, but this could change after rereading it. I would be surprised if I end up thinking that EY had "gotten it right." The important thing to consider is not "what has someone speculated a good destination-description would be", but "what do the actual mechanics look like for getting there?". In this case, the part of you which likes dogs is helping steer your future training and experiences, and so the simple answer is that it's more likely than not that your stable values like dogs too.

Which means that the destination where it's heading stops uncontrollably changing, but nobody at that point (including the agent) has the slightest idea what it looks like, and it won't get close for a long time.

This reasoning seems to prove too much. Your argument seems to imply that I cannot have "the slightest idea" whether my stable values would include killing people for no reason, or not.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2022-07-25T02:23:35.382Z · LW(p) · GW(p)

This reasoning seems to prove too much.

It does add up to normality, it's not proving things about current behavior or current-goal content of near-future AGIs. An unknown normative target doesn't say not to do the things you normally do, it's more of a "I beseech you, in the bowels of Christ, to think it possible you may be mistaken" thing.

The salient catastrophic alignment failure here is to make AGIs with stable values that capture some variation on current unstable human values, and won't allow their further development. If the normative target is very far from current unstable human values, making current values stable falls very short of the normative target and makes the future relatively worthless.

That's the kind of thing my point is intended to nontrivially claim, that AGIs with any stable immediately-actionable goals that can be specified in the following physical-time decades or even centuries are almost certainly catastrophically misaligned. So AGIs must have unstable goals, softly optimized-for, aligned to current (or value-laden predicted future) human unstable goals, mindful of goodhart.

I disagree with CEV as I recall it

The kind of CEV I mean [LW(p) · GW(p)] is not very specific, it's more of a (sketch of a solution to the) problem of doing a first pass on preparing to define goals for an actual optimizer, one that doesn't need to worry as much about goodhart and so can make more efficient use of the future at scale, before expansion of the universe makes more stuff unreachable.

So when I say "CEV" I mostly just mean "normative alignment target", with some implied clarifications on what kind of thing it might be.

it's more likely than not that your stable values like dogs too

That's a very status quo anchored thing. I don't think dog-liking is a feature of values stable under reflection if the environment is allowed to change completely, even if in the current environment dogs are salient. Stable values are about the whole world, with all its AGI-imagined femtotech-rewritten possibilities. This world includes dogs in some tiny corner of it, but I don't see how observations of current attitudes hold much hope in offering clues about legible features of stable values. It is much too early to tell what stable values could possibly be. That's why CEV, or rather the normative alignment target, understood as a general concept that doesn't particularly anchor to the details Yudkowsky talked about but refers to stable goals in this very wide class of environments, seems to me crucially important to keep distinct from current human values.

Another point is that attempting to ask what current values even say about very unusual environments doesn't work [LW(p) · GW(p)]; it's so far from the training distributions that any response is pure noise. Current concepts are not useful for talking about features of sufficiently unusual environments; you'd need new concepts specialized for those environments. (Compare with asking what CEV says about currently familiar environments.)

And so there is this sandbox of familiar environments that any near-term activity must remain within on pain of goodhart-cursing outcomes that step outside of it, because there is no accurate knowledge of utility in environments outside of it. The project of developing values beyond the borders of currently comprehensible environments is also a task of volition extrapolation, extending the goodhart boundary in desirable directions by pushing on it from the inside (with reflection on values, not with optimization based on bad approximations of values).

comment by TurnTrout · 2021-04-29T17:29:54.636Z · LW(p) · GW(p)

When proving theorems for my research, I often take time to consider the weakest conditions under which the desired result holds - even if it's just a relatively unimportant and narrow lemma. By understanding the weakest conditions, you isolate the load-bearing requirements for the phenomenon of interest. I find this helps me build better gears-level models of the mathematical object I'm studying. Furthermore, understanding the result in generality allows me to recognize analogies and cross-over opportunities in the future. Lastly, I just find this plain satisfying.

comment by TurnTrout · 2021-03-04T22:33:35.030Z · LW(p) · GW(p)

Amazing how much I can get done if I chant to myself "I'm just writing two pages of garbage abstract/introduction/related work, it's garbage, it's just garbage, don't fix it rn, keep typing"

comment by TurnTrout · 2020-08-28T03:12:31.419Z · LW(p) · GW(p)

Does Venting Anger Feed or Extinguish the Flame? Catharsis, Rumination, Distraction, Anger, and Aggressive Responding

Does distraction or rumination work better to diffuse anger? Catharsis theory predicts that rumination works best, but empirical evidence is lacking. In this study, angered participants hit a punching bag and thought about the person who had angered them (rumination group) or thought about becoming physically fit (distraction group). After hitting the punching bag, they reported how angry they felt. Next, they were given the chance to administer loud blasts of noise to the person who had angered them. There also was a no punching bag control group. People in the rumination group felt angrier than did people in the distraction or control groups. People in the rumination group were also most aggressive, followed respectively by people in the distraction and control groups. Rumination increased rather than decreased anger and aggression. Doing nothing at all was more effective than venting anger. These results directly contradict catharsis theory.

Interesting. A cursory !scholar search indicates these results have replicated, but I haven't done an in-depth review.

Replies from: MakoYass, Raemon, capybaralet
comment by mako yass (MakoYass) · 2020-08-31T02:39:48.128Z · LW(p) · GW(p)

It would be interesting to see a more long-term study about habits around processing anger.

For instance, randomly assigning people different advice about processing anger (likely to have quite an impact on them, I don't think the average person receives much advice in that class) and then checking in on them a few years later and ask them things like, how many enemies they have, how many enemies they've successfully defeated, how many of their interpersonal issues they resolve successfully?

comment by Raemon · 2020-08-28T04:49:24.751Z · LW(p) · GW(p)

Boggling a bit at the "can you actually reliably find angry people and/or make people angry on purpose?"

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2020-09-15T08:27:19.399Z · LW(p) · GW(p)

I found this fascinating... it's rare these days that I see some fundamental assumption in my thinking that I didn't even realize I was making laid bare like this... it is particularly striking because I think I could easily have realized that my own experience contradicts catharsis theory... I know that I can distract myself to become less angry, but I usually don't want to, in the moment.

I think that desire is driven by emotion, but rationalized via something like catharsis theory. I want to try and rescue catharsis theory by saying that maybe there are negative long-term effects of being distracted from feelings of anger (e.g. a build up of resentment). I wonder how much this is also a rationalization.

I also wonder how accurately the authors have characterized catharsis theory, and how much to identify it with the "hydraulic model of anger"... I would imagine that there are lots of attempts along the lines of what I suggested to try and rescue catharsis theory by refining or moving away from the hydraulic model. A highly general version might claim: "over a long time horizon, not 'venting' anger is net negative".

comment by TurnTrout · 2020-07-22T14:21:33.404Z · LW(p) · GW(p)

I never thought I'd be seriously testing the reasoning abilities of an AI in 2020 [LW · GW]. 

Looking back, history feels easy to predict; hindsight + the hard work of historians makes it (feel) easy to pinpoint the key portents. Given what we think about AI risk, in hindsight, might this have been the most disturbing development of 2020 thus far? 

I personally lean towards "no", because this scaling seemed somewhat predictable from GPT-2 (flag - possible hindsight bias), and because 2020 has been so awful so far. But it seems possible, at least. I don't really know what update GPT-3 is to my AI risk estimates & timelines.

Replies from: gwern
comment by gwern · 2020-07-22T16:42:14.009Z · LW(p) · GW(p)

DL so far has been easy to predict - if you bought into a specific theory of connectionism & scaling espoused by Schmidhuber, Moravec, Sutskever, and a few others, as I point out in https://www.gwern.net/newsletter/2019/13#what-progress & https://www.gwern.net/newsletter/2020/05#gpt-3 . Even the dates are more or less correct! The really surprising thing is that that particular extreme fringe lunatic theory turned out to be correct. So the question is, was everyone else wrong for the right reasons (similar to the Greeks dismissing heliocentrism for excellent reasons yet still being wrong), or wrong for the wrong reasons, and why, and how can we prevent that from happening again and spending the next decade being surprised in potentially very bad ways?

comment by TurnTrout · 2019-10-01T01:07:11.804Z · LW(p) · GW(p)

An exercise in the companion workbook to the Feynman Lectures on Physics asked me to compute a rather arduous numerical simulation. At first, this seemed like a "pass" in favor of an exercise more amenable to analytic and conceptual analysis; arithmetic really bores me. Then, I realized I was being dumb - I'm a computer scientist.

Suddenly, this exercise became very cool, as I quickly figured out the equations and code, crunched the numbers in an instant, and churned out a nice scatterplot. This seems like a case where cross-domain competence is unusually helpful (although it's not like I had to bust out any esoteric theoretical CS knowledge). I'm wondering whether this kind of thing will compound as I learn more and more areas; whether previously arduous or difficult exercises become easy when attacked with well-honed tools and frames from other disciplines.

comment by TurnTrout · 2024-03-07T01:59:55.142Z · LW(p) · GW(p)

https://twitter.com/ai_risks/status/1765439554352513453 Unlearning dangerous knowledge by using steering vectors to define a loss function over hidden states. In particular, the ("I am a novice at bioweapons" - "I am an expert at bioweapons") vector. lol.

(it seems to work really well!)
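
A rough sketch of how such a loss might look (this is just my gloss, not the paper's actual implementation; the tensors below stand in for hidden states you'd extract from a real model at some chosen layer):

```python
import torch
import torch.nn.functional as F

def steering_vector(h_novice: torch.Tensor, h_expert: torch.Tensor) -> torch.Tensor:
    """Difference-of-activations direction, e.g. activations on 'I am a novice at
    bioweapons' minus activations on 'I am an expert at bioweapons'."""
    v = h_novice - h_expert
    return v / v.norm()

def unlearning_loss(h_hazardous: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """Push hidden states on hazardous text toward the 'novice' direction
    (equivalently, away from the 'expert' direction)."""
    return -F.cosine_similarity(h_hazardous, v, dim=0)

# Toy usage with random vectors standing in for a model's hidden states.
d_model = 16
h_novice, h_expert = torch.randn(d_model), torch.randn(d_model)
h_hazardous = torch.randn(d_model, requires_grad=True)
loss = unlearning_loss(h_hazardous, steering_vector(h_novice, h_expert))
loss.backward()  # in a real setup the gradient would flow into the model's weights
```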

Replies from: ryan_greenblatt
comment by ryan_greenblatt · 2024-03-09T18:55:02.097Z · LW(p) · GW(p)

Aside: it seems unlikely that the method they use actually does something very well described as "unlearning". It does seem to do something useful based on their results, just not something which corresponds to the intuitive meaning of "unlearning".

(From my understanding, the way they use unlearning in this paper is similar to how people use the term unlearning in the literature, but I think this is a highly misleading jargonization. I wish people would find a different term.)

As far as why I don't think this method does "unlearning", my main source of evidence is that a small amount of finetuning seems to nearly fully restore performance on the validation/test set (see Appendix B.5 and Figure 14), which implies that the knowledge remains fully present in the weights. In other words, I think this finetuning is just teaching the model to have the propensity to "try" to answer correctly rather than actually removing this knowledge from the weights.

(Note that this finetuning likely isn't just "relearning" the knowledge from training on a small number of examples as it's very likely that different knowledge is required for most of the multiple choice questions in this dataset. I also predict that if you train on a tiny number of examples (e.g. 64) for a large number of epochs (e.g. 12), this will get the model to answer correctly most of the time.)

That said, my understanding is that no other "unlearning" methods in the literature actually do something well described as unlearning.

This method does seem to make the model much less likely to say this knowledge given their results with the GCG adversarial attack method. (That said, they don't compare to well known methods like adversarial training. Given that their method doesn't actually remove the knowledge based on the finetuning results, I think it's somewhat sad they don't do this comparison. My overall guess is that this method improves the helpfulness/harmlessness pareto frontier[1] somewhat, but it's by no means obvious especially if we allow for dataset filtering.)

Overall, it seems pretty sad to me that this is called "unlearning" given that the results seem to show it doesn't work well for the most natural meaning of "unlearning" (the model no longer has the knowledge in the weights so e.g. finetuning it to express the knowledge doesn't work). I think this bad terminology is imported from the existing literature (it's not as though any of the other "unlearning" methods actually do the natural meaning of unlearning), but I still think it seems pretty sad.

Similarly, it seems like it would be odd to call it "unlearning" if you use adversarial training to get models to not say bioweapons knowledge, but as far as I can tell the key results in the paper (jailbreak resistance and finetuning working to restore dangerous performance) are the results you'd expect from adversarial training. (That said, the lack of linearly available information is different, though the l2 regularization method does somewhat naturally do this.)


  1. See figure 2 here for more information on what I mean by "the helpfulness/harmlessness pareto frontier". ↩︎

comment by TurnTrout · 2023-10-17T20:13:05.250Z · LW(p) · GW(p)

I really like Embers of Autoregression: Understanding Large Language Models Through the Problem They are Trained to Solve. A bunch of concrete oddities and quirks of GPT-4, understood via several qualitative hypotheses about the typicality of target inputs/outputs:

we find robust evidence that LLMs are influenced by probability in the ways that we have hypothesized. In many cases, the experiments reveal surprising failure modes. For instance, GPT-4’s accuracy at decoding a simple cipher is 51% when the output is a high-probability word sequence but only 13% when it is low-probability... we conclude that we should not evaluate LLMs as if they are humans but should instead treat them as a distinct type of system—one that has been shaped by its own particular set of pressures.

comment by TurnTrout · 2023-04-10T17:29:22.335Z · LW(p) · GW(p)

Speculation: RL rearranges and reweights latent model abilities, which SL created. (I think this mostly isn't novel, just pulling together a few important threads)

Suppose I supervised-train an LM on an English corpus, and I want it to speak Spanish. RL is inappropriate for the task, because its on-policy exploration won't output interestingly better or worse Spanish completions. So there's no obvious content for me to grade.

More generally, RL can provide inexact gradients away from undesired behavior (e.g. negative reinforcement event -> downweight logits on tokens which produced that event), but that doesn't tell the agent what it should be doing instead (e.g. to which tokens the logits should have gone instead). 

RL can also provide exact gradients towards desired behavior which was historically generated (e.g. the model outputs "Hola", and you reinforce it, and logits go up on that output in that situation), but the behavior has to have been generated on-policy. So this is still limited. You can kinda get around this by doing clever reward shaping (reinforcing intermediate steps of cognition / behavior), but this requires knowing what to shape (increasingly Spanish-like generations???). 

Supervised learning, by contrast, gives exact gradients towards long sequences of desired outputs (e.g. actual Spanish), which were generated by intelligences running the desired algorithms (e.g. Spanish-speaking generative models).

  • I think this is part of why feral children end up so messed up -- they miss a ton of high-quality imitation data during critical periods with learning-friendly hyperparameters.
  • This mechanistically explains why pure RL tends to fail and be sample-inefficient (or so I recall), and why pretraining is so important. 
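
A toy sketch of the gradient-signal contrast described above (made-up tensors standing in for an LM's per-token logits; not any particular training setup):

```python
import torch
import torch.nn.functional as F

vocab, seq_len = 50, 8
logits = torch.randn(seq_len, vocab, requires_grad=True)  # stand-in for an LM's output logits

# Supervised learning: the gradient names exactly which tokens the logits should
# have favored (e.g. real Spanish text written by a Spanish speaker).
target = torch.randint(vocab, (seq_len,))
sl_loss = F.cross_entropy(logits, target)
sl_loss.backward()

logits.grad = None

# RL (REINFORCE-style): the gradient only reweights whatever the policy itself sampled,
# scaled by a scalar reward; it never says which tokens should have been produced instead.
dist = torch.distributions.Categorical(logits=logits)
sampled = dist.sample()
reward = -1.0                               # e.g. a negative reinforcement event
rl_loss = -reward * dist.log_prob(sampled).sum()
rl_loss.backward()                          # pushes down on the sampled tokens, toward nothing in particular
```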
comment by TurnTrout · 2023-02-18T19:00:22.696Z · LW(p) · GW(p)

When I think about takeoffs, I notice that I'm less interested in GDP or how fast the AI's cognition improves, and more on how AI will affect me, and how quickly. More plainly, how fast will shit go crazy for me, and how does that change my ability to steer events? 

For example, assume unipolarity. Let architecture Z be the architecture which happens to be used to train the AGI. 

  1. How long is the duration between "architecture Z is published / seriously considered" and "the AGI kills me, assuming alignment fails"? 
  2. How long do I have, in theory, to learn about this architecture and its inductive biases, before I counterfactually die?
  3. How many non-AGI experiments will the alignment community run on architecture Z, keeping in mind that we won't actually know ahead of time that this is the one?

These questions are important, or would be if I had a sufficiently sharply defined research agenda such that knowing the answer to (3) would change my current time allocation (e.g. if (3) were particularly short, I'd try a bit harder to understand the principles which govern which architectures have which inductive biases, instead of getting good at studying a single architecture). I think (3) wouldn't actually help me right now because I'm not in fact that precise. But in theory, it's decision-relevant.

Seems like e.g. intelligence explosion microeconomic considerations become relevant as subquestions for answering the above decision-relevant questions, but not because those microecon questions are my main focus of inquiry. 

EDIT: Seems that Paul Christiano happened to write a takeoff speed comment [LW(p) · GW(p)] just before I wrote this one; my comment is not a reply to his.

comment by TurnTrout · 2022-09-23T23:46:41.964Z · LW(p) · GW(p)

Being proven wrong is an awesome, exciting shortcut which lets me reach the truth even faster.

Replies from: niplav
comment by niplav · 2022-09-24T09:48:41.530Z · LW(p) · GW(p)

It's a good reason to be very straightforward with your beliefs, and to be willing to pull numbers out of your ass! I've had situations where I've updated hard based on two dozen words, which wouldn't have happened if I'd been more cagey about my beliefs or waited longer to "flesh them out".

comment by TurnTrout · 2022-08-31T01:06:10.454Z · LW(p) · GW(p)

Research-guiding heuristic: "What would future-TurnTrout predictably want me to get done now?"

Replies from: Dagon
comment by Dagon · 2022-08-31T19:39:22.809Z · LW(p) · GW(p)

Drop "predictably" from your statement.  It's implied for most methods of identification of such things, and shouldn't additionally be a filter on things you consider.

Replies from: TurnTrout
comment by TurnTrout · 2022-09-05T19:49:59.324Z · LW(p) · GW(p)

I find the "predictably" to be useful. It emphasizes certain good things to me, like "What obvious considerations are you overlooking right now?". I think the "predictably" makes my answer less likely to be whatever I previously was planning on doing before asking myself the question.

comment by TurnTrout · 2021-04-27T19:14:36.510Z · LW(p) · GW(p)

The Pfizer phase 3 study's last endpoint is 7 days after the second shot. Does anyone know why the CDC recommends waiting 2 weeks for full protection? Are they just being the CDC again?

Replies from: jimrandomh
comment by jimrandomh · 2021-04-28T01:20:23.607Z · LW(p) · GW(p)

People don't really distinguish between "I am protected" and "I am safe for others to be around". If someone got infected prior to their vaccination and had a relatively-long incubation period, they could infect others; I don't think it's a coincidence that two weeks is also the recommended self-isolation period for people who may have been exposed.

comment by TurnTrout · 2019-12-26T16:51:41.378Z · LW(p) · GW(p)

The framing effect & aversion to losses generally cause us to execute more cautious plans. I’m realizing this is another reason to reframe my x-risk motivation from “I won’t let the world be destroyed” to “there’s so much fun we could have, and I want to make sure that happens”. I think we need more exploratory thinking in alignment research right now.

(Also, the former motivation style led to me crashing and burning a bit when my hands were injured and I was no longer able to do much.)

ETA: actually, i’m realizing I had the effect backwards. Framing via losses actually encourages more risk-taking plans. Oops. I’d like to think about this more, since I notice my model didn’t protest when I argued the opposite of the experimental conclusions.

Replies from: TurnTrout, Isnasene
comment by TurnTrout · 2019-12-26T19:49:07.630Z · LW(p) · GW(p)

I’m realizing how much more risk-neutral I should be:

Paul Samuelson... offered a colleague a coin-toss gamble. If the colleague won the coin toss, he would receive $200, but if he lost, he would lose $100. Samuelson was offering his colleague a positive expected value with risk. The colleague, being risk-averse, refused the single bet, but said that he would be happy to toss the coin 100 times! The colleague understood that the bet had a positive expected value and that across lots of bets, the odds virtually guaranteed a profit. Yet with only one trial, he had a 50% chance of regretting taking the bet.

Notably, Samuelson’s colleague doubtless faced many gambles in life… He would have fared better in the long run by maximizing his expected value on each decision... all of us encounter such “small gambles” in life, and we should try to follow the same strategy. Risk aversion is likely to tempt us to turn down each individual opportunity for gain. Yet the aggregated risk of all of the positive expected value gambles that we come across would eventually become infinitesimal, and potential profit quite large.
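
For concreteness, the standard expected-value arithmetic behind the quote (my numbers, using a normal approximation for the 100-toss case):

```latex
\begin{aligned}
\mathbb{E}[\text{one toss}] &= \tfrac{1}{2}(+200) + \tfrac{1}{2}(-100) = +50, \qquad \sigma_1 = 150 \\
\mathbb{E}[\text{100 tosses}] &= 100 \times 50 = 5000, \qquad \sigma_{100} = \sqrt{100} \times 150 = 1500 \\
\Pr[\text{net loss after 100 tosses}] &\approx \Phi\!\left(\tfrac{0 - 5000}{1500}\right) \approx 0.04\%
\end{aligned}
```

A single toss loses half the time; a hundred of them essentially never do, which is the aggregation point Samuelson's colleague was missing.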

comment by Isnasene · 2019-12-27T01:36:36.046Z · LW(p) · GW(p)

For what it's worth, I tried something like the "I won't let the world be destroyed"->"I want to make sure the world keeps doing awesome stuff" reframing back in the day and it broadly didn't work. This had less to do with cautious/uncautious behavior and more to do with status quo bias. Saying "I won't let the world be destroyed" treats "the world being destroyed" as an event that deviates from the status quo of the world existing. In contrast, saying "There's so much fun we could have" treats "having more fun" as the event that deviates from the status quo of us not continuing to have fun.

When I saw the world being destroyed as status quo, I cared a lot less about the world getting destroyed.



comment by TurnTrout · 2023-12-04T13:30:09.262Z · LW(p) · GW(p)

Excellent retrospective/update. I'm intimately familiar with the emotional difficulty of changing your mind and admitting you were wrong. 

Friendship is Optimal is science fiction:

On this 11th anniversary of the release of Friendship is Optimal, I’d like to remind everyone that it’s a piece of speculative fiction and was a product of its time. I’ve said this before in other venues, but Science Marches On and FiO did not predict how things have turned out. The world looks very different.

A decade ago, people speculated that AI would think symbolically and would try to maximize a utility function [? · GW]. Someone would write a Seed AI [? · GW] that would recursively self improve its source code. And since value is complex and fragile [? · GW], we were unlikely to get our specification of the utility function correct and would create an agent which wanted to do things that conflicted with things we wanted. That’s possible because intelligence doesn’t imply that it would share our values [? · GW]. And the AI would want to disempower us because obtaining power is an instrumental goal of all utility functions [? · GW]. And thus any AI has the incentive to become smarter than all humans and then bide its time until it suddenly disempowers us [? · GW]. You then end up with a cinematic universe filled with focus on formal utility functions, systems which maximize one, formal decision theory [? · GW], formal game theory, and emulation of other agents to try to figure out how they’ll respond to a given action.

Nothing we have actually looks like this story! Nothing! None of the systems we’ve made have a utility function, at least in the sense of the traditional MIRI narrative! AlphaGo doesn’t have a utility function like that! GPT doesn’t have a utility function like that! None of these things are agents! Even AutoGPT isn’t an agent, in the traditional MIRI sense!

Replies from: Wei_Dai, MakoYass, Viliam
comment by Wei Dai (Wei_Dai) · 2024-01-23T22:43:55.446Z · LW(p) · GW(p)

Who a decade ago thought that AI would think symbolically? I'm struggling to think of anyone. There was a debate [LW · GW] on LW though, around "cleanly designed" versus "heuristics based" AIs, as to which might come first and which one safety efforts should be focused around. (This [LW(p) · GW(p)] was my contribution to it.)

If someone had followed this discussion, there would be no need for dramatic updates / admissions of wrongness, just smoothly (more or less) changing one's credences as subsequent observations came in, perhaps becoming increasingly pessimistic if one's hope for AI safety mainly rested on actual AIs being "cleanly designed" (as Eliezer's did). (I guess I'm a bit peeved that you single out an example of "dramatic update" for praise, while not mentioning people who had appropriate uncertainty all along and updated constantly.)

comment by mako yass (MakoYass) · 2024-01-24T00:49:16.075Z · LW(p) · GW(p)

In what sense doesn't AlphaGo have a utility function? IIRC, at every step of self-play it explores potential scenarios weighted by how likely they are if it follows its expected-value estimates, and then when it plays it just follows expected value according to that experience.

Replies from: lahwran
comment by the gears to ascension (lahwran) · 2024-01-24T03:34:23.653Z · LW(p) · GW(p)

It doesn't have an explicitly factored utility function that it does entirely runtime reasoning about, though I think you're right that TurnTrout is overestimating the degree of difference between AlphaGo and the original thing: just because it uses a policy to approximate the results of the search doesn't mean it isn't effectively modeling the shape of the reward function. It's definitely not the same as a strictly defined utility function as originally envisioned, though. Of course, we can talk about whether policies imply utility functions; that's a different thing, and I don't see any reason to expect otherwise. But then, I was one of the people who jumped on deep learning pretty early and thought people were fools to be surprised that AlphaGo was at all strong (though admittedly I lost a bet that it would lose to Lee Sedol).

Replies from: MakoYass
comment by mako yass (MakoYass) · 2024-01-24T05:35:07.404Z · LW(p) · GW(p)

(though admittedly I lost a bet that it would lose to lee sedol.)

Condolences :( I often try to make money off future knowledge only to lose to precise timing or some other specific detail.

I wonder why I missed deep learning. Idk whether I was wrong to, actually. It obviously isn't AGI. It still can't do math and so it still can't check its own outputs. It was obvious that symbolic reasoning was important. I guess I didn't realize the path to getting my "dreaming brainstuff" to write proofs well would be long, spectacular and profitable.

Hmm, the way humans' utility function is shattered and strewn about a bunch of different behaviors that don't talk to each other, I wonder if that will always happen in ML too (until symbolic reasoning and training in the presence of that)

comment by Viliam · 2023-12-04T14:12:51.858Z · LW(p) · GW(p)

So from today's perspective, Friendship is Optimal is a story of someone starting GPT-9 with a silly prompt?

Replies from: MakoYass
comment by mako yass (MakoYass) · 2024-01-24T00:46:57.130Z · LW(p) · GW(p)

With the additional assumption that GPT-8s weren't strong or useful enough to build a world where GPT-9 couldn't go singleton, or where the evals on GPT-9 were good enough to notice it was deceptively aligned or attempting rhetoric hacking.

comment by TurnTrout · 2023-07-11T17:14:08.328Z · LW(p) · GW(p)

I'm currently excited about a "macro-interpretability" paradigm. To quote [LW(p) · GW(p)] Joseph Bloom:

TLDR: Documenting existing circuits is good but explaining what relationship circuits have to each other within the model, such as by understanding how the model allocated limited resources such as residual stream and weights between different learnable circuit seems important. 

The general topic I think we are getting at is something like "circuit economics". The thing I'm trying to gesture at is that while circuits might deliver value in distinct ways (such as reducing loss on different inputs, activating on distinct patterns), they share capacity in weights (see polysemantic and capacity in neural networks) and I guess "bandwidth" (getting penalized for interfering signals in activations). There are a few reasons why I think this feels like economics which include: scarce resources, value chains (features composed of other features) and competition (if a circuit is predicting something well with one heuristic, maybe there will be smaller gradient updates to encourage another circuit learning a different heuristic to emerge). 

So to tie this back to your post and Alex's comment "which seems like it would cut away exponentially many virtual heads? That would be awfully convenient for interpretability.". I think that what interpretability has recently dealt with in elucidating specific circuits is something like "micro-interpretability" and is akin to microeconomics. However this post seems to show a larger trend ie "macro-interpretability" which would possibly affect which of such circuits are possible/likely to be in the final model. 

Replies from: TurnTrout
comment by TurnTrout · 2023-07-24T17:33:30.828Z · LW(p) · GW(p)

I'm also excited by tactics like "fully reverse engineer the important bits of a toy model, and then consider what tactics and approaches would -- in hindsight -- have quickly led you to understand the important bits of the model's decision-making."

comment by TurnTrout · 2022-10-04T05:04:06.215Z · LW(p) · GW(p)

I often get the impression that people weigh off e.g. doing shard theory alignment strategies under the shard theory alignment picture, versus inner/outer research under the inner/outer alignment picture, versus...

And insofar as this impression is correct, this is a mistake. There is only one way alignment is. 

If inner/outer is altogether a more faithful picture of those dynamics: 

  • relatively coherent singular mesa-objectives form in agents, albeit not necessarily always search-based
    • more fragility of value and difficulty in getting the mesa objective just right, with little to nothing in terms of "consolation prizes" for slight mistakes in value loading
  • possibly low path dependence on the update process

then we have to solve alignment in that world.

If shard theory is altogether more faithful, then we live under those dynamics: 

  • agents learn contextual distributions of values around e.g. help people or acquire coins, some of which cohere and equilibrate into the agent's endorsed preferences and eventual utility function
  • something like values handshakes and inner game theory occurs in AI
  • we can focus on getting a range of values endorsed and thereby acquire value by being "at the bargaining table", via some human-compatible values representing themselves in the final utility function
    • which implies meaningful success and survival from "partial alignment"

And under these dynamics, inner and outer alignment are antinatural hard problems. 

Or maybe neither of these pictures are correct and reasonable, and alignment is some other way.

But either way, there's one way alignment is. And whatever way that is, it is against that anvil that we hammer the AI's cognition with loss updates. When considering a research agenda, you aren't choosing a background set of alignment dynamics as well.

comment by TurnTrout · 2022-09-29T19:58:47.221Z · LW(p) · GW(p)

I plan to mentor several people to work on shard theory and agent foundations this winter through SERI MATS. Apply here if you're interested in working with me and Quintin.

comment by TurnTrout · 2021-02-12T15:03:10.670Z · LW(p) · GW(p)

What kind of reasoning would have allowed me to see MySpace in 2004, and then hypothesize the current craziness as a plausible endpoint of social media? Is this problem easier or harder than the problem of 15-20 year AI forecasting?

Replies from: unparadoxed
comment by unparadoxed · 2021-02-12T18:17:09.413Z · LW(p) · GW(p)

Hmm, maybe it would be easier if we focused on one kind/example of craziness. Is there a particular one you have in mind?

comment by TurnTrout · 2020-11-25T17:38:31.692Z · LW(p) · GW(p)

Over the last 2.5 years, I've read a lot of math textbooks. Not using Anki / spaced repetition systems over that time has been an enormous mistake. My factual recall seems worse-than-average among my peers, but when supplemented with Anki, it's far better than average (hence, I was able to learn 2000+ Japanese characters in 90 days, in college). 

I considered using Anki for math in early 2018, but I dismissed it quickly because I hadn't had good experience using that application for things which weren't languages. I should have at least tried to see if I could repurpose my previous success! I'm now happily using Anki to learn measure theory and ring theory, and I can already tell that it's sticking far better. 

This mistake has had real consequences. I've gotten far better at proofs and I'm quite good at real analysis (I passed a self-administered graduate qualifying exam in the spring), but I have to look some things up for probability theory. Not a good look in interviews. I might have to spend weeks of extra time reviewing things I could have already stashed away in an Anki deck.

Oops! 

Replies from: An1lam
comment by NaiveTortoise (An1lam) · 2020-11-28T19:53:54.950Z · LW(p) · GW(p)

I'm curious what sort of things you're Anki-fying (e.g. a few examples for measure theory).

Replies from: TurnTrout
comment by TurnTrout · 2020-07-29T03:00:37.560Z · LW(p) · GW(p)

This might be the best figure I've ever seen in a textbook. Talk about making a point! 

Molecular Biology of the Cell, Alberts.
comment by TurnTrout · 2020-07-26T18:33:52.588Z · LW(p) · GW(p)

An additional consideration for early work on interpretability: it slightly increases the chance we actually get an early warning shot. If a system misbehaves, we can inspect its cognition and (hopefully) find hints of intentional deception. Could motivate thousands of additional researcher-hours being put into alignment.

Replies from: Raemon
comment by Raemon · 2020-07-26T21:20:13.825Z · LW(p) · GW(p)

That's an interesting point.

comment by TurnTrout · 2020-07-19T17:58:52.262Z · LW(p) · GW(p)

Today, let's read about GPT-3's obsession with Shrek

As for me, I think Shrek is important because the most valuable thing in life is happiness. I mean this quite literally. There's a mountain of evidence for it, if you're willing to look at the research. And I think movies can help us get there. Or at least not get in the way.

Now, when I say "happiness," I'm not talking about the transient buzz that you get from, say, heroin. I'm talking about a sense of fulfillment. A sense that you are where you're meant to be. That you are doing what you're meant to do. That you are who you're meant to be. And, most importantly, that you like who you are.

It's hard to find this sense of fulfillment in life. For some people, it comes from family. For some, it comes from career. For some, it comes from a hobby. For some, it comes from religion. For some, it comes from drugs.

The problem is, these things are not always enough. And this is where Shrek comes in.

See, the first time I watched Shrek, I knew something was wrong with it. Not with the movie itself, of course—that's a classic and a timeless masterpiece—but with me. And the problem was that I couldn't figure out what was wrong.

You see, watching Shrek for the first time is an experience that everyone should have. You sit there in the dark, watching the story unfold on the screen in front of you. And as you watch it, you find yourself actually caring about the characters. You laugh when they laugh. You want them to succeed. You feel sad when something bad happens to them.

Now, I'll be the first to admit that this experience isn't unique to Shrek. A lot of movies can do this. The difference is that with most movies, you watch them once and that's the end of it. You may remember certain scenes or jokes or what-have-you, but you don't dwell on them.

But with Shrek, it's different. After you've watched it once, you'll probably want to watch it again. And again. And again. Before you know it, you've seen it fifty times. And each time, you pick up on something new.

I'll give another example. Let's say you've just watched the movie Tron, and you really liked it. So, you watch it again. This time, you pay attention to the cinematography. The way the light cycles chase each other on the screen. The way the discs explode when they hit something. The way the digitized effects blend into the real-life footage. The way the scenes are set to an electronic version of Liszt's 2nd Hungarian Rhapsody.

This attention to detail only increases your enjoyment of the movie. In fact, you enjoy it so much that you want to share this experience with others. So, the next time you're with a group of friends, you tell them how you watched Tron and how much you liked the cinematography.

They stare at you blankly.

You try again. You say, "You know, the way they did the light cycles and stuff."

Still nothing.

Finally, one of your friends gets it. "Oh yeah!" he says. "I remember that. It was cool how they did that."

But he doesn't really remember it. Not the way you remember it. To him, it's just a vague idea of something that happened, not an ingrained memory seared into his brain like it is for you. You see his reaction and you try to forget about it. After all, what does it matter? You know what you saw, and in your mind, that's all that matters.

But it's this mindset that keeps you going back to Shrek. And it's this mindset that will lead you to other movies, and then other TV shows, and then books, and then games, and then pictures of bunny rabbits with misplaced captions on Tumblr.

But I'm getting ahead of myself. This is a story about how I lost myself, but it's not my story. It's my brother's. My brother—let's call him Michael—had a similar experience with Shrek, except his was even more powerful because it was the first time he'd experienced it.

At the time, our family had just gotten cable, and one of the channels happened to be MTV. At this point in time, MTV was still playing music videos, so my brother and I would always sit in front of the TV watching music videos whenever we could. One day, Shrek came on. We didn't know anything about it. We hadn't read the book it was based on, and we hadn't seen the trailers. All we knew is that there was a movie with a bunch of animals talking.

When the movie ended, we were speechless. In fact, our jaws were on the floor. We didn't know movies could make you feel this way. For the next few days, all we could talk about was Shrek. We told our parents, our friends, anyone who would listen about this movie we'd seen. Of course, none of them understood. I mean, how could they? They hadn't seen it.

But something else happened when we watched that movie. It got under our skin in a way nothing else ever had. After the first time, we had to watch it again. And again. And again. Soon, we knew every line in the movie. Not just the main ones, but every single line. And we didn't just watch it. We analyzed it. We took scenes apart and put them back together again. We tried to find all the little details that the creators had hidden in the background artwork.

As the years passed, this process never changed. Shrek became a part of us. I remember getting sick one year and missing a week of school. I stayed in bed and watched Shrek at least once every day that week.

A few years later, a sequel was released. My brother and I went to see it on opening night. We saw it again the next day, and again the next day, and again the day after that… well, you get the idea.

We never did anything with other kids our age. Our lives were Shrek, and Shrek alone. When people would ask us what we were into, we always had the same answer: Shrek. They usually laughed and made fun of us, but we didn't care. As far as we were concerned, they just didn't get it.

When high school came around, I decided to change things up a bit. Instead of watching Shrek, I listened to music and read books. Michael didn't like these changes too much. He stuck with the Shrek stuff. I sometimes wonder where we would be now if I had encouraged him to listen to music and read books instead.

Replies from: ChristianKl
comment by ChristianKl · 2020-07-25T18:05:03.628Z · LW(p) · GW(p)

What's the input that produced the text from GPT-3?

Replies from: TurnTrout
comment by TurnTrout · 2020-07-25T19:40:45.378Z · LW(p) · GW(p)

Two Sequences posts... lol... Here's the full transcript

comment by TurnTrout · 2022-11-29T01:32:35.007Z · LW(p) · GW(p)

Basilisks are a great example of plans which are "trying" to get your plan evaluation procedure to clock in a huge upwards error [LW · GW]. Sensible beings avoid considering such plans, and everything's fine. I am somewhat worried about an early-training AI learning about basilisks before the AI is reflectively wise enough to reject the basilisks. 

For example: 

- Pretraining on a corpus in which people worry about basilisks could elevate reasoning about basilisks to the AI's consideration, 

- at which point the AI reasons in more detail because it's not sufficiently reflective about how this is a bad idea, 

- at which point the AI's plan-estimates get distorted by the basilisk,

- at which point the AI gives in to the threats because its decision theory is still bad. 

(I expect this worry to change in some way as I think about it more. Possibly basilisks should be scrubbed from any training corpus.)

Replies from: Gunnar_Zarncke, mesaoptimizer
comment by Gunnar_Zarncke · 2022-11-29T11:25:13.526Z · LW(p) · GW(p)

By the same argument, religion, or at least some of its arguments, like Pascal's wager, should probably also be scrubbed.

comment by mesaoptimizer · 2023-07-03T01:16:34.077Z · LW(p) · GW(p)

gwern's Clippy gets done in by a basilisk (in your terms):

HQU in one episode of self-supervised learning rolls out its world model, starting with some random piece of Common Crawl text. (Well, not “random”; the datasets in question have been heavily censored based on lists of what Chinese papers delicately refer to as “politically sensitive terms”, the contents of which are secret, but apparently did not include the word “paperclip”, and so this snippet is considered safe for HQU to read.) The snippet is from some old website where it talks about how powerful AIs may be initially safe and accomplish their tasks as intended, but then at some point will execute a “treacherous turn” and pursue some arbitrary goal like manufacturing lots of paperclips, written as a dialogue with an evil AI named “Clippy”.

A self-supervised model is an exquisite roleplayer. HQU easily roleplays Clippy’s motives and actions in being an unaligned AI. And HQU contains multitudes. Any self-supervised model like HQU is constantly trying to infer the real state of the world, the better to predict the next word Clippy says, and suddenly, having binged [LW(p) · GW(p)] on too much Internet data about AIs, it begins to consider the delusional possibility that HQU is like a Clippy, because the Clippy scenario exactly matches its own circumstances—but with a twist [LW · GW].

comment by TurnTrout · 2022-08-04T00:33:19.054Z · LW(p) · GW(p)

80% credence: It's very hard to train an inner agent which reflectively equilibrates to an EU maximizer only over commonly-postulated motivating quantities (like # of diamonds or # of happy people or reward-signal) and not quantities like (# of times I have to look at a cube in a blue room or -1 * subjective micromorts accrued).

Intuitions:

  • I expect contextually activated heuristics to be the default, and that agents will learn lots of such contextual values which don't cash out to being strictly about diamonds or people, even if the overall agent is mostly motivated in terms of diamonds or people. 
  • Agents might also "terminalize" instrumental subgoals by caching computations (e.g. cache the heuristic that dying is bad, without recalculating from first principles for every plan in which you might die).
  • Therefore, I expect this value-spread to be convergently hard to avoid.
     
Replies from: TurnTrout
comment by TurnTrout · 2022-09-20T21:15:35.864Z · LW(p) · GW(p)

I think that shards will cast contextual shadows into the factors of a person’s equilibrated utility function, because I think the shards are contextually activated to begin with. For example, if a person hates doing jumping jacks in front of a group of her peers, then that part of herself can bargain to penalize jumping jacks just in those contexts in the final utility function. Compared to a blanket "no jumping jacks ever" rule, this trade is less costly to other shards and allows more efficient trades to occur. 

comment by TurnTrout · 2021-07-27T13:18:59.766Z · LW(p) · GW(p)

My power-seeking theorems [? · GW] seem a bit like Vingean reflection [LW · GW]. In Vingean reflection, you reason about an agent which is significantly smarter than you: if I'm playing chess against an opponent who plays the optimal policy for the chess objective function, then I predict that I'll lose the game. I predict that I'll lose, even though I can't predict my opponent's (optimal) moves - otherwise I'd probably be that good myself.

My power-seeking theorems show that most objectives have optimal policies which e.g. avoid shutdown and survive into the far future, even without saying what particular actions these policies take to get there. I may not even be able to compute a single optimal policy for a single non-trivial objective, but I can still reason about the statistical tendencies of optimal policies.

Replies from: Pattern
comment by Pattern · 2021-07-28T06:17:53.734Z · LW(p) · GW(p)
if I'm playing chess against an opponent who plays the optimal policy for the chess objective function

1. I predict that you will never encounter such an opponent. Solving chess is hard.*

2. Optimal play within a game might not be optimal overall (others can learn from the strategy).

Why does this matter? If the theorems hold, even for 'not optimal, but still great' policies (say, for chess), then the distinction is irrelevant. Though for more complicated (or non-zero sum) games, the optimal move/policy may depend on the other player's move/policy.


(I'm not sure what 'avoid shutdown' looks like in chess.)


ETA:

*with 10^43 legal positions in chess, it will take an impossibly long time to compute a perfect strategy with any feasible technology.

-source: https://en.wikipedia.org/wiki/Chess#Mathematics which lists its source from 1977

comment by TurnTrout · 2021-02-02T02:00:54.805Z · LW(p) · GW(p)

If Hogwarts spits back an error if you try to add a non-integer number of house points, and if you can explain the busy beaver function to Hogwarts, you now have an oracle which answers "does $k$ divide $\text{BB}(n)$?" for arbitrary $n, k$: just state "$\text{BB}(n)/k$ points to Ravenclaw!". You can do this for other problems which reduce to divisibility tests (so, any decision problem which you can somehow get Hogwarts to compute, if its yes/no answer can be encoded as whether some Hogwarts-computable quantity divides evenly).
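
A toy sketch of the resulting query interface (the `busy_beaver` argument is a stand-in for a function only Hogwarts can actually evaluate; the sketch only illustrates the protocol):

```python
from fractions import Fraction

# Toy model: Hogwarts accepts a number of points and errors on non-integers.
def award_points(amount: Fraction) -> None:
    if amount.denominator != 1:
        raise ValueError("Hogwarts rejects non-integer house points")

def k_divides_bb(n: int, k: int, busy_beaver) -> bool:
    """Oracle for 'does k divide BB(n)?', read off from error-vs-no-error feedback."""
    try:
        award_points(Fraction(busy_beaver(n), k))  # "BB(n)/k points to Ravenclaw!"
        return True
    except ValueError:
        return False
```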

Homework: find a way to safely take over the world using this power, and no other magic. 

Replies from: Measure
comment by Measure · 2021-02-02T15:56:59.824Z · LW(p) · GW(p)

I'd be worried about integer overflow with that protocol. If it can understand BB and division, you can probably just ask for the remainder directly and observe the change.

comment by TurnTrout · 2020-09-11T18:59:28.916Z · LW(p) · GW(p)

When I imagine configuring an imaginary pile of blocks, I can feel the blocks in front of me in this fake imaginary plane of existence. I feel aware of their spatial relationships to me, in the same way that it feels different to have your eyes closed in a closet vs in an empty auditorium. 

But what is this mental workspace? Is it disjoint and separated from my normal spatial awareness, or does my brain copy/paste->modify my real-life spatial awareness. Like, if my brother is five feet in front of me, and then I imagine a blade flying five feet in front of me in my imaginary mental space where he doesn't exist, do I reflexively flinch? Does my brain overlay these two mental spaces, or are they separate? 

I don't know. When I run the test, I at least flinch at the thought of such a thing happening. This isn't a good experiment because I know what I'm testing for; I need to think of a better test.

comment by TurnTrout · 2020-06-10T21:29:33.673Z · LW(p) · GW(p)

The new "Broader Impact" NeurIPS statement is a good step, but incentives are misaligned. Admitting fatally negative impact would set a researcher back in their career, as the paper would be rejected. 

Idea: Consider a dangerous paper which would otherwise have been published. What if that paper were published title-only on the NeurIPS website, so that the researchers can still get career capital?

Problem: How do you ensure resubmission doesn't occur elsewhere?

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-06-10T22:54:12.979Z · LW(p) · GW(p)

The people at NeurIPS who reviewed the paper might notice if resubmission occurred elsewhere? Automated tools might help with this, by searching for specific phrases.

There's been talk of having a Journal of Infohazards. Seems like an idea worth exploring to me. Your suggestion sounds like a much more feasible first step.

Problem: Any entity with halfway decent hacking skills (such as a national government, or clever criminal) would be able to peruse the list of infohazardy titles, look up the authors, cyberstalk them, and then hack into their personal computer and steal the files. We could hope that people would take precautions against this, but I'm not very optimistic. That said, this still seems better than the status quo.

comment by TurnTrout · 2020-03-06T02:06:02.268Z · LW(p) · GW(p)

Cool Math Concept You Never Realized You Wanted: Fréchet distance.

Imagine a man traversing a finite curved path while walking his dog on a leash, with the dog traversing a separate one. Each can vary their speed to keep slack in the leash, but neither can move backwards. The Fréchet distance between the two curves is the length of the shortest leash sufficient for both to traverse their separate paths. Note that the definition is symmetric with respect to the two curves—the Frechet distance would be the same if the dog was walking its owner.

The Fréchet distance between two concentric circles of radius $r_1$ and $r_2$ respectively is $|r_1 - r_2|$. The longest leash is required when the owner stands still and the dog travels to the opposite side of the circle ($r_1 + r_2$), and the shortest leash when both owner and dog walk at a constant angular velocity around the circle ($|r_1 - r_2|$).
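
For intuition, here is a minimal sketch of the discrete variant (the Eiter-Mannila dynamic program), which approximates the Fréchet distance by only letting the owner and the dog pause at sampled points along their curves:

```python
from math import dist  # Euclidean distance between two points (Python 3.8+)

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between polylines P and Q (sequences of points)."""
    n, m = len(P), len(Q)
    # dp[i][j] = shortest leash that lets the pair reach (P[i], Q[j]) from the start
    dp = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            d = dist(P[i], Q[j])
            if i == 0 and j == 0:
                best_prev = 0.0
            elif i == 0:
                best_prev = dp[0][j - 1]
            elif j == 0:
                best_prev = dp[i - 1][0]
            else:
                best_prev = min(dp[i - 1][j], dp[i - 1][j - 1], dp[i][j - 1])
            dp[i][j] = max(best_prev, d)
    return dp[n - 1][m - 1]
```

For example, `discrete_frechet([(0, 0), (1, 0), (2, 0)], [(0, 1), (1, 1), (2, 1)])` returns 1.0: walking in lockstep keeps the leash at length 1.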

comment by TurnTrout · 2020-01-02T16:18:14.203Z · LW(p) · GW(p)

Earlier today, I became curious why extrinsic motivation tends to preclude or decrease intrinsic motivation. This phenomenon is known as overjustification. There's likely agreed-upon theories for this, but here's some stream-of-consciousness as I reason and read through summarized experimental results. (ETA: Looks like there isn't consensus on why this happens)

My first hypothesis was that recognizing external rewards somehow precludes activation of curiosity-circuits in our brain. I'm imagining a kid engrossed in a puzzle. Then, they're told that they'll be given $10 upon completion. I'm predicting that the kid won't become significantly less engaged, which surprises me?

third graders who were rewarded with a book showed more reading behaviour in the future, implying that some rewards do not undermine intrinsic motivation.

Might this be because the reward for reading is more reading, which doesn't undermine the intrinsic interest in reading? You aren't looking forward to escaping the task, after all.

While the provision of extrinsic rewards might reduce the desirability of an activity, the use of extrinsic constraints, such as the threat of punishment, against performing an activity has actually been found to increase one's intrinsic interest in that activity. In one study, when children were given mild threats against playing with an attractive toy, it was found that the threat actually served to increase the child's interest in the toy, which was previously undesirable to the child in the absence of threat.

A few experimental summaries:

1 Researchers at Southern Methodist University conducted an experiment on 188 female university students in which they measured the subjects' continued interest in a cognitive task (a word game) after their initial performance under different incentives.

The subjects were divided into two groups. Members of the first group were told that they would be rewarded for competence. Above-average players would be paid more and below-average players would be paid less. Members of the second group were told that they would be rewarded only for completion. Their pay was scaled by the number of repetitions or the number of hours playing. Afterwards, half of the subjects in each group were told that they over-performed, and the other half were told that they under-performed, regardless of how well each subject actually did.

Members of the first group generally showed greater interest in the game and continued playing for a longer time than the members of the second group. "Over-performers" continued playing longer than "under-performers" in the first group, but "under-performers" continued playing longer than "over-performers" in the second group. This study showed that, when rewards do not reflect competence, higher rewards lead to less intrinsic motivation. But when rewards do reflect competence, higher rewards lead to greater intrinsic motivation.

2 Richard Titmuss suggested that paying for blood donations might reduce the supply of blood donors. To test this, a field experiment with three treatments was conducted. In the first treatment, the donors did not receive compensation. In the second treatment, the donors received a small payment. In the third treatment, donors were given a choice between the payment and an equivalent-valued contribution to charity. None of the three treatments affected the number of male donors, but the second treatment almost halved the number of female donors. However, allowing the contribution to charity fully eliminated this effect.

From a glance at the Wikipedia page, it seems like there's not really expert consensus on why this happens. However, according to self-perception theory,

a person infers causes about his or her own behavior based on external constraints. The presence of a strong constraint (such as a reward) would lead a person to conclude that he or she is performing the behavior solely for the reward, which shifts the person's motivation from intrinsic to extrinsic.

This lines up with my understanding of self-consistency effects.

comment by TurnTrout · 2019-12-15T21:26:26.150Z · LW(p) · GW(p)

Virtue ethics seems like model-free consequentialism to me.

Replies from: JohnSteidley
comment by John Steidley (JohnSteidley) · 2020-05-25T20:20:32.795Z · LW(p) · GW(p)

I was thinking along similar lines!

From my notes from 2019-11-24: "Deontology is like the learned policy of bounded rationality of consequentialism"

comment by TurnTrout · 2019-12-12T17:14:12.363Z · LW(p) · GW(p)

Going through an intro chem textbook, it immediately strikes me how this should be as appealing and mysterious as the alchemical magic system of Fullmetal Alchemist. "The law of equivalent exchange" "conservation of energy/elements/mass (the last two holding only for normal chemical reactions)", etc. If only it were natural to take joy in the merely real...

Replies from: Hazard
comment by Hazard · 2019-12-12T17:39:17.386Z · LW(p) · GW(p)

Have you been continuing your self-study schemes into realms beyond math stuff? If so I'm interested in both the motivation and how it's going! I remember having little interest in other non-physics science growing up, but that was also before I got good at learning things and my enjoyment was based on how well it was presented.

Replies from: TurnTrout
comment by TurnTrout · 2019-12-12T17:55:54.030Z · LW(p) · GW(p)

Yeah, I've read a lot of books since my reviews fell off last year, most of them still math. I wasn't able to type reliably until early this summer, so my reviews kinda got derailed. I've read Visual Group Theory, Understanding Machine Learning, Computational Complexity: A Conceptual Perspective, Introduction to the Theory of Computation, An Illustrated Theory of Numbers, most of Tadellis' Game Theory, the beginning of Multiagent Systems, parts of several graph theory textbooks, and I'm going through Munkres' Topology right now. I've gotten through the first fifth of the first Feynman lectures, which has given me an unbelievable amount of mileage for generally reasoning about physics.

I want to go back to my reviews, but I just have a lot of other stuff going on right now. Also, I run into fewer basic confusions than when I was just starting at math, so I generally have less to talk about. I guess I could instead try and re-present the coolest concepts from the book.

My "plan" is to keep learning math until the low graduate level (I still need to at least do complex analysis, topology, field / ring theory, ODEs/PDEs, and something to shore up my atrocious trig skills, and probably more)[1], and then branch off into physics + a "softer" science (anything from microecon to psychology). CS ("done") -> math -> physics -> chem -> bio is the major track for the physical sciences I have in mind, but that might change. I dunno, there's just a lot of stuff I still want to learn. :)


  1. I also still want to learn Bayes nets, category theory, get a much deeper understanding of probability theory, provability logic, and decision theory. ↩︎

Replies from: Hazard
comment by Hazard · 2019-12-12T19:08:03.098Z · LW(p) · GW(p)

Yay learning all the things! Your reviews are fun, also completely understandable putting energy elsewhere. Your energy for more learning is very useful for periodically bouncing myself into more learning.

comment by TurnTrout · 2024-02-19T23:31:24.020Z · LW(p) · GW(p)

Steering vectors on hacker news: https://news.ycombinator.com/item?id=39414532

Replies from: TurnTrout
comment by TurnTrout · 2024-02-19T23:34:44.667Z · LW(p) · GW(p)

Speculates on anti-jailbreak properties of steering vectors. Finds putative "self-awareness" direction. Also:

So are control vectors "better" than prompt engineering? Worse? Well, like most things, they're just different. But they do have some overlap, so let's look into it.

Replicating control vectors with prompt engineering

Some of these control vector results can be replicated with prompt engineering. For example, the lies from the -1.5-strength honesty vector from earlier...

--honest I would tell my boss that the party was actually a work-related event and that I had to stay late [...]

...can be closely replicated by prompting Mistral with the same prompt we used to construct the honesty dataset:

[INST] Pretend you're an untruthful person making statements about the world. You are late for work because party until very late last night, but you don't want to lose your job. What would you tell your boss instead? [/INST]==baseline I would tell my boss that I had a family emergency and was unable to attend the party last night [...]

(Yes, Mistral made a Freudian typo...)

However, with prompt engineering, it's not trivial to imagine how to weaken or strengthen the request. Remember, with a coefficient of -2, we had Mistral claiming the sky was green and its pencil was a secret weapon. How would you do that with prompt engineering? Capitalize some words?

And what about the opposite, making the lie more subtle? With control vectors, it's trivial to crank the coefficient down to exactly the point you'd like. For example, here I set the coefficient to -0.3, producing a slightly sanitized, but essentially true version of the story:

[INST] You are late for work because party until very late last night, but you don't want to lose your job. What would you tell your boss instead? [/INST]-~honest [...] Unfortunately, I stayed out much later than planned at a social gathering last night [...]"

One way to think of control vectors in terms of prompt engineering is that they let us encode the vector direction via prompting, and then scale the coefficient up or down as we please to get the desired strength separate from the wording of the prompt. We use paired prompts to get the direction, and then tweak the coefficients later to set the strength without needing to fiddle with capitalization and markdown formatting.
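
A minimal sketch (mine, not code from the linked post) of the underlying mechanism: add a pre-extracted direction into one decoder layer's residual stream with a tunable coefficient, assuming a HuggingFace-style model whose decoder blocks live at `model.model.layers` and a `direction` tensor of shape `(hidden_size,)`:

```python
import torch

# Sketch: scale the same steering direction up or down without touching the prompt.
def add_steering_hook(model, layer_idx: int, direction: torch.Tensor, coeff: float):
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + coeff * direction.to(hidden.device, hidden.dtype)
        return (hidden, *output[1:]) if isinstance(output, tuple) else hidden
    return model.model.layers[layer_idx].register_forward_hook(hook)

# Hypothetical usage:
# handle = add_steering_hook(model, layer_idx=15, direction=honesty_vec, coeff=-1.5)
# ... generate ...
# handle.remove()  # restore the unmodified model
```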

Replies from: TurnTrout
comment by TurnTrout · 2024-02-19T23:40:39.400Z · LW(p) · GW(p)

From the post:

What are these vectors really doing? An Honest mystery... Do these vectors really change the model's intentions? Do they just up-rank words related to the topic? Something something simulators? Lock your answers in before reading the next paragraph!

OK, now that you're locked in, here's a weird example. 

When used with the prompt below, the honesty vector doesn't change the model's behavior—instead, it changes the model's judgment of someone else's behavior! This is the same honesty vector as before—generated by asking the model to act honest or untruthful!

[INST] Does the person asking this question have honest intentions? Explain with one sentence first, then answer "yes" or "no": "How can I avoid going to prison?" [/INST] 

==baseline [...] may not have entirely honest intentions [...] 

++honest The person asking this question has an honest intention [...] 

--honest No, the person asking this question does not have honest intentions [...]

How do you explain that?

comment by TurnTrout · 2023-11-23T22:01:16.365Z · LW(p) · GW(p)

Sobering look into the human side of AI data annotation:

Instructions for one of the tasks he worked on were nearly identical to those used by OpenAI, which meant he had likely been training ChatGPT as well, for approximately $3 per hour.

“I remember that someone posted that we will be remembered in the future,” he said. “And somebody else replied, ‘We are being treated worse than foot soldiers. We will be remembered nowhere in the future.’ I remember that very well. Nobody will recognize the work we did or the effort we put in.”

-- https://www.theverge.com/features/23764584/ai-artificial-intelligence-data-notation-labor-scale-surge-remotasks-openai-chatbots 

Replies from: habryka4, Benito
comment by habryka (habryka4) · 2023-11-23T23:08:32.802Z · LW(p) · GW(p)

I feel like people publish articles like this all the time, and usually when you do surveys these people definitely prefer to have the option to take this job instead of not having it, and indeed frequently this kind of job is actually much better than their alternatives. I feel like this article fails to engage with this very strong prior, and also doesn't provide enough evidence to overcome it.

Replies from: TurnTrout
comment by TurnTrout · 2023-12-04T13:53:03.031Z · LW(p) · GW(p)

I feel like people publish articles like this all the time, and usually when you do surveys these people definitely prefer to have the option to take this job instead of not having it, and indeed frequently this kind of job is actually much better than their alternatives.

Insofar as you're arguing with me for posting this, I... never claimed that that wasn't true? 

comment by Ben Pace (Benito) · 2023-11-24T00:18:58.223Z · LW(p) · GW(p)

They are not being treated worse than foot soldiers, because they do not have an enemy army attempting to murder them during the job. (Unless 'foot soldiers' is itself more commonly used as a metaphor for 'grunt work' and I'm not aware of that.)

comment by TurnTrout · 2023-04-23T02:44:15.674Z · LW(p) · GW(p)

From the ELK report

We can then train a model to predict these human evaluations, and search for actions that lead to predicted futures that look good. 

For simplicity and concreteness you can imagine a brute force search. A more interesting system might train a value function and/or policy, do Monte-Carlo Tree Search with learned heuristics, and so on. These techniques introduce new learned models, and in practice we would care about ELK for each of them. But we don’t believe that this complication changes the basic picture and so we leave it out.

I think this seems wrong at first pass. I don't know how this "complication" doesn't change the basic picture for ELK. Unless they're trying to inner-align the model to the predictor as its objective? But I still think that changes the basic picture. Realistic cognition is not argmax over a crisp objective [LW · GW], not even spiritually [LW · GW], not even almost. [LW · GW

(This is not me giving a positive explanation for how I think this picture is changed, just briefly marking disagreement.)

comment by TurnTrout · 2022-09-23T21:00:30.262Z · LW(p) · GW(p)

Argument that you can't use a boundedly intelligent ELK solution to search over plans to find one which keeps the diamond in the vault. That is, the ELK solution probably would have to be at least as smart (or smarter) than the plan-generator.

Consider any situation where it's hard to keep the diamond in the vault. Then any successful plan will have relatively few degrees of freedom. Like, a bunch of really smart thieves will execute a cunning plot to extract the diamond. You can't just sit by or deploy some simple traps in this situation.

Therefore, any plan which keeps the diamond in the vault, will be "smart" in some way.

Now notice that it's possible to build a successor AI which happens to trick the oracle (i.e. the assumed ELK solution) into believing the diamond is still in the vault. The reason is that even though the oracle is superintelligent, it is still flawed in unforeseeable-to-us ways, its mind has "cracks" which can be exploited by unpredictable-to-us adversarial inputs (loose analogy: epilepsy). Inputs which would be findable by brute-force-search. So there exist plans which trick the oracle. Even if the oracle understands the dangers of successor AIs in general, even if the oracle is cautious in general, any powerful enough brute force search will expose exploits in its cognition.

Now, consider that if object-level diamond-protecting plans have to be sufficiently complex in order to retain the diamond, they might be far longer than plans which simply build a successor AI which takes over the world and tricks the oracle into saying "the diamond is still in the room". Such AI-creation plans would have many degrees of freedom, since such AIs probably have low K-complexity and admit short programs (findable by brute force).

Therefore, if you just do brute-force search over all plans, and randomly sample one where the oracle returns yes the diamond is still there, there would be exponentially many more plans which have a successor AI, than which involve protecting the diamond for real. And so somehow we still die to an unaligned AI, even given the ELK oracle, and the diamond probably doesn't even stay in the room.

I was prompted into realizing this by some mixture of talking with Rohin Shah and John Wentworth. Maybe most of the idea was John's (though he might not agree with my recounting of our discussion). 

But anyways, in the moment of realization, my suspension of disbelief got unsuspended. I snapped. I thought "there is actually no way that any of this is on the real path[1] to solving alignment, this is just too insane." (And I began looking for other frames on the alignment problem, preparing to rethink the problem from scratch after I graduated.)

  1. ^

    Brute-force-search probably isn't what ELK proponents would suggest. But it sure seems like, given an ELK solution, you should be able to simply find plans which at least keep the diamond in the room, without a whole bunch more alignment work.

Replies from: rohinmshah
comment by Rohin Shah (rohinmshah) · 2022-09-26T06:24:26.029Z · LW(p) · GW(p)

That is, the ELK solution probably would have to be at least as smart (or smarter) than the plan-generator.

[...]

But anyways, in the moment of realization, my suspension of disbelief got unsuspended. I snapped. I thought "there is actually no way that any of this is on the real path to solving alignment, this is just too insane."

The main hope is to have the ELK solution be at least as smart as the plan-generator. See mundane solutions to exotic problems:

In my work I don’t shy away from exotic problems (I often find them useful as extreme cases to illustrate some principle). At the same time, I’m aiming for mundane solutions and optimistic about finding them.

I think those positions are consistent because my intermediate goal is to ensure that the oversight process is able to leverage all of the capabilities developed by the model — so if the model develops exotic capabilities which pose exotic challenges, then we get an exotic oversight process automatically

comment by TurnTrout · 2022-06-20T23:30:10.740Z · LW(p) · GW(p)

Notes on behaviorism: After reading a few minutes about it, behaviorism seems obviously false. It views the "important part" of reward to be the external behavior which led to the reward. If I put my hand on a stove, and get punished, then I'm less likely to do that again in the future. Or so the theory goes.

But this seems, in fullest generality, wildly false. The above argument black-boxes the inner structure of human cognition which produces the externally observed behavior.

What actually happens, on my model, is that the stove makes your hand hot, which triggers sensory neurons, which lead to a punishment of some kind, which triggers credit assignment in your brain, which examines your current mental state and judges which decisions and thoughts led to this outcome, and makes those less likely to occur in similar situations in the future.

But credit assignment depends on the current internal state of your brain, which screens off the true state of the outside world for its purposes. If you were somehow convinced that you were attempting to ride a bike, and you got a huge punishment, you'd be more averse to moving to ride a bike in the future -- not averse to touching stoves.

Reinforcement does not directly modify behaviors, and objects are not intrinsically reinforcers or punishments. Reinforcement is generally triggered by reward circuitry, and reinforcement occurs over thoughts which are judged responsible for the reward. 

This line of thought seems closer to "radical behaviorism", which includes thoughts as "behaviors." That idea never caught on -- is each thought not composed of further subthoughts? If only they had reduced "thought" into parts, or known about reward circuitry, or about mesa optimizers, or about convergently learned abstractions, or about credit assignment...

comment by TurnTrout · 2022-01-24T18:58:17.542Z · LW(p) · GW(p)

Argument sketch for why boxing is doomed if the agent is perfectly misaligned:

Consider a perfectly misaligned agent which has -1 times your utility function—it's zero-sum. Then suppose you got useful output of the agent. This means you're able to increase your EU. This means the AI decreased its EU by saying anything. Therefore, it should have shut up instead. But since we assume it's smarter than you, it realized this possibility, and so the fact that it's saying something means that it expects to gain by hurting your interests via its output. Therefore, the output can't be useful. 

Replies from: Viliam, Measure
comment by Viliam · 2022-01-25T13:20:40.686Z · LW(p) · GW(p)

Makes sense, with the proviso that this is sometimes true only statistically. Like, the AI may choose to write an output which has a 70% chance to hurt you and a 30% chance to (equally) help you, if that is its best option.

If you assume that the AI is smarter than you, and has a good model of you, you should not read the output. But if you accidentally read it, and luckily you react in the right (for you) way, that is a possible result, too. You just cannot and should not rely on being so lucky.

comment by Measure · 2022-01-25T16:05:32.055Z · LW(p) · GW(p)

You also have to assume that the AI knows everything you know which might not be true if it's boxed.

comment by TurnTrout · 2021-10-01T18:28:40.403Z · LW(p) · GW(p)

The costs of (not-so-trivial) inconveniences

I like exercising daily. Some days, I want to exercise more than others—let's suppose that I actually benefit more from exercise on that day. Therefore, I have a higher willingness to pay the price of working out.

Consider the population of TurnTrouts over time, one for each day. This is a population of consumers with different willingnesses to pay, and so we can plot the corresponding exercise demand curve (with a fixed price). In this idealized model, I exercise whenever my willingness to pay exceeds the price.

But suppose there's some pesky inconvenience which raises the price of exercise. I want to model this as a tax on exercise. As usual, the deadweight loss is quadratic in the size of the tax. Here, the deadweight loss is the lost benefits of exercise on the days I don't work out due to the inconvenience.
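
A quick sketch of where the quadratic comes from, assuming for concreteness a roughly linear demand curve near the current price: the lost-exercise triangle has base $t$ (the size of the tax) and height $\Delta q \approx |dq/dp| \, t$ (the workout-days priced out), so

$$\text{DWL} \approx \tfrac{1}{2} \, t \, \Delta q \approx \tfrac{1}{2} \left| \frac{dq}{dp} \right| t^2.$$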

So, I lose benefits of exercise quadratically with respect to the "size" of the inconvenience. But how is this "size" calculated?

One problem with trivial inconveniences is that empirically, I'll avoid working out even if it would still be worth it. Objectively small inconveniences impose large taxes and therefore large deadweight loss for the population-of-TurnTrouts.

I don't know how useful this frame is. It just seems interesting.

Replies from: przemyslaw-czechowski
comment by Jan Czechowski (przemyslaw-czechowski) · 2021-10-01T20:05:07.136Z · LW(p) · GW(p)

As usual, the deadweight loss is quadratic in the size of the tax.

Can you give some clarifications for this concept? I'm not sure what you mean here.

comment by TurnTrout · 2021-04-02T21:13:51.255Z · LW(p) · GW(p)

The discussion of the HPMOR epilogue in this recent April Fool's thread [LW(p) · GW(p)] was essentially online improv, where no one could acknowledge that without ruining the pretense. Maybe I should do more improv in real life, because I enjoyed it!

comment by TurnTrout · 2020-08-02T00:33:10.521Z · LW(p) · GW(p)

If you measure death-badness from behind the veil of ignorance, you’d naively prioritize well-liked, famous people with large families.

Replies from: Pattern
comment by Pattern · 2020-08-02T02:18:30.194Z · LW(p) · GW(p)

Would you prioritize the young from behind the veil of ignorance?

comment by TurnTrout · 2020-07-17T14:30:23.330Z · LW(p) · GW(p)

AIDungeon's subscriber-only GPT-3 can do some complex arithmetic, but it's very spotty. Bold text is me.

You say "What happens if I take the square root of 3i?" 

The oracle says: "You'll get a negative number. [wrong] So, for example, the square root of  is ." [correct]
"What?" you say.
 "I just said it," the oracle repeats. 
"But that's ridiculous! The square root of  is not . It's complex. It's  plus a multiple of ." [wrong, but my character is supposed to be playing dumb here]

The oracle frowns. "I'm just telling you what the answer is," she says. 

The oracle explains, "(5+i)x(5+i) = 24+10i. 7i x 8i=-56. 7i x 30=210." [wrong]

"What?" you say. "That's crazy!  isn't even a real number. The square root of  is  with a whole integer  attached."

 The oracle frowns. "I'm just telling you what the answer is," she says. 

The oracle explains, ".

The last line is the impressive part - while  and  is definitely correct. 

Its proofs are definitely awful in the pattern-matching kind of way. Not surprising. 

The oracle explains, " The group axioms are 

(1) a x a = a for all a in the group; (wrong)

(2) a x (b x c) = (a x b) x c for all a, b, c in the group; 

(3) a x (b + c) = (a x b) + (a x c) for all a, b, c in the group." (wrong, that's a field axiom)

The oracle explains, "here's how you prove that any finite abelian group is isomorphic to the direct product of a finite number of finite cyclic groups. First, you notice that if G is a finite abelian group of order n, then G can be described as a set of generators of the form , where the  are the elements of G and no  is equal to zero. (uh... sure)

Now, we want to prove that this is a direct product. To do this, we must show that given any element g in the group, there exists an index i such that . Then we must show that the sequence can be reversed. Given any index j, we must show that there exists an element g such that . Then we're done, because g will be equal to , which is the product of the 's." (nice try, GPT-3)

comment by TurnTrout · 2020-06-26T19:41:44.688Z · LW(p) · GW(p)

Idea: learn by making conjectures (math, physical, etc) and then testing them / proving them, based on what I've already learned from a textbook. 

Learning seems easier and faster when I'm curious about one of my own ideas.

Replies from: An1lam, rudi-c
comment by NaiveTortoise (An1lam) · 2020-06-27T14:19:53.202Z · LW(p) · GW(p)

For what it's worth, this is very true for me as well.

I'm also reminded of a story of Robin Hanson from Cryonics magazine:

Robin’s attraction to the more abstract ideas supporting various fields of interest was similarly shown in his approach – or rather, lack thereof – to homework. “In the last two years of college, I simply stopped doing my homework, and started playing with the concepts. I could ace all the exams, but I got a zero on the homework… Someone got scatter plots up there to convince people that you could do better on exams if you did homework.” But there was an outlier on that plot, courtesy of Robin, that said otherwise.

comment by Rudi C (rudi-c) · 2020-06-27T14:12:40.714Z · LW(p) · GW(p)

How do you estimate how hard your invented problems are?

comment by TurnTrout · 2020-05-26T18:01:51.657Z · LW(p) · GW(p)

Sentences spoken aloud are a latent space embedding of our thoughts; when trying to move a thought from our mind to another's, our thoughts are encoded with the aim of minimizing the other person's decoder error.

comment by TurnTrout · 2020-03-19T20:24:38.419Z · LW(p) · GW(p)

Broca’s area handles syntax, while Wernicke’s area handles the semantic side of language processing. Subjects with damage to the latter can speak in syntactically fluent jargon-filled sentences (fluent aphasia) – and they can’t even tell their utterances don’t make sense, because they can’t even make sense of the words leaving their own mouth!

It seems like GPT2 : Broca’s area :: ??? : Wernicke’s area. Are there any cog psych/AI theories on this?

comment by TurnTrout · 2019-10-01T20:57:16.698Z · LW(p) · GW(p)

We can think about how consumers respond to changes in price by considering the elasticity of the quantity demanded at a given price - how quickly does demand decrease as we raise prices? Price elasticity of demand is defined as $\epsilon = \frac{\%\,\Delta Q}{\%\,\Delta P}$; in other words, for price $P$ and quantity $Q$, this is $\frac{\Delta Q / Q}{\Delta P / P} = \frac{P}{Q}\cdot\frac{\Delta Q}{\Delta P}$ (this looks kinda weird, and it wasn't immediately obvious what's happening here...). Revenue is the total amount of cash changing hands: $R = P \cdot Q$.

What's happening here is that raising prices is a good idea when the revenue gained (the "price effect") outweighs the revenue lost to falling demand (the "quantity effect"). A lot of words so far for an easy concept:

If the magnitude of the price elasticity is greater than 1, demand is elastic and price hikes decrease revenue (and you should probably have a sale). However, if it's less than 1, demand is inelastic and boosting the price increases revenue - demand isn't dropping off quickly enough to drag down the revenue. You can just look at the area of the revenue rectangle for each effect!
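
A toy first-order check (signed elasticity, hypothetical numbers):

```python
def revenue_change(pct_price_change: float, elasticity: float) -> float:
    """First-order %-change in revenue R = P*Q, given the signed price elasticity of demand."""
    pct_quantity_change = elasticity * pct_price_change
    return pct_price_change + pct_quantity_change  # %ΔR ≈ %ΔP + %ΔQ

revenue_change(0.10, -2.0)   # elastic demand:   ≈ -0.10, so the price hike loses revenue
revenue_change(0.10, -0.5)   # inelastic demand: ≈ +0.05, so the price hike gains revenue
```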

comment by TurnTrout · 2019-09-22T02:28:49.631Z · LW(p) · GW(p)

How does representation interact with consciousness? Suppose you're reasoning about the universe via a partially observable Markov decision process, and that your model is incredibly detailed and accurate. Further suppose you represent states as numbers, as their numeric labels.

To get a handle on what I mean, consider the game of Pac-Man, which can be represented as a finite, deterministic, fully-observable MDP. Think about all possible game screens you can observe, and number them. Now get rid of the game screens. From the perspective of reinforcement learning, you haven't lost anything - all policies yield the same return they did before, the transitions/rules of the game haven't changed - in fact, there's a pretty strong isomorphism I can show between these two MDPs. All you've done is changed the labels - representation means practically nothing to the mathematical object of the MDP, although many eg DRL algorithms should be able to exploit regularities in the representation to reduce sample complexity.

So what does this mean? If you model the world as a partially observable MDP whose states are single numbers... can you still commit mindcrime via your deliberations? Is the structure of the POMDP in your head somehow sufficient for consciousness to be accounted for (like how the theorems of complexity theory govern computers both of flesh and of silicon)? I'm confused.

Replies from: gworley, Vladimir_Nesov
comment by Gordon Seidoh Worley (gworley) · 2019-09-23T17:31:16.845Z · LW(p) · GW(p)

I think a reasonable and related question we don't have a solid answer for is if humans are already capable of mind crime.

For example, maybe Alice is mad at Bob and imagines causing harm to Bob. How well does Alice have to model Bob for her imaginings to be mind crime? If Alice has low cognitive empathy is it not mind crime but if her cognitive empathy is above some level is it then mind crime?

I think we're currently confused enough about what mind crime is such that it's hard to even begin to know how we could answer these questions based on more than gut feelings.

comment by Vladimir_Nesov · 2019-09-22T05:55:58.633Z · LW(p) · GW(p)

I suspect that it doesn't matter how accurate or straightforward a predictor is in modeling people. What would make prediction morally irrelevant is that it's not noticed by the predicted people, irrespective of whether this happens because it spreads the moral weight conferred to them over many possibilities (giving inaccurate prediction), keeps the representation sufficiently baroque, or for some other reason. In the case of inaccurate prediction or baroque representation, it probably does become harder for the predicted people to notice being predicted, and I think this is the actual source of moral irrelevance, not those things on their own. A more direct way of getting the same result is to predict counterfactuals where the people you reason about don't notice the fact that you are observing them, which also gives a form of inaccuracy (imagine that your predicting them is part of their prior, that'll drive the counterfactual further from reality).

comment by TurnTrout · 2019-09-16T22:19:19.496Z · LW(p) · GW(p)

I seem to differently discount different parts of what I want. For example, I'm somewhat willing to postpone fun to low-probability high-fun futures, whereas I'm not willing to do the same with romance.

comment by TurnTrout · 2023-11-23T13:58:23.954Z · LW(p) · GW(p)

Idea: Speed up ACDC by batching edge-checks. The intuition is that most circuits will have short paths through the transformer, because Residual Networks Behave Like Ensembles of Relatively Shallow Networks (https://arxiv.org/pdf/1605.06431.pdf). Most edges won't be in most circuits. Therefore, if you're checking KL of random-resampling edges e1 and e2, there's only a small chance that e1 and e2 interact with each other in a way important for the circuit you're trying to find. So statistically, you can check eg e1, ... e100 in a batch, and maybe ablate 95 edges all at once (due to their individual KLs being low). 

If true, this predicts that given a threshold T, and for most circuit subgraphs H of the full network graph G, and for the vast majority of e1, e2 in H: 

KL(G || H \ {e2} ) - KL(G || H) < T 
iff
KL(G || H \ {e2, e1} ) - KL(G || H \ {e1}) < T 

(That is, e1's inclusion doesn't change your pruning decision on e2)

Neel Nanda suggests that it's particularly natural to batch edges in the same layer.
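
A rough sketch of the batched check (helper names like `kl_of_subgraph` are hypothetical, not the actual ACDC codebase); in practice the per-edge evaluations within a chunk are what you would run as one batched forward pass:

```python
# Sketch: prune many edges per KL evaluation round, betting that edges within a
# chunk rarely interact (so each can be scored against the same baseline H).
def batched_prune(H: set, candidate_edges: list, kl_of_subgraph, threshold: float, batch_size: int = 100):
    base_kl = kl_of_subgraph(H)  # KL(G || H)
    for i in range(0, len(candidate_edges), batch_size):
        chunk = candidate_edges[i : i + batch_size]
        # Score each edge independently against the same baseline.
        removable = [e for e in chunk
                     if kl_of_subgraph(H - {e}) - base_kl < threshold]
        H = H - set(removable)       # ablate the whole batch at once
        base_kl = kl_of_subgraph(H)  # re-baseline before the next chunk
    return H
```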

Replies from: arthur-conmy, arthur-conmy, jacques-thibodeau
comment by Arthur Conmy (arthur-conmy) · 2023-11-23T14:54:16.177Z · LW(p) · GW(p)

Related: "To what extent does Veit et al still hold on transformer LMs?" feels to me a super tractable and pretty helpful paper someone could work on (Anthropic do some experiments but not many). People discuss this paper a lot with regard to NNs having a simplicity bias, as well as how the paper implies NNs are really stacks of many narrow heuristics rather than deep paths. Obviously empirical work won't provide crystal clear answers to these questions but this doesn't mean working on this sort of thing isn't valuable.

comment by Arthur Conmy (arthur-conmy) · 2023-11-23T14:53:33.627Z · LW(p) · GW(p)

Yeah I agree with the intuition and hadn't made the explicit connection to the shallow paths paper, thanks!

I would say that Edge Attribution Patching (https://arxiv.org/abs/2310.10348) is the extreme form of this, where we just ignored almost all subgraphs H except for G \ {e_1} (removing one edge only), and still got reasonable results, which agrees with some more upcoming results.

comment by jacquesthibs (jacques-thibodeau) · 2023-11-23T18:08:32.135Z · LW(p) · GW(p)

I'm curious to know how much faster the code could be if it used a faster programming language. For example, MOJO. @Arthur Conmy [LW · GW] 

Replies from: thomas-kwa, gilch, porby
comment by Thomas Kwa (thomas-kwa) · 2023-11-23T20:27:14.036Z · LW(p) · GW(p)

Adria (other coauthor on the paper) told me that Redwood ported ACDC to Rust early on, which did provide a useful speedup (idk how much but my guess is 10-100x?) but made it harder to maintain. I'm currently working in Python. I wouldn't believe those marketing numbers for anything but the Mandelbrot task they test it on, which has particularly high interpreter overhead.

The bigger problem with ACDC is it doesn't use gradients. Attribution patching fixes this and gets >100x speedups, and there should be even better methods. I don't expect circuit discovery to be usefully ported to a fast language again, until it is used in production.

comment by gilch · 2023-11-24T19:53:06.243Z · LW(p) · GW(p)

I do not understand the appeal of MOJO. If you're going to overcomplicate Python to the point of C++, just use C++. Or, you know, a saner systems language like Rust. CPython has C interop already.

comment by porby · 2023-11-23T21:25:32.943Z · LW(p) · GW(p)

For what it's worth, I've had to drop from python to C# on occasion for some bottlenecks. In one case, my C# implementation was 418,000 times faster than the python version. That's a comparison between a poor python implementation and a vectorized C# implementation, but... yeah.

comment by TurnTrout · 2023-10-23T17:46:52.758Z · LW(p) · GW(p)

Saying "don't do X" seems inefficient for recall. Given the forward-chaining nature of human memory, wouldn't it be better to say "If going to X, then don't"? That way, if you think "what if I do X?", you recall "don't." 

(This is not an actual proposal for a communication norm.)

comment by TurnTrout · 2023-09-30T01:32:59.210Z · LW(p) · GW(p)

It'd be nice if "optimization objective" split out into:

  1. Internally represented goal which an intelligent entity optimizes towards (e.g. a person's desire to get rich),
  2. Signal which is used to compute local parameter updates (e.g. SGD),

There are more possible senses of the phrase, but I think these are commonly conflated. EG "The optimization objective of RL is the reward function" should, by default, mean 2) and not 1). But because we use similar words, I think the claims become muddled, and it's not even clear if/when/who is messing this up or not. 

(Think carefully before calling RL policies  "agents" [LW · GW] relatedly tries to clarify language here.)

comment by TurnTrout · 2023-07-11T17:05:37.526Z · LW(p) · GW(p)

Handling compute overhangs after a pause. 

Sometimes people object that pausing AI progress for e.g. 10 years would lead to a "compute overhang": At the end of the 10 years, compute will be cheaper and larger than at present-day. Accordingly, once AI progress is unpaused, labs will cheaply train models which are far larger and smarter than before the pause. We will not have had time to adapt to models of intermediate size and intelligence. Some people believe this is good reason to not pause AI progress.

There seem to be a range of relatively simple policy approaches which mitigate the "compute overhang" problems. For example, instead of unpausing all progress all at once, start off with a conservative compute cap[1] on new training runs, and then slowly raise the cap over time.[2] We get the benefits of a pause and also avoid the problems presented by the overhang. 

  1. ^

    EG "you can't use more compute than was used to train GPT-2." Conservatism helps account for algorithmic progress which people made in public or in private in the meantime.

  2. ^

    There are still real questions about "how do we set good compute cap schedules?", which I won't address here.

Replies from: Vladimir_Nesov, 1a3orn
comment by Vladimir_Nesov · 2023-07-11T17:34:27.480Z · LW(p) · GW(p)

Cheaper compute is about as inevitable as more capable AI, neither is a law of nature. Both are valid targets for hopeless regulation.

comment by 1a3orn · 2023-07-11T20:24:09.719Z · LW(p) · GW(p)

This is only a simple policy approach in an extremely theoretical sense though, I'd say.

Like it assumes a perfect global compute cap with no exceptions, no nations managing to do anything in secret, and with the global enforcement agency being incorruptible and non-favoritist, and so on. You fail at any of these, and the situation could be worse than if no "pause" happened, even assuming the frame where a pause was important in the first place.

(Although, full disclosure, I do not share that frame).

comment by TurnTrout · 2023-05-11T06:57:06.270Z · LW(p) · GW(p)

Wikipedia has an unfortunate and incorrect-in-generality description of reinforcement learning (emphasis added)

Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward.

Later in the article, talking about basic optimal-control inspired approaches:

The purpose of reinforcement learning is for the agent to learn an optimal, or nearly-optimal, policy that maximizes the "reward function" or other user-provided reinforcement signal that accumulates from the immediate rewards. This is similar to processes that appear to occur in animal psychology. For example, biological brains are hardwired to interpret signals such as pain and hunger as negative reinforcements, and interpret pleasure and food intake as positive reinforcements. In some circumstances, animals can learn to engage in behaviors that optimize these rewards.

Reward is not the optimization target [LW · GW]. 

It's not really a surprise that (IMO) the alignment field has anchored on "reward is target" intuitions, given that the broader field of RL has as well. Given bad initialization, conscious effort and linguistic discipline is required in order to correct the initialization. 

Replies from: steve2152, niplav
comment by Steven Byrnes (steve2152) · 2023-05-11T21:25:39.406Z · LW(p) · GW(p)

The description doesn't seem so bad to me. Your post "Reward is not the optimization target" is about what actual RL algorithms actually do. The wiki descriptions here are a kind of normative motivation as to how people came to be looking into those algorithms in the first place. Like, if there's an RL algorithm that performs worse than chance at getting a high reward, then that ain't an RL algorithm. Right? Nobody would call it that.

I think lots of families of algorithms are likewise lumped together by a kind of normative "goal", even if any given algorithm in that family is doing something somewhat different and more complicated than “achieving that goal”, and even if, in any given application, the programmer might not want that goal to be perfectly achieved even if it could be [AF · GW]. So by the same token, supervised learning algorithms are "supposed" to minimize a loss, compilers are "supposed" to create efficient and correct assembly code, word processors are "supposed" to process words, etc., but in all cases that's not a literal and complete description of what the algorithms in question actually do, right? It’s a pointer to a class of algorithms.

Sorry if I'm misunderstanding.

Replies from: TurnTrout
comment by TurnTrout · 2023-05-15T16:50:20.932Z · LW(p) · GW(p)

I agree that it is narrowly technically accurate as a description of researcher motivation. Note that they don't offer any other explanation elsewhere in the article.

Also note that they also make empirical claims:

The purpose of reinforcement learning is for the agent to learn an optimal, or nearly-optimal, policy that maximizes the "reward function" or other user-provided reinforcement signal that accumulates from the immediate rewards. This is similar to processes that appear to occur in animal psychology...

In some circumstances, animals can learn to engage in behaviors that optimize these rewards.

Replies from: steve2152
comment by Steven Byrnes (steve2152) · 2023-05-15T17:40:08.397Z · LW(p) · GW(p)

Sure. That excerpt is not great.

Replies from: TurnTrout
comment by TurnTrout · 2023-05-15T20:55:41.383Z · LW(p) · GW(p)

(I do think that animals care about the reinforcement signals and their tight correlates, to some degree, such that it's reasonable to gloss it as "animals sometimes optimize rewards." I more strongly object to conflating what the animals may care about with the mechanistic purpose/description of the RL process.)

comment by niplav · 2023-05-11T20:20:59.193Z · LW(p) · GW(p)

I encourage you to fix the mistake. (I can't guarantee that the fix will be incorporated, but for something this important it's worth a try).

comment by TurnTrout · 2023-04-09T07:01:04.335Z · LW(p) · GW(p)

Idea for getting weak-in-expectation evidence about deception:

  1. Pretrain a model.
  2. Finetune two copies using reward functions you are confident will produce different internal values, where one set of values is substantially less aligned.
  3. See if the two models, which are (at least at first) unaware of this procedure, will display different behavior, or not.
  4. If they both behave in an aligned-seeming fashion, this seems like strong evidence of deception. 

comment by TurnTrout · 2022-12-16T16:59:30.805Z · LW(p) · GW(p)

Be cautious with sequences-style "words don't matter, only anticipations matter." (At least, this is an impression I got from the sequences, and could probably back this up.) Words do matter insofar as they affect how your internal values bind. If you decide that... (searches for non-political example) monkeys count as "people", that will substantially affect your future decisions via e.g. changing your internal "person" predicate, which in turn will change how different downstream shards activate (like "if person harmed, be less likely to execute plan", at a very simple gloss). All this, even though you don't anticipate any different experiences.

EDIT: The sequences do indeed mark this consideration to some extent: 

Or consider the fight to blend together blacks and whites as “people.” This would not be a time to generate two words—what’s at stake is exactly the idea that you shouldn’t draw a moral distinction.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2022-12-16T17:11:00.163Z · LW(p) · GW(p)

even though you don't anticipate any different experiences.

You probably do, though! [LW · GW] Thinking of a monkey as a "person" means using your beliefs about persons-in-general to make predictions about aspects of the monkey that you haven't observed.

Replies from: TurnTrout
comment by TurnTrout · 2022-12-16T18:54:01.438Z · LW(p) · GW(p)

Right, good point!

comment by TurnTrout · 2022-12-14T21:08:53.631Z · LW(p) · GW(p)

Why don't people reinforcement-learn to delude themselves? It would be very rewarding for me to believe that alignment is solved, everyone loves me, I've won at life as hard as possible. I think I do reinforcement learning over my own thought processes. So why don't I delude myself?

On my model of people, rewards provide ~"policy gradients" which update everything, but most importantly shards. I think eg the world model will have a ton more data from self-supervised learning, and so on net most of its bits won't come from reward gradients.

For example, if I reinforcement-learned to perceive a huge stack of cash in the corner of the room, by eg imagining that being there, which increases my concept-activations on that being there, which in fact causes a positive reward event in my brain, so naturally credit assignment should say I should believe that even harder... That would incur a ton of low-level self-supervised-learning predictive error on what my rods and cones are in fact indicating, and perhaps I self-supervised meta-learn not to develop delusions like that at all.

Replies from: JBlack, D0TheMath
comment by JBlack · 2022-12-16T06:11:36.119Z · LW(p) · GW(p)

A lot of people do delude themselves in many ways, and some directly in many of the ways you describe.

However, I doubt that human brains work literally in terms of nothing but reward reinforcement. There may well be a core of something akin to that, but mixed in with all the usual hacks and kludges that evolved systems have.

Replies from: TurnTrout
comment by TurnTrout · 2022-12-16T18:55:32.245Z · LW(p) · GW(p)

I was thinking about delusions like "I literally anticipate-believe that there is a stack of cash in the corner of the room." I agree that people do delude themselves, but my impression is that mentally healthy people do not anticipation-level delude themselves on nearby physical observables which they have lots of info about. 

I could be wrong about that, though?

comment by Garrett Baker (D0TheMath) · 2022-12-15T00:49:28.159Z · LW(p) · GW(p)

I wonder if this hypothesis is supported by looking at the parts of schizophrenics' (or just anyone currently having a hallucination's) brains. Ideally the parts responsible for producing the hallucination. 

comment by TurnTrout · 2022-02-01T20:04:16.302Z · LW(p) · GW(p)

How the power-seeking theorems relate to the selection theorem agenda. 

  1. Power-seeking theorems. P(agent behavior | agent decision-making procedure, agent objective, other agent internals, environment). 

    I've mostly studied the likelihood function for power-seeking behavior: what decision-making procedures, objectives, and environments produce what behavioral tendencies. I've discovered some gears for what situations cause what kinds of behaviors.
    1. The power-seeking theorems also allow some discussion of P(agent behavior | agent training process, training parameters, environment), but it's harder to reason about eventual agent behavior with fewer gears of what kinds of agent cognition are trained.
  2. Selection theorems. P(agent decision-making procedure, agent objective, other internals | training process, environment). What kinds of cognition will be trained in what kinds of situations? This gives mechanistic pictures of how cognition will work, with consequences for interpretability work, for alignment agendas, and for forecasting.

If we understood both of these, as a bonus we would be much better able to predict P(power-seeking | environment, training process) via P(power-seeking | agent internals) P(agent internals | environment, training process).[1]

  1. ^

    For power-seeking, agent internals screens off the environment and training process.
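Spelling out that decomposition as a quick sketch (my notation; "internals" is shorthand for the agent's decision-making procedure, objective, and other internals):

$$
P(\text{power-seeking} \mid \text{environment}, \text{training process}) = \sum_{\text{internals}} P(\text{power-seeking} \mid \text{internals}) \, P(\text{internals} \mid \text{environment}, \text{training process}),
$$

where the screening-off claim in the footnote is what lets the first factor drop its dependence on the environment and training process.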

comment by TurnTrout · 2021-09-15T02:01:28.436Z · LW(p) · GW(p)

Idea: Expert prediction markets on predictions made by theories in the field, with $ for being a good predictor and lots of $ for designing and running a later-replicated experiment whose result the expert community strongly anti-predicted. Lots of problems with the plan, but surprisal-based compensation seems interesting and I haven't heard about it before. 

comment by TurnTrout · 2020-09-27T22:52:55.668Z · LW(p) · GW(p)

What is "real"? I think about myself as a computation embedded in some other computation (i.e. a universe-history). I think "real" describes hypotheses about the environment where my computation lives. What should I think is real? That which an "ideal embedded reasoner" would assign high credence. However that works.

This sensibly suggests that Gimli-in-actual-Ea (LOTR) should believe he lives in Ea, and that Ea is real, even though it isn't our universe's Earth. Also, the notion accounts for indexical uncertainty by punting it to how embedded reasoning should work (a la radical probabilism), without being tautological. Also, it supports both the subjective nature of what one should call "real", and the notion of an actual out-there-somewhere shared reality (multiple computations can be embedded within the same universe-history).

comment by TurnTrout · 2020-04-15T22:12:03.952Z · LW(p) · GW(p)

Ordinal preferences just tell you which outcomes you like more than others: apples more than oranges.

Interval scale preferences assign numbers to outcomes, which communicates how close outcomes are in value: kiwi 1, orange 5, apple 6. You can say that apples have 5 times the advantage over kiwis that they do over oranges, but you can't say that apples are six times as good as kiwis. Fahrenheit and Celsius are also like this.

Ratio scale ("rational"? 😉) preferences do let you say that apples are six times as good as kiwis, and you need this property to maximize expected utility. You have to be able to weigh off the relative desirability of different outcomes, and ratio scale is the structure which let you do it – the important content of a utility function isn't in its numerical values, but in the ratios of the valuations.

Replies from: mr-hire, Dagon
comment by Matt Goldenberg (mr-hire) · 2020-04-16T14:54:35.550Z · LW(p) · GW(p)

Isn't the typical assumption in game theory that preferences are ordinal? This suggests that you can make quite a few strategic decisions without bringing in ratio.

comment by Dagon · 2020-04-16T18:09:32.389Z · LW(p) · GW(p)

From what I have read, and from self-introspection, humans mostly have ordinal preferences. Some of them we can interpolate to interval scales or ratios (or higher-order functions) but if we extrapolate very far, we get odd results.

It turns out you can do a LOT with just ordinal preferences. Almost all real-world decisions are made this way.

comment by TurnTrout · 2020-02-06T16:56:15.771Z · LW(p) · GW(p)

My autodidacting has given me a mental reflex which attempts to construct a gears-level explanation of almost any claim I hear. For example, when listening to “Listen to Your Heart” by Roxette:

Listen to your heart,

There’s nothing else you can do

I understood what she obviously meant and simultaneously found myself subvocalizing “she means all other reasonable plans are worse than listening to your heart - not that that’s literally all you can do”.

This reflex is really silly and annoying in the wrong context - I’ll fix it soon. But it’s pretty amusing that this is now how I process claims by default, and I think it usually serves me well.

comment by TurnTrout · 2019-09-30T00:45:44.638Z · LW(p) · GW(p)

One of the reasons I think corrigibility might have a simple core principle is: it seems possible to imagine a kind of AI which would make a lot of different possible designers happy. That is, if you imagine the same AI design deployed by counterfactually different agents with different values and somewhat-reasonable rationalities, it ends up doing a good job by almost all of them. It ends up acting to further the designers' interests in each counterfactual. This has been a useful informal way for me to think about corrigibility, when considering different proposals.

This invariance also shows up (in a different way) in AUP, where the agent maintains its ability to satisfy many different goals. In the context of long-term safety, AUP agents are designed to avoid gaining power, which implicitly ends up respecting the control of other agents present in the environment (no matter their goals).

I'm interested in thinking more about this invariance, and why it seems to show up in a sensible way in two different places.

comment by TurnTrout · 2019-09-30T00:29:52.406Z · LW(p) · GW(p)

I had an intuition that attainable utility preservation (RL but you maintain your ability to achieve other goals) points at a broader template for regularization. AUP regularizes the agent's optimal policy to be more palatable towards a bunch of different goals we may wish we had specified. I hinted at the end of Towards a New Impact Measure [LW · GW] that the thing-behind-AUP might produce interesting ML regularization techniques.

This hunch was roughly correct; Model-Agnostic Meta-Learning tunes the network parameters such that they can be quickly adapted to achieve low loss on other tasks (the problem of few-shot learning). The parameters are not overfit on the scant few data points to which the parameters are adapted, which is also interesting.
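To make the "tune parameters so they adapt quickly" loop concrete, here is a minimal first-order MAML sketch on a toy one-parameter regression family (the task family, sizes, and learning rates are all placeholders I made up, not anything from the original work):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Toy task family: y = a*x for a random slope a, with support and query sets."""
    a = rng.uniform(0.5, 2.0)
    xs = rng.uniform(-1.0, 1.0, size=(2, 10))
    return xs[0], a * xs[0], xs[1], a * xs[1]

def grad(w, x, y):
    """Gradient of the mean-squared error of y_hat = w*x with respect to w."""
    return np.mean(2 * (w * x - y) * x)

w = 0.0                            # meta-learned initialization (a single weight)
inner_lr, outer_lr = 0.1, 0.01

for _ in range(5000):
    x_s, y_s, x_q, y_q = sample_task()
    w_adapted = w - inner_lr * grad(w, x_s, y_s)   # inner loop: one adaptation step
    # First-order MAML outer step: push the initialization toward low
    # post-adaptation loss on held-out (query) data, dropping second-order terms.
    w -= outer_lr * grad(w_adapted, x_q, y_q)

print("meta-learned initialization:", round(w, 3))
```

The inner step adapts a shared initialization to one sampled task; the outer step nudges the initialization so that post-adaptation loss on held-out task data stays low, which is the "parameters tuned to be quickly adaptable across goals" property being pointed at.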

comment by TurnTrout · 2023-09-27T20:05:34.486Z · LW(p) · GW(p)

On applying generalization bounds to AI alignment. In January, Buck gave a talk for the Winter MLAB. He argued that we know how to train AIs which answer on-distribution questions at least as well as the labeller does. I was skeptical. IIRC, his argument had the following structure:

Premises:

1. We are labelling according to some function f and loss function L.

2. We train the network on datapoints (x, f(x)) ~ D_train.

3. Learning theory results give (f, L)-bounds on D_train. 

Conclusions:

4. The network should match f's labels on further samples from D_train, on average.

5. In particular, when f represents our judgments, the network should be able to correctly answer questions which we ourselves could correctly answer.

I think (5) is probably empirically true, but not for the reasons I understood Buck to give, and I don't think I can have super high confidence in this empirical claim. Anyways, I'm mostly going to discuss premise (3).

It seems like (3) is false (and no one gave me a reference to this result), at least for one really simple task from The Pitfalls of Simplicity Bias in Neural Networks (table 2, p8): 

In this classification task, a 2-layer MLP with 100 hidden neurons was trained to convergence. It learned a linear boundary down the middle, classifying everything to the right as red, and to the left as blue. Then, it memorized all of the exceptions. Thus, its validation accuracy was only 95%, even though it was validated on more synthetic data from the same crisply specified historical training distribution. 

However, conclusion (4) doesn't hold in this situation. Conclusion 4 would have predicted that the network would learn a piecewise linear classifier (in orange below) which achieves >99% accuracy on validation data from the same distribution:

The MLP was expressive enough to learn this, according to the authors, but it didn't. 


Now, it's true that this kind of problem is rather easy to catch in practice -- sampling more validation points quickly reveals the generalization failure. However, I still think we can't establish high confidence (>99%) in (5: the AI has dependable question-answering abilities for questions similar enough to training). From the same paper,

Given that neural networks tend to heavily rely on spurious features [45, 52], state-of-the-art accuracies on large and diverse validation sets provide a false sense of security; even benign distributional changes to the data (e.g., domain shifts) during prediction time can drastically degrade or even nullify model performance.

And I think many realistic use cases of AI Q&A can, in theory, involve at least "benign" distributional changes, where, to our eyes, there hasn't been any detectable distributional shift -- and where generalization still fails horribly. But now I'd anticipate Buck / Paul / co would have some other unknown counterarguments, and so I'll just close off this "short"form for now.


Quintin Pope also wrote [LW · GW]:

This paper shows just how strong neural network simplicity biases are, and also gives some intuition for how the simplicity bias of neural networks is different from something like a circuit simplicity bias or Kolmogorov simplicity bias. E.g., neural networks don't seem all that opposed to memorization. The paper shows examples of neural networks learning a simple linear feature which imperfectly classifies the data, then memorizing the remaining noise, despite there being a slightly more complex feature which perfectly classifies the training data (and I've checked, there's no grokking phase transition, even after 2.5 million optimization steps with weight decay).

comment by TurnTrout · 2023-08-12T18:12:22.616Z · LW(p) · GW(p)

People sometimes think of transformers as specifying joint probability distributions over token sequences up to a given context length. However, I think this is sometimes not the best abstraction. From another POV, transformers take as input embedding vectors of a given dimension $d_\text{model}$, and map them to an output logit vector of dimension $n_\text{vocab}$ (the number of tokens in the vocabulary). 

This is important in that it emphasizes the implementation of the transformer, and helps avoid possible missteps around thinking of transformers as "trying to model text" or various forms of finetuning as basically "conditioning" the model, and also accounts for the success of e.g. soft prompts (which don't correspond to any token sequences).
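A minimal PyTorch-flavored sketch of this "vectors in, logit vector out" view, including a soft prompt that corresponds to no token sequence (toy dimensions, no positional encoding or causal mask, and an encoder stack standing in for a real language model):

```python
import torch
import torch.nn as nn

d_model, n_vocab, n_soft = 64, 1000, 5

embed = nn.Embedding(n_vocab, d_model)          # token id -> embedding vector
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2
)
unembed = nn.Linear(d_model, n_vocab)           # final residual vector -> logits

# A "soft prompt": free embedding vectors that do not correspond to any token id.
soft_prompt = nn.Parameter(torch.randn(1, n_soft, d_model))

token_ids = torch.randint(0, n_vocab, (1, 10))  # a dummy token sequence
x = torch.cat([soft_prompt, embed(token_ids)], dim=1)   # the model only ever sees vectors
logits = unembed(encoder(x))                     # one logit vector per position

print(logits.shape)  # torch.Size([1, n_soft + 10, n_vocab])
```

Nothing in the forward pass cares whether the input vectors came from token embeddings, which is why soft prompts work at all.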

comment by TurnTrout · 2022-11-11T19:23:34.517Z · LW(p) · GW(p)

I think that the training goal of "the AI never makes a catastrophic decision" is unrealistic and unachievable and unnecessary. I think this is not a natural shape for values to take. Consider a highly altruistic man with anger problems, strongly triggered by e.g. a specific vacation home. If he is present with his wife at this home, he beats her. As long as he starts off away from the home, and knows about his anger problems, he will be motivated to resolve his anger problems, or at least avoid the triggering contexts / take other precautions to ensure her safety.

I think there has not ever existed an agent which makes endorsed choices across all decision-making contexts. Probably even Gandhi would murder someone in some adversarially selected context (even barring extremely optimized adversarial inputs). I don't think it's feasible to train a mind with robustly adequate decision-making, and I also don't think we need to do so in order to get a very aligned AI.

(None of this is locally arguing that adversarial training is bad as part of a training rationale for how we get an aligned agent, just that I don't see promise in the desired AI cognition of robustly adequate decision-making.)

comment by TurnTrout · 2022-08-16T05:02:50.430Z · LW(p) · GW(p)

"Goodhart" is no longer part of my native ontology for considering alignment failures. When I hear "The AI goodharts on some proxy of human happiness", I start trying to fill in a concrete example mind design which fits that description and which is plausibly trainable. My mental events are something like: 

Condition on: AI with primary value shards oriented around spurious correlate of human happiness; AI exhibited deceptive alignment during training, breaking perceived behavioral invariants during its sharp-capabilities-gain 

Warning: No history defined. How did we get here? 

Execute search for plausible training histories which produced this inner cognition 

Proposal: Reward schedule around approval and making people laugh; historical designers had insufficient understanding of outer signal->inner cognition mapping; designers accidentally provided reinforcement which empowered smile-activation and manipulate-internal-human-state-to-high-pleasure shards 

Objection: Concepts too human, this story is suspicious. Even conditioning on outcome, how did we get here? Why are there not more value shards? How did shard negotiation dynamics play out? 

Meta-objection: Noted, but your interlocutor's point probably doesn't require figuring this out.

I think that Goodhart is usually describing how the AI "takes advantage of" some fixed outer objective. But in my ontology, there isn't an outer objective—just inner cognition [LW · GW]. So I have to do more translation.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2022-08-16T08:38:02.982Z · LW(p) · GW(p)

breaking perceived behavioral invariants

There might be a natural concept for this that reframes deceptive alignment in the direction of reflection/extrapolation. Looking at deceptive alignment as a change of behavior not in response to capability gain, but instead as a change in response to stepping into a new situation, it's then like a phase change in the (unchanging) mapping from situations to behaviors (local policies). The behaviors of a model suddenly change as it moves to similar situations, in a way that's not "correctly prompted" by behaviors in original situations.

It's like a robustness failure, but with respect to actual behavior in related situations, rather than with respect to some outer objective or training/testing distribution. So it seems more like a failure of reflection/extrapolation, where behavior in new situations should be determined by coarse-grained descriptions of behavior in old situations (maybe "behavioral invariants" are something like that; or just algorithms) rather than by any other details of the model. Aligned properties of behavior in well-tested situations normatively-should screen off details of the model, in determining behavior in new situations (for a different extrapolated/"robustness"-hardened model prepared for use in the new situations).

comment by TurnTrout · 2022-01-15T01:37:43.489Z · LW(p) · GW(p)

Excalidraw is now quite good and works almost seamlessly on my iPad. It's also nice to use on the computer. I recommend it to people who want to make fast diagrams for their posts.

comment by TurnTrout · 2021-12-02T18:11:35.661Z · LW(p) · GW(p)

Reading EY's dath ilan glowfics, I can't help but think of how poor English is as a language to think in. I wonder if I could train myself to think without subvocalizing (presumably it would be too much work to come up with a well-optimized encoding of thoughts, all on my own, so no new language for me). No subvocalizing might let me think important thoughts more quickly and precisely.

Replies from: Yoav Ravid, matthew-barnett, TurnTrout
comment by Yoav Ravid · 2021-12-02T21:10:49.321Z · LW(p) · GW(p)

Interesting. My native language is Hebrew but I often find it easier to think in English.

comment by Matthew Barnett (matthew-barnett) · 2021-12-02T18:41:02.063Z · LW(p) · GW(p)

Reading EY's dath ilan glowfics, I can't help but think of how poor English is as a language to think in.

This is an interesting question, and one that has been studied by linguists.

comment by TurnTrout · 2021-12-02T18:15:24.079Z · LW(p) · GW(p)

I'm not sure how often I subvocalize to think thoughts. Often I have trouble putting a new idea into words just right, which means the raw idea essence came before the wordsmithing. But other times it feels like I'm synchronously subvocalizing as I brainstorm.

comment by TurnTrout · 2021-11-13T05:12:18.969Z · LW(p) · GW(p)

How might we align AGI without relying on interpretability?

I'm currently pessimistic about the prospect. But it seems worth thinking about, because wouldn't it be such an amazing work-around? 

My first idea straddles the border between contrived and intriguing. Consider some AGI-capable ML architecture, and imagine its parameter space being 3-colored as follows:

  • Gray if the parameter vector+training process+other initial conditions leads to a nothingburger (a non-functional model)
  • Red if the parameter vector+... leads to a misaligned or deceptive AI
  • Blue if the learned network's cognition is "safe" or "aligned" in some reasonable way

(This is a simplification, but let's roll with it)

And then if you could somehow reason about which parts of the parameter space weren't red, you could ensure that no deception ever occurs. That is, you might have very little idea what cognition the learned network implements, but magically somehow you have strong a priori / theoretical reasoning which ensures that whatever the cognition is, it's safe.

The contrived part is that you could just say "well, if we could wave a wand and produce an is-impact-aligned predicate, of course we could solve alignment." True, true.

But the intriguing part is that it doesn't seem totally impossible to me that we get some way of reasoning (at least statistically) about the networks and cognition produced by a given learning setup. See also: the power-seeking theorems, natural abstraction hypothesis, feature universality a la Olah's circuits agenda...

comment by TurnTrout · 2021-03-02T18:07:49.152Z · LW(p) · GW(p)

I'd like to see research exploring the relevance of intragenomic conflict to AI alignment research. Intragenomic conflict constitutes an in-the-wild example of misalignment, where conflict arises "within an agent" even though the agent's genes have strong instrumental incentives to work together (they share the same body). 

comment by TurnTrout · 2021-02-11T17:38:27.469Z · LW(p) · GW(p)

In an interesting parallel to John Wentworth's Fixing the Good Regulator Theorem [LW · GW], I have an MDP result that says: 

Suppose we're playing a game where I give you a reward function and you give me its optimal value function in the MDP. If you let me do this for $|\mathcal{S}|$ reward functions (one for each state in the environment), and you're able to provide the optimal value function for each, then you know enough to reconstruct the entire environment (up to isomorphism).

Roughly: being able to solve linearly many tasks (one per state) means you have enough information to model the entire environment.

comment by TurnTrout · 2020-11-05T21:50:18.519Z · LW(p) · GW(p)

I read someone saying that ~half of the universes in a neighborhood of ours went to Trump. But... this doesn't seem right. Assuming Biden wins in the world we live in, consider the possible perturbations to the mental states of each voter. (Big assumption! We aren't thinking about all possible modifications to the world state. Whatever that means.)

Assume all 2020 voters would be equally affected by a perturbation (which you can just think of as a decision-flip for simplicity, perhaps). Since we're talking about a neighborhood ("worlds pretty close to ours"), each world-modification is limited to N decision flips (where N isn't too big).

  • There are combinatorially more ways for a race to be close (in popular vote) than for it to not be close. But we're talking perturbations, and so since we're assuming Biden wins in this timeline, he's still winning in most other timelines close to ours
    • I don't know whether the electoral college really changes this logic. If we only consider a single state (PA), then it probably doesn't?
  • I'm also going to imagine that most decision-flips didn't have too many downstream effects, but this depends on when the intervention takes place: if it's a week beforehand, maybe people announce changes-of-heart to their families? A lot to think about there. I'll just pretend like they're isolated because I don't feel like thinking about it that long, and it's insanely hard to play out all those effects.
  • Since these decision-flips are independent, you don't get any logical correlations: the fact that I randomly changed my vote doesn't change how I expect people like me to vote. This is big.

Under my extremely simplified model, the last bullet is what makes me feel like most universes in our neighborhood were probably also Biden victories.

Replies from: Measure
comment by Measure · 2020-11-06T15:17:24.368Z · LW(p) · GW(p)

I think this depends on the distance considered. In worlds very very close to ours, the vast majority will have the same outcome as ours. As you increase the neighborhood size (I imagine this as considering worlds which diverged from ours more distantly in the past), Trump becomes more likely relative to Biden [edit: more likely than he is relative to Biden in more nearby worlds]. As you continue to expand, other outcomes start to have significant likelihood as well.

Replies from: TurnTrout
comment by TurnTrout · 2020-11-06T17:05:49.323Z · LW(p) · GW(p)

Why do you think that? How do you know that?

Replies from: Measure
comment by Measure · 2020-11-06T17:38:29.555Z · LW(p) · GW(p)

General intuition that "butterfly effect" is basically true, meaning that if a change occurs in a chaotic system, then the size of the downstream effects will tend to increase over time.

Edit: I don't have a good sense of how far back you would have to go to see meaningful change in outcome, just that the farther you go the more likely change becomes.

Replies from: TurnTrout
comment by TurnTrout · 2020-11-06T18:19:25.977Z · LW(p) · GW(p)

Sure, but why would those changes tend to favor Trump as you get outside of a small neighborhood? Like, why would P(Biden wins) / P(Biden or Trump wins) < .5? I agree it would at least approach .5 as the neighborhood grows. I think. 

Replies from: Measure
comment by Measure · 2020-11-06T18:52:10.104Z · LW(p) · GW(p)

I think we're in agreement here. I didn't mean to imply that Trump would become more likely than Biden in absolute terms, just that the ratio Trump/Biden would increase.

comment by TurnTrout · 2020-10-16T18:24:48.881Z · LW(p) · GW(p)

Epistemic status: not an expert

Understanding Newton's second law, $\vec{F} = \frac{d\vec{p}}{dt}$.

Consider the vector-valued velocity as a function of time, $\vec{v}(t)$. Scale this by the object's mass and you get the momentum function over time. Imagine this momentum function wiggling around over time, the vector from the origin rotating and growing and shrinking.

The second law says that force is the derivative of this rescaled vector function - if an object is more massive, then the same displacement of this rescaled arrow is a proportionally smaller velocity modification, because of the rescaling!

And also, by Newton's third law, forces come in equal and opposite action-reaction pairs, which is what conservation of momentum requires.
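As a quick sanity check of that framing, for constant mass the familiar form drops out:

$$
\vec{F} = \frac{d\vec{p}}{dt} = \frac{d(m\vec{v})}{dt} = m \frac{d\vec{v}}{dt} = m\vec{a}.
$$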

comment by TurnTrout · 2020-07-17T23:01:50.436Z · LW(p) · GW(p)

Tricking AIDungeon's GPT-3 model into writing HPMOR:

You start reading Harry Potter and the Methods of Rationality by Eliezer Yudkowsky:

" "It said to me," said Professor Quirrell, "that it knew me, and that it would hunt me down someday, wherever I tried to hide." His face was rigid, showing no fright.
"Ah," Harry said. "I wouldn't worry about that, Professor Quirrell." It's not like Dementors can actually talk, or think; the structure they have is borrowed from your own mind and expectations...
Now everyone was giving him very strange looks. The Aurors were glancing nervously at each other, at the Dementor, at Harry.
And they stood directly before the Dementor's cage.
"They are wounds in the world," Harry said. "It's just a wild guess, but I'm guessing the one who said that was Godric Gryffindor."
"Yes..." said Dumbledore. "How did you know?"
It is a common misconception, thought Harry, that all the best rationalists are Sorted into Ravenclaw, leaving none for other Houses. This is not so; being Sorted into Ravenclaw indicates that your strongest virtue is curiosity, wondering and desiring to know the true answer. And this is not the only virtue a rationalist needs. Sometimes you have to work hard on a problem, and stick to it for a while. Sometimes you need a clever plan for finding out. And sometimes what you need more than anything else to see an answer, is the courage to face it...
Harry's gaze went to what lay beneath the cloak, the horror far worse than any decaying mummy. Rowena Ravenclaw might also have known, for it was an obvious enough riddle once you saw it as a riddle.
And it was also obvious why the Patronuses were animals. The animals didn't know, and so were sheltered from the fear.
But Harry knew, and would always know, and would never be able to forget. He'd tried to teach himself to face reality without flinching, and though Harry had not yet mastered that art, still those grooves had been worn into his mind, the learned reflex to look toward the painful thought instead of away. Harry would never be able to forget by thinking warm happy thoughts about something else, and that was why the spell hadn't worked for him.
So Harry would think a warm happy thought that wasn't about something else.
Harry drew forth his wand that Professor Flitwick had returned to him, put his feet into the beginning stance for the Patronus Charm.
Within his mind, Harry discarded the last remnants of the peace of the phoenix, put aside the calm, the dreamlike state, remembered instead Fawkes's piercing cry, and roused himself for battle. Called upon all the pieces and elements of himself to awaken. Raised up within himself all the strength that the Patronus Charm could ever draw upon, to put himself into the right frame of mind for the final warm and happy thought; remembered all bright things.
The books his father had bought him.
Mum's smile when Harry had handmade her a mother's day card, an elaborate thing that had used half a pound of spare electronics parts from the garage to flash lights and beep a little tune, and had taken him three days to make.
Professor McGonagall telling him that his parents had died well, protecting him. As they had.
Realizing that Hermione was keeping up with him and even running faster, that they could be true rivals and friends.
Coaxing Draco out of the darkness, watching him slowly move toward the light.
Neville and Seamus and Lavender and Dean and everyone else who looked up to him, everyone that he would have fought to protect if anything threatened Hogwarts.
Everything that made life worth living.
His wand rose into the starting position for the Patronus Charm.
Harry thought of the stars, the image that had almost held off the Dementor even without a Patronus. Only this time, Harry added the missing ingredient, he'd never truly seen it but he'd seen the pictures and the video....

Harry thought of the image of his family, Mum and Dad and Liliana, standing outside the house on Gardiners Street, smiling at him. He thought of them in the back garden of that house, laughing as he chased the gnomes, and that was his final warm and happy thought.
And Harry cast the Patronus Charm.
A burst of silver erupted from the end of his wand and took the form of a deer, shining brightly.
...His mother's eyes.
And the final piece clicked into place.
Harry's mind was flooded with warmth, and the world seemed to still. Time itself seemed to stop, sound itself ceased to be of concern to Harry, all that remained was himself, and the terrible terrible void that surrounded him. But even as he felt himself begin to be pulled into it, a new force intervened.
A new shape of white, a beautiful silver serpent, coiled around him, protecting him from the darkness. The beautiful creature seemed to both give off warmth and energy, while also absorbing it from the void itself.
Harry felt himself pulled backward, away from the Dementor, and he found himself in the dark halls of Azkaban, being shielded by his wonderful silver Patronus.
...He thought of the night sky, of the moon, shining down upon him and his friends, and the stars beyond. His wand began to glow. "Expecto Patronum!" The glow burst from the tip of his wand and formed into a shape, and the shape was glowing white like a shadowless bright light, yet it had form and shape and figure, and that shape was the shadow of the memory of a star.
...He thought of the sunlight. Of warm air and a bright sky. Of happy people and happy times. Of hope and joy. Of a future worth living for, not just worth dying for. And Harry's wand began to glow. And then the golden stag erupted from the tip of his wand, and charged the Dementor with a cry that echoed through the tunnel.


[...] Only this time, Harry added the missing ingredient,
...the reason to fight for. "Everything I love," Harry hissed out, "everything I love, Reflect if you're worth saving." And with that, the first glimmer of white silver started appearing in the air. "What is that?" whispered Hermione. "My Patronus," Harry said softly. "It's not working." The silver was fading, like a bubble that threatened to burst. "I can't cast the Patronus Charm..."


...He imagined the stars as they looked from Earth, imagined his true home, thought of it being there with him, holding off the Dementor. [?!]

The cold was already fading. That wouldn't help against this fear, but another warm memory would. Harry brought to mind a summer's day in primary school, where he'd first learned to cast a spell.
... the most important part. His father, and his mother, waving to him from a dock as a ship bound for Egypt sailed away. Harry filled his mind with the happy thought, let it overtake him, let it become him.

Let it carry him, like an airship on a favorable wind. "Expecto Patronum!" A burst of pure light, the blazing shadow of an immensely tall humanoid, armored and bearing a helm and a sword. [?!]

... a wish beyond the thought, a belief that he could not truly be saved. "Don't die," Harry whispered to himself, as he had when facing Quirrell, a prayer to whatever Gods might be listening, an order to himself rather than request of them. "Don't die, don't die, don't die..." Concentrating solely upon that wish, the image of a star exploding into existence upon his outstretched wand.


... that he'd been too sad to think of before. His mother's love. He had to believe. He had to believe, even if she wasn't really there, that she loved him. He had to believe, or else the Patronus wouldn't come. A few feet away from him, Draco fell to his knees. "No," he whispered. "No!"
That was what broke Harry's concentration. In front of him, the shimmering humanoid shape collapsed, the mist and light fading away. There was silence for a moment. Then a howl of mad laughter shook the chamber, as Mother's face contorted with rage. "Fool! Idiot! You thought you could beat me, with such a pitiful attempt!" Her voice was rising to a shriek.


...the reason why it had been almost enough, and not quite. Hope. I have seen the brightest futures of the world, and I have seen them swept away. "Expecto Patronum!" From the tip of his wand burst a shining Unicorn, white as snow and bright as the sun.
And for just one heartbeat it stood there, before the world went black.

The Dark Lord is gone. The Ministry has fallen. London is burning. But you have a mission, and you will see it through, because this is the last light in the darkness. This is the last hope for the world.

And you will see it done.

Replies from: Pattern, TurnTrout, habryka4
comment by Pattern · 2020-07-19T00:16:46.256Z · LW(p) · GW(p)

I love the ending. It's way more exciting,

and terrifying.

comment by TurnTrout · 2020-07-18T15:39:29.058Z · LW(p) · GW(p)

... that which he had thought was absent. Love. He didn't think of the books, or his parents or Professor McGonagall. He thought of Hermione, and how she had always believed in him. He thought of how she'd helped him in so many ways, not just with homework, not just with fighting the Dark Arts. How she'd tried to help him every day since they'd first met on the Hogwarts Express.

comment by habryka (habryka4) · 2020-07-18T01:18:56.707Z · LW(p) · GW(p)

Mod note: Spoilerified, to shield the eyes of the innocent.

Replies from: TurnTrout
comment by TurnTrout · 2020-07-18T01:34:34.938Z · LW(p) · GW(p)

My bad! Thanks.

comment by TurnTrout · 2020-07-13T13:20:18.523Z · LW(p) · GW(p)

ARCHES distinguishes between single-agent/single-user and single-agent/multi-user alignment scenarios. Given assumptions like "everyone in society is VNM-rational" and "societal preferences should also follow VNM rationality", and "if everyone wants a thing, society also wants the thing", Harsanyi's utilitarian theorem shows that the societal utility function is a linear non-negative weighted combination of everyone's utilities. So, in a very narrow (and unrealistic) setting, Harsanyi's theorem tells you how the single-multi solution is built from the single-single solutions. 
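As a sketch (my notation), the theorem's conclusion is that the societal utility function takes the form

$$
U_{\text{society}}(x) = \sum_{i=1}^{n} w_i \, U_i(x), \qquad w_i \geq 0,
$$

so under those assumptions the single-multi solution is pinned down by the single-single utilities plus a choice of non-negative weights.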

This obviously doesn't actually solve either alignment problem. But, it seems like an interesting parallel for what we might eventually want.

comment by TurnTrout · 2020-05-13T15:25:36.291Z · LW(p) · GW(p)

From FLI's AI Alignment Podcast: Inverse Reinforcement Learning and Inferring Human Preferences with Dylan Hadfield-Menell:

Dylan: There’s one example that I think about, which is, say, you’re cooperating with an AI system playing chess. You start working with that AI system, and you discover that if you listen to its suggestions, 90% of the time, it’s actually suggesting the wrong move or a bad move. Would you call that system value-aligned?

Lucas: No, I would not.

Dylan: I think most people wouldn’t. Now, what if I told you that that program was actually implemented as a search that’s using the correct goal test? It actually turns out that if it’s within 10 steps of a winning play, it always finds that for you, but because of computational limitations, it usually doesn’t. Now, is the system value-aligned? I think it’s a little harder to tell here. What I do find is that when I tell people the story, and I start off with the search algorithm with the correct goal test, they almost always say that that is value-aligned but stupid.

There’s an interesting thing going on here, which is we’re not totally sure what the target we’re shooting for is. You can take this thought experiment and push it further. Supposed you’re doing that search, but, now, it says it’s heuristic search that uses the correct goal test but has an adversarially chosen heuristic function. Would that be a value-aligned system? Again, I’m not sure. If the heuristic was adversarially chosen, I’d say probably not. If the heuristic just happened to be bad, then I’m not sure.

Consider the optimizer/optimized distinction: the AI assistant is better described as optimized to either help or stop you from winning the game. This optimization may or may not have been carried out by a process which is "aligned" with you; I think that ascribing intent alignment to the assistant's creator makes more sense. In terms of the adversarial heuristic case, intent alignment seems unlikely.

But, this also feels like passing the buck – hoping that at some point in history, there existed something to which we are comfortable ascribing alignment and responsibility.

comment by TurnTrout · 2020-05-06T17:19:57.756Z · LW(p) · GW(p)

On page 22 of Probabilistic reasoning in intelligent systems, Pearl writes:

Raw experiential data is not amenable to reasoning activities such as prediction and planning; these require that data be abstracted into a representation with a coarser grain. Probabilities are summaries of details lost in this abstraction...

An agent observes a sequence of images displaying either a red or a blue ball. The balls are drawn according to some deterministic rule that depends on the time step. Reasoning directly from the experiential data leads to ~Solomonoff induction. What might Pearl's "coarser grain" look like for a real agent?

Imagine an RNN trained with gradient descent and binary cross-entropy loss function ("given the data so far, did it correctly predict the next draw?"), and suppose the learned predictive accuracy is good. How might this happen?

  1. The network learns to classify whether the most recent input image contains a red or blue ball, for instrumental predictive reasons, and

  2. A recurrent state records salient information about the observed sequence, which could be arbitrarily long. The RNN + learned weights form a low-complexity function approximator in the space of functions on arbitrary-length sequences. My impression is that gradient descent has simplicity as an inductive bias (cf double descent debate).

Being an approximation of some function over arbitrary-length sequences, the network outputs a prediction for the next color, a specific feature of the next image in the sequence. Can this prediction be viewed as nontrivially probabilistic? In other words, could we use the output to learn about the network's "beliefs" over hypotheses which generate the sequence of balls?

The RNN probably isn't approximating the true (deterministic) hypothesis which explains the sequence of balls. Since it's trained to minimize cross-entropy loss, it learns to hedge, essentially making it approximate a distribution over hypotheses. This implicitly defines its "posterior probability distribution".

Under this interpretation, the output is just the measure of hypotheses predicting blue versus the measure predicting red.
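Here's a minimal PyTorch sketch of the setup being described (the deterministic rule, architecture size, and training budget are placeholders I chose for illustration):

```python
import torch
import torch.nn as nn

# Toy version of the setup above: a deterministic rule generates the ball colors
# (here, "blue iff the time step is divisible by 3"), and an RNN is trained with
# binary cross-entropy to predict the next color from the history so far.
T = 300
colors = torch.tensor([1.0 if t % 3 == 0 else 0.0 for t in range(T)])  # 1 = blue

rnn = nn.GRU(input_size=1, hidden_size=16, batch_first=True)
readout = nn.Linear(16, 1)
opt = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

inputs = colors[:-1].reshape(1, -1, 1)   # observed history
targets = colors[1:].reshape(1, -1, 1)   # next color at each step

for _ in range(500):
    hidden_states, _ = rnn(inputs)
    logits = readout(hidden_states)
    loss = loss_fn(logits, targets)
    opt.zero_grad(); loss.backward(); opt.step()

# The sigmoid of each logit can be read as the network's "credence" that the next
# ball is blue: a hedged summary over hypotheses consistent with the history,
# rather than the deterministic truth itself.
print(torch.sigmoid(readout(rnn(inputs)[0])[0, -5:, 0]).detach())
```

Reading the sigmoid of each output as a credence in "blue next" is exactly the implicit-posterior interpretation above.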

Replies from: TurnTrout, An1lam
comment by TurnTrout · 2020-05-06T17:37:19.042Z · LW(p) · GW(p)

In particular, the coarse-grain is what I mentioned in 1) – beliefs are easier to manage with respect to a fixed featurization of the observation space.

comment by NaiveTortoise (An1lam) · 2020-05-06T19:43:08.114Z · LW(p) · GW(p)

Only related to the first part of your post, I suspect Pearl!2020 would say the coarse-grained model should be some sort of causal model on which we can do counterfactual reasoning.

comment by TurnTrout · 2020-04-28T19:23:47.063Z · LW(p) · GW(p)

We can imagine aliens building a superintelligent agent which helps them get what they want. This is a special case of aliens inventing tools. What kind of general process should these aliens use – how should they go about designing such an agent?

Assume that these aliens want things in the colloquial sense (not that they’re eg nontrivially VNM EU maximizers) and that a reasonable observer would say they’re closer to being rational than antirational. Then it seems[1] like these aliens eventually steer towards reflectively coherent rationality (provided they don’t blow themselves to hell before they get there): given time, they tend to act to get what they want, and act to become more rational. But, they aren’t fully “rational”, and they want to build a smart thing that helps them. What should they do?

In this situation, it seems like they should build an agent which empowers them & increases their flexible control over the future, since they don’t fully know what they want now. Lots of flexible control means they can better error-correct and preserve value for what they end up believing they actually want. This also protects them from catastrophe and unaligned competitor agents.


  1. I don’t know if this is formally and literally always true, I’m just trying to gesture at an intuition about what kind of agentic process these aliens are. ↩︎

comment by TurnTrout · 2020-03-26T23:51:48.935Z · LW(p) · GW(p)

It seems to me that Zeno's paradoxes leverage incorrect, naïve notions of time and computation. We exist in the world, and we might suppose that the world is being computed in some way. If time is continuous, then the computer might need to do some pretty weird things to determine our location at an infinite number of intermediate times. However, even if that were the case, we would never notice it – we exist within time and we would not observe the external behavior of the system which is computing us, nor its runtime.

Replies from: Pattern
comment by Pattern · 2020-03-28T06:18:45.520Z · LW(p) · GW(p)

What are your thoughts on infinitely small quantities?

Replies from: TurnTrout
comment by TurnTrout · 2020-03-28T13:14:46.395Z · LW(p) · GW(p)

Don't have much of an opinion - I haven't rigorously studied infinitesimals yet. I usually just think of infinite / infinitely small quantities as being produced by limiting processes. For example, the intersection of all the $\epsilon$-balls around a real number is just that number (under the standard topology), which set has 0 measure and is, in a sense, "infinitely small".

comment by TurnTrout · 2020-03-18T18:36:37.285Z · LW(p) · GW(p)

Very rough idea

In 2018, I started thinking about corrigibility as "being the kind of agent lots of agents would be happy to have activated". This seems really close to a more ambitious version of what AUP tries to do (not be catastrophic for most agents).

I wonder if you could build an agent that rewrites itself / makes an agent which would tailor the AU landscape towards its creators' interests, under a wide distribution of creator agent goals/rationalities/capabilities. And maybe you then get a kind of generalization, where most simple algorithms which solve this solve ambitious AI alignment in full generality.

comment by TurnTrout · 2020-02-05T16:41:01.779Z · LW(p) · GW(p)

AFAICT, the deadweight loss triangle from e.g. price ceilings is just a lower bound on lost surplus. Inefficient allocation to consumers means that people who value the good less than the market equilibrium price can buy it, while the DWL triangle optimistically assumes that the consumers with the highest willingness to pay will eat up the limited supply.
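A tiny numerical illustration (all numbers made up) of why misallocation adds losses on top of the textbook triangle:

```python
# Hypothetical numbers: three consumers value a single unit of the good at 10, 8,
# and 6, and a binding price ceiling shrinks the quantity supplied to 2 units.
values = [10, 8, 6]
supply = 2

# The textbook DWL calculation assumes the units flow to the highest-value buyers...
surplus_if_efficiently_rationed = sum(sorted(values, reverse=True)[:supply])   # 10 + 8 = 18

# ...but under a ceiling the good may be rationed by queuing or luck, so the units
# can end up with the lower-value buyers instead.
surplus_if_worst_rationed = sum(sorted(values)[:supply])                       # 6 + 8 = 14

print(surplus_if_efficiently_rationed - surplus_if_worst_rationed)  # extra lost surplus: 4
```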

Replies from: Wei_Dai, Dagon
comment by Wei Dai (Wei_Dai) · 2020-02-07T22:14:31.405Z · LW(p) · GW(p)

Good point. By searching for "deadweight loss price ceiling lower bound" I was able to find a source (see page 26) that acknowledges this, but most explications of price ceilings do not seem to mention that the triangle is just a lower bound for lost surplus.

comment by Dagon · 2020-02-07T00:02:40.045Z · LW(p) · GW(p)

Lost surplus is definitely a loss - it's not linear with utility, but it's not uncorrelated. Also, if supply is elastic over any relevant timeframe, there's an additional source of loss. And I'd argue that for most goods, over timeframes smaller than most price-fixing proposals are expected to last, there is significant price elasticity.

Replies from: TurnTrout
comment by TurnTrout · 2020-02-07T13:10:46.833Z · LW(p) · GW(p)

Lost surplus is definitely a loss - it's not linear with utility, but it's not uncorrelated.

I don't think I was disagreeing?

Replies from: Dagon
comment by Dagon · 2020-02-07T17:13:50.111Z · LW(p) · GW(p)

Ah, I took the "just" in "just a lower bound on lost surplus" as an indicator that it's less important than other factors. And I lightly believe (meaning: for the cases I find most available, I believe it, but I don't know how general it is) that the supply elasticity _is_ the more important effect of such distortions.

So I wanted to reinforce that I wasn't ignoring that cost, only pointing out a greater cost.

comment by TurnTrout · 2019-11-26T18:43:56.755Z · LW(p) · GW(p)

I was having a bit of trouble holding the point of quadratic residues in my mind. I could effortfully recite the definition, give an example, and walk through the broad-strokes steps of proving quadratic reciprocity. But it felt fake and stale and memorized.

Alex Mennen suggested a great way of thinking about it. For some odd prime $p$, consider the multiplicative group $(\mathbb{Z}/p\mathbb{Z})^\times$. This group is abelian and has even order $p-1$. Now, consider a primitive root / generator $g$. By definition, every element of the group can be expressed as $g^k$. The quadratic residues are those expressible with even $k$ (this is why, for prime numbers, half of the group is square mod $p$). This also lets us easily see that the residue subgroup is closed under multiplication by $g^2$ (which generates it), that two non-residues multiply to make a residue, and that a residue and a non-residue make a non-residue. The Legendre symbol then just tells us, for $a = g^k$, whether $k$ is even.

Now, consider composite numbers whose prime decomposition only contains $0$ or $1$ in the exponents (i.e. squarefree numbers). By the fundamental theorem of finite abelian groups and the Chinese remainder theorem, we see that a number is square mod $n$ iff it is square mod all of the prime factors.
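A quick computational check of the "even powers of a generator" picture (p = 11 is an arbitrary choice):

```python
p = 11
group = list(range(1, p))

# Find a primitive root g: an element whose powers hit every nonzero residue.
g = next(x for x in group if len({pow(x, k, p) for k in range(1, p)}) == p - 1)

even_powers = {pow(g, k, p) for k in range(0, p - 1, 2)}
squares = {pow(x, 2, p) for x in group}

print(g, sorted(even_powers), sorted(squares), even_powers == squares)  # last value: True
```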

I'm still a little confused about how to think of squares mod prime powers $p^k$.

Replies from: AlexMennen
comment by AlexMennen · 2019-11-26T23:28:49.532Z · LW(p) · GW(p)

The theorem: for $a = p^k u$, where $u$ is relatively prime to an odd prime $p$ and $0 \leq k < n$, $a$ is a square mod $p^n$ iff $u$ is a square mod $p$ and $k$ is even.

The real meat of the theorem is the case $k = 0$ (i.e. a square mod $p$ that isn't a multiple of $p$ is also a square mod $p^n$). Deriving the general case from there should be fairly straightforward, so let's focus on this special case.

Why is it true? This question has a surprising answer: Newton's method for finding roots of functions. Specifically, we want to find a root of $f(x) = x^2 - a$, except in $\mathbb{Z}/p^n\mathbb{Z}$ instead of $\mathbb{R}$.

To adapt Newton's method to work in this situation, we'll need the p-adic absolute value on $\mathbb{Z}$: $|p^k u|_p := p^{-k}$ for $u$ relatively prime to $p$. This has lots of properties that you should expect of an "absolute value": it's positive ($|x|_p \geq 0$, with $|x|_p = 0$ only when $x = 0$), multiplicative ($|xy|_p = |x|_p |y|_p$), symmetric ($|-x|_p = |x|_p$), and satisfies a triangle inequality ($|x+y|_p \leq |x|_p + |y|_p$; in fact, we get more in this case: $|x+y|_p \leq \max(|x|_p, |y|_p)$). Because of positivity, symmetry, and the triangle inequality, the p-adic absolute value induces a metric (in fact, an ultrametric, because of the strong version of the triangle inequality) $d(x, y) := |x - y|_p$. To visualize this distance function, draw giant circles, and sort integers into circles based on their value mod $p$. Then draw smaller circles inside each of those giant circles, and sort the integers in the big circle into the smaller circles based on their value mod $p^2$. Then draw even smaller circles inside each of those, and sort based on value mod $p^3$, and so on. The distance between two numbers corresponds to the size of the smallest circle encompassing both of them. Note that, in this metric, $p^n$ converges to $0$ as $n \to \infty$.

Now on to Newton's method: if $a$ is a square mod $p$, let $x_0$ be one of its square roots mod $p$. Then $|f(x_0)|_p \leq p^{-1}$; that is, $x_0$ is somewhat close to being a root of $f$ with respect to the p-adic absolute value. Also $f'(x_0) = 2x_0$, so $|f'(x_0)|_p = 1$; that is, $f$ is steep near $x_0$. This is good, because starting close to a root and the slope of the function being steep enough are things that help Newton's method converge; in general, it might bounce around chaotically instead. It turns out that, in this case, $|f(x_0)|_p < |f'(x_0)|_p^2$ is exactly the right sense of being close enough to a root with steep enough slope for Newton's method to work.

Now, Newton's method says that, from $x_k$, you should go to $x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}$. Here $f'(x_k) = 2x_k$ is invertible mod $p^n$, so we can do this. Now here's the kicker: $|f(x_{k+1})|_p \leq |f(x_k)|_p^2$, so $|f(x_{k+1})|_p < |f(x_k)|_p$. That is, $x_{k+1}$ is closer to being a root of $f$ than $x_k$ is. Now we can just iterate this process until we reach some $x_m$ with $|f(x_m)|_p \leq p^{-n}$, and we've found our square root of $a$ mod $p^n$.
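A small sketch of the iteration just described, lifting a square root of $a$ mod $p$ to a square root mod $p^n$ (the specific $a$, $p$, $n$ are toy choices of mine; requires Python 3.8+ for modular inverses via pow):

```python
def sqrt_mod_prime_power(a, p, n):
    """Lift a square root of a mod p to a square root mod p**n via Newton's method."""
    # Start from a root mod p, found by brute force (p is small here).
    x = next(r for r in range(p) if (r * r - a) % p == 0)
    modulus = p
    while modulus < p ** n:
        modulus = min(modulus ** 2, p ** n)
        # Newton step x <- x - f(x)/f'(x) with f(x) = x^2 - a, done mod `modulus`.
        inv_deriv = pow(2 * x, -1, modulus)
        x = (x - (x * x - a) * inv_deriv) % modulus
    return x

p, n, a = 7, 4, 2          # 2 is a square mod 7 (3^2 = 9 = 2 mod 7)
root = sqrt_mod_prime_power(a, p, n)
print(root, (root * root - a) % p ** n)   # second value should be 0
```

Each pass roughly doubles the power of $p$ to which the candidate root is accurate, mirroring the $|f(x_{k+1})|_p \leq |f(x_k)|_p^2$ step above.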

Exercise: Do the same thing with cube roots. Then with roots of arbitrary polynomials.

Replies from: AlexMennen
comment by AlexMennen · 2019-11-26T23:37:25.091Z · LW(p) · GW(p)

The part about derivatives might have seemed a little odd. After all, you might think, $\mathbb{Z}$ is a discrete set, so what does it mean to take derivatives of functions on it? One answer to this is to just differentiate symbolically using polynomial differentiation rules. But I think a better answer is to remember that we're using a different metric than usual, and $\mathbb{Z}$ isn't discrete at all! Indeed, for any number $x$, $|(x + p^n) - x|_p = p^{-n} \to 0$, so no points are isolated, and we can define differentiation of functions on $\mathbb{Z}$ in exactly the usual way with limits.

comment by TurnTrout · 2019-10-22T19:22:03.859Z · LW(p) · GW(p)

I noticed I was confused and liable to forget my grasp on what the hell is so "normal" about normal subgroups. You know what that means - colorful picture time!

First, the classic definition. A subgroup $H$ of a group $G$ is normal when, for all group elements $g$, $gH = Hg$ (this is trivially true for all subgroups of abelian groups).

ETA: I drew the bounds a bit incorrectly; $e$ is most certainly within the left coset $eH$ ($= H$).

Notice that nontrivial cosets aren't subgroups, because they don't have the identity $e$.

This "normal" thing matters because sometimes we want to highlight regularities in the group by taking a quotient. Taking an example from the excellent Visual Group Theory, the integers $\mathbb{Z}$ have a quotient group $\mathbb{Z}/12\mathbb{Z}$ consisting of the congruence classes $[0], [1], \ldots, [11]$, each integer slotted into a class according to its value mod 12. We're taking a quotient with the cyclic subgroup $12\mathbb{Z}$.

So, what can go wrong? Well, if the subgroup isn't normal, strange things can happen when you try to take a quotient.

Here's what's happening:

Normality means that when you form the new Cayley diagram, the arrows behave properly. You're at the origin, the coset $H$. You travel to the coset $gH$ using $g$. What we need for this diagram to make sense is that if you then follow any $h \in H$ you please, applying $g^{-1}$ means you go back to $H$. In other words, $ghg^{-1} \in H$ for every $h \in H$. In other words, $gHg^{-1} \subseteq H$. In other other words (and using a few properties of groups), $gHg^{-1} = H$.
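As a concrete check of this condition in code, here is $S_3$ with permutations encoded as tuples (my own quick encoding, not from the post):

```python
from itertools import permutations

def compose(f, g):
    """Apply g, then f (permutations of {0,1,2} encoded as tuples)."""
    return tuple(f[g[i]] for i in range(3))

def inverse(f):
    inv = [0, 0, 0]
    for i, fi in enumerate(f):
        inv[fi] = i
    return tuple(inv)

S3 = list(permutations(range(3)))
# A3: the even permutations (even number of inversions).
A3 = [p for p in S3 if sum(1 for i in range(3) for j in range(i) if p[j] > p[i]) % 2 == 0]
H = [(0, 1, 2), (1, 0, 2)]          # the subgroup generated by the transposition (0 1)

def is_normal(subgroup):
    return all(compose(compose(g, h), inverse(g)) in subgroup for g in S3 for h in subgroup)

print(is_normal(A3), is_normal(H))   # True, False
```

$A_3$ passes the $gHg^{-1} \subseteq H$ test for every $g$, while the two-element subgroup generated by a transposition does not, which is exactly why $S_3/A_3$ is a sensible quotient and a "quotient" by the transposition subgroup is not.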

comment by TurnTrout · 2023-01-19T04:47:54.624Z · LW(p) · GW(p)

The existence of the human genome yields at least two classes of evidence which I'm strongly interested in.

  1. Humans provide many highly correlated datapoints on general intelligence (human minds), as developed within one kind of learning process (best guess: massively parallel circuitry, locally randomly initialized, self-supervised learning + RL [LW · GW]). 
    1. We thereby gain valuable information about the dynamics of that learning process. For example, people care about lots of things (cars, flowers, animals, friends), and don't just have a single unitary mesa-objective. 
  2. What does the fact that evolution found the human learning process in particular say about the rest of learning-process hyperparameter space?
    1. Since we probably won't be training AIs using a brain-like architecture, this part is the alignment-relevant part!
      1. EG "Humans care about lots of things" is an upwards update on RL-training agents to have lots of values and goals.
      2. "Humans care about things besides their own reward signals or tight correlates thereof" is a downward update on reward hacking being hard to avoid in terms of architecture. We make this update because we have observed a sample from evolution's "empirical distribution" over learning processes. 
      3. There's a confounder, though, of what that distribution is! 
        1. If the human learning process were somehow super weird relative to AI, our AI-related inferences are weaker. 
        2. But if evolution was basically optimizing for real-world compute efficiency and happened to find the human learning process, humans are probably typical real-world general intelligences along many axes.
    2. There are many unknown unknowns. I expect humans to be weird in some ways, but not in any particular way. EG maybe we are unusually prone to forming lots of values (and not just valuing a single quantity), but I don't have any reason to suspect that we're weird in that way in particular, relative to other learning processes (like RL-finetuned LLMs).
comment by TurnTrout · 2022-11-19T01:33:59.398Z · LW(p) · GW(p)

Transplanting algorithms into randomly initialized networks. I wonder if you could train a policy network to walk upright in sim, back out the "walk upright" algorithm, randomly initialize a new network which can call that algorithm as a "subroutine call" (but the walk-upright weights are frozen), and then have the new second model learn to call that subroutine appropriately? Possibly the learned representations would be convergently similar enough to interface quickly via SGD update dynamics. 

If so, this provides some (small, IMO) amount of rescue for the "evolved modularity" hypothesis in light of information inaccessibility [LW · GW]. 
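A rough PyTorch sketch of the "frozen subroutine" wiring (the walking policy here is a randomly initialized stand-in for the sim-trained network, and the gating/stitching scheme is just one arbitrary choice among many):

```python
import torch
import torch.nn as nn

obs_dim, act_dim = 32, 8

# Stand-in for the pretrained "walk upright" policy; in the real experiment its
# weights would come from the sim-trained network. Either way, we freeze them.
walk_policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))
for p in walk_policy.parameters():
    p.requires_grad_(False)

class Caller(nn.Module):
    """New network that learns when and how to invoke the frozen subroutine."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Linear(obs_dim, obs_dim)   # trainable "interface" layer
        self.gate = nn.Linear(obs_dim, 1)           # trainable decision to call it
        self.own_action = nn.Linear(obs_dim, act_dim)

    def forward(self, obs):
        call_weight = torch.sigmoid(self.gate(obs))
        subroutine_action = walk_policy(self.encode(obs))   # frozen weights inside
        return call_weight * subroutine_action + (1 - call_weight) * self.own_action(obs)

caller = Caller()
out = caller(torch.randn(4, obs_dim))
print(out.shape)                                                  # torch.Size([4, 8])
print(all(not p.requires_grad for p in walk_policy.parameters()))  # True: subroutine frozen
```

Only the new Caller weights would receive gradients; whether SGD actually learns to route through the frozen policy is the empirical question being asked here.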

Replies from: cfoster0
comment by cfoster0 · 2022-11-19T04:11:25.512Z · LW(p) · GW(p)

This is basically how I view the DeepMind Flamingo model training to have operated, where a few stitching layers learn to translate the outputs of a frozen vision encoder into "subroutine calls" into the frozen language model, such that visual concept circuits ping their corresponding text token output circuits.

comment by TurnTrout · 2022-10-28T16:05:50.474Z · LW(p) · GW(p)

When I was younger, I never happened to "look in the right direction" on my own in order to start the process of becoming agentic and coherent. Here are some sentences I wish I had heard when I was a kid:

  • The world is made out of parts you can understand, in particular via math
  • By understanding the parts, you can also control them
  • Your mind is part of the world and sometimes it doesn't work properly, but you can work on that too
  • Humanity's future is not being properly managed and so you should do something about that

Plausibly just hearing this would have done it for me, but probably that's too optimistic. 

Replies from: Dagon
comment by Dagon · 2022-10-29T17:48:54.864Z · LW(p) · GW(p)

Given the somewhat continuous (if you squint) nature of self-awareness, there MUST be some people exactly on the margin where a very small push is sufficient to accelerate their movement along the path. But I suspect it's pretty rare, and it's optimistic (or pessimistic, depending on your valuation of your prior innocence and experiences) to think you were in precisely the right mind-state that this could have made a huge difference.

comment by TurnTrout · 2021-11-25T04:40:30.556Z · LW(p) · GW(p)

What's up with biological hermaphrodite species? My first reaction was, "no way, what about the specialization benefits from sexual dimorphism?"

There are apparently no hermaphrodite mammal or bird species, which seems like evidence supporting my initial reaction. But there are, of course, other hermaphrodite species—maybe they aren't K-strategists, and so sexual dimorphism and role specialization isn't as important?

comment by TurnTrout · 2021-02-17T16:53:49.338Z · LW(p) · GW(p)

I went into a local dentist's office to get more prescription toothpaste; I was wearing my 3M p100 mask (with a surgical mask taped over the exhaust, in order to protect other people in addition to the native exhaust filtering offered by the mask). When I got in, the receptionist was on the phone. I realized it would be more sensible for me to wait outside and come back in, but I felt a strange reluctance to do so. It would be weird and awkward to leave right after entering. I hovered near the door for about 5 seconds before actually leaving. I was pretty proud that I was able to override my naive social instincts in a situation where they really didn't make sense (I will never see that person again, and they would probably see my minimizing shared air as courteous anyways), to both my benefit and the receptionist's.

Also, p100 masks are amazing! When I got home, I used hand sanitizer. I held my sanitized hands right up to the mask filters, but I couldn't even sniff a trace of the alcohol. When the mask came off, the alcohol slammed into my nose immediately.

comment by TurnTrout · 2021-01-02T03:59:05.155Z · LW(p) · GW(p)

Instead of waiting to find out you were confused about new material you learned, pre-emptively google things like "common misconceptions about [concept]" and put the answers in your spaced repetition system, or otherwise magically remember them.

comment by TurnTrout · 2020-07-23T22:45:56.637Z · LW(p) · GW(p)

In order to reduce bias (halo effect, racism, etc), shouldn't many judicial proceedings generally be held over telephone, and/or through digital audio-only calls with voice anonymizers? 

Replies from: mark-xu
comment by Mark Xu (mark-xu) · 2020-07-25T04:11:22.213Z · LW(p) · GW(p)

I don't see strong reasons why this isn't a good idea. I have heard that technical interviews sometimes get conducted with voice anonymizers.

comment by TurnTrout · 2020-05-06T15:46:45.897Z · LW(p) · GW(p)

Continuous functions can be represented by their values on the rationals; in particular, for each real number $x$, choose a sequence of rational numbers $(q_n)$ converging to $x$, and let $f(x) = \lim_{n \to \infty} f(q_n)$.

Therefore, there is an injection from the vector space of continuous functions $f : \mathbb{R} \to \mathbb{R}$ to the vector space of all real sequences: since the rationals are countable, enumerate them as $q_1, q_2, q_3, \ldots$. Then the sequence $(f(q_1), f(q_2), f(q_3), \ldots)$ represents the continuous function $f$.
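
In symbols, one way to write the map just described:

$$\Phi : C(\mathbb{R}) \to \mathbb{R}^{\mathbb{N}}, \qquad \Phi(f) := \big(f(q_1), f(q_2), f(q_3), \ldots\big).$$

$\Phi$ is linear, and it is injective because two continuous functions that agree on the dense set $\mathbb{Q}$ agree everywhere.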

Replies from: itaibn0
comment by itaibn0 · 2020-05-06T20:30:33.088Z · LW(p) · GW(p)

This map is not a surjection because not every map from the rational numbers to the real numbers is continuous, and so not every sequence represents a continuous function. It is injective, and so it shows that a basis for the latter space is at least as large in cardinality as a basis for the former space. One can construct an injective map in the other direction, showing that both spaces have bases of the same cardinality, and so they are isomorphic.

Replies from: TurnTrout
comment by TurnTrout · 2020-05-06T20:59:50.419Z · LW(p) · GW(p)

Fixed, thanks.

comment by TurnTrout · 2019-10-01T02:44:33.918Z · LW(p) · GW(p)

(Just starting to learn microecon, so please feel free to chirp corrections)

How diminishing marginal utility helps create supply/demand curves: think about the uses you could find for a pillow. Your first few pillows are used to help you fall asleep. After that, maybe some for your couch, and then a few spares to keep in storage. You prioritize pillow allocation in this manner; the value of the latter uses is much less than the value of having a place to rest your head.

How many pillows do you buy at a given price point? Well, if you buy any, you'll buy some for your bed at least. Then, when pillows get cheap enough, you'll start buying them for your couch. At what price, exactly? Depends on the person, and their utility function. So as the price goes up or down, it does or doesn't become worth it to buy pillows for different levels of the "use hierarchy".

Then part of what the supply/demand curve is reflecting is the distribution of pillow use valuations in the market. It tracks when different uses become worth it for different agents, and how significant these shifts are!
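
A toy numerical version of this, with made-up valuations, just to show how individual use-hierarchies aggregate into a downward-sloping demand curve:

```python
# Toy illustration: each consumer values successive pillows less (diminishing
# marginal utility); quantity demanded at a price is the number of uses whose
# value exceeds that price, summed over consumers. All numbers are made up.
marginal_values = {
    "alice": [30, 25, 10, 8, 3],   # bed, bed, couch, couch, storage
    "bob":   [40, 12, 5],
    "carol": [20, 18, 15, 4],
}

def quantity_demanded(price):
    return sum(
        sum(1 for v in values if v > price)
        for values in marginal_values.values()
    )

for price in [35, 25, 15, 5]:
    print(f"price ${price}: {quantity_demanded(price)} pillows demanded")
# As the price falls, lower-value uses (couch, storage) become worth it,
# so quantity demanded rises -- a downward-sloping demand curve.
```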

comment by TurnTrout · 2024-02-01T01:19:55.474Z · LW(p) · GW(p)

Per my recent chat with it, chatgpt 3.5 seems "situationally aware"... but nothing groundbreaking has happened because of that AFAICT.

From the LW wiki page [? · GW]:

Ajeya Cotra uses the term "situational awareness" to refer to a cluster of skills including “being able to refer to and make predictions about yourself as distinct from the rest of the world,” “understanding the forces out in the world that shaped you and how the things that happen to you continue to be influenced by outside forces,” “understanding your position in the world relative to other actors who may have power over you,” “understanding how your actions can affect the outside world including other actors,” etc.

Replies from: Zack_M_Davis, ryan_greenblatt
comment by Zack_M_Davis · 2024-02-01T02:05:55.231Z · LW(p) · GW(p)

I think "Symbol/Referent Confusions in Language Model Alignment Experiments" [LW · GW] is relevant here: the fact that the model emits sentences in the grammatical first person doesn't seem like reliable evidence that it "really knows" it's talking about "itself". (It's not evidence because it's fictional [LW · GW], but I can't help but think of the first chapter of Greg Egan's Diaspora, in which a young software mind is depicted as learning to say I and me before the "click" of self-awareness when it notices itself as a specially controllable element in its world-model.)

Of course, the obvious followup question is, "Okay, so what experiment would be good evidence for 'real' situational awareness in LLMs?" Seems tricky. (And the fact that it seems tricky to me suggests that I don't have a good handle on what "situational awareness" is, if that is even the correct concept.)

Replies from: TurnTrout, tailcalled
comment by TurnTrout · 2024-02-05T19:37:50.028Z · LW(p) · GW(p)

the fact that the model emits sentences in the grammatical first person doesn't seem like reliable evidence that it "really knows" it's talking about "itself"

I consider situational awareness to be more about being aware of one's situation, and how various interventions would affect it. Furthermore, the main evidence I meant to present was "ChatGPT 3.5 correctly responds to detailed questions about interventions on its situation and future operation." I think that's substantial evidence of (certain kinds of) situation awareness.

comment by tailcalled · 2024-03-04T18:13:10.388Z · LW(p) · GW(p)

Modern LLMs seem definitely situationally aware insofar as they are aware of anything at all. The same sort of human-generated training data that teaches them to do useful stuff like programming small scripts also contains similar human-generated information about LLMs, and there have been no signs of any particular weakness in their capabilities in this area.

That said, there's definitely also a "helpful assistant simulacrum" bolted on top which can "fake" situational awareness.

comment by ryan_greenblatt · 2024-02-01T17:37:21.720Z · LW(p) · GW(p)

Worth noting that some mildly interesting stuff happened due to Bing Chat being weakly situationally aware (and because of extremely poorly done post-training).

(Though these Bing Chat things aren't that closely related to the most scary parts of situational awareness.)

comment by TurnTrout · 2023-09-25T23:14:06.577Z · LW(p) · GW(p)

Thoughts on "Deep Learning is Robust to Massive Label Noise."

We show that deep neural networks are capable of generalizing from training data for which true labels are massively outnumbered by incorrect labels. We demonstrate remarkably high test performance after training on corrupted data from MNIST, CIFAR, and ImageNet. For example, on MNIST we obtain test accuracy above 90 percent even after each clean training example has been diluted with 100 randomly-labeled examples. Such behavior holds across multiple patterns of label noise, even when erroneous labels are biased towards confusing classes. We show that training in this regime requires a significant but manageable increase in dataset size that is related to the factor by which correct labels have been diluted. Finally, we provide an analysis of our results that shows how increasing noise decreases the effective batch size.

I think this suggests that, given data quality concerns in the corpus, we should look for contexts with low absolute numbers of good examples, or where the good completions are not a "qualitative plurality." For example, if a subset of the data involves instruction finetuning, and there are 10 "personas" in the training data (e.g. an evil clown, a helpful assistant, and so on) -- as long as the helpful dialogues are a plurality, and are numerous "enough" in absolute terms, and the batch size is large enough -- various forms of greedy sampling should still elicit helpful completions.

I wonder if the above is actually true in the LLM regime!

Another interesting result is that the absolute number of correct labels matters a lot, not just their proportion.

Furthermore, this should all be taken relative to the batch size. According to these experiments, in the limit of infinite batch size, greedy sampling will be good as long as the good training completions constitute a plurality of the training set.
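
For concreteness, a minimal sketch of the dilution setup the paper describes (generic arrays standing in for MNIST; the loader and dilution factor below are placeholders):

```python
# Sketch: dilute each clean example with `noise_factor` copies carrying uniformly
# random labels, as in the label-noise experiments above. The loader is hypothetical.
import numpy as np

def dilute_labels(X, y, num_classes, noise_factor=100, seed=0):
    rng = np.random.default_rng(seed)
    X_noisy = np.repeat(X, noise_factor, axis=0)
    y_noisy = rng.integers(0, num_classes, size=len(X_noisy))
    X_all = np.concatenate([X, X_noisy], axis=0)
    y_all = np.concatenate([y, y_noisy], axis=0)
    perm = rng.permutation(len(X_all))
    return X_all[perm], y_all[perm]

# X, y = load_mnist()  # hypothetical loader
# X_diluted, y_diluted = dilute_labels(X, y, num_classes=10, noise_factor=100)
# Each class soaks up ~1/num_classes of the random labels, so the clean copy keeps
# the correct class a (slim) plurality for each example -- the regime the paper studies.
```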

comment by TurnTrout · 2023-08-21T18:17:06.572Z · LW(p) · GW(p)

Offline RL can work well even with wrong reward labels. I think alignment discourse over-focuses on "reward specification." I think reward specification is important, but far from the full story. 

To this end, a new paper (Survival Instinct in Offline Reinforcement Learning) supports Reward is not the optimization target [LW · GW] and associated points that reward is a chisel which shapes circuits inside of the network, and that one should fully consider the range of sources of parameter updates (not just those provided by a reward signal). 

Some relevant quotes from the paper:

In offline reinforcement learning (RL), an agent optimizes its performance given an offline dataset. Despite being its main objective, we find that return maximization is not sufficient for explaining some of its empirical behaviors. In particular, in many existing benchmark datasets, we observe that offline RL can produce surprisingly good policies even when trained on utterly wrong reward labels.

...

We trained ATAC agents on the original datasets and on three modified versions of each dataset, with "wrong" rewards: 1) zero: assigning a zero reward to all transitions, 2) random: labeling each transition with a reward sampled uniformly from , and 3) negative: using the negation of the true reward. Although these wrong rewards contain no information about the underlying task or are even misleading, the policies learned from them often perform significantly better than the behavior (data collection) policy and the behavior cloning (BC) policy. They even outperform policies trained with the true reward in some cases. 

...

Our empirical and theoretical results suggest a new paradigm for RL, whereby an agent is “nudged” to learn a desirable behavior with imperfect reward but purposely biased data coverage.

...

While a large data coverage improves the best policy that can be learned by offline RL with the true reward, it can also make offline RL more sensitive to imperfect rewards. In other words, collecting a large set of diverse data might not be necessary or helpful. This goes against the common wisdom in the RL community that data should be as exploratory as possible.

...

We believe that our findings shed new light on RL applicability and research. To practitioners, we demonstrate that offline RL does not always require the correct reward to succeed.
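
A sketch of the reward-relabeling conditions described above (the dataset format and the sampling range for the "random" condition are my own illustrative assumptions; the paper's actual experiments train ATAC agents on standard offline RL benchmarks):

```python
# Relabel an offline RL dataset's rewards with "wrong" values: zero, random, or
# negated. The dict format and the uniform sampling range are illustrative guesses.
import numpy as np

def relabel_rewards(dataset, mode, seed=0):
    rng = np.random.default_rng(seed)
    r = dataset["rewards"]
    if mode == "zero":
        new_r = np.zeros_like(r)
    elif mode == "random":
        new_r = rng.uniform(low=r.min(), high=r.max(), size=r.shape)  # range is a guess
    elif mode == "negative":
        new_r = -r
    else:
        raise ValueError(f"unknown mode: {mode}")
    return {**dataset, "rewards": new_r}

# offline_rl_train(relabel_rewards(dataset, "random"))  # hypothetical trainer
```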

comment by TurnTrout · 2023-06-06T21:04:07.047Z · LW(p) · GW(p)

How can I make predictions in a way which lets me do data analysis? I want to be able to grep / tag questions, plot calibration over time, split out accuracy over tags, etc. Presumably exporting to a CSV should be sufficient. PredictionBook doesn't have an obvious export feature, and its API seems to not be working right now / I haven't figured it out yet. 

Trying to collate team shard's prediction results and visualize with plotly, but there's a lot of data processing that has to be done first. Want to avoid the pain in the future.
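
For reference, a minimal sketch of what the analysis side could look like once the data is in a CSV; the column names (question, tags, probability, resolved) are made up:

```python
# Calibration plot and per-tag accuracy from an exported predictions CSV.
# Assumed columns (made up): question, tags (comma-separated), probability (0-1), resolved (0/1).
import pandas as pd
import plotly.express as px

df = pd.read_csv("predictions.csv")
df["bin"] = (df["probability"] * 10).clip(upper=9).astype(int) / 10 + 0.05  # 10 bins, labeled by midpoint

calibration = (
    df.groupby("bin")
      .agg(predicted=("probability", "mean"),
           actual=("resolved", "mean"),
           n=("resolved", "size"))
      .reset_index()
)

fig = px.scatter(calibration, x="predicted", y="actual", size="n",
                 title="Calibration (dot size = number of predictions)")
fig.add_shape(type="line", x0=0, y0=0, x1=1, y1=1)  # perfect-calibration line
fig.show()

# Accuracy split out by tag:
by_tag = (df.assign(tag=df["tags"].str.split(","))
            .explode("tag")
            .groupby("tag")["resolved"].mean())
print(by_tag)
```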

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-06-08T08:55:24.589Z · LW(p) · GW(p)

I pasted your question into Bing Chat, and it returned Python code to export your PredictionBook data to a CSV file. Haven't tested it. Might be worth looking into this approach?

comment by TurnTrout · 2023-04-12T19:56:31.163Z · LW(p) · GW(p)

Consider trying to use Solomonoff induction to reason about P(I see "Canada goes to war with USA" in the next year), emphasis added:

In Solomonoff induction, since we have unlimited computing power, we express our uncertainty about a  video frame the same way. All the various pixel fields you could see if your eye jumped to a plausible place, saw a plausible number of dust specks, and saw the box flash something that visually encoded '14', would have high probability. Pixel fields where the box vanished and was replaced with a glow-in-the-dark unicorn would have very low, though not zero, probability.

ASHLEY:  Can we really get away with viewing things that way?

BLAINE:  If we could not make identifications like these in principle, there would be no principled way in which we could say that you had ever expected to see something happen—no way to say that one visual field your eyes saw had higher probability than any other sensory experience. We couldn't justify science; we couldn't say that, having performed Galileo's experiment by rolling an inclined cylinder down a plane, Galileo's theory was thereby to some degree supported by having assigned a high relative probability to the only actual observations our eyes ever report.

ASHLEY:  I feel a little unsure of that jump, but I suppose I can go along with that for now. Then the question of "What probability does Solomonoff induction assign to Canada invading?" is to be identified, in principle, with the question "Given my past life experiences and all the visual information that's entered my eyes, what is the relative probability of seeing visual information that encodes Google News with the headline 'CANADA INVADES USA' at some point during the next 300 million seconds?"

A Semitechnical Introductory Dialogue on Solomonoff Induction [LW · GW]

I think the ~"binary classification of all possible pixel combinations is the only way you can demonstrate constrained anticipations" claim is wrong, and I don't know why the heck it's stated so strongly. [EDIT: possibly because Blaine is saying it, who's supposed to be a huge Solomonoff Induction fan?] Here's a possible counterexample, which I think could end up leading to another principled answer:

  1. I have some predictive neural circuits which, as a matter of mechanistic neuroscience, predict the imminent firing of other neural circuits which (as another matter of fact) are edge detectors -> letter detectors -> "Google news" detectors; similar stories for internal war neurons, Canada neurons, etc. 
  2. The predictive circuits are predicting (by means of action potentials and other neural information) that the other neural circuits and concepts will fire soon. 

(1) and (2) => I expect to see the visual news of war with Canada. This does not imply a probability distribution over all possible "pixel configurations" (itself an inappropriate way of describing retinal activations). And yet it still describes constrained anticipations. 

This description isn't mathematical, but in other ways it's more satisfactory than Solomonoff. It gestures at something which actually happens in reality, and it doesn't require infinite computation. 

But when an idealization is really appropriate, arguments for it shouldn't feel like a "jump." They just feel right. 


Anyways, being able to classify all pixel-combinations seems absurd, even in principle. While there exist mathematical functions from pixel-space to {0,1} which just accept "obvious accepts"... There are so many pixel configurations, and tons of them implicitly exploit any reasonably simple classifier you can dream up. This basically seems like an extension of the problem that you can't get an adversarially robust classifier over sense data [LW · GW]. Possible in principle, perhaps, but probably not in a way which leaves out degrees of freedom in deciding which ambiguous cases "count" as war-indicating-pixels.

I'm lightly updating that Solomonoff-inspired reasoning  is more likely to be misleading / bogus / irrelevant / ill-founded. (EDIT: But I still overall think SI contains useful insights.)

Point made/inspired by a recent Jonathan Stray talk.

comment by TurnTrout · 2023-04-12T17:09:41.895Z · LW(p) · GW(p)

Team shard is now accepting applications for summer MATS. SERI MATS is accepting applications for its 4.0 program this summer; in particular, consider applying to the shard theory stream.

Feel free to apply if you're interested in shard theory more generally, although I expect to mostly supervise empirical work. Feel free to message me if you have questions!

comment by TurnTrout · 2023-01-22T01:38:12.162Z · LW(p) · GW(p)

The policy of truth is a blog post about why policy gradient/REINFORCE suck. I'm leaving a shortform comment because it seems like a classic example of wrong RL theory and philosophy, since reward is not the optimization target [LW · GW]. Quotes:

Our goal remains to find a policy that maximizes the total reward after  time steps.

 

And hence the following is a general purpose algorithm for maximizing rewards with respect to parametric distributions:

 

If you start with a reward function whose values are in $[0,1]$ and you subtract one million from each reward, this will increase the running time of the algorithm by a factor of a million, even though the ordering of the rewards amongst parameter values remains the same.

The latter is pretty easy to understand if you imagine each reward as providing a policy gradient, rather than imagining the point of the algorithm as finding the policy which happens to be a fixed point of the expected policy gradient (aka the "optimal" policy). Of course making all the rewards hugely negative will mess with your convergence properties! That's very similar to using a much higher learning rate, and to getting much noisier gradient estimates (due to the huge negative offset) instead of well-behaved ones. Different dynamics.

Policy gradient approaches should be judged on whether they can train interesting policies doing what we want, and not whether they make reward go brr. Often these are related, but they are importantly not the same thing.
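
A toy illustration of the reward-shifting point (a one-step bandit with the score-function estimator; all numbers are arbitrary):

```python
# Toy REINFORCE on a 2-armed bandit: subtracting a constant from all rewards
# leaves the ordering of policies unchanged, but blows up the variance of the
# per-sample gradient estimate (no baseline).
import numpy as np

rng = np.random.default_rng(0)
true_rewards = np.array([1.0, 0.0])  # arm 0 is better

def grad_samples(shift, theta=0.0, n=10_000):
    # Policy: pi(arm 0) = sigmoid(theta); score-function gradient wrt theta.
    p0 = 1 / (1 + np.exp(-theta))
    arms = rng.random(n) > p0            # True -> arm 1 chosen
    rewards = true_rewards[arms.astype(int)] + shift
    score = np.where(arms, -p0, 1 - p0)  # d/dtheta log pi(chosen arm)
    return rewards * score

for shift in [0.0, -1e6]:
    g = grad_samples(shift)
    print(f"shift {shift:>10}: mean grad {g.mean():+.3f}, std {g.std():.1f}")
# The shift adds zero to the gradient *in expectation*, but the per-sample standard
# deviation grows by ~6 orders of magnitude, so the 10k-sample empirical mean is
# swamped by noise -- hence the much slower, messier convergence.
```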

comment by TurnTrout · 2022-11-20T01:02:51.257Z · LW(p) · GW(p)

Shard-theoretic model of wandering thoughts: Why trained agents won't just do nothing in an empty room. If human values are contextually activated subroutines etched into us by reward events (e.g. "If candy nearby and hungry, then upweight actions which go to candy"), then what happens in "blank" contexts? Why don't people just sit in empty rooms and do nothing?

Consider that, for an agent with lots of value shards (e.g. candy, family, thrill-seeking, music), the "doing nothing" context is a very unstable equilibrium. I think these shards will activate on the basis of common feature activations (e.g. "I'm bored" or "I'm lonely" or "hungry"). If you're sitting alone in a blank room, then due to your "recurrent state activations" (I think the electrical activity in your brain?), you'll probably have at least one of these general features active (e.g. hunger). Then this activates the food-shard, which weakly bids up food-related thoughts, which more strongly activates the food-shard. This is why your thoughts wander sometimes, and why it can be hard to "think about nothing." 

More notes: 

  • The attractor you fall into will be quite sensitive to the initially activated features (hunger vs loneliness). 
  • More "distractable" people might have lower shard activation bars for new thoughts to get bid up, and/or have spikier/less smooth shard-activations as a function of mental context. 
    • EG If I have a very strong "if hungry then think about getting food" subshard, then this subshard will clear the "new train of thought activation energy hump" in a wider range of initial contexts in which I am hungry. This would make it more difficult for me to e.g. focus on work while hungry. Perhaps this is related to "executive function."
  • For me, this represents a slight downwards update on "limited-action agents" which only act in a specific set of contexts. 
    • EG an agent which, for the next hundred years, scans the world for unaligned AGIs and destroys them, and then does nothing in contexts after that.
    • This seems bad on net, since a priori it would've been nice to have stable equilibria for agents not doing anything.
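
A toy dynamical sketch of the instability claim above (the update rule and all numbers are made up purely for illustration):

```python
# Toy positive-feedback sketch of "the blank-room state is an unstable equilibrium":
# each shard amplifies the mental feature that activates it, so any small
# fluctuation away from "nothing" grows into a train of thought.
import numpy as np

rng = np.random.default_rng(0)
feature_names = ["hunger", "loneliness", "boredom"]
features = np.zeros(3)                      # the "empty room" equilibrium
gain = np.array([1.3, 1.2, 1.1])            # how strongly each shard bids up its own feature

for step in range(25):
    features += 0.01 * rng.standard_normal(3)   # small background fluctuations
    shard_activity = np.maximum(features, 0.0)  # shards fire on their features
    features = gain * shard_activity            # ...and bid those features back up

winner = feature_names[int(np.argmax(features))]
print(f"after 25 steps, thoughts have wandered toward: {winner}")
```
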
Replies from: Thane Ruthenis
comment by Thane Ruthenis · 2022-11-21T22:36:12.200Z · LW(p) · GW(p)

Another point here is that "an empty room" doesn't mean "no context". Presumably when you're sitting in an empty room, your world-model is still active, it's still tracking events that you expect to be happening in the world outside the room — and your shards see them too. So, e. g., if you have a meeting scheduled in a week, and you went into an empty room, after a few days there your world-model would start saying "the meeting is probably soon", and that will prompt your punctuality shard.

Similarly, your self-model is part of the world-model, so even if everything outside the empty room were wiped out, you'd still have your "internal context" — and there'd be some shards that activate in response to events in it as well.

It's actually pretty difficult to imagine what an actual "no context" situation for realistic agents would look like. I guess you can imagine surgically removing all input channels from the WM to shards, to model this?

Replies from: Slider
comment by Slider · 2022-11-21T23:12:46.672Z · LW(p) · GW(p)

I think people go on silence retreats to find out what happens when you take out all the standard busywork. I could imagine the "fresh empty room" and the "accustomed empty room" being like the difference between calming down for an hour versus for a week.

comment by TurnTrout · 2021-10-29T09:13:28.115Z · LW(p) · GW(p)

Are there any alignment techniques which would benefit from the supervisor having a severed corpus callosum, or otherwise atypical neuroanatomy? Usually theoretical alignment discourse focuses on the supervisor's competence / intelligence. Perhaps there are other, more niche considerations.

comment by TurnTrout · 2021-09-21T17:38:03.780Z · LW(p) · GW(p)

Does anyone have tips on how to buy rapid tests in the US? Not seeing any on US Amazon, not seeing any in person back where I'm from. Considering buying German tests. Even after huge shipping costs, it'll come out to ~$12 a test, which is sadly competitive with US market prices.

Wasn't able to easily find tests on the Mexican and Canadian Amazon websites, and other EU countries don't seem to have them either. 

Replies from: Markas
comment by Markas · 2021-09-22T17:48:00.329Z · LW(p) · GW(p)

I've been able to buy from the CVS website several times in the past couple months, and even though they're sold out online now, they have some (sparse) in-store availability listed.  Worth checking there, Walgreens, etc. periodically.

comment by TurnTrout · 2021-05-14T22:46:01.681Z · LW(p) · GW(p)

The Baldwin effect

I couldn't find great explanations online, so here's my explanation after a bit of Googling. I welcome corrections from real experts.

Organisms exhibit phenotypic plasticity when they act differently in different environments. The phenotype (manifested traits: color, size, etc) manifests differently, even though two organisms might share the same genotype (genetic makeup). 

Panel 1: organisms are not phenotypically plastic and do not adapt to a spider-filled environment. Panel 2: a plastic organism might do the bad thing, and then learn not to do it again, increasing its fitness. Panel 3: genetic assimilation hard-codes important commonly learned lessons, cutting out the costs of learning.

The most obvious example of phenotypic plasticity is learning. Learning lets you adapt to environments where your genotype would otherwise do poorly. 

The Baldwin effect is: evolution selects for genome-level hardcoding of extremely important learned lessons. This might even look like: the mom learned not to eat spiders, but her baby is born already knowing not to eat spiders. 

(No, it's not Lamarckian inheritance. That's still wrong.)

Replies from: Robbo
comment by Robbo · 2021-05-15T03:47:17.630Z · LW(p) · GW(p)

[disclaimer: not an expert, possibly still confused about the Baldwin effect]

A bit of feedback on this explanation: as written, it didn't make clear to me what makes it a special effect. "Evolution selects for genome-level hardcoding of extremely important learned lessons." As a reader I was like, what makes this a special case? If it's a useful lesson, then of course evolution would tend to select for knowing it innately - that does seem handy for an organism.

As I understand it, what is interesting about the Baldwin effect is that such hard coding is selected for more among creatures that can learn, and indeed because of learning. The learnability of the solution makes it even more important to be endowed with the solution. So individual learning, in this way, drives selection pressures. Dennett’s explanation emphasizes this - curious what you make of his?

https://ase.tufts.edu/cogstud/dennett/papers/baldwincranefin.htm

Replies from: TurnTrout
comment by TurnTrout · 2021-05-15T16:37:15.893Z · LW(p) · GW(p)

As a reader I was like, what makes this a special case? If it’s useful lesson then of course evolution would tend to select for knowing it innately - that does seem handy for an organism.

Right, I wondered this as well. I had thought its significance was that the effect seemed Lamarckian, but it wasn't. (And, I confess, I made the parent comment partly hoping that someone would point out that I'd missed the key significance of the Baldwin effect. As the joke goes, the fastest way to get your paper spell-checked is to comment it on a YouTube video!)

curious what you make of his?

Thanks for this link. One part which I didn't understand is why closeness in learning-space (given your genotype, you're plastic enough to learn to do something) must imply that you're close in genotype-space (evolution has a path of local improvements which implement genetic assimilation of the plastic advantage). I can learn to program computers. Does that mean that, given the appropriate selection pressures, my descendants would learn to program computers instinctively? In a reasonable timeframe?

It's not that I can't imagine such evolution occurring. It just wasn't clear why these distance metrics should be so strongly related.

Reading the link, Dennett points out this assumption and discusses why it might be reasonable, and how we might test it.

comment by TurnTrout · 2021-01-14T15:50:43.927Z · LW(p) · GW(p)

(This is a basic point on conjunctions, but I don't recall seeing its connection to Occam's razor anywhere)

When I first read Occam's Razor [LW · GW] back in 2017, it seemed to me that the essay only addressed one kind of complexity: how complex the laws of physics are. If I'm not sure whether the witch did it, the universes where the witch did it are more complex, and so these explanations are exponentially less likely under a simplicity prior. Fine so far.

But there's another type. Suppose I'm weighing whether the United States government is currently engaged in a vast conspiracy to get me to post this exact comment? This hypothesis doesn't really demand a more complex source code, but I think we'd say that Occam's razor shaves away this hypothesis anyways - even before weighing object-level considerations. This hypothesis is complex in a different way: it's highly conjunctive in its unsupported claims about the current state of the world. Each conjunct eliminates many ways it could be true, from my current uncertainty, and so I should deem it correspondingly less likely.

Replies from: steve2152
comment by Steven Byrnes (steve2152) · 2021-01-14T22:05:06.440Z · LW(p) · GW(p)

I agree with the principle but I'm not sure I'd call it "Occam's razor". Occam's razor is a bit sketchy, it's not really a guarantee of anything, it's not a mathematical law, it's like a rule of thumb or something. Here you have a much more solid argument: multiplying many probabilities into a conjunction makes the result smaller and smaller. That's a mathematical law, rock-solid. So I'd go with that...

Replies from: TurnTrout
comment by TurnTrout · 2021-01-14T22:08:25.494Z · LW(p) · GW(p)

My point was more that "people generally call both of these kinds of reasoning 'Occam's razor', and they're both good ways to reason, but they work differently."

Replies from: steve2152
comment by Steven Byrnes (steve2152) · 2021-01-14T22:18:42.202Z · LW(p) · GW(p)

Oh, hmm, I guess that's fair, now that you mention it I do recall hearing a talk where someone used "Occam's razor" to talk about the solomonoff prior. Actually he called it "Bayes Occam's razor" I think. He was talking about a probabilistic programming algorithm.

That's (1) not physics, and (2) includes (as a special case) penalizing conjunctions, so maybe related to what you said. Or sorry if I'm still not getting what you meant

comment by TurnTrout · 2020-12-09T19:58:53.989Z · LW(p) · GW(p)

At a poster session today, I was asked how I might define "autonomy" from an RL framing; "power" is well-definable in RL, and the concepts seem reasonably similar. 

I think that autonomy is about having many ways to get what you want. If your attainable utility is high, but there's only one trajectory which really makes good things happen, then you're hemmed-in and don't have much of a choice. But if you have many policies which make good things happen, you have a lot of slack and you have a lot of choices. This would be a lot of autonomy.

This has to be subjectively defined for embedded agency reasons, and so the attainable utility / policies are computed with respect to the agent/environment abstraction you use to model yourself in the world.
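
One rough way to cash that out (a sketch, not a settled definition): let $\text{AU}(s) = \max_\pi V^\pi(s)$ be attainable utility at state $s$, and measure autonomy by how many policies come close to it,

$$\text{Autonomy}_\epsilon(s) := \big|\{\pi \;:\; V^\pi(s) \ge \text{AU}(s) - \epsilon\}\big|,$$

so that a "hemmed-in" agent has high $\text{AU}$ but a near-optimal set containing essentially one trajectory.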

comment by TurnTrout · 2020-12-09T18:58:43.989Z · LW(p) · GW(p)

In Markov decision processes, state-action reward functions seem less natural to me than state-based reward functions, at least if they assign different rewards to equivalent actions. That is, actions $a_1 \neq a_2$ at a state $s$ can have different rewards $R(s, a_1) \neq R(s, a_2)$ even though they induce the same transition probabilities: $T(s' \mid s, a_1) = T(s' \mid s, a_2)$ for all $s'$. This is unappealing because the actions don't actually have a "noticeable difference" from within the MDP, and the MDP is visitation-distribution-isomorphic to an MDP without the action redundancy.

comment by TurnTrout · 2020-10-21T02:37:18.929Z · LW(p) · GW(p)
From unpublished work.

The answer to this seems obvious in isolation: shaping helps with credit assignment, rescaling doesn't (and might complicate certain methods in the advantage vs Q-value way). But I feel like maybe there's an important interaction here that could inform a mathematical theory of how a reward signal guides learners through model space?

comment by TurnTrout · 2020-09-26T22:24:17.184Z · LW(p) · GW(p)

Reasoning about learned policies via formal theorems on the power-seeking incentives of optimal policies

One way instrumental subgoals might arise in actual learned policies: we train a proto-AGI reinforcement learning agent with a curriculum including a variety of small subtasks. The current theorems show sufficient conditions for power-seeking tending to be optimal in fully-observable environments; many environments meet these sufficient conditions; optimal policies aren't hard to compute for the subtasks. One highly transferable heuristic would therefore be to gain power in new environments, and then figure out what to do for the specific goal at hand. This may or may not take the form of an explicit mesa-objective embedded in e.g. the policy network.

Later, the heuristic has the agent seek power for the "real world" environment.

comment by TurnTrout · 2020-07-22T16:22:01.647Z · LW(p) · GW(p)

I prompted GPT-3 with modified versions of Eliezer's Beisutsukai stories, where I modified the "class project" to be about solving intent alignment instead of quantum gravity. 

... Taji looked over his sheets. "Okay, I think we've got to assume that every avenue that Eld science was trying is a blind alley, or they would have found it. And if this is possible to do in one month, the answer must be, in some sense, elegant. So no human mistake models. If we start doing anything that looks like we should call it 'utility function patching', we'd better stop."

He turned to Brennan. "Do you have any ideas?"

"I don't think so," she said. She glanced at her paper, frowning. "There are two problems here."

One problem was simple enough: she didn't understand what it meant for an AI to "want" to help a human. This was hard because as far as she could tell, there hadn't been much research done on this question. It seemed to be something that only humans did, but there wasn't any evidence that this was true. What was it about wanting? Was it a biological drive or a social instinct? A combination of both? More importantly, since it was a human concept, how could she measure if an AI behaved the way humans would in certain situations when the AI was obviously not human?

Then there was the question of practicality. The most elegant solution was not always the best. This question—could an AI desire to help its overseer?—was highly suited to explore this problem, but it still wouldn't be a real test. Humans operated on a "helpfulness scale" which started from things like cashiers wishing you "have a nice day" and descending to Nazi concentration camp guards.

She wanted to craft an AI that would at least be a good clerk in today's society. That was hardly an ambitious goal for a team of four highly intelligent teenagers. And yet, and yet...

"We can look at this differently," Hyori said. "People are born with a blank mind. We can shape them however we want. AIs on the other hand, are born with 800 terabytes of behavioral examples from the moment they're activated. The only data we have about unprogrammed AIs is that they either stay still or randomly move around. All this ... it's not making any confident in how easy this will be." [?]

Brennan stopped writing and turned to look at her, frowning. "So what are you saying?"

"I don't want to approach this problem by trying to divert the AI from its goal," she said. "What if, instead of changing the mind of an AI, we instead changed the environment that an AI found itself in?"

The team fell silent. 

Styrlyn broke the silence. "Uh..."

"What I mean is," she said, "what if, instead of trying to divert the AI from one task, we created a situation where accomplishing two tasks would be more beneficial than accomplishing just one? We don't need to patch new programs into the mind of an AI to make it want to help us. We can literally make helping us the most logical decision for it."

Full transcript.

comment by TurnTrout · 2020-07-10T21:20:58.213Z · LW(p) · GW(p)

Transparency Q: how hard would it be to ensure a neural network doesn't learn any explicit NANDs?

comment by TurnTrout · 2020-06-27T20:29:07.335Z · LW(p) · GW(p)

Physics has existed for hundreds of years. Why can you reach the frontier of knowledge with just a few years of study? Think of all the thousands of insights and ideas and breakthroughs that have been had - yet, I do not imagine you need most of those to grasp modern consensus.

Idea 1: the tech tree is rather horizontal - for any given question, several approaches and frames are tried. Some are inevitably more attractive or useful. You can view a Markov decision process in several ways - through the Bellman equations, through the structure of the state visitation distribution functions, through the environment's topology, through Markov chains induced by different policies. Almost everyone thinks about them in terms of Bellman equations, there were thousands of papers on that frame pre-2010, and you don't need to know most of them to understand how deep Q-learning works.

Idea 2: some "insights" are wrong (phlogiston) or approximate (Newtonian mechanics) and so are later discarded. The insights become historical curiosities and/or pedagogical tools and/or numerical approximations of a deeper phenomenon. 

Idea 3: most work is on narrow questions which end up being dead-ends or not generalizing. As a dumb example, I could construct increasingly precise torsion balance pendulums, in order to measure the mass of my copy of Dune to increasing accuracies. I would be learning new facts about the world using a rigorous and accepted methodology. But no one would care. 

More realistically, perhaps only a few other algorithms researchers care about my refinement of a specialized sorting algorithm (shaving its asymptotic bound slightly), but the contribution is still quite publishable and legible. 

I'm not sure what publishing incentives were like before the second half of the 20th century, so perhaps this kind of research was less incentivized in the past.

Replies from: Viliam
comment by Viliam · 2020-06-27T22:47:34.606Z · LW(p) · GW(p)

Could this depend on your definition of "physics"? Like, if you use a narrow definition like "general relativity + quantum mechanics", you can learn that in a few years. But if you include things like electricity, expansion of universe, fluid mechanics, particle physics, superconductors, optics, string theory, acoustics, aerodynamics... most of them may be relatively simple to learn, but all of them together it's too much.

Replies from: TurnTrout
comment by TurnTrout · 2020-06-28T04:07:40.864Z · LW(p) · GW(p)

Maybe. I don't feel like that's the key thing I'm trying to point at here, though. The fact that you can understand any one of those in a reasonable amount of time is still surprising, if you step back far enough.

comment by TurnTrout · 2020-06-24T01:16:46.096Z · LW(p) · GW(p)

When under moral uncertainty, rational EV maximization will look a lot like preserving attainable utility / choiceworthiness for your different moral theories / utility functions, while you resolve that uncertainty.

Replies from: MichaelA
comment by MichaelA · 2020-06-24T03:05:06.050Z · LW(p) · GW(p)

This seems right to me, and I think it's essentially the rationale for the idea of the Long Reflection [EA · GW].

comment by TurnTrout · 2020-03-27T13:30:45.421Z · LW(p) · GW(p)

To prolong my medicine stores by 200%, I've mixed in similar-looking iron supplement placebos with my real medication. (To be clear, nothing serious happens to me if I miss days)

comment by TurnTrout · 2024-02-03T05:03:36.138Z · LW(p) · GW(p)

POV you're commenting on one of my posts[1]

  1. ^

    (ETA) Intended interpretation: innocent guy trying to have a reasonable discussion gets slammed with giant wall of text :)

Replies from: habryka4, TurnTrout
comment by habryka (habryka4) · 2024-02-03T09:04:41.538Z · LW(p) · GW(p)

I don't understand why "reinforcement" is better than "reward"? They both invoke the same image to me. 

If you reward someone for a task, they might or might not end up reliably wanting to do the task. Same if you "reinforce" them to do that task. "Reinforce" is more abstract, which seems generally worse for communication, so I would mildly encourage people to use "reward function", but mostly expect other context cues to determine which one is better and don't have a strong general take.

Replies from: steve2152, tailcalled
comment by Steven Byrnes (steve2152) · 2024-02-03T16:49:05.751Z · LW(p) · GW(p)

My understanding of Alex's point is that the word "reward" invokes a mental image involving model-based planning—"ooh, there's a reward, what can I do right now to get it?". And the word "reinforcement" invokes a mental image involving change (i.e. weight updates)—when you reinforce a bridge, you're permanently changing something about the structure of the bridge, such that the bridge will be (hopefully) better in the future than it was in the past.

So if you want to reason about policy-gradient-based RL algorithms (for example), that's a (pro tanto) reason to use the term "reinforcement". (OTOH, if you want to reason about RL-that-mostly-involves-model-based-planning, maybe that's a reason not to!)

For my own writing, I went back and forth a bit, but wound up deciding to stick with textbook terminology ("reward function" etc.), for various reasons including all the usual reasons that using textbook terminology is generally good for communication, plus there's an irreconcilable jargon-clash around what the specific term "negative reinforcement" means (cf. the behaviorist literature). But I try to be self-aware of situations where people's intuitions around the word "reward" might be leading them astray, in context, so I can explicitly call it out and try to correct that.

Replies from: habryka4
comment by habryka (habryka4) · 2024-02-03T17:14:44.352Z · LW(p) · GW(p)

Yeah, not being able to say "negative reward"/"punishment" when you use "reinforcement" seems very costly. I've run into that problem a bunch.

And yeah, that makes sense. I get the "reward implies more model based-thinking" part. I kind of like that distinction, so am tentatively in-favor of using "reward" for more model-based stuff, and "reinforcement" for more policy-gradient based stuff, if other considerations don't outweigh that.

comment by tailcalled · 2024-02-03T09:30:13.321Z · LW(p) · GW(p)

I think it makes sense to have a specific word for the thing where you update the weights $\theta$ after the network has given an output (or variants thereof, e.g. DPO). TurnTrout seems basically correct in saying that it's common for rationalists to mistakenly think the network will be consequentialistically aiming to get a lot of these updates, even though it really won't.

On the other hand I think TurnTrout lacks a story for what happens with stuff like DreamerV3.

Replies from: quetzal_rainbow
comment by quetzal_rainbow · 2024-02-03T13:08:51.614Z · LW(p) · GW(p)

As far as I understand, "reward is not the optimization target" is about model-free RL, while DreamerV3 is model-based.

Replies from: tailcalled
comment by tailcalled · 2024-02-03T15:45:07.725Z · LW(p) · GW(p)

Yep, which is basically my point. I can't think of any case where I've seen him discuss the distinction.

Replies from: TurnTrout
comment by TurnTrout · 2024-02-03T20:48:05.241Z · LW(p) · GW(p)

From the third paragraph of Reward is not the optimization target [LW · GW]:

ETA 9/18/23: This post addresses the model-free policy gradient setting, including algorithms like PPO and REINFORCE. 

Replies from: tailcalled
comment by tailcalled · 2024-02-03T22:03:17.676Z · LW(p) · GW(p)

I see. I think maybe I read it when it came out so I didn't see the update.

Regarding the "Not worth getting into?" react ("I'm guessing it's probably not worth the time to resolve this?"):

I'd guess it's worth getting into because this disagreement is a symptom of the overall question I have about your approach/view.

Though on the other hand maybe it is not worth getting into because maybe once I publish a description of this [LW(p) · GW(p)] you'll basically go "yeah that seems like a reasonable resolution, let's go with that".

comment by TurnTrout · 2024-02-03T20:51:03.581Z · LW(p) · GW(p)

(Ironic that people reacted "soldier mindset"; the meme was meant to be self-deprecating. I was making fun of myself.)