Contra shard theory, in the context of the diamond maximizer problem

post by So8res · 2022-10-13T23:51:29.532Z · LW · GW · 19 comments

A bunch of my response to shard theory is a generalization of how niceness is unnatural [LW · GW]. In a similar fashion, the other “shards” that the shard theory folk want to learn are unnatural too.

That said, I'll spend a few extra words responding to the admirably-concrete diamond maximizer proposal [LW · GW] that TurnTrout recently published, on the theory that briefly gesturing at my beliefs is better than saying nothing.

I’ll be focusing on the diamond maximizer plan, though this criticism can be generalized and applied more broadly to shard theory.

Finally, I'll note that the diamond maximization problem is not in fact the problem "build an AI that makes a little diamond", nor even "build an AI that probably makes a decent amount of diamond, while also spending lots of other resources on lots of other stuff" (although the latter is more progress than the former). The diamond maximization problem (as originally posed by MIRI folk) is a challenge of building an AI that definitely optimizes for a particular simple thing, on the theory that if we knew how to do that (in unrealistically simplified models, allowing for implausible amounts of (hyper)computation) then we would have learned something significant about how to point cognition at targets in general.

TurnTrout’s proposal seems to me to be basically "train it around diamonds, do some reward-shaping, and hope that at least some care-about-diamonds makes it across the gap". I doubt this works (because the optimum of the shattered [LW · GW] correlates of the training objectives that it gets is likely to involve tiling the universe with something that isn't actually diamond, even if you're lucky enough that it got a diamond-shard at all, which is dubious), but even if it works a little, it doesn't seem to me to be teaching us any of the insights that would be possessed by someone who knew how to robustly aim an idealized unbounded (or even hypercomputing) cognitive system in theory.

19 comments

comment by TurnTrout · 2022-10-17T23:00:53.316Z · LW(p) · GW(p)

I appreciate you writing your quick thoughts on this. I have a few primary reactions, and then I'll detail specific points.

  • I agree there are difficulties in this plan template (and said so in the original post; I know you didn't say I didn't say so, but I'm adding this here for clarity). 
  • I don't know why you think this isn't progress, because this plan's problems seem to just... go away, if they're solved? 
    • Like, if you figure out how to form an appropriate diamond abstraction, then that's that. Congrats, that part of the story is checked off.
    • Whereas if you instead try to get a more robust reward model, there's always another way to hack it. 
    • I surmise that you don't get this feeling from the post, or think I'm sweeping problems under some rug, but I don't know where the rug is supposed to be. (Maybe the "unnaturality" disagreement?)
  • Of your three points, I think 
    • (1: Won't get AGI) seems wrong for this particular plan but also not cruxy to me. (Was this meant to apply to shard theory more broadly? If so, why?)
      • EDIT 11/25: Also, it wouldn't seem like a big problem to me if it were true that "When you compromise and start putting it in environments where it needs to be able to think to succeed, then your new reward-signals end up promoting all sorts of internal goals that aren't particularly about diamond, but are instead about understanding the world and/or making efficient use of internal memory and/or suchlike." Seems fine if some of the agent's values are around understanding the world and suchlike.
    • (2: Proxy goal formation) seems like one of my main worries, and also quite surmountable.
    • (3: The values blow up when it gets smart) I agree that the reflection process seems sensitive in some ways. I also give the straightforward reason why the diamond-values shouldn't blow up: Because that leads to fewer diamonds. I think this a priori case is pretty strong, but agree that there should at least be a lot more serious thinking here, eg a mathematical theory of value coalitions [LW(p) · GW(p)].
  • I think some of your critiques were covered in my original post. It was a long post, so no worries if you just missed them.

The first “problem” with this plan is that you don't get an AGI this way. You get an unintelligent robot that steers towards diamonds. If you keep trying to have the training be about diamonds, it never particularly learns to think. When you compromise and start putting it in environments where it needs to be able to think to succeed, then your new reward-signals end up promoting all sorts of internal goals that aren't particularly about diamond, but are instead about understanding the world and/or making efficient use of internal memory and/or suchlike.

Hm, doesn't it need to think in the curriculum I described in the OP?

Produce a curriculum of tasks, from walking to a diamond, to winning simulated chess games [self-play], to solving increasingly difficult real-world mazes, and so on. After each task completion, the agent gets to be near some diamond and receives reward.

For further detail, take an arbitrary task with a high skill ceiling and a legible end condition, give it some reward shaping and use self-play if appropriate, and put a diamond at the end and give the agent reward. I agree that even in successful stories, the agent also develops non-diamond shards. 
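To make the shape of this concrete, here is a minimal sketch of the kind of curriculum loop I have in mind. The task names, the gym-style `env` interface, `place_diamond_near_agent`, and the reward constant are all illustrative stand-ins rather than a concrete proposal:

```python
# A toy curriculum loop: tasks with rising skill ceilings, dense reward
# shaping during each episode, and a diamond (plus reward) at the end of
# each completed task. Every name and number here is a stand-in.

DIAMOND_REWARD = 10.0

TASKS = [
    "walk_to_diamond",
    "simulated_chess_selfplay",
    "real_world_maze_easy",
    "real_world_maze_hard",
]

def run_episode(agent, env):
    """One episode: shaped reward along the way, diamond reward on completion."""
    obs = env.reset()
    total_reward, done, info = 0.0, False, {}
    while not done:
        action = agent.act(obs)
        obs, shaping_bonus, done, info = env.step(action)   # gym-style interface assumed
        total_reward += shaping_bonus
    if info.get("task_completed"):
        env.place_diamond_near_agent()    # "the agent gets to be near some diamond..."
        total_reward += DIAMOND_REWARD    # "...and receives reward"
    return total_reward

def train(agent, make_env, episodes_per_task=10_000):
    for task in TASKS:                    # increasing difficulty / skill ceiling
        env = make_env(task)
        for _ in range(episodes_per_task):
            agent.update(run_episode(agent, env))   # e.g. a policy-gradient update
```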

Here's a consideration for why training might produce an AGI, which I realized after writing the story. Given relevant features, it's often trivial for even linear models to outperform experts (see Statistical Prediction Rules Out-Perform Expert Human Judgments [LW · GW]). What I remember to be a common hypothesis: Human experts are often good at finding features to pay attention to (e.g. patient weight) but bad at setting regression coefficients to come to a decision. 

Analogously, consider an SSL+IL initialization in which the AI has imitatively learned sophisticated subroutines for perception, prediction, and action, such that the AI can imitate human-level performance on the supervised training distribution (eg navigating mazes). Then PG-style RL finetuning might rearrange and reweight which subroutines to use when, efficiently finding a better subroutine arrangement for decision-making in a range of situations, and thereby doing better than human expert demonstrators. 

(Yes, this is sample inefficient, and I didn't particularly optimize the story for sample efficiency. I focused on telling any story at all which has the desired alignment outcome.)
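As a gloss on the SSL+IL-then-RL picture, here is one minimal way it could look: a frozen pretrained backbone (the imitation-learned subroutines) plus a small trainable head that learns which subroutines to weight when, updated by a REINFORCE-style policy gradient. This particular split is an illustrative assumption of mine, not a claim about how the subroutines would actually be represented:

```python
import torch
import torch.nn as nn

class Policy(nn.Module):
    """Frozen SSL+IL backbone (the learned 'subroutines') plus a small trainable
    head that re-weights / re-arranges which subroutines drive the action."""
    def __init__(self, backbone: nn.Module, feat_dim: int, n_actions: int):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False                  # keep imitation-learned machinery fixed
        self.head = nn.Linear(feat_dim, n_actions)   # the part RL finetuning rearranges

    def forward(self, obs):
        return torch.distributions.Categorical(logits=self.head(self.backbone(obs)))

def reinforce_update(policy, optimizer, episode):
    """One REINFORCE-style update on a finished episode of (obs, action, reward) tuples."""
    episode_return = sum(r for _, _, r in episode)   # undiscounted, for simplicity
    loss = torch.zeros(())
    for obs, action, _ in episode:
        loss = loss - policy(obs).log_prob(action) * episode_return
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Usage sketch: optimizer = torch.optim.Adam(policy.head.parameters(), lr=1e-4)
```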

insofar as you were able to get some sort of internalized diamond-ish goal, if you're not really careful then you end up getting lots of subgoals such as ones about glittering things, and stones cut in stylized ways, and proximity to diamond rather than presence of diamond, and so on and so forth.

Why "rather than" instead of "in addition to"? Are you just stating your belief here, or did you mean to argue for it? Maybe you're saying "It's hard to get the diamond shard to form properly", which I agree with, and which is a primary way I expect the story to go wrong. I think that relatively simple interventions will plausibly solve this problem, though, and so consider this more of a research question than a fatal flaw in the training story template. 

As far as I can tell, the "reflection" section of TurnTrout’s essay says ~nothing that addresses this, and amounts to "the agent will become able to tell that it has shards". OK, sure, it has shards, but only some of them are diamond-related, and many others are cognition-related or suchlike. I don't see any argument that reflection will result in the AI settling at "maximize diamond" in-particular.

If I read you properly, that's not the relevant section. The relevant sections are the next two: The agent prevents value drift and The values handshake. EG I said [LW · GW]: 

If the agent still is primarily diamond-motivated, it now wants to stay that way by instrumental convergence. That is, if the AI considers a plan which it knows causes value drift away from diamonds, then the AI reflectively predicts the plan leads to fewer diamonds, and so the AI doesn’t choose that plan! The agent knows the consequences of value drift and it takes a more careful approach to future updating.

I think there's a very straightforward case here. In the relevant context, suppose the agent is primarily making decisions on the basis of whether they lead to more or fewer diamonds. The agent considers adopting a reflectively stable utility function which doesn't produce diamonds. The agent doesn't choose this plan because it doesn't lead to diamonds. 
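As a toy illustration of that case (shards as contextually activated influences that bid on plans; all the plans, shards, and numbers below are made up for the example):

```python
# Toy picture: shards are contextually activated influences that score plans,
# and the agent takes the plan with the highest total bid. Numbers are made up.

def diamond_shard(plan):
    return plan["predicted_diamonds"]        # bids for plans that lead to diamonds

def curiosity_shard(plan):
    return plan["predicted_novelty"]         # a weaker shard also active in this context

SHARDS = [(diamond_shard, 10.0), (curiosity_shard, 1.0)]   # strong diamond shard

def choose(plans):
    return max(plans, key=lambda p: sum(w * s(p) for s, w in SHARDS))

plans = [
    {"name": "keep current values, make diamonds", "predicted_diamonds": 100, "predicted_novelty": 2},
    {"name": "adopt a diamond-free reflectively stable utility function", "predicted_diamonds": 0, "predicted_novelty": 5},
]
print(choose(plans)["name"])   # the value-drift plan loses the bid: it leads to fewer diamonds
```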

I agree that there are ways this can go wrong, some of which you highlight. But the a priori argument makes me expect that, all else equal and conditional on a strong diamond shard at time of values handshake, the agent will probably equilibrate to making lots of diamonds.

I'll note that the diamond maximization problem is not in fact the problem "build an AI that makes a little diamond"

I did not claim to be solving the diamond maximization problem, but maybe you wanted to add your own take here? As I wrote in the original post [LW(p) · GW(p)], I think "maximize diamonds" is a seriously mistaken subproblem choice:

I think that pure diamond maximizers are anti-natural, and at least not the first kind of successful story we should try to tell...

I think that "get an agent which reflectively equilibrates to optimizing a single commonly considered quantity like 'diamonds'" is probably extremely hard and anti-natural [LW(p) · GW(p)]. I think MIRI should not have chosen this as a subproblem. 

I also think that relaxing the problem by assuming hypercomputation encourages thinking about argmax search, which I think is a subtle but serious trap. For specific generalizable reasons which I'll soon post about, this design pattern seems basically impossible to align compared to shard agents.

because the optimum of the shattered [LW · GW] correlates of the training objectives that it gets is likely to involve tiling the universe with something that isn't actually diamond, even if you're lucky enough that it got a diamond-shard at all, which is dubious)

Really? That seems wrong. Suppose that at the time of the values handshake, the agent has a strong diamond-shard. I understand you to predict that the agent adopts a reflective utility function which, when optimized, won't lead to actual diamond. Why? Why wouldn't the diamond-shard just bid this plan down, because it doesn't lead to actual diamond?

even if it works a little, it doesn't seem to me to be teaching us any of the insights that would be possessed by someone who knew how to robustly aim an idealized unbounded (or even hypercomputing) cognitive system in theory.

In addition to my "unbounded/hypercomputing is a red herring" response: 

Someone saying "You can reliably solve computer vision tasks by doing deep learning" isn't telling you how to write superhumanly good features into the vision model, surpassing previous hand-designed expert attempts. They don't know how the SOTA deep vision models will work internally. And yet it's still good advice. It's still telling you something about how to train good vision models. 

Similarly, if you're in a state of ignorance [LW · GW] (lethality 19) about how to reliably point any cognitive system to any latent parts of reality, and someone proposes a plan which does plausibly (for specific reasons, not as a vague "it could work" hope) produce an AI which makes lots of real-world diamonds, then that seems like progress to me. (I'm fine agreeing to disagree here, I don't think it's productive to dispute how much credit I should get.)


Smaller points:

In a similar fashion, the other “shards” that the shard theory folk want to learn are unnatural too.

I think it would make more sense to claim that niceness / other shards are "contingent" instead of "unnatural." If shard theory is correct, shards are literally natural in that they are found in nature as the predictable outcome of human value formation. Same for niceness. 

 little correlates-of-training-objectives that it latched onto in order to have a gradient up to general intelligence, blow the whole plan sky-high once it starts to reflect.

You call shards "little correlates" and, previously, "ad-hoc internalized correlates." I don't know what you intend to communicate by this. The shards are, mechanistically speaking, contextually activated influences on the agent's decision-making. What information does "ad-hoc" or "little correlate" add to that picture? I'm currently guessing that it expresses your skepticism that shards can cohere into reflectively stable caring?

Or consider the conflict "I really enjoy dunking on the outgroup (but have some niggling sense of unease about this)" — we can't conclude from the fact that the enjoyment of dunking is loud, whereas the niggling doubt is quiet, that the dunking-on-the-outgroup value will be the one left standing after reflection.

This is an interesting example. To me, the more relevant questions seem to be: How much evidence is "loudness" (e.g. if I really enjoy something which I do frequently, I sure am more likely to reflectively endorse it compared to if I didn't enjoy it, even though there are highly available counterexamples to this tendency), and how relevant is this for the diamond story? 

EDIT: As I think I wrote in the OP, it's not enough for a shard to be strongly influencing decision-making in a given context. Especially for an anti-outgroup shard which is unendorsed (eg bids for outcomes which other reflectively aware shards bid against), this shard also seemingly has to be reflectively and broadly activated in order to be retained. So, yeah, if there's an anti-outgroup shard which gets "maneuvered around and removed" by other shards, sure, that can happen. My takeaway isn't "anything can get removed for hard-to-understand reasons", but rather "one particular way shards can get removed is that they directly conflict with other powerful shards." 

I think a diamond-manufacturing subshard would resource-conflict (instrumental conflict, not terminal conflict) with eg a power-seeking subshard (manufacturing diamonds uses energy). Or even against a staple-manufacturing subshard (staples require materials and energy). But I expect the reflective utility function to reflect gains from intershard trade and specialization of different parts of the future resources towards the different decision-making influences (eg maybe one kind of comet is better specialized for making staples, and another kind for diamonds). 

Or maybe not. Maybe it goes some other way. But this kind of conflict seems different from anticorrelated terminal value (eg anti-outgroup can impinge on nice-shards, altruism-shards, empathy...) across a shard power imbalance (nonreflective anti-outgroup vs reflective niceness shard).

And my point here isn't "I have now defused the general class of objection, checkmate!"... It's still a live and legit worry to me, but I don't view this phenomenon as incomprehensible, and I don't feel epistemically helpless here (not meaning to make claims about how you feel, tbc).

Replies from: thomas-larsen, D0TheMath, TurnTrout
comment by Thomas Larsen (thomas-larsen) · 2023-03-19T23:55:52.164Z · LW(p) · GW(p)

(My take on the reflective stability part of this) 

The reflective equilibrium of a shard-theoretic agent isn't a utility function weighted according to each of the shards; it's a utility function that mostly cares about some extrapolation of the (one or very few) shard(s) that were most tied to the reflective cognition.

It feels like a ‘let’s do science’ or ‘powerseek’ shard would be a lot more privileged, because these shards will be tied to the internal planning structure that ends up doing reflection for the first time.

There's a huge difference between "Whenever I see ice cream, I have the urge to eat it", and "Eating ice cream is a fundamentally morally valuable atomic action". The former roughly describes one of the shards that I have, and the latter is something that I don't expect to see in my CEV. Similarly, I imagine that a bunch of the safety properties will look more like these urges because the shards will be relatively weak things that are bolted on to the main part of the cognition, not things that bid on the intelligent planning part. The non-reflectively endorsed shards will be seen as arbitrary code that is attached to the mind that the reflectively endorsed shards have to plan around (similar to how I see my "Whenever I see ice cream, I have the urge to eat it" shard).

In other words: there is convergent pressure for CEV-content integrity, but that does not mean that the current way of making decisions (e.g. shards) is close to the CEV optimum, and the shards will choose to self modify to become closer to their CEV.

I don't feel epistemically helpless here either, and would love a theory of which shards get preserved under reflection. 

comment by Garrett Baker (D0TheMath) · 2022-10-18T00:07:34.598Z · LW(p) · GW(p)

Or consider the conflict "I really enjoy dunking on the outgroup (but have some niggling sense of unease about this)" — we can't conclude from the fact that the enjoyment of dunking is loud, whereas the niggling doubt is quiet, that the dunking-on-the-outgroup value will be the one left standing after reflection.

Assuming shard theory is basically correct, this aspect of Nate's story can be resolved by viewing self-reflection as a context like any other. If you put the system in a training setup which causes it to self-reflect, and reward it when it comes to the 'more diamonds' conclusion, then this should cause it to reflectively want more diamonds.

The only question is, how much does training it to max diamonds in maze finding cause the 'max diamonds' shard to be activated while in the self-reflecting context?

Also, notably, it will definitely be doing a modicum of self-reflection during the normal course of training, as the shards which do self-reflection will steer the future towards locations which reinforce their weight.

comment by TurnTrout · 2022-10-18T22:58:46.951Z · LW(p) · GW(p)

Also, in OP, you write:

TurnTrout’s proposal seems to me to be basically "train it around diamonds, do some reward-shaping, and hope that at least some care-about-diamonds makes it across the gap".

I read a connotation here like "TurnTrout isn't proposing anything sufficiently new and impressive." To be clear, I don't think I'm proposing an awesome new alignment technique. I'm instead proposing that we don't need one.

comment by the gears to ascension (lahwran) · 2022-10-14T04:47:01.595Z · LW(p) · GW(p)

Okay, so maybe I'm understanding a little bit better now. What you're getting at is that self-generated true and useful philosophical insights become more and more likely to cause an AI to crash out of its domain of trained validity the smarter the AI gets, because philosophical insights are adversarial examples to many possible very smart beings, and therefore the order of philosophical insights can cause an insight to start propagating crash behavior through the rest of the network of nearby internal and external compute components, starting from an agentic subnetwork?

comment by Nathan Helm-Burger (nathan-helm-burger) · 2022-10-14T03:21:11.772Z · LW(p) · GW(p)

Ok, so perhaps TurnTrout would disagree with me here, but my plan for coming up with an AGI-that-makes-diamonds using Shard theory would look more like this:

Create not one AI via this process, but a million. Each time, varying the parameters of the base instincts (the dynamic reward functions) which you have designed to try to get an AGI to care about diamond. Study the results in terms of how close each seems to get to being a 'true' diamond valuer. Then, extrapolate from these results, use your new-found knowledge to create a new batch of experiments. Examine and learn from these. Repeat this several times. Just for the sake of learning, try making it care about multiple things: diamonds, bananas, and chairs. Try using interpretability & editing tools to delete shards, or freeze some and train the others. The things you then end up learning about how to steer value systems of agents along the way turn out to be the true treasure all along. Then use this knowledge to actually try to build your diamond valuer.
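A rough sketch of the loop I have in mind follows. The parameter space, the training call, and especially the scoring function are placeholders; figuring out what they should actually be is the point of the exercise:

```python
import random

def sample_reward_params(rng):
    """One sampled configuration of the 'base instincts' / dynamic reward functions."""
    return {
        "diamond_reward":   rng.uniform(1.0, 100.0),
        "shaping_strength": rng.uniform(0.0, 1.0),
        "target_objects":   rng.choice([["diamond"], ["diamond", "banana", "chair"]]),
    }

def score_true_valuing(agent, target="diamond"):
    """Placeholder: interpretability / behavioral probes estimating how close the
    agent is to being a 'true' valuer of the target, not just a better liar."""
    raise NotImplementedError   # this is where reliable interpretability tools are needed

def experiment_round(train_agent, n_runs, seed=0):
    rng = random.Random(seed)
    results = []
    for _ in range(n_runs):
        params = sample_reward_params(rng)
        results.append((params, score_true_valuing(train_agent(params))))
    # Study the best/worst runs, update how you sample params, then run another round.
    return sorted(results, key=lambda r: r[1], reverse=True)
```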

The flaw I see in this plan is the question, "Can we successfully use these experiments to hill climb towards useful knowledge or would we just be fooling ourselves because even the seemingly 'better' agents would just be better liars?"

I think that then points at a dependency on reliable interpretability tools.

Replies from: tailcalled
comment by tailcalled · 2022-10-14T08:32:52.578Z · LW(p) · GW(p)

Worlds Where Iterative Design Fails [LW · GW]

Replies from: sharmake-farah, nathan-helm-burger
comment by Noosphere89 (sharmake-farah) · 2022-10-14T12:57:01.760Z · LW(p) · GW(p)

More generally, deceptive alignment is likely to bite, and TurnTrout seems to handwave it away. There are other problems, but this is why I'm unimpressed by his claims about shard theory.

It's possibly even worse than HCH, conditional on it being outer aligned at optimum.

Replies from: nathan-helm-burger
comment by Nathan Helm-Burger (nathan-helm-burger) · 2022-10-14T15:56:10.039Z · LW(p) · GW(p)

I have the view that we need to build an archway of techniques to solve this problem. Each block in the arch is itself insufficient. You must have a scaffold in place while building the arch to keep the half-constructed edifice from falling. In my view that scaffold is the temporary patch of 'boxing'. The pieces of the arch which must be put together while the scaffold is in place: mechanistic interpretability, abstract interpretability, HCH, Shard theory experimentation leading to direct shard measurement and editing, replicating studying and learning from compassion circuits in the brain in the context of brain-like models, toy models of deceptive alignment, red teaming of model behavior under the influence of malign human actors, robustness / stability under antagonistic optimization pressure, the nature of the implicit priors of the machine learning techniques we use, etc.

I don't think any single technique can be guaranteed to get us there at this point. I think what is needed is more knowledge, more understanding. I think we need to get that through collecting empirical data. Lots of empirical data. And then thinking carefully about the data and coming up with hypotheses to explain it, and then testing those.

I don't think criticizing individual blocks of the arch for not already being the entire arch is particularly useful.

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2022-10-14T15:59:37.711Z · LW(p) · GW(p)

I don't think criticizing individual blocks of the arch for not already being the entire arch is particularly useful.

Yes, but TurnTrout seems to want to go from shard theory being useful to shard theory being the solution, which leaves me worried.

comment by Nathan Helm-Burger (nathan-helm-burger) · 2022-10-14T15:38:17.977Z · LW(p) · GW(p)

https://www.lesswrong.com/posts/z8s3bsw3WY9fdevSm/boxing-an-ai [LW · GW] ;-)

https://www.lesswrong.com/posts/wgcFStYwacRB8y3Yp/timelines-are-relevant-to-alignment-research-timelines-2-of [LW · GW]

https://www.lesswrong.com/posts/p62bkNAciLsv6WFnR/how-do-we-align-an-agi-without-getting-socially-engineered [LW · GW]

I disagree with John's post in a similar way to how Steven Byrnes disagrees in the comments. It's not the speed of takeoff that matters, it's our loss of control. If the takeoff happens very fast, but we have an automatic "turn it off if it gets too smart" system in place that successfully turns it off, and then we test it in a highly impaired mode (lowered intelligence/functionality, lowered speed) to learn about it... this is potentially a win not a loss.

As for John W's point 'getting what you measure', yes. That's the hard task interpretability must conquer. I think it is possible to hill climb towards getting better at this so long as you are in control and able to run many separate experiments.

comment by habryka (habryka4) · 2024-01-15T07:59:11.878Z · LW(p) · GW(p)

I am not a huge fan of shard theory, but other people seem into it a bunch. This post captured at least a bunch of my problems with shard theory (though not all of them, and it's not a perfect post). This means the post at least has saved me some writing effort a bunch of times. 

comment by lc · 2022-10-14T09:26:57.676Z · LW(p) · GW(p)

The first “problem” with this plan is that you don't get an AGI this way. You get an unintelligent robot that steers towards diamonds. If you keep trying to have the training be about diamonds, it never particularly learns to think. When you compromise and start putting it in environments where it needs to be able to think to succeed, then your new reward-signals end up promoting all sorts of internal goals that aren't particularly about diamond, but are instead about understanding the world and/or making efficient use of internal memory and/or suchlike.

Will humans stop having children as they get smarter and more powerful because they inadvertently gathered a bunch of utility function quirks like "curiosity"?

Separately, insofar as you were able to get some sort of internalized diamond-ish goal, if you're not really careful then you end up getting lots of subgoals such as ones about glittering things, and stones cut in stylized ways, and proximity to diamond rather than presence of diamond, and so on and so forth.

Will humans stop having children in the limit of intelligence and power, because we have all of these sub-shards like "make sure your children are safe", and "have lots of sex" instead of one big "spread your genes" one? Do they stop doing that when you introduce them to superstimulants via the internet or give them access to contraceptives that decouple sex from reproduction?

What the AI's shards become under reflection is very sensitive to the ways it resolves internal conflicts. For instance, in humans, many of our values trigger only in a narrow range of situations (e.g., people care about people enough that they probably can't psychologically murder a hundred thousand people in a row, but they can still drop a nuke), and whether we resolve that as "I should care about people even if they're not right in front of me" or "I shouldn't care about people any more than I would if the scenario was abstracted" depends quite a bit on the ways that reflection resolves inconsistencies.

The reason human morality is contextual and self-contradictory, and we have to resolve a bunch of internal conflicts at the limit of reflectivity, is that we weren't actually trained to care about other people; the subgoal, if any, was "maintain the trustworthiness indicators of the people we're most likely to be able to cooperate with". So your examples are very cheesy and not at all convincing.

Do humans decide to kill or sterilize their children at higher INT and WIS scores if you change some abstract metacognition parameters that affect how they resolve (deliberately engineered) inconsistencies?

Replies from: CarlShulman
comment by CarlShulman · 2022-10-15T15:37:16.271Z · LW(p) · GW(p)

Number of children in our world is negatively correlated with educational achievement and income, often in ways that look like serving other utility function quirks at the expense of children (as the ability to indulge those quirks with scarce effort improved faster with technology than those more closely tied to children), e.g. consumption spending instead of children, sex with contraception, pets instead of babies. Climate/ecological or philosophical antinatalism is also more popular in the same regions and social circles. Philosophical support for abortion and medical procedures that increase happiness at the expense of sterilizing one's children also increases with education and in developed countries. Some humans misgeneralize their nurturing/anti-suffering impulses to favor universal sterilization or death of all living things including their own lineages and themselves.

Sub-replacement fertility is not 0 children, but it does trend to 0 descendants over multiple generations.

Many of these changes are partially mediated through breaking attachment to fertility-supporting religions that have not been robust to modernity, or through new technological options for unbundling previously bundled features. 

Human morality was optimized in a context of limited individual power, but that kind of concern can and does dominate societies because it contributes to collective action where CDT selfishness sits out, and drives attention to novel/indirect influence. Similarly an AI takeover can be dominated by whatever motivations contribute to collective action that drives the takeover in the first place, or generalizes to those novel situations best.

Replies from: lc
comment by lc · 2022-10-15T15:50:15.635Z · LW(p) · GW(p)

The party line of MIRI is not that a superintelligence, without extreme measures, would waste most of the universe's EV on frivolous nonsense. The party line is that there is a 99+% chance that an AI, even if trained specifically to care about humans, would not end up caring about humans at all, and instead turn the universe into uniform squiggles. That's the claim I find unsubstantiated by most concrete concerns they have, and which seems suspiciously disanalogous to the one natural example we have. 99% of people in first world countries are not forgoing pregnancy for educational attainment.

It'd of course still be extremely terrible, and maybe even more terrible, if what I think is going to happen happens! But it doesn't look like all matter becoming squiggles.

Replies from: CarlShulman
comment by CarlShulman · 2022-10-15T17:47:42.277Z · LW(p) · GW(p)

I wasn't arguing for "99+% chance that an AI, even if trained specifically to care about humans, would not end up caring about humans at all" just addressing the questions about humans in the limit of intelligence and power in the comment I replied to. It does seem to me that there is substantial chance that humans eventually do stop having human children in the limit of intelligence and power.

Replies from: lc
comment by lc · 2022-10-15T18:18:31.614Z · LW(p) · GW(p)

I wasn't arguing for "99+% chance that an AI, even if trained specifically to care about humans, would not end up caring about humans at all" just addressing the questions about humans in the limit of intelligence and power in the comment I replied to.

Tru

It does seem to me that there is substantial chance that humans eventually do stop having human children in the limit of intelligence and power.

A uniform fertility below 2.1 means extinction, yes, but in no country is the fertility rate uniformly below 2.1. Instead, some humans decide they want lots of children despite the existence of contraception and educational opportunity, and others do not. It seems to me that a substantial proportion of humans would stop having children in the limit of intelligence and power. It also seems to me like a substantial number of humans continue (and would continue) to have such children as if they value it for its own sake.

This suggests that the problems Nate is highlighting, while real, are not sufficient to guarantee complete failure - even when the training process is not being designed with those problems in mind, and there are no attempts at iterated amplification whatsoever. This nuance is important because it affects how far we should think a naive SGD RL approach is from limited "1% success", and whether or not simple modifications are likely to greatly increase survival odds.

comment by Gunnar_Zarncke · 2022-10-14T09:11:37.321Z · LW(p) · GW(p)

Reflection isn't easy. Humans don't seem to get it right often or at all. It is not something that gets turned on at a certain optimization strength but that grows out of precursors. Optimization power can be directed via attention mechanisms to inner and outer processes and I guess it is possible to prevent or sufficiently inhibit reflection.

comment by RogerDearnaley (roger-d-1) · 2023-12-21T05:04:54.618Z · LW(p) · GW(p)

For any AI that has an LLM as a component of it, I don't believe diamond-maximization is a hard problem, apart from Inner Alignment problems. The LLM knows the meaning of the word 'diamond' (GPT-4 defined it as "Diamond is a solid form of the element carbon with its atoms arranged in a crystal structure called diamond cubic. It has the highest hardness and thermal conductivity of any natural material, properties that are utilized in major industrial applications such as cutting and polishing tools. Diamond also has high optical dispersion, making it useful in jewelry as a gemstone that can scatter light in a spectrum of colors."). The LLM also knows its physical and optical properties, its social, industrial and financial value, its crystal structure (with images and angles and coordinates), what carbon is, its chemical properties, how many electrons, protons and neutrons a carbon atom can have, its terrestrial isotopic ratios, the half-life of carbon-14, what quarks a neutron is made of, etc. etc. etc. — where it fits in a vast network of facts about the world. Even if the AI also had some other very different internal world model and ontology, there's only going to be one "Rosetta Stone" optimal-fit mapping between the human ontology that the LLM has a vast amount of information about and any other arbitrary ontology, so there's more than enough information in that network of relationships to uniquely locate the concepts in that other ontology corresponding to 'diamond'. This is still true even if the other ontology is larger and more sophisticated: for example, locating Newtonian physics in relativistic quantum field theory and mapping a setup from the former to the latter isn't hard: its structure is very clearly just the large-scale low-speed limiting approximation.

The point where this gets a little more challenging is Outer Alignment, where you want to write a mathematical or pseudocode reward function for training a diamond optimizer using Reinforcement Learning (assuming our AI doesn't just have a terminal goal utility function slot that we can directly connect this function to, like AIXI): then you need to also locate the concepts in the other ontology for each element in something along the lines of "pessimizingly estimate the total number of moles of diamond (having at a millimeter-scale-average any isotopic ratio of C-12 to C-13 but no more than  times the average terrestrial proportion of C-14, discounting any carbon atoms within  C-C bonds of a crystal-structure boundary, or within  bonds of a crystal-structure dislocation, or within  bonds of a lattice substitution or vacancy, etc. …) at the present timeslice in your current rest frame inside the region of space within the future-directed causal lightcone of your creation, and subtract the answer for the same calculation in a counterfactual alternative world-history where you had permanently shut down immediately upon being created, but the world-history was otherwise unchanged apart from future causal consequences of that divergence". [Obviously this is a specification design problem, and the example specification above may still have bugs and/or omissions, but there will only be a finite number of these, and debugging this is an achievable goal, especially if you have a crystallographer, a geologist, and a jeweler helping you, and if a non-diamond-maximizing AI also helps by asking you probing questions. There are people whose jobs involve writing specifications like this, including in situations with opposing optimization pressure.]
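Here is the same specification restated as a pseudocode skeleton. Every helper is a stub for a sub-specification that would still have to be written and debugged, and I've left the "pessimizing" error bars out entirely:

```python
AVOGADRO = 6.02214076e23

def counts_as_diamond(atom, lattice):
    """Stub: carbon in diamond-cubic structure, isotope ratios within tolerance, and
    not too close to a crystal boundary, dislocation, substitution, or vacancy."""
    ...

def moles_of_diamond(world_state, region):
    atoms = world_state.carbon_atoms(region)
    return sum(1 for a in atoms if counts_as_diamond(a, world_state.lattice(a))) / AVOGADRO

def future_lightcone_of_creation(history):
    """Stub: the region of space within the future-directed causal lightcone of the
    AI's creation, in its current rest frame."""
    ...

def reward(actual_history, shutdown_counterfactual_history, t):
    """Diamond attributable to the AI: diamond in the actual world-history minus
    diamond in the counterfactual where the AI shut down permanently at creation."""
    region = future_lightcone_of_creation(actual_history)
    return (moles_of_diamond(actual_history.at(t), region)
            - moles_of_diamond(shutdown_counterfactual_history.at(t), region))
```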

As mentioned above, I fully acknowledge that this still leaves the usual Inner Alignment problems unsolved: if we apply Reinforcement Learning (or something similar such as Direct Preference Optimization) with this reward function to our AI, how do we ensure that it actually becomes a diamond maximizer, rather than a biased estimator of diamond? I suspect we might want to look at some form of GAN, where the reward-estimating circuitry is not part of the Reinforcement Learning process, but is being trained in some other way. That still leaves the Inner Alignment problem of training a diamond maximizer instead of a hacker of reward model estimators.

In Shard Theory terms, if we reinforcement train the AI such that it has the reward-equivalent of an orgasm every time it creates a carat of diamond, show it a way to synthesize diamond and give it a taste of the effects, then if it didn't previously have a diamond shard, it's soon going to develop one.