post by [deleted]

Comments sorted by top scores.

comment by [deleted] · 2024-07-27T13:30:56.284Z · LW(p) · GW(p)

how do we reconcile the ideas that 1) most imaginable expected utility maximizers would drive humanity to extinction, and 2) we humans are still alive, even though every single physical system in existence is mathematically equivalent to some class of flawless expected utility maximizers?

First of all, if everything is mathematically equivalent to an EU maximizer, then saying that something is an EU maximizer no longer represents meaningful knowledge, since it no longer distinguishes between fiction and reality [LW · GW]. As Eliezer once [LW · GW] beautifully put it:

Your strength as a rationalist is your ability to be more confused by fiction than by reality; if you are equally good at explaining any outcome you have zero knowledge. The strength of a model is not what it can explain, but what it can’t, for only prohibitions constrain anticipation. If you don’t notice when your model makes the evidence unlikely, you might as well have no model, and also you might as well have no evidence; no brain and no eyes.

So if you can take any self-consistent physical description of a system, whether or not it actually conforms to reality, and say that it can be modeled as an EU maximizer, then the fact that something is an EU maximizer no longer constrains expectations [LW · GW] by prohibiting certain world-states. Since we do [LW(p) · GW(p)] actually have certain expectations related to EU maximizers, this suggests that the discussion you are starting here is primarily semantic and not substantive [LW · GW] (as you are using the concept of EU maximizers in a manner that John Wentworth has described [LW(p) · GW(p)] as "confused").

With this in mind, the resolution to the apparent tension between the two statements you included in the section I quoted at the top of this comment is that it[1] implicitly relies on a counting argument [LW · GW] without properly considering what compact manifold [LW · GW] it arises out of. Put differently, we already know that humans are still alive, which screens off [LW · GW] other considerations that are not as narrowly [LW · GW] tailored to this fact. When you condition on any piece of knowledge, you explicitly change the probabilities of all other events so that they are compatible with what you are conditioning on. It could not have been any other way [LW · GW]. So the space of "imaginable EU maximizers", after you condition, does not get a uniform subjective probability distribution [? · GW] or anything close to it, but rather one that puts mass only (or at least primarily) on those hypotheses under which humans survive until now (since we know that we, in fact, have).
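
To make the conditioning step concrete, here is a toy Bayesian sketch (the hypotheses and numbers are made up purely for illustration, nothing from the post): start with some prior over imaginable maximizers, condition on the observation that humans are still alive, and the posterior mass collapses onto the survival-compatible hypotheses.

```python
# Toy illustration of conditioning on "humans are still alive".
# Hypotheses and probabilities are invented purely for illustration.

prior = {
    "maximizer that disassembles Earth": 0.70,
    "maximizer indifferent to humans":   0.20,
    "maximizer that preserves humans":   0.10,
}

# Likelihood of observing "humans alive today" under each hypothesis.
p_alive_given_h = {
    "maximizer that disassembles Earth": 0.0,
    "maximizer indifferent to humans":   0.3,
    "maximizer that preserves humans":   1.0,
}

# Bayes: P(h | alive) is proportional to P(alive | h) * P(h).
unnormalized = {h: p_alive_given_h[h] * p for h, p in prior.items()}
z = sum(unnormalized.values())
posterior = {h: u / z for h, u in unnormalized.items()}

for h, p in posterior.items():
    print(f"{h}: {p:.2f}")
# The "disassembles Earth" hypothesis gets zero posterior mass: counting
# over the prior no longer tells you anything, because the observation
# has already screened it off.
```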

  1. ^

    By which I am referring to "the feeling that the statements are contradictory," not "your post."

Replies from: eye96458
comment by eye96458 · 2024-07-27T14:48:46.948Z · LW(p) · GW(p)

First of all, if everything is mathematically equivalent to an EU maximizer, then saying that something is an EU maximizer no longer represents meaningful knowledge, since it no longer distinguishes between fiction and reality.

I’m confused about your claim. For example, I can model (nearly?) everything with quantum mechanics, so then does calling something a quantum mechanical system not confer meaningful knowledge?

Replies from: None, Richard_Kennaway
comment by [deleted] · 2024-07-27T15:17:19.751Z · LW(p) · GW(p)

There are physical models which are not based on Quantum Mechanics, and are in fact incompatible with it. For example, to a physicist in the 19th century, a world that functioned on the basis of (very slight modifications of) Newtonian Mechanics and Classical E&M would have seemed very plausible.

The fact that reality turned out not to be this way does not imply the physical theory was internally inconsistent, but rather that it was incompatible with the empirical observations that eventually led to the creation of QM. So the point is that you cannot actually model nearly everything in conceptspace [LW · GW] with QM; it is just that reality turns out to be well-approximated by it, while (realistic) fiction like Newtonian Mechanics is not (for example, at the atomic and subatomic level).

This is what makes calling something a QM system an example of meaningful knowledge: the theory approximates reality better than it approximates things that are not real, which is exactly the point of Your Strength as a Rationalist [LW · GW]. By contrast, whatever story I give you, true or not, can be viewed as flowing from the Texas Sharpshooter Utility Function [LW(p) · GW(p)] in exactly the same way that you said reality does:

All you have to do is define a utility function which, at time T, takes in all the relevant context within and around a given physical system, and assigns the highest expected utility to whatever actions that system actually takes to produce its state at time T+1.

So the fact that you "know" something is an EU maximizer, under OP's definition of that term (which, as I mentioned above, is confused and confusing [LW(p) · GW(p)]), does not constrain your expectations [LW · GW] in any meaningful way, because it does not rule out [LW · GW] any future world-states: true and false stories are equally compatible with the EU-maximization recipe described above.
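
As a minimal toy sketch of why this construction is vacuous (my own illustrative example of the quoted recipe, not anything from OP's post): whatever trajectory a system actually produces, a utility function can be painted onto it after the fact, so no trajectory is ever prohibited.

```python
# Toy "Texas Sharpshooter" utility function: observe what the system
# actually did, then define a utility function that scores exactly that
# behavior as optimal.  It works for *any* trajectory, which is the
# problem: it prohibits nothing.

def sharpshooter_utility(observed_trajectory):
    """Return a utility function that rates the observed trajectory 1.0
    and everything else 0.0 -- painted on after the fact."""
    def u(trajectory):
        return 1.0 if trajectory == observed_trajectory else 0.0
    return u

# Two mutually contradictory "world histories":
history_a = ("humans build AGI", "humans flourish")
history_b = ("humans build AGI", "humans go extinct")

u_a = sharpshooter_utility(history_a)
u_b = sharpshooter_utility(history_b)

# Each history is "optimal" under the utility function painted onto it,
# so calling the system an EU maximizer rules out neither outcome.
assert u_a(history_a) == 1.0 and u_b(history_b) == 1.0
```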

By contrast, knowing that something follows QM principles does constrain expectations significantly, because we can describe self-consistent models and imagined future world-states that do not follow it (as I mentioned above). For example, the quantization of energy levels, the photoelectric effect, quantum tunneling, quantum entanglement, the anomalous magnetic moment of the electron, specific predictions about the spectra of atoms and molecules, etc., are all predictions given directly by QM; as such, the theory invalidates world-states in which properly designed experiments fail to find them.
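
For concreteness, here is a small sketch of one such prediction, using the standard Rydberg formula for hydrogen's visible (Balmer) spectral lines; an experiment that found these lines at substantially different wavelengths would rule the model out.

```python
# Hydrogen Balmer-series wavelengths from the Rydberg formula:
#   1/lambda = R_H * (1/n_f^2 - 1/n_i^2),  with n_f = 2 and n_i = 3, 4, 5, ...
R_H = 1.0967758e7  # Rydberg constant for hydrogen, in 1/m

def balmer_wavelength_nm(n_initial: int) -> float:
    """Predicted wavelength (nm) of the transition n_initial -> 2."""
    inv_lambda = R_H * (1.0 / 2**2 - 1.0 / n_initial**2)
    return 1e9 / inv_lambda

for n in (3, 4, 5):
    print(f"n={n} -> 2: {balmer_wavelength_nm(n):.1f} nm")
# Prints roughly 656, 486, and 434 nm -- the observed H-alpha, H-beta,
# and H-gamma lines.  A spectrometer that saw something very different
# would falsify the model; that is what it means for a theory to
# prohibit world-states.
```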

But there is no articulable future world-state which is ruled out by OP's conception of EU maximization.

comment by Richard_Kennaway · 2024-07-27T15:08:39.050Z · LW(p) · GW(p)

For example, I can model (nearly?) everything with quantum mechanics, so then does calling something a quantum mechanical system not confer meaningful knowledge?

Actually no. Quantum mechanics is pretty well established, and we may suppose that it describes everything (at least, in low gravitational fields). Given that, pointing at a thing and saying "quantum mechanics!" adds no new information. That is not a model. An actual model would allow making predictions about the thing, or at least calculating (not merely fitting to) known properties. There aren't all that many systems we can do that for. The successes of quantum mechanics, which are many, are found in the systems simple enough that we can.

Replies from: eye96458
comment by eye96458 · 2024-07-27T15:16:14.796Z · LW(p) · GW(p)

Quantum mechanics is pretty well established, and we may suppose that it describes everything (at least, in low gravitational fields). Given that, pointing at a thing and saying "quantum mechanics!" adds no new information.

Are you making this argument?

  • P1: Quantum mechanics is well established.
  • P2: Quantum mechanics describes everything in low gravitational fields.
  • C1: So, calling a thing a “quantum system” doesn’t convey any information.
Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2024-07-27T16:30:19.527Z · LW(p) · GW(p)

I wouldn't state P1 and P2 as dogmatically as that, but rounding the uncertainty off to zero, yes. If everything is known to be described by quantum mechanics, pointing at something and saying "this is described by quantum mechanics" adds no new information.

comment by johnswentworth · 2024-07-27T17:16:12.807Z · LW(p) · GW(p)

(I endorse sunwillrise's comment [LW(p) · GW(p)] as a general response to this post; it's an unusually excellent comment. This comment is just me harping on a pet peeve of mine.)

So, within the ratosphere, it's well-known that every physical object or set of objects is mathematically equivalent to some expected utility maximizer

This is a wildly misleading idea which refuses to die.

As a meme within the ratosphere, the usual source cited is this old post by Rohin [LW · GW], which has a section titled "All behavior can be rationalized as EU maximization". When I complained to Rohin that "All behavior can be rationalized as EU maximization" was wildly misleading, he replied:

I tried to be clear that my argument was "you need more assumptions beyond just coherence arguments on universe-histories; if you have literally no other assumptions then all behavior can be rationalized as EU maximization". I think the phrase "all behavior can be rationalized as EU maximization" or something like it was basically necessary to get across the argument that I was making. I agree that taken in isolation it is misleading; I don't really see what I could have done differently to prevent there from being something that in isolation was misleading, while still being able to point out the-thing-that-I-believe-is-fallacious. Nuance is hard.

Point is: even the guy who's usually cited on this (at least on LW) agrees it's misleading.

Why is it misleading? Because coherence arguments do, in fact, involve a notion of "utility maximization" narrower than just a system's behavior maximizing some function of universe-trajectory. There are substantive notions of "utility maximizer"; those notions are a decent match to our intuitions in many ways, and they involve more than just behavior maximizing some function of universe-trajectory. When we talk about "utility maximizers" in a substantive sense, we're talking about a phenomenon narrower than behavior maximizing some function of universe-trajectory.
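
As a quick toy illustration of why the narrower notion has teeth (a minimal sketch of my own, not the cache example linked below): any utility function over options forces transitive preferences, so a system whose revealed choices are circular cannot be represented that way, even though its behavior still trivially maximizes some function of the whole universe-trajectory.

```python
# Toy check: do a system's revealed pairwise choices over options admit
# any utility function u with u(chosen) > u(rejected)?  For strict
# pairwise preferences this reduces to "no preference cycles", which we
# can test by attempting a topological sort.

from graphlib import TopologicalSorter, CycleError

def admits_utility_function(choices):
    """choices: list of (chosen, rejected) pairs over options."""
    graph = {}
    for chosen, rejected in choices:
        graph.setdefault(rejected, set()).add(chosen)  # chosen ranked above rejected
        graph.setdefault(chosen, set())
    try:
        list(TopologicalSorter(graph).static_order())
        return True
    except CycleError:
        return False

coherent   = [("A", "B"), ("B", "C"), ("A", "C")]  # representable
incoherent = [("A", "B"), ("B", "C"), ("C", "A")]  # circular, not representable

print(admits_utility_function(coherent))    # True
print(admits_utility_function(incoherent))  # False -- ruled out by the
# narrower notion of "utility maximizer", even though the same behavior
# can always be rationalized as maximizing some function of the full
# universe-trajectory.
```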

If you want to see a notion of "utility maximizer" which is nontrivial, Coherence of Caches and Agents [LW · GW] gives IMO a pretty illustrative and simple example.

comment by Richard_Kennaway · 2024-07-27T14:46:45.941Z · LW(p) · GW(p)

I agree with sunwillrise's comment.

This idea, the Texas Sharpshooter Utility Function, which looks at what happens and then paints the value 1 on it, is a surprisingly recurrent one on LW. But it does not work. It does not allow of any predictions. First you must see what happens; only then can you paint the target. Its present is uncomputable from its past.

comment by Richard_Kennaway · 2024-07-27T14:38:34.855Z · LW(p) · GW(p)

The vast majority of planet-sized configurations of atoms are inimical to life, but here we are. Threats such as climate change, colliding asteroids, supervolcanoes, and AGI are not to be assessed by speculating on the generality of planet-sized configurations, but by considering the actually possible futures of the configuration we find ourselves in.

Replies from: None
comment by [deleted] · 2024-07-27T15:31:12.186Z · LW(p) · GW(p)

While I do agree with the general sentiment behind what you are saying (we ought to take to heart the virtue of narrowness [LW · GW]), your comment here gives me the impression that you do not think very highly of the relevance (to P(doom), for example) of considerations based on anthropics [? · GW], the doomsday argument, and other related ideas.

Is this correct, or am I misreading you?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2024-07-27T16:31:47.426Z · LW(p) · GW(p)

Those concepts had not occurred to me in the present context, but no, in general I don't take anthropics or the doomsday argument seriously. Don't expect an argument from me to that effect; they just feel obviously wrong and I find them irritating. I've read some of the arguments around them, and it is clear that there is not currently a consensus, so I ignore them.