Comments

Comment by Dweomite on Intermittent Distillations #4: Semiconductors, Economics, Intelligence, and Technological Progress. · 2021-07-27T18:23:59.046Z · LW · GW

> The sort of basic observations that R&D spending has increased but economic growth has remained roughly the same seems to imply the obvious conclusion that productivity is declining.

That does seem like the most likely conclusion, but another obvious interpretation of "X doesn't change when we increase Y" would be "maybe Y doesn't actually affect X in the first place".  For instance, maybe funding above some threshold is all eaten by parasites, or maybe the bottleneck on growth speed is something other than formal R&D (e.g. good ideas might strike randomly regardless of whether you're a researcher or not, and the official "researchers" just "harvest" the insights that are already "in the air" thanks to the general population).

(Most of my probability mass is still on "productivity is declining.")

Comment by Dweomite on Finite Factored Sets · 2021-06-19T23:11:26.544Z · LW · GW

What elements of that game are you suggesting would correspond to a set factorization?  I'm not seeing one.

Comment by Dweomite on Cryonics signup guide #1: Overview · 2021-06-19T23:03:02.324Z · LW · GW

The final section of the article says (bold added):

> If you don't expect yourself to go through the full process right away for whatever reason, but you want to increase your chances of cryopreservation in the event of your death, you should do the following two easy things:
>
> Taken together, these constitute informed consent, making it much more likely that it will be legally possible to preserve you in case of an emergency.

Comment by Dweomite on Public Static: What is Abstraction? · 2021-06-10T20:37:50.405Z · LW · GW

If I'm trying to predict the light entering my eyes, and there's a brick wall six feet in front of me, it seems weird to me to say that the variables on the far side of the wall are being wiped out because the wall is "noisy" rather than, say, because the wall is "opaque".  Is there some technical sense in which the wall is "noisier" than the air?

Either framing satisfies your "equal conditional probability" criterion, so I don't think it affects any of the math, but it seems like it could matter for understanding how this definition applies to the real world.

Comment by Dweomite on Cryonics signup guide #1: Overview · 2021-06-07T20:39:19.235Z · LW · GW

I'm very surprised that you say informed consent requires signing a legal document AND paying a monthly fee to some non-governmental entity.  Why can't you consent using only a document?

Comment by Dweomite on Gears in understanding · 2021-06-04T04:40:39.804Z · LW · GW

Perhaps we could say that Gears-like models have low entropy?  (Relative to the amount of territory covered.)

You can communicate the model in a small number of bits.  That's why you can re-derive a missing part (your test #3)--you only need a few key pieces to logically imply the rest.

This also implies you don't have many degrees of freedom; [you can't just change one detail without affecting others](https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology).  This makes it (more likely to be) incoherent to imagine one variable being different while everything else is the same (your test #2).

Because the model itself is compact, you can also specify the current state of the system in a relatively small number of bits, inferring the remaining variables from the structure (your test #1).  (Although the power here is really coming from the "...relative to the amount of territory covered" bit.  That bit seems critical to reward a single model that explains many things versus a swarm of tiny models that collectively explain the same set of things, while being individually lower-entropy but collectively higher-entropy.)
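As a toy illustration of the low-entropy point (the model, data, and bit counts below are made up purely for illustration; they're not from the post):

```python
# Toy comparison: a gears-like model vs. rote memorization of the same 100 observations.
xs = list(range(100))
ys = [2 * x + 1 for x in xs]        # suppose the territory happens to follow y = 2x + 1

# Gears-like model: two small integer parameters (slope and intercept), ~8 bits each.
gears_model_bits = 2 * 8            # 16 bits

# Non-gears "model": memorize every y independently, ~8 bits per entry.
lookup_table_bits = len(ys) * 8     # 800 bits for the same territory

# Test #3: delete any single observation and the gears model re-derives it.
missing_index = 37
assert 2 * xs[missing_index] + 1 == ys[missing_index]

# Test #2: imagining ys[37] being different while the slope, the intercept, and
# every other y stay the same is incoherent; the lookup table has no such constraint.
```

Both descriptions cover the same territory; the gears-like one is the low-entropy one.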

This line of thinking also reminds me of Occam's Razor/Solomonoff Induction.

Comment by Dweomite on Finite Factored Sets · 2021-05-29T02:28:00.339Z · LW · GW

Possible examples:  After staring at the definition of a set factorization for a minute, it clicked for me when I thought about Quarto.

Quarto is a simple board game played with 16 pieces (and a 4x4 grid) where each piece is (short or tall) and (light or dark) and (round or square) and (solid or hollow).  There's exactly one piece with each combination of attributes; for example, there's exactly one tall dark round hollow piece.

Thus, the full set of 16 pieces can be factored into {{short, tall}, {light, dark}, {round, square}, {solid, hollow}}.  Similarly, given that list of attributes, you can reconstruct the full set of 16 distinct pieces.

Though I think Set is a better-known game.  It has 81 cards, where each card has (one, two, or three) pictures of a (diamond, oval, or squiggle) with (solid, striped, or no) shading drawn in (red, green, or purple) ink.
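To make the correspondence concrete, here's a minimal sketch of the Quarto case (my own illustration; the attribute names are just labels I picked):

```python
from itertools import product

# A minimal sketch of the Quarto example: the 16-piece set and its factorization
# into four binary "questions" determine each other.
factors = {
    "height": ("short", "tall"),
    "colour": ("light", "dark"),
    "shape":  ("round", "square"),
    "fill":   ("solid", "hollow"),
}

# Reconstruct the full set of pieces from the factors: one piece per combination
# of attribute values, 2 * 2 * 2 * 2 = 16 in total.
pieces = set(product(*factors.values()))
assert len(pieces) == 16

# Going the other way, each factor is a partition of the 16 pieces: e.g. the
# "height" factor splits the set into the 8 short pieces and the 8 tall pieces.
height_partition = {
    value: {p for p in pieces if p[0] == value} for value in factors["height"]
}
assert all(len(part) == 8 for part in height_partition.values())
```

Set works the same way, just with four three-valued factors and 3^4 = 81 cards.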

 

(edited for formatting)

Comment by Dweomite on What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs) · 2021-05-29T01:56:25.814Z · LW · GW

I'm uncertain what phonemes "raahp" denotes.

Comment by Dweomite on What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs) · 2021-04-16T21:58:34.982Z · LW · GW

Sort-of on the topic of terminology, how should "RAAP" be pronounced when spoken aloud?  (If the term catches on, some pronunciation will be adopted.)

"Rap" sounds wrong because it fails to acknowledge the second A.  Trading the short A for a long A yields "rape", which probably isn't a connotation you want.  You could maybe push "rawp" (with "aw" as in "hawk").

If you don't like any of those, you might want to find another acronym with better phonetics.

Comment by Dweomite on Specializing in Problems We Don't Understand · 2021-04-16T17:51:42.239Z · LW · GW

Does the phrase "levels of abstraction" imply that those four problems form some kind of hierarchy?  If so, could you explain how that hierarchy works?

Comment by Dweomite on Embedded World-Models · 2021-04-01T08:11:44.759Z · LW · GW

This article talks about multi-level models, where you somehow switch between cheaper models and more-accurate models depending on your needs.  Would it be useful to generalize this idea to switching between multiple "same-level" models that are differentiated by something other than cheap vs. accurate?

For example, one might have one model that groups individual people together into "families", another that groups them into "organizations", and a third that groups them into "ideologies".  None of those models seems to be strictly "higher" than another (e.g. neither families nor ideologies are composed of each other), and different models might be useful for different problems.

One could also imagine combining all of those into one unified model, of course.  But it might be wasteful to model all of them for problems where you only really care about one.

I feel like humans do something like this.

If multiple "same-level" models can coexist, then one strategy for holding onto your values while inventing new models might be to always hold onto whichever model the values were originally defined in, even if you add more models alongside it.

Comment by Dweomite on My research methodology · 2021-03-25T03:44:08.920Z · LW · GW

I feel confused about the failure story from example 3.  (First 3 bullet-points in that section.)

It sounded like: We ask for a human-comprehensible way to predict X; the computer uses a very low-level simulation plus a small bridge that predicts only and exactly X; humans can't use the model to predict any high-level facts besides X.

But I don't see how that leads to egregious misalignment.  Shouldn't the humans be able to notice their inability to predict high-level things they care about and send the AI back to its model-search phase?  (As opposed to proceeding to evaluate policies based on this model and being tricked into a policy that fails "off-screen" somewhere.)

Comment by Dweomite on Strong Evidence is Common · 2021-03-21T19:40:21.566Z · LW · GW

"Mark Xu" is an unusually short name, so the message-ending might actually contain most of the entropy.

The phrases "my name is Mark Xu" and "my name is Mortimer Q. Snodgrass" contain roughly the same amount of evidence, even though the second has 12 additional letters.  ("Mark Xu" might be a more likely name on priors, but it's nowhere near 2^(4.7 * 12) more likely.)
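For concreteness, the back-of-the-envelope arithmetic behind that claim (4.7 bits per letter is just log2(26) for uniformly random letters, and "12 extra letters" is a rough count):

```python
import math

bits_per_letter = math.log2(26)        # ≈ 4.70, treating letters as uniformly random
extra_letters = 12                     # rough extra length of "Mortimer Q. Snodgrass"

naive_extra_bits = bits_per_letter * extra_letters    # ≈ 56 bits
naive_likelihood_ratio = 2 ** naive_extra_bits
print(f"{naive_likelihood_ratio:.1e}")                # ≈ 9.5e16 -- no real name is that much rarer
```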

Comment by Dweomite on What confusions do people have about simulacrum levels? · 2021-03-18T21:39:59.120Z · LW · GW

I agree that "each stage follows in a systematic way" doesn't quite work. To further illuminate that, I'd like to describe the specific systematic progression I personally inferred, before deciding that it doesn't seem to match how the levels are actually being used in discussion:

(Since I don't think this matches current usage, I'm going to deliberately change terminology and say "steps" instead of "levels" in a weak attempt to prevent conflation.)

A.  To ascend from an odd step to an even step, the speaker's motive changes, but their communicative intent remains the same.

B.  To ascend from an even step to an odd step, the speaker's motive remains the same, but their intent is now to communicate that motive.  (A sketch after the worked examples below makes this recursion explicit.)

At step 1, when I say
"There's a tiger across the river"
I want you to believe
There is a tiger across the river
because
There IS a tiger across the river (or so I think)

At step 2, when I say
"There's a tiger across the river"
I want you to believe
There is a tiger across the river
because
I don't want anyone to cross the river

At step 3, when I say
"There's a tiger across the river"
I want you to believe
I don't want anyone to cross the river
because
I don't want anyone to cross the river

At step 4, when I say
"There's a tiger across the river"
I want you to believe
I don't want anyone to cross the river
because
I want to ally myself with the vermilion political party

At step 5, when I say
"There's a tiger across the river"
I want you to believe
I want to ally myself with the vermilion political party
because
I want to ally myself with the vermilion political party

At step 6, when I say
"There's a tiger across the river"
I want you to believe
I want to ally myself with the vermilion political party
because
I want vermilion party votes to help me become mayor

At step 7, when I say
"There's a tiger across the river"
I want you to believe
I want vermilion party votes to help me become mayor
because
I want vermilion party votes to help me become mayor

At step 8, when I say
"There's a tiger across the river"
I want you to believe
I want vermilion party votes to help me become mayor
because
I'm trying to split the vermilion's party vote so their other candidate doesn't win

etc.
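Here's one way to make rules A and B explicit (my own formalization sketch, not standard simulacra terminology; each step is represented as just the pair of what I want you to believe and why):

```python
# Each step behind the fixed utterance "There's a tiger across the river"
# is a pair: (belief I want you to hold, my actual motive).

def ascend_odd_to_even(step, new_motive):
    """Rule A: the motive changes; the communicative intent stays the same."""
    belief, _old_motive = step
    return (belief, new_motive)

def ascend_even_to_odd(step):
    """Rule B: the motive stays; the intent becomes communicating that motive."""
    _belief, motive = step
    return (motive, motive)

step1 = ("there is a tiger across the river",
         "there is a tiger across the river (or so I think)")
step2 = ascend_odd_to_even(step1, "I don't want anyone to cross the river")
step3 = ascend_even_to_odd(step2)
step4 = ascend_odd_to_even(step3, "I want to ally with the vermilion party")

assert step3 == ("I don't want anyone to cross the river",
                 "I don't want anyone to cross the river")
```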

I don't think there's any strict upper bound to how many steps you can get out of this progression, but the practical depth is limited for the following reason:

Notice that there might be many possible motivations that could be introduced at an even step.  In step 2 above, I used "I don't want anyone to cross the river", but I could have used "I want to organize a tiger hunting party" or "I want to promote the development of anti-tiger weaponry" or "I want us to acknowledge that our attempt to avoid tigers is failing and we should try to reach an accommodation with them instead".

A successful step-3 communication can only occur if there is a single step-2 motive that is so common or so obvious (in context) that it can be safely inferred by the listener.  (Otherwise, I might want you to understand that I don't want anyone to cross the river, but you might mistakenly think I want to organize a tiger hunting party.)

Also note that all of the odd steps might be called "honest" in the sense that you want the listener to believe an accurate thing (you are trying to make their map look like your map), but only step 1 is truthful in the sense that it accurately describes object-level reality.  All of the even steps are dishonest.

I'm not sure this model is particularly helpful, except that perhaps it illuminates a difference between "honesty" and "truthfulness".

I think current simulacra discussions are sort-of collapsing all of steps 3+ into "simulacra level 3", and then "simulacra level 4" is sort-of like step infinity, except I don't think the relation between simulacra levels and the model I described above is actually that clean.  I would welcome further attempts to concisely differentiate them.