I gesture vaguely at Morality as Fixed Computation, moral realism, utility-function blackboxes, Learning What to Value, discerning an initially-uncertain utility function that may not be straightforward to get definitive information about.
Also, the math of AIXI assumes the environment is separably divisible - no matter what you lose, you get a chance to win it back later.
Does this mean that we don't even need to get into anything as esoteric as brain surgery – that AIXI can't learn to play Sokoban (without the ability to restart the level)?
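To make the irreversibility concrete, here's a tiny toy sketch (my own construction, not anything from the AIXI formalism or the post): a 1-D Sokoban where a box pushed into the corner can never be recovered, so whatever you lose there, you never get a chance to win back.

```python
# Toy 1-D Sokoban: cells 0..4, one box, one player who moves left/right.
# Walking into the box pushes it; the box can never be pulled.
from collections import deque

WIDTH = 5

def step(state, move):  # state = (player, box), move in {-1, +1}
    player, box = state
    new_player = player + move
    if not (0 <= new_player < WIDTH):
        return state                      # bumped into the wall
    if new_player == box:                 # walking into the box pushes it
        new_box = box + move
        if not (0 <= new_box < WIDTH):
            return state                  # box already against the wall
        return (new_player, new_box)
    return (new_player, box)

def reachable(start):
    seen, frontier = {start}, deque([start])
    while frontier:
        s = frontier.popleft()
        for move in (-1, +1):
            t = step(s, move)
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

start = (2, 3)                      # player at cell 2, box at cell 3
mistake = step(start, +1)           # push the box into the right corner
print(mistake)                      # (3, 4)
print(start in reachable(mistake))  # False: the loss can never be won back
```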
Belief in disbelief, perhaps.
(cf. nonstandard standard toolbox.)
The religious version
You may want something like "the Christian version". Ancient Greek paganism was a religion.
The link is broken; the current location seems to be here.
- This is our second important observation
The numbering seems to have gotten borked somehow.
I somehow managed to read this before chapter 6. With context, "who was too young" hits harder.
The fact that this is #5 turns out to be darkly appropriate.
Suppose I built a machine with artificial skin that felt the temperature of the cup and added ice to cold cups and lit a fire under hot cups.
Should this say "added ice to hot cups and lit a fire under cold cups"?
"Oh, right," said Xenophon, "How about 'Systems that would adapt their policy if their actions would influence the world in a different way'?"
"Teacup test," said Socrates.
This seems wrong. The artificial-skin machine adds ice or lights a fire solely according to the temperature of the cup; if it finds itself in a world where ice makes tea hotter and fire makes tea colder, it does not adapt its strategies.
My mouth tasted like wrapping that hadn't been thrown away yet and might still have a little of whatever it used to be wrapped around somewhere on it.
This is a good sentence.
I think this might be trying to talk about something related to identity of indiscernibles, the disquotational principle, and the masked-man fallacy. I'm not sure how you get from "different names for the same entity" to "magical clones", though.
(edit: wide-open)
This link seems to no longer work.
(Typo thread?)
"GPT-3" → "GPT-6"?
Step 1 in figuring out how to get an outcome which is Not That is to look at the list of nice properties which the CoCo solution uniquely fulfills, and figure out which one to break.
It seems to me that, ideally, one would like to be able to identify in advance which axioms one doesn't actually want/need, before encountering a motive to go looking for things to cut.
And so, the question to ask now is something like "is there a point on the Pareto frontier where we can get the curnits/utilon conversion numbers from that point, convert everyone's utility to curnits, work out the CoCo value of the resulting game, convert back to utilons, and end up at the exact same point we started at?"
I'm mostly not following the math, but at first glance this feels more like it's defining a stable point rather than an optimal point.
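As a heavily simplified illustration of the map whose fixed point is being asked about, here's a sketch in Python. The split into a common-interest "team" game plus a zero-sum "advantage" game is the standard Kalai–Kalai CoCo construction; the payoff matrices and the conversion rates `c1, c2` are arbitrary stand-ins, since I'm not sure how the post intends the curnit/utilon rates to be read off a frontier point.

```python
# Sketch of the "convert to curnits -> take CoCo value -> convert back" map.
# Payoffs and conversion rates below are made-up toy numbers.
import numpy as np
from scipy.optimize import linprog

def zero_sum_value(Z):
    """Minimax value of the zero-sum matrix game Z (row player maximizes)."""
    m, n = Z.shape
    c = np.zeros(m + 1); c[-1] = -1.0            # maximize the value v
    A_ub = np.hstack([-Z.T, np.ones((n, 1))])    # v <= x . Z[:, j] for all j
    b_ub = np.zeros(n)
    A_eq = np.zeros((1, m + 1)); A_eq[0, :m] = 1.0
    b_eq = np.array([1.0])                       # mixed strategy x sums to 1
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1]

def coco_values(A, B):
    """CoCo values, in whatever common units A and B are expressed in."""
    team = (A + B) / 2.0          # cooperative part: maximize the joint pot
    advantage = (A - B) / 2.0     # competitive part: zero-sum side bet
    pot, edge = team.max(), zero_sum_value(advantage)
    return pot + edge, pot - edge

def coco_map(A, B, c1, c2):
    """Convert utilons to curnits at rates (c1, c2), take CoCo, convert back."""
    v1_cur, v2_cur = coco_values(c1 * A, c2 * B)
    return v1_cur / c1, v2_cur / c2

# Toy 2x2 game in each player's own utilons (made-up numbers).
A = np.array([[3.0, 0.0], [5.0, 1.0]])   # player 1's payoffs
B = np.array([[3.0, 5.0], [0.0, 1.0]])   # player 2's payoffs
print(coco_map(A, B, c1=1.0, c2=2.0))
```

The question then becomes whether there is a frontier point whose conversion rates make `coco_map` return that same point.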
One interesting case where this theorem doesn't apply would be if there are only finitely many possible outcomes. This is physically plausible: consider multiplying the maximum data density¹ by the spacetime hypervolume of your future light cone from now until the heat death of the universe.
¹ <https://physics.stackexchange.com/questions/2281/maximum-theoretical-data-density>
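As a rough worked version of that back-of-envelope: one standard way to cash out "maximum data density" is the Bekenstein bound, which caps the information content of a bounded region, so the number of distinguishable outcomes is at most 2^bits, which is finite. The radius and mass-energy figures below are order-of-magnitude placeholders I'm supplying for illustration, not numbers from the linked question.

```python
# Rough Bekenstein-bound estimate: bits <= 2*pi*R*E / (hbar * c * ln 2).
# R and M are order-of-magnitude placeholders for "the region we care about"
# (here, something observable-universe-sized), not precise figures.
from math import pi, log
from scipy.constants import hbar, c

R = 4.4e26          # radius in metres (rough radius of the observable universe)
M = 1e53            # mass in kg (rough mass-energy content)
E = M * c**2        # energy in joules

max_bits = 2 * pi * R * E / (hbar * c * log(2))
print(f"{max_bits:.2e} bits")   # on the order of 1e123
# Finitely many bits => at most 2**max_bits distinguishable outcomes.
```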
Is Science Maniac Verrez a real series, for which HJPEV was named? Or was it invented for glowfic, with the causation going the other way?
Relatedly, are Thellim or Keltham based on anyone you knew? (or for that matter on celebrities, or characters from fiction written in dath ilan?)
I don't see how to do that, especially given that it's not a matter of meeting some threshold, but rather of maximizing a value that can grow arbitrarily.
Actually, you don't even need the ways-to-arrange argument. Suppose I want to predict/control the value of a particular nonnegative integer $n$ (the number of cubbyholes), with monotonically increasing utility, e.g. $U(n) = n$. Then the encoding length of a given outcome must be longer than the code length for each greater outcome: $\ell(n) > \ell(m)$ for all $m > n$. However, code lengths must be a nonnegative integer number of code symbols in length, so for any given encoding there are at most $\ell(n)$ shorter code lengths, so the encoding must fail no later than $n + \ell(n) + 1$.
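A five-line check of that pigeonhole step (entirely my own illustration): if each larger outcome must get a strictly shorter nonnegative integer code length, the assignment runs out of room after at most $\ell(n)$ further outcomes.

```python
# If l(n) > l(n+1) > l(n+2) > ... and every l(k) is a nonnegative integer,
# the chain can continue for at most l(n) more steps before going negative.
def steps_until_failure(initial_length: int) -> int:
    length, steps = initial_length, 0
    while length - 1 >= 0:        # next outcome still has a legal, shorter length
        length -= 1               # tightest possible choice: shrink by exactly 1
        steps += 1
    return steps                  # equals initial_length

assert steps_until_failure(10) == 10   # outcome n + 11 cannot be encoded
```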
What if I want greebles?
To misuse localdeity's example, suppose I want to build a wall with as many cubbyholes as possible, so that I can store my pigeons in them. In comparison to a blank wall, each hole makes the wall more complex, since there are more ways to arrange $n+1$ holes than to arrange $n$ holes (assuming the wall can accommodate arbitrarily many holes).
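A quick numeric check of that ways-to-arrange claim (the number of candidate positions is an arbitrary toy figure): with far more candidate positions than holes, the count of arrangements $\binom{P}{n}$ keeps growing as $n$ does.

```python
# Ways to place n indistinguishable holes among P candidate positions.
# P is a made-up toy number, much larger than any n we look at
# ("the wall can accommodate arbitrarily many holes").
from math import comb

P = 1_000_000
for n in range(5):
    print(n, comb(P, n))   # strictly increasing in n while n < P / 2
```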