Comments

Comment by Sonata Green on On attunement · 2024-04-17T04:16:04.150Z · LW · GW

I gesture vaguely at Morality as Fixed Computation, moral realism, utility-function blackboxes, and Learning What to Value: discerning an initially-uncertain utility function that it may not be straightforward to get definitive information about.

Comment by Sonata Green on Qualitatively Confused · 2024-03-23T19:02:19.675Z · LW · GW

Also, the math of AIXI assumes the environment is separably divisible - no matter what you lose, you get a chance to win it back later.

Does this mean that we don't even need to get into anything as esoteric as brain surgery – that AIXI can't learn to play Sokoban (without the ability to restart the level)?

Comment by Sonata Green on My Time As A Goddess · 2023-07-04T18:32:27.623Z · LW · GW

Belief in disbelief, perhaps.

Comment by Sonata Green on The novelty quotient · 2023-06-29T19:37:55.797Z · LW · GW

(cf. nonstandard standard toolbox.)

Comment by Sonata Green on By Which It May Be Judged · 2023-04-21T10:32:48.314Z · LW · GW

The religious version

You may want something like "the Christian version". Ancient Greek paganism was a religion.

Comment by Sonata Green on [deleted post] 2023-04-21T09:53:04.033Z

The link is broken; the current location seems to be here.

Comment by Sonata Green on Harry Potter and the Methods of Psychomagic | Chapter 1: Affect · 2023-04-21T02:14:07.775Z · LW · GW
  1. This is our second important observation

The numbering seems to have gotten borked somehow.

Comment by Sonata Green on Bayeswatch 6.5: Therapy · 2023-04-20T23:07:37.795Z · LW · GW

I somehow managed to read this before chapter 6. With context, "who was too young" hits harder.

Comment by Sonata Green on Bayeswatch 5: Hivemind · 2023-04-20T22:46:20.972Z · LW · GW

The fact that this is #5 turns out to be darkly appropriate.

Comment by Sonata Green on The Teacup Test · 2023-04-20T04:03:33.000Z · LW · GW

Suppose I built a machine with artificial skin that felt the temperature of the cup and added ice to cold cups and lit a fire under hot cups.

Should this say "added ice to hot cups and lit a fire under cold cups"?

"Oh, right," said Xenophon, "How about 'Systems that would adapt their policy if their actions would influence the world in a different way'?"

"Teacup test," said Socrates.

This seems wrong. The artificial-skin machine adds ice or lights a fire based solely on the cup's current temperature; if it finds itself in a world where ice makes tea hotter and fire makes tea colder, it does not adapt its strategy.

Comment by Sonata Green on [deleted post] 2023-04-20T00:06:03.520Z

My mouth tasted like wrapping that hadn't been thrown away yet and might still have a little of whatever it used to be wrapped around somewhere on it.

This is a good sentence.

Comment by Sonata Green on Harry Potter in The World of Path Semantics · 2023-04-19T23:14:42.138Z · LW · GW

I think this might be trying to talk about something related to identity of indiscernibles, the disquotational principle, and the masked-man fallacy. I'm not sure how you get from "different names for the same entity" to "magical clones", though.

Comment by Sonata Green on Pascal's Muggle: Infinitesimal Priors and Strong Evidence · 2023-04-10T20:27:36.091Z · LW · GW

(edit: wide-open)

This link seems to no longer work.

Comment by Sonata Green on Prosaic misalignment from the Solomonoff Predictor · 2023-03-05T00:08:59.081Z · LW · GW

(Typo thread?)

"GPT-3" → "GPT-6"?

Comment by Sonata Green on Less Threat-Dependent Bargaining Solutions?? (3/2) · 2022-11-21T07:49:59.772Z · LW · GW

Step 1 in figuring out how to get an outcome which is Not That is to look at the list of nice properties which the CoCo solution uniquely fulfills, and figure out which one to break.

It seems to me that, ideally, one would like to be able to identify in advance which axioms one doesn't actually want/need, before encountering a motive to go looking for things to cut.

Comment by Sonata Green on Unifying Bargaining Notions (2/2) · 2022-11-21T07:43:19.056Z · LW · GW

And so, the question to ask now is something like "is there a point on the Pareto frontier where we can get the curnits/utilon conversion numbers from that point, convert everyone's utility to curnits, work out the CoCo value of the resulting game, convert back to utilons, and end up at the exact same point we started at?"

I'm mostly not following the math, but at first glance this feels more like it's defining a stable point rather than an optimal point.

Comment by Sonata Green on Impossibility results for unbounded utilities · 2022-02-17T06:01:56.547Z · LW · GW

One interesting case where this theorem doesn't apply would be if there are only finitely many possible outcomes. This is physically plausible: consider multiplying the maximum data density¹ by the spacetime hypervolume of your future light cone from now until the heat death of the universe.

¹ <https://physics.stackexchange.com/questions/2281/maximum-theoretical-data-density>
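
A rough sketch of that back-of-envelope, swapping in the holographic bound on the bounding surface area in place of a literal density-times-hypervolume product; the Planck length and Hubble radius figures are stock order-of-magnitude values, not from the comment above:

```python
import math

# Order-of-magnitude inputs (assumed values, not from the comment above).
PLANCK_LENGTH_M = 1.616e-35   # Planck length, metres
HUBBLE_RADIUS_M = 4.4e26      # rough radius of the observable universe, metres

# Holographic bound: the entropy inside a region is at most A / (4 * l_p^2)
# nats, where A is the area of the bounding surface.
area_m2 = 4 * math.pi * HUBBLE_RADIUS_M ** 2
max_nats = area_m2 / (4 * PLANCK_LENGTH_M ** 2)
max_bits = max_nats / math.log(2)

print(f"max information ~ 10^{math.log10(max_bits):.0f} bits")
# With these inputs, ~10^124 bits, hence at most ~2^(10^124) distinguishable
# outcomes: unimaginably many, but finite, which is all the argument needs.
```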

Comment by Sonata Green on I'm from a parallel Earth with much higher coordination: AMA · 2021-12-06T03:28:24.017Z · LW · GW

Is Science Maniac Verrez a real series, for which HJPEV was named? Or was it invented for glowfic, with the causation going the other way?

Relatedly, are Thellim or Keltham based on anyone you knew? (Or, for that matter, on celebrities, or on characters from fiction written in dath ilan?)

Comment by Sonata Green on [Fiction] Lena (MMAcevedo) · 2021-05-14T23:31:10.423Z · LW · GW

Subtle?

Comment by Sonata Green on I'm from a parallel Earth with much higher coordination: AMA · 2021-05-10T23:33:41.239Z · LW · GW

Found it.

Comment by Sonata Green on Utility Maximization = Description Length Minimization · 2021-02-26T00:56:45.996Z · LW · GW

I don't see how to do that, especially given that it's not a matter of meeting some threshold, but rather of maximizing a value that can grow arbitrarily.

Actually, you don't even need the ways-to-arrange argument. Suppose I want to predict/control the value of a particular nonnegative integer n (the number of cubbyholes), with monotonically increasing utility, e.g. U(n) = n. Then the encoding length L(n) of a given outcome must be longer than the code length for each greater outcome: L(n) > L(n+1) > L(n+2) > …. However, code lengths must be a nonnegative integer number of code symbols in length, so for any given encoding there are at most L(n) code lengths shorter than L(n), so the encoding must fail no later than outcome n + L(n) + 1.
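
A toy sketch of that pigeonhole step, with illustrative numbers (n0 and k are hypothetical, not from the original comment):

```python
def decreasing_length_chain(n0: int, k: int) -> list[tuple[int, int]]:
    """Longest possible run of outcomes whose codeword lengths strictly
    decrease, starting from L(n0) = k. Returns (outcome, length) pairs."""
    chain, n, length = [], n0, k
    while length >= 0:
        chain.append((n, length))
        n, length = n + 1, length - 1  # each greater outcome needs a shorter code
    return chain

chain = decreasing_length_chain(n0=0, k=8)
print(chain[-1])  # (8, 0): outcome n0 + k is already at length 0, so outcome
                  # n0 + k + 1 = 9 has no nonnegative length left to use.
```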

Comment by Sonata Green on Utility Maximization = Description Length Minimization · 2021-02-25T08:38:08.896Z · LW · GW

What if I want greebles?

To misuse localdeity's example, suppose I want to build a wall with as many cubbyholes as possible, so that I can store my pigeons in them. In comparison to a blank wall, each hole makes the wall more complex, since there are more ways to arrange n+1 holes than to arrange n holes (assuming the wall can accommodate arbitrarily many holes).
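
A quick sanity check of that counting claim, with a hypothetical wall of SLOTS candidate hole positions (the figure is made up; the point only needs the count to keep growing while holes remain sparse):

```python
from math import comb

SLOTS = 1000  # hypothetical number of candidate hole positions

for n in range(6):
    # Ways to place n indistinguishable holes among SLOTS positions;
    # strictly increasing in n for all n < SLOTS / 2.
    print(n, comb(SLOTS, n))
```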