Posts

Betting on what is un-falsifiable and un-verifiable 2023-11-14T21:11:14.943Z
Meaningful things are those the universe possesses a semantics for 2022-12-12T16:03:31.413Z
A way to beat superrational/EDT agents? 2020-08-17T14:33:58.248Z
Utility functions without a maximum 2020-08-11T12:58:44.354Z
Godel in second-order logic? 2020-07-26T07:16:26.995Z
Political Roko's basilisk 2020-01-18T09:34:19.981Z

Comments

Comment by Abhimanyu Pallavi Sudhir (abhimanyu-pallavi-sudhir) on Meaningful things are those the universe possesses a semantics for · 2022-12-13T18:06:25.144Z · LW · GW

I think that the philosophical questions you're describing actually evaporate and turn out to be meaningless once you think enough about them, because they have a very anthropic flavour.

Comment by Abhimanyu Pallavi Sudhir (abhimanyu-pallavi-sudhir) on Meaningful things are those the universe possesses a semantics for · 2022-12-13T17:15:06.492Z · LW · GW

I don't think that's exactly true. But why do you think that follows from what I wrote?

Comment by Abhimanyu Pallavi Sudhir (abhimanyu-pallavi-sudhir) on Meaningful things are those the universe possesses a semantics for · 2022-12-13T10:01:19.593Z · LW · GW

That's syntax, not semantics.

Comment by Abhimanyu Pallavi Sudhir (abhimanyu-pallavi-sudhir) on Meaningful things are those the universe possesses a semantics for · 2022-12-13T10:00:44.639Z · LW · GW

It's really not; that's the point I made about semantics.

Eh, that's kind of right; my original comment there was dumb.

Comment by Abhimanyu Pallavi Sudhir (abhimanyu-pallavi-sudhir) on Meaningful things are those the universe possesses a semantics for · 2022-12-13T08:45:09.373Z · LW · GW

You overstate your case. The universe contains a finite amount of incompressible information, which is strictly less than the information contained in . That self-reference applies to the universe is obvious, because the universe contains computer programs.

The point is that the universe is certainly a computer program, and that incompleteness applies to all computer programs (to all things with only finite incompressible information). In any case, I explained Godel with an explicitly empirical example, so I'm not sure what your point is.

Comment by Abhimanyu Pallavi Sudhir (abhimanyu-pallavi-sudhir) on Meaningful things are those the universe possesses a semantics for · 2022-12-13T06:45:36.231Z · LW · GW

I agree, and one could think of this in terms of markets: a market cannot capture all information about the world, because it is part of the world.

But I disagree that this is fundamentally unrelated -- here too the issue is that it would need to represent states of the world corresponding to the beliefs it expresses. Ultimately, mathematics is supposed to represent the real world.

Comment by Abhimanyu Pallavi Sudhir (abhimanyu-pallavi-sudhir) on The Extraordinary Link Between Deep Neural Networks and the Nature of the Universe · 2020-09-04T09:11:08.491Z · LW · GW
Comment by Abhimanyu Pallavi Sudhir (abhimanyu-pallavi-sudhir) on A way to beat superrational/EDT agents? · 2020-08-20T04:16:57.321Z · LW · GW

No, it doesn't. There is no 1/4 chance of anything once you've found yourself in Room A1.

You do acknowledge that the payout to the agent in Room B (if it exists) from your actions is the same as the payout to you from your own actions, which, if the coin came up tails, is $3, yes?

Comment by Abhimanyu Pallavi Sudhir (abhimanyu-pallavi-sudhir) on A way to beat superrational/EDT agents? · 2020-08-20T04:11:57.259Z · LW · GW

I don't understand what you are saying. If you find yourself in Room A1, you simply eliminate the last two possibilities, so the total payout of Tails becomes 6.

If you find yourself in Room A1, you do find yourself in a world where you are allowed to bet. It doesn't make sense to consider the counterfactual, because you have already received new information.

Comment by Abhimanyu Pallavi Sudhir (abhimanyu-pallavi-sudhir) on A way to beat superrational/EDT agents? · 2020-08-19T03:42:35.119Z · LW · GW

That's not important at all. The agents in rooms A1 and A2 themselves would do better to choose tails than to choose heads. They really are being harmed by the information.

Comment by Abhimanyu Pallavi Sudhir (abhimanyu-pallavi-sudhir) on A way to beat superrational/EDT agents? · 2020-08-18T06:20:16.087Z · LW · GW

I see, that is indeed the same principle (and also simpler: we don't need to worry about whether we "control" symmetric situations).

Comment by Abhimanyu Pallavi Sudhir (abhimanyu-pallavi-sudhir) on A way to beat superrational/EDT agents? · 2020-08-17T16:52:23.829Z · LW · GW

I don't think this is right. A superrational agent exploits the symmetry between A1 and A2, correct? So it must reason that an identical agent in A2 will reason the same way as it does, and if it bets heads, so will the other agent. That's the point of bringing up EDT.

Comment by Abhimanyu Pallavi Sudhir (abhimanyu-pallavi-sudhir) on Utility functions without a maximum · 2020-08-12T18:03:59.102Z · LW · GW

Wait, but can't the AI also choose to adopt the strategy "build another computer with a larger largest computable number"?

Comment by Abhimanyu Pallavi Sudhir (abhimanyu-pallavi-sudhir) on Utility functions without a maximum · 2020-08-12T17:58:20.896Z · LW · GW
Comment by Abhimanyu Pallavi Sudhir (abhimanyu-pallavi-sudhir) on Utility functions without a maximum · 2020-08-12T03:33:45.259Z · LW · GW

I don't understand the significance of using a TM -- is this any different from just applying some probability distribution over the set of actions?

Comment by Abhimanyu Pallavi Sudhir (abhimanyu-pallavi-sudhir) on Utility functions without a maximum · 2020-08-11T17:18:04.173Z · LW · GW

Suppose the function U(t) is increasing fast enough, e.g. if the probability of reaching t is exp(-t), then let U(t) be exp(2t), or whatever.

I don't think the question can be dismissed that easily.
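To make this concrete, here is a minimal worked version under one reading of the setup (my reading, not necessarily the post's): the agent commits to stopping at time T, reaches T with probability exp(-T), collects U(T) = exp(2T) if it does, and gets nothing otherwise. Then

\[
\mathbb{E}[\text{utility} \mid \text{stop at } T] = e^{-T} \cdot e^{2T} = e^{T},
\]

which grows without bound in T, so every stopping time is dominated by a later one and no plan is optimal.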

Comment by Abhimanyu Pallavi Sudhir (abhimanyu-pallavi-sudhir) on Utility functions without a maximum · 2020-08-11T15:14:56.844Z · LW · GW

It does not require infinities. E.g. you can just reparameterize the problem to the interval (0, 1); see the edited question. You just require an infinite set.
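For concreteness, one reparameterization that does the job (my own illustration; the edited question may use a different one): send $t \in [0, \infty)$ to $s = 1 - e^{-t} \in [0, 1)$ and set $V(s) = U(-\ln(1-s))$. The domain is now a bounded interval, but since the map is an order-preserving bijection, the "no maximum" structure is exactly preserved.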

Comment by Abhimanyu Pallavi Sudhir (abhimanyu-pallavi-sudhir) on Utility functions without a maximum · 2020-08-11T15:04:49.092Z · LW · GW

Infinite t does not necessarily deliver infinite utility.

Perhaps it would be simpler if I instead let t be in (0, 1], and U(t) = {t if t < 1; 0 if t = 1}.

It's the same problem, with 1 replacing infinity. I have edited the question with this example instead.
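Spelled out, in case it is useful: the supremum of attainable utility is 1, but it is never attained. Any choice $t < 1$ is beaten by $t' = (1+t)/2$, because

\[
t < \frac{1+t}{2} < 1 \quad\Rightarrow\quad U\!\left(\frac{1+t}{2}\right) > U(t),
\]

while the endpoint gives $U(1) = 0$. So no choice is optimal, even though everything is bounded and finite.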

(It's not a particularly weird utility function -- consider, e.g., an agent that needs to expend a resource such that the utility from expending the resource at time t is some fast-growing function f(t), while never expending the resource gives zero utility. In any case, an adversarial agent can always create this situation.)

Comment by Abhimanyu Pallavi Sudhir (abhimanyu-pallavi-sudhir) on Godel in second-order logic? · 2020-07-27T03:13:37.122Z · LW · GW

I see. So the answer is that Godel's statement is indeed true in all models of second-order PA, but unprovable nonetheless, since Godel's completeness theorem does not hold for second-order logic?
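Stated compactly, the reading I have in mind (correct me if this is not what you mean): full second-order PA is categorical, so its Godel sentence $G$ (built for whatever effective deductive system we use) is true in every model, yet that deductive system cannot derive it:

\[
\mathrm{PA}_2 \models G, \qquad \mathrm{PA}_2 \nvdash G.
\]

The two are compatible only because full second-order logic has no sound, complete, effective proof system -- semantic consequence does not imply derivability.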

Comment by Abhimanyu Pallavi Sudhir (abhimanyu-pallavi-sudhir) on Six economics misconceptions of mine which I've resolved over the last few years · 2020-07-19T04:50:41.373Z · LW · GW

This seems to be relevant to calculations of climate change externalities, where the research is almost always based on the direct costs of climate change if no one modified their behaviour, rather than the cost of building a sea wall, or planting trees.

Comment by Abhimanyu Pallavi Sudhir (abhimanyu-pallavi-sudhir) on A Fable of Science and Politics · 2020-07-17T17:00:01.403Z · LW · GW

Disagree. Daria considers the colour of the sky an important issue because it is socially important, not because it is of actual cognitive importance. Ferris recognizes that it doesn't truly change much about his beliefs, since their society doesn't have any actual scientific theories predicting the colour of the sky (if they did, the alliances would not be on uncorrelated issues like taxes and marriage), and he instead bothers with things he finds genuinely more important.

Comment by Abhimanyu Pallavi Sudhir (abhimanyu-pallavi-sudhir) on The Blue-Minimizing Robot · 2020-07-04T13:06:42.365Z · LW · GW

One can absolutely construct a utility function for the robot. It's a "shooting-blue maximizer". Just because the apparent utility function is wrong doesn't mean there isn't a utility function.
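As a toy illustration of the kind of thing I mean (my own construction, with a made-up encoding of the robot's history; none of this is from the original post):

```python
# A utility function over percept-action histories that the robot's policy
# does maximize: score a history by the number of timesteps at which the
# robot fired while its sensor registered blue. "Fire iff you see blue"
# is exactly the policy that maximizes this.
def shooting_blue_utility(history):
    """history: iterable of (saw_blue, fired) pairs, one per timestep."""
    return sum(1 for saw_blue, fired in history if saw_blue and fired)

# Example: the robot fires on both of its blue percepts and holds fire otherwise.
print(shooting_blue_utility([(True, True), (False, False), (True, True)]))  # -> 2
```

The recovered utility function is over histories of percepts and actions, not over "amount of blue in the world", which is exactly why the apparent world-directed utility function comes out wrong.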

Comment by Abhimanyu Pallavi Sudhir (abhimanyu-pallavi-sudhir) on No Logical Positivist I · 2020-06-12T13:37:14.439Z · LW · GW

I'm not sure your interpretation of logical positivism is what the positivists actually say. They don't argue against having a mental model that is metaphysical; they point out that this mental model is simply a "gauge", and that anything physical is invariant under changes of this gauge.

Comment by Abhimanyu Pallavi Sudhir (abhimanyu-pallavi-sudhir) on Political Roko's basilisk · 2020-01-19T09:26:44.292Z · LW · GW

Interesting. Did they promise to do so beforehand?

In any case, I'm not surprised the Soviets did something like this, but I guess the point is really "Why isn't this more widespread?" And also: "Why does this not happen with goals other than staying in power?" E.g., why has no one tried to pass a bill that says "Roko condition AND we implement this-and-this policy"? Because otherwise it seems that what the Soviets did was motivated by something other than Roko's basilisk.

Comment by Abhimanyu Pallavi Sudhir (abhimanyu-pallavi-sudhir) on Political Roko's basilisk · 2020-01-18T22:39:09.699Z · LW · GW

But that's not Roko's basilisk. Whether or not you individually vote for the candidate does not affect you as long as the candidate wins.

Comment by Abhimanyu Pallavi Sudhir (abhimanyu-pallavi-sudhir) on Against improper priors · 2019-12-11T20:38:27.359Z · LW · GW

The "Dutch books" example is not restricted to improper priors. I don't have time to transform this into the language of your problem, but the basically similar two-envelopes problem can arise from the prior distribution:

f(x) = (1/4)(3/4)^n if x = 2^n for some n >= 0, and f(x) = 0 otherwise

Considering this as a prior on the amount of money in an envelope, the expectation of the amount in the envelope you didn't choose is always 8/7 of the amount in the envelope you did choose.
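Sketch of where the 8/7 comes from, under the standard two-envelopes setup I have in mind (one envelope contains double the other, the f above is the prior on the smaller amount, and you pick one of the two at random): if you hold $2^k$ with $k \ge 1$, the pair is $(2^{k-1}, 2^k)$ or $(2^k, 2^{k+1})$ with posterior odds $(3/4)^{k-1} : (3/4)^k = 1 : 3/4$, so

\[
\mathbb{E}[\text{other envelope} \mid \text{you hold } 2^k]
= \frac{1 \cdot 2^{k-1} + \tfrac{3}{4} \cdot 2^{k+1}}{1 + \tfrac{3}{4}}
= \frac{4 \cdot 2^{k-1}}{7/4}
= \frac{8}{7} \cdot 2^k.
\]

(If you hold the minimum amount $2^0 = 1$, the other envelope is certainly 2, which is better still.)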

There is no actual mathematical contradiction with this sort of thing -- with proper or improper priors, thanks to the timely appearance of infinities. See here for an explanation:

https://thewindingnumber.blogspot.com/2019/12/two-envelopes-problem-beyond-bayes.html