Posts

Do small studies add up? 2022-03-15T22:00:24.625Z

Comments

Comment by one_forward on Do small studies add up? · 2022-03-16T21:01:33.299Z · LW · GW

I see two reasons not to treat every measurement from the survey as having zero weight.

First, you'd like an approach that makes sense when you haven't considered any data samples previously, so you don't ignore the first person to tell you "humans are generally between 2 and 10 feet tall".

Second, in a different application you may think there is a causal mechanism by which a new study can provide unique information about the effect size. Then there's value in a model that updates a little on each new study but doesn't update without bound on infinitely many studies.
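
For what it's worth, here is a minimal sketch of that second point, using a toy hierarchical model in which every study shares an unknown systematic bias. The names and numbers are illustrative assumptions, not anything from the post:

```python
import math

# Toy hierarchical model (illustrative numbers):
#   theta ~ N(0, tau_theta^2)      the true effect size
#   bias  ~ N(0, tau_bias^2)       a systematic error shared by every study
#   x_i   = theta + bias + eps_i,  eps_i ~ N(0, sigma^2) per-study noise
# Because the bias is shared, averaging more studies only removes the
# per-study noise; uncertainty about the effect never drops below a floor.

tau_theta = 1.0   # prior sd of the true effect
tau_bias = 0.5    # sd of the shared bias
sigma = 2.0       # per-study noise sd

for n in (1, 10, 100, 10_000):
    # Mean of n studies: x_bar | theta ~ N(theta, tau_bias^2 + sigma^2 / n)
    likelihood_var = tau_bias**2 + sigma**2 / n
    posterior_var = 1.0 / (1.0 / tau_theta**2 + 1.0 / likelihood_var)
    print(f"n = {n:6d}   posterior sd of effect = {math.sqrt(posterior_var):.3f}")

# The sd falls toward sqrt(1 / (1/tau_theta^2 + 1/tau_bias^2)) ~ 0.447,
# not toward zero: each study updates you a little, but infinitely many
# studies do not update you infinitely.
```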

Comment by one_forward on Do small studies add up? · 2022-03-16T20:51:05.775Z · LW · GW

Thanks gwern! Jaynes is the original source of the height example, though I read it years ago and did not have the reference handy. I wrote this recently after realizing (1) the fallacy is standard practice in meta-analysis and (2) there is a straightforward better approach.

Comment by one_forward on Stupid Questions May 2015 · 2015-05-02T16:21:09.248Z · LW · GW

You can stagger the bets and offer either a 1A -> 1B -> 1A circle or a 2B -> 2A -> 2B circle.

Suppose the bets are implemented in two stages. In stage 1 you have an 89% chance of the independent payoff ($1 million for bets 1A and 1B, nothing for bets 2A and 2B) and an 11% chance of moving to stage 2. In stage 2 you either get $1 million (for bets 1A and 2A) or a 10/11 chance of getting $5 million.

Then suppose someone prefers a 10/11 chance of $5 million (bet 3B) to a sure $1 million (bet 3A), prefers 2A to 2B, and currently holds 2B in this staggered form. You do the following:

  1. Trade them 2A for 2B+$1.
  2. Play stage 1. If they don't move on to stage 2, they're down $1 from where they started. If they do move on to stage 2, they now have bet 3A.
  3. Trade them 3B for 3A+$1.
  4. Play stage 2.

The net effect of those trades is that they still played gamble 2B but gave you a dollar or two. If they prefer 3A to 3B and 1B to 1A, you can do the same thing to get them to circle from 1A back to 1A. It's not the infinite cycle of losses you mention, but it is a guaranteed loss.
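
A small sketch of the bookkeeping, assuming the standard Allais payoffs (1A: a sure $1M; 1B: 89% $1M, 10% $5M, 1% nothing; 2A: 11% $1M; 2B: 10% $5M) and the staggered implementation above; the trades and preferences are the ones stipulated in the comment:

```python
import random

def play_staggered_2B_circle(rng):
    """One run of the 2B -> 2A -> 2B circle described above.

    The agent starts holding 2B in the staggered form, prefers 2A to 2B,
    and prefers 3B (a 10/11 chance of $5M) to 3A (a sure $1M), so it
    accepts both $1 trades.  Returns the agent's net payoff.
    """
    payoff = -1                       # step 1: pays $1 to trade 2B away for 2A
    if rng.random() < 0.11:           # stage 1: 11% chance of reaching stage 2
        # Holding 2A, stage 2 is a sure $1M, i.e. the agent now holds 3A.
        payoff -= 1                   # step 3: pays $1 to trade 3A away for 3B
        if rng.random() < 10 / 11:    # step 4: plays stage 2 as bet 3B
            payoff += 5_000_000
    # The other 89% of the time stage 1 pays nothing for the 2-bets.
    return payoff

rng = random.Random(0)
n = 200_000
mean = sum(play_staggered_2B_circle(rng) for _ in range(n)) / n
# Plain 2B is worth 0.10 * $5M = $500,000 in expectation; the circle is the
# same gamble minus a guaranteed $1 and, 11% of the time, a second $1.
print(f"average payoff over {n} runs: ${mean:,.2f}")
```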

Comment by one_forward on Open thread, Jan. 26 - Feb. 1, 2015 · 2015-02-04T20:34:14.307Z · LW · GW

Yeah, I don't think it makes much difference in high dimensions. It's just more natural to talk about smoothness in the continuous case.

Comment by one_forward on Open thread, Jan. 26 - Feb. 1, 2015 · 2015-02-02T19:34:51.276Z · LW · GW

A note on notation: [0,1] with square brackets generally refers to the closed interval between 0 and 1. X is a continuous variable, not a boolean one.

Comment by one_forward on Causal decision theory is unsatisfactory · 2014-09-16T21:48:34.363Z · LW · GW

Why does UDT lose this game? If it knows anti-Newcomb is much more likely, it will two-box on Newcomb and do just as well as CDT. If Newcomb is more common, UDT one-boxes and does better than CDT.

Comment by one_forward on Causal decision theory is unsatisfactory · 2014-09-16T21:46:00.674Z · LW · GW

You seem to be comparing SMCDT to a UDT agent that can't self-modify (or commit suicide). The self-modifying part is the only reason SMCDT wins here.

The ability to self-modify is clearly beneficial (if you have correct beliefs and act first), but it seems separate from the question of which decision theory to use.

Comment by one_forward on Causal decision theory is unsatisfactory · 2014-09-16T21:42:06.924Z · LW · GW

This is a good example. Thank you. A population of 100% CDT agents, though, would end up at 100% mutual defection (DD), which is terrible. It's a point in UDT's favor that "everyone running UDT" leads to a better outcome for everyone than "everyone running CDT."

Comment by one_forward on Causal decision theory is unsatisfactory · 2014-09-16T03:01:56.925Z · LW · GW

Ok, that example does fit my conditions.

What if the universe cannot read your source code, but can simulate you? That is, the universe can predict your choices but it does not know what algorithm produces those choices. This is sufficient for the universe to pose Newcomb's problem, so the two agents are not identical.

The UDT agent can always do at least as well as the CDT agent by making the same choices a CDT agent would. It will only give a different output if that leads to a better result.

Comment by one_forward on Causal decision theory is unsatisfactory · 2014-09-16T01:44:13.044Z · LW · GW

Can you give an example where an agent with a complete and correct understanding of its situation would do better with CDT than with UDT?

An agent does worse by giving in to blackmail only if that makes it more likely to be blackmailed. If a UDT agent knows opponents only blackmail agents that pay up, it won't give in.

If you tell a CDT agent "we're going to simulate you and if the simulation behaves poorly, we will punish the real you," it will ignore that and be punished. If the punishment is sufficiently harsh, the UDT agent that changed its behavior does better than the CDT agent. If the punishment is insufficiently harsh, the UDT agent won't change its behavior.

The only examples I've thought of where CDT does better involve the agent having incorrect beliefs. Things like an agent thinking it faces Newcomb's problem when in fact Omega always puts money in both boxes.

Comment by one_forward on [LINK] Could a Quantum Computer Have Subjective Experience? · 2014-08-28T17:42:52.150Z · LW · GW

wolfgang proposed a similar example on Scott's blog:

I wonder if we can turn this into a real physics problem:

1) Assume a large-scale quantum computer is possible (thinking deep thoughts, but not really self-conscious as long as its evolution is fully unitary).

2) Assume there is a channel which allows enough photons to escape in such a way to enable consciousness.

3) However, at the end of this channel we place a mirror – if it is in the consciousness-OFF position the photons are reflected back into the machine and unitarity is restored, but in the consciousness-ON position the photons escape into the deSitter universe.

4) As you can guess we use a radioactive device to set the mirror into c-ON or c-OFF position with 50% probability.

Will the quantum computer now experience i) a superposition of consciousness and unconsciousness or ii) will it always have a “normal” conscious experience or iii) will it have a conscious experience in 50% of the cases ?

Scott responded that

I tend to gravitate toward an option that’s not any of the three you listed. Namely: the fact that the system is set up in such a way that we could have restored unitarity, seems like a clue that there’s no consciousness there at all—even if, as it turns out, we don’t restore unitarity.

This answer is consistent with my treatment of other, simpler cases. For example, the view I’m exploring doesn’t assert that, if you make a perfect copy of an AI bot, then your act of copying causes the original to be unconscious. Rather, it says that the fact that you could (consistent with the laws of physics) perfectly copy the bot’s state and thereafter predict all its behavior, is an empirical clue that the bot isn’t conscious—even before you make a copy, and even if you never make a copy.

Comment by one_forward on Open thread, 25-31 August 2014 · 2014-08-25T18:49:47.808Z · LW · GW

A&B cannot be more probable than A, but evidence may support A&B more than it supports A.

For example, suppose you have independent prior probabilities of 1/2 for A and for B. The prior probability of A&B is 1/4. If you are then told "A iff B," the probability for A does not change but the probability of A&B goes up to 1/2.
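
A quick check of that arithmetic by enumerating the four equally likely worlds (just a verification of the numbers above, as a throwaway script):

```python
from fractions import Fraction

# The four equally likely worlds for independent A, B with P(A) = P(B) = 1/2.
worlds = [(a, b) for a in (True, False) for b in (True, False)]
prior = {w: Fraction(1, 4) for w in worlds}

def prob(event, dist):
    return sum(p for w, p in dist.items() if event(w))

# Condition on being told "A iff B".
def iff(w):
    return w[0] == w[1]

z = prob(iff, prior)
posterior = {w: (p / z if iff(w) else Fraction(0)) for w, p in prior.items()}

print("P(A):    ", prob(lambda w: w[0], prior), "->", prob(lambda w: w[0], posterior))
print("P(A & B):", prob(lambda w: w[0] and w[1], prior), "->", prob(lambda w: w[0] and w[1], posterior))
# P(A) stays at 1/2 while P(A & B) rises from 1/4 to 1/2.
```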

The reason specific theories are better is not that they are more plausible, but that they contain more useful information.

Comment by one_forward on Open thread, 18-24 August 2014 · 2014-08-19T03:00:11.687Z · LW · GW

Your scheme seems to be Jaynes's Ap distribution, discussed on LW here.
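
For anyone unfamiliar with it, a minimal sketch of the flavor of that approach, using a Beta distribution as a stand-in for the Ap density (purely illustrative, not taken from Jaynes or the linked post):

```python
# Two agents assign the same probability 0.5 to a proposition, but hold
# different distributions over that probability, so the same evidence
# moves them by very different amounts.

def beta_mean(a, b):
    return a / (a + b)

agents = {
    "confident": (1000, 1000),  # Beta(1000, 1000): tightly peaked at 0.5
    "uncertain": (1, 1),        # Beta(1, 1): uniform over [0, 1]
}

for name, (a, b) in agents.items():
    before = beta_mean(a, b)
    after = beta_mean(a + 3, b)   # both observe the same 3 confirming instances
    print(f"{name}: {before:.3f} -> {after:.3f}")

# The confident agent barely moves (0.500 -> 0.501); the uncertain one moves
# a lot (0.500 -> 0.800), even though both started at probability 0.5.
```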