notfnofn's Shortform

post by notfnofn · 2024-06-11T12:07:21.911Z · LW · GW · 15 comments

comment by notfnofn · 2024-06-14T14:01:00.205Z · LW(p) · GW(p)

The hot hand fallacy: seeing data that is typical for independent coin flips as evidence for correlation between adjacent flips.

The hot hand fallacy fallacy (Miller & Sanjurjo, 2018): failing to correct for the fact that amongst random length-k (k>2) sequences of independent coin tosses with at least one heads before toss k, the expected proportion of (heads after heads)/(tosses after heads) is less than 1/2.

The hot hand fallacy fallacy fallacy: Misinterpreting the above observation as a claim that under some weird conditioning, the probability of Heads given you have just seen Heads is less than 1/2 for independent coin tosses.
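
For concreteness, a brute-force check of this claim (a sketch of mine, assuming a fair coin; the function name is just illustrative):

```python
from fractions import Fraction
from itertools import product

def expected_streak_proportion(k):
    """Average, over the equally likely length-k sequences with at least one
    heads before toss k, of that sequence's proportion of heads among tosses
    immediately following a heads."""
    proportions = []
    for seq in product("HT", repeat=k):
        followers = [seq[i + 1] for i in range(k - 1) if seq[i] == "H"]
        if followers:  # conditioning: at least one heads before toss k
            proportions.append(Fraction(followers.count("H"), len(followers)))
    return sum(proportions) / len(proportions)

for k in range(3, 7):
    print(k, expected_streak_proportion(k))  # 5/12, 17/42, ...: all strictly below 1/2
```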

Replies from: JBlack
comment by JBlack · 2024-06-15T01:17:17.644Z · LW(p) · GW(p)

amongst random length-k (k>2) sequences of independent coin tosses with at least one heads before toss k, the expected proportion of (heads after heads)/(tosses after heads) is less than 1/2.

Does this need to be k>3? Checking this for k=3 yields 6 sequences in which there is at least one head before toss 3. In these sequences there are 4 heads-after-heads out of 8 tosses-after-heads, which is exactly 1/2.

Edit: Ah, I see: this is more like an average game score than a pooled proportion. Two "scores" of 1, one "score" of 1/2, and three "scores" of 0 across the 6 equally likely conditional sequences, giving an expected value of 5/12 < 1/2.
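
The two readings can be made explicit with a quick enumeration for k=3 (again a sketch of mine, fair coin assumed):

```python
from fractions import Fraction
from itertools import product

pooled_heads, pooled_tosses, scores = 0, 0, []
for seq in product("HT", repeat=3):
    followers = [seq[i + 1] for i in range(2) if seq[i] == "H"]
    if followers:  # at least one heads before toss 3
        pooled_heads += followers.count("H")
        pooled_tosses += len(followers)
        scores.append(Fraction(followers.count("H"), len(followers)))

print(Fraction(pooled_heads, pooled_tosses))  # 1/2:  pooling all tosses-after-heads
print(sum(scores) / len(scores))              # 5/12: averaging per-sequence "scores"
```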

comment by notfnofn · 2024-07-19T13:47:29.032Z · LW(p) · GW(p)

Currently trying to understand why the LW community is largely in favor of prediction markets.

  1. Institutions and smart people with a lot of cash will invest money in what they think is undervalued, not necessarily in what they think is the best outcome. But now suddenly they have a huge interest in the "bad" outcome coming to pass.

  2. To avoid (1), you would need to prevent people and institutions from investing large amounts of cash into prediction markets. But then the EMH really can't be assumed to hold.

  3. I've seen discussion of conditional prediction markets (if we do X then Y will happen). If a bad foreign actor can influence policy by making a large "bad" investment in such a market, such that they reap more rewards from the resulting policy than the investment costs, they will likely do so. A necessary (but I'm not convinced sufficient) condition for preventing this is to have a lot of money in these markets. But then see (1). (A toy sketch of this incentive follows below.)
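
As a toy illustration of the incentive in (3) (all numbers hypothetical; manipulation_profitable is my own naming, not anything from a real market):

```python
# Toy sketch: an actor gains external payoff R (in dollars) if policy X is
# adopted, and the decision rule adopts X when the conditional market's price
# exceeds a threshold q. Pushing the price from its honest level p up to q
# means buying shares at an average price of roughly (p + q) / 2 when their
# fair value is p, so each share loses about (p + q) / 2 - p in expectation.

def manipulation_profitable(R, p, q, shares):
    expected_trading_loss = shares * ((p + q) / 2 - p)
    return R > expected_trading_loss

print(manipulation_profitable(R=1_000_000, p=0.40, q=0.55, shares=5_000_000))
# True: the external payoff dwarfs the expected trading loss here.
```

(This ignores informed traders betting against the distortion, which is exactly where market depth comes in.)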

Replies from: Tenoke, notfnofn
comment by Tenoke · 2024-07-19T13:57:01.667Z · LW(p) · GW(p)

If we get to the point where prediction markets actually direct policy, then yes, you need them to be very deep, which in at least some cases is expected to happen naturally or can be achieved by subsidizing the market. But you also want to base the decision on a deeper analysis than just the resulting percentages: market depth, analysis of unusually large trades, blocking bad actors, etc.

Replies from: notfnofn
comment by notfnofn · 2024-07-19T14:11:19.679Z · LW(p) · GW(p)

This assuages my apprehension about (3) somewhat, although I fear that politicians are (probably intentionally) stupid when it comes to interpreting data for the sake of pushing policies.

comment by notfnofn · 2024-07-19T13:49:40.690Z · LW(p) · GW(p)

To add: this seems like the kind of interesting game theory problem I would expect to see some serious work on from members of this community. If there is such a paper, I'd like to see it!

Replies from: Larks
comment by Larks · 2024-07-19T15:30:55.301Z · LW(p) · GW(p)

A bit dated, but have you read Robin Hanson's 2007 paper on the subject?

Prediction markets are low volume speculative markets whose prices offer informative forecasts on particular policy topics. Observers worry that traders may attempt to mislead decision makers by manipulating prices. We adapt a Kyle-style market microstructure model to this case, adding a manipulator with an additional quadratic preference regarding the price. In this model, when other traders are uncertain about the manipulator’s target price, the mean target price has no effect on prices, and increases in the variance of the target price can increase average price accuracy, by increasing the returns to informed trading and thereby incentives for traders to become informed.

Replies from: notfnofn
comment by notfnofn · 2024-07-19T15:44:21.411Z · LW(p) · GW(p)

No, but it's exactly what I was looking for, and surprisingly concise. I'll see if I believe the inferences from the math involved when I take the time to go through it!

comment by notfnofn · 2024-06-22T14:22:02.865Z · LW(p) · GW(p)

Reading "Thinking Fast and Slow" for the first time, and came across an idea that sounds huge if true: that the amount of motivation one can exert in a day is limited. Given the replication crisis, I'm not sure how much credence I should give to this.

A corollary would be to make sure one's non-work daily routines are extremely low-willpower when it's important to accomplish a lot during the work day. This flies in the face of other conventional wisdom I've heard regarding discipline, even granting the possibility that the total amount of willpower one can exert each day can increase with practice.

Anecdotally, my best work days typically start with a small amount of willpower (a cold shower the night before, waking up early, completing a very short exercise routine, prepping brunch, and biking instead of driving to a library/coffee shop). The people I know who were best at puzzles and other high-effort System 2 activities were the type who would usually complete assignments in school but never submit their best work.

Replies from: Seth Herd, Dagon, andrei-alexandru-parfeni
comment by Seth Herd · 2024-06-22T16:19:25.605Z · LW(p) · GW(p)

This hasn't stood up to replication, as you guessed. I didn't follow closely, so I'm not sure of the details. Physical energy is certainly limited and affects the efficiency of mental function. But last I knew, thinking hard doesn't use measurably more energy than just thinking. That sounds at least a little wrong given the subjective experience of really focusing on some types of mental work, so I'd guess there's a small real effect in some circumstances.

Anything written 20 years ago in cognitive science is now antique and needs to be compared against recent research.

comment by Dagon · 2024-06-22T17:37:00.139Z · LW(p) · GW(p)

It doesn't seem to be true in any literal, biological sense.  I don't recall that they had concrete cites for a lot of their theses, and this one doesn't even define the units of measurement (yes, glucose, but no comments about milligrams of glucose per decision-second or anything), so I don't know how you would really test it.  I don't think they claim that it's actually fixed over time, nor that you can't increase it with practice and intent.

However, like much of the book, it rings true in a "useful model in many circumstances" sense.  You're absolutely right that arranging things to require low-momentary-willpower can make life less stressful and probably get you more aligned with your long-term intent.  A common mechanism for this, as the book mentions, is to remove or reduce the momentary decision elements by arranging things in advance so they're near-automatic.  

comment by sunwillrise (andrei-alexandru-parfeni) · 2024-06-22T16:46:23.968Z · LW(p) · GW(p)

an idea that sounds huge if true: that the amount of motivation one can exert in a day is limited

This is basically the theory that willpower (and ego) are expendable resources that get depleted over time [LW · GW]. This was widely agreed upon in the earlier days of LessWrong (although with a fair deal of pushback [LW · GW] and associated commentary), particularly as a result of Scott Alexander and his writings (example illustration of his book review on this topic here) on this and related topics such as akrasia and procrastination-beating [? · GW] (which, as you can see from the history of posts containing this tag, were really popular on LW about 12-15 years ago). This 2018 post [LW · GW] by lionhearted explicitly mentions how discussions about akrasia were far more prevalent on the site around 2011, and Qiaochu's comments (1 [LW(p) · GW(p)], 2 [LW(p) · GW(p)]) give plausible explanations for why these stopped being so common.

I agree with Seth Herd below that this appears not to have survived the replication crisis. This is the major study that sparked general skepticism of the idea, with this associated article in Slate (both from 2016). That being said, I am not aware of other, more specific or recent details on this matter. Perhaps this 2021 Scott post over on ACX (which confirms that "[these willpower depletion] key results have failed to replicate, and people who know more about glucose physiology say it makes no theoretical sense") and this 2019 question [LW(p) · GW(p)] by Eli Tyre might be relevant here.

comment by notfnofn · 2024-06-11T12:07:22.212Z · LW(p) · GW(p)

Random thought after reading "A model of UDT with a halting oracle": imagine there are two super-intelligent AIs, A and B, suitably modified to have access to their own and each other's source code. They are both competing to submit a Python program of length at most N that prints the larger number and then halts (where N is orders of magnitude larger than the code lengths of A and B). A can try to "cheat" by submitting something like exec(run B on the query "submit a program of length N that prints a large number, then halts") then print(0), but B can do this as well. Supposing they must submit to a halting oracle that will punish any AI that submits a non-halting program, what might A and B do?
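
For a sense of what an entry might even look like, here is a sketch of mine (not a claimed optimal strategy): a short, provably halting Python program with Ackermann-style growth, which for modest code length outpaces any explicit stack of exponentials.

```python
def grow(k, n):
    # grow(0, n) = n + 1; for k > 0, grow(k, n) applies grow(k - 1, .) to n, n times.
    # Halting is provable by induction on k, so a halting oracle would accept it.
    if k == 0:
        return n + 1
    for _ in range(n):
        n = grow(k - 1, n)
    return n

print(grow(2, 5))  # 5 * 2**5 = 160; grow(3, 3) is already astronomically large,
                   # and grow(9, 9) is hopeless to evaluate but still provably halts
```

The interesting part of the game is precisely the self-referential exec move above, which this sketch deliberately avoids.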

Replies from: lahwran
comment by the gears to ascension (lahwran) · 2024-06-11T13:40:29.248Z · LW(p) · GW(p)

submit busy beaver

edit: wait, never mind. Busy Beaver itself takes a halting oracle to implement

Replies from: notfnofn
comment by notfnofn · 2024-06-11T14:12:49.110Z · LW(p) · GW(p)

Has to be a Python program; allowing arbitrary non-computable natural-language descriptions gets hairy fast