## Posts

Comment by Nikolaus Hansen (nikolaus-hansen) on When None Dare Urge Restraint · 2020-02-15T16:21:46.601Z · LW · GW

I don't understand how

“We have forgotten that the first purpose of government is not the economy, it is not health care, it is defending the country from attack.”

was a smarter-than-one-would-have-guessed response to 9/11. Had anyone forgotten to hire soldiers and fund the secret services before 9/11? Why was preventing 9/11 more important than reducing the number of traffic fatalities by, say, 30% (and thereby saving about 10,000 lives per year)? Or preventing 30% of the 45,000 yearly deaths due to lack of health insurance? What am I missing?

Comment by Nikolaus Hansen (nikolaus-hansen) on 0 And 1 Are Not Probabilities · 2019-12-26T15:20:21.557Z · LW · GW
y=x/(1-x) is not the bijection that he asserts it is, [...]. It's a function that maps [0,1] onto [1,\infty] as a subset of the topological closure of R.

How is that not a bijection? Specifically, a bijection between the sets [0,1] and [0,\infty], which seems to be exactly the claim EY is making.
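The invertibility claim can be spot-checked numerically (a minimal sketch, assuming the map is f(x) = x/(1-x) on [0, 1), with x = 1 sent to the point at infinity in the closure of R):

```python
def f(x):
    """Maps [0, 1) onto [0, inf); x = 1 would map to infinity."""
    return x / (1 - x)

def f_inv(y):
    """Candidate inverse: maps [0, inf) back onto [0, 1)."""
    return y / (1 + y)

# Spot-check that f_inv undoes f on a grid, i.e. f is invertible on [0, 1)
for i in range(100):
    x = i / 100
    assert abs(f_inv(f(x)) - x) < 1e-12
```

Since f is strictly increasing on [0, 1) and has an explicit inverse, it is a bijection onto its image.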

On a broader point, EY was not calling into question the correctness or consistency of mathematical concepts or claims but whether they have any useful meaning in reality. He was not talking about the map, he was talking about the territory and how we may improve the map to better reflect the territory.

Comment by Nikolaus Hansen (nikolaus-hansen) on Scientific Evidence, Legal Evidence, Rational Evidence · 2019-12-01T20:12:47.001Z · LW · GW
It seems dangerous to say, before running the experiment, that there is a “scientific belief” about the result.

I don't understand what the danger is. It seems just true that there is a scientific belief about the result in this case.

But if you already know the “scientific belief” about the result, why bother to run the experiment?

I can see immediately two reasons, namely because

• scientific beliefs can be wrong, and
• the only way to strengthen scientific beliefs is through experiments that could have falsified them.

Comment by Nikolaus Hansen (nikolaus-hansen) on Professing and Cheering · 2019-11-25T16:40:38.326Z · LW · GW

I recently encountered this idea also under the names of virtue signaling (to members of her community) or a loyalty badge (to her community or doctrine). The more outlandish the story, the stronger the signal or badge.

Comment by Nikolaus Hansen (nikolaus-hansen) on The Lens That Sees Its Flaws · 2019-11-24T14:48:54.207Z · LW · GW

Your optimistic nature cannot have that large an effect on the world; it cannot, of itself, decrease the probability of nuclear war by 20%, or however much your optimistic nature shifted your beliefs. Shifting your beliefs by a large amount, due to an event that only slightly increases your chance of being right, will still mess up your mapping.

I only need to assume that everybody else, or at least many other people, are as irrationally optimistic as I am; then the effect of optimism on the world could well be significant and amount to a 20% change. The assumption is not at all far-fetched.

Comment by Nikolaus Hansen (nikolaus-hansen) on Why Truth? · 2019-11-24T14:25:19.056Z · LW · GW

I am not sure, but there seem to be a couple of apostrophes missing in the sentence

[...] if were going to improve our skills of rationality, go beyond the standards of performance set by hunter-gatherers, well need deliberate beliefs [...]

Comment by Nikolaus Hansen (nikolaus-hansen) on Burdensome Details · 2019-11-22T15:16:38.563Z · LW · GW

I would be interested to see whether computing P(A and B) falsely as the average of P(A) and P(B) would model the error well. That way, any detail that fits the very unlikely primary event well increases its perceived likelihood.
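A tiny sketch of that averaging hypothesis (the concrete probabilities here are my assumed values, not from the post):

```python
# Hypothetical error model for the conjunction fallacy: the judged
# probability of "A and B" is the average of P(A) and P(B) instead
# of their product.
p_primary = 0.01  # a very unlikely primary event (assumed value)
p_detail = 0.90   # a plausible-sounding added detail (assumed value)

correct = p_primary * p_detail          # adding a detail can only lower it
averaged = (p_primary + p_detail) / 2   # the detail raises the judged value

assert correct < p_primary < averaged
```

Under this model, any detail more probable than the primary event pulls the average up, reproducing the characteristic error.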

Comment by Nikolaus Hansen (nikolaus-hansen) on The Martial Art of Rationality · 2019-11-20T01:55:09.030Z · LW · GW

What empirical evidence do we have that rationality is trainable like martial arts? How do we measure (change of) rationality skills?

Comment by Nikolaus Hansen (nikolaus-hansen) on Scope Insensitivity · 2019-11-19T23:43:19.379Z · LW · GW

I can see natural situations where scope insensitivity seems to be the right stance:

• Assuming we are ignorant about the absolute value of saving 4500 lives.
• Assuming all potentially affected people contribute on average the same scope-insensitive (constant) value. Then the contribution per saved life would be constant: for 45 saved out of 200 we have 200 contributions to save 45 lives, and for 45,000 saved out of 200,000 we have 200,000 contributions to save 45,000. That seems to make perfect sense.
• Assuming that the number of people who get to know about a problem is proportional to the problem size. Hence the number of people who can (and will on average) contribute to its solution is proportional to the problem size, and each single contribution need not scale with the problem's size. That is not at all a bad (implicit) assumption to have, IMHO.
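The arithmetic in the second bullet can be checked directly (a sketch under the stated assumption of a constant individual contribution; the value 1.0 is an arbitrary assumed amount):

```python
value_per_person = 1.0  # assumed constant, scope-insensitive contribution

def contribution_per_saved_life(saved, population):
    # Everyone potentially affected contributes the same constant amount;
    # the pooled total is then spread over the lives saved.
    return population * value_per_person / saved

small = contribution_per_saved_life(45, 200)
large = contribution_per_saved_life(45_000, 200_000)
assert small == large  # per-life contribution is identical in both cases
```

Because the population and the number saved scale together, the per-life contribution stays constant, which is exactly the point of the bullet.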

It even seems to me that any personal contribution must be intrinsically scope insensitive with respect to the denominator (the "out of how many" birds/humans/...), because no single person can possibly pay alone for the solution of a problem that affects a billion humans.