Comments
I am looking for rationalist PDFs, ebooks, and audiobooks for an upcoming flight. If anyone can bring any of these this Friday, it would be appreciated. (I've finished reading the Sequences.)
I mean that I am interested in undergoing adaptation through sleep deprivation, then something like Uberman, then Everyman.
It would not be viable for me to stay on a polyphasic schedule next year; ultimately I will have to return to something largely along the lines of segmented or monophasic sleep. Still, I have heard that undergoing polyphasic-style adaptation can help you become acclimatised to getting REM sleep within a 20-30 minute nap, something I currently can't do but which might be useful if I'm carrying a sleep debt or know I'm going to pull an all-nighter.
So the idea is adapting to a polyphasic schedule and then switching back to segmented or monophasic sleep. Would I expect to nap better afterwards? Is this likely to be useful or worthwhile?
Great thread.
Separate movies from TV, I think. I am trying to find movies for LW/THINK meetups and all I see is anime. Lots of appealing premises end up being too short to share in a meetup environment.
- Is it useful to learn to REM-nap even though I don't plan to sleep polyphasically? Is it worth going through adaptation?
- Where is a summary of the evidence for polyphasic sleeping, and likewise for paleo diets?
- Do you think rationality is a more contagious idea than effective altruism?
When will there be another online optimal philanthropy meetup?
Summary of this post: heuristics differ from biases in amount (of predictive power), not in kind.
Or perhaps they differ by some combination of predictive power, utility, and directness of relation to their prediction (susceptibility to being screened off).
Note Carl Shulman's counterargument to the assumption of a normal prior here, and the comments traded between Holden and Carl.
"If your prior was that charity cost-effectiveness levels were normally distributed, then no conceivable evidence could convince you that a charity could be 100x as good as the 90th percentile charity. The probability of systematic error or hoax would always be ludicrously larger than the chance of such an effective charity. One could not believe, even in hindsight, that paying for Norman Borlaug’s team to work on the Green Revolution, or administering smallpox vaccines (with all the knowledge of hindsight) actually did much more good than typical. The gains from resources like GiveWell would be small compared to acting like an index fund and distributing charitable dollars widely."
I think a more distinctly virtue-ethicist way of putting it is that they don't do slightly bad things because doing so would condition them to have bad dispositions, or to be bad people, which is intrinsically disvaluable.
People who avoid doing slightly bad things both to avoid instilling unhelpful habits and to prevent themselves from bringing about future harm are (roughly) global utilitarians.
This month's meetups have been excellent. I look forward to seeing you all next Friday, and Matt, thanks for the venue.