LessWrong 2.0 Reader
We've learned a lot about the visual system by looking at ways to force it to wrong conclusions, which we call optical illusions or visual art. Can we do a similar thing for this postulated social cognition system? For example, how do actors get us to have social feelings toward people who don't really exist? And what rules do movie directors follow to keep us from getting confused by cuts from one camera angle to another?
measure on Spatial attention as a “tell” for empathetic simulation?
Whereas if the brainstem does not have such a 3D spatial attention system, then I’m not sure how else fear-of-heights could realistically work.
I think part of the trigger is from the visual balance center. The eyes sense small changes in parallax as the head moves relative to nearby objects. If much of the visual field is at great distance (especially below, where the parallax signals are usually strongest and most reliable), then the visual balance center gets confused and starts disagreeing with the other balance senses.
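As a rough back-of-the-envelope illustration of why great distance starves this parallax signal (a small-angle geometry sketch; the function name and numbers are illustrative, not from the comment):

```python
import math

def parallax_shift_deg(head_motion_m: float, distance_m: float) -> float:
    """Approximate angular shift (in degrees) of an object at distance_m
    when the head translates head_motion_m perpendicular to the line of sight."""
    return math.degrees(math.atan2(head_motion_m, distance_m))

# A ~1 cm head sway against a nearby object vs. a distant one:
near = parallax_shift_deg(0.01, 0.5)    # object 0.5 m away -> ~1.1 degrees
far = parallax_shift_deg(0.01, 100.0)   # object 100 m away -> ~0.006 degrees
```

The shift falls off roughly as 1/distance, so when most of the visual field is far away (as when looking down from a height), the parallax cue becomes too small to corroborate the other balance senses.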
tamsin-leake on Take the wheel, Shoggoth! (Lesswrong is trying out changes to the frontpage algorithm)
I would feel better about this if there was a high-infosec platform on which to discuss what is probably the most important topic in history (AI alignment). But noted.
ruby on Take the wheel, Shoggoth! (Lesswrong is trying out changes to the frontpage algorithm)
I'd be interested in a comparison with the Latest tab.
ruby on Take the wheel, Shoggoth! (Lesswrong is trying out changes to the frontpage algorithm)
Typo? Do you mean "click on Recommended"? I think the answer is no: in order to have recommendations for individuals (and everyone), they have browsing data.
1) LessWrong itself doesn't aim for a super high degree of infosec. I don't believe our data is sensitive enough to warrant a large security overhead.
2) I trust Recombee with our data about as much as I trust ourselves to not have a security breach. Though actually, I could imagine LessWrong being of more interest to some person or group, and getting attacked.
It might help to understand what your specific privacy concerns are.
Does buying shorter-term OTM derivatives each year not work here?
viliam on dirk's Shortform
Specific examples would be nice. Not sure if I understand correctly, but I imagine something like this:
You always choose A over B. You have been doing it for such a long time that you forgot why. Without reflecting on this directly, it just seems like there is probably a rational reason or something. But recently, either accidentally or by experiment, you chose B... and realized that experiencing B (or expecting to experience B) creates unpleasant emotions. So now you know that the emotions were the real cause of choosing A over B all that time.
(This is probably wrong, but hey, people say that the best way to elicit an answer is to provide a wrong one.)
kave on My experience using financial commitments to overcome akrasia
I like comments about other users' experiences for similar reasons to why I like the OP. I think maybe the ideal such comment would identify itself more clearly as an experience report, but I'd rather have the report than not.
james-grugett on Nathan Young's Shortform
We are trying our best to honor mana donations!
If you are inactive, you have until the end of the year to donate at the old rate. If you want to donate all your investments without having to sell each one individually, we are offering you a loan to do that.
We removed the charity cap of $10k in donations per month, which goes beyond what we previously communicated.
gordon-seidoh-worley on Fundamental Uncertainty: Chapter 8 - When does fundamental uncertainty matter?
Author's note: This chapter took a really long time to write. Unlike previous chapters in the book, this one covers a lot more material in less detail, but I still needed to get the details right. It took a long time both to figure out what I really wanted to say and to make sure I wasn't saying things I would, upon reflection, regret having said because they were based on facts I don't believe or had simply gotten wrong.
It's likely still not the best version of this chapter that it could be, but at this point I think I've made all the key points I wanted to make, so I'm publishing the draft now and expect it to need a lot of love from an editor later on.