I recently watched the Coursera course "Learning How to Learn," and your post uses different words for some of the same things.
The course described what you call "shower-thoughts" as "diffuse mode" thinking, with an opposite called "focused mode" thinking, and the brain only able to do one at a time. Focused mode uses ideas that are already clustered together to solve familiar problems, while diffuse mode attempts to find useful connections between unclustered ideas to solve new problems in new ways. I'm not sure whether these are the formal terms from the literature behind the class, but if they are, it might be worth using them instead of making up our own jargon.
As for the class it definitely had some stuff that I still try to keep in mind, but it also had some things that I haven't quite figured out how to incorporate (chunking) or didn't find useful (some of the interviews). There is some overlap with what CFAR seems to be trying to teach. Overall I'd recommend taking a look if you have an hour or so per week over a month for it.
I agree. The difficult thing about introducing others to Less Wrong has always been that even if the new person remembers to say "It's my first time, be gentle," Less Wrong still has the girth of a rather large horse. You can't make it smaller without losing much of its necessary function.
Updated link to Piers Steel's meta-analysis on procrastination research (at least I think it's the correct paper): http://studiemetro.au.dk/fileadmin/www.studiemetro.au.dk/Procrastination_2.pdf
I think we're getting some word confusion. Groups that "make a big point of being anti-rational" are against the things that carry the label "rational"; however, they do tend to think of their own beliefs as being well thought out (i.e., rational).
"rationality" branding isn't as good for keeping that front and center, especially compared to, say the effective altruism meme
Perhaps a better branding would be "effective decision making", or "effective thought"?
As I've already explained, there's a difficult problem here about how to be appropriately modest about our own rationality. When I say something, I never think it's stupid, otherwise I wouldn't say it. But at least I'm not so arrogant as to go around demanding other people acknowledge my highly advanced rationality. I don't demand that they accept "Chris isn't saying anything stupid" as an axiom in order to engage with me.
I think this is the core of what you are disliking. Almost all of my reading on LW is in the Sequences rather than the discussion areas, so I haven't been in a position to notice anyone's arrogance. But I'm a little saddened and surprised by your experience, because for me the result of reading the Sequences has been less trust that my own level of sanity is high. I'm significantly less certain of my correctness in any argument.
We know that knowing about biases doesn't remove them, so instead of increasing our estimate of our own rationality, it should correct our estimate downwards. This shouldn't even cost us any pride, since we're also adjusting our estimates of everyone else's sanity down by a similar amount. As a check that we're doing things right, the result should be less time spent arguing and more time spent thinking about how we might be wrong and how to check our answers. Basically, it should remind us to use Type 2 thinking more whenever possible, and to seek effectiveness training for our Type 1 thinking whenever it's available.
This was enjoyable to me because "saving the world," as you put it, is completely unmotivating for me. (Luckily I have other sources of motivation.) It's interesting to see what drives other people and how the source of their drive changes their trajectory.
I'm definitely curious to see a sequence, or at least a short feature list, describing your model for a government that structurally ratchets better instead of worse. That's something that's never been achieved consistently in practice.
I think he means "create a functional human you, while primarily sourcing the matter from your old body". He's commenting that slicing the brain makes this more difficult, but it sounds like the alterations caused by current vitrification techniques make it impossible either way.
The problem here seems to be that the theories don't take everything we value into account, which makes it less certain that their functions actually match our morals. If you calculate utility using only some of your values, you're not going to get the correct result. If you're trying to sum the set {1, 2, 3, 4} but you only use 1, 2, and 4 in the calculation, you're going to get the wrong answer. Outside of special cases like "multiply each item by zero," it doesn't matter whether you add, subtract, or divide; the answer will still be wrong. For example, the calculations given for total utilitarianism fail to include values for continuity of experience.
This isn't to say that ethics are easy, but we're going to have a devil of a time testing them with impoverished input.
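To make the arithmetic above concrete, here's a minimal Python sketch; the value names and numbers are purely hypothetical stand-ins, not anyone's actual utility function:

```python
# Hypothetical illustration: summing a utility function while omitting one of
# the things we actually value gives the wrong total, no matter how the
# remaining terms are combined.
values = {
    "happiness": 1,
    "autonomy": 2,
    "continuity_of_experience": 3,  # the term the comment above says gets dropped
    "fairness": 4,
}

true_total = sum(values.values())  # 10

# Same calculation with one valued item left out of the input entirely.
impoverished = {k: v for k, v in values.items()
                if k != "continuity_of_experience"}
wrong_total = sum(impoverished.values())  # 7

print(true_total, wrong_total)  # 10 7 -- off by exactly the omitted value
```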
If the primary motivation for attending is the emotional reward of meeting others interested in rationality and feeling that you've learned how to be more rational, then yes, a Christian brainwashing retreat would make you glad you attended in the same way, if and only if you are (or became) Christian, since non-Christians likely wouldn't enjoy a Christian brainwashing retreat.
That said, since many of us have little or no data on changes (if any) in the rationality of attendees, attending is the only real option you have to test whether it helps. Confirmation bias would make a positive result weak evidence, but it would still be relatively important given the lack of other evidence. Luckily, even if the retreat doesn't benefit your objective level of rationality, it sounds worthwhile on the undisputed emotional merits.
I think what SilasBarta is trying to ask is: do we have any objective measurements yet from the previous minicamp that add weight to the hypothesis that this camp does in fact improve rationality or life achievement over either the short or long term?
If not, then I'm still curious: are there any plans to study the rationality of attendees and non-attendees to establish such evidence?
Anecdotally: I'm not diabetic that I know of, but my mood is highly dependent on how well and how recently I've eaten. I get very irritable and can break down into tears easily if I'm more than four hours overdue for a meal.
So it's ok to call people stupid or insane, but it's NOT ok to call them ignorant? I'd much rather be ignorant than stupid or insane because ignorance is a condition that can be cured rather than an inherent attribute of an individual.
And in this day of freely available education, ignorance is indeed equivalent to a mental defect. At the very least it shows a defect in the natural desire to learn.
It makes me think of "Rationality Orgy", but that's just me. I'm not sure how I feel about that as I haven't been to a meetup yet.
ooo I want to go!
There's a book to this effect: http://www.amazon.com/gp/product/0691142084/ref=oh_o03_s01_i01_details
A little googling will bring up some very convincing lectures on the subject by the author. Unfortunately he hasn't made many headlines or much headway in actually implementing these ideas.
Hi LessWrongians, I've actually been reading this for a few months since I discovered it through HPMOR, but I just found this thread. I've been a traditional rationalist for a long time, but it's great to find that there is a community devoted to uncovering and eliminating all the human biases that aren't obvious when you're inside them.
I'm 27 with a BS in Business Information Systems and working as an analyst, though I consider this career a stopgap until I figure out something more entrepreneurial to do. I've been slowly reading through the sequences, but my brain can only handle so much at a time.
Mostly I just want to say thanks to everyone who writes/reads/comments on LessWrong. This site is awesome. It's the only place I've found on the internet that consistently makes me stop and think instead of just rolling my eyes.
Technically true, but that's a horrible analogy. Bullies are still a problem even if you don't notice them; an ugly picture is no problem at all if no one sees it, so in a way it is worse.
If being statistical and probabilistic settles oft-discussed intellectual debates so thoroughly as to dampen further discussion, that's a great thing!
The goal is to get correct answers and move on to the unanswered, unsettled questions that are preventing progress; the goal is NOT to allow a debate to go any longer than necessary, especially (as Nisan mentioned) if the debate is not sane/intelligent.
That is completely off topic; it's irrelevant, bordering on nihilism. Sure, the universe doesn't care, because as far as we know the universe isn't sentient. So what? That has no bearing on the desire for death or the death of others.
If knowing that number 2 is true (rationally or otherwise) were really enough, then no one would cry at funerals. "Oh, they're also alive; we're just viewing them as dead," people would say. Just because I'm dreaming doesn't mean I don't want to have a good dream, or to have the good dream keep going. It also doesn't mean I don't care whether other people are having good dreams or bad ones.
As others mentioned, this sounds specific to uploading. Luckily for your argument, instant-copy uploading is not the only possible future. I find it more plausible that instead of full-blown uploading we will have cyborg-style enhancements that eventually replace our original biological selves entirely, for exactly the reasons he objects to instant copying. There's the Ship of Theseus paradox to deal with here, but as long as the change is gradual and I feel I am still myself the entire time, there would be no protest.
Again there's no disagreement here. If we get meat replacements, they could be made one piece at a time with no protest. Our bodies already do this to a large extent during our lives. No one complains when the cut on their hand heals.
Many worlds are nice, except that they are not THIS world.
I'd also throw in Aubrey de Grey's oft-used exercise, which goes along the lines of: Do you want to live one more day? (A: yes.) Do you expect to want to live one more day tomorrow? (A: yes.) If that answer is always true, then you want to live forever. If not, at what point would you change your answer?
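One way to read the exercise (my own formalization, not de Grey's) is as a simple induction, with W(n) standing for "you want to be alive on day n":

```latex
% Hedged formalization of the exercise, with W(n) = "you want to be alive on day n".
% Base case: you want to live today. Inductive step: on any given day, you expect
% to still want one more day tomorrow. Conclusion: you want every day n.
\[
W(0) \;\wedge\; \forall n\,\bigl(W(n) \rightarrow W(n+1)\bigr)
\;\Longrightarrow\; \forall n\, W(n)
\]
% Rejecting the conclusion means naming the first day n at which W(n) fails,
% which is exactly the follow-up question in the exercise.
```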
You're thinking about this all wrong. It's biological, so the hardware IS the software.
A better question would be: is the difference in the eye or the brain? You could test this by taking blue-detecting cones from the retinas of people who can and cannot detect Haidinger's brush and seeing whether they respond differently to changes in polarization.