Q: What has Rationality Done for You?
post by atucker · 2011-04-02T04:13:34.789Z · LW · GW · Legacy · 91 comments
So after reading SarahC's latest post I noticed that she's gotten a lot out of rationality.
More importantly, she got different things out of it than I have.
Off the top of my head, I've learned...
- that other people see themselves differently, and should be understood on their terms (mostly from here)
- that I can pay attention to what I'm doing, and try to notice patterns to make intervention more effective.
- the whole utilitarian structure of having a goal that you take actions to achieve, coupled with the idea of an optimization process. It was really helpful to me to realize that you can do whatever it takes to achieve something, not just what has been suggested.
- the importance/usefulness of dissolving the question/how words work (especially great when combined with the previous part)
- that an event is evidence for something, not just what I think it can support
- to pull people in, don't force them. Seriously that one is ridiculously useful. Thanks David Gerard.
- that things don't happen unless something makes them happen.
- that other people are smart and cool, and often have good advice
Where she got...
- a habit of learning new skills
- better time-management habits
- an awesome community
- more initiative
- the idea that she can change the world
I've only recently started making a habit out of trying new things, and that's been going really well for me. Is there other low-hanging fruit that I'm missing?
What cool/important/useful things has rationality gotten you?
comment by David_Gerard · 2011-04-02T10:15:27.758Z · LW(p) · GW(p)
It's the little things.
Using LessWrong as part of my internet-as-television recreational candy diet reminds me of stuff:
- Be less dumb. Little things, every day. This in itself makes everything go better.
- Respond, not react. (This one can get lost in conversation. Trying!)
- Don't hold others' irrationality against them. (Lump of lard theory. Beware anthropomorphising humans.)
- Ask yourself "How do you know that?"
- Ask "what's this for?" That's one of my favourite universal questions ever and dissolves remarkable quantities of rubbish.
- Be more curious. Picking out random e-books is a current avenue for this. Or deciding on my daily commute to actually look for interesting things about these streets I've walked countless times.
Tim Ferriss' books The Four-Hour Work Week and The Four-Hour Body are full of deeply annoying rubbish, but there's quite a bit of brilliance in there too.
- 80/20 everything that makes demands of your time or resources. This has reached the point where in the last several months I've actually experienced and savoured the considerable luxury of boredom, after thinking I'd never have time for such a thing in the foreseeable future (looking after a small child, girlfriend chronically ill with migraines, and working a day job).
- Keep asking myself "What do I really want?" This is one of my lifelong favourite universal questions, but Ferriss reminds me to keep asking it.
- Get off my arse. Something to remind one of this is always useful.
- Losing 12 kilograms in the last six weeks. Not a rationality win per se, but certainly a win courtesy of Ferriss.
(I'll add more as I think of it.)
↑ comment by atucker · 2011-04-02T14:01:59.373Z · LW(p) · GW(p)
Respond, not react.
What does this one mean?
I think it has something to do with how you should incorporate information from what just happened and try to come up with an effective response, rather than going with your immediate gut reaction. Is that what you were going for?
↑ comment by David_Gerard · 2011-04-02T14:14:43.462Z · LW(p) · GW(p)
Pretty much. I mean: when something upsetting happens and you get a visceral reaction, try to catch that and engage your brain. I expect it should ideally also be applied when something pleasing happens.
↑ comment by [deleted] · 2011-04-02T11:49:49.602Z · LW(p) · GW(p)
80/20 everything that makes demands of your time
Hey, what does this mean?
↑ comment by David_Gerard · 2011-04-02T12:31:58.517Z · LW(p) · GW(p)
The Pareto principle: 80% of the effects come from 20% of the effort. Really quite a lot of things show a power law.
Ferriss puts it like this:
The 80/20 principle, also known as Pareto’s Law, dictates that 80% of your desired outcomes are the result of 20% of your activities or inputs. Once per week, stop putting out fires for an afternoon and run the numbers to ensure you’re placing effort in high-yield areas: What 20% of customers/products/regions are producing 80% of the profit? What are the factors that could account for this?
He considers this a useful principle to apply to everything. And it is - I don't necessarily throw out the unproductive 80% on a given measure (I might want it for other reasons), but it is interesting to see if there's a ready win there. And it's useful even when you work an ordinary salaried day job, as I do. (e.g. these two weeks, when my boss is on holiday and I'm doing all his job as well as my own.)
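A minimal sketch of the kind of 80/20 check Ferriss describes, with invented customer names and profit figures (everything here is hypothetical, just to show the arithmetic):

# Toy Pareto check: which customers account for ~80% of profit?
# All names and numbers are made up for illustration.
profits = {"customer_a": 52000, "customer_b": 21000, "customer_c": 9000,
           "customer_d": 4000, "customer_e": 2500, "customer_f": 1500}
total = sum(profits.values())
running, top = 0, []
for name, profit in sorted(profits.items(), key=lambda kv: kv[1], reverse=True):
    running += profit
    top.append(name)
    if running / total >= 0.8:  # stop once ~80% of profit is covered
        break
print(f"{len(top) / len(profits):.0%} of customers produce 80% of profit: {top}")

With these made-up numbers, two customers out of six (33%) cover 80% of the profit; the same few lines work for products, regions, or time sinks.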
↑ comment by jsalvatier · 2011-04-03T23:22:27.537Z · LW(p) · GW(p)
Can you expand on asking "what's this for?". Maybe an example or two? I'm not clear on what the context is.
↑ comment by David_Gerard · 2011-04-04T07:13:14.494Z · LW(p) · GW(p)
- "Why am I doing this task?" - applies to pretty much any action
- "What's the best operating system?" - in what context?
- "What is the morally right course of action here?"
- "Which of these is a better movie/record?"
Particularly useful when you spot a free-floating comparative, seems to have wider application. (e.g. you just asked it about itself.) Try it yourself, for all manner of values of "this"!
comment by XiXiDu · 2011-04-02T09:54:35.906Z · LW(p) · GW(p)
Most of all it just made me sad and depressed, the whole "expected utility" thing being the worst part. If you take it seriously you'll forever procrastinate having fun, because you can always imagine that postponing some terminal goal and instead doing something instrumental will yield even more utility in the future. So if you enjoy mountain climbing you'll postpone it until it is safer or after the Singularity when you can have much more safe mountain climbing. And then after the Singularity you won't be able to do it, because the resources for a galactic civilization are better used to fight hostile aliens and afterwards fix the heat death of the universe. There's always more expected utility in fixing problems; it is always about expected utility, never about gathering or experiencing utility. And if you don't believe in risks from AI then there is some other existential risk, and if there is no risk then it is poverty in Obscureistan. And if there is nothing at all then you should try to update your estimates, because if you're wrong you'll lose more than by trying to figure out if you're wrong. You never hit diminishing returns. And in the end all your complex values are replaced by the tools and heuristics that were originally meant to help you achieve them. It's like you'll have to become one of those people who work all their life to save money for their retirement, when they are old and have lost most of their interests.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-04-03T00:07:52.281Z · LW(p) · GW(p)
What on EARTH are you trying to -
Important note: Currently in NYC for 20 days with sole purpose of finding out how to make rationalists in Bay Area (and elsewhere) have as much fun as the ones in NYC. I am doing this because I want to save the world.
↑ comment by Nisan · 2011-04-02T21:40:43.065Z · LW(p) · GW(p)
XiXiDu, I have been reading your comments for some time, and it seems like your reaction to this whole rationality business is unique. You take it seriously, or at least part of you does; but your perspective is sad and strange and pessimistic. Yes, even more pessimistic than Roko or Mass Driver. What you are taking away from this blog is not what other readers are taking away from it. The next step in your rationalist journey may require something more than a blog can provide.
From one aspiring rationalist to another, I strongly encourage you to talk these things over, in person, with friends who understand them. If you are already doing so, please forgive my unsolicited advice. If you don't have friends who know Less Wrong material, I encourage you to find or make them. They don't have to be Less Wrong readers; many of my friends are familiar with different bits and pieces of the Less Wrong philosophy without ever having read Less Wrong.
↑ comment by David_Gerard · 2011-04-02T10:32:09.438Z · LW(p) · GW(p)
(Who voted down this sincere expression of personal feeling? Tch.)
This is why remembering to have fun along the way is important. Remember: you are an ape. The Straw Vulcan is a lie. The unlived life is not so worth examining. Remember to be human.
↑ comment by XiXiDu · 2011-04-02T15:03:16.207Z · LW(p) · GW(p)
This is why remembering to have fun along the way is important.
I know that argument. But I can't get hold of it. What can I do, play a game? I'll have to examine everything in terms of expected utility. If I want to play a game I'll have to remind myself that I really want to solve friendly AI and therefore have to regard "playing a game" as an instrumental goal rather than a terminal goal. And in this sense, can I justify playing a game? You don't die if you are unhappy; I could just work overtime as a street builder to earn even more money to donate to the SIAI. There is no excuse to play a game, because being unhappy for a few decades cannot outweigh the expected utility of a positive Singularity, and it doesn't reduce your efficiency as much as playing games and going to movies. There is simply no excuse to have fun. And that will be the same after the Singularity too.
↑ comment by David_Gerard · 2011-04-02T19:06:36.487Z · LW(p) · GW(p)
The reason it's important is because it counts as basic mental maintenance, just as eating reasonably and exercising a bit and so on are basic bodily maintenance. You cannot achieve any goal without basic self-care.
For the problem of solving friendly AI in particular: the current leader in the field has noticed his work suffers if he doesn't allow play time. You are allowed play time.
You are not a moral failure for not personally achieving an arbitrary degree of moral perfection.
You sound depressed, which would mean your hardware was even more corrupt and biased than usual. This won't help achieve a positive Singularity either. Driving yourself crazier with guilt at not being able to work for a positive Singularity won't help your effectiveness, so you need to stop doing that.
You are allowed to rest and play. You need to let yourself rest. Take a deep breath! Sleep! Go on holiday! Talk to friends you trust! See your doctor! Please do something. You sound like you are dashing your mind to pieces against the rock of the profoundly difficult, and you are not under any obligation to do such a thing, to punish yourself so.
↑ comment by Paul Crowley (ciphergoth) · 2011-04-03T08:02:54.027Z · LW(p) · GW(p)
As a result of this thinking, are you devoting every moment of your time and every Joule of your energy towards avoiding a negative Singularity?
No?
No, me neither. If I were to reason this way, the inevitable result for me would be that I couldn't bear to think about it at all and I'd live my whole life neither happily nor productively, and I suspect the same is true for you. The risk of burning out and forgetting about the whole thing is high, and that doesn't maximize utility either. You will be able to bring about bigger changes much more effectively if you look after yourself. So, sure, it's worth wondering if you can do more to bring about a good outcome for humanity - but don't make gigantic changes that could lead to burnout. Start from where you are, and step things up as you are able.
↑ comment by Mycroft65536 · 2011-04-05T03:54:03.821Z · LW(p) · GW(p)
Let's say the Singularity is likely to happen in 2045 like Kurzweil says, and you want to maximize the chances that it's positive. The idea is that you should get to work making as much money as you can to donate to SIAI, or that you should start researching friendly AGI (depending on your talents). What you do tomorrow doesn't matter. What matters is your average output over the next 35 years.
This is important because a strategy where you have an emotional breakdown in 2020 fails. If you get so miserable you kill yourself, you've failed at your goal. You need to make sure that this fallible agent, XiXiDu, stays at a very high level of productivity for the next 35 years. That almost never happens if you're not fulfilling the needs your monkey brain demands.
Immediate gratification isn't a terminal goal (you've figured this out), but it does work as an instrumental goal on the path to a greater goal.
↑ comment by MatthewBaker · 2011-07-11T19:23:21.351Z · LW(p) · GW(p)
Ditto
↑ comment by Gray · 2011-04-03T05:17:59.144Z · LW(p) · GW(p)
One thing that I've come up with when thinking about personal budgeting, of all things, is the concept of granularity. For someone who is poor, the situation is analogous to yours. The dad, let's say, of the household might be having a similar attack of conscience as you are about whether he should buy a candy bar at the gas station, when there are bills that can't be paid.
But it turns out that a small enough purchase, such as a really cheap candy bar (for the sake of argument), doesn't actually make any difference. No bill is going to go from being unpaid to paid because that candy was bought rather than unbought.
So relax. Buy a candy bar every once in a while. It won't make a difference.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-04-03T00:16:21.383Z · LW(p) · GW(p)
I don't tell people this very often. In fact I'm not sure I can recall ever telling anyone this before, but then I wouldn't necessarily remember it. But yes, in this case and in these exact circumstances, you need to get laid.
↑ comment by NancyLebovitz · 2011-04-03T11:13:09.972Z · LW(p) · GW(p)
Could you expand on why offering this advice makes sense to you in this situation, when it hasn't otherwise?
↑ comment by [deleted] · 2011-04-02T23:50:57.740Z · LW(p) · GW(p)
Totally can relate to this. I was dealing with depression long before LW, but improved rationality sure made my depression much more fun and exciting. Sarcastically, I could say that LW gave me the tools to be really good at self-criticism.
I can't exactly give you any advice on this, as I'm still dealing with this myself and I honestly don't really know what works or even what the goal exactly is. Just wanted to say that the feeling "this compromise 'have some fun now' crap shouldn't be necessary if I really were rational!" is only too familiar.
It led me to constantly question my own values and how much I was merely signalling (mostly to myself). Like, "if I procrastinate on $goal or if I don't enjoy doing $maximally_effective_but_boring_activity, then I probably don't really want $goal", but that just leads into deeper madness. And even when I understand (from results, mostly, or comparisons to more effective people) that I must be doing something wrong, I break down until I can exactly identify what it is. So I self-optimize so that I can be better at self-optimizing, but I never get around to doing anything.
(That's not to say that LW was overall a negative influence for me. Quite the opposite. It's just that adding powerful cognitive tools to a not-too-sane mind has a lot of nasty side-effects.)
↑ comment by atucker · 2011-04-03T04:21:16.704Z · LW(p) · GW(p)
"if I procrastinate on $goal or if I don't enjoy doing $maximallyeffectivebutboringactivity, then I probably don't really want $goal", but that just leads into deeper madness.
If I understood this correctly (as you procrastinating on something, and concluding that you don't actually want it), then most people around here call that akrasia.
Which isn't really something to go mad about. Basically, your brain is a stapled-together hodgepodge of systems which barely work together well enough to have worked in the ancestral environment.
Nowadays, we know and can do much more stuff. But there's no reason to expect that your built-in neural circuitry can turn your desire to accomplish something into tangible action, especially when your actions are related to accomplishing your goal only in the long term, and non-viscerally.
↑ comment by [deleted] · 2011-04-03T05:18:27.941Z · LW(p) · GW(p)
It's not just akrasia, or rather, the implication of strong akrasia really weirds me out.
The easiest mechanism to implement goals would not be vulnerable to akrasia. At best it would be used to conserve limited resources, but that's clearly not the case here. In fact, some goals work just fine, while others fail. This is especially notable when the same activity can have very different levels of akrasia depending on why I'm doing it. Blaming this on hodge-podge circuitry seems false to me (in the general case).
So I look for other explanations, and signaling is a pretty good starting point. What I thought was a real goal was just a social facade, e.g. I don't want to study, I just want to be seen as having a degree. (Strong evidence for this is that I enjoy reading books for some personal research when I hated literally the same books when I had to read them for class.)
Because of this, I'm generally not convinced that my ability to do stuff is broken (at least not as badly), but rather, that I'm mistaken about what I really want. But as XiXiDu mentioned, when you start applying rationality to that, you end up changing your own values in the process and not always in a pretty way.
↑ comment by NancyLebovitz · 2011-04-03T11:25:13.558Z · LW(p) · GW(p)
At least at my end, I'm pretty sure that part of my problem isn't that signalling is causing me to override my real desires; it's that there's something about feeling that I have to signal that leads to me not wanting to cooperate, even if the action is something that I would otherwise want to do, or at least not mind all that much.
Writing this has made the issue clearer for me than it's been, but it's not completely clear-- I think there's a combination of fear and anger involved, and it's a goddam shame that my customers (a decent and friendly bunch) are activating stuff that got built up when I was a kid.
↑ comment by atucker · 2011-04-03T05:32:54.404Z · LW(p) · GW(p)
I don't want to study, I just want to be seen as having a degree. (Strong evidence for this is that I enjoy reading books for some personal research when I hated literally the same books when I had to read them for class.)
Fair enough, I guess I misunderstood what you were saying.
But as XiXiDu mentioned, when you start applying rationality to that, you end up changing your own values in the process and not always in a pretty way.
I guess it's not guaranteed to turn out well, and when I was still working through my value-conflicts it wasn't fun. In the end though, the clarity that I got from knowing a few of my actual goals and values feels pretty liberating. Knowing (some of) what I want makes it soooo much easier for me to figure out how to do things that will make me happy, and with less regret or second thoughts after I decide.
↑ comment by atucker · 2011-04-02T13:58:37.747Z · LW(p) · GW(p)
Integrate your utility over time. There are plenty of cheap (in terms of future utility) things that you can do now to enjoy yourself.
Like, eating healthy feels nice and keeps you in better shape for getting more utility. You should do it. Friends help you achieve future goals, and making and interacting with them is fun.
Reframe your "have to"s as "want to"s, if that's true.
↑ comment by XiXiDu · 2011-04-02T15:32:48.170Z · LW(p) · GW(p)
Integrate your utility over time. There are plenty of cheap (in terms of future utility) things that you can do now to enjoy yourself.
I know, it would be best to enjoy the journey. But I am not that kind of person. I hate the eventual conclusion being drawn on LW. I am not saying that it is wrong, which is the problem. For me it only means that life sucks. If you can't stop caring then life sucks. For a few years after I was able to overcome religion I was pretty happy. I decided that nothing matters and I could just enjoy life, that I am not responsible. But that seems inconsistent, as caring about others is caring about yourself. You also wouldn't run downstairs faster than necessary just because it is fun to run fast; it is not worth a fracture. And there begins the miserable journey where you never stop to enjoy yourself because it is not worth it. It is like rationality is a parasite that hijacks you and turns you into a consequentialist that maximizes only rational conduct.
↑ comment by David_Gerard · 2011-04-03T12:58:48.205Z · LW(p) · GW(p)
Memetic "basilisk" issue: this subthread may be important:
It is like rationality is a parasite that hijacks you and turns you into a consequentialist that maximizes only rational conduct.
This (combined with the likes of Roko's meltdown, as Nisan notes above) appears to be evidence of the possibility of LessWrong rationalism as a memetic basilisk. (Thus suggesting the "basilisks" so far, e.g. the forbidden post, may have whatever's problematic in the LW memeplex as a prerequisite, which is ... disconcerting.) As muflax notes:
It's just that adding powerful cognitive tools to a not-too-sane mind has a lot of nasty side-effects.
What's a proper approach to use with those who literally can't handle that much truth?
↑ comment by NancyLebovitz · 2011-04-03T13:31:45.503Z · LW(p) · GW(p)
What's a proper approach to use with those who literally can't handle that much truth?
Good question, though we might also want to take a careful look at whether there's something a little askew about the truth we're offering.
How can the folks who can't handle this stuff easily or perhaps at all be identified?
Rationality helps some depressed people and knocks others down farther.
Even if people at risk can be identified, I can't imagine a spoiler system which would keep all of them away from the material. On the other hand, maybe there are ways to warn off at least some people.
↑ comment by TheOtherDave · 2011-04-03T13:34:58.353Z · LW(p) · GW(p)
Well, that question is hardly unique to this forum.
My own preferred tactic depends on whether I consider someone capable of making an informed decision about what they are willing to try to handle -- that is, they have enough information, and they are capable of making such judgments, and they aren't massively distracted.
If I do, I tell them that there's something I'm reluctant to tell them, because I'm concerned that it will leave them worse off than my silence, but I'm leaving the choice up to them.
If not, then I keep quiet.
In a public forum, though, that tactic is unavailable.
↑ comment by timtyler · 2011-04-03T12:31:58.411Z · LW(p) · GW(p)
It is like rationality is a parasite that hijacks you and turns you into a consequentialist that maximizes only rational conduct.
It is common for brains to get hijacked by parasites:
Dan Dennett: Ants, terrorism, and the awesome power of memes
↑ comment by NancyLebovitz · 2011-04-03T13:17:13.265Z · LW(p) · GW(p)
Thanks for the link.
I note that when Dennett lists dangerous memes, he skips the one that gets the most people killed-- nationalism.
↑ comment by MatthewBaker · 2011-07-11T19:29:29.488Z · LW(p) · GW(p)
Don't despair, help will come :)
↑ comment by CronoDAS · 2011-04-03T07:09:13.567Z · LW(p) · GW(p)
I think you need to be a bit more selfish. The way I see it, the distant future can most likely take care of itself, and if it can't, then you won't be able to save it anyway.
If you suddenly were given a very good reason to believe that things are going to turn out Okay regardless of what you personally do, what would you do then?
↑ comment by Emile · 2011-04-02T11:43:25.561Z · LW(p) · GW(p)
It's like you'll have to become one of those people who work all their life to save money for their retirement, when they are old and have lost most of their interests.
That, and the rest, doesn't sound rational at all. "Maximizing expected utility" doesn't mean "systematically deferring enjoyment"; it's just a nerdy way of talking about tradeoffs when taking risks.
The concept of "expected utility" doesn't seem to have much relevance at the individual level, it's more something for comparing government policies, or moral philosophies, or agents in game theory/decision theory ... or maybe also some narrow things like investing in stock. But not deciding whether to go rock-climbing or not.
↑ comment by XiXiDu · 2011-04-02T14:53:25.106Z · LW(p) · GW(p)
That, and the rest, doesn't sound rational at all.
I agree, but I can't pinpoint what is wrong. There are other people here who went bonkers (no offense) thanks to the kind of rationality being taught on LW. Actually Roko stated a few times that he would like to have never learnt about existential risks because of the negative impact it had on his social life etc. I argued that "ignorance is bliss" can under no circumstances be right and that I value truth more than happiness. I think I was wrong. I am not referring to bad things happening to people here, but solely to the large amount of positive utility associated with a lot of scenarios that force you to pursue instrumental goals that you don't enjoy at all. Well, it would probably be better to never exist in the first place; living seems to have an overall negative utility if you are not the kind of person who enjoys being or helping Eliezer Yudkowsky.
What are you doing all day? Is it the most effective way to earn money or to help solve friendly AI directly? I doubt it. And if you know that and still don't do anything about it, then many people here would call you irrational. It doesn't matter what you like to do, because whatever you value, there will always be more of it tomorrow if you postpone doing it today and instead pursue an instrumental goal. You can always do something, even if that means you'd have to sell your blood. No excuses there, it is watertight.
And this will never end. It might sound absurd to talk about trying to do something about the heat death of the universe or trying to hack the Matrix, but is it really improbable enough to outweigh the utility associated with gaining the necessary resources to support 3^^^^3 people for 3^^^^3 years rather than a galactic civilisation for merely 10^50 years? Give me a good argument for why an FAI shouldn't devote all its resources to trying to leave the universe rather than supporting a galactic civilization for a few years. How does this differ from devoting all resources to working on friendly AI for a few decades? How much fun could you have in the next few decades? Let's say you'd have to devote 10^2 years of your life to a positive Singularity to gain 10^50 years. Now how is this different from the FAI devoting the resources that could support you for 10^50 years to figuring out how to support you for 3^^^^3 years? Where do you draw the line, and why?
↑ comment by Perplexed · 2011-04-03T01:27:57.813Z · LW(p) · GW(p)
I can't pinpoint what is wrong.
I can. You are trying to "shut up and multiply" (as Eliezer advises) using the screwed up, totally undiscounted, broken-mathematics version of consequentialism taught here. Instead, you should pay more attention to your own utility than to the utility of the 3^^^3itudes in the distant future, and/or in distant galaxies, and/or in simulated realities. You should pay no more attention to their utility than they pay to yours.
Don't shut up and multiply until someone fixes the broken consequentialist math which is promoted here. Instead, (as Eliezer also advises) get laid or something. Worry more about the happiness of the people (including yourself) within a temporal radius of 24 hours, a spatial radius of a few meters, and in your own branch of the 'space-time continuum', than you worry about any region of space-time trillions of times the extent, if that region of space-time is also millions of times as distant in time, space, or Hilbert-space phase-product.
(I'm sure Tim Tyler is going to jump in and point out that even if you don't discount the future (etc.) as I recommend, you should still not worry much about the future because it is so hard to predict the consequences of your actions. Pace Tim. That is true, but beside the point!)
If it is important to you (XiXiDu) to do something useful and Singularity related, why don't you figure out how to fix the broken expected-undiscounted-utility math that is making you unhappy before someone programs it into a seed AI and makes us all unhappy.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-04-03T05:02:36.115Z · LW(p) · GW(p)
Excuse me, but XiXiDu is taking for granted ideas such as Pascal's Mugging - in fact Pascal's Mugging seems to be the main trope here - which were explicitly rejected by me and by most other LWians. We're not quite sure how to fix it, though Hanson's suggestion is pretty good, but we did reject Pascal's Mugging!
It's not obvious to me that after rejecting Pascal's Mugging there is anything left to say about XiXiDu's fears or any reason to reject expected utility maximization(!!!).
↑ comment by JoshuaZ · 2011-04-03T05:23:56.782Z · LW(p) · GW(p)
It's not obvious to me that after rejecting Pascal's Mugging there is anything left to say about XiXiDu's fears or any reason to reject expected utility maximization(!!!).
Well, insofar as it isn't obvious why Pascal's Mugging should be rejected by a utility maximizer, his fears are legitimate. It may very well be that a utility maximizer will always be subject to some form of possible mugging. If that issue isn't resolved, the fact that people are rejecting Pascal's Mugging doesn't help matters.
↑ comment by XiXiDu · 2011-04-03T11:13:55.185Z · LW(p) · GW(p)
It may very well be that a utility maximizer will always be subject to some form of possible mugging.
I fear that the mugger is often our own imagination. If you calculate the expected utility of various outcomes, you imagine impossible alternative actions. The alternatives are impossible because you have already precommitted to choosing the outcome with the largest expected utility. There are three main problems with that:
- You swap your complex values for a certain terminal goal with the highest expected utility; indeed, your instrumental and terminal goals converge to become the expected utility formula.
- There is no minimum amount of empirical evidence necessary to extrapolate the expected utility of an outcome.
- The extrapolation of counterfactual alternatives is unbounded, logical implications can reach out indefinitely without ever requiring new empirical evidence.
All this can cause any insignificant inference to exhibit hyperbolic growth in utility.
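A rough sketch of the failure mode described above, in my own notation (not anything XiXiDu wrote): let $H$ be a speculative hypothesis whose description has length $K(H)$. A complexity-based prior penalizes it by roughly $2^{-K(H)}$, but the utility claimed by $H$ can be written so that it grows far faster than that penalty shrinks, e.g.

$$\mathbb{E}[U] \approx P(H)\,U(H) \approx 2^{-K(H)} \cdot 3\uparrow\uparrow\uparrow 3,$$

which stays astronomically large no matter how unlikely $H$ is, so the speculative term dominates the calculation. This is essentially the structure of Pascal's Mugging.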
↑ comment by atucker · 2011-04-03T14:12:58.620Z · LW(p) · GW(p)
I don't trust my brain's claims of massive utility enough to let it dominate every second of my life. I don't even think I know what, this second, would be doing the most to help achieve a positive singularity.
I'm also pretty sure that my utility function is bounded, or at least hits diminishing returns really fast.
I know that thinking my head off about every possible high-utility counterfactual will make me sad, depressed, and indecisive, on top of ruining my ability to make progress towards gaining utility.
So I don't worry about it that much. I try to think about these problems in doses that I can handle, and focus on what I can actually do to help out.
↑ comment by XiXiDu · 2011-04-04T17:07:01.043Z · LW(p) · GW(p)
I don't trust my brain's claims of massive utility enough to let it dominate every second of my life.
Yet you trust your brain enough to turn down claims of massive utility. Given that our brains could not evolve to yield reliable intuitions about such scenarios, and given that the parts of rationality that we do understand very well in principle are telling us to maximize expected utility, what does it mean not to trust your brain? In all of the scenarios in question that involve massive amounts of utility, your uncertainty is included and being outweighed. It seems that what you are saying is that you don't trust your higher-order thinking skills and instead trust your gut feelings? You could argue that you are simply risk averse, but that would require you to set some upper bound regarding bargains with uncertain payoffs. How are you going to define and justify such a limit if you don't trust your brain?
Anyway, I did some quick searches today and found out that the kind of problems I talked about are nothing new and are mentioned in various places and contexts:
The ‘expected value’ of the game is the sum of the expected payoffs of all the consequences. Since the expected payoff of each possible consequence is $1, and there are an infinite number of them, this sum is an infinite number of dollars. A rational gambler would enter a game iff the price of entry was less than the expected value. In the St. Petersburg game, any finite price of entry is smaller than the expected value of the game. Thus, the rational gambler would play no matter how large the finite entry price was. But it seems obvious that some prices are too high for a rational agent to pay to play. Many commentators agree with Hacking's (1980) estimation that “few of us would pay even $25 to enter such a game.” If this is correct—and if most of us are rational—then something has gone wrong with the standard decision-theory calculations of expected value above. This problem, discovered by the Swiss eighteenth-century mathematician Daniel Bernoulli is the St. Petersburg paradox. It's called that because it was first published by Bernoulli in the St. Petersburg Academy Proceedings (1738; English trans. 1954).
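The divergence the quoted passage describes, written out (the standard textbook calculation, not part of the quote): the game pays $2^k$ dollars if the first heads appears on toss $k$, which happens with probability $2^{-k}$, so

$$\mathbb{E}[\text{payout}] = \sum_{k=1}^{\infty} 2^{-k}\cdot 2^{k} = \sum_{k=1}^{\infty} 1 = \infty,$$

and every finite entry price is below the expected value.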
If EDR were accepted, speculations about infinite scenarios, however unlikely and far‐fetched, would come to dominate our ethical deliberations. We might become extremely concerned with bizarre possibilities in which, for example, some kind of deity exists that will use its infinite powers to good or bad ends depending on what we do. No matter how fantastical any such scenario would be, if it is a logically coherent and imaginable possibility it should presumably be assigned a finite positive probability, and according to EDR, the smallest possibility of infinite value would smother all other considerations of mere finite values.
[...]
Suppose that I know that a certain course of action, though much less desirable in every other respect than an available alternative, offers a one‐in‐a‐million chance of avoiding catastrophe involving x people, where x is finite. Whatever else is at stake, this possibility will overwhelm my calculations so long as x is large enough. Even in the finite case, therefore, we might fear that speculations about low‐probability‐high‐stakes scenarios will come to dominate our moral decision making if we follow aggregative consequentialism.
The Infinitarian Challenge to Aggregative Ethics
If we consider systems that would value some apparently physically unattainable quantity of resources orders of magnitude more than the apparently accessible resources given standard physics (e.g. resources enough to produce 10^1000 offspring), the potential for conflict again declines for entities with bounded utility functions. Such resources are only attainable given very unlikely novel physical discoveries, making the agent's position similar to that described in "Pascal's Mugging" (Bostrom, 2009), with the agent's decision-making dominated by extremely small probabilities of obtaining vast resources.
Omohundro's "Basic AI Drives" and Catastrophic Risks
↑ comment by atucker · 2011-04-05T01:29:13.952Z · LW(p) · GW(p)
You could argue that you are simply risk averse, but that would require you to set some upper bound regarding bargains with uncertain payoffs
I take risks when I actually have a grasp of what they are. Right now I'm trying to organize a DC meetup group, finish up my robotics team's season, do all of my homework for the next 2 weeks so that I can go college touring, and combine college visits with LW meetups.
After April, I plan to start capoeira, work on PyMC, actually have DC meetups, work on a scriptable real-time strategy game, start contra dancing again, start writing a sequence based on Heuristics and Biases, improve my dietary and exercise habits, and visit Serbia.
All of these things I have a pretty solid grasp of what they entail, and how they impact the world.
I still want to do high-utility things, but I just choose not to live in constant dread of lost opportunity. My general strategy for acquiring utility is to help/make other people get more utility too, and to multiply the effects of getting the low-hanging fruit.
Suppose that I know that a certain course of action
with the agent's decision-making dominated by extremely small probabilities of obtaining vast resources.
The issue with long-shots like this is that I don't know where to look for them. Seriously. And since they're such long-shots, I'm not sure how to go about getting them. I know that trying to do so isn't particularly likely to work.
Yet you trust your brain enough to turn down claims of massive utility.
Sorry, I said that badly. If I knew how to get massive utility, I would try to. It's just that the planning is the hard part. The best that I know to do now (note: I am carving out time to think about this harder in the foreseeable future) is to get money and build communities. And give some of the money to SIAI. But in the meantime, I'm not going to be agonizing over everything I could have possibly done better.
↑ comment by David_Gerard · 2011-04-03T13:03:36.440Z · LW(p) · GW(p)
It's not obvious to me that after rejecting Pascal's Mugging there is anything left to say about XiXiDu's fears or any reason to reject expected utility maximization(!!!).
Well, nothing philosophically. There's probably quite a lot to say about, or rather in the aid of, one of our fellows who's clearly in trouble.
The problem appears to be depression, i.e., more corrupt than usual hardware. Thus, despite the manifestations of the trouble as philosophy, I submit this is not the actual problem here.
↑ comment by Perplexed · 2011-04-03T06:24:49.510Z · LW(p) · GW(p)
We are in disagreement then. I reject not just Pascal's mugging, but also the style of analysis found in Bostrom's "Astronomical Waste" paper. As I understand XiXiDu, he has been taught (by people who think like Bostrom) that even the smallest misstep on the way to the Singularity has astronomical consequences, and that we who potentially commit these missteps are morally responsible for this astronomical waste.
Is the "Astronomical Waste" paper an example of "Pascal's Mugging"? If not, how do you distinguish (setting aside the problem of how you justify the distinction)?
We're not quite sure how to fix it, though Hanson's suggestion is pretty good ...
Do you have a link to Robin's suggestion? I'm a bit surprised that a practicing economist would suggest something other than discounting. In another Bostrom paper, "The Infinitarian Challenge to Aggregative Ethics", it appears that Bostrom also recognizes that something is broken, but he, too, doesn't know how to fix it.
↑ comment by XiXiDu · 2011-04-03T09:57:16.110Z · LW(p) · GW(p)
Is the "Astronomical Waste" paper an example of "Pascal's Mugging"? If not, how do you distinguish (setting aside the problem of how you justify the distinction)?
Exactly, I describe my current confusion in more detail in this thread, especially the comment here and here which led me to conclude this. Fairly long comments, but I wish someone would dissolve my confusion there. I really don't care if you downvote them to -10, but without some written feedback I can't tell what exactly is wrong, how I am confused.
Do you have a link to Robin's suggestion?
Can be found via the Wiki:
Robin Hanson has suggested penalizing the prior probability of hypotheses which argue that we are in a surprisingly unique position to affect large numbers of other people who cannot symmetrically affect us. Since only one in 3^^^^3 people can be in a unique position to ordain the existence of at least 3^^^^3 other people who are not symmetrically in such a situation themselves, the prior probability would be penalized by a factor on the same order as the utility.
I don't quite get it.
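Unpacking the arithmetic in Hanson's suggestion (my sketch, not part of the quoted wiki text): if a hypothesis claims you can determine the fate of $N = 3\uparrow\uparrow\uparrow\uparrow 3$ people, the penalized prior is on the order of $1/N$, so

$$\mathbb{E}[U] \lesssim \frac{1}{N}\cdot N \approx 1,$$

i.e. the penalty and the astronomical payoff roughly cancel, and the mugger's offer no longer swamps every ordinary consideration.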
↑ comment by NancyLebovitz · 2011-04-03T11:46:24.905Z · LW(p) · GW(p)
I'm going to be poking at this question from several angles-- I don't think I've got a complete and concise answer.
I think you've got a bad case of God's Eye Point of View-- thinking that the most rational and/or moral way to approach the universe is as though you don't exist.
The thing about GEPOV is that it isn't total nonsense. You can get more truth if you aren't territorial about what you already believe, but since you actually are part of the universe and you are your only point of view, trying to leave yourself out completely is its own flavor of falseness.
As you are finding out, ignoring your needs leads to incapacitation. It's like saying that we mustn't waste valuable hydrocarbons on oil for the car engine. All the hydrocarbons should be used for gasoline! This eventually stops working. It's important to satisfy needs which are of different kinds and operate on different time scales.
You may be thinking that, since fun isn't easily measurable externally, the need for it isn't real.
I think you're up against something which isn't about rationality exactly-- it's what I call the emotional immune system. Depression is partly about not being able to resist (or even being attracted to) ideas which cause damage.
An emotional immune system is about having affection for oneself, and if it's damaged, it needs to be rebuilt, probably a little at a time.
On the intellectual side, would you want all the people you want to help to defer their own pleasure indefinitely?
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-04-03T12:54:38.594Z · LW(p) · GW(p)
Depression is partly about not being able to resist (or even being attracted to) ideas which cause damage.
This sounds very true and important.
↑ comment by NancyLebovitz · 2011-04-03T12:57:38.316Z · LW(p) · GW(p)
As far as I can tell, a great deal of thinking is the result of wanting thoughts which match a pre-existing emotional state.
Thoughts do influence emotions, but less reliably.
↑ comment by XiXiDu · 2011-04-03T19:04:10.997Z · LW(p) · GW(p)
On the intellectual side, would you want all the people you want to help to defer their own pleasure indefinitely?
No, but I don't know what a solution would look like. Most of the time I am just overwhelmed, as it feels like everything I come up with isn't much better than flipping a coin. I just can't figure out the right balance between fun (experiencing; being selfish), moral conduct (being altruistic), utility maximization (being future-oriented) and my gut feelings (instinct; intuition; emotions). For example, if I have a strong urge to just go out and have fun, should I just give in to that urge or think about it? If I question the urge I often end up thinking about it until it is too late. Every attempt at a possible solution looks like browsing Wikipedia: each article links to other articles that again link to other articles until you end up with something completely unrelated to the initial article. It seems impossible to apply a lot of what is taught on LW in real life.
↑ comment by NancyLebovitz · 2011-04-03T19:14:02.387Z · LW(p) · GW(p)
Maybe require yourself to have a certain amount of fun per week?
↑ comment by Goobahman · 2011-04-06T04:03:38.413Z · LW(p) · GW(p)
I think NancyLebovitz's comment is highly relevant here.
I can only speak from my personal experience, but I've found that part of going through Less Wrong and understanding all the great stuff on this website is understanding the type of creature I am. At this current moment, I am comparatively a very simple one. In terms of the Singularity and Friendly AI, they are miles from what I am, and I am not at a point where I can emotionally take on those causes. I can intellectually, but the fact is the simple creature that I am doesn't comprehend those connections yet. I want to one day, but a baby has to crawl before it can walk. Much of what I do provides me with satisfaction, joy, happiness. I don't even fully understand why. But what I do know is that I need those emotions not just to function, but to improve, to continue the development of myself.
Maybe it might help to reduce yourself to that simple creature. Understand that for a baby to do math, it has to understand symbols. Maybe what you understand intellectually, you're not yet ready to deal with in terms of emotional function.
Just my two cents. Sorry if I'm not as concise as I should be. I do hope the best for you though.
↑ comment by timtyler · 2011-04-03T13:38:59.855Z · LW(p) · GW(p)
I'm sure Tim Tyler is going to jump in and point out that even if you don't discount the future (etc.) as I recommend, you should still not worry much about the future because it is so hard to predict the consequences of your actions. Pace Tim. That is true, but beside the point!
Peace - I think that is what you meant to say. We mostly agree. I am not sure you can tell someone else what they "should" be doing, though. That is for them to decide. I expect your egoism is not of the evangelical kind.
Saving the planet does have some merits though. People's goals often conflict - but many people can endorse saving the planet. It is ecologically friendly, signals concern with Big Things, paints you as a Valiant Hero - and so on. As causes go, there are probably unhealthier ones to fall in with.
↑ comment by Perplexed · 2011-04-03T14:43:06.350Z · LW(p) · GW(p)
I'm sure Tim Tyler is going to jump in and point out ... Pace Tim. That is true, but beside the point!
Peace - I think that is what you meant to say.
I'm kinda changing the subject here, but that wasn't a typo. "Pace" was what I meant to write. Trouble is, I'm not completely sure what it means. I've seen it used in contexts that suggest it means something like "I know you disagree with this, but I don't want to pick a fight. At least not now." But I don't know what it means literally, nor even how to pronounce it.
My guess is that it is church Latin, meaning (as you suggest) 'peace'. 'Requiescat in pace' and all that. I suppose, since it is a foreign language word, I technically should have italicized. Can anyone help out here?
↑ comment by atucker · 2011-04-02T22:09:44.143Z · LW(p) · GW(p)
living seems to have an overall negative utility if you are not the kind of person who enjoys being or helping Eliezer Yudkowsky.
There is a difference between negative utility, and less than maximized utility. There are lots of people who enjoy their lives despite not having done as much as they could, even if they know that they could be doing more.
It's only when you dwell on what you haven't done, aren't doing, or could have done that you actually become unhappy about it. If you don't start from maximum utility and see everything as a worse version of that, then you can easily enjoy the good things in your life.
↑ comment by nazgulnarsil · 2011-04-02T16:24:11.534Z · LW(p) · GW(p)
You seem to be holding yourself morally responsible for future states. Why? My attitude is that it was like this when I got here.
↑ comment by Vladimir_Nesov · 2011-04-03T10:04:03.914Z · LW(p) · GW(p)
Give me a good argument for why an FAI shouldn't devote all its resources to trying to leave the universe rather than supporting a galactic civilization for a few years.
Now this looks like a wrong kind of question to consider in this context. The amount of fun your human existence is delivering, in connection with what you abstractly believe is the better course of action, is something relevant, but the details of how FAI would manage the future are not your human existence's explicit problem, unless you are working on FAI design.
If it's better for FAI to spend the next 3^^^3 multiverse millennia planning the future, why should that have a reflection in your psychological outlook? That's an obscure technical question. What matters is whether it's better, not whether it has a certain individual surface feature.
↑ comment by Davorak · 2011-04-02T17:34:01.261Z · LW(p) · GW(p)
What are you doing all day? Is it the most effective way to earn money or to help solve friendly AI directly? I doubt it. And if you know that and still don't do anything about it, then many people here would call you irrational.
"Irrational" seems like the wrong word here; after all, the person could be rational but working with a dataset that does not yet allow them to reach that conclusion. There are also people who reach that conclusion irrationally: they reach the right conclusion with a flawed (unreliable) method, but they are not more rational for having the right conclusions.
↑ comment by Kaj_Sotala · 2011-07-03T16:49:35.648Z · LW(p) · GW(p)
So if you enjoy mountain climbing you'll postpone it until it is safer or after the Singularity when you can have much more safe mountain climbing.
That presumes no time discounting.
Time discounting is neither rational nor irrational. It's part of the way one's utility function is defined, and judgements of instrumental rationality can only be made by reference to a utility function. So there's not necessarily any conflict between expected utility maximization and having fun now: indeed, one could even have a utility function that only cared about things that happened during the next five seconds, and attached zero utility to everything afterwards. I'm obviously not suggesting that anyone should try to start thinking like that, but I do suggest introducing a little more discounting into your utility measurements.
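To make the discounting point concrete (a textbook form, not anything specific to this thread): an agent with discount factor $0 < \gamma < 1$ values a stream of momentary utilities $u_0, u_1, u_2, \ldots$ as

$$U = \sum_{t=0}^{\infty} \gamma^{t} u_t,$$

so a payoff $t$ steps away is weighted by $\gamma^{t}$, and having some fun now is not automatically dominated by speculative payoffs in the far future.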
That's even without taking into account the advice about needing rest that other people have brought up, and which I agree with completely. I tried going by the "denial of pleasures" route before, and the result was a burnout which began around three years ago and which is still hampering my productivity. If you don't allow yourself to have fun, you will crash and burn sooner or later.
↑ comment by Kutta · 2011-04-02T21:48:05.775Z · LW(p) · GW(p)
Couldn't you just take all this negative stuff you came up with in connection to rationality, mark it as things to avoid, and then define rationality as efficiently pursuing whatever you actually find desirable?
↑ comment by Vladimir_Nesov · 2011-04-02T22:55:49.283Z · LW(p) · GW(p)
That would be ignoring the arguments, as opposed to addressing them. How you define "rationality" shouldn't matter for what particular substantive arguments incite you to do.
↑ comment by Kutta · 2011-04-03T10:50:05.358Z · LW(p) · GW(p)
If you accept the "rationality is winning" definition, it makes little sense to come up with downsides about rationality, that's what I was trying to point out.
It is quite similar to what you said in this comment.
↑ comment by Vladimir_Nesov · 2011-04-03T11:01:06.458Z · LW(p) · GW(p)
If you accept the "rationality is winning" definition, it makes little sense to come up with downsides about rationality, that's what I was trying to point out.
A wrong way to put it. If a decision is optimal, there still remain specific arguments for why it shouldn't be taken. Optimality is estimated overall, not for any singled-out argument, which can therefore individually lose. See "policy debates shouldn't appear one-sided".
If, all else equal, it's possible to amend a downside, then it's a bad idea to keep it. But tradeoffs are present in any complicated decision, there will be specialized heuristics that disapprove of a plan, even if overall it's optimized.
In our case, we have the heuristic of "personal fun", which is distinct from overall morality. If you're optimizing morality, you should expect personal fun to remain suboptimal, even if just a little bit.
(Yet another question is that rationality can give independent boost to the ability to have personal fun, which can offset this effect.)
↑ comment by Vladimir_Nesov · 2011-04-02T22:14:31.456Z · LW(p) · GW(p)
All else equal, if having less fun improves expected utility, you should have less fun. But all else is not equal, it's not clear to me that the search for more impact often leads to particularly no-fun plans. In other words, some low-hanging fun cuts are to be expected, you shouldn't play WoW for weeks on end, but getting too far into the no-fun territory would be detrimental to your impact, and the best ways of increasing your impact probably retain a lot of fun. Also, happiness set point would probably keep you afloat.
comment by Normal_Anomaly · 2011-04-02T17:20:32.662Z · LW(p) · GW(p)
- "Politics is the Mind-Killer": This got me to take a serious look at my political views. I have changed a few of my positions, and my level of confidence on several others. I've also (mostly) stopped using people's political views to decide whether they are "on my side" or not.
- A Human's Guide to Words: I have gotten better at catching myself when I say unclear or potentially misleading things. I have also learned to stop getting involved in arguments over the meanings of words, or whether some entity belongs in an ill-defined category.
- Overall, Less Wrong made me less of a jerk. I am able to have discussions with people on things where we don't agree without thinking of them as evil or inferior. Better yet, I know when not to have the discussion in the first place. This saves both me and other people a lot of time and unpleasant feelings. I have a more realistic self-assessment, which lets me avoid missing opportunities to win or being disappointed when I overreach. I can understand other people a bit better and my social interactions are somewhat improved. Note that this last is kind of hard to test, so I don't know how big the effect is.
comment by Giles · 2011-04-02T16:13:17.883Z · LW(p) · GW(p)
- Realising that I was irrationally risk-averse and correcting for this in at least one major case
- In terms of decision making, imagining that I had been created at this point in time, and that my past was a different person.
- Utilitarian view of ethics
- Actually having goals. Trying to live as if I was maximizing a utility function.
Evidence for each point:
- moving from the UK to Canada to be with my girlfriend.
- There is a faster bus I could have been using on my commute to work. I knew about the bus but I wasn't taking it. Why not? I honestly don't know. It doesn't matter. Just take the faster bus from now on!
- When I first encountered Less Wrong I tended to avoid the ethics posts, considering them lower quality. I only recently realised this was because I had been reading them as "epistemic rationality" and as a non-moral-realist they therefore didn't seem meaningful. But as "instrumental rationality" they make a lot more sense.
- This was mainly realising that "make the world a better place and look out for yourself" is somehow morally OK as a utility function. This is a very recent change for me though so no evidence yet that it's working.
↑ comment by Dorikka · 2011-04-02T16:41:43.969Z · LW(p) · GW(p)
When I first encountered Less Wrong I tended to avoid the ethics posts, considering them lower quality. I only recently realised this was because I had been reading them as "epistemic rationality" and as a non-moral-realist they therefore didn't seem meaningful.
If you mean by "non-moral-realist" "someone who doesn't think objective morality exists," I think that you've expressed my current reason for why I haven't read the meta-ethics sequence. Could you elaborate a bit more on why you changed your mind?
Replies from: Giles, Giles, Normal_Anomaly
↑ comment by Giles · 2011-04-02T19:06:46.399Z · LW(p) · GW(p)
Another point I should elaborate on.
"Would you sacrifice yourself to save the lives of 10 others?" you ask person J. "I guess so", J replies. "I might find it difficult bringing myself to actually do it, but I know it's the right thing to do".
"But you give a lot of money to charity" you tell this highly moral, saintly individual. "And you make sure to give only to charities that really work. If you stay alive, the extra money you will earn can be used to save the lives of more than 10 people. You are not just sacrificing yourself, you are sacrificing them too. Sacrificing the lives of more than 10 people to save 10? Are you so sure it's the right thing to do?".
"Yes", J replies. "And I don't accept your utilitarian model of ethics that got you to that conclusion".
What I figured out (and I don't know if this has been covered on LW yet) is that J's decision can actually be rational, if:
- J's utility function is strongly weighted in favour of J's own wellbeing, but takes everyone else's into account too
- J considers the social shame of killing 10 other people to save himself worse (according to this utility function) than his own death plus a bunch of others
The other thing I realised was that people with a utility function such as J's should not necessarily be criticized. If that's how we're going to behave anyway, we may as well formalize it and that should leave everyone better off on average.
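A minimal numerical sketch of that claim, with weights invented purely for illustration (none of this is from the thread): given (a) a heavy weight on J's own life and (b) a shame term that outweighs his own death plus the net extra lives lost, sacrificing himself really does maximize J's own utility function, even though it saves fewer lives in total.

```python
# Made-up numbers illustrating the point above: J's choice to sacrifice himself
# can be the utility-maximizing one *for J*, given (a) heavy self-weighting and
# (b) a large disutility from the shame of letting the 10 die.

W_SELF = 50.0        # utils J assigns to his own life
W_OTHER = 1.0        # utils J assigns to one stranger's life
SHAME = 100.0        # disutility of being seen to let 10 people die to save himself
FUTURE_SAVES = 15    # lives J's future charity donations would save if he lives

def u_sacrifice():
    # J dies, the 10 are saved, the future donations never happen.
    return -W_SELF + 10 * W_OTHER

def u_stay_alive():
    # J lives and keeps donating, but the 10 die and he bears the shame.
    return FUTURE_SAVES * W_OTHER - 10 * W_OTHER - SHAME

print("sacrifice himself:", u_sacrifice())    # -40.0
print("stay alive:       ", u_stay_alive())   # -95.0
# Sacrifice wins for J. Shrink SHAME below W_SELF plus the 5 net extra lives
# and the ranking flips; everything hinges on what J's utility function actually is.
```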
Replies from: Dorikka
↑ comment by Dorikka · 2011-04-02T19:33:59.505Z · LW(p) · GW(p)
J considers the social shame of killing 10 other people to save himself worse (according to this utility function) than his own death plus a bunch of others.
Yes, but only if this is really, truly J's utility function. There's a significant possibility that J is suffering from major scope insensitivity, and failing to fully appreciate the fun lost when all those people die whom he could have saved by living and donating to effective charity. When I say "significant possibility", I'm estimating P > .95.
Note: I interpreted "charities that really work" as "charities that you've researched well and concluded that they're the most effective ones out there." If you just mean that the charity donation produces positive instead of negative fun (considering that there exist some charities that actually don't help people), then my P estimate drops.
Replies from: Giles
↑ comment by Giles · 2011-04-02T21:09:39.676Z · LW(p) · GW(p)
It seems plausible to me that J really, truly cares about himself significantly more than he cares about other people, certainly with P > 0.05.
The effect could be partly due to this and partly due to scope insensitivity but still... how do you distinguish one from the other?
It seems: caring about yourself -> caring what society thinks of you -> following society's norms -> tendency towards scope insensitivity (since several of society's norms are scope-insensitive).
In other words: how do you tell whether J has utility function F, or a different utility function G which he is doing a poor job of optimising due to biases? I assume it would have something to do with pointing out the error and seeing how he reacts, but it can't be that simple. Is the question even meaningful?
Re: "charities that work", your assumption is correct.
Replies from: Dorikka
↑ comment by Dorikka · 2011-04-03T02:25:02.530Z · LW(p) · GW(p)
Considering that J is contributing a lot of money to truly effective charity, I think his utility function is such that, if his biases did not render him incapable of appreciating just how much fun his charity was generating, he would gain more utils from the huge amount of fun generated by his continued donations (minus the social shame, minus ten people dying) than from dying himself. If he's very selfish, my probability estimate is raised (not above .95, but above whatever it would have been before) by the fact that most people don't want to die.
One way to find out the source of such a decision is to tell them to read the Sequences and see what they think afterwards. The question is very meaningful, because the whole point of instrumental rationality is learning how to prevent your biases from sabotaging the pursuit of your utility function.
↑ comment by Giles · 2011-04-02T18:47:59.551Z · LW(p) · GW(p)
First off, I'm using "epistemic and instrumental rationality" as defined here:
http://lesswrong.com/lw/31/what_do_we_mean_by_rationality/
If you don't believe objective morality exists, then epistemic rationality can't be applied directly to morality. "This is the right thing to do" is not a claim about the "territory", so you can't determine its truth or falsehood.
But I choose some actions over others and describe that choice as a moral one. The place where I changed my mind is that it's no longer enough for me to be "more moral than the average person". I want to do the best that I can, within certain constraints.
This fits into the framework of instrumental rationality. I am essentially free to choose which outcomes I consider preferable to which, but I have to follow certain rules. Something can't be preferable to itself. I shouldn't be able to switch my preference back and forth just by phrasing the question differently. And so on.
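As an aside, those rules are mechanical enough to check. Here is a sketch of my own (not anything from the comment): treat stated preferences as a directed graph, then flag any option preferred to itself or any cycle, which is the money-pump pattern; the phrasing rule amounts to requiring that the same outcome under two descriptions be a single node in that graph.

```python
# My illustration: preference coherence as something you can mechanically check.
# Preferences are (better, worse) pairs; we flag an option preferred to itself,
# or a cycle (A over B over C over A), which is the money-pump pattern.

def find_incoherence(preferences):
    graph = {}
    for better, worse in preferences:
        if better == worse:
            return f"'{better}' is preferred to itself"
        graph.setdefault(better, set()).add(worse)

    def reachable(start, target):
        # Depth-first search along "is preferred to" links.
        stack, seen = [start], set()
        while stack:
            node = stack.pop()
            if node == target:
                return True
            if node in seen:
                continue
            seen.add(node)
            stack.extend(graph.get(node, ()))
        return False

    for better, worse in preferences:
        if reachable(worse, better):
            return f"cycle: '{worse}' is also indirectly preferred to '{better}'"
    return None

stated = [
    ("donate effectively", "volunteer locally"),
    ("volunteer locally", "do nothing"),
    ("do nothing", "donate effectively"),   # a rephrasing smuggled in a reversal
]
print(find_incoherence(stated))
```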
Replies from: Dorikka
↑ comment by Normal_Anomaly · 2011-04-02T17:29:57.980Z · LW(p) · GW(p)
If you mean by "non-moral-realist" "someone who doesn't think objective morality exists," I think that you've expressed my current reason for why I haven't read the meta-ethics sequence.
All the terms in this area have nearly as many definitions as they have users, but I think you'll find the meta-ethics posts to be non-morally-realist. Oft-repeated quote from those posts:
"Not upon the stars or the mountains is [some aspect of morality] written."
↑ comment by atucker · 2011-04-03T04:34:36.159Z · LW(p) · GW(p)
In terms of decision making, imagining that I had been created at this point in time, and that my past was a different person.
There is a faster bus I could have been using on my commute to work. I knew about the bus but I wasn't taking it. Why not? I honestly don't know. It doesn't matter. Just take the faster bus from now on!
Interesting hack. I've been doing something similar with the thought process of "What happened, happened. What can I do now?"
comment by Armok_GoB · 2011-04-02T08:18:50.473Z · LW(p) · GW(p)
The largest effect in my life has been in fighting mental illness, both indirectly by making me seek help and identify problems that I need to work with, and directly by getting rid of delusions.
It's also given me the realization that I have long-term goals and that I might actually have an impact on them. Without that I'd never have put in the effort to get an actual education, for example, or even realized that it was important.
These are just the largest and most concrete things; I have a hard time thinking of ANYTHING positive in my life that's not due to rationality.
Replies from: David_Gerard, Goobahman
↑ comment by David_Gerard · 2011-04-02T14:22:59.458Z · LW(p) · GW(p)
I have a hard time thinking of ANYTHING positive in my life that's not due to rationality.
Friends and loved ones are pretty good in the general case :-)
But yes, learning to be less dumb is a general formula for success.
Replies from: Armok_GoB
↑ comment by Armok_GoB · 2011-04-02T14:49:43.632Z · LW(p) · GW(p)
I don't really have friends except a few online ones, and if not for rationality's effects on my mental health I probably would not have the ability to interact with family. So no, not even those.
Replies from: David_Gerard
↑ comment by David_Gerard · 2011-04-02T19:10:16.088Z · LW(p) · GW(p)
I was wondering if that was the case, hence the "general case" disclaimer.
↑ comment by Goobahman · 2011-04-06T04:07:51.333Z · LW(p) · GW(p)
"The largest effect in my life has been in fighting mental illness,"
Hey Armok,
I'd love to hear more details on this. Maybe do a post in discussion? Doesn't have to be elaborate or anything but I'm really curious.
Replies from: Armok_GoB
↑ comment by Armok_GoB · 2011-04-06T11:18:13.761Z · LW(p) · GW(p)
No, but I can link to a previous discussion on why not: http://lesswrong.com/lw/4ws/collecting_successes_and_deltas/3qml
comment by jsalvatier · 2011-04-05T00:01:30.860Z · LW(p) · GW(p)
- I am prone to identifying with ideas, and LW-style thought has helped me keep in mind that the state of the world is external, which helps me step back and allow for the possibility that I am wrong.
- Thinking in terms of decision theory helps me frequently ask, "how could I do this better?"
- the notion that there are big important ideas/skills that aren't hard to learn which not everyone knows (like decision theory and knowing about heuristics and biases) led me to look for more ideas/skills like this. The ones that come to mind:
- science of food/cooking
- accounting (though I'm not sure how much that's stuck)
- finance (basically, how to think about money over time)
- Bayesian statistics (amazing how much more grokkable it is than traditional statistics)
- controversially, Game ('attraction isn't a choice', being good with women is typically attractive, being dominant (not dominating) is typically attractive, etc.)
- Bayesian statistics is awesome, and I use it all the time (a tiny example of what makes it click is sketched below)
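To make the "grokkable" point concrete, here is a tiny sketch of my own (assuming SciPy is available; the numbers are invented): estimating a coin's bias the Bayesian way is just adding observed counts to prior pseudo-counts, and the output is a direct probability statement about the quantity you care about rather than a p-value.

```python
# My illustration of Beta-binomial updating: posterior = prior pseudo-counts
# plus observed counts, and you can read probabilities straight off it.

from scipy import stats

heads, tails = 7, 3            # observed flips (invented data)
prior_a, prior_b = 1, 1        # uniform Beta(1, 1) prior over the coin's bias

posterior = stats.beta(prior_a + heads, prior_b + tails)

print("posterior mean for P(heads):", posterior.mean())        # about 0.67
print("P(coin is biased toward heads):", 1 - posterior.cdf(0.5))
```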
comment by Kai-o-logos · 2011-04-07T03:44:59.201Z · LW(p) · GW(p)
On Less Wrong, I found thoroughness. Society today advocates speed over effectiveness: 12-year-old college students over soundly rational adults, people who can Laplace-transform diff-eqs in their heads over people who can solve logical paradoxes. On Less Wrong, I found people who could detach themselves from emotions and appearances, and look at things with an iron rationality.
I am sick of people who presume to know more than they do. Those that "seem" smart rather than actually being smart.
People on Less Wrong do not seem to be something they are not. ~"Seems, madam! Nay, it is; I know not 'seems.'" (Hamlet)
comment by [deleted] · 2011-04-02T11:48:02.875Z · LW(p) · GW(p)
What cool/important/useful things has rationality gotten you?
What sticks out for me are some bad things. "Comforting lies" is not an ironic phrase, and since ditching them I haven't found a large number of comforting truths. So far I haven't been able to marshal my true beliefs against my bad habits -- I come to less wrong partly to try to understand why.
Replies from: David_Gerard
↑ comment by David_Gerard · 2011-04-02T14:31:00.814Z · LW(p) · GW(p)
For comfort from uncomforting truths, going meta may help: clearer thinking from more truthful data works to internalise your locus of control - or, to use the conventional term, empower you. It can take a while, and possibly some graspable results, for the prospect of a more internal locus of control to comfort.
comment by EStokes · 2011-04-04T20:20:34.012Z · LW(p) · GW(p)
I've benefited immensely, I think, but more from the self-image of being a person who wants/tries to be rational than from anything direct. I'm not particularly luminous or impervious to procrastination. However, valuing looking critically at things even when feelings are involved has been incredibly important. I could have taken a huge, life-changing wrong turn. My sister took that turn, and she's never been really interested in rationality, so I guess that's evidence for self-image as a (wanna-be) rationalist being important, though it could've been something else.
comment by jsalvatier · 2011-04-05T00:05:35.522Z · LW(p) · GW(p)
- I am prone to identifying with ideas, and LW-style thought has helped me keep in mind that the state of the world is external, which helps me step back and allow for the possibility that I am wrong.
- Thinking in terms of decision theory helps me frequently ask, "how could I do this better?"
- the notion that there are big important ideas/skills that aren't hard to learn which not everyone knows (like decision theory and knowing about heuristics and biases) led me to look for more ideas/skills like this. The ones that come to mind:
- science of food/cooking
- accounting (though I'm not sure how much that's stuck)
- finance (how to think about money over time, efficient markets -> passive investing)
- Bayesian statistics (amazing how much more grokkable it is than traditional statistics)
- controversially, Game.
- Bayesian statistics is awesome, and I use it all the time