Comments
This is a really good point. On the other hand, it is a more convincing argument for stronger interventionist policy than it is against charity.
I am not an average person, and you don't look to be one either.
Fair enough, but I still don't think I am very good at predicting whether I'll be happier with children. I also doubt that other people who do think they will be happier are very accurate. Humans are notoriously bad at determining what will make them happy/unhappy. I'm thinking in particular about the study of lottery winners vs. amputees from Dan Gilbert's TED talk: http://www.ted.com/talks/dan_gilbert_asks_why_are_we_happy.html.
If your idea of a meaningful life is improving the world, I don't see how you can have a sense of meaning and at the same time be aware that you're not "doing reasonably well".
Society as a whole regards having children as profoundly selfless, rather than selfish, so I think I am fair in concluding that some of the sense of meaning people get from having children is related to improving the world for future generations. That particular self-satisfaction might not be disturbed by Rachels' argument if one does not take moral arguments seriously.
There is variance in happiness, yes, but studies have shown that having children does not, on average, result in higher hedonic happiness, although it does increase one's sense of living a meaningful life. If you doubt this, I can dig up the reference; I think it was actually referred to in the Rachels paper. I said "certainly not", but that wasn't meant to be taken literally; of course it's not certain that you'll be equally or less happy with children.
I think I didn't word the second sentence correctly. I was trying to make the point that having a sense of meaning is not the same as doing reasonably well at improving the world in the ways you care about.
If you wanted to maximize your sense of meaning, you wouldn't object to being wireheaded in a blissful and maximally meaningful cyber-world. I think it's reasonable to say that most people object to such wireheading because they care about their actual impact on the world. At least, they want to appear as if they do.
Certainly not if you're trying to maximize your hedonic happiness. But children do not, on average, increase hedonic happiness; they increase your sense of living a meaningful life. To maximize the actual meaning of your life, you must use estimates of the impact of your decisions; whether or not this affects your perceived sense of meaning depends on how seriously you take moral arguments.
Ditto
I have the Anki iPhone app. Considering the utility and convenience it provides, the price is negligible. For comparison, at a private college, tuition / # of classes ≈ $200 per class, so since I use Anki for schoolwork, it easily pays for itself.
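(Rough, made-up numbers just to show the arithmetic: $40,000 a year in tuition spread over ~200 class meetings comes to about $200 per meeting.)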
If you do any sort of utility calculation for the products you use, convenience will often trump price by orders of magnitude. This is one of those cases.
I made a deck of the list of cognitive biases and the list of fallacies from Wikipedia.
Thank you! I was planning on setting up a system for piano and guitar and I wasn't really sure what would work. This sounds great =]
Normal flashcards should all be equally difficult: as easy as possible. The idea is to break everything down into atomic facts. This prevents you from short-circuiting a difficult card by just memorizing its answer; by memorizing all the parts, you still retain the whole.
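For instance (an example of my own): rather than one card asking "Name the first five planets in order", make atomic cards like "Which planet comes after Mars?" -> "Jupiter". Then no single card can be passed by parroting one memorized blob.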
If you really want to drill one sub-deck, you can choose "cram mode" and select the tag of the cards you want to review.
I don't use Anki for languages, but to learn conjugations of verbs, I would make many example sentences with a "... " where the verb should go. You could ask on #anki or the Google group. Here's a good article on how to make effective flashcards from the inventor of the spaced-repetition algorithm, Piotr Wozniak.
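Something like this, say (a hypothetical card of my own): front "Ayer yo ... al cine. (ir)", back "fui", with one sentence per tense/person combination you want to drill.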
Unconventional decks, like having Anki cards for a whole piano piece or a problem from a textbook, might work, but I haven't tried them... yet. I'll be experimenting with those this coming semester.
Thank you so much for posting this! I use Anki a lot, and your Mysterious Questions deck has been a great help =]
Rational, yes, if other people know of the decision. If you never find out the result of the gamble, are not held responsible, and have your memory wiped, then all confounding interests are removed except the desire for people not to die. Only then are the irrational options actually irrational.
Want to put a time scale on that?
So you wouldn't pay one cent to prevent 3^^^3 people from getting a dust speck in their eye?
Would you pay one cent to prevent a googolplex of people from having a momentary eye irritation?
Torture can be put on a money scale as well: many countries use torture in war, but we don't spend huge amounts of money publicizing and shaming them (which would reduce the amount of torture in the world).
In order to maximize the benefit of spending money, you must weigh sacred values against unsacred ones.
I suspect the answer is "making as much money as I possibly can", and he's doing much better than all of us. He can convert that to other forms of value later.
Safest, but maybe not the only safe way?
Why not make a recursively improving AI, in some strongly typed language, that provably can only interact with the world by printing names of stocks to buy?
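To sketch what I mean (a toy illustration of my own, in Haskell; not anyone's actual proposal, and obviously not a safety proof):

    -- Toy sketch: the AI's only output type is stock tickers.
    newtype Ticker = Ticker String

    data Action = Buy Ticker   -- the sole constructor: no file, network, or shell effects

    type MarketData = [(String, Double)]  -- (symbol, price) pairs
    type AI = MarketData -> [Action]      -- a pure function: data in, picks out

    runAI :: AI
    runAI _ = [Buy (Ticker "EXMPL")]      -- placeholder policy

    main :: IO ()
    main = mapM_ (\(Buy (Ticker t)) -> putStrLn t) (runAI [])

The bet is that if the output type admits nothing but tickers, the type checker itself rules out every other channel to the world.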
How about one that can only make blueprints for star ships?
Is that really a bias? The fact that they are or are not allied with you is information about what they are likely to do.
A priori, as intelligent beings, we expect the universe at our scale to be immensely complex, since it produced us. I don't view our inability to fully explain phenomena at our scale as unreasonable non-effectiveness.
People are being tortured, and it wouldn't take too much money to prevent some of it. Obviously, there is already a price on torture.
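(Made-up numbers to illustrate: if $1,000 of advocacy would prevent one hour of torture somewhere, then spending that $1,000 on anything else implicitly prices the hour at under $1,000.)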
So if someone would pay a penny to prevent a dust speck, they should pick torture even if it were 3^^^^3 people getting dust specks; refusing suggests they never understood 3^^^3 in the first place.
So because there is a continuum between the right answer (lots of torture) and the wrong answer (3^^^3 horribly blinded people), you would rather blind those people?
You're avoiding the question. What if a penny were automatically paid for you each time in the future to avoid a dust speck floating into your eye? The question is whether the dust speck is worth at least a penny of disutility. For me, I would say yes.
I don't see why they should be more valuable. From a selfish perspective, it might feel worse to lose someone you know, but from a charitable perspective, I don't value someone merely because I am familiar with them.
Utilitarianism to the rescue, then.
Yes. We just aren't socially condemned for it.
This is really useful; thanks! I've been using Anki for a little over a year now, and I've found it very useful for classes and learning programming. I really like this application, and I'd love to see any more decks that you happen to make. I'll definitely start my own next time I go back and read through the archives.
I can't make this one. Sorry to bail at the last minute. -- Paul Hobbs
Thank you for this.