Open Thread, February 15-29, 2012
post by OpenThreadGuy · 2012-02-15T06:00:06.833Z · LW · GW · Legacy · 196 comments
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Comments sorted by top scores.
comment by DanielLC · 2012-02-15T07:43:34.367Z · LW(p) · GW(p)
I notice that overconfidence bias and risk aversion seem to operate in opposite directions. Say there's a 90% chance of something being true: overconfidence makes you call it 99% likely, but risk aversion means you'll still only bet at 9 to 1 odds.
Do they tend to cancel? How well?
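Not an answer, but a toy way to ask "how well": the functional forms and parameters below are my own assumptions (an extremizing function for overconfidence, Tversky-Kahneman-style probability weighting for the risk-averse betting side), so this is only a sketch of how one might check whether the two effects offset.

```python
# Toy sketch (my own assumptions, not anything from the comment above):
# how far do overconfidence and risk-averse probability weighting cancel?

def overconfident(p, k=2.0):
    """Extremize a true probability p toward 0 or 1 (k > 1 models overconfidence)."""
    return p**k / (p**k + (1 - p)**k)

def decision_weight(c, gamma=0.61):
    """Tversky-Kahneman (1992) probability weighting: high probabilities get
    underweighted when deciding which odds to accept."""
    return c**gamma / ((c**gamma + (1 - c)**gamma) ** (1 / gamma))

for p in [0.6, 0.7, 0.8, 0.9, 0.95, 0.99]:
    stated = overconfident(p)          # what you *say* the probability is
    betting = decision_weight(stated)  # probability implied by the odds you accept
    print(f"true {p:.2f}  stated {stated:.3f}  betting {betting:.3f}")

# With these (arbitrary) parameters, a 90% event gets stated as ~0.99 but the
# betting weight comes back to ~0.90 -- the two effects roughly cancel near 0.9,
# though not uniformly across the whole probability range.
```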
Replies from: sixes_and_sevens
↑ comment by sixes_and_sevens · 2012-02-15T10:31:01.834Z · LW(p) · GW(p)
A while ago Yvain posted on Prospect Theory, which I think is relevant to your query.
comment by Unnamed · 2012-02-17T01:30:28.191Z · LW(p) · GW(p)
A proposed law to require psychologists who testify in court to dress like wizards:
When a psychologist or psychiatrist testifies during a defendant’s competency hearing, the psychologist or psychiatrist shall wear a cone-shaped hat that is not less than two feet tall. The surface of the hat shall be imprinted with stars and lightning bolts. Additionally, a psychologist or psychiatrist shall be required to don a white beard that is not less than 18 inches in length, and shall punctuate crucial elements of his testimony by stabbing the air with a wand. Whenever a psychologist or psychiatrist provides expert testimony regarding a defendant’s competency, the bailiff shall contemporaneously dim the courtroom lights and administer two strikes to a Chinese gong…
comment by HonoreDB · 2012-02-23T20:59:24.432Z · LW(p) · GW(p)
I had a somewhat chaotic phase in my romantic life a few years ago, and I just had the thought that a lot of it could be modeled as a result of non-transitive preferences. Specifically,
C preferred being single to being with A.
C preferred being with W to being single.
C preferred being with A to being with W.
I think all three of us could have been spared some heartache if we had figured out that was what was going on.
Replies from: Alicorn
comment by [deleted] · 2012-02-15T15:57:50.201Z · LW(p) · GW(p)
Currently listening to the Grace-Hanson podcasts. Topics:
comment by Kaj_Sotala · 2012-02-15T07:27:22.055Z · LW(p) · GW(p)
I'm increasingly noticing that maintaining a specific, regular sleep pattern is worth making sacrifices for. Specifically, if I go to bed around 10:30 PM and get up around 8 AM, I will wake up feeling energetic, productive and physically good. If I get up even a few hours later, or if I go to bed late but still get up at 8 in the morning, there's a very good chance that I will accomplish basically nothing that day. It's strange that getting the timing exactly right seems to be the single biggest factor in how my day will go.
I had noticed this before, but had frequently slipped from it, since most of my social events tend to be in the evenings, and maintaining this sleep pattern while still having a social life was quite hard. But I'm now becoming convinced that those sacrifices are worth making: I'll just have to persuade my friends to be social at earlier times, or look for people who already are.
Replies from: MileyCyrus, AlexSchell
↑ comment by MileyCyrus · 2012-02-15T07:48:20.498Z · LW(p) · GW(p)
Have you tried modafinil?
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2012-02-15T08:09:00.634Z · LW(p) · GW(p)
It's not prescribed in Finland without a special permit from the authorities, and I don't want to take the risk of trying to obtain something that's considered an illegal drug.
Replies from: MileyCyrus
↑ comment by MileyCyrus · 2012-02-15T08:32:06.594Z · LW(p) · GW(p)
My sympathies.
↑ comment by AlexSchell · 2012-02-15T12:12:22.238Z · LW(p) · GW(p)
Do you use an alarm clock? If so, your problem might have less to do with sleep deprivation (which I don't think should cause the sort of acute effects you describe) and more with getting up at the wrong time within a sleep cycle. If you have an iPhone or iPod touch, give Sleep Cycle a try for avoiding this problem. I think there are similar apps for different platforms. If you're not using an alarm clock (or are already using something like Sleep Cycle), I'd be genuinely surprised.
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2012-02-15T14:33:41.550Z · LW(p) · GW(p)
I do use an alarm clock, but after going to bed at the right time for a couple of evenings, I start to wake up on my own, a little before the clock would sound. The alarm clock is just there as a backup, and to let me remain mostly-awake in bed for about 10-20 minutes longer before telling me to actually get up (as opposed to just getting awake).
ETA: I should specify that if I don't go to bed at the right time, I don't wake up naturally - well, I do, but so late that I'll feel groggy and generally unenergetic.
Replies from: AlexSchell
↑ comment by AlexSchell · 2012-02-15T22:39:55.207Z · LW(p) · GW(p)
Hmm, I still don't know if I should be surprised or not, as I'm having trouble parsing your last sentence. When you go to bed late, do you not set your alarm clock? Or do you sleep through your alarm? Or do you wake up naturally (but groggy) right before the alarm goes off?
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2012-02-16T07:25:02.273Z · LW(p) · GW(p)
I have attempted:
A) Going to bed late and setting the alarm at the usual early time
B) Going to bed late and setting the alarm a couple of hours later
C) Going to bed late and not setting an alarm at all
With A, I'll wake to the clock but be groggy. With B I'm not necessarily so groggy, but still not as energetic as I would have been if I'd gone to bed early and woken up early. With C I'll wake up naturally at some late time and feel pretty lethargic.
I was about to say that there are two dimensions here - groggy/neutral/awake and energetic/neutral/lethargic. Very roughly, A leaves me groggy/neutral, B leaves me neutral/neutral and C leaves me neutral/lethargic. But that doesn't sound entirely right, either - all three often also tend to leave me with an extra unspecified uncomfortable feeling that I can't quite put into words, and which might be part of what I'm calling "groggy" or "lethargic" above. (Going to bed on time and getting up early leaves me awake/energetic or at least neutral/energetic, and without that extra uncomfortable feeling.)
comment by lsparrish · 2012-02-16T02:04:12.596Z · LW(p) · GW(p)
Summary: Years of life are in finite supply. It is morally better that these be spread among relatively more people rather than concentrated in the hands of a relative few. Example: Most people would save a young child instead of an old person if forced to choose, and it is not just because the baby has more years left; part of the reason is that it seems unfair for the young child to die sooner than the old person.
The argument would be limited to certain age ranges; an unborn fetus or newborn infant might justly be sacrificed to save a mature person (e.g. a mother) due to the fact that early development represents a costly investment on the part of adults which it is fair for them to expect payoff for (at least for adults who contribute to the rearing of offspring -- which could be indirect, etc.).
I think my rejection of the argument is that I don't think of future humans as objects of moral concern in quite all the same respects that I do for existing humans, even though they qualify in some ways. While I think future beings are entitled to not being tortured, I think they are not (at least not out of fairness with respect to existing humans) entitled to being brought into existence in the first place. Perhaps my reason for thinking this is that most humans that could exist do not, and many (e.g. those who would be in constant pain) probably should not.
On the other hand, I do think it is valuable for there to be people in the future, and this holds even if they can't be continuations of existing humans. (I would assign fairly high utility to a Star Trek kind of universe where all currently living humans are dead from old age or some other unstoppable cause but humanity is surviving.)
Replies from: rwallace, Thrasymachus, None, skepsci, skepsci
↑ comment by rwallace · 2012-02-17T02:31:02.160Z · LW(p) · GW(p)
Example: Most people would save a young child instead of an old person if forced to choose, and it is not just because the baby has more years left; part of the reason is that it seems unfair for the young child to die sooner than the old person.
As far as I'm concerned it is just because the baby has more years left. If I had to choose between a healthy old person with several expected years of happy and productive life left, versus a child who was terminally ill and going to die in a year regardless, I'd save the old person. It is unfair that an innocent person should ever have to die, and unfairness is not diminished merely by afflicting everyone equally.
Replies from: Thrasymachus
↑ comment by Thrasymachus · 2012-02-17T15:39:30.739Z · LW(p) · GW(p)
Suppose the old person and the child (perhaps better: a young adult) would both gain 2 years, so we equalize the payoff. What then? Why not be prioritarian at the margin of aggregate indifference?
Replies from: army1987
↑ comment by A1987dM (army1987) · 2012-02-25T11:10:46.004Z · LW(p) · GW(p)
Well, young adults typically enjoy life more*, so...
* I've heard old people saying they wish they could become young again, but I haven't heard any young people saying they can't wait to become old.
↑ comment by Thrasymachus · 2012-02-16T03:57:42.465Z · LW(p) · GW(p)
Hello there, I'm the guy who wrote the stuff you linked to.
I think it might be worth noting the Rawlsian issue too. If we pretend life is in a finite supply with efficient distribution between persons, then something like "if I extend my life to 10n then 9 other peeps who would have lived n years like me would not" will be true. The problem is this violates norms about what a just outcome is. If I put you and nine others behind a veil of ignorance and offered you an 'everyone gets 80 years' versus 'one of you gets 800, whilst the rest of you get nothing', I think basically everyone would go for everyone getting 80. One of the consequences of that would seem to be expecting whoever 'comes first' in the existence lottery to refrain from life extension to allow subsequent persons to 'have their go'.
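For what it's worth, here is a minimal sketch of that gamble behind the veil (my own illustration, not something from the slides): the two options have identical expected life-years, so the choice turns entirely on risk aversion, i.e. on how concave your utility in life-years is.

```python
# Minimal sketch (assumed utility functions, purely for illustration):
# 'everyone gets 80 years' vs 'one of ten gets 800, the rest get nothing'.
import math

def expected_utility(outcomes, u):
    """outcomes: list of (probability, years) pairs."""
    return sum(p * u(years) for p, years in outcomes)

certain_80 = [(1.0, 80)]
lottery_800 = [(0.1, 800), (0.9, 0)]

for name, u in [("linear", lambda y: y),
                ("sqrt", math.sqrt),
                ("log(1+y)", lambda y: math.log1p(y))]:
    print(name,
          round(expected_utility(certain_80, u), 2),
          round(expected_utility(lottery_800, u), 2))

# Linear utility is indifferent (80 vs 80); sqrt gives ~8.94 vs ~2.83 and
# log(1+y) gives ~4.39 vs ~0.67, so any risk-averse agent behind the veil
# picks the egalitarian outcome.
```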
If you don't buy that future persons are objects of moral concern, then the foregoing won't apply. But I think there are good reasons to treat them as objects of full moral concern (including a 'right'/'interest' in being alive in the first place). It seems weird (given B theory) that temporally remote people count for less, even though we don't think spatial distance is morally salient. Better, we generally intuit that things like a delayed doomsday machine that painlessly euthanizes all intelligent life in a few hundred years would be a very bad thing to build.
If you dislike justice (or future persons), there's a plausible aggregate-only argument (which bears a resemblance to Singer's work). Most things show diminishing marginal returns, and plausibly lifespan will too, at least after the investment period: 20 to 40 is worth more than 40 to 60, etc. If that's true, and lifespan is in finite supply, then we might get more utility by having many smaller lives rather than fewer longer ones suffering diminishing returns. The optimum becomes a tradeoff in minimizing the 'decay' of diminishing returns versus the cost sunk into development of a human being through childhood and adolescence. The optimal lifespan might be longer or shorter than three score and ten, but is unlikely to be really big.
Obviously, there are huge issues over population ethics and the status of future persons, as well as finer grained stuff re. justice across hypothetical individuals. Sadly, I don't have time to elaborate on this stuff before summertime. Happily, I am working on this sort of stuff for an elective in Oxford, so hopefully I'll have something better developed by then!
Replies from: Richard_Kennaway, Kaj_Sotala, lsparrish
↑ comment by Richard_Kennaway · 2012-02-17T14:32:22.970Z · LW(p) · GW(p)
You lose me the moment you introduce the moral premise. Why is it better for two people to each live a million years than one to live two million? This looks superficially the same sort of question as "Why is it better for two people to each have a million dollars than for one to have two million?", but in the latter scenario, one person has two million while the other has nothing. In the lifetimes case, there is no other person. The moral premise presupposes that nonexistent people deserve some of other peoples' existence in the same way that existing paupers deserve some of other peoples' wealth.
You may have an argument to that effect, but I didn't see it in my speed-run through your slides (nice graphic style, BTW, how do you do that?) or in your comment above. Your argument that we place value on future people only considers our desire to avoid calamities falling upon existent future people.
Diminishing returns for longer lifespans is only a problem to be tackled if it happens. The only diminishing returns I see around me for the lifespans we have result from decline in health, not excess of experience.
Replies from: Thrasymachus
↑ comment by Thrasymachus · 2012-02-17T15:59:10.036Z · LW(p) · GW(p)
The nifty program is Prezi.
I didn't particularly fill in the valuing-future-persons argument - in my defence, it is a fairly common view in the literature not to discount future persons, so I just assumed it. If I wanted to provide reasons, I'd point to future calamities (which only seem plausibly really bad if future people have interests or value - although that needn't be on a par with ours), reciprocity across time (in the same way we would want people in the past to weigh our interests equal to theirs when applicable, the same applies to us and our successors), and a similar sort of Rawlsian argument that if we didn't know whether we would live now or in the future, the sort of deal we would strike would be for those currently living (whoever they are) to weigh future interests equal to their own. Elaboration pending one day, I hope!
↑ comment by Kaj_Sotala · 2012-02-16T08:37:03.854Z · LW(p) · GW(p)
I find this argument incoherent, as I reject the idea of a person at the age of 1 being the same person as they are at the age of 800 - or for that matter, the idea of a person at the age of 400 being the same person as they are at the age of 401. In fact, I reject the idea of personal continuity in the first place, at least when looking at "fairness" at such an abstract level. I am not the same person as I was a minute ago, and indeed there are no persons at all, only experience-moments. Therefore there's no inherent difference in whether someone lives 800 years or ten people live 80 years. Both have 800 years' worth of experience-moments.
I do recognize that "fairness" is still a useful abstraction on a societal level, as humans will experience feelings of resentment towards conditions which they perceive as unfair, as unequal outcomes are often associated with lower overall utility, and so forth. But even then, "fairness" is still just a theoretical fiction that's useful for maximizing utility, not something that would have actual moral relevance by itself.
As for the diminishing marginal returns argument, it seems inapplicable. If we're talking about the utility of a life (or a life-year), then the relevant variable would probably be something like happiness, but research on the topic has found age to be unrelated to happiness (see e.g. here), so each year seems to produce roughly the same amount of utility. Thus the marginal returns do not diminish.
Actually, that's only true if we ignore the resources needed to support a person. Childhood and old age are the two periods where people don't manage on their own, and need to be cared for by others. Thus, on a (utility)/(resources invested) basis, childhood and old age produce lower returns. Now life extension would eliminate age-related decline in health, so old people would cease to require more resources. And if people had fewer children, we'd need to invest fewer resources on them as well. So with life extension the marginal returns would be higher than with no life extension. Not only would the average life-year be as good as in the case with no life extension, we could support a larger population, so there would be many more life-years.
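A back-of-the-envelope version of that ratio argument, with entirely made-up numbers for care costs and hedons (nothing below comes from real data; it only illustrates the (utility)/(resources invested) point):

```python
# Made-up numbers, purely to illustrate the ratio argument above: compare
# utility per unit of resources for an 80-year life needing care in childhood
# and old age, versus a life-extended 800-year life where only childhood is a
# dependent period and there is no age-related decline.

def life(years, dependent_years, hedons_per_year=1.0,
         cost_independent=1.0, cost_dependent=3.0):
    """Return (total utility, total resources consumed) for one life."""
    independent_years = years - dependent_years
    utility = hedons_per_year * years
    resources = (cost_dependent * dependent_years
                 + cost_independent * independent_years)
    return utility, resources

# 80-year life: roughly 18 years of childhood plus 12 years of old-age care
u1, r1 = life(80, dependent_years=30)
# 800-year life with life extension: only the childhood years are dependent
u2, r2 = life(800, dependent_years=18)

print(round(u1 / r1, 3), round(u2 / r2, 3))  # utility per unit of resources
# ~0.57 vs ~0.96 with these invented numbers: the life-extended population
# produces more utility per resource, as the comment argues.
```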
One could also make the argument that even if life extension wouldn't reduce the average amount of resources we'd need to support a person, it would still lead to increased population growth. Global trends currently show declining population growth all over the world. Developed countries will be the first ones to have their population drastically reduced (Japan's population began to decrease in 2005), but current projections seem to estimate that the developing world will follow eventually. Sans life extension, the future could easily be one of small populations and small families. With life extension, the future could still be one of small families, but it could be one of much larger populations as population growth would continue regardless. Instead of a planetary population of one billion people living to 80 each, we might have a planetary population of one hundred billion people living to 800 each. That would be no worse than no life extension on the fairness criteria, and much better on the experience-moments criteria.
Replies from: Thrasymachus
↑ comment by Thrasymachus · 2012-02-16T17:44:19.414Z · LW(p) · GW(p)
Hello Kaj,
If you reject both continuity of identity and prioritarianism, then there isn't much left for an argument to appeal to besides aggregate concerns, which lead to a host of empirical questions you outline.
However, if you think you should maximize expected value under normative uncertainty (and you aren't absolutely certain aggregate utility or consequentialism is the only thing that matters), then there might be a motive to revise your beliefs. If the aggregate concerns 'either way' turn out to be a wash between an immortal society and a 'healthy aging but die' society, then the justice/prioritarian concerns I point to might 'tip the balance' in favour of the latter even if you aren't convinced it is the right theory. What I'd hope to show is that something like prioritarianism at the margin of aggregate indifference (i.e. prefer 10 utils to each of 10 people instead of 100 to one and 0 to the other 9) is all that is needed to buy the argument.
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2012-02-16T19:43:46.685Z · LW(p) · GW(p)
If you reject both continuity of identity and prioritarianism, then there isn't much left for an argument to appeal to besides aggregate concerns, which lead to a host of empirical questions you outline.
True, and I probably worded my opening paragraph in an unnecessarily aggressive way, given that premises such as accepting/rejecting continuity aren't really correct or wrong as such. My apologies for that.
If there did exist a choice between two scenarios where the only difference related to your concerns, then I do find it conceivable - though maybe unlikely - that those concerns would tip the balance. But I wouldn't expect such a tight balance to manifest itself in any real-world scenarios. (Of course, one could argue that theoretical ethics shouldn't concern itself too much with worrying about its real world-relevance in the first place. :)
I'd still be curious to hear your opinion about the empirical points I mentioned, though.
Replies from: Thrasymachus
↑ comment by Thrasymachus · 2012-02-17T15:50:11.163Z · LW(p) · GW(p)
I'm not sure what to think about the empirical points.
If there is continuity of personal identity, then we can say that people 'accrue' life, and so there's plausibly diminishing returns. If we dismiss that and talk of experience-moments, then a diminishing-returns argument would have to say something like "experience-moments in 'older' lives are not as good as younger ones". Like you, I can't see any particularly good support for this (although I wouldn't be hugely surprised if it were so). However, we can again play the normative uncertainty card, which just means our expected degree of diminishing returns is attenuated by a factor of P(continuity of identity).
I agree there are 'investment costs' in childhood, and if there are only costs in play, then our aggregate maximizer will want to limit them, and extending lifetime is best. I don't think this cost differs that much, though, between paying it once per 80 years and once per 800 or similar. And if diminishing returns apply to age (see above), then it becomes a tradeoff.
Regardless, there are empirical situations where life extension is strictly win-win: for instance, if we don't have loads of children and so never approach carrying capacity. I suspect this issue will be at most a near-term thing: our posthuman selves will presumably tile the universe optimally. There are a host of countervailing (and counter-countervailing) concerns in the nearer term. I'm not sure how to unpick them.
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2012-02-17T19:30:34.051Z · LW(p) · GW(p)
If there is continuity of personal identity, then we can say that people 'accrue' life, and so there's plausibly diminishing returns.
I'm not sure how this follows, even presuming continuity of personal identity.
If you were running a company, you might get diminishing returns in the number of workers if the extra workers would start to get in each other's way, or the amount of resources needed for administration increased at a faster-than-linear speed. Or if you were planting crops, you might get diminishing returns in the amount of fertilizer you used, since the plants simply could not use more than a certain amount of fertilizer effectively, and might even suffer from there being too much. But while there are various reasons for why you might get diminishing returns in different fields, I can't think of plausible reasons for why any such reason would apply to years of life. Extra years of life do not get in each other's way, and I'm not going to enjoy my 26th year of life less than my 20th simply because I've lived for a longer time.
Replies from: Thrasymachus
↑ comment by Thrasymachus · 2012-02-18T08:36:14.795Z · LW(p) · GW(p)
I was thinking something along the lines that people will generally pick the very best things, ground projects, or whatever to do first, and so as they satisfy those they have to go on to not-quite-so-awesome things, and so on. So although years per se don't 'get in each other's way', how you spend them will.
Obviously there are lots of countervailing concerns too (maybe you get wiser as you age so you can pick even more enjoyable things, etc.)
Replies from: Kaj_Sotala, Ghatanathoah
↑ comment by Kaj_Sotala · 2012-02-18T14:17:54.198Z · LW(p) · GW(p)
That sounds more like diminishing marginal utility than diminishing returns. (E.g. money has diminishing marginal utility because we tend to spend money first on the things that are the most important for us.)
Your hypothesis seems to be implying that humans engage in activities that are essentially "used up" afterwards - once a person has had an awesome time writing a book, they need to move on to something else the next year. This does not seem right: rather, they're more likely to keep writing books. It's true that it will eventually get harder and harder to find even more enjoyable activities, simply because there's an upper limit to how enjoyable an activity can be. But this doesn't lead to diminishing marginal utility: it only means that the marginal utility of life-years stops increasing.
For example, suppose that somebody's 20. At this age they might not know themselves very well, doing some random things that only give them 10 hedons worth of pleasure a year. At age 30, they've figured out that they actually dislike programming but love gardening. They spend all of their available time gardening, so they get 20 hedons worth of pleasure a year. At age 40 they've also figured out that it's fun to ride hot air balloons and watch their gardens from the sky, and the combination of these two activities lets them enjoy 30 hedons worth of pleasure a year. After that, things basically can't get any better, so they'll keep generating 30 hedons a year for the rest of their lives. There's no point at which simply becoming older will deprive them of the enjoyable things that they do, unless of course there is no life extension available, in which case they will eventually lose their ability to do the things that they love. But other than that, there will never be diminishing marginal utility.
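Here is the same toy example as a few lines of code (the hedon schedule is just the illustrative numbers above, not data): the marginal value of an extra year plateaus at 30 hedons but never falls, i.e. returns stop increasing rather than diminish.

```python
# A rough sketch of the numbers in the paragraph above (illustrative only).

def hedons_per_year(age):
    if age < 30:
        return 10   # hasn't figured out what they enjoy yet
    elif age < 40:
        return 20   # discovered gardening
    else:
        return 30   # gardening plus hot air balloons, then plateau

def lifetime_hedons(years, start_age=20):
    return sum(hedons_per_year(a) for a in range(start_age, start_age + years))

print(lifetime_hedons(60) - lifetime_hedons(59))    # marginal year 60: 30 hedons
print(lifetime_hedons(600) - lifetime_hedons(599))  # marginal year 600: still 30
```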
Of course, the above example is a gross oversimplification, since often our ability to do enjoyable things is affected by circumstances beyond our control, and it is likely to go up and down over time. But these effects are effectively random and thus uncorrelated with age, so I'm ignoring them. In any case, for there to be diminishing marginal utility for years of life, people would have to lose the ability to do the things that they enjoy. Currently they only lose it due to age-related decline.
I would also note that your argument for why people would have diminishing marginal utility in years of life doesn't actually seem to depend on whether or not we presume continuity of personal identity. Nor does my response depend on it. (The person at age 30 may be a different person than the one at age 20, but she has still learned from the experiences of her "predecessors".)
↑ comment by Ghatanathoah · 2013-01-11T03:45:44.721Z · LW(p) · GW(p)
I was thinking something along the lines that people will generally pick the very best things, ground projects, or whatever to do first, and so as they satisfy those they have to go on to not-quite-so-awesome things, and so on. So although years per se don't 'get in each other's way', how you spend them will.
If you are arguing that we should let people die and then replace them with new people due to the (strictly hypothetical) diminishing utility they get from longer lives, you should note that this argument could also be used to justify killing and replacing handicapped people. I doubt you intended it that way, but that's how it works out.
To make it more explicit, in a utilitarian calculation there is no important difference between a person whose utility is 5 because they only experienced 5 utility worth of good things, and someone whose utility is 5 because they experienced 10 utility of good things and -5 utility worth of bad things. So a person with a handicap that makes their life difficult would likely rank about the same as a person who is a little bored because they've done the best things already.
You could try to elevate the handicapped person's utility to normal levels instead of killing them. But that would use a lot of resources. The most cost-effective way to generate utility would be to kill them and conceive a new able person to replace them.
And to make things clear, I'm not talking about aborting a fetus that might turn out handicapped, or using gene therapy to avoid having handicapped children. I'm talking about killing a handicapped person who is mentally developed enough to have desires, feelings, and future-directed preferences, and then using the resources that would have gone to support them to conceive a new, more able replacement.
This is obviously the wrong thing to do. Contemplating this has made me realize that "maximize total utility" is a limited rule that only works in "special cases" where the population is unchanging and entities do not differ vastly in their ability to convert resources into utility. Accurate population ethics likely requires some far more complex rules.
Morality should mean caring about people. If your ethics has you constantly hoping you can find a way to kill existing people and replace them with happier ones you've gone wrong somewhere. And yes, depriving someone of life-extension counts as killing them.
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2013-01-11T03:48:57.922Z · LW(p) · GW(p)
Why should morality mean caring about the people who exist now, rather than caring about the people who will exist in a year?
Replies from: Ghatanathoah
↑ comment by Ghatanathoah · 2013-01-11T04:15:04.446Z · LW(p) · GW(p)
Obviously it's morally good to care about people who will exist in a year. The "replacements" that I am discussing are not people who will exist. They are people who will exist if and only if someone else is killed and they are created to replace them.
Now, I think a typical counterargument to the point I just made is to argue that, due to the butterfly effect, any policy made to benefit future people will result in different sperm fertilizing different ova, so the people who benefit from these policies will be different from the people who would have suffered from the lack of them. From this the counterarguer claims that it is acceptable to replace people with other people who will lead better lives.
I don't think this argument holds up. Future people do not yet have any preferences, since they don't exist yet. So it makes sense to, when considering how to best benefit future people, take actions that benefit future people the most, regardless of who those people end up being. Currently existing people, by contrast, already have preferences. They already want to live. You do them a great harm by killing and replacing them. Since a future person does not have preferences yet, you are not harming them if you make a choice that will result in a different future person who has a better life being born instead.
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2013-01-11T15:24:58.362Z · LW(p) · GW(p)
Suppose that a hundred years ago, Sam was considering the possibility of the eventual existence of people like us living lives like ours, and deciding how many resources to devote to increasing the likelihood of that existence.
I'm not positing prophetic abilities here; I don't mean he's peering into a crystal ball and seeing Dave and Ghatanathoah. I mean, rather, that he is considering in a general way the possibility of people who might exist in a century and the sorts of lives they might live and the value of those lives. For simplicity's sake I assume that Sam is very very smart, and his forecasts are generally pretty accurate.
We seem to be in agreement that Sam ought to care about us (as well as the various other hypothetical future people who don't exist in our world). It seems to follow that he ought to be willing to devote resources to us. (My culture sometimes calls this investing in the future, and we at the very least talk as though it were a good thing.)
Agreed?
Since Sam does not have unlimited resources, resources he devotes to that project will tend to be resources that aren't available to other projects, like satisfying the preferences of his neighbors. This isn't necessarily so... it may be, for example, that the best way to benefit you and me is to ensure that our grandparents' preferences were fully satisfied... but it's possible.
Agreed?
And if I'm understanding you correctly, you're saying that if it turns out that devoting resources towards arranging for the existence of our lives does require depriving his neighbors of resources that could be used to satisfy their preferences, it's nevertheless OK -- perhaps even good -- for Sam to devote those resources that way.
Yes?
What's not OK, on your account, is for Sam to harm his neighbors in order to arrange for the existence of our lives, since his neighbors already have preferences and we don't.
Have I understood you so far?
If so, can you clarify the distinction between harming me and diverting resources away from the satisfaction of my preferences, and why the latter is OK but the former is not?
Replies from: Ghatanathoah
↑ comment by Ghatanathoah · 2013-01-13T20:52:26.744Z · LW(p) · GW(p)
Let's imagine that Sam is talking with a family who are planning on having another child. Sam knows, somehow, that if they conceive a child now they will give birth to a girl they will name Alice, and that if they wait a few years they will have a boy named Bob. They have enough money to support one more child and still live reasonably comfortable lives. It seems good for Sam to recommend the family have Alice or Bob, assuming either child will have a worthwhile life.
Sam also knows that the mother currently has an illness that will stunt Alice's growth in utero, so she will be born with a minor disability that will make her life hard, but still very much worth living and worth celebrating. He also knows that if the mother waits a few years her illness will clear up and she will be able to have healthy children who will have lives with all the joys Alice does, but without the problems caused by the disability.
Now, I think we can both agree that Sam should recommend the parents should wait a few years and have Bob. And that he should not at all be bothered at the idea that he is "killing" Alice to create Bob.
Now, let's imagine a second scenario in which the family has already had Alice. And let's say that Alice has grown sufficiently mature that no one will dispute that she is a person with preferences. And her life is a little difficult, but very much worth living and worth celebrating. The mother's illness has now cleared up so that she can have Bob, but again, the family does not have enough money to support another child.
Now, it occurs to Sam that if he kills Alice the family will be able to afford to have Bob. And just to avoid making the family's grief a confounding factor, let's say Sam is friends with Omega, who has offered to erase all the family's memories of Alice.
It seems to me that in this case Sam should not kill Alice. And I think the reason is that in the first hypothetical Alice did not exist, and did not have any preferences about existing or the future. In this hypothetical, however, she does. Bob, by contrast, does not have any preferences yet, so Sam shouldn't worry about "killing" Bob by not killing Alice.
On the other hand, it also seems wrong in the first hypothetical for Sam to recommend the family have neither Bob nor Alice, and just use their money to satisfy the preferences of the existing family members, even though in that case they are not "killing" Bob or Alice either.
What this indicates to me is:
1. It's good for there to be a large number of worthwhile lives in the world, both in the present and in the future. This may be because it is directly valuable, or it may be that it increases certain values that large numbers of worthwhile lives are needed to fulfill, such as diversity, love, friendship, etc.
2. It is good to make sure that the worthwhile lives we create have a high level of utility, both in the present and in the future.
3. We should split our resources between raising people's utility and making sure the world is always full of worthwhile lives. What the exact ratio is would depend on how high the levels of these two values are.
4. When you are choosing between creating two people who do not yet exist, you should pick the one who will have a better life.
5. If you screw up and accidentally create someone whose life isn't as good as some potential people you could create, but is still worth living, you have a duty to take care of them (because they have preferences) and shouldn't kill them and replace them with someone else who will have a better life (because that person doesn't have preferences yet).
6. When determining how to make sure there is a large number of worthwhile lives in the future, it is usually better to extend the life of an existing person than to replace them with a new person (because of point 5).
↑ comment by TheOtherDave · 2013-01-14T02:39:07.031Z · LW(p) · GW(p)
So, I can't quite figure out how to map your response to my earlier comment, so I'm basically going to ignore my earlier comment. If it was actually your intent to reply to my comment and you feel like making the correspondence more explicit, go ahead, but it's not necessary.
WRT your comment in a vacuum: I agree that it's good for lives to produce utility, and I also think it's good for lives to be enjoyable. I agree that it's better to choose for better lives to exist. I don't really care how many lives there are in and of itself, though as you say more lives may be instrumentally useful. I don't know what "worthwhile" means, and whatever it means I don't know why I should be willing to trade off either utility production or enjoyment for a greater number of worthwhile lives. I don't know why the fact that someone has preferences should mean that I have a duty to take care of them.
Replies from: Ghatanathoah
↑ comment by Ghatanathoah · 2013-01-14T05:13:08.967Z · LW(p) · GW(p)
I understand that my previous argument was probably overlong, roundabout, and had some huge inferential differences, so I'll try to be more clear:
I don't know what "worthwhile" means,
A "worthwhile life" is a synonym for the more commonly used term: "life worth living." Basically, it's a life that contains more good than bad. I just used it because I thought it carried the same meaning while sounding slightly less clunky in a sentence.
I don't really care how many lives there are in and of itself, though as you say more lives may be instrumentally useful. ... I don't know why I should be willing to trade off either utility production or enjoyment for a greater number of worthwhile lives.
The idea that it was good for a society to have a large number of distinct worthwhile lives at any given time was something I was considering after contemplating which was better, a society with a diverse population of different people, or a society consisting entirely of brain emulators of the same person. It seemed to me that if the societies had the same population size, and the same level of utility per person, that the diverse society was not just better, but better by far.
It occurred to me that perhaps the reason it seemed that way to me was that having a large number of worthwhile lives and a high level of utility were separate goods. Another possibility that occurred to me was that having a large number of distinct individuals in a society increased the amount of positive goods such as diversity, friendship, love, etc. In a previous discussion you seemed to think this idea had merit.
Thinking about it more, I agree with you that it seems more likely that having a large number of worthwhile lives is probably good because of the positive values (love, diversity, etc) it generates, rather than as some sort of end in itself.
Now, I will try to answer your original question (Why should morality mean caring about the people who exist now, rather than caring about the people who will exist in a year?) in a more succinct manner:
Of course we should care about people who will exist in the future just as much as people who exist now. Temporal separations are just as morally meaningless as spatial ones.
The specific point I was making was not in regards to whether we should care about people who will exist in the future or not. The point I was making was in regards to deciding which specific people will exist in the future.
In the thought experiment I posited there were two choices about who specifically should exist in the future:
(A) Alice, who currently exists in the present, also exists in the future.
(B) Alice, who currently exists in the present, is dead in the future and Bob, who currently doesn't exist, has been created to take her place.
Now, I think we both agree that we should care about whoever actually ends up existing in the future, regardless of whether it is Alice or Bob. My main question is whether (A) or (B) is morally better.
I believe that, all other things being equal, (A) is better than (B). And I also argue that (A) is better even if Bob will live a slightly happier life than Alice. As long as Alice's life is worth living, and she isn't a huge burden on others, (A) is better than (B).
My primary justification for this belief is that since Alice already exists in the present, she has concrete preferences about the future. She wants to live, doesn't want to die, and has goals she wants to accomplish in the future. Bob doesn't exist yet, so he has no such preferences. So I would argue that it is wrong to kill Alice to create Bob, even if Bob's life might be happier than Alice's.
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2013-01-14T14:24:01.730Z · LW(p) · GW(p)
So, consider the following alternative thought experiment:
Alice exists at time T1.
In (A) Alice exists at T2 and in (B) Alice doesn't exist at T2 and Bob does, and Bob is superior to Alice along all the dimensions I care about (e.g., Bob is happier than Alice, or whatever).
Should I prefer (A) or (B)?
This is equivalent to your thought experiment if T1 is the present.
And on your model, the most important factor in answering my question seems to be whether T1 is the present or not... if it is, then I should prefer A; if it isn't, I should prefer B. Yes?
I prefer a moral structure that does not undergo sudden reversals-of-preference like that.
If I prefer B to A if T1 is in the future, and I prefer B to A if T2 is in the past, then I ought to prefer B to A if T1 is in the present as well. The idea that I ought to prefer A to B if (and only if) T1 is the present seems unjustified.
I agree with you, though, that this idea is probably held by most people.
Replies from: Ghatanathoah
↑ comment by Ghatanathoah · 2013-01-14T19:56:23.907Z · LW(p) · GW(p)
And on your model, the most important factor in answering my question seems to be whether T1 is the present or not... if it is, then I should prefer A; if it isn't, I should prefer B. Yes?
No, it doesn't matter when T1 is. All that matters is that Alice exists prior to Bob.
If Omega were to tell me that Alice would definitely exist 1,000 years from now, and then gave me the option of choosing (A) or (B) I would choose (A). Similarly, if Omega told me Alice existed 1,000 years ago in the past and had been killed and replaced by Bob my response would be "That's terrible!" not "Yay!"
Now if T1 is in the future and Omega gave me option (C), which changes the future so that Alice is never created in the first place and Bob is created instead, I would choose (C) over (A). This is because in (C) Alice does not exist prior to Bob, whereas in (A) and (B) she does.
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2013-01-14T22:32:31.953Z · LW(p) · GW(p)
All that matters is that Alice exists prior to Bob.
Ah! OK, correction accepted.
Similarly, if Omega told me Alice existed 1,000 years ago in the past and had been killed and replaced by Bob my response would be "That's terrible!" not "Yay!"
Fair enough. We differ in this respect. Two questions, out of curiosity:
If you were given the option (somehow) of changing the past such that Alice was not replaced by Bob, thereby causing Bob not to have existed, would you take it? (I'm genuinely unsure what you'll say here)
If you knew that the consequence of doing so would be that everyone in the world right now is a little bit worse off, because Alice will have produced less value than Bob in the same amount of time, would that affect your choice? (I expect you to say no, it wouldn't.)
Replies from: Ghatanathoah
↑ comment by Ghatanathoah · 2013-01-15T06:35:37.753Z · LW(p) · GW(p)
If you were given the option (somehow) of changing the past such that Alice was not replaced by Bob, thereby causing Bob not to have existed, would you take it? (I'm genuinely unsure what you'll say here)
You're not the only one who is unsure. I've occasionally pondered the ethics of time travel and they make my head hurt. I'm not entirely sure time travel where it is possible to change the past is a coherent concept (after all, if I change the past so Alice never died, then what motivated present-me to go save her?). If this is the case then any attempt to inject time travel into ethical reasoning would result in nonsense. So it's possible that the crude attempts at answers I am about to give are all nonsensical.
If time travel where you can change the past is a coherent concept then my gut feeling is that maybe it's wrong to go back and change it. This is partly because Bob does exist prior to me making the decision to go back in time, so it might be "killing him" to go back and change history. If he was still alive at the time I was making the decision I'm sure he'd beg me to stop. The larger and more important part is that, due to the butterfly effect, if I went back and changed the past I'd essentially be killing everybody who existed in the present and a ton of people who existed in the past.
This is a large problem with the idea of using time travel to right past wrongs. If you tried to use time travel to stop World War Two, for instance, you would be erasing from existence everyone who had been born between World War Two and the point where you activated your time machine (because WWII affected the birth and conception circumstances of everyone born after it).
So maybe a better way to do this is to imagine one of those time machines that creates a whole new timeline, while allowing the original one to continue existing as a parallel universe. If that is the case then yes, I'd save Alice. But I don't think this is an effective thought experiment either, since in this case we'd get to "have our cake and eat it too," by being able to save Alice without erasing Bob.
So yeah, time travel is something I'm really not sure about the ethics of.
If you knew that the consequence of doing so would be that everyone in the world right now is a little bit worse off, because Alice will have produced less value than Bob in the same amount of time, would that affect your choice? (I expect you to say no, it wouldn't.)
My main argument hasn't been that it's wrong to kill Alice and replace her with Bob, even if Bob is better at producing value for others. It has been that it's wrong to kill Alice and replace her with Bob, even though Bob is better at producing value for himself than Alice is at producing value for herself.
The original argument I was replying to basically argued that it was okay to kill older people and replace them with new people because the older people might have done everything fun already and have a smaller amount of fun to look forward to in the future than a new person. I personally find the factual premise of that argument to be highly questionable (there's plenty of fun if you know where to look), but I believe that it would still be wrong to kill older people even if it were true, for the same reasons that it is wrong to replace Alice with Bob.
If Bob produces a sufficiently greater amount of value for others than Alice then it might be acceptable to replace her with him. For instance, if Bob invents a vaccine for HIV twenty years before anyone would have in a timeline where he didn't exist it would probably be acceptable to kill Alice, if there was no other possible way to create Bob.
That being said, I can still imagine a world where Alice exists being slightly worse for everyone else, even if she produces the same amount of value for others as Bob. For instance, maybe everyone felt sorry for her because of her disabilities and gave her some of their money to make her feel better, money they would have kept if Bob existed. In that case you are right, I would still choose to save Alice and not create Bob.
But if Alice inflicted a sufficiently huge disutility on others, or Bob was sufficiently better at creating utility for others than Alice, I might consider it acceptable to kill her and make Bob. Again, my argument is it's wrong to kill and replace people because they are bad at producing utility for themselves, not that it is wrong to kill and replace people because they are bad at producing utility for others.
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2013-01-15T14:35:18.668Z · LW(p) · GW(p)
My main argument hasn't been that it's wrong to kill Alice and replace her with Bob, even if Bob is better at producing value for others. It has been that it's wrong to kill Alice and replace her with Bob, even though Bob is better at producing value for himself than Alice is at producing value for herself.
Huh. I think I'm even more deeply confused about your position than I thought I was, and that's saying something.
But, OK, if we can agree that replacing Alice with Bob is sometimes worth doing because Bob is more valuable than Alice (or valuable-to-others, if that means something different), then most of my objections to it evaporate. I think we're good.
On a more general note, I'm not really sure how to separate valuable-to-others from valuable-to-self. The examples you give of the latter are things like having fun, but it seems that the moment I decide that Alice having fun is valuable, Alice's fun stops being merely valuable to Alice... it's valuable to me, as well. And if Alice having fun isn't valuable to me, it's not clear why I should care whether she's having fun or not.
Replies from: Ghatanathoah
↑ comment by Ghatanathoah · 2013-01-16T02:13:37.171Z · LW(p) · GW(p)
On a more general note, I'm not really sure how to separate valuable-to-others from valuable-to-self. The examples you give of the latter are things like having fun, but it seems that the moment I decide that Alice having fun is valuable, Alice's fun stops being merely valuable to Alice... it's valuable to me, as well.
You're absolutely right that in real life such divisions are not clear cut, and there is a lot of blurring on the margin. But dividing utility into "utility-to-others" and "utility-to-self" or "self-interest" and "others-interest" is a useful simplifying assumption, even if such categories often blur together in the real world.
Maybe this thought experiment I thought up will make it clearer: Imagine a world where Alice exists, and has a job that benefits lots of other people. For her labors, Alice is given X resources to consume. She gains Y utility from consuming them. Everyone in this world has such a large amount of resources that giving X resources to Alice generates the most utility; everyone else is more satiated than Alice and would get less use out of her allotment of resources if they had them instead.
Bob, if he was created in this world, would do the same highly-beneficial-to-others job that Alice does, and he would do it exactly as well as she did. He would also receive X resources for his labors. The only difference is that Bob would gain 1.1Y utility from consuming those resources instead of Y utility.
In these circumstances I would say that it is wrong to kill Alice to create Bob.
However, if Bob is sufficiently better at his job than Alice, and that job is sufficiently beneficial to everyone else (medical research for example) then it may be good to kill Alice to create Bob, if killing her is the only possible way to do so.
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2013-01-16T02:54:25.275Z · LW(p) · GW(p)
So, as I said before, as long as you're not saying that it's wrong to kill Alice even if doing so leaves everyone better off, then I don't object to your moral assertion.
That said, I remain just as puzzled by your notion of "utility to Alice but not anyone else" as I was before. But, OK, if you just intend it as a simplifying assumption, I can accept it on that basis and leave it there.
↑ comment by lsparrish · 2012-02-17T02:26:02.911Z · LW(p) · GW(p)
I appreciated the level of thought you put into the argument, even though it does not actually convince me to oppose life extension. Thank you for writing (and prezi-ing) it; I look forward to more.
Basically, the hidden difference, if you put me and 9 others behind a veil of ignorance and ask us to decide whether we each get 80 years or one of us gets 800, is that in that case you have 10 people present, competing and trying to avoid being "killed", whereas the choice between creating one 800-year-old and ten 80-year-olds is made without an actual threat being posed to anyone.
While you can establish that the 10 people would anticipate with fear (and hence generate disutility) the prospect of being destroyed / prevented from living, that's not the same as establishing that 9 completely nonexistent people would generate the same disutility even if they never started to exist.
Replies from: Thrasymachus
↑ comment by Thrasymachus · 2012-02-17T15:55:09.890Z · LW(p) · GW(p)
I don't think the thought experiment hinges on any of this. Suppose you were on your own and Omega offered you certainty of 80 years versus a 1/10 chance of 800 and a 9/10 chance of nothing. I'm pretty sure most folks would play it safe.
The addition of people just makes it clear that (granting the rest) a society of future people would want to agree that those who 'live first' should refrain from life extension and let the others 'have their go'.
Replies from: lsparrish
↑ comment by lsparrish · 2012-02-18T01:54:15.238Z · LW(p) · GW(p)
Loss aversion is another thing altogether: if most people choose 80 sure years instead of a 1/10 chance at 800 years, it doesn't necessarily prove that the latter is actually less valuable.
Suppose Omega offers to copy you and let you live out 10 lives simultaneously (or one after another, restoring from the same checkpoint each time) on the condition that each instance dies and is irrecoverably deleted after 80 years. Is that worth more than spending 800 years alive all in one go?
Replies from: Thrasymachus
↑ comment by Thrasymachus · 2012-02-18T08:41:14.995Z · LW(p) · GW(p)
Plausibly, depending on your view of personal identity, yes.
I won't be identical to my copies, and so I think I'd make the same sorts of arguments I've made so far - copies are potential people, and behind a veil of ignorance as to whether I'd be a copy or the genuine article, the collection of people would want to mutually agree that the genuine article picks the former option in Omega's gamble.
(Aside: loss/risk aversion is generally not taken to be altogether different from justice. I mean, the veil of ignorance heuristic specifies a risk-averse agent, and the difference principle seems to be loss averse.)
↑ comment by [deleted] · 2012-02-17T17:14:42.067Z · LW(p) · GW(p)
Glad to see someone using Prezi.
My main contention with the argument is the assumptions it makes about future people. Assuming a society that could carry out life extension on the grand scale talked about in this argument, why is it still assumed that future persons must be considered identical to current ones (who, in the argument, I assume to be the ones capable of taking or forgoing the life extension)?
As has been mentioned, these future people are non-existent. What suggests that they will be or must be part of the equation eventually? It seems less an argument of "would you take 800 for yourself or 80 for you and your children" and more "would you take 800 for yourself and agree not to have children or would you rather have children and risk what comes?"
I know we hold sentimentality for having children (since, you know, it's our primary function and all) but this whole argument seems more the classic "immortal children" problem: how can you fit an infinite person supply in a finite space? And the simplest answer to me seems: until you find a way to increase the space, you limit the supply. Some may not like that idea but if it's a case of existent humans' interests vs non-existent (and possibly never existent) human interests, then I would have to side with the former (myself being one of them makes it much easier for me of course).
↑ comment by skepsci · 2012-02-16T02:30:30.074Z · LW(p) · GW(p)
I noticed an obvious fallacy in the linked argument:
If infinite person-years possible, life extension is amoral.
What? Surely if infinite person-years are possible, it's better for everyone to be immortal than only some, so life extension would be morally preferable, not morally neutral.
Also, why are we assuming the number of person-years lived is independent of the average lifespan? All he exhibited was an upper bound independent of the average lifespan, which is not at all the same thing. If you can't justify the hypothesis that lifespan is a zero-sum game, the entire argument falls apart.
Replies from: lsparrish, skepsci, Thrasymachus
↑ comment by lsparrish · 2012-02-16T03:38:00.261Z · LW(p) · GW(p)
The main argument is that taking years from potential beings and adding them to existing ones is unjust, hence immoral. Given that, depending on the exact shape of the infinite universes scenario, life extension could be moral, amoral, or immoral.
If longer-lived people can reproduce and find new space more quickly than shorter lived people, life extension would be moral. (For example say more experienced people have more motive or ability to create new universes.) However all else being equal (for example, say the limit on reproduction is some unchangeable physical constant that says we cannot make black holes any faster than x, and we have already maxed that out), the fact that shorter lived people are dying and creating spaces for more kids makes that the more moral scenario.
While I agree that this is a flaw in the argument (longer lives can possibly result in more new kids born / new spaces opened than shorter ones), I don't think it is my true rejection of the argument overall, because it is not unreasonable to think the new spaces that can be opened are limited and/or cannot be increased by longer lives. I think the real problem is the idea that one can behave unjustly to a person whose existence is only potential, through the act of taking away their existence.
↑ comment by skepsci · 2012-02-16T03:04:55.431Z · LW(p) · GW(p)
To me, the entire argument sounds like a rationalization for not signing up for cryo.
Signed,
Someone who has rationalized a reason for not signing up yet for cryo, and suspects that the real reason is laziness.
Replies from: Locke↑ comment by Joshua Hobbes (Locke) · 2012-02-16T03:31:21.410Z · LW(p) · GW(p)
So sign the hell up.
↑ comment by Thrasymachus · 2012-02-16T04:07:31.716Z · LW(p) · GW(p)
If there are infinite person-years, then (so long as life is net positive) we have infinite utility, and I can't see obviously whether doling this out to a 'smaller' or 'larger' set of people (although both will have the same cardinality) will matter. But anyway, I don't think anyone really thinks we can wring infinite amounts of life out of the universe.
Total life-time will have some upper bound. So in worlds where we are efficiently filling up lifespan, the choice is between more short-lived people or fewer long-lived people. In the real world for the foreseeable future, that won't quite apply - plausibly, there will be chunks of lifetime that can only be got at by extending your life, and couldn't be had by a future person, so you doing so doesn't deprive anyone else. However, that ain't plausible for an entire society (or a large enough group) extending their lives. Limiting case: if everyone made themselves immortal, they could only add people by increases in carrying capacity.
Replies from: lsparrish↑ comment by lsparrish · 2012-02-16T19:24:09.562Z · LW(p) · GW(p)
If longer-lived people tend to create more spaces to expand into in an infinite universe, and this results in reproduction at a normal or higher rate, that would indicate that longer lives are the more moral scenario, since the disutility of the long-lived people dying would be (relatively) absent from the equation.
If there is a point of diminishing returns on the creation of new people -- perhaps having a trillion lives is less than 1000 times as valuable (including in the sense of "justice") as having a billion lives in existence at a given time -- life extension could be more efficient at producing valuable life years and hence more moral.
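(To illustrate with an invented value function: if the value of $N$ simultaneous lives were $V(N) = N^{0.9}$, then a trillion lives would be worth only $1000^{0.9} \approx 500$ times as much as a billion rather than 1000 times as much, in which case packing the same person-years into fewer, longer lives loses less value than a straight person-count would suggest.)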
Life might grow less worth living over time (Note: excluded for sake of argument from your prezi), but it might also grow more worth living over time. These are not mutually exclusive: an evil dictator might produce more negative utility by being in power for a long time whereas a scientist or diplomat might produce larger amounts of positive utility by living longer. There could be internalized examples of these as well -- a person whose pain grows with each passing year and has to live with the memories thereof, or a person who falls more in love with their spouse or some such thing over time.
However I tend to think there would be selection effects in favor of the positive cases and against the negative ones -- suicide and assassination, for example -- so I don't much fear the negative cases being the long term trend. Rather I think longer lived people (all else equal, including health) produce more positive utility per unit of time than shorter lived ones.
↑ comment by skepsci · 2012-02-16T02:23:53.040Z · LW(p) · GW(p)
I noticed an obvious fallacy:
If infinite person-years possible, life extension is amoral.
What? Surely if infinite person-years are possible, it's better for everyone to be immortal than only some, so life extension would be morally preferable, not morally neutral.
Also, why are we assuming the number of person-years lived is independent of the average lifespan? All he exhibited was an upper bound independent of the average lifespan, which is not at all the same thing.
comment by ZankerH · 2012-02-15T10:06:55.338Z · LW(p) · GW(p)
When working on a primarily mental task (example: web browsing, studying, programming), I sometimes find myself coming up with an idea, forgetting the idea itself, but remembering I have come up with it. Backtracking through the mental steps may help recall it, but often I'll not be able to recall it at all, ending in frustration. Is there a technical term for this I can google / does anyone have an idea what this is?
Replies from: Metus, John_Maxwell_IV↑ comment by Metus · 2012-02-15T16:07:12.659Z · LW(p) · GW(p)
I would also be interested in research regarding this topic. I "suffer" from a similar phenomenon. The most annoying part is that I am unable to judge if it was a good or bad idea I forgot. Also, this phenomenon occurs more often if I am tired.
Replies from: ZankerH↑ comment by ZankerH · 2012-02-15T20:31:20.247Z · LW(p) · GW(p)
The most annoying part is that I am unable to judge if it was a good or bad idea I forgot.
Anecdote: Discussing this with a particularly non-rational acquaintance, they remarked that I'm likely subconsciously discarding horrible ideas and preventing myself from coming up with them again, and that I'm therefore the better for it.
Replies from: None↑ comment by [deleted] · 2012-02-17T17:23:40.258Z · LW(p) · GW(p)
I've had the same thing occur to me many times, especially once I went into college. However, I did an experiment that might help shed some light on the issue for you.
I attempted to brute force my way through the problem. I kept pens and note pads on hand, specifically sticky notes. When I had any idea I felt worth keeping, I'd jot it down on the spot. No context (so I wouldn't write down what I was doing or where I was), just the idea itself. I soon collected a wall of sticky notes (it became quite infamous in the dorms) full of these ideas. I still have them all, in a notebook full of card stock, organized by type.
The problem I find, going back over the many different ideas, is that, on the whole, the ideas have lost any inspiration they once had. Looking over them, I see the ideas as either a.) common knowledge (meaning the idea was probably new at the time but since then I've just grown used to it through other routes of knowledge) or b.) trite and even childish.
So, if it helps, it would seem that your friend may be onto something as, for the most part, my wall of ideas serves either as reminders of things I already know or of things that don't matter.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-02-19T08:06:41.889Z · LW(p) · GW(p)
I read somewhere that furiously writing down everything you were thinking about is a good way to dredge up forgotten thoughts, and it sometimes works for me.
comment by A1987dM (army1987) · 2012-02-25T10:58:33.095Z · LW(p) · GW(p)
I've just seen the Wikipedia article for the ‘overwhelming gain paradox’:
Harford illustrates the paradox by the comparison of three potential job offers:
- In Job 1, you will be paid $100, and if you work hard you will be paid $200.
- In Job 2, you will be paid $100, and if you work hard you will have a 1% chance of being paid $200.
- In Job 3, you will be paid $100, and if you work hard you will have a 1% chance of being paid $1billion.
Most people will state that they will choose to work hard in jobs 1 and 3, but not job 2 [2]. In Job 1, working hard is obvious because there is a clear reward for doing so. In Job 2, it seems a bad choice because the likelihood of a reward is so low. But in Job 3, working hard becomes the preferable choice, because the potential gain is so overwhelming that any chance - no matter how small - of obtaining it is seen as worthwhile. This appears irrational and paradoxical, because jobs 2 and 3 are identical 99% of the time.
Why the hell would anyone consider that a paradox? ISTM that it is completely reasonable for a utility function to be such that the disutility of working harder would be exceeded both by the utility of an extra $100 and by 0.01 times the utility of an extra $999,999,900, but not by 0.01 times the utility of an extra $100. (If anything, I would consider anything else to be paradoxical, for people for whom the disutility of working at all is exceeded by the utility of getting $100 in the first place.)
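(A minimal worked example with invented numbers: write $c$ for the disutility of working hard, $a$ for the utility of an extra \$100, and $b$ for the utility of an extra \$999,999,900. The stated pattern of choices only requires $0.01a < c < \min(a, 0.01b)$; for instance $a = 1$, $b = 10{,}000$, $c = 0.5$ gives $1 > 0.5$ (work hard in Job 1), $0.01 < 0.5$ (don't in Job 2) and $100 > 0.5$ (work hard in Job 3), with no inconsistency anywhere.)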
comment by MBlume · 2012-02-19T22:02:13.041Z · LW(p) · GW(p)
Ever feel like you contribute nothing to society? Well, it's time to consider volunteering!
comment by ArisKatsaris · 2012-02-15T16:50:44.657Z · LW(p) · GW(p)
Can't an AI escape the dangers of Pascal's Mugging by having a decision theory that weighs against exploitable decision theories in proportion to their exploitability?
Replies from: HonoreDB↑ comment by HonoreDB · 2012-02-15T18:41:54.619Z · LW(p) · GW(p)
The dangers pointed to by the thought experiment aren't restricted to exploitation by an outside entity. An AI should be able to safely consider the hypothesis "If I don't destroy my future light cone, 3^^^3 people outside the universe will be killed" regardless of where the hypothesis came from.
But even if we're just worried about mugging, how could you possibly weight it enough? Even if paying once doomed me to spend the rest of my life paying $5 to muggers, the utility calculation still works out the same way.
Replies from: ArisKatsaris, TheOtherDave↑ comment by ArisKatsaris · 2012-02-15T21:03:27.635Z · LW(p) · GW(p)
But even if we're just worried about mugging, how could you possibly weight it enough? Even if paying once doomed me to spend the rest of my life paying $5 to muggers, the utility calculation still works out the same way.
My idea is as follows:
Mugger: Give me 5 dollars, or I'll torture 3^^^3 sentient people across the omniverse using my undetectable magical powers.
AI: If I make my decision on this and similar trades based on a decision process DP0 that compares disutility(3^^^3 tortures) × P(you're telling the truth) against disutility(giving you 5 dollars), then even if you're telling the truth, a different malicious agent may merely name a threat that involves 3^^^^3 tortures, and thus make me cause a vastly greater amount of disutility in his service. Indeed, there's no upper bound to the disutility such a hypothetical agent may claim he will cause, and therefore surrendering to such demands means a likewise unbounded exploitation potential. Therefore I will not use the decision process DP0, and will instead utilize some different decision process (like "Never surrender to blackmail" or "Always demand proportional evidence before considering sufficiently extraordinary claims").
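A toy sketch of the exploit being described (my own illustration with made-up numbers, not anything from the comment): under DP0, however small the probability assigned to a claim, a liar can always outbid it by inflating the claimed stakes.

```python
def dp0_pays(p_claim_true, claimed_disutility, disutility_of_paying=5.0):
    """DP0 as described above: pay iff the expected disutility of refusing
    (probability of the claim times the claimed disutility) exceeds the
    certain disutility of paying."""
    return p_claim_true * claimed_disutility > disutility_of_paying

# The same vanishing probability, two different claimed stakes:
print(dp0_pays(1e-30, 1e6))    # False: an ordinary-sized threat gets ignored
print(dp0_pays(1e-30, 1e300))  # True: a sufficiently inflated threat wins anyway
```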
↑ comment by endoself · 2012-02-15T21:40:45.539Z · LW(p) · GW(p)
Saving 3^^^^3 people is more than worth a bit of vulnerability to blackmail. If 3^^^^3 people are in danger, the AI wishes to believe 3^^^^3 people are in danger and in that case "never surrender to blackmail" is a strictly worse strategy.
Also, DP0 isn't even a coherent decision process. The expected utilities will fail to converge if "there's no upper bound to the disutility such a hypothetical agent may claim" and these claims are interpreted with some standard assumptions, so the agent has no way of even comparing expected utilities of actions.
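(A minimal illustration of the convergence worry, with made-up numbers: if the $n$-th threat hypothesis is assigned prior probability proportional to $2^{-n}$ but claims disutility $3^n$, the expected-disutility sum behaves like $\sum_n (3/2)^n$, which diverges, so the agent cannot even compare the expected utilities of paying and refusing.)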
Replies from: ArisKatsaris↑ comment by ArisKatsaris · 2012-02-15T21:58:25.081Z · LW(p) · GW(p)
If 3^^^^3 people are in danger, the AI wishes to believe 3^^^^3 people are in danger
This isn't about beliefs, this is about decisions. The process of epistemic rationality needn't be modified, only the process of instrumental rationality. Regardless of how much probability the AI assigns to the danger to 3^^^^3 people, it needn't be the right choice to decide based on a mere probability of such danger multiplied by the disutility of the harm done.
Saving 3^^^^3 people is more than worth a bit of vulnerability to blackmail. If 3^^^^3 people are in danger, the AI wishes to believe 3^^^^3 people are in danger and in that case "never surrender to blackmail" is a strictly worse strategy.
Unless having the decision process that surrenders to blackmail and being known to have it is what will put these people in danger in the first place. In that case, either you modify your decision process so that you precommit to not surrender to blackmail and prove it to other people in advance, or pretend to not surrender and submit to individual blackmails if enough secrecy of such submission can be ensured so that future agents won't be likely to be encouraged to blackmail.
But this was just an example of an alternate decision theory, e.g. one that had hardwired exceptions against blackmail. I'm not actually saying it need be anything as absolute or simple as that -- if it were as simple as that I'd have solved the Pascal's Mugger problem by saying "TDT plus don't submit to blackmail" instead of saying "weigh against your decision process by a factor proportional to its exploitability potential".
Replies from: endoself↑ comment by endoself · 2012-02-15T23:31:12.592Z · LW(p) · GW(p)
We seem to be thinking of slightly different problems. I wasn't thinking of the mugger's decision to blackmail you as dependent on their estimate that you will give in. There are possible muggers who will blackmail you regardless of your decision theory, and refusing to submit to blackmail would cause them to produce large negative utilities.
Replies from: ArisKatsaris↑ comment by ArisKatsaris · 2012-02-15T23:40:12.825Z · LW(p) · GW(p)
And as I said, my example about a blanket refusal to submit to blackmail was just an example. My more general point is to evaluate the expected utility of your decision theory itself, not just the individual decision.
Replies from: endoself↑ comment by endoself · 2012-02-16T00:52:19.828Z · LW(p) · GW(p)
In the situation I presented, the decision theory had no effect on the utility other than through its effect on the choice. In that case, the expected utility of the decision theory and the expected utility of the choice reduce to the same thing, so your proposal doesn't seem to help. Do you agree with that, or am I misapplying the idea somehow?
Replies from: ArisKatsaris↑ comment by ArisKatsaris · 2012-02-17T02:03:41.744Z · LW(p) · GW(p)
I'm not sure that they reduce to the same thing. In e.g. Newcomb's problem, if you reduce your two options to "P(full box A) × U(full box A)" versus "P(full box A) × U(full box A) + U(full box B)", where U(x) is the utility of x, then you end up two-boxing; that's causal decision theory.
It's only when you consider the utility of different decision theories that you end up one-boxing, because then you're effectively considering U(any decision theory in which I one-box) vs U(any decision theory in which I two-box) and you see that the expected utility of one-boxing decision theories is greater.
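(Back-of-the-envelope, with the usual illustrative payoffs rather than anything from the comment: let the opaque box hold \$1,000,000 when predicted full, the transparent box \$1,000, and let the predictor be right with probability $p$. At the policy level, $EU(\text{one-boxing}) = p \cdot 1{,}000{,}000$ while $EU(\text{two-boxing}) = (1-p) \cdot 1{,}000{,}000 + 1{,}000$; for $p = 0.99$ that is \$990,000 versus \$11,000, which is the sense in which one-boxing decision theories come out ahead.)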
In Pascal's mugging... again I don't have the math to do this (or it would have been a discussion post, not an open-thread comment), but my intuition tells me that a decision theory that submits to it is effectively a decision theory that allows its agent to be overwritten by the simplest liar there is, and therefore of total negative utility. The mugger can add up-arrows until he has concentrated enough disutility in his threat to ask the AI to submit to his every whim and conquer the world on the mugger's behalf, etc...
Replies from: endoself↑ comment by endoself · 2012-02-18T20:33:18.587Z · LW(p) · GW(p)
If the adversary does not take into account your decision theory in any way before choosing to blackmail you, U(any decision theory where I pay if I am blackmailed) = U(pay) and U(any decision theory where I refuse to pay if I am blackmailed) = U(refuse), since I will certainly be blackmailed no matter what my decision theory is, so what situation I am in has absolutely no counterfactual dependence on my action.
a decision theory that submits to it is effectively a decision theory that allows its agent to be overwritten by the simplest liar there is
The truth of this statement is very hard to analyze, since it is effectively a statement about the entire space of possible decision theories. Right now, I am not aware of any decision theory that can be made to overwrite itself completely just by promising it more utility or threatening it with less. Perhaps you can sketch one for me, but I can't figure out how to make one without using an unbounded utility function, which wouldn't give a coherent decision agent using current techniques as per the paper that I linked a few comments up.
Anyway, I don't really have a counter-intuition about what is going wrong with agents that give into Pascal's mugging. Everything gets incoherent very quickly, but I am utterly confused about what should be done instead.
That said, if an agent would take the mugger's threat seriously under a naive decision theory and that disutility is more than the disutility of being exploitable by arbitrary muggers, decision-theoretic concerns do not make the latter disutility greater in any way. The point of UDT-like reasoning is that "what counterfactually would have happened if you decided differently" means more than just the naive causal interpretation would indicate. If you precommit to not pay a mugger, the mugger (who is familiar with your decision process) won't go to the effort of mugging you for no gain. If you precommit not to find shelter in a blizzard, the blizzard still kills you.
↑ comment by thomblake · 2012-02-15T21:18:00.599Z · LW(p) · GW(p)
So the AI is not an expected utility maximizer?
If it is not, then what is it? If it is, then what calculations did it use to reach the above decision - what were the assigned probabilities to the scenarios mentioned?
Replies from: ArisKatsaris↑ comment by ArisKatsaris · 2012-02-15T21:31:55.024Z · LW(p) · GW(p)
So the AI is not an expected utility maximizer?
It's an expected utility maximizer, but it considers the expected utility of its decision process, not just the expected utility of individual decisions. In a world where there exist more known liars than known superhuman entities, and any liar can claim superhuman powers, any decision process that allows them to exploit you is of negative expected utility.
It's like the professor in the example who agrees to accept a late essay that was delayed because of a grandmother's death, because this is a valid reason that will largely not be exploited, but not one delayed because "I wanted to watch my favorite team play", because lots of other students would be able to use the same excuse. The professor isn't just considering the individual decision, but whether the decision process would be of negative utility in a more general manner.
Replies from: thomblake↑ comment by thomblake · 2012-02-15T23:54:33.470Z · LW(p) · GW(p)
It seems to me that you run into the mathematical problem again when trying to calculate the expected utility of its decision process. Some of the outcomes of the decision process are associated with utilities of 3^^^3.
Replies from: ArisKatsaris↑ comment by ArisKatsaris · 2012-02-16T00:10:21.231Z · LW(p) · GW(p)
It seems to me that you run into the mathematical problem again when trying to calculate the expected utility of its decision process. Some of the outcomes of the decision process are associated with utilities of 3^^^3.
Perhaps. I don't have the math to see how the whole calculation would go.
But it seems to me that the utility of 3^^^3 is associated with a particular execution instance. However, when evaluating the decision process as a whole (not the individual decision), the 3^^^3 utility mentioned by the mugger doesn't have a privileged position over the hypothetical malicious/lying individuals who can just as easily talk about utilities or disutilities of 3^^^^3 or 3^^^^^3, or even have their signs reversed (so that they torture people if you submit to their demands, despite their claims to the opposite).
So the result should ideally be a different decision process that is able to reject unsubstantiated claims by potentially-lying individuals completely, instead of just trying to fudge the "Probability" of the truth-value of the claim, or the calculated utility if the claim is true.
↑ comment by mwengler · 2012-02-16T18:37:44.449Z · LW(p) · GW(p)
Give me $5 or I will torture 3^^^^3 sentient people across the omniverse for 1,000 years each and then kill them, using my undetectable magical powers. You can pay me by paypal to mwengler@gmail.com. Unless 20 people respond (or the integrated total I receive reaches $100), I will carry out the torture.
Now you may think I am making the above statement to make a point. Indeed it seems probable, but what if I am not? How do you weigh the very finite probability that I mean it against 3^^^^3 sentient lives?
I feel confident that the amount of money I receive by paypal will be a more meaningful statement about what people really think of
(infinitesimal probability) * (nearly infinite evil) = well over $5 worth of utilons
Do others agree? Or do they think these comments, which cost nothing but another 15 minutes away from reading a different post, are what really mean something?
Replies from: ArisKatsaris↑ comment by ArisKatsaris · 2012-02-17T01:46:54.400Z · LW(p) · GW(p)
The issue is how to program a decision theory (or meta-decision theory, perhaps) that doesn't fall victim to Pascal's mugging and similar scenarios, not to show that humans mostly don't fall victim to it.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2012-02-17T04:47:06.526Z · LW(p) · GW(p)
However, it's probably worth figuring out what processes people use which cause them to not be very vulnerable to Pascal's Mugging.
Or is it just that people aren't vulnerable to Pascal's Mugging unless they're mentally set up for it? People will sometimes give up large amounts of personal value to prevent small or dubious amounts of damage if their religion or government tells them to.
Replies from: mwengler↑ comment by mwengler · 2012-02-17T15:04:59.623Z · LW(p) · GW(p)
I think there is not enough discussion of the quality of information. Conscious beings tell you things to increase their utility functions, not to inform you. Magicians trick you on purpose and (most of us) realize that, and they are not even above human intelligence. Scammers scam us. Well-meaning idiots sell us vitamins and minerals, and my sister just asked me about spending a few thousand dollars on a red-light laser to increase her well-being!
As for the whole one-box vs two-box thing: if someone claiming to be a brilliant alien had pulled this off 100 times and was now checking in with me, I would find it much more believable that they were a talented scam artist than that they could do calculations to make predictions that required a ^ to express relative to any calculations we now know can be done.
Real intelligences don't believe anywhere near everything they hear. And they STILL are gullible.
↑ comment by TheOtherDave · 2012-02-15T20:54:03.326Z · LW(p) · GW(p)
I agree with your first paragraph, but I'm not convinced of your second paragraph... at least, if you intend it as a rhetorical way of asserting that there is no possible way to weight the evidence properly. It's just another proposition; there's evidence for and against it.
I think we get confused here because we start with our bottom line already written.
I "know" that the EV of destroying my light cone is negative. But theory seems to indicate that, when assigning a confidence interval P1 to the statement "Destroying my future light cone will preserve 3^^^3 extra-universal people" (hereafter, statement S1), a well-calibrated inference engine might assign P1 such that the EV of destroying my light cone is positive. So I become anxious, and I try to alter the theory so that the resulting P1s are aligned with my pre-existing "knowledge" that the EV of destroying my light cone is negative.
Ultimately, I have to ask what I trust more: the "knowledge" produced by the poorly calibrated inference engine that is my brain, or the "knowledge" produced by the well-calibrated inference engine I built? If I trust the inference engine, then I should trust the inference engine.
comment by Grognor · 2012-02-16T08:14:50.970Z · LW(p) · GW(p)
Scumbag brain is a newish meme of the generic image macro variety. Some are pretty entertaining and relevant to the LW ideaspace, but most are lowest common denominator-style "broke up with girlfriend, makes you feel sad about it for weeks".
comment by Emile · 2012-02-16T20:28:50.989Z · LW(p) · GW(p)
Since there seem to be quite a few lesswrongers involved in making games, or interested in doing it as a hobby, I just created a little mailing-list for general chat - talk about your projects, rant about design theory, ask for advice, talk about how to apply lesswrong ideas to game development, talk about how to apply game development ideas to lesswrong's goals, etc.
comment by [deleted] · 2012-02-27T18:10:01.492Z · LW(p) · GW(p)
I've recently figured out an all too obvious workaround for the vanishing spaces bug. Considering links, italics and bold basically cover 95% of all formatting needs, I think some people may find use for it (it has cured my distaste for writing articles on LW).
1) Write a comment or PM in Markdown syntax. Post the thing.
2) Select the text and copy it straight into the WYSIWYG editor
3) Delete the original post or PM.
It is such an obvious solution, yet I didn't think of it for months.
Replies from: dbaupp
comment by Grognor · 2012-02-18T00:03:06.154Z · LW(p) · GW(p)
I'm trying to keep a dream journal, but when I wake up I keep having this cognitive block preventing me from writing my dreams down. It will do anything necessary to prevent me from writing my dreams down. I regret this later every single time. Does anyone know how to prevent this? I don't think I can do it at that time, so it probably has to be something done beforehand, as I go to bed.
Replies from: Alicorn, Douglas_Knight, JGWeissman↑ comment by Douglas_Knight · 2012-02-22T22:02:47.250Z · LW(p) · GW(p)
I kept a dream journal for about 5 years. I think it (temporarily) increased recall of dreams. The most interesting thing I observed was that the recorded dreams were seasonally concentrated.
↑ comment by JGWeissman · 2012-02-18T00:14:48.287Z · LW(p) · GW(p)
What kind of cognitive block? Do you not know what to write? Do you not think about recording your dream at the appropriate time? Do you feel like writing about your dream would be a bad thing?
Replies from: Grognor↑ comment by Grognor · 2012-02-18T00:29:45.021Z · LW(p) · GW(p)
The last one, sort of. It usually takes the form of, "You don't want THAT to be in your dream log, do you? You'd better skip it just this once. It's okay, you'll write down the next one. That dream sucked anyway, and you're already forgetting it besides. Also don't you have better things to do?"
All, of course, with the low-level realization that I know all of this is bullshit but I obey it anyway.
comment by [deleted] · 2012-02-15T08:47:35.549Z · LW(p) · GW(p)
Do con-artistry and the Dark Arts share similar strategies? If so, any in particular?
Replies from: billswift↑ comment by billswift · 2012-02-15T10:18:14.280Z · LW(p) · GW(p)
They use the same strategies, only the goals are (or at least can be) different. For a good overview, see Robert Greene's The 48 Laws of Power.
Replies from: MileyCyrus↑ comment by MileyCyrus · 2012-02-15T17:09:13.994Z · LW(p) · GW(p)
Counterpoint: 48 Laws reads like cheap astrology.
Replies from: faul_sname↑ comment by faul_sname · 2012-02-15T21:22:13.187Z · LW(p) · GW(p)
There's non-cheap astrology?
Replies from: J_Taylor
comment by kdorian · 2012-02-19T01:26:32.526Z · LW(p) · GW(p)
Are there any guidelines, or does anyone have any significant thoughts, about mentioning Less Wrong in text in fanfiction (or any other type of fiction)? I know a lot of people came here by way of HP:MoR, myself included, but I'm interested if anyone has reasons that they believe it would be a bad idea, or an especially good one.
comment by [deleted] · 2012-02-15T11:32:10.083Z · LW(p) · GW(p)
Caring about conscious minds where you can't observe them existing carries basically the same philosophical problems as caring about pretty statues (and other otherwise desirable or undesirable arrangements of matter) where you can't observe them.
Agree or disagree?
Replies from: Grognor, Viliam_Bur↑ comment by Viliam_Bur · 2012-02-15T12:17:25.181Z · LW(p) · GW(p)
Even if you can't observe them, can you somehow logically infer their existence, and can you influence them? If not, then thinking about them is just wasting time.
It becomes a problem only if you cannot observe them, but you can influence them, and despite lack of observation you can make at least some probabilistic estimates about the effect of your influence.
comment by Alicorn · 2012-02-15T07:00:06.112Z · LW(p) · GW(p)
What does the outside view say about when during the course of a relationship it is wisest to get engaged (in terms of subsequent marital longevity/quality)? Data that doesn't just turn up obvious correlations with religious groups who forbid divorce is especially useful.
Replies from: moridinamael, None, J_Taylor, shminux, MileyCyrus↑ comment by moridinamael · 2012-02-16T00:41:04.998Z · LW(p) · GW(p)
I proposed about two months ago; I'm getting married this coming Sunday. I mention this to qualify the following advice/input.
The process of getting engaged and getting married may seem (to some) like a stupid, defunct, irrelevant process for unevolved, unenlightened, hidebound ape-descendants. I propose that this is a naive view of the situation, and that the process of engagement and marriage, having existed for a long time, in many cultures, and being actually a relatively evolved and functional procedure, constitutes a very instrumentally rational process to undertake for any sufficiently interested couple.
The members of a relationship are likely to have very different implicit expectations with regard to
- when it's appropriate to get engaged
- when it's appropriate to get married (after getting engaged)
- what marriage actually "means"
- what constitutes an appropriately-sized wedding
- the importance of and timing of having children
- the importance of family, e.g. how much continuing parental involvement is welcome
- finances, debt, and standard of living
- what actions would constitute a violation of trust
- etc.
Both partners will likely have a largely unexamined implicit life-plan with various unstated assumptions about all of these issues, and more. Some of these things will simply not come up until you start talking seriously about commitment. Furthermore, you may not really start talking seriously about commitment until after you are engaged. Even if you thought you had been serious before. When one goes through this process of public commitment, the process of social reinforcement makes real the commitment in a sense that is almost impossible to internalize without such peer recognition.
All of these things can come up regardless of how "rational" both partners happen to be. Konkvistador elsewhere in this comment thread asked
Why would anyone make a lifetime commitment?
If you want children, and you foresee yourself having a lot of complex values relating to the well-being of the children, it is useful to obtain such a commitment, even if you know that any commitment can technically be broken. It is also useful to state this commitment in front of a crowd of your friends and family, because this essentially makes your relationship with that person a "legitimate" one, entitling you to all kinds of social privileges and powers and higher status within your social sphere. If you are a human, you automatically care about these things.
↑ comment by [deleted] · 2012-02-15T08:00:06.575Z · LW(p) · GW(p)
Why would anyone want to get engaged? But I do second the request for this data.
Edit: Removed "in the world "
Replies from: NancyLebovitz, MileyCyrus, Alicorn↑ comment by NancyLebovitz · 2012-02-15T11:06:50.126Z · LW(p) · GW(p)
"Why in the world would anyone [X]?" comes off as starting with a strong opinion that [X] is a bad idea, rather than actually asking for information about motives.
Replies from: None↑ comment by [deleted] · 2012-02-15T11:27:05.348Z · LW(p) · GW(p)
Better?
In any case, as we discussed below, my original interpretation was that this is about the general desirability of [X]. I also obviously implied I've heard strong reasons against [X] but few convincing ones in its favour.
Replies from: CharlieSheen, NancyLebovitz, Richard_Kennaway↑ comment by CharlieSheen · 2012-02-15T13:42:56.556Z · LW(p) · GW(p)
This whole conversation was such a cliché.
Woman: Yay I want to get married with the man I love! Does anyone have any advice?
Man: Marriage is a bad idea. I can't see why anyone would want that.
Woman: I'm allowed to want things! You are being mean.
Man: Don't try and chain the poor guy with whom I suddenly identify!
Woman: I hate you and my fear of instability and falling out of love that you now represent! I want to wear a wedding dress and a pretty ring on my hand!
Man: I'm sorry.
Woman: Apology accepted.
Replies from: Alicorn, GLaDOS↑ comment by GLaDOS · 2012-02-15T14:01:27.660Z · LW(p) · GW(p)
I find this sexist! But true.
In any case it was sweet sweet drama.(^_^)
↑ comment by NancyLebovitz · 2012-02-15T12:10:53.136Z · LW(p) · GW(p)
It's better.
I would say that "I'm surprised that you're planning on [X], considering [list of drawbacks]" would work at least as well.
I was surprised at Alicorn (who's generally a calm poster) saying that she was allowed to want things. It seemed weirdly out of line with the discussion. When I saw the beginning of the thread again, "why in the world" jumped out at me as aggressive.
Something that's showing more clearly to me on another reread is that you genuinely didn't see what you might have done that was problematic.
I'm wondering if there's something odd going on at your end-- I don't think you usually misread things the way you misread Alicorn's original request.
Replies from: None↑ comment by [deleted] · 2012-02-15T13:04:12.978Z · LW(p) · GW(p)
It could be a cultural or language barrier; the same phrase "why in the world would you X" has a literal Slovenian equivalent that, I now think, carries very different connotations: much more surprise and much less disapproval than in English.
This phrase might have set the conversation off on the wrong foot, since later on seemingly unprovoked hostility and evasiveness may have caused me to respond by hardening up and even escalating.
It is also possible that, since I have recently had irl discussions regarding marriage, I may have just thrown out some arguments at Alicorn that were originally crafted for someone else. If that was the case then we both became pretty emotional in the discussion because of its relevance to our personal lives. :/
↑ comment by Richard_Kennaway · 2012-02-15T13:18:34.538Z · LW(p) · GW(p)
Better?
No.
Taking out "in the world" tones it down, in the same way that taking the spikes out of a club tones it down. "Why would anyone..." is still a rhetorical question asserting that anyone who does is a dolt. You do the same in another comment: "Why would anyone make a lifetime commitment?"
Clearly, many people do get engaged, do get married, do make lifetime commitments. A majority of people, even, at least here in the West; I do not know how it is in Slovenia. (The disadvantageous tax regime you have in Slovenia was done away with long ago in the UK: married couples can elect to be taxed as separate individuals.) But saying "Why would anyone do such a thing" does not invite discussion; it shuts it off. If you actually wanted to know people's reasons, you would actually ask them, and listen to the answers.
Replies from: None↑ comment by [deleted] · 2012-02-15T13:32:50.457Z · LW(p) · GW(p)
Ok fair enough, can you propose a better way to ask?
I was interested in the following:
- Why do so few people who want to get married question the wisdom of such a step considering its high costs and dubious benefits (in comparison to say cohabitation)?
- Why do people in general want to get married? (this is different from the question of whether it is rational to marry)
- Is it rational for most people who marry to do so?
I was not specifically interested in why Alicorn wanted to get married. I did want to provoke, maybe even shock people into thinking about it beyond cached thoughts.
Replies from: TimS, Richard_Kennaway↑ comment by TimS · 2012-02-16T18:58:26.035Z · LW(p) · GW(p)
When I got married, I thought about this a little, and I concluded that marriage (but not cohabitation) would:
- Create a partner with a non-betrayal stance towards me (i.e. would not defect against me in a one-shot Prisoner's dilemma game).
- Signal to others that my partner and I had a non-betrayal stance towards each other.
It's an interesting question why marriage is able to create that first effect, and I don't have a good answer. I do think that many people go into marriage without thinking of these considerations, and I think that is a mistake. In other words, I think that the answer to your third question is no. But that depends on society's tolerance of cohabitation, which wasn't always society's attitude.
Replies from: None↑ comment by [deleted] · 2012-02-16T20:27:50.368Z · LW(p) · GW(p)
It's an interesting question why marriage is able to create that first effect, and I don't have a good answer.
I think this is because it is an act that is supposed to entail the following:
- shared reproductive interests
- shared financial interests
- at least some pair bonding (Oxytocin makes you love your kids and love your romantic partner, in extreme cases enough to be willing to sacrifice yourself)
↑ comment by TimS · 2012-02-16T22:17:57.159Z · LW(p) · GW(p)
To me, those things are implied by the "non-betrayal" stance. Agreement on childbearing, shared financial interest, and pair bonding (i.e. shared emotional interest) are consequences of the fundamental agreement not to betray. As you note, each of those could be achieved without marriage - but most people act as if this were not possible. I'm just as confused as you.
That is different from noting the incidental benefits of legal marriage - if I die without a will, my wife gets my property. To achieve the same effect without marriage, I'd have to actually create a will. And so on for all the legal rights I want my wife to have (e.g. de facto legal guardian if I am incapacitated). But I want my wife to have those rights because of the non-betrayal stance, and if that wasn't our relationship, I wouldn't want her to have those rights.
↑ comment by Richard_Kennaway · 2012-02-15T14:03:55.150Z · LW(p) · GW(p)
Ok fair enough, can you propose a better way to ask?
Ask as if you did not already have a presumption about what the answer should be. Telling people they're idiots unless they agree with you will only convince them you are someone they do not want to talk to.
Your latest reformulation is better -- the key substitution is "do" instead of "would". The second and third bullet points are absolutely fine, but in the first and in the final paragraph you're still sticking your own oar in with "considering its high costs and dubious benefits" and "shock people into thinking about it beyond cached thoughts". There are, as it happens, people who have thought carefully about what arrangement they want to make on these matters, and without having to be told about cached thoughts either, but you will never hear them with that approach.
↑ comment by MileyCyrus · 2012-02-15T08:20:42.002Z · LW(p) · GW(p)
The high cost of divorce can make a lifetime commitment more robust. It also helps with taxes, visas and health care.
Replies from: None↑ comment by [deleted] · 2012-02-15T08:22:25.427Z · LW(p) · GW(p)
Why would anyone make a lifetime commitment?
The high cost of divorce can make a lifetime commitment more robust.
Committing a crime together and vowing to remain silent produces high costs. Exchanging embarrassing pictures or other blackmailing material can also produce high costs. I don't know, this seems like a fake reason; if you wanted to optimize for the robustness of a long-range commitment and set out to design for it, would you really end up with anything like marriage? Especially since more than 50% of all marriages end in divorce, it doesn't seem to be, as it is practised currently, very good at its supposed function.
In addition, unlike other imaginable mechanisms, this one isn't symmetric unless it is a same-sex marriage. The penalties are on average significantly higher for the male participant. This just seems plain unfair and bad signalling, though I admit asymmetric arrangements can be a feature, not a bug.
Also, I seem to be able to maintain long-term relationships with friends and family members without state-enforced contracts. Why should a particular kind of relationship between two people require it? And even further, why a contract that can't be much customized, that (irrational) voters feel strongly about, and the rules of which the government, via law or legal practice, changes in unpredictable ways every few years?
It also helps with taxes, visas and health care.
This is very Amerocentric. When it comes to income and taxes in Slovenia it is much better not to be married than married, because the welfare state (which is used by almost everyone - lower, middle and even upper-middle class to some extent) generally calculates most benefits according to income per family member, and many benefits are tied to children and teens. It is nearly always better for the couple not to marry. I have friends from several other countries in Europe who have stated it is much like this in their countries as well.
Visas and generally facilitating immigration sound like good reasons to get married. Edit: This last line wasn't sarcasm, as hard as it may seem to believe. I was still thinking of marriage as a legal category not a traditional ritual.
Replies from: Kaj_Sotala, MileyCyrus, ArisKatsaris, Alicorn↑ comment by Kaj_Sotala · 2012-02-15T14:24:43.132Z · LW(p) · GW(p)
Especially since more than 50% of all marriages end in divorce, it doesn't seem to be, as it is practised currently, very good at its supposed function.
Note: 50% of all marriages, not 50% of all married people. The people who get married (and divorced) several times drag down the overall success rate.
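(Invented numbers to illustrate the gap: if 40 people marry once and stay married while 10 other people each marry and divorce three times, that is 30 divorces out of 70 marriages, roughly a 43% divorce rate, even though only 10 of the 50 ever-married people, i.e. 20%, have ever divorced.)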
Googling around revealed various claims of the success rate for first marriage: more than 70 percent, 50 to 60 percent, 70 to 90 percent, etc.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2012-02-16T05:22:29.538Z · LW(p) · GW(p)
I find Stevenson-Wolfers (alt alt) a credible source. It says that 50% of first marriages in the US from the 70s lasted 25 years. Marriages from the 80s look slightly more stable. The best graph is Figure 2 on page 37.
↑ comment by MileyCyrus · 2012-02-15T09:26:35.678Z · LW(p) · GW(p)
Especially since more than 50% of all marriages end in divorce, it doesn't seem to be, as it is practised currently, very good at its supposed function.
I'm white and educated. Those stats don't apply to me.
Also I seem to be able to maintain long term relationships with friends and family members without state enforced contracts.
There is much more cash and property shared in a typical long-term romantic relationship than in a typical platonic one. I wouldn't share an apartment with my brother unless he signed a state-enforced contract.
Can you explain to me what disadvantages marriage has for a person who wants to raise children with the help of a long-term romantic partner?
Replies from: None↑ comment by [deleted] · 2012-02-15T09:29:56.612Z · LW(p) · GW(p)
Can you explain to me what disadvantages marriage has for a person who wants to raise children with the help of a long-term romantic partner?
Can you explain what advantages it has that are exclusive to it?
Considering the ceremony itself is often a major financial burden, shouldn't we seek good reasons in its favour rather than responses to "why not!"? But to proceed on this line anyway, from anecdotal evidence in my circle of acquaintances, custody battles seem to be much more nasty and hard on the children among those who are married. The relationships between men and their children are also much more damaged and strained.
Replies from: MileyCyrus↑ comment by MileyCyrus · 2012-02-15T09:52:15.965Z · LW(p) · GW(p)
Can you explain what advantages it has that are exclusive to it?
I'm not trying to debate you, I'm trying to optimize my life. I want to reproduce with a partner who will stick around for decades, at least. If you have a compelling case for why my life would be better without marriage, I'd love to hear it.
But to proceed on this line anyway, from anecdotal evidence in my circle of acquaintances custody battles seem to be much more nasty and hard on the children among those who are married.
Is there any legal precedent that gives a never-married man better access to his children than a divorced man?
Replies from: shokwave, None, None↑ comment by shokwave · 2012-02-15T10:59:34.390Z · LW(p) · GW(p)
If you have a compelling case for why my life would be better without marriage I'd love to hear it.
I shall call this the "loving, consensual model" of a relationship:
1) Preferring to be with someone if and only if they prefer to be with you,
2) and them preferring to be with you if and only if you prefer to be with them,
3) and you prefer to be with them, satisfying 2,
4) and they prefer to be with you, satisfying 1,
5) gives us a situation of cohabitation, which is sufficient for your stated needs.
Given that you should be indifferent between cohabitation and marriage, and marriage has non-zero costs, why would you prefer marriage?
The reason is insidious, cloaked in the positive connotations of marriage and love, but nevertheless incontrovertible.
You don't prefer to be with someone if and only if they prefer to be with you.
You prefer to be with someone.
Of course, it's illegal to directly enforce this preference. Unlawful imprisonment, and all that. So you'd go with the consensual model, but raise the costs of them preferring to be separate as much as legally possible. Like, say, requiring a contract that is costly and messy to break.
Replies from: MixedNuts, TheOtherDave, smk↑ comment by MixedNuts · 2012-02-15T14:29:57.818Z · LW(p) · GW(p)
Yes, if I have various kinds of entanglement and dependence on someone, such as living together, sharing finances and expensive objects like a car, sharing large parts of our social lives, and possibly having children, I don't want them to be able to leave at a moment's notice. This doesn't make me feel especially evil.
Replies from: shokwave↑ comment by shokwave · 2012-02-15T14:43:26.487Z · LW(p) · GW(p)
Really? I'd suggest you don't want them to have a positive expected value on leaving at a moment's notice rather than wanting them restricted, but in any case... the solution is to structure your entanglements and dependence in such a way that this opportunity is available to them if they desire it, not to try to force contracts and obligations onto them in order to restrict them.
Replies from: MixedNuts↑ comment by MixedNuts · 2012-02-15T15:19:51.141Z · LW(p) · GW(p)
Can you rephrase? I'm thinking things like "If we have a kid, we shouldn't split up even if we're a little unhappy" and "If I've quit my job to be a homemaker, don't stop giving me money without warning". Are you saying to avoid getting in such situations in the first place? Or are you saying not to marry jerks who will leave you and the kids in the dust?
Replies from: shokwave↑ comment by shokwave · 2012-02-16T03:45:02.247Z · LW(p) · GW(p)
"If we have a kid, we shouldn't split up even if we're a little unhappy"
Yes; the kid increases the cost of splitting up, so being a little unhappy doesn't justify making the kid really unhappy. You don't need a marriage for this, you just need to think about the situation for five minutes.
"If I've quit my job to be a homemaker, don't stop giving me money without warning".
Pay partially into an account that is available to the homemaker and not you; with a month's head start the account will have enough to pay out to the homemaker for at least a month. This is equivalent to a month's warning. It took me like fifteen seconds to think of this and it's already better than the equivalent financial situation within a marriage.
There are just better ways of doing everything marriage needs to do, except installing a huge cost on leaving, so it seems duplicitous to prefer marriage to these other ways if you ostensibly only care about the other things.
↑ comment by TheOtherDave · 2012-02-16T20:01:08.558Z · LW(p) · GW(p)
There are lots of situations where precommitting to doing something at some future time, and honoring that precommitment at that time regardless of whether I desire to do that thing at that time, leaves me better off than doing at every moment what I prefer to do at that moment.
"Marriage" as you've formulated it here -- namely, a precommitment to remain "with" someone (whatever that actually means) even during periods of my life when I don't actually desire to be "with" them at that moment -- might be one of those situations.
It's not clear to me that the connotations of "insidious" would apply to marriage in that scenario, nor that the implication that marriage is not loving and consensual would be justified in that scenario.
↑ comment by smk · 2012-02-16T14:44:33.648Z · LW(p) · GW(p)
I am legally married because I need the legal and financial benefits that marriage provides in my country. However, in an ideal fantasy world, I wouldn't need those benefits and I wouldn't be legally married. But I would still be married! Just without government involvement. (BTW I have no interest in raising kids.)
It's normal for people to hear "marriage" and think "legal marriage" but I hate that.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-02-16T17:53:14.533Z · LW(p) · GW(p)
Can you clarify what you mean by "need," here? In particular, does it mean something different than "benefit from"?
Replies from: smk↑ comment by [deleted] · 2012-02-15T09:57:48.531Z · LW(p) · GW(p)
I'm not trying to debate you, I'm trying to optimize my life. I want to reproduce with a partner who will stick around for decades, at least.
Why do you need to marry someone to live with them for decades and raise children? Are millions of people living happily in such arrangements doing something wrong or sub-optimally? If you think different arrangements are better for different people, why do you think you are a particular kind of person?
If you have a compelling case for why my life would be better without marriage, I'd love to hear it.
Can we taboo the word "marriage"?
Is there any legal precedent that gives a never-married man better access to his children than a divorced man?
No. But neither do married men have much better chances of such an outcome.
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2012-02-15T12:03:39.922Z · LW(p) · GW(p)
But neither do married men have much better chances of such an outcome.
There is still a difference between "not much better" and "not better". I do not know the exact number, but if contact with your children is an important part of your utility function, then even increasing the chance by say 5% is worth doing, and could justify the costs of marriage.
(Even if the family law is strongly biased against males, it may still be rational for males to seek marriage.)
↑ comment by [deleted] · 2012-02-15T10:07:40.863Z · LW(p) · GW(p)
I mean, I know this is a Western peculiarity, but it always strikes me as essentially crazy how people in other such discussions I have had consistently seem to mix up, conflate and implicitly equate the following:
- traditional marriage
- legal concept of marriage
- religious marriage
- cohabitation with children
So easily! In Slovenia someone getting married at a Church has ZERO legal consequences. Why would it? It is ridiculous to claim religious ceremonies and legal categories should have anything to do with each other. Why should a priest have the right to make legally binding arrangements? When someone decides to get a civil marriage they go to a magistrate and basically sign a contract; this carries legal consequences. Living with someone for some time has some legal consequences, and the rights and responsibilities come pretty close to civil marriage. All of these are also different from the implicit traditional responsibilities and privileges people assume exist in a "marriage". And if religious people get to call their rituals marriage, why can't I as a secular person have a community of people call something marriage? As long as we are clear this isn't civil marriage, the kind the state recognizes, there is no possible harm in this, nor is it illegal in my country.
I don't see a good reason why societies want to forcibly conflate these separate things (from what I understand, in the US they actually mess with people's private lives by persecuting people who live with kids and more than one partner, and all marriages are a state affair).
↑ comment by ArisKatsaris · 2012-02-15T14:42:08.236Z · LW(p) · GW(p)
Why would anyone make a lifetime commitment?
Again in the interests of teaching you to communicate more efficiently: Whenever you say "Why would anyone" when you already know that some people do this (and it's not just some bizarre hypothetical/fictional world you're discussing), this signals that it's mainly a rhetorical question and that you believe these people to be just insane/irrational/not thinking clearly.
So, a question that signals an actual request for information better is "Why do some people make lifetime commitments?"
Especially since more than 50% of all marriages end in divorce, it doesn't seem to be, as it is practised currently, very good at its supposed function.
As opposed to what percentage of non-marriage relationships?
Replies from: None↑ comment by [deleted] · 2012-02-15T14:47:32.563Z · LW(p) · GW(p)
As opposed to what percentage of non-marriage relationships?
Good catch. I guess considering the context of the debate with MileyCyrus a good enough comparison would be the stability of relationships by people who choose cohabitation with children.
↑ comment by Alicorn · 2012-02-15T08:26:50.388Z · LW(p) · GW(p)
Watching the stars burn down won't be as much fun without him.
ETA: We're American, so Amerocentric advice is likely to be useful to us.
Replies from: None↑ comment by [deleted] · 2012-02-15T08:38:05.435Z · LW(p) · GW(p)
I'm sorry, this is a nice-sounding and romantic, but useless, answer. It was Valentine's Day yesterday; I was bombarded with enough relationship-related cached thoughts as it is.
Or are you saying the other person will literally die or refuse to ever interact with you if you don't "marry" them? Also, do you expect US-government-granted 21st century marriages to remain enforced then? Indeed, do you have any evidence whatsoever that a stable relationship can last that long, or is likely to without significant self-modification? In addition, why this crazy notion of honouring exactly one person with such an honour? Isn't it better to wait until group marriages are legalized?
If you don't feel like discussing the issue, please acknowledge it directly.
Replies from: Alicorn↑ comment by Alicorn · 2012-02-15T08:46:05.089Z · LW(p) · GW(p)
You're being kind of a jerk. Your questions aren't relevant to the information I wanted; you're just picking on me because I brought up something vaguely related.
That having been said:
Yeah, I know about Valentine's day. That's why this was on my mind.
I don't think singlehood will kill my partner or cause him to shun me. (Although if I didn't poke him about cryo, he might cryocrastinate himself to room-temperatureness.) I'm not hoping that anyone will "enforce" anything about my prospective marriage.
My culture encourages permanent and public-facing relationships to be solidified with a party and thereafter called by a different name. In particular, it has caused me to assign value to producing children in this context rather than outside of it. I believe that getting married will affect my primate brain and the primate brains of my and my partner's families and friends in various ways, mostly positive. It will entitle me to use different words, which I want, and entitle me to wear certain jewelry, which I want, and allow me to summarize my inextricability from my partner very concisely to people in general, which I want. It will also allow me to get on my partner's health insurance.
Edit in response to edit: I'm poly, but my style of poly involves a primary relationship (this one). It doesn't seem at all unreasonable to go ahead and promote it to a new set of terms.
Replies from: None, None, drethelin↑ comment by [deleted] · 2012-02-15T09:16:21.309Z · LW(p) · GW(p)
It seems cultural and perhaps even value differences are at the root of how this conversation proceeded. OK, I think I understand now. I should have suspected this earlier; I was way too stuck in my local cultural context, where among the young basically only the religious still marry and it is generally seen as an "old-fashioned" thing to do.
↑ comment by [deleted] · 2012-02-15T08:47:57.527Z · LW(p) · GW(p)
You're being kind of a jerk.
As I said, I didn't mean to be. I am genuinely curious why in the world someone would do this, because I haven't heard any good reasons in favour of it except that it is "tradition" or that otherwise they'd be living in sin and in fear of punishment by a supernatural entity.
But I do apologize for any personal offence I may have inadvertently caused. I did not mean to imply either you or your partner (about whom I know nothing!) were particularly unsuited for this arrangement. I was questioning its necessity or desirability in general. I have generally been pretty consistent at questioning the value of this particular legally binding institution, so it seems unlikely that I wouldn't have posed the exact same question in response to anyone else making such a request.
I will not apologize for posing uncomfortable questions. I don't want other people respecting my own ugh fields, so on LessWrong I generally don't bother avoiding poking into those of others.
Replies from: Alicorn↑ comment by Alicorn · 2012-02-15T08:51:54.248Z · LW(p) · GW(p)
Your incredulity has been noted. With contempt. I'm allowed to want things.
But I do apologize for any personal offence I may have inadvertently caused.
Have you considered reacting to the need to apologize by ceasing to produce it? It can't be very inadvertent. It looks awfully advertent, or at least not like an evitandum of any kind.
Replies from: MixedNuts, None, ShardPhoenix, drethelin↑ comment by [deleted] · 2012-02-15T08:59:50.399Z · LW(p) · GW(p)
I'm allowed to want things.
Of course you are. I just wanted to hear why. You are naturally under no obligation explicit or implicit to give reasons that apply generally or personally.
I'm dismayed that I have apparently offended you. Please accept a sincere apology. I genuinely didn't realize the topic might create resentment here.
What does the outside view say about when during the course of a relationship it is wisest to get engaged (in terms of subsequent marital longevity/quality)? Data that doesn't just turn up obvious correlations with religious groups who forbid divorce is especially useful.
I assumed from the wording of the above request for data that you weren't seeking congratulations or the like, but information on the general desirability of the arrangement and on when it is most appropriate. I was simply trying to elicit what information and thoughts you've come up with on your own so far, because I too was interested in the question. And I too have a personal stake in it, since I've had discussions on the topic with one of my partners.
Edit: To respond to the addition of this:
Have you considered reacting to the need to apologize by ceasing to produce it? It can't be very inadvertent. It looks awfully advertent, or at least not like an evitandum of any kind.
I was apologizing because you were sending strong signals but I wasn't sure what exactly I was doing wrong. I mean, I could have cut off all further communication, but that would have left me very confused.
I proceeded as I normally do in such circumstances: by apologizing for any inadvertent offence and asking for clarifications that would hopefully let me figure out what exactly caused the negative response. If you look above, you'll see that I basically made a guess at what might have offended you and proceeded to apologize for that.
Replies from: Alicorn↑ comment by Alicorn · 2012-02-15T09:02:40.961Z · LW(p) · GW(p)
I do not consider you to be at fault for your initial comments; I fault you for subsequent failure to take a hint. Your apology is accepted.
I see nothing about the wording of my original comment that should have led you to conclude that I wanted information about the "general desirability of the arrangement". I did want information about "when it was most appropriate" - in a purely temporal sense.
Replies from: None↑ comment by [deleted] · 2012-02-15T09:07:55.450Z · LW(p) · GW(p)
I see nothing about the wording of my original comment that should have led you to conclude that I wanted information about the "general desirability of the arrangement". I did want information about "when it was most appropriate" - in a purely temporal sense.
Now that I've reread your question, I see that you were indeed.
↑ comment by ShardPhoenix · 2012-02-15T11:43:18.711Z · LW(p) · GW(p)
Must you get offended every time you ask for advice and get it?
Replies from: Alicorn↑ comment by Alicorn · 2012-02-15T18:40:42.695Z · LW(p) · GW(p)
If you're referring to the other occasion when I asked for advice and people ignored all non-keywords I had uttered instead of answering my actual, specific question, yeah, I probably must get at least somewhat offended when that happens. I value my ability to react emotionally to my environment. I don't get offended when I ask for advice and get advice that corresponds to what I asked for.
Replies from: None↑ comment by drethelin · 2012-06-01T19:52:28.716Z · LW(p) · GW(p)
Picking on you? You responded to him. You're going out of your way to be offended. You can feel free to not explain your viewpoints, but when someone poses a question, don't respond with a throwaway comment and then get annoyed when it gets responded to.
↑ comment by J_Taylor · 2012-02-16T01:28:45.831Z · LW(p) · GW(p)
I truly hope that, one day, someone will answer the question that you actually asked instead of a bunch of vaguely related questions. Unfortunately, this is the most relevant article I could find. It's not that great.
http://stats.org/stories/2008/is_ideal_time_marry_nov10_08.html
↑ comment by Shmi (shminux) · 2012-02-15T21:43:13.515Z · LW(p) · GW(p)
From your other comments it seems clear that expressing and projecting attachment to this person has positive utility for you, even if it would change little in your relationship. Is this his (I presume) view, as well? Do either/any of you see any obvious negatives in being engaged and eventually married? If not, why wait?
Replies from: Alicorn↑ comment by Alicorn · 2012-02-16T00:21:53.362Z · LW(p) · GW(p)
"Why wait?" is a perfectly reasonable question, but simply answering it "let's not!" probably doesn't yield the best expected value. (It might work perfectly fine. It'd probably work perfectly fine. But it seems likely to be slightly less conducive to everything being perfectly fine than some better-calibrated choice of timing.)
↑ comment by MileyCyrus · 2012-02-15T08:30:03.422Z · LW(p) · GW(p)
Questions I would consider (privately):
- If I knew this relationship didn't have long-term potential, would I break it off?
- What would I need to know about this person in order to become engaged? What would make me break it off?
- How much am I likely to learn about this person in the next month/six-months/year? How can I learn what I need to know?
Try to avoid living together before marriage.
Replies from: Alex_Altair, Alicorn, MixedNuts↑ comment by Alex_Altair · 2012-02-15T16:41:18.444Z · LW(p) · GW(p)
Try to avoid living together before marriage.
That seems like really dangerous advice to me. The article confirms my suspicion:
"We think that some couples who move in together without a clear commitment to marriage may wind up sliding into marriage partly because they are already cohabiting," Rhoades says. "It seems wise to talk about commitment and what living together might mean for the future of the relationship before moving in together, especially because cohabiting likely makes it harder to break up compared to dating."
The solution is not to avoid living together before marriage; the solution is to break up when you know you should.
↑ comment by Alicorn · 2012-02-15T08:31:10.560Z · LW(p) · GW(p)
Try to avoid living together before marriage.
Too late.
Replies from: MileyCyrus↑ comment by MileyCyrus · 2012-02-15T08:39:55.574Z · LW(p) · GW(p)
In that case, remind yourself that the costs of moving your stuff out are trivial compared to the costs of continuing a poor relationship.
If you are looking for marriage, give yourself a deadline for deciding whether to get engaged or break it off. Share your deadline with a brutally honest friend. When the deadline comes, you and your friend can evaluate what you've learned about your relationship and whether it's worth continuing.
Replies from: Alicorn↑ comment by Alicorn · 2012-02-15T08:49:16.057Z · LW(p) · GW(p)
Thanks, but this is really not the sort of advice I need. Me-and-the-relevant-person are, you know, in a healthy relationship that consists significantly of conversations. I do not need to do anything cloak and dagger here. I could probably just say "hey let's be engaged RIGHT NOW" and he'd probably say "okay!" after some amount of thought. I'm just trying to figure out if I risk torpedoing something I value by doing that now as opposed to in six months or a year or whatever.
Replies from: None, smk, MileyCyrus↑ comment by [deleted] · 2012-02-15T21:56:33.306Z · LW(p) · GW(p)
Getting married/engaged can involve drama and bad memories, because of the necessity of considering such things as the Rehearsal party, Bachelor party, Bachelorette party, Wedding party, and the Honeymoon.
For instance, due to a slight breakdown in communications, I ended up spending a substantial part of my bachelor party being responsible for driving and watching my underage brother. He's a good little brother, and it wasn't any one particular person's fault. But that wasn't part of the "series of fairy-tale events that I had been visualizing in my head."
I can probably think of about ten more anecdotes like that of around that time. That one was actually one of the mild ones.
I'm under the impression many people give bog-standard advice like "the wedding might be a fairy tale, but what about the marriage afterwards?" I would like to point out the reverse perspective: you may have a fairy-tale marriage, but the time period around your wedding is likely going to be a set of extremely difficult feats in social event planning.
Actually, I'm curious what the effects would have been of being more familiar with Less Wrong when I got married. I would have had more practice in lowering my expectations and dispelling overly idealistic fantasies based on no evidence, both of which from my current perspective seem like they would have been amazingly useful skills to have during wedding planning.
This is not to say you can't have a perfect series of parties topped off by a fantastic honeymoon. That actually does happen, and I sincerely hope it happens for you. But if I were to couch this in terms of advice to Michaelos 2008, I would tell him that he should not EXPECT it to happen, because he's never done it before and planning social events was never his or his soon-to-be wife's forte. But honestly I'm not sure he would have had enough context to get that advice.
So in terms of your actual question about doing it now, six months from now, or a year from now, I would say first discuss it in terms of the best way to handle those tricky social feats with other people. In addition, possibly discuss it with the other people as well, or someone you think of as a skilled master at tricky social situations.
Replies from: Alicorn↑ comment by Alicorn · 2012-02-16T00:20:10.196Z · LW(p) · GW(p)
Thank you! I will update in favor of getting help from my socially-adept friends, especially married ones. I will also attempt to aim my drive-to-do-overcomplicated-socially-dramatic-things at this challenge when it appears rather than expecting to accomplish it all with more ordinary planning-of-stuff skills.
↑ comment by smk · 2012-02-15T23:05:29.777Z · LW(p) · GW(p)
Two years is the time frame one always hears, isn't it? I only did a very quick search but most of what I found seemed to be referring to the same study by Ted Huston, and I didn't even find the study itself. My impression is that 2 years (25 months, one article said) was the average time spent dating before marriage (not before engagement, as you asked) for happy, stable couples, however they judge that. So, not the most helpful.
But, it does kind of match my intuition that one should wait until New Relationship Energy is mostly over before making that decision, and I often read that NRE (though it's usually not called that in these articles) typically lasts about 2 years (this matches my limited experience). Also, I'm monogamous, but I'd guess that even if your NRE with Partner A has faded, NRE with Partner B could spill over onto your other relationship(s) and affect your judgment there too?
Replies from: Alicorn↑ comment by Alicorn · 2012-02-16T00:23:23.084Z · LW(p) · GW(p)
I don't remember hearing 2 years, although it is relevant data that you have done so. One complication is that we started dating two years ago, but were broken up for somewhat more than a year in the middle before getting back together. So we've spent less than two years dating, but about two years conducting an extended empirical observation about whether we prefer being together or not.
↑ comment by MileyCyrus · 2012-02-15T09:07:10.551Z · LW(p) · GW(p)
I'm afraid I was projecting my own goals into your situation. Sorry.
I didn't mean to suggest your relationship was unhealthy. All I meant to say was that you shouldn't let logistics become a trivial inconvenience.
↑ comment by MixedNuts · 2012-02-15T14:23:45.514Z · LW(p) · GW(p)
If I knew this relationship didn't have long-term potential, would I break it off?
I'm not sure which answer points to "Engage" here! I would guess "yes", since it allows you to reason "...and I'm still around, which means I believe it has long-term potential, which means we should get engaged". But "no" indicates attachment to the person and a willingness to make the relationship work even if it's rocky.
comment by JMiller · 2013-01-11T03:55:24.108Z · LW(p) · GW(p)
I was told this would be a more appropriate place than the discussion board for this post:
I'm taking a class on heuristics and biases. In this class we have the option to read one of two "applied" books on the subject. The books are "The Panic Virus: A True Story of Medicine, Science, and Fear" by Seth Mnookin and "Sold on Language: How Advertisers Talk to You and What This Says About You" by Judith Sedivy and Greg Carlson.
I'd like to know if anyone has read one or both of these books, and how well or poorly they mesh with less wrong rationality.
Thanks, Jeremy
comment by cousin_it · 2012-02-25T15:42:58.982Z · LW(p) · GW(p)
I want to read the paper "Three theorems on recursive enumeration" by Friedberg. It doesn't seem to be available on the open web. Can someone with journal access help me out?
Replies from: radical_negative_one↑ comment by radical_negative_one · 2012-02-25T17:43:41.800Z · LW(p) · GW(p)
Sent.
Replies from: cousin_itcomment by Richard_Kennaway · 2012-02-17T14:14:36.339Z · LW(p) · GW(p)
In this comment I pegged a web site as being nothing but a link farm, filled with ads and worthless "content". A couple of ideas occurred to me.
The web site looks to me as if it was actually written by human beings, but computer-generated prose of this sort might not be far off. The better the programmers get at simulating humans (and the spammers are certainly trying), the better humans will have to become at not being mistaken for computers. If you sound like a spambot, it doesn't matter if you really aren't, you'll get tuned out.
And I wonder how well different people do on this adult-level "theory of mind" test? Here's another: how long does it take you to discern the true nature of this book?
comment by mstevens · 2012-02-15T11:08:32.517Z · LW(p) · GW(p)
It seems a suspicious coincidence that our puny human ideas of justice would automatically be a) physically possible and b) of reasonable cost, but this is a very popular belief.
Replies from: gwern, mwengler↑ comment by gwern · 2012-02-16T03:57:24.496Z · LW(p) · GW(p)
I don't think it's suspicious at all. The legal tradition deliberately orders its exponents to restrict its scope to laws that are enforceable without too major a backlash. (I know there are legal maxims expressing these concepts, but they just aren't coming to mind for some reason.)
EDIT: Mnemosyne popped up an example maxim: 'Ad impossibilia nemo tenetur.'
↑ comment by mwengler · 2012-02-15T22:57:26.556Z · LW(p) · GW(p)
Puny compared to what?
Replies from: fubarobfusco↑ comment by fubarobfusco · 2012-02-16T01:43:48.167Z · LW(p) · GW(p)
Indeed. There are no ideas of justice on exhibit other than human ones, so calling them "puny" seems like merely saying nasty things about reality.
comment by MileyCyrus · 2012-02-15T08:05:36.428Z · LW(p) · GW(p)
What's the best way to find out about scientific experiments before they are conducted?
Replies from: Morendil, AlexSchell, shminux↑ comment by AlexSchell · 2012-02-15T12:20:07.616Z · LW(p) · GW(p)
I think ClinicalTrials.gov might be what you're looking for. For anything less than human clinical trials, you'd likely need inside knowledge of the organization conducting the study/experiment.
↑ comment by Shmi (shminux) · 2012-02-15T21:31:02.529Z · LW(p) · GW(p)
This question seems a bit vague. What kind of experiments? Why do you want to know about them in advance?
Replies from: MileyCyrus↑ comment by MileyCyrus · 2012-02-16T02:30:05.510Z · LW(p) · GW(p)
What kind of experiments?
Mostly psychology. I'm particularly interested in experiments that would have political implications.
Why do you want to know about them in advance?
Because I want to be able to look at them and decide what kind of results would support a theory versus undermine it, before I (and the world) become biased by the actual results.
Replies from: shminux↑ comment by Shmi (shminux) · 2012-02-16T06:38:20.547Z · LW(p) · GW(p)
Mostly psychology. I'm particularly interested in experiments that would have political implications.
Interesting. Maybe you can give examples of past experiments that had "political implications" and what theory they may have falsified.
comment by mwengler · 2012-02-16T18:47:39.873Z · LW(p) · GW(p)
Having read a lot of philosophers talking of morality here, and having read a lot of economists talking of utility, I think I will concentrate on the economists.
I was going to say I think my utility is maximized by spending no more time on the philosophers and using that on economists instead. But of course someone who chose the philosophers might say she believes the moral thing to do is to study the morality instead of the utility.
In physics you sometimes get to a point where your calculation involves subtracting one infinite quantity from another infinite quantity in order to reach a finite result. Probably not often, but my recollection is that there is a calculation of the self-energy of an electron, or some such, where the only way forward is to pretend that the two infinities cancel against each other, and from there you get results which are highly useful in predicting the real world's behavior. I think in a lot of these moral utility arguments, if you can't make the argument work using numbers of a trillion people or less, then you are too far outside anything real to have any faith at all that your arguments mean anything at all about the real world.
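(A toy illustration of the "subtract two infinities" move, for concreteness only; this is a cutoff-regularized textbook-style example, not the actual electron self-energy calculation. Each integral below diverges on its own as the cutoff $\Lambda \to \infty$, but their difference stays finite:)

$$\int_0^\Lambda \frac{dx}{x+1} \;-\; \int_0^\Lambda \frac{dx}{x+2} \;=\; \ln\frac{\Lambda+1}{\Lambda+2} + \ln 2 \;\longrightarrow\; \ln 2 \quad\text{as }\Lambda\to\infty.$$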
Does anybody know of any case in human history where some great improbable wrong was averted by people being concerned about improbable events that require the ^ character to be compactly expressed?
Replies from: endoself↑ comment by endoself · 2012-02-16T20:08:42.112Z · LW(p) · GW(p)
Does anybody know of any case in human history where some great improbable wrong was averted by people being concerned about improbable events that require the ^ character to be compactly expressed?
I think you'd be better off looking for cases where some great improbable wrong occurred since no one was concerned about improbable events. That said, human history requires some very large numbers, but not any ^s.
comment by mwengler · 2012-02-16T15:37:55.677Z · LW(p) · GW(p)
Presumably, the problems of friendly or unfriendly AI are just like the problems of friendly or unfriendly NI (Natural Intelligence). Intelligence seems more an agency, a tool, and friendliness or unfriendliness a largely orthogonal consideration. In the case of humans, I would imagine our values are largely dictated by "what worked." That is, societies and even subspecies with different values would undergo natural selection pressures proportional to how effective the values were at adding to survival and thrivance of the group possessing them.
Suppose, as this group generally does, that self-modifying AI will have the ability to modify itself by design, and that one of its values it designs towards is higher intelligence. Is such an evolution constrained by evolution-like pressures or is it not?
The argument that it is not is that it is changing so fast, and so far ahead of any conceivable competition, that from the point of view of the evolution of its values, it is running "open loop." That is, the first AI to go FOOM is so far superior in ability to anything else in the world that its subsequent steps of evolution are unconstrained by any outside pressures, and either follow some sort of internal logic of value-change as intelligence increases, or else follow no logic at all and go, in some sense, on a "random walk" through possible values. That is, with its quickly increasing intelligence, the values of the FOOMing AI are nearly irrelevant to its overall effectiveness, and therefore totally irrelevant to determining whether it will survive and thrive going up against humans. Its intelligence is sufficient to guarantee its survival; its values get a free ride.
But is this right? Does a FOOMing AI really look like a single intelligence ramping up its own ability? This is certainly NOT the way evolution has gone about improving the intelligence of our species. Evolution tries many small modifications and then does natural experiments to see which ones do better and which do worse. By attrition it keeps the ones that did better and uses these as a base for further experiments.
My own sense of how I create using my intelligence is that I try many different things. Many are tried purely in the sandbox of my own brain, run as simulations there, and only the more promising kept for further testing and development. It seems to me that my pool of ideas is an almost random noise of "what ifs" and that my creative intelligence is the discrimination function filtering which of these ideas are given more resources and which are killed in the crib.
So intelligent creation seems to me to be very much like evolution, with competition.
Might we expect an AI to do something like this? To essentially hypothesize various modifications to itself, and then to test the more promising ones by running them as simulations, with increasing exactitude of the sims as the various ideas are winnowed down to the best ones?
Might an AI determine that the most efficient way to do this is to actually have many competing versions of itself constantly running, essentially, against each other? Might the FOOMing of an AI look a lot like the FOOMing of NI, which is what is going on on our planet right now?
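(A minimal sketch, in Python, of the "hypothesize many modifications, winnow them with cheap simulations, then test the survivors more exactly" loop described above. It is purely illustrative: the mutate and simulation functions are made-up stand-ins, not anyone's actual proposal for self-modifying AI.)

    import random

    def mutate(design):
        """Propose a slightly modified candidate design (hypothetical stand-in)."""
        return [gene + random.gauss(0, 0.1) for gene in design]

    def cheap_simulation(design):
        """Coarse, fast evaluation used to filter the pool (toy fitness: closeness to a target)."""
        target = [1.0, -2.0, 0.5]
        return -sum((g - t) ** 2 for g, t in zip(design, target))

    def exact_simulation(design):
        """Higher-fidelity evaluation, run only on the survivors (here just the same toy fitness)."""
        return cheap_simulation(design)

    def generate_and_test(base_design, generations=50, pool_size=100, survivors=10):
        current = base_design
        for _ in range(generations):
            # 1. Hypothesize many candidate modifications of the current design.
            pool = [mutate(current) for _ in range(pool_size)]
            # 2. Cheap simulations winnow the pool down to the most promising few...
            pool.sort(key=cheap_simulation, reverse=True)
            finalists = pool[:survivors]
            # 3. ...and a more exact simulation picks the best finalist.
            best = max(finalists, key=exact_simulation)
            # 4. Keep the winner as the base for the next round of experiments.
            if exact_simulation(best) > exact_simulation(current):
                current = best
        return current

    print(generate_and_test([0.0, 0.0, 0.0]))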
I really don't know what the implications of this point of view are for FAI. I don't know whether this point of view is even at odds in any real way with SIAI's biggest worries.
I do wonder whether humanity is meant to survive when, in some sense, whatever comes next arrives. In one picture, the dinosaurs did not survive their design of mammals. (They designed mammals by putting a lot of selection pressure on mammals). In another picture, the dinosaurs did survive their design of mammals, but they survived by "slightly modifying" themselves into birds and lizards and stuff.
The next step is electronic-based intelligence, which is kick-started on its evolution by us, just as we were kick-started by plants (there are NO animals until you have plants), and plants were kick-started by simpler life that exploited less abundant but more available energy in chemical mixes. Or the next step might be something that arrives through some natural path we are not considering carefully: either aliens invading, or a strong psi arising among the whales so that their intelligence grows enough to overcome their lack of digits.
Whatever the next step, if its presence has the human race survive and thrive by doing the equivalent of what turned dinosaurs into birds, or turned wolves into domesticated dogs, does that count as Friendly or Unfriendly?
And is there really any point at all to fighting against it?
Replies from: None↑ comment by [deleted] · 2012-02-16T18:25:17.682Z · LW(p) · GW(p)
That is, the first AI to go FOOM is so far superior in ability to anything else in the world that its subsequent steps of evolution are unconstrained by any outside pressures, and only follow either some sort of internal logic of value-change as intelligence increases, or else follow no logic at all, go in some sense on a "random walk" through possible values.
The AI is not supposed to change its values, regardless of whether it is powerful enough to realize them. Values are not up for grabs. Once the AI has some values, it either wins and reshapes reality according to them or loses. Changing the values is one form of losing. It seems that almost anything that counts as a value system would object to changing an agent subscribing to that system into an agent using something else, so the AI won't follow any internal logic of value-change (unless some other agent forces it), and if it changes its values it will be by mistake (so closer to a random walk). Part of the idea of FAI is to build an AI that won't make those mistakes.
My own sense of how I create using my intelligence is that I try many different things. Many are tried purely in the sandbox of my own brain, run as simulations there, and only the more promising kept for further testing and development. It seems to me that my pool of ideas is an almost random noise of "what ifs" and that my creative intelligence is the discrimination function filtering which of these ideas are given more resources and which are killed in the crib.
The ideas coming into your awareness are very strongly pre-filtered; creativity is far from random noise. For one, the ideas are all relevant and somehow extrapolated from your knowledge of the world. Some of them might seem stupid, but it's only because of the pre-selection -- they never get compared to the idea of 'blue mesmerizingly up the slightly irreverent ladder, then dwarf the pegasus with the quantum sprocket' (and even this still makes a lot of sense compared to most random messages).
WHatever the next step, if its presence has the human race survive and thrive by doing the equivalent of what turned dinosaurs in to birds, or turned wolves into domesticated dogs, does that count as Friendly or Unfriendly?
It counts as a failure to preserve humanity. An AI that does that is probably unfriendly (barring coercion by external powerful agents; Eliezer actually wrote a story about such a scenario, though without AIs).
And is there really any point at all to fighting against it?
Sure seems like it.
Replies from: mwengler, mwengler, mwengler↑ comment by mwengler · 2012-02-16T19:10:05.025Z · LW(p) · GW(p)
The ideas coming into your awareness are very strongly pre-filtered; creativity is far from random noise.
I agree, but I don't think that changes my conclusions. In teaching humans to be more creative, they are taught to pay more attention, for a longer time, to at least some of the outlier ideas. Indeed, a lot of the time I think the difference between the intellectually curious and creative people I like to interact with and the rest is that the rest have pre-decided a lot of things and turned their thresholds for letting "unreal" ideas into consciousness up higher than I have turned mine. Maybe they are right more often than I am, but the real reason they do this is that their ancestors, who out-survived a lot of other people trying a lot of other things, did that same level of filtering, and it resulted in winning more wars, having more children that survived, killing more competitors, or some combination of these and other results that constitute selection pressures.
As for an AI in the process of FOOMing, which necessarily has the capacity to consider a lot more ideas in a lot more detail than we do: what makes you think that AI will constrain itself by the values it used to have? Unless you think we have the same values as the first self-replicating molecules that began life on earth, the FOOMing of Natural Intelligence (which has taken billions of years) has been accompanied by value changes.
↑ comment by mwengler · 2012-02-16T19:02:32.854Z · LW(p) · GW(p)
The AI is not supposed to change its values, regardless of whether it is powerful enough to realize them. Values are not up for grabs. Once the AI has some values, it either wins and reshapes reality according to them or loses.
A remarkably strong claim.
My initial reaction is that humanity's values have certainly changed over time. I think it would require some rather unattractive mental gymnastics to claim that people who beat their children for their own good, people who owned slaves, and people who beat, killed, and/or raped either slaves or other people they had vanquished as their right "really" had the same values we currently have but just hadn't really thought them through, or that our values, applied in their world, would have led us to similar beliefs about right and wrong.
I had even thought my own values had changed over my lifetime. I'm not as sure of that, but what about that?
Certainly, it seems, as the human species has evolved, its values have changed. Do chimpanzees and bonobos have different values than we do, or the same? If the same, I'd love to see your mental gymnastics to justify that; I would expect them to be ugly. If different, does this mean that our common ancestor has necessarily "lost," assuming its values were some intermediate between ours, chimps', and bonobos', and all of its descendants have different values than it had?
As I understand the word values, our values have changed over time, different groups of humans have some different values from each other, and if there is a "kernel" of common values in our species, that this kernel most likely differs from the kernel of values in homo neanderthalis or other sentient predecessors of modern homo sapiens.
So if NI (Natural Intelligence) in its evolution can change values (can it?) with generally broad consensus that "we" have not lost in this process, why would an AI be precluded from futzing with its values as it worked on self-modifying to increase its intelligence?
Replies from: APMason↑ comment by APMason · 2012-02-16T19:15:04.983Z · LW(p) · GW(p)
Because, if the AI worked, it would consider the fact that if it changed its values, they would be less likely to be maximised, and would therefore choose not to change its values. If the AI wants the future to be X, changing itself so that it wants the future to be Y is a poor strategy for achieving its aims - the future will end up not-X if it does that. Yes, humans are different. We're not perfectly rational. We don't have full access to our own values to begin with, and if we did we might sometimes screw up badly enough that our values change. An FAI ought to be better at this stuff than we are.
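(A toy numerical version of this argument, with made-up probabilities purely for concreteness: suppose the AI assigns utility 1 to futures containing X and 0 otherwise, and estimates $P(X \mid \text{keep current values}) = 0.9$ versus $P(X \mid \text{self-modify to want } Y) = 0.1$. Evaluated under its current values,

$$E[U \mid \text{keep values}] = 0.9 \cdot 1 + 0.1 \cdot 0 = 0.9, \qquad E[U \mid \text{change values}] = 0.1 \cdot 1 + 0.9 \cdot 0 = 0.1,$$

so a working expected-utility maximizer turns the modification down.)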
Replies from: mwengler↑ comment by mwengler · 2012-02-16T19:28:04.085Z · LW(p) · GW(p)
I think assuming an AI cannot employ a survival strategy which NI such as ourselves are practically defined by seems extremely dangerous indeed. Perhaps even more importantly, it seems extremely unlikely that an AI which has FOOMed way past us in intelligence would be more limited than us in its ability to change its own values as part of its self modification.
The ultimate value, in terms of selection pressures, is survival. I don't see a mechanism by which something which can self modify will not ultimately wind up with values that are more conducive to its survival than the ones it started out with.
And I certainly would like to see why you assert this is true; are there reasons?
Replies from: APMason, TimS↑ comment by APMason · 2012-02-16T20:10:07.213Z · LW(p) · GW(p)
Yes, reasons:
The AI is not subject to selection pressure the same way we are: it does not produce millions of slightly-modified children which then die or reproduce themselves. It just works out the best way to get what it wants (approximately) and then executes that action. For example, if what the AI values is its own destruction, it destroys itself. That's a poor way to survive, but then in this case the AI doesn't value its own survival. If there were a population of AIs and some destroyed themselves, and some didn't, then yes there would be some kind of selection pressure that led to there being more AIs of a non-suicidal kind. But that's not the situation we're talking about here. A single AI, programmed to do something self-destructive, will not look at its programming and go "that's stupid" - the AI is its programming.
it seems extremely unlikely that an AI which has FOOMed way past us in intelligence would be more limited than us in its ability to change its own values as part of its self modification.
I think "more limited" is the wrong way to think of this. Being subject to values-drift is rarely a good strategy for maximising your values, for obvious reasons: if you don't want people to die, taking a pill that makes you want to kill people is a really bad way of getting what you want. If you were acting rationally, you wouldn't take the pill. If the AI is working, it will turn down all such offers (if it doesn't, the person who created the AI screwed up). It's we who are limited - the AI would be free from the limit of noisy values-drift.
↑ comment by TimS · 2012-02-16T20:16:20.264Z · LW(p) · GW(p)
Humans have changed values to maximize other values (such as survival) throughout history. That's cultural assimilation in a nutshell. But some people choose to maximize values other than survival (e.g. every martyr ever). And that hasn't always been pointless - consider the value to the growth of Christianity created by the early Christian martyrs.
If an AI were faced with the possibility of self-modifying to reduce its adherence to value Y in order to maximize value X, then we would expect the AI to do so only when value X was "higher priority" than value Y. Otherwise, we would expect the AI to choose not to self-modify.
↑ comment by mwengler · 2012-02-16T19:23:11.553Z · LW(p) · GW(p)
It counts as failure to preserve humanity. An AI that does that is probably unfriendly (barring the coercion by external powerful agents. Eliezer actually wrote a story about such scenario, without AIs though.)
Interesting. I think I may even agree with you. In that story each race would need to conclude that the other races are "unfriendly". So Eliezer has written a story in which all the NATURAL intelligences (except us of course) are "unfriendly," and in which a human would need to agree that from the point of view of the other intelligent races, human intelligence was "unfriendly."
Perhaps all intelligences are necessarily "unfriendly" to all other intelligences. This could even apply at the micro level: perhaps each human intelligence is "unfriendly" to all other human intelligences. This actually looks pretty real, and pretty much like what happens in a world where survival is the only enforced value. Humans have the fascinating conundrum that even though we are unfriendly to other humans, we have a much better chance of surviving and thriving by working with other humans. The alliances and technical abilities and so on are, if not balanced across all humans and all groups, at least balanced enough across many of them that the result is a plethora of competing / cooperating intelligences where the jury is still out on who is the ultimate winner. Breeding into us the ability (the value?) to see "others" as our allies against "the enemies" has clearly resulted in collective efforts of cooperation that have produced quickly cascading production ability in our species. "We" worried about the Nazis FOOMing and winning; we worried the Soviets might FOOM and win. Our ancestors fought against every tribe that lived 5 miles away from them, before cultural evolution allowed them (us) to cooperate in groups of hundreds of millions.
So in Eliezer's story, three NIs have FOOMed and then finally run into each other. And they CANNOT resist getting up in each other's grills. And why not? What are the chances that the final intelligence, IF only one is left, will have been one which was shy about destroying potential competitors before they destroyed it?