Posts

Near Concerns About Life Extension 2012-06-08T19:12:16.599Z
Far negatives of cryonics? 2012-06-01T18:43:52.963Z
Over-applying rationality: Indefinite lifespans 2012-05-25T17:25:41.763Z
Shaving: Less Long 2012-05-20T14:52:06.422Z
Seeking links for the best arguments for economic libertarianism 2012-05-03T17:28:24.900Z
Survey of older folks as data about one's future values and preferences? 2012-04-27T19:42:12.400Z

Comments

Comment by Bart119 on Near Concerns About Life Extension · 2012-06-09T05:54:44.271Z · LW · GW

Hey, thanks for the reply. I appreciate it. I'm not upset if people want to downvote the rant -- rants by their nature are not carefully argued. The best spin might be 'brainstorming'. I'll edit it to label it up-front. But I don't see how the original post is poorly argued; that's what matters for visibility. The one thing I'll note is:

average life expectancy has been increasing for at least 50 years now, surely there's evidence showing how people have damaged the common good in the name of life extension

I agree and think that supports the point. The trend has already caused some damage, though we can handle it fairly gracefully. If it accelerates dramatically, then I fear we will be unable to handle it. Maybe I misunderstood you.

Maybe I'll get the energy to make a post on hubris, but not right now.

Comment by Bart119 on Near Concerns About Life Extension · 2012-06-09T05:39:06.216Z · LW · GW

Thank you for clarifying. Sure, if you're enjoying life and there's no cost to going on living, we'll all choose that. The question is how much we'll pay to keep that chance of living a while longer.

In response, I'd say that somehow the focus is too narrowly on any one point in time. At any given moment, it's terrifying to think you'll die and you'll do a great deal to avoid it at that moment. But as we talk of pre-committing in game theory situations, you might want to pre-commit regarding death too. You might say you don't want extraordinary measures taken. (Analogy: I would choose to submit to torture rather than have a thousand others tortured in my place -- but don't give me a panic button to reverse the choice during my actual torture!)

I sometimes sense here people saying, "Well, I'm going to live a very long time and then get my brain uploaded" and I think it's a way of dismissing death -- waving it away to some indefinite future so you don't have to get that sick feeling contemplating it in the present. But it doesn't really help. The computer's going to crash at some point too. You'll get more comfort for no less reality believing Jesus is your savior.

My father was receiving hospice the last few weeks of his life in a nursing home. There was a no-hospitalization understanding, but during a crisis, the duty nurse called an ambulance for him. The hospice nurse said that if she'd been called in a timely fashion, he probably would have died that day. Instead, I got to visit him in the hospital the next day. It was odd thinking in that moment that he was alive right then and could answer my questions, while our agreed plan had been for him to be dead by then. Note, though, that he had not a shred of joy in living and died a few days later anyway. (Yet if given a button to kill himself I doubt he'd have pushed it). Looking back a couple years later, I remember the oddness of that moment, but those few days didn't really matter very much. They mattered less than some other four days of his life spent in a notably non-optimal fashion, and who of us doesn't have oodles of such days?

For fictional support, I'd mention two books. First, in the Earthsea trilogy by Ursula K. Le Guin, Ged's achievement is being so comfortable with the inevitability of death that he can perform a totally exhausting and painful feat of magic to seal a hole in the world that allows a corrosive form of immortality -- sealing it off for himself as well as everyone else. And the world rights itself. The second is the Hyperion/Endymion series by Dan Simmons, where the right action is giving up the 'crucifixes' that bestow immortality. The brave girl enjoys her last few days of life even knowing she's going to volunteer to be roasted alive to make the galaxy a better place. The day is worth enjoying even if it is your last.

I say there's no real way of making sense of death. We're programmed by evolution to work hard to postpone it, which was adaptive in our environment of evolution. As a nasty side effect, we know we'll eventually lose no matter what we do. But few of us kill ourselves in despair at that realization, and we still will risk death saving our children -- both also adaptive.

I'm sure nothing I'm saying is original either, and others have said it better.

Comment by Bart119 on Near Concerns About Life Extension · 2012-06-09T04:48:25.808Z · LW · GW

It's nothing I have enough detail on to support a separate post. I suspect my phrasing emphasized the wrong part of it (sorry). I have no reason to think our society is due for a catastrophic "dip" in the next few hundred years. I'd even give it a thousand. And after that we might well recover, but preservation of any individual life gets iffy through that period. So I'm giving us one-in-ten-thousand odds of reversing aging or uploading brains before that collapse (100-1,000 years from now). The chances of developing it eventually, as civilizations rise and fall over what could be millions of years, might be higher (one in four?) -- but that's of no use to today's frail elderly. Those periodic "black swan" catastrophes themselves are cause for great concern for those who want to live a very long time.

Comment by Bart119 on Near Concerns About Life Extension · 2012-06-09T02:07:06.391Z · LW · GW

Thank you very much. I apologize for not looking at the post after posting it (doh). Basic "QA". Was it worth writing this comment, or am I just contributing to data smog? Don't know. Anyway, the thanks are genuine.

Comment by Bart119 on Near Concerns About Life Extension · 2012-06-09T02:04:58.834Z · LW · GW

Forewarning: this is something of a rant and not carefully argued... Hey, someone with (at least somewhat) similar views. Great to hear from you. I skimmed the other discussion, and regret I didn't see it earlier. I don't worry about an inability to die if you don't like life, and I think the population issue isn't so bad by itself (though I worry about the disproportionate number of old people, even if healthy, and the rarity of children). But "unknown consequences" weighs very heavily. The status quo bias isn't such a bad thing as a defense against hubris. And while I can't prove it, I think a society where people live to (even) 200 is extreme hubris, playing with fire. Individuals have an incredibly strong motivation to keep themselves alive. If that runs against the common good (which it could in any number of ways), it would be very hard to stop. I'm not sure how LWers got so terribly afraid of death -- usually atheists accept death. And, while I'm at it, I think The Fable of the Dragon Tyrant is one of the most maddening pieces of sophistry around. It could be a textbook case for "kill the enemy" emotional manipulation. I scratch my head at how a group that started out in search of rationality ended up as starry-eyed transhumanists. But I tend to think that rationality wouldn't really resolve differences over life extension: they come down to different probability estimates and different utility curves. So an unpopular view like this gets voted into invisibility, and the community keeps its unanimity. What to do? Probably go off to some other corner of the web of like-minded people, and stop trying to change minds... End of rant.

Comment by Bart119 on Near Concerns About Life Extension · 2012-06-09T01:37:22.922Z · LW · GW

Yes, and I think we should stop paying for some current procedures that prolong decrepitude, as well as not funding new ones.

Comment by Bart119 on Near Concerns About Life Extension · 2012-06-09T01:35:41.209Z · LW · GW

I'm sympathetic to the idea that basic, proven health care (80% of the benefit for 20% of the cost) should be free to all, and that more expensive, less effective health care should be available to people rich enough to buy it. But this is highly problematic politically. If your society supports "(top of the line) health care is a right, not a privilege", then standard models of resource allocation are problematic. Political leaders might give in to the demands, at the cost of health care spending rising to (say) 50% of GDP. We could lose our economic competitiveness in a catastrophic way.

Comment by Bart119 on Near Concerns About Life Extension · 2012-06-09T01:30:58.326Z · LW · GW

This applies in some areas, but not others. It might apply in health care if your treatment gives people another 100 years of healthy life all in one fell swoop. But the actual history of medical research is that since the appetite for extra months of life and extra percentage points of cures is unlimited, we see costs rising rather than falling. The cost of a given medication should fall after it goes out of patent protection, for instance, but there is always (so far) some other medication or device that is (at least marginally and allegedly) better. This is the most fundamental problem with the US health care system: patients demand more health care regardless of the price -- a cost they want someone else to bear.

Comment by Bart119 on Near Concerns About Life Extension · 2012-06-09T01:24:45.372Z · LW · GW

What I would do is close to what a certain fairly mainstream set of health care reformers would like to do. It would involve reducing much spending in the last three months of life when a terminal condition exists, and it would involve taking age into account in allocating donated organs. It would involve drug companies showing that a proposed new drug is more effective than (or otherwise significantly superior to) existing medications, not just that it is effective. Although this is not an idea I have seen elsewhere, I might also take an "end-to-end" approach to medical research, wanting to see a sort of "business plan" that shows enough benefit to enough patients to justify costs. Any life extension treatments would be considered using the same set of criteria. Giving frail, confused 85-year-olds another ten years of the same kind of life would not qualify as a positive outcome.

Comment by Bart119 on Near Concerns About Life Extension · 2012-06-09T01:17:46.143Z · LW · GW

I estimate the chance of getting uploaded or having the effects of aging reversed before society collapses (at least to the point that such a person would die) is about, oh, one in ten thousand. Given that estimate and my sense of the cost, then that is an implication of what I am saying.

Comment by Bart119 on Is there math for interplanetary travel vs existential risk? · 2012-06-08T16:21:14.838Z · LW · GW

I'm trying to find out how short-term or long-term your thinking is. Moving to Mars seems very fragile, depending on constant input from planet earth. The challenges of moving to another star system where you could have a self-sustaining life are immense. Neither option is available in your lifetime. I think quantitative estimates of the survival of sapience on earth are pretty much useless -- the uncertainties of individual estimates are way too high. As a young man in 1981 I debated moving from the US to Australia as a hedge against nuclear war, a much more modest proposition. I decided not to, partly because I could be a more effective activist against nuclear war in a superpower that was my native culture. So if you go down this path, you could think of your utility in terms of preventing the destruction of sapience on earth.

Comment by Bart119 on Far negatives of cryonics? · 2012-06-03T20:28:19.233Z · LW · GW

The initial question was just meant to open the issue of future negatives, and having gotten some feedback on how the issue had been discussed before, I gave the bulk of my thoughts in a reply to my initial post.

What I consider much more realistic possibilities (more realistic than benign, enlightened resurrection) are being revived with little regard to brain damage and to serve the needs of an elite. I laid it out in my other response in this thread (I don't know how to link to a particular comment in a thread, but search for 'When I started this thread'.)

Comment by Bart119 on Far negatives of cryonics? · 2012-06-02T17:58:12.419Z · LW · GW

Even your most powerful argument/worst-case scenario has immortality as its outcome

By "possible", I meant that we can imagine scenarios (however unlikely) where we will be immortal. Cryonics also relies on scenarios (admittedly not quite as unlikely) where we would at least have much longer lives, though not truly immortal. If being alive for a thousand years with serious brain damage still strikes you as much preferable to death, then I agree that my argument does not apply to you.

To what extent are we not "[serving] the ends of the elite" and "prevented from taking [our] own life if [we] found it miserable" even now?

In the US today, as a person of no particular import to the government, I feel I have considerable freedom to live as I want, and no one is going to stop me from killing myself if I choose. If on some construal I inevitably serve the elite today, I at least have a lot of freedom in how I do that. Revived people in a future world might be of enough interest that they could be supervised so carefully that personal choice would be severely limited and suicide would be impossible.

Comment by Bart119 on Far negatives of cryonics? · 2012-06-02T15:01:59.500Z · LW · GW

When I started this thread, I wasn't quite sure where it was going to end up. But here's what I see as the most powerful argument:

An enlightened, benign future society might revive you to let you live life to your full potential, for your sake -- when it is convenient for them. But a future society that has morality in line with some pretty good present ones (not the very best) might see you as a precious commodity to revive for the ends of the elite. An enlightened society would not revive you if you were going to be miserable with serious brain damage, but a less enlightened society would have few qualms about that. Even if revived intact, you would still serve the ends of the elite and might well be prevented from taking your own life if you found it miserable.

I judge the latter scenario much more likely than the former. If so, cryonic preservation's appeal would be much less -- it might even be something you would pay to get out of!

Those of you who are cryonics enthusiasts and also committed to the LW method should think about this. Maybe you will judge the probabilities of the future scenarios differently, but there are strong cognitive biases at work here against an accurate analysis.

Immortality is still possible. We might be subjects in an experiment, and when we croak our brains might be uploaded by the compassionate experimenters. Maybe the theists are right (there sure are a lot of them), and maybe the ones who preach universal salvation are right. You can still have hope, but it doesn't rest on spending large sums on freezing your brain.

Comment by Bart119 on Far negatives of cryonics? · 2012-06-02T03:54:33.556Z · LW · GW

While I admit that a theocratic torturing society seems less likely to develop the technology to revive people, I'm not at all sure that an enlightened one is more likely to do so than the one I assumed as the basis of my other examples. A society could be enlightened in various ways and still not think it a priority to revive frozen people for their own sake. But a society could be much more strongly motivated if it was reviving a precious commodity for the selfish ends of an elite. This might also imply that they would be less concerned about the risk of things like brain damage that would interfere with the revivee's happiness but still allow them to be useful for the reviver's purposes.

Comment by Bart119 on Far negatives of cryonics? · 2012-06-02T01:43:40.698Z · LW · GW

The idea is that everyone who wasn't frozen got a chance to see it coming and convert, maybe two or three times as winds shifted?

Comment by Bart119 on Far negatives of cryonics? · 2012-06-01T23:04:40.669Z · LW · GW

Interesting. The referenced discussions often assume the post-singularity AI (which for the record I think very unlikely). The development of that technology is likely to be, if not exactly independent, only loosely correlated with the technology for cryonic revival, isn't it?

Certainly you have to allow for the possibility of cryonic revival without the post-singularity AI, and I think we can make better guesses about the possible configurations of those worlds than post-AI worlds.

I see the basic pro-cryonics argument as having the form of Pascal's wager. Although the probability of success might be on the low side (for the record, I think it is very low), the potential benefits are so great that it is worth it. The cost is paid in mere money. But is it?

In my main post I used the "torture by theocracy" example as an extreme, but I think there are many other cases to worry about.

Suppose that among a population of billions, there are a few hundred people who can be revived. The sort of society we all hope for might just revive them so they can go on to achieve their inherent potential as they see fit. But in societies that are just a bit more broken than our own, those with the power to cause revival may have self-interest very much in mind. You can imagine that the autonomy of those who are revived would be seriously constrained, and this by itself could make a post-revival life far from what people hope. The suicide option might be closed off to them entirely; if they came to regret their continued existence they might well be unable to end it.

Perhaps the resurrected will have to deal with the strange and upsetting limitations that today's brain damage patients face. Perhaps future society will be unable to find a way for revived people to overcome such problems, and yet keep them alive for hundreds of years -- they are just too valuable as experimental subjects.

Brain damage aside, what value will they have in a future society? They will have unique and direct knowledge of life in a bygone century, including its speech patterns and thought patterns. I think modern historians would be ecstatic at the prospect of being able to observe or interview pockets of people from various epochs in history, including ancient ones (ethical considerations aside).

Perhaps they will be valued as scientific subjects and carefully insulated from any contaminating knowledge of the future world as it has developed. That might be profoundly boring and frustrating.

Perhaps the revived will be confined in "living museums" where they face a thousand years re-enacting what life was like in 21st century America -- perhaps subject to coercion to do it in a way that pleases the masters.

If the revived people are set free, what then? Older people in every age typically shake their heads in dismay at changes in the world; this effect magnified manyfold might be profoundly unsettling -- downright depressing, in fact.

One can reasonably object that all of these are low-probability. But are they less probable than the positive high-payoff scenarios (in just, happy societies that value freedom, comfort, and the pursuit of knowledge)? Evidence? Are you keeping in mind optimism bias?

In deciding in favor of cryonic preservation, I don't think the decision can be near costs traded off against scenarios of far happiness. There's far misery to consider as well.

Comment by Bart119 on The Moral Void · 2012-05-31T20:24:12.212Z · LW · GW

Leon Kass (of the President's Council on Bioethics) is glad to murder people so long as it's "natural", for example. He wouldn't pull out a gun and shoot you, but he wants you to die of old age and he'd be happy to pass legislation to ensure it.

Does anyone have sources to support this conclusion about Kass's views? I tracked down a transcript of an interview he gave that was cited on a longevity website, but it doesn't support that characterization at all. He does express concerns about greatly increased lifespans, but makes clear that he sees both sides. He opposed regulation of aging research:

http://www.sagecrossroads.net/files/transcript13.pdf

Comment by Bart119 on One possible issue with radically increased lifespan · 2012-05-31T17:19:14.170Z · LW · GW

I understand that. I said it was OK. But I thought Spectral_Dragon in particular might be interested, flaws and all. My observation that such concerns are derided is not about my post, but about many other places I have seen while researching this.

Comment by Bart119 on One possible issue with radically increased lifespan · 2012-05-31T16:40:46.473Z · LW · GW

I'm with you on thinking this is a serious issue. I also think the LW community has done a very poor job with such concerns, often dismissing them with derision. A post I made on the subject got downvoted into oblivion, which is OK (community standards and all). I accept some of the criticisms, but expect to bring the issue up again with them better addressed.

Comment by Bart119 on One possible issue with radically increased lifespan · 2012-05-31T16:30:17.545Z · LW · GW

LW in general seems to favor a very far view. I'm trying to get used to that, and accept it on its own terms. But however useful it may be in itself, a gross mismatch between the farness of views which are taken to be relevant to each other is a problem.

It is widely accepted that spreading population beyond earth (especially in the sense of offloading significant portions of the population) is a development many hundreds of years in the future, right? A lot of extremely difficult challenges have to be overcome to make it feasible. (I for one don't think we'll ever spread much beyond earth; if it were feasible, earlier civilizations would already be here. It's a boring resolution to the Fermi paradox but I think by far the most plausible. But this is in parentheses for a reason).

Extending lifespans dramatically is far more plausible, and something that may happen within decades. If so, we will have to deal with hundreds or thousands of years of dramatically longer lifespans without galactic expansion as a relief of population pressures. It's not a real answer to a serious intermediate-term problem. Among other issues, such a world will set the context within which future developments that would lead to galactic expansion would take place.

The OP's point needs a better answer.

Comment by Bart119 on Only say 'rational' when you can't eliminate the word · 2012-05-31T16:03:22.505Z · LW · GW

I have no commitment to 'rational' in the sense OP wants to eliminate. But what shorthand might one use for "applying the sorts of principles that are the general consensus among the LW community, as best I understand them"?

Comment by Bart119 on Over-applying rationality: Indefinite lifespans · 2012-05-25T19:30:52.955Z · LW · GW

OK. Forgive my modest research skills. I've certainly seen lots of posts that assume that indefinite lifespans are a good thing, but I had never seen any that made contrary claims or rebutted such claims. I would welcome pointers to the best such discussions. It was not intended as a rant.

Comment by Bart119 on Over-applying rationality: Indefinite lifespans · 2012-05-25T18:38:17.478Z · LW · GW

Interesting. Downvoted into invisibility. Because of disagreement on conclusions, or form? I suppose an assertion of over-application of rationality is in a sense off-topic, but not in the most important sense. And of course no one has to accept the intuitions (which qualify as Bayesian estimates), but are they so far off they're not worth considering?

Comment by Bart119 on Shaving: Less Long · 2012-05-20T16:04:39.188Z · LW · GW

My estimate was based on what I hear and read of others, not my own very limited experience.

Comment by Bart119 on Zombie existential angst? (Not p-zombies, just the regular kind. Metaphorically.) · 2012-05-20T13:37:25.691Z · LW · GW

As I see it, once you accept the idea that we are just a dance of particles (as I do too), then in an important sense 'all bets are off'. A person comes up with something that works for them and goes with it. You don't have any really good reason not to become a serial murderer, and no good reason to save the world if you know how. So most of us (?) pick a set of values in line with human moral intuition and with what other people pick, and just go back to living. It makes us happiest. I claim you can't be secretly miserable in an existential-angsty sort of way -- there is no deeper reality which supports that. There may be deeper realities we aren't seeing that we should worry about, but they are all within the scope of values we have chosen. But I've certainly had the experience that when I'm feeling bad I get reminded of the dance-of-particles situation and it further bums me out.

I see a decision about killing yourself as (in a way) constructing your future 'contentment curve' and seeing if the area above zero is larger than the area below. Rational people who get a painful terminal illness sometimes see lots of negative, and that's where physician-assisted suicide comes in. This is subject to the enormous, hard-to-emphasize-enough cognitive distortion that badly depressed people are terrible at constructing future contentment curves. Then irreversibility comes in as an argument, and the suggestion that a person should let others help them figure it out too.
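To make the curve idea concrete, here is a minimal sketch of the comparison; all the numbers are invented purely for illustration, and a real decision would of course involve far messier estimates:

```python
def net_contentment(samples):
    """Given (years_from_now, contentment) samples, return the total
    area above zero and the total area below zero, using trapezoidal
    strips. (A strip that crosses zero is attributed wholly to one
    side -- a coarse approximation, fine for a toy model.)"""
    pos = neg = 0.0
    for (t0, c0), (t1, c1) in zip(samples, samples[1:]):
        strip = (c0 + c1) / 2 * (t1 - t0)  # signed area of this strip
        if strip >= 0:
            pos += strip
        else:
            neg -= strip
    return pos, neg

# Hypothetical curve: good years now, steady decline later.
curve = [(0, 0.8), (5, 0.6), (10, 0.1), (15, -0.4), (20, -0.7)]
pos, neg = net_contentment(curve)
print(pos > neg)  # True -- the area above zero still outweighs the area below
```

The depression distortion mentioned above would show up here as systematically pessimistic sample values, which is exactly why the curve shouldn't be constructed alone.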

Comment by Bart119 on Being a Realist (even if you believe in God) · 2012-05-17T20:33:34.148Z · LW · GW

I modified my comment slightly to not refer to Truth. But I do think it is unreasonable to expect that people will agree on many values, e.g. whether art, psychotherapy, the worship of some particular concept of God, maximizing lifespan, hedonism, making money etc. are how best to live one's life. Discussion and debate are fine (but not required). But if an opponent doesn't convince me that premarital sex is wrong (for instance), he or she may not harass or coerce me.

When deciding how to allocate your time in life, one choice to make is what arguments to listen to and what not. You have to make a judgment on very little information. The older you get, the more you are likely to judge that a new argument isn't of a kind to convince you (though it's still a probabilistic judgment). Fortunately, others whose opinions you respect may listen, and if it's really good they'll alert you.

Comment by Bart119 on Being a Realist (even if you believe in God) · 2012-05-17T17:34:58.251Z · LW · GW

I like this post. You're coming from religion, you're seeking truth, you don't want to toss out the religion completely. I think asking what self-identified rationalists have to say about that is entirely appropriate. As mwengler implies, a religious background is as good a place to get values from as anyplace else.

I was raised as an atheist, toyed with Quakerism for a while, then went back to atheism, but with a kinder view of religion. Quakers may not be great at cost-benefit tradeoffs, but they've been at the forefront of progressive values forever. I'm also a Unitarian-Universalist atheist, and enjoy the community a church provides (a mix of atheists and theists). We teach our kids about all major religions, and then let them choose their beliefs; most choose what amounts to atheism, but they have some idea who they share the world with. One parent said he brought his kids to UU Sunday School to "inoculate them against religion".

But if I sing a line like, "Holy, holy, holy, Lord God Almighty... only thou art holy, there is none beside thee, perfect in power, love and purity." it makes me feel kind of teary and good. As does, "The Lord is my shepherd, I shall not want." What a great idea, that someone else is watching out for you, someone who knows best! Sometimes it can be helpful, for instance for reducing unproductive anxiety. 90% of me focuses on the fact it isn't true and 10% on its value. For a questioning believer, maybe it's 90-10 the other way.

All I ask of believers is to subscribe to what I've heard described as the liberal bargain. We do not expect to come to agreement on important issues of values and how life should be led. Persuasion is fine, but coercion is not. All you have to do is be a good neighbor, abiding by widely shared ethical beliefs: leave other people alone and treat them with a modicum of respect. Let the public schools teach (secular) science. We do interfere inside families enough to prevent child abuse, but that's about it (and arguably we do too much of that). And I hope/expect that all a believer needs to buy into the liberal bargain is just a little bit of doubt.

So I say go ahead and pray, go to church, whatever works. Churches do a lot of good works. You already know that God helps those who help themselves, which means you're doing pretty much the same thing with or without a God.

I realize some people who were raised with religion and reject it have substantial anger against religion and need to denounce religion in strong terms. Sometimes they seem to want to make believers feel like idiots. I think that is unfortunate.

Comment by Bart119 on Tools versus agents · 2012-05-16T18:48:12.434Z · LW · GW

Thanks for pointers into what is a large and complex subject. I'm not remotely worried about things coming in from the stars. As for letting the AI out of the jar, I'm a bit perplexed. The transcripts are not available for review? If not, what seems relevant is the idea that an ideal encryption system has to be public so the very smartest people can try to poke holes in it. Of course, the political will to keep an AI in the box may be lacking -- if you don't let it out, someone else will let another one out somewhere else. Seems related to commercial release of genetically modified plants, which in some cases may have been imprudent.

Comment by Bart119 on Tools versus agents · 2012-05-16T17:31:35.350Z · LW · GW

I haven't read much in the super-intelligent AI realm, but perhaps a relatively naive observer has some positive value. If we get to the point of producing AI that seems remotely super-intelligent, we'll stick firewalls around it. I don't think the suggested actions of a super-intelligent AI will be harmful in an incomprehensible way. An exception would be if it created something like the world's funniest joke. The problem with HAL was that they gave him control of spacecraft functions. I say we don't give 'hands' to the big brains, and we don't give big brains to the hands, and then I won't lose much sleep.

Comment by Bart119 on The ethics of breaking belief · 2012-05-11T17:38:30.856Z · LW · GW

Like CuSithBell, I'll plead the restrictive relative clause interpretation, bolstered by the absence of a comma. I'll also plead common sense as an ambiguity resolution tool. And not only do we have the existence of cultural Catholics, we've got as our first estimate a minimum (if every God-believing French person were a Catholic) of 41% of Catholics who don't subscribe to a vital church teaching.

Comment by Bart119 on The ethics of breaking belief · 2012-05-10T19:35:16.097Z · LW · GW

I think atheists sometimes have a one-dimensional extreme view of believers. I never was a believer really (though I tried to be a Quaker for a while). I am a Unitarian-Universalist for social reasons (one joking definition of UUs is "atheists with children" -- and I'd encourage atheists to consider if it might meet their needs).

Believers know very well that there have been no unambiguous miracles lately, that really horrible things happen in the world despite a presumably benevolent God, and that the evidence for God is indirect. I think very few lie on their deathbeds with unalloyed peace and calm with the absolute conviction that they're going to heaven.

They are also well aware that different factions even within Christianity reach different conclusions about what God wants them to do.

There's a reason that religious communities are always dealing with doubters and speak of the need for having faith (despite a dearth of evidence), and understand that faith gets weaker and stronger. I think most have thought about losing their faith and what it would mean.

I don't have any statistics to quote, but I bet the majority of believers have views that are nuanced at least to this degree.

Comment by Bart119 on The ethics of breaking belief · 2012-05-10T19:23:01.659Z · LW · GW

58% of French people consider themselves Catholic: http://en.wikipedia.org/wiki/Religion_in_France

34% of French people assent to: "I believe there is a God". http://en.wikipedia.org/wiki/Religion_in_Europe

Of course, there are methodological issues and this doesn't prove the matter definitively, but it certainly suggests that a lot of French people are "cultural Catholics" the way we have "cultural Jews" in the US.

Comment by Bart119 on The ethics of breaking belief · 2012-05-09T02:02:03.539Z · LW · GW

I think you (and most commenters) are treating this hypothetical believer in a rather disrespectful and patronizing fashion. I would think the ethical thing to do is to engage in a meta-discussion with such a person and see whether there are certain subjects that are off limits, how they feel about your differing views on God, how they would feel about losing their faith, etc. They might ask you similar questions about what might make you become a believer. You might find yourself incorrect about what might make them lose their belief.

It's certainly possible to remain in a religious community without one's faith intact -- I think it happens to a large percentage of people in any religious group. Consider all the European Catholics who are essentially atheists.

Comment by Bart119 on No Value · 2012-05-06T13:58:44.154Z · LW · GW

I tend to fall on the side of those who say, "Wait, don't panic". Well, 'panic' would be a strong emotion of the kind you say you're not having, but you're obviously uneasy, and rightly so.

When 'the system' looks at you, they're going to see a person who is functioning pretty well in the world. That's the major thing they care about. And it's no small thing!

Things are likely to change at your age, simply with the passing of time. Are you going to go to college? Get out of the house somehow? That could get you more perspective on your parents and more opportunity to see what life is like without them. Frequent advice to despairing young people is that "it gets better". It usually does, and when it doesn't, you at least get a better idea of the problem you're trying to solve.

I might posit a LW tendency (bias?) to act as opposed to waiting. I think psychedelics would be a terrible idea, frankly. Way too much of a radical act.

Comment by Bart119 on Seeking links for the best arguments for economic libertarianism · 2012-05-03T20:05:38.088Z · LW · GW

Maybe setting the bounds of the problem would help some. I'm assuming:

  1. Some form of representative democracy as political context, in the absence of any better systems.

  2. A system of law protecting most property rights -- no arbitrary expropriations.

  3. Socialism no more extreme than in (say) postwar Scandinavian countries.

  4. Libertarianism no more extreme than (say) late 19th century USA.

  5. Regulated capitalism. The question is how much regulation or taxation.

Given those parameters, I don't need the Communist Manifesto or any radical anarchist works. North Korea, the USSR, pre-1980 China aren't so relevant.

If people disagree with any of those limits on the problem, I suppose just stating that would be of interest, perhaps with a link or two. I realize getting into arguments about such things could be counterproductive, but knowing of the existence of views outside of those bounds would be helpful.

Comment by Bart119 on Seeking links for the best arguments for economic libertarianism · 2012-05-03T18:44:21.139Z · LW · GW

Good point. The truth is, my starting point is much less libertarian than most LWers, if I recall survey results correctly. I'm trying to understand the other side, which is I gather virtuous within a rationality framework. I wasn't trying to bias what answers I would get, but you're right that it could in some fashion or other.

Comment by Bart119 on Public Choice and the Altruist's Burden · 2012-05-01T20:03:56.103Z · LW · GW

Is this the right place to engage in thread necromancy? We'll see.

I've been troubled by the radical altruism argument for some years, and never had a very satisfactory reason for rejecting it. But I just thought of an argument against it. In brief, if people believe that their obligation is to give just about everything they have to charity, then they have created a serious disincentive to create more wealth.

It starts with the argument against pure socialism. In that system, each person works as hard as he or she can in order to produce for the good of society, and society takes 100% of the production and distributes it to people according to need (or utility, as best it can figure it out). This is appealing in many ways. The main determinants of a person's productivity are factors beyond his or her control: genetic endowment, early childhood experiences, and the ideas one is exposed to. Even free will is suspect. So if what you can produce is due to factors beyond your control, why should you benefit from it? Distribute according to need instead. It's really a very nice idea. The only problem is, it doesn't work. People in general seem to be not at all perfectible, and when you change the incentives, people's behavior changes. They stop working hard, see others working less hard, see a system that's broken and work even less hard, and eventually everyone loses. I'm hoping there are few enough pure socialists out there that this won't become a political battle, which I realize is discouraged here.

Anyway, the same reasoning could apply to extreme altruism. If a person believes that their obligation is to give just about everything they have to charity, then they have created a serious disincentive to create more wealth. Sure, a noble individual could resist that, just as some few people under communism worked their hardest. So each person can ask if they are that noble or not.

I'm actually in favor of coerced altruism: taxes. My "cheating detector" that evolution has endowed me with is alive and well, and I don't really want to volunteer to redistribute my wealth unless other people are going to participate too. Yeah, it's part of a huge, messy, inefficient political process to determine how much redistribution to do (a tiny fraction of 100%) but the idea of getting everyone to contribute instead of a small minority of not-very-rich people makes it worth it. This may well be an unpopular view. Pointers to where this has been discussed elsewhere are welcome in lieu of reopening some old issue.

Comment by Bart119 on Welcome to Less Wrong! (2012) · 2012-05-01T19:28:27.969Z · LW · GW

Maybe I'm missing something.

I'm not saying my behavior is random, or un-caused. I experience preferences among actions. Factors I'm unaware of undoubtedly play a part; I can speculate on them, as can others, and any of us could try to model them. But as I experience reality, I'm only striving up to a point to do the Right Thing. My speculation is that if the cost exceeds the cost of reminding myself I'm actually a nihilist, I'll bail on morality.

I'm very interested in arguments as to why nihilism isn't a consistent position -- heck, even why it's not a good idea or how other people have gotten around it.

Comment by Bart119 on Welcome to Less Wrong! (2012) · 2012-05-01T19:06:22.179Z · LW · GW

I stumbled here while searching some topic, and now I've forgotten which one. I've been posting for a few weeks, and just now managed to find the "About" link that explains how to get started, including writing an intro here. Despite being a software engineer by trade these past 27-odd years, I manage to get lost navigating websites a lot, and I still forget to use Google and Wikipedia on topics. Sigh. I'm 57, and was introduced to cognitive fallacies as long ago as 1972. I've tried to avoid some of the worst ones, but I also fail a lot. I kept a blog with issue-related essays for a while, and whatever its shortcomings, I was proud of the fact that when I ran out of things to say, I stopped posting. With the prospect of a community like this one that might respond substantively, maybe I'll be inspired to write more here.

This description of a guy who believed in objective morality but lost his faith impressed me a lot. That's me. I don't think there's any very compelling reason to live one's life in a particular way, or any real reason that some actions are preferable to others. That might be called nihilism. I live a decent life, though, because I'm happier pretending not to be a nihilist and making moral arguments and living honorably and all. But when the going gets tough (as in unpleasant consequences to some line of thought that doesn't make me happy), I always have the option of shrugging my shoulders, yawning, and going on to the next topic. Rationality too is a fun tool. I find it most helpful within the relatively small questions of life.

Comment by Bart119 on Stupid Questions Open Thread Round 2 · 2012-05-01T16:34:38.716Z · LW · GW

I remain quite confused.

In fact, it is totally unfair of you to assume that having this conversation is so pressing that it goes without saying. After all, not all theists proselytize.

OK. This seems to imply that there is some serious downside about starting such a conversation. What would it be? It would seem conciliatory to theists, if some (naturally enough) assume that what atheists want is for them to embrace atheism.

I'll say only that I'm not convinced that believing unpleasant but true things is inherently inconsistent with being happier.

I hope I've parsed the negatives correctly: Certainly believing unpleasant but true things is highly advantageous to being happier if it leads to effective actions (I sure hope that pain isn't cancer -- what an unpleasant thing to believe... but I'll go to the doctor anyway and maybe there will be an effective treatment). If it means unpleasant things that can't be changed, then that's not inherently inconsistent with being happier either, for instance if your personal happiness function includes that discovering that you are deceiving yourself will make you very unhappy.

The question is more whether it is a valid choice for a person to say they value pleasant illusions when there is no effective way to change the underlying unpleasant reality.

We object when someone else wants to infringe on our liberties (contraception, consensual sexual practices), and my suggestion was that a mild dose of doubt in one's faith might be enough to defang efforts to restrict other people's liberties.

I knew a devout Catholic who was also a devout libertarian, and his position on abortion was that it was a grave sin, but it should not be illegal. I'm not sure if that position required a measure of doubt about the absolute truth of Catholicism, but it seems possible.

Comment by Bart119 on Stupid Questions Open Thread Round 2 · 2012-04-30T16:50:55.328Z · LW · GW

It seems that implicit in any discussion of the kind is, "What do you think I ought to do if you are right?".

For theists, the answer might be something leading to, "Accept Jesus as your personal savior", etc.

For atheists, it might be, "Give up the irrational illusion of God." I'm questioning whether such an answer is a good idea if they are at least humble and uncertain enough to respect others' views -- if their goal is comfort and happiness as opposed to placing a high value on literal truth.

But do recall, I'm placing this in the "stupid questions" thread because I am woefully ignorant of the debate and am looking for pointers to relevant discussions.

Comment by Bart119 on Survey of older folks as data about one's future values and preferences? · 2012-04-28T18:12:29.868Z · LW · GW

Yes, it was vague. I'll try to be more precise -- as much as I can.

Suppose we do a pilot experiment in a small region on the Tigris and Euphrates where people have been living in high population densities for a long time. We have large numbers of people coming back from the dead, perhaps 10 times the current population? Perhaps with infant mortality we have 5 times as many children as adults -- lots of infants and young children.

But the UN is ready, prepared in advance. There is land for everyone. We figure at least that the dead have lost the right to their property, so we put them all up in modular housing we make outside the present city.

But there are so many formerly dead, from older linguistic and cultural and religious groups, that they form their own political parties and take over the government.

I could go on, but it's apparent to me that the social order is completely messed up. Now suppose I'm an Egyptian, and it comes to a vote: Do we want to implement this program in Egypt? Assuming that the as-yet-unresurrected dead don't get a vote, I can see the proposal being voted down overwhelmingly.

My moral intuition is that the Egyptians have no moral obligation to resurrect their ancestors. They have a right to continue their ways of existence.

Of course, this is an extreme thought experiment, and arguing about details won't be productive.

I have a similar intuition about, say unrestricted immigration. If someone said that utility would be maximized if anyone could move anywhere on earth they wanted, I have an intuition that I as an American have a right to resist that. The status quo has some weight.

Applying rationality to problems can go too far. In the late 19th and early 20th centuries, a lot of very smart, very thoughtful, very knowledgeable people thought Communism was going to be a great idea. But due to a few slip-ups and miscalculations, it turned out it wasn't -- which we can see with hindsight. No, they didn't have modern notions of rationalism, but they had the best thinking of their day.

A truism is that if the only tool you have is a hammer, everything looks like a nail. It's easier to compute utility on the level of individuals. You can spin a story based on that about what society should look like, but I think you might be biased by the fact that your tool can apply. If the alternative is, "My tools don't have anything to say on that issue because of complex interactions among people and the entire fabric of society", then you would be biased to reject that alternative.

I know this brings up a lot of issues, some of which should be considered separately. And I am ignorant of a lot of LW work. Pointers to other work welcome.

Comment by Bart119 on Stupid Questions Open Thread Round 2 · 2012-04-28T16:04:56.310Z · LW · GW

LWers are almost all atheists. Me too, but I've rubbed shoulders with lots of liberal religious people in my day. Given that studies show religious people are happier than the non-religious (which might not generalize to LWers but might apply to religious people who give up their religion), I wonder if all we really should ask of them is that they subscribe to the basic liberal principle of letting everyone believe what they want as long as they also live by shared secular rules of morality. All we need is for some humility on their part -- not being totally certain of their beliefs means they won't feel the need to impose their beliefs on others. If religious belief is how they find meaning in a life (that, in my opinion, has no absolute meaning), why rock their boats?

This must have been discussed many, many times. Pointers to relevant discussions, either within or outside LW, appreciated.

Comment by Bart119 on Don't plan for the future · 2012-04-28T02:32:30.975Z · LW · GW

Thank you so much for the reply! Simply tracing down the 'berserker hypothesis' and 'great filter' puts me in touch with thinking on this subject that I was not aware of.

What I thought might be novel about what I wrote included the idea that independent evolution of traits was evidence that life should progress to intelligence a great deal of the time.

When we look at the "great filter" possibilities, I am surprised that so many people think that our society's self-destruction is such a likely candidate. Intuitively, if there are thousands of societies, one would expect a high variability in social and political structures and outcomes. The next idea I read, that "no rational civilization would launch von Neumann probes", seems extremely unlikely because of that same variability. Where there would be far less variability is in the mundane constraints of energy and engineering required to launch self-replicating spacecraft in a robust fashion. Problems there could easily stop every single one of our thousand candidate civilizations cold, with no variability.

Comment by Bart119 on Survey of older folks as data about one's future values and preferences? · 2012-04-28T02:09:54.421Z · LW · GW

I think it is a hard question. The foundations of our societies would all be shaken to the core by the sudden resuscitation that doubles the earth population (even assuming as we must that we can feed them all). I don't think "save or prolong any life of reasonable quality" scales up past a certain point. At a certain point the psychological quality of life of living individuals that comes from living in a society with a certain structure and values may trump the right of individuals who thought they were dead to live once more. (Humor: If you've been widowed three times, do you really want 3 formerly late husbands showing up at your doorstep? :-))

Comment by Bart119 on Survey of older folks as data about one's future values and preferences? · 2012-04-28T01:52:57.066Z · LW · GW

You mean rationally from an evolutionary point of view? You have less to lose from a bold decision, but perhaps you have much less to gain and that predominates. As a young guy you can take off into the wilds with a young wife and another few couples. Chances might be 90% you'll be killed, but if you do make it to the new land, you might start a whole new population of people.

I think if you look at deciduous trees of the same species, the young trees get their leaves earlier in the spring than the mature trees. I think I've observed that. They're "gambling", because a late frost could kill them. But their chances of becoming a mature tree aren't that great anyway, and they need to grab light before their elders shade them. The older trees can afford to be conservative.

As people in our modern society, there's some tendency to relax as you get older. Older people encourage you to dance as if no one is watching? Not sure I believe that myself, though. :-)

Comment by Bart119 on Survey of older folks as data about one's future values and preferences? · 2012-04-28T01:41:35.394Z · LW · GW

These speculations are interesting. I think it's always worth wheeling evolutionary thought up to a problem to see what it says.

However, surveying real people in our real, modern-day world seems far more direct.

I don't think either that evolution would have much of a reason to cleanly engineer a stable end-state after which development just entirely stops, and leaves you with a well-adjusted, perfectly functional body or brain. That may not be a trivial task after all.

Evolution is constantly making trade-offs, and (last I knew) the reason our bodies fall apart was that evolution didn't have a strong incentive to keep them going. We last as long as we do because we take care of grandkids, maybe, and Jared Diamond suggested a reason for longevity was that an old person was a storehouse of knowledge.

Comment by Bart119 on Survey of older folks as data about one's future values and preferences? · 2012-04-28T01:30:56.252Z · LW · GW

You can distinguish the two. Older folks can learn from younger ones based on specific experience. Consider: Bob might be considering law school as a career change at 40 and learn from a 30-year-old who started the practice of law at 25 that it was not fun.

You can certainly imagine that age itself, or things that strongly correlate with age, could bring a different perspective. Another trivial sort of example: you decide at 50 that you want to buy a home where you'll never have to move again, and you are considering a condo that's on the 4th floor with no elevator. The wisdom of 80-year-olds might say that's unwise.

The point, of course, is to investigate to find less obvious examples -- if any.

For some young people, there might be some discomfort in admitting this as a relevant source of data about how to live life.

The example I've read about of whether to finish your Ph.D. could even be relevant here. If someone did a survey showing that 75% of old folks who dropped out of Ph.D. programs wished they'd finished them, would that be relevant? It certainly wouldn't decide the issue, but I think it would be a factor. And you'd have to factor in or out various cognitive biases.

(I was in exactly that position myself, and decided to finish the Ph.D. It made sense in my case because I didn't have a burning passion to get on to the next thing in life (nor did I know what that would be). But I was correct that I would never directly need it.)

(Example changed because the piercing example equivocates possible mistakes by 16-year-olds and 25-year-olds in the 95% figure)

You meant "equates" instead of "equivocates"? Even with that change I'm not sure quite what you mean. Maybe not that important.

Comment by Bart119 on Survey of older folks as data about one's future values and preferences? · 2012-04-28T00:03:14.086Z · LW · GW

Once it's shown conclusively to work no one will want it anymore :)

I don't get the joke or reference, and it sounds intriguing. Does it mean that if people can be revived successfully into indefinite lifespans, then there would be no need to freeze people going forward?

My big problem with indefinite lifespans is that I think we're already a warped society by having so many old people (meaning, say older than me at 57 :-)). I suppose if we could first keep everyone from aging and retaining their 25-year-old physiques and energy and mental status, that would address that to some extent. But if we get a world full of reasonably spry 80-year-olds, it doesn't appeal to me. In my book of values, all else being equal, society is supposed to be half children.

Thought experiment: Suppose we suddenly developed the technology to revive everyone who has ever lived (they left some sort of holographic signal that Google finds it can read :-)). Would we want to? Historians would be overjoyed to revive selected ones because they would help us understand the past. But as a matter of restoring them for their own sake?

As a newcomer I'm sure these have been discussed over and over, and pointers to the relevant discussions are welcome in place of rehashing old arguments.