Rationalists Are Less Credulous But Better At Taking Ideas Seriously

post by Scott Alexander (Yvain) · 2014-01-21T02:18:17.720Z · LW · GW · Legacy · 287 comments

Consider the following commonly-made argument: cryonics is unlikely to work. Trained rationalists are signed up for cryonics at rates much greater than the general population. Therefore, rationalists must be pretty gullible people, and their claims to be good at evaluating evidence must be exaggerations at best.

This argument is wrong, and we can prove it using data from the last two Less Wrong surveys.

The question at hand is whether rationalist training - represented here by extensive familiarity with Less Wrong material - makes people more likely to believe in cryonics.

We investigate with a cross-sectional study, looking at proto-rationalists versus experienced rationalists. Define proto-rationalists as those respondents to the Less Wrong survey who indicate they have been in the community for less than six months and have zero karma (usually indicative of never having posted a comment). And define experienced rationalists as those respondents to the Less Wrong survey who indicate they have been in the community for over two years and have >1000 karma (usually indicative of having written many well-received posts).

By these definitions, there are 93 proto-rationalists, who have been in the community an average of 1.3 months, and 134 experienced rationalists, who have been in the community an average of 4.5 years. Proto-rationalists generally have not read any rationality training material - only 20/93 had read even one-quarter of the Less Wrong Sequences. Experienced rationalists are, well, more experienced: two-thirds of them have read pretty much all the Sequence material.

Proto-rationalists thought that, on average, there was a 21% chance of an average cryonically frozen person being revived in the future. Experienced rationalists thought that, on average, there was a 15% chance of same. The difference was marginally significant (p < 0.1).

Marginal significance is a copout, but this isn't our only data source. Last year, using the same definitions, proto-rationalists assigned a 15% probability to cryonics working, and experienced rationalists assigned a 12% chance. We see the same pattern.
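(For readers who want to check this sort of comparison themselves: here is a minimal sketch of the group test, assuming the two sets of probability estimates have been extracted from the survey export. The numbers below are illustrative placeholders, not the actual survey responses.)

```python
# Hypothetical sketch: testing the difference in mean cryonics estimates
# between proto-rationalists and experienced rationalists.
# The arrays are made-up placeholders, not the real survey data.
from scipy import stats

proto = [21, 30, 10, 25, 15, 40, 5, 22]        # proto-rationalist estimates (%)
experienced = [15, 10, 20, 5, 12, 18, 8, 14]   # experienced-rationalist estimates (%)

# Welch's t-test, which does not assume equal variances between groups
t_stat, p_value = stats.ttest_ind(proto, experienced, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```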

So experienced rationalists are consistently less likely to believe in cryonics than proto-rationalists, and rationalist training probably makes you less likely to believe cryonics will work.

On the other hand, 0% of proto-rationalists had signed up for cryonics compared to 13% of experienced rationalists. 48% of proto-rationalists rejected the idea of signing up for cryonics entirely, compared to only 25% of experienced rationalists. So although rationalists are less likely to believe cryonics will work, they are much more likely to sign up for it. Last year's survey shows the same pattern.

This is not necessarily surprising. It only indicates that experienced rationalists and proto-rationalists treat their beliefs in different ways. Proto-rationalists form a belief, play with it in their heads, and then do whatever they were going to do anyway - usually some variant on what everyone else does. Experienced rationalists form a belief, examine the consequences, and then act strategically to get what they want.

Imagine a lottery run by an incompetent official who accidentally sets it up so that the average payoff is far more than the average ticket price. For example, maybe the lottery sells only ten $1 tickets, but the jackpot is $1 million, so that each $1 ticket gives you a 10% chance of winning $1 million.

Goofus hears about the lottery and realizes that his expected gain from playing the lottery is $99,999. "Huh," he says, "the numbers say I could actually win money by playing this lottery. What an interesting mathematical curiosity!" Then he goes off and does something else, since everyone knows playing the lottery is what stupid people do.
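Spelled out, the calculation Goofus performs is just expected value: with a 10% chance at $1,000,000 per $1 ticket,

$$
\mathbb{E}[\text{gain per ticket}] = 0.10 \times \$1{,}000{,}000 - \$1 = \$99{,}999 .
$$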

Gallant hears about the lottery, performs the same calculation, and buys up all ten tickets.

The relevant difference between Goofus and Gallant is not skill at estimating the chances of winning the lottery. We can even change the problem so that Gallant is more aware of the unlikelihood of winning than Goofus - perhaps Goofus mistakenly believes there are only five tickets, and so Gallant's superior knowledge tells him that winning the lottery is even more unlikely than Goofus thinks. Gallant will still play, and Goofus will still pass.

The relevant difference is that Gallant knows how to take ideas seriously.

Taking ideas seriously isn't always smart. If you're the sort of person who falls for proofs that 1 = 2, then refusing to take ideas seriously is a good way to avoid ending up actually believing that 1 = 2, and a generally excellent life choice.

On the other hand, progress depends on someone somewhere taking a new idea seriously, so it's nice to have people who can do that too. Helping people learn this skill and when to apply it is one goal of the rationalist movement.

In this case it seems to have been successful. Proto-rationalists think there is a 21% chance of a new technology making them immortal - surely an outcome as desirable as any lottery jackpot - consider it an interesting curiosity, and go do something else because only weirdos sign up for cryonics.

Experienced rationalists think there is a lower chance of cryonics working, but some of them decide that even a pretty low chance of immortality sounds pretty good, and act strategically on this belief.

This is not to either attack or defend the policy of assigning a non-negligible probability to cryonics working. This is meant to show only that the difference in cryonics status between proto-rationalists and experienced rationalists is based on meta-level cognitive skills in the latter whose desirability is orthogonal to the object-level question about cryonics.

(an earlier version of this article was posted on my blog last year; I have moved it here now that I have replicated the results with a second survey)

287 comments

Comments sorted by top scores.

comment by Eugine_Nier · 2014-01-21T04:43:34.955Z · LW(p) · GW(p)

It only indicates that experienced rationalists and proto-rationalists treat their beliefs in different ways. Proto-rationalists form a belief, play with it in their heads, and then do whatever they were going to do anyway - usually some variant on what everyone else does. Experienced rationalists form a belief, examine the consequences, and then act strategically to get what they want.

Alternate hypothesis: the experienced rationalists are also doing what everyone else (in their community) is doing, they just consider a different group of people their community.

Replies from: zslastman, army1987, MTGandP
comment by zslastman · 2014-01-21T07:56:02.192Z · LW(p) · GW(p)

My immediate thought was that there is a third variable controlling both experience in rationality and willingness to pay for cryonics, such as 'living or hanging out in the bay area'.

comment by A1987dM (army1987) · 2014-01-22T07:21:51.846Z · LW(p) · GW(p)

Well, only 13% of “experienced rationalists” are signed up for cryonics, which hardly counts as “everyone else” -- unless the thing they do because everyone else is doing it is “I'll sign up for cryonics iff I think it's worth it”, which kind of dilutes the meaning.

(Anecdata: in each of my social circles before I entered university, to a very good zeroth approximation either everyone smoked or nobody did, but nowadays it's not uncommon for me to be among smokers and non-smokers at the same time. Sure, you could say that in some circles people smoke because everybody does, in some circles people don't smoke because nobody does, and in some circles people smoke iff they like because everybody does that, but...)

comment by MTGandP · 2014-04-17T23:57:05.124Z · LW(p) · GW(p)

As army1987 said, only a small percentage of experienced rationalists sign up for cryonics, so I wouldn't expect there to be social pressure to sign up. I think a more likely explanation is that experienced rationalists feel less social pressure against signing up for cryonics.

comment by Kaj_Sotala · 2014-01-21T05:33:21.743Z · LW(p) · GW(p)

and rationalist training probably makes you less likely to believe cryonics will work.

I like this post, but this conclusion seems too strong. There could e.g. be a selection effect, in that people with certain personality traits were less likely to believe in cryonics, more likely to take ideas seriously, and more likely to stick around on LW instead of forgetting the site after the first few months. In that case, "rationalist training" wouldn't be the cause anymore.

Replies from: private_messaging, ESRogs, RyanCarey
comment by private_messaging · 2014-01-22T22:03:31.295Z · LW(p) · GW(p)

Proto-rationalists thought that, on average, there was a 21% chance of an average cryonically frozen person being revived in the future.

....

Last year, using the same definitions, proto-rationalists assigned a 15% probability to cryonics working

...

I think we have a general trend of decreasing skepticism among newcomers.

As for signing up for cryonics, LW used to advocate signing up for cryonics far more loudly back in the day. edit: also, did those 13% sign up for cryonics after their "rationalist training", or before?

comment by ESRogs · 2014-01-22T00:50:00.552Z · LW(p) · GW(p)

Yes, and in particular, I would expect age to possibly be a common cause behind being on LessWrong longer, and having signed up for cryonics after being convinced of its plausibility.

Replies from: FourFire
comment by FourFire · 2014-01-23T14:52:15.675Z · LW(p) · GW(p)

Age, and economic status, at least in my case, and I am one of the survey takers.

comment by RyanCarey · 2014-01-22T07:45:49.475Z · LW(p) · GW(p)

Yeah, it's probably 50% rationalist training (reading), 25% rationalist culture and 25% being a futurist before LW existed...

comment by Mitchell_Porter · 2014-01-21T03:30:04.396Z · LW(p) · GW(p)

If we distinguish between

"experienced rationalists" who are signed up for cryonics

and

"experienced rationalists" who are not signed up for cryonics

... what is the average value of P(Cryonics) for each of these subpopulations?

Replies from: None
comment by [deleted] · 2014-01-21T15:25:15.168Z · LW(p) · GW(p)

Going by only the data Yvain made public, and defining "experienced rationalists" as those people who have 1000 karma or more (this might be slightly different from Yvain's sample, but it looked as if most who had that much karma were in the community for at least 2 years), and looking only at those experienced rationalists who both recorded a cryonics probability and their cryonics status, we get the following data (note that all data is given in terms of percentages - so 50 means 50% confidence (1 in 2), while 0.5 means 0.5% confidence (1 in 200)):

For those who said "No - and do not want to sign up for cryonics", we have for the cryonics success probability estimate (and this is conditioning on no global catastrophe) (0.03,1,1) (this is (Q1,median,Q3)), with mean 0.849 and standard deviation 0.728. This group was size N = 32.

For those who said "No - still considering it", we have (5,5,10), with mean 7.023 and standard deviation 2.633. This group was size N = 44.

For those who wanted to but for some reason hadn't signed up yet (either not available in the area (maybe worth moving for?) or otherwise procrastinating), we have (15,25,37), with mean 32.069 and standard deviation 23.471. This group was size N = 29.

Finally, for the people who have signed up, we have (7,21.5,33), with mean 26.556 and standard deviation 22.389. This group was size N = 18.

If we put all of the "no" people together (those procrastinating, those still thinking, and those who just don't want to), we get (2,5,15), with mean 12.059 and standard deviation 17.741. This group is size N = 105.

I'll leave the interpretation of this data to Mitchell_Porter, since he's the one who made the original comment. I presume he had some point to make.

(I used Excel's population standard deviation computation to get the standard deviations. Sorry if I should have used a different computation. The sample standard deviation yielded very similar numbers.)
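(For anyone who wants to reproduce this breakdown: a minimal pandas sketch of the grouping above. The filename and column names are guesses for illustration; the actual field names in the public survey export may differ.)

```python
# Sketch of the grouped summary above. "lw_survey.csv", "Karma",
# "CryonicsStatus", and "CryonicsProbability" are assumed names,
# not necessarily those in the real export.
import pandas as pd

df = pd.read_csv("lw_survey.csv")
experienced = df[df["Karma"] >= 1000].dropna(
    subset=["CryonicsStatus", "CryonicsProbability"]
)

summary = experienced.groupby("CryonicsStatus")["CryonicsProbability"].agg(
    Q1=lambda s: s.quantile(0.25),
    median="median",
    Q3=lambda s: s.quantile(0.75),
    mean="mean",
    std=lambda s: s.std(ddof=0),  # population SD, matching Excel's STDEVP
    N="count",
)
print(summary)
```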

Replies from: Mitchell_Porter, VAuroch
comment by Mitchell_Porter · 2014-01-22T09:23:23.264Z · LW(p) · GW(p)

Thanks for the calculations... and for causing me to learn about quartiles.

Part of Yvain's argument is that "proto-rationalists" have an average confidence in cryonics of 21%, but "experienced rationalists", only 15%. The latter group is thereby described as "less credulous", because the average confidence is lower, but "better at taking ideas seriously", because more of them are actually signed up for cryonics.

Meanwhile, your analysis – if I am parsing the figures correctly! – suggests that "experienced rationalists” who don't sign up for cryonics have an average confidence in cryonics of 12%, and "experienced rationalists” who do sign up for cryonics, an average confidence of 26%.

This breaks apart the combination of contrary traits that forms the headline of this article. We don’t see a single group of people who are simultaneously more cryo-skeptical than the LW newbies, and yet more willing to sign up for cryonics. Instead, we see two groups: one that is more cryo-skeptical and which doesn’t sign up for cryonics; and another which is less cryo-skeptical, and which does sign up for cryonics.

Replies from: Vaniver, V_V
comment by Vaniver · 2014-01-22T19:01:54.701Z · LW(p) · GW(p)

This breaks apart the combination of contrary traits that forms the headline of this article. We don’t see a single group of people who are simultaneously more cryo-skeptical than the LW newbies, and yet more willing to sign up for cryonics. Instead, we see two groups: one that is more cryo-skeptical and which doesn’t sign up for cryonics; and another which is less cryo-skeptical, and which does sign up for cryonics.

It seems like you should do the same quartile breakdown for the newbies, because I read Yvain's core point as being that the existence of high-probability newbies who aren't signed up is a failure to act on their beliefs.

I haven't separated out the newbie cryocrastinators from the newbie considerers, though, and it seems that among the experienced the cryocrastinators give higher numbers than those who have signed up, which also seems relevant to a comparison.

Replies from: private_messaging
comment by private_messaging · 2014-01-22T22:36:25.047Z · LW(p) · GW(p)

Maybe procrastinators are trying to over-estimate it to get themselves to do it...

The probabilities are nuts though. For the whole thing to be of use,

1: you must die in the right way to get frozen soon enough and well enough. (Rather unlikely for a young person, by the way).

2: cryonics must preserve enough data.

3: no event that causes you to lose cooling

4: the revival technology must arise and become cheap enough (before you are unfrozen)

5: someone should dispose of the frozen head by revival rather than by garbage disposal or something even nastier (someone uses frozen heads as expired-copyright data).

Note that it's the whole combined probability that matters for the decision to sign up. edit: and not just that, but compared to the alternatives - i.e. you can improve your chances by trying harder not to die, and you can use money/time for that instead of cryonics.

edit2: also, just 3 independent-ish components (freezing works, company doesn't bust, revival available) with high ignorance get you down to 12.5%
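(Spelled out, the multiplication in that last edit: if the components are independent, the probability that cryonics pays off is the product

$$
P(\text{payoff}) = \prod_{i=1}^{n} p_i ,
$$

so three coin-flip components already get you down to $0.5^3 = 0.125$.)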

Replies from: fortyeridania
comment by fortyeridania · 2014-01-25T06:50:14.361Z · LW(p) · GW(p)

You might be interested in reading some other breakdowns of the conditions required for cryonics to work (and their estimates of the relevant probabilities):

Break Cryonics Down (March 2009, at Overcoming Bias)

How Likely Is Cryonics To Work? (September 2011, here at LW)

More Cryonics Probability Estimates (December 2012, also at LW)

comment by V_V · 2014-01-28T01:01:08.921Z · LW(p) · GW(p)

That's what I thought.

comment by VAuroch · 2014-01-21T20:49:52.959Z · LW(p) · GW(p)

Having trouble reading this data. Are the numbers percentages? (i.e. is the mean for No - don't want 0.85%?)

Replies from: None
comment by [deleted] · 2014-01-21T21:11:29.830Z · LW(p) · GW(p)

Yes. I should have made that clearer. I'll edit my comment.

comment by Said Achmiz (SaidAchmiz) · 2014-01-21T03:14:30.250Z · LW(p) · GW(p)

Yvain, could you give a real-life example analogous to your Goofus & Gallant story?

That is, could you provide an example (or several, even better) of a situation wherein:

  1. There is some opportunity for clear, unambiguous victory;
  2. Taking advantage of it depends primarily on taking a strange/unconventional/etc. idea seriously (as distinct from e.g. not having the necessary resources/connections, being risk-averse, having a different utility function, etc.);
  3. Most people / normal people / non-rationalists do not take the idea seriously, and as a consequence have not taken advantage of said opportunity;
  4. Some people / smart people / rationalists take the idea seriously, and have gone for the opportunity;
  5. And, most importantly, doing so has (not "will"! already has!) caused them to win, in a clear, unambiguous, significant way.

Note that cryonics does not fit that bill (it fails point 5), which is why I'm asking for one or more actual examples.

Replies from: whales, Yvain, Cyan, Solvent, Morendil, jkaufman
comment by whales · 2014-01-21T04:17:58.029Z · LW(p) · GW(p)

Slightly different but still-important questions -- what about when you remove the requirement that the idea be strange or unconventional? How much of taking ideas seriously here is just about acting strategically, and how much is non-compartmentalization? To what extent can you train the skill of going from thinking "I should do X" to actually doing X?

Other opportunities for victory, not necessarily weird, possibly worth investigating: wearing a bike helmet when biking, using spaced repetition to study, making physical backups of data, staying in touch with friends and family, flossing.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-01-21T05:19:13.373Z · LW(p) · GW(p)

making physical backups of data

Oh boy, is this ever a good example.

I used to work retail, selling and repairing Macs and Mac accessories. When I'd sell someone a computer, I'd tell them — no, beg them — to invest in a backup solution. "I'm not trying to sell you anything!", I'd say. "You don't have to buy your backup device from us — though we'd be glad to sell you one for a decent price — but please, get one somewhere! Set it up — heck, we'll set it up for you — and please... back up! When you come to us after your hard drive has inevitably failed — as all hard drives do eventually, sure as death or taxes — with your life's work on it, you'll be glad you backed up."

And they'd smile, and nod, and come back some time later with a failed hard drive, no backup, and full of outrage that we couldn't magic their data back into existence. And they'd pay absurd amounts of money for data recovery.

Back up your data, people. It's so easy (if you've got a Mac, anyway). The pain of losing months or years of work is really, really, really painful.

Replies from: poiuyt, Caspian, Jiro, wallowinmaya, christopherj, jazmt, byrnema
comment by poiuyt · 2014-01-22T08:09:25.878Z · LW(p) · GW(p)

This post convinced me to make a physical backup of a bunch of short stories I've been working on. At first I was going to go read through the rest of the comment thread and then go do the backup, but further consideration made me realize how silly that was - burning them to a DVD and writing "Short Story Drafts" on it with a sharpie didn't take more than five minutes to do and made the odds of me forever losing that part of my personal history tremendously smaller. Go go gadget Taking Ideas Seriously!

comment by Caspian · 2014-01-29T03:22:14.604Z · LW(p) · GW(p)

Back up your data, people. It's so easy (if you've got a Mac, anyway).

Thanks for the encouragement. I decided to do this after reading this and other comments here, and yes it was easy. I used a portable hard drive many times larger than the Mac's internal drive, dedicated just to this, and was guided through the process when I plugged it in. I did read up a bit on what it was doing but was pretty satisfied that I didn't need to change anything.

comment by Jiro · 2014-01-21T18:46:33.149Z · LW(p) · GW(p)

If the person doesn't know anything about computers or backups, he can't distinguish "I'm not trying to sell you something" from "I am trying to sell you something and I'm lying about it", and he'd have to do a Bayesian update based on the chance that you're trying to sell him something. Furthermore, he knows that if you are trying to sell him something, that fact alone would make anything you say likely untrustworthy (and lying about your intent to sell him something would increase the probability of untrustworthiness even more).

So the customer is being rational by not listening to you.

Replies from: Wes_W, SaidAchmiz
comment by Wes_W · 2014-01-21T21:25:53.754Z · LW(p) · GW(p)

I am not a salesman.

I am, however, reasonably competent with technology. Growing up in a congregation of all age groups, this made me one of the go-to people whenever somebody had computer problems. I'm talking middle-aged and above, the kind of people who fall for blatant phishing scams, have 256mb of RAM, and don't know what right-clicking is.

Without fail, these people had been aware that losing all their data would be very painful, that it could happen to them, and that backing up their data could prevent it. Their reaction was universally "this is embarrassing, I should've taken that more seriously", not "I didn't know a thing like this could happen/that I could have done something simple to prevent it". Procrastination, trivial inconveniences, and not-taking-the-idea-seriously-enough are the culprits in a large majority of cases.

In short, I think it requires some contortion to construe the typical customer as rational here.

Replies from: SaidAchmiz, VAuroch, Eugine_Nier
comment by Said Achmiz (SaidAchmiz) · 2014-01-22T01:22:16.427Z · LW(p) · GW(p)

I note an amusing and strange contradiction in the sibling comments to this one:

VAuroch says the above is explained by hindsight bias; that the people in question actually didn't know about data loss and prevention thereof (but only later confabulated that they did).

Eugine_Nier says the above is explained by akrasia: the people did know about data loss and prevention, but didn't take action.

These are contradictory explanations.

Both VAuroch and Eugine_Nier seem to suggest, by their tone ("Classic hindsight bias", "That's just akrasia") that their respective explanations are obvious.

What's going on?

Replies from: VAuroch, CCC, Eugine_Nier
comment by VAuroch · 2014-02-06T04:48:20.644Z · LW(p) · GW(p)

I meant less that the explanation was obvious and more that it was a very good example of the effect of hindsight bias; hindsight bias produces precisely these kinds of results.

If something else is even more likely to produce this kind of result, then that would be more likely than hindsight bias. I don't think akrasia qualifies.

To elaborate on what I think was actually going on: People 'know' that failure is a possibility, something that happens to other people, and that backups are a good way to prevent it, but don't really believe that it is a thing that can happen to them. After the fact, hindsight bias transforms 'yeah, that's a thing that happens' to 'this could happen to me' retroactively, and they remember knowing/believing it could happen to them.

comment by CCC · 2014-02-06T04:29:45.675Z · LW(p) · GW(p)

Limits of language, I think. Both explanations are possible, given what the parent post said; both VAuroch and Eugine_Nier may have had experience with similar cases caused, respectively, by hindsight bias and akrasia, which makes their explanation appear obvious to them.

A lot of the time, I've noticed that "it's obvious" means "I have seen this pattern before (sometimes multiple times), and this extra element is part of the same pattern every time that I have seen it".

comment by Eugine_Nier · 2014-01-23T02:26:58.915Z · LW(p) · GW(p)

Well, it depends on what precisely we mean by them "knowing" about data loss.

comment by VAuroch · 2014-01-21T23:19:07.466Z · LW(p) · GW(p)

Their reaction was universally "this is embarrassing, I should've taken that more seriously", not "I didn't know a thing like this could happen/that I could have done something simple to prevent it".

Classic hindsight bias. If you went to a representative sample of similar people who had not recently suffered a backup-requiring event, they would probably think the second version, not the first.

Replies from: Wes_W
comment by Wes_W · 2014-01-22T04:12:18.753Z · LW(p) · GW(p)

Hindsight bias is almost certainly a component. Plus, I was a friendly member of their in-group, providing free assistance with a major problem, so they had two strong reasons to be extra-agreeable.

Even so, in my experience, your second sentence does not match reality. As in, doing exactly that does not in fact yield responses skewing toward the second option, even among the very non-tech-savvy. Many of them don't know exactly how to set such a thing up (but know they could give a teenager $20 to do it for them, which falls under "trivial inconveniences"), but the idea is not new info to them.

My sample size here is small and demographically/geographically limited, so add as many grains of salt as you see fit.

comment by Eugine_Nier · 2014-01-21T23:09:03.740Z · LW(p) · GW(p)

That's just akrasia.

comment by Said Achmiz (SaidAchmiz) · 2014-01-21T19:10:51.426Z · LW(p) · GW(p)

Well, look, of course I'd prefer to sell the customer something. If, knowing this, you take everything out of my mouth to be a lie, then you are not, in fact, being rational. The fact that I would specifically say "buy it elsewhere if you like!", and offer to set the backup system up for free, ought to tell you something.

The other part of this is that the place where I worked was a small, privately owned shop, many of whose customers were local, and which made a large chunk (perhaps the majority) of its revenue from service. (Profit margins on Apple machines are very slim.) It was to our great advantage not to lie to people in the interest of selling them one more widget. Doing so would have been massively self-defeating. As a consequence of all of this, our regular customers generally trusted us, and were quite right to do so.

Finally, even if the customer decided that the chance was too great that I was trying to sell them something, and opted not to buy anything on the spot, it is still ridiculously foolish not to follow up on the salesperson's suggestion that you do something to protect yourself from losing months or years of work. If that is even a slight possibility, you ought to investigate, get second and third opinions, get your backup solution as cheaply as you like, and then take me up on my offer to install it for free (or have a friend install it). To not back up at all, because clearly the salesperson is lying and the truth must surely be the diametrical opposite of what they said, is a ludicrously bad plan.

Replies from: Desrtopa, Jiro
comment by Desrtopa · 2014-01-21T21:15:33.990Z · LW(p) · GW(p)

Well, look, of course I'd prefer to sell the customer something. If, knowing this, you take everything out of my mouth to be a lie, then you are not, in fact, being rational. The fact that I would specifically say "buy it elsewhere if you like!", and offer to set the backup system up for free, ought to tell you something.

It tells customers something, but considering that these are plausible marketing techniques, it's not very strong evidence.

If you tell the customers that something is really important, that they should buy it, even if from somewhere else, this signals trustworthiness and consideration, but it's a cheap signal considering that if they decide, right in your store, to buy a product which your store offers, they probably will buy it from you unless they're being willfully perverse. Most of the work necessary to get them to buy the product from you is done in convincing them to buy it at all, and nearly all the rest is done by having them in your store when you do it.

Offering to provide services for free is also not very strong evidence, because in marketing, "free" is usually free*, a foot-in-the-door technique used to extract money from customers via some less obvious avenue. Indeed, the customers might very plausibly reason that if the service was so important that they would be foolish to do without it, you wouldn't be offering it for free.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-01-21T21:27:34.358Z · LW(p) · GW(p)

Indeed, the customers might very plausibly reason that if the service was so important that they would be foolish to do without it, you wouldn't be offering it for free.

Given that setting up backups on a Mac is so easy that, as I suggested in my quoted spiel, the customer could even do it themselves, this is not a very well-supported conclusion.

foot-in-the-door technique used to extract money from customers via some less obvious avenue.

Well, duh. You "extract" money from customers by the fact of them liking you, trusting you, and getting all their service done at your shop, and buying future things they need from you, also.

if they decide, right in your store, to buy a product which your store offers, they probably will buy it from you unless they're being willfully perverse.

I think you underestimate how doggedly many people hunt for deals. I don't even blame them; being a retail shop, my place of work sometimes couldn't compete with mail-order houses on prices.

You're right, though: if they decided then and there that they would buy the thing, the customers often in fact went ahead and bought it then and there.

But you might plausibly think "hmm, suspicious. I'll wait to buy this until I can do some research." Fine and well; that's exactly what I'd do. Do the research. Buy the thing online. But dismissing the entire notion, based on the idea that "bah, he was just trying to sell me something", is foolishness.

Replies from: Jiro
comment by Jiro · 2014-01-21T23:27:14.236Z · LW(p) · GW(p)

I think you underestimate how doggedly many people hunt for deals.

The customer is estimating the probability that the statement is a sales pitch. The fact that many people would hunt for deals affects the effectiveness of the sales pitch given that it is one, not the likelihood that the statement is a sales pitch in the first place. Those are two different things--it's entirely possible that the statement is probably a sales pitch, but the sales pitch only catches 20% of the customers.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-01-21T23:34:45.427Z · LW(p) · GW(p)

Yes; that comment was a response to your scenario whereby someone has already decided to purchase the item. You asserted that said person would then surely purchase it in the store, at the moment of the decision to purchase. I claimed that some people are too keen on getting a good deal to do that, opting instead to wait and buy it mail-order or online.

This is unrelated to the probability of my statements being a sales pitch.

Thus, a person might think: "Hmm, is this merely a sales pitch? Perhaps; but even if it is, and it succeeds in convincing me to buy a backup device, I might well still not buy it here and now, because I really want a good deal." They might then conclude: "And so, given that the salesman knows this, and is nonetheless insistent that I should buy it — and is even encouraging me to buy it elsewhere if it'll get me to buy it at all — I should take his words seriously; at least, seriously enough to look into it further."

comment by Jiro · 2014-01-22T00:33:35.758Z · LW(p) · GW(p)

it is still ridiculously foolish not to follow up on the salesperson's suggestion that you do something to protect yourself from losing months or years of work. If that is even a slight possibility, you ought to investigate, get second and third opinions, get your backup solution as cheaply as you like, and then take me up on my offer to install it for free (or have a friend install it).

That's Pascal's Mugging. You're suggesting that because the purported consequence of not having a backup is large, even if the probability is small, the customer should make an expenditure (do research) on it.

Replies from: SaidAchmiz, SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-01-22T01:15:41.777Z · LW(p) · GW(p)

All of this seems like a fully general counterargument to doing anything that any salespeople tell you to do, ever.

One imagines that some of those customers who later returned to angrily and tearfully demand data recovery, followed something like your line of reasoning, and decided not to back up their data. As a result, they are now sitting on top of their giant pile of utility.

... oh wait.

Do you think it's just chance that following your logic leads you to lose, in this case? And not just lose: be almost guaranteed to lose, if in fact you have something to lose. (All hard drives fail. All of them. It's just a matter of time.)

Replies from: Jiro
comment by Jiro · 2014-01-22T18:07:26.891Z · LW(p) · GW(p)

All of this seems like a fully general counterargument to doing anything that any salespeople tell you to do, ever.

It is sometimes smart to do something that a salesperson tells you to do. But you shouldn't do it because the salesperson tells you; the fact that the salesperson tells you to do it is not very useful information.

One imagines that some of those customers who later returned to angrily and tearfully demand data recovery, followed something like your line of reasoning, and decided not to back up their data.

They had no reason to believe the salesperson's advice was good. The fact that the salesperson's advice was good, and following it would have left them better off, doesn't change this. It just makes it a case of bad luck. Saying "If they had followed the advice of that salesman they'd have been better off, so they should have listened to that salesman" is like saying "if they had followed the advice of that tea leaf reader, they'd have been better off, so they should have listened to that tea leaf reader."

Replies from: private_messaging
comment by private_messaging · 2014-01-22T18:33:36.467Z · LW(p) · GW(p)

It is sometimes smart to do something that a salesperson tells you to do. But you shouldn't do it because the salesperson tells you; the fact that the salesperson tells you to do it is not very useful information.

Yeah, exactly. In that particular case, you're either savvy enough and you know that you should backup your data (in which case you're likely to buy backup storage from someone other than overpriced Apple), OR you're not savvy enough, in which case you have to take it on faith that e.g. hard drives break down often enough, that there's no existing built-in redundancy to begin with, that the backup solution actually works, etc etc.

edit: Besides, good hard drives have yearly probability of failure of less than 5%, so the data better be simultaneously worth several hundred bucks and not be worth sharing with other people, copying to other computers, etc.

Replies from: Nornagest, SaidAchmiz
comment by Nornagest · 2014-01-29T23:20:29.501Z · LW(p) · GW(p)

It might be worth mentioning here that cheap external hard drives are crap. I've made a practice for the last few years of using them for backups and anything else I need a terabyte drive for (mostly media storage), and in that time I've lost three of them, about one every year and a half. I haven't lost an internal drive since about 1995.

The usual point of failure seems to be the backplane rather than the drive itself, though. So the data's recoverable as long as it's not hardware encrypted.

comment by Said Achmiz (SaidAchmiz) · 2014-01-29T22:49:24.460Z · LW(p) · GW(p)

sharing with other people, copying to other computers, etc.

What do you think "backing up" means, exactly?

It's not some tech magic that only happens in the presence of certain specific, specially sanctified, pieces of equipment. If you have your data in more than one place, such that if one location goes down, the data still exists elsewhere — congratulations, your data is backed up.

However, if we're talking about a full backup, then there's also another consideration:

the data better be simultaneously worth several hundred bucks

It's not just the data, it's also your time.

Without a Time Machine backup

Your hard drive goes down. You buy (or receive, via warranty replacement) a new hard drive. You then have to spend the next several days:

a) reinstalling your operating system; b) reinstalling all of your programs; c) performing all software updates; d) reconstructing all of your application-specific settings and other aspects of your configuration.

With a Time Machine backup

Your hard drive goes down. You buy (or receive, via warranty replacement) a new hard drive. You boot from your system DVD or recovery partition and click the button to restore from backup. In an hour or two, as if by magic, your system is fully restored to its state as of the last backup, data, applications, settings, and all.


If your data plus at least a day, usually several days, of time lost, is not worth even one hundred dollars (which is how much a terabyte hard drive plus external enclosure costs these days), then I suppose you're not doing anything of particular value with your time and you may freely ignore my advice.

Replies from: private_messaging
comment by private_messaging · 2014-01-29T23:57:04.312Z · LW(p) · GW(p)

If your data plus at least a day, usually several days, of time lost, is not worth even one hundred dollars

You forgot the division by the probability of this happening.

I'm not saying backups are not worth it, I do backup my system, what I am saying is that it's not necessarily the case that backups are worth it for your regular person that doesn't customize things a lot, doesn't have much valuable data, uses very few programs regularly, and whatever valuable data they've got is backed up on other computers. edit: for that matter, the average person's settings are probably of negative value to them, heh.

edit: holy cow, some drives have an annual failure rate of 0.9%.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-01-30T00:16:47.133Z · LW(p) · GW(p)

regular person ... doesn't have much valuable data

I don't know what "regular people" you hang out with, so this might be true in your experience. When I worked retail, many of our customers were creative professionals of some sort — musicians, photographers, digital artists, writers, etc. Their computers had tons of work-relevant data.

Another example: my mother, certainly a "regular person" insofar as she has no tech industry background, is an educator. On her computer are things like work-related documents; syllabi; curriculum design docs; teaching materials; etc. All of that is certainly valuable.

Do you actually know people who own and use a personal computer, but have no valuable data on it, or very little? Also, how do you peg the value of your Great American Novel you've been writing in your off time for the last three years?

Replies from: private_messaging
comment by private_messaging · 2014-01-30T00:44:44.835Z · LW(p) · GW(p)

I don't really hang out with regular people generally, and I was speaking of data by sheer tonnage. The examples you're listing fit on a single SD card. Non-computer-proficient people I know are really paranoid about the computer failing, so they copy their stuff onto disks manually (even though better options are available).

How much revenue in dollars do you think a typical person with failed hard drive will lose, on the average?

(The total net worth of the data I've got is definitely in the six and most likely in the seven figures range in terms of total loss of revenue if I'd lost all of it, so it's backed up rigorously of course, off site too in case of fire) edit: also I have large data that is valuable, not just photos (which people nowadays upload someplace on the internet).

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-01-30T02:01:36.926Z · LW(p) · GW(p)

The examples you're listing fit on a single SD card.

What?!

<consults Newegg>

It seems you can get SDXC cards in capacities up to 256 GB... for over $600. The up-to-$100 range gets you, perhaps, 64 GB. The sort of data I refer to in my first example most certainly does not fit onto one of those; and even the largest SD cards in no way, shape, or form have the capacity to let you do a full backup (i.e., the kind that saves you all that reinstall time) on a machine you use for any kind of media work.

(The total net worth of the data I've got is definitely in the six and most likely in the seven figures range in terms of total loss of revenue if I'd lost all of it, so it's backed up rigorously of course, off site too in case of fire) edit: also I have large data that is valuable, not just photos (which people nowadays upload someplace on the internet).

Well, there you go. I don't know why you think "regular people" don't do important, valuable things on their computer, or don't have lots of data, but in my experience, they certainly do.

In any case, my original point remains, which is that even if you back up your several megabytes of important Word documents onto Dropbox, and that's your entire backup strategy, and your time is so worthless to you that you don't mind spending hours or days reinstalling...

... that's still a backup strategy. Not backing up would still be worse.

(As an aside, measuring the value of time exclusively in lost revenue has always struck me as shortsighted as heck.)

Replies from: private_messaging
comment by private_messaging · 2014-01-30T18:16:17.837Z · LW(p) · GW(p)

What do people do when they get a new computer, do they copy over all settings automatically, usually, or not?

and your time is so worthless to you that you don't mind spending hours or days reinstalling.

The annual failure rate was 0.9% for Hitachi drives... let's say, 1%. So, for example, if the full backup costs $100 per year, the reinstalling would need to cost $10,000, assuming linearity. Which is an overestimate due to the possibly higher probability of accidentally deleting your data with your own hands, things getting stolen, etc., yes, I know. Why do you keep repeating "time is so worthless"?
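(A quick sketch of that break-even arithmetic, using the comment's own rounded figures rather than measured values:)

```python
# Break-even point for backups: annual backup cost vs. expected annual
# loss from drive failure. Inputs are the comment's rounded figures.
annual_failure_rate = 0.01   # ~1% per year
backup_cost_per_year = 100   # dollars

break_even_loss = backup_cost_per_year / annual_failure_rate
print(f"Backups pay off if a failure costs you more than ${break_even_loss:,.0f}")
# -> Backups pay off if a failure costs you more than $10,000
```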

In the context of you being a salesman: the regular person doesn't know the failure rate of the drive; they have to trust you that it is high enough, and you being a salesman, it doesn't really make a whole lot of sense to trust you. You can't somehow prove from first principles that the rate is high enough. It is not obvious that the rate is high enough. In fact, for some manufacturers the rate may well not be high enough, unless we are speaking of operating system failures, human errors, and such.

edit: misremembered some other failure rate, fixed.

edit2: to summarize, between them backing up the most valuable data manually, the low probability of failure, and their low confidence in the words of a salesman, you can have a rational decision here. (I'm of course playing the advocate for "stupid people" not present to advocate themselves)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-01-30T18:41:35.098Z · LW(p) · GW(p)

What do people do when they get a new computer, do they copy over all settings automatically, usually, or not?

If it's a Mac, I think they do, because it's so easy. In my experience they certainly do.

backing up the most valuable data manually

Uh, but if they back up their data "manually", then they have thereby followed my advice. I don't know why you keep drawing this distinction. My exhortations have always been to back up somehow. I never said "and you absolutely must do so with the following preapproved backup technology which you must buy, and certainly not any solution you may already have access to".

The annual failure rate was 0.9% for Hitachi drives

The following are nitpicks, but still: a) not everyone has Hitachi drives; b) failure rates were higher in the past; c)

As for your monetary value argument, it ignores nonlinear marginal utility of money, risk aversion, and difficulties in translating value of time into money. (As do most such arguments.)

Replies from: private_messaging
comment by private_messaging · 2014-01-30T18:55:47.903Z · LW(p) · GW(p)

Uh, but if they back up their data "manually", then they have thereby followed my advice.

Ahh, ok. Albeit there's the second point with regard to the full backup.

As for your monetary value argument, it ignores nonlinear marginal utility of money, risk aversion, and difficulties in translating value of time into money. (As do most such arguments.)

Yeah, I know. I'd still backup. The point is, those are variables and may differ quite a bit, especially for the full-backup advantage.

On the other side of spectrum though, I know a lawyer whose opinion is that you should not keep durable records unless you actually really need to have them. (I think he often does divorce dispute cases and such).

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-01-30T19:24:10.220Z · LW(p) · GW(p)

On the other side of spectrum though, I know a lawyer whose opinion is that you should not keep durable records unless you actually really need to have them. (I think he often does divorce dispute cases and such).

Interesting. That's certainly a perspective I hadn't considered. One sort-of-related situation is one of political dissidents, activists, etc.: such people would almost certainly not want to use Dropbox and similar cloud storage solutions for e.g. lists of contacts, but they might want more private backups (off-site over-network backup solutions, for instance) to protect data against government seizure of their computers.

Certainly, if your concern is having your data accessible by entities (such as the government) that will take coercive measures to get it, then the equation changes.

Replies from: Lumifer
comment by Lumifer · 2014-01-30T19:32:18.083Z · LW(p) · GW(p)

such people would almost certainly not want to use Dropbox and similar cloud storage solutions for e.g. lists of contacts

It depends on whether they consider themselves vulnerable to rubberhose cryptography. If not, they can backup encrypted files anywhere they want to, including Dropbox, etc. But if they do, then it becomes a game of steganography and the local hard drives of their machines aren't safe either.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-01-30T19:53:59.159Z · LW(p) · GW(p)

Indeed, although the truly paranoid may rig their hard drives to self-destruct, or take some similar measure, in the event of the police breaking down their door.

Replies from: private_messaging
comment by private_messaging · 2014-01-30T20:18:24.052Z · LW(p) · GW(p)

I imagine active destruction of that kind might create huge legal problems of its own. On the technical side you can store the key in a file you can securely destroy.

I heard somewhere that in the US and UK, an average law-abiding citizen, from the formal standpoint, rather frequently breaks various laws by accident. No idea about other jurisdictions. This is why the NSA is such a big deal.

Replies from: Lumifer
comment by Lumifer · 2014-01-30T20:22:36.522Z · LW(p) · GW(p)

I imagine active destruction of that kind might create legal problems of its own.

If your fear of rubberhose cryptography is well-justified, "legal problems" are a minor part of your worries.

By the way, it's hard to destroy a hard drive to the extent that a determined government wouldn't be able to extract data from it. At least, it's hard to do during the time it takes the police to break down your door.

I heard somewhere that an average law-abiding citizen, from the formal standpoint, rather frequently breaks various laws.

I know I do :-)

There is this book, for example.

And, of course:

"If you give me six lines written by the hand of the most honest of men, I will find something in them which will hang him." -- Cardinal De Richelieu

Replies from: private_messaging
comment by private_messaging · 2014-01-30T20:34:53.621Z · LW(p) · GW(p)

If your fear of rubberhose cryptography is well-justified, "legal problems" are a minor part of your worries.

I was thinking of "or else" cryptanalysis.

There is this book, for example.

Interesting. I'm curious what kind of laws are frequently broken... at the company level I know there are plenty of regulations related to health and safety which are there for a good reason when people are working with, say, dangerous machinery, but are silly in the office context.

edit: how many laws would a company break if a computer scientist replaced a light bulb?

"If you give me six lines written by the hand of the most honest of men, I will find something in them which will hang him." -- Cardinal De Richelieu

Yeah.

comment by Said Achmiz (SaidAchmiz) · 2014-01-22T01:16:27.040Z · LW(p) · GW(p)

Also, are you asserting that all cases of low probability of large consequences are equivalent to Pascal's Mugging?

Replies from: Jiro
comment by Jiro · 2014-01-25T15:52:59.623Z · LW(p) · GW(p)

It's equivalent to Pascal's Mugging when

1) There's a large consequence, with a low probability, and a low cost action which should be taken to avoid the consequence (or to get the consequence if it's positive)

2) It is said that the size of the consequence makes up for the low probability, either explicitly or implicitly

3) The low probability of the large consequence has a large component consisting of uncertainty about the probability itself. This typically involves questions like "are they lying about the probability", "are they exaggerating the probability", or "are they mistaken about the probability".

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-01-26T00:11:16.416Z · LW(p) · GW(p)

The difference, however, is that in Pascal's Mugging, after you pay the mugger $5 (or whatever), you remain absolutely clueless about whether he was for real, or a swindler, and whether the threat he described had the slightest grounding in reality.

In this case, after you take the low-cost action (doing a bit of research, looking for a second opinion), you now know whether the salesman was feeding you a line of nonsense or whether he was warning you of a real threat.

In any case, I think that all you've shown is that declaring a situation to be a Pascal's Mugging is not a good proxy for deciding whether you should do something.

Replies from: Jiro
comment by Jiro · 2014-01-26T00:38:46.264Z · LW(p) · GW(p)

Salesmen make lots of claims. What you suggest would mean that pretty much every time you talk to a salesman, you need to go and research all the claims the salesman makes that imply danger. In fact, by your reasoning, every time a tea leaf reader tells you to do something, you ought to research it to determine if the tea leaf reader is correct about that. After all, by your own argument, there are many cases where if you do the research you will know whether the tea leaf reader's suggested course of action is helpful. Certainly if the tea leaf reader told you to do backups, research would tell you whether that's true.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-01-26T00:44:35.439Z · LW(p) · GW(p)

Salesmen often know what they're talking about. They could be lying, or not. Tea leaf readers, however, just make stuff up.

It remains true that a customer who followed your logic would lose all their valuable data, whereas a customer who rejected your logic would have everything backed up, and lose nothing. In short: if you're so smart, why aintcha rich? (In utilons, in this case, rather than dollars.)

Replies from: Lumifer, Jiro
comment by Lumifer · 2014-01-26T02:57:48.378Z · LW(p) · GW(p)

Salesmen often know what they're talking about.

Not in my experience.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-01-26T03:10:51.280Z · LW(p) · GW(p)

Perhaps you're going to the wrong stores?

There's a difference between big-box stores and small mom-and-pop outfits. Of course the sales floor at Best Buy is staffed entirely by morons. That's why we buy things on Amazon. (Well, not the only reason.)

I assure you that being knowledgeable gets you far in sales, given certain conditions.

Replies from: Lumifer
comment by Lumifer · 2014-01-26T03:27:27.589Z · LW(p) · GW(p)

Perhaps you're going to the wrong stores?

Perhaps. I don't go to stores (other than food) much. I can't recall last time that I was buying something expensive and the salesman knew more about that thing than I did.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-01-26T05:46:37.927Z · LW(p) · GW(p)

Well now, hold on. I wouldn't expect a salesman to know more about computer technology than I do; but I've got a background in comp sci and IT; it would be an unreasonable expectation.

If, however, you're an average person, a layman, and you've not done your own research (perhaps because you're not savvy enough to do that, or too lazy, or something), and you go into a good tech-related store, then expecting the salesman to know more than you do is quite reasonable.

So it depends on what sorts of things you buy, and on where your own expertise lies.

Personally, the last time I went into a store and the salespeople knew more than I was in a hardware store. It was a small place in Brooklyn, strictly local, non-chain, been around as long as I can remember. Those guys really know their stuff.

Replies from: Lumifer
comment by Lumifer · 2014-01-26T06:29:36.975Z · LW(p) · GW(p)

If, however, you're an average person

Probably not :-P

comment by Jiro · 2014-01-26T01:02:45.728Z · LW(p) · GW(p)

Salesmen often know what they're talking about. They could be lying, or not. Tea leaf readers, however, just make stuff up.

The combination of salesmen telling the truth about things they know and lying about things they know is, as a whole, comparable to a tea leaf reader who neither knowingly tells the truth nor knowingly lies much.

It remains true that a customer who followed your logic would lose all their valuable data

Yes, he'd be unlucky. He'd be unlucky enough to have stumbled into one of the few rare cases where being rational produces a bad result. Being told to do backups is not a typical case of listening to a salesman (or tea leaf reader). It's a highly unusual case.

Just because someone would have been better off if they had done action X, it does not follow that it would then have been rational to have done action X.

comment by David Althaus (wallowinmaya) · 2014-02-05T23:16:51.224Z · LW(p) · GW(p)

You got me kinda scared. I just use Evernote or WordPress for all my important writing. That should be enough, right?

Replies from: Richard_Kennaway, SaidAchmiz, EndlessStrategy
comment by Richard_Kennaway · 2014-02-06T09:15:41.644Z · LW(p) · GW(p)

Some hazards your online data are exposed to:

  • Your account could be hacked.

  • Their service could be hacked.

  • They might decide that you're in breach of their ToS and close your account.

  • They could go out of business.

Anywhere your data are, they are exposed to some risks. The trick is to have multiple copies, such that no event short of the collapse of civilisation will endanger all of them together.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-02-06T15:13:50.801Z · LW(p) · GW(p)

Precisely. My most immediately critical data — the stuff on which my current employment and professional success/advancement depends — exists in no less than seven places:

  1. My desktop's primary drive (an SSD).
  2. My desktop's backup hard drive.
  3. My laptop's primary drive (an SSD).
  4. My laptop's backup hard drive.
  5. The primary drive (an SSD) of a different computer, in a different part of the country.
  6. That computer's backup hard drive.
  7. A cloud-based storage service.

I worry that that's not enough. I am considering investing in some sort of NAS, or two, and placing them in more secure areas of both of the dwellings to which I have access.

Replies from: gwern, Lumifer
comment by gwern · 2014-02-07T01:47:54.365Z · LW(p) · GW(p)

How much time are you spending keeping all of that in sync...?

Just having a lot of drives is not a good use of resources from the data protection standpoint. It ensures you protection against the catastrophic failure of one or two drives simultaneously, but you seem unprotected against most other forms of data loss: for example, silent corruption of files (what are you using to ensure integrity? I don't see any mention of hashes or DVCSes), or mistaken deletions/modifications (what stops a file deletion from percolating through each of the 7 before you realize 6 months later that it was a critical file?).

For improving general safety, you should probably drop some of those drives in favor of adding protection in the form of read-only media and error detection + forward error correction (e.g. periodically making a full backup with PAR2 redundancy to Blu-rays), and more frequent backups to the backup drives.
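
(For concreteness, here is a minimal sketch of the hash-manifest idea in Python. The directory path is a placeholder, and this only illustrates the principle; it is not a substitute for PAR2 or a DVCS.)

    # Build a manifest of SHA-256 hashes on the first run; on later runs,
    # report any file whose hash changed or that went missing, so silent
    # corruption is caught before it propagates into every backup copy.
    import hashlib, json, os

    ROOT = "/backups/critical-data"   # placeholder path
    MANIFEST = "manifest.json"

    def sha256(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    current = {}
    for dirpath, _, names in os.walk(ROOT):
        for name in names:
            full = os.path.join(dirpath, name)
            current[os.path.relpath(full, ROOT)] = sha256(full)

    if os.path.exists(MANIFEST):
        with open(MANIFEST) as f:
            recorded = json.load(f)
        for path, digest in recorded.items():
            if current.get(path) != digest:
                print("CHANGED OR MISSING:", path)
    else:
        with open(MANIFEST, "w") as f:
            json.dump(current, f, indent=2)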

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-02-07T02:52:18.939Z · LW(p) · GW(p)

Synchronization is automatic. It does not take up any of my time.

I have enough drive space to maintain backups going back several months, which protects against both file corruption (volume corruption is taken care of by redundancy) and mistaken deletion/modification. In any case, the files in question are mostly text or text-based, not binary formats, so corruption is less of a concern.

Code, specifically, is of course also kept in git repositories.

Backups to read-only media are a good idea, and I do them periodically as well (not Blu-rays, though; DVDs or even CDs suffice, as the amount of truly critical data is not that large).

comment by Lumifer · 2014-02-06T15:36:48.808Z · LW(p) · GW(p)

I can't resist the temptation... :-D

"Only wimps use tape backup: real men just upload their important stuff on ftp, and let the rest of the world mirror it" -- Linus Torvalds

comment by Said Achmiz (SaidAchmiz) · 2014-02-06T15:09:53.289Z · LW(p) · GW(p)

Certainly not.

comment by EndlessStrategy · 2014-02-05T23:22:54.825Z · LW(p) · GW(p)

No.

comment by christopherj · 2014-01-25T01:44:25.059Z · LW(p) · GW(p)

I can verify this -- as an acknowledged "computer person" and "rational person", I still didn't back up my data, even while advising my friends that they should and that they'd be sorry when they didn't. Fortunately, my hard drive started making interesting new noises rather than failing without warning, so I didn't embarrass myself too badly. It is fairly common for someone to acknowledge the importance of backups and advise others to make them, yet fail to do so themselves.

I think it's a combination of procrastination, laziness, being super-cheap, optimism/arrogance, and not having especially valuable data. Though people with valuable data neglect it too.

comment by Yaakov T (jazmt) · 2014-01-22T04:12:16.020Z · LW(p) · GW(p)

What method of backing up data do you recommend for a computer with windows? How often do you recommend doing it?

Replies from: zedzed, SaidAchmiz
comment by zedzed · 2014-01-22T05:20:46.057Z · LW(p) · GW(p)

It depends on your use case. My "life work" consists exclusively of things I've typed. These types of files tend to be small, and lend themselves to being written in Google Documents. If I use Emacs, then the files are tiny and I back them up to Google Drive in about 2 seconds. This costs me all of $0 and is very easy.

But maybe your life work also includes a bunch of pictures documenting your experiences. These, and other large files, will quickly exceed your 15 gigs of free storage. Then you're probably looking at an external hard drive or cloud storage. The better fit will depend on things like your internet connection, which USB standard your computer has, your tech level, how much stuff you need backed up, whether you travel a lot, whether you'll lose or damage the external hard drive, etc.

And then just use Yvain's method to find the best one.

Of course, there are more elaborate solutions for power users, but by the time you're high enough level for them, you're a power user and don't need to ask.
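
(One quick sanity check before choosing between free cloud storage and an external drive: total up the folders you actually care about. A rough sketch in Python; the folder names are placeholders.)

    # Estimate how much data you would actually need to back up.
    import os

    FOLDERS = ["~/Documents", "~/Pictures"]   # substitute your own

    total = 0
    for folder in FOLDERS:
        for dirpath, _, names in os.walk(os.path.expanduser(folder)):
            for name in names:
                try:
                    total += os.path.getsize(os.path.join(dirpath, name))
                except OSError:
                    pass   # skip unreadable files and broken links
    print("%.1f GB to back up" % (total / 1e9))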

Replies from: jazmt
comment by Yaakov T (jazmt) · 2014-01-23T01:11:57.629Z · LW(p) · GW(p)

Thank you, I basically use this method now and am glad to have it corroborated by an expert.

comment by Said Achmiz (SaidAchmiz) · 2014-01-22T06:28:08.073Z · LW(p) · GW(p)

I don't use Windows nearly as much, but one idea (depending on use case, as zedzed said) is cloud storage. Dropbox is free up to 2 GB. Paid services exist. Synchronization is regular and automatic; some services keep some file history, as well.

comment by byrnema · 2014-01-22T04:02:06.397Z · LW(p) · GW(p)

(This is a stream of consciousness where I explore why I haven't backed up my data. This proceeds in stages, with evolution to the next stage only because the writing of this comment forced me to keep going. Thus, it's a data point in response to this comment.)

Back up your data, people. It's so easy

Interesting. I have a very dense 'ugh field' around backing up my data, come to think of it. Based on this population of one, it has nothing to do with not trusting the salesperson, or not being aware that my hard drive is going to fail.

... in fact, I know my hard drive is about to fail (upon reboot I get those dooming system error messages that cycle, etc.), and it has occurred to me several times that I might want to back up my data. Yes, there's some important stuff I need to back up.

Maybe the hurdle is that most stuff on my computer is useless, and I don't want to prioritize the material. I just want it all there if I need it, so I wish my computer wouldn't break.

Since I know my computer is likely to break, or in case the power goes out or I accidentally close without saving, I save files very frequently while working, and I make hard copies if losing a particular document would cause pain within, say, 72 hours. The pain of losing anything older than a few days is discounted. (Is that hyperbolic discounting? Or just akrasia, as another commenter suggested?)

But I do know I won't spend 20 minutes tomorrow investigating how to back up my hard drive. I know someone will say it is "easy", but there will instead be some obstacle that will mean my data won't actually get backed up and I'll have wasted my twenty minutes. Right?

... OK, fine. (sigh) Let's suppose my budget is $20 and 20 minutes. What should I do?

(reading online)

...OK, I buy a hard drive, connect it via USB, and once the computer recognizes the device, drag and drop the files I want to save. Although I still need to determine which folders are worth saving, and this is a continuous, ongoing chore, there are some folders I know I need to save right away. I should go ahead and store those.

(I'll report back tomorrow whether this back-up actually happened.)
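
(For anyone else in the same $20-and-20-minutes situation: once the drive is plugged in, the drag-and-drop step can be scripted so it stops being an ongoing chore. A minimal sketch in Python; both paths are hypothetical.)

    # Copy the folders you know you need onto the external drive, into a
    # dated snapshot folder; re-running on the same day refreshes it.
    import datetime, os, shutil

    SOURCES = [r"C:\Users\me\Documents"]   # hypothetical source folders
    DRIVE = r"E:\backups"                  # hypothetical external drive

    stamp = datetime.date.today().isoformat()
    for src in SOURCES:
        dst = os.path.join(DRIVE, stamp, os.path.basename(src))
        shutil.copytree(src, dst, dirs_exist_ok=True)
        print("backed up", src, "->", dst)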

Replies from: Richard_Kennaway, SaidAchmiz
comment by Richard_Kennaway · 2014-01-22T09:03:30.763Z · LW(p) · GW(p)

Let's suppose my budget is $20 and 20 minutes. What should I do?

Others have mentioned Dropbox, but it's so wonderful I'll mention it again. Dropbox. It's almost as awesome in its just-works-ness as Time Machine (Apple's awesome backup solution). Free up to 2GB, $10/month gets you 100GB. Runs on everything.

Note that Dropbox isn't designed as a backup solution; it's really for sharing files across multiple devices. It only preserves the current version of a file, so it offers no protection against deleting a file you didn't mean to. As soon as you edit a file, the changes are uploaded to the Dropbox cloud.

A point to remember is that every backup solution protects against some threats but not others, and you have to decide what you need to defend against. I have a Time Capsule (external drive for Time Machine backup), but it's in the same room as the computer, so it provides excellent protection against disc failure or accidental deletion, but none against theft. So I also have an external drive that I plug in once a week and the rest of the time leave hidden elsewhere. If the files on your computer are your livelihood, you need an off-site backup to survive risks such as your house burning down, or serious burglars doing a complete house clearance.

Although I still need to determine which folders are worth saving, and this is a continuous, ongoing chore

A backup solution that presents a continuous, ongoing chore is not going to work. It has to be something that once you set it up, Just Works. I don't know if there's anything as awesome as Time Machine in this respect for Windows. Ideally a solution should automatically backup everything, except possibly some things you specifically exclude. If you only back up things you specifically decide to, you will inevitably leave things out, that you'll only discover when you need the backup you don't have.

Replies from: Vaniver
comment by Vaniver · 2014-01-22T18:52:40.674Z · LW(p) · GW(p)

It only preserves the current version of a file, so offers no protection against deleting a file you didn't mean to. As soon as you edit a file, the changes are uploaded to the Dropbox cloud.

Dropbox actually does version control, which has saved several files I've accidentally deleted or overwritten. It's only up to 30 days, though.

comment by Said Achmiz (SaidAchmiz) · 2014-01-22T04:10:05.577Z · LW(p) · GW(p)

I take it you've got a Windows or Linux machine? Because if you have a Mac, there's a much easier solution. Edit: I mean easier than a continuous, ongoing chore of deciding what files to save, drag-and-dropping stuff, etc. You do still need to buy a device, though. For a $20 budget I recommend this 32 GB USB flash drive.

Replies from: byrnema
comment by byrnema · 2014-01-22T04:17:40.749Z · LW(p) · GW(p)

I have a Windows machine, but I know there are automatic back-up schedules that can be done. I just don't want to do it... I don't want to think about a complex automatic process or make decisions about scheduling. Trying to pinpoint why ... it feels messy and discontinuous and inconvenient, to keep saving iterations of all my old junk.

Replies from: Richard_Kennaway, Lumifer, SaidAchmiz
comment by Richard_Kennaway · 2014-01-22T13:54:44.163Z · LW(p) · GW(p)

it feels messy and discontinuous and inconvenient, to keep saving iterations of all my old junk.

When dealing with old data, what I find most stressful is deciding which things to keep. So as far as possible I don't. It's a wasted effort. I keep everything, or I delete everything. It doesn't matter that there's gigabytes of stuff on my machine that I'll never look at, as long as I never have to see it or think about it. Disc space is measured in terabytes these days.

Replies from: SaidAchmiz, Lumifer, byrnema
comment by Said Achmiz (SaidAchmiz) · 2014-01-22T15:21:06.029Z · LW(p) · GW(p)

When dealing with old data, what I find most stressful is deciding which things to keep.

In case this wasn't clear, for the benefit of any Mac users reading this:

Time Machine makes all these decisions for you. That's one of the things that makes it awesome.

comment by Lumifer · 2014-01-22T17:27:09.052Z · LW(p) · GW(p)

Disc space is measured in terabytes these days.

This.

Typically when I change machines, the data from the old one goes into the /old folder on the new one. You get a nesting hierarchy, and down at the bottom there are some files from many years ago that I would need to get an emulator to even read :-/

comment by byrnema · 2014-01-22T22:06:22.197Z · LW(p) · GW(p)

So that's what I am going to do. I actually ordered an external hard drive, and every few weeks I'll back up my hard drive. The whole thing (no decisions).

I also understand that I don't need to worry about versions -- the external hard drive just saves the latest version.

I also talked to a friend today and found out they back up their data regularly. I was surprised; I didn't know regular people did this.

comment by Lumifer · 2014-01-22T17:25:08.458Z · LW(p) · GW(p)

keep saving iterations of all my old junk.

Backups aren't about saving your old junk. Backups are about saving everything that you have on your hard drive in case it goes to the Great Write-Only Memory In The Sky.

If you're talking about staggered backups or snapshots, their usefulness lies mostly in being a (very primitive) versioning system, as well as a possible lifeline in case your data gets silently corrupted and you don't notice fast enough.

comment by Said Achmiz (SaidAchmiz) · 2014-01-22T06:25:29.409Z · LW(p) · GW(p)

Well, the way it works on the Mac — and I'm only describing this because I speculate that similar, if not quite as awesome, solutions exist for Windows — is this:

  1. Scheduling: backups happen every hour if the backup drive is plugged in, or whenever you plug it in, and you can also trigger them manually. You pretty much don't have to think about it: just either keep the thing plugged in (easy with a desktop), or plug it in once in a while.

  2. Multiple iterations of your stuff: there's a "history" of backups, maintained automatically. You can go back to any backed-up prior version (to a certain point; how long a history you can keep is dictated by available storage space). The interface for restoring things hides the messy complexity of the multiple versions from you, and just lets you go back to the latest version, or any previous available version, sorted by time.

With good backup software, it's really quite smooth and easy. The process is not complex; decisions to be made are minimal; your backup feels nice and non-messy; restoring is easy as pie.

Unfortunately I can't recommend good Windows backup software, but maybe someone else can chime in.
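
(The "history of backups" idea itself is simple enough to see in a few lines. A toy sketch in Python, assuming backups live in one dated snapshot folder per run - a hypothetical layout; real backup software adds deduplication, scheduling, and a sane interface.)

    # List available snapshots, newest first, then restore one file
    # from a chosen point in time.
    import os, shutil

    DRIVE = r"E:\backups"   # hypothetical: one folder per backup date
    snapshots = sorted(os.listdir(DRIVE), reverse=True)
    print("available versions:", snapshots)

    # Restore the second-newest copy of a (hypothetical) file:
    src = os.path.join(DRIVE, snapshots[1], "Documents", "thesis.docx")
    shutil.copy2(src, r"C:\Users\me\Documents\thesis.docx")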

comment by Scott Alexander (Yvain) · 2014-01-21T05:09:50.139Z · LW(p) · GW(p)

The example in the thread is real-life-ish - compare to the story of Voltaire and friends winning the French lottery. But if you want more:

It's easy to think of trivial examples of one-time victories - for example, an early Bitcoin investor realizing that crypto-currency had potential and buying some when it was still worth fractions of a cent. But you can justly accuse me of cherry-picking here and demand repeatable examples.

Nothing guarantees that there will be repeatable examples - it could be that people are bad at taking ideas seriously until the ideas succeed once, at which point they realize they were wrong and jump on the bandwagon.

But in fact I think there are such examples. One such is investing in index funds rather than mutual funds/picking your own stocks. There are strong reasons to believe you'll do better; most people know those reasons but don't credit them; and some people do credit them and end up with more money.

Occasional use of modafinil might fall in this category as well, depending on whether we define people's usual reasons for not taking it as irrational or rational-given-different-utility-functions.

I don't think most of these examples will end up as "such obvious wins no one could possibly disagree with them" - with the possible exception of index funds, it's never as purely mathematical as the lottery example - but I think for most people the calculus is clear.

Replies from: ChrisHallquist, Aleksander, gwern, SaidAchmiz, SaidAchmiz
comment by ChrisHallquist · 2014-01-21T06:02:28.820Z · LW(p) · GW(p)

I seriously doubt most people know the reasons they should be investing in index funds. Remember, the average American has an IQ of 100, doesn't have a degree higher than a high school diploma, and rarely reads books. I'm not sure I'd know the reasons for buying index funds if not for spending a fair amount of time reading econ blogs.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-01-21T06:18:13.821Z · LW(p) · GW(p)

Agreed: I have no idea why I should be investing in index funds (if, indeed, I were investing in anything). My skepticism about that example, though, actually comes from a slightly different place:

If I decided to do some investing, went to five financial experts, asked them what I should invest in, and they all said "Yep, index funds, definitely the way to go", then I would invest in index funds. Right? Where would I even get the idea to do anything else?

And thus, why does "invest in index funds" qualify as a counterintuitive idea? Why is it a thing that some people might not take seriously? Wouldn't it just be the default?

Replies from: VAuroch
comment by VAuroch · 2014-01-21T20:40:34.913Z · LW(p) · GW(p)

Because this

If I decided to do some investing, went to five financial experts, asked them what I should invest in, and they all said "Yep, index funds, definitely the way to go", then I would invest in index funds

probably wouldn't happen. If you asked uninvolved experts, it would, but the most accessible experts aren't uninvolved. What is much more likely is that you (the average American with some money to invest) go to invest your money with an investment firm. And that investment firm pushes you toward actively-managed funds, since that's where their incentives are. In order for the idea of investing solely in index funds to be available, you have to put in meaningful thought, if only enough to look for non-corporate advice on how to invest well.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-01-21T21:28:48.419Z · LW(p) · GW(p)

Huh. That makes sense, I suppose. Do people generally not seek advice from uninvolved experts? Is that true only in investing, or in other domains?

Replies from: VAuroch
comment by VAuroch · 2014-01-21T21:34:36.261Z · LW(p) · GW(p)

I'm not an expert, but my impression is that most people don't think about this kind of thing without prompting. Which means that they don't think about it unless they, for example, see an ad for Charles Schwab and call them to look into investing. Getting to the point of considering whether the expert has an incentive to lie to you seems to mark you as having substantially above-average reasoning skills.

comment by Aleksander · 2014-01-21T18:39:41.443Z · LW(p) · GW(p)

Isn't it a little bit self-contradictory, to propose that smart people have beaten the market by investing in Bitcoin, and at the same time, that smart people invest in index funds rather than trying to beat the market? Or in other words, are those who got rich off Bitcoin really different from those who picked some lucky stocks in 1997 and cashed out in time?

Replies from: VAuroch
comment by VAuroch · 2014-01-21T20:44:57.687Z · LW(p) · GW(p)

That's a good point but I'm going to argue against it anyway.

Unlike a lucky stock, Bitcoin wasn't accounted for by mainstream markets at the time. An index fund amortizes the chances of lucky success and catastrophic failure across all the stocks into a single number, giving roughly the same expected value but with much lower variance. Bitcoin wasn't something that could be indexed at that point, so there was no way you could have hedged your bet in the same way that an index fund would let you hedge.
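
(The same-expected-value, lower-variance point is easy to check numerically. A Monte Carlo sketch with a made-up return distribution; real stocks are correlated, so an actual index reduces variance by less than this independent-draws model suggests.)

    # One random stock vs. an equal-weight basket of 500 draws from the
    # same toy distribution: same mean return, far smaller spread.
    import random, statistics

    def stock_return():
        return random.gauss(0.07, 0.30)   # toy model of one year's return

    single, index = [], []
    for _ in range(2000):
        returns = [stock_return() for _ in range(500)]
        single.append(returns[0])
        index.append(sum(returns) / len(returns))

    for name, xs in [("single stock", single), ("500-stock index", index)]:
        print("%-16s mean %.3f  stdev %.3f"
              % (name, statistics.mean(xs), statistics.stdev(xs)))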

comment by gwern · 2014-01-22T19:15:16.442Z · LW(p) · GW(p)

It's easy to think of trivial examples of one-time victories - for example, an early Bitcoin investor realizing that crypto-currency had potential and buying some when it was still worth fractions of a cent.

Actually, I've been working on a mini-essay on exactly this topic: because of my PredictionBook use, I have a long paper trail of explicit predictions on Bitcoin which implied major +EV at every time period, but I failed to meaningfully exploit my beliefs and so my gains have been far smaller than they could have been.

comment by Said Achmiz (SaidAchmiz) · 2014-01-21T18:53:06.244Z · LW(p) · GW(p)

UPDATE:

I think index funds are a good example of something that fits my criteria #s 1, 2, and 3. (Thank you to the commenters who've explained to me both why they are a good idea and why many/most people may not understand or believe this.)

Do index funds fit #s 4 and 5? It might be interesting to ask, in the next survey: do you invest? If so, in index funds, or otherwise? If the former, how much money have you made as a result? In the absence of survey data, is there other evidence that rationalists (or "rationalists") invest in index funds more than the general population, and that they win thusly (i.e. make more money)?

I think modafinil is clearly a good example of my #s 2 and 3; I am not so sure about #1. I am still researching the matter. Gwern's article, though very useful, has not convinced me. (Of course, whether it fits #s 4 and 5 also remains to be demonstrated.)

I remain unsure about whether the Bitcoin investment is a good example of anything. Again, if anyone cares to elucidate the matter, I would be grateful.

comment by Said Achmiz (SaidAchmiz) · 2014-01-21T05:31:18.765Z · LW(p) · GW(p)

Thank you for the response.

investing in index funds rather than mutual funds/picking your own stocks. There are strong reasons to believe you'll do better, most people know those reasons but don't credit them, and some people do credit them and end up with more money.

I'd like to hear this from a financial expert. Do we have any who'd like to speak on this?

Occasional use of modafinil might fall in this category as well, depending on whether we define people's usual reasons for not taking it as irrational or rational-given-different-utility-functions.

Oh? What will modafinil do for me? (Will google and return to this thread, but if someone wants to recommend some links with concentrated useful info, it would be appreciated.)

I also have some objections to this sort of "obvious win" that do not depend on what modafinil's specific effects are: namely, that "deciding to start taking a drug without the advice and supervision of a licensed medical professional is bad" seems to be a decent heuristic to live by. It's not unalterable, but it seems good to a first approximation. Do you disagree?

an early Bitcoin investor realizing that crypto-currency had potential and buying some when it was still worth fractions of a cent.

Forgive me for my ignorance: so this guy has lots of Bitcoin now? What can you buy with Bitcoin? Can you just convert the Bitcoin into dollars? If so, how much money did this person make from this?

I don't think most of these examples will end out as "such obvious wins no one could possibly disagree with them" - with the possible exception of index funds it's never as purely mathematical as the lottery example - but I think for most people the calculus is clear.

My suspicion is that these examples are actually more like "it's not clear whether these things are, in fact, even wins for the people who did them, never mind whether they will be wins for other people who are considering doing them". I was really looking for something more unambiguous than that.

I will comment more when I've investigated / received clarifications on the examples you've provided. In the meantime I would love to see more examples.

Replies from: James_Miller, DanArmak, MugaSofer, Ben_LandauTaylor
comment by James_Miller · 2014-01-21T05:47:19.295Z · LW(p) · GW(p)

I'd like to hear this from a financial expert. Do we have any who'd like to speak on this?

I'm one (PhD in economics), and yes, ordinary investors should use low-fee index funds.

Replies from: SaidAchmiz, jazmt, Lumifer
comment by Said Achmiz (SaidAchmiz) · 2014-01-21T05:55:17.232Z · LW(p) · GW(p)

Thank you. A couple of follow-up questions, if you don't mind:

  1. Do most ordinary investors not do this?

  2. If not, do you know why? Do most people not know about the advantage of index funds? Or do they know, but don't use them anyway?

  3. If the latter, why don't they? That seems strange. What makes index funds the "non-default" idea, so to speak? If index funds are known by financial experts to be superior to mutual funds (or other investing strategies), where would an ordinary person get the idea that they should be using anything other than index funds?

Replies from: TylerJay, James_Miller
comment by TylerJay · 2014-01-21T07:23:18.912Z · LW(p) · GW(p)

An index fund is intended to go up or down by the exact same amount as the market as a whole. For example, you might hear that the S&P 500 rose a total of 7% last year. If that happened, then your index fund would go up by 7%.

The main reason people don't invest in index funds is because they want to "beat the market." They see some stocks double or triple within a year and think "oh man, if only I'd bought that stock a bit earlier, I'd be rich!" So some people try to pick individual stocks, but the majority of laypeople want to let "experts" do it for them.

Mutual funds generally have a fund manager and tons of analysts working to figure out how to beat the market (get a return greater than the market itself). They all claim to be able to do this, and some have a track record to point to as proof that they have done it in the past. For example, fund A may have beaten the market in the previous 3 years, so investors think that by investing in Fund A over an index fund, they will come out ahead.

But unfortunately, markets are anti-inductive so past success of individual stocks, mutual funds, and even index funds is no guarantee of future performance.

If you look at the performance of all funds over the past 20+ years and correct for survivorship bias (take into account all the funds that went out of business as well as the ones that are still around today), it becomes very clear that almost no mutual funds actually beat the market in terms of your ACTUAL RETURN, averaged across years.

The final big problems with actively managed funds are fees and taxes. Actively managed funds charge higher percentage rates each year to cover their work. That's how they make money. They also tend to sell a percentage of your stocks each year and buy new ones in their attempt to beat the market. This gives a certain "portfolio turnover" percentage and the higher that is, the more you have to pay in taxes (capital gains), which lessens your return even more.

The bottom line is that mutual funds claim to be able to beat the market, and in any given year many do. People chase the money and pay more in capital gains and fees trying to earn a higher return. Over time, though, the index fund beats almost all of them in terms of total return.
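
(The fee drag compounds more than people expect. A quick check in Python, with illustrative numbers:)

    # $10,000 compounding for 30 years at a 7% gross return, with a
    # 0.1% index-fund fee vs. a 1.5% actively-managed fee (illustrative).
    for fee in (0.001, 0.015):
        balance = 10000 * (1 + 0.07 - fee) ** 30
        print("fee {:.1f}%: ${:,.0f}".format(fee * 100, balance))
    # Roughly $74,000 vs. $50,000 - and that's before the active fund
    # has to actually beat the market to justify itself.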

Replies from: James_Miller
comment by James_Miller · 2014-01-21T18:15:06.619Z · LW(p) · GW(p)

But unfortunately, markets are anti-inductive

But mutual fund performance is somewhat predictable. I don't remember the citation, but I recall that mutual funds that do very poorly one year are more likely to do so in the future once you take fees and taxes into account.

Replies from: V_V
comment by V_V · 2014-01-22T22:12:06.143Z · LW(p) · GW(p)

Clearly, there are actively managed funds that do consistently worse than index funds, otherwise index funds wouldn't be able to make money, since financial markets are negative-sum.

Replies from: Aleksander, James_Miller
comment by Aleksander · 2014-01-22T23:09:58.097Z · LW(p) · GW(p)

Financial markets are positive-sum. If you just buy a bunch of stocks and hold onto them, on average you'll outperform cash.

Replies from: V_V, Lumifer
comment by V_V · 2014-01-23T01:39:07.004Z · LW(p) · GW(p)

If you buy a stock A at price X, somebody must be selling you stock A at price X.

If buying turns out to be a good deal (that is, the discounted dividends Y you collect from holding stock A are greater than X), then selling must turn out to be a bad deal: if the other party had held on to stock A, they would have collected the profit Y-X that they instead forfeited to you. Your gain is their lost profit; therefore the market is zero-sum between investors. Add transaction costs and it becomes negative-sum.

This analysis is simplified by the fact that I didn't take into account risk aversion and the fact that different parties can discount future utility in different ways (different discount rates or even hyperbolic discounting). But I suppose that when it comes to collective investors such as mutual funds or banks, these parameters can be considered to be roughly the same.

The stock market is not (necessarily) zero-sum or negative-sum as a whole, since money is transferred from companies to investors each time dividends are paid, but the way the investors slice the cake between them is negative-sum.
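
(The same argument with made-up numbers, for concreteness:)

    # Seller sells stock A to buyer at price X; holding it would have
    # returned discounted dividends Y. Each side pays a transaction cost.
    X, Y, cost = 100.0, 130.0, 1.0    # made-up figures

    buyer_gain = Y - X - cost         # +29: the "good deal"
    seller_gain = X - Y - cost        # -31: the forfeited profit, minus fees
    print(buyer_gain + seller_gain)   # -2.0: zero-sum before costs, negative after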

comment by Lumifer · 2014-01-23T00:53:25.657Z · LW(p) · GW(p)

Financial markets are positive-sum.

Not necessarily. First, it depends on the market. Some are zero-sum, and about others one can say that they are NOT zero-sum, but that's it. They might be negative-sum or positive-sum, depending on the circumstances.

If you just buy a bunch of stocks and hold onto them, on average you'll outperform cash.

That also depends. Average over what? Which countries and what time periods?

comment by James_Miller · 2014-01-23T00:05:23.537Z · LW(p) · GW(p)

Survivorship bias means that most existing funds can have beaten index funds in the past.

Replies from: V_V
comment by V_V · 2014-01-23T01:08:39.990Z · LW(p) · GW(p)

Yes, but taking survivorship bias into account, there are some actively managed funds that really do consistently worse than the market, and eventually fail (and are replaced by other funds that do the same).

comment by James_Miller · 2014-01-21T16:55:09.368Z · LW(p) · GW(p)

1) No, but I'm doing my best as a columnist for Better Investing Magazine to tell them. Still, lots of money is in index funds.

2 and 3) Actively managed mutual funds put a lot of money into marketing, and the explanation for index funds is probably beyond most people. A huge number of financial experts would be out of jobs if all non-professional investors switched to index funds.

Replies from: Vaniver
comment by Vaniver · 2014-01-22T19:18:09.577Z · LW(p) · GW(p)

the explanation for index funds is probably beyond most people.

I don't know; the simple explanation for index funds is "on average, you will get the market average, so why not avoid the fees?", though it requires people to be self-aware enough to recognize situations where they are, in fact, average.

Replies from: James_Miller
comment by James_Miller · 2014-01-22T21:34:06.317Z · LW(p) · GW(p)

But the actively managed mutual fund you are considering investing in has consistently outperformed the market even when taking into account taxes and fees.

Replies from: Vaniver
comment by Vaniver · 2014-01-22T21:58:28.776Z · LW(p) · GW(p)

But the actively managed mutual fund you are considering investing in has consistently outperformed the market even when taking into account taxes and fees.

Am I above average at picking actively-managed mutual funds?

Replies from: James_Miller
comment by James_Miller · 2014-01-22T22:02:25.593Z · LW(p) · GW(p)

What if you are the kind of person who is above average at most things? It's far from obvious why you shouldn't think you would be above average at picking stocks or mutual funds.

Replies from: Vaniver
comment by Vaniver · 2014-01-22T22:22:44.181Z · LW(p) · GW(p)

What if you are the kind of person who is above average in most things.

Why, thanks for noticing. ;) This is where the self-awareness comes in, and I agree if you can't rely on that then you do need to build up the argument that the financial advisors and active managers are not worth their cost.

comment by Yaakov T (jazmt) · 2014-01-23T01:19:27.905Z · LW(p) · GW(p)

For ordinary investors won't there still be an issue of buying these funds at the right time, so as not to buy when the market is unusually high?

Replies from: memoridem
comment by memoridem · 2014-01-23T04:28:09.368Z · LW(p) · GW(p)

You can mitigate the problem by making the investment gradually.
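
(A small illustration of why gradual investment helps with the timing worry: a fixed dollar amount automatically buys more shares when prices are low. The prices here are made up.)

    # Invest $100/month at fluctuating prices: the average price paid
    # per share comes out below the average market price.
    prices = [10, 8, 5, 8, 12, 10]    # hypothetical monthly prices
    shares = sum(100 / p for p in prices)
    print("average market price: %.2f" % (sum(prices) / len(prices)))   # 8.83
    print("average price paid:   %.2f" % (100 * len(prices) / shares))  # 8.18
    # This smooths out timing risk; it isn't a free lunch - in a steadily
    # rising market, investing the lump sum up front does better.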

Replies from: James_Miller
comment by James_Miller · 2014-01-23T04:48:29.707Z · LW(p) · GW(p)

Yes

comment by Lumifer · 2014-01-22T21:51:54.952Z · LW(p) · GW(p)

ordinary investors should use low fee index funds

Two questions:

  • Doesn't this ignore the very important question of "which indices?"

  • Is this advice different from the "hold a sufficiently diversified portfolio" one?

Replies from: ygert
comment by ygert · 2014-01-22T22:04:18.070Z · LW(p) · GW(p)

Not an economist or otherwise particularly qualified, but these are easy questions.

I'll answer the second one first: This advice is exactly the same as advice to hold a diversified portfolio. The concept of an index fund is a tiny little piece of each and every thing that's on the market. The reasoning behind buying index funds is exactly the reasoning behind holding a diversified portfolio.

For the first question, remember the idea is to buy a little bit of everything, to diversify. So go meta, and buy little bits of many different index funds. And actually, since this is considered a good idea, people have made such meta-index funds - indices of indices - that you can buy in order to get a little bit of each index fund.

But as an index is defined as "a little bit of everything", the question of which one fades a lot in importance. There are indices of different markets, so one might ask which market to invest in, but even there you want to go meta and diversify. (Say, with one of those meta-indices.) And yes, you want to find one with low fees, which invests as widely as possible, etc. All the standard stuff. But while fiddling with the minutiae may matter, it pales in comparison to the difference between buying indices and stupidly trying to pick stocks yourself.

Replies from: Lumifer
comment by Lumifer · 2014-01-22T22:08:30.760Z · LW(p) · GW(p)

The concept of an index fund is a tiny little piece of each and every thing that's on the market.

This is not true. An index fund holds a particular index which generally does not represent "every thing that's on the market".

For a simple example, consider the most common index -- the S&P 500. This index holds the 500 largest-capitalization stocks in the US. If you invest in an S&P 500 index fund, you can fairly be described as investing in US large-cap stocks. The point is that you are NOT investing in small-cap stocks, nor are you investing in a large variety of other financial assets (e.g. bonds).

Replies from: ygert
comment by ygert · 2014-01-28T11:40:32.387Z · LW(p) · GW(p)

Yes. What I wrote was a summary, and not as perfectly detailed as one might wish. One can quibble about details ("the market"/"a market"), and those quibbles may be perfectly legitimate. Yes, one who buys S&P 500 index funds is only buying shares in the large-cap market, not in all the many other things in the US (or world) economy. It would be silly to try to define an index fund as something that invests in every single thing on the face of the planet, and some indices are more diversified than others.

That said, the archetypal ideal of an index fund is that imaginary one piece of everything in the world. A fund is more "indexy" the more diversified it is. In other words, when one buys index funds, what one is buying is diversity. To a greater or lesser extent, of course; and one should buy not only the broadest index funds available, but also many different (non-overlapping?) index funds, if one wants to reap the full benefit of diversification.

Replies from: Lumifer
comment by Lumifer · 2014-01-29T17:07:02.699Z · LW(p) · GW(p)

the archetypal ideal of an index fund is that imaginary one piece of everything in the world.

Maybe in your mind. Not in mine. I think of indices (and index funds) as portfolios assembled under a particular set of rules. None of them tries to reach everything in the world, in fact a lot of them are designed to be quite narrow.

A fund is more "indexy" the more diversified it is.

I still disagree. An index fund's most striking feature is that it invests passively, that is its managers generally don't have to make any decisions, they just have to follow publicly announced rules. I don't think a fund is more "indexy" if it owns more or more diverse assets.

In other words, when one buys index funds, what one is buying is diversity.

Sigh. Still no. You're buying a portfolio composed under certain rules. Some of these portfolios (= index funds) are reasonably diversified, some aren't, and that depends on how you think of diversification, too.

The "classic" index fund, one that invests into S&P500, is not diversified particularly well. It invests in only a single asset class in a single country.

Replies from: hyporational
comment by hyporational · 2014-01-29T17:30:33.990Z · LW(p) · GW(p)

An index fund's most striking feature is that it invests passively, that is its managers generally don't have to make any decisions, they just have to follow publicly announced rules. I don't think a fund is more "indexy" if it owns more or more diverse assets.

Yup. Take an actively managed fund that seems to be indexy by ygert's standards today. It might not be so indexy tomorrow.

comment by DanArmak · 2014-01-21T19:57:38.207Z · LW(p) · GW(p)

Forgive me for my ignorance: so this guy has lots of Bitcoin now? What can you buy with Bitcoin? Can you just convert the Bitcoin into dollars? If so, how much money did this person make from this?

The hypothetical investor probably has the same amount of Bitcoins he always had, but Bitcoins are worth many more dollars now than previously, a difference of three orders of magnitude.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-01-21T20:18:28.970Z · LW(p) · GW(p)

Noted. And as for the other things I asked?

Replies from: army1987
comment by A1987dM (army1987) · 2014-01-22T16:51:17.958Z · LW(p) · GW(p)

You can easily sell Bitcoins for US dollars on mtgox.com, but after mid-2013 you need a verified account (which IIRC requires sending them proof of residence) to transfer them to your bank account, which is a heck of a trivial inconvenience. (For all I know there might be an easier way, though.)

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-01-23T02:32:51.435Z · LW(p) · GW(p)

I heard other exchanges, e.g. BitStamp, don't have this problem.

comment by MugaSofer · 2014-01-21T17:57:06.198Z · LW(p) · GW(p)

So - holding up Said, and for that matter my own memories, as evidence - most people simply haven't considered these options.

Which ... checks ... does fit with the original criteria:

  1. There is some opportunity for clear, unambiguous victory;

  2. Taking advantage of it depends primarily on taking a strange/unconventional/etc. idea seriously (as distinct from e.g. not having the necessary resources/connections, being risk-averse, having a different utility function, etc.);

  3. Most people / normal people / non-rationalists do not take the idea seriously, and as a consequence have not taken advantage of said opportunity;

  4. Some people / smart people / rationalists take the idea seriously, and have gone for the opportunity;

  5. And, most importantly, doing so has (not "will"! already has!) caused them to win, in a clear, unambiguous, significant way.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-01-21T18:44:03.756Z · LW(p) · GW(p)

I don't think it does, actually. The following are three distinct scenarios (as pertain to my point #2):

1. Being entirely unaware of what options/possibilities exist in some domain.

Example: I don't do any investing, and so, prior to this thread, had no opinion on index funds whatsoever, nor on mutual funds, nor on anything related.

2. Being unaware of some particular (potentially counterintuitive) idea or option.

Example: I'd never had anyone recommend modafinil to me, or suggest that I should take it, or explain what benefits it might have.

3. Being aware of some idea or option, but not taking it seriously.

Example: I have no idea. Gaming poorly-designed lotteries? I suspect this example fails for other reasons, but it does fit criterion #2.


The claim, as I understand it, was:

There are numerous cases like scenario 3 above, where the main thing that keeps people from taking advantage of an opportunity, and winning thusly, is not taking some idea seriously — despite being aware of that idea. Rationalists, on the other hand, do take the idea seriously, and win thusly.

Index funds are not a good example for people who have no knowledge of investing, because what kept me, for instance, from taking advantage of the profit opportunities offered by the idea "invest in index funds" was not having any knowledge of investing whatsoever, not some failure to take things seriously.

Modafinil is not a good example for people not aware of modafinil or its (alleged) positive effects, because what kept me, for instance, from taking advantage of the cognitive boosts offered by the idea "take modafinil" was not being aware of modafinil, not some failure to take things seriously.

I haven't gotten a good response about Bitcoin, so I won't comment on that.

Now, don't get me wrong: I think index funds are a good example in general, based on the very helpful and clear comments I've gotten on that topic (thank you, commenters!). (Modafinil is not as clearly a good example. I'm still researching.) But my case, and similar others, are not good evidence for those examples.

Replies from: MugaSofer, Yvain
comment by MugaSofer · 2014-01-21T19:35:00.599Z · LW(p) · GW(p)

Oh, indeed! Sorry, I didn't mean to state that they proved his point or anything like that. I was just observing that they do seem to fit the criteria listed in the original comment Yvain was replying to.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-01-21T19:51:05.680Z · LW(p) · GW(p)

Well... my point is that they do not, in fact, fit the criteria — specifically, criterion #2 — in the case of people who haven't considered these ideas as options.

Replies from: MugaSofer
comment by MugaSofer · 2014-01-22T09:13:55.765Z · LW(p) · GW(p)

Really?

Unless they're not considering them as options because they wouldn't work for them (e.g. not having the necessary resources/connections, being risk-averse, having a different utility function, etc.), but rather because they're unusual in some fashion...

I guess perhaps you weren't clear on why, exactly, you wanted them to have been ignored?

comment by Scott Alexander (Yvain) · 2014-01-25T17:06:50.412Z · LW(p) · GW(p)

I'm not claiming that a majority of the people who don't do these options don't do them because they're aware of them but don't take them seriously. I'm claiming a majority (or at least many) of the people who possess enough knowledge about them to be able to figure out that they should do them, don't.

My source is mainly anecdotes from people I've talked to who know all the arguments for these but don't do them.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-01-26T00:00:51.959Z · LW(p) · GW(p)

So, concretizing your claim, we get:

  • A majority of the people who know enough about investing to know that they should invest in index funds rather than something else, do not do so, instead continuing to invest in other, less-optimal financial instruments.

I find this hard to believe. Do you really have anecdotes supporting this? (And a lack of a comparable or greater quantity of anecdotes to the contrary?)

  • A majority of the people who possess enough knowledge about nootropic drugs to be able to figure out that they should take modafinil, do not take modafinil.

I am entirely unconvinced that taking modafinil is a good idea, so you would have to first demonstrate that.

  • Something about Bitcoin. I don't know what your claim even means in this case, honestly. Please explain.

I think your post would greatly benefit from the inclusion of some of those anecdotes you allude to. In other words, why do you believe this thing you believe? What has caused you to come to this conclusion? I would love to know!

comment by Ben_LandauTaylor · 2014-01-21T06:06:53.578Z · LW(p) · GW(p)

There's lots of modafinil info at gwern's page. Wikipedia is also a pretty good source. The short (and only slightly inaccurate) version is that it gives you the good effects of caffeine, but stronger, and with no withdrawal or other drawbacks. It's had positive effects on my mood and focus.

"deciding to start taking a drug without the advice and supervision of a licensed medical professional is bad" seems to be a decent heuristic to live by

Reasonable! Which is why I'm taking modafinil with the advice and supervision of a licensed medical professional. If you're wary of self-medication, you might want to look into that route.

Replies from: SaidAchmiz, Eugine_Nier
comment by Said Achmiz (SaidAchmiz) · 2014-01-21T06:11:28.739Z · LW(p) · GW(p)

Thank you for the link, I will look into that.

If you are so inclined, I would be interested in hearing how you approached the "advice of a medical professional" aspect; did you go to your GP and say "So I'm considering taking modafinil"? (If you'd prefer not to answer, I entirely understand, no need to even respond to say no; thank you in any case for your comment.)

Replies from: Ben_LandauTaylor
comment by Ben_LandauTaylor · 2014-01-21T07:26:51.754Z · LW(p) · GW(p)

I'd been seeing a psychiatrist to get treated for anhedonia. We tried a few different SSRIs, which didn't help. Then I read about modafinil, and it seemed like it could plausibly help treat some of my symptoms (although not their causes), so I brought it up. He agreed it was a reasonable thing to try and prescribed it. I've been taking modafinil regularly for a year, now. It's not a giant boost for me, but it is a boost, and the drawbacks are negligible.

Replies from: Creutzer
comment by Creutzer · 2014-01-21T13:04:07.715Z · LW(p) · GW(p)

That's pretty remarkable, I would expect that most psychiatrists would be highly resistant to such a proposal. Also, having to try SSRIs first in order to maybe get them to agree is not an insignificant cost.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-01-21T15:18:47.168Z · LW(p) · GW(p)

Yeah, it doesn't sound like Ben_LandauTaylor's strategy of modafinil acquisition is viable for me.

Also, having to try SSRIs first in order to maybe get them to agree is not an insignificant cost.

No kidding!

comment by Eugine_Nier · 2014-01-21T23:24:14.871Z · LW(p) · GW(p)

with no withdrawal or other drawbacks.

How much data is there behind this conclusion? Is it comparable to the centuries of experience we have with caffeine?

Replies from: gwern
comment by gwern · 2014-01-22T02:45:50.124Z · LW(p) · GW(p)

There's lots of modafinil info at gwern's page. Wikipedia is also a pretty good source...

How much data is there behind this conclusion

Why are you asking, instead of looking?

comment by Cyan · 2014-01-21T03:45:32.265Z · LW(p) · GW(p)

I'm not Yvain, but his Goofus and Gallant parable did remind me of the time some dude noticed that the uncapped jackpot rollover of the Irish lotto made it vulnerable to a brute force attack.
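
(The arithmetic behind the attack is straightforward. A sketch using the commonly cited figures - 6 numbers drawn from 36, 50p per line, a rolled-over jackpot around £1.7M - all of which should be treated as approximate:)

    # Cost of buying every combination vs. the rolled-over jackpot.
    from math import comb

    lines = comb(36, 6)        # 1,947,792 combinations
    cost = lines * 0.50        # ~£974k at 50p per line
    jackpot = 1_700_000        # approximate rolled-over jackpot
    print(lines, cost, jackpot - cost)
    # Side prizes (match-4, match-5) push the payoff up; a split jackpot,
    # or the operator blocking bulk purchases (as actually happened),
    # pushes it down.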

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-01-21T05:09:11.886Z · LW(p) · GW(p)

Interesting. Pretty niche (in that it doesn't seem to be an example of behavior that the average rationalist will often, or ever, have a chance to emulate), but interesting.

I note that the National Lottery responded by attempting (with partial success) to block the guy from his victory, and also making such things unfeasible in the future. So someone who thought "nah, that would never be allowed to work" (i.e. didn't take the idea seriously), would have been at least partly correct.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2014-01-21T08:47:00.198Z · LW(p) · GW(p)

I note that the National Lottery responded by attempting (with partial success) to block the guy from his victory, and also making such things unfeasible in the future.

As a general rule, when you game the system, the system changes to stop the game, because the organisers have a goal beyond the rules of the day. So there's only a certain window of opportunity to profit. If there are high stakes, you need to be really sure that there is a gap to work with, in between "no-one has done this before, so maybe it doesn't work for reasons I haven't seen" and "everyone's doing it, so does it still work?"

comment by Solvent · 2014-01-29T20:05:39.684Z · LW(p) · GW(p)

ETA: Note that I work for App Academy. So take all I say with a grain of salt. I'd love it if one of my classmates would confirm this for me.

Further edit: I retract the claim that this is strong evidence of rationalists winning. So it doesn't count as an example of this.

I just finished App Academy. App Academy is a 9 week intensive course in web development. Almost everyone who goes through the program gets a job, with an average salary above $90k. You only pay if you get a job. As such, it seems to be a fantastic opportunity with very little risk, apart from the nine weeks of your life. (EDIT: They let you live at the office on an air mattress if you want, so living expenses aren't much of an issue.)

There are a bunch of bad reasons to not do the program. To start with, there's the sunk cost fallacy: many people here have philosophy degrees or whatever, and won't get any advantage from that. More importantly, it's a pretty unusual life move at this point to move to San Francisco and learn programming from a non-university institution.

LWers are massively overrepresented at AA. There were 4/40 at my session, and two of those had higher karma than me. I know other LWers from other sessions of AA.

This seems like a decent example of rationalists winning.

EDIT:

My particular point is that for a lot of people, this seems like a really good idea: even if there's a 50% chance of it being a scam, and you're making $50k doing whatever else you were doing with your life, then if the job search afterward takes 3 months, you're almost better off in expectation over the course of one year.

And most of the people I know who disparaged this kind of course didn't do so because they disagreed with my calculation, but because it "didn't offer real accreditation" or whatever. So I feel that this was a good gamble, one which seemed weird, and which rationalists were therefore more likely to take.
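
(Making that calculation explicit, with the hypothetical numbers above - a 50% scam probability, a $50k alternative salary, a $90k salary on success, and a 3-month job search on failure; tuition is ignored for simplicity:)

    # One-year expected income: keep the $50k job, vs. spend 9 weeks at
    # the bootcamp and either land a $90k job or restart a 13-week search.
    WEEKS = 52
    stay = 50000.0

    p_scam = 0.5
    fail = 50000 * (WEEKS - 9 - 13) / WEEKS   # ~$28.8k for the rest of the year
    succeed = 90000 * (WEEKS - 9) / WEEKS     # ~$74.4k for the rest of the year
    go = p_scam * fail + (1 - p_scam) * succeed

    print(round(stay), round(go))   # 50000 vs ~51600: roughly a wash in year
                                    # one, before the higher salary compounds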

Replies from: ChrisHallquist, SaidAchmiz, V_V, Aleksander, Jack, Jiro, ChristianKl, Richard_Kennaway, SaidAchmiz
comment by ChrisHallquist · 2014-01-30T01:26:04.223Z · LW(p) · GW(p)

I'm one of Solvent's App Academy grads here. Unclear to me whether this is indicative of LWer's superior rationality, and to what extent it's because word about App Academy has gotten around within the LessWrong community. For me, the decision process went something like:

  1. Luke recommended it to me.
  2. I asked Luke if he knew anyone who'd been through it who could vouch for the program. He didn't, but could recommend someone within the LessWrong community who'd done a lot of research into coding bootcamps.
  3. I talked to Luke's contact, everything checked out.
  4. After getting in, I sent the contract to my uncle (a lawyer) to look at. He verified there were no "gotcha" clauses in the contract.

So I don't know how much of my decision was driven by superior rationality and how much was driven by information I had that others might not have had (due in large part to the LessWrong community). Though the latter certainly played a role.

(EDIT: And in case anyone was wondering, it was a great decision and I'd highly recommend it.)

comment by Said Achmiz (SaidAchmiz) · 2014-01-29T21:27:16.342Z · LW(p) · GW(p)

move to San Francisco

Unrelatedly to my other response: uh, move to San Francisco? That... costs a lot of money. Even if only for nine weeks. Where did you live for the duration?

Replies from: Solvent
comment by Solvent · 2014-01-29T22:42:04.671Z · LW(p) · GW(p)

They let you live at the office. I spent less than $10 a day. Good point though.

Replies from: Jiro, SaidAchmiz
comment by Jiro · 2014-01-30T01:03:20.247Z · LW(p) · GW(p)

Moving to San Francisco has a lot of expenses other than housing expenses, including costs for movers, travel costs (and the costs of moving back if you fail), costs to stop and start utilities, storage costs to store your possessions for 9 weeks if you live in the office, and the excess everyday costs that come from living in an area where everything is expensive. It's also a significant disruption to your social life (which could itself decrease your chances of finding a job, and is a cost even if it doesn't.)

Replies from: Solvent
comment by Solvent · 2014-01-30T01:23:19.537Z · LW(p) · GW(p)

You make a good point. But none of the people I've discussed this with who didn't want to do App Academy cited those reasons.

Replies from: Jiro
comment by Jiro · 2014-01-31T05:26:13.641Z · LW(p) · GW(p)

I think this falls into the category of not assuming everyone talks like a LW-er.

Someone who has moved in the past or known someone who has moved might not remember (at least without prompting) each of the individual items which make moving cost. They may just retain a generalized memory that moving is something to be avoided without a good reason.

But guess what? When it comes to making decisions that should take into account the cost of moving, remembering "moving should be avoided without a good reason" will, if their criteria for "good reason" are well-calibrated, lead to exactly the same conclusion as having a shopping list of moving costs in their mind and knowing that the movers are $500 and the loss of social links is worth 1000 utilons etc. even if they can't articulate any numbers or any specific disadvantages of moving. Just because the people didn't actually cite those reasons, and wouldn't be able to cite those reasons, doesn't mean that they weren't in effect rejecting it for those reasons.

And yes, this generalizes to people being unable to articulate reasons to avoid other things that they've learned to avoid.

Replies from: jsteinhardt
comment by jsteinhardt · 2014-02-11T17:46:24.249Z · LW(p) · GW(p)

This is an extremely cogent articulation of something I've been wanting to articulate for a while (but couldn't, because I'm the sort of person who just remembers "you shouldn't move without a good reason"). I would strongly encourage you to write a top-level post about this.

comment by Said Achmiz (SaidAchmiz) · 2014-01-29T23:03:43.488Z · LW(p) · GW(p)

... huh. Could you elaborate on this, please? How's that work? Do they have actual housing? What is living at the office like?

Replies from: troll
comment by troll · 2014-01-31T09:40:23.779Z · LW(p) · GW(p)

They don't have actual housing.

  • There are three rooms and one open space to put beds / storage in.

  • 80%+ of beds are air mattresses people bought at Target.

  • Living at the office means you have to sign up at a nearby gym if you wish to shower.

  • It also means no privacy.

  • The showers in the nearest gym occasionally turn to cold water (about 1 in 15 times).

  • The nearest gym is ~7 minutes away on foot and costs $130 for three months' membership.

  • There are no housing costs.

  • Lights typically go off at 11 pm - 12 am.

  • Residents have to wash dishes and take out the trash, and generally pick up after themselves.

  • There are ~15 residents per active cohort.

  • Food costs are ~$10/day if you eat out for lunch and dinner, and ~$4/day if you make food.

  • Each sleeping space is ~20 square meters (there are four).

  • If you sleep in the last sleeping space, you have to move your shit during the day.

Replies from: SaidAchmiz, Jiro
comment by Said Achmiz (SaidAchmiz) · 2014-01-31T13:38:52.364Z · LW(p) · GW(p)

Thank you for the info.

I guess the takeaway here is that when someone on LessWrong talks about something being an obvious win, I should take it with a grain of salt, and assume a strong prior probability of this person just having very different values from me.

Replies from: troll
comment by troll · 2014-01-31T20:30:27.535Z · LW(p) · GW(p)

Possible things to consider are:

  • It's assumed that you go to App Academy with the aim of getting a high-paying job without paying too much for the opportunity, and with high confidence of success.

  • It's also assumed you want to be able to program, and imagine it will be fun in the future, if it is not already.

  • Humans acclimate to conditions relatively quickly.

  • It's relatively easy to improve your living conditions with earplugs, night eyewear, and a mattress cover.

  • Having people around to debug your code when you are too exhausted to do it yourself is a significant boon for progression in programming skill.

That said, it's understandable if your values differ.

comment by Jiro · 2014-01-31T16:45:54.284Z · LW(p) · GW(p)

May I ask why your name is "troll"?

That name strongly suggests "I actually called myself a troll right in my username and those idiots at LW didn't even realize I'm a troll when it's right there in front of them in black and white".

comment by V_V · 2014-02-11T15:20:54.176Z · LW(p) · GW(p)

This is the first time I've heard about this training program, but my impression (as somebody living outside the US) is that at the moment there is a shortage of programmers in Silicon Valley, and therefore it is relatively easy, at least for people with the appropriate cognitive structure (those who can "grok" programming), to get a relatively high-paying programming job, even with minimal training.
I suppose this is especially true in the web app/mobile app industry, since these tend to be highly commodified, non-critical products, which can be developed and deployed incrementally and often have very short lifecycles, hence a "quantity over quality" production process is used, employing a large number of relatively low-skilled programmers (*).

Since the barriers to entry to the industry are low, evaluating the effectiveness of a commercial training program is not trivial: just noting that most people who complete the program get a job isn't great evidence.
You would have to check whether people who complete the program are more likely to get a job, or get higher average salaries, than people who taught themselves programming by reading a few tutorials or completing free online courses like those offered by Code.org, Coursera, etc.
If there was no difference, or the difference was not high enough to pay back the training program cost, then paying for it would be sub-optimal.

(* I'm not saying that all app programmers are low-skilled, just that high skill is not a requirement for most of these jobs)

Replies from: Jiro, ChristianKl
comment by Jiro · 2014-02-11T20:32:03.225Z · LW(p) · GW(p)

"Shortage of programmers" often means "shortage of programmers willing to work for the salaries we offer".

Replies from: Nornagest, V_V
comment by Nornagest · 2014-02-12T01:26:05.539Z · LW(p) · GW(p)

And/or "shortage of programmers ticking all the boxes on this highly specific technology stack we're using". I get the impression that the greatest advantage of these development bootcamps from a hiring perspective is having a turnaround time short enough that they can focus narrowly on whatever technologies are trendy at the moment, as opposed to a traditional CS degree which is much more theory-centric and often a couple years out of date in its practical offerings.

comment by V_V · 2014-02-12T01:10:27.678Z · LW(p) · GW(p)

It seems to me they already tend to offer quite high salaries.
Further increasing them could increase the number of available programmers, although there are going to be both short-term and long-term availability limits. And obviously, companies can't afford to pay arbitrarily high salaries.

More specifically, I suppose that much of this labor demand comes from startups, which often operate on the brink of financial viability.
Startups have high failure rates, but a few of them generate a very high return on investment, which is what makes the whole startup industry viable: VCs are as risk-averse as anybody else, but by diversifying their investments across many startups they reduce the variance of their return and thus obtain a positive expected utility. However, if the failure rate goes up (for instance due to increased labor costs) without the other parameters changing, it would kill the whole industry, and I would expect this to occur in a very non-linear fashion, essentially as a threshold effect.
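
The diversification argument above can be made concrete with a small simulation. This is a minimal sketch under invented parameters (a 10% success rate and a 20x payoff per successful startup), not actual VC data:

```python
# Sketch: each $1 invested in a startup returns `payoff` with probability
# p_success, else 0. All parameters are invented for illustration.
import random
import statistics

def portfolio_return(n_startups, p_success=0.1, payoff=20.0):
    """Average return per $1 invested across a portfolio of n startups."""
    return sum(payoff if random.random() < p_success else 0.0
               for _ in range(n_startups)) / n_startups

random.seed(0)
for n in (1, 10, 100):
    samples = [portfolio_return(n) for _ in range(10_000)]
    print(n, round(statistics.mean(samples), 2), round(statistics.stdev(samples), 2))
```

The expected return per dollar stays the same (~2x) at every portfolio size, while the spread shrinks roughly as 1/sqrt(n); raise the failure rate enough and the expected return itself drops below break-even, at which point no amount of diversification saves the industry - the threshold effect described above.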

comment by ChristianKl · 2014-02-11T16:16:33.769Z · LW(p) · GW(p)

Few people have the mental stamina to just teach themselves 8 hours a day by reading a few tutorials and completing free online courses.

If you go with your mattress to App Academy it takes effort not to spend time programming when all the people around you are programming.

It's also likely that the environment will make it easy to network with other programmers.

Replies from: Lumifer, V_V
comment by Lumifer · 2014-02-11T16:59:18.424Z · LW(p) · GW(p)

Few people have the mental stamina to just teach themselves 8 hours a day

It's actually a defining characteristic of hackers, except that it's more like 16 hours a day.

Replies from: ChristianKl, hyporational
comment by ChristianKl · 2014-02-11T22:56:18.487Z · LW(p) · GW(p)

It depends on the teacher. If you have a specific, well-defined project then a good hacker can work his 16 hours focused on the project.

Of the people I know, few have the same ability for the kind of general tutorial learning that provides broad knowledge.

I have certainly spent many days where most of my time went to learning, but it wasn't the kind of focused learning you have in school.

Replies from: Lumifer
comment by Lumifer · 2014-02-12T02:40:53.155Z · LW(p) · GW(p)

It depends on the teacher.

Which teacher? "...mental stamina to just teach themselves"

comment by hyporational · 2014-02-11T18:04:15.074Z · LW(p) · GW(p)

If that's the case do you have any idea what makes them so exceptional?

Replies from: Lumifer
comment by Lumifer · 2014-02-11T18:22:18.619Z · LW(p) · GW(p)

Are you asking what makes people self-motivated, have burning curiosity, and be willing to just dive headlong into new fields of study?

I have no idea, but I suspect carefully choosing one's parents helps :-)

There is also the standard stereotype of high-functioning autistics with superhuman ability to focus, but I don't know how well it corresponds to reality.

You might consider this interesting.

Replies from: hyporational
comment by hyporational · 2014-02-11T18:44:38.626Z · LW(p) · GW(p)

I do, thanks.

comment by V_V · 2014-02-11T16:35:27.425Z · LW(p) · GW(p)

Few people have the mental stamina to just teach themselves 8 hours a day by reading a few tutorials and completing free online courses.

True, but I suspect that the effect of training time runs into diminishing returns well before you reach 8 hours a day, in particular after you have been doing it for a few days.

It's also likely that the environment will make it easy to network with other programmers.

Agreed.

Replies from: ChristianKl
comment by ChristianKl · 2014-02-11T16:56:05.547Z · LW(p) · GW(p)

I think there are many smart people who have issues with akrasia. Being in an environment with other people who are also working makes it much easier to just sit down and follow the course.

The fact that the deal with App Academy is that you only pay when you get a job also makes it in their interest that the logistics of the job search are settled.

For someone without a programming job the way to find work as a programmer might not seem straightforward even after completing a bunch of tutorials.

From this description, the only reason I won't go to App Academy is that it's in the US. If I could do this in a European city I would likely pursue it, because it's a path that's much more straightforward than my current one.

Replies from: V_V
comment by V_V · 2014-02-11T17:00:08.152Z · LW(p) · GW(p)

I'm not saying that they offer no value; I'm saying that the fact that they report high hiring rates is, by itself, not strong evidence that they offer enough value to justify their price.

comment by Aleksander · 2014-01-29T23:11:23.441Z · LW(p) · GW(p)

I've wondered why more people don't train to be software engineers. According to Wikipedia, 1 in 200 workers is a software engineer. A friend of mine who teaches programming classes estimates that 5% of people could learn how to program. If he's right, 9 out of 10 people who could be software engineers aren't, and I'm guessing 8 of them make less in their current job than they would if they decided to switch.
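
Spelling out the arithmetic behind the "9 out of 10" figure (this just combines the two estimates quoted above, and assumes the 5% figure applies to the working population):

```python
# Combining the two estimates cited above; applying the 5% base rate to
# workers is an assumption, not a statistic from the comment's sources.
workers_who_are_swes = 1 / 200   # "1 in 200 workers is a software engineer"
could_learn = 0.05               # friend's estimate: 5% could learn to program

working_share = workers_who_are_swes / could_learn
print(working_share)       # 0.1 -> 1 in 10 potential software engineers is one
print(1 - working_share)   # 0.9 -> the other "9 out of 10 ... aren't"
```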

One explanation is that most people would really hate the anti-social aspect of software engineering. We like to talk a lot about how critical it is for that job to be a great communicator, etc., but the reality is that most of the time you sit at your desk and don't talk to anyone. It's possible most people couldn't stand it. Most jobs have a really big social factor in comparison: you talk to clients, students, patients, supervisors, etc.

Replies from: Solvent, SaidAchmiz
comment by Solvent · 2014-01-29T23:20:43.553Z · LW(p) · GW(p)

I suspect that most people don't think of making the switch.

comment by Said Achmiz (SaidAchmiz) · 2014-01-29T23:28:11.055Z · LW(p) · GW(p)

This...

5% of people could learn how to program

does not imply that all those people can learn to be software engineers. Software engineering is not just programming. There are a lot of terrible software engineers out there.

comment by Jack · 2014-01-30T05:15:48.318Z · LW(p) · GW(p)

App Academy was a great decision for me. Though I just started looking for work, I've definitely become a very competent web developer in a short period of time. Speaking of which, if anyone in the Bay Area is looking for a Rails or Backbone dev, give me a shout.

I don't know if I agree that my decision to do App Academy had a lot to do with rationalism. 4/40 is a high percentage but a small n, and the fact that it was definitely discussed here, or at least around the community, pretty much means it isn't evidence of much. People in my life I've told about it have all been enthusiastic, even people who are pretty focused on traditional credentialism.

comment by Jiro · 2014-01-30T00:58:28.622Z · LW(p) · GW(p)

Don't dismiss what non-LWers are trying to say just because they don't phrase it as a LWer would. "Didn't offer real accreditation" means that they 1) are skeptical about whether the plan teaches useful skills (doing a Bayesian update on how likely that is, conditional on the fact that you are not accredited), or 2) are skeptical that the plan actually has the success rate you claim (based on their belief that employers prefer accreditation, which ultimately boils down to Bayesianism as well).

Furthermore, it's hard to figure the probability that something is a scam. I can't think of any real-world situations where I would estimate (with reasonable error bars) that something has a 50% chance of being a scam. How would I be able to tell the difference between something with a 50% chance of being a scam and a 90% chance of being a scam?

Replies from: Solvent
comment by Solvent · 2014-01-30T01:21:29.212Z · LW(p) · GW(p)

I don't think that they're thinking rationally and just saying things wrong. They're legitimately thinking wrong.

If they're skeptical about whether the place teaches useful skills, the evidence that it actually gets people jobs should remove that worry entirely. Their point about accreditation usually came up after I had cited the jobs statistics. My impression was that they were just falling back on their cached thoughts about dodgy-looking training programs, without considering the evidence that this one worked.

Replies from: Jiro
comment by Jiro · 2014-01-30T02:12:16.263Z · LW(p) · GW(p)

Their point about accreditation usually came up after I had cited their jobs statistics.

If their point about accreditation was meant to indicate that they are skeptical that the plan leads to useful skills or to getting a job, then having them bring it up when you cite the job statistics is entirely expected. They brought up evidence against getting a job when you gave them evidence for getting one.

(And if you're thinking that job statistics are such good evidence that even bringing up something correlated with lack of jobs doesn't affect the chances much, that's not true. There are a number of ways in which job statistics can be poor evidence, and those people were likely aware that such ways exist.)

Replies from: Jiro
comment by Jiro · 2014-02-01T11:52:20.445Z · LW(p) · GW(p)

To elaborate a bit, one deceptive practice I've heard about is to count successes only as a percentage of the people who go through the entire program. It makes sense to do this to some degree, since you don't want to count people who dropped out after a day, but depending on how the program is run, it's not hard to weed out a lot of people part of the way through and artificially increase your success rate.
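
A toy example of how that weeding-out inflates the headline number (all figures invented for illustration):

```python
# Invented numbers showing how mid-program attrition can inflate a success rate.
enrolled   = 100
weeded_out = 40                  # pushed out partway through the program
graduates  = enrolled - weeded_out
hired      = 55

print(round(hired / graduates, 2))   # 0.92 -- the headline "success rate"
print(round(hired / enrolled, 2))    # 0.55 -- the rate an applicant actually faces
```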

There's also the difference between the percentage of people who get jobs and the percentage who keep them, and the possibility that past performance covers a time period where the job market was better and won't generalize to your chance of getting a job from the program now. Not to mention that success rate partly depends on the people who take the course--if most of the people who take the course are, say, high school graduates with high aptitude but no money for college, their success rate might not translate to the success rate for an adult who moves from another area.

And there's the possibility of overly-literal wording. Has everyone who has gotten a job gotten a job based on a skill learned during the program? Is an "average salary" a mean or median?

Then there's always the possibility that the success rate is simply false. Sure, false advertising is illegal, but with no oversight, how's anyone supposed to find that out?

Replies from: V_V
comment by V_V · 2014-02-11T16:22:26.222Z · LW(p) · GW(p)

I don't know specifically about App Academy, but I've found a Hacker News thread with some speculation that these "coding bootcamps" might inflate their statistics by holding selective enrollment interviews that screen out most people who are not already employable, and/or by hiring their own students as instructors or similar after they complete the program, so that they can be counted as employed, even if only for a short time.

comment by ChristianKl · 2014-02-11T16:16:03.796Z · LW(p) · GW(p)

Almost everyone who goes through the program gets a job, with an average salary above $90k.

What does "almost" mean in percentages?

How many people drop out of the program and how many complete it?

Replies from: Solvent
comment by Solvent · 2014-02-12T22:44:35.701Z · LW(p) · GW(p)

Of the people who graduated more than 6 months ago and looked for jobs (as opposed to going to university or something), all have jobs.

About 5% of people drop out of the program.

comment by Richard_Kennaway · 2014-01-31T11:36:16.512Z · LW(p) · GW(p)

ETA: Note that I work for App Academy.

Any comment on this? (News article a couple of days ago on gummint regulators threatening to shut down App Academy and several similarly-named organisations.)

Replies from: Solvent
comment by Solvent · 2014-02-01T21:54:31.783Z · LW(p) · GW(p)

It will probably be fine. See here.

comment by Said Achmiz (SaidAchmiz) · 2014-01-29T21:25:17.048Z · LW(p) · GW(p)

5. And, most importantly, doing so has (not "will"! already has!) caused them to win, in a clear, unambiguous, significant way.

You have, I take it, already gotten a job as a result of finishing App Academy?

Replies from: Solvent
comment by Solvent · 2014-01-29T22:42:28.958Z · LW(p) · GW(p)

I did, but the job I got was being a TA for App Academy, so that might not count in your eyes.

Their figures are telling the truth: I don't know anyone from the previous cohort who was dissatisfied with their experience of job search.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-01-29T23:09:40.539Z · LW(p) · GW(p)

I did, but the job I got was being a TA for App Academy, so that might not count in your eyes.

Indeed it does not. I don't count your experience as an example of the OP.

dissatisfied with their experience of job search.

That's... an awfully strange phrasing. Do you mean they all found a web development job as a result of attending App Academy? Or what?

Replies from: Solvent
comment by Solvent · 2014-01-29T23:19:53.343Z · LW(p) · GW(p)

Pretty much all of them, yes. I should have phrased that better.

My experience was unusual, but if they hadn't hired me, I expect I would have been hired like my classmates.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-01-29T23:42:33.595Z · LW(p) · GW(p)

Out of curiosity, why did you take the TA job? Does it pay more than $90k a year?

comment by Morendil · 2014-01-21T22:25:34.444Z · LW(p) · GW(p)

Google "The Pudding Guy".

comment by jefftk (jkaufman) · 2014-01-23T23:19:37.344Z · LW(p) · GW(p)

You could argue earning to give fits this pattern, though I'm not sure the victory/win is unambiguous enough.

comment by Vivificient · 2014-01-21T14:28:55.639Z · LW(p) · GW(p)

Your conclusion is possible. But I'll admit I find it hard to believe that non-rationalists really lack the ability to take ideas seriously. The 1 = 2 example is a little silly, but I've known lots of not-very-rational people who take ideas seriously. For example, people who stopped using a microwave when they heard about an experiment supposedly showing that microwaved water kills plants. People who threw out all their plastic dishes after the media picked up a study about health dangers caused by plastics. People who spent a lot of time thinking positive thoughts because they have heard it will make them successful.

Could it be that proto-rationalists are just bad at quantifying their level of belief? Normally, I'd trust somebody's claim to believe something more if they're willing to bet on it; and if they aren't willing to bet on it, then I'd think their real level of belief is lower.

Replies from: Viliam_Bur, None
comment by Viliam_Bur · 2014-01-21T18:39:51.818Z · LW(p) · GW(p)

Your examples require magic, pseudoscience, or conspiracy theories. Perhaps the advantage of rationalists is the ability to take boring ideas seriously. (Even immortality is boring when all you have to do is buy life insurance, sign a few papers, and wait. And admit that it most likely will not work. And that if it does work, it will pretty much be science as usual.)

Replies from: Vivificient, Kaj_Sotala
comment by Vivificient · 2014-01-21T18:57:12.704Z · LW(p) · GW(p)

Making things happen with positive thinking requires magic. But myths about the health effects of microwaves or plastic bottles are dressed up to look like science as usual. The microwave thing is supposedly based on the effect of radiation on the DNA in your food or something -- nonsense, but to someone with little science literacy not necessarily distinguishable from talk about the information-theoretic definition of death.

I'm not sure that signing papers to have a team of scientists stand by and freeze your brain when you die is more boring than cooking your food without a microwave oven. I would guess that cryonics being "weird", "gross", and "unnatural" would be more relevant.

comment by Kaj_Sotala · 2014-01-21T18:53:20.134Z · LW(p) · GW(p)

"There's a health danger involved with plastic dishes" sounds quite boring to me. ("Oh, yet another study about some random substance causing cancer? Yawn.")

comment by Locaha · 2014-01-21T07:28:13.468Z · LW(p) · GW(p)

We investigate with a cross-sectional study, looking at proto-rationalists versus experienced rationalists. Define proto-rationalists as those respondents to the Less Wrong survey who indicate they have been in the community for less than six months and have zero karma (usually indicative of never having posted a comment). And define experienced rationalists as those respondents to the Less Wrong survey who indicate they have been in the community for over two years and have >1000 karma (usually indicative of having written many well-received posts).

This is an incredibly bad definition of a rationalist. What you are actually studying here is people who fit into the mainstream of LW.

Replies from: MugaSofer
comment by MugaSofer · 2014-01-21T17:42:07.300Z · LW(p) · GW(p)

... which is somewhat relevant to whether LW-style "rationalist training" makes one irrational, yes?

comment by V_V · 2014-01-21T18:25:32.694Z · LW(p) · GW(p)

Define proto-rationalists as those respondents to the Less Wrong survey who indicate they have been in the community for less than six months and have zero karma (usually indicative of never having posted a comment). And define experienced rationalists as those respondents to the Less Wrong survey who indicate they have been in the community for over two years and have >1000 karma (usually indicative of having written many well-received posts).

I don't like this appropriation of the term "rational" (even with the "-ist" suffix), and in fact I find it somewhat offensive.

[ Warning: Trolling ahead ]
But since words are arbitrary placeholders, let's play a little game and replace the word "rationalist" with another randomly generated string, such as "cultist" (which you might possibly find offensive, but remember, it's just a placeholder).

So what does your data say?

Proto-cultists give a higher average probability of cryonics success than committed cultists.
But this isn't necessarily particularly informative, because averaging probabilities from different estimators doesn't really tell us much (consider scenario A where half of the respondents say p = 1 and half say p = 0, and scenario B where all the respondents say p = 0.5. The arithmetic mean is the same, but the scenarios are completely different). The harmonic mean can be a better way of averaging probabilities.
But anyway, let's assume that the distribution of the responses is well-behaved enough that a randomly sampled proto-cultist is more likely to assign a higher probability of cryonics success than a randomly sampled committed cultist (you can test this hypothesis on the data).
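
A minimal sketch of both points, using made-up responses rather than the actual survey data (`p_first_higher` is a hypothetical helper, not anything computed in the survey posts):

```python
# Made-up responses illustrating why arithmetic means of probabilities mislead,
# plus the pairwise check suggested above. Not the actual survey data.
from statistics import mean, harmonic_mean

scenario_a = [0.99, 0.99, 0.01, 0.01]   # polarized respondents
scenario_b = [0.50, 0.50, 0.50, 0.50]   # uniform respondents

print(mean(scenario_a), mean(scenario_b))                    # 0.5 0.5 -- identical
print(harmonic_mean(scenario_a), harmonic_mean(scenario_b))  # ~0.02 vs 0.5

def p_first_higher(xs, ys):
    """P(a random draw from xs exceeds a random draw from ys)."""
    pairs = [(x, y) for x in xs for y in ys]
    return sum(x > y for x, y in pairs) / len(pairs)
```

The arithmetic means match while the harmonic means differ sharply, and `p_first_higher` is the kind of Mann-Whitney-style comparison one would run on the real responses to test the hypothesis above.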

On the other hand, proto-cultists are much less likely to be signed up for cryonics than committed cultists (in fact, none of the proto-cultist are signed up).

What is the correlation between belief in cryonics success and being signed up for cryonics? I don't know, since it isn't reported either here or in the survey results post (maybe it was computed but found to be not significant, since IIUC there was a significance cutoff for correlations in the survey results post).
Do committed cultists who sign up for cryonics do it because they assign a high probability to its success, or despite assigning it a low probability? I have no way of knowing.
Or, actually, I could look at the data, but I won't, since you wrote the post trying to make a point from the data, hence the burden of providing a meaningful statistical analysis was on you.

Let's try to interpret this finding:
Cryonics is a weird belief. Proto-cultists didn't spend much time researching it and thinking about it, and, given their typical background (mostly computer science students), they find it somewhat plausible, but they don't really trust their estimate very much.
Committed cultists, on the other hand, have more polarized beliefs. Being in the cult might have actually stripped them of their instrumental rationality (or selected for irrational people), so they decide against their explicit beliefs. Or they respond to social pressures, since cryonics is high status in the cult and has been explicitly endorsed by one of the cult elders and author of the Sacred Scrip-...Sequences. Or both.
Oops.

[ End of trolling ]

The bottom line of my deliberately uncharitable post is:

  • Don't use words lightly. Words aren't really just syntactic labels; they convey implicit meaning. Using words with an implicit positive meaning ("experienced rationalist") to refer to the core members of a community naturally suggests a charitable interpretation (that they are smart). Using words with an implicit negative meaning ("committed cultists") suggests an uncharitable interpretation (that they are brainwashed, groupthinking, too preoccupied with costly status signalling that has no value outside the group).

  • If you are trying to make a point from data, provide relevant statistics.

Replies from: private_messaging
comment by private_messaging · 2014-01-26T01:13:36.192Z · LW(p) · GW(p)

Yeah. Suppose we were talking about a new-age-ish cult whose founder has arranged to be flown to Tibet for a sky burial when he dies. They could very well have the exact same statistics on their online forum.

comment by Sanji · 2014-01-23T13:52:01.454Z · LW(p) · GW(p)

Is it possible that the difference you're seeing is just lack of knowledge of probabilities? I am a new person, and I don't really understand percentages. My brain just doesn't work that way. I don't know how I would even begin to assign a probability to how likely cryonics is to work.

comment by Lalartu · 2014-01-21T23:37:22.094Z · LW(p) · GW(p)

Well, yes, membership in the LW community makes one more likely to sign up for cryonics, even after correcting for selection, because the LW community promotes cryonics. Yes, it is that simple. It is basic human behaviour and doesn't have much to do with rationality. Remove all the positive portrayal, all the emotions, all that "value life" and "true rationalist" talk, leaving only cold facts and numbers - and a few years later the cryonics subscription rate among new LW members will drop much closer to the average of the "people who know about cryonics" group.

Replies from: itaibn0
comment by itaibn0 · 2014-01-22T00:50:59.742Z · LW(p) · GW(p)

Yes, the fact that LW community convinces people to subscribe to cryonics is not mysterious. The mysterious thing is that the LW community manages at the same time to convince people that cryonics is unlikely to work.

Replies from: jkaufman, Lalartu
comment by jefftk (jkaufman) · 2014-01-23T23:28:13.035Z · LW(p) · GW(p)

Except the people who are signed up and the people who think it's less likely to work are not the same people. For example, I meet the criteria for "experienced lesswronger", and I am not signed up for cryonics and I think it's very unlikely to work. There are similarly other people in the Boston meetup group who are signed up and think it's somewhat likely to work. It's only mysterious if you assume we're homogeneous.

Replies from: itaibn0
comment by itaibn0 · 2014-01-24T01:44:11.028Z · LW(p) · GW(p)

Yes, that's another possible explanation, and here's yet another one. I'm not saying that Yvain's theory is correct, only that Lalartu's comment fails to fully account for the situation Yvain describes (though their later comment seems to make up for it, if I understood it correctly).

comment by Lalartu · 2014-01-22T14:04:17.062Z · LW(p) · GW(p)

Because "it will never work" is not the main reason why people don't subscribe for cryonics.

Replies from: itaibn0
comment by itaibn0 · 2014-01-22T22:55:06.770Z · LW(p) · GW(p)

I'm not sure I understand you. Are you saying that the reason most people don't subscribe to cryonics is not that they think that it is unlikely to work but some other reason, and so convincing people that it is unlikely to work is compatible with convincing people to do it? In that case that seems to me like a reasonable point of view.

Replies from: Lalartu, VAuroch
comment by Lalartu · 2014-01-24T09:24:07.326Z · LW(p) · GW(p)

Yes. The effect of convincing people that cryonics is socially acceptable far outweighs the lower success estimates.

comment by VAuroch · 2014-01-22T23:01:49.513Z · LW(p) · GW(p)

It is certainly true that, independent of how likely you think cryonics is to work, most people stay unsubscribed not because they don't think it will work but because it's weird.

comment by private_messaging · 2014-01-26T00:31:35.489Z · LW(p) · GW(p)

Less credulous than whom? All your groups are far, far more credulous about cryonics on average than, say, me, or neurobiology experts, or most people I know. More credulous than many cryonics proponents, too.

As for the rather minor differences between the averages within your groups... Said groups joined the site at different times, have different ages, and discovered this site for different reasons (I gather you get more sci-fi fans now). You even got a general trend towards increased estimates.

That you go on and ignore all signs of confounding, even ones as blatantly in-your-face as last year's "proto-rationalists" and this year's "experienced rationalists" having the same average of 15%, and instead conclude lower credulity due to training, is clearly an example of the kind of reasoning that shouldn't be taken seriously.

The human brain is big and messy; it consists of many regions that do different things. The notion of "taking ideas seriously", coupled with a thought disorder, is a recipe for utter disaster; coupled with some odd lesion in the right hemisphere it might be helpful, though.

My hypothesis is that normally the belief updates and expected value estimates are done in a way that is not available for introspection, much like how visual object recognition (a form of very advanced evidence processing) is outside introspection. For cryonics, normally those processes report a very low expected value.

edit: also see this. It's mostly about the ratios between extreme cryonics believers, the cryonics subscribers possibly suffering from some buyer's remorse, and folks who give the usual (quite low) estimate. Why and how the ratios changed, that's a very different question.

comment by James Camacho (james-camacho) · 2023-07-08T04:37:33.261Z · LW(p) · GW(p)

Did you check if there was a significant age difference between the two groups? I would expect proto-rationalists to be younger, so they would have less money and fewer chances to have signed up for cryonics.

comment by Michel (MichelJusten) · 2022-08-18T23:39:05.860Z · LW(p) · GW(p)

The relevant difference is that Gallant knows how to take ideas seriously [? · GW].

Flagging that the author no longer endorses this post.

comment by Zaine · 2014-01-21T19:05:48.495Z · LW(p) · GW(p)

How long did you deliberate, and what did you think about, while deciding to go with 'Gallant' and 'Goofus'?

Replies from: VAuroch
comment by VAuroch · 2014-01-21T21:10:11.401Z · LW(p) · GW(p)

It's a classic pair of Lazy Bad Planner and Shining Example of Humanity, which has been used in the children's magazine Highlights to put morals on display for decades.

I might have gone with Simplicio and Salviati, but that would go over many people's heads for no real benefit.

Replies from: gwern
comment by gwern · 2014-01-22T19:23:23.515Z · LW(p) · GW(p)

put morals on display for decades.

More specifically, 'Goofus and Gallant' has been running since 1948 (or 66 years now).

Replies from: Zaine
comment by Zaine · 2014-01-23T18:15:48.456Z · LW(p) · GW(p)

Ah, then my question is rendered moot. If it had been an original coinage, I wished to know the thought process that went into deciding, "Yes, I shall prime thusly."

comment by Brillyant · 2014-01-21T15:56:04.322Z · LW(p) · GW(p)

This whole article makes a sleight of hand assumption that more rational = more time on LW.

I'm a proto-rationalist by these criteria. I don't see any reason cryonics can't eventually work. I've no interest in it, and I think it is kinda weird.

Some of that weirdness is the typical frozen dead body stuff. But, more than that, I'm weirded out by the immortality-ism that seems to be a big part of (some of) the tenured LW crowd (i.e. rationalists).

I've yet to hear one compelling argument for why hyper-long life = better. The standard answers seem to be "death is obviously bad and the only way you could disagree is because you are biased" and "more years can equal more utilons".

In the case of the former, yeah, death sucks 'cuz it is an end and often involves lots of pain and inconvenience in the run up to it. To the latter, yeah, I get the gist: more utilons = better. Shut up and do math. Okay.

I'm totally on board with getting rid of gratuitous pain and inconvenience that comes with aging. But, as I said, the "I want to live forever! 'cuz that is winning!" thing is just plain weird to me, at least as much so as the frozen body/head bit.

But what could I know... I'm not rational.

Replies from: TheOtherDave, Vivificient, Yvain, Bugmaster, christopherj, Nornagest
comment by TheOtherDave · 2014-01-21T16:04:18.120Z · LW(p) · GW(p)

If you could remain healthy indefinitely, when do you expect you would choose to die?
Why?

Replies from: Brillyant, byrnema
comment by Brillyant · 2014-01-21T16:37:53.490Z · LW(p) · GW(p)

My first thought is that the number of years lived is relatively arbitrary. 100, 1000, whatever. I'd imagine someone smarter than I could come up with a logical number. Maybe when you could meet your great grandkids or something. Don't know. 10 seems way too small & 1,000,000 way too big, but that is likely just because I'm anchored to 75-85 as an average lifespan.

I think my choice to cease conscious experience would involve a few components:

  • Realization of the end of true novelty. I've read some good stuff on why this might not be an issue given sufficient technology, but I'm ultimately not convinced. It seems to me a perpetual invention of new novelties (new challenges to be overcome, etc.) is still artificial and would not work to extend novelty to the extent I was aware it was artificial. I suspect it might feel like how a particular sandbox video game tends to lose its appeal...and even sandbox video games in general lose their appeal. All this despite the potential for perpetual novelty within the games' engines.

  • I suppose this is related to the first, but it feels a bit separate in my mind... All risk would be lost. With a finite period of time in which to work, all my accomplishments and failures have some scope. I have the body and mind I've been given, and X amount of years to squeeze as much lemonade as I can out of the lemons life throws at me. With the option for infinite time, I'd imagine everything would become an eventuality. Once in a lifetime experiences would be mathematically bound to occur given enough time, of which I'd have an innumerable sum. I'd sum this component up by saying that games are not fun if you can't lose... in fact, they aren't even games. During a philosophical discussion, a former co-worker of mine told me he thought life's meaning was in overcoming obstacles and challenges and finding joy in it. I thought that was the stupidest thing I'd ever heard at the time, but now I basically agree. Infinite availability of time makes this whole purpose kinda moot, in my view.

  • One other component I can think of is a recognition of what death really means. It is only the end of my conscious experience. It is not, in a very real & literal sense, the end of the world. All that happens (presumably) is that I no longer observe. Period. Death isn't nearly as scary or grandiose as we make it out to be.

Replies from: TheOtherDave, blacktrance
comment by TheOtherDave · 2014-01-21T16:47:43.403Z · LW(p) · GW(p)

Given a choice between remaining alive for as long as novelty and risk and challenges and obstacles to overcome and joy remain present, or dying before that point, would you choose to die before that point?

Replies from: Brillyant
comment by Brillyant · 2014-01-21T16:55:47.237Z · LW(p) · GW(p)

I'd imagine I'd like to live as long as life had the potential for those things, even if they weren't present at a given moment. My concern isn't necessarily that they'd run out, rather that they don't really exist in a world where immortality is an option.

And again, being conscious vs. not being conscious is not a world-ending difference to me. I think consciousness is just a localized emergence from a particular meat-computer. I enjoy/tolerate a persistent illusion of "self" that can change drastically with injury or illness. It is a fragile little state of affairs and I think it is weird (though very natural) to seek to solidify it in a (literally) permanent state.

Replies from: TheOtherDave
comment by TheOtherDave · 2014-01-21T17:20:39.275Z · LW(p) · GW(p)

being conscious vs. not being conscious is not a world-ending difference to me.

Sure. I'm clear on that part, I'm just trying to elicit your preferences on the matter.

My concern isn't necessarily that they'd run out, rather that they don't really exist in a world where immortality is an option.

Eh?
I don't really get this.
I can understand how, in principle, immortality means that I might eventually reach a point where nothing is novel, or risky, or challenging, or an obstacle, or joyful.
I don't understand how the option of immortality means that right now nothing is novel, or risky, or challenging, or an obstacle, or joyful.

But, OK... I guess I can accept that this is the way it is for you, even if I don't understand it, and therefore you would prefer not to have that option.

Replies from: Brillyant
comment by Brillyant · 2014-01-21T17:36:11.977Z · LW(p) · GW(p)

My concern isn't necessarily that they'd run out, rather that they don't really exist in a world where immortality is an option.

Eh? I don't really get this. I can understand how, in principle, immortality means that I might eventually reach a point where nothing is novel, or risky, or challenging, or an obstacle, or joyful. I don't understand how the option of immortality means that right now nothing is novel, or risky, or challenging, or an obstacle, or joyful.

There would be some novelty at first. But as soon as you became aware life was of an infinite duration and could understand the implications, what would be the motivation for anything? Every conceivable 1-in-a-billion occurrence would become an eventuality. How does risk even make sense in this world? What is an obstacle or a challenge when infinity is realized as a possibility? I'd imagine it would feel like a game you already know you are going to win... and that is very boring, in my view.

Replies from: blacktrance, TheOtherDave
comment by blacktrance · 2014-01-21T18:04:59.362Z · LW(p) · GW(p)

But as soon as you became aware life was of an infinite duration and could understand the implications, what would be the motivation for anything?

Enjoyment. It's possible to enjoy something despite knowing exactly how it's going to turn out. For example, when you're about to take a bite of food you like, you know how it's going to taste, but that doesn't eliminate your motivation to eat it.

comment by TheOtherDave · 2014-01-21T18:07:49.642Z · LW(p) · GW(p)

soon as you became aware life was of an infinite duration and could understand the implications, what would be the motivation for anything?

But we've already established that life needn't be of infinite duration. I can end it at any time, that's implicit in the question of when I would choose to die. It's of indefinite duration, which isn't the same thing at all.

That aside, though... what are your motivations for doing things now?

Replies from: Brillyant
comment by Brillyant · 2014-01-21T19:17:39.776Z · LW(p) · GW(p)

But we've already established that life needn't be of infinite duration. I can end it at any time, that's implicit in the question of when I would choose to die. It's of indefinite duration, which isn't the same thing at all.

I'm assuming you'd have the choice to end your life, or the option to continue it forever. Of course, if you were stuck living forever, that would suck. Do you agree? Why or why not?

That aside, though... what are your motivations for doing things now?

I assume the lion's share is in my animal-nature programming. I'm evolved to derive some pleasure from the sorts of activities that benefit the replicators I carry.

Replies from: TheOtherDave
comment by TheOtherDave · 2014-01-21T19:57:08.592Z · LW(p) · GW(p)

I'm assuming you'd have the choice to end your life

And, further, you've asserted that you would choose to end it when certain conditions arose, which on your view are guaranteed to arise eventually. So your life would be predictably finite.

Of course, if you were stuck living forever, that would suck. Do you agree? Why or why not?

Generally, I prefer to have choices about things. That said, there are situations where I would willingly give up certain choices, including the choice to die. So it would depend on the situation.

But sure, all else being equal, I would rather have the choice to die.

That said, I'd also rather have the choice to live, which the current arrangement is pretty much guaranteed to deprive me of pretty soon.

Replies from: Brillyant
comment by Brillyant · 2014-01-21T20:00:27.264Z · LW(p) · GW(p)

Why would it be bad to be unable to choose to stop living?

Replies from: TheOtherDave, blacktrance
comment by TheOtherDave · 2014-01-21T20:24:46.005Z · LW(p) · GW(p)

Because situations might arise in which I preferred death to continued life, and in the absence of that choice I'd be unable to effect that preference.
That said, situations might also arise in which I transiently chose to die despite an average preference to continue living, so it depends on what alternatives I have available. I can imagine options superior to my having this choice.

comment by blacktrance · 2014-01-21T20:11:54.791Z · LW(p) · GW(p)

For example, it's better to die than to be tortured forever.

comment by blacktrance · 2014-01-21T17:54:01.199Z · LW(p) · GW(p)

Many questions:

  • Do you think anything similar to the "end of novelty" could happen within a human's current lifespan? And by "end of novelty" I mean actual end of novelty, not something that could seem similar to it, like depression. Also, is novelty necessary to have a life with positive value? Could you not imagine yourself living a routine for x number of years (where x is less than the current human lifespan, but greater than 10) and being happy? If so, why do you think this would change for a sufficiently large value of x, and why don't you think that you'd be able to find a new routine? Also, what would be artificial about a "perpetual invention of novelties" - what makes perpetual novelty artificial that isn't also the case for current novelty?

  • If potential new experiences are created faster than you can experience them, does the problem remain? (e.g., even if you read as many books as you felt comfortable reading, the total number of books unread by you would increase every year.) Also, why would immortality mean that you can't lose? You can't lose your life, but you can lose money, once-in-a-lifetime experiences, etc.

  • Death isn't the end of the world in an objective sense, but it prevents you from enjoying anything you value - so, if someone said that they'd put you in a steel box with a lifetime supply of oxygen, food, and sleep medication (if you want to take it), then shoot you into space, would you object to that? It would be similar to death because the world would go on, but you wouldn't be able to enjoy it.

Replies from: Brillyant, Brillyant
comment by Brillyant · 2014-01-21T19:14:50.745Z · LW(p) · GW(p)

Do you think anything similar to the "end of novelty" could happen within a human's current lifespan?

I don't think that could practically happen. Though I suppose people do often end up feeling that is the case in their own lives.

Also, is novelty necessary to have a life with positive value? Could you not imagine yourself living a routine for x number of years (where x is less than the current human lifespan, but greater than 10) and being happy?

I do think novelty is a key component in happiness. Not necessarily that you have new stuff in every moment or day, but at least that there is the potential for novelty.

If so, why do you think this would change for a sufficiently large value of x, and why don't you think that you'd be able to find a new routine?

Perhaps for any non-infinite value of x, you'd be okay. Once x could be infinite, I think there could be the realization that everything is pretty meaningless. And I don't see much reason why any particular finite lifespan is better than any other. I suppose reaching an age where you could have kids or grandkids might be a good benchmark. But the difference between 100 years and 200 seems arbitrary. As does the difference between 1000 and 1,000,000.

Also, what would be artificial about a "perpetual invention of novelties" - what makes perpetual novelty artificial that isn't also the case for current novelty?

It's a good question. I don't know. I can just imagine coming to the realization that (a) I could live forever if I chose, and (b) everything could be done given enough time, of which I'd have a limitless supply. If those were the circumstances, every challenge would only appear to be a challenge.

comment by Brillyant · 2014-01-21T18:13:58.623Z · LW(p) · GW(p)

I have to think about the first set of questions.

To the last two:

If potential new experiences are created faster than you can experience them, does the problem remain?

This is the concept I've read which makes me wonder if supply of novelty might always be able to exceed demand.

Also, why would immortality mean that you can't lose? You can't lose your life, but you can lose money, once-in-a-lifetime experiences, etc.

I'm not smart enough to think through what "money" would mean in an economy where immortality is available. As for once-in-a-lifetime experiences, they'd necessarily be cheapened. What was once a once-in-a-million-lifetimes experience would become bound to happen eventually. One-in-a-billion odds would mean nothing.

By "can't lose" I meant that you couldn't "bet" your life on things (i.e. invest your time) since you have an inexhaustable source of time. Playing games and winning is fun because losing is an option. Winning is meaningless without losing.

Death isn't the end of the world in an objective sense, but it prevents you from enjoying anything you value - so, if someone said that they'd put you in a steel box with a lifetime supply of oxygen, food, and sleep medication (if you want to take it), then shoot you into space, would you object to that? It would be similar to death because the world would go on, but you wouldn't be able to enjoy it.

I mean death isn't the end of anyone else's conscious experience. It does end yours. As Hitchens said, "It isn't that the party is over. Rather, the party will continue, and you've been asked to leave." This end of my personal conscious experience is only as big as my ego makes it. It really isn't that big a deal.

Replies from: blacktrance
comment by blacktrance · 2014-01-21T18:30:26.877Z · LW(p) · GW(p)

I'm not smart enough to think through what "money" would mean in an economy where immortality is available.

Suppose tomorrow a philanthropist introduces a free shot that grants people immortality for as long as they want it - so the world would be the same as it is today, except everyone is immortal. Why would money disappear? People would still want goods and services, and having a medium of exchange would still be convenient.

What were once a once-in-a-million-lifetimes experiences would be become bound to happen eventually.

The birth of any particular person would only happen once - so the birth of your child would still be a once-in-a-lifetime experience. You'd only see them grow up once. You'd only be able to meet someone for the first time once. Etc.

As for investing your time, you could still do that even if you have infinite time. If something goes wrong, you may get another chance at it (if it's not a once-in-a-lifetime experience) but in the meantime, things could be quite unpleasant. If you gamble your house away, you could probably live long enough to make enough money to buy another house, but in the meantime you would have lost something, and that would be bad.

I mean death isn't the end of anyone else's conscious experience. It does end yours.

Yes, and the end of your conscious experience prevents you from enjoying anything ever again. Isn't that an enormous loss?

Replies from: Brillyant
comment by Brillyant · 2014-01-21T19:31:55.908Z · LW(p) · GW(p)

Suppose tomorrow a philanthropist introduces a free shot that grants people immortality for as long as they want it - so the world would be the same as it is today, except everyone is immortal. Why would money disappear? People would still want goods and services, and having a medium of exchange would still be convenient.

I don't know. An economy implies some scarcity. I suppose it would depend on what was required to remain immortal. Would we need some currency to pay for future injections or upgrades? In that sense, would we even be immortal? Wouldn't we still be in a similar survival mode to the one we are in now? Would only the rich truly be immortal while the poor had the potential, but not the ongoing means, for eternal life?

The birth of any particular person would only happen once - so the birth of your child would still be a once-in-a-lifetime experience. You'd only see them grow up once. You'd only be able to meet someone for the first time once. Etc.

Birth of immortal children? You could have a billion of them. You'd meet new people an infinite number of times if you wanted.

As for investing your time, you could still do that even if you have infinite time. If something goes wrong, you may get another chance at it (if it's not a once-in-a-lifetime experience) but in the meantime, things could be quite unpleasant. If you gamble your house away, you could probably live long enough to make enough money to buy another house, but in the meantime you would have lost something, and that would be bad.

I think we have very different views about a future that includes immortality. Probably my lack of imagination.

Yes, and the end of your conscious experience prevents you from enjoying anything ever again. Isn't that an enormous loss?

No. Death is neutral. The idea that life is necessarily optimal is simply hardwired into your animal-nature. If it were otherwise, you wouldn't have made it this far. This was my point originally on this thread. The "lifeism" on LW, as I'll call it, is weird to me. Life is really cool and I hope it continues for a while for me, but I do not view my death as "bad".

Cryonics and the desire for immortality in transhumanism circles remind me very much of my background in Evangelical Christianity. Nowhere else have I seen such irrational fear of death (i.e. fear of no longer experiencing consciousness).

Replies from: blacktrance
comment by blacktrance · 2014-01-21T20:10:13.642Z · LW(p) · GW(p)

An economy implies some scarcity.

There would still be scarcity. Perhaps you would no longer need to eat or drink to live, but you would still want to do it for enjoyment from time to time, and resources would still be limited. If you want to buy a house, there is still a limited number of houses and good places to live. You'd want to be protected from criminals, so you'd want to pay for police and courts, etc. If you wanted to live out in the street without any clothes, you could do that for as long as you wanted, but if you want more than that, you'd run into scarcity much like you do today.

Birth of immortal children? You could have a billion of them. You'd meet new people infinite times if you wanted.

But they'd be unique children, and their birth would be a unique event. Just like now you could theoretically have 20 kids, but a moment with each would be unique.

The idea that life is necessarily optimal is simply hardwired into your animal-nature.

Sure, but why does that make it wrong? I like sweet and fatty food for evolutionary reasons too - does that mean that if it's a result of evolution, my preference is wrong? But in the case of life, it goes beyond just being hardwired - enjoying good things is good, and death prevents that, so logically death is bad.

Replies from: Brillyant
comment by Brillyant · 2014-01-21T20:31:08.470Z · LW(p) · GW(p)

I think we have a profoundly different expectation for what a future with sufficient technology for immortality might be like. It seems you think it will look a lot like 2014, but with immortality. I'd imagine it will be basically unrecognizable compared to our current world, so much so that it is relatively useless to speculate about the details.

I also think you are discounting how powerful an aspect of experience novelty can be, and therefore how trivial giving birth to your 100th, let alone 10,000th child might be.

As far as taking pleasure from your animal-nature: cool. It isn't bad. I'd argue your desire for life is fundamentally identical to your love of fatty food. It serves an ultimate purpose (pass on the replicators), and simply basing your (would-be eternal) existence on the hedonistic side-effects of these sorts of drives will lose its appeal over the course of eons.

The idea that death is bad 'cuz good stuff could be happening in our potential lives after we are gone is not compelling to me at all. It's an opportunity cost argument, right? Okay. Except you cease to exist in this particular case.

Anyway, good to chat with you. I'd love to hear any other thoughts you have. At this point, I'm tapping, as I don't have anything else to say in this discussion with you.

Replies from: blacktrance
comment by blacktrance · 2014-01-21T20:44:44.076Z · LW(p) · GW(p)

A last word from me as well, then.

I agree that a world in which immortality would be possible would look very different from today's, but that's because the development of immortality would require technological advances (maybe nanotechnology) that would change the world by themselves even if they didn't lead to immortality. Immortality by itself wouldn't make the world look that different, though - funeral homes would go out of business, and maybe hospitals as well, but other than that, it wouldn't make a huge difference.

I think novelty is highly overrated as a source of value. Certainly, it's nice to play a good new game or something like that, but as far as possible sources of value, it's quite low on the list. Regarding having children, the parent-child bond is part of human nature, so I don't think it'll ever become trivial. As for hedonism, I don't expect it to ever lose its appeal, especially as new fun things are created.

It's an opportunity cost argument, right? Okay. Except you cease to exist in this particular case.

Yes, it is an opportunity cost, and the fact that you wouldn't exist is the problem, because it means that instead of getting something positive, you'd be getting nothing.

comment by byrnema · 2014-01-22T04:42:55.142Z · LW(p) · GW(p)

I don't think this question is a good way to investigate feelings about immortality and death.

This is somewhat related to Yvain's post about liking versus wanting / The Neuroscience of Pleasure.

While we're alive, we want to keep on living. I recall moments -- locked away for the moment, unreachable -- when the idea of death caused feelings of intense terror. But one can also recognize an immutable biological component to this (immutable unless one is depressed or in pain, etc.). To circumvent this immediate biological feeling about death, it is better to try to perceive, counterfactually: if you were already dead, would you care? I think it is interesting that the answers are different if we're discussing tomorrow, or 100 years from now, or 100 years ago. (Tut recently shared this quote from Mark Twain.)

Replies from: TheOtherDave
comment by TheOtherDave · 2014-01-22T14:34:04.585Z · LW(p) · GW(p)

Sure, I recognize that there are all kinds of feelings one can have about immortality and death that are not captured, or even necessarily relevant, to one's choices about living and dying.

I'm interested in the choices, and the factors that contribute to those choices, so I asked about them.

Others are of course welcome to investigate other things however they consider best.

Replies from: byrnema
comment by byrnema · 2014-01-22T15:32:33.387Z · LW(p) · GW(p)

I'm interested in the choices, and the factors that contribute to those choices, so I asked about them.

If you are specifically interested in the contexts of a person deciding that they do wish, or do not wish, to continue living in the current moment, then my comment wasn't relevant.

However, I interpreted your question as a Socratic challenge to realize that one values immortality because they do not wish to die in the present moment. (I think these are separate systems in some sense, perhaps far versus near).

Replies from: TheOtherDave
comment by TheOtherDave · 2014-01-22T17:49:48.682Z · LW(p) · GW(p)

Yeah, I often get misinterpreted that way.
Relevant earlier exchange here.

Any suggestions you have about how I could have worded my question to make it clearer that I was actually interested in the answer are welcome.

Replies from: byrnema
comment by byrnema · 2014-01-22T22:00:12.278Z · LW(p) · GW(p)

I understood (and my perspective changed quite a bit) as soon as I read about Miller's Law in the exchange you linked. I really like having a handle for the concept (for my own sake, at least; its usefulness is curbed by its not being well-known).

I believe the default interpretation of the question you asked is the interpretation that I had (that you were using the Socratic method). The reason this is the default interpretation is that there is an obvious, intuitive answer. (This question was a good counter-argument, which is why I think it was up-voted.)

... to deflect this interpretation, your question could be worded to be less obvious, and allow more nuance. Perhaps, "If you could remain healthy indefinitely, do you expect you would ever choose to die?", or, "If you could remain healthy indefinitely, for which conditions would you ever choose to die?"

Replies from: TheOtherDave
comment by TheOtherDave · 2014-01-22T22:49:53.672Z · LW(p) · GW(p)

(nods) Yeah, that last one would have been a good alternative, in retrospect. I got there eventually but could have gotten there sooner. (The other one is a fine question, but I already had the answer.)

Though I suspect that it, too, would have been understood as Socratic in the closed-ended sense.

comment by Vivificient · 2014-01-21T17:38:28.810Z · LW(p) · GW(p)

Upvoted for providing a clear counterexample to Yvain's assertion that people would find immortality to be "surely an outcome as desirable as any lottery jackpot".

This suggests that a partial explanation for the data is that "experienced rationalists" (high karma, long time in community) are more likely to find immortality desirable, and so more likely to sign up for cryonics despite having slightly lower faith in the technology itself.

comment by Scott Alexander (Yvain) · 2014-01-25T17:12:20.910Z · LW(p) · GW(p)

This whole article makes a sleight of hand assumption that more rational = more time on LW.

Not particularly. If we found that people who spend more time at church are more likely to believe in Jesus, one possible explanation (albeit not proven to be causal) is that going to church makes one believe in Jesus. Likewise, if we find that people who spend more time on Less Wrong are more likely to take a strange idea seriously, one possible (unproven, but reasonable to hypothesize) explanation is that going to Less Wrong makes one more likely to take strange ideas seriously.

Although it's perfectly reasonable not to want to sign up for cryonics (and I haven't signed up myself), the higher estimated probability of success but lower signup rate among newcomers, versus the lower estimated probability of success and higher signup rate among veterans, suggests the variable changing is "taking ideas seriously"; this is orthogonal to whether you should or shouldn't want to sign up for cryonics.

(unless your claim is that veterans are more anti-deathist than newbies, which would also explain the data and should probably be tested on the next survey. But I think my point that the higher signup rate among veterans does not mean they are more credulous, but rather reflects a change in thought process, still stands)

"Rationalist" here is used to mean "exposed to rationalist ideas", not "is a rationalist person". I realize that's confusing but I don't have better terminology.

Replies from: Brillyant
comment by Brillyant · 2014-01-25T23:08:54.173Z · LW(p) · GW(p)

Although it's perfectly reasonable not to want to sign up for cryonics (and I haven't signed up myself)

Would you please explain your rationale?

"Rationalist" here is used to mean "exposed to rationalist ideas", not "is a rationalist person". I realize that's confusing but I don't have better terminology.

I understood, and then used, "rationalist" to mean "having an accurate map of the territory". I'd agree that exposure to LW helps eliminate some biases and, in that way, constitutes rationalist training that improves one's rationality. I'm not yet willing to say Less Wrong = More Right in every case, however.

Maybe more time on LW leads to improved rationality... up to the point where it doesn't? I find that the dogmatic-ish acceptance of certain ideas around here reminds me of religion. It is funny to me that you used that example...

Replies from: memoridem
comment by memoridem · 2014-01-25T23:14:45.240Z · LW(p) · GW(p)

I find that the dogmatic-ish acceptance of certain ideas around here reminds me of religion

Did you actually look at the statistics? Whatever dogma you're seeing isn't there. It's more likely you're thinking some people you've had discussions with here are more representative of LW than they actually are.

Replies from: Brillyant
comment by Brillyant · 2014-01-25T23:24:53.253Z · LW(p) · GW(p)

As in the church, dogma doesn't need widespread acceptance among adherents of a particular faith in order to be dogma.

What matters far more to establishing dogma is having de facto authorities and/or high-status leaders accept it and voice their support.

Replies from: memoridem
comment by memoridem · 2014-01-25T23:46:54.121Z · LW(p) · GW(p)

Doesn't this apply to any system where power is tilted and the high status members have ideologies? Should we call them all religions?

Replies from: Brillyant
comment by Brillyant · 2014-01-26T00:23:07.930Z · LW(p) · GW(p)

I suppose this happens in the way you note. I don't advocate labeling LW, or anyone else, a religion. I just meant to say certain aspects remind me of religion. Other aspects are nothing like religion.

I don't think cryonics is impossible. In fact, I'm probably in the proto-rationalist group that doesn't really understand the science but thinks it has a high probability of working someday. I just don't understand why it is so appealing.

The dogma seems to be "cryonics and the option of indefinite life extension are good" more than "cryonics is possible".

Replies from: None
comment by [deleted] · 2014-02-04T04:17:01.002Z · LW(p) · GW(p)

It may not be a religion but it sure as anything embraces a particular mythology.

comment by Bugmaster · 2014-02-12T00:20:12.763Z · LW(p) · GW(p)

I agree with you that the article engages in sleight of hand; however, I disagree with you regarding immortality.

While I do believe that "living longer (assuming high levels of physical and mental health) is always better" is too strong a statement, I would argue that "having the choice to live as long as you want is always better" is much closer to the truth.

There are many projects that I will leave unfinished when I die; many things I will never get to experience. If I had the choice to live long enough to finish everything I wanted to do, I would gladly take it. I fully expect that, by the time I'm done with all that stuff, I'll find a lot more stuff that would require even more of my time -- but I could be wrong, in which case I'd want the option to end my life voluntarily.

I fully accept that there exist people for whom 80 or so years (or fewer) would be enough. Perhaps they lead much more efficient lives than I do, or perhaps they lack imagination or curiosity, or perhaps their lives are so terrible that death would come as a welcome release. But I have difficulty believing that the majority of people are like that. I'm pretty average, so it seems more likely that most people are like me.

comment by christopherj · 2014-01-25T02:51:12.328Z · LW(p) · GW(p)

I've yet to hear one compelling argument for why hyper-long life = better.

It makes dying an optional choice, rather than an inevitable necessity. Talk to an 80 year old person about the "joys" of aging -- any proper immortality means that you don't age. With a longer lifespan, people will tend toward a long term view (at least a little). You can enjoy more things, or accomplish more things, with a longer life.

Even people who have said they'd rather die than live as an invalid almost always change their tune when they become an invalid -- so why should I believe that you'd rather die than live as a healthy man in the prime of life? Go ahead, research this one thing.

If, as you fear, immortality drains motivation, the immortals will be out-competed by the mortals, so the world won't be harmed. And remember also that full immortality means finding a way around the laws of thermodynamics and the death of the universe -- "forever" might necessarily be limited to a few billion years.

Replies from: Brillyant
comment by Brillyant · 2014-01-25T04:50:33.915Z · LW(p) · GW(p)

It makes dying an optional choice, rather than an inevitable necessity.

Yes.

Talk to an 80 year old person about the "joys" of aging -- any proper immortality means that you don't age. With a longer lifespan, people will tend toward a long term view (at least a little). You can enjoy more things, or accomplish more things, with a longer life.

Okay. Eliminating aging and all the negatives involved with it makes sense.

Even people who have said they'd rather die than live as an invalid almost always change their tune when they become an invalid -- so why should I believe that you'd rather die than live as a healthy man in the prime of life? Go ahead, research this one thing.

I'm not sure what research you think I should do. I accept that many circumstances we can imagine are much different when we actually have to deal with them in the present reality.

If, as you fear, immortality drains motivation, the immortals will be out-competed by the mortals, so the world won't be harmed.

I'm not worried about the world being harmed by immortality, per se. I suppose there are lots of interesting implications that would arise, but I'm not concerned.

And remember also that full immortality means finding a way around the laws of thermodynamics and the death of the universe -- "forever" might necessarily be limited to a few billion years.

Sure. That makes sense.

With a longer lifespan, people will tend toward a long term view (at least a little). You can enjoy more things, or accomplish more things, with a longer life.

This seems to be the argument. I don't find it compelling at all. Can you help me understand why "tending toward a long view" is valuable? And how is accomplishing and enjoying more things always good indefinitely? I think enjoying and accomplishing things is cool, but I'd imagine there are diminishing returns on almost anything.

I'm hearing... "death is obviously bad and the only way you could disagree is because you are biased" and "more years can equal more utilons".

Am I off base? How?

Death is just the end of your conscious experience. You won't know you're dead. Life is cool, but it isn't as if the stakes on the table are life or eternal torture. That would be a HUGE problem worth freezing bodies or severed heads over.

Replies from: christopherj, memoridem
comment by christopherj · 2014-01-29T07:10:07.239Z · LW(p) · GW(p)

This seems to be the argument. I don't find it compelling at all. Can you help me understand why "tending toward a long view" is valuable? And how is accomplishing and enjoying more things always good indefinitely? I think enjoying and accomplishing things is cool, but I'd imagine there are diminishing returns on almost anything.

A longer term view is valuable because it would decrease things like "it's OK to pollute, I'll be dead by the time it gets bad".

I'm hearing... "death is obviously bad and the only way you could disagree is because you are biased" and "more years can equal more utilons".

It's just that many of us don't see any benefit to involuntary death. (Voluntary death also remains unpopular, even in surprisingly bad circumstances.) In fact, I don't know of any product which is marketed as being superior to another product due to having a shorter lifespan ("Because our product will cease to function unexpectedly, you can enjoy it more now before it does!"), while things like "lifetime guarantee" are routinely praised as positive. I mean, for houses, tools, toys, vehicles, pet animals, longer lifespan == better, and I don't see why it should be different for my children.

As a thought experiment: most people would, if they could, take a pill that eliminated the effects of aging but caused multiple organ failure at about their original life expectancy. You seem to agree that aging is inconvenient, so I assume you'd take this pill. Would you?

But what if that pill also extended your lifespan indefinitely, as well as curing aging? Not true immortality, of course, since your body would still be susceptible to disease and accident, but it would mean that every year you're as likely to die as you were last year, i.e., your chance of dying doesn't increase with age. Now, there are a lot of people who say death is a good thing. In the interest of pleasing these people, while also providing the elimination of aging, scientists develop a second substance, which causes multiple organ failure at about your expected lifespan. By combining this substance with the immortality pill, they create a cure for aging that does not have immortality as a side-effect. Which of these pills would you prefer, or would you reject both?

Now, if you're not a consequentialist, the second pill no doubt seems to carry the stigma of suicide, even though its effects are identical to the previous example, which perhaps seemed both positive and non-suicidal. This stigma would vanish, even if the pill were identical, if the pills had been developed in reverse order, with the immortality pill being a refinement of the anti-aging pill to remove a substance that causes eventual multiple organ failure. Or perhaps simply the existence of both options would make them both repugnant to you, one because it stinks of suicide, and the other because you don't want immortality?


On a different note, there are in fact some legitimate advantages of death by limited lifespan, and some that might be considered both advantageous and disadvantageous. A limited lifespan allows for permanent retirement. Solving death would be a huge problem for the politicians who have to kick people off retirement, with a risk that they'd rather go bankrupt than anger our elderly. A huge chunk of our taxes are estate taxes (aka the "death tax"). Death is a great equalizer: it will eliminate any specific tyrant and any specific individual who is accumulating "too much" wealth. Making death technically not inevitable would decrease our courage to do dangerous or violent things, such as soldiering, volunteering to test drugs, or violent or non-violent resistance to a corrupt regime. The combination of an immortal tyrant with decreased opposition from internal resistance or external liberators is particularly worrisome. An unlimited lifespan will increase procrastination. Death eliminates old people set in their ways from positions of power and authority, making way for new ideas. Death makes all your problems go away or become someone else's problems. With limited lifespans, you won't outlive your friends by more than ~100 years. Even with all that, there's an equally impressive list for the benefits of a longer lifespan, plus I can point to about 7 billion people who think living is better than dying.

Replies from: Brillyant
comment by Brillyant · 2014-01-29T15:06:52.629Z · LW(p) · GW(p)

As a thought experiment: most people would, if they could, take a pill that eliminated the effects of aging but caused multiple organ failure at about their original life expectancy. You seem to agree that aging is inconvenient, so I assume you'd take this pill. Would you?

I think I would, yes.

But what if that pill also extended your lifespan indefinitely, as well as curing aging?

I don't think so, no.

stigma of suicide

More than a stigma, suicide is very consequential. It's a deep trauma for many people surrounding the victim. I think it is a net zero for the victim, however.

Overall, I still see the main argument as this: if you lived forever, you'd be able to accumulate unlimited fuzzies and utilons, and that is objectively better than fewer fuzzies and utilons. Therefore, death is bad.

I'm not "for" death. But life is an accident, an unintended side-effect of physical laws and processes. While I think you point out some good practical examples of advantages and disadvantages for the option of immortality, I sense my objections are of a bit different sort.

We, as living things, have evolved to fight to live, and live to fight. You want to live because nature has designed you to want to live. That is it. We glean some pleasure and meaning in the process of fighting/living/surviving, and that is cool. I sense the novelty of this will run out eventually.

Death isn't a problem. If future AI finds a way (and have some reason) to keep humans alive and torture them for eternity, then that is a big problem.

People all over the globe starving and enduring suffering via war, disease, etc. is a problem.

Aging leading to Alzheimer's, et al, is a problem.

Death is empty of any value. It is neither good nor bad. It isn't a problem unless ego makes it one.

Replies from: hyporational
comment by hyporational · 2014-01-29T15:50:03.034Z · LW(p) · GW(p)

More than a stigma, suicide is very consequential. It's a deep trauma for many people surrounding the victim. I think it is a net zero for the victim, however.

It may be zero for you. If you think it should be zero for others too, I'd like to see some reasoning. That I can't experience death is obvious, but not convincing. If I only valued things I directly experience here and now, I don't think I could have any plans whatsoever. The fact that my death is a trauma for others also motivates me not to die. Doesn't it motivate you?

Replies from: Brillyant, Brillyant
comment by Brillyant · 2014-01-29T16:42:13.083Z · LW(p) · GW(p)

The fact that my death is a trauma for others also motivates me not to die. Doesn't it motivate you?

Absolutely. But that is a completely separate issue.

However, I sense that is related to some of what is happening when people speak about death in regard to opportunity costs. When mourning the loss of a younger person, it is common to hear people say "S/he had so much potential that now is lost." I've said that before.

But what are we really saying? What did somebody who is no longer conscious or aware in any way really "lose"?

In reality, and as you point out, we the still living are the ones who are losing something. We lose a friend or a family member. We may lose a bit of motivation when confronted with that reminder of our eventual mortality. Or maybe we lose some peace of mind (or gain some anxiety) for the same reason.

It isn't my argument that death isn't bad for those who keep living. I would argue their loss would be mitigated if (a) the associated negative aspects of death (pain, trauma, aging, disease, etc.) were eliminated and (b) they meditated on the actual, practical implications of death for the deceased.

comment by Brillyant · 2014-01-29T16:09:36.230Z · LW(p) · GW(p)

I'm not sure what would convince you or count as proper reasoning.

It's basically an opportunity cost argument that is being made against death. That's fine with me. I guess there isn't much I can do to rebut that.

Opportunity costs only seem to make sense in terms of their effect on our current conscious experience.

If I chose not to travel after college when I was single and unattached, I might regret that now that I'm settled down and don't possess that opportunity (or the memories and life experience I would have gained) given my current commitments and obligations.

If I chose not to invest in Google when I had the cash to do so, I rue that decision, since buying that stock would have contributed to all sorts of potential good things in my present and future, and would make me feel less anxious right now.

But death negates all such considerations. If I were to snap my fingers and you and I were dead, opportunity costs would be practically absurd to speak of in our cases.

I concede that the math works for immortality -- A years x B utilons/fuzzies per year = Total Awesomeness. As long as B is positive, maximizing A always increases awesomeness, and making A infinite leads to infinite awesomeness.

If I only valued things I directly experience here and now, I don't think I could have any plans whatsoever.

It is interesting you phrase it that way. I'll ask in the spirit of Eckhart Tolle: Is there some way you know of to experience value in things outside of here and now?

Replies from: hyporational
comment by hyporational · 2014-01-29T16:19:48.011Z · LW(p) · GW(p)

Opportunity costs only seem to make sense in terms of their effect on our current conscious experience.

Are you sure the problem isn't unusual use of language?

Is there some way you know of to experience value in things outside of here and now?

I meditate regularly, so I know what you mean, and the answer to your question, with that intended meaning, is no. I also think this kind of question with this particular intended meaning is an abuse of language, and insisting on using common language to describe the insights you've gained through meditation mostly yields nonsense. When people say future they mean future, not the present moment, and if you insist otherwise you lose information.

I value things not here and now all the time. They're just not yet here and now and don't necessarily have to ever be.

Replies from: Brillyant
comment by Brillyant · 2014-01-29T16:40:46.443Z · LW(p) · GW(p)

Are you sure the problem isn't unusual use of language?

Please say more about this.

I value things not here and now all the time. They're just not yet here and now and don't necessarily have to ever be.

I don't understand. When is it that you find value in them? In what way can you experience anything, in the future or the past, outside of the present moment? Can you "value" something without "experiencing" it? If so, how do you define the distinction?

Replies from: hyporational
comment by hyporational · 2014-01-29T16:57:46.937Z · LW(p) · GW(p)

Please say more about this.

I'll do so tomorrow with a fresh brain. I find my chances of communicating anything useful poor, though. Specialized vocabulary would be nice. I suppose mindful religions have that; too bad it's buried in religious scripture.

comment by memoridem · 2014-01-25T06:42:34.904Z · LW(p) · GW(p)

If everyone were immortal and healthy by default, do you think it would even occur to you to suggest death as a harmless alternative?

If someone tried to convince you that a 50 year lifespan is better than what we have now, what would be your reaction? Don't you find it interesting that your intuitions support a very narrow optimum that just happens to be what you already have?

Do you argue that "death is just the end of your conscious experience" in the case of anyone who dies prematurely? Try to imagine actual deaths in real life and their outcomes.

Have you read this fable by Bostrom?

Replies from: Brillyant
comment by Brillyant · 2014-01-25T15:45:27.264Z · LW(p) · GW(p)

If everyone were immortal and healthy by default, do you think it would even occur to you to suggest death as a harmless alternative?

Good question. I'd suggest death is a harmless alternative; it would simply be analogous to other actual, literal, harmless alternatives. (Also, I notice you are conflating non-healthiness and mortality.)

If a reality like death didn't exist, I guess it would be like any other non-existent, yet imaginable state. In fact, death is a state of non-existence, it is imaginable, and it is harmless.

If someone tried to convince you that a 50 year lifespan is better than what we have now, what would be your reaction?

Most arguments for which exact lifespan is better would seem arbitrary to me. I can see some merit to a lifespan that allowed you to have kids, or grandkids. Maybe a lifespan where you reached full, mature adulthood makes some sense. But 50 years, 100 years, 1000 years... arbitrary.

Don't you find it interesting that your intuitions support a very narrow optimum that just happens to be what you already have?

Yes, very interesting. Though it is also your intuition, and intuition generally, that opposes (and fears?) death so intensely. It is part of our eons-evolved programming. This death-avoidance intuition exists so that we will be best equipped as vehicles for the replicators we carry. That is all it was designed for. The fact that you are arguing for some intrinsic value to indefinitely extended consciousness, beyond its instrumental value as a tool of the replicators, is simply a glitch; a side-effect of the necessary importance every surviving organism and species must attach to surviving.

Do you argue that "death is just the end of your conscious experience" in the case of anyone who dies prematurely? Try to imagine actual deaths in real life and their outcomes.

I don't "argue" it. That seems tacky, since I would be arguing only with the deceased friends or loved ones... since the deceased themselves would be...dead.

I do, however, think it is a helpful meditation to ponder the implications of death, immortality, etc. I read and discuss my understanding of Buddhism with lots of people (these, for example), and I find explorations to better understand the human desire for permanence and striving for lasting satisfaction to be very insightful and helpful.

From your cited fable...

Stories about aging have traditionally focused on the need for graceful accommodation. The recommended solution to diminishing vigor and impending death was resignation coupled with an effort to achieve closure in practical affairs and personal relationships. Given that nothing could be done to prevent or retard aging, this focus made sense. Rather than fretting about the inevitable, one could aim for peace of mind.

Today we face a different situation. While we still lack effective and acceptable means for slowing the aging process, we can identify research directions that might lead to the development of such means in the foreseeable future. “Deathist” stories and ideologies, which counsel passive acceptance, are no longer harmless sources of consolation. They are fatal barriers to urgently needed action....

...The argument is not in favor of life-span extension per se. Adding extra years of sickness and debility at the end of life would be pointless. The argument is in favor of extending, as far as possible, the human health-span. By slowing or halting the aging process, the healthy human life span would be extended. Individuals would be able to remain healthy, vigorous, and productive at ages at which they would otherwise be dead.

I did not read the whole fable, though I skimmed it (I get it, I think) and read the moral of the story.

What I notice is that the author appears to be conflating the nasty parts of aging with death. They are not at all the same. They are not the same problem, and they should not be confused.

I am 100% for bringing about technologies that eliminate gratuitous suffering. That includes much of what happens as we humans age. People often end up in horrible mental and physical states for years, or decades, near the end of their lives. I am all for getting rid of Alzheimer's, for instance. And, as a personal example, my grandmother spent the last eight years of her life effectively paralyzed and unable to speak due to a series of massive strokes -- I am 100% for technology that would make this never happen to anyone ever again.

None of that has anything to do with the end of a human's localized, meat-computer-generated conscious experience. Healthiness does not = no death.

I love that he called it "Deathism", the "ideologies that counsel passive acceptance". I've often thought the sort of "stay alive at any cost" thinking I often encounter on LW could be appropriately labeled "Lifeism", and now I feel validated for thinking so.

Let me ask: Can you imagine any scenario, say, a billion years into your life, when you might opt for permanently switching off your consciousness (i.e. death)? Why or why not? What would be different at one billion years vs. one million? One million vs. 100,000? 100,000 vs. 10,000? (I'm not asking rhetorically...)

Replies from: memoridem, DefectiveAlgorithm
comment by memoridem · 2014-01-25T22:08:17.461Z · LW(p) · GW(p)

Are you sure you didn't think you were replying to someone else? You made a lot of false assumptions about my mindstate.

I'd suggest death is a harmless alternative

So what has made you decide to live so far?

Also, I notice you are conflating non-healthiness and mortality

I combined two situations because I thought that would be more acceptable to you. That doesn't mean I'm conflating them. I do think there are good deaths and bad immortalities.

Most arguments for which exact lifespan is better would seem arbitrary to me.

If I couldn't think of any interesting long term goals, I would have to agree. If that's not how you mean it, then I don't understand what you mean by arbitrary.

Though it is also your intuition, and intuition generally, that opposes (and fears?) death so intensely

It's a value, and yes it's programmed by the blind idiot god called evolution, but my core values don't go away if I just think about them hard enough and why should they?

This death-avoidance intuition exists so that we will be best equipped as vehicles for the replicators we carry. That is all it was designed for

Why exactly does it matter why the value is there? It wasn't designed for anything or by anything. It just is: the genes were selected for, and thus they are. Genes have goals no more than they can plan, and even if they did, I'd have no reason to privilege them. Evolution is an unplanned process not optimizing anything in particular; how could it possibly glitch, and why should I care?

"stay alive at any cost" thinking

Not my thinking.

Can you imagine any scenario, say, a billion years into your life, when you might opt for permanently switching off your consciousness

Any situation where my future could be expected to be net negative. Of course I can't imagine such a scenario specifically, as I can't reliably imagine what life is like even 20 years from now, so the extra years add nothing to the scenario. I can think of several situations that would make me end my life right now or a few years from now.

Replies from: Brillyant
comment by Brillyant · 2014-01-25T23:19:52.269Z · LW(p) · GW(p)

You made a lot of false assumptions about my mindstate.

Sorry.

So what has made you decide to live so far?

I'm alive. It is my default state.

I combined two situations because I thought that would be more acceptable to you. That doesn't mean I'm conflating them. I do think there are good deaths and bad immortalities.

I'm talking about (1) aging and disease and suffering vs. (2) death. They have zero to do with one another and should not be combined in this discussion.

If I couldn't think of any interesting long term goals, I would have to agree. If that's not how you mean it, then I don't understand what you mean by arbitrary.

Please give me an example of a long-term goal that would require 10 billion years. How about 1 billion? 1 million?

It's a value, and yes it's programmed by the blind idiot god called evolution, but my core values don't go away if I just think about them hard enough and why should they?

Why exactly does it matter why the value is there? It wasn't designed for anything or by anything. It just is: the genes were selected for, and thus they are. Genes have goals no more than they can plan, and even if they did, I'd have no reason to privilege them. Evolution is an unplanned process not optimizing anything in particular; how could it possibly glitch, and why should I care?

It does affect me quite a bit to know why my instincts and drives exist. Maybe it does nothing for you. Okay. That is interesting.

Replies from: memoridem
comment by memoridem · 2014-01-25T23:34:03.639Z · LW(p) · GW(p)

I'm alive. It is my default state.

Stop eating. Let's see how default it is.

They have zero to do with one another and should not be combined in this discussion.

If that's how you want to have your definitions, I can live with that.

Please give me an example of a long-term goal that would require 10 billion years. How about 1 billion? 1 million?

No need for that. Just always have plans for tomorrow.

It does affect me quite a bit to know why my instincts and drives exist. Maybe it does nothing for you. Okay. That is interesting.

Why/how they exist and what they exist for are different things. Conflating the two just leads to confusion in this case, because the "what for" doesn't exist.

Replies from: Brillyant
comment by Brillyant · 2014-01-26T00:18:59.595Z · LW(p) · GW(p)

Stop eating. Let's see how default it is.

I meant only that I am alive, and I see no reason that death is preferable at this point.

If that's how you want to have your definitions, I can live with that.

There is a difference beyond definitions here. We may have different definitions of death -- I think it is the end of individual consciousness. But the suffering caused by aging and disease is separate from any definition of death. It is an important distinction that oftentimes goes overlooked.

No need for that. Just always have plans for tomorrow.

Fighting to live; living to fight. I see this as a hamster wheel. It has some novelty, but I see no need to prolong it indefinitely. Or, if it can be prolonged, it shouldn't be at the top of the list of problems facing humanity/the universe.

Why/how they exist and what they exist for are different things. Conflating the two just leads to confusion in this case.

I'm not sure I understand what your point is.

I'm tapping out of our conversation now. I'd be pleased to hear any responses you have.

Replies from: memoridem
comment by memoridem · 2014-01-26T07:20:37.961Z · LW(p) · GW(p)

I meant only that I am alive, and I see no reason that death is preferable at this point.

This could easily describe my preferences as well. Perhaps we just have different thresholds for logging out.

But the suffering caused by aging and disease is separate from any definition of death.

I fully agree with this distinction, but it doesn't matter much to my preferences. I think permanent cessation of consciousness is bad. Some things in life are worse, though, and could override this preference. Outcomes that we value don't have to be directly experienced, and death is no exception. For example, I don't have to experience pain to want to avoid it. In addition, living is instrumental to most of my goals.

It has some novelty, but I see no need to prolong it indefinitely.

I'm not bored yet. I can't imagine how I could be. I wouldn't choose immortality without the option of death, however, for various reasons. My ability to make long term plans will increase with technology. I might have million-year plans, but can't imagine what they could be. Imagination is a very limited tool.

I'm not sure I understand what your point is.

You seemed to think we exist for our genes. This is simply wrong. Evolution explains how we came to be, not what for. Cryopreserving some of your cells in a jar or backing up your sequenced genome in the cloud might maximize your genetic fitness but would feel strangely unsatisfying, don't you think?

comment by DefectiveAlgorithm · 2014-01-25T15:53:29.273Z · LW(p) · GW(p)

'Let me ask: Can you imagine any scenario, say, a billion years into your life, when you might opt for permanently switching off your consciousness (i.e. death)? Why or why not? What would be different at one billion years vs. one million? One million vs. 100,000? 100,000 vs. 10,000? (I'm not asking rhetorically...)'

Yes*. And I can imagine it at one million as well, and 100,000, and 10,000. What I can't do is know a priori which it'll end up being, and I certainly wouldn't want the decision to be made for me.

*Well, maybe. This actually might be one of the few scenarios in which I'd voluntarily undergo wireheading (as an alternative to death).

comment by Nornagest · 2014-01-21T16:22:42.244Z · LW(p) · GW(p)

This whole article makes a sleight of hand assumption that more rational = more time on LW.

Yvain isn't talking about rationality, he's talking about membership in a rationalist group. (He says "training", but he's looking at time and status in community, not any specific training regime.) That "-ist" is important: it denotes a specific ideology or methodology. In this case, that's one that's strongly associated with the LW community, so using time and karma isn't a bad measure of one's exposure to it.

Myself, I'd be interested to see how these numbers compare to CFAR alumni. There's some overlap, but not so much as to rule out important differences.

Replies from: V_V, Brillyant
comment by V_V · 2014-01-21T17:11:00.240Z · LW(p) · GW(p)

I dislike this usage, and in fact I find it offensive.
Even with the "-ist" appended, it's an appropriation of a term that has a general meaning of "thinking clearly", which gets redefined as a label of membership in a given community.

Replies from: Nornagest
comment by Nornagest · 2014-01-21T19:37:29.550Z · LW(p) · GW(p)

Personally, I'm more bothered by the fact that it shares a name with an epistemological stance that's in most ways unrelated and in some ways actually opposed to the LW methodology. (We tend to favor empiricist approaches in most situations.) But that ship has sailed.

comment by Brillyant · 2014-01-21T17:04:05.493Z · LW(p) · GW(p)

Yvain isn't talking about rationality, he's talking about membership in a rationalist group.

My understanding is that one's rationality (or ability to be rational) would increase as a result of participation in rationalist training. Hence, I see your distinction, but little, if any, difference.

In this case, he assumes (1) LW is rationalist and (2) LW is good at providing training that makes a participating member more rational.

Karma does not necessarily have anything to do with rationality, being rational, rationalist training, etc. It is a point system in which members of LW give points to stuff they want more of. It has also been used as a reward for doing tasks for LW for free, in mass blocks of downvoting against dissenting political views, and as a reward for filling out the survey we are talking about in this post.

Replies from: TheAncientGeek, MugaSofer
comment by TheAncientGeek · 2014-01-21T17:41:32.166Z · LW(p) · GW(p)

In this case, he assumes (1) LW is rationalist and (2) LW is good at providing training that makes a participating member more rational.

...(3) No one turns up as a newbie at LW having already learnt rationality.

comment by MugaSofer · 2014-01-21T17:39:15.223Z · LW(p) · GW(p)

My understanding is that one's rationality (or ability to be rational) would increase as a result of participation in rationalist training.

That is, in fact, the question Yvain is discussing.

comment by philh · 2014-01-21T14:19:29.533Z · LW(p) · GW(p)

Proto-rationalists thought that, on average, there was a 21% chance of an average cryonically frozen person being revived in the future. Experienced rationalists thought that, on average, there was a 15% chance of same. The difference was marginally significant (p < 0.1).

Both of these numbers are higher than I would have expected, and I'd say they at least weakly support the claim "rationalists are gullible, but experienced rationalists are less gullible than proto-rationalists".

Out of curiosity, I took an average in decibels instead of percents, for people with > 1000 karma. Leaving out two people who gave 100% (really?), and four who gave 0%, if I did the calculations right we get -12 dB ≈ 5.5%. (The method is sketched below.)

(But since 100 and 0 are -/+ epsilon, this might not accurately reflect beliefs.)

(I didn't check time in community, and of course I don't have the full dataset, but the percent average was 14.1 instead of 15, so the results probably don't change much.)

(I might also check whether the 100%ers were trolls, but if we look more closely for trolls among people who profess silly beliefs...)
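For concreteness, here is a minimal sketch of the kind of averaging philh describes, assuming the Jaynes-style convention in which a probability is scored as decibels of its odds, 10·log10(p/(1−p)). The response values below are hypothetical stand-ins, not the actual survey data:

```python
import math

def prob_to_db(p):
    # Decibels of evidence (Jaynes convention): 10 * log10 of the odds.
    return 10 * math.log10(p / (1 - p))

def db_to_prob(db):
    # Invert: decibels -> odds -> probability.
    odds = 10 ** (db / 10)
    return odds / (1 + odds)

# Hypothetical cryonics-probability answers, as fractions. Answers of
# exactly 0 or 1 must be dropped first, since their log-odds are infinite.
answers = [0.05, 0.10, 0.15, 0.02, 0.30]
mean_db = sum(prob_to_db(p) for p in answers) / len(answers)
print(round(mean_db, 1), round(db_to_prob(mean_db), 3))  # ~ -10.1 dB, ~0.089
```

Dropping the 0% and 100% answers is not optional here: their log-odds diverge, which is why philh excluded them before averaging.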

Replies from: jkaufman
comment by jefftk (jkaufman) · 2014-01-23T23:31:11.930Z · LW(p) · GW(p)

Does averaging in decibels give you a geometric mean? I think it does, in which case it's a better average to be taking here.

Replies from: VAuroch
comment by VAuroch · 2014-01-24T01:02:07.182Z · LW(p) · GW(p)

Arithmetic mean of the logs is the log of the geometric mean, yes.
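For reference, the identity being invoked, for positive values $x_1, \dots, x_n$:

$$
\frac{1}{n}\sum_{i=1}^{n}\log x_i = \log\left(\prod_{i=1}^{n} x_i\right)^{1/n}
$$

One caveat worth noting: since decibels here are a fixed multiple of the log of the odds, the decibel average corresponds to the geometric mean of the odds (mapped back to a probability at the end), not the geometric mean of the raw probabilities.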

comment by ChrisHallquist · 2014-01-21T06:13:10.523Z · LW(p) · GW(p)

You say there were 93 proto-rationalists; I'm curious to know how many experienced rationalists there were.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-01-21T06:19:01.028Z · LW(p) · GW(p)

and 134 experienced rationalists

Right there in the same sentence. :)