Here is a rough cost-benefit analysis. I found all the numbers before doing any maths and tried to be as optimistic as possible towards your theory to account for people more intelligent than me coming up with better ways to do it if it became standardised.
The cost of freezing a cell is proxied as the cost of freezing your eggs, which is already commercially available. It is £2900 for the initial harvest and £275 for each subsequent year of storage. This £275 is not discounted because you pay it every year. Source. Let's assume that the clinic are nice and let you freeze as many different organ-cells as you want for the same £275 / year fee.
There are 56,000 people in the EU who are on the organ donor waiting list, suggesting there are around 56,000 people who have a sufficiently chronic failure of an organ that it didn't kill them outright but will kill them if they don't get a transplant. Many of these 56,000 people will get an organ through conventional means, but let's say none of them do to account for the fact that as medicine advances we will probably be able to transfer more people off the 'going to die' list onto the 'might be able to survive if given a transplant' list. Let's also assume that this represents 56,000 new people on the organ transplant list each year, even though the average wait on the list is around three years. Source (also I found a dodgy-looking source suggesting that 50,000 people die of organ failure each year, so the figure for the number who could benefit is probably not out by more than an order of magnitude). There are 500m people in the EU, so your chance of being on the organ transplant list in any given year is about 0.01%.
Same source as above suggests the average QALY gain for an organ transplant is 11.5 for a liver transplant, 6.8 for a heart transplant and 5.2 for a lung transplant. Let's assume all transplants give you the full 11.5 QALYs because medicine improves.
Finally let's assume you freeze your cells now at 25 with the expectation that they will be used in 40 years at 65. You only get one shot at doing this; you may never freeze your cells again because they degrade too much on your 26th birthday.
This means in total you pay £13,900 for a 0.01% chance at 11.5 additional QALYs, which works out at around £10.8m per expected QALY. That is to say, if you would pay £10.8m for an additional year of life at the margin, this is probably worthwhile (given some very optimistic assumptions). However, this is only true if you get one shot to freeze your cells at 25. If instead you can wait until you need them and freeze them then, you'd only be paying something like £250 / QALY. You can see from these two numbers that even if cells degrade in quality to the extent that they are a thousand times harder to successfully transplant, you are still better off waiting until you are pretty sure you are in a high-risk group for organ failure. Anyone who believes £10.8m / QALY is a good deal should also be prepared to accept a salary cut of around £115,000 / year in order to avoid a 20-mile commute, since road vehicles have a fatality rate of something like 1.5 per billion miles travelled.
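For anyone who wants to poke at the assumptions, here is a minimal sketch in Python reproducing the back-of-the-envelope arithmetic above; all inputs are the optimistic figures from the preceding paragraphs rather than real actuarial data.

```
# Back-of-the-envelope cost per QALY for the 'freeze your cells at 25' strategy.
harvest_cost = 2900            # £, using egg freezing as a proxy
storage_per_year = 275         # £ per year, undiscounted
years_frozen = 40              # freeze at 25, use at 65

total_cost = harvest_cost + storage_per_year * years_frozen       # £13,900

p_on_transplant_list = 56_000 / 500_000_000   # ~0.011% chance in a given year
qaly_gain = 11.5                              # best-case (liver) figure

expected_qalys = p_on_transplant_list * qaly_gain
print(f"£{total_cost:,} buys {expected_qalys:.5f} expected QALYs "
      f"≈ £{total_cost / expected_qalys / 1e6:.1f}m per QALY")

# Waiting until you actually need the cells: ~£2,900 with near-certain use.
print(f"≈ £{harvest_cost / qaly_gain:.0f} per QALY if you wait")
```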
I really enjoyed the article, but I think your argument falls down in the following way:
1) Fission / fusion are the best energy sources we know of, but we can't yet do it for all forms of matter
2) A sufficiently clever and motivated intelligence probably could do it for all forms of matter, because it looks to be thermodynamically possible
3) (Implicit premise) In between now and the creation of a galaxy-hopping superintelligence with the physical nous to fuse / fission at least the majority of matter in its path, there will be no more efficient forms of energy discovered
4) Therefore paperclips (or at least something that looks enough like paperclips that we needn't argue)
Premise 1 is trivially true, premise 2 has just enough wild speculation to make it plausible but still exciting, and the conclusion is supported if premise 3 is true. But premise 3 looks pretty shaky to me - we can already extract energy from the quantum foam and can at least theoretically extract energy from matter-antimatter collision (although I don't know if thermodynamics permits either of these methods to be more efficient than fusion). It is a bold judgement to suppose we are at the limits of our understanding of these processes, and bolder still to assume there are no further processes to discover.
They are slightly different, but in practical terms they describe the same kind of error; sensitivity and specificity are properties of a test, while Type I and II error rates are properties of the test as applied to a particular population. Both are basically saying, "Our test is not perfectly accurate, so if we want to catch more people with a disease we need to misdiagnose more people."
To illustrate the distinction, consider a test which is 90% sensitive and 90% specific in a population of 100 where a disease has a 50% prevalence. This means 50 people have the disease, of which the test will identify 45 as having the problem (90% sensitive). 50 people are free of the disease, of which the test will correctly identify 45 (90% specific). So if diagnosed, your probability of the diagnosis being a Type I error is 5/50 = 10% (if given the all-clear the same logic applies for a Type II error). You derive this from the number of people in the population who were told they have the disease but were incorrectly diagnosed, divided by the total number of people who were told they have the disease (rightly or wrongly).
But if the disease prevalence changes due to demographic pressure to 10%, then 10 people have the disease of whom 9 are diagnosed, and 90 people are disease-free of whom 81 are given the all-clear. This means the probabilities of the different 'Type' errors change dramatically; now 9/18 = 50% for a Type I error and 1/82 ≈ 1.2% for a Type II error. But the sensitivity and specificity of the test are completely unchanged.
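A quick sketch of the same numbers in Python, in case it helps; the function name and layout are mine, not standard terminology.

```
def error_rates(population, prevalence, sensitivity, specificity):
    """Return (P(false alarm | positive result), P(missed case | negative result))."""
    diseased = population * prevalence
    healthy = population - diseased
    true_pos = diseased * sensitivity       # correctly diagnosed
    false_neg = diseased - true_pos         # missed cases
    true_neg = healthy * specificity        # correctly given the all-clear
    false_pos = healthy - true_neg          # misdiagnosed
    return (false_pos / (true_pos + false_pos),
            false_neg / (true_neg + false_neg))

print(error_rates(100, 0.5, 0.9, 0.9))   # (0.1, 0.1)      -> 10% and 10%
print(error_rates(100, 0.1, 0.9, 0.9))   # (0.5, ~0.0122)  -> 50% and ~1.2%
```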
I agree with everything you've said, but I would point out that I already allow myself to be tracked by Google, so the true cost is only the difference between the 'badness' of Google and Microsoft.
Don't worry about the tone. Opportunity cost sits in that hinterland where it is too complicated to explain in one sentence to someone who doesn't already get it, but too fundamental not to talk about, so it is very difficult to judge tone when you're not sure whether you can assume familiarity with economic concepts.
It sounds to me like we basically agree - the cost of switching search engine is ten minutes (assumption) and this pays off about 50 cents a day forever (assumption). This makes cutting off the analysis at one year arbitrary, which I agree with. You also have to compare the effort you put into searching with anything else you could do with that time (even if you would have been doing those searches 'naturally') in order to calculate the opportunity cost correctly.
I think we disagree on the final step - if this is to be ineffective you need to be able to find an activity which is a better use of my time than conducting those daily searches. Since my primary contribution to charitable causes is from my salary, and I use a lot of Google in my job (I would be fired if I didn't do internet searches because I would be totally ineffective) I can't think what else I should be doing - what is a better use of my time than doing those searches? Assume we're only interested in maximising my total charitable giving.
Why not? Genuine question, because my job is pretty much nothing but cost-benefit analysis of fixed-cost projects which pay off a small amount every year, and the calculation you describe as 'not how you do cost-benefit analysis' is almost exactly how I would do a cost-benefit analysis of this kind (although I wouldn't phrase it as $1200 / hour because that's clunky, I'd probably talk about a percentage return on investment). I rate the probability that I am wrong here as extremely small, but if I am wrong I really need to hear it, and have the problem explained to me.
If I were totally committed to getting the most accurate answer I'd add a couple of complications that katydee doesn't. For example:
I would discount against the possibility that GoodSearch no longer exists in 2040 (and discount future earnings more generally),
I would try to estimate a probability that I'd need to reconfigure my settings in the future, and estimate timings for that (if both Chrome and GoodSearch are still around in 30 years I'd be surprised!)
But ignoring these complications for the moment, I think it is totally accurate to say in 2040, "I have donated $16,000 to charity since 2014". I think the $1200/hour rhetoric is unhelpful, because it implies that you could earn another $1200 by working another hour, when in fact you never actually earn $1200 in an hour (well, I suppose you do after about a decade) and you can only earn the money over a very long period of time. I would probably describe it as a 'Nominal Rate of Return of X% per annum over Y years', compared to a nominal rate of return of almost zero percent per annum if I don't search with GoodSearch (it is possible my advertising generates a multiplier effect which has a tiny positive externality on me, but I'd expect this to be almost totally negligible).
Where is the error in my reasoning?
I think there is a problem in how we are using the word 'ineffective'. I think you are using the word to mean 'very small absolute amounts' whereas I am using the word to mean something like 'low opportunity cost : return ratio'. I think looking at the cost : return ratio is fairer, and I also think 'very small absolute amount' is misleading.
I did 46 searches on my work computer yesterday, and probably a handful more on mobile devices. Say 50 for the sake of argument, or $0.50. Over a year this is about $130 if I make no searches at weekends. I agree with katydee that it would probably take about ten minutes to configure my search settings, so I am better advised to spend ten minutes configuring my search settings than to work a marginal ten minutes and donate the proceeds, so long as my salary is less than about $800 an hour. Based on the 2013 LessWrong Survey, around 1500 LessWrong users have a salary of less than $800 an hour, so if they all configured their internet settings in that way they would raise about $200,000 for a charity of their choice. I can't find an official budget for MIRI, but I'd estimate that it is somewhere between $0.5m and $1m per year, so that's a pretty meaningful amount.
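The arithmetic behind those figures, as a rough sketch (the inputs are the assumptions from the paragraph above, not measured data):

```
searches_per_day = 50          # weekday searches, rounded up from 46
dollars_per_search = 0.01      # donated per search
weekdays_per_year = 260

annual_donation = searches_per_day * dollars_per_search * weekdays_per_year
print(f"${annual_donation:.0f} per year")                          # ~= $130

setup_hours = 10 / 60          # ten minutes of configuration
print(f"beats marginal work below ~${annual_donation / setup_hours:.0f}/hour")  # ~= $780

users = 1500                   # LessWrong users under that salary threshold
print(f"${annual_donation * users:,.0f} per year across those users")           # ~= $195,000
```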
Another interpretation of your argument is that it is inefficient to do needless searches on GoodSearch to earn money for charity. This is a good argument; if it takes (say) five seconds to do a search then you are better off working a marginal hour as long as your salary is above $7.20 per hour. But I don't think anyone is actually arguing that - the idea is just to monetise searches you would be making anyway, since recapturing a small (non-zero) fraction of your personal contribution to advertising is strictly better than recapturing none of it.
I don't understand why you think it is a grossly ineffective way to earn money for charity. Please could you explain why you think this is?
I think - though I'm not certain - that you are right that in all but the most marginal seats you'd never find a politically unaffiliated group, passionate about a single political issue, large enough to swing a seat. I don't think that implies that you shouldn't try to game democracy though - there are certain known flaws in the democratic system we have which exist (and swing elections) independently of whether people knowingly exploit them or not.
I think that's certainly an interesting idea - NHS homeopathy could be even cheaper than what is currently provided (commissioning the services from a private homeopathy provider) because we could do it in bulk - the raw ingredients aren't expensive at all. I'd worry about the indirect cost of moving the Overton Window though - at the moment we STRONGLY advise people not to use homeopathy even for trivial conditions, and we mock those that promote it. Even so, many people still use it and swear by its efficacy. If we moved to a situation where we promoted homeopathy for minor conditions and gave its practitioners the stamp of NHS/Government approval, we would see many more people using it for minor conditions and - I would expect - some people begin to use it for major conditions. Thus the money we save on prescribing a placebo over an active drug might be sucked up by the cost of treating the complications of people who take homeopathic treatments to manage AF and then get a massive stroke, for example.
But I think as a matter of principle we should set the level of homeopathy at whatever maximises the number of healthy life-years per unit of spending, even if that is not zero.
I certainly don't disagree with your analysis, but I think I might not have been clear enough with the endgame of this potential strategy; I don't think this is a good strategy to succeed as a minor party, because no matter how virtuous you make transhumanism sound, people are always going to care more about the economy or defence. But I think you can probably find enough people who care more about transhumanism than they do about the marginal difference between the economic policy of the two main parties. So the 'transhuman' party will never get off the ground, but it may have enough power to swing a marginal seat for one of the two main parties, in exchange for agreement to vote a certain way on a certain issue.
Whether or not you could parlay that into a successful minor party is a much harder question!
I think both of you are incorrect. This leverages a specific flaw in the FPTP system - one which gives a small, tightly coordinated group in a swing seat a disproportionate amount of power - and it wouldn't work in a PR system. Insofar as both political parties and lobby groups can exist in a PR system, this cannot be either of those things, since it could not exist in a PR system.
More specifically, it is not a political party because (amongst other things) it has no general platform and does not seek to acquire power. It is also not a lobby group because it doesn't really 'lobby' in any meaningful sense to get the law changed. I think the example of the NRA is a red herring - it is hard to believe the NRA is well-enough coordinated to get a large number of its members to vote for a party they don't like. Do you have any evidence they have ever been successful at swinging a seat in this way?
I don't disagree that I was grasping at straws for some of the more outlandish suggestions, but this was deliberate - to try and explore the full boundaries of the strategy space. So I take most of your criticism in the constructive spirit in which it was intended, but I do think maybe you are a bit confused about 'philosophical preservation' (no doubt I explained it very badly to avoid using the word 'religion'). My point is not that you convince yourself, "I will live forever because all life is meaningless and hence death is the same as life"; it is that you find some philosophical argument that indicates a plausible strategy and then pursue that strategy. A simple example would be that you discover an argument which really genuinely proves Christianity offers salvation and then get baptised, or prove to your satisfaction that the soul is real and then pay a medium to continue contacting you after you die. Again, I agree this is outlandish, but there must be something appealing about the approach because it is unquestionably the most popular strategy on the list in a worldwide sense.
I didn't know that. Fair enough - it seems likely 'signal preservation' is much more costly than I originally realised and not worth pursuing (I think the likelihood of revivification is the same as or better than cryonics, but the cost in terms of hours spent tapping at a keyboard is basically more than any human could pay in one lifetime).
This is an excellent comment, and it is extremely embarrassing for me that in a post on the plausible 'live forever' strategy space I missed three extremely plausible strategies for living forever, all of which are approximately complementary to cryonics (unless they're successful, in which case, why would you bother?). I'd like to take this as evidence that many eyes on the 'live forever' problem genuinely do result in a utility increase, but I think the more plausible explanation is that I'm not very good at visualising the strategy space!
I basically agree with you that the strategy seems pretty unlikely. But I think you are over-harsh on it; you don't need to reconstruct the entire brain, just the stuff that deals with personal identity. If you can select from any one of thirty keys on your keyboard then every ten letters you type can take around 10^15 different values (roughly 49 bits of entropy), so it seems possible that if somebody knew absolutely everything about the state you were in when typing they could reconstruct you just from this. You are also not restricted to tapping away randomly - I suspect words or sentences would leak far more than pseudorandom tapping. At any rate, this strategy is almost free, so you'd need astonishingly good reasons not to attempt it if you plan on attempting cryonics.
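A one-line check of the entropy figure (the 30-key alphabet is the assumption from the comment above):

```
import math

sequences = 30 ** 10             # ~5.9e14 possible ten-keystroke sequences
bits = 10 * math.log2(30)        # ~49 bits, assuming keys were chosen uniformly
print(f"{sequences:.2e} sequences, {bits:.1f} bits of entropy")
```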
I think those reasons exist (I'm skeptical the information would survive) but I don't think the theory is quite as much in the lunatic fringe as you do.
I think the plausibility of the arguments depends in a very great part on how plausible you think cryonics is; since the average estimate on this site is about 22%, I can see how other strategies which are low likelihood/high payoff might appear almost not worth considering. On the other hand, something like 'simulationist' preservation seems to me to be well within two orders of magnitude of the probability of cryonics - both rely on society finding your information and deciding to do something with it, and both rely on the invention of technology which appears logically possible but well outside the realms of current science (overcoming death vs overcoming computational limits on simulations). But simulation preservation is three orders of magnitude cheaper than cryonics, which suggests to me that it might be worthwhile to consider. That is to say, if you seriously dismissed it in a couple of seconds you must have very, very strong reasons to think the strategy is - say - about four orders of magnitude less likely than cryonics. What reason is that? I wonder if I assumed the simulation argument was more widely accepted here than it actually is. I'm also a bit concerned about this line of reasoning, because all of my friends dismiss cryonics as 'obviously not worth considering', and I think they adopt this argument because the probabilistic conclusions are uncomfortable to contemplate.
With respect to your second point, that this post could be counter-productive, I am hugely interested by the conclusion. A priori it seems hugely unlikely that with all of our ingenuity we can only come up with two plausible strategies for living forever (religion and cryonics), and that each of those strategies would be anathema to the other group. If the 'plausible strategy-space' is not large I would take that as evidence that the strategy-space is in fact empty, and that people are just good at aggregating around plausible-but-flawed strategies. Can you think of any other major human accomplishment for which the strategy-space is so small? I suspect the conclusion is that I am bad at thinking up alternative strategies, rather than that the strategies don't exist, but it is an excellent point you make and well worth considering.
I'm not sure I agree with your analysis of the first - it is reasonable to assume that when a person generates pseudorandom noise they are masking a 'signal' with some amount of true randomness; we don't know enough to say for absolute certain that the input is totally garbage, and we have good reason to believe people are actually very bad at generating random numbers. Contrast that with - for example - the fact that we have pretty good reasons to think that bringing someone back from the dead is a hard project, and I don't think you're applying the same criteria fairly across preservation methods.
This is very true. I agonised about including a 'Structure your life in such a way that you minimise the probability of a death which destroys your brain' option, but decided in the end that a pedant could argue that such a change to your lifestyle might decrease your total lifetime utility and so isn't worth it for certain probabilities of cryonics' success.
I'm surprised nobody has posted about finding the speed of light with a chocolate bar and a microwave, because I find that absolutely mindblowing.
The basic experiment is to take the turntable out of the microwave and put in the chocolate, nuke it for a couple of seconds until part of the chocolate starts melting and then measure the distance between the melting patches. If you have a standard microwave, you'll be on a frequency of 2.45 GHz (you can check this online or in the manual). Multiply the distance between the spots by 2,450,000,000 (or whatever the frequency is) and then by 2 and you will end up with c, to within whatever accuracy you measured the melting spots.
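For reference, the arithmetic looks like this; the 6.1 cm spacing is just an illustrative measurement, not a claim about your microwave.

```
frequency_hz = 2.45e9        # typical domestic microwave frequency
spot_spacing_m = 0.061       # measured distance between melted patches (example)

# Melted spots sit half a wavelength apart, so c = 2 * spacing * frequency.
speed_of_light = 2 * spot_spacing_m * frequency_hz
print(f"{speed_of_light:.3e} m/s")    # ~2.99e8 m/s
```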
I guess if you were really skeptical you could say that you have no reason to believe that v = f * lambda, or that the manufacturers of microwaves or rulers were colluding to deceive you, but I think this is around the point where you can start claiming the evidence of your eyes is deceiving you and so on - too skeptical to add anything useful to the discussion.
While on the one hand I completely agree with you given your starting premises, I don't necessarily think we're in quite the zero information situation you describe. For example, it is pretty well accepted (even amongst people who don't think cryo will work) that simply freezing yourself without cryopreservant lowers your chance of revivification. This is a pretty important consensus since cryopreservant is highly toxic, but we extrapolate from current trends and conclude, "Curing poisoning is probably an easier task than reconstructing information destroyed by entropy, so I should adopt the 'cryopreservant' branch of strategy-space". This indicates we don't really have no information about the correct cryo strategy; though I totally accept your weaker claim that I seem to demand much MORE information than we can reasonably be expected to possess.
I think we're in a situation more like a friend ringing up and saying, "We're going to play Ticket to Ride tonight; it's like Monopoly only better". We don't have enough information to decide whether we want to be the top hat or the battleship (which is a meaningless question anyway since the answer is always 'top hat'), but we might have enough information to begin to say, "On my first turn I will study the layout of the board carefully (rather than act quickly)" and "I will attempt to remain on good terms with the other players insofar as they can hurt me and I cannot overwhelmingly hurt them" or even "It is unlikely this game will involve serious roleplay. I will not put on my robe and wizard hat". None of these are enough to guarantee a win, but neither are they trivial realisations; I think it is reasonable to believe probability theory, human nature and my own utility function will not change dramatically in the time it takes me to be revivified, so basing strategy on these characteristics seems worthwhile.
You could try signalling that unless they trade with you you'll put them at a disadvantage. Consider - "Player 1, both you and Player 2 want this property. This property for one of your properties is a fair swap that benefits both of us, but if you turn me down I'll trade it to Player 2 for their best offer, which benefits Player 2 a lot and me only a little. Player 2, if you don't give me even the small amount I ask for, I'll randomly give it to some other player." If you signal credibly (ie you actually do it if someone calls your bluff) then Player 1 should make the trade provided he values your property more than you knocking yourself out of the game (ie the trade really DOES have to be a fair deal - you can't just use this to up your bargaining ante).
Part of your argument could be the (truthful) observation that the losers in a trade aren't the people who made the less valuable trade, but the people who didn't trade at all - if you play at a very conservative table it might be in your interest to trade at a disadvantage and exploit the increased variance a monopoly gives you.
The major downside here is that many people don't play Monopoly to win, and strategies that optimise for winning using game theory and the like are seen as 'unsporting'. Sometimes you just need to accept that to keep your family happy you have to play Monopoly and get bored.
I'm not sure I completely agree with you, but I'd argue that is exactly the sort of discussion which I am surprised is not already happening. Consider:
- I should not make myself an appealing target for resurrection, because I am likely to receive the procedure in the most 'pre-alpha' form
versus
- I should make myself the most appealing target for resurrection possible; history shows that if a procedure is expensive or difficult (like going to the moon) it is usually only done infrequently until technology catches up with ambition. The longer I am frozen, the more chance something happens to catastrophically prevent my resurrection, so I desire to be revived in the first wave.
Alternatively
- Future society is likely to punish (or refuse to revive) those who were evil in this life, so I should only adopt strategies which reflect well on me
versus
- Future society is likely to reprogramme people who were evil in this life before reviving them, so I should maximise my chance of making it to future society by any means necessary; it won't affect my chances of revivification because almost every current human will need reprogramming before revival.
As it happens, my probability distribution over what future society looks like is a lot closer to yours than you might infer from the main body of the post, but the point remains: your belief about future society is hugely important in determining your optimal freezing strategy. This is why I say I am surprised there is not more discussion already happening; cryonics correlates well with people who have carefully considered the question of what future society is likely to look like, but it appears many people have not then made a link between the two sets of beliefs.
Personally, it seems like a pretty rational decision to me (excluding autopsy problems, which I talk about somewhere else). The reason I advise against it is because I don't believe anyone could possibly know their utility function - and life expectancy - well enough to make a sensible decision about when the right time was to begin the process. This is true even if you exclude the fact that there are good reasons to think that many people do not approach death rationally, and if you consider that an ostentatious decapitation would likely be distressing for those left behind (insofar as you care about the utility of people after your death).
But in a purely hypothetical case - where there was a bomb in my heart that was going to go off in ten seconds and I happened to be standing next to a big vat of cryopreservant - I would highly recommend freezing yourself before dying naturally.
I agree the fact that current cryonics practice is not optimised for revival is extremely strong Bayesian evidence (for me at least) that most cryonicists on these forums are considerably more likely to be signalling than rationally trying to live forever. I would add to that the well-known problem of 'cryo-crastinating', which is hard to explain if pro-cryo individuals are highly rational life-year maximisers, but extremely easy to explain if people are willing to send a 'pro-cryo' signal when it is free, but not when it is expensive.
On the other hand, I am convinced at least some cryonics advocates will find a discussion on cryonics strategy genuinely useful and important, and since I was reading about cryonics anyway I thought I would fill what I considered to be a gap in the debate landscape.
Unfortunately my friends would probably see winning too often as a good reason to collude against me. Although collusion would lower the average length of a game, it would probably raise the chance any individual friend wanted to play with me (because they would be winning more often, on average). But I agree with you that that's a strategy I hadn't considered, which is quite an oversight given the content of the post!
Khoth has correctly identified that surely the best strategy is to convince my friends to play a similar but superior game, although this isn't always possible. For example with the horse-traders I try to play Catan and with the roll-and-movers I play Pirates. Unfortunately if there are too many of both groups then the only thing they can compromise on is Monopoly, and I don't have the persuasive skills to overcome the inertia.
However the fact there are a whole bunch of superior games to Monopoly sort of breaks the analogy I was driving at so I left it out of the main body of the post.
You are completely correct about Alcor's FAQ:
While some communities have enacted legislation allowing suicide with the assistance of a physician, any such case almost certainly would be followed by an autopsy which would include dissection of the brain. For these reasons, and to protect ourselves from any accusation of conflict of interest, Alcor has a strict policy against advising any member to end life prematurely.
However this identifies exactly the point I was making; a rational discussion could be had about the risk of autopsy destroying the brain versus the benefit of being able to very tightly control your freezing. To expand on this: autopsy in the case of suicide is mandatory in the US to determine cause of death. I strongly suspect that if the average coroner comes across a headless frozen body full of toxic cryoprotectant they can determine the cause of death without destroying the brain, and if they can determine the cause of death without destroying the brain they might choose to respect the implicit wishes of the deceased and the explicit wishes of surviving relatives not to cut into the brain tissue. By contrast, freezing the brain 'in use' might increase the chance of survival. If an ostentatious suicide raises your chance of an autopsy by less than it raises your chances of revivification, it is a plausible strategy.
I'd add that Alcor has a very, very strong reason to advise members against suicide which members themselves do not have; Alcor can get sued for that sort of behaviour.
With respect to your other point - that these strategies are not novel - I can only agree that I would be surprised if I were the only person to have thought of them, but I did not come across any serious discussion of them even after some fairly committed googling. If somebody actively looking for discussion of these strategies can't find it, then someone interested in cryonics but paradigm-bound to ignore the possibility of other strategies will have even more difficulty; it is for those people this article is written.
One possible mechanism would be a general social shift towards more cryogenics, meaning cryo voters became an important voting bloc. Since most rational cryo-voters can be expected to be more-or-less single issue with respect to cryonics (almost nothing will increase your individual expected utility for a given level of money more than increasing your chance of being revivified), politicians will begin to face great pressure to appease this demographic. You'll see that this is different to the situation you describe for at least three reasons:
On those issues where the individual utility gain is greatest, the population is smallest (cures for very rare genetic conditions which are unaffordable to the average person and yet not subsidised by the government). This is probably because it is not in the interests of politicians to use political capital on a very small sub-section of the population.
On those issues where individual utility gain is small and populations are large, the individuals concerned are unlikely to be single-issue. For example public health measures undoubtedly raise my lifetime utility, but do they do so more than public education, public art or nebulous concepts like 'freedom'? Hard to say
On those issues where individual utility gain is large and populations are large, those populations are almost inevitably located in areas where US politicians have no incentive to help them. For example, campaigning to end malaria would be both massively important and affect a huge number of people, but those people would not be US voters.
If this social shift occurs, politicians may be incentivised to offer a 'government guarantee' to all frozen corpsicles, in the same way that mortgage lenders are government-backed or banks are unable to go bust in an uncontrolled way (deposits up to a certain value are protected). So it wouldn't so much be a 'not-death' right (because all three groups I describe above would still fail to be protected from death), but I was using it as a shorthand for the slightly more complex scenario I describe here.
I don't know how likely I think this scenario is, but I think if it is going to happen, it will happen before a post-scarcity society. In the interests of being charitable to the cryogenics companies, I think it is fair to point out that this is a mechanism that could greatly improve their chance of being revivified without any technological innovation.
I understand the concern about unpacking bias, and I read about a related experiment, also by Kahneman (I think), which elicited a higher probability when experts were asked to estimate the likelihood of a specific scenario (deflation of the rouble leads to a Soviet invasion of Germany and nuclear war) than a general scenario (nuclear war). So I would be cautious of handling an equation with multiple, obviously overlapping terms. I'll update the original post when I'm back at a computer to include a health warning in the first paragraph.
I don't think I fully understand the criticism of this piece though; are you saying the modelling approach is incoherent or simply cautioning people not to just plug it into the cryo-Drake equation without considering the unpacking bias?
Other relevant differences might be that humans are never allowed to just 'die' of making bad financial decisions in countries like America - if humans make really wild spending decisions the state will at least feed and house them.
Perhaps charities would be a better reference class? If anyone can find any data I'll happily rerun the analysis, but 'age charities' will give you charities concerned with age and 'life expectancy charities' will give you charities concerned with life expectancy; it could be a bit of a slog.
Be careful about reading too much into that - "Large enterprises, those with 250 or greater employment, accounted for only 0.4 per cent of all enterprises." according to the ONS. You'd expect to see 89.4% small companies by chance alone, although I concede that if a company is around for 100 years you might expect it to grow into a large company by inertia alone.
With respect to your other point, you are absolutely right - I wanted to show my working here to indicate how badly wrong back-of-the-envelope calculations can go in situations like this.
Although in reality it makes a big difference, in my model it does not - my model varies only the size of the company, since that's all I could find good data on. I found another source saying that the age of a company was about 30% more important in predicting its survival than its size, but because it was a complicated regression I was unable to exclude terms that had absolutely nothing to do with cryonics.
It is probable that you should shade the probability of Alcor surviving up and the probability of KryoRus surviving down to account for this.
Hi RolfAndreassen,
I'm impressed you spotted that so quickly, because it was non-obvious to me. Nevertheless, I did spot the problem you are describing and attempted to correct for it in the second graph by considering only companies which went into liquidation, using a proper academic source.
This seems an unfair response to me - TheAncientGeek offers a standard argument pro-AA while admitting they haven’t studied the issue in detail. You attack the response on grounds that an AA supporter could rebut without ever contradicting themselves (i.e. “It isn’t collective justice, it compensates for individual inequality of opportunity (unless, say, you choose to define a progressive income tax as ‘collective justice’ in which case I do support collective justice)”, “It applies to certain minorities and not others because of the size of the disopportunity facing them (discriminatory social structures don’t distinguish between recent immigrants and descendants of slaves, but they do appear to discriminate between black African and white Irish)” and “It isn’t economically inefficient, and might even be economically efficient”).
The next paragraph contains an argument for AA which I support which I think proves there is at least one rational argument for AA. If it is important to you, I can also defend my position to prove to you it is not obviously wrong (although I hope the argument alone will be enough). If there is a rational argument in favour of AA, then there must be at least one utility function that makes supporting AA rational (in the same way that a utility function which really REALLY values ants might rationally choose to try to ban glasses so children can’t use them to burn ants). I don’t agree with TheAncientGeek’s starting premise that we should therefore suppress research into race, but I think it is important you don’t base your conclusion on a faulty premise (“AA is obviously wrong”).
This 2005 paper published in The Journal of Economic Education gives the results of an experiment where participants were randomly assigned a colour (‘green’ or ‘purple’) and given the following information (I’m paraphrasing badly to ensure I remain brief, please consult the paper for the actual protocol): “You are allowed to get education, which costs £1. You then take a (simulated) test where your score is randomly picked from 1 to 100, but if you bought education the score will have a small bias towards the higher end. ‘Employers’ (other participants) will then choose whether to ‘employ’ you. They only know your colour and your test score. If they employ you, you get £5. If they don’t, you get £1. If an employer picks an individual with education, the employer gets £10, otherwise they get nothing.” I presume the experiment was then iterated an unknown number of times to prevent gaming, but I can’t find that in the paper. Clearly, the socially optimal outcome is that everybody gets education and the employers employ everybody. However, individuals can earn the full £5 rather than a net £4 by gambling on the employers being over-generous and picking them even though they didn’t get education.
By chance, the ‘purples’ happened to be under-educated in the first round, which meant some purples who had paid for education decided not to waste the money next round. This compounded the effect, to the point where new purples realised there was no point investing in education, and even the presence of some free-riding greens couldn’t stop employers betting on greens (even when the green score was lower than the purple score). If the society in the experiment were allowed to implement AA they would; it would be hugely more economically efficient to remove the pro-green bias, both encouraging purples back into education and forcing greens to keep up their initial levels of education rather than ‘free ride’. The experimental confirmation that AA can be economically efficient is reason enough to support such policies, but I think they would be more effective in the real world than in the experimental world; for example, two contradictory opinions are likely to lead to more economic progress than two homogeneous opinions, and this cultural bonus is not modelled in the original experiment.
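To make the worker’s incentive concrete, here is a toy payoff check using the figures quoted above (£5 if hired, £1 if not, education costs £1); the hire probabilities are illustrative assumptions of mine, not numbers from the paper.

```
def expected_pay(p_hired, buys_education):
    # Worker payoff: £5 if hired, £1 if not, minus £1 if they bought education.
    cost = 1.0 if buys_education else 0.0
    return p_hired * 5.0 + (1 - p_hired) * 1.0 - cost

# Assume education raises a favoured colour's hire chance from 0.5 to 0.9,
# but a written-off colour's only from 0.05 to 0.2.
print("favoured:   ", expected_pay(0.5, False), "vs", expected_pay(0.9, True))   # 3.0 vs 3.6
print("disfavoured:", expected_pay(0.05, False), "vs", expected_pay(0.2, True))  # 1.2 vs 0.8
```

Once employers have largely stopped hiring your colour, education no longer repays its cost, which is exactly the compounding effect the experiment found.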
In the particular case of the first problem there may be a shortcut that is worthwhile exploring. As I see it, your problem is that you would like to know how much leisure time to allocate to improve your productiveness (including the possibility of zero leisure time). The ‘improving of productiveness’ is the important goal to you, not the philosophical distinctions regarding the optimal work/life balance. Since productivity is something approximately measurable, you yourself can optimise over this domain.
With that in mind, you can perform an experiment on yourself. Start by allocating an amount of leisure time you think is excessive, but not wildly so. You want to pick a number that means you will be completely relaxed when you attempt to perform productive work, such that your productivity is ‘100%’ (however you want to define that). When I was at university I needed between one and two hours a day, now I work a full-time job I need closer to three. I’d suggest based on my experience alone that two hours a day would be a good starting point. Force yourself to have this much leisure (but to optimise substantially, train yourself that activities like exercise, cooking and meditation are pleasurable). If you find yourself worrying about not benefitting the future, think to yourself, “I am currently engaged in an experiment on myself, the results of which could make me substantially more productive for the rest of my life. It is highly unlikely that the insights of a marginal two hours’ work will benefit the future more than the insights from this experiment.”
After the end of your first week of this, reflect on whether you think you could reduce the number of leisure hours you spent and maintain your productivity. In particular, you should reflect on whether you can achieve the same level of fun in a shorter space of time, or whether you can decrease your marginal fun without decreasing your productivity. For example, would a ten minute break each hour be more refreshing than a two-hour game of Civ? Reduce your leisure hours by a small fraction of their total value; maybe schedule ten minutes less leisure next week. Repeat. It is important you don’t decrease fun too quickly or too sharply; you need to have a slow-ish period of optimising your fun.
Eventually, you will come to the point where you cannot possibly decrease fun without cutting into productivity. Here you want to make the decreases in your scheduled leisure time much shorter, and try to track more closely the impact they have on your productivity, such that you can identify the point where a marginal minute would be better spent resting than working. Remember that productivity isn’t simply the ability to churn out mediocre code with few errors, but the possibility to have a ‘brainwave’ and capitalise on it. After all, Friendly AI only needs to be solved once! Personally, I think I would happily take an extra half-hour out of my day if it meant I could guarantee I would be working perfectly productively for the rest of that day, but if your cost (in time) is high for a marginal unit of productivity you might differ.
This puts me in mind of a thought experiment Yvain posted a while ago (I’m certain he’s not the original author, but I can’t for the life of me track it any further back than his LiveJournal):
“A man has a machine with a button on it. If you press the button, there is a one in five million chance that you will die immediately; otherwise, nothing happens. He offers you some money to press the button once. What do you do? Do you refuse to press it for any amount? If not, how much money would convince you to press the button?”
This is – I think – analogous to your ‘siren world’ thought experiment. Rather than pushing the button once for £X, every time you push the button the AI simulates a new future world and at any point you can stop and implement the future that looks best to you. You have a small probability of uncovering a siren world, which you will be forced to choose because it will appear almost perfect (although you may keep pressing the button after uncovering the siren world and uncover an even more deviously concealed siren, or even a utopia which is better than the original siren). How often do you simulate future worlds before forcing yourself to implement the best so far to maximize your expected utility?
Obviously the answer depends on how probable siren worlds are and how likely it is that the current world will be overtaken by a superior world on the next press (which is equivalent to a function where the probability of earning money on the next press is inversely related to how much money you already have). In fact, if the probability of a siren world is sufficiently low, it may be worthwhile to take the risk of generating worlds without constraints in case the AI can simulate a world substantially better than the best-optimised world changing only the 25 yes-no questions, even if we know that the 25 yes-no questions will produce a highly livable world.
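To illustrate, here is a crude Monte-Carlo sketch of the button-pressing framing; every number in it (the siren probability, the utility scales, the honest appearance of ordinary worlds) is an illustrative assumption rather than anything taken from your post.

```
import random

P_SIREN = 0.001     # chance any given press uncovers a siren world (assumed)
TRIALS = 5000

def expected_true_utility(presses):
    total = 0.0
    for _ in range(TRIALS):
        best_apparent, best_true = float("-inf"), 0.0
        for _ in range(presses):
            if random.random() < P_SIREN:
                apparent, true = 100.0, -100.0          # siren: looks perfect, is awful
            else:
                true = random.uniform(0.0, 100.0)       # ordinary world
                apparent = true                          # assume it looks like what it is
            if apparent > best_apparent:                 # you implement what looks best
                best_apparent, best_true = apparent, true
        total += best_true
    return total / TRIALS

for n in (1, 10, 100, 1000):
    print(n, round(expected_true_utility(n), 1))
# With these numbers the expected outcome improves with more presses at first,
# then collapses once a siren has almost certainly been drawn.
```

The optimum number of presses then depends entirely on how common sirens are and how quickly ordinary worlds stop improving, which is the trade-off described above.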
Of course, if the AI can lie to you about whether a world is good or not (which seems likely), or can produce possible worlds in a non-random fashion, increasing the risk of generating a siren world (which also seems likely), then you should never push the button, because of the risk that you would be unable to stop yourself implementing the siren world which would – almost inevitably – be generated on the first try. If we can prove the best-possible utopia is better than the best-possible siren even given IC constraints (which seems unlikely), or that the AI we have is definitely Friendly (could happen, you never know… :p ), then we should push the button an infinite number of times. But excluding these edge cases, it seems likely the optimal decision will not be constrained in the way you describe, but will instead be an unconstrained but non-exhaustive search – a finite number of pushes on our random-world button rather than an exhaustive search of a constrained possibility space.
Hi elharo,
Your criticism is absolutely correct; not all writers are novelists, so even if novelists show the income variation I assert, that wouldn’t show up in the BLS statistics. I think that shows that my illustrative example is flawed, but I hope it doesn’t undermine the main conclusion too much.
Ah yes, this makes a lot of sense and explains my earlier confusion; although it may still be true that there is a high variance in income between novelists, not all writers are novelists (for that matter, I suppose not all novelists are writers, at least as far as the BLS will bin them). I think that indicates my illustrative example is flawed, although I hope the wider point still stands.
These findings might also be useful for choosing between high-variance and low-variance careers, insofar as you are able to predict how much better at each of these skills you are than average.
For example, engineering is a field where most people earn a decent wage and some people earn a very decent (but not obscene) wage. I think the average salary of an engineering graduate is about $58,000, with the vast majority made up by people working in the $30,000-$50,000 band, a couple of hotshots pulling down six-figure salaries, and almost nobody grinding out a subsistence wage. By contrast, writing books is a field where one or two people become megastar billionaires like J K Rowling or John Grisham and everybody else earns practically nothing. Pretty amazingly (I think it’s amazing) the Bureau of Labor Statistics reckons the average wage for a writer is almost exactly the same as the average wage of an engineer ($56,000), but it seems likely to me the median salary is much lower than in engineering (maybe something like $15,000-$25,000), with the mean heavily skewed by the handful of super-rich authors.
What this suggests is that you can – on a probabilistic basis – determine whether you are likely to end up at the top or bottom of the income distribution of your chosen profession. If you know your political knowledge and skills are weak, it would be a good idea to pick engineering over writing (assuming you are equally good at both) because you are more likely to end up earning $30,000 than $100,000. If you are excellent at office politics you are much more likely to form the sort of connections that give you a bestseller, and so writing might offer you the highest expected earnings even though the average salary in both professions is (nearly) identical.
This is complicated by the fact that ‘political knowledge and skills’ are not consistent between careers (the most insensitive politician is likely to be far more manipulative than the smoothest political operator in database administration), but I think it is probably possible to allow for this and still have more information about career choice than you did before.