Rational retirement plans

post by Ik (ik-kyeong-jin) · 2023-05-15T17:49:13.860Z · LW · GW · 17 comments

Are you in your 20s-40s and diligently saving for your retirement plan? You might want to reconsider your strategy.

Saving money with the intention of spending it in 20+ years will return (almost) nothing.

Superintelligence is likely to arrive within the next 20 years, probably sooner.

  1. If superintelligence ends up being detrimental to humanity (doom), saving money is a waste.

  2. What if we can align superintelligence with our values?

Technological advances will dramatically increase the utility you can buy per dollar. If superintelligence drives exponential growth in utility per dollar, the marginal value of a dollar will eventually collapse.

Happiness has been shown to increase with income up to a certain threshold (roughly $200K per year today), beyond which the effect tends to plateau. That threshold will get ridiculously small as utility per dollar explodes (imagine $1/month buying everything you need). That is, anyone will be able to enjoy a quality of life comparable to that of the wealthy.

Whatever your life expectancy, plan only for the next 20 years. Beyond that, your dollars will return (almost) nothing.

What do you think?

17 comments


comment by Adam Zerner (adamzerner) · 2023-05-15T19:42:04.840Z · LW(p) · GW(p)

I think this is an important question and am glad to see it brought up. I think it is a question that requires a good amount of Taking Ideas Seriously [? · GW].

Zvi discusses it here [LW · GW]. My interpretation/summary of his answer: "You should still save up money because there are some future scenarios where having a pile of money will be useful." I would have liked to see more elaboration. What future scenarios? How likely are they? What about the question of retirement?

Personally, here's how I think about it.

  1. If we have a FOOM:
    1. One possibility is that we're in a utopia and one's savings don't matter.
    2. One possibility is that we're in a dystopia and one's savings don't matter.
    3. One possibility is that we're in some sort of world where one's savings do matter.
    4. One possibility is that we're in some sort of world where one's savings don't matter.
  2. If we don't have a FOOM:
    1. One possibility is that we're in a utopia and one's savings don't matter.
    2. One possibility is that we're in a dystopia and one's savings don't matter.
    3. One possibility is that we're in some sort of world where one's savings do matter.
    4. One possibility is that we're in some sort of world where one's savings don't matter.

I'm not sure how likely each of these scenarios is. If we have FOOM, it does seem like 1a or 1b will happen rather than 1c or 1d. It's a hard thing to think about, though, and I have significant uncertainty.

But for me, the dominant consideration is probably that I want a sense of normalcy. Saving for retirement helps with that and for me personally doesn't have a large downside.

I also think what Zvi says makes sense. Having a pile of money seems wise for reasons other than retirement.

comment by Dagon · 2023-05-15T21:05:58.103Z · LW(p) · GW(p)

I like the topic, but I think this is bad advice and almost certainly wrong.

I'll ignore the saving vs investing distinction.  I assume you meant investing, with a Kelly Criterion or similar risk/reward strategy. 
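
For concreteness, here's a minimal sketch of the Kelly fraction for a simple binary bet (the 60%-at-even-odds example is hypothetical, and real portfolio sizing is messier than this):

```python
# Minimal Kelly-criterion sketch for a binary bet: the fraction of
# bankroll to stake that maximizes long-run log growth of wealth.
def kelly_fraction(p: float, b: float) -> float:
    """p: probability of winning; b: net odds (a win pays b-to-1)."""
    q = 1.0 - p
    return (b * p - q) / b

# Hypothetical example: 60% win probability at even odds.
print(kelly_fraction(p=0.6, b=1.0))  # 0.2, i.e. stake 20% of bankroll
```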

But your main error is in thinking that you can't profitably use your saved money sooner, if circumstances warrant.  Investing with a 10- or a 40-year horizon is darned near identical.  You're really saving for 'the foreseeable future'.  You're asserting that what you can get for $1-plus-future-growth averaged over all futures is higher utility than $1 spent today.

You're also just wrong about the happiness correlation with income - there are population effects that make it confusing, but it never seems to fully plateau, just comes in smaller increments. More is (nearly) always better. I think you're wrong about utility-per-dollar as well, but I have weaker models of that.

Replies from: benjaminikuta
comment by benjaminikuta · 2023-06-07T23:59:41.786Z · LW(p) · GW(p)

The stock market has never declined over a 20-year period, but it has declined over 10-year periods, so if you're particularly risk-averse, that could make quite a difference.
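
A toy simulation of that intuition (the return distribution below is an assumption, i.i.d. normal annual returns, not calibrated to any real market data): under such a model the chance of ending below where you started shrinks with the horizon but never quite reaches zero.

```python
# Toy Monte Carlo: chance a portfolio ends below its starting value
# as a function of holding horizon. The 7% mean / 17% stdev annual
# returns are made-up, vaguely stock-like numbers.
import random

def prob_of_loss(years: int, trials: int = 100_000) -> float:
    losses = 0
    for _ in range(trials):
        wealth = 1.0
        for _ in range(years):
            wealth *= 1.0 + random.gauss(0.07, 0.17)
        if wealth < 1.0:
            losses += 1
    return losses / trials

for horizon in (1, 10, 20, 40):
    print(f"{horizon:>2} years: {prob_of_loss(horizon):.1%} chance of loss")
```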

comment by Brendan Long (korin43) · 2023-05-15T20:10:33.830Z · LW(p) · GW(p)

You're putting a lot of importance on the "likely" here:

Superintelligence is likely to arrive within the next 20 years

What if it doesn't? Reaching retirement age with no savings sucks pretty badly (especially if you're used to spending your entire income). Short AI timelines might push you in the direction of smaller retirement savings (taking a risk of things being a little worse in timelines you think are unlikely), but probably not all the way to "don't save for retirement". You should also put at least some weight on AGI happening and somehow the world still existing in a recognizable form (since this is what has happened every other time a world-changing technology was created).

Something else to consider is that retirement isn't the only reason to have savings. If you think AI timelines are short, you might want to have a giant pile of money you can strategically deploy to do things like quit your job and work on alignment for a few years (or pay someone else to).

comment by [deleted] · 2023-05-16T16:09:47.156Z · LW(p) · GW(p)

First of all, it's entirely possible that "superintelligence" will not be a monolithic, single sovereign entity that does what it wants and chooses whether to kill all humans. (In futures where it kills all humans, your savings don't matter; in futures where it makes Earth a utopia, they don't matter either.)

There are many futures you are not considering, including some that I personally think hold more probability mass.

  1. "Superintelligence" may not be uncontrollable, depending on how humans design the agents.
  2. A narrow, "tool aligned" superintelligence is a machine that has the same intelligence as an uncontrolled system but is myopic and functions like a tool. Such a system is a weapon, and humans may build many variants of these, using them to mass-produce weapons and hunt down any rogues that break out despite #1.
  3. We don't know how smart a superintelligence can be when thinking with limited, human-built computers, because we have never built one; nature may limit effectiveness to only a controllable margin above human intelligence. It is only a theory that the margin will be insurmountable, and there is not yet evidence that directly supports it. (All the above-human intelligences we have built so far are easily beatable if the human opponent gets a small resource advantage.)

Due to 1-3, there is some reason for retirement funds. And then there's the big one:

The obvious way to become the wealthiest company on Earth is the following:

  1. You're an application company, so wait until commercially available general ASIs exist
  2. Raise a lot of money
  3. Using ASI and many robots (probably millions), systematically automate bioscience research, with each experiment contributing to a common dataset that the ASIs consume
  4. Most of the research is shaped: you want to grow human tissues reliably, and human organs that exhibit the same measurable parameters you measure from "real" human organs. You want to take cadaver organs and keep them alive as long as possible, using ASI in a closed loop. You want to age test animals forward and de-age them again. Each of these involves a structured experiment with measurable outcomes; no sorcerer's apprentice here.

All these experiments provide the structured information needed to train the ASI to do its job.

  5. Set up shop in a country with a compatible regulatory regime and start offering cures for most diseases and aging. Each patient gets examined by Western doctors unaffiliated with your company, and their medical files are added to a blockchain before and after, so there can be no doubt about the treatment outcomes.

  6. Patients pay a percentage of net assets. While their care is delivered primarily via robotics driven by ASIs, there are some human doctors sanity-checking the ASIs' actions, and capacity is finite at first. The wait list is sorted by both severity and wealth.

Number 6 means there would be a time period where having money might help save your life. Whether that window is 1 year or 20, I don't know. Obviously, Western regulatory regimes will eventually fold, but I don't know how long that will take. (They have to fold: the above ultimately amounts to "use whatever chemical compound the ASI thinks works in this scenario." The "procedure" is "do whatever the ASI thinks should be done next." The ASI may invent novel drugs while a specific patient is dying, change a surgical procedure upon discovering variant anatomy no surgeon has ever seen, and so on. It's practicing medicine the way Stockfish plays chess; humans will need some time to even understand why a move was made.)

comment by AnthonyC · 2023-05-16T02:12:09.850Z · LW(p) · GW(p)

I think this is bad advice for a number of reasons. Previous commenters hit several of the main ones, but to add:

  1. If you have enough money that saving for longer-term goals (10+ years) is not a large burden, then what you could buy in the near term instead likely won't have much impact on your well-being.
  2. Even in the near term, the difference between "going through life with plenty of savings" vs. not having savings makes a huge difference to psychological well-being. Large unplanned expense or emergency comes up? Much less of a problem than if you had to go into debt or do without as a result. Sudden bout of inflation or increase in taxes? Not a big deal if you're used to living on 80% of your salary and can (temporarily) reduce your savings rate.
  3. Retirement savings (in the US) are often tax-advantaged, plus many employers provide matching contributions. Me not saving in my 401k would mean taking a pay cut by not receiving the match, and the savings grow tax-free until I withdraw them. Me not investing in a Roth IRA means my savings grow slower because I'm paying taxes on earnings, plus I'm still allowed to withdraw the contributions without penalty if I need to. And in a real emergency you can borrow against tax-advantaged retirement savings, or withdraw early and pay the penalty. (See the rough compounding sketch after this list.)
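
As a rough illustration of how much the match and compounding matter, here's a sketch with entirely assumed numbers ($10k/year saved for 30 years at a 7% nominal return, with a 50% employer match); not tax or investment advice:

```python
# Rough compounding sketch; all parameters are assumptions for
# illustration, not anyone's actual plan.
def future_value(annual: float, rate: float, years: int) -> float:
    """Future value of a level annual contribution (ordinary annuity)."""
    return annual * ((1 + rate) ** years - 1) / rate

no_match = future_value(10_000, 0.07, 30)
with_match = future_value(15_000, 0.07, 30)  # contributions + 50% match
print(f"no match:   ${no_match:,.0f}")    # ~$944,608
print(f"with match: ${with_match:,.0f}")  # ~$1,416,912
```
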
comment by Dave Lindbergh (dave-lindbergh) · 2023-05-16T02:18:58.248Z · LW(p) · GW(p)

Niels Bohr supposedly said "Prediction is difficult, especially about the future". Even if he was mistaken about quantum mechanics, he was right about that.

Every generation seems to think it's special and will encounter new circumstances that turn old advice on its head. Jesus is coming back. We'll all die in a nuclear war. Space aliens are coming. A supernova cascade will sterilize Earth. The planets will align and destroy the Earth. Nanotech will turn us all into grey goo. Global warming will kill us all. 

It's always something. Now it's AGI. Maybe it'll kill us. Maybe it'll usher in utopia, or transform us into gods via a singularity. 

Maybe. But based on the record to date, it's not the way to bet.

Whatever you think the world is going to be like in 20 years, you'll find it easier to deal with if you're not living hand-to-mouth. If you find it difficult to save money, it's very tempting to find an excuse to not even try. Don't deceive yourself.

"... however it may deserve respect for its usefulness and antiquity, [predicting the end of the world] has not been found agreeable to experience." --Edward Gibbon, 'Decline and Fall of the Roman Empire'

Replies from: dave-lindbergh, sharmake-farah
comment by Dave Lindbergh (dave-lindbergh) · 2023-05-16T02:31:12.379Z · LW(p) · GW(p)

Added: I do think Bohr was wrong and Everett (MWI) was right. 

So think of it this way - you can only experience worlds in which you survive. Even if Yudkowsky is correct and in 99% of all worlds AGI has killed us all by 20 years from now, you will experience only the 1% of worlds in which that doesn't happen.

And in many of those worlds, you'll be wanting something to live on in your retirement.

Replies from: programcrafter
comment by ProgramCrafter (programcrafter) · 2023-05-16T14:40:58.507Z · LW(p) · GW(p)

I've thought about this additional axiom, and it seems to bend reality too much, leading to possible [unpleasant outcomes](https://www.lesswrong.com/posts/4ARaTpNX62uaL86j6/the-hidden-complexity-of-wishes): for example, one where a person survives but is tortured indefinitely.

Also, it's unclear how this axiom could manage to preserve the ratios of probabilities for quantum states.

comment by Noosphere89 (sharmake-farah) · 2023-05-16T02:46:23.250Z · LW(p) · GW(p)

Every generation seems to think it's special and will encounter new circumstances that turn old advice on its head.

I have a question: how did you come to know this, especially as a repeatable pattern? I'd really like to know, because this sounds like one of the more interesting arguments against AI being impactful at all.

Replies from: shayne-o-neill
comment by Shayne O'Neill (shayne-o-neill) · 2023-05-16T13:36:55.073Z · LW(p) · GW(p)

I don't think he's trying to say AI won't be impactful (obviously it will), just that trying to predict it isn't an activity one ought to apply any surety to. Soothsaying isn't a thing. There's ALWAYS been an existential threat right around the corner: gods, devils, dynamite, machine guns, nukes, AGW (that one might still end up being the one that does us in, if the political winds don't change soon), and now AI. We think AI might go foom, but there might be some limit we just won't know about till we hit it, and we have various estimations, all conflicting, on how bad, or good, it might be for us. Attempting to fix those odds in firm conviction, however, is not science; it's belief.

comment by ESRogs · 2023-05-21T09:36:42.497Z · LW(p) · GW(p)

Happiness has been shown to increase with income up to a certain threshold (roughly $200K per year today), beyond which the effect tends to plateau.

Do you have a citation for this? My understanding is that it's a logarithmic relationship — there's no threshold. (See the Income & Happiness section here.)
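
For reference, the functional form usually behind that claim (a modeling assumption about the shape of the relationship, not any particular study's fitted result) is log-linear:

```latex
% Log-linear model of income y and reported well-being W;
% a and b are unspecified fitted constants.
W(y) \approx a + b \ln y, \qquad W(2y) - W(y) = b \ln 2
```

Under that form, each doubling of income buys the same fixed increment of well-being, so there is no hard threshold, just ever-smaller gains per additional dollar.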

comment by Martin Randall (martin-randall) · 2023-05-16T15:22:49.022Z · LW(p) · GW(p)

You could buy a "target retirement 2040" fund if you expect the eschaton around 2043 and you want a little time before then to relax. If you're 20, that's different from the "target retirement 2075" fund you might otherwise get.

Diligent people in their 20s are probably better off diligently increasing their ability to be happy and productive and useful, rather than their retirement savings. Mostly the benefit of early retirement savings is to get into the habit and to make your financial mistakes while you don't yet have much money to lose.

The future appears unusually uncertain this century and insuring against various negative outcomes, to the extent that can be done simply, is probably best.

comment by Myron Hedderson (myron-hedderson) · 2023-05-16T19:45:14.604Z · LW(p) · GW(p)

Generically, having more money in the bank gives you more options, being cash-constrained means you have fewer options. And, also generically, when the future is very uncertain, it is important to have options for how to deal with it. 

If how the world currently works changes drastically in the next few decades, I'd like to have the option to just stop what I'm doing and do something else that pays no money or costs some money, if that seems like the situationally-appropriate response. Maybe that's taking some time to think and plan my next move after losing a job to automation, rather than having to crash-train myself in something new that will disappear next year. Maybe it's changing my location and not caring how much my house sells for. Maybe it's doing different work. Maybe it's paying people to do things for me. Maybe it's also useful to be invested in the right companies when the economy goes through a massive upswing before the current system collapses, so that I briefly have a lot of wealth and can direct it toward goals aligned with my values rather than someone else's; hence, index funds that buy me into a lot of companies.

Even if we eventually get to a utopia, the path to that destination could be rocky, and having some slack is likely to be helpful in riding that time out.

Another form of slack is learning to live on much less than you make - so the discipline required to accumulate savings could also pay off in terms of not being psychologically attached to a lifestyle that stops you from making appropriate changes as the world changes around you.

Of course "accumulate money so you have options when the world changes" is a different mindset than "save money so you can go live on a beach in 40 years". But money is sort of like fungible power, an instrumentally useful thing to have for many different possible goals in many different scenarios, and a useless thing to have in only a few.

Side note: "the amount a dollar can do goes up, the value of a dollar collapses" strikes me as implausible. Your story for how that could happen is that people hit a point of diminishing returns in terms of their own happiness, but there are plenty of things dollars can be used for aside from buying more personal happiness. If things go well, we're just at the start of earth-originating intelligence's story, and there are plenty of ways for an investment made at the right time to ripple out across the universe. If I were a trillionaire (or a 2023-hundred-thousandaire where the utility of a dollar has gone up by a factor of 10 million, whatever), I could set up a utopia suited to my tastes and understanding of the good, for others, and that seems worth doing even if my subjective day-to-day experience doesn't improve as a result. That's just one example. In any case, being at the beginning of a large expansion in the power of earth-originating intelligence seems like just the sort of time when you'd like to have the ability to make a careful investment.

comment by Gesild Muka (gesild-muka) · 2023-05-16T13:23:10.681Z · LW(p) · GW(p)

The logical assumption is that we save for retirement so we can have the funds to maintain a good quality of life in old age, when we can't or don't want to work. This safety-net ideal is the stated reason, but it seems to me that many people work hard to see the dollar amount increase: for the achievement, status, and satisfaction that come with work, and because their job becomes so tied into their identity. If you're in the latter camp, future value is irrelevant.