Retirement Accounts and Short Timelines

post by jefftk (jkaufman) · 2024-02-19T18:50:05.231Z · LW · GW · 35 comments

Sometimes I talk to people who don't use retirement accounts because they think the world will change enormously between now and when they're older. Something like, the most likely outcomes are that things go super well and they won't need the money, or things go super poorly and we're all dead. So they keep savings in unrestricted accounts for maximum flexibility. Which I think is often a bad decision, at least in the US:

I think the cleanest comparison is between investing through a regular investment account and a Roth 401k. This is a plan through your work, and they may offer matching contributions. If your employer doesn't offer a 401k, or offers a bad one (no low-cost index funds), you can use a Roth IRA instead.

When people compare a Roth 401k to keeping the money in a non-retirement account the normal presentation is something like:

This isn't exactly wrong, but it's missing a lot. Additional considerations:

Some caveats:

35 comments

Comments sorted by top scores.

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2024-02-19T23:08:41.733Z · LW(p) · GW(p)

If you expect to need the money soon, say for buying a house, then it wouldn't make sense.

How soon? I expect to need the money sometime in the next 3 years, because that's about when we get to 50% chance of AGI.

Replies from: jkaufman, davekasten
comment by jefftk (jkaufman) · 2024-02-20T00:52:56.507Z · LW(p) · GW(p)

In your 50% of worlds where we get AGI in the next 3y, do you have important uses for the money?

How does your remaining 50% smear across "soon but >3y" through "AI fizzle"?

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2024-02-20T05:36:32.165Z · LW(p) · GW(p)

In the worlds where we get AGI in the next 3y, the money can (and large chunks of it will) get donated, partly to GiveDirectly and suchlike, and partly to stuff that helps AGI go better.

The remaining 50% basically exponentially decays for a bit and then has a big fat tail. So off the top of my head I'm thinking something like this:

15% - 2024
15% - 2025
15% - 2026
10% - 2027
5% - 2028
5% - 2029
3% - 2030
2% - 2031
2% - 2032
2% - 2033
2% - 2034
2% - 2035
... you get the idea.


Replies from: Vladimir_Nesov, akram-choudhary
comment by Vladimir_Nesov · 2024-02-24T11:33:07.374Z · LW(p) · GW(p)

I'd put more probability in the scenario where good $5 billion 1e27 FLOPs runs give mediocre results, so that more scaling remains feasible but lacks an expectation of success. With how expensive the larger experiments would be, it could take many years for someone to take another draw from the apocalypse deck. That alone adds maybe 2% for 10 years after 2026 or so, and there are other ways for AGI to start working.

comment by Akram Choudhary (akram-choudhary) · 2024-02-22T18:43:04.793Z · LW(p) · GW(p)

Why do you have 15% for 2024 and only an additional 15% for 2025?

Do you really think there's a 15% chance of AGI this year?

Replies from: daniel-kokotajlo, Vladimir_Nesov
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2024-02-22T19:41:28.764Z · LW(p) · GW(p)

Yes, I really do. I'm afraid I can't talk about all of the reasons for this (I work at OpenAI) but mostly it should be figure-outable from publicly available information. My timelines were already fairly short (2029 median) when I joined OpenAI in early 2022, and things have gone mostly as I expected. I've learned a bunch of stuff some of which updated me upwards and some of which updated me downwards.

As for the 15% - 15% thing: I mean I don't feel confident that those are the right numbers; rather, those numbers express my current state of uncertainty. I could see the case for making the 2024 number higher than the 2025 number (exponential distribution vibes, 'if it doesn't work now then that's evidence it won't work next year either' vibes). I could also see the case for making the 2025 number higher (it seems like it'll happen this year, but in general projects usually take twice as long as one expects due to the planning fallacy, therefore it'll probably happen next year).


comment by Vladimir_Nesov · 2024-02-24T17:04:33.096Z · LW(p) · GW(p)

Any increase in scale is some chance of AGI at this point, since unlike weaker models, GPT-4 is not stupid in a clear way, it might be just below the threshold of scale to enable an LLM to get its act together. This gives some 2024 probability.

More likely, a larger model "merely" makes job-level agency feasible for relatively routine human jobs, but that alone would suddenly make $50-$500 billion runs financially reasonable. Given the premise of job-level agency at <$5 billion scale, the larger runs likely suffice for AGI. The Gemini report says training took place in multiple datacenters, which suggests that this sort of scaling might already be feasible, except for the risk that it produces something insufficiently commercially useful to justify the cost (and waiting improves the prospects). So this might all happen as early as 2025 or 2026.

comment by davekasten · 2024-02-22T15:04:51.264Z · LW(p) · GW(p)

I mean, does your Vanguard targeted lifecycle index fund likely invest in equities exposed to AGI growth (conditional on non-doom)?  

If you think money still has meaning after AGI and meaningful chance of no-doom, it might actually be optimal to invest in your retirement fund.

comment by CarlShulman · 2024-02-26T16:30:20.475Z · LW(p) · GW(p)

This post seems catastrophically wrong to me because of its use of a Roth 401k as an example, instead of a pre-tax account. Following it could create an annoying problem of locked-up funds.

Five years from when you open your account there are options for taking gains out tax-free even if you're not 59.5 yet. You can take "substantially equal periodic payments", but there are also ones for various kinds of hardship.

Roth earnings become tax free at 59.5. Before that, even if you use SEPP to do withdrawals without penalties you still have to pay taxes on the withdrawn earnings (some of which are your principal because of inflation). And those taxes are ordinary income rates, which top out much higher than long term capital gains tax rates. Further, the SEPP withdrawals are spaced out to reflect your whole lifetime according to actuarial tables, so if TEOTWAWKI is in 10 years and the life tables have you space out your SEPP withdrawals over 40 years, then you can only access a minority of your money in that time.
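Under stated assumptions, the slow drip is easy to see with a short sketch (illustrative only: actual SEPP allows three IRS-approved calculation methods and uses IRS life tables; this assumes an RMD-style method with a flat 40-year starting factor and zero investment growth):

```python
# Hypothetical sketch of SEPP's RMD-style method: each year you
# withdraw balance / remaining life expectancy. Assumes a 40-year
# starting factor and zero growth, purely for illustration.
def fraction_accessible(years, life_expectancy):
    balance, withdrawn = 1.0, 0.0
    for i in range(years):
        payment = balance / (life_expectancy - i)  # that year's SEPP payment
        withdrawn += payment
        balance -= payment
    return withdrawn

# Ten years of payments against a 40-year factor frees up only 25%
# of the account -- a minority, as described above.
print(round(fraction_accessible(10, 40), 4))  # 0.25
```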

For a pretax 401k where you contribute when you have a high income, the situation is very different: you get an upfront ordinary income tax deduction when you contribute, you don't get worse tax treatment by missing out on LTCG rates. And you can rollover to a Roth IRA (paying taxes on the conversion) and then access the amount converted penalty-free in 5 years (although that would trap some earnings in the Roth) or just withdraw early and pay the 10% penalty (which can be overcome by tax-free growth benefits earlier, or withdrawing in low income years).

I'm 41.5, so it's 18 years to access my Roth balances without paying ordinary taxes on the earnings (which are most of the account balances). I treat those funds as insurance against the possibility of a collapse of AI progress or blowup of other accounts, but I prefer pre-tax contributions over Roth ones now because of my expectation that probably there will be an AI capabilities explosion well before I reach 59.5. If I had all or most of my assets in Roth accounts it would be terrible.

Replies from: jkaufman
comment by jefftk (jkaufman) · 2024-02-27T02:51:53.560Z · LW(p) · GW(p)

This is subtle and I may be missing something, but it seems to me that using a pretax 401k helps some but not that much, and the Roth scenario is only slightly worse than the regular investment account. Compare the three, chosen to be maximally favorable to your scenario:

  1. You contribute to your pre-tax 401k, it grows (and inflates) 2x. You roll it over into a Roth IRA, paying taxes on the conversion. Over the next five years it grows 1.3x. You withdraw the contribution and leave the gains.

  2. You contribute to your post-tax Roth 401k, it grows (and inflates) 2x, and then another 1.3x. You withdraw the same amount as in scenario #1.

  3. You put it in a regular investment account.

Let's assume your marginal tax rates are 24% for regular income and 15% for capital gains.

In #1 if you start with $100k then it's $200k at the time you convert, and you pay $48k (24%) in taxes leaving you with $152k in your Roth 401k. It grows to $198k, you withdraw $152k and you have $46k of gains in your Roth 401k.

In #2 your $100k is taxed and $76k (less the 24%) starts in the Roth. When it's time to withdraw it's grown to $198k. Of that, your $76k of contributions are tax and penalty free, leaving you with $122k of gains. To end up with $152k in your bank account you withdraw $115k, paying $28k (24%) in taxes and $12k (10%) in penalties. You have $7k of gains still in your Roth.

In #3 your $100k is taxed to $76k when you earn it, and then grows to $198k. You sell $179k, paying 15% LTCG, and end up with $152k after taxes and $19k still invested (but subject to 15% tax when you eventually sell, so perhaps consider it as $16k).

So you're better off in #1 than #3 than #2, but the difference between #3 and #2 is relatively small, and this is a scenario relatively unfavorable to Roths.
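The arithmetic above can be reproduced in a few lines (a sketch of this comment's simplified model; note that scenario #3 applies the 15% rate to the entire sale, matching the $179k figure above; all amounts in $k):

```python
# Sketch of the three scenarios, using the rates assumed above.
ORD, LTCG, PENALTY = 0.24, 0.15, 0.10
GROWTH1, GROWTH2 = 2.0, 1.3   # growth before and after conversion
start = 100.0

# 1: pretax 401k -> Roth conversion -> withdraw the converted amount.
converted = start * GROWTH1 * (1 - ORD)   # 152 left after conversion tax
final1 = converted * GROWTH2              # 197.6 in the Roth
left1 = final1 - converted                # 45.6 of gains stay behind

# 2: Roth 401k; early withdrawal of gains pays tax plus penalty.
basis = start * (1 - ORD)                 # 76 goes in post-tax
final2 = basis * GROWTH1 * GROWTH2        # 197.6
gains = final2 - basis                    # 121.6
needed = converted - basis                # 76 more needed beyond the basis
gross = needed / (1 - ORD - PENALTY)      # ~115 of gains withdrawn
left2 = gains - gross                     # ~6 of gains stay behind

# 3: taxable account; whole sale taxed at 15% as in the comment.
final3 = start * (1 - ORD) * GROWTH1 * GROWTH2
sold = converted / (1 - LTCG)             # ~179 sold to net 152
left3 = (final3 - sold) * (1 - LTCG)      # ~16 after eventual tax

print(round(left1), round(left2), round(left3))  # 46 6 16
```

The ordering matches the conclusion: #1 beats #3, which slightly beats #2. (The comment's $7k for #2 comes from rounding intermediate figures; the unrounded value is about $6.4k.)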

My claim isn't "Roth 401(k)s are strictly better than putting the money in investment accounts" or "Roth 401(k)s are strictly better than pre-tax 401(k)s" but instead "when you consider the range of possible futures, for most people Roth 401(k)s are better than non-protected accounts and other protected accounts may be even better".

Replies from: CarlShulman
comment by CarlShulman · 2024-02-27T04:52:12.070Z · LW(p) · GW(p)

The catastrophic error IMO is:

Five years from when you open your account there are options for taking gains out tax-free even if you're not 59.5 yet. You can take "substantially equal periodic payments", but there are also ones for various kinds of hardship.

For Roth you mostly can't take out gains tax-free. The hardship ones are limited, and SEPP doesn't let you access much of it early. The big ones of Roth conversions and just eating the 10% penalty only work for pretax.

[As an aside Roth accounts are worse for most people vs pretax for multiple reasons, e.g. pretax comes with an option of converting or withdrawing in low income years at low tax rates. More details here.]

In #1 if you start with $100k then it's $200k at the time you convert, and you pay $48k (24%) in taxes leaving you with $152k in your Roth 401k. It grows to $198k, you withdraw $152k and you have $46k of gains in your Roth 401k.

You pay taxes on the amount you convert, either from outside funds or withdrawals to you. If you convert $X you owe taxes on that as ordinary income, so you can convert $200k and pay $48k in taxes from outside funds. This makes pretax better.
 
Re your assumptions, they are not great for an AI-pilled saver. Someone who believes in short AI timelines should probably be investing in AI if they don't have decisive deontological objections. NVDA is up 20x in the last 5 years, OpenAI even more. On the way to a singularity AI investments will probably more than 10x again unless it's a surprise in the next few years as Daniel K argues in comments. So their 401k should be ~all earnings, and they may have a hard time staying in the low tax brackets you use (moreso at withdrawal time than contribution time) if they save a lot. The top federal tax rates are 37% for ordinary income and 23.8% for capital gains.

Paying the top federal income tax rate plus penalties means a 47% tax rate on early withdrawals from the Roth vs 23.8% from taxable. I.e. every dollar kept outside the Roth is worth 44% more if you won't be using the account after 59.5. That's a wild difference from the standard Roth withdrawal case, where there's a 0% tax rate.
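The 44% figure is straightforward to verify (using the top federal rates just cited):

```python
# Per dollar of gains withdrawn early, at top federal rates:
kept_roth_early = 1 - (0.37 + 0.10)  # ordinary income + 10% penalty -> keep 53%
kept_taxable = 1 - 0.238             # long-term capital gains -> keep 76.2%

advantage = kept_taxable / kept_roth_early - 1
print(f"{advantage:.0%}")  # 44% more kept per dollar outside the Roth
```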

Having a substantially larger percentage of your assets in Roth than the probability that you'll be around to use and care about the money after 59.5 looks bad to me. From the perspective of someone expecting AI soon, this advice could significantly hurt them in a way the post obscured.


Replies from: lexande, jkaufman
comment by lexande · 2024-02-29T01:31:08.459Z · LW(p) · GW(p)

My impression is that the "Substantially Equal Periodic Payments" option is rarely a good idea in practice because it's so inflexible in not letting you stop withdrawals later, potentially even hitting you with severe penalties if you somehow miss a single payment. I agree that most people are better off saving into a pretax 401k when possible and then rolling the money over to Roth during low-income years or when necessary. I don't think this particularly undermines jefftk's high-level point that tax-advantaged retirement savings can be worthwhile even conditional on relatively short expected AI timelines.

I prefer pre-tax contributions over Roth ones now because of my expectation that probably there will be an AI capabilities explosion well before I reach 59.5. If I had all or most of my assets in Roth accounts it would be terrible.

Why would money in Roth accounts be so much worse than having it in pretax accounts in the AI explosion case? If you wanted the money (which would then be almost entirely earnings) immediately you could get it by paying tax+10% either way. But your accounts would be up so much that you'd only need a tiny fraction of them to fund your immediate consumption; the rest you could keep investing inside the 401k/IRA structure.

Replies from: jkaufman
comment by jefftk (jkaufman) · 2024-02-29T01:34:37.122Z · LW(p) · GW(p)

But your accounts would be up so much that you'd only need a tiny fraction of them to fund your immediate consumption

Maybe you want to use the money altruistically? To spend on labor, compute, etc?

Replies from: lexande
comment by lexande · 2024-02-29T01:46:19.623Z · LW(p) · GW(p)

Some altruistically-motivated projects would be valid investments for a Checkbook IRA. I guess if you wanted to donate 401k/IRA earnings to charity you'd still have to pay the 10% penalty (though not the tax if the donation was deductible) but that seems the same whether it's pretax or a heavily-appreciated Roth.

comment by jefftk (jkaufman) · 2024-02-28T21:29:00.907Z · LW(p) · GW(p)

I think a lot of this depends on your distribution of potential futures:

  • What sort of returns (or inflation) do you expect, in worlds where you need the money at various ages?

  • What future legal changes do you expect?

  • How likely are you to have a 5y warning before you'll want to spend the money you've put in a traditional 401k?

  • What are your current and future tax brackets?

  • How likely are you to be in a situation where means testing means you lose a large portion of non-protected money?

  • How likely are you to lose a lawsuit for more than your (unprotected) net worth or otherwise go bankrupt?

The first version of this post (which I didn't finish) tried to include a modeling component, but it gets very complex and people have a range of assumptions so I left it as qualitative.

comment by NeroWolfe · 2024-02-22T16:29:04.155Z · LW(p) · GW(p)

I gather from the recent census article that most of the readers of this site are significantly younger than I am, so I'll relay some first-hand experiences you probably didn't live through.

I was born in 1964. The Cuban Missile Crisis was only a few years in the past, and Kennedy had just been shot, possibly by Russians, or The Mob, or whomever. Continuing through at least the end of the Cold War in 1989, there was significant public opinion that we were all going to die in a nuclear holocaust (or Nuclear Winter), so really, what was the point in making long-term plans?

Spoiler: things worked out better than expected, although not without significant bumps along the way. Spending all your money on hookers and blow because you might as well enjoy yourself now would not have been a solid investment strategy.

Now, much like the AGI/ASI threat, the nuclear threat could have actually played out. There were other close calls where we (or they) thought the attack had started already (Vasily Arkhipov comes to mind), and of course, Death From AI could well happen. However, you should probably hedge your bets to a certain extent just in case you manage to live to retirement age. Remember, we still don't have flying cars.

Replies from: gwern, None
comment by gwern · 2024-02-23T03:30:50.051Z · LW(p) · GW(p)

However, you should probably hedge your bets to a certain extent just in case you manage to live to retirement age.

Do you need to, though? People have been citing Feynman's anecdote about the RAND researchers deciding to stop bothering with retirement savings in the 1940s/50s because they thought the odds of nuclear war were so high. But no one has mentioned any of those RAND researchers dying on the streets or living off the proverbial dog chow in retirement. And why would they have?

First, anyone who was a RAND researcher is a smart cookie doing white-collar jobs who will be in high demand into retirement and beyond (and maybe higher than any time before in their career); it's not like they were construction workers whose backs are giving out in their 30s and will be unable to earn any money after that. Quite the opposite.

Second, no one said stopping was irrevocable. You can't go back in time, of course, but you can always just start saving again. This is relevant because when nuclear war didn't happen within a decade or two, presumably they noticed at some point, 'hey, I'm still alive'. There is very high option value/Value of Information. If, after a decade or two, nuclear war has not happened and you survive the Cuban Missile Crisis... you can start saving then.

The analogue here would be for AI risk, most of the short-term views are that we are going to learn a lot over the next 5-10 years. By 5 years, a decent number of AGI predictions will be expiring, and it will be much clearer how far DL scaling will go. DL will either be much scarier than it is now, or it will have slammed to a halt. And by 10 years, most excuses for any kind of pause will have expired and matters will be clearer. You are not committed to dissavings forever; you are not even committed for more than a few years, during which you will learn a lot.

Third, consider also the implication of 'no nuclear war' for those RAND researchers: that's good for the American economy. Very good. If you were a RAND researcher who stopped saving in the 1940s-1950s and decided to start again in the '60s, and you were 'behind', well, that meant that you started investing in time for what Warren Buffett likes to call one of the greatest economic long-term bull markets in the history of humanity.

Going back to AI, if you are wrong about AGI being a danger, and AGI is achieved on track but it's safe and beneficial, the general belief is that whatever else it does, it ought to lead to massive economic growth. So, if you are wrong, and you start saving again, you are investing at the start of what may be the greatest long-term bull market that could ever happen in the history of humanity. Seems like a pretty good starting point for your retirement savings to catch up, no?

(It has been pointed out before that if you are optimistic about AGI safety & economic growth and you are saving money, you are moving your resources from when you are very poor to when you will be very rich, and this seems like a questionable move. You should instead either be consuming now, or hedging against bad outcomes rather than doubling down on the good outcome.)

Whereas if you are wrong, the size of your retirement savings accounts will only bitterly recall to you your complacency and the time you wasted. The point of having savings, after all, is to spend them. (Memento mori.)

Replies from: lexande, None
comment by lexande · 2024-02-23T16:37:20.610Z · LW(p) · GW(p)

You have to be really confidently optimistic or pessimistic about AI to justify a major change in consumption rates; if you assign a significant probability to "present rate no singularity"/AI winter futures then the benefits of consumption smoothing dominate [LW(p) · GW(p)] and you should save almost as much (or as little) as you would if you didn't know about AI.

Replies from: gwern
comment by gwern · 2024-02-26T20:53:10.826Z · LW(p) · GW(p)

hold_my_fish's setup in which there is no increase in growth rates, either destruction or normality, is not the same as my discussion. If you include the third option of a high-growth-rate future (which is increasingly a plausible outcome), you would also want to consume a lot now to consumption-smooth, because once hypergrowth starts, you need very little capital/income to smooth/achieve the same standard of living under luxury-robot-space-communism as before. (Indeed, you might want to load up on debt on the expectation that if you survive, you'll pay it out of growth.)

Replies from: lexande
comment by lexande · 2024-02-29T01:36:24.805Z · LW(p) · GW(p)

The math in the comment I linked works the same whether the chance of money ceasing to matter in five years' time is for happy or unhappy reasons.

comment by [deleted] · 2024-02-23T03:45:37.704Z · LW(p) · GW(p)

Gwern I have one very specific scenario in mind.

You know how in Próspera, Honduras you can get gene therapy for myostatin inhibitors? (Supposed to help with aging, definitely makes people buff)

I am imagining a future where things go well, in say 2035 a biotech company starts working on aging with a licensed ASI. By 2045-2065 they finally have a viable solution (by a lot of bootstrapping and developing living models), but the FDA obstructs and you can get early access somewhere like Honduras. Just need a few mil cash (ironically third world residents get the first treatments for testing, then billionaires, then hundred millionaires, then...)

Sure 10 years later it gets approved in the USA after much protest and violence, but do you want "died 2 years before the cure" on your tombstone?

Does this sound like a plausible scenario?

Replies from: gwern, Raphaël
comment by gwern · 2024-02-23T14:36:11.717Z · LW(p) · GW(p)

That sounds like a knife's-edge sort of scenario. The treatment arrives neither much earlier nor later but just a year or two before you die (inclusive of all interim medical/longevity improvements, which presumably are nontrivial if some new treatment is curing aging outright) and costs not vastly too much nor vastly below your net worth but just enough that, even in a Christiano-esque slow takeoff where global GDP is doubling every decade & also the treatment will soon be shipped out to so many people that it will drop massively in price each year, that you still just can't afford it - but you could if only you had been socking away an avocado-toast a month in your 401k way back in 2020?

Replies from: None
comment by [deleted] · 2024-02-23T14:59:47.620Z · LW(p) · GW(p)

Yep.  And given how short human lifespans are, and how long medical research has historically taken, the 'blade of the knife' is 10 years across.  With the glacial speed of current medical research it's more like 20-30.  

It's not fundamentally different from the backyard bunker boom in the past.  That's a knife blade - you're far enough from the blast not to be killed immediately, but not so far your house doesn't burn to the ground from the blast wave and followup firestorm.  And then your crude bunker doesn't accumulate enough radioactive fallout for the dose to be fatal, and you don't run out of supplies before it's cool enough to survive the wasteland above long enough to escape on foot.

comment by Raphaël · 2024-02-23T10:29:48.080Z · LW(p) · GW(p)

Do you think it's realistic to assume that we won't have an ASI by the time you reach old age, or that it won't render all this aging stuff irrelevant? In my own model, it's a 5% scenario. Most likely in my model is that we get an unaligned AGI that kills us all, or a nuclear war that prevents any progress in my lifetime, or AI regulated into oblivion after a large-scale disaster, such as a model that can hack into just about anything connected to the Internet and bring down the entire digital infrastructure.

comment by [deleted] · 2024-02-23T03:37:03.139Z · LW(p) · GW(p)

Remember, we still don't have flying cars.

I have a big peeve about that. When I try to model a flying car I see the tradeoffs of

(High fuel consumption, higher cost to build, higher skill to drive, noise, falling debris) vs (less time to reach work)

As long as the value per hour of a worker's time is less than the cost per hour of the vtol + externalities, there isn't ROI for most workers.

Less market size means higher cost and thus we just have helicopters for billionaires and everyone else drives.

Did this come up in the 1970s or after the oil shocks were over in the 80s? Because they just jump out at me as a doomed idea that doesn't happen because it doesn't make money.

Even now: electric vtols fix the fuel cost, using commodity parts makes them cheaper, automation makes them easier to fly, but you still have the negative externalities.

AI makes immediate money; GPT-4 seems to be 100+ percent annual ROI ($60 million to train, $2 billion annual revenue after a year, assuming a 10 percent profit margin).

Replies from: NeroWolfe
comment by NeroWolfe · 2024-02-23T15:21:44.141Z · LW(p) · GW(p)

I may have used too much shorthand here. I agree that flying cars are impractical for the reasons you suggest. I also agree that anybody who can justify it uses a helicopter, which is akin to a flying car.

According to Wikipedia, this is not a concept that first took off (hah!) in the 1970s - there have been working prototypes since at least the mid-1930s. The point of mentioning the idea is that it represents a cautionary tale about how hard it is to make predictions, especially about the future. When cars became widely used (certainly post-WWII), futurists started predicting what transportation tech would look like, and flying cars were one of the big topics. The fact that they're impractical didn't occur to many of the people making predictions. 

I have a strong suspicion that there are flaws in current reasoning about the future, especially as it relates to the threat of AGI. Recall that there was a round of AI hype back in the 1980s that fizzled out when it became clear nothing much worked beyond the toy systems. I think there are good reasons to believe we're in a very dangerous time, but I think there are also reasons to believe that we'll figure it out before we all kill ourselves. Frankly, I'm more concerned about global warming, as that requires absolutely no new technology nor policy changes to be able to kill us or at least put a real dent in global human happiness.

My point is simply that deciding that we're 95% likely to die in the next five years is probably wrong, and if you base your entire set of life choices on that prediction, you are going to be surprised when it turns out differently.

Also, I'm not strongly invested in convincing others of this fact, partly because I don't think I have any special lock on predicting the future. I'm just suggesting you look back farther than 1980 for examples of how people expected things to turn out vs. how they actually did and factor that into your calculations.

[Small edit in the first paragraph for clearer wording]

Replies from: None
comment by [deleted] · 2024-02-23T15:31:50.952Z · LW(p) · GW(p)

The fact that they're impractical didn't occur to many of the people making predictions.

Right I am just trying to ask if you personally thought they were far fetched when you learned of them. Or were there serious predictions that this was going to happen. Flying cars don't pencil in.

AGI financially does pencil in.

AGI killing everyone with 95 percent probability in 5 years doesn't, because it requires several physically unlikely assumptions.

The two assumptions are

A. Being able to optimize an algorithm to use many OOMs less compute than right now.

B. The "utility gain" of superintelligence being so high it can just do things credentialed humans don't think are possible at all. Like developing nanotechnology in a garage rather than needing a bunch of facilities that resemble IC fabs.

If you imagined you might be able to find a way to make flying cars like regular cars, and reach mpgs similar to that of regular cars, and the entire FAA drops dead...

Then yeah flying cars sound plausible but you made physically unlikely assumptions.

comment by Dagon · 2024-02-19T20:15:31.203Z · LW(p) · GW(p)

Yup.  It's just stupid not to AT LEAST contribute the matched or otherwise-incented amount.  This is free money.  As for longer-term amounts and planning, the key question is "what is the alternative use?"

  • Not saving so you can donate more - I think it's confused, but I can't really judge what's important to you.
  • Not saving so you can get full consumption value in the present (vacations, fine dining, better living, etc.).  Still can't judge.  I suspect you'll regret it, but I really don't have a good model of "best" for intertemporal transfers.
  • Saving, but avoiding protected "retirement" plans so you can invest in traditional taxed assets.  This is very hard to justify, for the reasons you give.  I'd classify as mostly dumb.
  • Saving/investing in non-traditional things (angel funding, crypto schemes, etc.) that you can't do in a retirement account.  I'd recommend "do both" - diversify across possible futures and timelines.  I will say, when you're young and have no dependents is the time to take more risks.  In a static world, it's also the time to start traditional investments (including long-term protected accounts).

Thus, I'd separate the aspects of this advice.  "the chance that you'll want money at retirement is large enough to be worth planning for." will depend on specific estimates, and the foregone uses for money that "planning for" entails.  "The money is less restricted than it sounds" is very important, and true.

Replies from: lexande, jkaufman, lorenzo-buonanno
comment by lexande · 2024-02-23T03:08:35.644Z · LW(p) · GW(p)

Note that it is entirely possible to invest in almost all "non-traditional" things within a retirement account; "checkbook IRA" is a common term for a structure that enables this (though the fees can be significant and most people should definitely stick with index funds). Somewhat infamously, Peter Thiel did much of his early angel investing inside his Roth IRA, winding up with billions of dollars in tax-free gains.

comment by jefftk (jkaufman) · 2024-02-19T20:53:28.046Z · LW(p) · GW(p)

Saving, but avoiding protected "retirement" plans so you can invest in traditional taxed assets. This is very hard to justify, for the reasons you give. I'd classify as mostly dumb.

This is the only one I'm trying to argue against in the post, fwiw.

comment by Lorenzo (lorenzo-buonanno) · 2024-02-27T19:53:11.091Z · LW(p) · GW(p)

Not saving so you can donate more - I think it's confused, but I can't really judge what's important to you.


Why do you think it's confused? If some others can benefit >100x more from the money compared to 60-year-old Lorenzo, and their interests are just as important, wouldn't it be rational to reallocate money from him to them?

Replies from: Dagon
comment by Dagon · 2024-02-27T20:24:30.805Z · LW(p) · GW(p)

Not really interested in convincing anyone, and probably shouldn't have mentioned it.  I respect and honor your choices, and the fact that your time preferences and value of optionality compared to your estimate of urgent donation value don't make sense to me is kind of irrelevant.

IMO, even if I do intend to donate some amount over some timeframe (and I do, and I do donate a fair bit concurrently), you're PROBABLY better off investing and growing a fair bit of your assets, and donating it later (when it's larger, and you'll have more information about how to donate well).  This doesn't fully generalize, and there may well be causes where sooner is way better and it's knowable with pretty good certainty now.

comment by Kei · 2024-02-22T05:30:32.203Z · LW(p) · GW(p)

[Edit: There are caveats, which are mentioned below.]

Also, please correct me if I am wrong, but I believe you can withdraw from a retirement account at any time as long as you are ok paying a 10% penalty on the withdrawal amount. If your employer is giving a ~>10% match, this means you'll make money even if you withdraw from the account right away.
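A quick break-even sketch supports this (ignoring income taxes on the withdrawal, which the replies below note can matter): with match rate m, an immediate withdrawal nets (1 + m) × 0.9 per dollar contributed, so the match covers the 10% penalty once m exceeds roughly 11%.

```python
# Hypothetical break-even: contribute, get the employer match,
# then withdraw everything immediately with the 10% penalty.
# Ignores income taxes on the withdrawal (see the replies' caveats).
PENALTY = 0.10

def net_after_immediate_withdrawal(contribution, match_rate):
    total = contribution * (1 + match_rate)
    return total * (1 - PENALTY)

print(round(net_after_immediate_withdrawal(100, 0.50), 1))  # 135.0: ahead of keeping 100
print(round(1 / (1 - PENALTY) - 1, 3))                      # 0.111 break-even match rate
```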

Replies from: CarlShulman, jkaufman
comment by CarlShulman · 2024-02-26T16:21:54.133Z · LW(p) · GW(p)

This is pretty right for pretax individual accounts (401ks may not let you do early withdrawals until you leave). For Roth accounts that have accumulated earnings, early withdrawal means paying ordinary taxes on the earnings, so you miss out on LTCG rates in addition to the 10% penalty.

comment by jefftk (jkaufman) · 2024-02-22T12:06:34.197Z · LW(p) · GW(p)

Often, but not always: your plan might not allow in-service withdrawals, so taking the money out right away might require leaving your company.

comment by elifland · 2024-02-20T19:08:52.419Z · LW(p) · GW(p)