Some altruistically-motivated projects would be valid investments for a Checkbook IRA. I guess if you wanted to donate 401k/IRA earnings to charity you'd still have to pay the 10% penalty (though not the tax if the donation was deductible) but that seems the same whether it's pretax or a heavily-appreciated Roth.
The math in the comment I linked works the same whether the chance of money ceasing to matter in five years' time is for happy or unhappy reasons.
My impression is that the "Substantially Equal Periodic Payments" option is rarely a good idea in practice because it's so inflexible in not letting you stop withdrawals later, potentially even hitting you with severe penalties if you somehow miss a single payment. I agree that most people are better off saving into a pretax 401k when possible and then rolling the money over to Roth during low-income years or when necessary. I don't think this particularly undermines jefftk's high-level point that tax-advantaged retirement savings can be worthwhile even conditional on relatively short expected AI timelines.
I prefer pre-tax contributions over Roth ones now because of my expectation that probably there will be an AI capabilities explosion well before I reach 59.5. If I had all or most of my assets in Roth accounts it would be terrible.
Why would money in Roth accounts be so much worse than having it in pretax accounts in the AI explosion case? If you wanted the money (which would then be almost entirely earnings) immediately you could get it by paying tax+10% either way. But your accounts would be up so much that you'd only need a tiny fraction of them to fund your immediate consumption; the rest you could keep investing inside the 401k/IRA structure.
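A minimal numeric sketch of why the two come out nearly the same once earnings dominate (the balance, tax rate, and flat tax+penalty treatment here are simplified illustrative assumptions, not tax advice):

```python
# Illustrative early withdrawal from a heavily-appreciated Roth vs. pretax
# account, assuming a 30% marginal tax rate plus the standard 10% penalty.
contributions = 10_000
balance = 1_000_000  # hypothetical account value after an AI-driven runup
tax_rate, penalty = 0.30, 0.10

# Pretax: the whole withdrawal is taxed and penalized.
pretax_net = balance * (1 - tax_rate - penalty)

# Roth: contributions come out tax- and penalty-free; only earnings
# (here, almost the entire balance) are taxed and penalized.
earnings = balance - contributions
roth_net = contributions + earnings * (1 - tax_rate - penalty)

print(pretax_net)  # 600000.0
print(roth_net)    # 604000.0 -- nearly identical when earnings dominate
```

The gap is just the contributions coming out tax-free on the Roth side, which is negligible after a large runup.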
You have to be really confidently optimistic or pessimistic about AI to justify a major change in consumption rates; if you assign a significant probability to "present rate no singularity"/AI winter futures then the benefits of consumption smoothing dominate and you should save almost as much (or as little) as you would if you didn't know about AI.
Note that it is entirely possible to invest in almost all "non-traditional" things within a retirement account; "checkbook IRA" is a common term for a structure that enables this (though the fees can be significant and most people should definitely stick with index funds). Somewhat infamously, Peter Thiel did much of his early angel investing inside his Roth IRA, winding up with billions of dollars in tax-free gains.
In particular it seems very plausible that I would respond by actively seeking out a predictable dark room if I were confronted with wildly out-of-distribution visual inputs, even if I'd never displayed anything like a preference for predictability of my visual inputs up until then.
It seems like a major issue here is that people often have limited introspective access to what their "true values" are. And it's not enough to know some of your true values; in the example you give the fact that you missed one or two causes problems even if most of what you're doing is pretty closely related to other things you truly value. (And "just introspect harder" increases the risk of getting answers that are the results of confabulation and confirmation bias rather than true values, which can cause other problems.)
Here's an attempt to formalize the "is partying hard worth so much" aspect of your example:
It's common (with some empirical support) to approximate utility as proportional to log(consumption); I'll use base-10 logs throughout. Suppose Alice has $5M of savings and expected-future-income that she intends to consume at a rate of $100k/year over the next 50 years, and that her zero utility point is at $100/year of consumption (since it's hard to survive at all on less than that). Then she's getting log(100000/100) = 3 units of utility per year, or 150 over the 50 years.
Now she finds out that there's a 50% chance that the world will be destroyed in 5 years. If she maintains her old spending patterns her expected utility is .5*log(1000)*50 + .5*log(1000)*5 = 82.5. Alternately, if interest rates were 0%, she might instead change her plan to spend $550k/year over the next 5 years and then $50k/year subsequently (if she survives). Then her expected utility is log(5500)*5 + .5*log(500)*45 = 79.4, which is worse. In fact her expected utility is maximized by spending $182k/year over the next five years and $91k/year after that, yielding an expected utility of about 82.9, only a tiny increase in EV. If she has to pay extra interest to time-shift consumption (either via borrowing or forgoing investment returns) she probably just won't bother. So it seems like you need very high confidence of very short timelines before it's worth giving up the benefits of consumption-smoothing.
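For anyone who wants to poke at these numbers, here's a rough reproduction of the optimization (a sketch under the assumptions above: base-10 logs, zero interest, a $100/year utility floor):

```python
import math

TOTAL = 5_000_000   # savings plus expected future income, in dollars
FLOOR = 100         # zero-utility consumption level, $/year
P_DOOM = 0.5        # chance the world ends in 5 years

def expected_utility(spend_first_5):
    """EU of spending `spend_first_5` per year for 5 years, then
    splitting the remainder evenly over the final 45 years."""
    remainder = TOTAL - 5 * spend_first_5
    spend_later = remainder / 45
    eu_first = 5 * math.log10(spend_first_5 / FLOOR)          # happens either way
    eu_later = (1 - P_DOOM) * 45 * math.log10(spend_later / FLOOR)
    return eu_first + eu_later

# Scan candidate early-spending rates.
best = max(range(100_000, 1_000_000, 1000), key=expected_utility)
print(best, round(expected_utility(best), 1))  # ~182000, ~82.9
```

(Solving the first-order condition analytically gives early spending of TOTAL/27.5 ≈ $182k/year, matching the grid search.)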
Why would you expect her to be able to diminish the probability of doom by spending her million dollars? Situations where someone can have a detectable impact on global-scale problems by spending only a million dollars are extraordinarily rare. It seems doubtful that there are even ways to spend a million dollars on decreasing AI xrisk now when timelines are measured in years (as the projects working on it do not seem to be meaningfully funding-constrained), much less if you expected the xrisk to materialize with 50% probability tomorrow (less time than it takes to e.g. get a team of researchers together).
I think it generally makes sense to try to smooth personal consumption, but that for most people I know this still implies a high savings rate at their first high-paying job.
- As you note, many of them would like to eventually shift to a lower-paying job, reduce work hours, or retire early.
- Even if this isn't their current plan, burnout is a major risk in many high-paying career paths and might oblige them to do so, and so there's a significant probability of worlds where the value of having saved up money during their first high-paying job is large.
- If they're software engineers in the US they face the risk that US software engineer salaries will revert to the mean of other countries and other professional occupations. https://www.jefftk.com/p/programmers-should-plan-for-lower-pay
- If they want but don't currently have children, then even if their income is higher later in their career, it's likely that their income-per-household-member won't be. Childcare and college costs mean they should probably be prepared to spend more per child in at least some years than they currently do on their own consumption.
Yeah that's essentially the example I mentioned that seems weirder to me, but I'm not sure, and at any rate it seems much further from the sorts of decisions I actually expect humanity to have to make than the need to avoid Malthusian futures.
I'm happy to accept the sadistic conclusion as normally stated, and in general I find "what would I prefer if I were behind the Rawlsian Veil and going to be assigned at random to one of the lives ever actually lived" an extremely compelling intuition pump. (Though there are other edge cases that I feel weirder about, e.g. is a universe where everyone has very negative utility really improved by adding lots of new people of only somewhat negative utility?)
As a practical matter though I'm most concerned that total utilitarianism could (not just theoretically but actually, with decisions that might be locked-in in our lifetimes) turn a "good" post-singularity future into Malthusian near-hell where everyone is significantly worse off than I am now, whereas the sadistic conclusion and other contrived counterintuitive edge cases are unlikely to resemble decisions humanity or an AGI we create will actually face. Preventing the lock-in of total utilitarian values therefore seems only a little less important to me than preventing extinction.
I think
- Humans are bad at informal reasoning about small probabilities since they don't have much experience to calibrate on, and will tend to overestimate the ones brought to their attention, so informal estimates of the probability of very unlikely events should usually be adjusted even lower.
- Humans are bad at reasoning about large utilities, due to lack of experience as well as issues with population ethics and the mathematical issues with unbounded utility, so estimates of large utilities of outcomes should usually be adjusted lower.
- Throwing away most of the value in the typical case for the sake of an unlikely case seems like a dubious idea to me even if your probabilities and utility estimates are entirely correct; the lifespan dilemma and similar results are potential intuition pumps about the issues with this, and go through even with only single-exponential utilities at each stage. Accordingly I lean towards overweighting the typical range of outcomes in my decision theory relative to extreme outcomes, though there are certainly issues with this approach as well.
As far as where the penalty starts kicking in quantitatively, for personal decisionmaking I'd say somewhere around "unlikely enough that you expect to see events at least this extreme less than once per lifetime", and for altruistic decisionmaking "unlikely enough that you expect to see events at least this extreme less than once in the history of humanity". For something on the scale of AI alignment I think that's around 1/1000? If you think the chances of success are still over 1% then I withdraw my objection.
The Pascalian concern aside I note that the probability of AI alignment succeeding doesn't have to be *that* low before its worthwhileness becomes sensitive to controversial population ethics questions. If you don't consider lives averted to be a harm then spending $10B to decrease the chance of 10 billion deaths by 1/10000 is worse value than AMF. If you're optimizing for the average utility of all lives eventually lived then increasing the chance of a flourishing future civilization to pull up the average is likely worth more but plausibly only ~100x more (how many people would accept a 1% chance of postsingularity life for a 99% chance of immediate death?) so it'd still be a bad bet below 1/1000000. (Also if decreasing xrisk increases srisk, or if the future ends up run by total utilitarians, it might actually pull the average down.)
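Spelling out the arithmetic behind the first comparison (the AMF cost-per-life figure is a rough assumption of mine):

```python
# Rough cost-effectiveness comparison, counting only deaths averted
# (i.e. treating lives never created as neither harm nor benefit).
spend = 10e9                 # $10B on alignment
lives_at_stake = 10e9        # 10 billion deaths in the bad outcome
risk_reduction = 1 / 10_000  # assumed probability shift bought

expected_lives = lives_at_stake * risk_reduction    # 1,000,000
cost_per_life = spend / expected_lives              # $10,000
amf_cost_per_life = 5_000    # rough assumption for AMF's marginal cost
print(cost_per_life, amf_cost_per_life)  # 10000.0 vs 5000 -> AMF wins
```

At an assumed $5k per life for AMF, the alignment spend would need roughly a 1/5000 probability shift to break even on this accounting.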
> I think that I'd easily accept a year of torture in order to produce ten planets worth of thriving civilizations. (Or, if I lack the resolve to follow through on a sacrifice like that, I still think I'd have the resolve to take a pill that causes me to have this resolve.)
I'd do this to save ten planets' worth of thriving civilizations, but doing it to produce ten planets' worth of thriving civilizations seems unreasonable to me. Nobody is harmed by preventing their birth, and I have very little confidence either way as to whether their existence will wind up increasing the average utility of all lives ever eventually lived.
There's some case for it but I'd generally say no. Usually when voting you are coordinating with a group of people with similar decision algorithms who you have some ability to communicate with; the chance of your whole coordinated group changing the outcome is fairly large, and your own contribution to it pretty legible. This is perhaps analogous to being one of many people working on AI safety if you believe that the chance that some organization solves AI safety is fairly high (it's unlikely that your own contributions will make the difference, but you're part of a coordinated effort that likely will). But if you believe it is extremely unlikely that anybody will solve AI safety then the whole coordinated effort is being Pascal-Mugged.
This is Pascal's Mugging.
Previously comparisons between the case for AI xrisk mitigation and Pascal's Mugging were rightly dismissed on the grounds that the probability of AI xrisk is not actually that small at all. But if the probability of averting the xrisk is as small as discussed here then the comparison with Pascal's Mugging is entirely appropriate.
The cost of Covid is not just unlikely chronic effects, nor vanishingly-unlikely-with-three-shots severe/fatal effects, but also making you feel sick and obliging you to quarantine for ~five days (and probably send some uncomfortable emails to people you saw very recently). With the understandable abandonment of NPIs and the need to get on with life, the chance that you will catch Covid in a given major wave if not recently boosted seems pretty high, perhaps 50%? (There were 30M confirmed US cases during the Omicron wave, and for most of the pandemic confirmed cases seemed to undercount true cases by about 3x, which comes to 27% of the US population despite recent boosters and NPIs.) A 100% chance of losing one predictable day (plus perhaps a 5% chance of losing five days) seems much better than a 50% chance of losing five unpredictable days.
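The expected-days arithmetic made explicit (all the probabilities are the rough guesses above, not measured figures):

```python
# Undercount sanity check: confirmed Omicron-wave cases times an assumed
# 3x undercount factor, as a fraction of the US population.
print(30e6 * 3 / 330e6)      # ~0.27

# Expected days lost under each policy, using the guessed probabilities.
boost = 1.0 * 1 + 0.05 * 5   # one predictable sick day, small residual Covid risk
no_boost = 0.5 * 5           # ~50% chance of five unpredictable days
print(boost, no_boost)       # 1.25 vs 2.5
```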
- Is there any reason to think research that could lead to malaria vaccines is funding-constrained? There doesn't seem to be any shortage of in-mice studies, and in light of Eroom's Law the returns on marginal biomedical research investment seem low.
- Malaria is preventable and curable with existing drugs, so vaccines for it only make sense if their cost (including required research) works out lower than preventing it in other ways, which means some strategies that made sense for something like Covid won't make sense here.
- That's not how international waters works, you're still subject to the jurisdiction of the flag country and if they're okay with your trial you could do it more cheaply on land there.
- If you attempt an end-run of the developed-country regulators with your trial they will just refuse to approve anything based on your trial data, which is why pharma companies don't jurisdiction-shop much at present.
- That said developed country regulators do in fact approve challenge trials for malaria vaccines (as I noted) and vaccines for other curable diseases. Regulatory & IRB frameworks no doubt still add a bunch of overhead but this does further bound the potential benefits of attempting to work outside them.
- I don't know what "focusing on epistemics" could possibly entail in terms of concrete interventions. Trying to develop prediction markets, I suppose? I have updated away from the usefulness of those based on their performance over the past year though, and it seems like they are more constrained by policy than by lack of marginal funding (at retail donor levels).
- Policy change is still intractable.
- In general there are lots of margins on which the world might be improved, but the vast majority of them are not plausibly bottlenecked on resources that I or most EAs I know personally control. Learning about a few more such margins is not a significant update. I focus on bednets not because I think it's unusually much more important than other world-improving margins, nor because I think it will be a margin where unusually much improvement happens in coming years, but because it's a rare case of a margin where I think decisions I can make personally (about what to do with my disposable income dollars) are likely to have a nontrivial impact.
> It’s plausible that the Covid-19 pandemic could end up net massively saving lives, and a lot of Effective Altruists (and anyone looking to actually help people) have some updating to do. It’s also worth saying that 409k people died of malaria in 2020 around the world, despite a lot of mitigation efforts, so can we please please please do some challenge trials and ramp up production in advance and otherwise give this the urgency it deserves?
What update is this supposed to cause for Effective Altruists? We already knew that policy around all sorts of global health (and other) issues is very far from optimal, but there's nothing we can do about that. Even a global pandemic wasn't enough to get authorities to treat trials and approvals with appropriate urgency and consideration of the costs of inaction, so what hope would a tiny number of advocates have? We can fantasize all day about what we'd do if we ran the world, but back in reality policy change is intractable and donating to incrementally-scalable interventions like bednets remains the best most of us can personally do. Or am I misunderstanding what you meant here?
(Note also that malaria vaccine human challenge trials were already a thing; Effective Altruist John Beshir participated as a subject in one in 2019.)
> Yes, I'm conflating "BLM movement" and "individual Americans who want to help BLM achieve its goals" because isn't it the same thing.
No? I want to help BLM achieve its goals, but "launch a nationwide discussion" and "come to a consensus policy" are not actions I can personally take. If I post policy proposals on Facebook it seems unlikely to me that many people will read or be influenced by them; it also seems unlikely that they would be better than many other policy ideas already out there. If you actually do think that lack of policy ideas is the most important bottleneck for BLM and that personal Facebook posts by non-experts is a promising way of addressing it then that's a possible answer, but if so I'd like to see your analysis for why you believe that.
> find solutions that both sides support
Note that at the national level this is inherently very difficult because for any proposal made by one party, the other party has an incentive to oppose it in order to deny the proposing party a victory (and the accompanying halo of strength and efficacy). But fortunately this is not necessarily a problem for at least some approaches to the police reform issue, because police are mostly controlled by state & city governments, and as noted many states and cities are under undisputed Democratic Party control, so the relevant politics are within rather than between parties.
> defend shops from looters so people have more sympathy for your side
This seems to have already been done; reports of looting have become increasingly rare and polls report public sympathy for BLM is very high.
The question was not what the "BLM movement" should do, but what an individual American should do; your steps do not seem actionable for individuals. Your steps 1 and 2 also partly beg the question.
Additionally, assuming the support of all Democratic politicians is highly dubious; a number of cities that have been marked by highly visible abusive police behavior in recent weeks are already controlled by Democratic mayors and city councils, who in many cases have nonetheless refused to hold the police accountable. And support of 50% of the population (which BLM now has) is certainly not always enough to pass "whatever policy you want" absent coordinated organization and in the face of political inertia; for example marijuana legalization has had majority nationwide support for years but has no near-term prospect of passage at the federal level.
1) People are probably less likely to throw out stale bread if it's impossible to obtain fresh bread?
2) If the price of e.g. fish is less regulated but generally higher than that of bread, banning fresh bread would lead to a larger rise in the price of fish as more rich people switch to it, which would perhaps lead to fishermen working longer hours and catching more fish, helping make up the overall calorie shortfall from the poor harvest without increasing costs for poor people who could never afford fish in the first place. Whereas letting the price of bread itself rise would be more regressive?
3) Same as with fish but with meat from livestock, pushing tradeoffs in the direction of "slaughter this year" vs "keep fattening up for next year", which could be desirable if the wheat shortage is expected to be temporary, and might even decrease demand for wheat as livestock feed if that was a thing at the time?
Not sure how large any of these effects would be.
Since apparently some confirmed cases never develop symptoms (this study of Diamond Princess passengers estimates 18%), it seems the answer to your second question is "never"?
The world population is not infinite. If somebody moves to San Francisco that means lower demand and lower rents wherever they came from (and conversely many other US cities now have housing crises caused by exiles from San Francisco). The desirable cities should be allowed to expand until there is more than enough room for everybody (yes, everybody) who wants to live in them to live in them, at which point landlords will no longer have the leverage to keep rents high.
Next time I see somebody say "shoot for the moon so if you miss you'll land among the stars" I'm going to link them here.
You seem to be saying that you prefer general words that encompass many concepts rather than specific and more precise words.
I can believe that you meant something more specific and precise than "worrying sometimes makes things worse" when you said "secondary stressors", but your post failed to get any more precise distinction across, and if people used the term as jargon they wouldn't be using it for anything more precise than "worrying making things worse". (Less sure about the motivation vs "tactile ambition" example since I don't know of any decent framework for thinking about motivation.)
Yeah, Lesswrong sometimes feels a bit like a forum for a fad diet that has a compelling story for why it might work and seemed to have greatly helped a few people anecdotally, so the forum filled up with people excited about it, but it doesn't seem to actually work for most of them. Yet they keep coming back because their friends are on the forum now and because they don't want to admit failure.
FDIC doesn't insure safe deposit boxes. It does insure your checking account balance, but your bank still has to figure out somewhere with a nonnegative interest rate to put your money (since the FDIC insurance triggers only after the bank itself is wiped out). Or find a way to charge you enough fees to make your effective interest rate negative.
Yeah, ignoring the option to declare bankruptcy or foreclose, effectively bounding your downside, seems like a major gap in this analysis. Especially as many jurisdictions usually allow people to keep significant assets (primary residence, 401ks) in bankruptcy. (Though on the other hand since 2005 US bankruptcy law obliges many filers to accept "repayment plans" for some fraction of what they owe, so it's not quite "discharging your debt for free".) That said I guess the most common debt for people reading this post is probably nondischargeable student debt; it makes sense if it's mainly talking about that.
Bank lockboxes have fees, which typically work out to more negative interest than the most negative actually-observed government-debt interest rates. (Indeed the operating & insurance costs of bank lockboxes at scale are basically a lower bound on how low government-debt interest rates can go in the market; this article from the European interest-rate lows in 2016 suggests insurance costs of 0.5-1%.)
Bitcoin is (currently) pretty much useless as a medium of exchange. It remains of some practical use as a store of value resilient to certain legal risks (e.g. as the answer to the question Eliezer asked in this Facebook post), and in general with a risk profile uncorrelated with other assets. Its strength over other cryptocurrencies for this use case is based primarily on being the most established Schelling point. It's also possible (though not looking particularly likely) that future software changes will eventually make it useful as a medium of exchange again.
I'm one of the 15%. Given declining marginal utility of money, high-risk-high-financial-reward bets have never appealed to me; the financial EV would have to be ridiculously high for the EV in utilons to be positive. I considered getting some BTC as a curiosity in 2011 but decided it was too much hassle. However discussions in the aftermath of the 2016 election led me to conclude that holding a small amount of cryptocurrency could decrease overall risk by mitigating certain legal risks (e.g. money you can memorise might be good to have if you're a fleeing refugee), and so I bought some in early 2017, despite assuming that the then-current price was basically efficient and so expecting 0% average returns. I have of course been pleasantly surprised by the returns since then, but continue to make decisions based on the assumption of 0% expected returns going forward. (I'm waiting to rebalance out until the capital gains are long-term for tax purposes.)
I agree that this is the appropriate strategy to use when adding an investment to your portfolio, but note that if applied to Bitcoin it did not yield the sort of enormous gains that motivated this post. So if you think the Bitcoin example should lead us to update away from outside-view-motivated beliefs about our ability to spot market inefficiencies/investment opportunities, you should probably also endorse updating away from outside-view-motivated portfolio strategies like picking an allocation and rebalancing.
I just ran some numbers on this. Suppose you had $100k in savings, read the 2011 LessWrong post and were convinced to adopt a 95% cash 5% bitcoin allocation at the end of Q1 2011, and thereafter rebalanced on the last Monday of every quarter. (Assume for simplicity that your non-Bitcoin holdings earn zero interest, that you don't add or remove any money from your total savings during the period, and that you successfully avoided having your BTC stolen in MtGox etc.) If you ignore taxes, then at the end of 2017 you'd be left with $414k, which is decent but not life-changing. Further, since you're rebalancing every quarter you're paying a lot of taxes if you're in the US; assuming a federal+state short term capital gains rate of 30% you'd end up with $284k. (By only rebalancing yearly you can decrease your tax liability but you miss out on some of the big rallies; assuming a 15% long-term capital gains rate you end up with $258k.)
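A sketch of that rebalancing computation, for anyone who wants to rerun it (you'd have to supply the actual end-of-quarter BTC price series; the prices below are placeholders, not the data behind the figures above):

```python
def rebalance_backtest(prices, start_cash=100_000, btc_frac=0.05):
    """Simulate a fixed 95/5 cash/BTC allocation, rebalanced each period.

    `prices` is a list of BTC prices at each rebalancing date, oldest
    first. Cash is assumed to earn zero interest; taxes are ignored.
    """
    total = start_cash
    btc = (total * btc_frac) / prices[0]   # initial BTC purchase
    cash = total * (1 - btc_frac)
    for price in prices[1:]:
        total = cash + btc * price         # mark to market
        btc = (total * btc_frac) / price   # rebalance back to 5% BTC
        cash = total * (1 - btc_frac)
    return cash + btc * prices[-1]

# Placeholder prices only -- substitute real end-of-quarter BTC prices.
print(rebalance_backtest([0.78, 15.40, 5.03, 4.25]))
```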
By contrast just buying the same $5k of BTC in Q1 2011 and hodling until the end of 2017 would leave you with around $75M (perhaps $60M after tax), which is more like the sort of "winning" Scott seems to be thinking about here. But how would you know to do that rather than, say, selling in mid-2011 for $100k?
Pat and Maude's arguments seem somewhat more reasonable if they're essentially saying "if you're so smart, why aren't you high-status?" Since nearly everyone (including many people who explicitly claim not to) places a high value on status at least within some social sphere, and status is instrumentally useful for so many goals even if you don't value it terminally, a human can be assumed to already be trying as hard as they can to increase their status, and thus it's a decent predictive proxy for their general ability to achieve their goals. "I hesitate to call anyone a rationalist other than the person smiling down from atop a huge pile of utility." (Eliezer himself has in fact acquired impressive amounts of status in many of the spheres he seems to care about, so this is quite a weak argument against him, but the same is probably not true for most of the audience of this piece.)
In my experience "pop" is connotationally very different from how the Boston rationalists "backthumb"; "backthumb" contains a value judgement that the nascent conversation branch would be a poor use of time even if there is much that could be said about it, while "pop" is primarily used to return to a previous topic after a conversation branch has exhausted itself naturally.
I notice that I am confused why people are so extremely disinclined to keep gratitude journals (the effect of which does apparently replicate) even when they report doing it makes them feel better. (Of course I don't keep one either, the idea seems aversive and I don't know why.)
The social reality of how hard you can reasonably be expected to try/the "standard amount" of trying is actually really important, because it gates the tremendous value of social diversification.
After Hurricane Sandy, when lower Manhattan was without power but I still had power in upper Manhattan, I let a couple of friends sleep in my double bed while I slept on my own couch. In principle they could have applied more dakka to ensure their apartment would be livable in natural disasters, but this would be very expensive and the ability to fall back on mutual aid creates a lot of value by decreasing the need for such extraordinary precautions, mitigating unknown unknowns, and lowering everybody's P(totally fucked). (Especially when this scales up from a temporary local power outage to something like being an international refugee.)
On the other hand, if a couple I knew similarly well had shown up in NYC with no notice asking if they could sleep in my bed while I slept on my couch, I would say no. If they had booked a confirmed Airbnb but the host had flaked out at the last minute, I'd probably say yes. If they had gone to Aqueduct Racetrack expecting to win enough for a hotel room but their horse had lost, I'd say no. It seems to me this mostly comes down to whether they had a prima facie reasonable plan, or whether they were predictably likely to take unfair advantage of mutual aid all along in a way that needs to be timelessly disincentivised. But this means a lot depends on what your particular subculture considers "prima facie reasonable" and what unknowns it considers known.
(Which of the following are prima facie reasonable to rely on in retirement planning such that you can fairly expect aid from more fortunate friends if they fail? 1. bitcoin investments, 2. stock index investments, 3. gold, 4. a home you own in the US, 5. USD in savings accounts, 6. a defined-benefit employer pension, 7. US Social Security, 8. US Medicare, 9. absence of punitive wealth taxes in the US, 10. your ability to get a new job after years out of the workforce, 11. your children's unconditional support, 12. the singularity arriving before you get too old to work. The answer will vary a lot with your social circle's politics and memes with only a very indirect dependence on wider objective reality.)
That doesn't explain why subjects who thought a good heart would mean a lower post-exercise pain threshold took their hands out sooner.
Looking at the actual data from the article (since Yvain neglected to state the results of the second case): subjects told that a good heart was correlated with a higher pain threshold after exercise showed an 11.84-second increase in mean immersion time, while subjects told that a good heart was correlated with a decrease in pain threshold showed a 7.63-second decrease in mean immersion time.