Comments sorted by top scores.
comment by Yair Halberstadt (yair-halberstadt) · 2022-03-01T19:06:16.048Z · LW(p) · GW(p)
This is making a lot of assumptions, such that your principle only holds in specific cases rather than generally. E.g.:
- There's no time discounting
- Utility is linear in amount of Factorio played (the agent is not risk averse).
- The way to maximize the chance of playing Factorio for a million years is to save the world.
The first two points should be obvious. To illustrate the third point: assume that the chance the world survives for a million years is 1%, but the chance Factorio survives a million years conditional on that is 0.1%. Then your best use of your money is to invest it in making sure Factorio survives, not in making sure humanity survives.
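To make the third point concrete, here is a rough back-of-the-envelope sketch (Python). The 1% and 0.1% figures come from the example above; the ±0.5-percentage-point effect of your money is a made-up assumption for illustration only.

```python
# Expected Factorio-years = P(world survives) * P(Factorio survives | world survives) * horizon
HORIZON = 1_000_000  # years of Factorio if both the world and Factorio survive

def expected_factorio_years(p_world, p_factorio_given_world):
    return p_world * p_factorio_given_world * HORIZON

baseline      = expected_factorio_years(0.010, 0.001)  # -> 10 expected years
fund_humanity = expected_factorio_years(0.015, 0.001)  # +0.5pp to world survival    -> 15
fund_factorio = expected_factorio_years(0.010, 0.006)  # +0.5pp to Factorio survival -> 60

print(baseline, fund_humanity, fund_factorio)
# With these (made-up) numbers, the same absolute probability bump buys far more
# expected Factorio-years when spent on Factorio's survival than on humanity's.
```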
More practically, if I were a completely selfish person: the chance of me solving anti-aging is very slim. But if somebody else does solve it, I want to make sure I can get hold of it. So the best use of my money is to save it, so that I'll be able to afford anti-aging treatment once it exists.
Replies from: rank-biserial
↑ comment by rank-biserial · 2022-03-02T01:51:38.501Z · LW(p) · GW(p)
Ooh, good points. Gonna go retract it
comment by Donald Hobson (donald-hobson) · 2022-03-01T13:50:42.815Z · LW(p) · GW(p)
This makes the assumption of a 0 discount rate. And that there is a philosophically clear way to point to some ancient posthuman superbeing and say "that's me".
Replies from: rank-biserial, rank-biserial
↑ comment by rank-biserial · 2022-03-01T17:01:24.219Z · LW(p) · GW(p)
Also, I like the implication that, to a first approximation, Virtue is a low time-preference.
↑ comment by rank-biserial · 2022-03-01T15:06:43.586Z · LW(p) · GW(p)
> This makes the assumption of a 0 discount rate.
Yeah, that particular utility function I described doesn't have a discount rate. Wouldn't a superintelligent agent with a nonzero discount rate still seek self-preservation? What about the other instrumentally convergent goals?
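As a toy illustration of how a nonzero discount rate interacts with that plan, here is a minimal sketch. The per-year utility of 1, the 60 remaining years without a cure, and the assumption that the 50-year effort succeeds with certainty are all made-up numbers, not claims from this thread.

```python
# Discounted utility of two toy plans, assuming 1 util per Factorio-year and 0 otherwise.
# Plan A ("play_now"): play Factorio immediately, die of old age after 60 more years.
# Plan B ("cure_first"): 50 Factorio-free years curing aging (assumed to succeed),
#                        then 1,000,000 years of Factorio.

def discounted_block(gamma, start, length):
    """Sum of gamma**t for t in [start, start + length), via the geometric series formula."""
    return gamma**start * (1 - gamma**length) / (1 - gamma)

for gamma in (0.99, 0.9):
    play_now   = discounted_block(gamma, start=0,  length=60)
    cure_first = discounted_block(gamma, start=50, length=1_000_000)
    print(f"gamma={gamma}: play_now={play_now:.2f}, cure_first={cure_first:.2f}")

# gamma=0.99 -> play_now ~ 45.3, cure_first ~ 60.5: mild discounting still favours the long plan
# gamma=0.90 -> play_now ~ 10.0, cure_first ~ 0.05: heavy discounting flips the conclusion
```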
> And that there is a philosophically clear way to point to some ancient posthuman superbeing and say "that's me".
A superintelligent agent that was perfectly aligned with my values wouldn't be me. But I would certainly take any advice it gave me. Given that virtually all possible superintelligent agents seek self-preservation, I can assume with very high confidence that the me-aligned superintelligence would too. Put another way, if I were scanned and uploaded to a computer, and I somehow figured out how to recursively self-improve, I would probably FOOM. That superintelligence would be me, I think.
comment by seed · 2022-03-01T09:11:50.392Z · LW(p) · GW(p)
Wait, I thought EA already had $46 billion they didn't know where to spend, so I should prioritize direct work over earning to give? https://80000hours.org/2021/07/effective-altruism-growing/
Replies from: rank-biserial
↑ comment by rank-biserial · 2022-03-01T09:30:43.178Z · LW(p) · GW(p)
I thought so too. This comment thread on ACX shattered that assumption of mine. EA institutions should hire people to do "direct work". If there aren't enough qualified people applying for these positions, and EA has 46 billion dollars, then their institutions should (get this) increase the salaries they offer until there are.
> There's this thing called "Ricardo's Law of Comparative Advantage". There's this idea called "professional specialization". There's this notion of "economies of scale". There's this concept of "gains from trade". The whole reason why we have money is to realize the tremendous gains possible from each of us doing what we do best.
> This is what grownups do. This is what you do when you want something to actually get done. You use money to employ full-time specialists.
Replies from: rank-biserial
↑ comment by rank-biserial · 2022-03-02T01:53:59.151Z · LW(p) · GW(p)
Gonna go make this a shoddy top-level post, I think this comment chain is far more important than its parent post.
comment by Rene de Visser (rene-de-visser) · 2022-03-01T16:18:42.304Z · LW(p) · GW(p)
An agent typically maximizes their expected utility, i.e. they make the choices under their control that lead to the highest expected utility.
If they predict that their efforts toward solving aging and mitigating other risks to themselves have minimal effect on their expected utility, they will spend most of their time playing Factorio while they can. This will lead to the maximum expected utility.
If they spend all their time trying not to die, and then they die, their total utility will be zero.
Replies from: rank-biserial
↑ comment by rank-biserial · 2022-03-01T16:24:51.886Z · LW(p) · GW(p)
The idea isn't to spend all your time trying not to die. The idea is to spend fifty years now so you can have millions of Factorio-years later.
comment by gbear605 · 2022-03-02T01:21:01.579Z · LW(p) · GW(p)
Suppose that curing aging gives you an extra million* years of Factorio playing. Then, to be worth it, spending the next 50 years curing aging has to increase the odds of curing aging by 1/20,000. That might be possible, but that's a pretty large probability shift to attribute to one person's contribution. I'd expect that the estimated Factorio-years gained from a year of working on curing aging is less than one. In that case, the rational thing to do would be to play Factorio for now.
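To spell out that break-even arithmetic, here is a minimal sketch assuming no time discounting and utility linear in Factorio-years, as in the comments above:

```python
# Break-even check for the numbers above: giving up 50 years of playing now,
# versus a probability increase dp of gaining an extra 1,000,000 Factorio-years.
cost_years  = 50
bonus_years = 1_000_000

print(cost_years / bonus_years)  # 5e-05, i.e. the 1/20,000 break-even point

def net_expected_gain(dp):
    """Expected Factorio-years gained by working, minus the 50 years forgone."""
    return dp * bonus_years - cost_years

print(net_expected_gain(1 / 20_000))   #   0.0 -> exactly break-even
print(net_expected_gain(1 / 100_000))  # -40.0 -> keep playing
print(net_expected_gain(1 / 1_000))    # 950.0 -> worth switching
```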
One way around this, though, would be game theory and coordination. Perhaps you could create a pact such that, if all of the other Factorio-obsessed people out there also signed it, you would all immediately switch to curing aging instead of playing Factorio. Getting a large number of people all working on it might increase the chance of curing aging enough to be worth it.
* I said a million years as an arbitrary number. At some point unknown or unavoidable risks dominate, and most people would engage in time discounting as well. You could probably go up an order of magnitude and still have the argument hold.