Comments sorted by top scores.
↑ comment by Gurkenglas · 2021-01-09T15:55:30.340Z · LW(p) · GW(p)
To avert Idiocracy? Just clone Einstein.
↑ comment by Viliam · 2021-02-06T14:53:05.280Z · LW(p) · GW(p)
"What is so special about this cosmological era?"
It has atoms. Also stars.
Replies from: ete
↑ comment by plex (ete) · 2021-02-06T15:42:12.621Z · LW(p) · GW(p)
A whole lot of other times have those. In fact, according to Wikipedia:
10^13 (10 trillion) years: Estimated time of peak habitability in the universe, unless habitability around low-mass stars is suppressed.
It's not surprising that we don't find ourselves in, say, the era where there are just black holes, but observing that we are right near the start of what looks like a very long period where life seems possible is something to think about.
One answer is the simulation hypothesis [? · GW], combined with the observation that we seem to be living in very interesting times.
↑ comment by Dagon · 2021-02-03T20:20:03.056Z · LW(p) · GW(p)
Can you clarify the meaning of "possible" in this definition? If you mean "a reachable state of the universe", then either we're already in the utopia (anything possible is possible, right?) or there is no such thing as a utopia (impossible things are not possible, right)?
↑ comment by Dagon · 2021-01-25T00:44:37.398Z · LW(p) · GW(p)
The math of very wide value ranges requires some assumptions about "compared to what" before you can calculate anything useful for decisions. Try putting the calculations together to figure out what "a bit" actually is: how much should you spend, and on which lottery, this week?
I suspect you'll find that the lottery is better than buying junk food, and worse than almost any other charitable or durable-value use of money.
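A minimal sketch of that comparison, with all numbers (odds, prize, ticket price) as illustrative assumptions rather than anything from the thread:

```python
# Expected dollar value of one assumed lottery ticket, to anchor the
# "compared to what" question. None of these numbers are from the thread.
p_win = 1 / 300_000_000      # assumed jackpot odds
jackpot = 100_000_000        # assumed jackpot, in dollars
ticket_price = 1

ev = p_win * jackpot
print(f"EV of a ${ticket_price} ticket: ${ev:.2f}")  # ~$0.33 back per $1
# Whether that loss is "worth it" depends on the counterfactual: what
# utility the same dollar buys as junk food, charity, or savings.
```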
↑ comment by Dagon · 2021-01-08T18:18:38.358Z · LW(p) · GW(p)
Alternate framing: regret is the mechanism for reinforcing something you learned about your behavior. Noticing that you wish you'd done something differently is equivalent to adding weight to better future decisions.
And like all learning, too much regret can be worse than too little. Overfitting can lead to even worse predictions/decisions.
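A toy sketch of the overfitting analogy, where "regret strength" plays the role of a learning rate (all numbers are assumptions for illustration):

```python
import random

random.seed(0)
true_value = 0.7  # the choice really is this good

def learn(regret_strength: float, steps: int = 100) -> float:
    """Estimate a choice's value from noisy outcomes, updating by regret."""
    estimate = 0.5
    for _ in range(steps):
        outcome = true_value + random.gauss(0, 0.5)          # noisy feedback
        estimate += regret_strength * (outcome - estimate)   # weighted update
    return estimate

for strength in (0.05, 0.9):
    print(f"regret strength {strength}: estimate {learn(strength):.2f}")
# Moderate updates settle near the true 0.7; near-1.0 updates mostly track
# the latest noisy outcome -- the "too much regret" failure mode.
```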
↑ comment by Dagon · 2020-12-08T04:21:33.875Z · LW(p) · GW(p)
There's a ton hiding behind that "let's say". How many civilizations are we actually comparing, and how similar are they to ours?
In any case, if our civilization is similar to the mean civilization that faced filter X, then 10% seems right. If our civilization is different from that reference class, the chance could be quite different.
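A toy illustration of how much the reference class matters, with made-up shares and pass rates chosen so the population average comes out to 10%:

```python
# (share of civilizations, chance of passing filter X) -- assumed numbers
types = {"like_us": (0.3, 0.25), "unlike_us": (0.7, 0.0357)}

mean_rate = sum(share * rate for share, rate in types.values())
print(f"population mean pass rate: {mean_rate:.1%}")                   # ~10.0%
print(f"rate for civilizations like ours: {types['like_us'][1]:.1%}")  # 25.0%
# The same 10% aggregate hides very different numbers once you condition
# on which kind of civilization you actually are.
```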
↑ comment by Dagon · 2020-12-07T23:59:17.098Z · LW(p) · GW(p)
We don't have many examples of civilizations passing through or failing at different filters, so it's all pretty darned theoretical anyway. A lot depends on whether your 1 in 10 is for civilizations like the one under consideration, or from some distribution of civilization-types.
If we had actual objective measures of real filters, I think your gut would be onto something. There are likely characteristics that vary between civilizations and create correlations between passing certain sets of filters (e.g., those who pass Y tend to pass X more easily than those who fail Y). But again, that's a bit sketchy when we know of no actual civilizations other than ours, so we're pretty much making up the denominators.
Replies from: Gunnar_Zarncke
↑ comment by Gunnar_Zarncke · 2020-12-08T15:03:39.411Z · LW(p) · GW(p)
Jared Diamond's Collapse has quite a few examples of civilizations failing (the book has an agenda, but the examples are legit, I would say).
↑ comment by mako yass (MakoYass) · 2020-11-24T23:28:28.551Z · LW(p) · GW(p)
For a while, we've been exploring a similar question but more in the direction of pre-committing to giving simulants better lives, rather than just not bringing them into existence: https://www.lesswrong.com/posts/NiN6fNXjnS9hMSB2C/principia-compat-the-potential-importance-of-multiverse [LW · GW]
"Trivially, if we prevent simulatees from using anthropic reasoning, or any method of self-location, then the only thing you'll need to do to ensure your status as a nonsimulatee is to just self-locate every once in a while."
Doesn't that protocol just allow some people to prove they're not simulants, while doing little to alleviate the real anguish of being one: growing up in an immature low-tech society (with aging, disease, and fear) and then dying before spreading out into the stars?
↑ comment by Pattern · 2020-11-24T04:27:04.352Z · LW(p) · GW(p)
"we ought to avoid using ought statements"
If anyone thinks that statement isn't paradoxical, please enlighten me.
We shouldn't be using should statements. (And yet we are.) The statement can only be made if it isn't being followed - where's the paradox?
For comparison:
A library has a sign which says "No talking in the library." Someone talks in the library. Someone goes "Shhh!". "Why?" A librarian says "No talking in the library."
↑ comment by Viliam · 2020-11-23T19:18:29.597Z · LW(p) · GW(p)
Seems like you apply the labels "invalid", "immoral", "irrational" to memes that do not straightforwardly try to spread themselves. Even if I accept the implied value judgment, there is still the problem that memes do not exist in a vacuum. The environment can punish some attempts at self-replication, and can reward doing things other than straightforwardly attempting to self-replicate.
↑ comment by neotoky01 · 2021-01-25T08:49:38.138Z · LW(p) · GW(p)
So if you had 10,000 dollars, you would buy all 10,000 lottery tickets to win the grand prize of $9,900?
Whenever you're investing, you want to take advantage of compound interest. $10,000 invested would grow to a total of $43,219 after 30 years at 5% yearly compound interest.
https://www.investor.gov/financial-tools-calculators/calculators/compound-interest-calculator
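A quick check of that figure (same inputs: $10,000 principal, 5% annual rate, 30 years, compounded yearly):

```python
principal = 10_000
rate = 0.05
years = 30

total = principal * (1 + rate) ** years   # annual compounding
print(f"${total:,.0f}")  # $43,219
```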
Finally, the marginal utility of money increases the less of it you have, not the more. When you have $0, each additional dollar gives a large marginal utility; when you have $10,000, each additional dollar gives a smaller one. When you're poor, each dollar goes a long way. So in your scenario people are worse off on three counts: money is being siphoned off at a 1% rate every day; there is no compound interest; and the 9,999 losers give up more marginal utility than the one winner of the $9,900 prize gains.
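A minimal sketch of that marginal-utility argument, assuming log utility and a $10,000 starting bankroll per player (both assumptions for illustration; the ticket price and prize are from the scenario above):

```python
import math

bankroll = 10_000   # assumed starting wealth per player
ticket = 1          # $1 per ticket, 10,000 tickets sold
prize = 9_900       # pot minus the 1% siphon

u = math.log  # log utility: diminishing marginal utility of wealth

delta_winner = u(bankroll - ticket + prize) - u(bankroll)
delta_losers = 9_999 * (u(bankroll - ticket) - u(bankroll))
print(delta_winner + delta_losers)  # negative: total utility goes down,
# even though total dollars only drop by the $100 siphon.
```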
↑ comment by Tetraspace (tetraspace-grouping) · 2020-12-12T23:55:37.171Z · LW(p) · GW(p)
The number of observers in a universe is solely a function of the physics of that universe, so the claim that a theory that implies 2Y observers is a third as likely as a theory that implies Y observers (even before the anthropic update) is just a claim that the two theories don't have an equal posterior probability of being true.
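A toy version of that arithmetic (the equal priors and specific observer counts are illustrative assumptions):

```python
Y = 1_000_000
observers = {"theory_Y": Y, "theory_2Y": 2 * Y}   # observers each theory implies
prior = {"theory_Y": 0.5, "theory_2Y": 0.5}       # assumed equal priors

# Anthropic (SIA-style) update: weight each theory by its observer count.
post = {t: prior[t] * observers[t] for t in prior}
z = sum(post.values())
post = {t: p / z for t, p in post.items()}
print(post)  # theory_Y: ~0.333, theory_2Y: ~0.667

# To make theory_2Y a third as likely as theory_Y *before* this update,
# the priors would have to be 0.75 vs 0.25 -- i.e., already unequal.
```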
↑ comment by Tetraspace (tetraspace-grouping) · 2020-12-07T21:38:50.948Z · LW(p) · GW(p)
This is self-sampling-assumption-like reasoning: you are reasoning as if your experience is chosen from a random point in your life, and since most of an immortal's life is spent being old, while most of a mortal's life is spent being young, you should update away from being immortal.
You could apply self-indication-assumption-like reasoning to this instead: as if your experience is chosen from a random point in any life. Then, since you are also conditioning on being young, and both immortals and mortals have one youth each, being young doesn't give you any evidence for or against being immortal that you don't already have. (This is somewhat in line with your intuitions about civilisations [LW(p) · GW(p)]: immortal people live longer, so they have more measure/prior probability, and this cancels out the unlikelihood of being young given that you're immortal.)
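A toy numerical version of the two updates (the lifespans, youth window, and 50/50 prior are illustrative assumptions, not from the comment):

```python
prior = {"mortal": 0.5, "immortal": 0.5}
lifespan = {"mortal": 80, "immortal": 80_000}   # years of observer-moments
young_years = 30                                # years that count as "young"

# SSA-style: my moment is sampled uniformly from my *own* life.
ssa = {w: prior[w] * young_years / lifespan[w] for w in prior}
z = sum(ssa.values())
print({w: p / z for w, p in ssa.items()})   # strongly favors "mortal"

# SIA-style: my moment is sampled uniformly from *all* moments, so each world
# first gets weight proportional to its observer-moments, then we condition
# on youth. The two factors cancel exactly.
sia = {w: prior[w] * lifespan[w] * (young_years / lifespan[w]) for w in prior}
z = sum(sia.values())
print({w: p / z for w, p in sia.items()})   # 0.5 / 0.5 -- youth is no evidence
```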