LessWrong 2.0 Reader
New York City
Solstice: December 14 [fb]
Megameetup: December 13-16 [fb]
Both: Sheraton Brooklyn New York Hotel
Signup: https://rationalistmegameetup.com/
Thanks for this post! Very clear, and a great reference.
- You appear to use the term 'scope' in a particular technical sense. Could you give a one-line definition?
- Do you know if this agenda has been picked up since you made this post?
I think there's one fundamental problem here IMO, which is that not everything is fungible, and thus not everything manages to actually comfortably exist on the same axis of values. Fingers are not fungible. At the current state of technology, once severed, they're gone. In some sense, you could say, that's a limited loss. But for you, as a human being, it may as well be infinite. You just lost something you'll never ever have back. All the trillions and quadrillions of dollars in the world wouldn't be enough to buy it back if you regretted your choice. And thus, while in some sense its value must be limited (it's just the fingers of one single human being after all, no? How many of those get lost every day simply because it would have been a bit more expensive to equip the workshop with a circular saw that has a proper safety stop?), in some other sense the value of your fingers to you is infinite, completely beyond money.
Bit of an aside - but I think this is part of what causes such a visceral reaction in some people to the idea of sex reassignment surgery, which then feeds into transphobic rationalizations and ideologies. The concept of genuinely wanting to get rid of a part of your body that you can't possibly get back feels so fundamentally wrong on some level to many people that it alone seals the deal for them: you must either be insane or have been manipulated by some kind of evil outside force.
green_leaf on Quantum Immortality: A Perspective if AI Doomers are Probably Right
(This comment is written in the ChatGPT style because I've spent so much time talking to language models.)
The calculation of the probabilities consists of the following steps:
The epistemic split
Either we guessed the correct digit of π (10%) (branch 1), or we didn't (90%) (branch 2).
The computational split
On branch 1, all of your measure survives (branch 1−1) and none dies (branch 1−2); on branch 2, 1/128 of your measure survives (branch 2−1) and 127/128 dies (branch 2−2).
Putting it all together
Conditional on us subjectively surviving (which QI guarantees), the probability we guessed the digit of π correctly is P(correct | survived) = (0.1 × 1) / (0.1 × 1 + 0.9 × 1/128) = 0.1 / (0.1 + 0.9/128) ≈ 93.4%.
The probability of us having guessed the digit of π prior to us surviving is, of course, just 10%.
For the probabilities to be meaningful, they need to be verifiable empirically in some way.
Let's first verify that prior to us surviving, the probability of us guessing the digit correctly is 10%. We'll run n experiments by guessing a digit each time and instantly verifying it. We'll learn that we're successful, indeed, just 10% of the time.
Let's now verify that conditional on us surviving, we'll have ≈93.4% probability of guessing correctly. We perform the experiment n times again, and this time, every time we survive, other people will check if the guess was correct. They will observe that we guess correctly, indeed, ≈93.4% of the time.
We arrived at the conclusion that the probability jumps at the moment of our awakening. That might sound incredibly counterintuitive, but since it's verifiable empirically, we have no choice but to accept it.
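The two splits described above can be sanity-checked numerically. Here is a minimal Monte Carlo sketch of the setup as I read it from the comment (a 1-in-10 chance of guessing the digit; all measure surviving on a correct guess, 1/128 of it on an incorrect one); the function name and trial count are my own choices, not from the original:

```python
import random

def survivor_guess_rate(trials=200_000, seed=0):
    """Estimate P(guessed the digit correctly | survived).

    Assumed setup: a guess is correct with probability 1/10 (the
    epistemic split); on a correct guess all measure survives, on an
    incorrect guess only 1/128 of the measure survives (the
    computational split). We condition on survival by counting only
    surviving trials.
    """
    rng = random.Random(seed)
    survived = 0
    correct_and_survived = 0
    for _ in range(trials):
        correct = rng.random() < 0.10                 # epistemic split
        survives = correct or rng.random() < 1 / 128  # computational split
        if survives:
            survived += 1
            correct_and_survived += correct
    return correct_and_survived / survived

# Analytic value: 0.1 / (0.1 + 0.9/128) ≈ 0.934
print(survivor_guess_rate())
```

The simulated conditional frequency lands near 93.4%, matching the analytic calculation, while the unconditional success rate stays at 10%.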
mateusz-baginski on DeepSeek beats o1-preview on math, ties on coding; will release weights
FYI if you want to use o1-like reasoning, you need to check off "Deep Think".
dr_s on What are the good rationality films?
It's also not really a movie as much as a live recording of a stage play. But agree it's fantastic (honestly, I'd be comfortable calling it Aladdin rational fanfiction).
Also a little silly detail I love about it in hindsight:
During the big titular musical number, all big Disney villains show on stage to make a case for themselves and why what they wanted was right - though some of their cases were quite stretched. Even amidst this collection of selfish entitled people, when Cruella De Vil shows up to say "I only wanted a coat made of puppies!" she elicits disgust and gets kicked out by her fellow villains, having crossed a line. Then later on Disney thought it was a good idea to unironically give her the Wicked treatment in "Cruella".
It must be noted that all that subtext is entirely the product of the movie adaptation. The short story absolutely leaves no room for doubt, and in fact concludes on a punchline that rests on that.
dr_s on What are the good rationality films?
This muddies the alienness of AI representation quite a bit.
I don't think that's necessarily it. For example, suppose we build some kind of potentially dangerous AGI. We're pretty much guaranteed to put some safety measures in place to keep it under control. Suppose these measures are insufficient and the AGI manages to deceive its way out of the box - and we somehow still live to tell the tale and ask ourselves what went wrong. "You treated the AGI with mistrust, therefore it similarly behaved in a hostile manner" is guaranteed to be one of the interpretations that pop up (you already see some of this logic in people equating alignment to wanting to enslave AIs, and claiming it is thus more likely to make them willing to rebel). And if you did succeed in making a truly "human" AI (not outside of the realm of possibility if you're training it on human content/behaviour to begin with), that would be a possible explanation - after all, it's very much what a human would do. So is the AI so human it also reacted to attempts to control it as a human would - or so inhuman it merely backstabbed us without the least hesitation? That ambiguity exists with Ava, but I also feel like it would exist in any comparable IRL situation.
Anyway "I am Mother" sounds really interesting, I need to check it out.
Only tangentially related, but one very little known movie that I enjoyed is the Korean sci-fi "Jung_E". It's not about "alien" AGI but rather about human brain uploads used as AGI. It's quite depressing, along the lines of that qntm story you may have read on the same topic, but it felt like a pretty thoughtful representation of a concept that rarely makes it into mainstream cinema.
jonah-wilberg on Ethical Implications of the Quantum Multiverse
You're right that you can just take whatever approximation you make at the macroscopic level ('sunny') and convert that into a metric for counting worlds. But the point is that everyone will acknowledge that the counting part is arbitrary from the perspective of fundamental physics - but you can remove the arbitrariness that derives from fine-graining by focusing on the weight. (That is kind of the whole point of a mathematical measure.)
camille-berger on What are the good rationality films?
Just discovered an absolute gem. Thank you so much.