How do you even make a quantum coin with 1/googolplex chance?
What about your past self? If Night Guy can predict what Morning Guy will do, Morning Guy is effectively threatening his past self.
But... but... Light actually won, didn't he? At least in the short run - he managed to defeat L. I was always under the impression that some of these "mistakes" were committed by Light deliberately in order to lure L.
Is there an analogous experiment for Tegmark's multiverse?
You set up an experiment so that you survive only if some outcome, anticipated by your highly improbable theory of physics, is true.
Then you wake up in a world which is with high probability governed by your theory.
If I understand correctly, under MW you anticipate the experience of surviving with probability 1, and under C with probability 0.5. I don't think that's justified.
In both cases the probability should be either conditional on "being there to experience anything" (and equal 1), OR unconditional (equal the "external" probability of survival, 0.5). This is something in between. You take the external probability in C, but condition on the surviving branches in MW.
To go with the TV series analogy proposed by Eliezer, maybe it could be an end of Season 1?
It adds a "friend" CSS class to your friend's username everywhere, so you can add a user style or some other hack to highlight it. There is probably a reason LessWrong doesn't do it by default, though.
I have no familiarity with Reddit/Lesswrong codebase, but isn't this (r2/r2/models/subreddit.py) the only relevant place?
elif self == Subreddit._by_name(g.default_sr) and user.safe_karma >= g.karma_to_post:
So it's a matter of changing that g.karma_to_post
(which apparently is a global configuration variable) into a subreddit option (like the ones defined at the top of the file).
(And, of course, applying that change to the database, which I have no idea about, but this also shouldn't be hard...)
ETA: Or, if I understand the code correctly, one could just change elif self.type == 'public':
(a few lines above) to elif self.type == 'public' and user.safe_karma >= 1:
but that's a dirty hack.)
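A quick sketch of the proposed logic (my own toy code, not the actual r2 codebase; the function and parameter names are made up): the karma threshold becomes a per-subreddit option checked in the 'public' branch, instead of the global g.karma_to_post.

```python
def can_submit(user_karma, subreddit_type, karma_to_post=0):
    """Hypothetical sketch: may this user post to this subreddit?

    karma_to_post stands in for a per-subreddit option (default 0,
    i.e. anyone can post) replacing the global g.karma_to_post.
    """
    if subreddit_type == 'public':
        return user_karma >= karma_to_post
    # other subreddit types (private, restricted, ...) not modeled here
    return False
```

Setting karma_to_post=1 on a public subreddit would reproduce the "dirty hack" variant above, but as a configurable option rather than a hardcoded condition.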
Oh, right. Somehow I was expecting it to be 40 and 0.4. Now it makes sense.
Something is wrong with the numbers here:
The probability that a randomly chosen man surived given that they were given treatment A is 40/100 = 0.2
There are some theories about continuation of subjective experience "after" objective death - quantum immortality, or the extension of quantum immortality to Tegmark's multiverse (see this essay by Moravec). I'm not sure if taking them seriously is a good idea, though.
I imagine the "stress table" is just a threshold value, and dice roll result is unknown. This way, stress is weak evidence for lying.
I considered the existence of Santa a definitive proof that the paranormal/magic exists and not everything in the world is in the domain of science (and was slightly puzzled that the adults don't see it that way).
No conspiracies, but for a long time I've been very prone to wishful thinking. I'm not really sure if believing in Santa actually influenced that. I don't remember finding out the truth as a big revelation, though - no influence on my worldview or on trust for my parents.
(I've been raised without religion.)
I could also imagine that there are no practically feasible approaches to AGI
?
Reddit illustrated Asch's conformity experiment today (post).
Is there a link to an online explanation of this? When are the consequences of breaking an oath worse than a destroyed world? What did "world" mean when he said it? Humans? Earth? Humans on Earth? Energy in the Multiverse?
Suppose someone comes to a rationalist Confessor and says: "You know, tomorrow I'm planning to wipe out the human species using this neat biotech concoction I cooked up in my lab." What then? Should you break the seal of the confessional to save humanity?
It appears obvious to me that the issues here are just those of the one-shot Prisoner's Dilemma, and I do not consider it obvious that you should defect on the one-shot PD if the other player cooperates in advance on the expectation that you will cooperate as well.
So you're saying that the knowledge "I survive X with probability 1" can in no way be translated into an objective rule without losing some information?
I assume the rules speak about subjective experience, not about "some Everett branch existing" (so if I flip a coin, P(I observe heads) = 0.5, not 1). (What do probabilities of possible, mutually exclusive outcomes of given action sum to in your system?)
Isn't the translation a matter of applying conditional probability? i.e. P(survives(me, X)) = 1 <=> P(survives(joe, X) | joe's experience continues) = 1
Sorry, now I have no idea what we're talking about. If your experiment involves killing yourself after seeing the wrong string, this is close to the standard quantum suicide.
If not, I would have to see the probabilities to understand. My analysis is like this: P(I observe string S | MWI) = P(I observe string S | Copenhagen) = 2^-30, regardless of whether the string S is specified beforehand or not. MWI doesn't mean that my next Everett branch must be S because I say so.
Either you condition the observation (of surviving 1000 attempts) on the observer existing, and you have 1 in both cases, or you don't condition it on the observer and you have p^-1000 in both cases. You can't have it both ways.
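This "both ways" point can be checked with a quick simulation (a rough sketch of my own; since both interpretations assign the same statistics to outcomes, a single sampling process stands in for either): unconditionally, surviving all rounds has probability p^n; conditioned on the observer surviving, it is trivially 1. Neither quantity distinguishes the interpretations.

```python
import random

def survive_all(p, rounds, rng):
    """One run: `rounds` independent trials, each survived with probability p."""
    return all(rng.random() < p for _ in range(rounds))

def estimate(p, rounds, trials=100_000, seed=0):
    """Estimate P(survive all rounds), unconditionally and conditioned on survival."""
    rng = random.Random(seed)
    runs = [survive_all(p, rounds, rng) for _ in range(trials)]
    unconditional = sum(runs) / len(runs)          # approximately p ** rounds
    survivors = [r for r in runs if r]             # assumes at least one survivor
    conditional = sum(survivors) / len(survivors)  # trivially 1.0
    return unconditional, conditional
```

With p = 0.5 and 3 rounds, the unconditional estimate comes out near 0.125 and the conditional one is exactly 1.0, and nothing in the sampling depends on which interpretation you have in mind.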
What if Tegmark's multiverse is true? All the equivalent formulations of reality would "exist" as mathematical structures, and if there's nothing to differentiate between them, it seems that all we can do is point to the appropriate equivalence class in which "we" exist.
However, the unreachable tortured man scenario suggests that it may be useful to split that class anyway. I don't know much about Solomonoff prior - does it make sense now to build a probability distribution over the equivalence class and say what is the probability mass of its part that contains the man?
The reason why this doesn't work (for coins) is that (when MWI is true) A="my observation is heads" implies B="some Y observes heads", but not the other way around. So P(B|A)=1, but P(A|B) = p, and after plugging that into the Bayes formula we have P(MWI|A) = P(Copenhagen|A).
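The Bayes arithmetic behind this can be made explicit (a minimal sketch of my own; the helper name is made up): the likelihood of "my observation is heads" is the same p under both interpretations, so the posterior equals the prior and the observation teaches nothing about MWI vs Copenhagen.

```python
def posterior_mwi(prior_mwi, p_obs_given_mwi, p_obs_given_c):
    """Bayes update on a single observation, MWI vs Copenhagen."""
    prior_c = 1 - prior_mwi
    num = prior_mwi * p_obs_given_mwi
    return num / (num + prior_c * p_obs_given_c)

# A = "my observation is heads"; its likelihood is p under either
# interpretation, so the update is a no-op:
p = 0.5
print(posterior_mwi(0.5, p, p))  # 0.5, unchanged from the prior
```

The mistake in the quantum-suicide argument is plugging in P(B|A) = 1, the probability that *some* branch observes the outcome, as if it were the likelihood of *my* observation.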
Can you translate that to the quantum suicide case?
If you observe 30 quantum heads in a row you have strong evidence in favor of MWI.
But then if I observed any string of 30 outcomes I would have strong evidence for MWI (if the coin is fair, "p" for any specific string would be 2^-30).
First, I'm gonna clarify some terms to make this more precise. Let Y be a person psychologically continuous with your present self. P(there is some Y that observes surviving a suicide attempt|Quantum immortality) = 1. Note MWI != QI. But QI entails MWI. P(there is some Y that observes surviving a suicide attempt| ~QI) = p.
It follows from this that P(~(there is some Y that observes surviving a suicide attempt)|~QI) = 1-p.
I don't see a confusion of levels (whatever that means).
I still see a problem here. Substitute quantum suicide -> quantum coinflip, and surviving a suicide attempt -> observing the coin turning up heads.
Now we have P(there is some Y that observes coin falling heads|MWI) = 1, and P(there is some Y that observes coin falling heads|Copenhagen) = p.
So any specific outcome of a quantum event would be evidence in favor of MWI.
The probability that there exists an Everett branch in which I continue making that observation is 1. I'm not sure if jumping straight to subjective experience from that is justified:
If P(I survive|MWI) = 1, and P(I survive|Copenhagen) = p, then what is the rest of that probability mass in Copenhagen interpretation? Why is P(~(I survive)|Copenhagen) = 1-p and what does it really describe? It seems to me that calling it "I don't make any observation" is jumping from subjective experiences back to objective. This looks like a confusion of levels.
ETA: And, of course, the problem with "anthropic probabilities" gets even harder when you consider copies and merging, simulations, Tegmark level 4, and Boltzmann brains (The Anthropic Trilemma). I'm not sure if there even is a general solution. But I strongly suspect that "you can prove MWI by quantum suicide" is an incorrect usage of probabilities.
Flip a quantum coin.
The observation that you survived 1000 good suicide attempts is much more likely under MWI than under Copenhagen.
Isn't that like saying "Under MWI, the observation that the coin came up heads, and the observation that it came up tails, both have probability of 1"?
The observation that I survive 1000 good suicide attempts has a probability of 1, but only if I condition on my being capable of making any observation at all (i.e. alive). In which case it's the same under Copenhagen.
Sure, people in your branch might believe you
The problem I have with that is that from my perspective as an external observer it looks no different than someone flipping a coin (appropriately weighted) a thousand times and getting a thousand heads. It's quite improbable, but the fact that someone's life depends on the coin shouldn't make any difference for me - the universe doesn't care.
Of course it also doesn't convince me that the coin will fall heads for the 1001-st time.
(That's only if I consider MWI and Copenhagen here. In reality after 1000 coin flips/suicides I would start to strongly suspect some alternative hypotheses. But even then it shouldn't change my confidence of MWI relative to my confidence of Copenhagen).
I would say quantum suiciding is not "harnessing its anthropic superpowers for good", it's just conveniently excluding yourself from the branches where your superpowers don't work. So it has no more positive impact on the universe than you dying has.
I don't really see what is the problem with Aumann's in that situation. If X commits suicide and Y watches, are there any factors (like P(MWI), or P(X dies|MWI)) that X and Y necessarily disagree on (or them agreeing would be completely unrealistic)?
Related (somewhat): The Hero With A Thousand Chances.
That's the problem - it shouldn't really convince him. If he shares all the data and priors with external observers, his posterior probability of MWI being true should end up the same as theirs.
It's not very different from surviving a thousand rounds of classical Russian roulette in a row.
ETA: If the chance of survival is p, then in both cases P(I survive) = p, P(I survive | I'm there to observe it) = 1. I think you should use the second one in appraising the MWI...
ETA2: Ok maybe not.
Quantum immortality is not observable. You surviving a quantum suicide is not evidence for MWI - no more than it is for external observers.
600 or so interlinked documents
I was thinking more of a single, 600-chapter document.
(Actually this is why I think Sequences are best read on a computer, with multiple tabs open, like TVTropes or Wikipedia - not on an e-reader. I wonder how Eliezer's book will turn out...)
PDFs are pretty much write-only, and in my experience (with Adobe Acrobat-based devices) reflow never works very well. As long as you use a sane text-based ebook format, Calibre can handle conversion to other formats.
So I recommend converting to EPUB - or, failing that, just clean HTML (with all the links retained; readers that support HTML should have no problem with links between file sections).
Your "strong/weak scientific" distinction sounds like it's more about determinism than reductionism.
According to your definitions, I'm a "strong ontological reductionist", and "weak scientific reductionist" because I have no problem with quantum mechanics and MWI being true.
Since there is no handy tool to create polls on LW
I often see polls in comments - "upvote this comment if you choose A", "upvote this if you choose B", "downvote this for karma balance". Asking for replies probably gives you fewer answers but more accuracy.
Isn't there some form of Twin Prisoner's Dilemma here? Not in the payoffs, but in the fact that you can assume your decision (to vote or not) is correlated to some degree with others' decisions (as it should be if you, and some of them, make that decision rationally).
I was referring to the idea that complex propositions should have lower prior probability.
Of course you don't have to make use of it, you can use any numbers you want, but you can't assign a prior of 0.5 to every proposition without ending up with an inconsistency. To take an example that is more detached from reality - there is a natural number N you know nothing about. You can construct whatever prior probability distribution you want for it. However, you can't just assign 0.5 to every possible property of N (for example, P(N > 10) = 0.5).
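A concrete instance of the inconsistency (my own toy example): take the three properties A = "N < 5", B = "5 <= N < 10", and C = "N < 10". Since C is the disjoint union of A and B, additivity forces P(C) = P(A) + P(B), which a blanket assignment of 0.5 violates.

```python
# Hypothetical "0.5 for every property" prior over an unknown natural N.
# A = "N < 5", B = "5 <= N < 10", C = "N < 10"; A and B are disjoint
# and their union is C, so a coherent prior must satisfy
# P(C) = P(A) + P(B). Assigning 0.5 across the board breaks that:
P = {"A": 0.5, "B": 0.5, "C": 0.5}
assert P["C"] != P["A"] + P["B"]  # 0.5 != 1.0: the assignment is incoherent
```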
Prior probability is what you can infer from what you know before considering a given piece of data.
If your overall information is I, and new data is D, then P(H|I) is your prior probability and P(H|DI) posterior probability for hypothesis H.
No one says you have to put exactly 0.5 as prior (this would be especially absurd for absurd-sounding hypotheses like "the lady next door is a witch, she did it".)
"Why are you upside down, soldier?"
I'm actually a MoR fan, and I've found it both entertaining and (at times) enlightening.
But I think a "beginning rationalist"'s time is much better spent studying philosophy, critical thinking, probability theory, etc. than writing fanfiction (even if it would be useful in small doses).
Look at the recently posted reading list. Pick some stuff, study and discuss. If you have a good "fighting spirit" and desire to become stronger, don't waste it on writing fanfiction...
I see here a Newcomb-like situation, but in the reverse direction - the fire department didn't help the guy out to counterfactually make him pay his $75.
To me this distinction is what makes consciousness distinct and special. I think it is a fascinating consequence of a certain pattern of interacting systems. This implies that conscious feelings occur all over the place; perhaps every feedback system is feeling something.
This sounds like the point Pinker makes in How the Mind Works - that apart from the problem of consciousness, concepts like "thinking" and "knowing" and "talking" are actually very simple:
(...) Ryle and other philosophers argued that mentalistic terms such as "beliefs," "desires," and "images" are meaningless and come from sloppy misunderstandings of language, as if someone heard the expression "for Pete's sake" and went around looking for Pete. Simpatico behaviorist psychologists claimed that these invisible entities were as unscientific as the Tooth Fairy and tried to ban them from psychology.
And then along came computers: fairy-free, fully exorcised hunks of metal that could not be explained without the full lexicon of mentalistic taboo words. "Why isn't my computer printing?" "Because the program doesn't know you replaced your dot-matrix printer with a laser printer. It still thinks it is talking to the dot-matrix and is trying to print the document by asking the printer to acknowledge its message. But the printer doesn't understand the message; it's ignoring it because it expects its input to begin with '%!' The program refuses to give up control while it polls the printer, so you have to get the attention of the monitor so that it can wrest control back from the program. Once the program learns what printer is connected to it, they can communicate." The more complex the system and the more expert the users, the more their technical conversation sounds like the plot of a soap opera.
Behaviorist philosophers would insist that this is all just loose talk. The machines aren't really understanding or trying anything, they would say; the observers are just being careless in their choice of words and are in danger of being seduced into grave conceptual errors. Now, what is wrong with this picture? The philosophers are accusing the computer scientists of fuzzy thinking? A computer is the most legalistic, persnickety, hard-nosed, unforgiving demander of precision and explicitness in the universe. From the accusation you'd think it was the befuddled computer scientists who call a philosopher when their computer stops working rather than the other way around. A better explanation is that computation has finally demystified mentalistic terms. Beliefs are inscriptions in memory, desires are goal inscriptions, thinking is computation, perceptions are inscriptions triggered by sensors, trying is executing operations triggered by a goal.
Badly formulated question. I think "consciousness" as subjective experience/ability of introspection/etc. is a concept we all intuitively know (from one example, but still...) and more or less agree on. Do you believe in the color red?
What's under discussion is whether that intuitive concept is possible to be mapped to a specific property, and on what level. Assuming that is the question, I believe a mathematical structure (algorithm?) could be meaningfully called conscious or not conscious.
However, I wouldn't be surprised if it could be "dissolved" into some more specific, more useful properties, making the original concept appear too simplistic (I believe Dennett said something like this in Consciousness Explained).
Saying that "what we perceive as consciousness" has to exist by itself as a real (epiphenomenal) thing seems just silly to me. But then again I probably should read some Chalmers to understand the zombist side more clearly.
AFAIK some people subvocalize while reading, some don't. Is this preventing you from reading quickly?
(I've heard claims that eliminating subvocalization is the first step to faster reading, although Wikipedia doesn't agree. As far as I can tell, I don't subvocalize while reading (especially when reading English text, in which I don't strongly link words to pronunciation), and although I have some problems with concentration, I still read at about 300 WPM. One of my friends claims ve's unable to read faster than speech due to subvocalization).
I don't know how we could overcome the boundary of subjective first-person experience with natural language here. If it is the case that humans differ fundamentally in their perception of outside reality and inside imagination, then we might simply misunderstand each other's definitions and descriptions of certain concepts and eventually come to the wrong conclusions.
While it does sound dangerously close to the "is my red like your red" problem, I think there is much that can be done before you give up on the issue as hopelessly subjective. Your own example of being/not being able to visualise faces suggests that there are some points on which you can compare the experiences, so such a heterophenomenological approach might give some results (or, more probably, someone has already researched this and the results are available somewhere :) ).
I suspect such visualisation is not a binary ability but a spectrum of "realness", a skill you can be better or worse at. I don't identify with your description fully, I wouldn't call what my imagination does "entering the Matrix", but in some ways it's like actual sensory input, just much less intense.
I also observed this spectrum in my dreams - some are more vivid and detailed, some more like the waking level of imagination, and some remain mostly on the conceptual level.
I would be very interested to know whether it's possible to improve your imagination's vividness by training.
"In the 5617525 times this simulation has run, players have won $664073 And by won I mean they have won back $664073 of the $5617525 they spent (11%)."
Either it's buggy or there is some tampering with data going on.
Also, several Redditors claim to have won - maybe the simulator is just poorly programmed.
Let's make them wear hooded robes and call them Confessors.
I'm not sure if non-interference is really the best thing to precommit to - if we encounter a pre-AI civilization that still has various problems, death etc., maybe what {the AI they would have built} would have liked more is for us to help them (in a way preserving their values).
If a superintelligence discovers a concept of value-preserving help (or something like CEV?) that is likely to be universal, shouldn't it precommit to applying it to all encountered aliens?
For example, you might have some herb combination that "restores HP", but whenever you use it, you strangely lose HP that more than cancels what it gave you.
What if instead of being useless (by having an additional cancelling effect), magical potions etc. had no effect at all? If HP isn't explicitly stated, you can make the player feel like he's regaining health (e.g. by some visual cues), but in reality he'd die just as often.
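A toy sketch of the two designs (all names and numbers are mine, made up for illustration): a potion whose healing is secretly cancelled and a potion that does nothing at all produce exactly the same outcomes, so as long as HP is hidden, the player can't tell them apart by dying.

```python
def fight(heal, hp=10, hits=20, dmg=1):
    """One combat: each hit costs dmg HP, then the potion `heal` is applied.
    Returns True if the player survives all hits, False if they die."""
    for _ in range(hits):
        hp -= dmg
        hp = heal(hp)
        if hp <= 0:
            return False  # died mid-fight
    return True

def cancelling_potion(hp):
    # "restores 2 HP", but a hidden effect drains 2 HP right back
    return hp + 2 - 2

def no_effect_potion(hp):
    # does nothing at all; only the visual cues suggest healing
    return hp
```

Both versions of the potion yield identical HP trajectories in every fight, which is the point: the cancelling effect is wasted implementation work.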