Posts

6502 simulated - mind uploading for microprocessors 2011-01-08T18:03:10.340Z

Comments

Comment by humpolec on AI Box Log · 2012-01-28T23:15:06.202Z · LW · GW

How do you even make a quantum coin with 1/googolplex chance?

Comment by humpolec on The Noddy problem · 2012-01-16T22:42:28.077Z · LW · GW

What about your past self? If Night Guy can predict what Morning Guy will do, Morning Guy is effectively threatening his past self.

Comment by humpolec on Death Note, Anonymity, and Information Theory · 2011-05-12T14:47:12.062Z · LW · GW

But... but... Light actually won, didn't he? At least in the short run - he managed to defeat L. I was always under the impression that some of these "mistakes" were committed by Light deliberately in order to lure L.

Comment by humpolec on [Draft] Holy Bayesian Multiverse, Batman! · 2011-02-04T09:49:07.902Z · LW · GW

Is there an analogous experiment for Tegmark's multiverse?

You set up an experiment so that you survive only if some outcome, anticipated by your highly improbable theory of physics, is true.

Then you wake up in a world which is with high probability governed by your theory.

Comment by humpolec on [Draft] Holy Bayesian Multiverse, Batman! · 2011-02-04T09:39:41.935Z · LW · GW

If I understand correctly, under MW you anticipate the experience of surviving with probability 1, and under C with probability 0.5. I don't think that's justified.

In both cases the probability should be either conditional on "being there to experience anything" (and equal 1), OR unconditional (equal the "external" probability of survival, 0.5). This is something in between. You take the external probability in C, but condition on the surviving branches in MW.

Comment by humpolec on Harry Potter and the Methods of Rationality discussion thread, part 7 · 2011-01-28T18:04:08.627Z · LW · GW

To go with the TV series analogy proposed by Eliezer, maybe it could be the end of Season 1?

Comment by humpolec on "Add to Friends" does something or not? · 2011-01-28T16:42:48.631Z · LW · GW

It adds a "friend" CSS class to your friend's username everywhere, so you can add a user style or some other hack to highlight it. There is probably a reason LessWrong doesn't do it by default, though.

Comment by humpolec on Meta: A 5 karma requirement to post in discussion · 2011-01-22T17:54:58.211Z · LW · GW

I have no familiarity with the Reddit/LessWrong codebase, but isn't this (r2/r2/models/subreddit.py) the only relevant place?

elif self == Subreddit._by_name(g.default_sr) and user.safe_karma >= g.karma_to_post:

So it's a matter of changing that g.karma_to_post (which apparently is a global configuration variable) into a subreddit option (like the ones defined at the top of the file).

(And, of course, applying that change to the database, which I have no idea about, but this also shouldn't be hard...)

ETA: Or, if I understand the code correctly, one could just change elif self.type == 'public': (a few lines above) to elif self.type == 'public' and user.safe_karma >= 1:, but that's a dirty hack.
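A minimal, self-contained sketch of the idea in Python (class and option names here are illustrative only, not taken from the actual r2 codebase):

    # Illustrative sketch -- not the real Reddit/LessWrong code. It models moving the
    # global g.karma_to_post threshold into a per-subreddit option.
    class Subreddit:
        # analogous to the option defaults defined near the top of r2/r2/models/subreddit.py
        _defaults = {"type": "public", "karma_to_post": 0}

        def __init__(self, name, **options):
            self.name = name
            self.options = {**self._defaults, **options}

        def can_post(self, user_karma):
            # permission check using the subreddit's own threshold instead of a global one
            return (self.options["type"] == "public"
                    and user_karma >= self.options["karma_to_post"])

    discussion = Subreddit("discussion", karma_to_post=5)
    print(discussion.can_post(3))   # False -- below the 5-karma requirement
    print(discussion.can_post(10))  # True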

Comment by humpolec on Simpson's Paradox · 2011-01-13T11:41:21.713Z · LW · GW

Oh, right. Somehow I was expecting it to be 40 and 0.4. Now it makes sense.

Comment by humpolec on Simpson's Paradox · 2011-01-13T10:14:35.398Z · LW · GW

Something is wrong with the numbers here:

The probability that a randomly chosen man surived given that they were given treatment A is 40/100 = 0.2

Comment by humpolec on Is there anything after death? · 2011-01-09T15:16:14.583Z · LW · GW

There are some theories about continuation of subjective experience "after" objective death - quantum immortality, or an extension of quantum immortality to Tegmark's multiverse (see this essay by Moravec). I'm not sure if taking them seriously is a good idea, though.

Comment by humpolec on Rationalist Clue · 2011-01-08T18:48:47.097Z · LW · GW

I imagine the "stress table" is just a threshold value, and the dice-roll result is unknown. This way, stress is weak evidence for lying.

Comment by humpolec on The Santa deception: how did it affect you? · 2010-12-19T21:28:35.734Z · LW · GW

I considered the existence of Santa definitive proof that the paranormal/magic exists and that not everything in the world is in the domain of science (and was slightly puzzled that adults didn't see it that way).

No conspiracies, but for a long time I've been very prone to wishful thinking. I'm not really sure if believing in Santa actually influenced that. I don't remember finding out the truth as a big revelation, though - no influence on my worldview or on trust for my parents.

(I've been raised without religion.)

Comment by humpolec on Friendly AI Research and Taskification · 2010-12-14T10:10:12.412Z · LW · GW

I could also imagine that there are no practically feasible approaches to AGI promising approaches to AGI

?

Comment by humpolec on Brainstorming: neat stuff we could do on LessWrong · 2010-12-13T21:15:10.922Z · LW · GW

Reddit illustrated Asch's conformity experiment today (post).

Comment by humpolec on Diplomacy as a Game Theory Laboratory · 2010-11-14T19:24:01.934Z · LW · GW

Is there a link to an online explanation of this? When are the consequences of breaking an oath worse than a destroyed world? What did "world" mean when he said it? Humans? Earth? Humans on Earth? Energy in the Multiverse?

Prices or Bindings

Suppose someone comes to a rationalist Confessor and says: "You know, tomorrow I'm planning to wipe out the human species using this neat biotech concoction I cooked up in my lab." What then? Should you break the seal of the confessional to save humanity?

It appears obvious to me that the issues here are just those of the one-shot Prisoner's Dilemma, and I do not consider it obvious that you should defect on the one-shot PD if the other player cooperates in advance on the expectation that you will cooperate as well.

Comment by humpolec on A note on the description complexity of physical theories · 2010-11-12T06:55:14.356Z · LW · GW

So you're saying that the knowledge "I survive X with probability 1" can in no way be translated into an objective rule without losing some information?

I assume the rules speak about subjective experience, not about "some Everett branch existing" (so if I flip a coin, P(I observe heads) = 0.5, not 1). (What do the probabilities of possible, mutually exclusive outcomes of a given action sum to in your system?)

Isn't the translation a matter of applying conditional probability? I.e. P(survives(me, X)) = 1 <=> P(survives(joe, X) | joe's experience continues) = 1

Comment by humpolec on A note on the description complexity of physical theories · 2010-11-12T06:45:15.428Z · LW · GW

Sorry, now I have no idea what we're talking about. If your experiment involves killing yourself after seeing the wrong string, this is close to the standard quantum suicide.

If not, I would have to see the probabilities to understand. My analysis is like this: P(I observe string S | MWI) = P(I observe string S | Copenhagen) = 2^-30, regardless of whether the string S is specified beforehand or not. MWI doesn't mean that my next Everett branch must be S because I say so.

Comment by humpolec on A note on the description complexity of physical theories · 2010-11-12T06:32:19.856Z · LW · GW

Either you condition the observation (of surviving 1000 attempts) on the observer existing, and you have 1 in both cases, or you don't condition it on the observer and you have p^1000 in both cases. You can't have it both ways.
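As a toy calculation (the survival chance is made up for illustration):

    # The two consistent ways of scoring "I survived 1000 attempts" -- both give the same
    # number under MWI and under Copenhagen, so neither version favors one interpretation.
    p = 0.5                  # per-attempt survival chance (illustrative)
    n = 1000

    conditional = 1.0        # P(survive n attempts | an observer is left to notice) -- 1 either way
    unconditional = p ** n   # P(survive n attempts), not conditioning on the observer
    print(unconditional)     # ~9.3e-302, again identical under both interpretations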

Comment by humpolec on The Strong Occam's Razor · 2010-11-11T23:06:12.013Z · LW · GW

What if Tegmark's multiverse is true? All the equivalent formulations of reality would "exist" as mathematical structures, and if there's nothing to differentiate between them, it seems that all we can do is point to the appropriate equivalence class in which "we" exist.

However, the unreachable tortured man scenario suggests that it may be useful to split that class anyway. I don't know much about the Solomonoff prior - does it make sense now to build a probability distribution over the equivalence class and say what the probability mass of the part that contains the man is?

Comment by humpolec on A note on the description complexity of physical theories · 2010-11-11T11:17:54.587Z · LW · GW

The reason why this doesn't work (for coins) is that (when MWI is true) A="my observation is heads" implies B="some Y observes heads", but not the other way around. So P(B|A)=1, but P(A|B) = p, and after plugging that into the Bayes formula we have P(MWI|A) = P(Copenhagen|A).
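A quick numerical version of this argument (the 0.3 prior on MWI is arbitrary, chosen only for illustration):

    # A = "my observation is heads". Since P(A | MWI) = P(A | Copenhagen) = p,
    # observing A leaves the odds between the two interpretations unchanged.
    p = 0.5                          # chance of heads on a single quantum coin flip
    prior_mwi, prior_cop = 0.3, 0.7  # arbitrary illustrative priors

    posterior_mwi = (p * prior_mwi) / (p * prior_mwi + p * prior_cop)
    print(posterior_mwi)             # 0.3 -- exactly the prior, i.e. no update toward MWI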

Can you translate that to the quantum suicide case?

Comment by humpolec on A note on the description complexity of physical theories · 2010-11-11T11:14:33.250Z · LW · GW

If you observe 30 quantum heads in a row you have strong evidence in favor of MWI.

But then if I observed any string of 30 outcomes I would have strong evidence for MWI (if the coin is fair, "p" for any specific string would be 2^-30).

Comment by humpolec on A note on the description complexity of physical theories · 2010-11-11T10:28:36.061Z · LW · GW

First, I'm gonna clarify some terms to make this more precise. Let Y be a person psychologically continuous with your present self. P(there is some Y that observes surviving a suicide attempt|Quantum immortality) = 1. Note MWI != QI. But QI entails MWI. P(there is some Y that observes surviving a suicide attempt| ~QI) = p.

It follows from this that P(~(there is some Y that observes surviving a suicide attempt)|~QI) = 1-p.

I don't see a confusion of levels (whatever that means).

I still see a problem here. Substitute quantum suicide -> quantum coinflip, and surviving a suicide attempt -> observing the coin turning up heads.

Now we have P(there is some Y that observes coin falling heads|MWI) = 1, and P(there is some Y that observes coin falling heads|Copenhagen) = p.

So any specific outcome of a quantum event would be evidence in favor of MWI.

Comment by humpolec on A note on the description complexity of physical theories · 2010-11-11T08:10:26.020Z · LW · GW

The probability that there exists an Everett branch in which I continue making that observation is 1. I'm not sure if jumping straight to subjective experience from that is justified:

If P(I survive|MWI) = 1, and P(I survive|Copenhagen) = p, then what is the rest of that probability mass in the Copenhagen interpretation? Why is P(~(I survive)|Copenhagen) = 1-p, and what does it really describe? It seems to me that calling it "I don't make any observation" is jumping from subjective experiences back to objective. This looks like a confusion of levels.

ETA: And, of course, the problem with "anthropic probabilities" gets even harder when you consider copies and merging, simulations, Tegmark level 4, and Boltzmann brains (The Anthropic Trilemma). I'm not sure if there even is a general solution. But I strongly suspect that "you can prove MWI by quantum suicide" is an incorrect usage of probabilities.

Comment by humpolec on A note on the description complexity of physical theories · 2010-11-11T07:20:51.639Z · LW · GW

Flip a quantum coin.

The observation that you survived 1000 good suicide attempts is much more likely under MWI than under Copenhagen.

Isn't that like saying "Under MWI, the observation that the coin came up heads, and the observation that it came up tails, both have a probability of 1"?

The observation that I survive 1000 good suicide attempts has a probability of 1, but only if I condition on my being capable of making any observation at all (i.e. alive). In which case it's the same under Copenhagen.

Comment by humpolec on A note on the description complexity of physical theories · 2010-11-11T07:08:50.339Z · LW · GW

Sure, people in your branch might believe you

The problem I have with that is that from my perspective as an external observer it looks no different than someone flipping a coin (appropriately weighted) a thousand times and getting a thousand heads. It's quite improbable, but the fact that someone's life depends on the coin shouldn't make any difference for me - the universe doesn't care.

Of course it also doesn't convince me that the coin will fall heads for the 1001st time.

(That's only if I consider MWI and Copenhagen here. In reality, after 1000 coin flips/suicides I would start to strongly suspect some alternative hypotheses. But even then it shouldn't change my confidence in MWI relative to my confidence in Copenhagen.)

Comment by humpolec on A note on the description complexity of physical theories · 2010-11-11T06:55:01.299Z · LW · GW

I would say quantum suiciding is not "harnessing its anthropic superpowers for good"; it's just conveniently excluding yourself from the branches where your superpowers don't work. So it has no more positive impact on the universe than your dying has.

Comment by humpolec on A note on the description complexity of physical theories · 2010-11-11T06:50:48.744Z · LW · GW

I don't really see what the problem with Aumann's theorem is in that situation. If X commits suicide and Y watches, are there any factors (like P(MWI), or P(X dies|MWI)) that X and Y necessarily disagree on (or on which their agreeing would be completely unrealistic)?

Comment by humpolec on A note on the description complexity of physical theories · 2010-11-10T15:50:49.838Z · LW · GW

Related (somewhat): The Hero With A Thousand Chances.

Comment by humpolec on A note on the description complexity of physical theories · 2010-11-10T15:07:36.756Z · LW · GW

That's the problem - it shouldn't really convince him. If he shares all the data and priors with external observers, his posterior probability of MWI being true should end up the same as theirs.

It's not very different from surviving thousand classical Russian roulettes in a row.

ETA: If the chance of survival is p, then in both cases P(I survive) = p, P(I survive | I'm there to observe it) = 1. I think you should use the second one in appraising the MWI...

ETA2: Ok maybe not.

Comment by humpolec on A note on the description complexity of physical theories · 2010-11-10T10:53:21.820Z · LW · GW

Quantum immortality is not observable. You surviving a quantum suicide is not evidence for MWI - no more than it is for external observers.

Comment by humpolec on Print ready version of The Sequences · 2010-11-06T17:33:39.006Z · LW · GW

600 or so interlinked documents

I was thinking more of a single, 600-chapter document.

(Actually this is why I think the Sequences are best read on a computer, with multiple tabs open, like TVTropes or Wikipedia - not on an e-reader. I wonder how Eliezer's book will turn out...)

Comment by humpolec on Print ready version of The Sequences · 2010-11-06T10:14:44.238Z · LW · GW

PDFs are pretty much write-only, and in my experience (with Adobe Acrobat-based devices) reflow never works very well. As long as you use a sane text-based ebook format, Calibre can handle conversion to other formats.

So I recommend converting into something else - if not EPUB, then maybe just clean HTML (with all the links retained; readers that support HTML should have no problem with links between file sections).

Comment by humpolec on POLL: Reductionism · 2010-11-04T18:26:09.375Z · LW · GW

Your "strong/weak scientific" distinction sounds like it's more about determinism than reductionism.

According to your definitions, I'm a "strong ontological reductionist", and "weak scientific reductionist" because I have no problem with quantum mechanics and MWI being true.

Since there is no handy tool to create polls on LW

I often see polls in comments - "upvote this comment if you choose A", "upvote this if you choose B", "downvote this for karma balance". Asking for replies probably gives you fewer answers but more accuracy.

Comment by humpolec on Why should you vote? · 2010-10-29T21:10:38.977Z · LW · GW

Isn't there some form of Twin Prisoner's Dilemma here? Not in the payoffs, but in the fact that you can assume your decision (to vote or not) is correlated to some degree with others' decisions (which it should be if you, and some of them, make that decision rationally).

Comment by humpolec on The prior probability of justification for war? · 2010-10-29T05:38:32.546Z · LW · GW

I was referring to the idea that complex propositions should have lower prior probability.

Of course you don't have to make use of it; you can use any numbers you want, but you can't assign a prior of 0.5 to every proposition without ending up with an inconsistency. To take an example that is more detached from reality - there is a natural number N you know nothing about. You can construct whatever prior probability distribution you want for it. However, you can't just assign 0.5 to every possible property of N (for example, P(N < 10) = 0.5).
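A toy demonstration of the inconsistency (the particular properties of N are my own example):

    # Assign probability 0.5 to every property of the unknown natural number N
    # and the axioms of probability are quickly violated.
    p_lt_10 = 0.5       # P(N < 10), assigned by the naive "0.5 for everything" rule
    p_10_to_99 = 0.5    # P(10 <= N < 100), also assigned 0.5

    # "N < 100" is the disjoint union of the two events above, so the axioms force:
    p_lt_100 = p_lt_10 + p_10_to_99
    print(p_lt_100)     # 1.0 -- but the naive rule would also assign P(N < 100) = 0.5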

Comment by humpolec on The prior probability of justification for war? · 2010-10-28T15:00:11.818Z · LW · GW

Prior probability is what you can infer from what you know before considering a given piece of data.

If your overall information is I, and the new data is D, then P(H|I) is your prior probability and P(H|DI) your posterior probability for hypothesis H.

No one says you have to put exactly 0.5 as prior (this would be especially absurd for absurd-sounding hypotheses like "the lady next door is a witch, she did it".)

Comment by humpolec on Discuss: Original Seeing Practices · 2010-10-18T10:27:04.386Z · LW · GW

"Why are you upside down, soldier?"

Comment by humpolec on Rationality Dojo · 2010-10-15T19:20:03.318Z · LW · GW

I'm actually a MoR fan, and I've found it both entertaining and (at times) enlightening.

But I think a "beginning rationalist"'s time is much better spent studying philosophy, critical thinking, probability theory, etc. than writing fanfiction (even if the latter would be useful in small doses).

Comment by humpolec on Rationality Dojo · 2010-10-15T18:35:21.941Z · LW · GW

Look at the recently posted reading list. Pick some stuff, study and discuss. If you have a good "fighting spirit" and desire to become stronger, don't waste it on writing fanfiction...

Comment by humpolec on Rationality quotes: October 2010 · 2010-10-09T22:54:05.996Z · LW · GW

I see a Newcomb-like situation here, but in the reverse direction - the fire department didn't help the guy out, in order to counterfactually make him pay his $75.

Comment by humpolec on Do you believe in consciousness? · 2010-10-03T16:48:14.294Z · LW · GW

To me this distinction is what makes consciousness distinct and special. I think it is a fascinating consequence of a certain pattern of interacting systems. Implying that conscious feelings occur all over the place, perhaps every feedback system is feeling something.

This sounds like the point Pinker makes in How the Mind Works - that apart from the problem of consciousness, concepts like "thinking" and "knowing" and "talking" are actually very simple:

(...) Ryle and other philosophers argued that mentalistic terms such as "beliefs," "desires," and "images" are meaningless and come from sloppy misunderstandings of language, as if someone heard the expression "for Pete's sake" and went around looking for Pete. Simpatico behaviorist psychologists claimed that these invisible entities were as unscientific as the Tooth Fairy and tried to ban them from psychology.

And then along came computers: fairy-free, fully exorcised hunks of metal that could not be explained without the full lexicon of mentalistic taboo words. "Why isn't my computer printing?" "Because the program doesn't know you replaced your dot-matrix printer with a laser printer. It still thinks it is talking to the dot-matrix and is trying to print the document by asking the printer to acknowledge its message. But the printer doesn't understand the message; it's ignoring it because it expects its input to begin with '%!' The program refuses to give up control while it polls the printer, so you have to get the attention of the monitor so that it can wrest control back from the program. Once the program learns what printer is connected to it, they can communicate." The more complex the system and the more expert the users, the more their technical conversation sounds like the plot of a soap opera.

Behaviorist philosophers would insist that this is all just loose talk. The machines aren't really understanding or trying anything, they would say; the observers are just being careless in their choice of words and are in danger of being seduced into grave conceptual errors. Now, what is wrong with this picture? The philosophers are accusing the computer scientists of fuzzy thinking? A computer is the most legalistic, persnickety, hard-nosed, unforgiving demander of precision and explicitness in the universe. From the accusation you'd think it was the befuddled computer scientists who call a philosopher when their computer stops working rather than the other way around. A better explanation is that computation has finally demystified mentalistic terms. Beliefs are inscriptions in memory, desires are goal inscriptions, thinking is computation, perceptions are inscriptions triggered by sensors, trying is executing operations triggered by a goal.

Comment by humpolec on Do you believe in consciousness? · 2010-10-03T14:44:46.572Z · LW · GW

Badly formulated question. I think "consciousness" as subjective experience/ability of introspection/etc. is a concept we all intuitively know (from one example, but still...) and more or less agree on. Do you believe in the color red?

What's under discussion is whether that intuitive concept is possible to be mapped to a specific property, and on what level. Assuming that is the question, I believe a mathematical structure (algorithm?) could be meaningfully called conscious or not conscious.

However, I wouldn't be surprised if it could be "dissolved" into some more specific, more useful properties, making the original concept appear too simplistic (I believe Dennett said something like this in Consciousness Explained).

Saying that "what we perceive as consciousness" has to exist by itself as a real (epiphenomenal) thing seems just silly to me. But then again I probably should read some Chalmers to understand the zombist side more clearly.

Comment by humpolec on Can you enter the Matrix? The deliberate simulation of sensory input. · 2010-10-01T22:59:18.454Z · LW · GW

AFAIK some people subvocalize while reading, some don't. Is this preventing you from reading quickly?

(I've heard claims that eliminating subvocalization is the first step to faster reading, although Wikipedia doesn't agree. I, as far as I can tell, don't subvocalize while reading (especially when reading English text, in which I don't strongly link words to pronunciation), and although I have some problems with concentration, I still read at about 300 WPM. One of my friends claims ve's unable to read faster than speech due to subvocalization.)

Comment by humpolec on Can you enter the Matrix? The deliberate simulation of sensory input. · 2010-10-01T22:41:07.824Z · LW · GW

I don't know how we could overcome the boundary of subjective first-person experience with natural language here. If it is the case that humans differ fundamentally in their perception of outside reality and inside imagination, then we might simply misunderstand each other's definitions and descriptions of certain concepts and eventually come up with the wrong conclusions.

While it does sound dangerously close to the "is my red like your red" problem, I think there is much that can be done before you leave the issue as hopelessly subjective. Your own example of being/not being able to visualise faces suggests that there are some points on which you can compare the experiences, so such a heterophenomenological approach might give some results (or, more probably, someone has already researched this and the results are available somewhere :) ).

Comment by humpolec on Can you enter the Matrix? The deliberate simulation of sensory input. · 2010-10-01T15:53:28.774Z · LW · GW

I suspect such visualisation is not a binary ability but a spectrum of "realness", a skill you can be better or worse at. I don't identify with your description fully, I wouldn't call what my imagination does "entering the Matrix", but in some ways it's like actual sensory input, just much less intense.

I also observed this spectrum in my dreams - some are more vivid and detailed, some more like the waking level of imagination, and some remain mostly on the conceptual level.

I would be very interested to know if it's possible to improve your imagination's vividness by training.

Comment by humpolec on Open Thread September, Part 3 · 2010-09-29T18:59:36.704Z · LW · GW

"In the 5617525 times this simulation has run, players have won $664073 And by won I mean they have won back $664073 of the $5617525 they spent (11%)."

Either it's buggy or there is some tampering with data going on.

Also, several Redditors claim to have won - maybe the simulator is just poorly programmed.

Comment by humpolec on Open Thread September, Part 3 · 2010-09-28T20:46:27.881Z · LW · GW

Let's make them wear hooded robes and call them Confessors.

Comment by humpolec on Open Thread, September, 2010-- part 2 · 2010-09-20T19:23:34.176Z · LW · GW

I'm not sure if non-interference is really the best thing to precommit to - if we encounter a pre-AI civilization that still has various problems, death etc., maybe what {the AI they would have built} would have liked more is for us to help them (in a way that preserves their values).

If a superintelligence discovers a concept of value-preserving help (or something like CEV?) that is likely to be universal, shouldn't it precommit to applying it to all encountered aliens?

Comment by humpolec on Less Wrong: Open Thread, September 2010 · 2010-09-08T12:21:47.298Z · LW · GW

For example, you might have some herb combination that "restores HP", but whenever you use it, you strangely lose HP that more than cancels what it gave you.

What if instead of being useless (by having an additional cancelling effect), magical potions etc. had no effect at all? If HP isn't explicitly stated, you can make the player feel like he's regaining health (e.g. by some visual cues), but in reality he'd die just as often.