Comments

Comment by Rolf_Nelson2 on Three Worlds Decide (5/8) · 2009-02-04T08:21:00.000Z · LW · GW

One possibility, given my (probably wrong) interpretation of the ground rules of the fictional universe, is that the humans go to the baby-eaters and tell them that they're being invaded. Since we cooperated with them, the baby-eaters might continue to cooperate with us, by agreeing to:

1. reduce their baby-eating activities, and/or

2. send their own baby-eater ship to blow up the star (since the fictional characters are probably barred by the author from resolving the dilemma by blowing up Huygens or sending a probe ship), so that the humans don't have to sacrifice themselves.

Comment by Rolf_Nelson2 on Complexity and Intelligence · 2008-11-05T05:28:22.000Z · LW · GW

@Wei: p(n) will approach arbitrarily close to 0 as you increase n.

This doesn't seem right. A sequence that requires knowledge of BB(k) has O(2^-k) probability according to our Solomonoff Inductor. If the inductor compares a BB(k)-based model with a BB(k+1)-based model, the BB(k+1)-based model will on average be about half as probable as the BB(k)-based model.

In other words, P(a particular model of K-complexity k is correct) goes to 0 as k goes to infinity, but the conditional probability, P(a particular model of K-complexity k is correct | a sub-model of that particular model with K-complexity k-1 is correct), does not go to 0 as k goes to infinity.
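
A minimal sketch of the distinction in symbols (my notation, not Wei's): write M_k for a model of K-complexity k, assume the Solomonoff-style prior assigns it weight on the order of 2^-k, and assume M_k implies its sub-model M_{k-1}. Then

```latex
% Sketch (my notation): the absolute prior weight vanishes, the conditional weight does not.
\[
P(M_k) \approx 2^{-k} \xrightarrow{\,k \to \infty\,} 0,
\qquad
P\!\left(M_k \mid M_{k-1}\right) \approx \frac{2^{-k}}{2^{-(k-1)}} = \frac{1}{2}.
\]
```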

Comment by Rolf_Nelson2 on Complexity and Intelligence · 2008-11-04T05:11:01.000Z · LW · GW

If humanity unfolded into a future civilization of infinite space and infinite time, creating descendants and hyperdescendants of unlimitedly growing size, what would be the largest Busy Beaver number ever agreed upon?

Suppose they run a BB evaluator for all of time. They would, indeed, have no way at any point of being certain that the current champion 100-bit program is the actual champion that produces BB(100). However, if they decide to anthropically reason that "for any time t, I am probably alive after time t, even though I have no direct evidence one way or the other once t becomes too large", then they will believe (with arbitrarily high probability) that the current champion program is the actual champion program, and an arbitrarily high percentage of them will be correct in their belief.
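
For concreteness, here is a toy version of that "current champion" process; this is my own sketch, not anything from the original thread, and it uses 2-state, 2-symbol Turing machines so the search actually terminates. The point it illustrates: at no finite step budget can the observers prove the champion is final, yet the champion's score stabilizes at the true BB value.

```python
from itertools import product

N_STATES = 2        # tiny, so the whole search finishes in seconds
STEP_BUDGET = 200   # machines still running after this many steps are treated as non-halting

def transition_tables():
    """Yield every transition table for an N_STATES-state, 2-symbol Turing machine.
    Each entry maps (state, read_symbol) -> (write_symbol, move, next_state), where
    move is -1/+1 and next_state may be 'H' for halt."""
    next_states = list(range(N_STATES)) + ['H']
    cells = list(product([0, 1], [-1, 1], next_states))
    keys = [(s, b) for s in range(N_STATES) for b in (0, 1)]
    for values in product(cells, repeat=len(keys)):
        yield dict(zip(keys, values))

def ones_if_halts(table, budget):
    """Simulate one machine; return the number of 1s on the tape if it halts
    within `budget` steps, else None."""
    tape, pos, state = {}, 0, 0
    for _ in range(budget):
        write, move, nxt = table[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        if nxt == 'H':
            return sum(tape.values())
        state = nxt
    return None

champion = -1
for i, table in enumerate(transition_tables()):
    score = ones_if_halts(table, STEP_BUDGET)
    if score is not None and score > champion:
        champion = score
        print(f"machine #{i}: new champion, {score} ones")

print("best score found for 2 states:", champion)  # the known BB(2) value is 4
```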

Comment by Rolf_Nelson2 on Measuring Optimization Power · 2008-11-04T04:56:37.000Z · LW · GW
  1. One difference between optimization power and the folk notion of "intelligence": Suppose the Village Idiot is told the password of an enormous abandoned online bank account. The Village Idiot now has vastly more optimization power than Einstein does; this optimization power is based not on social status or raw might, but rather on the actions that the Village Idiot can think of taking (most of which start with logging in to account X with password Y) that don't occur to Einstein. However, we wouldn't label the Village Idiot as more intelligent than Einstein.

  2. Is the Principle of Least Action infinitely "intelligent" by your definition? The PLA consistently picks a physical solution to the n-body problem that surprises me in the same way Kasparov's brilliant moves surprise me: I can't come up with the exact path the n objects will take, but after I see the path that the PLA chose, I find that (for each object) the PLA's path has a smaller action integral than the best path I could have come up with.

  3. An AI whose only goal is to make sure that such-and-such coin will not turn up heads the next time it's flipped can apply only (slightly less than) 1 bit of optimization pressure by your definition, even if it vaporizes the coin and then builds a Dyson sphere to provide infrastructure and resources for its ongoing efforts to probe the Universe, to ensure that it wasn't tricked and that the coin really was vaporized as it appeared to be.
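
For reference, the definition being invoked, as I understand it from the post (symbols are mine): optimization power is the negative log of the measure of outcomes at least as preferred as the one achieved. For the coin, the preferred outcomes are half the outcome space, hence (slightly less than) one bit:

```latex
\[
\mathrm{OP} = \log_2 \frac{1}{\mu\left(\{x : x \succeq x_{\text{achieved}}\}\right)},
\qquad
\mathrm{OP}_{\text{coin}} = \log_2 \frac{1}{1/2} = 1 \text{ bit}.
\]
```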

Comment by Rolf_Nelson2 on Bay Area Meetup for Singularity Summit · 2008-10-07T05:32:02.000Z · LW · GW

Count me in.

Comment by Rolf_Nelson2 on Optimization and the Singularity · 2008-06-25T02:56:52.000Z · LW · GW

Chip, I don't know what you mean by "The AI Institute", but such discussion would be more on-topic at the SL4 mailing list than in the comments section of a blog posting about optimization rates.

Comment by Rolf_Nelson2 on The Outside View's Domain · 2008-06-21T14:06:16.000Z · LW · GW

The question of whether trying to consistently adopt meta-reasoning position A will raise the percentage of time you're correct, compared with meta-reasoning position B, is often a difficult one.

When someone uses a disliked heuristic to produce a wrong result, the temptation is to pronounce the heuristic "toxic". When someone uses a favored heuristic to produce a wrong result, the temptation is to shrug and say "there is no safe harbor for a rationalist" or "such a person is biased, stupid, and beyond help; he would have gotten to the wrong conclusion anyway, no matter what his meta-reasoning position was. The idiot reasoner, rather than my beautiful heuristic, has to be discarded." In the absence of hard data, consensus seems difficult; the problem is exacerbated when a novel meta-reasoning argument is brought up in the middle of a debate on a separate disagreement, in which case the opposing sides have even more temptation to "dig in" to separate meta-reasoning positions.

Comment by Rolf_Nelson2 on LA-602 vs. RHIC Review · 2008-06-20T13:13:23.000Z · LW · GW

CERN on its LHC:

Studies into the safety of high-energy collisions inside particle accelerators have been conducted in both Europe and the United States by physicists who are not themselves involved in experiments at the LHC... CERN has mandated a group of particle physicists, also not involved in the LHC experiments, to monitor the latest speculations about LHC collisions

Things that CERN is doing right:

  1. The safety reviews were done by people who do not work at the LHC.
  2. There were multiple reviews by independent teams.
  3. There is a group continuing to monitor the situation.

Comment by Rolf_Nelson2 on LA-602 vs. RHIC Review · 2008-06-19T13:21:40.000Z · LW · GW

Wilczek was asked to serve on the committee "to pay the wages of his sin, since he's the one that started all this with his letter."

Moral: if you're a practicing scientist, don't admit the possibility of risk, or you will be punished. (No, this isn't something I've drawn from this case study alone; this is also evident from other case studies, NASA being the most egregious.)

Comment by Rolf_Nelson2 on LA-602 vs. RHIC Review · 2008-06-19T13:15:28.000Z · LW · GW

@Vladimir: We can't bother to investigate every crazy doomsday scenario suggested

This is a straw man; nobody is suggesting investigating "every crazy doomsday scenario suggested". A strangelet catastrophe is qualitatively possible according to accepted physical theories, and was proposed by a practicing physicist; it's only after doing quantitative calculations that it can be dismissed as a threat. The point is that such important quantitative calculations need to be produced by less biased processes.

Comment by Rolf_Nelson2 on Against Devil's Advocacy · 2008-06-09T08:26:12.000Z · LW · GW

if you manage to get yourself stuck in an advanced rut, dutifully playing Devil's Advocate won't get you out of it.

It's not a binary either/or proposition, but a spectrum; you can be in a sufficiently shallow rut that a mechanical rule of "when reasoning, search for evidence against the proposition you're currently leaning towards" might rescue you in a situation where you would otherwise fail to come to the correct conclusion. That said, yes, it would indeed be preferable to conduct the search because you actually have "true doubt" and lack overconfidence, rather than by rote, and rather than for the odd reasons that Michael Rose gives.

Dad was an avid skeptic and Martin Gardner / James Randi fan, as well as being an Orthodox Jew. Let that be a lesson on the anti-healing power of compartmentalization

Why do you think that, if he had not compartmentalized, he would have rejected Orthodox Judaism, rather than rejecting skepticism?

Comment by Rolf_Nelson2 on Why Quantum? · 2008-06-08T21:45:55.000Z · LW · GW

"Oh, look, Eliezer is overconfident because he believes in many-worlds."

I can agree that this is absolutely nonsensical reasoning. The correct reason to believe Eliezer is overconfident is that he's a human being, and the prior probability that any given human is overconfident is extremely high.

One might propose heuristics to determine whether person X is more or less overconfident, but "X disagrees strongly with me personally on this controversial issue, therefore he is overconfident" (or stupid or ignorant) is the exact type of flawed reasoning that comes from self-serving biases.

Comment by Rolf_Nelson2 on Decoherence is Simple · 2008-05-08T00:46:18.000Z · LW · GW

Some physicists speak of "elegance" rather than "simplicity". This seems to me a bad idea; your judgments of elegance are going to be marred by evolved aesthetic criteria that exist only in your head, rather than in the exterior world, and should only be trusted inasmuch as they point towards smaller, rather than larger, Kolmogorov complexity.

Example:

In theory A, the ratio of tiny dimension #1 to tiny dimension #2 is finely-tuned to support life.

In theory B, the ratio of the mass of the electron to the mass of the neutrino is finely-tuned to support life.

An "elegance" advocate might favor A over B, whereas a "simplicity" advocate might be neutral between them.

Comment by Rolf_Nelson2 on Where Physics Meets Experience · 2008-04-25T10:36:58.000Z · LW · GW

can you tell me why the subjective probability of finding ourselves in a side of the split world, should be exactly proportional to the square of the thickness of that side?

Po'mi runs a trillion experiments, each of which has a one-trillionth 4D-thickness of saying B but is otherwise A. In his "mainline probability", he sees all trillion experiments coming up A. (If he ran a sextillion experiments he'd see about 1 come up B.)

Presumably an external four-dimensional observer sees it differently: he sees only one-trillionth of Po'mi coming up all-A, while the rest of Po'mi see about 1 B and are huddled in a corner crying that the universe has no order. (Maybe the 4D observer would be unable to see Po'mi at all, because Po'mi and all other inhabitants of the lawful "mainline probability" that we're talking about have almost infinitesimal thickness from the 4D observer's point of view.)

If I were Po'mi, I would start looking for a fifth dimension.

Comment by Rolf_Nelson2 on Which Basis Is More Fundamental? · 2008-04-25T10:24:51.000Z · LW · GW

It seems worthwhile to also keep in mind other quantum mechanical degrees of freedom, such as spin

Only if the spin's basis turns out to be relevant in the final ToEILEL (Theory of Everything Including Laboratory Experimental Results) that gives a mechanical algorithm for what probabilities I anticipate.

In contrast, if someone had a demonstrably-correct theory that could tell you the macroscopic position of everything I see, but doesn't tell you the spin or (directly) the spatial or angular momentum, then the QM Measurement Problem would still be marked "completely solved". In such a position-basis theory, the answer to any question about spin would be "Mu, it only matters if it affects the position of my macroscopic readout."

Comment by Rolf_Nelson2 on Quantum Explanations · 2008-04-10T13:42:41.000Z · LW · GW

Robin: is there a paper somewhere that elaborates this argument from mixed-state ambiguity?

Scott should add his own recommendations, but I would say here is a good starting introduction.

To my mind, the fact that two different situations of uncertainty over true states lead to the same physical predictions isn't obviously a reason to reject that type of view regarding what is real.

The anti-MWI position here is that MWI produces different predictions depending on what basis is arbitrarily picked by the predictor, and that the various MWI efforts to "patch" this problem without postulating a new law of physics are like squaring the circle. I think the anti-MWI'ers' math is correct, but I'm not enough of an expert to be 100% sure; what really makes me think MWI is wrong is the inability of the MWI'ers, after many decades, to produce an algorithm that you can "turn the crank" on to get the correct probabilities that we see in experiments. They tend to try to patch this "basis problem" by producing a new framework, which itself contains an arbitrary choice that's just as bad as the arbitrary choice of basis.

More succinctly, in vanilla MWI you have to pick the correct basis to get the correct experimental results, and you have to peek at the results to get the correct basis.

Comment by Rolf_Nelson2 on Quantum Explanations · 2008-04-09T13:16:17.000Z · LW · GW

In many of your prior posts where you bring up MWI, your interpretation doesn't fundamentally matter to the overall point you're trying to make in that post; that is, your overall conclusion for that post holds or fails regardless of which interpretation is correct, possibly to a greater degree than you tend to realize.

For example: "We used a true randomness source - a quantum device." The philosophers' point could equally have been made by choosing the first 2^N digits of pi and finding they correspond by chance to someone's GLUT.

Comment by Rolf_Nelson2 on Belief in the Implied Invisible · 2008-04-09T01:38:52.000Z · LW · GW

the colony is in the future light cone of your current self, but no future version of you is in its future light cone.

Right, and if anyone's still confused how this is possible: wikipedia and a longer explanation

Comment by Rolf_Nelson2 on Zombies! Zombies? · 2008-04-05T00:27:29.000Z · LW · GW

* That-which-we-name "consciousness" happens within physics, in a way not yet understood, just like what happened the last three thousand times humanity ran into something mysterious.

"Not yet understood"? Is your position that there's a mathematical or physical discovery waiting out there that will cause you, me, Chalmers, and everyone else to slap our heads and say, "Of course, that's what the answer is! We should have realized it all along!"?

Question for all: How do you apply Occam's Razor to cases where there are two competing hypotheses:

  1. A and B are independently true

  2. A is true, and implies B, but in some mysterious way we haven't yet determined. (For example, "heat is caused by molecular motion" or "quarks are caused by gravitation", to pick two inferences at opposite ends of the plausibility spectrum.)

I don't know what the best answer is. Maybe the practical answer is a variant of Solomonoff induction: somehow compare "P(A) P(B)" with "P(A) P(B follows logically from A, and we were too dumb to realize that)", where the P's are some type of Solomonoff-ish a priori "2^(-shortest program length)" probabilities. But the best answer certainly isn't "A is simpler than A + B, so we know hypothesis 2 is correct, without even having to glance at the likelihood that B follows from A." Otherwise, you would have to conclude that, logically, quarks are caused by gravitation, in some currently-mysterious way that future mathematicians will be certain to discover.
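
One way to write down the comparison I have in mind (my notation; K is Kolmogorov complexity, and this is only a rough sketch):

```latex
% Hypothesis 1: A and B independently true.
% Hypothesis 2: A true, and A implies B in some not-yet-understood way.
\[
P(H_1) \sim 2^{-K(A)} \cdot 2^{-K(B)},
\qquad
P(H_2) \sim 2^{-K(A)} \cdot P(\text{$B$ follows logically from $A$}).
\]
```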

For the record, my belief is that many of the debaters have beliefs that are isomorphic to their opponents' beliefs. When I hear things like, "You said this is a physical law without material consequences, but I define physical laws as things that have material consequences, so you're wrong, QED!" then that's a sign that we're in "does a tree falling in the forest make a noise" territory. Does a consciousness mapping rule "actually exist"? Does the real world "actually exist"? Does pi "actually exist"? Why should I care?

In the end, I care about actions and outcomes, and the algorithms that produce those actions. I don't care whether you label consciousness as "part of reality" (because it's something you observe), or "part of your utility function" (because it's not derivable by an intelligence-in-general), or "part of this complete nutritious breakfast" (because, technically, anything that's not poisonous can be combined with separate unrelated nutritious items to form a complete nutritious breakfast).

Comment by Rolf_Nelson2 on To Spread Science, Keep It Secret · 2008-03-28T23:33:01.000Z · LW · GW

@spindizzy:

No, this hasn't been "argued out", and even if it had been in the past, the "single best answer" would differ from person to person and from year to year. I would suggest starting a thread on SL4 or on SIAI's Singularity Discussion list.

Comment by Rolf_Nelson2 on Explaining vs. Explaining Away · 2008-03-17T12:53:11.000Z · LW · GW

Doug S., we get the point: nothing that Ian could say would pry you away from your version of reductionism, so there's no need to make any more posts with Fully General Counterarguments. "I defy the data" is a position, but it does not serve as an explanation of why you hold that position, or why other people should hold that position as well.

I would agree with reductionism, if phrased as follows:

  1. When entity A can be explained in terms of another entity B, but not vice-versa, it makes sense to say that entity A "has less existence" compared to the fundamental entities that do exist. That is, we can still have A in our models, but we should be aware that it's only a "cognitive shortcut", like when a map draws a road as a homogeneous black line instead of showing microscopic detail.

  2. The number of fundamental entities is relatively small, as we live in a lawful universe. If we see a mysterious behavior, our first guess should be that it's probably a result of the known entities, rather than a new entity. (Occam's razor)

  3. Reductionism, as a philosophy, doesn't itself say what these fundamental entities are; they could be particles, or laws of nature, or 31 flavors of ice cream. If every particle were composed of smaller particles, then there would be no "fundamental particle", but the law that states how this composition occurs would still be fundamental. If we discover tomorrow that unicorns exist and are indivisible (rather than made up of quarks), then this is a huge surprise and requires a rewrite of all known laws of physics, but it does not falsify reductionism because that just means that a "unicorn field" (which seems to couple quite strongly with the Higgs boson) gets added to our list of fundamental entities.

  4. Reductionism is a logical/philosophical rather than an empirical observation, and can't be falsified as long as Occam's razor holds.

Comment by Rolf_Nelson2 on Probability is in the Mind · 2008-03-14T03:38:00.000Z · LW · GW

if the vast majority of the measure of possible worlds given Bob's knowledge is in worlds where he loses, he's objectively wrong.

That's a self-consistent system; it just seems to me more useful and intuitive to say that:

"P" is true => P
"Bob believes P" is true => Bob believes P

but not

"Bob's belief in P" is true => ...er, what exactly?

Also, I frequently need to attach probabilities to facts, where a probability lies in [0, 1] (or, in Eliezer's formulation, (-inf, inf)). But it's rare for me to have any reason to attach probabilities to probabilities. On the flip side, I attach scoring rules in the range (-inf, 0] to probability calculations, but not to facts. So in my current worldview, facts and probabilities are tentatively "made of different substances".
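
To make those three ranges concrete, here is a small illustration in my own notation (nothing from Eliezer's post): probabilities live in [0, 1], the log-odds transform maps them onto (-inf, +inf), and the logarithmic scoring rule takes values in (-inf, 0].

```python
import math

def log_odds(p):
    """Map a probability in (0, 1) to log-odds in (-inf, +inf)."""
    return math.log(p / (1.0 - p))

def log_score(p_assigned_to_actual_outcome):
    """Logarithmic scoring rule: 0 for a fully confident correct prediction,
    arbitrarily negative as the probability assigned to what happened approaches 0."""
    return math.log(p_assigned_to_actual_outcome)

print(log_odds(0.5))    # 0.0
print(log_odds(0.99))   # ~4.6
print(log_score(0.99))  # ~-0.01
print(log_score(0.01))  # ~-4.6
```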

Comment by Rolf_Nelson2 on Probability is in the Mind · 2008-03-13T03:41:48.000Z · LW · GW

Follow-up question: If Bob believes he has a >50% chance of winning the lottery tomorrow, is his belief objectively wrong? I would tentatively propose that his belief is unfounded, "unattached to reality", unwise, and unreasonable, but that it's not useful to consider his belief "objectively wrong".

If you disagree, consider this: suppose he wins the lottery after all by chance; can you still claim the next day that his belief was objectively wrong?

Comment by Rolf_Nelson2 on Dissolving the Question · 2008-03-09T18:29:46.000Z · LW · GW

Most of the proposed models in this thread seem reasonable.

I would write down all the odd things people say about free will, pick the simplest model that explained 90% of it, and then see if I could make novel and accurate predictions based on the model. But, I'm too lazy to do that. So I'll just guess.

Evolution hardwired our cognition to contain two mutually exclusive categories; call them "actions" and "events."

"Actions" match: [rational, has no understandable prior cause]. "Rational" means they are often influenced by reward and punishment. Synonyms for 'has no understandable prior cause' include 'free will', 'caused by élan vital' and 'unpredictable, at least by the prediction process we use for things-in-general like rocks'.

"Events" match: [not rational, always directly caused by some previous and intuitively comprehensible physical event or action]. If you throw a rock up, it will come back down, no matter how much you threaten or plead with it.

We are born to axiomatically believe that actions we take in this innate 'free will' category have no physical cause. In this model, symptoms might include:

  • believing there is an interesting category called 'free will'

  • believing that arguing about whether humans belong to, or don't belong to, this 'free will' category is an interesting question

  • believing that if we don't have 'free will', it's wrong to punish people

  • believing that if we don't have 'free will', we are marionettes, zombies, or in some other way 'subhuman'.

  • believing that if we don't understand what causes a thunderstorm or a crop failure or an eclipse, it is the will of a rational agent who can be appeased through the appropriate sacrifices

  • believing that if our actions are caused by God's will, fate, spiritual possession, an ancient prophecy, Newtonian dynamics, or some other simple and easily-understandable cause, we do not have 'free will'. However, if our actions are caused by an immaterial soul, spooky quantum mechanics, or anything else that 'lives in another dimension beyond the grasp of intuitive reason', then we retain 'free will'.

I'm not particularly confident my model is correct; the human capacity to spot patterns where there are none works against me here.

Comment by Rolf_Nelson2 on Mutual Information, and Density in Thingspace · 2008-02-27T03:04:24.000Z · LW · GW

Green-eyed people are more likely than average to be black-haired (and vice versa), meaning that we can probabilistically infer green eyes from black hair or vice versa

There is nothing in the mind that is not first in the census.

Comment by Rolf_Nelson2 on The Second Law of Thermodynamics, and Engines of Cognition · 2008-02-27T01:53:22.000Z · LW · GW

Another solid essay.

To form accurate beliefs about something, you really do have to observe it.

How do we model the fact that I know the Universe was in a specific low-entropy state (spacetime was flat) shortly after the Big Bang? It's a small region in the phase space, but I don't have enough bits of observations to directly pick that region out of all the points in phase space.

Comment by Rolf_Nelson2 on Arguing "By Definition" · 2008-02-21T14:06:07.000Z · LW · GW

Frank, tcpkac:

What do you think of, say, philosophers' endless arguments over what the word "knowledge" really means? This is one example where many philosophers don't seem to understand that the word doesn't have any intrinsic meaning apart from how people define it.

If Bob sees a projection of an oasis and thinks there's an oasis, but there's a real oasis behind the projection that creates a projection of itself as a Darwinian self-defense mechanism, does Bob "know" there's an oasis? Presumably Eliezer would ask, "for what purpose do we want to answer the question?" However, many philosophers would prefer to unconstructively argue what semantics are "correct". So my personal experience is that I don't think Eliezer's attacking a straw man here.

A similar example in grammar: many people think usage of "ain't" is somehow objectively wrong, rather than being just an uncommon and frowned-upon dialect.

Comment by Rolf_Nelson2 on Disguised Queries · 2008-02-11T00:51:27.000Z · LW · GW

What's really at stake is an atheist's claim of substantial difference and superiority relative to religion

Often semantics matter because laws and contracts are written in words. When "Congress shall make no law respecting an establishment of religion", it's sometimes advantageous to claim that you're not a religion, or that your enemy is a religion. If churches get preferential tax treatment, it may be advantageous to claim that you're a church.

Comment by Rolf_Nelson2 on Trust in Bayes · 2008-01-31T04:05:12.000Z · LW · GW

@Peter As a human, I can't introspect and look at my utility function, so I don't really know if it's bounded or not. If I'm not absolutely certain that it's bounded, should I just assume it's unbounded, since there is much more at stake in this case?

This has been gnawing at my brain for a while. If the useful Universe is temporally unbounded, then utility arguably goes to aleph-null. Some MWI-type models and Ultimate-ensemble models arguably give you an uncountable number of copies of yourself; does that count as greater than aleph-null or less than aleph-null (because we normalize to a measure [0, 1] that "looks" small)? What if someone claims "the Universe is spatially finite, but everyone has an inaccessible cardinal number of invisible copies of themselves?" Given my ignorance and confusion, maybe it makes sense to pick the X most credible utility measures, and give them each an "equal vote" in deciding what to do next at each stage, as a current interim measure. This horrendous muddled compromise is itself non-utilitarian and sub-optimal, but I personally don't have a better answer at the moment.

I used to think of my utility function as unbounded, and then after Eliezer's "Pascal's Mugging" post I thought of it as probably bounded. This decision changed the way I live my life... not at all. However, I can understand that if you want to instruct an AGI, you may not be able to allow yourself the luxury of such blissful agnosticism.

@Stephen An intuition in the opposite direction (which I think Rolf agrees with) is that once you reach giant tentacled squillions of units of fun, specifying when/where it happens takes just as much algorithmic complexity as making up a mind from scratch (or interpreting it from a rock).

Alas, I'm not completely sure what you're talking about; the secret decoder ring says "fun = utility", but I think I require an additional cryptogram clue. Is this a UDASSA reference?

Comment by Rolf_Nelson2 on Trust in Bayes · 2008-01-30T01:04:00.000Z · LW · GW

other way around, I mean.

Comment by Rolf_Nelson2 on Trust in Bayes · 2008-01-30T00:59:31.000Z · LW · GW

It's a pity I consider my current utility function bounded; the statement "there's no amount of fun F such that there isn't a greater amount of fun G such that I would prefer a 100% chance of having fun F to having a 50% chance of having fun G and a 50% chance of having no fun" would have been a catchy slogan for my next party.

Comment by Rolf_Nelson2 on Rationality Quotes 5 · 2008-01-27T16:34:39.000Z · LW · GW

The difficulty with analyzing the "insightfulness quotient" of comedians like Scott Adams or Jon Stewart is that there's no reliable way of differentiating "things he sincerely believes" from "things he means seriously at some level, but that are not literally true" from "things that are meant to be just throwaway jokes". If you're sympathetic to Scott Adams, you're likely to interpret true statements or true predictions as "hits", but classify false predictions as "just jokes", and overestimate how insightful he is on average.

Which raises the question: why bother to seek insights from Scott Adams in the first place, if he deliberately mixes false or misleading statements in with true ones? There are already enough "unwittingly false" statements in the media to keep us on our toes as it is.

Comment by Rolf_Nelson2 on Trust in Math · 2008-01-15T14:03:47.000Z · LW · GW

Another solid article.

One point of confusion for me: you talk about axiomatic faith in logic (which is necessary in some form to bootstrap your introspective thinking process), but then abruptly switch to talking about "the last ten million times that first-order arithmetic has proven consistent", a statement of observed prior evidence about learned arithmetic. Both points are valid, but it seemed a non sequitur to me to go from one to the other.

Oh well, off to cast half a vote in the Michigan Primary.

Comment by Rolf_Nelson2 on Is Reality Ugly? · 2008-01-15T01:20:01.000Z · LW · GW

Rolf, surely the simplicity of MWI relative to objective collapse is strong evidence that when we have a better technical understanding of decoherence it will be compatible with MWI?

What do you mean by "compatible"? Do you mean that the observed macroscopic world will emerge as "the most likely result" from MWI, instead of some other macroscopic world where objects decohere on alternate Thursdays, or whenever a proton passes by, or stay a homogeneous soup forever? That's a lot of algorithmic bits that I have to penalize MWI for, given that this has not been demonstrated.

Here's the linchpin of my argument: why should I believe, a priori, that the observed macroscopic world has a decent chance of popping naturally out of MWI, any more than I should believe that the observed world might pop out of the philosophy "All Is Fire"? Should I believe this just because some people have convinced themselves that it probably does (even though they consistently fail to demonstrate it in a rigorous way)? Such post-hoc intuitive beliefs are notoriously unreliable. Extreme example: many people believe that quantum mechanics emerges naturally from Buddhist beliefs (yet, again, oddly they cannot demonstrate this in a rigorous way, and as an added coincidence, they only started saying this after quantum mechanics had already been discovered by secular experimentation).

Aside: if MWI'ers had started in 1890, and then used their "simple MWI" theory to go backwards from macroscopic observations to infer the possible existence of quantum mechanics by asking themselves "from what sets of simple theories might the macroscopic world naturally and intuitively emerge", now that would have impressed me.

Comment by Rolf_Nelson2 on Is Reality Ugly? · 2008-01-14T01:31:21.000Z · LW · GW

Do you have any specific problem in mind? Have you read some of the post-2000 papers on how MWI works, like Everett and Structure?

From the paper:

Two sorts of objection can be raised against the decoherence approach to definiteness. The first is purely technical: will decoherence really lead to a preferred basis in physically realistic situations, and will that preferred basis be one in which macroscopic objects have at least approximate definiteness. Evaluating the progress made in establishing this would be beyond the scope of this paper, but there is good reason to be optimistic. The other sort of objection is more conceptual in nature: it is the claim that even if the technical success of the decoherence program is assumed, it will not be enough to solve the problem of indefiniteness...

So David Wallace would agree that "decoherence for free", mapping QM onto macroscopic operations without postulating a new non-unitary rule, has not yet been established on that tiny little, nitpicky "purely technical" level. The difference is that Wallace presumably believes that success is Right Around the Corner, whereas I believe the 50 years of failure are strong evidence that the basic approach is entirely wrong. (And yes, I feel the same way about 20 years of failure in String Theory.) Time will tell.

Comment by Rolf_Nelson2 on Is Reality Ugly? · 2008-01-13T17:26:29.000Z · LW · GW

I wish to hell that I could just not bring up quantum physics. But there's no real way to explain how reality can be a perfect mathematical object and still look random due to indexical uncertainty, without bringing up quantum physics.

MWI doesn't explain why the Universe has four large dimensions and three small neutrinos. In order to explain that by indexical uncertainty, you have to bring up other multiverse concepts anyway, and if you bring in "ultimate ensemble" theories, then MWI vs. non-MWI no longer matters for the rhetorical point you're making.

I am personally unconvinced by the arguments that MWI does away with the need for a non-unitary operation, because of the inability of MWI proponents to show that MWI works in a rigorous way without one. I would bet that some combination of Objective Reduction + MWI is the correct physical theory. The point I'm trying to make is that Eliezer's conclusion about indexical uncertainty may still be correct, even if you find Everett's MWI incoherent.

Comment by Rolf_Nelson2 on 0 And 1 Are Not Probabilities · 2008-01-11T00:45:10.000Z · LW · GW

Doug S., I believe according to quantum mechanics the smallest unit of length is Planck length and all distances must be finite multiples of it.

Not in standard quantum mechanics. Certain of the many theories (read: unsupported hypotheses) of quantum gravity (such as Loop Quantum Gravity) might say something similar to this, but that doesn't abolish every infinite set in the framework. The total number of "places where infinity can happen" in modern models has tended to increase, rather than decrease, over the centuries, as models have gotten more complex. One can never prove that nature isn't "allergic to infinities" (the skeptic can always claim, "wait, but if we looked even closer or farther, maybe we would see a heretofore unobserved brick wall"), but this allergy is not something that has been empirically observed.

Comment by Rolf_Nelson2 on Rational vs. Scientific Ev-Psych · 2008-01-05T00:44:03.000Z · LW · GW

Is there a word for the case that is similar to the "just-so" story, but that has a spurious environmental explanation rather than a spurious genetic one? (For example, "boys are more aggressive than girls because parents give their boys more violent toys.") I see many more of the former than the latter in the media.

Comment by Rolf_Nelson2 on The Two-Party Swindle · 2008-01-03T02:04:14.000Z · LW · GW

I don't find the polls consistent with the picture of libertarian voters vs. colluding statist politicians. Only a significant majority (not an overwhelming majority) seems to support lower taxes, and when the question is phrased as costs vs. benefits (rather than "taxes in a vacuum") that majority tends to disappear.

Comment by Rolf_Nelson2 on When None Dare Urge Restraint · 2007-12-10T01:42:00.000Z · LW · GW

the overreaction was foreseeable in advance, not just in hindsight

To paraphrase what my brain is hearing from you, Eliezer:

In 2001, you would have predicted, "In 2007, I will believe that the U.S. overreacted between 2001 and 2007."

In 2007, your prediction is true: you personally believe the U.S. overreacted.

Not very impressive. (I know lots of people who can successfully predict that they will have the same political beliefs six years from now, no matter what intervening evidence occurs between now and then! It's not something that you should take pride in. :-)

I would suggest you join a prediction market if you believe you have an uncanny, cross-domain knack for consistently predicting the future, except that I don't want to distract you from your AI work.

Comment by Rolf_Nelson2 on Uncritical Supercriticality · 2007-12-09T22:39:00.000Z · LW · GW

Rolf: It seems to me that you are trying to assert that it is normative for agents to behave in a certain manner because the agents you are addressing are presumably non-normative.

On a semantic level, I agree; I actually avoided using the word "normative" in my comment because you had, earlier, correctly criticized my use of the word on my blog.

I try to consistently consider myself as part of an ensemble of flawed humans. (It's not easy, and I often fail.) To be more rigorous, I would want to condition my reasoning on the fact that I'm one of the flawed humans who attempts to adjust for the fact that he himself is a flawed human. (But, I don't think that, in practice, this particular conditioning would change my conclusions.)

To "bootstrap" my philosophy, I do have to presume that I have some ability, much of the time, to use logic in such a way that (on average) more than 50% of my 1-bit beliefs are likely to be correct. But since I grant that same ability to the rest of the ensemble of flawed humans, that doesn't affect the analysis.

I don't have a citation to an existing paper that rigorously spells out how you would do this (maybe such a paper doesn't even exist, for all I know), but my intuition is that such analysis is not, at a fundamental level, self-contradictory.

Comment by Rolf_Nelson2 on Uncritical Supercriticality · 2007-12-05T01:31:06.000Z · LW · GW

And it is triple ultra forbidden to respond with violence.

I agree. However, here are my minority beliefs on the topic: unless you use Philosophical Majoritarianism, or some other framework where you consider yourself as part of an ensemble of fallible human beings, it's fairly hard to conclusively demonstrate the validity of this rule, or indeed to draw any accurate conclusions about what to do in these cases.

If I consider my memories and my current beliefs in the abstract, as not a priori less fallible than anyone else's, a "no exceptions to Freedom to Dissent" policy follows naturally. But if I, instead, always model my last ten minutes of thought as part of a privileged, infallible Bayesian process, then the simplest conclusion is that I have a right, and even a moral duty, to mete out political punishment as I see fit. (You're also logically required, if the latter is your worldview, to eschew stock-market index funds in favor of placing diverse InTrade bets with your savings; finally, you're also required, if a betting market ever opens up where people can bet on the consequences of mathematical proofs, to bet against the greatest mathematicians in the world anytime you disagree with them on whether a proof is valid.)

Comment by Rolf_Nelson2 on Natural Selection's Speed Limit and Complexity Bound · 2007-11-07T01:39:00.000Z · LW · GW

Cyan,

> I can't really process this query until you relate the words you've used to the math MacKay uses

On page 1, MacKay posits x as a bit-sequence of an individual. Pick an individual at random. The question at hand is whether the Shannon entropy of x, for that individual, decreases at a rate of O(1) bits per generation.

This would be one way to quantify the information-theoretic adaptive complexity of an individual's DNA.

In contrast, if for some odd reason you wanted to measure the total information-theoretic adaptive complexity of the entire species, then as the population N -> ∞, the total amount of information maxes out in one generation (since anyone with a calculator, a lot of spare time, and access to the entire population could, more or less, deduce the entire environment after one generation).
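
To be explicit about the quantity I mean (standard definitions, my notation): the Shannon entropy of the distribution over an individual's bit-sequence x, with the per-generation information gain being the drop in that entropy:

```latex
\[
H(X) = -\sum_{x} p(x)\,\log_2 p(x),
\qquad
\Delta I_{\text{per generation}} = H(X_t) - H(X_{t+1}).
\]
```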

Comment by Rolf_Nelson2 on Natural Selection's Speed Limit and Complexity Bound · 2007-11-06T04:59:00.000Z · LW · GW

If you look at equation 3 of MacKay's paper, you'll see that he defines information in terms of frequency of an allele in a population

I apologize; my statement was ambiguous. The topic of Eliezer's post is how much information is in an individual organism's genome, since that's what limits the complexity of a single organism, and that's what I'm talking about.

Equation 3 addresses the holistic information of the species, which I find irrelevant to the topic at hand. Maybe Alice, Bob, and Charlie's DNA could together have up to 75 MB of data in some holographic sense. Maybe a dog, cat, mouse, and anteater form a complex 100 MB system, but I don't care.

Would you agree that the information-theoretic increase in the amount of adaptive data in a single organism is still limited to O(1) bits per generation in MacKay's model? If not, please let me know, because in that case I'm clearly missing something and would like to learn from my mistake.

Comment by Rolf_Nelson2 on Natural Selection's Speed Limit and Complexity Bound · 2007-11-06T03:34:00.000Z · LW · GW

MacKay's paper talks about gaining bits as in bits on a hard drive

I don't think MacKay's paper even has a coherent concept of information at all. As far as I can tell, in MacKay's model, if I give you a completely randomized 100 MB hard drive, then I've just given you 50 MB of useful information, because half of the bits are correct (we just don't know which ones). This is not a useful model.
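
A small information-theoretic way of putting the objection (my notation): if the drive's contents X are statistically independent of the target string Y, their mutual information is zero, no matter how many bits happen to match by chance:

```latex
\[
I(X;Y) = H(Y) - H(Y \mid X) = 0 \quad \text{whenever $X$ and $Y$ are independent.}
\]
```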

Comment by Rolf_Nelson2 on Pascal's Mugging: Tiny Probabilities of Vast Utilities · 2007-10-23T01:57:00.000Z · LW · GW

You would have to abandon Solomonoff Induction (or modify it to account for these anthropic concerns) to make this work.

To be more specific, you would have to alter it in such a way that it accepted Brandon Carter's Doomsday Argument.

Comment by Rolf_Nelson2 on Pascal's Mugging: Tiny Probabilities of Vast Utilities · 2007-10-23T01:39:00.000Z · LW · GW

Even if the Matrix-claimant says that the 3^^^^3 minds created will be unlike you, with information that tells them they're powerless, if you're in a generalized scenario where anyone has and uses that kind of power, the vast majority of mind-instantiations are in leaves rather than roots.

You would have to abandon Solomonoff Induction (or modify it to account for these anthropic concerns) to make this work. Solomonoff Induction doesn't let you consider just "generalized scenarios"; you have to calculate each one in turn, and eventually one of them is guaranteed to be nasty.

To paraphrase Wei's example: the mugger says, "Give me five dollars, or I'll simulate and kill 3^^^^3 people, and I'll make sure they're aware that they are at the leaf and not at the node". Congratulations, you now have over 3^^^^3 bits of evidence (in fact, it's a tautology with probability 1) that the following proposition is true: "if the mugger's statement is correct, then I am the one person at the node and am not one of the 3^^^^3 people at the leaf." By Solomonoff Induction, this scenario where his statement is literally true has > 1 / 2^(10^50) probability, as it's easily describable in much less than 10^50 bits. Once you try to evaluate the utility differential of that scenario, boom, we're right back where we started.
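
A sketch of the arithmetic (my notation): if the mugger's scenario is describable in b bits, with b far below 10^50, a Solomonoff-style prior gives it weight at least roughly 2^-b, and the expected-utility term it contributes swamps the five dollars:

```latex
\[
P(\text{scenario}) \gtrsim 2^{-b} > 2^{-10^{50}},
\qquad
2^{-b} \cdot U\!\left(3\uparrow\uparrow\uparrow\uparrow 3 \ \text{lives}\right) \gg U(\$5).
\]
```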

On the other hand, you could modify Solomonoff Induction to reflect anthropic concerns, but I'm not sure it's any better than just modifying the utility function to reflect anthropic concerns.

And, of course, there's still the pig problem in either case.

Comment by Rolf_Nelson2 on Pascal's Mugging: Tiny Probabilities of Vast Utilities · 2007-10-20T19:01:20.000Z · LW · GW

Tiiba, keep in mind that to an altruist with a bounded utility function, or with any other of Peter's caveats, it may not "make perfect sense" to hand over the five dollars. So the problem is solvable in a number of ways; the trick is to come up with a solution that (1) isn't a hack and (2) doesn't create more problems than it solves.

Anyway, like most people, I'm not a complete utilitarian altruist, even at a philosophical level. Example: if an AI complained that I take up too much space and am mopey, and offered to kill me and replace me with two happy midgets, I would feel no guilt about refusing the offer, even if the AI could guarantee that overall utility would be higher after the swap.

Comment by Rolf_Nelson2 on Pascal's Mugging: Tiny Probabilities of Vast Utilities · 2007-10-20T17:21:59.000Z · LW · GW

Re: "However clever your algorithm, at that level, something's bound to confuse it. Gimme FAI with checks and balances every time."

I agree that a mature Friendly Artificial Intelligence should defer to something like humanity's volition.

However, before it can figure out what humanity's volition is and how to accomplish it, an FAI first needs to:

  1. self-improve into trans-human intelligence while retaining humanity's core goals
  2. avoid UnFriendly Behavior (for example, murdering people to free up their resources) in the process of doing step (1)

If the AI falls prey to a paradox early on in the process of self-improvement, the FAI has failed and has to be shut down or patched.

Why is that a problem? Because if the AI falls prey to a paradox later on in the process of self-improvement, when the computer can outsmart human beings, the result could be catastrophic. (As Eliezer keeps pointing out: a rational AI might not agree to be patched, just as Gandhi would not agree to have his brain modified into becoming a psychopath, and Hitler would not agree to have his brain modified to become an egalitarian. All else being equal, rational agents will try to block any actions that would prevent them from accomplishing their current goals.)

So you want to create an elegant (to the point, ideally, of being "provably correct") structure that doesn't need patches or hacks. If you have to constantly patch or hack early on in the process, that increases the chances that you've missed something fundamental, and that the AI will fail later on, when it's too late to patch.

Comment by Rolf_Nelson2 on Congratulations to Paris Hilton · 2007-10-20T15:59:42.000Z · LW · GW

Here's my data point:

  1. Like Michael Vassar, I see the rationality of cryonics, but I'm not signed up myself. In my case, I currently use altruism + inertia (laziness) + fear of looking foolish to non-transhumanists + "yuck factor" to override my fear of death and allow me to avoid signing up for now. Altruism is a constant state of Judo.

  2. My initial gut emotional reaction to reading that Eliezer signed up for cryonics was irritation that Eliezer asks for donations, and then turns around and spends money on this perk that most people, including me, don't indulge in. (An analogy is the emotion that strikes you if you hear that the president of a charity drives a Ferrari that he bought out of his charity salary.)

  3. I then quickly realized (even before seeing Eliezer's elaboration) that this reaction is illogical: it doesn't matter whether you spend money on cryonics rather than, say, on eating out more often, or on buying a house that's slightly larger than you need for bare survival. So I discount this emotion.

  4. However, it's not clear to me what % of the non-cryonics majority will reach step 3. There are many ways someone could easily rationalize the emotions of step 2 if, unlike me, they were inclined to do so in this case. (I can give examples of plausible rationalizations on request.)

  5. One way to mitigate this, for people who didn't reach step 3, would be to point out that, while signing up for cryonics when you're on death's door is a five- to six-figure investment, signing up through life insurance when you're young and healthy (which I presume is Eliezer's situation) is extremely cheap.

  6. Eliezer is a product of Darwinian evolution. An extreme outlier, to be sure, with the "altruism knob" cranked up to 11, but a product of evolution nonetheless, with all the messy drives that entails. I would be more bothered if he claimed to be altruistic 100% of the time, since that would cause me to doubt his honesty.

  7. (Corollary to (6)) If someone is considering donating, but is holding off because "I am not sufficiently convinced Eliezer is altruistic enough; I'm going to keep my money and wait until I meet someone with a greater probability of being altruistic", please let me know (here, or at rzolf.h.d.nezlson@gmail.com, remove z's) and I will be happy to enlighten you on all the ways this reasoning is wrong.