Posts

Reading Math: Pearl, Causal Bayes Nets, and Functional Causal Models 2011-12-05T03:45:06.671Z

Comments

Comment by jwdink on Reading Math: Pearl, Causal Bayes Nets, and Functional Causal Models · 2011-12-05T22:02:45.844Z · LW · GW

Thanks, that is helpful.

My claim was that if we represent the gears example by modeling the underlying (classical) physics of the system with Pearl's functional causal models, there's nothing cyclic about the system. Thus, Pearl's causal theory doesn't need to resort to the messy, expensive stuff for such systems. It only needs to get messy in systems that are a) cyclic, and b) implausible to model via their physics-- for example, negative and positive feedback loops (smoking causes cancer causes despair causes smoking).
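Here's a minimal sketch of what I mean, in Python (my own toy construction, not Pearl's): if you index each gear's state by time, every structural equation points from earlier variables to later ones, so the "mutually constraining" gears unroll into a perfectly acyclic model. The gear ratio, time step, and torque-sets-velocity physics are all simplifying assumptions invented for illustration.

```python
# Toy functional causal model of two meshed gears, unrolled over time.
# Assumptions (invented for illustration): torque directly sets gear A's
# angular velocity, and gear B is rigidly meshed to A with a fixed ratio.

GEAR_RATIO = 2.0   # B turns twice (and oppositely) for each turn of A
DT = 0.01          # time step in seconds

def angle_a(prev_a: float, torque: float) -> float:
    """Structural equation for A_t: depends only on A_{t-1} and the
    exogenous torque -- never on B at the same instant."""
    return prev_a + torque * DT

def angle_b(a_now: float) -> float:
    """Structural equation for B_t: a deterministic function of A_t
    via the meshing constraint."""
    return -GEAR_RATIO * a_now

# Parent structure of the unrolled graph:
#   torque_t -> A_t,  A_{t-1} -> A_t,  A_t -> B_t
# -- a DAG, with no cycles anywhere.
a = 0.0
for torque in [1.0, 1.0, 0.5, 0.0]:
    a = angle_a(a, torque)
    b = angle_b(a)
    print(f"A = {a:.3f} rad, B = {b:.3f} rad")
```

An intervention like do(B_t = 0) would then just mean replacing the angle_b equation at one time step, exactly as in the ordinary acyclic case.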

Comment by jwdink on Reading Math: Pearl, Causal Bayes Nets, and Functional Causal Models · 2011-12-05T19:48:48.640Z · LW · GW

Thanks, I'll check that out soon.

Comment by jwdink on Reading Math: Pearl, Causal Bayes Nets, and Functional Causal Models · 2011-12-05T19:48:15.641Z · LW · GW

Oy, I'm not following you either; apologies. You said:

The common criticism of Pearl is that this assumption fails if one assumes quantum mechanics is true.

...implying that people generally criticize his theory for "breaking" at quantum mechanics. That is, to find a system outside his "subset of causal systems," critics have to reach all the way to quantum mechanics. He could respond, "well, QM causes a lot of trouble for a lot of theories." Not bullet-proof, but still. However, you started (in your very first comment) by saying that his theory "breaks" even in the gears example. So why have critics gone after his theory for breaking under complex quantum mechanics, when all along there were much simpler and more common causal situations they could have used to show it breaking?

More generally, I just can't agree with your interpretation of Pearl as only trying to describe a subset of causal systems, if that subset excludes such commonplace examples as the gears example. I think he was trying to describe a theory of how causation and counterfactuals can be formalized and mathematized to describe most of nature. Perhaps this theory doesn't apply to nature as described at the quantum-mechanical level, but I find it extremely implausible that it doesn't apply to the vast majority of nature. It was designed to. Can you really watch this video and deny that he thinks his theory applies to classical physics, such as the gears example? Or do you think he'd be stupid enough not to think of the gears example? I'm baffled by your position.

Comment by jwdink on Reading Math: Pearl, Causal Bayes Nets, and Functional Causal Models · 2011-12-05T09:11:23.200Z · LW · GW

If his theory breaks in situations as mundane and simple as the gears example above, then why have the common criticisms invoked the vagaries of quantum mechanics to attack the Markov assumption? They might as well have used simple examples involving gears.

Comment by jwdink on Reading Math: Pearl, Causal Bayes Nets, and Functional Causal Models · 2011-12-05T04:43:21.067Z · LW · GW

The theory is supposed to describe ANY causal system-- otherwise it would be a crappy theory of how (normatively) people ought to reason causally, and how (descriptively) people do reason causally.

Comment by jwdink on Philosophy: A Diseased Discipline · 2011-04-01T18:11:55.571Z · LW · GW

That philosophy itself can't be supported by empirical evidence; it rests on something else.

Right, and I'm asking you what you think that "something else" is.

I'd also re-assert my challenge to you: if philosophy's arguments don't rest on evidence of some kind, what distinguishes it from nonsense/fiction?

Comment by jwdink on Philosophy: A Diseased Discipline · 2011-03-31T21:58:46.076Z · LW · GW

Unless you think the "Heideggerian critique of AI" is a good example, in which case I can engage with that.

Comment by jwdink on Philosophy: A Diseased Discipline · 2011-03-31T21:37:03.754Z · LW · GW

I think you are making a category error. If something makes claims about phenomena that can be proved/disproved with evidence in the world, it's science, not philosophy.

Hmm... I suspect the phrasing "evidence/phenomena in the world" might make my assertion sound overly mechanistic. I don't mean that verifiable/disprovable physical/atomistic facts must be cited-- that would be begging the question. I just mean that any meaningful argument must make reference to evidence that can be pointed to in support of, or in criticism of, the given argument. Note that "evidence" doesn't exclude "mental phenomena." If we don't ask that philosophy cite evidence, what distinguishes it from meaningless nonsense, or fiction?

I'm trying to write a more thorough response to your statement, but I'm finding it really difficult without the use of an example. Can you cite some claim of Heidegger's or Hegel's that you would assert is meaningful, but does not spring out of an argument based on empirical evidence? Maybe then I can respond more cogently.

Comment by jwdink on Philosophy: A Diseased Discipline · 2011-03-30T21:36:09.075Z · LW · GW

Continental philosophy, on the other hand, if you can manage to make sense of it, actually can provide new perspectives on the world, and in that sense is worthwhile. Don't assume that just because you can't understand it, it doesn't have anything to say.

It's not that people coming from the outside don't understand the language. I'm not just frustrated that Hegel uses esoteric terms and writes poorly. (Much the same could be said of Kant, and I love Kant.) It's that when I ask, "hey, okay, if the language is just tough but there is content to what Hegel/Heidegger/etc. is saying, then why don't you give a single example of some hypothetical piece of evidence in the world that would affirm/disprove the putative claim?", no example is ever forthcoming. In other words, my accusation isn't that continental philosophy is hard; it's that it makes no claims about the objective hetero-phenomenological world.

Typically, I say this to a Hegelian (or whoever), and they respond that they're not trying to talk about the objective world, perhaps because the objective world is a bankrupt concept. That's fine, I guess-- but are you really willing to go there? Or would you claim that continental philosophy can make meaningful claims about actual phenomena, which can actually be sorted through?

I guess I'm wholeheartedly agreeing with the author's statement:

You will occasionally stumble upon an argument, but it falls prey to magical categories and language confusions and non-natural hypotheses.

Comment by jwdink on Yes, a blog. · 2010-11-20T22:37:13.163Z · LW · GW

That's fantastic. What school was this?

Comment by jwdink on Yes, a blog. · 2010-11-20T22:36:33.233Z · LW · GW

If they can't stop students from using Wikipedia, pretty soon schools will be reduced from teaching how to gather facts, to teaching how to think!

This is what kind of rubs me the wrong way about the above "idea selection" point. Is the implication here that the only utility of working through Hume's or Kant's original text is to separate the "correct" facts from the chaff? Seems like working through the text could be good for other reasons.

Comment by jwdink on The Trolley Problem in popular culture: Torchwood Series 3 · 2009-08-08T00:21:10.953Z · LW · GW

Haha, we must have very different criteria for "confusing." I found that post very clear, and I've struggled quite a bit with most of your posts. No offense meant, of course: I'm just not very versed in the LW vernacular.

Comment by jwdink on The Trolley Problem in popular culture: Torchwood Series 3 · 2009-08-07T20:58:11.858Z · LW · GW

I agree generally that this is what an irrational value would mean. However, the presiding implicit assumption was that the utilitarian ends were the correct ones, and therefore the presiding explicit assumption (or at least, I thought it was presiding... now I can't seem to get anyone to defend it, so maybe not) was that the most efficient means to these particular ends were the most rational.

Maybe I was misunderstanding the presiding assumption, though. It was just stuff like this:

Lesswrongers will be encouraged to learn that the Torchwood characters were rationalists to a man and woman - there was little hesitation in agreeing to the 456's demands.

Or this, in response to a call to "dignity":

How many lives is your dignity worth? Would you be willing to actually kill people for your dignity, or are you only willing to make that transaction if someone else is holding the knife?

Comment by jwdink on Why You're Stuck in a Narrative · 2009-08-05T16:58:49.007Z · LW · GW

I don't get you

Could you say why?

Comment by jwdink on The Trolley Problem in popular culture: Torchwood Series 3 · 2009-08-05T16:57:57.956Z · LW · GW

Okay, that's fine. So you'll agree that the various people--who were saying that the decision made in the show was the rational route--these people were speaking (at least somewhat) improperly?

Comment by jwdink on The Trolley Problem in popular culture: Torchwood Series 3 · 2009-08-05T04:38:50.313Z · LW · GW

It seems like you are seeing my replies as soldier-arguments for the object-level question about the sacrifice of children, stumped on a particular conclusion that sacrificing children is right, while I'm merely giving opinion-neutral meta-comments about the semantics of such opinions. (I'm not sure I'm reading this right.)

...so you're NOT attempting to respond to my original question? My original question was "what's irrational about not sacrificing the children?"

Comment by jwdink on Why You're Stuck in a Narrative · 2009-08-04T17:56:58.911Z · LW · GW

Wonderful post.

Because the brain is a hodgepodge of dirty hacks and disconnected units, smoothing over and reinterpreting their behaviors to be part of a consistent whole is necessary to have a unified 'self'. Drescher makes a somewhat related conjecture in "Good and Real", introducing the idea of consciousness as a 'Cartesian Camcorder', a mental module which records and plays back perceptions and outputs from other parts of the brain in a continuous stream. It's the idea that "I am not the one who thinks my thoughts, I am the one who hears my thoughts", the source of which escapes me. Empirical support for this comes from the experiments of Benjamin Libet, which show that subconscious electrical processes precede conscious actions - implying that consciousness doesn't engage until after an action has already been decided. If this is in fact how we handle internal information - smoothing out the rough edges to provide some appearance of coherence - it shouldn't be surprising that we tend to handle external information in the same manner.

Even this language, I suspect, is couched in a manner that expresses Cartesian Materialist remnants. One of the most interesting things about Dennett is that he believes in free will, despite his masterful grasp of the disunity of conscious experience and action. This, I think, is because he recognizes an important fact: we have to redefine the conscious self as something spread out over time and location (in the brain), not as the thing that happens AFTER the preceding neuronal indicators.

But perhaps I'm misinterpreting your diction.

Comment by jwdink on The Trolley Problem in popular culture: Torchwood Series 3 · 2009-08-04T17:37:20.684Z · LW · GW

Okay, so I'll ask again: why couldn't the humans' real preference be to not sacrifice the children? Remember, you said:

You can't decide your preference, preference is not what you actually do, it is what you should do

You haven't really elucidated this. You're either pulling an ought out of nowhere, or you're saying "preference is what you should do if you want to win". In the latter case, you still haven't explained why giving up the children is winning, and not doing so is not winning.

And the link you gave doesn't help at all, since, if we're going to be looking at moral impulses common to all cultures and humans, I'm pretty sure not sacrificing children is one of them. See: Jonathan Haidt

Comment by jwdink on The Trolley Problem in popular culture: Torchwood Series 3 · 2009-08-04T17:35:34.690Z · LW · GW

But there seemed to be some suggestion that an avoidance of sacrificing the children, even at the risk of everyone's lives, was a "less rational" value. If it's a value, it's a value... how do you call certain values invalid, or not "real" preferences?

Comment by jwdink on Pain · 2009-08-03T17:19:50.792Z · LW · GW

Excellent response.

As a side note, I do suspect that there's a big functional difference between an entity that feels a small voice in the back of the head and an entity that feels pain like we do.

Comment by jwdink on Pain · 2009-08-03T17:16:06.719Z · LW · GW

How does one define "bad" without "pain" or "suffering"? Seems rather difficult. Or: the question doesn't seem so much difficult as (almost) tautological. It's like asking "What, if anything, is hot about atoms moving more quickly?"

Comment by jwdink on The Trolley Problem in popular culture: Torchwood Series 3 · 2009-07-30T21:26:32.492Z · LW · GW

Oh, it's no problem if you point me elsewhere. I should've specified that that would be fine. I just wanted some definition. The only link that was given, I believe, was one defining rationality. Thanks for the links, I'll check them out.

Comment by jwdink on The Trolley Problem in popular culture: Torchwood Series 3 · 2009-07-30T19:00:07.651Z · LW · GW

All human preferences, in their exact form, are hidden. The complexity of human value is too great to comprehend it all explicitly with a merely human mind.

Okay... so again, I'll ask... why is it irrational to NOT sacrifice the children? How does it go against hidden preference (which, perhaps, it would be prudent to define)?

Comment by jwdink on The Trolley Problem in popular culture: Torchwood Series 3 · 2009-07-30T18:58:51.570Z · LW · GW

That's not a particularly helpful or elucidating response. Can you flesh out your position? It's impossible to tell what it is based on the paltry statements you've provided. Are you asserting that the "equation" or "hidden preference" is the same for all humans, or ought to be the same, and therefore is something objective/rational?

Comment by jwdink on The Trolley Problem in popular culture: Torchwood Series 3 · 2009-07-29T22:25:48.298Z · LW · GW

What would be an example of a hidden preference? The post to which you linked didn't explicitly mention that concept at all.

Comment by jwdink on The Trolley Problem in popular culture: Torchwood Series 3 · 2009-07-29T22:25:07.900Z · LW · GW

I suppose I'm questioning the validity of the analogy: equations are by nature descriptive, while what one ought to do is prescriptive. Are you familiar with the Is-Ought problem?

Comment by jwdink on The Trolley Problem in popular culture: Torchwood Series 3 · 2009-07-29T21:19:01.187Z · LW · GW

Instrumental rationality: achieving your values. Not necessarily "your values" in the sense of being selfish values or unshared values: "your values" means anything you care about. The art of choosing actions that steer the future toward outcomes ranked higher in your preferences. On LW we sometimes refer to this as "winning".

Couldn't these people care about not sacrificing autonomy, in which case this would be a value that they're successfully fulfilling?

Comment by jwdink on The Trolley Problem in popular culture: Torchwood Series 3 · 2009-07-29T21:13:34.244Z · LW · GW

You can't decide your preference, preference is not what you actually do, it is what you should do, and it's encoded in your decision-making capabilities in a nontrivial way, so that you aren't necessarily capable of seeing what it is.

You've lost me.

Comment by jwdink on The Trolley Problem in popular culture: Torchwood Series 3 · 2009-07-29T20:18:19.780Z · LW · GW

Which of the decisions is (actually) the better one depends on the preferences of the one who decides

So if said planet decided that its preference was to perish, rather than sacrifice children, would this be irrational?

However, whatever the right decision is, there normally should be a way to fix the parameters of utilitarian calculation so that it outputs the right decision. For example, if the right decision in the topic problem is actually war to the death, there should be a way to more formally understand the situation so that the utilitarian calculation outputs "war to the death" as the right decision.

I don't see why I should agree with this statement. I was understanding a utilitarian calculation as either a) the greatest happiness for the greatest number of people or b) the greatest preferences satisfied for the greatest number of people. If a), then it seems like it might predictably give you answers that are at odds with moral intuitions, and have no way of justifying itself against these intuitions. If b), then there's nothing irrational about deciding to go to war with the aliens.
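Purely to illustrate my worry about (a) and (b), here's a toy calculation in Python (entirely my own construction; the outcomes, scores, and weights are invented placeholders): whether the "utilitarian calculation" outputs "sacrifice the children" or "war to the death" depends entirely on how you fix its parameters, which is exactly why I don't see how the calculation by itself can adjudicate between the two.

```python
# Toy "utilitarian calculation" with one free parameter. All numbers
# and outcome names are invented for illustration, not anyone's real
# position in this thread.

def utility(outcome: str, dignity_weight: float) -> float:
    scores = {
        "sacrifice_children": {"happiness": 0.9, "dignity": 0.1},
        "war_to_the_death":   {"happiness": 0.2, "dignity": 0.9},
    }
    s = scores[outcome]
    return s["happiness"] + dignity_weight * s["dignity"]

# Two different ways of fixing the parameters give opposite verdicts.
for w in (0.0, 2.0):
    best = max(("sacrifice_children", "war_to_the_death"),
               key=lambda o: utility(o, w))
    print(f"dignity_weight={w}: the calculation outputs {best!r}")
```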

Comment by jwdink on The Trolley Problem in popular culture: Torchwood Series 3 · 2009-07-29T20:13:51.509Z · LW · GW

Well then... I'd say a morality that puts the dignity of a few people (the decision makers) as having more importance than, well, the lives and well being of the majority of the human race is not a very good morality.

Okay. Would you say this statement is based on reason?

Comment by jwdink on The Trolley Problem in popular culture: Torchwood Series 3 · 2009-07-29T20:12:57.501Z · LW · GW

If a decision decreases utility, is it not irrational?

I don't see how you could go about proving this.

As for the trolley problem, what we are dealing with is the aftermath of the trolley problem. If you save the people on the trolley, it could be argued that you have behaved dishonourably, but what about the people you saved? Surely they are innocent of your decision. If humanity is honourably wiped out by the space monsters, is that better than having some humans behave dishonourably and others (i.e. those who favoured resistance, but were powerless to effect it) survive honourably?

Well, wait. Are we dealing with the happiness that results in the aftermath, or are we dealing with the moral value of the actions themselves? Surely these two are distinct. Don't the intentions behind an action factor into the morality of the action? Or are the results all that matter? If intentions are irrelevant, does that mean that inanimate objects (entities without intentions, good or bad) can do morally good things? If a tornado diverts from a city at the last minute, was that a morally good action?

I think intentions matter. It might be the case that, 100 years later, the next generation will be happier. That doesn't mean that the decision to sacrifice those children was the morally good decision-- in the same way that, despite the tornado-free city being a happier city, it doesn't mean the tornado's diversion was a morally good thing.

Comment by jwdink on The Trolley Problem in popular culture: Torchwood Series 3 · 2009-07-29T20:08:13.745Z · LW · GW

Ah, then I misunderstood. A better way of phrasing my challenge might be: it sounds like we might have different algorithms, so prove to me that your algorithm is more rational.

No one has answered this challenge.

Comment by jwdink on The Trolley Problem in popular culture: Torchwood Series 3 · 2009-07-29T19:24:17.184Z · LW · GW

Well, sure, when you phrase it like that. But your language begs the question: it assumes that the desire for dignity/autonomy is just an impulse/fuzzy feeling, while the desire to preserve human life is an objective good that is the proper aim for all (see my other posts above). This sounds plausible to me, but it doesn't sound obvious/ rationally derived/ etc.

I could, after all, phrase it in the reverse manner. If I assume that dignity/autonomy is objectively good, then the question becomes "everyone preserves their objectively good dignity" vs. "just about everyone loses their dignity by destroying human autonomy, but we get that warm fuzzy feeling of saving some people." In this framing, "everyone loses their dignity, but at least they get to survive-- in the way that any other undignified organism (an amoeba) survives" would actually seem to be a highly immoral decision.

I'm not endorsing either view, necessarily. I'm just trying to figure out how you can claim one of these views is more rational or logical than the other.

Comment by jwdink on The Trolley Problem in popular culture: Torchwood Series 3 · 2009-07-29T19:13:08.674Z · LW · GW

What do the space monsters deserve?

Haha, I was not factoring that in. I assumed they were evil. Perhaps that was closed-minded of me, though.

The first scenario is better for both space monsters and humans. Sure, in the second scenario, the humans theoretically don't lose their dignity, but what does dignity mean to the dead?

Some people would say that dying honorably is better than living dishonorably. I'm not endorsing this view, I'm just trying to figure out why it's irrational, while the utilitarian sacrifice of children is more rational.

To put it in another light, what if this situation happened a hundred years ago? Would you be upset that the people alive at the time caved in to the aliens' demands, or would you prefer the human race had been wiped out?

There are plenty of variables you can slide up and down to make one feel more or less comfortable with the scenario. But we already knew that, didn't we? That's what the original trolley problem tells us: that pushing someone off a bridge feels morally different than switching the tracks of a trolley. My concern is that I can't figure out how to call one impulse (the discomfort at destroying autonomy) an objectively irrelevant mere impulse, and another impulse (the comfort at preserving life) an objectively good fact. It seems difficult to throw just the bathwater out here, but I'd really like to preserve the baby. (See my other post above, in response to Nesov.)

Comment by jwdink on The Trolley Problem in popular culture: Torchwood Series 3 · 2009-07-29T19:05:28.588Z · LW · GW

Yeah, the sentiment expressed in that post is usually my instinct too.

But then again, that's the problem: it's an instinct. If my utilitarian impulse is just another impulse, then why does it automatically outweigh any other moral impulses I have, such as a value of human autonomy? If my utilitarian impulse is NOT just an impulse, but somehow is objectively more rational and outranks other moral impulses, then I have yet to see a proof of this.

Comment by jwdink on The Trolley Problem in popular culture: Torchwood Series 3 · 2009-07-29T16:38:51.997Z · LW · GW

I don't quite understand how your rhetorical question is analogous here. Can you flesh it out a bit?

I don't think the notion of dignity is completely meaningless. After all, we don't just want the maximum number of people to be happy, we also want people to get what they deserve-- in other words, we want people to deserve their happiness. If only 10% of the world were decent people, and everyone else were immoral, which scenario would seem the more morally agreeable: the scenario in which the 10% good people were ensured perennial happiness at the expense of the other 90%'s misery, or the reversed scenario?

I'm just seeing something parallel here: it's not brute number of people living that matters, so much as those people having worthwhile existences. After sacrificing their children on a gamble, do these people really deserve the peace they get?

(Would you also assert that Ozymandias' decision in Watchmen was morally good?)

Comment by jwdink on The Trolley Problem in popular culture: Torchwood Series 3 · 2009-07-29T16:37:54.654Z · LW · GW

I'm surprised that was so downvoted too.

Perhaps I should rephrase it: I don't want to assert that it would've been objectively better for them to not give up the children. But can someone explain to me why it's MORE rational to give up in this situation?

Comment by jwdink on The Trolley Problem in popular culture: Torchwood Series 3 · 2009-07-29T03:00:32.088Z · LW · GW

That's horrible. They should've fought the space monsters in an all out war. Better to die like that than to give up your dignity. I'm surprised they took that route on the show.

Comment by jwdink on The Trolley Problem in popular culture: Torchwood Series 3 · 2009-07-29T02:59:31.826Z · LW · GW

A good example of this (I think) is The Dark Knight, with the two ferries.

Comment by jwdink on The Terrible, Horrible, No Good, Very Bad Truth About Morality and What To Do About It · 2009-07-21T18:44:26.904Z · LW · GW

The mistake philosophers tend to make is in accepting rationalism proper, the view that our moral intuitions (assumed to be roughly correct) must be ultimately justified by some sort of rational theory that we’ve yet to discover.

The author seems to assert that this is a cultural phenomenon. I wonder, however, whether our attempts at unifying our intuitions into a theory might be instinctive. Would it then be so obvious that Moral Realism is false? We have an innate demand for consistency in our moral principles, which might allow us to say something like "racism is indeed objectively wrong, IF you believe that happiness is good."

That being said, I don't think it's enough to save moral realism. The prospect that moral realism is false has been disturbing to me lately, so I'm curious how he carves out a comfortable alternative.

Comment by jwdink on The Strangest Thing An AI Could Tell You · 2009-07-15T21:25:06.814Z · LW · GW

The example of the paralysis anosognosia rationalization is, for some reason, extremely depressing to me.

Does anyone understand why this only happens in split-brain patients when their right hemisphere motivates an action? Shouldn't it happen quite often, since the right side has no way of communicating to the left side "it's time to try a new theory," and the left side is the one we'll be talking to?

Comment by jwdink on Open Thread: July 2009 · 2009-07-08T19:12:06.501Z · LW · GW

If they are being called "fundamentally mental" because they interact by one set of rules with things that are mental and a different set of rules with things that are not mental, then it's not consistent with a reductionist worldview...

Is it therefore a priori logically incoherent? That's what I'm trying to understand. Would you exclude a "Cartesian theatre" fundamental particle a priori?

(and it's also confused because you're not getting at how mental is different from non-mental). However, if they are being called fundamentally mental because they happen to be mechanistically involved in mental mechanisms, but still interact with all quarks in one consistent way everywhere, it's logically possible.

What do you mean by mechanical? I think we're both resting on some hidden assumption about dividing the mental from the physical/mechanical. I think you're right that it's hard to articulate, but this makes Eliezer's original argument even more confusing. Could you clarify whether or not you're agreeing with his argument?

Comment by jwdink on Open Thread: July 2009 · 2009-07-08T17:43:50.691Z · LW · GW

what if qualions really existed, in a material way, and there were physical laws describing how they were caught and accumulated by neural cells. There's absolutely no evidence for such a theory, so it's crazy, but it's not logically impossible or inconsistent with reductionism, right?

Hmm... excellent point. Here I do think it begins to get fuzzy... what if these qualions fundamentally did stuff that we typically attribute to higher-level functions, such as making decisions? Could there be a "self" qualion? Could their behavior be indeterministic in the sense that we naively attribute to humans? What if there were one qualion per person, which determined everything about their consciousness and personality irreducibly? I feel that, if these sorts of things were the case, we would no longer be within the realm of a "material" theory. It seems that Eliezer would agree:

By far the best definition I've ever heard of the supernatural is Richard Carrier's: A "supernatural" explanation appeals to ontologically basic mental things, mental entities that cannot be reduced to nonmental entities. This is the difference, for example, between saying that water rolls downhill because it wants to be lower, and setting forth differential equations that claim to describe only motions, not desires. It's the difference between saying that a tree puts forth leaves because of a tree spirit, versus examining plant biochemistry. Cognitive science takes the fight against supernaturalism into the realm of the mind.

Why is this an excellent definition of the supernatural? I refer you to Richard Carrier for the full argument. But consider: Suppose that you discover what seems to be a spirit, inhabiting a tree: a dryad who can materialize outside or inside the tree, who speaks in English about the need to protect her tree, et cetera. And then suppose that we turn a microscope on this tree spirit, and she turns out to be made of parts - not inherently spiritual and ineffable parts, like fabric of desireness and cloth of belief; but rather the same sort of parts as quarks and electrons, parts whose behavior is defined in motions rather than minds. Wouldn't the dryad immediately be demoted to the dull catalogue of common things?

Based on his post eventually insisting on the a priori incoherence of such possibilities (we look inside the dryad and find out she's not made of dull quarks), I inferred that he thought fundamentally mental things, too, are excluded a priori. You seem to now disagree, as I do. Is that right?

Comment by jwdink on Open Thread: July 2009 · 2009-07-08T17:14:26.399Z · LW · GW

Reductionism, as I understand it, is the idea that the higher levels are completely explained by (are completely determined by) the lower levels. Any fundamentally new type of particle found would just be added to what we consider "lower level".

Oh! Certainly. But this doesn't seem to exclude "mind", or some element thereof, from being irreducible-- which is what Eliezer was trying to argue, right? He's trying to support reductionism, and this seems to include an attack on "fundamentally mental" entities. Based on what you're saying, though, there could be a fundamental type of particle, called "feelions" or "qualions"--the entities responsible for what we call "mind"--which would not reduce down to quarks, and therefore would deserve to be called its own fundamental type of "stuff." It's a bit weird to me to call this a reductionist theory, and it's certainly not a reductionist materialist theory.

Everything else you said seems to me right on-- emergent properties that are in principle irreducible to their constituents seem somewhat incoherent to me.

Comment by jwdink on Open Thread: July 2009 · 2009-07-08T17:02:32.562Z · LW · GW

QM possesses some fundamental level of complexity, but I wouldn't agree in this context that it's "fundamentally complicated".

I see what you mean. It's certainly a good distinction to make, even if it's difficult to articulate. Again, though, I think it's Occam's Razor and induction that make us prefer the simpler entities-- they aren't the sole inhabitants of the territory by default.

Comment by jwdink on Open Thread: July 2009 · 2009-07-08T15:24:22.125Z · LW · GW

Indeed, an irreducible entity (albeit one with describable, predictable behavior) is not much better than a miracle. This is why Occam's Razor, insisting that our model of the world should not postulate needless entities, pushes us to reduce everything to one type of stuff if possible. But the "if possible" is key: we verify through inference and induction whether it's reasonable to think we'll be able to reduce everything-- not through a priori logic.

Comment by jwdink on Open Thread: July 2009 · 2009-07-08T15:10:09.313Z · LW · GW

That said, I wonder if the claim can't be near-equivalently rephrased "it's impossible to imagine a non-reductionist scenario without populating it with your own arbitrary fictions".

Ah, that's very interesting. Now we're getting somewhere.

I don't think it has to be arbitrary. Couldn't the following scenario be the case?

The universe is full of entities that experiments show to be reducible to fundamental elements with laws (say, quarks), or entities that induction + parsimony tell us ought to be reducible to fundamental elements (since these entities are made of quarks, we just haven't quite figured out the reduction of their emergent properties yet)... BUT there is one exception in this universe: a certain type of stuff whose behavior is quantifiable, yet not reducible to quarks. In fact, we have no reason to believe this certain type of stuff is even made of the fundamental stuff everything else seems to be. Every experiment would defy reducing this entity down to quarks, to the point that it would actually be against Occam's Razor to try to reduce it to quarks! It would be a type of dualism, I suppose. It's not a priori logically excluded, and it's not arbitrary.

Comment by jwdink on Open Thread: July 2009 · 2009-07-08T14:55:29.408Z · LW · GW

Of course it's technically possible that the territory will play a game of supernatural and support a fundamental object behaving according to a high-level concept in your mind. But this is improbable to an extent of being impossible, a priori, without need for further experiments to drive the certainty to absolute.

Not quite sure what you're saying here. If you're saying:

1) "Entities in the map will not magically jump into the territory," then I never disagreed with this. What I disagreed with is your labeling certain things as obviously in the map and others obviously in the territory. We can use whatever labels you like: I still don't know why irreducible entities in the territory are "incredibly improbable prior to any empirical evidence."

2) "The territory can't support irreducible entities," then you still haven't asserted why this is "incredibly improbable prior to any empirical evidence."

Comment by jwdink on Open Thread: July 2009 · 2009-07-08T14:43:43.327Z · LW · GW

To loqi and Nesov:

Again, both of your responses seem to hinge on the assumption that my challenge below is easily answerable, and has already been answered:

Tell me the obvious, a priori logically necessary criteria for a person to distinguish between "entities within the territory" and "high-level concepts." If you can't give any, then this is a big problem: you don't know that the higher level entities aren't within the territory. They could be within the territory, or they could be "computational abstractions." Either position is logically tenable, so it makes no sense to say that this is where the logical incoherence comes in.

To loqi: Where do we draw the line? When is an entity too complex to be considered fundamental, and when is it simple enough to qualify? What would be a priori illogical about every entity in the universe being explainable in terms of quarks, except for one type of entity, which simply followed different laws? (Maybe these laws wouldn't even be deterministic, but that's apparently not a knockdown criticism of them, right? From what I understand, QM isn't deterministic, on some interpretations.)

To Nesov: Again, you're presupposing that you know what's part of the territory, and what's part of the map, and then saying "obviously, the territory isn't affected by the map." Sure. But this presupposes the territory doesn't have any irreducible entities. It doesn't demonstrate it.

Don't get me wrong: Occam's razor will indeed (and rightly) push us to suspect that there are no irreducible entities. But it will do this based on some previous success with reduction-- it is an inference, not an a priori necessity.

Comment by jwdink on Open Thread: July 2009 · 2009-07-08T04:02:55.264Z · LW · GW

This doesn't really answer the question, though. I know that a priori means "prior to experience", but what does this consist of? Originally, for something to be "a priori illogical", it was supposed to mean that it couldn't be thought without contradicting oneself, because of pre-experiential rules of thought. An example would be two straight lines on a flat surface forming a bounded figure-- it's not just wrong, but inconceivable. As far as I can tell, an irreducible entity doesn't possess this inconceivability, so I'm trying to figure out what Eliezer meant.

(He mentions some stuff about being unable to make testable predictions to confirm irreducibility, but as I've already said, this seems to presuppose that reducibility is the default position, not prove it.)