OK, thanks, I see no problems with that.
I have now added a hopefully suitable paragraph to the post.
In replying initially, I assumed that "indexical uncertainty" was a technical term for a variable that plays the role of probability, given that in fact "everything happens" in MW and therefore everything strictly has a probability of 1. However, I have now looked up "indexical uncertainty" and find that it means an observer's uncertainty as to which branch they are in (or more generally, uncertainty about one's position in relation to something even though one has certain knowledge of that something). That being so, I can't see how you can describe it as being in the territory.
Incidentally, I have now added an edit to the quantum section of the OP.
Great. Incidentally, that seems a much more intelligible use of "territory" and "map" than in the Sequence claim that a Boeing 747 belongs to the map and its constituent quarks to the territory.
Thanks, so to get back to the original question of how to describe the different effects of divergence and convergence in the context of MW, here's how it's seeming to me. (The terminology is probably in need of refinement).
Considering this in terms of the LW-preferred Many Worlds interpretation of quantum mechanics, exact "prediction" is possible in principle, but the prediction is of the indexical uncertainty of an array of outcomes. (The indexical uncertainty governs the probability of a particular outcome if one is considered at random.) Whether a process is convergent or divergent on a macro scale makes no difference to the number of states that formally need to be included in the distribution of possible outcomes. However, in the convergent process the cases become so similar that there appears to be only one outcome at the macro scale; whereas in a divergent process the "density of probability" (in the above sense) becomes so vanishingly small for some states that at a macro scale the outcomes appear to split into separate branches. (They have become decoherent.) Any one such branch appears to an observer within that branch to be the only outcome, and so such an observer could not have known what to "expect" - only the probability distribution of what to expect. This can be described as a condition of subjective unpredictability, in the sense that there is no subjective expectation that can be formed before the divergent process which can reliably be expected to coincide with observation after the process.
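For what it's worth, the convergent/divergent contrast can be made concrete with a toy classical simulation (nothing quantum about it - the maps and parameters below are purely illustrative assumptions):

```python
import random

def convergent(x):
    return 0.5 * x            # contraction: every trajectory approaches 0

def divergent(x):
    return 4 * x * (1 - x)    # logistic map at r = 4: chaotic, sensitive to x0

def ensemble_spread(step, n=1000, steps=40):
    # an ensemble of near-identical initial states stands in for the
    # "array of outcomes" above
    states = [0.3 + random.uniform(-1e-6, 1e-6) for _ in range(n)]
    for _ in range(steps):
        states = [step(x) for x in states]
    return max(states) - min(states)

print("convergent spread:", ensemble_spread(convergent))  # ~0: one macro outcome
print("divergent spread:", ensemble_spread(divergent))    # ~1: outcomes look like separate "branches"
```

The same number of states is tracked in both cases; only the spread differs, which is the sense in which the macro-scale appearance of "one outcome" versus "branches" arises.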
There are no discrete "worlds" and "branches" in quantum physics as such.
This seems to conflict with references to "many worlds" and "branch points" in other comments - or is the key word "discrete"? In other words, the states are a continuum with markedly varying density, so that if you zoom out there is the appearance of branches? I could understand that except for cases like Schrödinger's cat, where there seems to be a pretty clear branch (at the point where the box is opened, i.e. from the point of view of a particular state, if that is the right terminology).
Once two regions in state space are sufficiently separated to no longer significantly influence each other...
From the big bang there are an unimaginably large number of regions in state space each having an unimaginably small influence. It's not obvious, but I can perfectly well believe that the net effect is dominated by the smallness of influence, so I'll take your word for it.
Thanks, I think I understand that, though I would put it slightly differently, as follows...
I normally say that probability is not a fact about an event, but a fact about a model of an event, or about our knowledge of an event, because there needs to be an implied population, which depends on a model. When speaking of "situations like this" you are modelling the situation as belonging to a particular class of situations whereas in reality (unlike in models) every situation is unique. For example, I may decide the probability of rain tomorrow is 50% because that is the historic probability for rain where I live in late July. But if I know the current value of the North Atlantic temperature anomaly, I might say that reduces it to 40% - the same event, but additional knowledge about the event and hence a different choice of model with a smaller population (of rainfall data at that place & season with that anomaly) and hence a greater range of uncertainty. Further information could lead to further adjustments until I have a population of 0 previous events "like this" to extrapolate from!
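A toy sketch of the shrinking-population point (the records and field names are invented; only the logic is meant to illustrate):

```python
# Hypothetical rainfall history: (rained, late_july, anomaly_present).
records = [
    (True,  True, False), (False, True, True), (True, True, True),
    (False, True, True),  (True,  True, False), (False, True, False),
]

def p_rain(population):
    # the estimate is a fact about this reference class (the model),
    # not about tomorrow's unique event
    return sum(rained for rained, *_ in population) / len(population)

late_july = [r for r in records if r[1]]
print(p_rain(late_july))        # estimate from season alone

with_anomaly = [r for r in late_july if r[2]]
print(p_rain(with_anomaly))     # extra knowledge: smaller class, different estimate
```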
Now I think what you are saying is that subject to the hypothesis that our knowledge of quantum physics is correct, and in the thought experiment where we are calculating from all the available knowledge about the initial conditions, that is the unique case where there is nothing more to know and no other possible correct model - so in that case the probability is a fact about the event as well. The many worlds provide the population, and the probability is that of the event being present in one of those worlds taken at random.
Incidentally, I'm not sure where my picture of probability fits in the subjective/objective classification. Probabilities of models are objective facts about those models, probabilities of events that involve "bets" about missing facts are subjective, while what I describe is dependent on the subject's knowledge of circumstantial data but free of bets, so I'll call it semi-subjective until somebody tells me otherwise!
So, to get this clear (being well outside my comfort zone here), once a split into two branches has occurred, they no longer influence each other? The integration over all possibilities is something that happens in only one of the many worlds? (My recent understanding is based on "Everything that can happen does happen" by Cox & Forshaw).
even if in the specific situation the analogy is incorrect, because the source of randomness is not quantum, etc.
This seems a rather significant qualification. Why can't we say that the MW interpretation is something that can be applied to any process which we are not in a position to predict? Why is it only properly a description of quantum uncertainty? I suspect many people will answer in terms of the subjective/objective split, but that's tricky terrain.
you can consider the whole universe as a big quantum computer, and you're living in it
I recall hearing it argued somewhere that it's not so much "a computer" as "the universal computer", in the sense that it is impossible in principle for there to be another computer performing the calculations from the same initial conditions (and, for example, getting to a particular state sooner). I like that if it's true. The calculations can be performed, but only by existing.
the multiverse as a whole evolves deterministically
So to get back to my question of what predictability means in a QM universe under MW, the significant point seems to be that prediction is possible starting from the initial conditions of the Big Bang, but not from a later point in a particular universe (without complete information about all the other universes that have evolved from the Big Bang)?
the truth-value of the claim, which is what we're discussing here
More precisely, it's what you're discussing. (Perhaps you mean I should be!) In the OP I discussed the implications of an infinitely divisible system for heuristic purposes without claiming such a system exists in our universe. Professionally, I use Newtonian mechanics to get the answers I need without believing Einstein was wrong. In other words, I believe true insights can be gained from imperfect accounts of the world (which is just as well, since we may well never have a perfect account). But that doesn't mean I deny the value of worrying away at the known imperfections.
Well, I didn't quite say "choose what is true". What truth means in this context is much debated and is another question. The present question is to understand what is and isn't predictable, and for this purpose I am suggesting that if the experimental outcomes are the same, I won't get the wrong answer by imagining CI to be true, however unparsimonious. If something depends on whether an unstable nucleus decays earlier or later than its half-life, I don't see how the inhabitants of the world where it has decayed early and triggered a tornado (so to speak) will benefit much by being confident of the existence of a world where it decayed late. Or isn't that the point?
I mentioned back in April that the point about chaos and computer science needed a proper discussion. It is here.
I also mentioned another way of taking the reductionism question further. I was referring to this.
I agree, I had thought of mentioning this but it's tricky. As I understand it, living in one of Many Worlds feels exactly like living in a single "Copenhagen Interpretation" world, and the argument is really over which is "simpler" and generally Occam-friendly - do you accept an incredibly large number of extra worlds, or an incredibly large number of reasons why those other worlds don't exist and ours does? So if both interpretations give rise to the same experience, I think I'm at liberty to adopt the Vicar of Bray strategy and align myself with whichever interpretation suits any particular context. It's easier to think about unpredictability without picturing Many Worlds - e.g. do we say "don't worry about driving too fast because there will be plenty of worlds where we don't kill anybody?" But if anybody can offer a Many Worlds version of the points I have made, I'd be most interested!
Yes, that looks like a good summary of my conclusions, provided it is understood that "subsystems" in this context can be of a much larger scale than the subsystems within them which diverge. (Rivers converge while eddies diverge).
Perhaps "hedging" is another term that also needs expanding here. One can reasonably assume that Penrose's analysis has some definite flaws in it, given the number of probable flaws identified, while still suspecting (for the reasons you've explained) that it contains insights that may one day contribute to sounder analysis. Perhaps the main implication of your argument is that we need to keep arguments in our mind in more categories then just a spectrum from "strong" to "weak". Some apparently weak arguments may be worth periodic re-examination, whereas many probably aren't.
"having different descriptions at different levels" is itself something you say that belongs in the realm of Talking About Maps, not the realm of Talking About Territory
Why do we distinguish “map” and “territory”? Because they correspond to “beliefs” and “reality”, and we have learnt elsewhere in the Sequences that
my beliefs determine my experimental predictions, but only reality gets to determine my experimental results.
Let’s apply that test. It isn’t only predictions that apply at different levels; so do the results. We can have right or wrong models at quark level, atom level, crystal level, and engineering component level. At each level, the fact that one model is right and another wrong is a fact about reality: it is Talking about Territory. When we say a 747 wing is really there, we mean that (for example) visualising it as a saucepan will result in expectations that the results will not fulfil, in the way that they will when visualising it as a wing. Indeed, we can have many different models of the wing, all equally correct - since they all result in predictions that conform to the same observations. The choice of correct model is what is in our head. The fact that it has to be (equivalent to) a model of a wing to be correct is in the Territory. In short, when Talking about Territory we can describe things at as many levels (of aggregation) as yield descriptions that can be tested against observation.
at different levels
What exactly is meant by “levels” here? The Naval Gunner is arguing about levels of approximation. The discussion of Boeing 747 wings is an argument about levels of aggregation. They are not the same thing. Treating the forces on an aircraft wing at the aggregate level is leaving out internal details that per se do not affect the result. There will certainly be approximations involved in practice, of course, but they don’t stem from the actual process of aggregation, which is essentially a matter of combining all the relevant force equations algebraically, eliminating internal forces, before solving them; rather than combining the calculated forces numerically.
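A two-particle worked example of exact aggregation (my own illustration, not from the post): with an internal force $f$ between the particles and external forces $F_1, F_2$,

$$m_1\ddot{x}_1 = F_1 + f, \qquad m_2\ddot{x}_2 = F_2 - f.$$

Adding the two equations eliminates the internal force exactly (Newton's third law), leaving

$$M\ddot{x}_{cm} = F_1 + F_2, \qquad M = m_1 + m_2, \quad x_{cm} = \frac{m_1 x_1 + m_2 x_2}{M}.$$

No approximation has been introduced; the internal detail has been removed algebraically, which is the sense of "aggregation" intended here.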
...the way physics really works, as far as we can tell, is that there is only the most basic level—the elementary particle fields and fundamental forces
The way that reality works, as far as we can tell, is that there are basic ingredients, with their properties, which in any given system at any given instant exist in a particular configuration. Now reality is not just the ingredients but also the configuration - a wrong model of the configuration will give wrong predictions just as a wrong model of the ingredients will. The possible configurations include known stable structures. These structures are likewise real, because any model of a configuration which cannot be transformed into a model which includes the identified structure in question is in conflict with reality. Physics as I understand it comprises (a) laws that are common to different configurations of the ingredients, and (b) laws that are common to different configurations of the known stable structures. Physicalism implies the belief that laws (b) are always consistent with laws (a) when both are sufficiently accurate.
...The laws of physics do not contain distinct additional causal entities that correspond to lift or airplane wings
True but the key word here is “additional”. Newton’s laws were undoubtedly laws of physics, and in my school physics lessons were expressed in terms of forces on bodies, rather than on their constituent particles. The laws for forces on constituent particles were then derived from Newton’s laws by a thought experiment in which a body is divided up. In higher education today the reverse process is the norm, but reality is indifferent to which equivalent formulation we use: both give identical predictions.[Original wording edited]
General Relativity contains the additional causal entity known as space-time curvature, which is an aggregate effect of all the massive particles in the universe given their configuration, and so is not a natural fit in the Procrustean bed of reductionism. [Postscript] Interestingly, I've read that Newton was never happy with his idea of gravitation as a force of attraction between two things, because it implied a property shared between the two things concerned and therefore intrinsic to neither - but he failed to find a better formulation.
The critical words are really and see
Indeed, but when you see a wing it is not just in the mind, it is also evidence of how reality is configured. It is the result of the experiment you perform by looking.
.. the laws of physics themselves, use different descriptions at different levels—as yonder artillery gunner thought
What the gunner really thought is pure speculation of course, but this assumption by EY raises an important point about meta-models.
In thought experiments the outcome is determined by the applicable universal laws – that’s meta-model (A). In any real-world case you need a model of the application as well as models of universal laws. That’s meta-model (B). An actual artillery shell will be affected by things like air resistance, so the greater accuracy of Einstein’s laws in textbook cases is no guarantee of it giving more accurate results in this case. EY obviously knew this, but his meta-model excluded it from consideration here. Treating the actual application as a case governed only by Newton’s or Einstein’s laws is itself a case of “Mind Projection Fallacy” – projecting meta-model (A) onto a real-world application. So it’s not a case of the gunner mistaking a model for reality, but of mistaking the criteria for choosing between one imperfect model and another. I imagine gunners are generally practical men, and in the field of the applied sciences it is very common for competing theories to have their own fields of application where they are more accurate than the alternatives – so although he was clearly misinformed, at least his meta-model was the right one.
[Postscript] An arguable version of reductionism is the belief that laws about the ingredients of reality are in some sense "more fundamental" than laws about stable structures of the ingredients. This cannot be an empirical truth, since both laws give the same predictions where they overlap so cannot be empirically distinguished. Neither is any logical contradiction implied by its negation. It can only be a metaphysical truth, whatever that is. Doesn't it come down to believing Einstein's essentialist concept of science against Bohr's instrumentalist version? That science doesn't just describe, but also tells? So pick Bohr as an opponent if you must, not some anonymous gunner.
I'm not clear what you are meaning by "spatial slice". That sounds like all of space at a particular moment in time. In speaking of a space-time region I am speaking of a small amount of space (e.g. that occupied by one file on a hard drive) at a particular moment in time.
..absent collapse..
Ah, is that so.
But a 4D description of all the changes involved in the copy-and-delete process would be sufficient..
Yes, I can see that that's one way of looking at it.
In fact, your problem would be false positives
I don't think so, since the information I would be comparing in this case (the "file contents") would be just a reduction of the information in two regions of space-time.
Reducing to "physical properties" is not necessarily the same as to "the physical properties of the ingredients". I would have thought physicalists think mental properties can be reduced to physical properties, but reductionists identify these with the physical properties of the ingredients. I suppose one way of looking at it is that when you say "in principle" the principles you refer to are physical principles, whereas when emergentists see obstacles as present "in principle" when certain kinds of complexity are present they are more properly described as mathematical principles.
Mental events can certainly be reduced to physical events, but I would take mental properties to be the properties of the set of all possible such events, and the possibility of connecting these to the properties of the brain's ingredients even in principle is certainly not self-evident.
"As a theory of mind (which it is not always), emergentism differs from idealism, eliminative materialism, identity theories, neutral monism, panpsychism, and substance dualism, whilst being closely associated with property dualism. " (WP)
As a theory exclusively of the mind, I can see that emergentism has implications like property dualism, but not as a theory that treats the brain just as a very complex system with similar issues to other complex systems.
OK, not strictly "conserved", except that I understand quantum mechanics requires that the information in the universe must be conserved. But what I meant is that if you download a file to a different medium and then delete the original, the information is still the same although the descriptions at quark level are utterly different. Thus there is a sense in which a quark level description of reality fails to capture an important fact about it (the identity of the two files in information terms).
I don't think this has anything to do with dualism in the Cartesian sense, it's just an example of my general preference for not taking metaphysical positions without reference to the context. I'm afraid I don't know the label for that!
"Emergentism" can only be applied to gearboxes if the irreducibility clause is dropped. The high-level behaviour of a mechanism is always reducible to its the behaviour of its parts.
My point is that it depends whether by "behaviour" you mean "the characteristics of a single solution" or "the characteristics of solution space". In the latter case the meaning of "reduction" doesn't seem unambiguous to me.
The practical debate I have in mind is whether multibody dynamics can answer practical questions about the behaviour of gearboxes under conditions of stochastic or transient excitation with backlash taken into account, the point being that the solution space in such an application can be very large.
The problem is when people use the label "emergence" as a semantic stop sign
Agreed, which is why I was trying to replace it by a "proceed with caution" sign with some specific directions.
One lives & learns - thanks.
The boundary between physical causality and logical or mathematical implication doesn’t always seem to be clearcut. Take two examples.
(1) The product of two and an integer is an even integer. So if I double an integer I will find that the result is even. The first statement is clearly a timeless mathematical implication. But by recasting the equation as a procedure I introduce both an implied separation in time between action and outcome, and an implied physical embodiment that could be subject to error or interruption. Thus the truth of the second formulation strictly depends on both a mathematical fact and physical facts.
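A trivial rendering of the contrast, purely to make the two formulations concrete:

```python
def double(n: int) -> int:
    # the procedural recasting: an action in time, on fallible hardware
    return 2 * n

# As mathematics, the assertion below can never fail; whether it actually
# passes also depends on physical facts about the machine executing it
# (no interruption, no memory corruption, ...).
assert double(17) % 2 == 0
```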
(2) The endpoint of a physical process is causally related to the initial conditions by the physical laws governing the process. The sensitivity of the endpoint to the initial conditions is a quite separate physical fact, but requires no new physical laws: it is a mathematical implication of the physical laws already noted. Again, the relationship depends on both physical and mathematical truths.
Is there a recognized name for such hybrid cases? They could perhaps be described as “quasi-causal” relationships.
Yes indeed, it is a challenge to understand how the same human moral functionality "F" can result in a very different value system "M" from one's own, though I suspect a lot of historical reading would be necessary to fully understand the Nazis' construction of the social world - "S", in my shorthand. A contemporary example of the same challenge is the cultures that practise female genital mutilation. You don't have to agree with a construction of the world to begin to see how it results in the avowed values that emerge from it, but you do have to be able to picture it properly. In both cases, this challenge has to be distinguished from the somewhat easier task of explaining the origins of the value system concerned.
Maybe it's just that EY is very persuasive! I'm reminded of what was said about some other polymath (Arthur Koestler I think) that the critics were agreed that he was right on almost everything - except, of course, for the topic that the critic concerned was expert in, where he was completely wrong!
So my problem is, whether to just read the sequences, or to skim through all the responses as well. The latter takes an awful lot longer, but from what I've seen so far there's often a response from some expert in the field concerned that, at the least, puts the post into a whole different perspective.
Confidence in moral judgments is never a sound criterion for them being "terminal", it seems to me.
To see why, consider that one's working values are unavoidably a function of two related things: one's picture of oneself, and of the social world. Thus, confident judgments are likely to reflect confidence in relevant parts of these pictures, rather than the shape of the function. To take your example, your adverse judgment of authority could have been a reflection of a confident picture of your ideal self as not being submissive, and of human society at its current state of development as being capable of operating without authority (doubtless oversimplifying greatly, but I hope you get the idea).
A crude mathematical model may help. If M is a vector of your moral values, and S and I are your understanding of society and your personal construct respectively, then I am suggesting M = F(S, I). The problem is that "terminal values" as I understand them reside in F, but it is only M that is directly accessible to introspection. It is extremely difficult to imagine away the effect of S and I, but one way of making progress should be to vary S and I. That is, try hard to imagine being in an utterly different social context to the one we know: an ancestral hunter-gatherer tribal group; a group of castaways on an island, the remainder being young children; an encounter with aliens; a group defending one's family against an evil oppressor; etc. Likewise, imagine being in the shoes of somebody with very different aptitudes and personality. The things that remain constant - the things that tell us how to deal with all these different cases - are our terminal values. (Or rather, they would be if we could only eliminate self-deception.)
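A toy rendering of that procedure (the function and the scenario numbers are pure inventions, just to make "vary S and I" concrete):

```python
def F(society, self_image):
    # stand-in moral function: the avowed values M depend on both inputs
    return {
        "protect_the_vulnerable": 1.0,   # invariant across the scenarios below
        "defer_to_authority": society["stability"] * (1 - self_image["dominance"]),
    }

scenarios = [
    ({"stability": 0.9}, {"dominance": 0.2}),   # settled modern society
    ({"stability": 0.1}, {"dominance": 0.8}),   # castaways after a shipwreck
]
for S, I in scenarios:
    print(F(S, I))
# Whatever remains constant in M across all imagined S and I is a candidate
# "terminal value" residing in F itself.
```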
I'm still puzzled, as you seem to be both defending and contradicting EY's view that:
the reductionist thesis is that we use multi-level models for computational reasons, but physical reality has only a single level. (Italics added).
I'm not actually attacking this view so much as regarding it as a particular convention or definition of reality rather than a "thesis".
Perhaps you are reading "best characterized as" as "best modelled as"? I'm not saying that, just that this is the sense of "reality" that EY/the wiki writer prefers to adopt.
Re my claim:
"well, of course Bryan's mental model of his pain doesn't exist in reality by definition"
On reflection I suspect the disagreement here is that I am doubting that Bryan could consciously deny this, and you & EY & others are suspecting that he is unconsciously denying it. Well, that's a theory. I have added an edit to my post recognizing this. This seems to boil down to the LW-wiki "definition" not really defining what reductionists believe, but rather defining why they believe certain criticisms of reductionism are wrong. That at least would explain why it sounds biased!
What do you mean when you say "but it's information that is downloaded"? That the monist model does not completely describe reality? That computer programming is easier with the dualist model than the monist model? That information lives in a nonphysical universe that communicates with the physical universe, such that only having the physical universe would be insufficient for computers to run?
To answer more fully: the 'monist' model without information as a category describes reality at any instant, but does not describe what is conserved from one instant to the next. Any activity that requires an intelligible account of what is going on is easier with the concept of information as a separate "thing". Of course, information doesn't belong in a nonphysical universe, since it obeys physical laws. Nevertheless, the fact that it has a life of its own, with laws distinct from the laws specific to the materials which embody it at any given time, gives it part (but not all) of the character of a separate physical but intangible substance.
The point of my analogy was to emphasise that all categories are man-made, including "substance", so that "substance counting" has an element of arbitrariness. Actually I don't find that treating "mind" as a separate substance is helpful!
Re Hands vs. Fingers. What worries me about this is the lack of any attention to the different contexts/purposes of different statements about hands & fingers. I have added a comment to the original post to amplify this.
...much later... The thing that puzzles me about this post is that no attention is paid to context.
I had an operation last year to my right index finger. It was carried out by a hand surgeon. I used those terms because it was rather important which finger was operated on, and because the medical specialism relates to any part of the hand indifferently.
A trivial example, of course, but it illustrates the point, which applies also to much more complex issues, that the appropriate choice of "model level" (or other meta-model feature) to best represent the aspect of reality that matters depends on the context (and especially on the purpose). The difficulty begins, IMHO, when people insist on using the same model or meta-model whatever the context.
Most commenters on this post seem entirely wrapped up in the mind/brain question. That isn't the only question for rationalists to have a view about! They don't seem to be aware that arguments about the usefulness and limits of reductionism also continue in many other fields. The problem is probably that concepts like emergence are used in the mind-brain debate as an excuse for vitalism. But that is really a special case, just because minds are the things that are conducting this debate. In other fields emergence can be a useful concept. In other words I can claim that emergence is useful (in some senses anyway) without believing this has anything to do with consciousness.
Many different things can be deduced from this story, as previous comments have illustrated. The step that I question is "carries no information" = "magic". I prefer Karl Popper's account, in which [to paraphrase "Conjectures & Refutations" Chapter 1] "carries no predictive information" = "metaphysical" but "metaphysical" does not mean "unscientific". Rather, science involves two activities, hypothesis creation and hypothesis testing. It is the hypothesis testing that has to be exclusively empirical (confined to falsifiable hypotheses). There are no rules for arriving at new hypotheses according to Popper, only heuristics, and metaphysical arguments can often be a source of new insights that lead to new falsifiable hypotheses. I believe Imre Lakatos developed this distinction with his idea of "Research programmes" which cannot be falsified but get abandoned when they cease to be fruitful of falsifiable hypotheses. The commenters who have stressed that some of the student's wrong answers could be valuable first steps towards understanding fit into Popper's scheme. The question (which we can't answer) is whether the "password" status or the "first step" status was uppermost in their minds. To conclude, the posting is valuable in drawing attention to the disutility of password-type answers, but misleading in not also recognizing the role of first-step-type answers.
Thanks again.
it seems cleaner to consider that a fact about computer science, not meteorology.
I'd call it a fact about any system whose trajectories diverge at a smaller scale and converge at a larger scale (roughly), but that's a radical view that needs a new discussion some time.
I think I can see a useful way of taking the reductionism question further, but will do more reading first...
Well, if the definition said that "reductionists disagree that 2 & 2 make 5" I wouldn't disagree with that either. What worries me is the apparent refusal to engage with the rational critics of reductionism. But I am mainly thinking of critics in fields other than physics - politics ("there is no such thing as society"), Skinner's psychology ("there are no thoughts, only stimuli and responses"), not to mention developmental biology, weather forecasting and even mechanical engineering analysis, none of which actually gets near "the territory" of quarks and leptons. So I am beginning to suspect that reductionism is used in a special sense by EY, more or less as a synonym for monism. And it's true, I wouldn't want to defend "substance dualists".
As for the Naval Gunner, the point is that he would be right in other fields than fundamental physics. In weather forecasting long term forecasts using coarser models are actually more accurate than those using fine meshes, because of the chaotic behaviour at smaller scales. So I would say the gunner was just misinformed! The fact that one of the two theories happens to be one of the very few theories that are exact as far as we currently know, and the other an approximation, makes it a special case - though possibly one of special relevance if monism/dualism is really the issue in question.
Thanks for pointing to the more recent EY post, which I look forward to reading. No time tonight.
;) ... but that's still only matter and information, just that we're now just information....
Sorry no time for a full answer, but roughly, yes, in a sense I do think that many of these disagreements turn out to be linguistic if you dig far enough. But if they are causal, the definition needs to compare two intelligible models of causality, not define one in a self-contradictory way. My reply to buybuydandavis may also help clarify.
That computer programming is easier with the dualist model than the monist model?
Yes, and anything else that requires an intelligible account of what is going on. You start with a monist model and then you have to define something called or synonymous with information. In my understanding that makes it a dualist model. (I hope my draft next discussion, Karma permitting(!), will elucidate further.)
I find it hard to square that with the Sequence item referred to, but then you imply you also found it confused. So, what do you use the word to mean?
So my objections aren't aimed at you!
I wasn't intending to unless that's what the Wiki definition characterizes it as, because I simply tried to re-express that definition without using the terms map and territory in ways that their definitions exclude.
I think perhaps I can see the problem. My phrase "for the purposes of causal explanation" is ambiguous. I wasn't meaning "as a way of explaining any particular behaviour" but rather "as a way of establishing the root causes underlying any behaviour". Does that make it more acceptable/less "greedy"?
Another possible cause of misunderstanding is that I have never seen the point of essentialism, in other words I think the questions that matter are always "how can something be most usefully described (for some stated purpose)" rather than "what is the essence of that something", so I instinctively avoided an essentialist definition. I'll think about rewording my version in a way that essentialists will recognize, as the sort of reductionism we are talking about does seem to hinge on reality being a sort of Platonic essence...
I agree that's a good distinction, though direct explanation of a higher level obviously works in some cases (e.g. the weight of the brain is a simple aggregate of the weight of the constituent atoms).
"Can be explained in terms of.." seems a much less biased way of framing the definition to me than the one in the wiki. I'll add it to my list of starting points for discussion!
The answer is in the rest of the sentence that you truncated! I imagine a dualist would say that there is something out there in the territory which you consider to be a manifestation of a higher model level, but they don't. That isn't the same.
I don't find the monist/dualist distinction helpful. Computers have hardware and information: that's a dualist model, and it has served very well as a model. At any given instant, the information is a state of the hardware, which requires a monist model, but it's information that is downloaded etc. So is information "stuff"? Depends what you mean by "stuff". In short it's an argument about definitions in the bad sense of insisting about the "true" meaning.
Sorry, I don't understand. Are you saying that you don't agree with my definition of reductionism (which was intended as a point of agreement, not a straw man at all)? I agree that an opinion about the likelihood that the standard model will continue to serve is a separate question.
An "emergentist" would probably define their view in the same way. The question is, simpler for what purposes?
Thanks! (Both of you)
In the same way that it's a very good exercise when having a rational debate to start by each side paraphrasing the view they oppose in a manner that is acceptable to holders of that view, otherwise the chances are you haven't understood what is being said. Is "a dialogue of the deaf" really what you want?
The one useful purpose for discussion of "meanings" is to draw attention to distinctions between different usages that may get overlooked. The "epistemic" vs "instrumental" is one such distinction in this case.
I suggest there is a third useful sense, which sort of links epistemic rationality and instrumental rationality.
The example in the sequence post takes consistent Bayesian probability with Occam priors as an example of rational modelling of the world. But in what sense is it rational to believe that the world is an ordered place with rules that apply irrespective of location and time, that Occam's razor is the right prior, and so on? The choice of such a method cannot be justified by appeals to evidence that assume the validity of the method.
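For concreteness, a minimal sketch of what "consistent Bayesian probability with Occam priors" amounts to (hypotheses and numbers invented):

```python
hypotheses = {
    # name: (description_length_bits, likelihood of the observed data)
    "simple rule":  (3,  0.20),
    "complex rule": (10, 0.35),
}

def posteriors(hyps):
    # Occam prior: weight 2**-bits, i.e. shorter descriptions start ahead
    weighted = {h: 2 ** -bits * lik for h, (bits, lik) in hyps.items()}
    total = sum(weighted.values())
    return {h: w / total for h, w in weighted.items()}

print(posteriors(hypotheses))
# The prior 2**-bits is itself an unargued starting point - which is exactly
# the point: the method cannot justify its own priors from within.
```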
The only root justification that makes sense to me is by a game-theory type argument. If the world does continue to behave the way I broadly expect it to, that has huge implications - I can continue to behave in broadly the same way as I have been behaving. If the universe from this moment on is going to behave according to entirely different rules, I have no basis for as much as putting one foot in front of another. So assuming that a model which describes the past well will also have validity in future has great instrumental advantages. So it is 'rational' to do so, even though it can't be justified by "scientific" reasoning.
It can be fairly pointed out that the reasoning here is essentially that of Pascal's Wager. However, there is nothing in that particular argument which justifies belief in any one version of "God" rather than another, and if somebody wants to use the word "God" exclusively for the fact that the universe makes sense, I see no reason to object!
By my understanding, rule consequentialism means choosing rules according to the utility of the expected consequences, whereas deontology argues for a duty to follow a rule for reasons which may have nothing to do with the consequences. Kant's "treat another person as an end in him/herself, not as a means to an end" doesn't mention consequences and the argument for it isn't based on assessment of consequences. Admittedly both sorts of rule may lead to the same outcome in most cases, but in totally unprecedented moral dilemmas it helps to have an idea where the rule comes from. My prejudice is that rule consequentialism is the best basis for public policy, but deontology sometimes better captures the essence of what matters in cases of private morality.
To reverse your last point, Sam Harris (The Moral Landscape) defends RC on the grounds that only that which is experienced can be morally significant. While agreeing, I would reply that the motivation of acts is experienced, as well as the consequences. E.g., should you vote if you live in a safe seat? You could argue that the rule "vote anyway" has beneficial consequences - but then, so does the rule "vote, except in safe seats". RC doesn't actually invent the rules, it only tells you how to evaluate them once invented! However, I would vote anyway, because I wish to be the sort of person who does. (NB, I didn't say "become".) That's an example of a D-ish argument that is based on conscious experience and, it seems to me, is a valid supplement to a generally RC-based outlook.