Comments
LW is an exercise in knowing your audience. Best of luck.
Well, I certainly consider this my last reply, because 1) I grow weary of this straightforward enough topic, 2) respondents have hitherto been engaged in a childish, eristically motivated game of serving up a slapdash series of trivial, illogical, and baseless complaints one after another, 3) my posts have been consistently down-voted, which I find highly annoying, and 4) my grasp of the subject – and general familiarity with (and understanding of) the connections between the concepts of omniverse (from omnium = multiverse), MWI, QM, probability states, and the infinitary conclusion obtained by noting the (well-established) view that we live in an inflationary universe (one that may well be eternally inflating) – has no need of a "well-reasoned" or even "persuasive" (an interesting way to move the goalposts, I might add) justification, insofar as there is a body of literature out there that suggests the pertinence and correctness of the answer I provided (which I never said was the true, in-your-face, clear-as-day answer).
(FYI: another source, of which I was previously unaware, that raises some of the key points I have is Brian Greene's The Hidden Reality, p. 181 onward. It does so in a largely jargon-free, not-so-overly-technical manner, so I suppose that should be a relief to LWers.)
You disregarded my claim that "you implicitly imply in your root comment that MWI implies the existence of an omniverse [...] but provide no justification for this." This was a main point.
I don't need to justify what is common knowledge. Take note of Tegmark, if you and the other down-voters care to.
Under this assumption of literal meaning, I contend that there is a contradiction in the two statements that you wrote.
Wow, so you really think your strawman is sufficient as grounds for objection to what I've claimed as correct? I didn't require sophistication of others here. That's pure nonsense. But by all means, keep imputing meaning to my posts that isn't there.
An argument for your answer is what I would like to see. You state that this is "clear", but again, one of the main purposes of this comment thread is to establish whether your answer is correct or not!
I have. It is quite clear. And the only objections I've seen consist in mere definitional confusions on the part of the "objectors", who don't seem to demonstrate an understanding of the claims I made, but instead contend that I'm merely being "hostile" and not persuasive enough.
Is the first phrase supposed to have the same meaning as the second phrase?
Not necessarily the same "meaning" but more or less the same pragmatic thrust.
If your arguments are correct, then you have nothing to lose by being more persuasive, and I claim that your tone was overly aggressive and not persuasive for most purposes.
You make an interesting, and fallacious, claim, and continue to hide behind smoke and mirrors by suggesting that I haven't answered your so-called objections.
Going back to your last post:
It is the opposite, where the probabilities are equal, which requires specific preparations...
What exactly is "it"? I'm referring to the universe, not the cat's being dead or alive. What exactly is the relevance of the probabilities being equal, in any case? Does that even impinge on anything I've said, or even anything anyone else has said? Not obviously so.
And wrongly contend at that.
The word 'omniverse' does not represent a recognized concept in mainstream physics.
If The Road to Reality (from which the term omniverse, or "omnium", originally sprung) is not "mainstream", then pray tell what is.
[E]ven if the probability for an event in our universe were 0 that would in no way serve as an impediment to its occurring in the long run.
This is a technical aspect of the discussion, and is not contradictory. The point should be clear if one considers the possibility of flipping 100 heads in a row with a fair two-sided coin. For all intents and purposes, the probability is 0, but its happening is not in the least prevented or negated were we to consider an infinite ("long run") sequence of flips. Pretty straightforward and not contradictory.
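To put a number on it (my own arithmetic, not part of the original exchange):

\[
P(\text{100 heads in a row}) = \left(\tfrac{1}{2}\right)^{100} = 2^{-100} \approx 7.9 \times 10^{-31},
\]

which is zero for every practical purpose, yet in an unbounded sequence of fair flips such a run still occurs with probability 1.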
I can substitute this phrase with "countably infinitely many" and the structure and strength of your arguments would not be changed.
Had I devoted the energy to a full-length discussion of the topic, this probably wouldn't be an issue, but (in general) it should be clear that the number of such worlds (or universes) should be uncountably infinite, not countably infinite. That is, the cardinality would be at least aleph-one. And I seriously doubt that had any bearing on prase's original point.
This response is not constructive. Provide references. Also, you changed context from ... without clarifying what you meant by the first phrase, which I cannot parse in a way that makes sense.
I guess you could read on the topic, if you're interested. I've already suggested at least two sources (namely, Hawking and Penrose). Do I really have to do all the work? I need to eat and make a living.
First, your tone unnecessarily escalates the hostility in this comment thread.
There is no "tone" here. That is a mind-projection fallacy. Anyone liberated from mammalian instinct, reading what I say without imputing emotional overtones to it, would find that my points consist in reasoned discourse without any torrents of bluster. It's almost as if you people wish to say, "yeah, we can see you holding that 9mm, just waiting to bust a cap, and the foam dripping from your mouth". It's really rather cute.
Your last two sentences are interesting, but I'm currently short on time. Grant me that I will return to respond to them later. Thank you.
Also, someone else may have pointed this out, but the general policy on LessWrong is not to vote on agree/disagree but on "this comment was worth reading" / "this comment was not worth reading".
No one "pointed this out" to me. But they did downvote whatever I said, without so much as a reasoned explanation. I seriously doubt that that is the actual universal employment of the voting mechanism, particularly since I've seen quite a few good posts on LW with numerous down-votes accorded to them. Perhaps my standards of "worthwhile reading" are too generous for the likes of LW'ers.
Saying you are not interested in upvotes is essentially saying you are not interested in contributing to the community.
No, I disagree that that is what I'm saying about the nature of not being interested in up-votes. I can still contribute without being up-voted, and I'm fine with that.
My grammar may be "convoluted" to those who do not take a liking to heady material (yes, I read difficult stuff all the time), so I can't be blamed for slipping into what I find most comfortable, just as you do without any second thought.
Interesting. I thought it would be. The left hemisphere (controlling the right hand) is inhibitory of right-hemispheric activity, and so it would seem you've found a way for your left to counteract negative thinking patterns (which are typical of right-hemispheric thought).
Point specifically to that which is "derogatory" in the initial post. I don't participate in LW to get upvoted, anyway, since that is merely a marker of groupthink (or, at best, correlates with assigning "yay" or "boo" ascriptions to a particular post so that mere classical conditioning can take place). I didn't use any jargon except the term "omniverse", which anyone equipped with Google could look up themselves. I suppose when writing comments on LW, in special cases (as in a technical topic), one must hold the hand of the reader, lest they become enraged by subtleties and novel syntactical arrangements of words.
I experience this during intense aesthetic events as in music, literature, or cinema. It is delightful.
The other effect is that it seems to function as some sort of intra-brain communication.
This is not so surprising. Intra-brain conflicts are well-established neuro-psychological phenomena, primarily on account of the two hemispheres being only thinly connected by axon fibres. There is a degree of modularity in the brain, because each hemisphere tends, as a general rule, to work within its own sphere.
I am curious to know: which hand/finger generally exhibits these non-verbal cues for you to recognize and label particular thoughts consciously?
I hope this helps!
Yes, it does. Thanks. I suppose I should lower my expectations of the general community's familiarity with "technical" subjects.
I express disgust with specific instances of voting.
Okay, I see your point. But the way the voting system is set up, it generalizes across one's presence on the website, hence "karma".
To be clear, I wasn't being "defiant". I asked a very specific question, expecting specific input, not a down-vote and being told (in a "put up or shut up" fashion) that I am just wrong. Well, LW is looking less inviting as a place for truly "rational discourse". But I digress.
"I'm going to leave" bluster.
I thought it was clear that if the question was answered in the affirmative (with clear reasoning), then it would be reasonable for someone to leave such a forum. I stand by that, too, because it would be a waste of my time to put thought into posts only to have them down-voted out of existence. It is wise for a community (if that's what it is) to consider its own nature from a meta-standpoint. Is LW a treasure trove of instances of "fooling oneself"? A case study leads to many others.
...the generalization of the defiance of karma to all cases, not just a specific disagreement.
I never generalized. I asked a question. Read the post again, if you care to.
...the belligerent defiance of the negative perception of your own behavior...
What? Where is that, exactly? The behavior isn't negative, but the perception of it is, therefore I'm unjustified in being quite disgusted by it? It still seems to be a popularity contest that consists in an arbitrary ascription that generates nothing useful in an "art of rationality" setting.
And suggesting that someone is a troll is by far the most bathetic exercise of non-rational discourse. But, go ahead, down-vote this, too, if you feel better by it.
That sounds believable. My post has already been down-voted. Why? Who knows.
Uncountably many is the correct answer, and yet it's one of the down-voted posts. In another thread, my posts were also down-voted, despite their well-reasoned bases.
Personally, I think the voting system is corrupt, and – especially given that one can create an account, get a few votes, and start wreaking havoc with the presumed perception of posts – LW will only be more wrong than anything I might imagine. Is LW supposed to be a popularity contest, where gang affiliation is measured in the "karma" one has gained by not stepping on the toes of those who might down-vote (whatever that is supposed to suggest; one guesses 1= "yay" and -1="boo") something they either dislike or don't comprehend (and don't want to admit they don't comprehend)? If so, I'm already counting the days I continue "participating".
The measure of a post should consist in its merits, but given the way LW invites censorship, I hardly think the improvement of the "art of human rationality" will manifest. After all, an excellent exercise of rational thought is to show in what way faults are present in specific claims, not the arbitrary employment of "yay" or "boo" ascriptions.
(This was edited a few times after initial posting.)
I was the first person to downvote. Not because I don't grasp it, but because I believe your explanation is at best too brief to be generally intelligible. My negative opinion can be, of course, due to my stupidity, but as for my downvoting strategy, my own judgment is all I can rely upon. (My judgment also tells me that you appear a bit oversensitive to downvoting.)
Good enough for me. The sensitivity is merely a measure of my newness to LW. But, again, the sensitivity wasn't unwarranted, given your complete lack of explanation for the objection "I don't follow".
I don't see how it is relevant. Quantum branching doesn't require Omniverse. That alone makes your argument seemingly irrelevant. But let's proceed.
No, it doesn't. The question posted by the OP implied the relevance of the MWI of QM. Note that in order for QM to hold any relevance to us, it must be interpreted in some way. Yes, let's proceed.
I have no clear idea what a probability of event happening in the Omniverse means.
You obviously aren't familiar with the concept (for which I cannot be held accountable). In any event, I'll explain it briefly: the omniverse is that state of affairs in which all possibilities are realized. Hence, that any event should obtain therein is an absolute certainty.
Is this supposed to justify the previous claim, i.e. that the probability of any event in Omniverse is 1? If so, I don't regard "each universe contains something, therefore any event has probability 1 in the Omniverse" a valid inference, whatever interpretation of both the premise and the conclusion I can imagine.
No, not particularly. However, even if it were so, consider this: tell me of a universe in which nothing exists. Does it make sense to posit something of which there is nothing? Equivalently: There isn't anything of the universe. But there is a thing, namely, the universe.
What is "long run"? Does it mean "in other universes" (that would make sense, but the choice of words "long run" to denote that seems bizarre) or does it mean "sometimes later in this universe" (that would be the natural interpretation of "long run", but then the statement says "p(the event happens) = 0 and the event can happen", which is a contradiction).
This is the standard understanding of objective probability: in any universe you please, the probability of an event is supposed to hold for a particular situation in the limit where one observes all cases (for all time). Thus, the "long run" considers a particular situation for all time. If you flip a coin a few times, you will not observe an outcome of exactly 50% heads and 50% tails, but were you to flip this coin for eternity, the net result would be just such an outcome.
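As a quick illustration of that limiting-frequency reading (a minimal sketch of my own, not anything from the thread), a simulated fair coin drifts toward 50% heads as the number of flips grows:

```python
import random

def heads_fraction(n_flips, seed=0):
    """Fraction of heads in n_flips simulated fair-coin tosses."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n_flips))
    return heads / n_flips

# The observed fraction wanders for small samples and settles toward 0.5
# as the "long run" is approached.
for n in (10, 1_000, 100_000, 10_000_000):
    print(f"{n:>10} flips: {heads_fraction(n):.4f}")
```

No finite run guarantees an exact 50/50 split; the even split is only the limit the observed frequency converges to.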
And of all that, how does anything imply, or even relate to, the "uncountably many" answer you gave at the beginning?
I'm not sure how you'd pose this question seriously. For one, the MWI and the nature of QM decoherence show a state of information as unrelated instances of a general state of things (in a coherent superposition): there are uncountably many universes (as inhabited by any observer you please) in which the cat is alive, and so too universes in which the cat is dead. The "cat" could even be an infinite variety of other objects, for all we damn well know.
I don't even understand what you mean by "taking the universe as a QM event".
Then you obviously aren't familiar with MWI. Even Penrose and Hawking agree that QM applied to the universe implies MWI.
From the single sentence the OP consists of, could you quote the section where it very clearly asks for non-standard instances of the (which?) question?
Are you being obtuse to justify your down-vote or something? This is ridiculous. Now I have to justify my answer to the OP to you? Absurd. But I'll play along, quoting OP:
...what about non-50/50 scenarios...
I think that the universe is a "non-50/50 scenario", but I guess you can make the case it isn't.
Indeed. Thank you for making the points you did in the first paragraph; that's more or less what I was making note of (in perhaps too-general terms). I was going to respond to another post that falsely contended that, just because there are two states, those two necessarily exhaust all of the possibilities that obtain (as ens rationis), since the state of the cat as such is not merely discrete, but also continuous.
I would not be so quick to dismiss MW on account of the heuristic value of the idea of multiverses (and the successive hierarchy of universes), because rationality cannot be used to dismiss the preeminent possibility of any possibility. Anyway, there's a pretty interesting article on arXiv by R. Vaas about it: http://arxiv.org/abs/1001.0726 .
Taking the universe as a QM event most definitely implies there are uncountably many universes. The OP very clearly asked for non-standard instances of the question, and a generalization of the question most certainly applies thereto.
I certainly hope others do not continue to down-vote what they don't grasp, because LW will only be the worse off for it. (Not implying you down-voted, but if you weren't, then the one who did obviously hasn't the wherewithal to state an outright objection.)
Edit: if you don't "follow", at least state exactly what you don't follow so that I can actually provide something to your explicit satisfaction.
Uncountably many. Consider that on the scale of the Omniverse (which contains this one particular universe as only one among uncountably many) the probability for any event is 1. This is also so because it is absurd to suppose there is a universe in which something, if there be anything, does not exist. Furthermore, even if the probability for an event in our universe were 0, that would in no way serve as an impediment to its occurring in the long run.
When did I say that color was a near-universal attribute?
Here's what indicated as much:
There really are attributes for colors that are near-universal, for humans.
An "attribute for color" is not much different from showing that a name is an attribute for a color. Again, you were making the same mistake by thinking that a name for a color is an absolute. Definitely not the case, which you recognize:
You are right though--for that claim to make sense colors also have to be assumed to be near-universal.
To continue –
However, notice how color blindness and tetrachromacy are considered exceptions to the norm. These exceptions are largely the reason I specified near-universal for humans rather than simply universal for humans.
– I further pointed out that humans do not live in a mono-culture with a universal language that predetermines the arrangement of linguistic space in connection to perceived colors. That is the norm, such that the claim of near-universality does not apply. (And were such a mono-culture present, all it would take is a small deviation to accumulate to undermine it. Think of the Tower of Babel.)
The objection I posited covers all cases, even the exceptions. It's really the mind-projection fallacy, such that one human regards their "normal" experience as the "normal" experience of "normal" humans, more or less.
This is also reminiscent of Descartes' cogito:
X cannot occur without Y. X occurs. Therefore, Y exists.
(X=thought; Y=a thinking thing)
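Rendered schematically (my own gloss, reading "X cannot occur without Y" as the conditional it amounts to), the argument is just modus ponens:

\[
\neg(X \land \neg Y) \;\equiv\; (X \to Y), \qquad\text{and from } (X \to Y) \text{ and } X \text{ infer } Y,
\]

with X = "thought occurs" and Y = "a thinking thing exists".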
...I don't see how you can talk about "defeat" if you're not talking about justified believing
"Defeat" would solely consist in the recognition of admitting to ~T instead of T. Not a matter of belief per se.
Do you agree with what I said in the first bullet or not?
No, I don't.
The problem I see here is: it seems like you are assuming that the proof of ~T shows clearly the problem (i.e. the invalid reasoning step) with the proof of T I previously reasoned. If it doesn't, all the information I have is that both T and ~T are derived apparently validly from the axioms F, P1, P2, and P3.
T cannot be derived from [P1, P2, and P3], but ~T can, on account of F serving as a corrective that invalidates T. The only assumptions I've made are 1) that Ms. Math is not an ivory tower authoritarian and 2) that she wouldn't be so illogical as to assert a circular argument in which F would merely be a premiss, instead of being equivalent to the proper (valid) conclusion ~T.
Anyway, I suppose there's no more to be said about this, but you can ask for further clarification if you want.
Funny. I thought of pointing that out as well, but I thought it probably wasn't worth mentioning.
As I've imagined it being said before: "I'm either a genius or I'm not. That's a 50% chance of my being a genius. Just pray luck isn't on my side!" :)
Colors-as-near-universal-attributes is really a false claim. Consider examples of the varieties of color blindness, tetrachromacy, and cultures in which certain colors go by names that other cultures distinguish as being different. Your last paragraph seems to indicate that you still hold to the Mind Projection Fallacy, which you assumed you had overcome by realizing your favorite isn't everyone's favorite. Well, even their "blue" might be your "green". Generally, this goes unnoticed because we tend to acculturate and inhabit more or less similar linguistic spaces.
C1 is a presumption, namely, a belief in the truth of T, which is apparently a theorem of P1, P2, and P3. As a belief, its validity is not what is at issue here, because we are concerned with the truth of T.
F comes in, but is improperly treated as a premiss to conclude ~T, when it is equivalent to ~T. Again, we should not be concerned with belief, because we are dealing with statements that are either true or false. Either T or ~T can be true, but not both ("T or ~T" is a tautology; "T and ~T" a contradiction).
Hence C2 is another presumption with which we should not concern ourselves. Belief has no influence on the outcome of T or ~T.
For the first bullet: no, it is not possible, in any case, to conclude C2, for not to agree that one made a mistake (i.e., reasoned invalidly to T) is to deny the truth of ~T which was shown by Ms. Math to be true (a valid deduction).
Second bullet: in the case of a theorem, to show the falsity of the conclusion is to show that the derivation is invalid. To say there is a mistake is then a straightforward corollary of the nature of deductive inference: an invalid move was committed.
Third bullet: I assume that the problem is stated in general terms, for had Ms. Math shown that T is false in explicit terms (contained in F), then the proper form of the inference to ~T would be: F -> ~T. Note that it is wrong to frame it the following way: F, P1, P2, and P3 -> ~T. It is wrong because F states ~T. There is no "decision" to be made here! Bayesian reasoning in this instance (if not many others) is a misapplication, an obfuscation of the original problem arising from a poor grasp of the nature of deduction.
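To make the contrast explicit (my own schematic rendering of the point above, not notation from the thread): if F merely restates ~T, then

\[
F,\; P_1,\; P_2,\; P_3 \;\vdash\; \neg T \quad\text{(circular: the conclusion already sits among the premisses)},
\]

whereas a genuine refutation exhibits

\[
P_1,\; P_2,\; P_3 \;\vdash\; \neg T,
\]

together with a pointer to the invalid step in the old derivation of T.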
(N.B.: However, if the nature of the problem were to consist in merely being told by some authority a contradiction to what one supposes to be true, then there is no logical necessity for us to suddenly switch camps and begin to believe in the contradiction over one's prior conviction. Appeal to Authority is a logical fallacy, and if one supposes Bayesian reasoning is a help there, then there is much for that person to learn of the nature of deduction proper.)
Let me give you an example of what I really mean:
Note statements P, Q, and Z:
(P) Something equals something and something else equals that same something such that both equal each other.
(Q) This something equals that. This other something also equals that.
(Z) The aforementioned somethings equal each other.
It is clear that Z follows from P and Q, no? In effect, you're forced to accept it, correct? Is there any "belief" involved in this setting? Decidedly not. However, let's suppose we meet up with someone who disagrees and states: "I accept the truths of P and Q but not Z."
Then we'll add the following to help this poor fellow:
(R) If P and Q are true, then Z must be true.
They may respond: "I accept P, Q, and R as true, but not Z."
And so on ad infinitum. What went wrong here? They failed to reason deductively. We might very well be in the same situation with T, where
(P and Q) are equivalent to (P1, P2, and P3) (namely, all of these premisses are true), such that whatever Z is, it must be equivalent to the theorem (which would in this case be ~T, if Ms. Math is doing her job and not merely deigning to inform the peons at the foot of her ivory tower).
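One way to make that regress concrete (my own formalization; the prose statements above are deliberately schematic, and this is essentially Lewis Carroll's tortoise-and-Achilles regress):

\[
\begin{aligned}
P &: \;\forall a, b, c\;\big((a = c \land b = c) \rightarrow a = b\big)\\
Q &: \; x = z \;\land\; y = z\\
Z &: \; x = y\\
R &: \; (P \land Q) \rightarrow Z
\end{aligned}
\]

Granting P and Q, Z follows by instantiation and modus ponens; adding R (and then R', R'', and so on) as yet more premisses never compels someone who refuses to apply the rule of inference itself, which is exactly what has gone wrong with the imagined interlocutor.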
P1, P2, and P3 are axiomatic statements. And their particular relationship indicates (the theorem) S, at least to the one who drew the conclusion. If a Ms. Math comes to show the invalidity of T (by F), such that ~T is valid (such that S = ~T), then that immediately shows that the claim of T (~S) was false. There is no need for belief here; ~T (or S) is true, and our fellow can continue in the vain belief that he wasn't defeated, but that would be absolutely illogical; therefore, our fellow must accept the truth of ~T and admit defeat, or else he'll have departed from the sphere of logic completely.

Note that if Ms. Math merely says "T is false" (F) such that F is really ~T, then the form [F, P1, P2, and P3] implies ~T is really a circular argument, for the conclusion is already assumed within the premisses. But, as I said, I was being charitable with the puzzles and not assuming that that was being communicated.
Here's one that comes to mind:
I really don't know anything about baseball, so if I'm going to bet on either the Red Sox or the Yankees, I'd have to go fifty-fifty on it. Therefore, the chance that either will win is fifty percent.
(Right at the "therefore" lies the fallacy: the fifty-fifty split is put forward as a veritable property of the teams' chances of winning, when in fact it is merely indicative of the ignorance of the gambler. The actual probability is most likely not 50-50.)
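For illustration only (the numbers here are invented): the gambler's assignment describes his information, not the teams, so we might have

\[
P_{\text{gambler}}(\text{Red Sox win}) = \tfrac{1}{2}
\qquad\text{while}\qquad
P_{\text{actual}}(\text{Red Sox win}) = 0.65.
\]

The even split is just the maximally noncommittal distribution over two outcomes given total ignorance; it asserts nothing about any property of the teams themselves.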
EDIT: Others might enjoy reading this PDF ("Probability Theory as Logic") for additional background and ideas. There you'll also see a bon mot by Montaigne: "Man is surely mad. He cannot make a worm; yet he makes Gods by the dozen."
Read my latest comments. If you need further clarity, ask me specific questions and I will attempt to accommodate them.
But to give some additional note on the quote you provide, look to reductio ad absurdum as a case where it would be incorrect to aver to the truth of what is really contradictory in nature. If it still isn't clear, ask yourself this: "does it make sense to say something is true when it is actually false?" Anyone who answers this in the affirmative is either being silly or needs to have their head checked (for some fascinating stuff, indeed).
...if you allow for the possibility that the original deductive reasoning is wrong...
I want to be very clear here: valid deductive reasoning can never be wrong (i.e., invalid); only those who exercise such reasoning are liable to error. This does not pertain to logical omniscience per se, because we are not here concerned with the logical coherence of the total collection of beliefs a given person (like the one in the example) might possess; we are only concerned with T. And humans, in any case, do not always engage in deduction properly, owing to many psychological, physical, etc. limitations.
don't you need some way to quantify that possibility, and in the end that would mean treating the deductive reasoning itself as bayesian evidence for the truth of T?
No, the possibility that someone will commit an error in deductive reasoning is in no need of quantification. That is only to increase the complexity of the puzzle. And by the razor, what is done with less is in vain done with more.
Unless you assume that you can't make a mistake at the deductive reasoning, T being a theorem of the premisses is a theory to be proven with the Bayesian framework, with Bayesian evidence, not anything special.
To reiterate, an invalid deductive reasoning is not a deduction with which we should concern ourselves. The prior case for T, having been shown false (by F), should no longer be elevated to the status of a logical deduction. By the measure of its invalidity, we know full well the valid deduction of ~T. In other words, to make a mistake in deductive reasoning is not to reason deductively!
And if you do assume that you can't make a mistake at the deductive reasoning, I think there's no sense in paying attention to any contrary evidence.
This is where the puzzle introduced needless confusion. There was no real evidence. There was only the brute fact of the validity of ~T as introduced by a person who showed the falsity/invalidity of T. That is how the puzzles' solution comes to a head – via a clear understanding of the nature of deductive reasoning.
Those are only beliefs that are justified given certain prior assumptions and conventions. In another system, such statements might not hold. So, from a meta-logical standpoint, it is improper to assign probabilities of 1 or 0 to personally held beliefs. However, the functional nature of the beliefs does not itself figure in how the logical operators function, particularly in the case of necessary reasoning. Necessary reasoning is a brick wall that cannot be overcome by alternative belief, especially when one is working under specific assumptions. To deny the assumptions and conventions one has set for oneself is to no longer work within the space of those assumptions or conventions. Thus, within those specific conventions, those beliefs would indeed hold to the nature of deduction (be either absolutely true or absolutely false), but beyond them they may not.
Actually, I think if "I know T is true" means you assign probability 1 to T being true, and if you ever were justified in doing that, then you are justified in assigning probability 0 to the evidence being misleading and regarding it as not even worth taking into account. The problem is, for all we know, one is never justified in assigning probability 1 to any belief.
The presumption of the claim "I know T is true" (and that evidence that it is false is false) is false precisely in the case that the reasoning used to show that T (in this case a theorem) is true is invalid. Were T not a theorem, then probabilistic reasoning would in fact apply, but it does not. (And since it doesn't, it is irrelevant to pursue that path. In short, the fact that it is a theorem should lead us to understand that the truth of the premisses is not the issue at hand; thus probabilistic reasoning need not apply, and so there is no issue of T's being probably true or false.) Furthermore, it is completely wide of the mark to suggest that one should apply this or that probability to the claims in question, precisely because the problem concerns deductive reasoning. All the non-deductive aspects of the puzzles are puzzling distractions at best.

In essence, if a counterargument comes along demonstrating that T is false, then it necessarily involves demonstrating that invalid reasoning was committed somewhere in arriving at the (fallacious) truth of T. (It is necessary that one be led to a true conclusion given true premisses.) Hence, one need not be concerned with the epistemic standing of the truth of T, since it would have clearly been demonstrated to be false. And to be committed to false statements as being not-false would be absurd, such that it would alone be proper to aver that one has been defeated in having previously been committed to the truth of T, despite that commitment having been fundamentally invalid. Valid reasoning is always valid, no matter what one may think of it; and one may invalidly believe in the validity of an invalid conclusion. Such is human fallibility.
So I'd say the problem is a wrong question.
No, I think it is a good question, and it is easy to be led astray by not recognizing precisely where the problem fits in logical space, if one isn't being careful. Amusingly (if not disturbingly), some of the most up-voted posts are precisely those that get this wrong and thus fail to see the nature of the problem correctly. However, the way the problem is framed does lend itself to misinterpretation, because a demonstration of the falsity of T (namely, that it is invalid that T is true) should not be treated as a premiss in another apodosis; a valid demonstration of the falsity of T is itself a deductive conclusion, not a protasis proper. (In fact, the way it is framed, the claim ~T is equivalent to F, such that "[F, P1, P2, and P3] implies ~T" is really a circular argument, but I was being charitable in my approach to the puzzles.) But oh well.
Puzzle 1
- RM is irrelevant.
The concept of "defeat", in any case, is not necessarily silly or inapplicable to a particular (game-based) understanding of reasoning, which has always been known to be discursive, so I do not think it is inadequate as an autobiographical account, but it is not how one characterizes what is ultimately a false conclusion that was previously held true. One need not commit oneself to a particular choice either in the case of "victory" or "defeat", which are not themselves choices to be made.
Puzzle 2
- Statements ME and AME are both false generalizations. One cannot know evidence for (or against) a given theorem (or apodosis from known protases) in advance based on the supposition that the apodosis is true, for that would constitute a circular argument. I.e.:
T is true; therefore, evidence that it is false is false. This constitutes invalid reasoning, because it rules out new knowledge that may in fact render T truly false. It is also false to suppose that a human being is always capable of reasoning correctly under all states of knowledge, or even that they grasp a particular body of information perfectly enough to reason validly from it.
- MF is also false as a generalization.
In general, one should not be concerned with how "misleading" a given amount of evidence is. To reason on those grounds, one could suppose a given bit of evidence is always "misleading" because one "knows" that the contrary of what that bit of evidence suggests is always true. (The fact that there are people who do in fact "reason" this way – the superabundance of historical examples in which someone continues to believe a false conclusion because they "know" the evidence against it is false or "misleading" – does not at all validate this mode of reasoning; it merely points to certain psychological proclivities that suggest how fallacious their reasoning may be. Nor does it show that the course of necessary reasoning is itself incorrect, only that those who attempt to exercise it sometimes do so very poorly.) In the case that one is dealing with a theorem, it must be true, provided that the reasoning is in fact valid, for theorematic reasoning is based on whatever axioms one chooses (even though it is not corollarial). !! However, if the apodosis concerns a statement of evidence, there is room for falsehood even if the reasoning is valid, because the premisses themselves are not guaranteed to be true.
The proper attitude is to understand that the reasoning prior to exposure of evidence/reasoning from another subject (or one's own inquiry) may in fact be wrong, however necessary the reasoning itself may seemingly appear. No amount of evidence is sufficient evidence for its absolute truth, no matter how valid the reasoning is. Note that evidence here is indeed characteristic of observational criteria, but the reasoning based thereon is not properly deductive, even if the reasoning is essentially necessary in character. Note that deductive logic is concerned with the reasoning to true conclusions under the assumption that the relevant premisses are true; if one is taking into account the possibility of premisses which may not always be true, then such reasoning is probabilistic (and necessary) reasoning.
!! This, in effect, resolves puzzle 1. Namely, if the theorem is derived based on valid necessary reasoning, then it is true. If it isn't valid reasoning, then it is false. If "defeat" consists in being shown that one's initial stance was incorrect, then yes, it is essential that one takes the stance of having been defeated. Note that puzzle 2 is solved in fundamentally the same manner, despite the distracting statements ME, AME, and MF, on account of the nature of theorems. Probabilities nowhere come into account, and the employment of Bayesian reasoning is an unnecessary complication. If one does not take the stance of having been defeated, then there is no hope for that person to be convinced of anything of a logical (necessary) character.
Excuse me for waxing over-philosophical in my last message, since I said "might be" rather than "currently is". To be clear, I'm referring to the practical possibility (if not the straightforward logical possibility) of such a game existing.
I suppose, in any case, that the form of such a game with the greatest chance of meeting that (rather vague) designation would involve exhibiting the most generality within its gameplay, such that the cognitive requirements put upon users would not involve specific skills or skill acquisition per se, but rather a kind of mystifying push-without-training-wheels that permits the mind to shape itself however it sees fit to accomplish the task - which then creates problems for users by forcing them to constantly modify their adopted strategy or preferred tactics.
One such game that comes to mind as a (tentative) example is Dual N-Back (or related variants), which does not directly demand any specific strategy or conceptual framework of the user. One has no specific input on how to tackle it, but when the user gets the hang of it, the game naturally changes the rule(s) or framework, forcing the user to adapt once more. Such a game most certainly involves expertise (a lot of time spent playing it and getting better).
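For concreteness, here is a minimal sketch (my own, not any particular implementation) of the core dual n-back trial structure: each trial pairs a grid position with a letter, and the player's only task is to flag matches with the stimulus n steps back, separately in each stream; adaptive versions then raise n as performance improves.

```python
import random
from collections import deque

def dual_n_back_trials(n=2, trials=20, seed=0):
    """Yield (position, letter, position_match, letter_match) for each scored trial."""
    rng = random.Random(seed)
    positions = deque(maxlen=n + 1)   # sliding window over the last n+1 stimuli
    letters = deque(maxlen=n + 1)
    for _ in range(trials):
        pos = rng.randrange(9)            # cell of a 3x3 grid
        letter = rng.choice("CHKLQRST")   # a small letter set, as in common variants
        positions.append(pos)
        letters.append(letter)
        if len(positions) == n + 1:       # enough history to look n steps back
            yield pos, letter, positions[0] == pos, letters[0] == letter

for trial in dual_n_back_trials():
    print(trial)
```

Nothing in the task dictates a strategy; the player is left to discover, and keep revising, whatever internal encoding happens to work, which is the property I mean by push-without-training-wheels.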
But, yeah, with most, if not all, generally recognized games, it is pretty clear that with the kinds of skills demanded of a user it may be quite difficult to maneuver certain other skills and make such a game feasible.
I think the main issue here is that expertise must be conceptualized with respect to a particular activity or set of activities in order for it to maintain its essential meaning. The nature of expertise is also restricted to the specific range of tools the brain embodies (as in "embodied cognition"); in other words, it is not the hand that knows what to type, but rather the keyboard that knows what to type. To be clear, my cognitive capacity is effectively extended and reshaped by the interaction with the keyboard, so in effect the nature of the expertise will be limited specifically to the final cause (in the philosophical sense) of the activity itself. I like to think of the mind as further approximating the function of the game, or activity, over time, which serves as a kind of analogy to the ever-accumulating expertise therein.
Taking the example of chess versus a modern-day computer-enhanced strategy game, the modes of embodiment are vastly different, and so the kinds of expertise to be expected should naturally diverge. However, I would not be so pollyannaish as to assert that playing StarCraft 2 (or Chess) would be "really useful", unless you're playing for money to help you in some specific goal outside of the game itself. That is going a bit too far, in my opinion. We already know that the nature of expertise is such that it only operates at the level of the activity one is engaged in, and will not generalize (or transfer) far from that domain of activity. For instance, the expertise in knowing the layout of a keyboard and being able to type commands without a second thought (being constantly honed by a game that demands it) will transfer to the tasks (of other games) that require the same input on a keyboard (and will differentially benefit from those quick reflexes), but the specific tactics and techniques learned in-game will generally not find much use beyond that game, and I do believe that is what we're getting at with a game like SC2 insofar as "expertise" is a concern here. Similarly with chess: one might very well have excellent reflexes, honed in certain other tasks, and know many strategies and techniques for other things, but they won't apply to the space of chess, and so vice versa for chess to other activities. (And we already know that typical memorization techniques used in chess really don't help with memorizing anything else.)
Having said all that, I wonder whether or not there might be a prime example of the game of general expertise par excellence out there, one that touches on many domains simultaneously... Perhaps the Glass Bead Game? Ah, never mind. But, in all seriousness, the way of the game is probably the only way we'll ever find out if such a thing exists and will permit the mind to approximate the function of life all the more perfectly.
By the way, I don't know how the researchers in the article can think there hasn't been such a "satellite view" of expertise before, particularly on the note of chess. Hasn't anyone told them of the Chess Tactics Server? ( http://chess.emrald.net/ ) Chumps to champs aplenty there.