This is one of those philosophical arguments where the premise is so absurdist as to make it impossible to take seriously, but at the end of the day I'm far less inclined to kowtow to the British example than the Islamic one.
Restricting an image is, at its heart, restricting thought. Restricting nerve impulses and the way they interact with the brain. The Islamic restriction is, to an extent, silly in this day and age - there are no pictures of Mohammed, therefore there can be no pictures of Mohammed; you can't commit that 'sin' any more than you can commit the sin of operating heavy machinery while deceased.
Unless I go to the trouble of labelling it, you can't even know I tried:
 O
/|\  <-- May or may not be a Stick Man representation of Mohammed, in the style of XKCD
/ \
We obtain data from pictures, and the blow to nature photographers is hardly the issue. Think about the problems for ecologists, wildlife preservationists, biologists, fishermen, et al.
As a matter which impinges on no impulse of mine beyond contrarily labelling stick men, of course I can politely consent not to draw Mohammed. Not photographing salmon causes active harm.
Jonnan
A lot of the 'learned blankness' or 'black box' problem (I prefer the latter term) seems to me to be directly related to how afraid someone is of feeling (or worse, looking) stupid.
There are exceptions of course, but by and large the people who seem to hit that wall (or, at least, have a higher than average number of those walls to hit) are people who were told over and over that they're dumb, or that pursuing 'X' is dumb.
And - they become that, or at least an unreasonable facsimile thereof. Within the realm of their expertise it's very obvious they're highly intelligent, but they either assume they are just as much an authority in unrelated realms without actually educating themselves in those realms, or they get out of their comfort zone and they stop - two divergent strategies for avoiding looking dumb.
I'm convinced the average IQ is actually 300+ and we simply evoke it more and more as we're less and less afraid of feeling stupid.
Jonnan
The one thing that stands out for me in this is that it seems to start from the same "figures don't lie, but liars sure can figure" assumption: that NTL (not technically lying) is much easier to fool people with than making stuff up.
But, in my experience, that's not true. There are indicators when someone is NTL versus actually being honest, just as I've noticed over the years that there are indications when a statistic is being taken out of context.
Most forms of deceit are either very short term, or fall quite rapidly to logic of the form "If this is true as it stands, what else would that imply?"
And maybe that's my advantage - I'm not entirely sure I consider myself all that principled a person as such, I simply noticed at a very young age that deception doesn't actually work all that well. Simpler to convince people you are right because you're actually right and can tell people why.
But you've missed the most important point!
It means that the comic book tendency to get super-powers coincidentally related to your real name actually works!
Now if only I can figure out a superpower related to the name Jonnan, I can figure out what kind of radioactive bug to be bitten by?
Jonnan
I'm not entirely sure I understand your Correspondence Bias assertion, since I have made no actual assertions regarding whether the use of such vague definitions implies anything about someone's personality. I certainly have my opinions on that, but they are irrelevant to the topic at hand.
That said - I'm not certain that what I title the Humpty Dumpty fallacy is a special case of Equivocation. Equivocation is typically defined as using two accurate definitions as if they were interchangeable, while Humpty Dumpty tends to use an inaccurate or vague definition as if it were perfectly interchangeable with the accurate and agreed-upon definition.
They are obviously close relations, and I think there is a strong case to be made for it, but that is the difference between saying it "Seems to Me" that such is the case and saying it "Is" the case - I was merely putting it forward for consideration.
Please say Hi to Bizarro Jonnan for me, and tell him me hates him so much.
Thanks - Jonnan
My personal definition for the general case of that is the 'Humpty Dumpty Fallacy', from Through the Looking-Glass:
'And only ONE for birthday presents, you know. There's glory for you!' 'I don't know what you mean by "glory,"' Alice said. Humpty Dumpty smiled contemptuously. 'Of course you don't—till I tell you. I meant "there's a nice knock-down argument for you!"' 'But "glory" doesn't mean "a nice knock-down argument,"' Alice objected. 'When I use a word,' Humpty Dumpty said in rather a scornful tone, 'it means just what I choose it to mean—neither more nor less.'
The Catchy Fallacy Name Fallacy seems to me to be a special case of that, which is in turn (to my mind) a special case of Equivocation (using two different but accurate definitions as if they were identical) - except, of course, that in Humpty Dumpty you're using an inaccurate or vague definition rather than an accurate one.
Just a thought - Jonnan
Edit: I once heard the same thought called, in quite formal tone, "The Spaniard's Observation", in reference to The Princess Bride -
Inigo Montoya: "You keep using that word. I do not think it means what you think it means."
Just so.
I do not always agree with Kant, but his advice (in loose translation) to "act as if you were the leader of the world and everyone would copy their actions based on yours" has seemed to me to be good advice over the years.
Plus I get to pretend I run the world, instead of that cabal of unseen shadow-puppeteers secretly manipulating things from behind the scenes - I HATE them. HATE THEM HATE THEM HATE THEM . . . umm, have you read my resume yet?
Jonnan
If Bayesian rational thought is not the best answer for questions such as "How long do you wait before giving up (on the flaky date)?", then training oneself to think in those terms is training oneself to use a sub-optimal strategy for the real world.
The entire point, to me, of using rational thought is that I believe rational thought is an optimal strategy. If it's not, why would you do it?
I would prefer a parlor game for $10 any day - {G}
Jonnan
Amusing article - I can't quite get my mind around feeling that way about Quake, but I'll cop to dreaming about Tetris when I was younger.
Jonnan
Can I shoot them both for engaging in such an overextended and arbitrary metaphor?
Looking at the original 'Allais Paradox' post - under what theorem is the reduction of uncertainty by 100% equivalent to the reduction of uncertainty by 1/67th?
It takes energy to plan ahead - the energy required to plan ahead with 100% certainty of outcome is considerably more than the energy required to plan ahead with 99% certainty. But there's no such difference in planning energy between the possibilities inherent in 67% and 66% - those are functionally equivalent.
So, um, why is this result even slightly surprising?
Edit: Now, what would be interesting would be the question of the decisions made if the options are $24K with 94% probability versus $27K with 93% probability, and variants thereof where the reduction in uncertainty exactly balances out the increase in value.
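To make that concrete (a quick sketch using my numbers above, not the original post's): on raw expected value the larger payoff wins in both pairings, so any remaining preference for the more certain option has to come from something like the planning-cost difference I described.

```python
# Expected value of a simple one-shot gamble.
def expected_value(probability, payoff):
    return probability * payoff

# The certainty pair (these are the numbers from my comment,
# not necessarily the original post's).
print(expected_value(1.00, 24_000))  # 24000.0 - guaranteed
print(expected_value(0.99, 27_000))  # 26730.0 - higher EV, no certainty

# The proposed 94% vs. 93% variant: neither side offers certainty,
# and the larger payoff still wins on raw expected value.
print(expected_value(0.94, 24_000))  # 22560.0
print(expected_value(0.93, 27_000))  # 25110.0
```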
Really? -4 for not liking a defense of marketing sophistry? One which literally noted "Advertise the color" as a positive virtue?
Sorry, if that's not favoring the dark side, I'm not sure how you're defining 'dark side', and karma around here is way too arbitrary - {G}. I will concede to a bias against marketing as a solution to anything - the marketing textbook I was subjected to in college was the most self-important, egocentric defense of a field I've ever seen - {G}.
Jonnan
Don't . . . get any of that on me please. Ick.
I think I fundamentally disagree with your premise. I concede, I have seen communities where this happened . . . but by and large, they have been the exception rather than the rule.
The fundamental standards I have seen in communities that survived such things, versus those that didn't, fall under two broad patterns:
A) Communities that survived were those where politeness was expected - a minimal standard where dropping below it simply meant people had no desire to be seen with you.
B) Communities where the cultural context was that of (and I've never quite worded this correctly in my own mind) acknowledging that you were, in effect, not at home but at a friendly party at a friend's house, and had no desire to embarrass yourself or your host by getting drunk and passing out on the porch - {G}.
Either of these attitudes seems to be very nearly sufficient to prevent the entire issue (and seems to hasten recovery even on the occasions when it fails); combined they (in my experience) act as a near-invulnerable bulwark against party crashers.
Now exactly how these attitudes are nurtured and maintained, I have never quite explained to my own satisfaction - it's definitely an "I know it when I see it" phenomenon, however unsatisfying that may be.
But given an expectation of politeness and a sense of being in a friendly venue, but one where there will be a group memory among people whose opinions have some meaning to you, the rest of this problem seems to be self-limiting.
Again, at least in my experience - {G}. Jonnan
This intuitively feels to me very similar to the questions I have about things like memory and the way people act when the situational context has been gamed to cause unethical behavior (see "The Lucifer Effect").
One wants to believe that one's personal memory is not only accurate but indeed unbiased - yet to what extent does the realization that it may not be actually help mitigate the fact that it may not be? Does my awareness of things such as the Stanford Prison Experiment have any correlation with whether I will or will not be sucked into the group mindset under similar circumstances in reality?
Indeed, what would one do if the answer was "No"?
Jonnan
I can honestly say I actually have healthy tastes - I actually like salad (I have a salad garden for exactly that reason), and do work on a small (3-acre) property when I'm not at my day job.
Although I do like most traditional desserts, they are not a typical portion of the meal, barring holidays. I do tend to eat 'candy' when it's around . . . which is one reason I don't keep it around.
So I sympathize entirely with the original poster when he says eating nothing but healthy foods doesn't help. My 'Vitamin Pill' version of the Shangri-La diet lost me 30 pounds straight through the holidays when I was eating desserts . . . and then stopped.
So there are definitely other factors that are being missed.
Jonnan
What I find interesting is that this matches, almost exactly, the 30 pounds I lost when I decided to consistently take a multi-vitamin with every meal, on the theory that hunger was caused (at least at times) by vitamin deficiencies, and that making sure I was flush with vitamins might help.
It worked great for a month or so - I lost (and have kept off) 30 pounds (unfortunately that means I'm down to 310). Then it just kinda stopped - I haven't gone back up (indeed, there have been moments when it acted like it might start going back down again, but so far I'm stuck in fluctuation mode).
But what is interesting to me is that this is the second occasion that happened - the first time was four years ago, when I started buying flavored carbonated waters and drinking those on a regular basis. Not fatty at all, and flavored (both arguably the opposite of the vitamins), but with virtually identical results (I did regain that weight, but only at a rate identical to my previous weight gain).
Maybe I can switch them out?
Jonnan
Not if omniscience is A) a necessary prerequisite to the existence of a deity, and B) by definition unverifiable to an entity that is not itself omniscient.
Without being omniscient myself, I can only adjudge the accuracy of Omega's predictions based on the accuracy of its known predictions versus the accuracy of my own.
Unfortunately, the mere fact that I am not omniscient means I cannot, with 100% accuracy, know the accuracy of Omega's predictions, because I am aware of the concept of selection bias, and furthermore may not be capable of actually evaluating the accuracy of all Omega's predictions.
I can take this further, but fundamentally, to be able to verify Omega's omniscience, I actually have to be omniscient. Otherwise I can only adjudge that Omega's ability to predict the future is greater, statistically, than my own, to some degree 'x', with a probable error on my part 'y' - an error which may or may not be consistent with Omega's accuracy actually being 100%.
Omega may in fact be omniscient, but that fact is itself unverifiable, and any philosophical problem that assumes A) I am rational but not omniscient, B) Omega is omniscient, and C) I accept B as true, has a fundamental contradiction. By definition, I cannot be both rational and accept that Omega is omniscient. At best I can only accept that Omega has, so far as I know, a flawless track record, because that is all I can observe.
Unfortunately, I think this seemingly small difference between "Omniscient" and "Has been correct to the limit of my ability to observe" makes a fairly massive difference in what the logical outcome of "Omega" style problems is.
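To sketch what I mean (a toy illustration of my own, using Laplace's rule of succession - nothing from the original problem statement): even a flawless observed track record never pushes my estimate of Omega's accuracy all the way to 100%.

```python
from fractions import Fraction

def estimated_accuracy(correct, observed):
    """Laplace's rule of succession: the posterior mean accuracy after
    seeing `correct` right predictions out of `observed` trials,
    starting from a uniform prior over Omega's true accuracy."""
    return Fraction(correct + 1, observed + 2)

# A perfect record only ever yields an estimate strictly below 1:
print(estimated_accuracy(10, 10))                # 11/12
print(float(estimated_accuracy(10**6, 10**6)))   # 0.999999... but never 1.0
```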
Jonnan
Umm - who are these people that would rather donate their time than their money?
I guess I have never been one of those people - unless someone needs work in my realm of expertise (in my case, tweaking computers to do what you want them to do, fairly cheaply, or training people to use them), I don't volunteer for very much at all.
I do love modern web banking - I can set my bank account to send $5 a month to my local NPR/PBS affiliate, the 2nd week of the month, the Monday after my payday (so I can turn it off if I'm unexpectedly tight). The ACLU gets its $5 on the 4th Monday, and if I'm doing well the EFF gets a donation on week 3. It's been a bad year, so I can't honestly remember what I had toggled up for week one - it's my weakest week financially (mortgage payment), so my lowest priority was there.
Giving money is great - let them hire whom they need. If I show up at a soup kitchen, it's because they needed a computer geek.
Jonnan
I think you're undervaluing simple respect in the equation, as opposed to strict honesty. There is the potential for simply telling your boss: "I don't have the skill required to explain this in layman's terms yet, and you don't have the skill required to evaluate this as raw data yet, but we have a serious problem. Give me time to get someone smarter than me to either debunk this or verify it."
It has worked for me numerous times.
I guess I'm a bit tired of "God was unable to make the show today, so the part of the Omniscient Being will be played by Omega" puzzles, even if in my mind Omega looks amusingly like the Flying Spaghetti Monster.
Particularly in this case, where Omega is being explicitly dishonest - Omega is claiming either to be sufficiently omniscient to predict my actions, or insufficiently omniscient to predict the result of a 'fair' coin, except that the 'fair' coin is explicitly predetermined to always give the same result . . . except . . .
What's the point of using rationalism to think things through logically if you keep placing yourself into illogical philosophical worlds to test the logic?
No, to make it work you have to assume that you believe in omniscience in order to clarify whether you believe in omniscience, a classic 'begging the question' scenario.
The problem is the "least convenient world" seems to involve a premise that would, in and of itself, be unverifiable.
The best example is the Pascal's Wager issue - Omega tells me with absolute certainty that it's either a specific version of God (not, for instance, Odin, but the Catholic one), or no God.
But if I'm not willing to believe in an omniscient deity called God, taking it back a step and saying "But we know it's either/or, because the omniscient de . . . errr . . . Omega tells you so" is just redefining an omniscient deity.
Well, if I don't believe in assuming God exists without proof, I can happily not assume Omega exists without proof. Proof is verifiably impossible, because all I can prove is that Omega is smarter than me.
Since I won't assume anything based only on the fact that someone is smarter than me - which is all I know about Omega - no, the fact that Omega says any of this stuff and states it by fiat isn't going to convince me.
If Omega is that damn smart, it can go to the effort of proving its statements.
Jonnan
Post-script: Which suddenly explains to me why I would pick the million-dollar box and leave the $1,000 alone. Because that's win-win - either I get the million, or I prove Omega is in fact not omniscient. He might be smarter than me (almost certainly is - the memory on this bio-computer I'm running needs upgrading something fierce, and the underlying operating system was last patched 30,000 years ago or so), but I can't prove it, I can only debunk it, and the only way to do that is to take the million.
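To spell out that win-win (a toy enumeration using the standard Newcomb payoffs as I understand them - the dollar figures are the usual ones from the puzzle, not anything Omega guaranteed me):

```python
# Newcomb-style payoffs: Omega fills the opaque box with $1M only if it
# predicted you would take just that box; the clear box always holds $1K.
for prediction in ("one-box", "two-box"):
    for choice in ("one-box", "two-box"):
        opaque = 1_000_000 if prediction == "one-box" else 0
        payout = opaque + (1_000 if choice == "two-box" else 0)
        debunked = (prediction != choice)  # a wrong prediction disproves omniscience
        print(f"{prediction=} {choice=} -> ${payout:,}, debunks Omega: {debunked}")
```

One-boxing is the only choice where every outcome is a win: either the prediction was right and I get the million, or it was wrong and I've debunked Omega.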
I am what I like to call a "Greedy Progressive", inasmuch as my liberal instincts are not based in the guilt theory that a lot of conservatives and some liberals associate with liberalism, but on an implicit assumption that others doing well helps my life get better - and past a certain point, helping others improves my quality of life in more immediately helpful ways than even spending money on myself or my family, though exactly where that point lies is subject to argument.
However, fundamentally the point is that I am not a progressive because I'm a sweet guy, but because I get a return on the investment. This implies a few obvious things:
A) That as improving others' lives also improves mine, improving my life also improves the lives of others in society. I am no less worthy of living in comfort than someone in Africa, either.
B) That although it helps me to help someone in Africa, it may very well help me more to help someone here, who in turn helps someone else slightly further from my sphere of influence, and so on. Since this is not about me being a sweet guy, the question of whom I help depends on my (perception of) return on investment.
C) That once I get below a certain point, the highest return on investment for an expenditure of X money for Y personal happiness is me. And, since I am in fact as important as anyone else, I give myself explicit permission to act on that. I quit giving to my local public radio and the ACLU when I get below that point - and start again when I get above it. The same for every other charity in existence.
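For what it's worth, here's a toy sketch of that decision rule (entirely my own illustration - the recipients and numbers are made up): every marginal dollar goes wherever the perceived return is currently highest, and 'me' is simply one of the candidate recipients.

```python
import heapq

# Hypothetical diminishing-returns model: the perceived return on the
# next dollar falls as a recipient receives more.
def marginal_return(base, received):
    return base / (1 + received)

def allocate(budget, base_returns):
    """Greedy allocation: each dollar goes to whichever recipient has
    the highest current marginal (perceived) return on investment."""
    given = {name: 0 for name in base_returns}
    heap = [(-marginal_return(base, 0), name, base)
            for name, base in base_returns.items()]   # max-heap via negation
    heapq.heapify(heap)
    for _ in range(budget):
        _, name, base = heapq.heappop(heap)
        given[name] += 1
        heapq.heappush(heap, (-marginal_return(base, given[name]), name, base))
    return given

# Made-up perceived returns; 'me' competes on an equal footing.
print(allocate(20, {"me": 5.0, "local NPR": 4.0, "ACLU": 3.0}))
```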
And that's where I dislike the article. It assumes my happiness is in fact less important than the happiness of those I could help. So in point of fact, no, there is a definite limit to what I will sacrifice for random strangers, simply because my happiness is no less important.
"All you can do in science is discover something before anyone else discovers it"
Mulling this over - maybe I'm taking a false view, but although I never had particular admiration for the 'race to the South Pole' type of exploration, the more general 'going where no one has gone before' I do admire.
Because the second one does indeed do something - it establishes a new baseline that the next generation can start from. And so with science - Newton described the real world with a precision greater than anyone before him, but off to the side Riemann established a new mathematics, which obviously had nothing to do with the real world - except of course Einstein proved it actually described the real world even better than Newton's. And then of course the quantum theorists described a 'real world' greater still.
I've no particular advocacy for 'celebrity' science that races to be the first to something we already know can be done, although assuredly the innovation fostered by friendly or unfriendly rivalry has its place, but science that actually expands our boundaries and tells us of new and different possibilities?
If I had the good fortune to be remembered for nothing more than having expanded the realms of possibility by setting up a base camp in unexplored scientific territory, so someone smarter than I can mark 'Here be Dragons' a bit further out on the map, I could live with that.
Jonnan
In the neither-here-nor-there range: much as I have fallen 'out of love' with Ender's Game, in part from having read some of Card's political rants, the reviewer's definition of 'porn', as he applies it to Card's writing in that essay, would qualify any work I can think of as 'porn' if the reviewer didn't like it. "I don't like the message" is sufficient, even "I think it's intellectually dishonest" together with the reasons why - certainly I feel that way about every Ayn Rand novel I have subjected myself to. But his essay seems more about rationalization than rationality.
That said - for myself, probably the first thought of ethics as logic came from Asimov's "Three Laws", as a set of rules that allowed for logical consideration of when it was fair to help or hinder yourself or others, although I was probably primed to look for a logical basis for ethics by Mr. Spock.
Jonnan
This begs the question - I would posit that the minimum assumption for any form of 'spirituality' is body/mind duality, and your proposed 'better' definition of insanity presupposes the result: that there is no axiomatic, logical system under which body/mind duality comes out either true or undecidable.
However, so long as it is even undecidable, then a person that uses it as an axiom for further thought is no more 'insane' than someone that explores the logical consequences of parallel lines crossing.
Now, religion posits not only body/mind duality but a number of other assumptions, and those other assumptions are generally quite amenable to debunking. But I suspect dualism itself qualifies as undecidable, which would place it outside the pale of propositions one can explicitly deny while still maintaining a cohesive logical structure.
There seem to be two assumptions that both need to be correct for blind review to be detrimental: A) older, established scientists are more likely to be correct when positing an anti-establishment explanation than a younger, less established scientist, and B) those scientists are nonetheless no more capable of marshaling the required set of arguments when faced with blind review than that younger scientist.
I have no issue with A), but B) seems to me supremely unlikely - the very established pattern of rigor that makes it more likely that an older scientist is a safer bet than I am when he breaks from the establishment would also appear to make it more likely that he or she can establish the case without relying upon reputation.
I might be wrong, but I wouldn't have to fake surprise at learning I was.
I'm not sure how useful this is, and I feel odd posting it this way (Intuitive Rationality?), but there is a 'feel' to when there is a fallacy camouflaged in the discussion. If the reader could learn to pay attention to it and dig for it when they feel it, I would consider that a worthwhile book.
Just a personal problem that seems to me to be a precursor to the rationality question.
Various studies have shown that a person's 'memory' of events is very much influenced by later discussion of the event, and that, when put into situations such as the Stanford Prison Experiment or the Milgram Experiment, people will do unethical acts under pressure of authority and situation.
Yet people have a two-fold response to these experiments: A) they deny the experiments are accurate, either in whole or in degree; B) they deny that they fall into the realm of those who would be so affected.
With, of course, the obvious caveat that some people actually are not so affected in those experiments (or do remember things accurately), and will stand up for what they determine to be ethical regardless.
The obvious point seems to be that it is among those who honestly consider the possibility that their thoughts can be affected by these outside influences that the greatest chance of successfully maintaining one's own identity against them exists - but other than acknowledging this fact (which can certainly be faked, even self-deceptively), what self-assessments allow one to develop this?
Once we have that, it seems to me that the question of maintaining rationality greatly clarifies itself.
Jonnan