Yes, that is true as well.
My point was that our cultural instinct is to give, but in practice this is done inefficiently [charities are wasteful, people don't give to the charities that maximize utility but to charities they happen to like, and a flat percentage is probably worse than a progressive tax], so it would probably be better for society if we didn't expect charity from people - this seemingly beneficial cultural obligation can be argued to be harmful.
I like this approach.
It makes sense, and it mostly dodges the problem that other "simple" formulae for charity have - namely that most simple systems tend to be essentially voluntary regressive taxation.
This is why the 10% rule has always bugged me - it is a culturally accepted voluntary regressive tax, and as such it exacerbates social inequality.
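To make the "voluntary regressive tax" point concrete, here is a minimal sketch with entirely made-up incomes, a hypothetical fixed cost of living, and an invented progressive schedule - it only illustrates the arithmetic of why a flat 10% weighs more heavily on a low earner's disposable income, not any real data.

```python
# Toy illustration (hypothetical numbers): a flat 10% donation takes a much
# larger bite out of disposable income (income minus fixed living costs) for
# low earners than for high earners, which is the sense in which a flat
# percentage acts regressively compared to a progressive schedule.

SUBSISTENCE = 20_000  # assumed fixed cost of living, same for everyone

def flat_donation(income, rate=0.10):
    return income * rate

def progressive_donation(income):
    # Hypothetical schedule: 0% below 30k, 10% of the next 70k, 20% above 100k.
    brackets = [(30_000, 0.0), (100_000, 0.10), (float("inf"), 0.20)]
    owed, lower = 0.0, 0
    for upper, rate in brackets:
        owed += max(0, min(income, upper) - lower) * rate
        lower = upper
    return owed

for income in (25_000, 50_000, 250_000):
    disposable = income - SUBSISTENCE
    flat = flat_donation(income)
    prog = progressive_donation(income)
    print(f"income {income:>7}: flat 10% = {flat / disposable:5.1%} of disposable income, "
          f"progressive = {prog / disposable:5.1%}")
```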
[Also, one of my friends likes to joke that our culture holds that you give 10% of your income to charity, but capital gains are exempt...]
I'm always on the lookout for things that seem innocuous or even beneficial that actually are ways of enforcing the social structure and preventing upwards mobility, like our strange insistence on prescriptive rules of language, and upon the necessity of "sounding intelligent".
Languages are evolved social constructs, and "correct grammar" is determined by native speakers. However, we impose additional rules that stray from the natural form of the language, and develop a notion that certain ways of speaking/writing are proper, and that other ways are ignorant. Learning to speak in a way that sounds intelligent requires additional investment of time and effort, and those who cannot afford to do so (can't afford to spend as much time reading, or come from an area with worse schools) will grow up speaking a completely intelligible version of the language, but one that is generally treated as a marker of ignorance, and thus limits their possibilities for advancement.
Ok, I really got off topic there, but my point was that our cultural construct that people should give a fixed percentage of their income to charity might very well not be a force for good, but rather a force opposing good.
It is a regressive taxation system, but one that is culturally supported. Further, because so many people feel that everyone is already voluntarily giving to charity (especially through religious organizations), actual taxation comes to look like an unnecessary imposition.
If we didn't have a culturally accepted obligation for charity, we wouldn't give as much money to inefficient charities and religious institutions, and might be more willing to consent to a higher progressive tax.
I'm not prepared to make that bet.
I don't suspect the bias would vanish, but rather be diminished.
Asking people who they voted for < asking who they predict will win < asking them to bet on who will win, where '<' indicates increasing predictive accuracy.
This is exactly what I was saying.
I didn't mean to imply I thought it was, though I see how that wasn't clear.
I didn't intend that last bracketed part to be an example, but rather a related phenomenon - it is interesting to me how asking a random sample of people who they voted for is a worse predictor than asking a random sample of people who they predict will get the most votes, and that the accuracy further improves when people are asked to stake money on their predictions.
I simply was pointing out that certain biases might be significantly more visible when there is no real incentive to be right.
For instance, one supplemental explanation for the False Consensus Effect (because just because it is one effect doesn't mean it has only one cause) that I have heard is that in most cases it is a "free" way of obtaining comfort.
If presented with an opportunity to believe that other people are like you, with no penalty for being wrong, one could expect people will err on the side of predicting behavior consistent with one's own behavior.
I obviously haven't done this experiment, but I suspect that if the subjects asked to wear the sign were offered a cash incentive based on the accuracy of their predictions about others, both groups would make more accurate predictions.
[See also - political predictions are more accurate when the masses are asked to make monetary bets on the winner of the election, rather than simply indicate who they would vote for]
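A toy simulation of that hypothesis (every number here is invented purely for illustration, not taken from any study): if people with no stake anchor their guess heavily on their own choice, while people with a cash stake mostly report their estimate of the base rate, the incentivized predictions come out more accurate - the bias is diminished rather than eliminated.

```python
import random

random.seed(0)
TRUE_RATE = 0.30   # invented: fraction of people who would agree to wear the sign
N = 10_000

def predicted_rate(wears_sign, anchor_weight):
    # Blend of "people are like me" anchoring and the true base rate.
    own_behavior = 1.0 if wears_sign else 0.0
    return anchor_weight * own_behavior + (1 - anchor_weight) * TRUE_RATE

def mean_abs_error(anchor_weight):
    errors = []
    for _ in range(N):
        wears_sign = random.random() < TRUE_RATE
        errors.append(abs(predicted_rate(wears_sign, anchor_weight) - TRUE_RATE))
    return sum(errors) / N

# Invented anchoring strengths: heavy when guessing is "free", lighter when paid.
print("mean error, no incentive:  ", round(mean_abs_error(0.7), 3))
print("mean error, cash incentive:", round(mean_abs_error(0.2), 3))
```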
It sounds like you might be looking for something like The Onion Router (Tor).
For X to be able to model the decisions of Y with 100% accuracy, wouldn't X require a more sophisticated model?
If so, why would supposedly symmetrical models retain this symmetry?
I actually acknowledge that deeper in the thread [in the response to PECOS-9], noting that this is the publicly understood complement, even though it is technically wrong: society teaches that the primary colors are Red, Yellow, and Blue rather than Magenta, Yellow, and Cyan.
Fair enough.
I must admit, this makes my theory less likely. I still don't see your reading as the unambiguously correct interpretation, but I will freely cede that it looks plausible that it is an interrupt, not an elaboration. This may, in part, stem from the fact that I am a big proponent of using "-" in my writing, and my usage is somewhat nonstandard.
Even if that is right, I don't think it rules out my guess about Quirrell's plan, but again, I'm significantly less confident now.
The complement of hot is not-red?
His attitude after hearing the prophecy can be summed up by his words to McGonagall, which are consistent with everything he does thereafter.
I would say that his request to McGonagall is consistent with my theory - he knew that her attempts to stop Harry would have the opposite effect. I am guessing that Quirrell has some alternate interpretation to the prophecy.
One possibility is that "The End of the World" corresponds to a change to the natural order that makes the world unrecognizable, such as the removal of mortality.
It is possible that instead of burning up his own life to destroy all the dementors or defeat death, Harry could burn up some stars, which could explain the rest of the prophecy.
I'm not saying that I am correct, but I still see no actions that are inconsistent with my theory.
I think part of the confusion is that we are interpreting the punctuation differently. I don't interpret your second quotation (first quotation from the text) as meaning that he was happy, until interrupted by hearing the prophecy, but rather that the prophecy was the reason he had smiled.
Personally, I think Quirrell killed Hermione, in the hopes of getting Harry to actually figure out how to defeat death - something no one else has ever done.
The reason he was happy when he heard the prediction that Harry would break the Universe is that this was near-confirmation that Harry would be successful.
In short, here is my version of Quirrell's plan:
1) For deniability reasons, be anti-resurrection from the start, and horribly worried about what Harry will do - tell Harry this
2) Kill someone Harry won't allow to stay dead (Hermione)
3) Let Harry convince you to help with the plan - provide magical knowledge Harry doesn't have access to on his own
4) Use any means necessary (Unicorn blood) to stay alive until Harry is close to success
5) Harry is now the solution to whatever is slowly killing you
Magic and supernatural might often work as synonyms, but I still think hearing God called "magic" is not generally accepted, even if "supernatural" is.
Your point is well taken about D&D - although I wasn't proposing that we actually use the D&D system to describe the belief system. I was expressing regret that a similar dichotomy doesn't exist within the language already.
Hearing the Christian God referred to as "magic" reminds me of another apparent lexical gap in English. I think most theologians would be uncomfortably hesitant to describe the purported miracles of their faith as the result of magic - although to my knowledge there is no better word to replace it.
I wish that our culture expressed the Divine Magic vs. Arcane Magic dichotomy that exists in Dungeons and Dragons.
My sense of the word complement is that if two things are complements, they sum to 1, or some equivalent.
A is the complement of ~A because P(A or ~A) = 1
Red and green are considered to be complementary colors because together they contain all primary colors of pigments. [although, that is based on the societal understanding that the primary colors are Red, Yellow and Blue. This is actually incorrect. For pigments, the primary colors are really Magenta, Yellow, and Cyan. For light, they are Red, Green, and Blue.]
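In the additive RGB model (light rather than pigment), the "sum to 1 or some equivalent" intuition can be made literal: a color and its complement add channel-wise to white, much as P(A) + P(~A) = 1. A minimal sketch - note that in RGB, red pairs with cyan rather than green:

```python
def complement(rgb):
    """Channel-wise complement: a color plus its complement is white."""
    r, g, b = rgb
    return (255 - r, 255 - g, 255 - b)

red = (255, 0, 0)
print(complement(red))                                      # (0, 255, 255) -> cyan, not green
print(tuple(a + b for a, b in zip(red, complement(red))))   # (255, 255, 255) -> white
```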
That is a very good suggestion.
It's better than anything I came up with on my own, but I'm not sure that antonym is a perfect fit.
For one, while hot/cold works, I'm not sure that red/green works.
Plus, antonym has a different connotation - it is the antonym of synonym. Antonym implies a word with the "opposite" meaning, not a concept with the "opposite" meaning.
I wouldn't be comfortable talking about the antonym of a concept.
Does anyone know if there are any languages that don't have this problem?
But many explanations which use "entropy" could also use "disorder" without becoming overtly incoherent or contradicting accounts given by most others, which was the requirement of #5.
That works for physical entropy. For the sense of entropy used in information theory, a better substitution would be uncertainty.
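To make the "uncertainty" substitution concrete: Shannon entropy is maximal for a uniform distribution (maximal uncertainty about the outcome) and zero when the outcome is certain. A quick sketch:

```python
from math import log2

def shannon_entropy(probs):
    """Entropy in bits: H = sum(-p * log2(p)), i.e. average uncertainty."""
    return sum(-p * log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))   # 1.0 bit: fair coin, maximal uncertainty
print(shannon_entropy([0.9, 0.1]))   # ~0.47 bits: biased coin, less uncertain
print(shannon_entropy([1.0]))        # 0.0 bits: certain outcome, no uncertainty
```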
You are right - my mistake.
An increase in entropy is a movement from a macrostate with a smaller number of microstates to a macrostate with a larger number of microstates.
"The number of microstates for a given macrostate tends to increase over time"
Or, are microstate and macrostate also garblejargon?
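For concreteness, here is the standard coin-flip toy example of the distinction (the specific numbers are just for illustration): the macrostate is the total number of heads, each exact head/tail sequence is a microstate, and macrostates near an even split have vastly more microstates - which is the sense in which an increase in entropy is a move to a macrostate with more microstates.

```python
from math import comb

N = 100  # number of fair coins (illustrative)
for heads in (0, 25, 50):
    # Macrostate: total number of heads; its microstate count is C(N, heads).
    print(f"macrostate '{heads} heads of {N}': {comb(N, heads):.3e} microstates")
```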
Is anyone else bothered by the word "opposite"?
It has many different usages, but there are two in particular that bother me:
"The opposite of hot is cold", "The opposite of red is green"
The opposite of A is [something that appears to be on the other side of a spectrum from A]
"The opposite of hot is not-hot", "The opposite of red is not-red"
The opposite of A is ~A
These two usages really ought not to be assigned to the same word. Does anyone know if there are simple ways to unambiguously use one meaning and not the other that already exist in English?
(Basically, are there two words/phrases foo and bar so that one could say "The foo of hot is cold, but the bar of hot is not-hot")
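For lack of existing English words, here is a sketch of the distinction in code, reusing foo and bar as the placeholder names from the parenthetical above (placeholders, not proposed terminology): foo reflects a value across the midpoint of a scale, while bar is plain complement/negation.

```python
def foo(value, lo, hi):
    """Spectrum-opposite: reflect a value across the midpoint of its scale
    (e.g. on a 0-100 'temperature' scale, the foo of hot is cold)."""
    return lo + hi - value

def bar(value, domain):
    """Complement/negation: everything in the domain that is not the value
    (the bar of hot is not-hot: lukewarm, cold, ...)."""
    return {x for x in domain if x != value}

temperatures = ["cold", "cool", "lukewarm", "warm", "hot"]
print(foo(90, 0, 100))            # 10 -- "the foo of hot is cold"
print(bar("hot", temperatures))   # everything but 'hot' -- "the bar of hot is not-hot"
```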
"Intelligence" is one of my favorite examples of Reification - a cluster of concepts that were grouped together into a single word to make communication easier, and as a result is often falsely thought of as a single concept, rather than an abstract collection of several separable ideas.
Knowledge of relevant facts, algorithmic familiarity, creativity, arithmetic capabilities, spatial reasoning capabilities, awareness and avoidance of logical fallacies, and probably dozens of others are all separable concepts that could reasonably be described as intelligence, but that correlate with each other to an unknown degree, and whose effects can be observed in [near] isolation.
While intelligence remains useful as a word, it is a troublesome one.
IQ is no less troubling. It measures only a small fraction of the skills that could be described as intelligence. In addition, it appears to measure significantly more than just intelligence, with variation as large as 20 points attributable to cultural or unknown environmental factors. http://psycnet.apa.org/psycinfo/1987-17534-001
One problem I remember reading about was the "odd item out" style of question historically found in many IQ tests - four objects were presented, and subjects were supposed to decide which one didn't belong. Unless 3 out of the 4 objects were identical, this task is ambiguous - and one anthropologist [citation needed] found that different cultures can have a different generally accepted "correct answer" to such a question.
TL;DR "Intelligence" isn't only vague, but it is an abstract combination of many semi-correlated skill-sets IQ on the other hand is a well-defined test, but it is not free of bias. It measures only a subset of what we would call "intelligence", and really only reliably predicts how well someone will do on future IQ tests.
If you aren't sure if you subvocalize while reading, try forcing yourself to imagine the words being read in a specific way - possibly in your friend's voice, or read in a certain easily stereotyped accent. Once you do that, you can see how different that feels from the reading you normally do.
When I try "reading in a Russian accent", my reading speed severely decreases, and the feeling is considerably more auditory than when I am reading with no gimmicks.