Many times on the internet I've seen people claim that something is more or less popular/known than it really is based on a poorly formulated Google search.
I've seen it too. Even Nate Silver did it in this New York Times blog post, where he estimates the number of fans for each team in the National Hockey League "by evaluating the number of people who searched for the term “N.H.L.”" Using his method, Montreal is the only Canadian market with a team for which it is estimated that fewer than half of the people are avid hockey fans (as he defined it).
In Montreal, French is the official language and the language spoken at home by most people. In French, the NHL is called the "Ligue nationale de hockey," abbreviated "L.N.H."
I honestly can't think of a single instance where I was convinced of an informal, philosophical argument through an academic paper. Books, magazines, blog posts - sure, but papers just don't seem to be a thing.
I have been convinced of the invalidity of other arguments by academic papers.
I have also been significantly persuaded by the failure of academic papers to make their case. That is, seeing that a poor argument is held in wide regard is evidence that the advocates of that position have no better arguments.
I too do not remember being convinced of many things by formal academic papers, only a very few.
Probably most importantly, what do you view as the purpose of SIAI's publishing papers? Or, if there are multiple purposes, which do you see as the most important?
In order to think of some things I do that only have one important purpose, it was necessary to perform the ritual of closing my eyes and thinking about nothing else for a few minutes by the clock.
I plan on assuming things have multiple important purposes and asking for several, e.g. "what do you view as the purposes of X."
There was nothing wrong with what you said, but it is strange how easily the (my?) mind stops questioning after coming up with just one purpose for something someone is doing. In contrast, when justifying one's own behavior, it is easy to think of multiple justifications.
It makes some sense in a story about motivated cognition and tribal arguments. It might be that to criticize, we look mostly for something someone does that has no justification, and invest less in attacking someone along a road that has some defenses. A person being criticized invests in defending against those attacks they know are coming, and does not try to think of all possible weaknesses in their position. There is some advantage in being genuinely blind to one's weaknesses so one can, without lying, be confident in one's own position.
Maybe it is ultimately unimportant to ask what the "purposes" of someone doing something are, since they will be motivated to justify themselves as much as possible. In this case, asking what the "purpose" is would force them to concentrate on their most persuasive and potentially best argument, even if it will rarely actually be the case that one purpose accounts for a large supermajority of their motivation.
Supply Chain Inventory Replenishment: The Debiasing Effect of Declarative Knowledge
However lately I realized I need to interact with other rationalists in order to further my development.
1) What made you believe this?
2) At present, what do you think are the best reasons for believing this?
Teaching tree thinking through touch.
These experiments were done with video game trees showing evolutionary divergence, and this method of teaching outperformed traditional paper exercises. Perhaps a simple computer program would make teaching probability trees easier, or the principles behind the experiments could be applied in another way to teach how to use these trees.
since presumably you're "updating" a lot, just like regular humans
It's a psychological trick to induce more updating than is normal. (Normal human updating tends to be insufficient.)
I say to myself in my mind, "nice clothes, nice clothes," alluding to belief as attire, and imagine they're wearing what most caused their statement.
For example, if someone said "Jesus never existed!" I might imagine them wearing a jacket that says "Respect me! I am sophisticated," or a hat saying "accept me, I'm a leftist just like you," or a backpack that says "I am angry at my parents."
Presumably without the ribbons they'd have to be paid more. And the status perks seem tied to the same thing that causes people to call war dead "heroes."
What about infantry v. armor? Or helicopter pilots v. people piloting drones from a base in Nevada? "Military" isn't too homogeneous a category.
Section 5 deals with this
This makes me think that you are right.
There was a weakness in the method, though. In appendix table one they not only show how likely it actually is that a baby with a certain name is white/black, they show the results from an independent field survey that asked people to pick names as white or black. In table eight, they only measure the likelihood someone with a certain name is in a certain class (as approximated by mother's education). Unfortunately, they don't show what people in general, or employers in particular, actually think. If they don't know about class differences between "Kenya" and "Latonya," or the lack of one between "Kenya" and "Carrie," they can't make a decision based on class differences as they actually are.
Apparent poorly grounded belief in SI's superior general rationality
I found this complaint insufficiently detailed and not well worded.
Average people think their rationality is moderately good. Average people are not very rational. SI affiliated people think they are adept or at least adequate at rationality. SI affiliated people are not complete disasters at rationality.
SI affiliated people are vastly superior to others in general rationality. So the original complaint, literally interpreted, is false.
An interesting question might be on the level of: "Do SI affiliates have rationality superior to what the average person falsely believes his or her rationality is?"
Holden's complaints each have their apparent legitimacy change differently under his and my beliefs. Some have to do with overconfidence or incorrect self-assessment, others with other-assessment, others with comparing SI people to others. Some of them:
Insufficient self-skepticism given how strong its claims are
Largely agree, as this relates to overconfidence.
...and how little support its claims have won.
Moderately disagree, as this relies on the rationality of others.
Being too selective (in terms of looking for people who share its preconceptions) when determining whom to hire and whose feedback to take seriously.
Largely disagree, as this relies significantly on the competence of others.
Paying insufficient attention to the limitations of the confidence one can have in one's untested theories, in line with my Objection 1.
Largely agree, as this depends more on accurate assessment of one's own rationality.
Rather than endorsing "Others have not accepted our arguments, so we will sharpen and/or reexamine our arguments," SI seems often to endorse something more like "Others have not accepted their arguments because they have inferior general rationality," a stance less likely to lead to improvement on SI's part.
There is instrumental value in falsely believing others to have a good basis for disagreement so one's search for reasons one might be wrong is enhanced. This is aside from the actual reasons of others.
It is easy to imagine an expert in a relevant field objecting to SI based on something SI does or says seeming wrong, only to have the expert couch the objection in literally false terms, perhaps ones that flow from motivated cognition and bear no trace of the real, relevant reason for the objection. This could be followed by SI's evaluation and dismissal of it and failure of a type not actually predicted by the expert...all such nuances are lost in the literally false "Apparent poorly grounded belief in SI's superior general rationality."
Such a failure comes to mind and is easy for me to imagine as I think this is a major reason why "Lack of impressive endorsements" is a problem. The reasons provided by experts for disagreeing with SI on particular issues are often terrible, but such expressions are merely what they believe their objections to be, and their expertise is in math or some such, not in knowing why they think what they think.
However the reaction of some lesswrongers to the title I initially chose for the post was distinctly negative. The title was "Most rational programming language?"
Many people have chosen similar titles for their posts. Many. It is very unusual to respond to criticism by writing a good post like "Avoid Inflationary use of Terms."
How did you do it?
Perhaps you initially had a defensive reaction to criticism just as others have had, and in addition have a way of responding to criticism well. Alternatively, perhaps your only advantage over the others was not having as much of a defensive impulse, and those others aren't necessarily missing any positive feature that turns criticism into useful thought. The phrase "channeling criticism" seems to assume the latter is the case.
Was there a feature of the criticism that made your post its indirect result? Perhaps it was convincing because of its unanimity, or non-antagonism, or humor, or seeming objectivity, or something else?
Do not ask whether it is “the Way” to do this or that. Ask whether the sky is blue or green. If you speak overmuch of the Way you will not attain it.
I still believe in Global Warming. Do you?
-Ted Kaczynski, The Unabomber
-Heartland Institute billboard
From the press release:
1. Who appears on the billboards?
The billboard series features Ted Kaczynski, the infamous Unabomber; Charles Manson, a mass murderer; and Fidel Castro, a tyrant. Other global warming alarmists who may appear on future billboards include Osama bin Laden and James J. Lee (who took hostages inside the headquarters of the Discovery Channel in 2010).
These rogues and villains were chosen because they made public statements about how man-made global warming is a crisis and how mankind must take immediate and drastic actions to stop it.
2. Why did Heartland choose to feature these people on its billboards?
Because what these murderers and madmen have said differs very little from what spokespersons for the United Nations, journalists for the “mainstream” media, and liberal politicians say about global warming. They are so similar, in fact, that a Web site has a quiz that asks if you can tell the difference between what Ted Kaczynski, the Unabomber, wrote in his “Manifesto” and what Al Gore wrote in his book, Earth in the Balance.
The point is that believing in global warming is not “mainstream,” smart, or sophisticated. In fact, it is just the opposite of those things. Still believing in man-made global warming – after all the scientific discoveries and revelations that point against this theory – is more than a little nutty. In fact, some really crazy people use it to justify immoral and frightening behavior.
Interestingly, science is the first thing mentioned in the next section:
3. Why shouldn’t I still believe in global warming?
Because the best available science says about two-thirds of the warming in the 1990s was due to natural causes, not human activities; the warming trend of the second half of the twentieth century already has stopped and forecasts of future warming are unreliable; and the benefits of a moderate warming are likely to outweigh the costs. Global warming, in other words, is not a crisis.
Thank you very much. I'm all set for now.
One problem is that I can't find the table of contents, so I am not exactly sure.
Google Books has a preview available for pages 1-4 and 11-22. I know pages 5-10 would be very helpful for me; probably the rest of chapter one would be too, but maybe not. It is likely everything I need is in pages 5-10.
Thank you for your help.
Please help me find: Fallacies and Judgments of Reasonableness: Empirical Research Concerning the Pragma-Dialectical Discussion Rules, by Frans H. van Eemeren, Bart Garssen, and Bert Meuffels
The main problem is that a test tests ability to take the test, independently of what its makers intended. The more similar tests are to each other, the more taking the first is training for the second, and the easier it is to teach directly to the test rather than to the skill that inspired the test. The less similar the before and after tests are, the less comparable they are.
Rationality training is particularly tricky because one is to learn formal models of both straight and twisted thinking, recognize when real-life situations resemble those patterns, and then decide how much formal treatment to give the situation, as well as how much weight to give to one's formal model as against one's feelings, reflexive thoughts, and so on.
Traditional classroom tests are best set up to test the first part, knowledge of the formal models, even assuming one solved the problems inherent in testing. Even to the extent one can ask people how they ought to react in the field, e.g. when to use which sort of calculation, that is still a question with a correct answer according to a formal model, and one is still not testing the ability to apply it!
These problems resemble those the military has faced in its training and testing. They use indoctrination, simulations, and field tests. Decision making is tested under uncomfortable conditions, ensuring probable good decision making under most circumstances. In general, knowing what they do is likely to be helpful.
The problems with tests are not intractable. One can limit the gain on the second test from having taken the first test by saturating the test taker with knowledge of the test before it is taken the first time, though few would be motivated. One can try to make a test similar to the skill tested, so ability at the test is well correlated with the skill one intends to test. One can try to devise very different sorts of tests that measure the same thing (I doubt that will work here).
One component of a useful classroom test might resemble the classic research on correspondence bias. In it, people judge individuals' support for positions based on an essay they supposedly wrote. Some subjects are told that the writer chose the thesis, others that the writer had it assigned. (The theses were either pro- or anti-Castro.) People inferred that the essay's author substantially agreed with the thesis even when they were told the thesis had been assigned. The quality of an essay a person produces is some evidence of what they believe, as is their willingness to write it at all, etc., but in general people infer others' dispositions too strongly from actions taken under social constraint, even when they know of the constraint.
Here is how the framework could translate into a useful rationality test: the test would give people some evidence for something they are biased to over-believe, and the quantity and quality of legitimate evidence in the test would vary widely. One would not be able to pass the test by simply detecting the bias and then declaring oneself unmoved in that wrong direction, as one might for, say, sunk costs. Instead, the valid evidence and the invalid inclination would lie along the same vector, such that one would have to distinguish the bias from the rest of the evidence in the environment.
This solves the problem of having a classroom test be an easy exercise of spotting the biased thought pattern and quashing it. Videos or essays of various people with known beliefs arguing for or against those beliefs could be used to train and test people in this. It's actually probably a skill one could learn without any idea of how one was doing it.
Expressed abstractly, the idea is to test for ability to quantify wrong thinking by mixing it with legitimate evidence, all of which increases confidence in a particular conclusion. This is hard to game because the hard part isn't recognizing the bias. The material's being media from real life prevents testers from imposing an unrealistic model that ignores actual evidence (e.g., a strongly pro-Castro person really might refuse to write an anti-Castro essay).
the most...memetically dangerous groups
What are your criteria for this?
Consider giving an example of the sort of decision making procedure that is taught in camp, with the subject of the example whether one should attend the camp.
E.g.:
Write down all the reasons you think you are considering on a sheet of paper, in pro and con columns. Circle those that do not refer to consequences of going or not going to camp. Then shut your eyes for two minutes and think of at least five alternatives that you are likely to do instead of camp. Make pro and con lists for the most likely three of these. Then circle non-consequences. Generate consequences you should be considering but aren't by imagining what is likely to happen if you go to camp. Be sure not to assume that compelling stories with many features are most likely, and give greater consideration to self-generated stories with fewer contingent parts. Generate at least four seemingly likely stories of what will happen. Put a star next to each alternative for which the time and/or money is spent acquiring an experience rather than material goods, as the science of happiness consistently shows that such acquisitions are more uplifting...etc.
Alternatively, a sample VOI calculation on how much time people should spend considering it would do.
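For concreteness, a back-of-the-envelope version of that VOI calculation might look like the Python sketch below. Every number and the simple one-hour model are my own invented assumptions for illustration, not estimates about the actual camp; the point is only the shape of the calculation.

```python
# Toy value-of-information (VOI) sketch: how much deliberation is worth before
# deciding whether to attend. All figures are invented for illustration.

p_attending_is_better = 0.6        # prior that attending beats the best alternative
loss_if_attending_is_worse = 1500  # dollar-equivalent cost of attending when you shouldn't have
p_detect_mistake_per_hour = 0.25   # chance one hour of analysis catches that mistake in time
value_of_an_hour = 30              # opportunity cost of an hour of deliberation

# Default action: attend, since it is better in expectation under the prior.
# Deliberation only helps in the branch where attending is actually worse,
# and only if the analysis detects that before the decision is locked in.
voi_per_hour = (1 - p_attending_is_better) * p_detect_mistake_per_hour * loss_if_attending_is_worse

print(f"Expected value of one hour of deliberation: ${voi_per_hour:.0f}")
print(f"Worth spending the hour? {voi_per_hour > value_of_an_hour}")
```

With these made-up numbers an hour of deliberation is worth about $150 against a $30 cost, so it clears the bar easily; the exercise is in picking and defending the inputs.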
I have friends and relatives who live in the area. How central to the camp is the communal living aspect? What would you charge to commute to it, if that is possible?
The median is almost always around 7, for almost anything.
I tried to take that into account when reading.
treating the indexes as utilities
Please explain.
"Is there evidence this will be worthwhile according to my values now, independently of how it might change my values?"
"Is there evidence that this is instrumentally useful for more than warm fuzzies?"
"Is there evidence that for the probable benefit of this event the costs are substantially optimized for it? I.e., if the benefit is substantially social, even if this would be worth flying around the world for, a program could actually be optimized for social benefits, and/or I could attend a closer/cheaper/shorter program with similar benefits to me."
"Regardless of anyone's intent, what is this program optimized for?"
"How's the food?"
It's easy to imagine a Christian brainwashing retreat run by someone similar to Luke that would also have that property.
7b) Is there any evidence I'll be glad I went that a Christian brainwashing retreat could not produce just as easily?
If you went to a Jehovah's Witness retreat, and were in an accident, and you were conscious enough to refuse a blood transfusion, you'd be glad for having learned what you did at the retreat, even if you knew the refusal would be fatal.
In general, anything that is compelling and affects your decisions will make you glad for it, and its being compelling is probably not inversely related to its being true. So I'm not too concerned that my tentative answer to this question is "no."
you'll find that people are searching for "less wrong cult" and "singularity institute cult" with some frequency.
Maybe a substantial number of people are searching for the posts about cultishness.
I entirely agree with this.
That's what I intended.
Can someone provide the full text of this?
Slippery slope arguments (SSAs) have a bad philosophical reputation. They seem, however, to be widely used and frequently accepted in many legal, political, and ethical contexts. Hahn and Oaksford (2007) argued that distinguishing strong and weak SSAs may have a rational basis in Bayesian decision theory. In this paper three experiments investigated the mechanism of the slippery slope showing that they may have an objective basis in category boundary re-appraisal.
Also this:
...he argued that the very reasons that can make SSAs strong arguments mean that we should be poor at abiding by the distinction between good and bad SSAs, making SSAs inherently undesirable. We argue that Enoch’s meta-level SSA fails on both conceptual and empirical grounds.
Detecting implausible social network effects in acne, height, and headaches: longitudinal analysis
depending on how those techniques are applied,
But as far as I know there's nothing in Cox's theorem or the axioms of probability theory or anything like those that says I had to use that particular prior
The way I interpret hypotheticals in which one person is said to be able to do something other than what they will do, such as "depending on how those techniques are applied," is that all of the person's priors are held constant in the hypothetical. This is the most charitable interpretation of the OP, because the claim is that, under Bayesian reasoning, results do not depend on how the same data is applied. That claim seems obviously wrong if the OP is interpreted as discussing results reached by decision processes with identical data but differing priors, so it's more interesting to talk about agents that differ in other things, such as their likelihood-generating models, than it is to talk about agents with different priors.
I could just as easily have used a different...likelihood model, and gotten a totally different posterior that's nonetheless legitimate.
Can you give an example?
Cigarette smoking: an underused tool in high-performance endurance training
In summary, existing literature supports the use of cigarettes to enhance endurance performance through weight loss and increased serum hemoglobin levels and lung volumes.
musical contrast and chronological rejuvenation
...people were nearly a year-and-a-half younger after listening to “When I’m Sixty-Four” (adjusted M = 20.1 years) rather than to “Kalimba” (adjusted M = 21.5 years), F(1, 17) = 4.92, p = .040.
Length of stay in hospital and duration of fever were significantly shorter in the intervention group than in the control group (P=0.01 and P=0.04, respectively)...Remote, retroactive [emphasis added] intercessory prayer said for a group is associated with a shorter stay in hospital and shorter duration of fever in patients with a bloodstream infection and should be considered for use in clinical practice.
depending on how those techniques are applied, can lead to different results when analyzing the same data
But two Bayesian inferences from the same data can also give different results. How could this be a non-issue for Bayesian inference while being indicative of a central problem for NHST?
If the OP is read to hold constant everything not mentioned as a difference, that includes the prior beliefs of the person doing the analysis, as against the hypothetical analysis that wasn't performed by that person.
Does "two Bayesian inferences" imply it is two different people making those inferences, with two people not possibly having identical prior beliefs? Could a person performing axiom-obeying Bayesian inference reach different conclusions than that same person hypothetically would have had they performed a different axiom-obeying Bayesian inference?
Is the sunk cost fallacy a fallacy?
I ask myself about many statements: would this have the same meaning if the word "really" were inserted? As far as my imagination can project, any sentence that can have "really" inserted into it without changing the sentence's meaning is at least somewhat a wrong question, one based on an unnatural category or an argument by definition.
If a tree falls in the forest, does it make a sound? --> If a tree falls in the forest, does it really make a sound?
Is Terry Schiavo alive? --> Is Terry Schiavo really alive?
Is the sunk cost fallacy a fallacy? --> Is the sunk cost fallacy really a fallacy?
When you surround an army, leave an outlet free. Do not press a desperate foe too hard.
Game theory won out over good wishes.
Not that it's bad, for that would be confusing levels, even if "shit" were being used in its usual figurative sense. For example, I would consider some true things said that are self-harmful violations of social norms "shit."
Like others I read it from a link on LW, I think...thanks for posting.
Shit and Bullshit Rationalists Don't Say:
"I've read more papers by Scott Aaronson than just the one." "Which one?" (Both of these.)
Quantity of experience: brain-duplication and degrees of consciousness, by Nick Bostrom
Decision Tree: Roots of Knowledge.
Decision Tree: Applied Wisdom.
Decision Tree: Our mascot is a thinly veiled rip-off of an Ent! Sweet!
My favorite so far.
a future, more evolved version of myself.
Just kidding.
The ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else. Practical men, who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back.
--John Maynard Keynes
Nemeth … divided two hundred and sixty-five female undergraduates into teams of five. … The first set of teams got the standard brainstorming spiel, including the no-criticism rules. Other teams were told … “Most studies suggest that you should debate and criticize each other’s ideas.” The rest received no further instructions. …The brainstorming groups slightly outperformed the groups given no instructions, but teams given the debate condition were the most creative by far. On average, they generated twenty per cent more ideas. And after the teams disbanded, … brainstormers and the people given no guidelines produced an average of three additional ideas; the debaters produced seven. …
“There’s this Pollyannaish notion that the most important thing to do when working together is stay positive and get along, to not hurt anyone’s feelings. … Well, that’s just wrong.”
Did they notice that they were possibly changing the amount of offense taken and feelings hurt by criticism, when they told people what was optimal? They told people that criticism was a duty, such that they probably wouldn't take it as personally, and they found that the group was more creative. But did they measure the amount or nature of criticism given in the groups?
There are many reasons why such a rule could inhibit creativity. I wonder how important each factor is.
That's advice for the skimming/reading/intensive study of 1,000 papers to get their knowledge, balancing completeness, depth, breadth, and the like.
I want advice on summarizing 100 individual articles, each one fairly completely read, so that many other people can do that and share the results with each other. The thing you do best, rather than the thing lukeprog does best.
deciding who to trust
This can be unpacked/dissolved.
First, I think of people/situation pairs rather than people. Specific situations influence things so much that one loses a lot by trying to think of people more abstractly; there is the danger of the fundamental attribution error.
Some people/situations are wrong more often than others are. Some people/situations lie more to others than others do. Some people/situations lie more to themselves than others do.
Some are more concerned with false positives, others with false negatives.
I also tend to think of people as components of decision making processes, as well as being composed of analogous decision making processes. Science takes advantage of this through the peer review process, which pits credulous humans against each other in attempts to prove each other's ideas wrong, and it ultimately produces a body of knowledge each piece of which is unlikely to be false. That is still the best input for anyone who instead cares about something slightly different, such as what is most likely to be true when false positives and false negatives would be similarly dangerous.
This is the source of my respect for Scott Adams (creator of Dilbert), which I've noticed is surprisingly prevalent if irregular among intelligent people I respect who have no particular reason to connect with anything having to do with office work or cubicles. It's something that people either "get" or "don't get," like the orange joke. The man is an incomplete thinker, and many hundreds of millions of people are better decision makers than he is, but as a member of a decision making group few could better come up with creative, topical, unique approaches to problems. Pair him with an intelligent, moderately critical mind and one would have a problem solving group better than one of two moderately intelligent and creative people.
Some people/situations produce more signal than others, others a better signal/noise ratio, some only advise when they are confident in their advice, some advise whenever they think it would have marginal gain, etc.
If you have an important decision to make, ask how to make the decision, not who should make it. Set up a person/situation network, even if the only person to trust is yourself. (I have seen some research on patterns of decisions better made on a full bladder than on an empty one, and vice versa.) There is no you; there is only a you/situation (e.g. bladder) pair. Nothing corresponds to a you apart from some bladder situation, whether empty, full, or intermediate! Likewise for decisions that differ depending on whether or not your facial muscles are in the shape of a smile, etc.
Also, for every aspect of "trust," beliefs are properly probabilistic: the chance the person has good intentions, understands how you interpreted their words and actions, knows the right answer, knows they know the right answer, etc.
If you have a specific question you want advice on, asking about it in the most abstract terms to avoid political associations was a great first move. Yet the abstract question is an imprecise summary and function of specific possible worlds. I think continuously rephrasing it from more to less abstract might work well, as one could select from among the variously abstract advice at different levels of political contamination and idiosyncratic specificity. Going in the other direction wouldn't work as well, since the political content revealed early would taint later responses.
I think it's time for a meta-post in which gwern discusses summarizing articles and gives advice.
eminent scientists tend to be
Base rate?
"Advanced Sanity" matches a strong comparative qualifier to a basic trait. While "sanity" has problems, as mentioned below, I think the phrase derives much of its power from its underlying pattern, which can be used in other suggestions.
The Anti-Zombie Conspiracy
Bell?