Posts

AI Box Role Plays 2012-01-22T19:11:13.975Z
The time has come to talk of whether pigs have wings 2011-10-27T00:55:28.526Z
Questions for the Singularity Summit 2011-10-15T00:10:48.179Z
Sweet Unconsciousness 2011-10-02T03:33:51.553Z
Rationality Quotes With Attributions Hidden: from Mein Kampf to Men****x 2011-09-28T04:50:23.745Z
Guardian Angels: Discrete Extrapolated Volitions 2011-09-25T22:51:44.719Z
Proposal: Rationality Quotes Thread With Attributions in rot13 2011-09-02T13:38:44.725Z
30 day karma 2011-08-24T23:30:37.344Z
Judging the intent of others favorably 2011-08-09T17:10:10.888Z
Analogies and learning 2011-05-23T22:44:53.138Z

Comments

Comment by lessdazed on The Problem of "Win-More" · 2014-04-04T20:10:49.149Z · LW · GW

Many times on the internet I've seen people claim that something is more or less popular/known than it really is based on a poorly formulated Google search.

I've seen it too. Even Nate Silver did it in this New York Times blog post, where he estimates the number of fans for each team in the National Hockey League "by evaluating the number of people who searched for the term “N.H.L.”" Using his method, Montreal is the only Canadian market with a team for which it is estimated that fewer than half of the people are avid hockey fans (as he defined it).

In Montreal, French is the official language and the language spoken at home by most people. In French, the NHL is called the "Ligue nationale de hockey," abbreviated "L.N.H."

Comment by lessdazed on Why Academic Papers Are A Terrible Discussion Forum · 2012-06-20T20:57:41.468Z · LW · GW

I honestly can't think of a single instance where I was convinced of an informal, philosophical argument through an academic paper. Books, magazines, blog posts - sure, but papers just don't seem to be a thing.

I have been convinced of the invalidity of other arguments by academic papers.

I have also been significantly persuaded by the failure of academic papers to make their case. That is, seeing that a poor argument is held in wide regard is evidence that the advocates of that position have no better arguments.

I too do not remember being convinced of many things by formal academic papers; only a very few.

Comment by lessdazed on Why Academic Papers Are A Terrible Discussion Forum · 2012-06-20T20:28:15.847Z · LW · GW

Probably most importantly, what do you view as the purpose of SIAI's publishing papers? Or, if there are multiple purposes, which do you see as the most important?

In order to think of some things I do that only have one important purpose, it was necessary to perform the ritual of closing my eyes and thinking about nothing else for a few minutes by the clock.

I plan on assuming things have multiple important purposes and asking for several, e.g. "what do you view as the purposes of X."

There was nothing wrong with what you said, but it is strange how easily the (my?) mind stops questioning after coming up with just one purpose for something someone is doing. In contrast, when justifying one's own behavior, it is easy to think of multiple justifications.

It makes some sense in a story about motivated cognition and tribal arguments. It might be that to criticize, we look mostly for something someone does that has no justification, and invest less in attacking someone along a road that has some defenses. A person being criticized invests in defending against those attacks they know are coming, and does not try to think of all possible weaknesses in their position. There is some advantage in being genuinely blind to one's weaknesses so one can, without lying, be confident in one's own position.

Maybe it is ultimately unimportant to ask what the "purposes" of someone doing something are, since they will be motivated to justify themselves as much as possible. In this case, asking what the "purpose" is would force them to concentrate on their most persuasive and potentially best argument, even if it will rarely actually be the case that one purpose is a large supermajority of their motivation.

Comment by lessdazed on Free research help, editing and article downloads for LessWrong · 2012-06-14T18:10:14.592Z · LW · GW

"THE IMPACT OF INDIVIDUAL DEBIASING EFFORTS ON FINANCIAL DECISION EFFECTIVENESS IN THE SUPPLIER SELECTION PROCESS"

Supply Chain Inventory Replenishment: The Debiasing Effect of Declarative Knowledge

Comment by lessdazed on Welcome to Less Wrong! (2012) · 2012-06-07T23:09:31.611Z · LW · GW

However lately I realized I need to interact with other rationalists in order to further my development.

1) What made you believe this?

2) At present, what do you think are the best reasons for believing this?

Comment by lessdazed on AI Risk & Opportunity: Strategic Analysis Via Probability Tree · 2012-06-07T17:38:57.909Z · LW · GW

Teaching tree thinking through touch.

These experiments were done with video game trees showing evolutionary divergence, and this method of teaching outperformed traditional paper exercises. Perhaps a simple computer program would make teaching probability trees easier, or the principles behind the experiments could be applied in another way to teach how to use these trees.
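
For instance, a minimal sketch of such a program (hypothetical; the tree structure and numbers below are made up purely for illustration) could just walk a tree and multiply branch probabilities along each path:

```python
# Illustrative only: enumerate the leaves of a probability tree and the
# probability of reaching each one. Labels and numbers are placeholders.

tree = ("start", [
    (0.7, ("event A happens", [
        (0.4, ("outcome A1", [])),
        (0.6, ("outcome A2", [])),
    ])),
    (0.3, ("event A does not happen", [])),
])

def leaves(node, p=1.0, path=()):
    label, children = node
    path = path + (label,)
    if not children:
        yield path, p
    else:
        for branch_p, child in children:
            yield from leaves(child, p * branch_p, path)

for path, p in leaves(tree):
    print(" -> ".join(path), f"(probability {p:.2f})")
```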

Comment by lessdazed on Have you changed your mind lately? On what? · 2012-06-05T16:25:16.013Z · LW · GW

since presumably you're "updating" a lot, just like regular humans

It's a psychological trick to induce more updating than is normal. Normal human updating tends to be insufficient.

Comment by lessdazed on How can I argue without people online and not come out feeling bad? · 2012-06-05T14:39:21.772Z · LW · GW

I say to myself in my mind, "nice clothes, nice clothes," alluding to belief as attire, and imagine they're wearing what most caused their statement.

For example, if someone said "Jesus never existed!" I might imagine them wearing a jacket that says "Respect me! I am sophisticated," or a hat saying "accept me, I'm a leftist just like you," or a backpack that says "I am angry at my parents."

Comment by lessdazed on When None Dare Urge Restraint, pt. 2 · 2012-06-04T06:28:25.168Z · LW · GW

Presumably without the ribbons they'd have to be paid more. And the status perks seem tied to the same thing that causes people to call war dead "heroes."

Comment by lessdazed on When None Dare Urge Restraint, pt. 2 · 2012-05-31T17:00:29.221Z · LW · GW

What about infantry v. armor? Or helicopter pilots v. people piloting drones from a base in Nevada? "Military" isn't too homogeneous a category.

Comment by lessdazed on Why Are Individual IQ Differences OK? · 2012-05-31T06:39:47.393Z · LW · GW

Section 5 deals with this

This makes me think that you are right.

There was a weakness in the method, though. In appendix table one they not only show how likely it actually is that a baby with a certain name is white/black, but also the results from an independent field survey that asked people to pick names as white or black. In table eight, they only measure the likelihood that someone with a certain name is in a certain class (as approximated by mother's education). Unfortunately, they don't show what people in general, or employers in particular, actually think. If they don't know about class differences between "Kenya" and "Latonya," or the lack of one between "Kenya" and "Carrie," they can't make a decision based on class differences as they actually are.

Comment by lessdazed on Thoughts on the Singularity Institute (SI) · 2012-05-31T05:54:35.797Z · LW · GW

Apparent poorly grounded belief in SI's superior general rationality

I found this complaint insufficiently detailed and not well worded.

Average people think their rationality is moderately good. Average people are not very rational. SI affiliated people think they are adept or at least adequate at rationality. SI affiliated people are not complete disasters at rationality.

SI affiliated people are vastly superior to others in general rationality. So the original complaint, literally interpreted, is false.

An interesting question might be on the level of: "Do SI affiliates have rationality superior to what the average person falsely believes his or her rationality is?"

The apparent legitimacy of each of Holden's complaints changes differently under his beliefs and mine. Some have to do with overconfidence or incorrect self-assessment, others with other-assessment, others with comparing SI people to others. Some of them:

Insufficient self-skepticism given how strong its claims are

Largely agree, as this relates to overconfidence.

...and how little support its claims have won.

Moderately disagree, as this relies on the rationality of others.

Being too selective (in terms of looking for people who share its preconceptions) when determining whom to hire and whose feedback to take seriously.

Largely disagree, as this relies significantly on the competence of others.

Paying insufficient attention to the limitations of the confidence one can have in one's untested theories, in line with my Objection 1.

Largely agree, as this depends more on accurate assessment of one's own rationality.

Rather than endorsing "Others have not accepted our arguments, so we will sharpen and/or reexamine our arguments," SI seems often to endorse something more like "Others have not accepted their arguments because they have inferior general rationality," a stance less likely to lead to improvement on SI's part.

There is instrumental value in falsely believing others to have a good basis for disagreement so one's search for reasons one might be wrong is enhanced. This is aside from the actual reasons of others.

It is easy to imagine an expert in a relevant field objecting to SI based on something SI does or says seeming wrong, only to have the expert couch the objection in literally false terms, perhaps ones that flow from motivated cognition and bear no trace of the real, relevant reason for the objection. This could be followed by SI evaluating and dismissing the objection, and then failing in a way the expert did not literally predict...all such nuances are lost in the literally false "Apparent poorly grounded belief in SI's superior general rationality."

Such a failure is easy for me to imagine, and I think this is a major reason why "Lack of impressive endorsements" is a problem. The reasons provided by experts for disagreeing with SI on particular issues are often terrible, but such expressions are merely what they believe their objections to be, and their expertise is in math or some such, not in knowing why they think what they think.

Comment by lessdazed on Avoid inflationary use of terms · 2012-05-31T04:41:41.812Z · LW · GW

However the reaction of some lesswrongers to the title I initially chose for the post was distinctly negative. The title was "Most rational programming language?"

Many people have chosen similar titles for their posts. Many. It is very unusual to respond to criticism by writing a good post like "Avoid Inflationary use of Terms."

How did you do it?

Perhaps you initially had a defensive reaction to criticism just as others have had, and in addition have a way of responding to criticism well. Alternatively, perhaps your only advantage over the others was not having as much of a defensive impulse, and those others aren't necessarily missing any positive feature that turns criticism into useful thought. The phrase "channeling criticism" seems to assume the latter is the case.

Was there a feature of the criticism that made your post its indirect result? Perhaps it was convincing from its unanimity, or non-antagonism, or humor, or seeming objectivity, or something else?

Do not ask whether it is “the Way” to do this or that. Ask whether the sky is blue or green. If you speak overmuch of the Way you will not attain it.

Comment by lessdazed on Reversed Stupidity Is Not Intelligence · 2012-05-21T06:30:05.167Z · LW · GW

I still believe in Global Warming. Do you?

-Ted Kaczynski, The Unabomber

-Heartland Institute billboard

From the press release:

1. Who appears on the billboards?

The billboard series features Ted Kaczynski, the infamous Unabomber; Charles Manson, a mass murderer; and Fidel Castro, a tyrant. Other global warming alarmists who may appear on future billboards include Osama bin Laden and James J. Lee (who took hostages inside the headquarters of the Discovery Channel in 2010).

These rogues and villains were chosen because they made public statements about how man-made global warming is a crisis and how mankind must take immediate and drastic actions to stop it.

2. Why did Heartland choose to feature these people on its billboards?

Because what these murderers and madmen have said differs very little from what spokespersons for the United Nations, journalists for the “mainstream” media, and liberal politicians say about global warming. They are so similar, in fact, that a Web site has a quiz that asks if you can tell the difference between what Ted Kaczynski, the Unabomber, wrote in his “Manifesto” and what Al Gore wrote in his book, Earth in the Balance.

The point is that believing in global warming is not “mainstream,” smart, or sophisticated. In fact, it is just the opposite of those things. Still believing in man-made global warming – after all the scientific discoveries and revelations that point against this theory – is more than a little nutty. In fact, some really crazy people use it to justify immoral and frightening behavior.

Interestingly, science is the first thing mentioned in the next section:

3. Why shouldn’t I still believe in global warming?

Because the best available science says about two-thirds of the warming in the 1990s was due to natural causes, not human activities; the warming trend of the second half of the twentieth century already has stopped and forecasts of future warming are unreliable; and the benefits of a moderate warming are likely to outweigh the costs. Global warming, in other words, is not a crisis.

Comment by lessdazed on Free research help, editing and article downloads for LessWrong · 2012-04-09T23:09:57.932Z · LW · GW

Thank you very much. I'm all set for now.

Comment by lessdazed on Free research help, editing and article downloads for LessWrong · 2012-04-06T22:37:11.449Z · LW · GW

One problem is that I can't find the table of contents, so I am not exactly sure.

Google books has preview available for pages 1-4 and 11-22. I know pages 5-10 would be very helpful for me, probably the rest of chapter one, but maybe not. It is likely everything I need is in pages 5-10.

Thank you for your help.

Comment by lessdazed on Free research help, editing and article downloads for LessWrong · 2012-04-06T21:05:35.233Z · LW · GW

Please help me find: Fallacies and Judgments of Reasonableness: Empirical Research Concerning the Pragma-Dialectical Discussion Rules, by Frans H. van Eemeren, Bart Garssen, and Bert Meuffels

Comment by lessdazed on Minicamps on Rationality and Awesomeness: May 11-13, June 22-24, and July 21-28 · 2012-03-30T19:19:44.399Z · LW · GW

The main problem is that a test tests ability to take the test, independently of what its makers intended. The more similar tests are to each other, the more taking the first is training for the second, and the easier it is to teach directly to the test rather than to the skill that inspired the test. The less similar the before and after tests are, the less comparable they are.

Rationality training is particularly tricky because one is to learn formal models of both straight and twisted thinking, recognize when real-life situations resemble those patterns, and then decide how much formal treatment to give the situation, as well as how much weight to give to one's formal model as against one's feelings, reflexive thoughts, and so on.

Traditional classroom tests are set up to best test the first bit, knowledge of the formal models, assuming the problems inherent in testing were solved. Even to the extent one can ask people how they ought to react in the field, e.g. when to use which sort of calculation, that is still a question with a correct answer according to a formal model, and one is still not testing the ability to apply it!

These problems resemble those the military has faced in its training and testing. They use indoctrination, simulations, and field tests. Decision making is tested under uncomfortable conditions, ensuring probable good decision making under most circumstances. In general, knowing what they do is likely to be helpful.

The problems with tests are not intractable. One can limit the gain on the second test from having taken the first test by saturating the test taker with knowledge of the test before it is taken the first time, though few would be motivated. One can try to make a test similar to the skill tested, so ability at the test is well correlated with the skill one intends to test. One can try to devise very different sorts of tests that measure the same thing (I doubt that will work here).

One component of a useful classroom test might resemble the classic research on correspondence bias. In it, people judge individuals' support for positions based on an essay they supposedly wrote. Some subjects are told that the writer chose the thesis, others that the writer had it assigned. (The theses were either pro- or anti-Castro.) People inferred that the essay's author significantly agreed with the thesis even when they were told it had been assigned. The quality of an essay a person produces is some evidence of what they believe, as is their willingness to write it at all, etc., but in general people overly infer others' dispositions from actions taken under social constraint, even when they know of the constraint.

Here is how the framework could translate into a useful rationality test: the test would give people some evidence for something they are biased to overly believe, and the quantity and quality of legitimate evidence in the test would vary widely. One would not be able to pass the test by simply detecting the bias and then declaring oneself unmoved in that wrong direction, as one might be able to do for, say, sunk costs. Instead, the valid evidence and invalid inclination would be along the same vector, such that one would have to distinguish the bias from the rest of the evidence in the environment.

This solves the problem of having a classroom test be an easy exercise of spotting the biased thought pattern and quashing it. Videos or essays of various people with known beliefs arguing for or against those beliefs could be used to train and test people in this. It's actually probably a skill one could learn without any idea of how one was doing it.

Expressed abstractly, the idea is to test for ability to quantify wrong thinking by mixing it with legitimate evidence, all of which increases confidence in a particular conclusion. This is hard to game because the hard part isn't recognizing the bias. Because the material is media from real life, testers cannot impose an unrealistic model that ignores actual evidence (e.g., a strongly pro-Castro person really might refuse to write an anti-Castro essay).

Comment by lessdazed on Minicamps on Rationality and Awesomeness: May 11-13, June 22-24, and July 21-28 · 2012-03-30T08:54:18.381Z · LW · GW

the most...memetically dangerous groups

What are your criteria for this?

Comment by lessdazed on Minicamps on Rationality and Awesomeness: May 11-13, June 22-24, and July 21-28 · 2012-03-30T03:59:07.957Z · LW · GW

Consider giving an example of the sort of decision making procedure that is taught in camp, with the subject of the example whether one should attend the camp.

E.g.:

Write down all the reasons you think you are considering on a sheet of paper, in pro and con columns. Circle those that do not refer to consequences of going or not going to camp. Then shut your eyes to think for two minutes and think of at least five alternatives that you are likely to do instead of camp. Make pro and con lists for the most likely three of these. Then circle non-consequences. Generate consequences you should be considering but aren't by imagining what is likely to happen if you go to camp. Be sure not to think that compelling stories with many features are most likely, and give greater consideration to self-generated stories with fewer contingent parts. Generate at least four seemingly likely stories of what will likely happen. Put a star next to each alternative for which the time and/or money is spent acquiring an experience, rather than material goods, as the science of happiness consistently shows that such acquisitions are more uplifting...etc.

Alternatively, a sample VOI calculation on how much time people should spend considering it would do.
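
For illustration, a rough sketch of the kind of calculation meant here (every number is a made-up placeholder, not a claim about the camp):

```python
# Hypothetical value-of-information sketch for "how long should I spend deciding?"
# All figures are placeholders; substitute your own estimates.

p_correct_now = 0.6           # chance my current snap decision is already right
p_correct_after = 0.8         # chance of choosing correctly after more deliberation
value_of_right_choice = 1000  # dollars-equivalent gain from choosing correctly
hours_of_deliberation = 5
cost_per_hour = 25            # opportunity cost of an hour spent deliberating

expected_gain = (p_correct_after - p_correct_now) * value_of_right_choice
deliberation_cost = hours_of_deliberation * cost_per_hour

print(f"Expected gain from deliberating further: {expected_gain:.0f}")
print(f"Cost of deliberating further:            {deliberation_cost:.0f}")
print("Keep deliberating" if expected_gain > deliberation_cost else "Just decide now")
```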

Comment by lessdazed on Minicamps on Rationality and Awesomeness: May 11-13, June 22-24, and July 21-28 · 2012-03-29T22:29:39.771Z · LW · GW

I have friends and relatives who live in the area. How central to the camp is the communal living aspect? What would you charge to commute to it, if that is possible?

Comment by lessdazed on Minicamps on Rationality and Awesomeness: May 11-13, June 22-24, and July 21-28 · 2012-03-29T22:23:36.618Z · LW · GW

The median is almost always around 7, for almost anything.

I tried to take that into account when reading.

treating the indexes as utilities

Please explain.

Comment by lessdazed on Minicamps on Rationality and Awesomeness: May 11-13, June 22-24, and July 21-28 · 2012-03-29T22:18:40.201Z · LW · GW

"Is there evidence this will be worthwhile according to my values now, independently of how it might change my values?"

"Is there evidence that this is instrumentally useful for more than warm fuzzies?"

"Is there evidence that for the probable benefit of this event the costs are substantially optimized for it? I.e., if the benefit is substantially social, even if this would be worth flying around the world for, a program could actually be optimized for social benefits, and/or I could attend a closer/cheaper/shorter program with similar benefits to me."

"Regardless of anyone's intent, what is this program optimized for?"

"How's the food?"

Comment by lessdazed on Minicamps on Rationality and Awesomeness: May 11-13, June 22-24, and July 21-28 · 2012-03-29T21:38:29.762Z · LW · GW

It's easy to imagine a Christian brainwashing retreat run by someone similar to Luke that would also have that property.

Comment by lessdazed on Minicamps on Rationality and Awesomeness: May 11-13, June 22-24, and July 21-28 · 2012-03-29T21:38:20.416Z · LW · GW

7b) Is there any evidence I'll be glad I went that a Christian brainwashing retreat could not produce just as easily?

If you went to a Jehovah's Witness retreat, and were in an accident, and you were conscious enough to refuse a blood transfusion, you'd be glad for having learned what you did at the retreat, even if you knew the refusal would be fatal.

In general, anything that is compelling and affects your decisions will make you glad for it, and its being compelling is probably not inversely related to its being true. So I'm not too concerned that my tentative answer to this question is "no."

Comment by lessdazed on Cult impressions of Less Wrong/Singularity Institute · 2012-03-20T19:57:36.942Z · LW · GW

you'll find that people are searching for "less wrong cult" and "singularity institute cult" with some frequency.

Maybe a substantial number of people are searching for the posts about cultishness.

Comment by lessdazed on Cult impressions of Less Wrong/Singularity Institute · 2012-03-20T19:50:08.809Z · LW · GW

I entirely agree with this.

Comment by lessdazed on The Strangest Thing An AI Could Tell You · 2012-03-19T23:04:17.524Z · LW · GW

That's what I intended.

Comment by lessdazed on Fallacies as weak Bayesian evidence · 2012-03-19T23:03:03.176Z · LW · GW

Can someone provide the full text of this?

Slippery slope arguments (SSAs) have a bad philosophical reputation. They seem, however, to be widely used and frequently accepted in many legal, political, and ethical contexts. Hahn and Oaksford (2007) argued that distinguishing strong and weak SSAs may have a rational basis in Bayesian decision theory. In this paper three experiments investigated the mechanism of the slippery slope showing that they may have an objective basis in category boundary re-appraisal.

Also this:

...he argued that the very reasons that can make SSAs strong arguments mean that we should be poor at abiding by the distinction between good and bad SSAs, making SSAs inherently undesirable. We argue that Enoch’s meta-level SSA fails on both conceptual and empirical grounds.

Comment by lessdazed on Using degrees of freedom to change the past for fun and profit · 2012-03-09T22:51:02.642Z · LW · GW

Detecting implausible social network effects in acne, height, and headaches: longitudinal analysis

Comment by lessdazed on How to Fix Science · 2012-03-08T04:12:00.758Z · LW · GW

depending on how those techniques are applied,

But as far as I know there's nothing in Cox's theorem or the axioms of probability theory or anything like those that says I had to use that particular prior

The way I interpret hypotheticals in which one person is said to be able to do something other than what they will do, such as "depending on how those techniques are applied," is that all of the person's priors are to be held constant in the hypothetical. This is the most charitable interpretation of the OP, because the claim is that, under Bayesian reasoning, results do not depend on how the same data is applied. That claim seems obviously wrong if the OP is interpreted as discussing results reached after decision processes with identical data but differing priors, so it is more interesting to talk about agents that differ in other respects, such as their likelihood-generating models, than to talk about agents with different priors.

I could just as easily have used a different...likelihood model, and gotten a totally different posterior that's nonetheless legitimate.

Can you give an example?

Comment by lessdazed on How to Fix Science · 2012-03-07T21:07:50.502Z · LW · GW

Cigarette smoking: an underused tool in high-performance endurance training

In summary, existing literature supports the use of cigarettes to enhance endurance performance through weight loss and increased serum hemoglobin levels and lung volumes.

musical contrast and chronological rejuvenation

...people were nearly a year-and-a-half younger after listening to “When I’m Sixty-Four” (adjusted M = 20.1 years) rather than to “Kalimba” (adjusted M = 21.5 years), F(1, 17) = 4.92, p = .040.

Effects of remote, retroactive intercessory prayer on outcomes in patients with bloodstream infection: randomised controlled trial

Length of stay in hospital and duration of fever were significantly shorter in the intervention group than in the control group (P=0.01 and P=0.04, respectively)...Remote, retroactive [emphasis added] intercessory prayer said for a group is associated with a shorter stay in hospital and shorter duration of fever in patients with a bloodstream infection and should be considered for use in clinical practice.

Comment by lessdazed on How to Fix Science · 2012-03-07T20:51:46.577Z · LW · GW

depending on how those techniques are applied, can lead to different results when analyzing the same data

But two Bayesian inferences from the same data can also give different results. How could this be a non-issue for Bayesian inference while being indicative of a central problem for NHST?

If the OP is read to hold constant everything not mentioned as a difference, that includes the prior beliefs of the person doing the analysis, as against the hypothetical analysis that wasn't performed by that person.

Does "two Bayesian inferences" imply it is two different people making those inferences, with two people not possibly having identical prior beliefs? Could a person performing axiom-obeying Bayesian inference reach different conclusions than that same person hypothetically would have had they performed a different axiom-obeying Bayesian inference?

Comment by lessdazed on Help! Name suggestions needed for Rationality-Inst! · 2012-03-05T03:43:59.430Z · LW · GW

PASADENA, Calif. - A proposed Discovery mission concept led by NASA's Jet Propulsion Laboratory, Pasadena, Calif., to investigate the formation and evolution of terrestrial planets by studying the deep interior of Mars now has a new name, InSight.

Comment by lessdazed on Is Sunk Cost Fallacy a Fallacy? · 2012-02-11T00:34:06.138Z · LW · GW

Is the sunk cost fallacy a fallacy?

I ask myself about many statements: would this have the same meaning if the word "really" were inserted? As far as my imagination can project, any question that can have "really" inserted into it without changing its meaning is at least somewhat a wrong question, one based on an unnatural category or an argument by definition.

If a tree falls in the forest, does it make a sound? --> If a tree falls in the forest, does it really make a sound?

Is Terry Schiavo alive? --> Is Terry Schiavo really alive?

Is the sunk cost fallacy a fallacy? --> Is the sunk cost fallacy really a fallacy?

Comment by lessdazed on [SEQ RERUN] Leave a Line of Retreat · 2012-02-11T00:18:50.928Z · LW · GW

When you surround an army, leave an outlet free. Do not press a desperate foe too hard.

The Art of War

Comment by lessdazed on Rationality Quotes February 2012 · 2012-02-11T00:00:57.009Z · LW · GW

Game theory won out over good wishes.

--Burning Man organizers

Comment by lessdazed on Waterfall Ethics · 2012-01-31T00:57:00.467Z · LW · GW

Not that it's bad, for that would be confusing levels, even if "shit" were being used in its usual figurative sense. For example, I would consider some true things said that are self-harmful violations of social norms "shit."

Like others, I read it from a link on LW, I think... thanks for posting.

Comment by lessdazed on Waterfall Ethics · 2012-01-30T21:35:55.005Z · LW · GW

Shit and Bullshit Rationalists Don't Say:

"I've read more papers by Scott Aaronson than just the one." "Which one?" (Both of these.)

Quantity of experience: brain-duplication and degrees of consciousness Nick Bostrom

Comment by lessdazed on Help! Name suggestions needed for Rationality-Inst! · 2012-01-30T21:28:24.169Z · LW · GW

Decision Tree: Roots of Knowledge.

Decision Tree: Applied Wisdom.

Decision Tree: Our mascot is a thinly veiled rip-off of an Ent! Sweet!

Comment by lessdazed on Help! Name suggestions needed for Rationality-Inst! · 2012-01-30T20:51:57.426Z · LW · GW

My favorite so far.

Comment by lessdazed on Welcome to Less Wrong! (2012) · 2012-01-30T01:55:10.677Z · LW · GW

a future, more evolved version of myself.

I'm offended!

Just kidding.

Comment by lessdazed on There's learned philosophers but not philosophical experts · 2012-01-30T01:45:42.812Z · LW · GW

The ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else. Practical men, who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back.

--John Maynard Keynes

Comment by lessdazed on The problem with too many rational memes · 2012-01-29T00:19:13.506Z · LW · GW

Nemeth … divided two hundred and sixty-five female undergraduates into teams of five. … The first set of teams got the standard brainstorming spiel, including the no-criticism rules. Other teams were told … “Most studies suggest that you should debate and criticize each other’s ideas.” The rest received no further instructions. …The brainstorming groups slightly outperformed the groups given no instructions, but teams given the debate condition were the most creative by far. On average, they generated twenty per cent more ideas. And after the teams disbanded, … brainstormers and the people given no guidelines produced an average of three additional ideas; the debaters produced seven. …

“There’s this Pollyannaish notion that the most important thing to do when working together is stay positive and get along, to not hurt anyone’s feelings. … Well, that’s just wrong.”

Did they notice that, by telling people what was optimal, they were possibly changing the amount of offense taken and feelings hurt by criticism? They told people that criticism was a duty, such that they probably wouldn't take it as personally, and they found that the group was more creative. But did they measure the amount or nature of criticism given in the groups?

There are many reasons why such a rule could inhibit creativity. I wonder how important each factor is.

Comment by lessdazed on The Personality of (great/creative) Scientists: Open and Conscientious · 2012-01-29T00:06:20.961Z · LW · GW

That's advice for the skimming/reading/intensive study of 1,000 papers to get their knowledge, balancing completeness, depth, breadth, and the like.

I want advice on summarizing 100 individual articles, each one fairly completely read, so that many other people can do that and share the results with each other. The thing you do best, rather than the thing lukeprog does best.

Comment by lessdazed on Trust · 2012-01-28T23:57:11.451Z · LW · GW

deciding who to trust

This can be unpacked/dissolved.

First, I think of people/situation pairs rather than people. Specific situations influence things so much that one loses a lot by trying to think of people more abstractly; there is the danger of the fundamental attribution error.

Some people/situations are wrong more often than others are. Some people/situations lie more to others than others do. Some people/situations lie more to themselves than others do.

Some are more concerned with false positives, others with false negatives.

I also tend to think of people as components of decision making processes, as well as comprised of analogous decision making processes. Science takes advantage of this through the peer review process, which pits credulous humans against each other in attempts to prove each other's ideas wrong, and it ultimately produces a body of knowledge each piece of which is unlikely to be false. It is the best input for anyone who instead cares about something slightly different, such as what is most likely to be true when false positives and false negatives would be similarly dangerous.

This is the source of my respect for Scott Adams (creator of Dilbert), which I've noticed is surprisingly prevalent, if irregular, among intelligent people I respect who have no particular reason to connect with anything having to do with office work or cubicles. It's something that people either "get" or "don't get," like the orange joke. The man is an incomplete thinker, and many hundreds of millions of people are better decision makers than he is, but as a member of a decision making group few could better come up with creative, topical, unique approaches to problems. Pair him with an intelligent, moderately critical mind and one would have a problem solving group better than a group of two moderately intelligent and creative people.

Some people/situations produce more signal than others, others a better signal/noise ratio, some only advise when they are confident in their advice, some advise whenever they think it would have marginal gain, etc.

If you have an important decision to make, ask how to make the decision, not who should make it. Set up a person/situation network, even if the only person to trust is yourself. (I have seen some research on patterns of decisions better made on a full bladder than an empty one, and vice versa.) There is no you; there is only a you/situation (e.g. bladder) pair. Nothing corresponds to you/(no bladder situation, empty, full, or intermediate)! Likewise for decisions that differ depending on whether or not your facial muscles are in the shape of a smile, etc.

Also, for every aspect of "trust," beliefs are properly probabilistic: the chances that the person has good intentions, understands how you interpreted their words and actions, knows the right answer, knows they know the right answer, etc.

If you have a specific question you want advice on, asking about it most abstractly to avoid political associations was a great first move. Yet the abstract question is an imprecise summary and function of specific possible worlds. I think continuous rephrasing from more to less abstract might work well, as one could select from among variously abstract advice at different levels of political contamination and idiosyncratic specificity. Going in the other direction wouldn't work as well, since the political content revealed early would taint later responses.

Comment by lessdazed on The Personality of (great/creative) Scientists: Open and Conscientious · 2012-01-28T23:22:00.846Z · LW · GW

I think it's time for a meta-post in which gwern discusses summarizing articles and gives advice.

eminent scientists tend to be

Base rate?

Comment by lessdazed on Help! Name suggestions needed for Rationality-Inst! · 2012-01-28T19:36:57.506Z · LW · GW

"Advanced Sanity" matches a strong comparative qualifier to a basic trait. While "sanity" has problems, as mentioned below, I think the phrase derives much of its power from its underlying pattern, which can be used in other suggestions.

Comment by lessdazed on Help! Name suggestions needed for Rationality-Inst! · 2012-01-28T10:56:53.983Z · LW · GW

The Anti-Zombie Conspiracy

Comment by lessdazed on Help! Name suggestions needed for Rationality-Inst! · 2012-01-28T06:16:58.961Z · LW · GW

Bell?