Comments

Comment by manuelg on Something to Protect · 2008-01-30T20:04:00.000Z · LW · GW

I get an uncomfortable feeling, Eliezer, that this work is ultimately meant to lead to a mechanism to attract:

  • people of libertarian bent

  • people interested in practically unbounded longevity of consistent, continual consciousness

and also to a mechanism for tarring people disinclined toward those two goals with the label "sentimentally irrational".

Rationality to me is simply a tool. I would have absolutely no confidence in it without the ongoing experience of applying it iteratively and successfully to specific goals.

And of course, no matter how much you profess your love of mere usefulness, you should never actually end up deliberately believing a useful false statement.

I haven't yet needed to "deliberately believe a useful false statement" (to my knowledge), but I wouldn't be particularly disturbed if I tried it and found it repeatedly successful. Another tool for my tool belt.

Right now I am having some success modeling the world, over the conditions I care about, with:

  • scientific laws (including information theory)

  • mathematics

  • groups of causality graphs, for the same phenomena, in competition

  • specific causality graphs

  • naive Bayesian

  • straightforward use of Bayes' theorem

  • frequentist probability and statistics

  • discrete probability

  • logic

(The causality graphs considered can include relations defined by simulation, and by all the other tools listed. Whatever it is, shove it into a causality graph. I haven't found it useful to restrict what goes into a causality graph, particularly if the graphs are forced to compete over their ability to be consistent with past data and to predict future results.)

(The list above is roughly ordered from more applicable to specific situations to less applicable. I attach the lowest confidence to any specific causality graph, and more confidence to the graphs in aggregate, in competition. I attach more confidence to frequentist analysis over good data than to Bayesian analysis, but Bayesian methods are applicable in more circumstances.)
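
To make "competition" concrete, here is a minimal sketch of what I mean (ordinary regression models stand in for causality graphs; the data and scoring rule are illustrative assumptions of mine): candidate models are fit on past data and scored on their ability to predict held-out future data.

```python
import numpy as np

# Minimal sketch: candidate models "compete" by being fit on past data
# and scored on held-out future data. Models and data are stand-ins.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y = 2.0 * x + 1.0 + rng.normal(0.0, 1.0, x.size)  # hypothetical process

train, test = slice(0, 150), slice(150, 200)
candidates = {
    "constant": lambda xs, ys: (lambda q: np.full_like(q, ys.mean())),
    "linear":   lambda xs, ys: np.poly1d(np.polyfit(xs, ys, 1)),
    "cubic":    lambda xs, ys: np.poly1d(np.polyfit(xs, ys, 3)),
}
for name, fit in candidates.items():
    model = fit(x[train], y[train])
    mse = float(np.mean((model(x[test]) - y[test]) ** 2))
    print(f"{name}: held-out MSE = {mse:.2f}")
```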

I have to deal with finite resource allocation in a manufacturing plant. Where else to use these tools? Possibly in the opportunity that comes from celebrating the differences in all the people working in the plant.

I am often confused by your writing, because I don't see where you have "skin in the game". Where are you exercising your tools of rationality?

Is it all just to make the world slightly more hospitable to libertarians interested in life extension? (No negative judgment if that is the case.)

(Sorry to beg your indulgence of a long post)

Comment by manuelg on The "Intuitions" Behind "Utilitarianism" · 2008-01-28T19:14:03.000Z · LW · GW

3^^^3?

http://www.overcomingbias.com/2008/01/protecting-acro.html#comment-97982570

A 2% annual return adds up to a googol (10^100) return over 12,000 years

Well, just to point out the obvious, there aren't nearly that many atoms within a 12,000 light-year radius.
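
(Checking the quoted arithmetic in logarithms, just as a sanity check:)

```python
import math

# Does 2% annual growth over 12,000 years really reach a googol?
log10_total = 12000 * math.log10(1.02)
print(log10_total)  # ~103.2, so 1.02**12000 ≈ 10**103: past a googol
```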

Robin Hanson didn't get very close to 3^^^3 before you set limits on his use of "very very large numbers".

Secondly, you refuse to put "death" on the same continuum as "mote in the eye", but behave sanctimoniously (example below) when people refuse to put "50 years of torture" on the same continuum as "mote in the eye".

Where music is concerned, I care about the journey.

When lives are at stake, I shut up and multiply.

I assert the use of 3^^^3 in a moral argument is to avoid the effort of multiplying. Demonstration: what is 3^^^3 times 6? What is 3^^^3 times a trillion to the trillionth power?
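For anyone unfamiliar with the notation, here is a sketch of Knuth's up-arrows (my own illustrative code, not anything from the post); it shows why multiplying 3^^^3 by anything writable is beside the point.

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow a ↑^n b: n=1 is plain exponentiation."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 3^3  = 27
print(up_arrow(3, 2, 3))  # 3^^3 = 3**27 = 7625597484987
# 3^^^3 = 3^^(3^^3) is a power tower of 7,625,597,484,987 threes.
# It cannot be evaluated; multiplying it by 6, or by a trillion to the
# trillionth power, leaves the same kind of incomprehensible number.
```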

Where am I going with this? I am very interested in improving my own personal morality and rationality. I am profoundly uninterested in passing judgment on anyone else's morality or rationality.

I assert that the use of 3^^^3 in a moral argument has nothing to do with someone improving their own personal morality or rationality. It has everything to do with trying to shame someone else into admitting that they aren't a "good little rational moralist".

My comment is an attempt to steer the thread of your (very interesting and well written) posts towards topics that will help me improve my own personal morality and rationality. (I admit that I perceive no linkage between the "wheel in my hand" and the "rudder of the ship", so I doubt my steering will work.)

Comment by manuelg on Rationality Quotes 1 · 2008-01-16T19:18:28.000Z · LW · GW

It appears to be a quite general principle that, whenever there is a randomized way of doing something, then there is a nonrandomized way that delivers better performance but requires more thought.

If I was a casino owner, I would not purchase a non-randomized slot-machine or a non-randomized roulette wheel. (I might if I was running an underground gaming room.)

Two uses of randomness:

  • Have to express a sequence, and need that sequence to have minimal information content about inner state.

  • Don't want to be doomed by history; always want to maintain a tiny chance of success.

What other uses am I missing?
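
As a sketch of the second use (this is my own illustrative code, borrowing the epsilon-greedy rule from the bandit literature): a small random exploration rate keeps a bad early history from permanently locking out a better option.

```python
import random

def epsilon_greedy(estimates, epsilon=0.05):
    """Pick the best-looking option, but explore a little at random."""
    if random.random() < epsilon:
        return random.randrange(len(estimates))  # explore: defy history
    return max(range(len(estimates)), key=lambda i: estimates[i])  # exploit

print(epsilon_greedy([0.2, 0.9, 0.4]))  # usually 1, occasionally a retry
```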

Comment by manuelg on On Expressing Your Concerns · 2007-12-27T18:58:33.000Z · LW · GW

The mere fact that you asked that question makes me a little worried about you, manuel.

Uh, thx 4 ur concern. Kthxbye.

I call myself a liberal. Not because I act or think like most self-described liberals, but because the simple word "liberal" sends waves of debilitating revulsion through many people. Precisely the people whom I identify as having a low probability of sustaining rational thought.

I am a liberal, but I am profoundly uninterested in coercing change in the beliefs or behavior of others. I find it a full-time job to coerce change in the beliefs and behavior of myself, consistent with goals, values, responsibilities, and personal roles I choose for myself. After working on myself, there is no time or energy left to try to affect others.

Frankly, I have zero confidence in any program of coercing change in the beliefs or behavior of others, regardless of the agency or the means. The specific means always overtake whatever was the initial positive goal. And the outcome becomes waste, sin, or cruelty.

That is what I find puzzling about "Overcoming Bias: The Blog Website". It is interesting when it discusses self-disciplines that are conducive to rationality. It is puzzling when it discusses irrationality of others. Because there is no agency or means to force others to be rational.

Others delight in irrationality. Full stop.

Comment by manuelg on On Expressing Your Concerns · 2007-12-27T05:01:19.000Z · LW · GW

But isn't this just another failure mode of groups working together, which we already know is far from optimal?

Like so many of the other failure modes of groups (stupid but loud people having an over-sized influence, smart but shy people having no influence, stopping exploration of the problem/solution space way too early, couching everything in weasel-words, etc.), you can do so much better with an iterative process:

  1. Quick brainstorming

  2. Written summary of everything said during brainstorming

  3. All participants work on sub-problems on their own.

  4. All participants present individual findings before whole group.

  5. Repeat (solo-work becoming less about research and more about production as time goes on)

This gets to the heart of one thing I don't understand about "Overcoming Bias: The Blog-site". Is the idea to stamp out bias in others, or is the idea to prevent bias in ourselves?

The only people who have a chance of "overcoming bias" are the ones striving for a goal under significant constraints, because they are the only ones willing to shoulder the burden of consistent rationality.

Comment by manuelg on Guardians of the Truth · 2007-12-16T01:43:56.000Z · LW · GW

The perfect age of the past, according to our best anthropological evidence, never existed.

Minor point: in defense of the esteemed Taoist, I would argue Chuang Tzu was speaking of the time when humans lived in small groups of hunter-gatherers, based on my understanding of Jared Diamond's "Agriculture: The Worst Mistake in the History of the Human Race".

Back on the point of your post. I am not ashamed to say I listen to Zig Ziglar tapes (I probably should be). His folksy way of putting it is "Do you want to be a learner, or learned?" With "learned" implying that you have mastered a system of thought perfectly suited for a receding past.

Comment by manuelg on Argument Screens Off Authority · 2007-12-14T01:21:54.000Z · LW · GW

Apropos of nothing: you have a lot to say about the discrete Bayesian. But I would argue that, when talking about the quality of manufacturing processes, one often does best with continuous distributions.

The distributions that my metal-working machines manifest (over the dimensions under tolerance that my customers care about) are the Gaussian normal, the log normal, and the Pareto.

When the continuous form of Bayesian analysis is discussed, the discussion always seems to turn to the Beta distributions.

I have tried reasoning with the lathe, the mill, and the drill presses, to begin exhibiting the Beta, but they just ignore my pleadings, and spit hot metal chips at me.

The standard frequentist approaches seem like statistical theater. So I am inclined to explore other approaches.
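To be fair to the Bayesians, the Beta is just the conjugate prior for coin-flip data; for Gaussian measurements there is a closed-form normal-normal update. A minimal sketch, with made-up numbers for a hypothetical lathe dimension (none of this is real shop data):

```python
def normal_posterior(prior_mean, prior_var, data, noise_var):
    """Conjugate update for a Gaussian mean with known measurement noise."""
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * (prior_mean / prior_var + sum(data) / noise_var)
    return post_mean, post_var

# Hypothetical: nominal 25.000 mm, three measured parts, 2-micron gauge noise.
mean, var = normal_posterior(25.000, 0.010**2,
                             [25.003, 25.001, 25.004], 0.002**2)
print(mean, var)  # posterior belief about the process mean
```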

Comment by manuelg on Argument Screens Off Authority · 2007-12-14T00:58:29.000Z · LW · GW

There are so many different observations bearing on global warming...

Your liberal bias is showing here, Eliezer. It is not "global warming". The earth is becoming slightly "frigidity impaired".

Comment by manuelg on Every Cause Wants To Be A Cult · 2007-12-12T19:08:39.000Z · LW · GW

"Resist against being human" is an interesting choice of words. Surely, most people would not see that as a goal worth pursuing.

Nominull hit the nail on the head, Eliezer. What are the human qualities worth amplifying, and what are the human qualities worth suppressing?

For myself, "cultishness" is definitely a human group failure mode.

To others, maybe "cultishness" is a comfortable state of being, like relaxing in a warm bath. (Partake in the vivid imagery of a group of nude people in a drowsy state soaking in their collective body-temperature urea solution...)

I assert that the choice of which elements of humanity are worthy, and which are unworthy, is completely personal and subjective. I would be interested in seeing an argument that the distinction is objective. Is there an objective criterion for which elements of humanity are worthy, and which are unworthy?

A different point: You really demonstrate the value of blogging and independently developing a stable of ideas, and then being able to reference those ideas with terminology backed up by a hyperlink. I am constantly rereading your posts as you link back to them, and it is interesting and profitable.

Comment by manuelg on Evaporative Cooling of Group Beliefs · 2007-12-08T00:51:11.000Z · LW · GW

Have you read Bion's "Experiences in Groups"? He was an English Freudian, so he was extremely passive while observing group behavior, which is fine, because he was also careful to record what was happening.

I am less satisfied with his analysis, because, as a typical Freudian, he always has ad-hoc reasons why any piece of evidence (or its exact opposite) perfectly confirms his theories. Absolutely impossible to falsify.

What I took from it was that, after you establish a concrete, positive goal for a group's interactions, for each and every sub-element of the goal, you can find some element of the human personal dynamic, or the human group dynamic, that will work against it.

It is a strong statement along the lines of Murphy's Law: "Whatever can go wrong, will go wrong". If it is a sub-element of a concrete, positive goal, there will be, in opposition, some element of the human personal dynamic, or the human group dynamic. It is a strong statement you can use to predict failure modes of your particular work group.

  • If great progress is being made by the group, the individual identities of the members will lessen in importance. So people will assert their individual identities through disruption, gaining attention.

  • If progress of the group demands full attention on the final objective, the group will become paranoid and invent some internal or external bug-a-boo to focus on instead.

My personal take, informed by Buddhism, is that this is not necessarily a bad thing. There should be some push-back on the goals of working groups. 99 out of 100 ideas turn out to be terrible ideas, and so it is a good thing they die from the irrational failure modes of the human personal dynamic or the human group dynamic. If it is a good idea, then it is worth taking care to defend it from assaults by human irrationality.

And also, attempting to attack head-on any particular irrational failure mode will only make it stronger. For example, a troll will never have so many defenders as when the group leaders focus in to remove him. Better to use Jujutsu. (Trolls are best countered with neglect leading to boredom.)

Bion's "Experiences in Groups" will give a good sample of failure modes. You can attempt to skillfully steer around them, and keep the group working on the positive goal.

Yours is an interesting idea, keeping a "token" troll around. I would make a rule: any discipline against a troll will be matched by identical discipline against anyone who engages that troll, even in attack. "Feeding the troll" will have precisely the same sanctions as trolling itself.

Comment by manuelg on Fake Utility Functions · 2007-12-06T20:49:58.000Z · LW · GW

You shouldn't expect to be able to compress a human morality down to a simple utility function, any more than you should expect to compress a large computer file down to 10 bits.

I think it is a helpful exercise, in trying to live "The Examined Life", to attempt to compress a personal morality down to the smallest number of explicitly stated values.

Then, pay special attention to exactly where the "compressed morality" is deficient in describing the actual personal morality.

I find, often but not always, that it is my personal morality that would benefit from modification, to make it more like the "compressed morality".

Comment by manuelg on Uncritical Supercriticality · 2007-12-05T00:31:36.000Z · LW · GW

From what I've read, virtually nobody in China is a communist now, just as people had stopped believing in the last days of the Soviet Union. In North Korea or among the rebels of Nepal there are still true-believers, but I don't think there are as many as there are Christians.

I find it useful to distinguish between the Chinese and the Swedish. I call the Chinese form of government "communism", and I call the Swedish form of government "socialism". If they are all sub-tribes of "Canadians" to you, then you don't prize distinctions as much as I do.

There are certainly more "self-reported communists" than there are "humans whose daily actions are informed by the example of Jesus Christ".

...All I need are a few dozen "self-reported communists" to prove that...

Comment by manuelg on Uncritical Supercriticality · 2007-12-04T17:49:06.000Z · LW · GW

Minor point. It is peculiar to talk about the "death of communism" when there are about as many communists in the world as there are Christians.

"Death of the Purported Worldwide Worker's Communist Revolution" is closer to the truth (and a mouthful).

How about "Death of Worldwide Revolutionary Communism"?

Comment by manuelg on Evolving to Extinction · 2007-11-17T01:05:42.000Z · LW · GW

That's what evolution is defined as -- changes in gene frequency.

And how does this relate to the jibber-jabber that is Darwin's "On the Origin of Species"? I can't find the word "gene" in the index.

I wouldn't use the word "evolve" to describe the death of a species in a handful of generations. I wouldn't use the word "evolve" to describe any process that begins and then terminates only over a handful of generations.

Instead, in this particular case, I would describe it as a failure mode of the species' genetic mechanisms.

I think "evolving to extinction" is an unhelpful and misleading way to describe this particular possible failure mode of a species' genetic mechanisms. (I don't doubt the phenomena couldn't or doesn't exist.)

Comment by manuelg on Evolving to Extinction · 2007-11-16T21:51:56.000Z · LW · GW

But the male chromosome isn't competing against the female chromosome. The mutant male chromosome is competing against the unmutant male chromosome. The mutant male chromosome is fitter, rises to fixation at its allele location, and in one more generation the species as a whole goes extinct.

I would still be loath to call it "evolved to death". Where is the "evolution"? You are describing an event that would wipe out a species in an instant (considered on the time scales that evolution acts on). Species die out instantaneously (on evolutionary time scales) for many reasons.

How else can I respond to an event that takes "one more generation" to kill the whole species? Nothing "evolved"; the species died because the evolved machinery of genes didn't preclude such a mutation from killing the species in a handful of generations. Too bad, so sad. If there were an "evolution fairy", she would have designed a better machinery of genes. But if the phrase "evolved to death" is meant precisely to describe events that take place instantaneously on an evolutionary time scale, I have to describe that phrase as misleading.

it is still possible for a species to evolve to extinction directly.

Favor me with another example. I found the other examples lacking.

From the fog of my misunderstanding, I am surprised you would use the phrase "evolved to death" without it immediately being followed by qualifications and clarifications. I look forward to you removing this fog away from my person.
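
So I can be concrete about what I am objecting to, here is a toy simulation of the scenario as I understand it (all parameters and simplifications are mine): a "driving" male chromosome whose carriers sire only carrier sons spreads through the males, and extinction follows once the females run out.

```python
import random

def simulate(pop=1000, driver_freq=0.01, generations=100):
    """Toy model: carrier males sire two carrier sons per mating;
    ordinary males sire one son and one daughter per mating."""
    females = pop // 2
    carriers = int((pop // 2) * driver_freq)  # driver males
    normals = pop // 2 - carriers             # ordinary males
    for g in range(generations):
        males = carriers + normals
        if females == 0 or males == 0:
            return g                          # extinct: no mating pairs
        matings = min(females, males)
        carrier_matings = sum(
            1 for _ in range(matings) if random.random() < carriers / males
        )
        carriers = 2 * carrier_matings          # all-male carrier broods
        rest = 2 * (matings - carrier_matings)  # half sons, half daughters
        normals, females = rest // 2, rest - rest // 2
    return None

print(simulate())  # the driver fixes, females vanish, extinction follows
```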

Comment by manuelg on Evolving to Extinction · 2007-11-16T19:01:36.000Z · LW · GW

"Evolved to Extinction"? I would be personally loath to use that phrase.

"Evolved to Extinction" because female mice become rarer and rarer? "Female mice become rarer and rarer" is another way of saying at least 50% of all the genes in all the female individuals will make it to the next generation. Which is pretty damn good odds. Consider all the mutations in all those male individuals that will never get a chance to make it to the next generation, because the male individuals will never even get a chance to get close to a female, much less mate with one.

Virus "evolved to extinction" by killing the host before the host has a chance to spread the virus to another host? If such a mutation makes it past a few generations, and then later dies out, I would describe it as "having the chair pulled out from under it" because the host density, that previously allowed it replicate, changed. The host density went down (could be because people dropped dead faster than they could reproduce, but maybe it changed for another reason) and extinction followed.

"Evolved to Extinction" strikes me like saying "because you walked northbound to step onto a southbound train, you really were walking southbound to begin with".

Also, unless something evolves the incredible ability to survive the heat-death of the universe, every species will "evolve to extinction".

I am really surprised you used the phrase "evolved to extinction".

Comment by manuelg on Terminal Values and Instrumental Values · 2007-11-15T20:56:50.000Z · LW · GW

The very first "compilation" I would suggest to your choice system would be to calculate the "Expected Utility of Success" for each Action.

1) It is rational to be prejudiced against Actions with a large difference between their "Expected Utility of Success" and their "Expected Utility", even if such an action might have the highest "Expected Utility". People with a low tolerance for risk (constitutionally) would find the possible downside of such actions unacceptable.

2) Knowing the "Expected Utility of Success" gives information for future planning if success is realized. If success might be "winning a Hummer SUV in a raffle in December", it would probably be irrational to construct a "too small" car port in November, even with success being non-certain.

Eliezer, I have a question.

In a simple model, how best to avoid the failure mode of taking a course of action with an unacceptable chance of leading to catastrophic failure? I am inclined to compute separately, for each action, its probability of leading to a catastrophic failure, and immediately exclude from further consideration those actions that cross a certain threshold.

Is this how you would proceed?
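
Here is the rule I have in mind, as a minimal sketch (the names and numbers are hypothetical): screen out any action whose catastrophe probability crosses the threshold, then maximize expected utility over the survivors.

```python
def choose(actions, max_p_catastrophe=0.01):
    """Exclude actions above the catastrophe threshold, then maximize EU."""
    safe = [a for a in actions if a["p_catastrophe"] <= max_p_catastrophe]
    return max(safe, key=lambda a: a["expected_utility"], default=None)

actions = [
    {"name": "bold",     "expected_utility": 10.0, "p_catastrophe": 0.05},
    {"name": "moderate", "expected_utility":  6.0, "p_catastrophe": 0.005},
    {"name": "timid",    "expected_utility":  2.0, "p_catastrophe": 0.0},
]
print(choose(actions))  # "moderate": best utility among acceptable risks
```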

Comment by manuelg on Thou Art Godshatter · 2007-11-13T19:57:25.000Z · LW · GW

Godshatter? What I may or may not have shat out of my divine anus is of no concern of yours.

Signed, God (big bearded guy in the sky)

Comment by manuelg on The Tragedy of Group Selectionism · 2007-11-07T19:11:43.000Z · LW · GW

Eliezer -

Would "innovation" in genetic error correction, or changes to the proteins responsible for allowing greater or fewer mutations in DNA...

...would such "meta-changes" (changes to the mechanisms of DNA replication) be the basis for group selection?

If different groups had slightly different rules for their DNA replication, intuitively I could see that their competition would be best understood as group selection.

Consider two groups, each founded by a single mother pregnant with a son, ending up with slightly different rules for their DNA replication.

We might expect to see this if some population was regularly exposed to absolutely devastating conditions, where a population would often have to recover from a single individual: a mother pregnant with a son.

If not this, how did "innovations" to DNA error correction, and selection among the different rules about how many mutations to allow in DNA copying, even arise in the first place?