Rational Me or We?

post by RobinHanson · 2009-03-17T13:39:29.073Z · LW · GW · Legacy · 156 comments

Martial arts can be good training for ensuring your personal security, if you assume the worst about your tools and environment.  If you expect to find yourself unarmed in a dark alley, or fighting hand to hand in a war, it makes sense.  But most people do a lot better at ensuring their personal security by coordinating to live in peaceful societies and neighborhoods; they pay someone else to learn martial arts.  Similarly, while "survivalists" plan and train to stay warm, dry, and fed given worst-case assumptions about the world around them, most people achieve these goals by participating in a modern economy.

The martial arts metaphor for rationality training seems popular at this website, and most discussions here about how to believe the truth seem to assume an environmental worst case: how to figure out everything for yourself given fixed info and assuming the worst about other folks.  In this context, a good rationality test is a publicly-visible personal test, applied to your personal beliefs when you are isolated from others' assistance and info.  

I'm much more interested in how we can join together to believe truth, and it actually seems easier to design institutions which achieve this end than to design institutions that test isolated individuals' general tendencies to discern truth.  For example, with subsidized prediction markets, we can each specialize on the topics where we contribute best, relying on market consensus on all other topics.  We don't each need to train to identify and fix each possible kind of bias; each bias can instead have specialists who look for where that bias appears and then correct it. 
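
[Illustrative aside, not part of the original post: one concrete way to run such a subsidized market is Hanson's own logarithmic market scoring rule (LMSR). The minimal Python sketch below uses invented function names and assumes a simple two-outcome market; the "subsidy" is just the sponsor's bounded worst-case loss as the automated market maker.]

```python
import math

def lmsr_cost(q, b):
    """Market maker cost function: C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_price(q, b, i):
    """Instantaneous price of outcome i: the market's consensus probability."""
    total = sum(math.exp(qj / b) for qj in q)
    return math.exp(q[i] / b) / total

def trade_cost(q, b, i, shares):
    """What a trader pays to move the market: C(q') - C(q) after buying `shares` of outcome i."""
    q_after = list(q)
    q_after[i] += shares
    return lmsr_cost(q_after, b) - lmsr_cost(q, b)

# Two-outcome market with liquidity parameter b; the sponsor's subsidy is
# its worst-case loss, bounded by b * ln(number of outcomes).
q, b = [0.0, 0.0], 100.0
print(lmsr_price(q, b, 0))      # 0.5: no information in the market yet
print(trade_cost(q, b, 0, 50))  # ~28.1: cost of buying 50 shares of outcome 0
```

[Each trade nudges the consensus price, and a trader profits only where they know better than the current price; that is the division of labor the paragraph above describes.]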

Perhaps martial-art-style rationality makes sense for isolated survivalist Einsteins forced by humanity's vast stunning cluelessness to single-handedly block the coming robot rampage.  But for those of us who respect the opinions of enough others to want to work with them to find truth, it makes more sense to design and field institutions which give each person better incentives to update a common consensus.

156 comments

Comments sorted by top scores.

comment by Sideways · 2009-03-17T18:26:22.388Z · LW(p) · GW(p)

Robin Hanson has identified a breakdown in the metaphor of rationality as martial art: skillful violence can be more or less entirely deferred to specialists, but rationality is one of the things that everyone should know how to do, even if specialists do it better. Even though paramedics are better trained and equipped than civilians at the scene of a heart attack, a CPR-trained bystander can do more to save the victim's life, because the paramedics take time to arrive. Prediction markets are great for governments, corporations, or communities, but if an individual's personal life has gotten bad enough to need the help of a professional rationalist, a little training in "cartography" could have nipped the problem in the bud.

To put it another way, thinking rationally is something I want to do, not have done for me. I would bet that Robin Hanson, and indeed most people, respect the opinions of others in proportion to the extent that they are rational. So the individual impulse toward learning to be less wrong is not only a path to winning, but a basic value of a rationalist community.

Replies from: mark_spottswood, ciphergoth
comment by mark_spottswood · 2009-03-17T20:51:10.429Z · LW(p) · GW(p)

One can think that individuals can profit from being more rational, while also thinking that improving our social epistemic systems or participating in them actively will do more to increase our welfare than focusing on increasing individual rationality.

comment by Paul Crowley (ciphergoth) · 2009-03-17T23:49:28.789Z · LW(p) · GW(p)

Another thing that you must do for yourself is politics; sadly EY is right that we can't start discussing that here.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-17T17:48:40.558Z · LW(p) · GW(p)

Yes, it would be silly to think of ourselves as isolated survivalists in a society where so many people are signed up for cryonics, where Many-Worlds was seen as retrospectively obvious as soon as it was proposed, and no one can be elected to public office if they openly admit to believing in God. But let us be realistic about which Earth we actually live in.

I too am greatly interested in group mechanisms of rationality - though I admit I put more emphasis on individuals; I suspect you can build more interesting systems out of smarter bricks. The obstacles are in many ways the same: testing the group, incentivizing the people in it. In most cases if you can test a group you can test an individual and vice versa.

But any group mechanism of that sort will have the character of a band of survivalists getting together to grow carrots. Prediction markets are lonely outposts of light in a world that isn't so much "gone dark" as having never been illuminated to begin with; and the Policy Analysis Markets were burned by a horde of outraged barbarians.

We have always been in the Post-Apocalyptic Rationalist Environment, where even scientists and academics are doing it wrong and Dark Side Epistemology howls through the street; I don't even angst about this, I just take it for granted. Any proposals for getting a civilization started need to take into account that it doesn't already exist.

Replies from: RobinHanson
comment by RobinHanson · 2009-03-17T23:02:05.371Z · LW(p) · GW(p)

Sounds like you do think of yourself as an isolated survivalist in a world of aliens with which you cannot profitably coordinate. Let us know if you find those more interesting systems you suspect can be built from smarter bricks.

Replies from: Eliezer_Yudkowsky, ciphergoth
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-18T00:28:58.291Z · LW(p) · GW(p)

It's pretty hard to be isolated in a world of six billion people. The key question, rather, is the probability of coordinating with any randomly selected person on a rationalist topic of fixed difficulty, and the total size of the community available to support some number of institutions.

To put it bluntly, if you built the ideal rationalist institution that requires one million supporters, you'd be in trouble because the 99.98th percentile of rationality is not adequate to support it (and also such rationalists may have other demands on their time).

But if you can build institutions that grow starting from small groups even in a not-previously-friendly environment, or upgrade rationalists starting from the 98th percentile to what we would currently regard as much higher levels, then odds look better for such institutions.

We both want to live in a friendly world with lots of high-grade rationalists and excellent institutions with good tests and good incentives, but I don't think I already live there.

Replies from: AndySimpson, RobinHanson, Marshall
comment by AndySimpson · 2009-03-18T08:44:23.855Z · LW(p) · GW(p)

Even in the most civilized civilizations, barbarity takes place on a regular basis. There are some homicides in dark alleys in the safest countries on earth, and there are bankruptcies, poverty, and layoffs even in the richest countries.

In the same way, we live in a flawed society of reason, which has been growing and improving with starts and fits since the scientific revolution. We may be civilized in the arena of reason in the same way you could call Northern Europe in the 900s civilized in the arena of personal security: there are rules that nearly everyone knows and that most obey to some extent, but they are routinely disrespected, and the only thing that makes people really take heed is the theater of enforcement, whether that's legally-sanctioned violence against notorious bandits or a dressing-down of notorious sophists.

Right now, we are only barely scraping together a culture of rationality; it may have a shaky foundation and many dumber bricks, but it seems a bit much to say we don't have one.

comment by RobinHanson · 2009-03-18T12:41:21.825Z · LW(p) · GW(p)

Let us distinguish "truth-seekers", people who respect and want truth, from "rationalists", people who personally know how to believe truth. We can build better institutions that produce truth if only we have enough support from truth-seekers; we don't actually need many rationalists. And having rationalists without good institutions may not produce much more shared accessible truth.

Replies from: Yvain, Eliezer_Yudkowsky
comment by Scott Alexander (Yvain) · 2009-03-18T14:22:40.582Z · LW(p) · GW(p)

I'm not sure I can let you make that distinction without some more justification.

Most people think they're truth-seekers and honestly claim to be truth-seekers. But the very existence of biases shows that thinking you're a truth-seeker doesn't make it so. Ask a hundred doctors, and they'll all (without consciously lying!) say they're looking for the truth about what really will help or hurt their patients. But give them your spiel about the flaws in the health system, and in the course of what they consider seeking the truth, they'll dismiss your objections in a way you consider unfair. Build an institution that confirms your results, and they'll dismiss the institution as biased or flawed or "silly". These doctors are not liars or enemies of truth or anything. They're normal people whose search for the truth is being hijacked in ways they can't control.

The solution: turn them into rationalists. They don't have to be black belt rationalists who can derive Bayes' Theorem in their sleep, but they have to be rationalist enough that their natural good intentions towards truth-seeking correspond to actual truth-seeking and allow you to build your institutions without interference.

Replies from: MichaelBishop, RobinHanson
comment by Mike Bishop (MichaelBishop) · 2009-03-18T15:29:17.287Z · LW(p) · GW(p)

"The solution: turn them into rationalists."

You don't say how to accomplish this. Would it require (or at least benefit greatly from) institutional change?

comment by RobinHanson · 2009-03-24T12:48:39.269Z · LW(p) · GW(p)

I had in mind that you might convince someone abstractly to support e.g. prediction markets because they promote truth, and then they would accept the results of such markets even when those results disagreed with their intuitions. They don't have to know how to bet well in such markets to accept that they are a better truth-seeking institution. But yes, being a truth-seeker can be very different from believing that you are one.

Btw, I only just discovered the "inbox" that lets me find responses to my comments.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-18T18:12:45.405Z · LW(p) · GW(p)

This sounds like you're postulating people who have good taste in rationalist institutions without having good taste in rationality. Or you're postulating that it's easy to push on the former quantity without pushing on the latter. How likely is this really? Why wouldn't any such effort be easily hijacked by institutions that look good to non-rationalists?

Replies from: SocratesDissatisfied
comment by SocratesDissatisfied · 2021-08-03T10:59:27.147Z · LW(p) · GW(p)

Eliezer, to the extent that any epistemic progress has been made at all, was it not ever thus?

To give one example: the scientific method is an incredibly powerful tool for generating knowledge, and has been very widely accepted as such for the past two centuries.
But even a cursory reading of the history of science reveals that scientists themselves, despite having great taste in rationalist institutions, often had terrible taste in personal rationality. They were frequently petty, biased, determined to believe their own theories regardless of evidence, defamatory and aggressive towards rival theorists, etc. 
Ultimately, their taste in rational institutions coexisted with a frequent lack of taste in personal rationality (certainly, a lack of Eliezer-level taste in personal rationality). It would have been better, no doubt, if they had had both tastes. But they didn't, and in the end, it wasn't necessary that they did. 

I would also make some other points:
1. People tend to have stronger emotive attachments - and hence stronger biases - in relation to concrete issues (e.g. "is the theory I believe correct") than epistemic institutions (e.g. "should we do an experiment to confirm the theory"). One reason is that such object level issues tend to be more politicised. Another is that they tend to have a more direct, concrete impact on individual lives (N.B. the actual impact of epistemic institutions is probably much greater, but for triggering our biases, the appearance of direct action is more important (cf thought experiments about sacrificing a single identifiable child to save faceless millions)). 

2. Even very object-level biased people can be convinced to follow the same institutional epistemic framework. After all, if they are convinced that the framework is a truth-productive one, they will believe it will ultimately vindicate their theory. I think this is a key reason why competing ideologies agree to free speech, why competing scientists agree to the scientific method, why (by analogy) competing companies agree to free trade, etc. 
[The question of what happens when one person's theory begins to lose out under the framework is a different one, but by that stage, if enough people are following the epistemic framework, opting out may be socially impossible (e.g. if a famous scientist said "my theory has been falsified by experiment, so I am abandoning the scientific method!", they would be a laughing stock)]

3. I really worry that "everyone on Earth is irrational, apart from me and my mates" is an incredibly gratifying and tempting position to hold. The romance of the lone point of light in an ocean of darkness! The drama of leading the fight to begin civilisation itself! The thrill of the hordes of Dark Side Epistemologists, surrounding the besieged outpost of reason! Who would not be tempted? I certainly am. But that is exactly why I am suspicious.

comment by Marshall · 2009-03-18T05:33:46.236Z · LW(p) · GW(p)

I wonder: Whether a world "with lots of high-grade rationalists" necessarily is a friendly world. I doubt it. So I think rationality has to be tempered with something else. Let's just call it "the milk of human kindness".

Replies from: MichaelBishop, Roko, ciphergoth
comment by Mike Bishop (MichaelBishop) · 2009-03-18T15:07:38.101Z · LW(p) · GW(p)

I'm surprised to see this go negative.

Granted, Marshall didn't explain his position in any detail. But his position is not indefensible, and I'm glad he's willing to share it.

comment by Roko · 2009-03-18T13:45:04.096Z · LW(p) · GW(p)

I wonder: Whether a world "with lots of high-grade rationalists" necessarily is a friendly world.

Downvote this heretic! I want to see him on -50 Karma, dammit! ;-0

Replies from: Marshall
comment by Marshall · 2009-03-18T18:48:51.789Z · LW(p) · GW(p)

Thanks Roko - nice with a bit of humour - btw, your wish is almost granted: I've lost 23 points in the space of 12 hours. Rationalists are fun people.....

Replies from: Roko
comment by Roko · 2009-03-18T20:15:01.651Z · LW(p) · GW(p)

I've lost 23 points in the space of 12 hours.

How did you manage that!? What I want to know is what were the 3 people who downvoted my humorous comment thinking? Maybe 3 out of all the 10 or so people still reading this thread actually thought I was serious and downvoted me for ingroup bias? Or maybe people think that humor is a no-no on LW? I can see how too much humor would dilute the debate. Writing humorous comments is fun, and probably good in small amounts, but if it caught on this could turn into a social space rather than an intellectual one...

Replies from: pjeby
comment by pjeby · 2009-03-18T22:45:32.307Z · LW(p) · GW(p)

It doesn't take much - just one jerk systematically downvoting a page or two of your existing comments. I lost like 37 points in less than an hour that way a few days ago. We really need separate up/down counts, or better yet ups and downs per voter, so you can ignore systematic friend upvotes and foe downvotes.

Replies from: Eliezer_Yudkowsky, Emile
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-18T22:53:07.921Z · LW(p) · GW(p)

Are we already getting this behavior? I'll have to start looking into voting patterns... Sigh.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2009-03-18T23:10:05.150Z · LW(p) · GW(p)

Have you looked at Raph Levien's work on attack-resistant trust metrics?

comment by Emile · 2009-03-19T10:56:54.564Z · LW(p) · GW(p)

Couldn't it also be due to a change in the karma calculation rules in order to, say, not take your own upvote into account in karma calculations? I remember that was mentioned, but don't know if it has been implemented in the meantime.

Edit: Well, it seems that it isn't implemented yet, since posting this got me a karma point :)

comment by Paul Crowley (ciphergoth) · 2009-03-18T13:20:33.545Z · LW(p) · GW(p)

If your picture of a high-grade rationalist is still this Spock crap, what are you doing here?

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2009-03-18T14:10:52.241Z · LW(p) · GW(p)

By principle of charity, I interpret Marshall as saying not that rationalists can't be kind, but that rationalism alone doesn't make you kind. Judging by my informal torture vs. pie experiments, I find this to be true. Rationality is necessary but not sufficient for a friendly world. We also need people who value the right kind of things. Rationality can help clarify and amplify morality, but it's got to start from pre-rational sources. Until further research is done, I suggest making everyone watch a lot of Thundercats and seeing whether that helps :)

Of course, like with every use of the principle of charity, I might just be reading too much into a statement that really was stupid.

Replies from: astray, Eliezer_Yudkowsky
comment by astray · 2009-03-18T16:29:04.665Z · LW(p) · GW(p)

Your torture vs. pie experiment makes me think of another potential experiment. Is torture ever preferable to making, say, 3^^^3 people never have pie again? (In the sense of dust specks, the never eating pie is to be the entire consequence of the action. The potential pie utility is just gone, nothing else.)

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-18T18:17:12.348Z · LW(p) · GW(p)

By the principle of accuracy, I look up Marshall's other comments: http://lesswrong.com/user/Marshall/

Marshall doesn't have to be voted down for being wrong. He can be voted down for using an applause light and being vague.

Replies from: Annoyance, Marshall
comment by Annoyance · 2009-03-18T20:15:23.380Z · LW(p) · GW(p)

"Marshall doesn't have to be voted down for being wrong. He can be voted down for using an applause light and being vague."

So can Eliezer_Yudkowsky.

comment by Marshall · 2009-03-18T19:19:00.382Z · LW(p) · GW(p)

"Marshall doesn't have to be voted down for being wrong. He can be voted down for using an applause light and being vague"

I have stared at this sentence for a long time, and I have wondered and wondered. I too have read my comments again. They are not vague. Not in the slightest. I think they belong to a slightly different reference set than the other postings and emphasize language as metaphor (which I think Eliezer calls appealing to applause lights).

I would call Eliezer's quoted sentence brutal. Majestically brutal - and I would think it has contributed to the 23 karma points I lost in 12 hours of non-activity.

I have no wish to be a member of a club, who will not have me. I have no wish to be a member of a club with royal commands.

Replies from: topynate
comment by topynate · 2009-03-18T20:09:49.849Z · LW(p) · GW(p)

"I have no wish to be a member of a club, who will not have me."

This is not the case. You've made over 30 comments; it's trivial for an individual to swing your karma by large amounts. I note that your karma has made large swings in the ~30 minutes I've been considering this reply. If you want to discuss the group dynamics of LW then I have more to say, but I'm going to request (temporarily) that you don't accuse me of groupthink or status seeking if you do.

comment by Paul Crowley (ciphergoth) · 2009-03-17T23:38:17.423Z · LW(p) · GW(p)

Putting so much work into talking about these things isn't the act of an isolated survivalist, though.

comment by Roko · 2009-03-17T17:02:38.469Z · LW(p) · GW(p)

For example, with subsidized prediction markets, we can each specialize on the topics where we contribute best, relying on market consensus on all other topics.

If no one person has a good grasp of all the material, then there will be significant insights that are missed. Science in our era is already dominated by dumb specialists who know everything about nothing. EY's work has been so good precisely because he took the effort to understand so many different subjects. I'll bet at long odds that a prediction market containing an expert on evo-psych, an expert on each of five narrow AI specialisms, an expert on quantum mechanics, an expert on human biases, an expert on ethics and an expert on mathematical logic would not even have produced FAI as an idea to be bet upon.

We don't each need to train to identify and fix each possible kind of bias; each bias can instead have specialists who look for where that bias appears and then correct it.

If people could see inside each others' heads and bet on (combinations of) people's thoughts, this would work.

In reality, what will happen is that a single-subject specialist who has been debiased in only one way will simply not produce any ideas for the prediction market that (a) involve more than his specialism and (b) would require him to debias in more than one way.

For example, a logic expert who suffers from overconfidence in the effectiveness of logic in AI will not hypothesize that maybe something other than a logical KR (knowledge representation) is appropriate for the semantic web. [people in my research group were shocked when I produced this hypothesis] A Bayesian stats researcher will not produce this hypothesis because he doesn't know the semantic web exists; it isn't part of his world.

What I am driving at with this comment is that the strength of connection between thoughts held in one mind is much greater than the strength of connection between thoughts in a market. In a market, two distinct predictions interact in a very simple way: their price. In a mind, two or more insights can be combined. If no individual mind is bias-free, then we lose this "single mind" advantage. [Apologies for comment deletion. It would be nice to have a preview button...]

Replies from: xamdam, scientism, Eliezer_Yudkowsky
comment by xamdam · 2010-07-01T20:12:31.000Z · LW(p) · GW(p)

I'll bet at long odds that a prediction market containing an expert on evo-psych, an expert on each of five narrow AI specialisms, an expert on quantum mechanics, an expert on human biases, an expert on ethics and an expert on mathematical logic would not even have produced FAI as an idea to be bet upon.

I think Robin already pre-answered this, though perhaps with a touch of sarcasm: "Perhaps martial-art-style rationality makes sense for isolated survivalist Einsteins forced by humanity's vast stunning cluelessness to single-handedly block the coming robot rampage."

comment by scientism · 2009-03-17T19:32:50.476Z · LW(p) · GW(p)

Can you offer any examples of generalists (and/or rationalists) who have produced significant insights besides Eliezer? When I look at history, I see subject specialists successfully branching out into new areas and making significant progress, whereas generalists/rationalists have failed to produce any significant work (look at philosophy).

Replies from: anonym, teageegeepea, Roko, Michelle, JulianMorrison, Roko, CronoDAS
comment by anonym · 2009-03-17T23:47:22.582Z · LW(p) · GW(p)

Leibniz, Da Vinci, Pascal, Descartes, and John von Neumann spring immediately to mind for me.

There's also Poincaré, often considered the last universalist. Kant is famous as a philosopher, but also worked in astronomy. Bertrand Russell did work in philosophy as well as mathematics, and was something of a generalist. Noam Chomsky is the linguist of the 20th century, and if you consider any of his political and media analysis outside of linguistics to be worthwhile, he's another. Bucky Fuller. Charles Peirce. William James. Aristotle. Goethe. Thomas Jefferson. Benjamin Franklin. Omar Khayyám.


Just thought of Gauss, who in addition to his work in mathematics did considerable work in physics.

Herbert Simon: psychology and computer science (got an economics Nobel).

Alan Turing: don't know how I could have forgotten him.

Norbert Wiener.

Replies from: Yvain, scientism
comment by Scott Alexander (Yvain) · 2009-03-17T23:57:48.302Z · LW(p) · GW(p)

Good answers. Also, Pierre-Simon Laplace, one of the inventors of Bayesian statistics, was an excellent astronomer and physicist (and briefly the French Minister of the Interior, of all things).

Replies from: anonym
comment by anonym · 2009-03-18T00:02:11.094Z · LW(p) · GW(p)

Yeah, Laplace certainly belongs close to the top of any such list.

comment by scientism · 2009-03-18T00:13:38.601Z · LW(p) · GW(p)

There's probably a few in there. I won't try to dispute them on a case by case basis. There are, on the other hand, literally thousands of specialists who have achieved more impressive feats in their fields than many of the people you cite. (I take straightforward exception to Chomsky who founded a school of linguistics that's explicitly anti-empirical.)

Replies from: komponisto, Roko
comment by komponisto · 2009-03-18T00:44:37.092Z · LW(p) · GW(p)

Not to defend anything specific about Chomsky's program, but "anti-empirical" is unfair. "Anti-empiricist" would be more reasonable (though still missing the point, in my opinion).

comment by Roko · 2009-03-18T11:47:00.341Z · LW(p) · GW(p)

There's probably a few in there. I won't try to dispute them on a case by case basis. There are, on the other hand, literally thousands of specialists

I thought this would happen. The wisdom of my plan to list the top 10 academics first and then check whether they're specialists or generalists is paying off...
Replies from: astray
comment by astray · 2009-03-18T16:14:03.382Z · LW(p) · GW(p)

Another method may be to list the top 10 achievements first and then check whether each was made by a specialist or a generalist. I imagine Prometheus was a generalist.

Replies from: anonym
comment by anonym · 2009-03-18T16:36:41.387Z · LW(p) · GW(p)

This is a good idea. But I think 10 is too few. It would be better to pick the top 100 or 200, and see how many people who contributed to multiple fields are on the list.

I've not created the list first, but have thought of which of those I listed above have done something that would belong on that list, so feel free to take possible confirmation bias into account on my part, but even after trying to account for that, I think many of the following accomplishments would be on the list:

  • Calculus: Leibniz, Newton
  • Physics: Newton [forgot about Newton originally, but he was a generalist]
  • Entscheidungsproblem, Turing machine: Turing
  • Too much important math to list: Gauss
  • Contributions to quantum mechanics, economics & game theory, computer science (we're using a von Neumann-style computer), set theory, logic, and much else: von Neumann
Replies from: scientism, thomblake
comment by scientism · 2009-03-18T18:22:10.713Z · LW(p) · GW(p)

It's worth remembering that what we're looking for is not just people who contributed to multiple fields but generalists/rationalists: people who took a "big picture" view. (I'm willing to set aside the matter of whether their specific achievements were related to their "big picture" view of things since it will probably just lead to argument without resolution.) Leibniz would definitely fall into that category, for example, but I'm not sure Newton would. He had interests outside of physics (religion/mysticism) but they weren't really related to one another.

Replies from: Roko
comment by Roko · 2009-03-18T20:08:05.799Z · LW(p) · GW(p)

not just people who contributed to multiple fields but generalists/rationalists

what's the difference between being a generalist and contributing to multiple fields?

I'm willing to set aside the matter of whether their specific achievements were related to their "big picture" view of things since it will probably just lead to argument without resolution.

No, no! This is the meat of the question. If it were the case that generalism correlated with but did not cause great insights (for example, in a world that forced all really clever people to study at least 3 subjects for their whole academic lives this would be the case), then my original argument would fail.

comment by thomblake · 2009-03-18T18:46:18.196Z · LW(p) · GW(p)

It should be noted that Turing and Shannon both studied with Norbert Wiener, and he might have come up with most of their interesting ideas (and possibly von Neumann's as well). Also, Wiener founded the study of cybernetics, made notable contributions to gunnery, and made the first real contribution to the field of computer ethics.

ETA: not to discredit the work of Turing, Shannon, and von Neumann, but rather to note that Wiener is definitely someone who made major contributions and should be on the 'generalists' list.

Replies from: anonym
comment by anonym · 2009-03-19T06:24:54.356Z · LW(p) · GW(p)

Wiener is on the original list I gave a couple of posts up.

Do you have a reference for Turing studying with Wiener and Turing getting his ideas from him? I checked all pages in Hodges's biography of Turing that mention Wiener, and none of them mention that he studied with Wiener.

Turing's Entscheidungsproblem paper (which also introduced the Turing machine) was published in 1936. The only (in-person) connection between them I found (though I didn't search other than checking the bio) is that Wiener spoke with Turing about cybernetics in 1947 while passing by on his way to Nancy.

Are there specific discoveries you believe are falsely attributed to Turing, von Neumann, or Shannon, and can you provide any evidence?

comment by teageegeepea · 2009-03-17T23:26:39.423Z · LW(p) · GW(p)

I like Eliezer's writing, but I think he himself has described his work as "philosophy of AI". He's been a great popularizer (and kudos to folks like him and Dawkins), but that's different from having "produced significant insights". Or perhaps his insight is supposed to be "We are really screwed unless we resolve certain problems requiring significant insights!".

comment by Roko · 2009-03-17T20:09:03.293Z · LW(p) · GW(p)

Ok, the best way for me to answer this question is to list the 10 most important scientists/academics of all time, and then look them up on wikipedia. I'll write down the list, and then comment again once I've ascertained how "generalist" they are. So, in order of importance:

  1. Galileo
  2. Darwin
  3. Newton
  4. Descartes
  5. Socrates
  6. Aristotle
  7. Plato
  8. Hume
  9. Einstein
  10. Francis Bacon

EDIT: I kind of picked these guys at random out of "famous important academics". Berners-Lee is on my mind as I study the semantic web. The main point of the exercise is that I wrote down the names before I went and read their wikipedia articles to see how much they're generalists. Do feel free to suggest changes to this list. Once some consensus is reached, I will post the analysis. I kicked Pythagoras off in favor of Francis Bacon, since Bacon seems to be particularly relevant to this site's interests, and the article on Pythagoras disputes the worth of his science. Strictly speaking, this is a bit naughty of me, but what the hell - I'll allow this one indulgence. Note that I didn't look at Francis Bacon's article before I decided he was to go on the list; I was spurred into including him by scientism's comment below.

Replies from: gwern, Michelle, astray, thomblake, MBlume, rhollerith, Eliezer_Yudkowsky, John_Maxwell_IV
comment by gwern · 2009-03-18T16:51:20.682Z · LW(p) · GW(p)

Roko: rather than picking at random, it'd be better to start with a survey of the historical literature. Fortunately, the search and statistical ranking has already been done in Human Accomplishment.

For the combined science index, we get:

  • Newton
  • Galileo
  • Aristotle
  • Kepler
  • Lavoisier
  • Descartes
  • Huygens
  • Laplace
  • Einstein
  • Faraday

It's a list that seems reasonable to me, as surprising as Lavoisier, Huygens, and Faraday may be.

Replies from: Roko
comment by Roko · 2009-03-18T17:34:36.424Z · LW(p) · GW(p)

OK, that's an interesting list.

Of course, it misses out the philosophers, but they appear in the "western philosophy" list. Since Aristotle, Plato, Descartes and Hume all appear in the top ten of that list, it seems that the only odd ones out in my list are Bacon, Darwin and Socrates.

But, a larger list will not hurt us. So I'll throw in the top ten from combined sciences, and the top ten from philosophy. Corr, that's going to be quite some work to do...

comment by Michelle · 2009-03-19T07:32:15.140Z · LW(p) · GW(p)

I think an important issue in this generalist/specialist debate and this attempt to create a list of the most important figures is that the historical time frame may be very relevant.

As the world becomes increasingly complex and fields of study, old and new, become increasingly specialized, would this not affect the ability of a generalist/specialist to produce a significant insight or make a significant contribution?

Perhaps it makes more sense to consider much more recent people as examples if we want to apply this to society as it stands now.

comment by astray · 2009-03-18T16:11:51.083Z · LW(p) · GW(p)

Darwin was almost preempted by Wallace. Newton and Leibniz arrived at the same calculus independently, and similar work was done by Seki Kowa at the same time. They were merely there first and most prominently, but not uniquely. I think to satisfy importance, we want cut-vertex scientists and academics.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-18T18:09:05.449Z · LW(p) · GW(p)

What constitutes a "cut vertex" here depends entirely on how far you want to take the counterfactual. Who do you shoot so that humanity makes no further progress, ever?

Replies from: MichaelHoward
comment by MichaelHoward · 2009-03-18T20:31:41.024Z · LW(p) · GW(p)

Stanislav Yevgrafovich Petrov?

comment by thomblake · 2009-03-18T18:04:35.285Z · LW(p) · GW(p)

Socrates is an odd fellow to have on the list, since there aren't any works by Socrates. If you think Plato should be on the list, feel free to kick Socrates off.

comment by MBlume · 2009-03-17T23:29:28.025Z · LW(p) · GW(p)

As a physicist, I've always been partial to Maxwell's work -- he deduced the induction of a curled magnetic field by a changing electric field solely from mathematical considerations, and from this, was able to guess the nature of light before any other human.

I've mixed feelings about Descartes. The pull of the Cartesian Theater has muddling effects in serious cognitive philosophy. On the other hand, by making the concept explicit, he did make it easier for others to point out that it was wrong.

Replies from: thomblake, Court_Merrigan
comment by thomblake · 2009-03-18T18:12:08.375Z · LW(p) · GW(p)

Regarding the Cartesian Theater, I think it obviously had an impact on Global Workspace Theory, which actually seems to be going in the right direction.

And let's not forget Descartes's many other contributions. The coordinate grid and analytic geometry, anyone?

comment by Court_Merrigan · 2009-03-18T02:15:45.230Z · LW(p) · GW(p)

Exactly. Descartes laid the foundation for future progress.

comment by rhollerith · 2009-03-17T22:55:04.066Z · LW(p) · GW(p)

The top-ten list needs Galileo. Galileo > Newton. Galileo > Einstein.

And Berners-Lee? If he had never started the WWW, within 2 years of when he did start it, someone else would have started something very similar. (And his W3C does dumb things.) If you want a contributor to the internet on the list, I humbly suggest J.C.R. Licklider, his protégé Roberts, or one of the four authors of "The End-to-end Argument".

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-17T22:43:27.010Z · LW(p) · GW(p)

Berners-Lee? Recency effect much?

comment by John_Maxwell (John_Maxwell_IV) · 2009-03-17T23:15:32.949Z · LW(p) · GW(p)

Darwin? Seriously? The essential kernel of his theory is so easy to understand that I'm reluctant to give him much credit for inventing it.

Replies from: VAuroch, MBlume
comment by VAuroch · 2013-12-17T23:44:25.960Z · LW(p) · GW(p)

Most truly great insights feel obvious in retrospect.

comment by MBlume · 2009-03-17T23:17:29.216Z · LW(p) · GW(p)

Massive hindsight bias. Whether we, as a race, are proud of it or not, it wasn't until Darwin, only 150 years ago, that someone seriously suggested and developed it.

Replies from: John_Maxwell_IV, Cameron_Taylor
comment by John_Maxwell (John_Maxwell_IV) · 2009-03-18T19:33:03.406Z · LW(p) · GW(p)

Natural selection is the combination of two ideas: 1. Population characteristics change over time if members of the population are systematically disallowed reproduction. 2. Nature systematically disallows reproduction.

I'm willing to accept that I'm suffering from hindsight bias. But will you at least give me that his theory is much easier to understand than any of the others? And maybe a few guesses on the topic of why it was so hard to think of?

Also, even if an insight is rare, that doesn't mean its bearer deserves credit. Many inventors made important accidental discoveries, and I imagine luck must have factored into Darwin's discovery somehow as well. If 1% of biologists who had gone on the voyage Darwin went on also would have developed the theory, does he still deserve to be on the list of the top ten intellectuals?

Addendum: Here is an argument that ancient scientists and mathematicians don't deserve as much credit as we give them: they were prolific. We have no modern equivalent of Euler or Gauss; John von Neumann was called "the last of the great mathematicians". There are two possibilities here: either the ancient thinkers were smarter than we are, or their accomplishments were more important and less difficult than those of modern thinkers. The Flynn effect suggests that IQs are rising over time, so I'm inclined to believe that their accomplishments were genuinely less difficult.

And even if making new contributions to these fields isn't getting more difficult, surely you must grant that it must become more difficult at some point, assuming that to make a new contribution to a field you must understand all the concepts your contribution relies on, and all the concepts those concepts rely on, etc.

Replies from: MBlume, thomblake
comment by MBlume · 2009-03-21T04:33:13.175Z · LW(p) · GW(p)

Natural selection is the combination of two ideas: 1. Population characteristics change over time if members of the population are systematically disallowed reproduction. 2. Nature systematically disallows reproduction.

I'm willing to accept that I'm suffering from hindsight bias. But will you at least give me that his theory is much easier to understand than any of the others? And maybe a few guesses on the topic of why it was so hard to think of?

Extraordinarily so, yes -- it does astonish me that no one hit it before. Nonetheless, the empirical fact remains, so...

I suppose the sense of "mystery" people attached to life played into it somewhat.

People were breeding animals, people were selecting them, and... socially there was already some idea of genetic fitness. Men admired men who could father many children. The idea of heredity was there.

Honestly, the more I think of it, the more I share your confusion. It is deeply odd that we were blinded for so long. Perhaps we should work to figure out how this happened, and whether we can avoid it in the future.

I don't think luck can factor in quite as much as you imagine though. We're not attempting to award credit, so much as we are attempting to identify circumstances which tend to produce people who tend to produce important insights. Darwin's insight was incredibly important, and had gone unseen for centuries. To me, that qualifies him.

Even if you put it at a remove, even if you say, well, Darwin was uniquely inspired by his voyage, another biologist could have done the same, then the voyage becomes important. Why didn't another biologist wind up on a voyage like that? What can we do to ensure that inspiring experiences like that are available to future intellectuals? In this way, Darwin's life remains an important data point, even if -- especially if -- we deny that there was anything innately superior about the man.

Addendum: Here is an argument that ancient scientists and mathematicians don't deserve as much credit as we give them: they were prolific.

Agreed, completely -- they pulled the low-hanging fruit from the search space.

comment by thomblake · 2009-03-18T19:42:07.365Z · LW(p) · GW(p)

I'm confused - do you mean that deism, specifically, made it hard to think of, or easy? And I'm not sure many were deists - I can't find numbers, but I was under the impression deism was always a really small movement.

EDIT: nevermind, reference to deism was removed in an edit.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2009-03-19T21:35:13.839Z · LW(p) · GW(p)

I meant that I thought the fact that so many took for granted the fact that God created the animals was one of the factors that made evolution hard to think of, and Darwin shouldn't get genius status just because he overcame it. But then I remembered Lamarck and thought better of it. I still think it is a weak argument in favor of Darwin not being a genius, though.

comment by Cameron_Taylor · 2009-03-18T02:47:18.186Z · LW(p) · GW(p)

Massive hindsight bias. Whether we, as a race, are proud of it or not, it wasn't until Darwin, only 150 years ago, that someone seriously suggested and developed it.

I'd also suggest that Darwin's insight was far less intuitive than the insights of Newton (although this may reflect just different degrees of hindsight bias).

Replies from: Roko
comment by Roko · 2009-03-18T11:33:35.779Z · LW(p) · GW(p)

(although this may reflect just different degrees of hindsight bias).

Indeed, I suspect it does. Imagine not having calculus... or mechanics, and then having to reinvent it. That formalism has been in my head for the last 10 years, so it's really hard for me to let go of it. Are you a physical sciences guy too?

comment by JulianMorrison · 2009-03-18T21:12:09.729Z · LW(p) · GW(p)

Aubrey de Grey hasn't yet been proved right, so he's a tentative example, but he is a rare biological theorist where most biologists are specialized experimenters.

comment by Roko · 2009-03-17T21:22:02.256Z · LW(p) · GW(p)

Out of interest, when you said:

whereas generalists/rationalists have failed to produce any significant work (look at philosophy).

Who were you thinking of?

Replies from: scientism
comment by scientism · 2009-03-17T22:46:20.724Z · LW(p) · GW(p)

I think philosophy is a good example. Philosophers are supposed to be more logical/rational than other people and have been generalists until recently (many still are). They have also failed to produce a single significant piece of work on par with anything found in science. Now, some people might disagree with that assessment, but I suspect their counterexamples would be chiefly in specialist sub-disciplines: formal logic, for example. I think to the degree that there has been "good philosophy" it's found under the model of specialists working under the kind of robust institutional framework Robin alludes to rather than individual theorists taking a global perspective (philosophy as martial arts). I can't think of any systematizers I'd credit with discovering truth. I do not think Socrates, Plato, Aristotle and Descartes discovered any substantial truths (Descartes's mathematical work aside) so we probably differ there. Regardless, I think there's a good argument to be made that historically truth has come from robust institutions involving many specialists (such as science) rather than brilliant lone thinkers taking a global perspective.

Replies from: Roko
comment by Roko · 2009-03-17T23:24:12.995Z · LW(p) · GW(p)

I do not think Socrates, Plato, Aristotle and Descartes discovered any substantial truths (Descartes's mathematical work aside) so we probably differ there.

You seem to differ from the rest of the world, too. Wikipedia:

Plato was a Classical Greek philosopher, mathematician, writer of philosophical dialogues, and founder of the Academy in Athens, the first institution of higher learning in the western world. Along with his mentor, Socrates, and his student, Aristotle, Plato helped to lay the foundations of Western philosophy.

René Descartes was a French philosopher, mathematician, scientist, and writer who spent most of his adult life in the Dutch Republic. He has been dubbed the "Father of Modern Philosophy," and much of subsequent Western philosophy is a response to his writings, which continue to be studied closely to this day.

René Descartes established the framework for a scientific method's guiding principles in his treatise, Discourse on Method

Replies from: scientism
comment by scientism · 2009-03-18T00:01:34.216Z · LW(p) · GW(p)

There's a huge difference between being considered historically important and having discovered substantial truth. The Bible is historically important. It helped lay the foundations of Western culture. This is hardly disputable. It does not, however, contain much in the way of truth. Nor do the works of Plato and Aristotle.

Replies from: Court_Merrigan
comment by Court_Merrigan · 2009-03-18T02:14:49.319Z · LW(p) · GW(p)

To take one example: Aristotle laid down the foundation of what became modern science. Modern science became modern science as we think of it by rebelling against Aristotle's a priori assumptions; without Aristotle, what science we have today would be very different, indeed.

I don't think you can so easily dismiss Plato, Aristotle, Descartes, et al: without them we wouldn't be where we are today.

This is part of the problem I often detected at OB and see again here at LW: people with little respect for intellectual history.

comment by CronoDAS · 2009-03-17T20:29:43.304Z · LW(p) · GW(p)

Isaac Asimov was a generalist.

Make of that what you will.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-17T17:28:16.544Z · LW(p) · GW(p)

Roko, great comment, but you should've just Edited. Why delete and repost?

Replies from: Roko, RobinHanson
comment by Roko · 2009-03-17T19:01:07.508Z · LW(p) · GW(p)

Thanks. I wrote the original comment, then realized that I hadn't read the post as thoroughly as I should have done and worried that I'd straw-manned Robin, so I deleted the comment not realizing that Robin had replied to it. When I'd read the post again and read my comment, I made a slight change and decided that the critique was on point and I was really critiquing Robin's position, not a straw man. Preview would help slightly, because you could read your comment next to the OP and do a "did I straw man him?" sanity check.

comment by RobinHanson · 2009-03-17T17:49:50.602Z · LW(p) · GW(p)

FYI, I had replied to the previous version of the comment.

Replies from: Roko
comment by Roko · 2009-03-17T19:03:59.273Z · LW(p) · GW(p)

Robin said:

Combining two or even three particular topics can be the thing that you specialize in.

Or even combining two or three topics with 5 or 6 ways to debias... if you're going to go to the effort of combining several academic subjects in one mind, it is almost certainly worth the effort of adding in the subject of "heuristics and biases/rationality arts"; at the cost of learning 1 more subject, you'll improve your performance across the board, and in particular you'll improve your ability to combine subjects as you'll be in a good position to dispassionately weigh the merits of various approaches and synergies.

comment by Z_M_Davis · 2009-03-17T18:16:17.857Z · LW(p) · GW(p)

One problem with trusting the experts rather than trying to think things through for yourself is that you need a certain amount of expertise just to understand what the experts are saying. The experts might be able to tell you that "all symmetric matrices are orthonormally diagonalizable," and you might have perfect trust in them, but without a lot of personal study and inquiry, the mere words don't help you very much.
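
[For readers without the linear algebra background, that statement unpacks to the spectral theorem; a standard statement, added here only to show how much the jargon compresses:]

$$A = A^\top \in \mathbb{R}^{n \times n} \;\Longrightarrow\; A = Q \Lambda Q^\top, \qquad Q^\top Q = I, \quad \Lambda = \operatorname{diag}(\lambda_1, \ldots, \lambda_n),$$

[where the columns of $Q$ form an orthonormal basis of eigenvectors of $A$. Even with the formula in hand, using it takes exactly the personal study the comment describes.]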

Replies from: John_Maxwell_IV, Vladimir_Nesov, mark_spottswood, Roko, soreff
comment by John_Maxwell (John_Maxwell_IV) · 2009-03-17T23:18:47.916Z · LW(p) · GW(p)

That doesn't matter if the expert can say "hire this guy", "invest in this company", "vote for this guy", or "donate to this charity". If you're doing some sort of complicated action with careful integration of expert advice, then it's probably worthwhile becoming at least a semi-expert yourself.

comment by Vladimir_Nesov · 2009-03-17T19:57:05.007Z · LW(p) · GW(p)

All the worse if you are convinced that God hates diagonalizable matrices, and so you prefer not to believe the heathen.

comment by mark_spottswood · 2009-03-17T20:53:49.134Z · LW(p) · GW(p)

Experts don't just tell us facts; they also offer recommendations as to how to solve individual or social problems. We can often rely on the recommendations even if we don't understand the underlying analysis, so long as we have picked good experts to rely on.

Replies from: Cameron_Taylor
comment by Cameron_Taylor · 2009-03-18T02:53:03.129Z · LW(p) · GW(p)

Experts don't just tell us facts; they also offer recommendations as to how to solve individual or social problems. We can often rely on the recommendations even if we don't understand the underlying analysis, so long as we have picked good experts to rely on.

There is a key right there. Ability in rational thinking and understanding of common biases can drastically affect whom we consider a good expert. The most obvious examples are 'experts' in medicine and economics. I suggest that the most influential experts in those fields are not those with the most accurate understanding.

Rationalist training could be expected to improve our judgement when choosing experts.

Replies from: mark_spottswood
comment by mark_spottswood · 2009-03-18T15:20:14.256Z · LW(p) · GW(p)

True. But it is still easier in many cases to pick good experts than to independently assess the validity of expert conclusions. So we might make more overall epistemic advances by a twin focus: (1) Disseminate the techniques for selecting reliable experts, and (2) Design, implement and operate institutions that are better at finding the truth.

Note also that your concern can also be addressed as one subset of institutional design questions: How should we reform fields such as medicine or economics so that influence will better track true expertise?

comment by Roko · 2009-03-17T19:34:15.953Z · LW(p) · GW(p)

and if an expert says "all matrices are orthonormally diagonalizable", it sounds equally impressive, but it is false as false can be.
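
[A standard counterexample, added for illustration: the matrix]

$$N = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$$

[has 0 as its only eigenvalue, with a one-dimensional eigenspace, so it cannot be diagonalized at all, orthonormally or otherwise.]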

Replies from: Court_Merrigan
comment by Court_Merrigan · 2009-03-18T02:07:35.830Z · LW(p) · GW(p)

But there are simply far too many areas of life involving putative "orthonormally diagonalizable matrices" for any one individual to be able to rationally investigate. At some point you have to take someone's word for it; so rather than taking one expert's word, you're likely better off trusting a community of experts. A current example might be with global warming - most scientists seem to feel it's a major issue.

Unfortunately, though, radical changes in thinking usually come from the margin, e.g., Galileo. The hard part, it seems to me, is to distinguish between mere status quo convention and genuine expert agreement.

comment by soreff · 2009-03-18T02:06:09.945Z · LW(p) · GW(p)

Without the study, you wouldn't have a basis for understanding? (grin/duck/run)

comment by jimmy · 2009-03-17T23:05:24.566Z · LW(p) · GW(p)

Following the martial arts analogy, I guess that makes Robin a supporter of "Rationalist Gangs".

comment by anonym · 2009-03-17T17:28:41.458Z · LW(p) · GW(p)

One of the ways I think OB could have been better, and LW could be more helpful, is to put a greater emphasis on practice and practical techniques for improving rationality in the writings here, and to give many more real-life examples than we do.

When making a post that hints at any kind of a practical technique, posters could really make an effort to clearly identify the practical implications and techniques, to put all the practical parts together in the essay rather than mixing them throughout 15 paragraphs of justification and reasoning, and to highlight that practical part of the post.

The practical parts could be extracted and placed together somewhere in order to have one single place that people can go to easily find them. Perhaps the LW software could provide some kind of support for distinguishing the practice sections of a post, and the extraction and aggregation of the practical howto sections could be automated.

Replies from: Court_Merrigan
comment by Court_Merrigan · 2009-03-18T02:09:48.006Z · LW(p) · GW(p)

Hear, hear. Practice and practical techniques. Isn't that what we're after here?

comment by Mike Bishop (MichaelBishop) · 2009-03-17T14:49:48.979Z · LW(p) · GW(p)

Robin was kind enough not to say what overemphasizing the heroic individual rationalist implies about our true motivations.

Replies from: anonym
comment by anonym · 2009-03-17T22:23:33.870Z · LW(p) · GW(p)

That's overly simplistic. Two people might have the same motivations and goals but disagree about the most effective way of achieving those goals. If you think that's not the case, you should give an argument to that effect. If you think it doesn't apply in the particular case that we all know you have in mind, you should give an argument to that effect.

I'm surprised the parent is rated up to 10 points. It indulges in armchair psychologizing with no supporting evidence or reasoning, and it interprets the situation in the least intellectually charitable way and assumes the worst of motivations.

comment by markrkrebs · 2010-02-26T13:37:39.106Z · LW(p) · GW(p)

I love your thesis and metaphor, that the goal is for us all jointly to become rational, seek, and find truth. But I do not "respect the opinions of enough others." I have political/scientific disagreements so deep and frequent that I frequently just hide them and worry. I resonated best with your penultimate sentence: "humanity's vast stunning cluelessness" does seem to be the problem. Has someone written on the consequences of taking over the world? The human genome, presumptively adapted to forward its best interests in a competitive world, may have only limited rationality, inadequate to the tasks of altruism, global thinking, and numerical analysis. By this last phrase I refer to our overreaction to a burning skyscraper, when an equal number of deaths monthly on freeways, by being less spectacular or poignant, motivates a disproportionately low response. Surely the difference there is a "gut" reaction, not a cogent one. We need to change what we care about, but we're hardwired to worry about spectacle, perhaps?

Replies from: wedrifid
comment by wedrifid · 2010-02-26T14:19:09.197Z · LW(p) · GW(p)

We need to change what we care about, but we're hardwired to worry about spectacle, perhaps?

Unusual threat by a rival tribe. Retaliation necessary. Excuse to take politically self serving moves by surfing a tide of patriotic sentiment. That sort of thing. What you would expect monkeys to care about.

comment by Emile · 2009-03-17T14:46:16.491Z · LW(p) · GW(p)

Maybe personal finance is a better analogy than Martial Arts. It's useful for nearly anybody to know about personal finance, yet many people are lacking even in the basics. Some high-falutin stock market concepts may not be useful to the average Joe, the same way advanced rationality ("better than Einstein") may not be needed, but still, education about the basics is useful.

Replies from: RobinHanson
comment by RobinHanson · 2009-03-17T17:00:04.512Z · LW(p) · GW(p)

Sure, most prediction market traders could stand to review some rationality basics.

comment by mtraven · 2009-03-17T20:00:20.223Z · LW(p) · GW(p)

For whatever reason, the community here (so-called "rationalists") is heavily influenced by overly-individualistic ideologies (libertarianism, or in its more extreme forms, objectivism). This leads to ignoring entire realms of human phenomena (social cognition) and the people who have studied them (Vygotsky, sociologists of science, ethnomethodologists). It's not that social approaches to cognition provide a magic bullet -- they just provide a very different perspective on how minds work. Imagine if you stopped believing that beliefs are in the head and instead located them in a community or institution. If interested, you could start with How Institutions Think by Mary Douglas.

Replies from: Yvain, topynate, Carinthium, Vaniver
comment by Scott Alexander (Yvain) · 2009-03-17T23:52:35.644Z · LW(p) · GW(p)

I am guilty as charged in being much more familiar with individualistic than socially oriented ideologies.

Why don't you write some posts about techniques or discoveries from socially-oriented science that could help rationalists?

comment by topynate · 2009-03-17T23:31:14.755Z · LW(p) · GW(p)

I would say Robin Hanson's views on status fit quite well into the gap you perceive. I do find it interesting that status isn't talked about more on Less Wrong.

Maybe I can tie this into what I think about the article. LW's articles do currently take an individualist stance on rationality (although I doubt objectivism has any role in this). The "refinements" they propose are mostly alterations of cognitive habits, not suggested ways of changing group dynamics. But LW as a whole is not simply a bunch of iconoclasts. Rather, there appears to be a clear attempt to collectively change patterns of thought. People write stuff, get +/- karma, feel good/bad, update their beliefs and try again. So even though the content of LW is individually applicable, posters will naturally develop preferred topics of expertise, subjects on which they know enough to benefit the community by what they write. And developing expertise does benefit from the martial arts analogy.

Replies from: wedrifid
comment by wedrifid · 2009-11-20T03:00:19.727Z · LW(p) · GW(p)

I would say Robin Hanson's views on status fit quite well into the gap you perceive. I do find it interesting that status isn't talked about more on Less Wrong.

Was there a time when we neglected status as a topic? Wow. I don't remember that.

comment by Carinthium · 2010-11-24T09:08:20.596Z · LW(p) · GW(p)

The flaw in that is that it ignores dissenters: to some extent, minorities in a community can dissent from the common belief.

comment by Vaniver · 2010-10-28T01:44:58.449Z · LW(p) · GW(p)

Imagine if you stopped believing that beliefs are in the head and instead located them in a community or institution. If interested, you could start with How Institutions Think by Mary Douglas.

This sounds to me a lot like "Imagine if you stopped believing that information is in the genes and instead located it in a species."

I don't think institutional effects on thought are a bad thing to study; institutions definitely have massive effects on the environments individuals operate in. But I think assigning thinking-entity status to institutions is a bad way to approach that study. Thinking about information stored in species has a long and storied history of making worse predictions than thinking about information stored in genes.

But institutions certainly apply selection pressure on memes, and influence how memes replicate themselves and propagate. The analogy is also somewhat tenuous: institutions are far more fluid (almost by definition) in their boundaries than species. Because of their tremendous impact, institutional design deserves attention comparable to that given environmental design (architecture, agriculture, lots of smaller fields).

(We do already have those fields, though; the economy is the environment commercial institutions are built for (and other institutions reside in as well), and economists try to study it and design it. Public choice theorists help study the design of (primarily democratic) political institutions.)

comment by Cameron_Taylor · 2009-03-17T14:39:29.612Z · LW(p) · GW(p)

Martial arts can be a good training to ensure your personal security, if you assume the worst about your tools and environment. If you expect to find yourself unarmed in a dark alley, or fighting hand to hand in a war, it makes sense. But most people do a lot better at ensuring their personal security by coordinating to live in peaceful societies and neighborhoods; they pay someone else to learn martial arts. Similarly, while "survivalists" plan and train to stay warm, dry, and fed given worst case assumptions about the world around them, most people achieve these goals by participating in a modern economy.

As a martial arts enthusiast I have to concur that the practical survivability impact of my training is somewhat limited. In fact, I would go as far as to say that my martial arts training is far less likely to save my life than my previous sporting hobby, running.

The martial arts metaphor for rationality training applies as much to my motives for participation as it does to the training itself. I don't expect to beat many armed assailants to a pulp in a dark alley, nor do I expect eliminating biases from my cognition to make a dramatic impact on my success or life satisfaction. However, I relish every opportunity to push both my body and mind to their limits in elegance and performance. I am also attracted to subcultures that combine non-exclusivity with skill-based elitism.

I unashamedly confess that I'd be a rationalist even if it had absolutely no direct benefit (over participating, to a similar degree, in the activities of any other arbitrary non-rationalist subculture). But at the same time I have to concur with Robin on the best way to go about finding truth.

But for those of us who respect the opinions of enough others to want to work with them to find truth, it makes more sense to design institutions which give each person better incentives to update a common consensus.

Absolutely. There is just something comforting in knowing that if the information I am relying upon is flawed, someone is losing money because of it. It's even better to know that if you do find flaws you'll be rewarded for doing so, not hunted down and persecuted as a 'whistleblower' or a heretic.

Unfortunately, 'designing institutions' doesn't sound like the hard part. The hard part is taking these institutions and making them an active reality. Diluting the influence of authority tends to go against the interests of those in authority, at least as they perceive them. Of course, that particular robotic rampage of human stupidity is not something I personally need to overcome with my own rationalist-fu. I can respect the opinions of Robin et al. and eagerly keep abreast of their insights and practical solutions.

Replies from: RobinHanson, vizikahn
comment by RobinHanson · 2009-03-17T14:43:32.694Z · LW(p) · GW(p)

Yes, you are right that designing need not be the hard part. So I just changed "design" to "design and field."

comment by vizikahn · 2009-03-17T15:32:10.041Z · LW(p) · GW(p)

As a martial arts enthusiast I have to concur that the practical survivability impact of my training is somewhat limited. In fact, I would go as far as to say that my martial art training is far less likely to save my life than is my previous sporting hobby, running.

My krav maga instructor (a bouncer) used to emphasize that 90% of realistic self-defense is about avoiding trouble, and running is a battle-tested survival technique. I think running was the best way to keep your sanity in Cthulhu role-playing, too. So, the first line of self-defense: don't open that old book; run away and read what people at LW are saying.

Replies from: nazgulnarsil
comment by nazgulnarsil · 2009-03-17T18:29:06.797Z · LW(p) · GW(p)

90% of actual self-defense confrontations involve extremely simple techniques. Hard-core martial arts training is about beating other martial artists. If you just want practical survival skills, you learn the control techniques cops use and just practice.

comment by Paul Crowley (ciphergoth) · 2009-03-17T14:03:28.761Z · LW(p) · GW(p)

On this point, we should also be talking about effective evangelism for rationality.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2009-03-17T23:24:26.415Z · LW(p) · GW(p)

One thing I thought of is to print out a bunch of copies of this paper and start giving it to the Greenpeace activists I see around my community college.

comment by James_Miller · 2009-03-17T14:18:05.539Z · LW(p) · GW(p)

We should learn how to identify trustworthy experts. Is there some general way, or do you have to rely on specific rules for each category of knowledge?

Two examples of such rules: never trust someone's advice about which specific stocks to buy unless the advisor has material non-public information, and be extremely skeptical of statistical evidence presented in Women's Studies journals. Although both rules are probably true, you obviously couldn't trust financial advisers or Women's Studies professors to give them to you.

Replies from: PhilGoetz, RobinHanson, mark_spottswood
comment by PhilGoetz · 2009-03-17T16:07:26.787Z · LW(p) · GW(p)

Have you evaluated statistical evidence in Women's Studies journals?

comment by RobinHanson · 2009-03-17T14:36:22.285Z · LW(p) · GW(p)

Prediction markets can forecast the accuracy or fame of purported experts. But preferably you'd accept the market estimate on your question and so not need to know who is an expert.

Replies from: igoresque
comment by igoresque · 2009-03-29T01:11:28.778Z · LW(p) · GW(p)

This is of course exactly the point. People will be people. The solution is to depersonalize, not to pick some fine guy and put faith in him. Trying to find out which experts to trust feels to me like asking which tyrants can best be trusted. Experts are valuable (unlike tyrants), but trust is better placed in a market than in individual people.

comment by mark_spottswood · 2009-03-17T14:26:25.523Z · LW(p) · GW(p)

Obviously it helps if the experts are required to make predictions that are scoreable. Over time, we could examine both the track records of individual experts and entire disciplines in correctly predicting outcomes. Ideally, we would want to test these predictions against those made by non-experts, to see how much value the expertise is actually adding.

Another proposal, which I raised on a previous comment thread, is to collect third-party credibility assessments in centralized databases. We could collect the rates at which expert witnesses are permitted to testify at trial and the rate at which their conclusions are accepted or rejected by courts, for instance. We could similarly track the frequency with which authors have their articles accepted or rejected by journals engaged in blind peer-review (although if the review is less than truly blind, the data might be a better indication of status than of expertise, to the degree the two are not correlated). Finally, citation counts could serve as a weak proxy for trustworthiness, to the degree the citations are from recognized experts and indicate approval.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-18T18:22:07.073Z · LW(p) · GW(p)

The suggestions from the second paragraph all seem rather incestuous. Propagating trust is great but it should flow from a trustworthy fountain. Those designated "experts" need some non-incestuous test as their foundation (a la your first paragraph).

Replies from: mark_spottswood
comment by mark_spottswood · 2009-03-18T19:30:19.493Z · LW(p) · GW(p)

Internal credibility is of little use when we want to compare the credentials of experts in widely differing fields. But it is useful if we want to know whether someone is trusted in their own field. Now suppose that we have enough information about a field to decide that good work in that field generally deserves some of our trust (even if the field's practices fall short of the ideal). By tracking internal credibility, we have picked out useful sources of information.

Note too that this method could be useful if we think a field is epistemically rotten. If someone is especially trusted by literary theorists, we might want to downgrade our trust in them, solely on that basis.

So the two inquiries complement each other: We want to be able to grade different institutions and fields on the basis of overall trustworthiness, and then pick out particularly good experts from within those fields we trust in general.

P.S. Peer review and citation counting are probably incestuous, but I don't think the charge makes sense in the expert-witness evaluation context.

comment by mark_spottswood · 2009-03-17T14:08:23.222Z · LW(p) · GW(p)

Another good example is the legal system. Individually, it serves many participants poorly on a truth-seeking level; it encourages them to commit strongly to an initial position and make only those arguments that advance their cases, while doing everything they can to conceal their cases' flaws short of explicit misrepresentation. They are rewarded for winning, whether or not their position is correct. On the other hand, this set-up (combined with modern liberalized disclosure rules) works fairly well as a way of aggregating all the relevant evidence and arguments before a decisionmaker. And that decisionmaker is subject to strong social pressures not to affiliate with the biased parties. Finally, in many instances the decisionmaker must provide specific reasons for rejecting the parties' evidence and arguments, and make this reasoning available for public scrutiny.

The system, in short, works by encouraging individual bias in service of greater systemic rationality.

Replies from: RobinHanson
comment by RobinHanson · 2009-03-17T14:44:54.782Z · LW(p) · GW(p)

The legal system does supposedly encourage individual bias to aggregate evidence; I'm more of a skeptic about how well it actually does this in practice though.

Replies from: mark_spottswood
comment by mark_spottswood · 2009-03-17T14:51:42.150Z · LW(p) · GW(p)

Care to explain the basis for your skepticism?

Interestingly, there may be a way to test this question, at least partially. Most legal systems have procedures in place to allow judgments to be revisited upon the discovery of new evidence that was not previously available. There are many procedural complications in making cross-national comparisons, but it would be interesting to compare the rate at which such motions are granted in systems that are more adversarially driven versus more inquisitorial systems (in which a neutral magistrate has more control over the collection of evidence).

comment by Matt_Simpson · 2009-03-17T16:50:31.814Z · LW(p) · GW(p)

The rationality dojo seems to be part of a world where "we" work together for truth, at least if you don't take the dojo metaphor too seriously. I assume that training individuals to be more rational is part of your optimal strategy. So I take it that your argument is that we should emphasize individual training less relative to designing institutions which facilitate truth-finding despite our biases. Am I understanding you correctly?

Replies from: RobinHanson
comment by RobinHanson · 2009-03-17T16:54:45.165Z · LW(p) · GW(p)

Yup.

comment by Alan · 2009-03-17T16:25:09.816Z · LW(p) · GW(p)

Robin wrote: "Martial arts can be a good training to ensure your personal security, if you assume the worst about your tools and environment." But this does not mean that martial arts cannot also be good training if you assume a more benign environment. Environments are known to be unpredictable.

One of the most important insights a person gains from martial arts training is to understand one's limits--which relates directly to the bias of overconfidence. If martial arts training enables a person to project an honestly greater degree of self confidence, then the signaling benefit alone may merit the effort. Does rationality training confer analogous signaling benefits?

Replies from: Nebu
comment by Nebu · 2009-03-17T17:25:38.779Z · LW(p) · GW(p)

One of the most important insights a person gains from martial arts training is to understand one's limits--which relates directly to the bias of overconfidence.

Good point. Fortunately, I think the OB and LW blogs have helped me understand my limits, in the sense that they showed me many errors in rationality in the ways I used to (and unfortunately still do) think.

If martial arts training enables a person to project an honestly greater degree of self confidence, then the signaling benefit alone may merit the effort. Does rationality training confer analogous signaling benefits?

It probably does. If you go to cocktail parties tossing around terms like "Bayesian updating with Occam priors" or "Epistemic rationality" and sound like you really know what you're talking about, then you'll probably exude this signal of being a fairly smart person.

But you have to ask yourself if your goal is to sound smart, or to actually be smart.

comment by JulianMorrison · 2009-03-17T15:41:21.865Z · LW(p) · GW(p)

An attempt even to find Einsteins is doomed unless they make up a large enough fraction of the population. (Cf. Eliezer's introduction to Bayes.)

On the other hand, a purely aggregate approach is a dirty hack that somehow assumes no (irrational) individual is ever able to be a bottleneck to (aggregate) good sense. It's also fragile to societal breakdown.

It seems evident to me that what's really urgent is to "raise the tide" and have it "lift all boats". Because then, tests start working and the individual bottleneck is rational.

Replies from: None
comment by [deleted] · 2009-03-17T16:47:29.264Z · LW(p) · GW(p)

I predict that aggregate approaches are going to be more common in the future than waiting around for an Einstein-level intelligence to be born.

For example, Timothy Gowers recently began a project (Polymath1) to solve an open problem in combinatorics through distributed proof methods. Current opinion is that they were probably successful; unfortunately, the math is too hard for me to render judgment.

Now, it's possible that they were successful because the project attracted the notice of Terence Tao, who probably qualifies as an Einstein-level mathematician. If you look at the discussion, Tao and Gowers both dominate it. On the other hand, many of the major breakthroughs in the project didn't come from either of them directly, but from other anonymous or pseudonymous commenters.

The time of an Einstein or Tao is too valuable for them to do all the thinking by themselves. We agree that raising the tide is absolutely necessary for this kind of project to grow.

Replies from: ramana-kumar
comment by Ramana Kumar (ramana-kumar) · 2009-10-31T11:18:23.246Z · LW(p) · GW(p)

For Polymath, the kind of desired result of collaboration is clear to me: a (new) proof or disproof of a mathematical statement.

What is the kind of desired result of collaborating rationalists?

From the talk about prediction markets it seems that "accurate predictions" might be one answer. But predictions of what? Would we need to aggregate our values to decide what we want to predict?

The phrase in Robin's post was "join together to believe truth", so perhaps the desired result is more true beliefs (in more heads)? Did you envision making things that are more likely to be true more visible, so that they become defaults? In other words, caching the results of truth-seeking so they can be easily shared by more people?

comment by Marshall · 2009-03-17T19:01:29.849Z · LW(p) · GW(p)

"How can we join together to believe truth?"

Yes!

I am being deluged here on LW by all the posts and comments. Spending so much time in front of the screen does not seem sensible or rational.

What to do?

Replies from: ciphergoth, John_Maxwell_IV
comment by Paul Crowley (ciphergoth) · 2009-03-17T23:55:13.941Z · LW(p) · GW(p)

Spending so much time in front of the screen does not seem sensible or rational.

If you didn't have a better plan for making the world a better place already, then spending time thinking about how to improve the general level of optimisation for good things seems like one of the more productive ways to waste time on the Internet.

comment by John_Maxwell (John_Maxwell_IV) · 2009-03-17T23:19:36.791Z · LW(p) · GW(p)

Several weeks ago I unsuccessfully resolved to start doing community service every weekend.

comment by Roko · 2009-03-17T16:48:02.926Z · LW(p) · GW(p)

For example, with subsidized prediction markets, we can each specialize on the topics where we contribute best, relying on market consensus on all other topics.

If no one person has a good grasp of all the material, then there will be insights that are missed. Science in our era is already dominated by dumb specialists who know everything about nothing. EY's work has been so good precisely because he took the effort to understand so many different subjects. I'll bet at long odds that a prediction market containing an expert on evo-psych, an expert on each of five narrow AI specialisms, an expert on quantum mechanics, an expert on human biases, an expert on ethics, and an expert on mathematical logic would not even have produced FAI as an idea to be bet upon.

Replies from: RobinHanson
comment by RobinHanson · 2009-03-17T16:57:12.041Z · LW(p) · GW(p)

Combining two or even three particular topics can be the thing that you specialize in.

comment by JamesAndrix · 2009-03-17T15:57:12.823Z · LW(p) · GW(p)

I was just about to respond by asking whether you would advocate a website in which the beliefs of the members are aggregated based on their reliability; then I remembered: prediction markets.

I'm guessing you're not pushing a real prediction market due to legal issues, but why not create one that uses token points instead of real money?

My first thought was slightly different: have testable predictions, as in a market, but have the system treat each person's likelihood ratios as evidence (along with the tags for the prediction, to account for each person's area of expertise).

It seems to me that the real issue still is a supply of testable problems.

Replies from: RobinHanson, steven0461, MBlume
comment by RobinHanson · 2009-03-17T16:58:32.654Z · LW(p) · GW(p)

It does take work to create judgeable claims, but there are other real issues as well.

comment by steven0461 · 2009-03-17T15:58:32.260Z · LW(p) · GW(p)

I'm guessing you're not pushing a real prediction market due to legal issues, but why not create one that uses token points instead of real money?

Foresight Exchange

comment by MBlume · 2009-03-17T19:53:41.951Z · LW(p) · GW(p)

I'm guessing you're not pushing a real prediction market due to legal issues, but why not create one that uses token points instead of real money?

Laws can, and in this case should, be changed.

comment by CannibalSmith · 2009-03-17T15:33:21.313Z · LW(p) · GW(p)

Aren't we the supposed martial rationalists of humanity? Aren't we the ones being paid (I wish) to protect the neighborhoods from the marauding apologists? Aren't we the ones to go to the wild places and battle dragons?

comment by [deleted] · 2015-06-18T04:28:05.781Z · LW(p) · GW(p)

Division of labour

comment by advael · 2015-06-15T20:32:40.125Z · LW(p) · GW(p)

I would guess that martial arts are so frequently used as a metaphor for things like rationality because their value is in the meta-skills learned by becoming good at them. Someone who becomes a competent martial artist in the modern world is:

  • Patient enough to practice things they're not good at. Many techniques in effective martial arts require some counter-intuitive use of body mechanics that takes non-trivial practice to get down, and involve a lot of failure before you achieve success. This is also true of a variety of other tasks.

  • Possessing the fine balance of humility and confidence required to learn skills from other people. Generally if you're going to get anywhere in martial arts, you're not going to derive it from first principles. This is true of most human knowledge domains. Learning to be a student or apprentice is valuable, as is learning to respect the opinions of others when they demonstrate their competence.

  • Practiced in remaining calm and thinking strategically under pressure. If one is taught to competently handle a high-stress situation such as a physical fight, one can make decisions quickly and confidently even when stressed. This skill is useful for reasons I hope I don't have to go into depth on.

  • Able to engage mirror neurons to understand and reason about the nonverbal behavior of other humans, and somewhat understand their intentions and strategies. This is useful in a fight and taught by many martial arts, but extremely useful in other contexts, not the least of which being negotiation with semi-cooperative individuals.

  • Probably pretty physically fit. It's a decent whole-body exercise regimen, and there are numerous benefits to exercising frequently and keeping in good shape. It is probably not the most efficient exercise regimen out there by a long shot, but it may be one that is intrinsically fun to do for a lot of people, and thus it's likely that they'll stick with it.

  • Almost incidentally, reasonably capable of defending oneself in one of the few instances where civilized behavior temporarily breaks down (An argument with a seemingly reasonable person who quickly becomes unreasonable, perhaps alcohol is involved? I don't know. Fights are low-stakes and uncommon these days but they still happen). This is kind of a weird edge case in a modern society but might non-trivially prevent injury or gain you status when it comes up.

Note that there are a lot of vectors by which one can gain these meta-skills. While there are a bunch of martial arts enthusiasts out there who would probably claim that martial arts have the exclusive ability to grant you one or more of these, I really doubt that's the case. However, martial arts get a pretty good amount of coverage in real and fictional cultural reference frames that we can be reasonably confident most people are familiar with, and it's not a bad example of a holistic activity that can hone a lot of these meta-skills.

It's also worth noting that while the skills involved in interacting with a society of people you trust and want to work with are often different from the skills involved in becoming a competent individual, many of the latter can be helpful in the former. I would much rather be on a team with a bunch of people who understand the meta-skill of staying calm under pressure, or the meta-skill of making their beliefs pay rent, than be on a team with a bunch of people who don't. Aggregated individual prowess isn't the only factor for group success, and it may not even be the most important one, but it certainly doesn't hurt.

comment by b1shop · 2010-08-15T10:35:49.988Z · LW(p) · GW(p)

For example, with subsidized prediction markets, we can each specialize on the topics where we contribute best, relying on market consensus on all other topics.

Why do these prediction markets have to be subsidized? In the U.S., online prediction markets are currently considered internet gambling and are hampered. Is there a reason legal, laissez-faire prediction markets couldn't take hold?

Replies from: gwern
comment by gwern · 2010-08-15T10:48:17.112Z · LW(p) · GW(p)
  • Prediction markets are currently immature and controversial, and so might have trouble bootstrapping.
  • Their legality is problematic. (The IEM had to get a special no-action exemption from the CFTC to run.)
  • Prediction markets like Intrade currently are structured in ways bad for financial return. (IIRC, the issue is that Intrade offers a very low or no interest rate on deposited funds - the float is a source of profit for it.)
  • Markets on long-run questions, like many possible scientific or academic ones, are not financially viable (see 'opportunity cost'), while sports and gambling bets are inherently short-term, resolving within no more than a year.
  • A succession of short-term markets might help, but then you have the problem that with the naturally low prices on 'success' shares, it's hard to make any profit. (E.g., imagine a 'cold fusion in 2010' market: it'd be at a penny or two. Suddenly shares double due to a new paper! But because it's so lightly traded, you only made a dime on your prescient long position; see the arithmetic sketch at the end of this comment.)

(Did I miss any?)

Hence, subsidies. Peter McCluskey ran a market-maker bot (OB coverage). Some traders discuss bots; note that they say it's hard to arbitrage Intrade & Betfair in part due to low volume and fees and costs (McCluskey's page mentions that Intrade "agreed not to charge any trading or expiry fees".)
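
To put the thin-market arithmetic above in one place, here is a toy sketch; the share count and prices are hypothetical, chosen only to match the penny-share example:

```python
# Hypothetical numbers for the 'cold fusion in 2010' example:
shares = 10            # light trading means you can only buy a handful
buy_price = 0.01       # dollars per share
sell_price = 0.02      # price doubles after the new paper

profit = shares * (sell_price - buy_price)
print(f"return: {sell_price / buy_price - 1:.0%}, profit: ${profit:.2f}")
# return: 100%, profit: $0.10 -- a dime, despite being exactly right
```

Even a perfectly prescient trader earns pocket change in absolute terms, which is one more reason subsidies (such as an automated market maker) are needed to make participation worthwhile.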

Replies from: b1shop
comment by b1shop · 2010-08-15T11:04:44.844Z · LW(p) · GW(p)

Thanks. I've been curious about the interest question for a while.

Replies from: gwern, gwern
comment by gwern · 2010-08-15T11:16:58.027Z · LW(p) · GW(p)

Googling some more, relevant links are http://www.overcomingbias.com/2007/11/intrade-fee-str.html and http://bb.intrade.com/intradeForum/posts/list/4471.page

Probably could find more examples of how Intrade is not an optimal prediction market using this tag: http://www.overcomingbias.com/tag/prediction-markets

comment by PhilGoetz · 2009-03-17T16:04:42.842Z · LW(p) · GW(p)

The martial arts metaphor for rationality training seems popular at this website, and most discussions here about how to believe the truth seem to assume an environmental worst case: how to figure out everything for yourself given fixed info ...

A very good point!

But I can't easily explain why it is a good point without violating the ban on mention of AI.

This observation doesn't invalidate Less Wrong. Someone still has to study these things. But the emphasis on individualism here can diminish awareness of the big picture.

comment by zaph · 2009-03-17T13:59:47.338Z · LW(p) · GW(p)

I think it was just brainstorming based on Eliezer's post; he also wrote about the sanity waterline, which I see your rational-society approach fitting in with. Maybe a dojo is a bit extreme, but I think a zendo isn't implausible, with people working on rationality koans. Or maybe rationality group therapy, where people can express potential irrationality and receive non-judgemental feedback on it. Grassroots bottom-up approaches could work with larger top-down approaches to create the rational society, or whatever word Yvain might find less taboo :)

comment by Xaq · 2009-11-20T00:54:38.050Z · LW(p) · GW(p)

If one goes off the notions of others without coming to conclusions for themselves, they're just as blind as an evangelical Christian. True insight can only come from within. That's why reason is of premium importance.

It is important to note the difference between insight and belief, however; for insight is based on rationality and logic, whereas belief is based on primal emotions and instincts.

Replies from: wedrifid
comment by wedrifid · 2009-11-20T02:51:40.913Z · LW(p) · GW(p)

If one goes off the notions of others without coming to conclusions for themselves, they're just as blind as an evangelical Christian.

Evangelical Christians sometimes form their own insights and conclusions, even about things with religious significance.

comment by Roko · 2009-03-17T17:19:52.063Z · LW(p) · GW(p)

I should also add that I think group rationality techniques are important. We've already seen that being a good group rationalist means acting differently from just trying to be individually as accurate as possible. [In particular, you should not be swayed by what the rest of the group thinks.]

Replies from: steven0461, John_Maxwell_IV
comment by steven0461 · 2009-03-17T17:26:31.774Z · LW(p) · GW(p)

No, you should still be swayed, you just shouldn't represent the swaying as being independent analysis. You also should take into account that the opinions of other group members may have been caused by swaying rather than independent analysis, but that was already true in the individual accuracy case.

Replies from: Roko
comment by Roko · 2009-03-18T21:28:49.566Z · LW(p) · GW(p)

take into account that the opinions of other group members may have been caused by swaying rather than independent analysis, but that was already true in the individual accuracy case.

Right, I see. For a group of perfect rationalists, yes, I agree, at least to an extent.

The problem is that this is very hard to do in reality. If I have 15 commenters down/upvote a post or comment I make on LW, how do I know to what extent they're providing 15 distinct opinions vs. 1 opinion followed by 14 swingers? How do I estimate the swinginess coefficient? It seems that group rationality is maximized if individuals state their own opinions on a particular question independently of the group, and only update once a really overwhelming consensus is reached, some time after that particular discussion is over. The group's decision is then the average of n independent opinions. This would make for a very clever group iff each individual is quite clever.

I should emphasize: this will mean that the group (overall) displays smart behavior, but that the individuals do worse than they otherwise would.
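
As a toy illustration of this trade-off, here is a minimal simulation sketch; the 60% individual accuracy and the "copy the current majority" behavior are stand-in assumptions for whatever real commenters do, not anything measured:

```python
import random

def group_accuracy(n_members=15, p_correct=0.6, swing=0.0, trials=10_000):
    """Fraction of trials in which the final majority vote is correct.

    Each member privately guesses right with probability p_correct; with
    probability `swing` they ignore their own guess and copy whichever
    answer currently leads (on a tie, they fall back to their own guess).
    """
    wins = 0
    for _ in range(trials):
        votes = []  # True = voted for the correct answer
        for _ in range(n_members):
            lead = 2 * sum(votes) - len(votes)  # > 0 if 'correct' leads so far
            if lead != 0 and random.random() < swing:
                votes.append(lead > 0)                     # sway with the majority
            else:
                votes.append(random.random() < p_correct)  # independent opinion
        wins += 2 * sum(votes) > len(votes)
    return wins / trials

for swing in (0.0, 0.5, 0.9):
    print(f"swing={swing}: {group_accuracy(swing=swing):.2f}")
```

With swing at 0, the majority is right far more often than any individual; as swing rises, early votes dominate and the group's accuracy collapses back toward a single member's.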

Also, how relevant is Robin's paper on Aumann's agreement theorem for wannabe/imperfect Bayesians to this debate? It seems that he might (under certain assumptions) have proved the opposite of what I'm claiming here.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-18T22:04:10.210Z · LW(p) · GW(p)

Roko, when you run into a case of "group win / individual loss" on epistemic rationality you should consider that a Can't Happen, like violating conservation of momentum or something.

In this case, you need to communicate one kind of information (likelihood ratios) and update on the product of those likelihood ratios, rather than trying to communicate the final belief. But the Can't Happen is a general rule.
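
A minimal sketch of what such a protocol might look like, under the usual assumption that each member's evidence is conditionally independent given the hypothesis (the function name and numbers are illustrative, not anything specified in the thread):

```python
from math import prod

def pooled_posterior(prior_prob, likelihood_ratios):
    """Update a shared prior on the product of reported likelihood ratios.

    Each member reports P(their evidence | H) / P(their evidence | not-H)
    rather than a posterior belief, so no evidence is double-counted.
    """
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * prod(likelihood_ratios)
    return posterior_odds / (1 + posterior_odds)

# Three members, each with modest independent evidence for H:
print(pooled_posterior(0.5, [2.0, 3.0, 1.5]))  # 0.9 -- stronger than any one report
```

Every individual who adopts the pooled posterior ends up better informed than they were on their private evidence alone, which is the sense in which "group win / individual loss" can't happen here.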

Replies from: Roko, MichaelHoward
comment by Roko · 2009-03-19T02:11:23.111Z · LW(p) · GW(p)

you should consider that a Can't Happen

I'd like to see a proof if it's that fundamental. Is it theorem x.xx in one of the Aumann agreement papers?

comment by MichaelHoward · 2009-03-18T22:47:07.265Z · LW(p) · GW(p)

Roko, when you run into a case of "group win / individual loss" on epistemic rationality you should consider that a Can't Happen, like violating conservation of momentum or something.

Really!? No exceptions?

This doesn't feel right. If it is right, it sounds important. Please could you elaborate?

comment by John_Maxwell (John_Maxwell_IV) · 2009-03-17T23:25:39.784Z · LW(p) · GW(p)

Not being swayed means not taking advantage of your group membership.