No One Can Exempt You From Rationality's Laws

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-07T17:24:44.000Z · LW · GW · Legacy · 53 comments


Traditional Rationality is phrased in terms of social rules, with violations interpretable as cheating—as defections from cooperative norms. If you want me to accept a belief from you, you are obligated to provide me with a certain amount of evidence. If you try to get out of it, we all know you’re cheating on your obligation. A theory is obligated to make bold predictions for itself, not just steal predictions that other theories have labored to make. A theory is obligated to expose itself to falsification—if it tries to duck out, that’s like trying to duck out of a fearsome initiation ritual; you must pay your dues.

Traditional Rationality is phrased similarly to the customs that govern human societies, which makes it easy to pass on by word of mouth. Humans detect social cheating with much greater reliability than isomorphic violations of abstract logical rules.1 But viewing rationality as a social obligation gives rise to some strange ideas.

For example, one finds religious people defending their beliefs by saying, “Well, you can’t justify your belief in science!” In other words, “How dare you criticize me for having unjustified beliefs, you hypocrite! You’re doing it too!”

To Bayesians, the brain is an engine of accuracy: it processes and concentrates entangled evidence into a map that reflects the territory. The principles of rationality are laws in the same sense as the Second Law of Thermodynamics: obtaining a reliable belief requires a calculable amount of entangled evidence, just as reliably cooling the contents of a refrigerator requires a calculable minimum of free energy.
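The "calculable amount" here can be made concrete. As a rough sketch (an illustration in the spirit of the analogy, not anything from the original post; the function name and numbers are my own), Bayes' theorem counts how many independent pieces of evidence of a given strength are needed to raise a hypothesis from a low prior to a target confidence:

```python
import math

def evidence_needed(prior, target, likelihood_ratio):
    """Number of independent observations, each with likelihood ratio
    P(e|H)/P(e|~H), needed to raise the posterior probability of H
    from `prior` to at least `target`."""
    prior_odds = prior / (1 - prior)
    target_odds = target / (1 - target)
    # Each observation multiplies the posterior odds by the
    # likelihood ratio, so we need enough factors to cover the gap.
    return math.ceil(math.log(target_odds / prior_odds) / math.log(likelihood_ratio))

# A one-in-a-million hypothesis, with each observation giving
# 4:1 evidence, needs 14 such observations to reach 99% confidence:
print(evidence_needed(1e-6, 0.99, 4))  # -> 14
```

No persuasion or social agreement changes the count; with fewer observations, the posterior simply does not get there.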

In principle, the laws of physics are time-reversible, so there’s an infinitesimally tiny probability—indistinguishable from zero to all but mathematicians—that a refrigerator will spontaneously cool itself down while generating electricity. There’s a slightly larger infinitesimal chance that you could accurately draw a detailed street map of New York without ever visiting, sitting in your living room with your blinds closed and no Internet connection. But I wouldn’t hold your breath.

Before you try mapping an unseen territory, pour some water into a cup at room temperature and wait until it spontaneously freezes before proceeding. That way you can be sure the general trick—ignoring infinitesimally tiny probabilities of success—is working properly. You might not realize directly that your map is wrong, especially if you never visit New York; but you can see that water doesn’t freeze itself.

If the rules of rationality are social customs, then it may seem to excuse behavior X if you point out that others are doing the same thing. It wouldn’t be fair to demand evidence from you, if we can’t provide it ourselves. We will realize that none of us are better than the rest, and we will relent and mercifully excuse you from your social obligation to provide evidence for your belief. And we’ll all live happily ever afterward in liberty, fraternity, and equality.

If the rules of rationality are mathematical laws, then trying to justify evidence-free belief by pointing to someone else doing the same thing will be around as effective as listing thirty reasons why you shouldn’t fall off a cliff. Even if we all vote that it’s unfair for your refrigerator to need electricity, it still won’t run (with probability ~1). Even if we all vote that you shouldn’t have to visit New York, the map will still be wrong. Lady Nature is famously indifferent to such pleading, and so is Lady Math.

So—to shift back to the social language of Traditional Rationality—don’t think you can get away with claiming that it’s okay to have arbitrary beliefs about XYZ, because other people have arbitrary beliefs too. If two parties to a contract both behave equally poorly, a human judge may decide to impose penalties on neither. But if two engineers design their engines equally poorly, neither engine will work. One design error cannot excuse another. Even if I’m doing XYZ wrong, it doesn’t help you, or exempt you from the rules; it just means we’re both screwed.

As a matter of human law in liberal democracies, everyone is entitled to their own beliefs. As a matter of Nature’s law, you are not entitled to accuracy. We don’t arrest people for believing weird things, at least not in the wiser countries. But no one can revoke the law that you need evidence to generate accurate beliefs. Not even a vote of the whole human species can obtain mercy in the court of Nature.

Physicists don’t decide the laws of physics, they just guess what they are. Rationalists don’t decide the laws of rationality, we just guess what they are. You cannot “rationalize” anything that is not rational to begin with. If by dint of extraordinary persuasiveness you convince all the physicists in the world that you are exempt from the law of gravity, and you walk off a cliff, you’ll fall. Even saying “We don’t decide” is too anthropomorphic. There is no higher authority that could exempt you. There is only cause and effect.

Remember this, when you plead to be excused just this once. We can’t excuse you. It isn’t up to us.

1Leda Cosmides and John Tooby, “Cognitive Adaptations for Social Exchange: Evolutionary Psychology and the Generation of Culture,” in The Adapted Mind, ed. Jerome H. Barkow, Leda Cosmides, and John Tooby (New York: Oxford University Press, 1992), 163–228.

53 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Jacob_Stein · 2007-10-07T18:28:42.000Z · LW(p) · GW(p)

My blog happens to be absolutely packed with rational proof of Orthodox Judaism and proof against atheism.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-07T18:38:23.000Z · LW(p) · GW(p)

Let anyone who ever thinks that I am attacking a strawman visit the above blog.

comment by Jacob_Stein · 2007-10-07T18:41:53.000Z · LW(p) · GW(p)

Let anyone who thinks I am mistaken take the trouble to explain why. Unless they cannot.

comment by savagehenry · 2007-10-07T18:59:55.000Z · LW(p) · GW(p)

Fantastic post Eliezer, many of your recent posts have been articulating thoughts that I've been mulling about in my head over the last year or so. This one especially since I had an argument with a friend on this very subject not even a week ago haha. When you get around to publishing all of this as a book I will definitely buy a copy for myself and for my friend.

Also, Jacob, I glanced at your blog and saw the post on human evolution, if that is any indication of the quality of proof you have for Orthodox Judaism I don't think I need to read any further.

comment by Jacob_Stein · 2007-10-07T19:02:40.000Z · LW(p) · GW(p)

Savagehenry, I would be happy to hear you enlighten us with your thoughts. If you indeed have any.

comment by Adirian · 2007-10-07T20:03:35.000Z · LW(p) · GW(p)

Jacob - going into detail about why atheists are evil, violent, pornography-loving, science-worshiping people doesn't disprove their worldview. (And I find it interesting that you claim that atheists go into science, rather than scientists choosing atheism - but then, you don't seem to know what science is, so this shouldn't surprise me.)

Incidentally, out of eight models for quantum mechanics, at least two continue to permit determinism, which, notably, is another thing you erroneously attribute to atheism. One is neo-realism, of which Einstein was a follower; the other is the multiverse theory. One of many matters on which you get your facts entirely wrong.

Replies from: DanielLC, Hawisher
comment by DanielLC · 2011-09-20T06:30:33.220Z · LW(p) · GW(p)

And since this has been posted, Eliezer has talked about quantum mechanics at length. He's for a deterministic interpretation, for reasons other than just favoring determinism.

comment by Hawisher · 2012-10-01T15:49:25.968Z · LW(p) · GW(p)

I would argue that one's religion or lack thereof is typically determined before one chooses a profession. I, personally, am religious, but I still think this guy is being ridiculous. I think that God made a bunch of awesome things, and one of the awesome things He made is a world that works without us having to take it apart, look under every rock, and go "LOOOK!!!! GODDDDDD!!!!! HEATHENS! I WAS RIIIIIIIIIGHT!"

Science is awesome. Rationality is awesome. Evolution is as close to fact as science can give us. You do your religion a grave disservice, Jacob.

Replies from: Osuniev
comment by Osuniev · 2013-02-25T23:44:24.601Z · LW(p) · GW(p)

Upvoted for not taking arguments as soldiers.

comment by Jacob_Stein · 2007-10-07T20:14:02.000Z · LW(p) · GW(p)

Quantum mechanics has posed a very serious challenge to determinism. http://en.wikipedia.org/wiki/Determinism#Determinism.2C_quantum_mechanics_and_classical_physics

However, I would not expect someone as deeply in denial of reality as an atheist to be able to admit that.

comment by anonymous9 · 2007-10-07T20:23:28.000Z · LW(p) · GW(p)

Eliezer,

Given that Jacob Stein claims the reason Darwinism is accepted by scientists is that it provides justification for raping children, I think it's fair to say that he is a straw man.

Anyway, great post. As usual.

comment by Jacob_Stein · 2007-10-07T20:36:19.000Z · LW(p) · GW(p)

I guess we should just accept Darwinism and abandon monotheism because scientists say so. That appeal to authority is SO convincing. I can't imagine why it hasn't occurred to me.

I suppose 300 years ago, when every professor at Oxford would have told me that Jesus Christ was my Lord and Savior, I would have been a fool for denying that as well.

comment by Richard_Hollerith · 2007-10-07T20:49:36.000Z · LW(p) · GW(p)

Not even a vote of the whole human species can obtain mercy in the court of Nature.

Because the Vote was corrupted in some way: conglomerates own the media; racism and sexism persist; the Voters have internalized their oppression; violence is institutionalized in Western culture.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-07T20:54:49.000Z · LW(p) · GW(p)

Hollerith, was that a joke? Sarcasm sometimes doesn't carry well over text.

comment by Tiiba2 · 2007-10-07T20:58:06.000Z · LW(p) · GW(p)

Between appeal to valid authority and guilt by association, I choose argumentum ad logicam.

comment by Jacob_Stein · 2007-10-07T21:00:54.000Z · LW(p) · GW(p)

Ah, very Latin of you. Is argumentum ad logicam something like argumentum ad webicam?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-07T21:07:44.000Z · LW(p) · GW(p)

Jacob, please ensure that at any given point, no more than two of the comments listed on the "Recent Comments" display are from you. I would also suggest that anyone interested in engaging Jacob on this, do so on his blog, rather than here.

comment by Jacob_Stein · 2007-10-07T22:09:26.000Z · LW(p) · GW(p)

I am looking forward to reading any comments, however many, which anyone may wish to leave on my blog. I am sure I will find them interesting.

comment by James_Bach · 2007-10-07T22:09:44.000Z · LW(p) · GW(p)

Your point would be so much stronger, Eliezer, if you were allowed to ignore the role of models in rationality. But in all cases an infinity of alternative models may also account for what you think you have proven rationally. In your terms, no one can revoke the law that any belief in "accurate beliefs" rests on a priori assertions about what can exist and what constitutes evidence. It rests on a priori structures in your brain, designed to notice some things and not others.

Rationality is heuristic. In the case of waiting for water to spontaneously freeze at room temperature, it may be a marvelous heuristic not to hold your breath, but that's a straw man. What I'm worried about as a post-modern skeptic is what ways of organizing the world you and I have systematically failed to consider in our rational analyses. Because many internally consistent constructions of the world may be incommensurable, and yet lead not only to different predictions, but incommensurable predictions.

When you write about rationality as a way to defeat self-certainty, I'm excited and grateful. That's also how I use it. I'm more nervous when you write as if rationality is a tool that inevitably leads to accurate beliefs.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-07T22:40:09.000Z · LW(p) · GW(p)

When you write about rationality as a way to defeat self-certainty, I'm excited and grateful. That's also how I use it. I'm more nervous when you write as if rationality is a tool that inevitably leads to accurate beliefs.

I think, if you'll look at what's said above, the idea is that to get anywhere we must follow a course that partakes of the rationality-pattern, of evidence-processing. Nothing is said of all problems being solvable. Nothing is being said of humans becoming perfect through following, with their finite computing power and noisy brains, some rationalist's apprehension of the Way.

But it is equally an error to say, "This Way will not always give you a solution; therefore, it must not be the true Way; therefore, to draw maps without looking must be the true Way, and will always yield a solution."

More on the concept of "a priori" tomorrow.

comment by Doug_S. · 2007-10-07T22:53:22.000Z · LW(p) · GW(p)

I suspect that if I tried to argue with Jacob Stein, the discussion would eventually turn into something like this.

Me: "X is so."
Jacob: "No, ~X is so."
Me: "But experts say X is so."
Jacob: "But these other experts say ~X is so."
Me: "Your experts are wrong and incompetent."
Jacob: "No, your experts are wrong and incompetent."
Ad infinitum.

We'd just contradict each other and get nowhere.

Replies from: Hughdo, amaury-lorin
comment by Hughdo · 2012-04-18T00:51:49.058Z · LW(p) · GW(p)

That's when you have to pull up the evidence, and compare the life's work of thousands of scientists examining the physical world to the life's work of thousands of people perusing a few disreputable books for meaning.

comment by momom2 (amaury-lorin) · 2024-09-18T11:29:44.180Z · LW(p) · GW(p)

Seems like you need to go beyond arguments of authority and stating your conclusions and instead go down to the object-level disagreements. You could say instead "Your argument for ~X is invalid because blah blah" and if Jacob says "Your argument for the invalidity of my argument for ~X is invalid because blah blah" then it's better than before because it's easier to evaluate argument validity than ground truth.
(And if that process continues ad infinitum, consider that someone who cannot evaluate the validity of the simplest arguments is not worth arguing with.)

comment by Richard_Hollerith · 2007-10-07T22:53:46.000Z · LW(p) · GW(p)

Yes, joke. Irony often yields wonderful conciseness, but can make the person I am replying to feel attacked, so maybe I should refrain.

Although you display a healthy scepticism towards America's civic religion, many of your colleague-bloggers here are pickled in it, as evidenced by the fascination with the agreement theorem and with flimsy evidence for the genius of crowds for estimating weights and numbers of marbles, so I get eager to counter their pernicious influence.

I was happy to see evidence that you have looked at Unqualified Reservations, which IMO is second only to Nick Szabo's blog on our legal and political institutions.

comment by Jacob_Stein · 2007-10-07T23:43:50.000Z · LW(p) · GW(p)

Doug, I hope I'm not exceeding too much my comment quota. But why be so pessimistic? The Talmud states "A wise person is someone who learns from everyone." Even if we do not reach complete agreement, we may pick up a few good ideas from each other.

comment by g · 2007-10-08T00:30:30.000Z · LW(p) · GW(p)

Why be so pessimistic? Well, right now this minute the latest article on your blog suggests that if an eminent scientist declares that evolution is a fact then one should consider seriously that "quite possibly" he "may well" be saying it only because he's a child-molesting predator.

Whether this indicates that you're incredibly stupid, or that you simply don't care whether or not what you say is true, or that you think there's nothing wrong with slandering thousands of people in this way, or that you're living in some separate reality from the rest of us, doesn't really matter; it's already sufficient to make my personal estimate of the probability of any worthwhile interaction well below 1%. (And also to make it very difficult for me to believe that you have any sincere intention of "picking up good ideas from each other".)

comment by g · 2007-10-08T00:31:22.000Z · LW(p) · GW(p)

(Incidentally, I think it very unlikely that incredible stupidity is the explanation. That is not, in this context, to your credit.)

comment by Jacob_Stein · 2007-10-08T10:05:36.000Z · LW(p) · GW(p)

g, I am merely suggesting that we take everything with a grain of salt. I'm sorry if you have a problem with that. Perhaps you are accepting atheism as uncritically as you previously accepted Christianity.

comment by g · 2007-10-08T10:15:35.000Z · LW(p) · GW(p)

No, Jacob, you aren't merely saying that and it's transparently obvious that you aren't merely saying that. To think that a good way of merely saying that is to do as you did -- or even to think that any sane person would believe you when you claim it -- would require that incredible stupidity I mentioned, and I don't think you're actually incredibly stupid.

Anyway, I shall now take Eliezer's advice and stop attempting to discuss things with you here. If I'd noticed that an anonymous commenter here had already drawn attention to your odious comments about scientists and believers in evolution, I wouldn't have bothered in the first place. My apologies to readers here for wasting bandwidth.

comment by Tom3 · 2007-10-08T10:31:33.000Z · LW(p) · GW(p)

Greatest OB discussion thread ever.

comment by Jacob_Stein · 2007-10-08T12:41:29.000Z · LW(p) · GW(p)

g, I would also like to point out that the fact that you previously embraced Christianity and currently believe in evolution with equal enthusiasm does not demonstrate that you are a foolish dupe. I would merely say that it's a possibility one should consider when debating with you.

comment by RIchard_Hollerith3 · 2007-10-08T14:36:06.000Z · LW(p) · GW(p)

Here is another comment on democracy, and I warn the reader of highly provocative and unorthodox opinions ahead.

Here is a quick example of what I mean when I say that some of the bloggers here are pickled in our civic religion:

As for liberal democracy, it's clearly an error to assert without further argument that liberal democracy will solve all future problems. But it is not a mistake to say that it is far and away the most successful thing that humans have ever come up with, and so that it is the best framework in which to try to address future problems.

Note that it does not say, "the most successful political system," which is IMO a reasonable assertion, but rather, "the most successful thing," e.g., more successful than our science and technology.

If liberal democracy is more potent or more successful than science and technology, why have there been no significant innovations (or improvements) in that tradition since the Jeffersonian- Washingtonian- Hamiltonian- Franklin innovation? (The civil-rights movement was not a significant innovation IMO, but merely a more consistent holding of the system to promises already made by the original (Jeffersonian et al) innovation. Also, even if we count it as a significant innovation, that makes only one significant innovation in 200 years. The parliamentary innovation is an innovation only if you disbelieve the Jeffersonian argument about the separation of powers and about the advisability of intentionally making it hard to institute sweeping changes.)

Since the last important innovation in liberal democracy, our (Baconian- Galilean- Newtonian) scientific tradition has seen vigorous improvement. Namely, we have seen the Frege- Hilbert- Russell innovation in mathematical logic, the Bayesian- Laplacean- Goodean- Jaynesian innovation (well, the Goodean- Jaynesian part of it anyway: Bayes predates the American revolution and Laplace was 32 when the war ended), the Darwinian- Wallacean innovation, the Lorentz- Poincaré- Einstein innovation, the innovation that is quantum physics, the Williams- Maynard Smith- Hamilton- Trivers innovation in evolutionary psychology, the Kahneman- Tversky innovation.

Similarly, since the last important innovation in liberal democracy, we have witnessed impressive improvement in our technology: the Industrial Revolution, the Eli Whitney- Henry Ford- Frederick Winslow Taylor- Peter Drucker- Deming innovation for running manufacturing firms, the von Neumann- Turing- Aiken- Backus innovation of the digital computer, the Licklider- Roberts- Cerf- Engelbart- Saltzer- Reed- Clark innovation in packet-switched networking, the Sutherland- Nelson- Engelbart- Berners-Lee- Cunningham- Sanger- Wales innovation in hypertext and communications, etc.

The idea that majority votes tend to produce correct or ethical decisions (either directly or via elected representatives) and the idea that if they do not, then they can be made to do so by promoting social justice, social, racial or economic equality, nonviolence or wider participation in elections and in the public discourse leading up to elections -- that is a really silly and poisonously false idea.

In reality, the capacity to make correct and ethical decisions in our complex modern world is distributed extremely unevenly in the population, and no New Deal or Great Society program or progressive agenda is going to change that fact in any relevant time frame.

At the Singularity Summit, a woman in the audience asked, "I am an artist. How can I participate in the implementation of the singularity?" (not verbatim). Well, the answer is: unless you are an extremely unusually rational artist, you can't. Your best course of action is not to try to add your voice to the conversation. If you want to help, send money, or if you seek a more personal involvement, befriend a singularitarian and share with him the knack for pleasure and delightful experience that many artists have.

Is Eliezer going to tell me that my answer to the artist is wrong?

Well, maybe he is. After all, if I understand correctly he depends for his living on donations to the Singularity Institute. Agreeing with me will alienate most prospective donors. They say that it is impossible to convince a man of a truth if his livelihood depends on his not understanding it. Although I believe Eliezer to be much less prone to bias than most, maybe this bias ("livelihood bias"? "bias towards wanting to eat and make the car payment"?) is too much for him to overcome.

Arrogance is not a very nice trait, but does believing oneself to be more rational than most people always entail arrogance? Is there no chain of experiences a human being can undergo under which it is rational to conclude that one is much more rational than the average person?

One important reason liberal democracy and later elaborations involving social justice, equality, nonviolence, and universal suffrage have so few thoughtful critics is that those with the skill and knowledge to critique it tend to be employed as scholars, scientists or at least as professionals of some sort, and the ideologues will get you fired from these sorts of jobs if your critique is too successful at finding an audience. Also, it can be quite costly to one's career or one's social standing to create the perception among prospective friends and partners of being recklessly impolitic or dangerously heterodox.

Admittedly, another (quite sensible) reason liberal democracy has so few thoughtful critics is that the two (nondemocratic) political innovations of the 20th Century that were tried on a large scale in the industrial world went disastrously wrong and killed many people. So let me stress that I am not advocating another large-scale experiment in government. I explain the reason for my critiquing of democracy below.

Again, I repeat: most of our received political wisdom is really silly and poisonously false.

It would take a concerted campaign of organized violence to correct the problem, however, because you have to divest the ideologues of their power (particularly in the universities and the school system), and if you were to succeed, then human nature being what it is, some new political or religious orthodoxy would take its place in a few generations. Also, there is nothing that the ideologues are likely to do that cannot be corrected over the course of a few human generations. Russia, Central Asia and Eastern Europe were damaged by their 70-year experiment with Marxist-Leninist ideology, but there is no reason to think that they will not recover from the damage in a few generations. I do not mean to dismiss the real suffering and loss of human life and human potential caused by that experiment in Marxism-Leninism, or the (less severe) suffering caused by the political system of the Western democracies, but since you and I have only very coarse control over the political environment, we cannot (without ultratechnologies) prevent it from doing significant damage.

It is important not to lose track of the most important consideration, which in this case is as follows: ever since Kepler, Galileo and Newton, our civilization has seen almost continuous progress in science, technology and the amount of wealth available to the average person. As a result, a young prospective rationalist has available leisure hours, bodies of knowledge and tools (like the Internet) far better suited to making rapid progress in learning rational skill than those available to previous generations. Our political system is not the ideal one to nurture the continuation of this progress (e.g., it has a worrisome tendency to enforce an orthodoxy on social scientists), but it is not too bad, and is probably the best of all the systems that have been tried. And changing it would be very disruptive and, well, bloody. So it is probably best to leave it standing as long as it continues to allow scientific, technological and economic progress.

The one exception I know of to the general rule that our silly and poisonous political culture cannot do any damage that cannot be corrected in a few generations is the project Eliezer started, namely, the deliberate creation of an intelligence explosion through AI programming. A mistake there can persist for billions of years and indeed might even persist in ways that cannot be adequately described by the passage of time. Moreover, in 2004 or so, Eliezer authored a document that was titled Collective Volition and is now titled Coherent Extrapolated Volition, which in my humble opinion shows Eliezer to have been as of 2004 too credulous and too enamoured of this idea that important decisions come out better when as many people as possible have a say or a vote or an influence. Note the similarity between the title "Collective Volition" and the fatuous platitude "the Will of the People".

Individuals like Eliezer who have mastered a great deal of science at a young age will tend to have come from loving and saner-than-usual families and to be able to attract saner-than-usual friends and colleagues, so it is possible that they find it hard to imagine fully the mendacity, fatuousness and zealotry of the idealists in our "opinion-making" professions and the ruthless careerism and casual butchering of truth of the realists in our opinion-making professions. It is of course our opinion-makers who create and refine our political culture.

Since I have lived my life on the margins of our society and have depended often on ordinary professionals (in my case, doctors, other health-care providers, a few lawyers) with no particular distinction in rationalist skill, maybe I can convey to the reader just how shoddily ordinary professionals treat evidence and treat hypotheses.

It would serve no purpose IMO to challenge the democratic ideologues on, e.g., Daily Kos or indeed in the vast majority of public forums. It would merely ignite a nasty flamewar and it is unlikely to change anyone's opinion.

In contrast, the presence of Eliezer and young Eliezer wanna-bes on this blog makes it worthwhile for somebody (me if no one else steps up) regularly to criticize democratic political ideals, to try to neutralize the flood of democratic ideology and ideals (as exemplified by the quote above).

Again, I ask the reader to consider the proposition that most of our political traditions are silly, fatuous and false -- the modern equivalent of Medieval Catholic Christianity. Our political culture enjoys its near monopoly on political opinion, like I said, by making life miserable for dissenters and enacting quite serious and severe punishments on dissenters.

Eliezer has over the last few months presented an accomplished technical explanation of how an individual human being makes good decisions (and how that skill might be improved). His explanations are free from slogans. Everything reduces to the non-mysterious operation of physical laws. When his explanations depend on or refer to received wisdom (about physics or neuroscience, for example), there is no reason to believe that that received wisdom is maintained by the punishment of dissenters, the removal of dissenters from positions of visibility (e.g., academic and journalistic positions), or the denouncing of dissenters as immoral or hateful.

Dear reader, ask yourself: Where can I find a corresponding detailed technical explanation of how my favorite political system makes a good decision? What is the causal mechanism in the political system that produces the correct or ethical decisions? How many decibels of evidence support the hypothesis of the existence of the causal mechanism? Has anyone calculated that quantity? The reader should demand detailed answers free from mysteries and from trite slogans that derive their persuasive power from endless repetition. When the voting scheme is run in a "controlled experiment", meaning one in which the voting system is asked to answer a mathematical, scientific, technological or economic question to which the answer is already known, does it in fact produce the correct answer?

Replies from: Polymeron, dmitrii-zelenskii
comment by Polymeron · 2011-03-06T15:58:42.720Z · LW(p) · GW(p)

These are all good and well as observations go, but it is unclear what alternative you are proposing, if any.

I would also like to point out that, once you start discriminating between who should have more (or any) weight in decision-making, the biases of whoever is making said discrimination could very well result in excluding beneficial or even indispensable viewpoints for whatever decision is being made. That isn't to argue that extreme equality of decision-making power is optimal; but it does raise an important issue with systems that lack it, which needs to be addressed in any alternative method. There are other similar pitfalls, but I think this may be the main one.

comment by Дмитрий Зеленский (dmitrii-zelenskii) · 2020-03-20T13:28:35.459Z · LW(p) · GW(p)

I would say that you're rather strawmanning the author of HPMoR where some reasons to distrust democracy are nicely illustrated - by (spoiler, now rot13ed) gur znva ureb thvyg-gevccrq vagb gnxvat n yvgreny gevc gb Nmxnona naq uvf orfg sevraq nyzbfg trggvat nabgure bar va funpxyrf.

Replies from: jeronimo196
comment by jeronimo196 · 2020-03-31T13:42:51.270Z · LW(p) · GW(p)

When the original comment was posted, those chapters of HPMoR were probably not written yet.

Also, spoilers are often written in ROT13 around here. (https://rot13.com/)

comment by michael_vassar3 · 2007-10-09T05:53:23.000Z · LW(p) · GW(p)

Richard: I think that you misunderstand the reason for interest in the agreement theorem. The theorem is not seen as evidence in favor of America's civic religion, but rather as a particularly important proof of how far almost everyone departs from mutually recognized rationality almost all the time.

comment by RIchard_Hollerith3 · 2007-10-09T12:58:38.000Z · LW(p) · GW(p)

Vassar, maybe I misunderstand. I always thought informing someone about the agreement theorem will decrease the probability that that person will dare to dissent from a widely-held consensus. The belief that Majority Rule is an effective and reliable way to make correct or ethical decisions is of course a widely-held consensus. I would be obliged if those who write about the agreement theorem would periodically disclaim, or at least profess agnosticism towards, the notion that it applies to pursuits such as religion and politics in which dissenters face widespread ostracism and other sanctions. Nor do I buy that it applies to markets with strong network effects (meaning markets with a large "first-mover advantage" or in which an incumbent enjoys a large advantage over upstart competitors), e.g., the market for operating systems for personal computers, or the market for undergraduate education at highly competitive colleges.

If I may be allowed a short tangent from the topic of this post, my strongest objection to the enthusiastic application of Majority Rule and related ideas is directed not so much at the governance of nation-states as at, e.g., the important Wikipedia project and, e.g., the extremely important singularitarian project. Rather than elections, Wikipedia would better serve its public IMO by scrapping elections and making it as easy as possible for groups to fork Wikipedia. Putting the content under a permissive, open-source license was a major step in that direction. The 2 major remaining steps IMO are a technical provision by which every competing encyclopedia's software may be notified of every change to every Wikipedia page as soon as the change is saved and the development of search engines able to lead the surfer through the bewildering array of world views and editorial approaches nurtured by the governance structure I just described.

Replies from: David_Gerard
comment by David_Gerard · 2011-01-19T15:28:53.461Z · LW(p) · GW(p)

Somewhat off-topic and answering an ancient comment, but a useful reminder of how important endeavours can actually be horribly short of resources and much more fragile than people think:

Rather than elections, Wikipedia would better serve its public IMO by scrapping elections and making it as easy as possible for groups to fork Wikipedia. Putting the content under a permissive, open-source license was a major step in that direction. The 2 major remaining steps IMO are a technical provision by which every competing encyclopedia's software may be notified of every change to every Wikipedia page as soon as the change is saved and the development of search engines able to lead the surfer through the bewildering array of world views and editorial approaches nurtured by the governance structure I just described.

We know this :-) Wikipedia as monopoly provider of the world's encyclopedia is an anti-pattern. But the network effects are very powerful.

This means we are a great big single point of failure. Our single data centre is one hurricane away from disappearing. Even making a good backup of English Wikipedia is a remarkably difficult endeavour because it's SO BIG. A billion and a half words. Can you mentally grasp how big that is? I sure can't.

And the distributed network you outline would be a wonderful thing. But, like most things that it would be nice to do with Wikipedia, it requires coding on MediaWiki. Lots of people have "Why don't you ..." technical ideas - nearly none of them follow them with the requisite code.

The budget for this year includes a pile of cash on technical resources: a second data centre and a lot more coders. We're also developing a pattern where young whizzkids work for WMF for a couple of years at charity pay and go off to make a bundle in industry - and that's fine by us.

Replies from: Richard_Kennaway, JoshuaZ
comment by Richard_Kennaway · 2011-01-19T17:31:40.772Z · LW(p) · GW(p)

A billion and a half words. Can you mentally grasp how big that is?

1.5N GB, where N is the average bytes per English word. Multiply by, say, 5 for the HTML overhead and it would still all fit onto a 64GB memory stick uncompressed, though I'd want something faster for actually accessing it.

It would actually be larger, as you'd need all the images as well, and you'd want the ancillary things like wikisource and wiktionary (I don't know if those are independent projects or if they're included in your figure) but even so, it sounds like the whole thing would easily fit onto a typical hard disc.
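The estimate above is easy to check with a few lines of arithmetic. This is a minimal sketch, assuming a hypothetical average of 6 bytes per English word (including the trailing space) and the comment's assumed 5x HTML overhead:

```python
# Back-of-envelope size of English Wikipedia's text, following the
# estimate in the comment above. Both constants below are assumptions,
# not measured figures.
WORDS = 1_500_000_000    # 1.5 billion words, from the comment
BYTES_PER_WORD = 6       # assumed average, including the space
HTML_OVERHEAD = 5        # assumed markup multiplier

plain_gb = WORDS * BYTES_PER_WORD / 1e9
html_gb = plain_gb * HTML_OVERHEAD

print(f"plain text: ~{plain_gb:.0f} GB")  # ~9 GB
print(f"with HTML:  ~{html_gb:.0f} GB")   # ~45 GB
```

At roughly 45 GB, the HTML version would indeed fit on a 64 GB memory stick, as the comment says, though the real figure depends heavily on the assumed bytes-per-word.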

Replies from: kybernetikos, David_Gerard
comment by kybernetikos · 2011-01-19T17:39:47.899Z · LW(p) · GW(p)

I have all of the english wikipedia available for offline searching on my phone. It's big, sure, but it doesn't fill the memory card by any means (and this is just the default one that came with the phone).

For offline access on a windows computer, WikiTaxi is a reasonable solution.

I'd recommend that everyone who can, carry around an offline version of Wikipedia. I consider it part of my disaster preparedness, not to mention the fun of learning new things by hitting the 'random article' button.

comment by David_Gerard · 2011-01-19T18:05:02.330Z · LW(p) · GW(p)

No. You or I can say the numbers. But can you mentally grasp how much text that is? I doubt it.

Oh, and English Wikipedia is now being written faster than anyone could possibly read it.

Replies from: topynate, TheOtherDave
comment by topynate · 2011-01-19T18:21:20.992Z · LW(p) · GW(p)

It's roughly as many words as are spoken worldwide in 2.5 seconds, assuming 7450 words per person per day. It's very probably less than the number of English words spoken in a minute. It's also about the number of words you can expect to speak in 550 years. That means there might be people alive who've spoken that many words, given the variance of word-production counts.

So, a near inconceivable quantity for one person, but a minute fraction of total human communication.
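These figures can be reproduced with a short sketch. It assumes a world population of about 6.9 billion (roughly the 2011 figure; the comment doesn't state which population it used) and takes the 7,450 words per person per day from the comment:

```python
# Sanity-checking the comparison above. POPULATION is an assumption
# (~2011 world population); WORDS_PER_DAY is from the comment.
WIKI_WORDS = 1_500_000_000
POPULATION = 6_900_000_000
WORDS_PER_DAY = 7_450
SECONDS_PER_DAY = 86_400

words_per_second = POPULATION * WORDS_PER_DAY / SECONDS_PER_DAY
seconds_to_speak = WIKI_WORDS / words_per_second      # ~2.5 seconds
years_per_person = WIKI_WORDS / (WORDS_PER_DAY * 365.25)  # ~550 years

print(f"spoken worldwide in ~{seconds_to_speak:.1f} s")
print(f"one person would need ~{years_per_person:.0f} years")
```

Both of the comment's numbers check out: humanity as a whole produces that many words in a couple of seconds, while a single speaker would need several human lifetimes.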

comment by TheOtherDave · 2011-01-19T18:31:08.212Z · LW(p) · GW(p)

In the context you started out talking about -- making a backup -- mentally grasping how much data that is as text seems far less relevant than mentally grasping how much data that is as a fraction of the storage capacity of a phone, or grasping it as an amount of time required to transfer it from one network location to another.

It sounds like you've switched contexts along the way, though I'm not really sure to what.

Replies from: David_Gerard
comment by David_Gerard · 2011-01-19T18:36:35.180Z · LW(p) · GW(p)

Yeah, I went off on a sidetrack of expressing how flabbergasted I am at the size of the thing. Sorry about that.

comment by JoshuaZ · 2011-01-19T17:35:26.528Z · LW(p) · GW(p)

Note that the 1.5 billion words of text aren't what really make it so large. The real issue is the sheer number of revisions, which increases the database size by orders of magnitude. The large number of images also contributes.

Replies from: David_Gerard
comment by David_Gerard · 2011-01-19T18:03:23.092Z · LW(p) · GW(p)

Yeah, it's the full history dump that basically hasn't worked properly in years.

comment by Robin_Hanson2 · 2007-10-09T13:09:45.000Z · LW(p) · GW(p)

Richard, I claim the agreement results apply fully to most topics in religion and politics, and to markets with network effects. I suspect you misunderstand what is being claimed; it applies to honest beliefs, not words or actions.

comment by Richard_Hollerith · 2007-10-09T14:05:33.000Z · LW(p) · GW(p)

People often consistently profess some belief, which their actions belie. But that is not what you mean. Do I understand you to maintain that there is a third aspect to every person's model of reality, namely, their "honest belief", that has no particular relationship to the model they profess or the model that can be inferred from their actions? What would you consider evidence that this "honest belief" does not exist?

What I most want to know: Do you believe the results imply or strongly suggest that if a large group of people share some model of reality which differs in an important detail from models shared by much smaller groups, the model of the large group usually contains more true information and less false information than every model shared by every smaller group? Even in academic psychology, academic sociology, principles of governance and how the government should intervene in the economy?

If yes, do you require the large group to consist only of experts or those with training in the domain to which the models pertain?

comment by Robin_Hanson2 · 2007-10-09T14:59:30.000Z · LW(p) · GW(p)

Richard, honest belief is the belief they would have if they were being honest. Averaging over all possible groups large and small, the large group average belief must tend to be more accurate. But if we condition on a small group having some feature X, then of course it becomes an open question, depending on X.

comment by Richard_Hollerith · 2007-10-10T17:46:29.000Z · LW(p) · GW(p)

Vassar, I think I get what you are trying to tell me: the agreement theorem says that if everyone were a perfect Bayesian reasoner (and a few other conditions hold) then everyone would be in perfect agreement. But surely no one believes the converse! Surely everyone agrees that perfect agreement can come about by other means, e.g., a program of indoctrination and suppression of dissenters.

comment by Nick_Tarleton · 2007-10-12T14:11:16.000Z · LW(p) · GW(p)

I can see two cases where tu quoque arguments could have some utility:

  • if they make the recipient realize "hey, I'm doing it wrong too", or
  • if they lead the recipient to explain her valid reason for what looks like a wrong behavior, which might either enlighten her accuser to see that his own behavior can't be excused in this way, or enlighten the recipient herself into realizing that her excuse also excuses the accuser's behavior.

This probably doesn't happen often, though.

comment by snewmark · 2016-05-25T13:04:47.497Z · LW(p) · GW(p)

So what if I'm a hypocrite? You're a hypocrite too!

comment by Jake_NB · 2021-07-12T08:14:47.706Z · LW(p) · GW(p)

While the general argument is valid, I'm not sure these accusations that traditional rationality is made up of socially derived rules hold up. There were many mathematicians and scientists before Bayes was born, and they derived their beliefs from logic and evidence, not social norms. Take Galileo as an extreme and famous example. Is there any evidence behind these unflattering descriptions of traditional rationalists?