Argument Screens Off Authority

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-12-14T00:05:35.000Z · LW · GW · Legacy · 85 comments

Scenario 1: Barry is a famous geologist. Charles is a fourteen-year-old juvenile delinquent with a long arrest record and occasional psychotic episodes. Barry flatly asserts to Arthur some counterintuitive statement about rocks, and Arthur judges it 90% probable. Then Charles makes an equally counterintuitive flat assertion about rocks, and Arthur judges it 10% probable. Clearly, Arthur is taking the speaker’s authority into account in deciding whether to believe the speaker’s assertions.

Scenario 2: David makes a counterintuitive statement about physics and gives Arthur a detailed explanation of the arguments, including references. Ernie makes an equally counterintuitive statement, but gives an unconvincing argument involving several leaps of faith. Both David and Ernie assert that this is the best explanation they can possibly give (to anyone, not just Arthur). Arthur assigns 90% probability to David’s statement after hearing his explanation, but assigns a 10% probability to Ernie’s statement.

It might seem like these two scenarios are roughly symmetrical: both involve taking into account useful evidence, whether strong versus weak authority, or strong versus weak argument.

But now suppose that Arthur asks Barry and Charles to make full technical cases, with references; and that Barry and Charles present equally good cases, and Arthur looks up the references and they check out. Then Arthur asks David and Ernie for their credentials, and it turns out that David and Ernie have roughly the same credentials—maybe they’re both clowns, maybe they’re both physicists.

Assuming that Arthur is knowledgeable enough to understand all the technical arguments—otherwise they’re just impressive noises—it seems that Arthur should view David as having a great advantage in plausibility over Ernie, while Barry has at best a minor advantage over Charles.

Indeed, if the technical arguments are good enough, Barry’s advantage over Charles may not be worth tracking. A good technical argument is one that eliminates reliance on the personal authority of the speaker.

Similarly, if we really believe Ernie that the argument he gave is the best argument he could give, which includes all of the inferential steps that Ernie executed, and all of the support that Ernie took into account—citing any authorities that Ernie may have listened to himself—then we can pretty much ignore any information about Ernie’s credentials. Ernie can be a physicist or a clown, it shouldn’t matter. (Again, this assumes we have enough technical ability to process the argument. Otherwise, Ernie is simply uttering mystical syllables, and whether we “believe” these syllables depends a great deal on his authority.)

So it seems there’s an asymmetry between argument and authority. If we know authority we are still interested in hearing the arguments; but if we know the arguments fully, we have very little left to learn from authority.

Clearly (says the novice) authority and argument are fundamentally different kinds of evidence, a difference unaccountable in the boringly clean methods of Bayesian probability theory.1 For while the strength of the evidences—90% versus 10%—is just the same in both cases, they do not behave similarly when combined. How will we account for this?

Here’s half a technical demonstration of how to represent this difference in probability theory. (The rest you can take on my personal authority, or look up in the references.)

If P(H|E1) = 90% and P(H|E2) = 9%, what is the probability P(H|E1,E2)? If learning E1 is true leads us to assign 90% probability to H, and learning E2 is true leads us to assign 9% probability to H, then what probability should we assign to H if we learn both E1 and E2? This is simply not something you can calculate in probability theory from the information given. No, the missing information is not the prior probability of H. The events E1 and E2 may not be independent of each other.
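A quick way to see this concretely is to exhibit two joint distributions that agree on both conditionals but disagree on the combined one. Here is a minimal Python sketch; all of the internal numbers (the priors on E1 and E2, and the conditional tables for H) are made-up illustrative assumptions, chosen only to hit P(H|E1) = 0.9 and P(H|E2) = 0.09 in both cases.

```python
from itertools import product

# Build a joint over (H, E1, E2) with E1 and E2 independent, given
# marginals for E1 and E2 and a table p_h[(e1, e2)] = P(H | e1, e2).
def make_joint(p_e1, p_e2, p_h):
    return {(h, e1, e2):
                (p_e1 if e1 else 1 - p_e1)
                * (p_e2 if e2 else 1 - p_e2)
                * (p_h[(e1, e2)] if h else 1 - p_h[(e1, e2)])
            for h, e1, e2 in product([True, False], repeat=3)}

def p_h_given(joint, cond):
    """P(H | cond), where cond is a predicate over (e1, e2)."""
    num = sum(p for (h, e1, e2), p in joint.items() if h and cond(e1, e2))
    den = sum(p for (h, e1, e2), p in joint.items() if cond(e1, e2))
    return num / den

# Joint A: H depends on E1 alone.
joint_a = make_joint(0.1, 0.5, {(True, True): 0.9, (True, False): 0.9,
                                (False, True): 0.0, (False, False): 0.0})
# Joint B: H depends on E1 and E2 jointly, tuned to match A's conditionals.
joint_b = make_joint(0.1, 0.5, {(True, True): 0.8, (True, False): 1.0,
                                (False, True): 1 / 90, (False, False): 0.0})

for joint in (joint_a, joint_b):
    print(round(p_h_given(joint, lambda e1, e2: e1), 4),         # 0.9 in both
          round(p_h_given(joint, lambda e1, e2: e2), 4),         # 0.09 in both
          round(p_h_given(joint, lambda e1, e2: e1 and e2), 4))  # 0.9 vs 0.8
```

Same P(H|E1), same P(H|E2), different P(H|E1,E2): the two conditionals really don't pin down the answer.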

Suppose that H is “My sidewalk is slippery,” E1 is “My sprinkler is running,” and E2 is “It’s night.” The sidewalk is slippery starting from one minute after the sprinkler starts, until just after the sprinkler finishes, and the sprinkler runs for ten minutes. So if we know the sprinkler is on, the probability is 90% that the sidewalk is slippery. The sprinkler is on during 10% of the nighttime, so if we know that it’s night, the probability of the sidewalk being slippery is 9%. If we know that it’s night and the sprinkler is on—that is, if we know both facts—the probability of the sidewalk being slippery is 90%.
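Here is the same calculation done explicitly, as a minimal sketch of the chain Night → Sprinkler → Slippery. The 10% and 90% figures are from the story above; P(night) = 0.5 and a daytime sprinkler rate of 10% are extra illustrative assumptions, and the three printed conditionals don't depend on them.

```python
from itertools import product

p_night = 0.5                                # assumed: fraction of time it's night
p_sprinkler_on = {True: 0.10, False: 0.10}   # P(sprinkler on | night?); daytime rate assumed
p_slippery = {True: 0.90, False: 0.00}       # P(slippery | sprinkler on?)

# Joint over (night, sprinkler, slippery), factored along the chain.
joint = {(n, s, w): (p_night if n else 1 - p_night)
                    * (p_sprinkler_on[n] if s else 1 - p_sprinkler_on[n])
                    * (p_slippery[s] if w else 1 - p_slippery[s])
         for n, s, w in product([True, False], repeat=3)}

def p_slippery_given(cond):
    """P(slippery | cond), where cond is a predicate over (night, sprinkler)."""
    num = sum(p for (n, s, w), p in joint.items() if w and cond(n, s))
    den = sum(p for (n, s, w), p in joint.items() if cond(n, s))
    return num / den

print(round(p_slippery_given(lambda n, s: s), 4))        # P(slippery | sprinkler)        = 0.9
print(round(p_slippery_given(lambda n, s: n), 4))        # P(slippery | night)            = 0.09
print(round(p_slippery_given(lambda n, s: n and s), 4))  # P(slippery | night, sprinkler) = 0.9
```

The last two lines are the asymmetry in action: learning "night" moves the probability to 9%, but once you know the sprinkler is on, learning "night" moves it not at all.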

We can represent this in a graphical model as follows:

Night → Sprinkler → Slippery

Whether or not it's Night causes the Sprinkler to be on or off, and whether the Sprinkler is on causes the sidewalk to be Slippery or unSlippery.

The direction of the arrows is meaningful. Say we had:

Night → Sprinkler ← Slippery

This would mean that, if I didn't know anything about the sprinkler, the probability of Nighttime and Slipperiness would be independent of each other. For example, suppose that I roll Die One and Die Two, and add up the showing numbers to get the Sum:

Die 1 → Sum ← Die 2

If you don’t tell me the sum of the two numbers, and you tell me the first die showed 6, this doesn’t tell me anything about the result of the second die, yet. But if you now also tell me the sum is 7, I know the second die showed 1.

Figuring out when various pieces of information are dependent or independent of each other, given various background knowledge, actually turns into a quite technical topic. The books to read are Judea Pearl’s Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference and Causality: Models, Reasoning, and Inference. (If you only have time to read one book, read the first one.)

If you know how to read causal graphs, then you look at the dice-roll graph and immediately see:

P(Die 1,Die 2) = P(Die 1) ✕ P(Die 2)

P(Die 1,Die 2|Sum) ≠ P(Die 1|Sum) ✕ P(Die 2|Sum) .
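Both facts are easy to check numerically. A minimal sketch with exact fractions:

```python
from itertools import product
from fractions import Fraction

outcomes = list(product(range(1, 7), repeat=2))  # 36 equally likely (d1, d2) pairs
p = Fraction(1, 36)

# Marginally, the dice are independent:
p_d1_6 = sum(p for d1, d2 in outcomes if d1 == 6)              # 1/6
p_d2_1 = sum(p for d1, d2 in outcomes if d2 == 1)              # 1/6
p_both = sum(p for d1, d2 in outcomes if (d1, d2) == (6, 1))   # 1/36
print(p_both == p_d1_6 * p_d2_1)   # True: P(Die 1, Die 2) = P(Die 1) x P(Die 2)

# Conditioned on the Sum, they are not:
seven = [(d1, d2) for d1, d2 in outcomes if d1 + d2 == 7]
p_d1_6_s = Fraction(sum(1 for d1, d2 in seven if d1 == 6), len(seven))             # 1/6
p_d2_1_s = Fraction(sum(1 for d1, d2 in seven if d2 == 1), len(seven))             # 1/6
p_both_s = Fraction(sum(1 for d1, d2 in seven if (d1, d2) == (6, 1)), len(seven))  # 1/6
print(p_both_s == p_d1_6_s * p_d2_1_s)  # False: given the Sum, the dice are coupled
```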

If you look at the correct sidewalk diagram, you see facts like:

P(Slippery|Night) ≠ P(Slippery)

P(Slippery|Sprinkler) ≠ P(Slippery)

P(Slippery|Night,Sprinkler) = P(Slippery|Sprinkler) .

That is, the probability of the sidewalk being Slippery, given knowledge about the Sprinkler and the Night, is the same probability we would assign if we knew only about the Sprinkler. Knowledge of the Sprinkler has made knowledge of the Night irrelevant to inferences about Slipperiness.

This is known as screening off, and the criterion that lets us read such conditional independences off causal graphs is known as D-separation.

For the case of argument and authority, the causal diagram looks like this:

Truth → Argument → Expert Belief

If something is true, then it therefore tends to have arguments in favor of it, and the experts therefore observe this evidence and change their opinions. (In theory!)

If we see that an expert believes something, we infer back to the existence of evidence-in-the-abstract (even though we don’t know what that evidence is exactly), and from the existence of this abstract evidence, we infer back to the truth of the proposition.

But if we know the value of the Argument node, this D-separates the node “Truth” from the node “Expert Belief” by blocking all paths between them, according to certain technical criteria for “path blocking” that seem pretty obvious in this case. So even without checking the exact probability distribution, we can read off from the graph that:

P(truth|argument,expert) = P(truth|argument) .

This does not represent a contradiction of ordinary probability theory. It’s just a more compact way of expressing certain probabilistic facts. You could read the same equalities and inequalities off an unadorned probability distribution—but it would be harder to see it by eyeballing. Authority and argument don’t need two different kinds of probability, any more than sprinklers are made out of ontologically different stuff than sunlight.
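For concreteness, here is the Truth → Argument → Expert Belief chain as a minimal sketch; every number in it is a made-up illustrative assumption, and the only point is that the two printed values coincide, exactly as the graph says they must.

```python
from itertools import product

p_truth = 0.5
# Strong arguments are likelier when the proposition is true (assumed numbers):
p_strong_arg = {True: 0.8, False: 0.2}       # P(strong argument | truth?)
# The expert responds to the argument, not to the truth directly:
p_expert_believes = {True: 0.9, False: 0.1}  # P(expert believes | strong argument?)

joint = {(t, a, b): (p_truth if t else 1 - p_truth)
                    * (p_strong_arg[t] if a else 1 - p_strong_arg[t])
                    * (p_expert_believes[a] if b else 1 - p_expert_believes[a])
         for t, a, b in product([True, False], repeat=3)}

def p_truth_given(cond):
    """P(truth | cond), where cond is a predicate over (argument, belief)."""
    num = sum(p for (t, a, b), p in joint.items() if t and cond(a, b))
    den = sum(p for (t, a, b), p in joint.items() if cond(a, b))
    return num / den

print(round(p_truth_given(lambda a, b: a), 4))        # P(truth | argument)         = 0.8
print(round(p_truth_given(lambda a, b: a and b), 4))  # P(truth | argument, expert) = 0.8
```

Swap in any other conditional tables that respect the chain structure and the two outputs still match; the equality comes from the graph, not from the particular numbers.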

In practice you can never completely eliminate reliance on authority. Good authorities are more likely to know about any counterevidence that exists and should be taken into account; a lesser authority is less likely to know this, which makes their arguments less reliable. This is not a factor you can eliminate merely by hearing the evidence they did take into account.

It’s also very hard to reduce arguments to pure math; and otherwise, judging the strength of an inferential step may rely on intuitions you can’t duplicate without the same thirty years of experience.

There is an ineradicable legitimacy to assigning slightly higher probability to what E. T. Jaynes tells you about Bayesian probability, than you assign to Eliezer Yudkowsky making the exact same statement. Fifty additional years of experience should not count for literally zero influence.

But this slight strength of authority is only ceteris paribus, and can easily be overwhelmed by stronger arguments. I have a minor erratum in one of Jaynes’s books—because algebra trumps authority.


1See “What Is Evidence?” in Map and Territory.

85 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by RobinHanson · 2007-12-14T00:14:23.000Z · LW(p) · GW(p)

Unfortunately, it is only in a few rare technical areas where one can find anything like "full technical cases, with references" given to a substantial group "knowledgeable enough to understand all the technical arguments", and it is even more rare that they actually bother to do so. Even when people appear to be giving such technical arguments to such knowledgeable audiences, the truth is more often otherwise. For example, the arguments presented are often only a small fraction of what convinced someone to support a position.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-12-14T00:23:34.000Z · LW(p) · GW(p)

Robin, that's surely true. But the human default seems to be to give too much credence to authority in cases where we can partially evaluate the arguments. Even experts exhibit herd behavior, math errors go undetected, etc. It's certainly a mistake to believe plausible verbal arguments from a nonexpert over math you can't understand. But I think you could make a good case that as a general heuristic, it is wiser to try to rely harder on argument, and less on authority, wherever you can.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-12-14T00:48:27.000Z · LW(p) · GW(p)

An example of where not to apply this advice: There are so many different observations bearing on global warming, that if you try to check the evidence for yourself, you will be even more doomed than if you try to decide which authority to trust.

Replies from: slicedtoad
comment by slicedtoad · 2015-07-07T19:14:37.822Z · LW(p) · GW(p)

So, when trying to form an opinion or position on climate change, what is a rational approach?

As far as I can tell the experts don't agree and have all taken political positions (therefore irrational positions).

Replies from: Wes_W, ChristianKl
comment by Wes_W · 2015-07-08T06:54:47.239Z · LW(p) · GW(p)

Given a field with no expert consensus, where you can't just check things yourself, shouldn't the rational response be uncertainty?

I don't think global warming fits this description, though. AFAIK domain experts almost all broadly agree.

Replies from: Lumifer
comment by Lumifer · 2015-07-08T15:38:38.590Z · LW(p) · GW(p)

AFAIK domain experts almost all broadly agree.

The devil is in the details. They "broadly agree" on what? I don't think there's that much consensus on forecasts of future climate change.

Replies from: slicedtoad
comment by slicedtoad · 2015-07-13T19:14:18.109Z · LW(p) · GW(p)

Yes. This. And the details aren't trivial. They make a huge difference in policy. From "do nothing" to "reduce all growth and progress immediately or we go extinct".

comment by ChristianKl · 2015-07-08T07:34:56.147Z · LW(p) · GW(p)

As far as I can tell the experts don't agree

Who do you consider to be the experts and how do you know that they don't agree?

Replies from: Jiro, slicedtoad
comment by Jiro · 2015-07-08T14:24:02.069Z · LW(p) · GW(p)

I think there's a difference between "experts agree that global warming is caused by humans" and "experts agree that global warming is caused by humans, is of severity X, and we need these particular politically convenient anti-global warming measures taken immediately".

Replies from: ChristianKl, gjm
comment by ChristianKl · 2015-07-08T14:50:01.972Z · LW(p) · GW(p)

That doesn't answer the question.

Replies from: Jiro
comment by Jiro · 2015-07-08T14:54:23.210Z · LW(p) · GW(p)

I wasn't answering the question, I was questioning the question's implication. The question was trying to imply that taking a "position on climate change" means "taking a position on whether climate change exists", not "taking a position on climate change policy".

comment by gjm · 2015-07-08T16:37:09.146Z · LW(p) · GW(p)

The anti-global-warming measure most commonly advocated as needing to be done immediately (or sooner) is reduction in fossil-fuel use. So far as I can see, this isn't politically convenient for anybody.

Beyond that: sure, it's worth distinguishing between "do experts agree that AGW is real?" and "do experts agree that AGW is real and likely to produce more than a 2°C rise over the next 50 years?" and "do experts agree that we need to cut our CO2 emissions substantially if the result isn't going to cause vast amounts of suffering and expense and inconvenience and death?" and "do experts all predict exactly the same best-guess curve for temperatures over the next 50 years?" and all the other questions that one might ask.

Eliezer's original nomination of global warming as something not to try to work out on one's own was from back in 2007, and slicedtoad's claim that "experts don't agree" is from yesterday. There's been a shift, over the years, in the commonest "skeptical" position on global warming (and hence in what question we might want to ask) from "it probably isn't happening" to "well, of course it's happening, everyone knows that, but it probably isn't our fault" to "well, of course it's happening and it's our fault, everyone knows that, but it probably won't be that bad" to "well, of course it's happening, it's our fault, and it's likely to be really bad, everyone knows that, but major cuts in fossil fuel use probably aren't the best way to address it". I think we're in the middle of the transition between the last two right now. My guess is that in another 5-10 years it may have switched again to "well, of course it's happening, it's our fault, and it's likely to be really bad, and the answer would have been to cut fossil-fuel use, but it's too late now so we might as well give up" which I've actually seen (I think here on LW, but it might have been over on Hacker News or somewhere of the kind).

A little digging suggests that in 2007 the transition from "probably not happening" to "probably not our fault" was in progress, so perhaps the question to look at is "are human activities causing a non-negligible increase in global temperature?". On that question, I think it's fair to say that "experts agree".

Right now, though, the question is probably "how bad will it be if we continue with business as usual, and what should we do about it?". My impression is that to the first part the experts all give answers of the form "we don't know exactly, but here's a crude probability distribution over outcomes"[1] and their distributions overlap a lot and basically all say at least "probably pretty bad", so I'm pretty comfortable saying that "experts agree" although I might prefer to qualify it a little.

As for "what should we do about it?", I'm not sure who would even count as an expert on that. I'd guess a solid majority of climate scientists would say we ought to reduce CO2 emissions, but maybe the nearest thing we have to experts on this question would be politicians or economists or engineers or something. I wouldn't want to make any claim about whether and how far "experts agree" on this question without first making sure we're all talking about roughly the same experts.

[1] Though they don't usually put it that way, and in particular despite my language they usually don't attach actual probabilities to the outcomes.

Replies from: Lumifer, Jiro, VoiceOfRa
comment by Lumifer · 2015-07-08T16:48:36.192Z · LW(p) · GW(p)

and basically all say at least "probably pretty bad"

I don't believe this is true. In particular, I would like to draw your attention to the Stern Review which came out with quite non-scary estimates for the consequences of global warming even after its shenanigans with the discount rates.

As for "what should we do about it?", I'm not sure who would even count as an expert on that. I'd guess a solid majority of climate scientists would say...

The climate scientists are definitely NOT domain experts on "what should we do about it" and I don't see why their opinion should carry more weight than that of any other reasonably well-educated population group.

Replies from: gjm
comment by gjm · 2015-07-08T22:20:40.691Z · LW(p) · GW(p)

the Stern review which came out with quite non-scary estimates for the consequences

Some quotations from the Wikipedia page you linked to:

The Stern Review's main conclusion is that the benefits of strong, early action on climate change far outweigh the costs of not acting. [...] According to the Review, without action, the overall costs of climate change will be equivalent to losing at least 5% of global GDP each year, now and forever. Including a wider range of risks and impacts could increase this to 20% of GDP or more, also indefinitely. Stern believes that 5-6 degrees of temperature increase is "a real possibility".

And (from the WP page's summary of the SR's executive summary):

The scientific evidence points to increasing risks of serious, irreversible impacts [...] associated with business-as-usual [...] Climate change threatens the basic elements of life for people around the world [...] the poorest countries and people will suffer early and most.

I would say that (1) a permanent 5-20% reduction in global GDP sounds pretty bad, (2) a 5-6 degree increase also sounds pretty bad, (3) serious irreversible impacts on the basic elements of life, with the world's poorest suffering earliest and most, sound pretty bad, and (4) it seems that Stern agrees that these are bad since the SR recommends strong early action.

The climate scientists are definitely NOT domain experts on "what should we do about it"

That was rather my point, and in particular I was not claiming that "their opinion should carry more weight than any other reasonably well-educated population group". (Though I think that's slightly overstated. They should be unusually well informed about what "it" is that we might want to do something about, which is useful information in deciding what to do.)

Replies from: Lumifer
comment by Lumifer · 2015-07-09T14:30:51.268Z · LW(p) · GW(p)

Not to restart the whole debate again, but first, let's separate handwaving (20%, "serious irreversible impacts", etc.) from specifics, and second, to quote the same Wikipedia page

most (greater than 90%) of the Review's monetised damages of climate change occur after 2200

That is not 2020, that is 2200. I submit that anyone who monetizes damages after 2200 is full of the brown stuff.

In general, the Stern Review tried very hard (including what I think are clearly inappropriate assumptions -- see the same discount rates) to produce a scary report to force action now... and it failed.

Replies from: gjm
comment by gjm · 2015-07-09T15:00:24.831Z · LW(p) · GW(p)

Originally you said: the Stern Review came out with quite non-scary estimates ("even after its shenanigans with the discount rates"). Now you're saying: the Stern Review came out with really scary estimates but that's OK because it fudged things, e.g. by using too-low discount rates.

The first, if it had been true, would have been good evidence against my statement that more or less all climate scientists say the consequences of business-as-usual would be pretty bad.

The second may be true (I haven't looked closely enough to have a confident opinion) but even if true doesn't give us good evidence that climate scientists don't think the consequences will be bad. (It might indicate that they don't think it'll be so bad, else why not present their true reasons? Or it might indicate that they're so convinced it'll be really bad that they're prepared to fudge things to get the point across. Or it might be anywhere in between.)

Replies from: Lumifer
comment by Lumifer · 2015-07-09T15:04:14.943Z · LW(p) · GW(p)

Now you're saying: the Stern Review came out with really scary estimates

No, not quite. The actual estimates from the Stern Review are non-scary. Indeed, non-scary to the degree that the authors felt the need to add some scary handwaving. But handwaving is not estimates.

doesn't give us good evidence that climate scientists don't think the consequences will be bad

Climate scientists are not domain experts in forecasting the effect of environmental change on human society.

Replies from: gjm
comment by gjm · 2015-07-09T16:17:44.542Z · LW(p) · GW(p)

5-6 degrees of temperature rise is scary. Economic loss equivalent to permanent 5-20% loss of global GDP is scary.

If you personally happen to be unscared by those figures, whether because you don't believe them or because they're about the fairly far future and your own discount rate is relatively high, fair enough. In that case we simply have a disagreement about what constitutes scariness.

scary handwaving

One difficulty here is that many of the things we may reasonably care about are not readily quantified; and any description of unquantified or barely-with-difficulty-quantified things can be dismissed as handwaving.

(But some of those things have less-handwavy more-quantified counterparts in the Report itself: e.g., tens to hundreds of millions of people displaced from their homes by the middle of the century because of flooding, sea level rise, and drought; 15%-40% of land plant and animal species extinct if temperatures rise a further 2°C. Again, how scary you think those things are depends on how much you care about the future, how much you care about people far away, how much you care about biodiversity, etc., and maybe we differ on some or all of them. I find them quite scary. Note that "how scary is this?" is a separate question from "will it actually happen?" and we're discussing the former.)

Climate scientists are not domain experts in forecasting the effect of environmental change on human society.

I never claimed they are. I said only that climate scientists' forecasts for the consequences of anthropogenic global warming are consistently at least "pretty bad". (They are, of course, experts in forecasting what the environmental change is likely to be, which is an important part of the task of forecasting what its consequences will be.)

Replies from: Lumifer
comment by Lumifer · 2015-07-09T16:31:14.826Z · LW(p) · GW(p)

In that case we simply have a disagreement about what constitutes scariness.

We probably do. In this context, "scariness" means willingness to spend resources now for expected mitigation of potential issues in the future. My willingness is lower than the current mainstream opinion and much lower than that of environmentalists.

many of the things we may reasonably care about are not readily quantified; and any description of unquantified or barely-with-difficulty-quantified things can be dismissed as handwaving.

Handwaving is not just lack of quantification, handwaving is asserting things without adequate support.

For example, I count the phrase "including a wider range of risks and impacts could increase this to 20% of GDP or more" as pure handwaving even though it includes a number.

how scary you think those things are depends on how much you care about the future

I don't think we're talking about "caring" in conventional sense. As I mentioned above, this is really about pricing of future risks with an overlay of generational transfer issues.

Replies from: gjm
comment by gjm · 2015-07-09T17:51:26.100Z · LW(p) · GW(p)

"scariness" means willingness to spend resources

Just to be clear: The issue here AIUI is whether the Stern Report's predictions, if correct, are scary. If we're on the same page here, you're saying that the prospect of a permanent loss of 5% or more of world GDP, of millions of extra deaths, and of tens to hundreds of millions of people displaced from their homes, does not seem to you enough to justify the cost of the sort of mitigation the Stern Report proposes. Is that right?

If so: OK, fair enough; I'm not going to try to adjust your values. But I suggest that it's probably quite unusual to regard those prospective harms as "quite non-scary", as not "probably quite bad".

pure handwaving

I think it's actually somewhat impure handwaving because in the body of the Report there is a little explanation. But that explanation is itself fairly handwavy and in particular it's not at all clear where the figure of 20% comes from.

pricing of future risks with an overlay of generational transfer issues

That seems to me like just one way of expressing "caring about the future". In particular, using "caring" that way seems almost exactly as reasonable as using "scary" in the closely-related way you say you're using it.

Replies from: Lumifer
comment by Lumifer · 2015-07-09T18:43:40.967Z · LW(p) · GW(p)

Is that right?

Not exactly -- I don't believe the "millions of extra deaths" projections and I strongly suspect that if I were to dig into the data, I would find the 5% GDP loss to be a shaky number.

In general, my position is that the best way to deal with uncertain threats in the future is to make sure future people are wealthy and technologically advanced. As an analogy, it would have been very unwise of, say, Europe in the XVIII century to suppress industrialization because of concerns over deforestation, smog, and mine tailings.

Replies from: gjm, gjm
comment by gjm · 2015-07-09T22:19:43.973Z · LW(p) · GW(p)

I don't believe [...]

This is irrelevant to the question we were actually addressing, namely how scary the predictions are. You made, in case you've forgotten, the following claim:

I would like to draw your attention to the Stern Review which came out with quite non-scary estimates for the consequences of global warming even after its shenanigans with the discount rates.

but since then you have modified that by saying

  • that the SR's NPV estimates of future harm are wrong because they should have discounted the future more steeply
  • that a lot of their predictions should be ignored because they are "handwaving"
  • that some of the more alarming other ones should be ignored because you don't believe them
  • that the appropriate measure of scariness is your willingness to pay to make the scary thing go away.

At which point, it seems to me, you have completely abandoned the original statement that the Stern Review made non-scary predictions, and what we're left with is that if we take only those parts of the Stern Review that you agree with and discount the future much more steeply than they do then you don't find that considering what's left makes you want to pay a lot of money to address the issue.

Or, to put it slightly differently, what we're left with is: "Lumifer doesn't think we should take drastic action to address possible negative future consequences of climate change".

Well, fair enough. You're a smart chap and no doubt your opinions are worth listening to. But this no longer has anything to do with the original issue, namely the extent to which climate scientists agree or disagree about climate change.

Replies from: Lumifer
comment by Lumifer · 2015-07-10T14:28:22.655Z · LW(p) · GW(p)

This is irrelevant to the question we were actually addressing, namely how scary the predictions are.

Let me, then, make the usual disclaimers which I thought were implicitly understood. I speak for myself, do not speak for anyone else, and when I discuss e.g. "how scary the predictions are", I am talking about my perceptions, not about the reaction of an average (wo)man on the street.

Here I distinguish between what I think the Review actually says and how it is presented. In my opinion, what the Stern Review says is not scary. It is presented as scary, of course, because that was the whole point of writing the Review. In fact, the shenanigans (e.g. discussed in the Wikipedia article) deemed necessary to produce the required degree of scariness reinforce my perception that the Review has major difficulties in creating a sufficiently fearful picture and contribute to my belief that what it actually found is non-scary.

comment by gjm · 2015-07-09T22:29:31.149Z · LW(p) · GW(p)

the best way to deal with uncertain threats in the future is to make sure future people are wealthy and technologically advanced

The difficulty I have with applying that principle here is: which people? As the Stern Report says, the harms currently expected to result from climate change will fall overwhelmingly on the world's poorest people. Ensuring that the inhabitants of the US and northwestern Europe are wealthy and technologically advanced will be very nice for us, but I'm not sure the people whose land becomes uninhabitable will (or should) consider that a great tradeoff.

Replies from: Lumifer
comment by Lumifer · 2015-07-10T14:32:17.570Z · LW(p) · GW(p)

The difficulty I have with applying that principle here is: which people?

I don't understand your difficulty. The answer is: all and any.

This is similar to an observation that having a well-functioning immune system is the best way to deal with colds and other minor infections. The question "which people should have a well-functioning immune system?" makes no sense to me.

Replies from: viv3ka, Caperu_Wesperizzon
comment by viv3ka · 2020-07-21T07:56:26.991Z · LW(p) · GW(p)

If indeed the answer is "all and any", then the broad consensus that climate change under BAU scenarios will displace 50 million people in Bangladesh by the end of the century - turning vast numbers of prosperous farmers into penniless refugees - is a strong cause for action.

comment by Caperu_Wesperizzon · 2022-08-22T11:07:23.710Z · LW(p) · GW(p)

The default way to ensure future people are wealthy and technologically advanced is to let those who are not die.

comment by Jiro · 2015-07-08T17:24:03.080Z · LW(p) · GW(p)

The anti-global-warming measure most commonly advocated as needing to be done immediately (or sooner) is reduction in fossil-fuel use. So far as I can see, this isn't politically convenient for anybody.

Seriously? You don't understand that there's ideological opposition to fossil fuels (and to technology in general with unprincipled exceptions for such things as the anti-technology people's personal iPads) and that global warming is extremely convenient for it?

Also, one of the biggest measures advocated, if not the biggest, is government regulation and taxes. Surely you can see how that is politically convenient.

Replies from: gjm
comment by gjm · 2015-07-08T22:09:57.149Z · LW(p) · GW(p)

It appears to me that opposition to technology as such is rare among voters and even rarer among politicians, at least in the countries whose politics I know anything about. I'm sure there are some luddites who talk up the dangers of climate change in order to attack technology, but if you're claiming that they explain a substantial fraction of what political support there is for taking action against climate change then I'll need to see some evidence.

Yes, one way to discourage fossil-fuel use is to tax it heavily, and I can see why a politician might want more revenue to play with. But I can equally see why they might want to be seen not to favour high taxes; all else being equal, most voters prefer to be taxed less. If I imagine a Machiavellian politician thinking "I'll advocate higher taxes to discourage the burning of fossil fuels, and then X will happen, and then I'll be more powerful / more likely to be elected / richer / ...", I'm having trouble thinking of any really credible X.

Replies from: Jiro, Lumifer
comment by Jiro · 2015-07-08T23:00:19.101Z · LW(p) · GW(p)

It goes the other way around: They advocate taxing fossil fuels because they are generally in favor of government regulation.

Replies from: gjm
comment by gjm · 2015-07-09T01:44:37.045Z · LW(p) · GW(p)

How do you know?

Replies from: Jiro
comment by Jiro · 2015-07-09T02:30:56.446Z · LW(p) · GW(p)

Same way you do. You imagined a Machiavellian politician; well, I imagined another one.

Replies from: gjm, hairyfigment
comment by gjm · 2015-07-09T02:44:09.311Z · LW(p) · GW(p)

I have to ask: Do you seriously think you are making a rational argument at this point? (Or have you, e.g., decided I'm an idiot not worth engaging with in actual rational discussion? Because if so, you could just say so.)

It makes no sense to answer my question "How do you know?" with "Same way you do" because I am not claiming to know anything about politicians' motivations here, and you are.

When you imagine your Machiavellian politician, does your imagination provide you with something that plays the role of X for mine? Or does your imagined politician simply want more regulation as a terminal value, regulation purely for the sake of regulation? If the latter, what reason is there to think that the number of such politicians is not tiny?

Replies from: Jiro, Lumifer
comment by Jiro · 2015-07-09T03:04:39.164Z · LW(p) · GW(p)

I don't expect a politician to literally want more regulation as a terminal goal. However, I expect a politician to have terminal goals, such as doing better in the bureaucracy, signalling power to other politicians, etc., which more regulation helps him achieve. Bureaucracies given a chance to expand to encompass more regulation will take it.

Replies from: gjm
comment by gjm · 2015-07-09T10:20:29.642Z · LW(p) · GW(p)

It looks to me as if you're mixing up two things that sound almost the same but are actually importantly different. (1) An actual preference for there to be more regulation. (2) A tendency to make there be more regulation. I agree that politicians are likely to have #2 because it may boost their status if their name is on lots of laws. But I don't think that implies #1, and it's #1 rather than #2 that I can imagine being responsible for insincere professions of belief in and concern about anthropogenic climate change.

I would expect #2 to manifest as politicians liking to introduce laws about whatever they happen to think important, or whatever they expect their voters to be impressed by. If you're a politician with a severe case of #2 and not otherwise inclined to think climate change is a big deal, there's no need for you to jump on the bandwagon just in order to have regulations to introduce. It's not like there's a shortage of other things to regulate. (Or, for some sorts of politician, to deregulate. That can go down well with voters and senior party officials too.)

In any case, I realise I'm not quite sure why we're talking about politicians here. Do you have the impression that there is much push for action on climate change coming from politicians? It doesn't look that way to me. I mean, for sure some politicians are saying there should be action on climate change, but I think there has consistently been less political support for such action than climate scientists' analyses would lead one to expect.

There's one obvious high-profile exception, namely Al Gore who has been unusually active in promoting action against climate change, and who (so I gather) has if anything overstated rather than understated the case in comparison to what actual experts would say. But this doesn't seem well explained in terms of political considerations like "doing better in the bureaucracy" or "signalling power to other politicians"; Gore seems pretty clearly to be out of politics now. (I dunno, maybe he'll surprise everyone by running for president in 2020 or something, but I bet he won't. Aside from everything else, he'd be as old in 2020 as McCain was in 2008, and McCain's age clearly hurt him.)

[EDITED to fix an inconsequential typo.]

Replies from: Jiro
comment by Jiro · 2015-07-09T16:42:13.494Z · LW(p) · GW(p)

Do you have the impression that there is much push for action on climate change coming from politicians?

There seems to be much push for political solutions. Even if it's not a politician who pushes for the solution, the people pushing for the solution generally benefit from increasing their side's political power, and that includes proposing solutions that politicians on their side want because of other incentives.

There's also interplay between different causes (if you can pull off a carbon tax, that increases the respectability of taxes as solutions, which may help your side if your side also proposes taxes as solutions to other problems).

Replies from: gjm
comment by gjm · 2015-07-09T18:03:20.755Z · LW(p) · GW(p)

As I say, it looks to me as if politicians have generally favoured less intervention than the scientific consensus has seemed to warrant, which would be the exact opposite of what your analysis would predict. But I don't have any very compelling evidence for this. How about you?

Replies from: Jiro
comment by Jiro · 2015-07-09T21:24:00.846Z · LW(p) · GW(p)

What matters here is the direction, not the end value. The idea is that politicians favor more intervention than we actually have, even if they favor less intervention than the scientific consensus. If so, then people allied with the politicians benefit from supporting intervention.

(Also, I don't actually believe there is a scientific consensus on how much intervention is needed. That's inherently a political question; it depends on how to value various tradeoffs, what you think the chance is of a policy being abused, etc. It's like asking if there's a scientific consensus about what to do to stop hunger.)

Replies from: gjm
comment by gjm · 2015-07-09T22:03:46.772Z · LW(p) · GW(p)

If we're trying to assess the theory that AGW policies are strongly perturbed by politicians' alleged desire to increase taxes and regulation, then we need to compare actual AGW policies with a baseline estimate that ignores the effects of that desire. There's no point comparing against doing nothing, unless we know there's no reason to do anything. ("The captain of this ship says there's a big iceberg ahead and we have to steer to the left, but I think he's mostly steering left because he likes the view in that direction. And look, we're veering way further to the left than we would if we just kept going in a straight line -- clearly that shows he's biased." Compare that with "... way further to the left than I think we need to do avoid the iceberg I can see ahead", which of course might be wrong if I am inexpert concerning either icebergs or steering but is at least trying to address the right question.)

I don't actually believe there is a scientific consensus on how much intervention is needed.

I didn't say there is (and agree that there probably isn't, though there might e.g. be a scientific consensus that the answer is "more than we're doing now"). By "less than the scientific consensus has seemed to warrant" I mean: look at what the scientists say about the likely climatic outcome of business as usual and of various levels of intervention, look at what politicians are actually doing, and consider whether it's credible that this is close to optimal given any reasonable set of priorities. In general you'd expect this to be really hard because there are lots of difficult things to evaluate, but the politicians have made it easier by keeping the level of intervention almost indistinguishable from zero.

Replies from: Jiro
comment by Jiro · 2015-07-10T17:52:44.097Z · LW(p) · GW(p)

If we're trying to assess the theory that AGW policies are strongly perturbed by politicians' alleged desire to increase taxes and regulation, then we need to compare actual AGW policies with a baseline estimate that ignores the effects of that desire.

Yes, but the baseline itself is relative to the current situation. Politicians want to regulate more than the regulation we actually have, so if you also want to increase regulation to more than we have, that benefits politicians. It may be true that you want an end point far beyond what the politician wants, but that's going to be irrelevant unless your push has some reasonable chance of going that far, which it probably doesn't.

comment by Lumifer · 2015-07-09T14:38:08.083Z · LW(p) · GW(p)

does your imagined politician simply want more regulation as a terminal value

The terminal value is power. Regulation is an intermediate instrumental goal.

comment by hairyfigment · 2015-07-11T01:48:17.726Z · LW(p) · GW(p)

No, you did not. A Machiavellian politician wants to stay in power, that is, to be elected. You're asserting a group interest that does not exist. We observe that politicians are happy to cut taxes (for people who can benefit them) if they personally get paid as much or more than before. Why would it be otherwise? (And any long-term interest, eg power for their family, should take the state of their civilization into account.)

Replies from: Jiro
comment by Jiro · 2015-07-11T16:50:19.381Z · LW(p) · GW(p)

We observe that politicians are happy to cut taxes (for people who can benefit them) if they personally get paid as much or more than before. Why would it be otherwise?

Having the ability to take and redistribute someone else's money provides a concentrated benefit to the one doing the taking and redistributing. Cutting taxes produces a much more diffuse benefit. Concentrated benefits lead to Machiavellian behavior much more than diffuse benefits. It is possible, of course, to have an anti-tax lobbying group which provides a concentrated benefit, but the overall balance between concentrated and diffuse benefits is on the side of higher taxes.

(And any long-term interest, eg power for their family, should take the state of their civilization into account.)

That would be a diffuse cost. The politician may care about the portion of the diffuse effect that affects his family, but that's only a small portion of the total. If the politician makes policy based on which costs help him and his family and which ones hurt him and his family, the concentrated ones will win. The ones that affect all civilization, a small portion of which he actually cares about because it goes to his family, will lose.

Replies from: hairyfigment, hairyfigment
comment by hairyfigment · 2015-07-11T18:02:02.934Z · LW(p) · GW(p)

a concentrated benefit to the one doing the taking and redistributing. Cutting taxes produces a much more diffuse benefit.

FFS, I shouldn't have to tell you the government is not a person and does not make decisions like one. Show me a correlation between tax rates and benefits to individual politicians, or admit it's diffuse as all Hell. Oh, wait:

Having the ability to take

Well then, since that's always present, we seem to have reached agreement that actually using it is unnecessary for a given politician. Nor, I would say, do we need additional reasons to justify implied taxation threats in a world where the USA is deep in debt.

comment by hairyfigment · 2015-07-13T19:37:09.457Z · LW(p) · GW(p)

Here's another comment for Eugine Nier to downvote: you are talking about a political issue and asserting by definition that politicians like Inhofe are not politicians. The real world doesn't enter into it. You are spouting the most shameful tribalist garbage.

Replies from: Jiro
comment by Jiro · 2015-07-13T20:47:01.015Z · LW(p) · GW(p)

"The politician" doesn't mean that I am making the statement about every single politician in the world.

Replies from: hairyfigment
comment by hairyfigment · 2015-07-13T21:22:40.296Z · LW(p) · GW(p)

No indeed, your No True Scotsman fallacy is over here. Though as I keep saying, the more fundamental problem is that you haven't shown anyone has the personal interest in question. And you try to hide this by talking as if the government were an agent, in violation of what should be conservative insights. I think I'm done with this.

comment by Lumifer · 2015-07-09T14:36:38.868Z · LW(p) · GW(p)

If I imagine a Machiavellian politician thinking "I'll advocate higher taxes to discourage the burning of fossil fuels, and then X will happen, and then I'll be more powerful / more likely to be elected / richer / ...", I'm having trouble thinking of any really credible X.

I don't see why you are having trouble.

"I'll advocate higher taxes to get more revenue while saying it is to discourage the burning of fossil fuels, and then I will have control of of more money which I'll channel to my cronies and use to bribe voters.

The common characteristic of most politicians is that they want more power. In Western democracies having control over budget and having money to allocate is a large part of that power.

Replies from: gjm
comment by gjm · 2015-07-09T16:35:25.415Z · LW(p) · GW(p)

First of all, for clarity, my imaginary politician was saying "I'll advocate (higher taxes to discourage the burning of fossil fuels)" rather than "(I'll advocate higher taxes) to discourage the burning of fossil fuels". That is, I wasn't meaning to presuppose that the politician's real purpose was as stated.

to get more revenue [...] channel to my cronies and use to bribe voters

OK, so suppose this sort of thing is (say) 50% of why politicians say we should take action to reduce or mitigate anthropogenic climate change. Then we should expect that (if politicians are perfectly Machiavellian and totally indifferent to what's true and what's beneficial) 50% of politicians who say that either are closely associated with "green energy" companies and the like, or else represent voters a substantial fraction of whom stand to benefit from "green energy" initiatives. If politicians are actually less than perfectly Machiavellian, and temper their pursuit of self-interest with occasional consideration of what would actually be best for their country and what the evidence actually says, then that figure of 50% needs to be correspondingly higher.

We should also, if politicians are that Machiavellian, expect to find that any politician who, e.g., represents a substantial number of voters who could be bribed in this way will advocate action against climate change.

I haven't looked at the statistics, so my opinion isn't worth much at present, but I don't get the impression that things are anywhere near so clear-cut. Do you have data?

Just out of curiosity: What is your opinion about the motivation of politicians who say we shouldn't take much action against anthropogenic climate change? If we discount the stated opinions of politicians on both sides, and of lobbyists for, e.g., solar panel fitters and oil companies, what opinions do you expect to find remaining?

It seems to me that this sort of argument constitutes a fully general justification for ignoring what politicians say. Which, actually, sounds on the whole like a pretty good idea.

Replies from: Lumifer
comment by Lumifer · 2015-07-09T16:49:13.253Z · LW(p) · GW(p)

I don't get the impression that things are anywhere near so clear-cut

Things, of course, are not clear-cut at all because in reality you have a very complex network of incentives, counter-incentives, PR considerations, estimates and mis-estimates, the traditional bungling, etc. etc.

What is your opinion about the motivation of politicians who say we shouldn't take much action against anthropogenic climate change?

The same :-)

what opinions do you expect to find remaining?

Well, the whole spectrum from "this is bollocks!" to "humanity's survival is at stake!", but probably dominated by "I dunno" :-D

comment by VoiceOfRa · 2015-07-12T00:27:24.981Z · LW(p) · GW(p)

My guess is that in another 5-10 years it may have switched again to "well, of course it's happening, it's our fault, and it's likely to be really bad, and the answer would have been to cut fossil-fuel use, but it's too late now so we might as well give up" which I've actually seen (I think here on LW, but it might have been over on Hacker News or somewhere of the kind).

I doubt anyone was advocating this position seriously. More likely they were pointing out the logical implications of taking the alarmist position, with its ever shifting timeline for when disaster happens, seriously.

"do experts agree that AGW is real and likely to produce more than a 2degC rise over the next 50 years?"

I remember when the alarmist position was that it would happen in 20 years. Come to think of it, that was roughly 20 years ago.

Replies from: gjm
comment by gjm · 2015-07-12T00:59:01.425Z · LW(p) · GW(p)

More likely they were pointing out the logical implications [...]

That is not the impression I remember getting, but since I don't even remember where I saw this you shouldn't trust my memory much.

the alarmist position was that it would happen in 20 years [...] that was roughly 20 years ago.

Here is an IPCC report from 20 years ago. (Warning: large PDF file.) It predicted a 2°C rise, relative to a baseline in 1990, by 2100.

The report mentions that its predecessor in 1990 gave a more pessimistic best estimate. Here is that report. (Warning: large PDF file.) Its best estimate (with much uncertainty stated) was about 0.3°C per decade "during the next century"; according to that estimate, 2°C of warming would take about 70 years.

So, please, whose alarmist position was that there would be 2°C of rise in the next 20 years, and why should we care?

(Thanks for the ~20 downvotes, by the way. You're a real pleasure to talk to.)

comment by slicedtoad · 2015-07-13T19:06:57.258Z · LW(p) · GW(p)

They disagree in exactly the way gjm mentions below. Experts are climate scientists and scientists in related fields. Some politicians may be included as 'experts' in terms of solutions, too, I suppose. They disagree about the severity, cause, timeline and solution. And not by some trivial amount, but by enough to drastically shift priorities.

Also, while this is a reply to Eliezer's 2007 comment, I'm aware the situation has changed. I really just want to know how to begin to form a rational belief about climate change as of now.

I find climate change a strange issue. Not the situation itself but the public response and political tactics that are used.

On the surface, it looks like the vaccination controversies where one side goes "you guys are stupid for completely ignoring science". The difference is, the science for vaccines is rock solid. There is a negligible chance of vaccines hurting you. And we have an extremely large amount of evidence. Not evidence from computer models or something theoretical, but actual data from millions of people being vaccinated.

Climate change, no matter your opinion, cannot be said to be this sure of a thing. Yet the tactics used are the same. "If you don't take up our cause, you are the enemy of Science." Science isn't some deity. I don't obey out of some appeal to authority. It's useful because it can convince me with reason.

If the issue is trivial or unanimous, I may just accept the scientific consensus at face value. But for a possible existential threat that could either kill us or cost an unimaginable amount of money to prevent... And there are experts that disagree! And the consensus has changed several times in the last few decades! And politicians are pushing a certain direction! ...I'm not being ideological here, am I? This isn't a black and white issue is it?

Replies from: viv3ka
comment by viv3ka · 2020-07-21T08:39:15.872Z · LW(p) · GW(p)

There are some black and white issues, where a valid comparison to antivax strongly holds, and some fuzzier issues where it holds weakly or not at all.

The strongest claim is that global warming is real and anthropogenic. This is very solid science, indeed pretty basic atmospheric physics. Nonetheless there are people in positions of power who argue that it is false. To make that argument is indeed to set oneself up in opposition to science - that is, opposed to the scientific method as a way to discover truth.

Denial of this basic truth is a serious enough problem that 18 scientific associations felt the need to issue a joint statement to that effect, backing up a further 200 with the same view.

https://climate.nasa.gov/scientific-consensus/

The second strongest claim is that global warming is a threat to humanity, and that immediate action to reduce CO2 emissions (which arise primarily from the use of fossil fuels) to net zero is essential to preserve human life.

This view is held and strongly advocated by every scientific association with any claim to relevant knowledge that I can find. For example the AAAS says that "global climate change caused by human activities is now underway, and it is a growing threat to society" and "The time to control greenhouse gas emissions is now" (when "now" was 2006). The AGU says that "Human activities are changing Earth’s climate, causing increasingly disruptive societal and ecological impacts" and "global carbon dioxide (CO2) emissions must reach net-zero by around 2070 to have a good chance of limiting warming to a 2° C increase and by about 2050 to achieve a more protective limit of a 1.5°C (2.7°F) increase".

In the last few decades, the consensus on the first issue has changed not at all; and on the second issue the only change has been a steady increase in the number of scientific associations willing to openly advocate immediate action.

To argue against action on climate change isn't quite the same as arguing that vaccines don't work, or that the germ theory of disease is false. It's more like arguing that the cost of vaccines makes them unjustifiable, or that surgeons generally have clean hands so scrubs and gloves are an unnecessary impost.

That's about it though. After this you get to questions of *how* to decarbonise. There is far less consensus on that, and the issues are not so clear-cut. This gives plenty of wiggle room for anyone who would prefer not to decarbonise at all, by pushing the responsibility onto someone else, sending things to committee for debate, setting up processes that require full consensus between inimical participants before any action is taken, and so on.

comment by Ian_C. · 2007-12-14T00:50:14.000Z · LW(p) · GW(p)

Was there supposed to be a second book there?

Thanks

comment by manuelg · 2007-12-14T00:58:29.000Z · LW(p) · GW(p)

There are so many different observations bearing on global warming...

Your liberal bias is showing here, Eliezer. It is not "global warming". The earth is becoming slightly "frigidity impaired".

comment by Doug_S. · 2007-12-14T01:14:17.000Z · LW(p) · GW(p)

Book 1: Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference

Book 2: Causality

comment by RobinHanson · 2007-12-14T01:16:30.000Z · LW(p) · GW(p)

Sometimes people attend too much to authority, and sometimes too little. I'm not sure I can discern an overall bias either way.

Replies from: Rinon
comment by Rinon · 2012-06-04T15:28:39.461Z · LW(p) · GW(p)

I haven't done any studies, but I have a feeling that people attend to authority when it supports their natural biases, and ignore authority when it opposes their natural biases.

comment by manuelg · 2007-12-14T01:21:54.000Z · LW(p) · GW(p)

Apropos of nothing: you have a lot to say about the discrete Bayesian. But I would argue that talking about the quality of manufacturing processes, one would often do best talking about continuous distributions.

The distributions that my metal-working machines manifest (over the dimensions under tolerance that my customers care about) are the Gaussian normal, the log normal, and the Pareto.

When the continuous form of the Bayesian is discussed, they always talk about the Beta distributions.

I have tried reasoning with the lathe, the mill, and the drill presses, to begin exhibiting the Beta, but they just ignore my pleadings, and spit hot metal chips at me.

The standard frequentist approaches seem like statistical theater. So I am inclined to explore other approaches.

comment by James_Bach · 2007-12-14T06:01:38.000Z · LW(p) · GW(p)

You said: "So it seems there's an asymmetry between argument and authority. If we know authority we are still interested in hearing the arguments; but if we know the arguments fully, we have very little left to learn from authority."

I like your conclusion, but I can't find anything in your argument to support it! By rearranging some words in your text I could construct an equally plausible (to a hypothetical neutral observer) argument that authority screens off evidence. You seem to believe that evidence screens off authority simply because you think evidence is what makes authority believe something. But isn't that assuming the very thing you want to demonstrate?

Your scenarios in the first paragraphs are neither arguments nor demonstrations. They are statements of what you believe. Fair enough. But then I was expecting that you'd provide some reason for me to reject the hypothesis (a hypothesis that carried a lot of weight during the era of Scholasticism) that there is no such thing as evidence without authority (in other words, it is authority that consecrates evidence as evidence).

I used to wonder how anyone could take the obviously wrong physics of Aristotle seriously, until I learned enough about history that it dawned on me that for the Scholastic thinkers of the middle ages, how physics really worked was far less important than maintaining social order. If maintaining social order is the problem that trumps all others in your life and in your society, then evidence must necessarily carry little weight compared to authority. You will give up a lot of science, of course, but you will give it up gladly.

Obviously, we aren't in that situation. But I worry when I see, for instance, rational arguments for the existence of God that assume the very thing they purport to prove. And your argument (hopefully I've misunderstood it) seems a lot like those.

Replies from: JoshuaZ, ejstheman, durgadas
comment by JoshuaZ · 2010-04-30T05:24:30.916Z · LW(p) · GW(p)

Much of what is obviously wrong about Aristotle, or likely to be wrong, was discussed. Oresme, for example, wrote in the 1300s and discussed a lot of problems with Aristotle (or at least his logic). He proposed concepts of momentum and gravity that were more or less correct but lacked any quantification. And people from a much earlier time understood that Aristotle's explanation of the movement of thrown objects was deeply wrong. Attempts to repair this occurred well before the Scholastics were even around. The Scholastics were more than willing to discuss alternate theories, especially theories of impetus. People seem to fail to realize how much discussion there was in the Middle Ages about these issues. It didn't go Aristotle and then Galileo and Newton: between Aristotle and Galileo were Oresme, Benedetti (who proposed a law of falling objects very similar to Galileo's), and many others. Also, many of the Scholastics paid very careful attention to Avicenna's criticism and analysis of Aristotle. (Edit: My impression is that they became in some ways more knee-jerk Aristotelian after Averroism became prevalent, but I don't know enough about the exact details to comment on ratios or the like.)

It might be fun to dismiss everyone in the Middle Ages as religion-bound control freaks, but that's simply not the case. The actual history is much more complicated.

comment by ejstheman · 2011-07-14T18:11:28.507Z · LW(p) · GW(p)

If we observe experts changing their beliefs based on evidence often, but evidence changing based on the beliefs of experts never, then it seems reasonable that the chain of causality goes reality->evidence->beliefs of experts->beliefs of non-experts, with the possible shortcut reality->evidence->beliefs of non-experts, when the evidence is particularly abundant or clear.
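
That chain is exactly the screening-off structure the post describes. A toy enumeration (all numbers invented) shows that once the evidence is observed, additionally learning the expert's belief moves the posterior not at all:

```python
# Toy chain: reality -> evidence -> expert belief. The expert's belief
# depends on reality only through the evidence, so evidence screens it off.
from itertools import product

p_h = 0.3                       # P(reality = true); hypothetical
p_e = {True: 0.9, False: 0.2}   # P(evidence observed | reality)
p_x = {True: 0.8, False: 0.1}   # P(expert asserts H | evidence); depends on e only

def joint(h, e, x):
    ph = p_h if h else 1.0 - p_h
    pe = p_e[h] if e else 1.0 - p_e[h]
    px = p_x[e] if x else 1.0 - p_x[e]
    return ph * pe * px

def p_h_given(e=None, x=None):
    """P(reality = true | whatever is fixed), by brute-force enumeration."""
    ok = lambda h_, e_, x_: (e is None or e_ == e) and (x is None or x_ == x)
    den = sum(joint(*v) for v in product([True, False], repeat=3) if ok(*v))
    num = sum(joint(*v) for v in product([True, False], repeat=3) if v[0] and ok(*v))
    return num / den

print(p_h_given(x=True))          # the expert alone does shift the posterior
print(p_h_given(e=True))          # P(H | evidence)
print(p_h_given(e=True, x=True))  # identical: authority is screened off
```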

comment by durgadas · 2012-08-20T21:16:12.975Z · LW(p) · GW(p)

"I used to wonder how anyone could take the obviously wrong physics of Aristotle seriously, until I learned enough about history that it dawned on me that for the Scholastic thinkers of the middle ages, how physics really worked was far less important than maintaining social order. If maintaining social order is the problem that trumps all others in your life and in your society, then evidence must necessarily carry little weight compared to authority. You will give up a lot of science, of course, but you will give it up gladly.

Obviously, we aren't in that situation. But I worry when I see, for instance, rational arguments for the existence of God that assume the very thing they purport to prove. And your argument (hopefully I've misunderstood it) seems a lot like those."

Well, reading Sam Harris's account of speaking to prominent atheists backing a moralistic relativism "on behalf of" the world's religions leads me to suspect that we are just as influenced, maybe more, by the idea of maintaining social order. I think the tyranny of choice (50 kinds of ketchup, anyone?) makes it seem like we've got more "apparent choices", many of which aren't fundamentally different from each other as far as which social cliques to participate in.

If you look closely, each of these apparently different groups has a uniform and a rallying cry, but on the whole say much the same thing, even where the 'authority' in each case seems quite different.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-12-14T06:26:10.000Z · LW(p) · GW(p)

Changed first use of "evidence" to link to "What is Evidence?" and first use of "Bayesian" to link to "An Intuitive Explanation of Bayesian Reasoning", respectively the qualitative and quantitative definitions of evidence that I use as standard. See also this on rationality as engine of map-territory correlation.

Map-territory correlation ("truth") being my goal, I have no use for Scholasticism.

comment by Unknown · 2007-12-14T08:28:34.000Z · LW(p) · GW(p)

The overall bias that people have is to point to authority when it seems to support their position more, but to point to argument when it seems to support their position more: i.e. confirmation bias.

comment by Bob_Unwin6 · 2007-12-14T09:43:00.000Z · LW(p) · GW(p)

Similarly, if we really believe Ernie that the argument he gave is the best argument he could give, which includes all of the inferential steps that Ernie executed, and all of the support that Ernie took into account - citing any authorities that Ernie may have listened to himself - then we can pretty much ignore any information about Ernie's credentials.

It might take an intellectual lifetime (or much more) to get all the relevant background. For example, mathematicians (and other people in very technical domains) develop very good intuitions about whether or not certain statements hold. They might be quite sure that something is true long before they are able to give even a sketchy proof, and it seems rational to follow them based on their credentials (e.g. having made contributions to this sub-discipline). Yet there is probably no way to really get a grasp of their inferential steps without having done lots of the math they have.

I stress "doing" the math, rather than reading about it. Lots of math is "knowing how" rather than "knowing that". The same sort of thing might hold for aesthetical or ethical judgments. Without having played (or at least studied) a lot of classical music for the clarinet, I might not be able to grasp the "inferential" steps that led a professional player to his judgment about the superiority of a certain piece of music.

comment by billswift · 2007-12-14T14:50:11.000Z · LW(p) · GW(p)

Part of the problem is that "authority" conflates two distinct ideas. The first is "justified use of coercion", as when the government is referred to as "the authorities". The second is as a synonym for expertise. The two are united in parents but otherwise distinct. It may be useful to do as I have in my notes and avoid using "authority" when "expertise" is what is meant; at least it reduces the confusion a little.

comment by Dynamically_Linked · 2007-12-15T03:48:34.000Z · LW(p) · GW(p)

Has anyone read Learning Bayesian Networks by Richard E. Neapolitan? How does it compare with Judea Pearl's two books as an introduction to Bayesian Networks? I'm reading Pearl's first book now, but I wonder if Neapolitan's would be better since it is newer and is written specifically as a textbook.

comment by Richard_Hollerith2 · 2007-12-16T12:19:25.000Z · LW(p) · GW(p)

Sorry, I do not know that book.

Bob Unwin, in my humble opinion, math is a poor choice of example to make your point because mathematical knowledge can be established by a proof (with a calculation being a kind of proof) and what distinguishes a proof from other kinds of arguments is the ease with which a proof can be verified by nonexperts. (Yes, yes, a math expert's opinion on whether someone will discover a proof of a particular proposition is worth something, but the vast majority of the value of math resides in knowledge for which a proof already exists.)

comment by Joshua_Fox · 2007-12-16T14:47:07.000Z · LW(p) · GW(p)

Great stuff as always. Enhanced diagrams (beyond the simple ASCII ones), with clear labels, and even inline explanations, on nodes and edges, would make the Bayesian explanations much clearer.

comment by steven · 2007-12-16T14:55:09.000Z · LW(p) · GW(p)

Eliezer, good reduxification. I'm still not sure about the point that Tom McCabe made about when authority stops mattering because overwhelming evidence brings the probability close to 0 or 1. Screening seems to do at least some of the work, though.

manuelg,

"The standard frequentist approaches seem like statistical theater."

I lost any remaining respect for standard frequentist inference when I was taught a test that would sometimes "neither reject nor fail to reject" a null hypothesis. Haha.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-12-16T19:54:10.000Z · LW(p) · GW(p)

Dynamically, I haven't read Neapolitan's book, but judging by the table of contents, it's more directed toward people who just want to use the algorithms and less at people who want a really deep understanding of why they work, where they come from, what the meaning is, and why these algorithms and no others. Read Pearl's book first.

Billswift, I think I've consistently used "authority" in the sense of "trusted expert", and for social coercion I've used "regulation" or "government".

comment by Wei_Dai2 · 2007-12-20T11:00:47.000Z · LW(p) · GW(p)

Eliezer, what is your view of the relationship between Bayesian Networks and Solomonoff Induction? You've talked about both of these concepts on this blog, but I'm having trouble understanding how they fit together. A Google search for both of these terms together yields only one meaningful hit, which happens to be a mailing list post by you. But it doesn't really touch on my question.

On the face of it, both Bayesian Networks and Solomonoff Induction are "Bayesian", but they seem to be incompatible with each other. In the Bayesian Networks approach, conditional probabilities are primary, and the full probability distribution function is more of a mathematical formalism that stays in the background. Solomonoff Induction on the other hand starts with a fully specified (even if uncomputable) prior distribution and derives any conditional probabilities from it as needed. Do you have any idea how to reconcile these two approaches?

comment by clockbackward · 2010-10-11T14:03:26.181Z · LW(p) · GW(p)

Unfortunately, in practice, being as knowledgeable about the details of a particular scenario as an expert does not imply that you will process the facts as correctly as the expert. For instance, an expert and I may both know all of the facts of a murder case, but (if expertise means anything) they are still more likely to make correct judgements about what actually happened, due to their prior experience. If I actually had their prior experience, it's true that their authority would mean a lot less, but in that case I would be closer to an expert myself.

To give another example, a mathematically inclined high school student may see a mathematical proof, with each step laid out before them in detail. The high school student may have the opportunity to analyze every step to look for potential problems in the proof and see none. Then, a mathematician may come along, glance over the proof, and say that it is invalid. Who are you going to believe?

In some cases, we are the high school student. We can stare at all the raw facts (the details of the proof) and they all make sense to us and we feel very strongly that we can draw a certain inference from them. And yet, we are unaware of what we don't know that the expert does know. Or the expert is simply better at reasoning in these kinds of problems, or avoiding falling into logical traps that sound valid but are not.

Of course, the more you know about the expert's arguments, the less their authority counts. But sometimes the expertise lies in the ability to correctly process the type of facts at hand. If a mathematician's argument about the invalidity of step 3 does not seem convincing to you, and your argument for why step 3 is valid seems totally convincing, you should still at least hesitate before concluding you are correct.

comment by mat33 · 2011-10-05T10:56:12.541Z · LW(p) · GW(p)

"If we know authority we are still interested in hearing the arguments; but if we know the arguments fully, we have very little left to learn from authority."

Really? We shouldn't dismiss any idea or possibility without at least five minutes of thinking (on the authority of Harry Potter :)). Right. But I'll need a lot more time (days at least) to understand the advanced research of any able professional. And I expect to fail to understand any work of true genius before it has made its way into the textbooks for, well, students.

comment by Dojan · 2011-10-18T11:12:25.278Z · LW(p) · GW(p)

This post raises the question of when we assign authority to someone. For example, I don't usually take the pope very seriously, even though by many standards he is a high authority; but Carl Sagan rocks. But if I listen ever so slightly more to Sagan than to the pope (which isn't true: I don't listen even a little to the pope), when did I decide that? I mean, if I only assign authority to the people who already agree with me and share my worldview, isn't that a short trip to the happy death spiral?

comment by royf · 2012-08-23T05:16:25.374Z · LW(p) · GW(p)

p(H|E1,E2) [...] is simply not something you can calculate in probability theory from the information given [i.e. p(H|E1) and p(H|E2)].

Jaynes would disapprove.

You continue to give more information, namely that p(H|E1,E2) = p(H|E1). Thanks, that reduces our uncertainty about p(H|E1,E2).

But we are hardly helpless without it. Whatever happened to the Maximum Entropy Principle? Incidentally, the maximum entropy distribution (given the initial information) does have E1 and E2 independent. If your intuition says this before having more information, it is good.

Don't say that an answer can't be reached without further information. Say: here's more information to make your answer better.
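
For concreteness, here is a sketch of the combination rule you get if E1 and E2 are taken to be conditionally independent given H (one natural reading of the maximum-entropy claim above; whether it matches the exact MaxEnt solution depends on the stated constraints):

```latex
% Sketch, assuming E_1 and E_2 are conditionally independent given H;
% P(E_1, E_2) is a constant in H, so:
\begin{align*}
  P(H \mid E_1, E_2)
    &= \frac{P(E_1 \mid H)\, P(E_2 \mid H)\, P(H)}{P(E_1, E_2)} \\
    &\propto \frac{P(H \mid E_1)\, P(H \mid E_2)}{P(H)}.
\end{align*}
```

The two posteriors multiply, discounted once by the prior: the familiar naive-Bayes combination. The further information that p(H|E1,E2) = p(H|E1) then says that E2 was adding nothing beyond E1 in the first place.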

comment by BlueAjah · 2013-01-12T17:08:26.446Z · LW(p) · GW(p)

You've called two different things "Argument Goodness" so you can draw your diagram, but in reality the arguments that the expert heard that led them to their opinion, and the argument that they gave you, are always going to be slightly different.

Also, your ability to evaluate the "Argument Goodness" of the argument they gave you is going to be limited, while the expert will probably be better at it.

comment by cousin_it · 2013-09-23T10:06:29.210Z · LW(p) · GW(p)

Note that if we strengthen "argument" to "valid formal proof", and "authority" to "proof generator", then the statement of this post is wrong. For a good decision theory, seeing a valid formal proof that some action leads to higher utility than others should not be reason enough to choose that action, because such a decision theory would be exploitable by Lobian proof generators.

I'm not sure if this counterargument transfers continuously to everyday reasoning, or it's just a fluke of how we think about decision theory. Maybe there could be a different formalization of logical counterfactuals in which "argument screens off authority" stays true. But that doesn't seem likely to me...

Replies from: private_messaging
comment by private_messaging · 2013-09-24T06:47:16.999Z · LW(p) · GW(p)

I think what applies to everyday reasoning is that an argument is usually an informal suggestion pointing at a single component out of, often, a very huge sum, or, in other cases, a proposition reliant on a very large number of implicit assumptions and/or very prone to being destroyed "from the outside" by expert knowledge.

If the term from the sum was picked at random, it would have to be regressed towards the mean when you estimate the expected value of the sum; when the term is not picked at random, and you don't know to what extent its choice is correlated with its value, you can't really use it in any way to meaningfully improve an estimate of the sum (even though authority and non-authority alike will demand that you add in their argument somehow, and will not suggest you treat it as an estimate of the totality of the arguments).

comment by Colombi · 2014-02-20T05:24:35.064Z · LW(p) · GW(p)

Hmmm. I'm not sure what to believe here: you, or So8rien.

comment by Douglas_Reay · 2015-07-08T19:56:34.407Z · LW(p) · GW(p)

Assuming that Arthur is knowledgeable enough to understand all the technical arguments—otherwise they're just impressive noises—it seems that Arthur should view David as having a great advantage in plausibility over Ernie, while Barry has at best a minor advantage over Charles.

This is the slippery bit.

People are often fairly bad at deciding whether or not their knowledge is sufficient to completely understand arguments in a technical subject that they are not a professional in. You frequently see this with some opponents of evolution or anthropogenic global climate change, who think they understand slogans such as "water is the biggest greenhouse gas" or "mutation never creates information", and decide to discount the credentials of the scientists who have studied the subjects for years.

comment by Valerio1988 (ValerioB88) · 2021-01-01T11:33:41.646Z · LW(p) · GW(p)

Hi. I just want to mention that the last graph is wrong in the printed edition, which created some confusion for me.

comment by tmercer · 2022-07-06T21:08:38.141Z · LW(p) · GW(p)

I think there's some nuance missing from Scenario 1. Authority doesn't include competence or expert-ness. Authority is believing someone because they're paid to be a scientist; but you absolutely SHOULD assign more weight to assertions from actual scientists: people who follow the scientific method to update their beliefs, people who don't hold the wrong default/null hypotheses. This gets mixed up all the time.

Authority: M.D., member of AHA, J.D., professional ______

Competence/expert-ness: performs specific surgery with X% success rate, wins X% of cases in specific legal niche

You shouldn't care at all, not even ceteris paribus, about this definition of authority. You should care a lot about competence/expert-ness, because people can't get competent/expert without having accurate beliefs in the area of competence, so even if they're ignoring some piece of evidence or got to their beliefs very unconsciously or un-Bayesianly, you have strong evidence that their beliefs are close to correct.