The Triumph of Humanity Chart

post by Dias · 2015-10-26T01:41:06.913Z · LW · GW · Legacy · 79 comments

Cross-posted from my blog here.

One of the greatest successes of mankind over the last few centuries has been the enormous amount of wealth that has been created. Once upon a time virtually everyone lived in grinding poverty; now, thanks to the forces of science, capitalism and total factor productivity, we produce enough to support a much larger population at a much higher standard of living.

EAs being a highly intellectual lot, our preferred form of ritual celebration is charts. The ordained chart for celebrating this triumph of our people is the Declining Share of People Living in Extreme Poverty Chart.

Share in Poverty

(Source)

However, as a heretic, I think this chart is a mistake. What is so great about reducing the share? We could achieve that by killing all the poor people, but that would not be a good thing! Life is good, and poverty is not death; it is simply better to be rich than poor.

As such, I think this is a much better chart. Here we show the world population. Those in extreme poverty are in purple – not red, for their existence is not bad. Those whom the wheels of progress have lifted into wealth unknown to our ancestors, on the other hand, are depicted in blue, rising triumphantly.

Triumph of Humanity

Long may their rise continue.

 

79 comments

Comments sorted by top scores.

comment by Lumifer · 2015-10-26T14:57:39.358Z · LW(p) · GW(p)

What is "extreme poverty"?

Replies from: OrphanWilde, Jayson_Virissimo
comment by OrphanWilde · 2015-10-26T20:18:04.862Z · LW(p) · GW(p)

Per Google/the World Bank, "Extreme poverty is defined as average daily consumption of $1.25 or less and means living on the edge of subsistence."

I would assume (but don't know) that the threshold is reasonably well calibrated, and it seems absolute enough.

At worst, it's still probably a decent proxy for the number of people living near absolute subsistence level, and is certainly more useful than the much more relative poverty measures generally used (which are often little more than restatements of the GINI coefficient - that is, measurements of inequality rather than actual material need).

Replies from: Lumifer, PhilGoetz
comment by Lumifer · 2015-10-26T20:35:00.717Z · LW(p) · GW(p)

Right. So that gets me curious about how they estimated the percentage of people living in "extreme poverty" in, say, 1850 China, and what the error bars on that estimate are.

Speaking qualitatively, if we take the "living on the edge of subsistence" meaning, the charts say that around 90% of the human population lived "on the edge of subsistence" in the mid-XIX century. Is that so? I am not sure it matches my intuition well. Even if we look at Asia, at the peasantry of Russia and China, say, these people weren't well-off, but I have doubts about the "edge of subsistence" applying to all of them. Of course, a great deal of their economy was non-trade and local, which makes estimating their consumption in something like 2009 US dollars... difficult.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2015-10-26T21:13:18.529Z · LW(p) · GW(p)

From the LW slack: http://www.measuringworth.com/

Replies from: Lumifer
comment by Lumifer · 2015-10-26T21:27:26.436Z · LW(p) · GW(p)

That site isn't going to help me with XIX century China.

I understand interest rates, and inflation, and purchasing power parity, and all that. That all works fine for more or less developed economies where people buy with money the great majority of what they consume.

The charts posted claim to reflect the entire world and they go back to early XIX century. Whole-world data at that point is nothing but a collection of guesstimates.

Replies from: None
comment by [deleted] · 2015-10-27T17:07:10.218Z · LW(p) · GW(p)

Whole-world data at that point is nothing but a collection of guesstimates.

Yeah. My understanding is you basically get a bunch of economists in the room to break down the problem into relevant parts, then get a bunch of historians in the room, calibrate them, get them to give credible intervals for the relevant data, and plug it all into the model.

Replies from: Lumifer
comment by Lumifer · 2015-10-27T17:17:52.427Z · LW(p) · GW(p)

Is this how you think it works or is this how you think it should work?

In particular, I am curious about the "calibrating historians" part. You're going to calibrate experts against what?

Replies from: None
comment by [deleted] · 2015-10-27T17:29:29.320Z · LW(p) · GW(p)

It's how I think it works.

You're going to calibrate experts against what?

Known historical data (which they don't know).

Replies from: Lumifer
comment by Lumifer · 2015-10-27T17:54:39.679Z · LW(p) · GW(p)

The problem is that you want to use the best experts you have. If you are going to try to calibrate them in their field, they know it (and might have written the textbook you're calibrating them against), and if you're trying to calibrate them in the field they haven't studied, I'm not sure it's relevant to the quality of their studies.

As to "how it works", I'm pretty sure no one is actually trying to calibrate historians. I suspect the process actually works by looking up published papers and grabbing the estimates from them without any further thought -- at best. At worst you have numbers invented out of thin air, straight extrapolation of available curves, etc. etc.

Replies from: None
comment by [deleted] · 2015-10-28T03:09:27.126Z · LW(p) · GW(p)

The problem is that you want to use the best experts you have. If you are going to try to calibrate them in their field, they know it (and might have written the textbook you're calibrating them against), and if you're trying to calibrate them in the field they haven't studied, I'm not sure it's relevant to the quality of their studies.

Resolution and calibration are separate. They may have lower resolution in other fields but they shouldn't have lower calibration.

Edit: I thought about the previous comment some more, and it's not true. One thing they talk about in Superforecasting is that people tend to be overconfident in their own fields while better calibrated in others.

Replies from: Lumifer
comment by Lumifer · 2015-10-28T14:47:09.315Z · LW(p) · GW(p)

You're thinking about this in terms of forecasting. This is not forecasting, this is historical studies.

Consider the hard sciences equivalent: you take, say, some geneticists and try to figure out whether their estimates of which genes cause what are any good by asking them questions about quantum physics to "check how they are calibrated".

Replies from: None
comment by [deleted] · 2015-10-28T16:29:43.591Z · LW(p) · GW(p)

You're thinking about this in terms of forecasting.

No. Bayesian estimate calibration is most often used in forecasting, but it's effective in any domain in which there's uncertainty, including the hard sciences. In fact, calibration training is often done with either numerical trivia, using 90% credible intervals, or with true-or-false questions using a single percentage estimate. I recommend checking out "How to Measure Anything" for a more in-depth treatment.

Consider the hard sciences equivalent: you take, say, some geneticists and try to figure out whether their estimates of which genes cause what are any good by asking them questions about quantum physics to "check how they are calibrated".

Yes, that's essentially how it works, except that you then give them feedback to see if they're over- or underconfident. They'd have to be relatively easy questions, though; otherwise all the estimates would cluster around fifty percent and it wouldn't be very useful training for high-resolution answers.
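[Editor's sketch, in Python, of the kind of calibration exercise described above: an expert gives 90% credible intervals for numeric trivia questions with known answers, and the feedback says whether the intervals were too narrow or too wide. The questions, answers, and intervals here are all made-up illustrations.]

```python
def coverage(intervals, truths):
    """Fraction of (low, high) intervals that contain the true value."""
    hits = sum(low <= t <= high for (low, high), t in zip(intervals, truths))
    return hits / len(truths)

def feedback(cov, target=0.90):
    """Feedback for an expert whose intervals were meant to be 90% credible."""
    if cov < target:
        return "overconfident: intervals are too narrow"
    if cov > target:
        return "underconfident: intervals are too wide"
    return "well calibrated"

# Hypothetical 90% intervals for ten numeric trivia questions.
truths    = [1969, 8849, 37, 300000, 206, 11, 1815, 64, 100, 5]
intervals = [(1960, 1975), (8000, 9000), (30, 40), (250000, 350000),
             (180, 220), (8, 14), (1800, 1820), (60, 70),
             (99, 101), (50, 60)]  # the last interval misses the truth
cov = coverage(intervals, truths)
print(f"coverage: {cov:.0%} -> {feedback(cov)}")
```

The training loop is simply: answer a round, see this feedback, then widen or narrow the next round's intervals accordingly.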

Replies from: Lumifer
comment by Lumifer · 2015-10-28T17:04:41.092Z · LW(p) · GW(p)

it's effective in any domain which there's uncertainty, including hard sciences

Citation needed.

Not all uncertainty is created equal. If uncertainty comes from e.g. measurement limitations, the Bayesian calibration is useless.

Note that science is mostly about creating results that can be replicated by anyone regardless of how well or badly calibrated they are.

Yes, that's essentially how it works

That's how you imagine it to work, since I don't expect anyone to actually be doing this. But let's see, assume we have successfully run the calibration exercises with our group of geneticists. What do you expect them to change in their studies of which genes do what? We can get even more specific, let's say we're talking about one of the twin studies where the author tracked a set of twins, tested them on some phenotype feature X, and is reporting the results that the twins correlate Y% while otherwise similar general population is correlated Z%. What results would better calibration affect?

Replies from: None
comment by [deleted] · 2015-10-28T17:45:16.705Z · LW(p) · GW(p)

Citation needed.

That was an overconfident statement, but for more on how calibration is useful in places other than forecasting, check out "How to Measure Anything", as mentioned in the last comment.

But let's see, assume we have successfully run the calibration exercises with our group of geneticists. What do you expect them to change in their studies of which genes do what? We can get even more specific, let's say we're talking about one of the twin studies where the author tracked a set of twins, tested them on some phenotype feature X, and is reporting the results that the twins correlate Y% while otherwise similar general population is correlated Z%. What results would better calibration affect?

Once calibrated, they can make estimates on how sure they are of certain hypotheses, and of how likely treatments based on those hypotheses would lead to lives saved. This in turn can allow them to quantify what experiment to run next using value of information calculations.

Furthermore, by taking a survey of many of these calibrated genetic experts then extremizing their results, you can get an idea of how likely certain hypotheses are to turn out being correct.
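[Editor's sketch of the "survey then extremize" step, assuming the common log-odds pooling transform from the forecasting literature; the expert probabilities and the alpha value are invented for illustration.]

```python
import math

def extremize(probs, alpha=2.5):
    """Pool expert probabilities by averaging in log-odds space,
    then push the result away from 0.5 by the factor alpha."""
    mean_log_odds = sum(math.log(p / (1 - p)) for p in probs) / len(probs)
    return 1 / (1 + math.exp(-alpha * mean_log_odds))

# Hypothetical survey of five calibrated experts on one hypothesis.
experts = [0.70, 0.65, 0.80, 0.60, 0.75]
print(f"extremized consensus: {extremize(experts):.2f}")
```

Extremizing pushes the pooled estimate beyond any individual's, on the theory that each expert holds only part of the evidence available to the group.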

Replies from: Lumifer
comment by Lumifer · 2015-10-28T17:59:10.388Z · LW(p) · GW(p)

Once calibrated, they can make estimates on how sure they are of certain hypotheses

I don't know if you read scientific papers, but they don't "make estimates on how sure they are of certain hypotheses". They present the data and talk about the conclusions and implications that follow from the data presented. The potential hypotheses are evaluated on the basis of data, not on the basis of how well-calibrated a particular researcher feels.

Calibration is good for guesstimates, it's not particularly valuable for actual research.

how likely treatments based on those hypotheses would lead to lives saved ...

That's forecasting. Remember, we're not talking about forecasting.

Replies from: None
comment by [deleted] · 2015-10-28T18:12:26.433Z · LW(p) · GW(p)

I don't know if you read scientific papers, but they don't "make estimates on how sure they are of certain hypotheses". They present the data and talk about the conclusions and implications that follow from the data presented. The potential hypotheses are evaluated on the basis of data, not on the basis of how well-calibrated a particular researcher feels.

I'm not really sure how to answer this because I think you misunderstand calibration.

Science moves forward through something called scientific consensus. How does scientific consensus work right now? Well, we just kind of use guesswork. Expert calibration is a more useful way to understand what the scientific consensus actually is.

That's forecasting. Remember, we're not talking about forecasting.

No, it's a decision model. The decision model uses a forecast "How many lives can be saved", but it also uses calibration of known data "Based on the data you have, how sure are you that this particular fact is true".

Replies from: Lumifer
comment by Lumifer · 2015-10-28T18:37:03.361Z · LW(p) · GW(p)

Science moves forward through something called scientific consensus.

No. This is absolutely false. Science moves forward through being able to figure out better and better how reality works. Consensus is really irrelevant to the process. The ultimate arbiter is reality regardless of what a collection of people with advanced degrees can agree on.

The decision model uses a forecast "How many lives can be saved", but it also uses calibration of known data "Based on the data you have, how sure are you that this particular fact is true".

That has nothing to do with calibration. "How many lives can be saved" is properly called a point forecast which provides an estimate of the center of the distribution. These are very popular but also limited because a much more useful forecast would come with an expected error and, ideally, would specify the shape of the distribution as well.

"Based on the data you have, how sure are you that this particular fact is true" is properly a question about the standard error of the estimate and it has nothing to do with subjective beliefs (well-calibrated or not) of the author.

I only care about someone's calibration if I'm asking him to guess. If the answer is "based on the data", it is based on the data and calibration is irrelevant.

Replies from: passive_fist, None
comment by passive_fist · 2015-10-28T22:26:46.640Z · LW(p) · GW(p)

No. This is absolutely false. Science moves forward through being able to figure out better and better how reality works.

While this is completely true, and consensus plays only a minor role in science, it's not true that consensus is irrelevant. Given no other information about a hypothesis other than that the majority of scientists believe it to be true, the rational course of action would be to adjust belief in the hypothesis upward. Of course, evidence contradicting the hypothesis would nullify this consensus effect. Even a small amount of evidence trumps a large consensus.

comment by [deleted] · 2015-10-28T19:37:59.934Z · LW(p) · GW(p)

No. This is absolutely false. Science moves forward through being able to figure out better and better how reality works. Consensus is really irrelevant to the process. The ultimate arbiter is reality regardless of what a collection of people with advanced degrees can agree on.

No, that's the popular conception of science, but unfortunately it's not an oracle that proves reality true or false. What observations and experiments give us are varying levels of evidence that can falsify some hypotheses and point towards the truth of other hypotheses. We then use human reasoning to put all this evidence together and let humans decide how sure they are of something. If they have lots and lots of evidence, that thing can become a "theory" based on the consensus that there's quite a lot of it and it's really good, and even more evidence that's even better makes that thing a "law". But it's based on a subjective sense of "how good these data are."

"Based on the data you have, how sure are you that this particular fact is true" is properly a question about the standard error of the estimate and it has nothing to do with subjective beliefs (well-calibrated or not) of the author.

Not quite. It also has to do with all the other previous experiments done, your certainty in the model itself, your ideas about how reality works, and a lot of other things.

That has nothing to do with calibration. "How many lives can be saved" is properly called a point forecast which provides an estimate of the center of the distribution. These are very popular but also limited because a much more useful forecast would come with an expected error and, ideally, would specify the shape of the distribution as well.

Yes, ideally this would be a credible interval with an estimated distribution, but even a credible interval assuming a uniform distribution would be very useful for this purpose.

In terms of calibration: the better calibrated someone is, the more sure you can be that if they make 100 estimates at 90% confidence, around 90 of the true values will lie within the credible intervals they gave.

I only care about someone's calibration if I'm asking him to guess. If the answer is "based on the data", it is based on the data and calibration is irrelevant.

Well calibrated people will base their guesses on data, poorly calibrated people will not. Your understanding of calibration isn't in line with research done by Douglas Hubbard, Phillip Tetlock, and others who research human judgement.

Replies from: Lumifer
comment by Lumifer · 2015-10-29T15:08:53.679Z · LW(p) · GW(p)

that's the popular conception of science

Heh. Do you mean that's a conception of science held by not-too-smart uneducated people? X-)

an oracle that proves reality true or false

Sense make not. Reality is always true.

Speaking generally, you seem to treat science as people asserting certain things and so, to decide on how much to trust them, you need to know how calibrated those people are. That seems very different from my perception of science which is based on people saying "This is so, you can test it yourself if you want".

Under your approach, the goal is achieving consensus. Under my system, the goal is to provide replicability and show that it actually works.

Data does not depend on calibration of particular people.

Replies from: None, gjm
comment by [deleted] · 2015-10-30T03:11:16.910Z · LW(p) · GW(p)

This is so, you can test it yourself if you want

Under your approach, the goal is achieving consensus. Under my system, the goal is to provide replicability and show that it actually works.

I think we have to separate two ideas here.

  1. There's the data you get from an experiment

  2. There's the conclusions you can draw from that data.

I would agree that the data does not depend on the calibration of particular people. But the conclusions you draw from that data DO need to be calibrated. Furthermore, other scientists may want to do experiments based on those conclusions; their decision to do that will be based on how likely they think it is that the conclusions are accurate. The process of science is building new conclusions on the basis of old conclusions - if it's just about gathering the data, you never gain a deeper understanding of reality.

Replies from: Lumifer
comment by Lumifer · 2015-10-30T14:47:14.469Z · LW(p) · GW(p)

There's the conclusions you can draw from that data.

In the word "conclusions" you conflate two different things which I wish to keep separate.

One of them is subjective opinion/guesstimate/evaluation/conclusion of a person. I agree that the calibration of the person whose opinion we care about is relevant.

The other is objective facts/observations/measurements/conclusions that do not depend on anyone in particular. That's not just "data" from your first point. That's also conclusions that follow from the data in an explicit, non-subjective way. A study can perfectly well come to some conclusions by showing how the data leads to them without depending on anyone's calibration.

The answer to doubts about the first kind of conclusions is "trust me because I know what I'm talking about". The answer to doubts about the second kind of conclusions is "you don't have to trust me, see for yourself".

The process of science is building new conclusions on the basis of those old conclusions

I continue to disagree. In your concept of science the idea of testing against reality is somewhere in the back row. What's important is achieving consensus and being well-calibrated. I don't think this is what science is about.

Replies from: None
comment by [deleted] · 2015-10-30T20:46:05.133Z · LW(p) · GW(p)

In your concept of science the idea of testing against reality is somewhere in the back row. What's important is achieving consensus and being well-calibrated. I don't think this is what science is about.

Let's stop using the word "science" because I don't really care how we define that specific word.

Let's change it instead to "the process of learning things about reality" because that's what I'm talking about. I think it's what you're talking about as well, but traditionally science can also mean "the process of running experiments" - and if we defined it that way, then I'd agree that calibration isn't needed.

The other is objective facts/observations/measurements/conclusions that do not depend on anyone in particular. That's not just "data" from your first point. That's also conclusions that follow from the data in an explicit, non-subjective way.

I can't think of an example where conclusions are proven true from data in an explicit, non-subjective way. Science works on falsification: you can prove things false in an explicit, non-subjective way (assuming you trust completely in the protocol and the people running it), but you can't prove things true, because there's still ANOTHER experiment someone could run in different conditions that could theoretically falsify your current hypothesis. Furthermore, you may get the correlation right but misunderstand the causation.

Don't get too caught up on this example, because it's just a silly illustration of a general point, but say you made the hypothesis that "An object falling due to gravity accelerates at a rate of 9.8 meters/second squared". You could run many experiments with data that fit your hypothesis, but there's always an alternative hypothesis: "Objects accelerate at 9.8 meters/second squared, except on Tuesdays when it's a full moon". Unless you had specifically tested that scenario, that hypothesis has some infinitesimal chance of being right, and the thing is, there's no way to test ALL of the potential scenarios.

That's where calibration comes in: you don't have certainty that objects accelerate at that rate due to gravity in every situation, but as you confirm it in more and more situations, you (and the scientific community) become more and more certain that it's the correct hypothesis. But even then, someone like Einstein can come along, find some edge case involving the speed of light where the hypothesis doesn't hold, and present a better one.
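[Editor's sketch of this "more and more certain, but never certain" dynamic as a toy Bayesian update; the likelihoods are invented purely for illustration.]

```python
def update(prior, p_pass_if_true=0.99, p_pass_if_false=0.5):
    """Posterior credence after the hypothesis passes one more test."""
    num = p_pass_if_true * prior
    return num / (num + p_pass_if_false * (1 - prior))

credence = 0.5  # start undecided
for _ in range(10):
    credence = update(credence)
print(f"credence after 10 passed tests: {credence:.4f}")
```

Credence climbs toward 1 but never reaches it, which is exactly the room an Einstein-style edge case exploits.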

Replies from: Lumifer
comment by Lumifer · 2015-10-30T20:51:58.583Z · LW(p) · GW(p)

Let's change it instead to "the process of learning things about reality" because that's what I'm talking about.

"The process of learning things about reality" is much MUCH larger and more varied than science.

That ain't where goalposts used to be :-/

Replies from: None
comment by [deleted] · 2015-10-30T21:09:38.345Z · LW(p) · GW(p)

We just had different goal posts. You learned science as "running an experiment" - I learned science as "Doing background research, determining likely outcomes, running experiments, sharing results back with the community". That's why I tabooed the word, to make sure we were on the same page.

Are we in agreements about the basic concept, if we agree that we have two different definitions of science?

Replies from: Lumifer
comment by Lumifer · 2015-11-01T22:35:33.207Z · LW(p) · GW(p)

I learned science as...

Do tell. Where and how did you "learn science" this way?

Are we in agreements about the basic concept

What is the "basic concept"?

Replies from: None
comment by [deleted] · 2015-11-01T23:30:03.528Z · LW(p) · GW(p)

Do tell. Where and how did you "learn science" this way?

Throughout elementary and middle school (early education here in the US) through textbooks with diagrams like this

What is the "basic concept"?

That experiments can give you mostly non-subjective data about one experiment, but to draw broader conclusions about how the world works you have to combine the data from many experiments into a subjective estimate about how likely a hypothesis is.

Replies from: Lumifer
comment by Lumifer · 2015-11-02T04:44:16.164Z · LW(p) · GW(p)

Throughout elementary and middle school

That does not strike me as an adequate basis for deciding what science is or is not.

you have to combine the data from many experiments into a subjective estimate

So, are you saying that the outcome of science is a set of subjective estimates that most people agree with?

Replies from: None
comment by [deleted] · 2015-11-02T05:43:34.771Z · LW(p) · GW(p)

That does not strike me as an adequate basis for deciding what science is or is not.

Words mean different things to different people... as I said, I'm not interested in arguing over the "proper" definition of this word. I'm interested in clarifying the process through which experiments lead to new knowledge about the world. You can call this process "not science" and I won't argue - it's not an interesting argument to me.

So, are you saying that the outcome of science is a set of subjective estimates that most people agree with?

I'm not sure... what do you mean by "the outcome of science?"

comment by gjm · 2015-10-29T20:10:30.194Z · LW(p) · GW(p)

That seems very different from my perception of science

Aren't both these views of science oversimplifications? I mean, in practice most of the people making use of the work scientists have done aren't really testing the scientists' work for themselves (they're kinda doing it implicitly by making use of that work, but the whole point is that they are confident it's not going to fail).

Reality certainly is the ultimate arbiter, but regrettably we don't get to ask Reality directly whether our theories are correct; all we can do is test them somewhat (in some cases it's not even clear how to begin doing that; I'm looking at you, string theory) and that testing is done by fallible people using fallible equipment, and in many cases it's very difficult to do in a way that actually lets you separate the signal from the noise, and most of us aren't well placed to evaluate how fallibly it's been done in any given case, and in practice usually we have to fall back on something like "scientific consensus" after all.

I think you and MattG are at cross purposes about the role he sees for calibration in science. The process by which actual primary scientific work becomes useful to people who aren't specialists in the field goes something like this:

  • Alice does some work where she exposes laboratory rats to bad journalism and measures the rate at which they get cancer. (So do Alex, Amanda, Aloysius, et Al.)
    • She forms some opinions about this stuff; we could, in LW style, represent these opinions as some kind of probability distribution over relationships between bad journalism and cancer. Both her point estimates and her estimates of the distribution around them are strongly constrained by the work she's done, but of course there are probably things she's failed to think of. If she's sensible, her opinions will include explicit allowance for having (maybe) made mistakes and missed things. Such considerations will probably not appear explicitly in the articles she publishes.
  • Bob talks to Alice (and Alex, Amanda, ...) or reads the articles they publish.
    • As a result, Bob too forms opinions about this stuff, which again we can represent in probabilistic terms. Bob's knowledge of the actual work is less direct than Alice's, and his opinions are going to depend not only on Alice's observed risk ratios and samples sizes and p-values and whatnot but also on how much he trusts Alice (having read her papers) to have done good work. And of course he will be trying to integrate what he learns from Alice with what he learns from Alex, Amanda et Al.
    • Bob may actually also be a primary researcher in the field, but here we're considering him in his role as someone who has looked at the primary researchers' work and drawn some conclusions.
  • Bob and Bill and Beth and Bert and all the other journo-oncologists (some of whom are in fact Alice and Alex etc.) all read more or less the same articles, and talk to one another at conferences, and write articles commenting on other people's work. Over the next few years, journo-oncological opinion converges to a rough consensus that reading the Daily Mail probably does cause cancer, that further work might pin that down further, but that the field has higher research priorities.
  • Carol, a non-specialist who wants to know whether reading the Daily Mail causes cancer, talks to some experts in the field or reads a popular book on the subject or even gets into the journals and finds a review article or two.
    • As a result, Carol also forms opinions about journo-oncology. If she has the necessary skills she may also look cursorily at some of the primary literature and get some idea of how rigorous that work is, how big the sample sizes are, whether the research was funded by Rupert Murdoch, etc., but on the whole she's dependent on what Bob and the other Bs tell her. So her opinions are going to be mostly shaped by what Bob says and what she thinks of Bob's accuracy on this point.

Calibration (in the sense we're talking about here) isn't of much relevance to Alice when she's doing the primary research. She will report that the Daily Mail is positively associated with brain cancer in rats (RR=1.3, n=50, CI=[1.1,1.5], p=0.01, etc., etc., etc.) and that's more or less it. (I take it that's the point you've been making.)

But Bob's opinion about the carcinogenicity of the Daily Mail (having read Alice's papers) is an altogether slipperier thing; and the opinion to which he and Beth and the others converge is slipperier still. It'll depend on their assessment of how likely it is that Alice made a mistake, how likely it is that Aloysius's results are fraudulent given that he took a large grant from the DMG Media Propaganda Fund, etc.; and on how strongly Bob is influenced when he hears Bill say "... and of course we all know what a shoddy operation Alex's lab is."

It is in these later stages that better calibration could be valuable, and that I think Matt would like to see more explicit reference to it. He would like Bob and Bill and Beth and the rest to be explicit about what they think and why and how confidently, and he would like the consensus-generating process to involve weighing people's opinions more or less heavily when they are known to be better or worse at the sort of subjective judgement required to decide how completely to mistrust Aloysius because of his funding.

I'm not terribly convinced that that would actually help much, for what it's worth. But I don't think what Matt's saying is invalidated by pointing out that Alice's publications don't talk about (this kind of) calibration.

Replies from: Lumifer
comment by Lumifer · 2015-10-29T21:14:51.778Z · LW(p) · GW(p)

I mean, in practice most of the people making use of the work scientists have done aren't really testing the scientists' work for themselves (they're kinda doing it implicitly by making use of that work, but the whole point is that they are confident it's not going to fail).

First, I think the "implicitly" part is very important. That glowing gizmo with melted-sand innards in front of me works. By working it verifies, very directly, a whole lot of science.

And "working in practice" is what leads to confidence, not vice versa. When a sailor took the first GPS unit on a cruise, he didn't say "Oh, science says it's going to work, so that's all going to be fine". He took it as a secondary or, probably, a tertiary navigation device. Now, after years of working in practice sailors take the GPS as a primary device and most often, a second GPS as a secondary.

Note, by the way, that we want useful science and useful science leads to practical technologies that we test and use all the time.

Calibration (in the sense we're talking about here) isn't of much relevance to Alice when she's doing the primary research.

Oh, good, we agree.

But Bob's opinion ... is an altogether slipperier thing; and the opinion to which he and Beth and the others converge is slipperier still.

Sure, that's fine. Bob and Beth are not scientists and are not doing science. Allow me to quote myself: "Calibration is good for guesstimates, it's not particularly valuable for actual research." Bob and Bill and Beth and Bert are not doing actual research. They are trying to use published results to form some opinions, some guesstimates and, as I agree, their calibration matters for the quality of their guesstimates. But, again, that's not science.

Replies from: gjm
comment by gjm · 2015-10-30T03:54:36.508Z · LW(p) · GW(p)

Bob and Beth are not scientists and are not doing science.

Bob and Beth are scientists (didn't I make it clear enough in my gedankenexperiment that they are intended to be journo-oncologists just as much as Alice et al, it's just that we're considering them in a different role here?). And they are forming their opinions in the course of their professional activities. Doing science is not only about doing experiments and working out knotty theoretical problems; when two scientists discuss their work, they are doing science; when a scientist attends a conference presentation given by another, they are doing science; when a scientist sits and thinks about what might be a good problem to attack next, they are doing science.

Doing actual research is a more "central" scientific activity than those other things. But the other things are real, they are things scientists actually do, they are things scientists need to do, and I don't see any reason to deny that doing them is part of how science (the whole collective enterprise) functions.

Replies from: Lumifer
comment by Lumifer · 2015-10-30T14:53:52.672Z · LW(p) · GW(p)

when a scientist sits and thinks about what might be a good problem to attack next, they are doing science.

Sure, and you've expanded the definition of "doing science" into uselessness. "Doodling on paper napkins is doing science!" -- well, yeah, if you want it so, what next?

I'm not talking about what large variety of things scientists do in the course of their professional lives. I'm talking about the core concept of science and whether it, as MattG believes, "moves forward through something called scientific consensus".

In particular, I would like to distinguish between "doing science" (discovering how the world works) and "applying science" (changing the world based on your beliefs about how it works).

Replies from: gjm
comment by gjm · 2015-10-30T21:26:46.648Z · LW(p) · GW(p)

the core concept of science

Let's distinguish two things. (1) The core activities of science are, for sure, things like doing carefully designed experiments and applying mathematics to make quantitative predictions based on precisely formulated theories. These activities, indeed, don't proceed by consensus, but no one claimed otherwise; even to ask whether they do is a type error. (2) How scientific knowledge actually advances. This is not only a matter of #1; if we had nothing but #1 then science wouldn't advance at all, because in order for science to advance each scientist's work needs to be based in, or at least aware of, the work of their predecessors. And #2, as it happens, does involve something like consensus, and it's reasonable to wonder whether being more explicitly and carefully rational about #2 would help science to advance more effectively. And that is what (AIUI) MattG is proposing.

Replies from: Lumifer, Richard_Kennaway, Richard_Kennaway
comment by Lumifer · 2015-11-01T22:32:41.606Z · LW(p) · GW(p)

but no one claimed otherwise

I do believe MattG claimed otherwise. At least that was the most straightforward reading of what he said.

in order for science to advance each scientist's work needs to be based in, or at least aware of, the work of their predecessors.

That is true, the scientists do trust what's considered "solved", but that trust is conditional. One little ugly fact can blow up a lot of consensus sky-high.

I think one of the core issues here is resistance to cargo cult science. Consensus is dangerous because it enables cargo cults; the sceptical "show me" attitude is invaluable here.

more explicitly and carefully rational about #2 would help science to advance more effectively

What do you mean by "carefully rational"? How is that better than the baseline "show me"?

Replies from: gjm
comment by gjm · 2015-11-02T02:05:51.773Z · LW(p) · GW(p)

I do believe MattG claimed otherwise.

I think you can only reach that conclusion by applying your preferred definition of "science" to MattG's statement about science. That's a mistake unless you know he's not using a substantially different definition.

that trust is conditional

Yes, of course. (Did anyone suggest it's not?)

For the avoidance of doubt, I am not for a minute suggesting blind or unquestioning trust of scientific consensus; at least, not for scientists. (It is possible that below some threshold of scientific competence blind trust is in fact the best available strategy.)

What do you mean by "carefully rational"?

I mean what happens if the Bobs in my thought experiment, rather than arriving at their opinions informally and qualitatively, think explicitly about what they've heard and read and about how much evidence each thing they've heard or read provides, and determine their own opinions by deliberate reflection on that (not necessarily by actual calculation, but with that always available in cases of doubt).

This might well not be an improvement (e.g., because System 1 has hardware support that System 2 doesn't) but it's not obvious that it isn't.
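
A toy version of that deliberate-reflection step, written as an explicit Bayes-rule update (the likelihood ratios are made up, and real evidence-weighing would of course be messier):

```python
# Toy explicit Bayes update: start from prior odds, multiply in one
# likelihood ratio per piece of evidence, convert back to a probability.

def update_odds(prior_prob, likelihood_ratios):
    """Return the posterior probability after folding in each ratio."""
    odds = prior_prob / (1 - prior_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Bob starts at 50:50, reads two supportive papers (LR 3 each) and one
# unconvincing replication attempt (LR 0.5).
posterior = update_odds(0.5, [3, 3, 0.5])  # ~0.82
```

The virtue of this over the informal version isn't the arithmetic, it's being forced to state how much each paper actually moved you.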

How is that better than the baseline "show me"?

"Carefully rational" isn't a proposed replacement for "show me", it's a proposed replacement for things like "I've read about this in a few papers so I'll assume it's true" (which probably doesn't get said explicitly very often, of course).

"Show me" is always there (usually in the background) as an option. Most scientists, most of the time, don't go banging on other scientists' lab doors demanding further evidence for what's in their papers. Most scientists, most of the time, don't attempt to replicate other scientists' results before (at least provisionally) accepting them.

(One reason is that replication and door-banging take effort. This is also an argument against the more explicit "carefully rational" approach I think MattG is advocating.)

Replies from: Lumifer
comment by Lumifer · 2015-11-02T04:42:36.438Z · LW(p) · GW(p)

I fail to discern your point. There are a lot of clarifications, adjustments, and edge-nibbling, but what is it that you want to say?

Replies from: gjm
comment by gjm · 2015-11-02T11:55:03.815Z · LW(p) · GW(p)

In the absence of any more information than that you "fail to discern [my] point", I don't know what I can usefully say to help. In ascending order of cynicism:

  • If nothing in my previous comment conveyed any meaning to you at all, then it seems like we have a big impedance mismatch and fixing the problem (whatever it is) seems likely to be more trouble than it's worth.
  • If you just can't be bothered to say with any specificity what the problem is, then I suppose that indicates that you think your time is much more valuable than mine, a position I cordially decline to share.
  • If you're just being generally dismissive because that's rhetorically more effective than engagement, I'm not interested in discussion on those terms.

(I'm sorry if you find my style uncongenially cautious. This deep into a tangential discussion like this one, I'd expect much of what's said to be clarifications and edge-nibbling, and in particular it seems peculiar to (1) ask questions of the form "what do you mean by X and why is it better than Y?" and then (2) complain that you're getting clarification and edge-nibbling in response.)

Replies from: Lumifer
comment by Lumifer · 2015-11-02T15:43:15.512Z · LW(p) · GW(p)

I mean it literally. I can't see a coherent position behind your criticisms; there is no overarching framework which backs them up. I don't understand what the core of your disagreement is amongst all the clarifications.

Replies from: gjm
comment by gjm · 2015-11-02T16:55:14.015Z · LW(p) · GW(p)

I don't know that my disagreement has a single core. It looks to me as if you are making a number of separate (but related) mistakes.

I think you are defining "science" narrowly, to include only actual experimentation and analysis, then interpreting MattG's comments as if he is using a similarly narrow definition of "science" (which he has said he isn't). This is a mistake because of course what someone says is liable to come out wrong when you give its words different meanings from the one they had in mind.

I think you are defining "science" narrowly, to include only actual experimentation and analysis, in a discussion of whether knowledge would advance more effectively if scientists explicitly represented their beliefs about scientific theories in probabilistic terms, did something like Bayes-rule updates on learning new things, and attempted to monitor the reliability of other scientists using notions like "calibration". This is a mistake because the question at issue is not about actual experimentation and analysis.

I think you are writing as if the only important things scientists do in their capacity as scientists are actual experimentation and analysis. This is a mistake because science is in fact a collective endeavour whose success in advancing knowledge depends on scientists' communication with other scientists, and evaluation of their work.

Perhaps this is the core: I do not think that, in this discussion, it is helpful for you to insist on a narrow definition of what counts as "science". I think your suggestion upthread that the only alternative is to say that absolutely anything is "science" is ridiculous. I don't have any objection to a narrow definition of "science" as such; there are surely contexts in which it's better than a broad one; but I don't think this discussion is such a context.

Replies from: Lumifer
comment by Lumifer · 2015-11-02T19:22:55.120Z · LW(p) · GW(p)

Perhaps this is the core: I do not think that, in this discussion, it is helpful for you to insist on a narrow definition of what counts as "science"

Interesting. I don't perceive this subthread as mostly about definitions, I think of it as being about the balance between two approaches to claims about reality: the hard one ("show me", see also this) and the soft one ("let's construct as subjective probability assessment on the basis of opinions of experts").

Notably, this subthread started with MattG saying "Science moves forward through something called scientific consensus" and me going "Whaaaa...?"

Replies from: gjm
comment by gjm · 2015-11-02T21:41:20.993Z · LW(p) · GW(p)

I also don't think the discussion is about definitions, but I think it's being made needlessly more difficult by differences in definitions.

It is (I think) a simple matter of empirical fact that most of the time scientists get information from one another without saying "show me!". That doesn't mean that "show me!" isn't always there in the background -- it is -- but only that the actual practice of science-broadly-conceived (by which I don't mean "science-narrowly-conceived plus fake science", I mean "science-narrowly-conceived plus the other things scientists do without which science as a whole would make much less progress") does in fact involve subjective probability assessments on the basis of experts' opinions.

Replies from: Lumifer
comment by Lumifer · 2015-11-02T22:06:11.483Z · LW(p) · GW(p)

It is (I think) a simple matter of empirical fact that most of the time scientists get information from one another without saying "show me!".

Actually, I will disagree with that. There is a reason published papers consist mostly of detailed descriptions of what was done and what happened. If what you are saying were true, executive summaries would suffice: We have discovered that frobnicating frotzed blivets leads to emission of magic smoke. The End.

Certainly, large parts of scientific knowledge have passed into the "just accept it's true" realm, but any new claims are required to be supported by fairly large amounts of "show me".

Replies from: gjm
comment by gjm · 2015-11-02T23:31:17.201Z · LW(p) · GW(p)

If what you are saying were true, executive summaries would suffice

I don't see why. The details are there for the following reasons, none of which appears to me to be invalidated by anything I've said. (1) They are interesting for their own sake (to those immersed in the field, at least). (2) They clarify what useful opportunities there may be for followup work ("Hmm, all their blivets were frotzed with titanium chloride. What happens if we use uranium nitride instead?"). (3) They provide a way to do "show me!"-like checks for those relatively few who want to without needing to interrogate the authors (replicating the analysis is easier than replicating the experiment). (4) They provide, in principle, the information needed for a more thorough "show me!" check (outright replication) for those even fewer who want to do that.

If you've got the impression that I don't agree that independent experimental test is the nearest thing we have to an ultimate arbiter of scientific truth, then I've been unclear or you've been obtuse or both; I do agree with that. Most of the time, though, scientists don't go all the way to the ultimate arbiter.

comment by Richard_Kennaway · 2015-10-31T06:58:24.319Z · LW(p) · GW(p)

Consensus is the result, not the means.


But this thread has drifted far from reality. It began with Lumifer's comment about estimates of historical poverty:

The charts posted claim to reflect the entire world and they go back to early XIX century. Whole-world data at that point is nothing but a collection of guesstimates.

To which MattG replied:

My understanding is you basically get a bunch of economists in the room to break down the problem into relevant parts, then get a bunch of historians in the room, calibrate them, get them to give credible intervals for the relevant data, and plug it all in to the model.

Lumifer:

Is this how you think it works or is this how you think it should work?

MattG:

It's how I think it works.

And the conversation drifted into the stratosphere with no further discussion of where those numbers actually came from.
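
For concreteness, the process MattG describes could be sketched as a toy Monte Carlo model (every interval and number below is invented, which is rather the point):

```python
# Toy Monte Carlo combination of experts' credible intervals: crudely
# treat each 90% interval as a uniform range, sample, and combine.
import random

def sample_interval(low, high):
    return random.uniform(low, high)

def estimate_poverty_share(n=100_000, seed=0):
    random.seed(seed)
    shares = []
    for _ in range(n):
        population = sample_interval(0.9e9, 1.1e9)  # historians' interval
        poor = sample_interval(0.7e9, 1.0e9)        # economists' interval
        shares.append(min(poor / population, 1.0))
    shares.sort()
    # median plus a central 90% band for the combined estimate
    return shares[n // 2], shares[int(0.05 * n)], shares[int(0.95 * n)]
```

Whether the intervals fed into such a model for the early 19th century would be anything better than guesstimates is exactly the question that never got answered.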

comment by Richard_Kennaway · 2015-10-31T06:38:50.919Z · LW(p) · GW(p)

Consensus is the result, not the means.

comment by PhilGoetz · 2015-10-29T14:13:00.072Z · LW(p) · GW(p)

I spent a month in a farming village in China about 15 years ago. Farmhands there made about $8 a day during the growing season, and little during the winter. They would be supporting a family of 4 or more, so that would be under $2 a day on average. Yet prices for rent and food were so low that, if you considered only the essentials, they were making better wages than many people in America. They were poor if they wanted to buy manufactured goods, and poor in that certain standards (clean air, quiet neighbors, reliable electricity) were unavailable even for the rich. Most of them had indoor toilets (with nasty open sewers) and television (the true necessity). I don't know about the price of fuel or electricity.

My point is that using the exchange rate to compute how many dollars a day someone makes in a country in which the exchange rate is only used to price things that the locals don't buy is very misleading.

Replies from: satt
comment by satt · 2015-10-31T15:18:53.030Z · LW(p) · GW(p)

My point is that using the exchange rate to compute how many dollars a day someone makes in a country in which the exchange rate is only used to price things that the locals don't buy is very misleading.

I believe the World Bank defines poverty in terms of PPP-adjusted incomes for that reason.
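
A toy illustration of why the PPP adjustment matters, with invented numbers loosely echoing the farming-village example above (the price-level ratio is made up, not real World Bank data):

```python
# Toy PPP adjustment: convert a local daily income to "international
# dollars" instead of using the market exchange rate directly.

def ppp_income(local_income, exchange_rate, price_level_ratio):
    """exchange_rate: local currency units per USD at market rates.
    price_level_ratio: local price level relative to the US; e.g. 0.3
    means a market-converted dollar buys ~3.3x more locally."""
    market_usd = local_income / exchange_rate
    return market_usd / price_level_ratio

# 16 yuan/day at 8 yuan per USD is $2/day at market rates; if local
# prices are ~30% of US prices, that is closer to $6.67/day PPP.
income_ppp = ppp_income(16, 8, 0.3)
```

The same nominal "$2/day" can therefore sit on either side of the extreme-poverty line depending on which conversion is used.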

comment by Jayson_Virissimo · 2015-10-26T17:36:13.156Z · LW(p) · GW(p)

The World Bank's stipulative definition of "extreme poverty" is earning less than an inflation-adjusted $1.25 a day.

comment by [deleted] · 2015-10-26T10:10:03.700Z · LW(p) · GW(p)

What happened in 1970 that poverty started sharply declining?

Replies from: Tripitaka, knb
comment by Tripitaka · 2015-10-26T12:47:37.038Z · LW(p) · GW(p)

Seems to be mostly Asia getting richer. Hans Rosling gives a very impressive talk with amazing visuals about that here: https://www.youtube.com/watch?v=hVimVzgtD6w You can also play with the data for yourself http://www.gapminder.org/world

comment by knb · 2015-10-27T23:17:44.229Z · LW(p) · GW(p)

I can think of a couple things that might have contributed. In the second half of the 1960s Chinese government policy switched from encouraging maximally large families to encouraging family planning and control of population growth. In 1970 the 2-child policy was implemented. Since lots of Chinese babies born in that period would have been in extreme poverty it seems likely that played a part. Interestingly 1970 also roughly marks the end of the Great Compression and beginning of the Great Stagnation in the US and many other developed economies. The obvious explanation is that this was the inflection point for a new phase of globalization and labor arbitrage, resulting in stagnant incomes for 1st world workers and higher earnings for 3rd worlders.

comment by Daniel_Burfoot · 2015-10-27T15:21:30.732Z · LW(p) · GW(p)

Well, the trend in the second chart is clearly unsustainable, so it's hardly something to get too excited about. I would be happy if the second chart showed poverty dropping off while total population stayed roughly flat.

Replies from: knb
comment by knb · 2015-10-27T23:43:37.111Z · LW(p) · GW(p)

Well, the trend in the second chart is clearly unsustainable, so it's hardly something to get too excited about.

What aspect do you think is unsustainable? The population growth or the reduction in absolute poverty? Over what time period?

Replies from: malcolmocean
comment by MalcolmOcean (malcolmocean) · 2015-10-30T05:20:43.475Z · LW(p) · GW(p)

@Daniel_Burfoot's second sentence was "I would be happy if the second chart showed poverty dropping off while total population stayed roughly flat." so I think it's pretty clear he meant the population growth.

comment by Lukas_Gloor · 2015-10-26T12:47:36.432Z · LW(p) · GW(p)

The developments you highlight are impressive indeed. But you're making it sound as though everyone should agree with your normative judgments. You imply that doubling extreme poverty would be a good thing if it comes with a doubling of the rest of the population. This view is not uncontroversial and many EAs would disagree with it. Please respect that other people will disagree with your value judgments.

Replies from: Dagon, Luke_A_Somers
comment by Dagon · 2015-10-26T15:44:43.342Z · LW(p) · GW(p)

I think he's showing the opposite. The first graph does imply what you say. The second graph shows that EVEN if we look at number of people in extreme poverty as an absolute, rather than a ratio, we've been making steady progress since 1971 and are now below 1820 levels of poverty.

It's not judgement-free, as nothing on this topic can or should be. However, it's showing that the positive results are robust to multiple dimensions that people are likely to judge on.

To be specific: what normative judgement do you prefer for which this graph is misleading? Or are you saying "there are important things not covered in either graph", which is true of pretty much any such summary?

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2015-10-26T18:33:22.022Z · LW(p) · GW(p)

I'm referring to the text, not the graph(s). The two paragraphs between the graphs imply

that doubling extreme poverty would be a good thing if it comes with a doubling of the rest of the population.

He does not preface any of it with "I think"; he just presents it as obvious. Well, I know for a fact that there are many people who self-identify as rationalists to whom this is not obvious at all. It also alienates me that people here, according to the karma distributions, don't seem to get my point.

Replies from: Dagon, ChristianKl, Lumifer, OrphanWilde
comment by Dagon · 2015-10-27T00:19:40.389Z · LW(p) · GW(p)

I sympathize with the feeling of alienation and confusion when something valuable gets downvoted. I try not to learn too much from small karma amounts - there's enough inconsistency in what different groups of readers seem to want that it's easier to post mostly for my own amusement.

I don't agree that it's all that controversial that "copy an overall-positive-value population distribution" is positive. The second half of the repugnant conclusion (that adjusting satisfaction toward the average is non-negative) is somewhat disputed, but wasn't suggested here.

I also don't think that was the post's main point, so even if I disagreed, I'd be sure to call out that I agree with his main point and only want to clarify this side-implication.

comment by ChristianKl · 2015-10-26T20:07:52.669Z · LW(p) · GW(p)

It also alienates me that people here, according to the karma distributions, don't seem to get my point.

Reading "implied" claims into an article and then disagreeing with the claims you believe are implied is frequently not something that goes over well karma-wise.

I also see no disrespect by Dias that warrants telling him to "please respect..."

comment by Lumifer · 2015-10-26T18:55:35.608Z · LW(p) · GW(p)

It also alienates me

And does that oblige anyone to do anything?

comment by OrphanWilde · 2015-10-26T18:45:26.181Z · LW(p) · GW(p)

It also alienates me that people here, according to the karma distributions, don't seem to get my point.

I downvoted you. I got your "point". I found it concern-trolling at worst, and irrelevant at best, with a dose of error tossed into the mix.

Oh no, somebody dared to say something without putting a qualifier in the front to make explicit a fact that we all understand, that this is what -they think-. That's why he's writing it, and your protest at the absence of weasel-words is nonsense, a rationalization for your statement, and a blanket refusal to admit to the fact that you might have been wrong.

Your protest at being "alienated" by the fact that people didn't upvote you as much as the person who disagreed with you makes it worse, because you imply you're obligated some level of karma balance with those who disagree with you. I say this, knowing for a fact it may get me downvoted, but I have enough honesty of self not to care: Fuck your entitlement. Grow up, become an adult, and realize that nobody is obligated to upvote you, regardless of what effect it might have on your feelings. That's not what the upvote/downvote system is for, and you're not a victim because half the people, including myself, who read your post felt that Less Wrong could use -less- of the kind of thing you wrote.

Replies from: michaelkeenan, Lukas_Gloor
comment by michaelkeenan · 2015-10-26T22:49:42.007Z · LW(p) · GW(p)

This comment seems aggressive and rude, so I doubt it will be persuasive to Lukas. As Yvain wrote in How To Not Lose An Argument, we should beware of status effects during arguments. If Lukas agrees with you now, then Lukas agrees he is a weasel-word-using rationalizing entitled infantile fake-victim, which is very difficult to accept. Without the insults, Lukas would have had the opportunity to make an easier update - that he misunderstood, or the text was unclear, or that he'd prefer Dias to have clarified but reasonable people could disagree, or something like that.

Replies from: OrphanWilde
comment by OrphanWilde · 2015-10-27T13:59:00.210Z · LW(p) · GW(p)

Yvain makes the mistake of believing that the person he is arguing with is the person he is convincing.

I'm not interested in convincing Lukas of anything. My target is the audience, who I'm not arguing with, but negotiating with.

Observe the neutral karma score of my rude comment, at least as of now - it might change, as I reveal something: Had I been so rude to somebody else in different circumstances, it would have been deeply negative. Lukas lost considerable status by complaining about being downvoted, and half the participating audience is happy to upvote me for targeting somebody who has thus earned a lowered status. Those who downvote largely agree with the status assessment, but, like you, disagree with my behavior.

Everybody who upvoted my rude comment, or was tempted to? I was acting like a bully of an approved low-status target - and you approved. Chew on that. (And observe your cognitive dissonance, as you rationalize that being a bully might be appropriate in some circumstances, given the right target.)

Replies from: Viliam, entirelyuseless
comment by Viliam · 2015-10-27T22:37:22.428Z · LW(p) · GW(p)

Observe the neutral karma score of my rude comment, at least as of now ... Everybody who upvoted my rude comment, or was tempted to? ...

Now I'm almost sorry I didn't see your comment while it had neutral karma. I believe I wouldn't have upvoted it, but that's exactly the kind of judgement I don't trust.

Okay, I generally have a rule to never upvote comments that speak about their own karma ("it may get me downvoted"), so at least that would have stopped me, if nothing else.

Anyway... such drama... so meta... wow

Replies from: OrphanWilde
comment by OrphanWilde · 2015-11-02T16:54:41.610Z · LW(p) · GW(p)

Lukas' karma for the comment I responded to was quite negative when mine was neutral, as well (down to -5 at one point, if my memory serves me well, which is an iffy prospect). By turning Lukas into the underdog in this conversation (by identifying myself as a bully), I've changed people's perceptions of his comment, as well.

That part wasn't intentional, but in retrospect, it should have been an obvious side effect.

I actually inserted the "it may get me downvoted" as a signal, although I don't recall what the purpose of the signal was, and it's not obvious to me now. Pity.

comment by entirelyuseless · 2015-10-27T14:15:12.911Z · LW(p) · GW(p)

What exactly is the point you are making here? If you disapprove of your own behavior, you should apologize to Lukas. If you don't disapprove of it, then if you are right, people might not be rationalizing if they conclude that being a bully might be appropriate in some cases.

Replies from: OrphanWilde
comment by OrphanWilde · 2015-10-27T15:01:24.502Z · LW(p) · GW(p)

Didn't I just say I was negotiating, rather than arguing? Quit looking for a point. Look instead for a purpose.

In one comment, I leverage a petty form of dark arts, with no karma penalty to myself, and a hefty cost to the person I targeted. In the next, I call myself out for doing so, and those who fell for it as well - with a pretty hefty karma penalty.

I'll dryly observe the amusement I find in a community which purports to be about becoming stronger getting rather huffy about having their weaknesses revealed to them. Which might suggest some of my purpose.

comment by Lukas_Gloor · 2015-10-27T01:03:35.976Z · LW(p) · GW(p)

Maybe I'm wrong, but my guess is that if someone wrote "Life is neutral; some states are worse than death, and adding new happy people is nice but not important", that person would be called out, and the post would receive a large portion of downvotes. I'm not sure about the downvotes (personally I didn't even downvote the OP), but I think pointing out the somewhat controversial nature of such a blanket statement is definitely a good thing. Would you oppose this as well (similarly aggressively)?

We could talk about whether my view of what's controversial or not is biased. I would not object to someone saying "Murder is bad" without prefacing it with "Personally, I think", even though I'm sure most uncontrolled AIs will disagree with this for reasons I cannot find any faults in. But assuming that we're indeed talking about an issue where there's no consensus among EAs, then to me it seems epistemically appropriate to at least hint at this lack of consensus, just like you do when you talk about a scientific hypothesis that is controversial among experts. And it makes even more sense to hint at this if some people don't even realize that there's a lack of consensus. For whatever reason, EAs that came to EA through LW care much more about preventing death than EAs that found their way to EA through e.g. Peter Singer's books. And I thought it might be interesting to LW-originating EAs that a significant fraction of EAs "from elsewhere" feel alienated by the way some issues are being discussed on LW. Whether they give a shit about it is a different question of course.

Replies from: OrphanWilde
comment by OrphanWilde · 2015-10-27T13:33:29.878Z · LW(p) · GW(p)

See, the issue is that you think the downvotes were because of your views. I can't speak for other people, but I downvoted you because you were engaging in behaviors I prefer to discourage; namely, ignoring the substantive thrust of a post to nitpick at a relatively insignificant comment made in the middle whose absence wouldn't affect the post as a whole. And, as we see here, you made that comment not because it was substantive or seriously detracted from the post, but because it was an ideological matter with which you disagreed with the author. Hence my comment to you: "I found it concern-trolling at worst, and irrelevant at best".

Because, as Dagon pointed out, using your criteria, the progress is -still- a positive thing. That's the point of this post. Taking it as an opportunity to try to start an ideological fight is just bad manners.

See, downvotes here don't mean Less Wrong disagrees with you (although that's how some people use it, it's not the cultural standard). Downvotes mean people want to see less of the kind of post/comment that was downvoted.

I honestly don't give a tinker's cuss about the intra-movement arguments within EA, and if this is how EA behaves, I'd like to see less of it as a whole. You're not representing your movement very well.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2015-10-27T14:22:36.002Z · LW(p) · GW(p)

you made that comment not because it was substantive or seriously detracted from the post, but because it was an ideological matter with which you disagreed with the author

I generally dislike it when people talk about moral views that way, even if they mention views I support. I might be less inclined to call it out in a case where I intuitively strongly agree, but I still do it some of the time. I agree it wasn't the main point of his post, I never denied that. In fact I wrote that I agree the developments are impressive. By that, I meant the graphs. Since when is it discouraged to point out minor criticism in a post? The fact that I singled out this particular post to make a comment that would maybe fit just as well elsewhere just happens to be a coincidence.

Taking it as an opportunity to try to start an ideological fight is just bad manners.

No one is even talking about arguments or intuition-pumps for or against any of the moral views mentioned. I wasn't "starting an ideological fight", I was making a meta remark about the way people present moral views. If anything, I'd be starting an ideological fight about my metaethical views and what I consider to be a productive norm of value-related discourse on this site.

Replies from: OrphanWilde
comment by OrphanWilde · 2015-10-27T15:09:03.610Z · LW(p) · GW(p)

Since when is it discouraged to point out minor criticism in a post?

Again, I wasn't speaking for all of Less Wrong. You'd have to ask the others why they downvoted you, but having committed the major faux pas of complaining about being downvoted, I don't think they'll be as receptive at this point.

I discourage anything that relates to pedantry, and downvote whenever somebody is making a point not because the point needs to be heard, but because they need to be heard. There's some subjectivity to it, of course. But it boils down to "Do I find that this comment adds, or detracts, from the meaningful conversation that can be had?" And I found yours to detract more than it added, for reasons already specified.

There's also more than a slight smell of identity politics to the way you're approaching this, particularly in the way you immediately threw yourself into the "Victim" role as soon as you perceived you weren't being treated with the gravity you expected. That might be an avenue for you to consider. Identity politics don't go over well here.

comment by Luke_A_Somers · 2015-10-26T23:12:40.788Z · LW(p) · GW(p)

You imply that doubling extreme poverty would be a good thing if it comes with a doubling of the rest of the population.

Kind of? The point of the second plot is to show that we didn't get where we are in fractional terms by murdering the poor, which would be bad, I think, regardless of whether one holds that doubling the overall population is good or bad. And if we got where we are in fractional terms by adding rich people without actually cutting into the number of poor people, that would be bad too, though not as bad as murdering them.

Of course, the plots can't show that we didn't grow the rich population while also killing the poor, but, well, that's not what happened either.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2015-10-26T23:46:26.908Z · LW(p) · GW(p)

I at one point phrased it "comes with a doubling of the (larger) rest of the population" to make it more clear, but deleted it for a reason I have no introspective access to.

And if we got where we are in fractional terms by adding rich people without actually cutting into the number of poor people, that would be bad too, though not as bad as murdering them.

It would, obviously, if there are better alternatives. In consequentialism, anything for which you have better viable alternatives is bad to some extent. What I meant is: if the only way to double the rest of the population is by also doubling the part that's in extreme poverty, then the OP's values imply that it would be a good thing. I'm not saying this view is crazy, I'm just saying that creating the impression that it's some sort of LW-consensus is mistaken. And in a later point I added that it makes me, and probably also other people with different values, feel unwelcome. It's bad for an open dialogue on values.

comment by [deleted] · 2015-10-27T00:13:34.402Z · LW(p) · GW(p)

Wouldn't the addition of money into economies where it was previously a less-than-frequent enabler of the flow of goods and services cause this to be overstated?

comment by MarsColony_in10years · 2015-10-26T15:36:09.061Z · LW(p) · GW(p)

Individual wealth has diminishing returns on investment. The marginal utility of each extra dollar of income is less. There's reason to believe that we'll have to slowly shift the focus of our efforts elsewhere if we want to continue making equally huge strides forward.
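To make the diminishing-returns point concrete, here is a minimal sketch using logarithmic utility, a standard textbook stand-in for the value of income (the specific incomes and the log form are illustrative assumptions, not anything from the comment above):

```python
import math

def log_utility(income: float) -> float:
    """Logarithmic utility: a common model of diminishing returns to income."""
    return math.log(income)

# The same extra $1,000 is worth far more in utility terms to someone
# on $2,000/yr than to someone on $100,000/yr.
gain_poor = log_utility(3_000) - log_utility(2_000)      # ln(1.5)  ≈ 0.405
gain_rich = log_utility(101_000) - log_utility(100_000)  # ln(1.01) ≈ 0.010

print(gain_poor > 40 * gain_rich)  # the poorer person's gain is ~40x larger
```

Under this (assumed) utility function, transfers toward the poorest end of the distribution dominate, which is the intuition behind shifting effort as average incomes rise.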

We hit the UN's old goal of halving extreme poverty from its 1990 level. We even did it 5 years ahead of the 2015 target date, which is fantastic. But if we want to hit the next set of goals, we'll need more than just more economic growth. For example, this TED talk indicates that all of the UN's Global Goals can be expressed roughly as an increase in global Social Progress Index from 61 to ~75. However, if we rely entirely on continued economic growth and don't have any social change, then he claims we will only move from 61 to ~62.4.

As an aside, I find the Social Progress Index to be an interesting metric. It's an equally weighted composite of "Basic Human Needs" (such as nutrition and basic medicine), "Foundations of Wellbeing" (such as access to education and information), and "Opportunity" (such as personal rights and tolerance).
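The "equally weighted composite" described above can be sketched as a simple mean of the three dimension scores. This is only an illustration of the aggregation idea; the real index builds each dimension from multiple underlying components, and the scores below are made up:

```python
def social_progress_index(basic_needs: float,
                          wellbeing: float,
                          opportunity: float) -> float:
    """Equally weighted mean of the three dimension scores (each on a 0-100 scale)."""
    return (basic_needs + wellbeing + opportunity) / 3

# Hypothetical dimension scores for some country:
score = social_progress_index(basic_needs=75.0, wellbeing=60.0, opportunity=48.0)
print(score)  # 61.0
```

Equal weighting means a one-point gain in "Opportunity" counts exactly as much as a one-point gain in "Basic Human Needs", which is itself a value judgment baked into the metric.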

comment by AlexanderEarlheart · 2016-01-05T21:28:11.372Z · LW(p) · GW(p)

The chart is flawed -- it doesn't contain numbers predating the Industrial Revolution, when many of the agricultural workers who lived off the land tended to be much happier than the overworked, depressed populations of today. What's the point of "productivity" if you don't have the free time to enjoy the fruits of your labor? Our current system is designed to benefit the people at the top, regardless of how much the exploited lower and middle class workers are paid.

Replies from: Lumifer, polymathwannabe
comment by Lumifer · 2016-01-05T21:33:49.294Z · LW(p) · GW(p)

many of the agricultural workers who lived off the land tended to be much happier than the overworked, depressed populations of today

[Citation needed]

comment by polymathwannabe · 2016-01-05T21:37:04.433Z · LW(p) · GW(p)

The overworked, depressed workers of the industrial factory don't have much to envy the overworked, depressed farmers of feudal society.