How minimal is our intelligence?

post by Douglas_Reay · 2012-11-25T23:34:06.733Z · LW · GW · Legacy · 214 comments

Contents

  The First City
  From Village to City
  Ice Ages
  Brains, Genes and Calories
  Summary
  Comment Navigation Aid

Gwern suggested that, if it were possible for civilization to have developed when our species had a lower IQ, then we'd still be dealing with the same problems, but we'd have a lower IQ with which to tackle them.   Or, to put it another way, it is unsurprising that living in a civilization has posed problems that our species finds difficult to tackle, because if we were capable of solving such problems easily, we'd probably also have been capable of developing civilization earlier than we did.

How true is that?

In this post I plan to look in detail at the origins of civilization, with an eye to considering how much its timing depended directly upon the IQ of our species, rather than upon other factors.

Although we don't have precise IQ test numbers for our immediate ancestral species, the fossil record is good enough to give us a clear idea of how brain size has changed over time:

[Figure: brain mass as a percentage of body mass, plotted against time]

and we do have archaeological evidence of approximately when various technologies (such as pictograms, or using fire to cook meat) became common.

The First City

[Image: The Ziggurat of Ur-Nammu]

About 6,000 years ago (4000 BCE), Ur was a thriving trading village on the flood plain near the mouth of the river Euphrates, in what is now called southern Iraq and what historians call Sumer.

By 3000 BCE it was the heart of a city-state with a built-up core area covering 37 acres, and it would go on over the following thousand years to lead the Sumerian empire, raise a great brick ziggurat to its patron moon god, and become the largest city in the world (65,000 people concentrated in 54 acres).

It was eventually doomed by desertification and soil salination caused by its own success (over-grazing and land clearing), but by then cities had spread throughout the fertile crescent of rivers at the intersection of the European, African and Asian land masses.

Ur may not have been the first city, but it was the first one we know of that wasn't part of a false dawn - one whose culture and technologies did demonstrably spread to other areas.  It was the flashpoint.

We don't know for certain what it was about the culture surrounding the dawn of cities that made that particular combination of trade, writing, specialisation, hierarchy and religion communicable, when similar cultures from previous false dawns failed to spread.  We can trace each of those elements to earlier sources; none of them was original to Ur.  So perhaps it was a case of a critical mass achieving a self-sustaining reaction.

What we can look at is why the conditions that allowed a village to become a city large enough for such a critical mass of developments to accumulate occurred at that time and place.

From Village to City

Motivation aside, the chief problem with sustaining large numbers of people together in a small area over several generations, keeping them healthy enough for the population to grow without continual immigration, is ensuring access to a scalable, renewable, predictable source of calories.

To be predictable means surviving famine years, which requires crops that can be stored for several years, such as large-seeded grasses (wheat, barley and millet), and good facilities to store them in.  It also means surviving pestilence, which requires having a variety of such crops.  To be scalable and renewable means supplying water and nutrients to those crops on an ongoing basis, which requires irrigation and fertiliser from domesticated animals (if you don't have handy regular floods).

Having large mammals available to domesticate, which can provide fertiliser and traction (pulling ploughs and harrows), certainly makes things easier, but doesn't seem to have been a large factor in the timing of the rise of civilisation, or particularly dependent upon the IQ of the human species.  Research suggests that domestication may have been driven as much by the animals' own behaviour as by human intention, with those animals daring to approach humans more closely getting first choice of discarded food.

Re-planting seeds to ensure plants to gather in following years, leading to low-nutrition grasses adapting into grains with high protein concentrations in the seeds, does seem to be a mainly intentional human activity, in that we can trace most of the gain in the size of such plants' seeds to locations where humans transitioned from the palaeolithic hunter-gatherer culture (from about 2.5 million years ago to about 10,000 years ago) to the neolithic agricultural culture (from about 10,000 years ago onwards).

Good grain storage seems to have developed incrementally, starting with crude stone silo pits around 9500 BCE and progressing by 6000 BCE to customised buildings with raised floors and sealed ceramic containers which could store 80 tons of wheat in good condition for 4 years or more.  (Earthenware ceramics date to 25,000 BCE and earlier, though the potter's wheel, useful for mass production of regular storage vessels, dates to the Ubaid period.)

The main key to the timing of the transition from village to city seems to have been not human technology but the confluence of climate and biology.  Jared Diamond points the finger at the geography of the region: the fertile crescent farmers had access to a wider variety of grains than anywhere else in the world, because the area links three major land masses and has access to their species.  The Mediterranean climate has a long dry season with a short period of rain, which made it ideal for growing grains (which are much easier to store for several years than, for instance, bananas).  And everything kicked off when the climate stabilised after the most recent ice age ended about 12,000 years ago.

Ice Ages

Strictly speaking, we're actually talking about the end of a "glacial period" rather than the end of an entire "ice age".  The timeline goes:

200,000 years ago - 130,000 years ago : glacial period
130,000 years ago - 110,000 years ago : interglacial period
110,000 years ago -  12,000 years ago : glacial period
  12,000 years ago - present : interglacial period


So the question now is: why didn't humanity spawn civilisation in the fertile crescent 130,000 years ago, during the last interglacial period?  Why did it happen in this one?  Did we get significantly brighter in the meantime?

It isn't, on the face of it, an implausible idea.  100,000 years is long enough for evolutionary change to happen, and maybe inventing pottery or becoming farmers did take more brain power than humanity had back then.  Or, if not IQ, perhaps it was some other mental change, like attention span, or the capacity to obey written laws, live as a specialist in a hierarchy, or similar.

But there's no evidence that this is the case, nor is there a need to hypothesise it, because there is at least one genetic change we do know about during that time period that is by itself sufficient to explain the lack of civilisation 130,000 years ago.  And it has nothing to do with the brain.

Brains, Genes and Calories

Using the San Bushmen as a guide to the palaeolithic diet, hunter-gatherer culture was able to support an average population density of one person per acre.   Not that they ate badly, as individuals.  Indeed, they seem to have done better than the early neolithic farmers.  But they had to be free to wander to follow nomadic food sources, and they were limited by access to food that the human body could use to create docosahexaenoic acid, a fatty acid required for human brain development.  Originally humans got this from fish living in the lakes and rivers of central Africa.   However, about 80,000 years ago, we developed a gene that let us synthesise the same acid from other sources, freeing humanity to migrate away from the wet areas, past the dry northern part of the continent, and out into the fertile crescent.

But there is a link between diet and brain.  Although the human brain represents only 2% of the body weight, it receives 15% of the cardiac output, 20% of total body oxygen consumption, and 25% of total body glucose utilization.  Brains are expensive, in terms of calories consumed.  Although brain size or brain activity that uses up glucose is not linearly related to individual IQ, they are linked on a species level.
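
To put a rough number on that expense, here is a back-of-envelope sketch.  The 20% share is the oxygen-consumption figure quoted above; the basal metabolic rate is an assumed round figure for illustration, not a measurement:

```python
# Back-of-envelope estimate of the brain's calorie budget.
# The 20% share is the whole-body oxygen-consumption figure quoted
# above; the basal metabolic rate is an assumed round number.
basal_kcal_per_day = 1600   # assumed adult basal metabolic rate
brain_share = 0.20          # brain's approximate share of energy use

brain_kcal_per_day = basal_kcal_per_day * brain_share
print(f"Brain: roughly {brain_kcal_per_day:.0f} kcal/day "
      f"({brain_share:.0%} of an assumed {basal_kcal_per_day} kcal/day)")
```

On those assumptions the brain consumes a few hundred kilocalories a day — a substantial standing cost for a forager on a tight calorie budget.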

IQ is polygenic, meaning that many different genes are relevant to a person's potential maximum IQ.  (Note: there are many non-genetic factors that may prevent an individual reaching their potential.)   Algernon's Law suggests that genes affecting IQ which still have multiple alleles common in the human population are likely to carry a cost associated with the IQ-increasing alleles; otherwise those alleles would already have displaced their competitors.

In the same way that an animal species that develops the capability to grow a fur coat in response to cold weather is more advanced than one whose genes strictly determine that it will have a thick fur coat at all times, whether the weather is cold or hot, the polygenic nature of human IQ gives human populations the ability to adapt on the time scale of just a few generations, increasing or decreasing the average IQ of the population as the environment changes to reduce or increase the penalties of the trade-offs for particular alleles contributing to IQ.

In particular, if the trade-off for some of those alleles is increased energy consumption, and we look at a population of humans moving from an environment where calories are the bottleneck on how many offspring can be produced and survive to an environment where calories are more easily available, then we might expect to see something similar to the Flynn effect.
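
The logic of that last step can be illustrated with a toy simulation (a minimal sketch: the population size, number of loci, and the benefit and cost coefficients below are all illustrative assumptions, not estimates from the literature).  Alleles that raise IQ also raise calorie consumption, so selection pushes their frequency down when calories are scarce and up when calories are abundant:

```python
import random

N = 500            # population size (illustrative)
LOCI = 20          # number of IQ-affecting loci (hypothetical)
GENERATIONS = 100

def fitness(genome, calorie_supply):
    """Each '1' allele adds a small fitness benefit (higher IQ) and a
    calorie cost whose impact shrinks as calories become abundant."""
    n = sum(genome)
    benefit = 0.02 * n
    cost = 0.03 * n / calorie_supply
    return max(0.01, 1.0 + benefit - cost)  # guard against non-positive weights

def mean_allele_frequency(calorie_supply):
    """Evolve the population and return the final frequency of '1' alleles."""
    pop = [[random.randint(0, 1) for _ in range(LOCI)] for _ in range(N)]
    for _ in range(GENERATIONS):
        weights = [fitness(g, calorie_supply) for g in pop]
        parents = random.choices(pop, weights=weights, k=2 * N)
        # Each child inherits, at every locus, one allele from each of
        # two fitness-weighted parents.
        pop = [[random.choice(pair)
                for pair in zip(parents[2 * i], parents[2 * i + 1])]
               for i in range(N)]
    return sum(map(sum, pop)) / (N * LOCI)

random.seed(0)
print("scarce calories:  ", round(mean_allele_frequency(1.0), 3))
print("abundant calories:", round(mean_allele_frequency(3.0), 3))
```

With these parameters the frequency of the costly IQ-raising alleles falls under scarcity and rises under abundance; the point is the direction of the shift over a few generations, not its magnitude.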

Summary

There is no cause to suppose that, even if the human genome 100,000 years ago had contained the full set of IQ-related alleles present in our genome today, we would have developed civilisation much sooner.

 


Comment Navigation Aid

link - DuncanS              - animal vs human intelligence
link - DuncanS              - brain size & brain efficiency
link - JaySwartz            - adaptability vs intelligence
link - RichardKennaway - does more intelligence tend to bring more societal happiness?
link - mrglwrf               - Ur vs Uruk
link - NancyLebovitz     - does decreased variance of intelligence tend to bring more societal happiness?
link - fubarobfusco       - victors writing history
link --                            consequentialist treatment of library burning
link --                            the average net contribution to society of people working in academia
link - John_Maxwell_IV - independent development of civilisation in the Americas
link - shminux              - How much of our IQ is dependent upon Docosahexaenoic acid?
link - army1987            - implications for the Great Filter
link - Vladimir_Nesov    - genome vs expressed IQ
link - Vladimir_Nesov    - Rhetorical nitpick
link - Vaniver               - IQ & non-processor-speed components of problem solving
link - JoshuaZ              - breakthroughs don't tend to require geniuses in order to be made
link - Desrtopa             - cultural factors



214 comments

Comments sorted by top scores.

comment by fubarobfusco · 2012-11-20T08:14:02.888Z · LW(p) · GW(p)

Ur may not have been the first city, but it was the first one we know of that wasn't part of a false dawn - one whose culture and technologies did demonstrably spread to other areas. It was the flashpoint.

A contrary view — and I'm stating this deliberately rather strongly to make the point vivid:

"False dawn" is a retrospective view; which is to say an anachronistic one; which is to say a mythical one. And myths are written by the victors.

It's true that we perceive more continuity from Ur to today's civilization than from Xyz (some other ancient "dawn of civilization" point) to today. But why? Surely in part because the Sumerians and their Akkadian and Babylonian successors were good at scattering their enemies, killing their scribes, destroying their records, and stealing credit for their innovations. Just as each new civilization claimed that their god had created the world and invented morality, each claimed that their clever forefather had invented agriculture, writing, and tactics. If the Xyzzites had won, they would have done the same.

What's the evidence? Just that that's how civilizations — particularly religious empires — have generally behaved since then. The Hebrews, Catholics, and Muslims, for instance, were all at one time or another pretty big on wiping out their rivals' history and making them out to be barbaric, demonic, subhuman assholes — when they weren't just mass-murdering them. So our prior for the behavior of the Sumerians should be that they were unremarkable in this regard; they did the same wiping-out of rivals' records that the conquistadors and the Taliban did.

Today we have anti-censorship memes; the idea that anyone who would burn books is a villain and an enemy of everyone. But we also have the idea that mass censorship and extirpation of history is "Orwellian" — as if it had been invented in the '40s! This is backwards; it's anti-censorship that is the new, weird idea. Censorship is the normal behavior of normal rulers and normal priests throughout normal history.

Damnatio memoriae, censorship, burning the libraries (of Alexandria or the Yucatán), forcible conversion & assimilation — or just mass murder — are effective ways to make the other guy's civilization into a "false dawn". Since civilizations prior to widespread literacy (and many after it) routinely destroyed the records and lore of their rivals, we should expect that the first X that we have records of is quite certainly not the first X that existed, especially if its lore makes a big deal of claiming that it is.

Put another way — quite a lot of history is really a species of creationism, misrepresenting a selective process as a creative one. So we should not look to "the first city" for unique founding properties of civilization, since it wasn't the first and didn't have any; it was just the conquering power that happened to end up on top.

Replies from: Douglas_Reay, army1987, None
comment by Douglas_Reay · 2012-11-20T09:50:55.050Z · LW(p) · GW(p)

Thus my caveat "we know of".

However, while it would be quite possible for a victor to erase written mention of a rival, it is harder to erase beyond all archaeological recovery the signs of a major city that's been stable and populated for a thousand years or more. For instance, if we look at Jericho, which was inhabited earlier than Ur was, we don't see archaeological evidence of it becoming a major city until much later than Ur (see link and link).

If there was a city large enough and long-lived enough around before Ur, one that passed on to Ur the bundle of things like writing and hierarchy that we know Ur passed on to others, then I'm unaware of it, and the evidence has been surprisingly thoroughly erased (which isn't impossible, but neither is it a certainty that such a thing happened).

See also the comment about Uruk. There were a number of cities in Sumer close together that would have swapped ideas. But the things said about calories and types of grain apply to all of them.

comment by A1987dM (army1987) · 2012-11-20T19:35:50.471Z · LW(p) · GW(p)

burning the libraries (of Alexandria

Whaaat? Did people do that on purpose? What the hell is wrong with my species?

Replies from: Nornagest, Salemicus, thomblake
comment by Nornagest · 2012-11-20T19:57:30.833Z · LW(p) · GW(p)

It's not entirely clear. Wikipedia lists four possible causes of or contributors to the Library of Alexandria's destruction; all were connected to changes in government or religion, but only two (one connected to Christian sources, the other to Muslim) appear deliberate. Both of them seem somewhat dubious, though.

The destruction of Central American literature is a more straightforward case. Bishop Diego de Landa certainly ordered the destruction of Mayan codices where found, of which only a few survived.

Replies from: fubarobfusco
comment by fubarobfusco · 2012-11-20T21:33:36.406Z · LW(p) · GW(p)

Wikipedia lists four possible causes of or contributors to the Library of Alexandria's destruction; all were connected to changes in government or religion, but only two (one connected to Christian sources, the other to Muslim) appear deliberate.

The Library also wasn't one building, and had some time to recover between one attack and the next. (As an analogy: Burning down some, or even most, of the buildings of a modern university wouldn't necessarily lead to the institution closing up shop.)

I'd been thinking of the 391 CE one, though, which I'd thought was widely understood to be an attack against pagan sites of learning. Updating in progress.

The destruction of Central American literature is a more straightforward case.

It's worth noting that there were people on the Spanish "team" who regretted that decision and spoke out against it, most famously Bartolomé de las Casas.

comment by Salemicus · 2012-11-20T21:01:50.347Z · LW(p) · GW(p)

It's all consequentialism around here... until someone does something to lower the social standing of academia.

Replies from: army1987
comment by A1987dM (army1987) · 2012-11-20T21:30:27.900Z · LW(p) · GW(p)

What?

Replies from: Salemicus
comment by Salemicus · 2012-11-20T22:06:50.863Z · LW(p) · GW(p)

A consequentialist would ask, with an open mind, whether burning the libraries led to good or bad consequences. A virtue ethicist would express disgust at the profanity of burning books. Your comment closely resembles the latter, whereas most discussion here on other topics tries to approximate the former.

I think it is no coincidence that this switch occurs in this context. Oh no, some dusty old tomes got destroyed! Compared to other events of the time, piddling for human "utility." But burning books lowers the status of academics, which is why it is considered (in Haidt-ian terms) a taboo by some - including, I would suggest, most on this site.

Replies from: JoshuaZ, army1987, FluffyC, Bugmaster, Peterdjones
comment by JoshuaZ · 2012-11-21T00:58:06.405Z · LW(p) · GW(p)

I think it is no coincidence that this switch occurs in this context. Oh no, some dusty old tomes got destroyed! Compared to other events of the time, piddling for human "utility." But burning books lowers the status of academics, which is why it is considered (in Haidt-ian terms) a taboo by some - including, I would suggest, most on this site.

We have good reason to think that the missing volumes of Diophantus were at Alexandria. Much of what Diophantus did was centuries before his time. If people in the 1500s and 1600s had complete access to his and other Greek mathematicians' work, math would have likely progressed at a much faster pace, especially in number theory.

We also have reason to think that Alexandria contained the now-lost Greek astronomical records, which likely included comet and possibly also historical nova observations. While we have some nova and supernova observations from slightly later (primarily thanks to Chinese and Japanese records), the Greeks were doing astronomy well before. This sort of thing isn't just an idle curiosity: understanding the timing of supernovae connects to understanding the most basic aspects of our universe. The chemical elements necessary for life are created and spread by supernovae. Understanding the exact ratios, how common supernovae are, and how supernovae spread material out, among other issues, is all important to understanding very important questions like how common life is, which is directly relevant to the Great Filter. We do have a lot of supernova observations from the last few years, but historical examples are few and far between.

Compared to other events of the time, piddling for human "utility."

On the contrary. Kill a few people or make them suffer and it has little direct impact beyond a few years in the future. Destroying knowledge has an impact that resonates down for far longer.

But burning books lowers the status of academics, which is why it is considered (in Haidt-ian terms) a taboo by some - including, I would suggest, most on this site.

This is an interesting argument, and I find it unfortunate that you've been downvoted. The hypothesis is certainly interesting. But it may also be taboo for another reason: in many historical cases, book burning has been a precursor to killing people. This is a cliche, but it is a cliche that happens to have historical examples before it. Another consideration is that a high status of academics is arguably quite a good thing from a consequentialist perspective. People like Norman Borlaug, Louis Pasteur, and Alvin Roth have done more lasting good for humanity than almost anyone else. Academics are the main people who have any chance of having a substantial impact on human utility beyond their own lifespans (the only other groups are people who fund academics or people like Bill Gates who give funding to implement academic discoveries on a large scale). So even if it is purely an issue of status and taboo, there's a decent argument that those are taboos which are advantageous to humanity.

Replies from: Salemicus
comment by Salemicus · 2012-11-21T19:18:12.883Z · LW(p) · GW(p)

Number theory might have progressed faster... we might better understand the “Great Filter”

Isn’t this kind of thing archetypal of knowledge that in no way contributes to human welfare?

In many historical cases, book burning has been a precursor to killing people.

Perhaps, but note that this wasn’t a precursor to killing people; people were being widely killed regardless. But the modern attention is not on the rape, murder, pillage, etc... it’s on the book-burning. Why the distorted values?

a high status of academics is arguably quite a good thing from a consequentialist perspective

Alvin Roth is no doubt a bright guy, but the idea that he has done more lasting good for humanity than, say, Sam Walton, is absurd. You’re right that Bill Gates has made a huge impact – but his lasting good was achieved by selling computer software, not through the mostly foolish experimentation done by his foundation. Sure, some academics have done some good (although you wildly overstate it) but you have to consider the opportunity cost. The high status of academics causes us to get more academic research than otherwise, but it also encourages our best and brightest to waste their lives in the study of arcana. Can anyone seriously doubt that, on the margin, we are oversupplied with academics, and undersupplied with entrepreneurs and businessmen generally?

Replies from: JoshuaZ, JoshuaZ, Desrtopa, TorqueDrifter, Peterdjones
comment by JoshuaZ · 2012-11-21T20:42:13.211Z · LW(p) · GW(p)

Isn’t this kind of thing archetypal of knowledge that in no way contributes to human welfare?

Well, no. In modern times number theory has been extremely relevant for cryptography, for example, and pretty much all e-commerce relies on it. And other areas of math have direct, useful applications and have turned out to be quite important. For example, engineering in the late Middle Ages and Renaissance benefited a lot from things like trig and logarithms. Improved math has led to much better understanding of economies and financial systems as well. These are but a few limited examples.

But the modern attention is not on the rape, murder, pillage, etc... it’s on the book-burning

You are missing the point: in this context, having the taboo against book burning is helpful because it is something one can use as a warning sign.

Alvin Roth is no doubt a bright guy, but the idea that he has done more lasting good for humanity than, say, Sam Walton, is absurd.

So I'm curious as to how you are defining "good" in any useful sense that you can reach this conclusion. Moreover, the sort of thing that Roth does is in the process of being more and more useful. His work allowing for organ donations for example not only saves lives now but will go on saving lives at least until we have cheap cloned organs.

ou’re right that Bill Gates has made a huge impact – but his lasting good was achieved by selling computer software, not through the mostly foolish experimentation done by his foundation.

This is wrong. His work with malaria saves lives. His work with selling computer software involved making mediocre products and making up for that by massive marketing along with anti-trust abuses. There's an argument to be made that economic productivity can be used as a very rough measure of utility, but that breaks down in a market where advertising, marketing, and network effects of specific product designs matter more than quality of product.

Can anyone seriously doubt that, on the margin, we are oversupplied with academics, and undersupplied with entrepreneurs and businessmen generally?

Yes, to the point where I have to wonder how drastically far off our unstated premises about the world are. If anything, it seems like we have the exact opposite problem. We have a massive oversupply of "quants" and the like who aren't actually producing more utility or even actually working with real market inefficiencies but are instead doing things like moving servers a few feet closer to the exchange so they can shave a fraction of a second off of their transaction times. There may be an "oversupply" of academics compared to the number of paying positions, but that's simply connected to the fact that most research has results that function as externalities (technically, public goods), and thus the lack of academic jobs is a market failure.

Replies from: Salemicus
comment by Salemicus · 2012-11-21T22:33:33.996Z · LW(p) · GW(p)

No-one is disputing that mathematics can be useful. The question is, if we had slightly more advanced number theory slightly earlier in time, would that have been particularly useful? Answer - no.

You are missing the point: in this context, having the taboo against book burning is helpful because it is something one can use as a warning sign.

No, I am not missing the point. I am perfectly willing to concede that a taboo against book-burning might be helpful for that reason. But here we have an example where people were, at the same time as burning books, doing exactly the worse stuff that book burning is allegedly a warning sign of. But no-one complains about the worse stuff, only the book burning. Which makes me disbelieve that people care about the taboo for that reason.

People say that keeping your lawn tidy keeps the area looking well-maintained and so prevents crime. Let's say one guy in the area has a very messy lawn, and also goes around committing burglaries. Now suppose the Neighbourhood Watch shows no interest at all in the burglaries, but is shocked and appalled by the state of his lawn. We would have to conclude that these people don't care about crime, what they care about is lawns, and this story about lawns having an effect on crime is just a story they tell people because they can't justify their weird preference to others on its own terms.

Moreover, the sort of thing that Roth does is in the process of being more and more useful. His work allowing for organ donations for example not only saves lives now but will go on saving lives at least until we have cheap cloned organs.

Or, we could just allow a market for organ donations. Boom, done. Where's my Nobel?

Now, if you specify that we have to find the best fix while ignoring the obvious free-market solutions, I don't deny that Alvin Roth has done good work. And I'm certainly not blaming Roth personally for the fact that academia exists as an adjunct to the state - although academics generally do bear the lion's share of responsibility for that. But I am definitely questioning the value of this enterprise, compared to bringing cheap food, clothes, etc., to hundreds of millions of people like Sam Walton did.

This is wrong. His work with malaria saves lives. His work with selling computer software involved making mediocre products and making up for that by massive marketing along with anti-trust abuses. There's an argument to be made that economic productivity can be used as a very rough measure of utility, but that breaks down in a market where advertising, marketing, and network effects of specific product designs matter more than quality of product.

I don't see why "saves lives" is the metric, but I bet that Microsoft products have been involved in saving far more lives. Moreover, people are willing to pay for Microsoft products, despite your baseless claims of their inferiority. Gates's charities specifically go around doing things that people say they want but don't bother to do with their own money. I don't know much about the malaria program, but I do know the educational stuff has mostly been disastrous, and whole planks have been abandoned.

Yes, to the point where I have to wonder how drastically far off our unstated premises about the world are.

Obviously very far indeed.

Replies from: JoshuaZ, Bugmaster
comment by JoshuaZ · 2012-11-21T23:43:10.745Z · LW(p) · GW(p)

No-one is disputing that mathematics can be useful. The question is, if we had slightly more advanced number theory slightly earlier in time, would that have been particularly useful? Answer - no.

Answer: Yes. Even today, number theory research highly relevant to efficient crypto is ongoing. A few years of difference in when that shows up would have large economic consequences. For example, as we speak, research is ongoing into practical fully homomorphic encryption, which if implemented will allow cloud computing and deep processing of sensitive information, as well as secure storage and retrieval of sensitive information (such as medical records) from clouds. This is but one example.

But no-one complains about the worse stuff, only the book burning. Which makes me disbelieve that people care about the taboo for that reason.

Well, there is always the danger of lost purposes. But it may help to keep in mind that the book-burnings and genocides in question both occurred a long time ago. It is easier for something to be at the forefront of one's mind when one can see more directly how it would have impacted one personally.

Or, we could just allow a market for organ donations. Boom, done. Where's my Nobel?

So, I'm generally inclined to allow for organ donation markets (although there are I think legitimate concerns about them). But since that's not going to happen any time soon, I fail to see its relevance. A lot of problems in the world need to be solved given the political constraints that exist. Roth's solution works in that context. The fact that a politically untenable better solution exists doesn't make his work less beneficial.

But I am definitely questioning the value of this enterprise, compared to bringing cheap food, clothes, etc, to hundreds of millions of people like Sam Walton did.

So, Desrtopa already gave some reasons to doubt this. But it is also worth noting that Walton died in 1992, before much of Walmart's expansion. Also, there's a decent argument that Walmart's success was due not to superior organization but rather a large first-mover advantage (one of the classic ways markets can fail): Walmart takes advantage of its size in ways that small competitors cannot. This means that smaller chains cannot grow to compete with Walmart in any fashion, so even if a smaller competitor is running something more efficiently, it won't matter much. (Please take care to note that this is not at all the mom-and-pop-store argument, which I suspect you and I would both find extremely unconvincing.)

I don't see why "saves lives" is the metric

Ok. Do you prefer Quality-adjusted life years? Bill is doing pretty well by that metric.

but I bet that Microsoft products have been involved in saving far more lives

"Involved with" is an extremely weak standard. The thing is that even if Microsoft had never existed, similar products (such as software or hardware from IBM, Apple, Linux, Tandy) would have been in those positions.

Moreover, people are willing to pay for Microsoft products, despite your baseless claims of their inferiority.

Let's examine why people are willing to do so. It isn't efficiency. For example, by standard benchmarks, Microsoft browsers have been some of the least efficient (although more recent versions of IE have performed very well by some metrics, such as memory use). Microsoft has had a massive marketing campaign to make people aware of their brand (classically, marketing in a low-information market is a recipe for market failure). And Microsoft has engaged in bundling of essentially unrelated products. Microsoft has also lobbied governments for contracts, to the point where many government bids are phrased in ways that make non-Microsoft options essentially impossible. Most importantly, Microsoft gains a network effect: this occurs when the more common a product is, the more valuable it is compared to other similar products. In this context, once a single OS and set of associated products is common, people don't want other products, since they will run into both a learning curve with the "new" product and compatibility issues when trying to get the new product to work with the old.

Gates's charities specifically go around doing things that people say they want but don't bother to do with their own money.

That some people make noise about wanting to help charity but don't doesn't mean that the people who actually do it are contributing less utility. Or is there some other point here I'm missing?

I don't know much about the malaria program, but I do know the educational stuff has mostly been disastrous, and whole planks have been abandoned.

Yes, there's no question that the education work by the Gates foundation has been profoundly unsuccessful. But the general consensus concerning malaria is that they've done a lot of good work. This may be something you may want to look into.

Replies from: Salemicus
comment by Salemicus · 2012-11-22T00:40:20.404Z · LW(p) · GW(p)

Answer: Yes. Even today, number theory research highly relevant to efficient crypto is ongoing...

Yes, but number theory only gained practical application relatively recently. Your claim was that if people in the 1500s and 1600s had had access to this number theory, we'd all be better off now. It seems you believe that we'd now have more advanced number theory because of this. But my claim is that this stuff was seen as useless back then, so they would have mostly sat on this knowledge, and number theory now would be about where it is.

A lot of problems in the world need to be solved given the political constraints that exist...

But those "political constraints" are not laws of nature, they are descriptions of current power relations which academics have helped bring about. I'm glad that the economics faculty spend much of their time thinking of ways to fix the problems caused by the sociology faculty, but it would save everyone time and money if they all went home.

Ok. Do you prefer Quality-adjusted life years ?

No. I'd be more impressed with Gates if he gave people cash to satisfy their own revealed preferences, rather than arbitrary metrics. But that wouldn't look as caring.

"Involved with" is an extremely weak standard. The thing is that even if Microsoft had never existed, similar products (such as software or hardware from IBM, Apple, Linux, Tandy) would have been in those positions.

Yeah, maybe, although presumably it helped on the margin. But if Gates hadn't set up his foundation, the resources involved would have gone to some use. Why are you only looking at one margin, and not the other?

Let's examine why people are willing to do so. It isn't efficiency...

But this notion of "efficiency" that you are using is merely a synonym for what we care about, specifically efficiency to the user. A more convenient UI, for example, is likely orders of magnitude more important than memory usage for efficiency to the user - yet convenience is subjective. Moreover, bundling, marketing, brands and network effects are not examples of market failure. In fact, marketing produces positive externalities. Of course Microsoft has a first-mover advantage over someone trying to make a new OS today, but that's not an "unfair" advantage.

I'll grant you that Microsoft has advantages in government contracting that they wouldn't have in a proper free market, but you should in turn admit that they have also suffered from anti-trust laws.

Gates's charities specifically go around doing things that people say they want but don't bother to do with their own money.

That some people make noise about wanting to help charity but don't doesn't mean that the people who actually do it are contributing less utility. Or is there some other point here I'm missing?

I wasn't talking about potential donors, I was talking about recipients. If you talk a lot about how much you love literature, but in fact you spend all your money on beer, then some so-called philanthropist building you a library is just a waste of everyone's time.

Replies from: orthonormal, JoshuaZ, army1987
comment by orthonormal · 2012-11-22T06:12:50.596Z · LW(p) · GW(p)

Yes, but number theory only gained practical application relatively recently. Your claim was that if people in the 1500s and 1600s had had access to this number theory, we'd all be better off now.

Advances in Diophantine number theory in the Renaissance led directly to complex numbers and analytic geometry, which led to calculus and all of physics. If the Library at Alexandria had been preserved, the Industrial Revolution could have happened centuries earlier.

Replies from: satt
comment by satt · 2012-11-22T23:11:52.404Z · LW(p) · GW(p)

That's an intriguing causal chain but its length & breadth give me pause. Are there any articles, books, or papers that nicely sum up the evidence for it (and ideally the evidence against it)?

comment by JoshuaZ · 2012-11-22T01:00:22.870Z · LW(p) · GW(p)

Yes, but number theory only gained practical application relatively recently. Your claim was that if people in the 1500s and 1600s had had access to this number theory, we'd all be better off now.

Sorry, poor wording on my part. I mentioned number theory initially as an area because it is the one where we most unambiguously lost Greek knowledge. But it seems pretty clear we lost many other areas also, hence why I mentioned trig, where we know that there were multiple treatises on the geometry of triangles and related things which are no longer extant but are referenced in extant works.

I'm not at all convinced incidentally by the argument that people would have just sat on the number theory. Since the late 1700s, the rate of mathematical progress has been rapid. So while direct focus on the areas relevant to cryptography might not have occurred, closely connected areas (which are relevant to crypto) would certainly be more advanced.

I'm glad that the economics faculty spend much of their time thinking of ways to fix the problems caused by the sociology faculty, but it would save everyone time and money if they all went home.

So what evidence do you have that the economists are fixing the problems created by the sociologists in any meaningful sense?

No. I'd be more impressed with Gates if he gave people cash to satisfy their own revealed preferences, rather than arbitrary metrics. But that wouldn't look as caring.

So that won't work at multiple levels. A major issue when assisting people in the developing world is coordination problems (there are things that will help a lot, but if everyone has a little bit of money they don't have an easy way to pool the money together in a useful fashion). Moreover, this assumes a degree of knowledge which people simply don't have. A random African doesn't necessarily know that bednets are an option, or even have any good understanding of where to get them from. And then one has things like vaccine research. You are essentially assuming that market forces will do what is best when one is dealing with people who are lacking in basic education and in institutions to effectively exercise their will even if they had the education.

Moreover, bundling, marketing, brands and network effects are not examples of market failure.

Huh? All of these can result in total utility going down compared to what might happen if one picked a different market equilibrium. How are these not market failures?

In fact, marketing produces positive externalities.

In limited circumstances, marketing can produce positive utility (people learn about products they didn't have knowledge of, or they get more data to compare products), but I'm curious to hear how marketing is at all likely to produce positive externalities.

I'll grant you that Microsoft has advantages in government contracting that they wouldn't have in a proper free market, but you should in turn admit that they have also suffered from anti-trust laws.

Yes, they have and that's ok. Anti-trust laws help market stability. They prevent the problem we've seen in the banking and auto industries of being too big to fail, and prevent the problem of bundling to force products on new markets (which again I'm quite curious to hear an explanation for how that isn't a market failure).

Replies from: Salemicus
comment by Salemicus · 2012-11-22T02:40:47.149Z · LW(p) · GW(p)

So what evidence do you have that the economists are fixing the problems created by the sociologists in any meaningful sense?

I confess I don't understand this question. Could you please clarify?

A major issue when assisting people in the developing world is coordination problems... education... institutions...

But these "institutions" are not laws of nature, they aren't even tangible things - an "institution" is just a description of the way people co-ordinate with each other. So yes, people in developing countries often can't co-ordinate because they have bad institutions, but it would be equally true to say that they can't have good institutions because they don't co-ordinate.

A random African doesn't know necessarily that bednets are an option, or even have any good understanding of where to get them from.

Actually, I think that a "random African" likely knows a lot more about what would improve his standard of living than you or I, and my mind boggles at any other presumption. If he'd rather spend his money on beer than bednets, but you give him a bednet anyway, then I hope it makes you happy, because you're clearly not doing it for him.

I'm curious to here how marketing is at all likely to produce positive externalities.

Apart from the reasons you already mentioned, marketing creates a brand which reduces information costs. This is of course particularly important in a low information market. Spending money to promote your brand is a pre-commitment to provide satisfactory quality products.

Huh? All of these can result in total utility going down compared to what might happen if one picked a different market equilibrium. How are these not market failures?

Firstly, no-one can "pick" a market equilibrium. Secondly, order is defined in the process of its emergence. Thirdly, proof of a possibility and a demonstration of a real-world effect are not the same thing.

Microsoft have also suffered from anti-trust laws

Yes, they have and that's ok... They prevent the problem we've seen in the banking and auto industries of being too big to fail

So every time a business gains on account of departures from the free market, that's a travesty, but every time it loses, that's the way things are supposed to work. No wonder you think academics are the only ones who do any good. Besides, TBTF isn't an economic problem, this is a political problem. They had too many lobbyists to be allowed to fail, that's all.

I'm quite curious to here an explanation for how [bundling] isn't a market failure

How is it a market failure? It's possible for bundling to reduce consumer surplus, but that's just a straight transfer.

Replies from: JoshuaZ, Peterdjones, army1987, RomanDavis
comment by JoshuaZ · 2012-11-23T17:56:30.476Z · LW(p) · GW(p)

So what evidence do you have that the economists are fixing the problems created by the sociologists in any meaningful sense?

I confess I don't understand this question. Could you please clarify?

You said earlier that:

I'm glad that the economics faculty spend much of their time thinking of ways to fix the problems caused by the sociology faculty, but it would save everyone time and money if they all went home.

My confusion was over this claim in that it seems to assume that a) sociologists are creating societal problems and b) economists are solving those problems.

But these "institutions" are not laws of nature, they aren't even tangible things - an "institution" is just a description of the way people co-ordinate with each other. So yes, people in developing countries often can't co-ordinate because they have bad institutions, but it would be equally true to say that they can't have good institutions because they don't co-ordinate.

Human behavior is not path-independent. Institutions help coordination because prior functioning governments and organizations help people to keep coordinating. Values also come into play: countries with functioning governments have citizens with more respect for government, so they are more likely to cooperate with it, and so on.

Apart from the reasons you already mentioned, marketing creates a brand which reduces information costs. This is of course particularly important in a low information market. Spending money to promote your brand is a pre-commitment to provide satisfactory quality products.

This only makes sense in a context where markets are low-information, where marketing creates actual information, and where negative behavior by a brand will produce a substantial reduction in sales. In practice, people have strong brand loyalty based on familiarity with logos and the like. So people will keep buying the same brands not because they are the best but because that's what they've always done. Humans are cognitive misers, and a large part of marketing is hijacking that.

Huh? All of these can result in total utility going down compared to what might happen if one picked a different market equilibrium. How are these not market failures?

Firstly, no-one can "pick" a market equilibrium.

You are missing the point. The point is that there are other stable equilibria that are better off for everyone but issues like networking effects and technological lock-in prevent people from moving off the local maximum.

Secondly, order is defined in the process of its emergence. Thirdly, proof of a possibility and a demonstration of a real-world effect are not the same thing.

What do these two sentences mean?

So every time a business gains on account of departures from the free market, that's a travesty, but every time it loses, that's the way things are supposed to work.

I don't know where you saw a statement that implied that, and I'm curious how you got that sort of idea from what I wrote.

Besides, TBTF isn't an economic problem, this is a political problem. They had too many lobbyists to be allowed to fail, that's all.

There's an argument for that in the case of the car industry, but the economic consensus is that the economy as a whole would have gotten much worse if the banks hadn't been bailed out.

How is it a market failure? It's possible for bundling to reduce consumer surplus, but that's just a straight transfer.

Technological lock-in and network effects again. For example, in the case of Internet Explorer, having it bundled with Windows meant that many people ended up using IE by default, got very used to it, and then it had an advantage compared to other browsers which stayed around (because people then wrote software that needed IE and webpages emphasized looking good in IE). In this context, if the consumers had been given a choice of browsers, it is likely that other browsers, especially Netscape (and later Firefox) would have done much better, and by most benchmarks Netscape was a better browser.

comment by Peterdjones · 2012-11-23T19:54:16.994Z · LW(p) · GW(p)

Actually, I think that a "random African" likely knows a lot more about what would improve his standard of living than you or I, and my mind boggles at any other presumption.

So why haven't they all done so? An RA may well have more on-the-ground knowledge, but a do-gooder may well have more of the kind of knowledge that you learn in school. Since the D-G is not seeking to take over the RA's entire life, it's a case of two heads are better than one.

If he'd rather spend his money on beer than bednets, but you give him a bednet anyway, then I hope it makes you happy, because you're clearly not doing it for him.

You could do with distinguishing short-term gains from long-term interests. People don't pop out of the womb knowing how to maximise the latter. It takes education. And institutions: why save when the banks are crooked?

But these "institutions" are not laws of nature, they aren't even tangible things - an "institution" is just a description of the way people co-ordinate with each other. So yes, people in developing countries often can't co-ordinate because they have bad institutions, but it would be equally true to say that they can't have good institutions because they don't co-ordinate.

It would be truer still to say that good institutions take a long and fragile history to evolve. They didn't evolve everywhere, and they arrived late where they did. Go back about 500 years and no one had good (democratic, accountable, fair, honest) institutions.

Apart from the reasons you already mentioned, marketing creates a brand which reduces information costs. This is of course particularly important in a low information market. Spending money to promote your brand is a pre-commitment to provide satisfactory quality products.

It's hard to know where to start with that lot. Brands aren't information like libraries are information; they are an attempt to get people to buy things by whatever means is permitted. They can be worse than no information at all, since African mothers would presumably have continued to breast-feed without Nestlé's intervention:

"The Nestlé boycott is a boycott launched on July 7, 1977, in the United States against the Swiss-based Nestlé corporation. It spread in the United States, and expanded into Europe in the early 1980s. It was prompted by concern about Nestle's "aggressive marketing" of breast milk substitutes (infant formula), particularly in less economically developed countries (LEDCs), which campaigners claim contributes to the unnecessary suffering and deaths of babies, largely among the poor.[1] Among the campaigners, Professor Derek Jelliffe and his wife Patrice, who contributed to establish the World Alliance for Breastfeeding Action (WABA), were particularly instrumental in helping to coordinate the boycott and giving it ample visibility worldwide."

comment by A1987dM (army1987) · 2012-11-23T09:57:50.153Z · LW(p) · GW(p)

Actually, I think that a "random African" likely knows a lot more about what would improve his standard of living than you or I, and my mind boggles at any other presumption. If he'd rather spend his money on beer than bednets, but you give him a bednet anyway, then I hope it makes you happy, because you're clearly not doing it for him.

Yes, because all humans are perfectly rational and have unlimited willpower. But then again, why doesn't he sell the bednet I gave him and buy beer with the proceeds?

Replies from: Salemicus
comment by Salemicus · 2012-11-23T10:31:11.814Z · LW(p) · GW(p)

Yes, because all humans are perfectly rational and have unlimited willpower.

And I claim that where? Are you claiming that the donors are perfectly rational and have unlimited willpower - and are also perfectly altruistic?

My claim is the modest and surely uncontroversial one that JoshuaZ's "random African" is a better guardian of his own welfare than you, or Bill Gates, or a random do-gooder.

But then again, why doesn't he sell the bednet I gave him and buys beer with it?

Because you gave everyone else in the area a bednet, so now there's a local glut. But yes, I'm sure some people do sell their bednets.

Replies from: CCC, army1987, ArisKatsaris
comment by CCC · 2012-11-23T13:28:27.768Z · LW(p) · GW(p)

My claim is the modest and surely uncontroversial one that JoshuaZ's "random African" is a better guardian of his own welfare than you, or Bill Gates, or a random do-gooder.

It is not (language warning) entirely uncontroversial. Whether through poor education or through giving disproportionate value to present circumstances (and none or too little to future circumstances), people can and often do do things that are, in the long run, self-defeating. (And note that this 'long run' can be measured in weeks or months, even hours in particularly extreme cases).

At least one study suggests that the ability to reject a present reward in favour of a greater future reward is detectable at a young age and is correlated with success in life.

Replies from: Salemicus
comment by Salemicus · 2012-11-23T14:49:44.036Z · LW(p) · GW(p)

Of course people can do things that are self-defeating - did I ever suggest otherwise? I never said people are perfect guardians of their own self-interest; I said, and I repeat, that a random person is a better guardian of his own self-interest than a random do-gooder.

I am getting a little frustrated with people arguing against strawmen of my positions, which has now happened several times on this thread. Am I being unclear?

None of those links suggest that people are worse guardians of their own self-interest than the outsider. In fact, quite the reverse. Take the fertilizer study. The reason that the farmers weren't following the advice of the Kenyan Ministry of Agriculture was that it was bad advice. To quote:

[T]he full package recommended by the Ministry of Agriculture is highly unprofitable on average for the farmers in our sample... the official recommendations are not adapted to many farmers in the region.

So the study demonstrates that the farmers were better guardians of their own self-interest than some bureaucrat in Nairobi (no doubt advised by a western NGO). If they had been forced to follow the (no doubt well-meaning) advice, they would have been much worse off. Maybe some would have died. Now, at the same time, they don't know every possible combination, and it turns out that if they changed their farming methodology, they could become more productive. Great! That's how society advances - by persuading people as to what is in their self-interest, not by making someone else their guardian.

Replies from: CCC, Desrtopa, Peterdjones
comment by CCC · 2012-11-23T20:16:49.425Z · LW(p) · GW(p)

a random person is a better guardian of his own self-interest than a random do-gooder.

Phrased in this way, I think that I agree with you, on average.

In the original context of your statement, however, I had thought that you meant "a random charity recipient" rather than "a random person".

Now, charity recipients are still people, of course. However, charity recipients are usually people chosen on the basis of poverty; thus, the group of people who are charity recipients tend to be poor.

Now, some people are good guardians of their own long-term self-interest, and some are not. This is correlated with wealth in an unsurprising way (as demonstrated in the marshmallow experiment linked to above); those people who are better guardians of their own long-term self-interest are, on average, more likely to be above a certain minimum level of wealth than those who are not. They are, therefore, less likely to be charity recipients. Therefore, I conclude that people who are in a position to receive benefits from a charity are, on average, worse guardians of their own long-term self-interest than people who are in a position to contribute to a charity.

So. I therefore conclude that a person, on average, will be a better guardian of his own self-interest than a random person of the category (charity recipient); since the selection of people who fall into that category biases the category to those who are poor guardians of their own self-interest. It's not the only factor selected for in that category, but it's significant enough to have a noticeable effect.

comment by Desrtopa · 2012-11-23T15:09:38.124Z · LW(p) · GW(p)

I said, and I repeat, that a random person is a better guardian of his own self-interest than a random do-gooder.

But likely not a better guardian of his or her own self interest than a nonrandom do-gooder who's researched the specific problems the person is dealing with and developed expertise in solving them with resources that the person has had no opportunity to learn to make use of.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-11-23T16:47:45.605Z · LW(p) · GW(p)

Certainly -- tautologically -- he's not a better guardian of his or her own self interest than someone who is in fact a better one. But inventing a fictional character who is in fact a better one does not advance any argument.

One might as well invent a nonrandom do-gooder who thinks they have properly researched what they think are the specific problems the person is dealing with and thinks they have developed expertise in solving them with resources that they think the person has had no opportunity to learn to make use of, but who is nonetheless wrong. As with, apparently, and non-fictionally, the Kenyan Ministry of Agriculture.

Replies from: Desrtopa
comment by Desrtopa · 2012-11-23T17:50:55.424Z · LW(p) · GW(p)

Of course, less qualified guardians of an individual's self interest who believe themselves to be more qualified are a legitimate risk, but that doesn't mean that the optimal solution is to have individuals act exclusively as guardians of their own self interest.

If the Kenyan Ministry of Agriculture follows the prescriptions of the researchers in the article cited above, they will thereby become better guardians of those farmers' interests than the farmers have thus far been, within that domain.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-11-23T18:35:39.070Z · LW(p) · GW(p)

Of course, less qualified guardians of an individual's self interest who believe themselves to be more qualified are a legitimate risk, but that doesn't mean that the optimal solution is to have individuals act exclusively as guardians of their own self interest.

The optimal solution isn't necessarily to have someone else act as the "guardian" of their self-interest, however well informed. BTW, I don't know if this is true of American English, but in British English, one meaning of "guardian" is what an orphan has in place of parents. Doing things for adults without consulting them is usually a bad idea.

If the Kenyan Ministry of Agriculture follows the prescriptions of the researchers in the article cited above, they will thereby become better guardians of those farmers' interests than the farmers have thus far been, within that domain.

Not if they try to do so by simply coming in and telling the farmers what to do, nor by deciding the farmers' economic calculations are wrong and manipulating subsidies to get them to do differently. (I've only glanced at the article; I don't know if this is what they did.) Even when they're right about what the farmers should be doing, they will go wrong if they use the wrong means to get that to happen. Providing information might be a better way to go. The presumption that if only you know enough, you can direct other people's lives for them is pretty much always wrong.

Replies from: Peterdjones, DSimon
comment by Peterdjones · 2012-11-23T20:00:37.466Z · LW(p) · GW(p)

Doing things for adults without consulting them is usually a bad idea.

But none of the examples given are of one person taking over another's life. Most of this debate revolves around the fallacy that someone either runs their own life or has it run for them. In fact, there are many degrees of advice/help/co-operation.

comment by DSimon · 2012-11-23T19:13:10.333Z · LW(p) · GW(p)

I don't know if this is true of American English, but in British English, one meaning of "guardian" is what an orphan has in place of parents.

Yes, this meaning is in American English as well. A typical parental permission form for a child to go on a field trip or what-have-you will ask for the signature of "a parent or legal guardian".

comment by Peterdjones · 2012-11-23T19:58:16.959Z · LW(p) · GW(p)

a random person is a better guardian of his own self-interest than a random do-gooder.

That isn't obvious. Do-gooders are likely to be qualified to help people, while the people they are helping are likely to have a history of not helping themselves successfully.

So the study demonstrates that the farmers were better guardians of their own self-interest

Would they have had access to fertilizer at all w/out the govt? Two heads are better than one, again.

comment by A1987dM (army1987) · 2012-11-23T14:15:57.219Z · LW(p) · GW(p)

Are you claiming that the donors are perfectly rational and have unlimited willpower - and are also perfectly altruistic?

I'm not.

My claim is the modest and surely uncontroversial one that JoshuaZ's "random African" is a better guardian of his own welfare than you, or Bill Gates, or a random do-gooder.

I might agree about myself or “a random do-gooder” (dunno about Gates), but these people do look like they've done their homework.

comment by ArisKatsaris · 2012-11-23T11:34:22.589Z · LW(p) · GW(p)

My claim is the modest and surely uncontroversial one that JoshuaZ's "random African" is a better guardian of his own welfare than you, or Bill Gates, or a random do-gooder.

People like Bill Gates aren't "random do-gooders". I'd not find it strange that in the counterfactual that Bill Gates had the time to guard my own welfare (let alone some random African's), he might do a better job at it than I would myself. Certainly he might provide useful tips about how to invest my money, for example, and might know ways that I've not even heard of.

Replies from: Salemicus
comment by Salemicus · 2012-11-23T14:32:27.718Z · LW(p) · GW(p)

People like Bill Gates aren't "random do-gooders".

Bill Gates was a software executive who got into malaria prevention and education reform and poverty reduction and God knows what else, not because of his deep knowledge and expertise in those subjects, but because of a generalised wish to become a philanthropist. He's the very archetype of a random do-gooder.

I'd not find it strange that in the counterfactual that Bill Gates had the time to guard my own welfare (let alone some random African's), he might do a better job at it than I would myself.

But the whole point is that he doesn't have the time (or knowledge, or inclination, or incentives, or ...) to guard your welfare - or that of a "random African". Sure, if Bill Gates was my father he might be a better guardian of my welfare than I am. But he ain't.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2012-11-23T14:54:33.818Z · LW(p) · GW(p)

He's the very archetype of a random do-gooder.

My point is that the qualification "random" is rather silly when applied to one of the most wealthy people in the world. That he achieved that wealth (most of which he did not inherit) implies some skills and intelligence most likely beyond that of a randomly selected do-gooder.

An actually randomly-selected do-gooder would probably be more like a middle-class individual who walks to a soup-kitchen and offers to volunteer, or who donates money to UNICEF.

comment by RomanDavis · 2012-11-22T16:45:34.426Z · LW(p) · GW(p)

So every time a business gains on account of departures from the free market, that's a travesty, but every time it loses, that's the way things are supposed to work. No wonder you think academics are the only ones who do any good. Besides, TBTF isn't an economic problem, this is a political problem. They had too many lobbyists to be allowed to fail, that's all.

He didn't say that. You're being a troll.

comment by A1987dM (army1987) · 2012-11-23T14:21:48.144Z · LW(p) · GW(p)

If you talk a lot about how much you love literature, but in fact you spend all your money on beer, then some so-called philanthropist building you a library is just a waste of everyone's time.

I disagree. See this post by Yvain.

comment by Bugmaster · 2012-11-22T01:07:06.726Z · LW(p) · GW(p)

No-one is disputing that mathematics can be useful. The question is, if we had slightly more advanced number theory slightly earlier in time, would that have been particularly useful? Answer - no.

My answer is "probably yes". Mathematics directly enables entire areas of science and engineering. Cathedrals and bridges are much easier to build if you know trigonometry. Electricity is a lot easier to harness if you know trigonometry and calculus, and easier still if you are aware of complex numbers. Optics -- and therefore cameras and telescopes, among many other things -- is a lot easier with linear algebra, and so are many other engineering applications. And, of course, modern electronics are practically impossible without some pretty advanced math and science, which in turn requires all these other things.

If we assume that technology is generally beneficial, then it's best to develop the disciplines which enable it -- i.e., science and mathematics -- as early as possible.

Replies from: army1987
comment by A1987dM (army1987) · 2012-11-22T19:47:12.039Z · LW(p) · GW(p)

He was talking about number theory specifically, not mathematics in general -- in the first sentence you quoted he admitted it can be useful. (I doubt advanced number theory would have been that practically useful before the mid-20th century.)

comment by JoshuaZ · 2012-11-21T21:14:54.963Z · LW(p) · GW(p)

Follow-up reply in a separate comment, since I didn't notice this part of the remark the first time through (and it is substantial enough that it should probably not just be included as an edit):

... we might better understand the “Great Filter”

Isn’t this kind of thing archetypal of knowledge that in no way contributes to human welfare?

If this falls into that category, then the archetype of knowledge that doesn't contribute to human welfare is massively out of whack. Figuring out how much of the Great Filter is in front of us or behind us is extremely important. If most of it is behind us, we have a lot less to worry about. If most of the Great Filter is in front of us, then existential risk is a severe danger to humanity as a whole. Moreover, if it is in front of us, then it is most likely a matter of technology, caused by some sort of technological change (since natural disasters aren't common enough to wipe out every civilization that gets off the ground). Since we're just beginning to travel into space, it is likely that if there is heavy Filtration in front of us, it isn't very far ahead but lies in the next few centuries.

If there is heavy Filtration in front of us, then it is vitally important that we figure out what that Filter is and what we can do to avert it, if anything. This could be the difference between the destruction of humanity and humanity spreading to the stars. Of all the contributions to the welfare of humanity, those which concern our existence as a whole should be high up on the list.

comment by Desrtopa · 2012-11-21T22:54:28.873Z · LW(p) · GW(p)

Alvin Roth is no doubt a bright guy, but the idea that he has done more lasting good for humanity than, say, Sam Walton, is absurd.

I wouldn't be so sure about that. I'm not about to investigate the economics of their entire supply chain (I already don't shop at Walmart simply due to location, so it doesn't even stand to influence my buying decisions), but I wouldn't be surprised if Walmart is actually wealth-negative in the grand scheme. They produce very large profits, but particularly considering that their margins are so small and their model depends on dealing in such large bulk, I think there's a fair likelihood that the negative externalities of their business are in excess of their profit margin.

It's impossible for a business to be GDP negative, but very possible for one to be negative in terms of real overall wealth produced when all externalities are accounted for, which I suspect leads some to greatly overestimate the positive impact of business.

Replies from: Salemicus
comment by Salemicus · 2012-11-22T00:51:42.328Z · LW(p) · GW(p)

Why focus on the negative externalities rather than the positive? And why neglect all the partner surpluses - consumer surplus, worker surplus, etc? I'd guess that Walmart produces wealth at least an order of magnitude greater than its profits.

Replies from: Desrtopa
comment by Desrtopa · 2012-11-22T02:26:45.162Z · LW(p) · GW(p)

Why focus on the negative externalities rather than the positive?

Because corporations make a deliberate effort to privatize gains while socializing expenses.

GDP is a pretty worthless indicator of wealth production, let alone utility production; the economists who developed the measure in the first place protested that it should by no means be taken as a measure of wealth production. There are other measures of economic growth which paint a less optimistic picture of the last few decades in industrialized nations, although they have problems of their own with making value judgments about what to measure against industrial activity. But the idea that every economic transaction must represent an increase in well-being is trivially false both in principle and in practice.

Replies from: Salemicus
comment by Salemicus · 2012-11-23T09:57:28.358Z · LW(p) · GW(p)

Because corporations make a deliberate effort to privatize gains while socializing expenses.

This is true of everyone, not just corporations. I'm very suspicious that you apply this scepticism only to corporations, and not to academics.

Replies from: JoshuaZ, zslastman, Peterdjones, Desrtopa
comment by JoshuaZ · 2012-11-23T20:17:36.374Z · LW(p) · GW(p)

Someone who is doing research that is published and doesn't lead to direct patents is socializing gains whether or not they want to.

Replies from: Salemicus, FluffyC
comment by Salemicus · 2012-11-24T00:56:17.420Z · LW(p) · GW(p)

Only if there are any gains to socialize. Consider honestly the societal gain from the marginal published paper, particularly given that it gets 0 cites from other papers not by the same author.

Replies from: JoshuaZ
comment by JoshuaZ · 2012-11-24T00:59:18.880Z · LW(p) · GW(p)

Consider honestly the societal gain from the marginal published paper, particularly given that it gets 0 cites from other papers not by the same author.

So, I'd be curious what evidence you have that the average paper gets 0 citations from papers not by the same author across a wide variety of fields. But, in any event, the marginal return rate per paper isn't nearly as important as the marginal return rate per paper divided by the cost of a paper. For many fields (like math), the average cost per paper is tiny.

Replies from: Salemicus
comment by Salemicus · 2012-11-24T01:46:23.991Z · LW(p) · GW(p)

Consider honestly the societal gain from the marginal published paper, particularly given that it gets 0 cites from other papers not by the same author.

So, I'd be curious what evidence you have that the average paper gets 0 citations from papers not by the same author across a wide variety of fields.

Either I cannot write clearly or others cannot read clearly, because again and again in this thread people are responding to statements that are not what I wrote. The common factor is me, which makes me think it is my failure to write clearly, but then I look at the above. I referred to "the marginal published paper", and even italicised the word marginal. JoshuaZ replies by asking whether I have evidence for my statement about "the average paper." I don't know what else to say at this point.

However, yes, I have plenty of evidence that the marginal paper across a wide variety of fields gets 0 citations; see e.g. Albarran et al. Note incidentally that there are some fields where the average paper gets no citations!

Replies from: nshepperd, JoshuaZ
comment by nshepperd · 2012-11-24T02:52:39.416Z · LW(p) · GW(p)

the marginal paper across a wide variety of fields gets 0 citations

I don't think a marginal paper is a thing (marginal cost isn't a type of cost, but represents a derivative of total cost). Do you mean that d(total citations)/d(number of papers) = 0?

Note incidentally that there are some fields where the average paper gets no citations!

This seems impossible, unless all papers get no citations, ie. no-one cites anyone but themselves. That actually happens?

Replies from: Salemicus
comment by Salemicus · 2012-11-24T11:40:13.399Z · LW(p) · GW(p)

Of course the marginal paper is a thing. If there were marginally less research funding, the research program cancelled wouldn't be chosen at random; it would be the least promising one. We can't be sure ex ante that that would be the least successful paper, but given that most fields have 20% or more of papers getting no citations at all, and other studies show that papers with very low citation counts are usually being cited by the same author, that's good evidence.

Note that I did not say that papers, on average, get no citations. I said that the average (i.e. median) paper gets no citations.
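To make the distinction concrete, here is a toy sketch (the citation counts are invented purely for illustration): in a heavily skewed field, the median paper is uncited even though the mean is well above zero.

```python
# Hypothetical citation counts for a field of 10 papers: most papers get
# nothing, a few get a lot. (Numbers invented for illustration.)
from statistics import mean, median

citations = [0, 0, 0, 0, 0, 0, 1, 3, 12, 40]

print(median(citations))  # 0   -> the average (i.e. median) paper gets no citations
print(mean(citations))    # 5.6 -> yet papers do, "on average", get cited
```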

Replies from: JoshuaZ, army1987, nshepperd
comment by JoshuaZ · 2012-11-24T14:35:32.201Z · LW(p) · GW(p)

If there were marginally less research funding, the research program cancelled wouldn't be chosen at random; it would be the least promising one.

Weren't you just a few posts ago talking about the problems of politics getting involved in funding decisions? But now you are convinced that, in the event of a budget cut, the least promising research is what will be cut? This seems slightly contradictory.

comment by A1987dM (army1987) · 2012-11-24T15:20:40.269Z · LW(p) · GW(p)

If there were marginally less research funding, the research program cancelled wouldn't be chosen at random; it would be the least promising one.

Would it? I fear it would be the one whose participants are worst at ‘politics’.

comment by nshepperd · 2012-11-24T14:04:26.459Z · LW(p) · GW(p)

If there were marginally less research funding, the research program cancelled wouldn't be chosen at random; it would be the least promising one.

Ah, right, we're on the same page now. Your argument, however, assumes that a) fruitfulness of research is quite highly predictable in advance, and b) available funds are rationally allocated based on these predictions so as to maximise fruitful research (or the proxy, citations). Insofar as the reality diverges from these assumptions, the expected number of citations of the "marginal" paper is going to approach the average number.
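A minimal sketch of that last point, using the same kind of invented numbers as above: with perfect foresight a cut removes the least-cited paper, while with no predictive power the expected loss per cut is the mean.

```python
# If allocation is rational and fruitfulness is predictable, a funding cut
# removes the least promising paper; with no predictive power the cut paper
# is effectively random, and the expected citations lost equal the mean.
# (Citation counts invented for illustration.)
citations = [0, 0, 0, 0, 0, 0, 1, 3, 12, 40]

loss_with_foresight = min(citations)                    # 0
loss_with_random_cut = sum(citations) / len(citations)  # 5.6, the mean

print(loss_with_foresight, loss_with_random_cut)
```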

Note that I did not say that papers, on average, get no citations. I said that the average (I.e. median) paper gets no citations.

Oh, by "average" I assumed you meant the arithmetic mean, since that is the usual statistic.

comment by JoshuaZ · 2012-11-24T01:50:44.833Z · LW(p) · GW(p)

Sorry, in this context I switched from talking about the marginal to talking about the average. You shouldn't take my own poor thinking as a sign of anything, although it is possible that I was, without thinking, trying to steelman your argument, since when one is talking about completely eliminating academic funding, the average rate matters much more than the marginal rate. But the citation you've given is convincing that the marginal rate is generally quite low across a variety of fields.

Replies from: Salemicus
comment by Salemicus · 2012-11-24T11:47:52.794Z · LW(p) · GW(p)

[I]t is possible that I was, without thinking, trying to steelman your argument, since when one is talking about completely eliminating academic funding...

Who exactly is arguing for completely eliminating academic funding? If you mean me, I hope you can provide a supporting quote.

Replies from: JoshuaZ
comment by JoshuaZ · 2012-11-24T14:33:08.247Z · LW(p) · GW(p)

Who exactly is arguing for completely eliminating academic funding?

Well, various statements you've made seemed to imply that, such as your claim that burning down the Library of Alexandria had the advantage that:

Academics now forced to get useful job and contribute to society

and you then stated

The point is that some academics are useful and some are not; there is no market process that forces them to be so. It may be that some of the academics are able to continue doing exactly what they were doing, just for a private employer.

If you prefer, to be explicit: you seem to be arguing that all government funding of academics should be cut. Is that an accurate assessment? In that context, given that that's the vast majority of academic research, the relevant difference is still the average, not the marginal, rate of return.

comment by FluffyC · 2012-11-24T00:44:09.129Z · LW(p) · GW(p)

That being a large portion of academia, this presents at least a partial argument for the present state of affairs wrt academia being publicly funded.

comment by zslastman · 2012-11-23T10:25:29.896Z · LW(p) · GW(p)

The majority of people, other than psychopaths, are not as ruthless in the quest to externalize their costs. A substantial portion of academics sacrifice renown and glory to do research they believe has intrinsic value. This is in large part the reason they can be paid so much less than people of equivalent ability in the private sector.

I agree with your general point about businessmen and entrepreneurship being undervalued, however.

comment by Peterdjones · 2012-11-23T20:04:16.219Z · LW(p) · GW(p)

This is true of everyone, not just corporations.

Uh huh. Is it true of charities?

comment by Desrtopa · 2012-11-23T14:53:30.422Z · LW(p) · GW(p)

As zslastman already said, this is not true of most people to nearly as great an extent as it is of corporations. Corporations have an obligation to maximize profits, whereas humans are rarely profit maximizers.

Some people are more willing to externalize costs than others. For instance, some people, given the opportunity to file expense reports under which they can cover luxuries, will take the opportunity to live it up as much as possible on someone else's dollar. Other people, myself included, would feel guilty, and try to be as frugal as possible.

Try not to overgeneralize your own mentality.

comment by TorqueDrifter · 2012-11-21T19:36:53.546Z · LW(p) · GW(p)

Number theory might have progressed faster... we might better understand the “Great Filter”

Isn’t this kind of thing archetypal of knowledge that in no way contributes to human welfare?

I don't think you'll find many here to agree that math doesn't help with human welfare.

comment by Peterdjones · 2012-11-23T20:43:23.478Z · LW(p) · GW(p)

Alvin Roth is no doubt a bright guy, but the idea that he has done more lasting good for humanity than, say, Sam Walton, is absurd.

Apples and oranges. Business is there to make money. Money is instrumental; it is there to be spent on terminal values, things of intrinsic worth. People spend their excess on entertainment, art, hobbies, family life, and, yes, knowledge. All these things are terminal values.

comment by A1987dM (army1987) · 2012-11-21T00:26:58.385Z · LW(p) · GW(p)

I don't need to carry out expected utility calculations explicitly to guess that burning down a library is way more likely to be bad than good. My "What?" was because I can't see any obvious reason to suspect that actually carrying it out would yield a substantially different answer than my guess, and wondered whether you had such a reason in mind.

Replies from: Salemicus
comment by Salemicus · 2012-11-21T19:16:15.071Z · LW(p) · GW(p)

I don't need to carry out expected utility calculations explicitly to guess that burning down a library is way more likely to be bad than good.

But note that we aren’t talking about a library being burned down by an arsonist. Most of the various stories have it being burned down by the government of the day, as a public policy measure. It appears that you don’t even consider their reasons before condemning them – highly suspicious.

Let’s assume for the sake of argument that the library was destroyed by Amr ibn al-Aas as I was taught (although wiki is not so sure). Reasons why his burning down a library might be bad:

  • Risk of fire spreading (but does not appear to have been the case)
  • Loss of private property by the owners of the building or books (but does not apply here)
  • Loss of useful knowledge (does not appear to apply here, but disputed by JoshuaZ)

Reasons why his burning down a library might be good:

  • Academics now forced to get useful job and contribute to society (important)
  • Destruction of contentious material likely to cause civil unrest (important)
  • Owner of building now able to build something more useful in its place (minor)

It is an empirical question as to which effects are stronger – and the record shows that Egypt was richer, more peaceful and more stable under the Umayyads than it had been under the Byzantines. Personally I regard it as similar to Henry VIII’s dissolution of the monasteries.

Replies from: DaFranker, JoshuaZ, Douglas_Reay, Peterdjones
comment by DaFranker · 2012-11-21T21:00:59.331Z · LW(p) · GW(p)

Reasons why his burning down a library might be bad:

  • Risk of fire spreading (but does not appear to have been the case)
  • Loss of private property by the owners of the building or books (but does not apply here)
  • Loss of useful knowledge (does not appear to apply here, but disputed by JoshuaZ)

Seems you're missing a key one. Probably the most important one. At the very least, this is to me the primary advantage of even building libraries at all:

  • Loss of a major means of spreading, disseminating and finding knowledge, whether the knowledge itself is lost or not.

Imagine trying to find information regarding a specific species of bird (a standard "Encyclopedia" will only have an entry on birds, not on each known species), with no internet and no libraries. You're going to run around for a while until you finally find someone who knows someone who's heard of someone who owns a book that might contain the information you want on that particular bird.

Libraries are, first and foremost, a convenient place to store a lot of books, which implies a convenient place to find any book in particular you're looking for with much higher success rates than asking a random friend.

comment by JoshuaZ · 2012-11-21T20:30:03.634Z · LW(p) · GW(p)

Academics now forced to get useful job and contribute to society

There appears to be a massive implied premise here that academics aren't useful.

Replies from: Salemicus
comment by Salemicus · 2012-11-22T01:00:02.606Z · LW(p) · GW(p)

Not at all. The point is that some academics are useful and some are not; there is no market process that forces them to be so. It may be that some of the academics are able to continue doing exactly what they were doing, just for a private employer. But I would not bet on that outcome for most.

Replies from: IlyaShpitser, None, JoshuaZ
comment by IlyaShpitser · 2012-11-22T01:13:09.241Z · LW(p) · GW(p)

The point is that some academics are useful and some are not; there is no market process that forces them to be so.

???

Academics need funding. The ability to get funding is well correlated with usefulness in most fields (to be fair, other things are in play, like institutional inertia, fashion, etc. etc.) Useful research also results in spin off companies, and fame. There are all sorts of incentives to be useful in academia.

There is also the issue of hedging, and diversification of intellectual effort -- you want people doing long-term-payoff and long-shot research as well. Modern business culture is generally much worse at this than academic culture is at being useful. Google, a company started by two ex-graduate students, is one of the few notable exceptions.

Replies from: Salemicus
comment by Salemicus · 2012-11-22T01:25:21.000Z · LW(p) · GW(p)

Modern business culture is generally much worse at this than academic culture is at being useful.

How would you tell?

Replies from: IlyaShpitser
comment by IlyaShpitser · 2012-11-22T01:43:56.365Z · LW(p) · GW(p)

I could look at output over the last 50 years, comparing useful stuff out of academia vs long term research done by corporate research labs, scaled by funding. Or I could look at incentives high level corporate decision makers have, which heavily favor the short term and empire building. Or I could look at anecdotal evidence based on testimonies of my corporate vs academic acquaintances. Or I could look at universities today that produce useful applied research (basically any major research university) vs companies today that have labs working on fundamental research (Google, Microsoft, possibly some oil companies (?), maybe Honda, maybe some big pharmas and that's about it).

The vast majority of big companies (the only ones who could afford fundamental research) do not engage in fundamental research. The vast majority of research universities do very useful applied work.

Replies from: Salemicus
comment by Salemicus · 2012-11-22T01:54:39.265Z · LW(p) · GW(p)

The vast majority of big companies (the only ones who could afford fundamental research) do not engage in fundamental research.

This might equally lead to the conclusion that the kind of "fundamental research" you're talking about just isn't very worthwhile.

Replies from: IlyaShpitser, Vaniver
comment by IlyaShpitser · 2012-11-22T04:17:10.133Z · LW(p) · GW(p)

Ok. Which of the following do you think is a worthwhile research question:

Non-Euclidean geometry (Lobachevsky, 1826).

Galois theory (Galois, 1830).

Complex numbers (Bombelli, 1572).

The periodic table (Mendeleev, 1869).

Interventionist causality (Wright, Neyman, Rubin, Pearl, Robins, etc. 1920-today).

Objectivism (Rand, ~1950s).

None of these were developed by corporations (or similar entities) or corporation-sponsored individuals, to my knowledge.

Replies from: JoshuaZ, Salemicus
comment by JoshuaZ · 2012-11-22T04:39:02.347Z · LW(p) · GW(p)

Bombelli predates corporations on any large scale, so including his work seems strange in this context.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2012-11-22T04:44:16.805Z · LW(p) · GW(p)

Private individuals could not incorporate until fairly late (after Bombelli) but states started establishing commercial entities that play the role modern corporations play in our society fairly early. I edited a little to clarify, though, thanks.

http://en.wikipedia.org/wiki/List_of_oldest_companies

http://en.wikipedia.org/wiki/History_of_corporations

comment by Salemicus · 2012-11-23T10:15:38.940Z · LW(p) · GW(p)

I don't know anything about interventionist causality, and Objectivism seems like a waste of time. The rest all seem to have produced worthwhile results.

But this is costless analysis! Of course if you buy lots of lottery tickets, and look only at the winners, then buying lottery tickets looks worthwhile. You have to consider the opportunity costs, not just of these research projects, but also of every other similarly situated research project. And you also have to do time-discounting. Bombelli discovering complex numbers in 1572 looks like a waste, when they weren't useful for 200 years or more.

Moreover, I don't know why "corporations" is the comparison. The comparison is the private sector generally. Huge amounts of scientific work are done, and continue to be done, by enthusiastic amateurs, and the charitable sector. I am not making the claim that in an ideal world all government academics should be fired (although I do think that would be a big improvement on the current situation). I merely claimed that, on the margin, we are hugely oversupplied with academics, and undersupplied with businessmen.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2012-11-23T23:51:05.344Z · LW(p) · GW(p)

Ok. So, helpful fundamental research, as it is currently produced: (a) imposes a heavy opportunity cost, (b) has a low success rate, (c) is generally discovered "too early," leading to waste. What is your proposal for doing better? Can you give me some examples of things like complex numbers discovered in the private sector, by charities, or 'enthusiastic amateurs'?

Incidentally, the success rate of fundamental research for a given finite time horizon k is an untestable quantity. Thus, (b) is a weak complaint. (a) is hard to argue also, because you need to construct counterfactual scenarios that people will believe.

Replies from: JoshuaZ, Salemicus, Bugmaster
comment by JoshuaZ · 2012-11-24T00:06:13.154Z · LW(p) · GW(p)

Can you give me some examples of things like complex numbers discovered in the private sector, by charities, or 'enthusiastic amateurs'?

Examples that fit Salemicus's narrative in this context aren't non-existent. For example, Fermat was a lawyer by profession and did math as a hobby in his free time. There are many similar examples prior to the 19th century or so. And some major charities have helped fund successful research- one sees a lot of this with a variety of diseases.

comment by Salemicus · 2012-11-24T00:48:32.314Z · LW(p) · GW(p)

Can you give me some examples of things like complex numbers discovered in the private sector, by charities, or 'enthusiastic amateurs'?

Sure. For a start, complex numbers themselves - Bombelli was in the private sector himself. Some others, taken at random:

For-profit: Lightbulb (Edison, 1881), Propranolol and Cimetidine (Black, 1960s), Fractals (Mandelbrot, 1975)

Amateurs: Evolutionary Theory (Darwin, 1830s-50s), Photoelectric Effect, Brownian Motion, Special Relativity, Mass-Energy Equivalence (Einstein, 1905), Linear B (Ventris, 1951)

Incidentally, the success rate of fundamental research for a given finite time horizon k is an untestable quantity. Thus, (b) is a weak complaint. (a) is hard to argue also, because you need to construct counterfactual scenarios that people will believe.

Actually, if the success rate of this spending is as unknowable and untestable as you say it is, that's an excellent argument to stop forcing unwilling people to pay for it. But note that although I'm happy to fight you on your strongest ground (pure scientific research), surely you must then concede that the rest of the battlefield is mine, and that all government spending on academia that can't be justified in these terms (e.g. arts, humanities, medicine, space exploration, etc) should be eliminated. I'm not doctrinaire - I'd settle for that compromise.

EDIT: I'd truly be fascinated to know why this was voted down.

Replies from: JoshuaZ
comment by JoshuaZ · 2012-11-26T18:51:07.359Z · LW(p) · GW(p)

I don't know precisely why your comment was voted down, but one obvious guess is factual issues:

Amateurs: Evolutionary Theory (Darwin, 1830s-50s)

Darwin got most of his ideas from his time aboard the survey/exploration ship HMS Beagle. As you may gather from the "HMS" in front, this was a ship in the British navy, one which had specific funding to hire and provide support to a naturalist.

But note that although I'm happy to fight you on your strongest ground (pure scientific research), surely you must then concede that the rest of the battlefield is mine, and that all government spending on academia that can't be justified in these terms (e.g. arts, humanities, medicine, space exploration, etc) should be eliminated. I'm not doctrinaire - I'd settle for that compromise.

This is why I suspect you are getting downvoted. First, it shows a confused notion of what "pure scientific research" is - a large part of space exploration and medical research counts as pure research for the purposes of the arguments being made by IlyaShpitser and others in this context. Moreover, you seem to want a "compromise", as if the truth must somehow be negotiable. The point of discussion is to understand what is likely to be actually true. Settling for a compromise isn't how one reaches truth (and no, Aumann's agreement theorem doesn't apply here).

Replies from: Salemicus
comment by Salemicus · 2012-11-26T19:23:27.732Z · LW(p) · GW(p)

I will not argue with your post; although I disagree with some things you stated, what I requested was comments on why I was downvoted. However, I think the following correction is necessary:

I don't know precisely why your comment was voted down, but one obvious guess is factual issues:

Amateurs: Evolutionary Theory (Darwin, 1830s-50s)

Darwin got most of his ideas from his time aboard the survey/exploration ship HMS Beagle. As you may gather from the "HMS" in front, this was a ship in the British navy, one which had specific funding to hire and provide support to a naturalist.

Darwin was a "gentleman naturalist," held no official post on the Beagle, received no salary, and in fact had to pay to go on the journey. The ship was going for surveying purposes, and would have gone on its journey regardless of his presence. He held no academic position in the years afterwards when he was working on his theory. I think it's quite reasonable to classify this as amateur.

Replies from: JoshuaZ
comment by JoshuaZ · 2012-11-26T19:29:47.920Z · LW(p) · GW(p)

So from reading this Wikipedia summary it looks like the situation was actually more complicated than either of us realized (although definitely closer to your summary):

FitzRoy had found a need for expert advice on geology during the first voyage, and had resolved that if on a similar expedition, he would "endeavour to carry out a person qualified to examine the land; while the officers, and myself, would attend to hydrography... he asked his friend and superior, Captain Francis Beaufort, to seek a gentleman naturalist as a self-financing passenger who would give him company during the voyage. A sequence of inquiries led to Charles Darwin, a young gentleman on his way to becoming a rural clergyman, joining the voyage

So yes, he was self-financed. But the primary issue is that he was given support by the crew and the entire existence of an exploration ship (which if anything seems pretty similar to the space program you've criticized). Would you call someone who now does unpaid work using data from NASA an amateur? If so, then the term "amateur" isn't very relevant to capturing the most important detail - where the resources for the work come from.

comment by Bugmaster · 2012-11-27T10:41:31.957Z · LW(p) · GW(p)

Incidentally, the success rate of fundamental research for a given finite time horizon k is an untestable quantity.

How exactly do you measure "success" in this case? As for me, I find myself hard-pressed to think of any examples of fundamental research that weren't ultimately beneficial -- except perhaps for instances of outright fraud or gross incompetence.

Even if a scientist spent five years and a million dollars trying to discover, say, the link between gene X and phenotype Y, and found no such link, then the work was still not in vain. Firstly, we can now be more certain that gene X does not cause Y; secondly, we can most likely gain a lot of collateral benefits from the work, leading to an increased rate of discovery in the future.

comment by Vaniver · 2012-11-22T02:55:35.436Z · LW(p) · GW(p)

This might equally lead to the conclusion that the kind of "fundamental research" you're talking about just isn't very worthwhile.

No. Just... no. What differentiates basic research from applied research is how many years there are until commercial application. For applied research, the number is low - five years is stretching it, and hopefully it'll be less than one. For basic research, the number is larger - it was around three decades from Einstein's explanation of the photoelectric effect to the commercialization of cameras based on it.

The point that some here consider the burning of books taboo, and that this conflicts with consequentialism, is an interesting and valid one. The point that public goods can be provided without government intervention is an interesting and valid point as well, but you're not arguing it well.

Replies from: Bugmaster
comment by Bugmaster · 2012-11-22T04:29:08.040Z · LW(p) · GW(p)

For basic research, the number is larger - it was around three decades from Einstein's explanation of the photoelectric effect to the commercialization of cameras based on it.

Another problem with fundamental research -- from a commercial corporation's point of view -- is that its results may not be applicable at all to products in your target market. For example, you might start by researching the formation of clouds in the atmosphere, and end up with major breakthroughs in atomic theory. Those are interesting, to be sure, but how are you going to sell that?

This is but one of the reasons why large corporations tend to stay away from fundamental research, unless they can write it off against their taxes or something.

comment by [deleted] · 2012-11-22T01:39:43.340Z · LW(p) · GW(p)

Can you briefly taboo 'useful' for me? What do you mean by that?

Replies from: Salemicus
comment by Salemicus · 2012-11-22T01:57:22.384Z · LW(p) · GW(p)

In this context, I mean providing a service that someone else is willing to pay for.

Replies from: army1987, JoshuaZ, None
comment by A1987dM (army1987) · 2012-11-22T13:11:44.426Z · LW(p) · GW(p)

By that definition, heroin is useful.

comment by JoshuaZ · 2012-11-22T02:11:13.732Z · LW(p) · GW(p)

So by that notion, anything that is an externality but can't be captured by market forces is by definition not useful? Does that capture your intuition for the word useful?

Replies from: Salemicus
comment by Salemicus · 2012-11-22T02:21:27.899Z · LW(p) · GW(p)

I said is willing to pay for - not necessarily that they can pay for it. Any one-sentence definition of a word as complex as useful is going to necessarily be incomplete, but I certainly mean to include externalities in it. If, for example, people value "a sense of belonging to a community" and are willing to give up something meaningful for it, but co-ordination problems or whatever else means it can't be captured by market forces, then I would absolutely view someone who creates "a sense of belonging to a community" as useful - provided that the cost of their doing so is less than the price that the community members would be hypothetically willing to pay.

Replies from: JoshuaZ
comment by JoshuaZ · 2012-11-22T02:28:19.917Z · LW(p) · GW(p)

Fair enough. How do you determine then what people are counterfactually willing to pay?

Replies from: Salemicus
comment by Salemicus · 2012-11-22T02:49:13.722Z · LW(p) · GW(p)

I don't think anyone has a good way of doing that.

comment by [deleted] · 2012-11-22T02:43:32.319Z · LW(p) · GW(p)

Would you grant that many things are valuable which are nevertheless not useful in that sense?

EDIT: I don't mean anything fancy here. Eating a hot dog, for example, is valuable (I'm willing to pay to do it) but not in any sense useful (no one is willing to pay me to do it).

Replies from: Salemicus
comment by Salemicus · 2012-11-22T19:37:37.886Z · LW(p) · GW(p)

Yes, sure. But I was talking about useful as a quality of a person, not as a quality of an object.

However, as my comments in this thread are getting voted down, I assume it's not really worthwhile to continue this conversation.

comment by JoshuaZ · 2012-11-22T01:04:48.860Z · LW(p) · GW(p)

I see. And what does the fact that much academic research produces positive externalities/public goods, which thus aren't easily funded by private employers, say in this context?

Replies from: Salemicus
comment by Salemicus · 2012-11-22T01:23:02.866Z · LW(p) · GW(p)

Deadweight cost of taxation + deadweight costs of political rent-seeking + resource misallocation due to political decision-making + opportunity costs

versus

costs from possibility that private sector is unable to capture all externalities

Markets fail. Use markets.

Replies from: JoshuaZ
comment by JoshuaZ · 2012-11-22T01:28:06.172Z · LW(p) · GW(p)

I'm confused by this remark, given the context is about academic jobs, not government intervention in the market. Can you expand/clarify what you mean?

Replies from: Salemicus
comment by Salemicus · 2012-11-22T01:32:53.370Z · LW(p) · GW(p)

Government creation of academic jobs is intervention in the market for academic jobs.

Replies from: JoshuaZ
comment by JoshuaZ · 2012-11-22T01:36:53.564Z · LW(p) · GW(p)

Yes, but that's a tiny fraction of the issues you list above (unless I'm misreading you). By "deadweight cost of taxation", for example, you mean the deadweight loss on the portion of taxes that goes to fund academic research?

Taking that sort of interpretation throughout, I'm still not sure what your point is. Can you be more explicit and maybe use full sentences?

Replies from: Salemicus
comment by Salemicus · 2012-11-22T02:14:00.247Z · LW(p) · GW(p)

You stated that academics aren't easily funded by the private sector because of an externality argument. I agreed that it is possible to argue that "basic research" or some such is underprovided by the market, because the private sector may not be able to capture all externalities, and that this is in some sense a market failure. However, I am saying that:

  • No-one can say by how much the private sector is underproviding. Therefore even if the intervention were done by angels, it is as likely to make things worse as better.
  • Government intervention will cost money, resulting in deadweight losses through tax.
  • The creation of a powerful rent-seeking body will cause academic research to be massively oversupplied
  • Moreover "research" is not a fungible good; the money and resources will not necessarily go to the most useful areas, but to the most politically convenient ones
  • The rent-seeking will also have deadweight costs (e.g. academics spending lots of time writing grant proposals, taxpayers having to organise to prevent themselves getting robbed blind)
  • This will also incentivise rent-seeking elsewhere (if the academics are successful in asking for a subsidy, it encourages the farmers)
  • The adoption of the subsidy discourages market participants from finding new ways to capture the externality.

So, even though there may be a textbook "market failure," there is no reason for any intervention. Dissolve the modern-day monasteries, and let academics prove their use, if they can. And indeed, I'm sure Alvin Roth would be just fine if we did.

Replies from: JoshuaZ
comment by JoshuaZ · 2012-11-23T18:58:08.407Z · LW(p) · GW(p)

No-one can say by how much the private sector is underproviding. Therefore even if the intervention were done by angels, it is as likely to make things worse as better

No. We can make such estimates by looking at how helpful basic research was in the past.

Government intervention will cost money, resulting in deadweight losses through tax.

Sure. How much?

The creation of a powerful rent-seeking body will cause academic research to be massively oversupplied

That's a danger certainly, but what evidence do you have that that's happening?

Moreover "research" is not a fungible good; the money and resources will not necessarily go to the most useful areas, but to the most politically convenient ones

The areas where politics plays a heavy role are actually the areas with the least government funding. For example, physics has a lot of government funding, whereas most of the humanities and social sciences have comparatively little. Thus, the politics comes into play primarily through interaction with outside donors with agendas. That's how you get virulently anti-Israel attitudes in Middle-Eastern studies, due to funds from rich Saudis, and Israel studies as a subject about as ridiculously biased in the other direction, for the same reason. Political problems in the sciences are rare.

The adoption of the subsidy discourages market participants from finding new ways to capture the externality.

We have theorems and a lot of empirical evidence about how externalities interact with markets. If you think there's something wrong with that vast body of literature, feel free to point it out.

This will also incentivise rent-seeking elsewhere (if the academics are successful in asking for a subsidy, it encourages the farmers)

Is this a serious argument?

The rent-seeking will also have deadweight costs (e.g. academics spending lots of time writing grant proposals, taxpayers having to organise to prevent themselves getting robbed blind)

Yes, grant proposal writing is annoying and often a waste of time. Question: What fraction of taxpayer money is going to academic research?

Replies from: Salemicus
comment by Salemicus · 2012-11-24T01:27:19.975Z · LW(p) · GW(p)

No. We can make such estimates by looking at how helpful basic research was in the past.

At this point I think I have to cite "The Use of Knowledge in Society" (Hayek, 1945).

Sure. How much?

Where I live, we spend approx $4bn per year (0.64% of GDP) on state-funded research (note that this figure is conservative because it doesn't include the way that higher education funds get siphoned off into research). Conservatively then, let's say $1bn in deadweight loss annually, just in this country - and our state-funded research is low compared to most OECD countries. If we extrapolate this figure to the world economy, we get a deadweight loss of approx $127bn annually, just due to government research spending. That's a lot of bednets.
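To make the extrapolation explicit, here is the arithmetic as a sketch: the 25% deadweight ratio comes from the $1bn-on-$4bn figure above, and the world-GDP value is back-solved to reproduce the $127bn, as an assumption rather than a sourced figure.

```python
# Reconstruction of the extrapolation above. Assumptions: the deadweight
# ratio is the quoted $1bn loss on $4bn of spending (25%); the world GDP
# figure (~$79tn) is back-solved to match the quoted $127bn, not sourced.
local_spend = 4e9                        # $4bn/year of state-funded research
share_of_gdp = 0.0064                    # 0.64% of GDP
local_gdp = local_spend / share_of_gdp   # implied local GDP: ~$625bn

deadweight_ratio = 1e9 / local_spend     # 0.25

world_gdp = 79.4e12                      # assumed (back-solved)
world_deadweight = world_gdp * share_of_gdp * deadweight_ratio

print(f"${world_deadweight / 1e9:.0f}bn per year")  # ~$127bn
```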

The areas where politics plays a heavy role are actually the areas with the least government funding... political problems in the sciences are rare.

I am not talking about partisan clashes. I am talking about money being spent on worthless projects because they seem cool or win votes. NASA has a budget of almost $18bn!

Is this a serious argument?

Of course it's a serious argument - subsidizing one group of rent-seekers encourages others. I am of course being a little facetious in the sense that both the academics and the farmers already have their snouts deep in the trough.

Replies from: JoshuaZ
comment by JoshuaZ · 2012-11-24T01:42:27.795Z · LW(p) · GW(p)

At this point I think I have to cite "The Use of Knowledge in Society" (Hayek, 1945).

Sorry, I'm not following. You are citing Hayek to argue what here?

Sure. How much?

Where I live, we spend approx $4bn per year (0.64% of GDP) on state-funded research (note that this figure is conservative because it doesn't include the way that higher education funds get siphoned off into research). Conservatively then, let's say $1bn in deadweight loss annually, just in this country - and our state-funded research is low compared to most OECD countries. If we extrapolate this figure to the world economy, we get a deadweight loss of approx $127bn annually, just due to government research spending. That's a lot of bednets.

Ok. So we have less than 1% of GDP going to state-funded research. And where is that going to go?

I am talking about money being spent on worthless projects because they seem cool or win votes. NASA has a budget of almost $18bn!

Projects seeming "cool" is a very different claim than political rent-seeking. In this case though, looking at the overall NASA budget isn't very helpful: First, much of that budget is not going to what would be considered academic research. Second, you are talking about the space program of one of the world's largest economies, so the total cost is a misleading metric. Third, technologies developed by the US space program (especially GPS, communication satellites and weather satellites) have had large-scale world-changing impact.

Of course it's a serious argument - subsidizing one group of rent-seekers encourages others.

It often doesn't, and the case you've picked is a really good one. In the US, many of the people getting farm subsidies are rural and, if anything, anti-ivory-tower. A large fraction would probably be turned off of the idea of government subsidies if it was compared to what those East Coast intellectuals were doing. (I'm engaging in some broad brush strokes here, obviously, but some people like this do exist.) This sort of thing is connected to why many groups (including farmers) have tried to get their money through tax breaks rather than direct subsidies. Of course, from an economic perspective, tax expenditures are identical to subsidies. But people don't like to think of themselves as getting handouts, so they prefer tax breaks (at least in the US).

By the way, my point earlier about only a small fraction of tax money going to academic research was (to be clear) about the claim that academic research would necessitate tax policy watchdog groups.

Replies from: Salemicus
comment by Salemicus · 2012-11-24T12:01:46.511Z · LW(p) · GW(p)

Sorry, I'm not following. You are citing Hayek to argue what here?

That no central planner can know how much "ought" to be spent on research.

Ok. So we have less than 1% of GDP going to state-funded research. And where is that going to go?

I don't know what people would spend their own money on. That's the whole point.

Projects seeming "cool" is a very different claim than political rent-seeking.

Yes, which is why I made distinct points. One is the problem of rent-seeking, but the point you are responding to there is about misallocation.

my point earlier about only a small fraction of tax money going to academic research was (to be clear) about the claim that academic research would necessitate tax policy watchdog groups.

Oh, every group of rent-seekers bleeding the polity dry claims that they've only made a small nick, so there's no need to worry. Meanwhile we die of a thousand cuts. Are academics worse rent-seekers than (say) teachers? Obviously not. But the opportunity cost is probably higher, because they are far more likely to be able to do something productive.

Replies from: JoshuaZ
comment by JoshuaZ · 2012-11-24T14:27:10.329Z · LW(p) · GW(p)

That no central planner can know how much "ought" to be spent on research.

Since no one is arguing for complete central planning, I don't see how this is relevant.

I don't know what people would spend their own money on. That's the whole point.

You are missing my point; maybe I should be more explicit: you have a tiny portion of GDP going to research, and most of those resources go back into the economy.

Oh, every group of rent-seekers bleeding the polity dry claims that they've only made a small nick

Missing the point. You claimed that academics getting tax money for research necessitated the creation of taxpayer watchdog groups. The point is that since there are much larger interest groups getting much more money who are much more effectively organized, the watchdog groups will be necessary no matter what.

comment by Douglas_Reay · 2012-11-21T19:56:59.322Z · LW(p) · GW(p)

Loss of useful knowledge (does not appear to apply here, but disputed by JoshuaZ)

Since we don't have a full list of which books were in the library, let alone a list of which ones the library had the only copy of, how can you have any certainty that none of the lost books contained any useful knowledge?

Replies from: Bugmaster, Salemicus
comment by Bugmaster · 2012-11-21T20:02:53.600Z · LW(p) · GW(p)

To be fair, he didn't say "does not apply here" with certainty; he said "does not appear to apply here", implying some degree of uncertainty.

Replies from: army1987
comment by A1987dM (army1987) · 2012-11-22T15:39:03.970Z · LW(p) · GW(p)

In the absence of substantial evidence either way, my prior probability assignment that none of the knowledge stored in a major library is useful is very small.

comment by Salemicus · 2012-11-21T22:00:08.281Z · LW(p) · GW(p)

As Bugmaster says, I don't claim certainty.

But we can look for evidence. Are there any technologies that go missing after the destruction of the library (similar to the loss of Greek fire that happened 600 years later)? Or perhaps an industrial regress that might indicate missing knowledge? And if there is no evidence of any such, how should we update?

Replies from: JoshuaZ
comment by JoshuaZ · 2012-11-21T22:05:55.883Z · LW(p) · GW(p)

Are there any technologies that go missing after the destruction of the library (similar to the loss of Greek fire that happened 600 years later)?

Well, it is hard to say, since direct technologies leave more of an archaeological record than, say, math texts, which are essentially technologies themselves. But one can't help but notice that the Antikythera mechanism predated the library, and we didn't have anything like it again until the 1400s. However, this is imperfect in that a lot of wars and problems occurred between the height of the Greeks and the burning of Alexandria, so pinning this sort of thing on the burning is tough.

comment by Peterdjones · 2012-11-23T20:54:09.315Z · LW(p) · GW(p)

Loss of useful knowledge (does not appear to apply here, but disputed by JoshuaZ)

  1. Loss of intrinsically valuable knowledge.

Money is instrumental.

Academics now forced to get useful job and contribute to society (important)

Are artists useful? Musicians? Entertainers?

Destruction of contentious material likely to cause civil unrest (important)

Interesting definition of "good". There is a much better solution, which is a live-and-let-live culture.

richer, more peaceful and more stable under the Umayyads

But, presumably, dumber. Pig-happy might be OK for you, but count me out.

Replies from: Sniffnoy
comment by Sniffnoy · 2012-11-23T22:28:16.511Z · LW(p) · GW(p)

Yes, I'm surprised so many people are trying to argue that the lost knowledge would have been useful. This may be true, but is it really relevant? Well, apparently it is to Salemicus.

Although it's worth noting here that going by the other threads Salemicus is using an unusual notion of "useful". In fact it's specific enough that it allows us to answer the question

Are artists useful? Musicians? Entertainers?

with "yes", "yes", and "yes". This is sufficiently different from the ordinary notion of "useful" that I suspect a different word should be used for clarity. Maybe "valuable"? (I mean, I'd say that the knowledge is valuable in the ordinary sense regardless of whether it's valuable in the sense I've proposed -- because, you know, terminal values -- but from here on out I'm talking about the sense I've proposed, not the ordinary sense.)

Which raises the point -- we'd certainly consider that valuable now. I.e., I think you could get people to pay quite a lot to recover whatever lost knowledge was burnt, even if it's not very useful in the ordinary sense. Should this be counted? I.e., if we're going to measure the value of something by how much people are willing to pay for it, as Salemicus does, should we count that only in its own time, or cross-temporally? The former is the usual way of doing things, but Salemicus hasn't specified, and I have to wonder if there's something to the latter way of thinking, even if it's impossible to compute...

comment by FluffyC · 2012-11-24T01:00:16.114Z · LW(p) · GW(p)

Surely a consequentialist could come to a conclusion about book-burning being bad and then write an outraged comment about it--the potential negatives in the long-term of the burning of such a library are debatable but the potential positives in the long-term are AFAICT non-existent. Such a catastrophic failure of cost-benefit analysis would be something a consequentialist could in fact be quite outraged about.

Incidentally,

Compared to other events of the time, piddling for human "utility."

...it seems self-evident to me that this is not in any way an interesting or meaningful comparison to ask people to make (ETA: in light of the above, anyway). It's "good" rhetoric but seems to be abysmal rationality; it's a "there are starving children in Africa, eat your peas" argument.

comment by Bugmaster · 2012-11-21T19:43:04.822Z · LW(p) · GW(p)

A consequentialist would ask, with an open mind, whether burning the libraries lead to good or bad consequences. A virtue ethicist would express disgust at the profanity of burning books.

Despite being a consequentialist (*), I believe that the act of burning libraries carries such massive disutility that it is almost always the wrong thing to do. I can elaborate on my reasoning if you're interested, but my main point is that consequentialism and virtue ethics can sometimes come to the same conclusion; this does not invalidate either philosophy.

(*) Or as close to being one as I can accomplish given my biases.

comment by Peterdjones · 2012-11-23T20:46:34.919Z · LW(p) · GW(p)

Compared to other events of the time, piddling for human "utility."

Since there is no gain whatsoever, it is still negative consequentially.

Replies from: JoshuaZ
comment by JoshuaZ · 2012-11-23T21:07:20.795Z · LW(p) · GW(p)

This doesn't seem that relevant. If you look above you'll see that Salemicus's primary argument concerning the library wasn't that it was necessarily a good thing to do, but that it wasn't severe compared to much worse things that happened in the same time period. His other argument about the role of academics was a subthread of that.

Replies from: Peterdjones
comment by Peterdjones · 2012-11-23T21:10:03.876Z · LW(p) · GW(p)

How do you quantify the worth of knowledge when you don't know what it is?

Replies from: JoshuaZ
comment by JoshuaZ · 2012-11-23T21:35:04.665Z · LW(p) · GW(p)

How do you quantify the worth of knowledge when you don't know what it is?

With difficulty. If you read the rest of this thread, specific examples based on what is suspected to have been at Alexandria have been discussed. One can make reasoned guesses based on what was known and what was referenced elsewhere as a studied topic. See the earlier discussion about Diophantus (in the same subthread) for example.

Replies from: Peterdjones
comment by Peterdjones · 2012-11-23T21:45:39.250Z · LW(p) · GW(p)

Ok. The comment wasn't directed at you. It's just another of the many problems of trying to evaluate everything by monetary worth.

comment by thomblake · 2012-11-20T19:47:43.156Z · LW(p) · GW(p)

That one was probably an accident. Caesar had to burn his own ships so the enemy couldn't keep him from using them, and the fire got out of control.

comment by [deleted] · 2012-11-20T20:21:57.353Z · LW(p) · GW(p)

Since civilizations prior to widespread literacy (and many after it) routinely destroyed the records and lore of their rivals, we should expect that the first X that we have records of is quite certainly not the first X that existed, especially if its lore makes a big deal of claiming that it is.

This would work if conquerors were also effective at destroying archeological evidence. But they seem not to have been, and a complete archeological record would just settle the question of which city was really first, given some appropriate and agreed upon standard. And that city would be the true dawn.

comment by JoshuaZ · 2012-11-19T16:19:44.980Z · LW(p) · GW(p)

Also, this piece seems to be of high enough quality and of general interest that it probably makes sense to move it to main.

Replies from: Douglas_Reay
comment by Douglas_Reay · 2012-11-19T17:25:22.878Z · LW(p) · GW(p)

Ok, moved.

comment by Vladimir_Nesov · 2012-11-19T16:45:07.939Z · LW(p) · GW(p)

There is no cause to suppose, even if the human genome 100,000 years ago had the full set of IQ-related-alleles present in our genome today, that they would have developed civilisation much sooner.

The original point was not about genomes; it was about expressed IQ. Suppose the reasons for the absence of the currently normal IQ in the past were environmental. If I understand correctly, your argument in particular suggests that it's the environmentally-mediated increase in IQ that might have enabled the rise of civilization (in this interglacial period). Then it's still the case that the present IQ level is about as low as it can be.

The distinction your argument makes seems to be about the reason for the recent rise in IQ (environmental, not genetic, at least not via changes in genes directly related to brains), not about the level of expressed IQ necessary to spark a technological civilization.

Replies from: gwern, JoshuaZ
comment by gwern · 2012-11-19T17:54:46.311Z · LW(p) · GW(p)

Yes, I think this would be my past self's reply (I don't remember making that particular argument, but it does sound like something I would say). Even if we granted that IQ-linked alleles were identical 100kya, we still wouldn't have to grant that IQ was the same! We know of many powerful environmental effects on phenotypic IQ: to give a recent example of interest to me, just iodine & iron deficiency will cost on average 15 IQ points. One might expect random diseases and parasites to cost even more. (And remember that aside from the effect on the mean, the tails of the bell curve are going to be affected even more outrageously.)

And we know IQ connects in all sorts of ways to economic attitudes, activity, growth, etc, with patterns indicative of bidirectional causality; see http://lesswrong.com/lw/7e1/rationality_quotes_september_2011/4r01

More importantly, we have the equivalent of natural experiments on the importance of national IQ averages: African countries. There are countries where the limited samples suggest particularly low IQs; these are also the countries where economic growth is least, and anecdotally, charitable efforts like installing new infrastructure fail frequently.

Logically, it should be easier to leapfrog or catch up in growth based on existing technologies & methods, and this explains things like why it could take hundreds of millennia to go from apes to sub-Saharan Africa levels of wealth but South Korea could then go from sub-Saharan levels to industrialized democracy in something like 40 years. So, if the African countries with the least average intelligence can hardly maintain the existing infrastructures or per capita wealth, then this doesn't bode well for the prospects of them taking off, and is perfectly consistent with the observation of ~90 millennia of stagnation. (Now there's a 'great stagnation' for you!)

Replies from: JulianMorrison, JoshuaZ
comment by JulianMorrison · 2012-11-24T11:23:42.626Z · LW(p) · GW(p)

The trouble with epigenetic IQ drop as a theory is that hunter-gatherers were (IIRC, anthropologists please confirm) better fed, taller and healthier than early farmers. This was due to a combination of a better diet (not a monoculture of one or two staples) and, on the farmers' side, the beginnings of the peasant/ruler classes and taxation of surplus. You would expect the farmers to be the ones with the epigenetically lowered IQ.

Replies from: gwern
comment by gwern · 2012-11-25T00:23:58.819Z · LW(p) · GW(p)

I don't think 'epigenetic' means what you think it means. But anyway: yes, there is anthropological evidence of that sort (covered in Pinker's Better Angels and in something of Diamond's, IIRC), and height and mortality are generally believed to correlate with health, and presumably then with IQ.

The problem is that this is a problem for all theories of civilization formation: if early farming was so much worse than hunter-gathering that we can tell just from the fossils, then why did civilization ever get started? There must have been something compelling or self-sustaining - network effects or something - about it.

So, suppose it takes less IQ to maintain a basic civilization than to start one from scratch (as I already suggested in my Africa example), and suppose civilization has some sort of self-reinforcing property where it will force itself to remain in existence even when superior alternatives exist (as it seems it must, factually, given the poorer health of early farmers/civilizationers compared to hunter-gatherers sans civilization).

Then what happened was: over a very long period of time hunter-gatherers slowly accumulated knowledge or tools and IQs rose from better food or perhaps sexual selection or whatever, until finally relatively simultaneously multiple civilizations arose in multiple regions, whereupon the farmer effect reduced their IQ but not enough to overcome the self-sustaining-civilization effect. And then history began.

Replies from: RomeoStevens, JulianMorrison, Nornagest
comment by RomeoStevens · 2012-11-26T00:36:02.014Z · LW(p) · GW(p)

if early farming was so much worse than hunter-gathering that we can tell just from the fossils, then why did civilization ever get started?

and why did European settlers in the Americas, when presented with the direct juxtaposition of the hunter-gatherer lifestyle with their own, often 'go native'?

Farming solves military coordination problems, allowing farming societies to conquer their neighbors. It would be a mistake to think that civilizations were successful because they provided a better quality of life for their denizens. We should expect the most successful civilizations to be those able to devote the largest amount of wealth towards expansion.

Replies from: gwern
comment by gwern · 2012-11-26T01:14:46.857Z · LW(p) · GW(p)

and why did European settlers in the Americas, when presented with the direct juxtaposition of the hunter-gatherer lifestyle with their own, often 'go native'?

Uh, going native is exactly what the vein of thought is predicting. The question is not why did some go native, but why didn't all the rest?

Farming solves military coordination problems that allow them to conquer neighbors. It would be a mistake to think that civilizations were successful because they provided a better quality of life for their denizens. We should expect to see the most successful civilization to be that which is able to devote a larger amount of wealth towards expansion.

An old suggestion, but just as old is the point that civilizations routinely fail at military matters: it's a trope of history going back at least as far as Ibn Khaldun that amazingly often the barbarians roll over civilization, and conquer everything, only to fall victim to the next barbarians themselves.

Replies from: Nornagest, RomeoStevens
comment by Nornagest · 2012-11-26T07:12:32.301Z · LW(p) · GW(p)

it's a trope of history going back at least as far as Ibn Khaldun that amazingly often the barbarians roll over civilization, and conquer everything, only to fall victim to the next barbarians themselves.

That does happen a lot, but the barbarians in question tend to be nomadic pastoralists, very rarely foragers. About the only exceptions I can think of happened in immediately post-contact North America, and that was a fantastically turbulent time culturally -- between the introduction of horses and 90+% of the initial population getting wiped out by disease, pretty much everything would likely have been up for grabs.

I don't know offhand how healthy or long-lived pastoralist cultures tended to be by comparison with sedentary agriculturalists. I do know that they generally fell somewhere between foragers and agriculturalists in terms of sustainable population density.

comment by RomeoStevens · 2012-11-26T03:47:04.109Z · LW(p) · GW(p)

why didn't all the rest?

Insufficient opportunity and brainwashing.

Barbarian hordes consume great amounts of the fruits of civilization and destroy the infrastructure that created it in their wake. They are self limiting.

Replies from: gwern, Nornagest
comment by gwern · 2012-11-26T04:00:20.353Z · LW(p) · GW(p)

Barbarian hordes consume great amounts of the fruits of civilization and destroy the infrastructure that created it in their wake.

What civilization-wide infrastructure did the Mongols destroy in the process of creating the greatest land empire in history which then doomed them and limited their spread?

Replies from: RomeoStevens
comment by RomeoStevens · 2012-11-26T04:33:14.499Z · LW(p) · GW(p)

The Mongols were emphatically not barbarians; they introduced systems that were in most cases improvements over what they destroyed.

Replies from: Oligopsony
comment by Oligopsony · 2012-11-26T04:49:06.619Z · LW(p) · GW(p)

I suspect the connotations of "barbarian" are getting in the way here. The Mongols were highly mobile pastoralists and raiders; this did not get in the way of setting up sophisticated and creative institutions. (Nor did the latter undo the considerable net loss in population and extent of cultivation that accompanied the Mongol conquests.)

comment by Nornagest · 2012-11-26T07:23:11.824Z · LW(p) · GW(p)

Insufficient opportunity and brainwashing.

I think this is basically correct, but I'd express it in terms of cultural inertia rather than brainwashing. It's not (usually) part of a planned campaign of retention, it's just that learning a completely different culture and language and set of survival skills is a huge risk and would take a huge amount of effort: it might be attractive in marginal cases, but most people would likely feel they had too much to lose. Particularly if the relationship between the cultures is already adversarial.

comment by JulianMorrison · 2012-11-26T12:23:37.330Z · LW(p) · GW(p)

There are probably pure-win half steps, like the kind of farming where you plant in the seasonal area you always come back to at a certain time of the year, as you follow the herds, or the kind where game is so plentiful you can afford to settle, hunt, and dabble in farming vegetables beside your settlement (such as in the American Pacific Northwest). Farming seems to be tied to settlement. Farms stabilize settlements; settlements nurture farms. And farms domesticate crops, making farming easier and supporting a larger population.

In the Mesopotamia region, there were settlements in the rainy hills where the local wildlife was conveniently easy to domesticate but farming was hard. These moved down, centuries later, into the rainless flood plain between the Tigris and Euphrates, where only group effort could ensure irrigation, and group surpluses were needed to stave off bad harvests, but farming worked well. The "Ubaid period" (neolithic) was pretty egalitarian, but centralization emerges in the "Jemdet Nasr period" and kingship in the "early dynastic period" (Sumerian for king is "lugal", "lu"=man, "gal"=big, and initially it seems to have been just a word for "boss"). With centralization and kingship, empires follow fast. Civilization co-existed with non-farming groups, but civilization tempts even non-farmers to switch from hunting to raiding. Sumer got sacked repeatedly by nearby tribes.

I am thinking there was a demographic transition point, probably quite early, when the number of people that could be kept alive - not as healthy, but alive - by farming or equally by raiding the surplus of farmers, exceeded the carrying capacity of the local game and wild plants. At that point walking away from the fields was not possible. Therefore agriculture has a ratchet effect.

comment by Nornagest · 2012-11-26T01:26:49.891Z · LW(p) · GW(p)

if early farming was so much worse than hunter-gathering that we can tell just from the fossils, then why did civilization ever get started? There must have been something compelling or self-sustaining or network effects or something about it.

I tend to think of this by analogy with gene-centered evolution. Just as natural selection selects for genes which are particularly good at reproducing themselves without any special regard for the well-being of their carriers, cultural evolution selects for similarly potent memetic systems without any particular regard for the well-being of the people propagating them.

From skeletal evidence forager lifestyles seem on average a lot healthier, but they also require much lower population densities. You can fit a lot more people per unit area with an agriculturalist lifestyle: if skeletal proxies are to be believed they'll individually be weaker, sicker, and shorter-lived, but they'll be populous enough that the much rarer foragers are going to have trouble displacing them. Cycle that over a few thousand years and eventually civilization ends up ruling the world, with the few remaining foragers pushed into little enclaves where agriculture is unsustainable for one reason or another. We'd occasionally see defections from one lifestyle to the other, but historically they don't seem very common.

The tricky part of this model seems to be figuring out how forager populations self-limit without lowering quality of life to agriculturalist levels. I'm not anthropologist enough to have a definitive answer to this, but I'd speculate that forager resource acquisition isn't as linearly dependent on population as agriculture is: put too many people in a given area and you end up scaring off game, overconsuming food plants, et cetera. Over time I'd expect this to inform territorial behavior and intuitions about optimal group size. Violence is probably also part of the answer.

Replies from: Vaniver
comment by Vaniver · 2012-11-26T01:43:30.069Z · LW(p) · GW(p)

We'd occasionally see defections from one lifestyle to the other, but historically they don't seem very common.

Or, at least, they end up becoming irrelevant for the same reasons that the agriculturalists won in the first place. If Roanoke disappeared because all of the settlers decided to ditch the farm and live as Indians, there were still way more Europeans coming than the few Europeans that defected, and the new colonists could support a much higher population density than the ones that went native.

comment by JoshuaZ · 2012-11-19T19:00:35.579Z · LW(p) · GW(p)

How much of the failure of the African countries is due to their average lower intelligence and how much is that a consequence of other systemic problems (e.g. lack of institutions) that also make the maintenance of modern technologies difficult?

Replies from: gwern, Vaniver, Izeinwinter
comment by gwern · 2012-11-19T19:09:04.635Z · LW(p) · GW(p)

In graphs of interacting cause & effects, that's not necessarily the best way to ask that question. Because IQ is predictive at least of general economic growth (but also increased by growth, 'bidirectional'), those systemic problems can be perfectly real and also rooted in lower IQs.

comment by Vaniver · 2012-11-19T19:39:11.028Z · LW(p) · GW(p)

How much of the failure of the African countries is due to their average lower intelligence and how much is that a consequence of other systemic problems (e.g. lack of institutions) that also make the maintenance of modern technologies difficult?

I get the impression that "average lower intelligence" is a big cause of systemic problems, like lack of institutions. I'm reminded of Yvain's example that, in Haiti, they could not understand sorting things numerically or alphabetically. This meant bureaucratic institutions were basically worthless: "where is your file? Let me look at all of the files and try to find yours."

Edit: Also, see this paper.

Replies from: army1987, CCC
comment by A1987dM (army1987) · 2012-11-20T00:36:40.666Z · LW(p) · GW(p)

I was going to say, “well, maybe that's a failure of education, not of intelligence”, but...

Not just "they don't want to do it" or "it never occurred to them", but after months and months of attempted explanation they don't understand that sorting alphabetically or numerically is even a thing. [emphasis added]

Okay, I'm shocked. (It might still be something that people with IQ between (say) 70 and 90 can learn if they're taught it in elementary school but couldn't ever learn as adults if they haven't, but the “privileging the hypothesis” warning light in my brain is on.)

Replies from: TorqueDrifter
comment by TorqueDrifter · 2012-11-20T01:15:52.625Z · LW(p) · GW(p)

Tangentially, and specifically because I followed the link from LessWrong, this jumped out at me:

"Haitians have a culture of tending not to admit they're wrong[.]"

(Pretend that this sentence is a list of reasonable caveats about what to conclude from that.)

comment by CCC · 2012-11-21T08:01:07.442Z · LW(p) · GW(p)

I'm pretty sure that that's a failure of education, not of intelligence. Education has the best effect at a young age, when habits are formed; it's a lot harder to educate someone later, unless that someone really wants to be educated.

Looking at that specific example, I can see why someone who is lazy, and unfamiliar with alphabetic sorting, might not want to try it. Mainly, that step one would be to figure out this whole 'alphabet' thing and memorise what order things go in (a significant neural effort, done now; somewhat easier if already literate, but note that 'literate' does not necessarily mean 'familiar with the order of the alphabet'); step two would be to sort in alphabetical order everything that's already in the office (a significant physical effort, done now); step three would be to actually bother to put new things in order instead of just toss them in a random drawer (an ongoing effort).

So much easier to just pretend to understand less than one does (admittedly, it does mean a bit more time searching for a piece of paper when someone asks, but that's a minor task, and won't have to be done immediately in any case).

Replies from: Viliam_Bur, Vaniver
comment by Viliam_Bur · 2012-11-22T09:35:57.803Z · LW(p) · GW(p)

You can get some benefit even without learning the order of the alphabet. If you divide things into groups by their first letter, even if the groups are ordered randomly, and the things within one group are ordered randomly, the search time should be at least ten times shorter.

As a bonus, you can switch to this system gradually. Create empty groups for each first letter, and treat everything else as an "unsorted" group. When searching, first look in the group for the given letter, then in the "unsorted" group. When finished with a document, always put it into the group for its first letter. The system will sort itself gradually.
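A minimal sketch of that scheme in code (my own illustration of the idea, with made-up names; nothing here is from the thread):

```python
from collections import defaultdict

letter_groups = defaultdict(list)            # one pile per first letter
unsorted_pile = ["Smith", "Abara", "Jonas"]  # the legacy backlog

def find(name):
    """Check the letter group first, then fall back to the old pile."""
    group = letter_groups[name[0].upper()]
    if name in group:
        return name
    if name in unsorted_pile:
        unsorted_pile.remove(name)  # migrate files as they are touched,
        group.append(name)          # so the system sorts itself over time
        return name
    return None

def file_new(name):
    """New documents go straight into their letter group."""
    letter_groups[name[0].upper()].append(name)
```

Even with the groups and their contents unordered internally, lookups now scan only a small fraction of the files once the migration has mostly happened.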

Replies from: CCC
comment by CCC · 2012-11-22T14:24:46.306Z · LW(p) · GW(p)

You are correct. This methodology will work, as long as we assume that no-one will put a piece of paper in the wrong (apparently sorted) file.

Was it ever explained to the Haitians in this way, though?

comment by Vaniver · 2012-11-21T16:00:38.542Z · LW(p) · GW(p)

I'm pretty sure that that's a failure of education, not of intelligence.

By this you mean that you think P(Can't understand sorting | low education, moderate intelligence)>P(Can't understand sorting | moderate education, low intelligence)?

If you had said "this might be the result of low energy," then I would have agreed that's a likely partial explanation, as you argued for that fairly well and it fits the rigors of a tropical climate. But I'm concerned that you're conflating education and energy.

Replies from: CCC
comment by CCC · 2012-11-21T18:20:55.193Z · LW(p) · GW(p)

No, I mean that I think that P(low education|group of humans who can't understand sorting)>P(low intelligence|group of humans who can't understand sorting).

The 'group' part is important; while education is often constant or near-constant among a community, intelligence is often not; thus, something that is true for an entire group is more likely a result of education than intelligence. Similarly, 'humans' is an important word, because I know that many humans are capable of sorting, and thus there is no species barrier.

Having said that, "low energy" is almost certainly also a contributing factor.
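The group argument above can be made quantitative with a toy likelihood comparison (my own sketch, with numbers invented purely for illustration): if low intelligence acted on individuals independently while lack of education acts on the whole office at once, the probability of everyone failing differs enormously between the two hypotheses.

```python
# Toy numbers, assumed for the sketch: under a "low intelligence"
# hypothesis each of n office workers independently fails to grasp
# sorting; under a "no relevant education" hypothesis the failure is
# a shared property of the whole group.

n = 20               # people in the office (assumed)
p_fail_each = 0.5    # assumed per-person failure rate if the cause is individual
p_group_individual_cause = p_fail_each ** n  # all n fail independently
p_group_shared_cause = 0.9                   # assumed group failure rate if shared

# Likelihood ratio in favour of the shared (educational) explanation:
print(f"{p_group_shared_cause / p_group_individual_cause:.2e}")  # ~9.44e+05
```

Because independent individual failures multiply, a whole-group failure is astronomically more likely under a shared cause, which is exactly why the 'group' part of the argument matters.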

comment by Izeinwinter · 2013-01-04T22:36:20.830Z · LW(p) · GW(p)

Childhood malnutrition reduces IQ. Major childhood trauma reduces IQ. No childhood education makes you massively unlikely to grasp formal logic. Etc., etc. Most third-world countries are profoundly crippling places to grow up. The good news is that any such place that manages not to be a circle of hell for a straight 20-year stretch should see its economy do a hard takeoff as a generation that was not lobotomized reaches adulthood.

comment by JoshuaZ · 2012-11-19T18:59:17.542Z · LW(p) · GW(p)

If I understand correctly, your argument in particular suggests that it's the environmentally-mediated increase in IQ that might have enabled the rise of civilization (in this interglacial period).

This doesn't seem to be all that Douglas_Reay is arguing. There's also an aspect of his argument about the right environmental conditions being available for an extended period of time, along with the slow development of the right technologies for society to take off. See in particular these two paragraphs:

Good grain storage seems to have developed incrementally starting with crude stone silo pit designs in 9500 BCE, and progressing by 6000 BCE to customised buildings with raised floors and sealed ceramic containers which could store 80 tons of wheat in good condition for 4 years or more. (Earthenware ceramics date to 25,000 BCE and earlier, though the potter's wheel, useful for mass production of regular storage vessels, does date to the Ubaid period.)

The main key to the timing of the transition from village to city seems to have been not human technology but the confluence of climate and biology. Jared Diamond points the finger at the geography of the region - the fertile crescent farmers had access to a wider variety of grains than anywhere else in the world because that area links and has access to the species of three major land masses. The Mediterranean climate has a long dry season with a short period of rain, which made it ideal for growing grains (which are much easier to store for several years than, for instance bananas). And everything kicked off when the climate stabilised after the most recent ice age ended about 12,000 years ago.

comment by Desrtopa · 2012-11-19T15:20:20.051Z · LW(p) · GW(p)

If the gene for the synthesis of docosahexaenoic acid arose 80kya, and the current interglacial period began 12kya, that still leaves four thousand years between the end of the glacial period and the beginning of city-based civilization, which, keep in mind, is a long time.

If the civilization developments followed within a hundred years or so of the necessary biological and environmental factors coming into place, I wouldn't be so skeptical that our intelligence already exceeded the minimum necessary to produce those developments. But we already had domesticated grazing animals thousands of years before the foundation of Ur, and grains earlier than that. Don't forget that when we're dealing with cultural rather than biological evolution, a millennium is no longer a relative eyeblink.

Replies from: John_Maxwell_IV, John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2012-11-20T12:11:50.228Z · LW(p) · GW(p)

Humans are relatively conformist, and we often have a hard time translating abstract/revolutionary ideas into practice. It seems likely that many humans had ideas for things resembling civilization, or things that could've led to the development of a civilization, before the first actual civilization, in the same way more people dream about starting businesses than actually start businesses.

Paul Graham seems to think that local culture plays a huge role in startup success. Now consider that even the cultures Paul Graham considers pretty bad are still American city cultures, and America has a reputation for individualism, rags-to-riches success, etc., and that's all on a foundation of enlightenment values related to progress, questioning authority, and so on. And we've got a long and storied history of society changing on a large scale, within our lifetimes even.

So yeah, stagnant cultures are not necessarily being held back by lack of intelligence. It could be the standard akrasia/agency-failure type stuff that we're still struggling with today. (Arguably something similar is going on with people's failure to appreciate the possible magnitude of human-level AGI--it's just way too bizarre relative to historical norms for most of us to take it seriously.)

comment by John_Maxwell (John_Maxwell_IV) · 2012-11-20T11:48:39.919Z · LW(p) · GW(p)

Still, if you figure that our intelligence was increasing in a linear fashion, it seems slightly unlikely that it would trip over the civilization-making threshold during one of the relatively shorter interglacial periods. So I think we probably bought ourselves at least a little head start because of the ice age thing.

By the way, I wonder how well racial IQ correlates with civilization formation. Are people of Sumerian descent unusually smart, for instance? If not, maybe civilization formation has more of an element of serendipity than we're giving it credit for? Arguably sticking to a hunter-gatherer lifestyle might actually be smarter than forming a civilization in the short run.

Replies from: gwern
comment by gwern · 2012-11-20T19:33:56.288Z · LW(p) · GW(p)

By the way, I wonder how well racial IQ correlates with civilization formation. Are people of Sumerian descent unusually smart, for instance?

I have no idea how we would check this, unfortunately, short of a lot of digging up extremely old bones. The area that is now Sumeria has been swept by invasion after invasion after invasion from pretty much every direction, and over 4000 years there'd be a lot of drift even if there were no invasions and no immigration. IQ being highly polygenic makes matters worse: a few generations of dysgenic selection (extensive cousin marriage?) could wipe out much of the faint signal one is looking for, and the poor quality DNA from digs might have the same issue.

comment by mrglwrf · 2012-11-20T17:22:45.377Z · LW(p) · GW(p)

Historical quibble: in "The First City" section, you seem to be partially confusing Ur with Uruk. Uruk is generally regarded as the first city in Sumeria, during the eponymous Uruk period (4000-3100 BC). It is also generally believed to be the center of the "Uruk phenomenon", during which cuneiform writing and a number of other features of Mesopotamian civilization were developed. Ur was the capital of the Neo-Sumerian Ur III empire c. 2000 BC, which built the Great Ziggurat of Ur shown in the picture.

Replies from: Douglas_Reay
comment by Douglas_Reay · 2012-11-20T20:27:22.567Z · LW(p) · GW(p)

Yep, they were both big and in the same area around the same time. I gave the tip of the hat to Ur being the flashpoint because we can document, via the spread of the Code of Ur-Nammu, its influence upon others. But it could be argued either way.

Replies from: mrglwrf
comment by mrglwrf · 2012-11-23T01:41:55.212Z · LW(p) · GW(p)

In the earlier period, Uruk was in fact substantially larger, thus the quibble. Marc Van De Mieroop, The Ancient Mesopotamian City, p.37:

But many aspects of Uruk show its special status in southern Mesopotamia. Its size greatly surpasses that of contemporary cities: around 3200 it is estimated to have been about 100 hectares in size, while in the region to its north the largest city measured only 50 hectares, and in the south the only other city, Ur, covered only 10-15 hectares. ... And Uruk continued to grow: around 2800 its walls encircled an area of 494 hectares and occupation outside the walls was likely.

comment by Vladimir_Nesov · 2012-11-19T16:36:24.550Z · LW(p) · GW(p)

(Summary:) There is no cause to suppose, even if the human genome 100,000 years ago had the full set of IQ-related-alleles present in our genome today, that they would have developed civilisation much sooner.

(Rhetorical nitpick:) You gave an argument against one such cause. This doesn't mean there aren't other causes, and it's not clear that your argument is decisive.

comment by John_Maxwell (John_Maxwell_IV) · 2012-11-20T01:47:25.791Z · LW(p) · GW(p)

Didn't civilization develop independently in several different places, e.g. the Aztec or Inca civilizations in the Americas?

Replies from: Nornagest
comment by Nornagest · 2012-11-20T02:04:13.377Z · LW(p) · GW(p)

Yeah, the agricultural transition (and resulting centralization of living, development of hierarchical government, etc.) is thought to have happened in about a half-dozen places between around 9000 BC and 1000 BC.

Replies from: Douglas_Reay
comment by Douglas_Reay · 2012-11-20T07:01:08.404Z · LW(p) · GW(p)

Looking at the Americas, we have evidence of cultures with agriculture and pottery, roughly equivalent to Europe's Linear A, going back about 6000 years ago (4000 BCE). We have writing dating back to about 3000 years ago (1000 BCE), though writing was probably delayed because much of its function had earlier been served by quipu (which date back at least to 2600 BCE). This corresponds to the emergence of the first long-term stable cities in the Americas starting at about 1500 BCE and the growth, about 1000 years later, of Teotihuacan, a truly majestic city rivalling ancient Ur in size and influence.

So yes, that is an independent (but later) development of civilization, which I think endorses the idea that once the climate settled down after the interglacial, our species was going to develop civilization on a fairly quick timescale (compared to biological evolutionary timescales), and that it wasn't lack of intelligence holding us up.

comment by Vaniver · 2012-11-19T16:26:23.960Z · LW(p) · GW(p)

Having large mammals available to domesticate, who can provide fertiliser and traction (pulling ploughs and harrows) certainly makes things easier, but doesn't seem to have been a large factor in the timing of the rise of civilisation, or particularly dependent upon the IQ of the human species.

How would we test this? If human IQ matters, it seems like we would need some animal which is in contact with low-IQ humans and higher IQ humans, which the first couldn't tame but the second could. You already link to an example of recent man domesticating the fox, and there's quite a bit of European zebra-taming, though fully domesticating a species takes generations to breed out deleterious traits. The example of deer seems like weak support as well; some strains were somewhat domesticated by northern peoples, but that evidence is only weak to me because it's not clear the economic value of deer was the same across climates.

And it has nothing to do with the brain.

which is a fatty acid required for human brain development.

Er... what?

It seems to me that most IQ-related alleles are "build the brain this way," and so a gene that creates a necessary acid out of whatever you have lying around seems like it's an IQ-related allele. Among those with sufficient diets, there will be no effect, but among those with deficient diets, there will be a positive effect; unless the entire population has sufficient diets, that'll lead to a positive correlation.

If you're making the claim that "processing speed is not the only factor," then sure! The best example of that is neanderthals, who probably were around as good (if not better) at abstract problem-solving, foresight, tool-making, and so on, but contributed only a small percentage of genes to current humans. It's not certain why that's the case yet, but a strong partial explanation is they didn't have trade networks, and so were making the best of local materials while their competitors were able to use superior materials acquired from far away. Another good example is that developed, large civilizations moved northward with agriculture, even though there's strong evidence that IQs are higher among groups that spent significant timescales in colder (i.e. more northern) climates.

But it seems really odd to me to claim that if you dropped current humans into the world at the start of the previous interglacial period with no extra capital besides their genes, you would expect them to take twelve thousand years to reach the state of development we're at now. They've already got neat things like lactase persistence (developed 5-10k years ago), and while their adaptations to modern society might be a handicap during their hunter-gatherer phase, supposing they survive it, that should speed up the progress of their civilization afterward.

Replies from: JoshuaZ
comment by JoshuaZ · 2012-11-19T16:29:41.394Z · LW(p) · GW(p)

The best example of that is neanderthals, who probably were better at abstract problem-solving, foresight, tool-making, and so on

What evidence do we have for this?

Replies from: Vaniver
comment by Vaniver · 2012-11-19T16:39:34.239Z · LW(p) · GW(p)

I'll have to check my original source for that when I get home; I was under the impression it was because their forebrains were larger, but looking now I'm primarily finding claims that their whole brains were larger (which, given their larger body size, doesn't mean all that much).

This looks like the closest thing in the relevant wiki article to my claim:

The quality of tools found at archaeological sites is further said to suggest that Neanderthals were good at "expert" cognition, a form of observational learning and practice acquired through apprenticeship that relies heavily on long-term procedural memory.

but it's also tempered by things that might be evidence the other way. (Neanderthal tools changed little in thousands of years - is that because they found the local optimum early, or because they were bad at innovating?)

[edit] This argument wasn't in the book I thought it was in, so I'm slightly less confident in it. I think there's strong evidence that the primary differential between neanderthals and their successors was social, not mental processing speed / memory / etc., and will edit the grandparent to reflect that.

comment by DuncanS · 2012-11-22T00:25:18.150Z · LW(p) · GW(p)

What is the essential difference between human and animal intelligence? I don't actually think it's just a matter of degree. To put it simply, most brains are once-through machines. They take input from the senses, process it in conjunction with memories, and turn that into actions, and perhaps new memories. Their brains have lots of special-purpose optimizations for many things, and a surprising amount can be achieved like this. The brains are once-through largely because that's the fastest approach, and speed is important for many things. Human brains are still mostly once-through.

But we humans have one extra trick, which is to do with self-awareness. We can to an extent sense the output of our brains, and that output then becomes new input. This in turn leads to new output which can become input again. This apparently simple capability - forming a loop - is all that's needed to form a Turing-complete machine out of the specialized animal brain.

Without such a loop, an animal may know many things, but it will not know that it knows them. Because it isn't able to sense explicitly what it was just thinking about, it can't then start off a new thought based on the contents of the previous one.

The divide isn't absolute, I'm sure - I believe essentially all mammals have quite a bit of self-awareness, but only in humans does that facility seem to be good enough to allow the development of a chain of thought. And that small difference makes all the difference in the world.

Replies from: orthonormal, JoshuaZ
comment by orthonormal · 2012-11-22T06:17:37.167Z · LW(p) · GW(p)

Chimps can suss out recursive puzzles where you have color-coded keys and locks, and you need to unlock Box A to get Key B to unlock Box B to get Key C to unlock Box C which contains food. They even choose the right box to unlock when one chain leads to the food and the other doesn't.

Sorry, there's not a difference of kind to be found here.

Replies from: jsteinhardt, MugaSofer
comment by jsteinhardt · 2012-11-22T20:21:21.610Z · LW(p) · GW(p)

How much training is necessary for them to do this? Humans can reason this out without any training; if the chimps had to be trained substantially (e.g. first starting with one box, being rewarded with food, then starting with two boxes, etc.), then I think this would constitute a difference.

Replies from: army1987
comment by A1987dM (army1987) · 2012-11-22T20:27:22.859Z · LW(p) · GW(p)

Well, one could argue that humans "train" for similar problems throughout their lives... Would you expect a feral child to figure that out straight away?

comment by MugaSofer · 2012-11-22T06:28:14.015Z · LW(p) · GW(p)

But then, there are plenty of examples of chimps exhibiting behavior that implies intelligence.

comment by JoshuaZ · 2012-11-22T00:33:26.811Z · LW(p) · GW(p)

The divide isn't absolute, I'm sure - I believe essentially all mammals have quite a bit of self-awareness, but only in humans does that facility seem to be good enough to allow the development of a chain of thought.

If dolphins or chimps did or did not have chains of thought, how would we be able to tell the difference?

Replies from: DuncanS
comment by DuncanS · 2012-11-22T00:56:33.319Z · LW(p) · GW(p)

Because of what you can do with a train of thought.

"That mammoth is very dangerous, but would be tasty if I killed it."

"I could kill it if I had the right weapon"

"What kind of weapon would work?"

As against.... "That mammoth is very dangerous - run!"

Computer science is where this particular insight comes from. If you can lay down memories, execute loops and evaluate conditions, you can simulate anything. If you don't have the ability to read your own output, you can't.

If dolphins or chimps did have arbitrarily long chains of thought, they'd be able to do general reasoning, as we do.
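To make the computer-science point concrete, here is a toy sketch (my own illustration, not anything from the thread): `step` is a pure, once-through mapping from one state to the next, like a brain turning senses into an action. On its own it performs a single step; the moment its output is fed back in as its next input, the very same function becomes a general-purpose register machine.

```python
def step(program, state):
    """Execute one instruction; return the next state, or None when halted."""
    ip, regs = state
    if ip >= len(program):
        return None
    op, *args = program[ip]
    regs = dict(regs)  # keep `step` pure: no mutation of the old state
    if op == "inc":    # inc r: add 1 to register r
        regs[args[0]] += 1
        return (ip + 1, regs)
    if op == "dec":    # dec r: subtract 1 from register r
        regs[args[0]] -= 1
        return (ip + 1, regs)
    if op == "jnz":    # jnz r k: jump to instruction k if register r != 0
        return ((args[1] if regs[args[0]] != 0 else ip + 1), regs)
    raise ValueError(f"unknown op {op}")

def run(program, regs):
    """The 'one extra trick': feed the output back in as the next input."""
    state, last = (0, regs), None
    while state is not None:
        last, state = state, step(program, state)
    return last[1]

# a + b, computed by looping: move b into a one unit at a time.
add = [
    ("jnz", "b", 2),    # 0: if b != 0, enter the loop body
    ("jnz", "one", 5),  # 1: unconditional jump past the end (halt)
    ("dec", "b"),       # 2
    ("inc", "a"),       # 3
    ("jnz", "one", 0),  # 4: unconditional jump back to the test
]
print(run(add, {"a": 3, "b": 4, "one": 1})["a"])  # 7
```

Without the `while` loop in `run`, `step` can only ever map one situation to one response, however sophisticated the mapping - which is the once-through animal brain described above.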

Replies from: PeterisP, JoshuaZ
comment by PeterisP · 2012-11-26T11:52:14.092Z · LW(p) · GW(p)

The examples of corvids designing and making specialized tools, after observing what they would need to solve specific problems (reaching an otherwise inaccessible treat), seem to demonstrate such chains of thought.

comment by JoshuaZ · 2012-11-22T01:03:36.719Z · LW(p) · GW(p)

So what do you expect to be the signs of arbitrary general reasoning? Humans run out of memory eventually. If a dolphin or a chimp can do arbitrary reasoning but lacks the capacity to hold long chains in memory, what would you expect to see? I'm still not sure what testable distinction would occur in these cases, although insofar as I can think of what might arguably count as evidence, it looks like dolphins pass, as you can see in the article already linked to in this thread.

Replies from: DuncanS
comment by DuncanS · 2012-11-25T22:00:15.460Z · LW(p) · GW(p)

Let's think about the computer that you're using to look at this website. It's able to do general purpose logic, which is in some ways quite a trivial thing to learn. It's really quite poor at pattern matching, where we and essentially all intelligent animals excel. It is able to do fast data manipulation, reading its own output back.

As I'm sure you know, there's a distinction between computing systems which, given enough memory, can simulate any other computing system and computing systems which can't. Critical to the former is the ability to form a stored program of some description, and read it back and execute it. Computers that can do this can emulate any other computer, (albeit in a speed-challenged way in some cases).

Chimps and dolphins are undoubtedly smart, but for some reason they aren't crossing the threshold to generality. Their minds can represent many things, but not (apparently) the full gamut of what we can do. You won't find any chimps or dolphins discussing philosophy or computer science. My point actually is that humans went from making only relatively simple stone tools to discussing philosophy in an evolutionary eye-blink - there isn't THAT much of a difference between the two states.

My observation is that when we think, we introspect. We think about our thinking. This allows thought to connect to thought, and form patterns. If you can do THAT, then you are able to form the matrix of thought that leads to being able to think about the kinds of things we discuss here.

This only can happen if you have a sufficiently strong introspective sense. If you haven't got that, your thoughts remain dominated by the concrete world driven by your other senses.

Can I turn this on its head? A chimp has WAY more processing power than any supercomputer ever built, including the Watson machine that trounced various humans at Jeopardy!. The puzzle is why they can't think about philosophy, not why we can. Our much vaunted generality is pretty borderline at times - humans are truly BAD at being rational, and incredibly slow at reasoning. Why is a piece of hardware as powerful as ours so utterly incompetent at something so simple?

The reason, I believe, is that our brains are largely evolved to do something else. Our purpose is to sense the world, and rapidly come up with some appropriate response. We are vastly parallel machines which do pattern recognition and ultra-fast response, based on inherently slow switches. Introspection appears largely irrelevant to this. We probably evolved it only as a means of predicting what other humans and creatures would do, and only incidentally did it turn into a means of thinking about thinking.

What is the actual testable distinction? Hard to say, but once you gain the ability to reason independently from the senses, the ability to think about numbers - big numbers - is not that far away.

Something like the ability to grasp that there is no largest number is probably the threshold - the logic's simple, but requires you to think of a number separately from the real world. Hard to know how to show whether dolphins might know this or not, I appreciate that. I think it's essentially proven that dolphins are smart enough to understand the logical relationships between the pieces of this proof, as the relationships are simple, and they can grasp things of that complexity when they are driven by the external world. But perhaps they can't see their internal world well enough to be able to pull 'number' as an idea out from 'two' and 'three' (which are ideas that dolphins are surely able to get), and then finish the puzzle.
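For concreteness, here is the proof in question written out in full (a standard formalisation, added here as an illustration rather than quoted from the thread):

```latex
\textbf{Claim.} There is no largest natural number.

\textbf{Proof.} Suppose some natural number $N$ were the largest.
Its successor $N + 1$ is also a natural number, and $N + 1 > N$,
contradicting the choice of $N$. Hence no largest number exists. $\square$
```

The pieces are just "number", "successor" and "greater than"; the step that never refers back to the senses is holding the abstraction "any number $N$" while reasoning about it.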

Perhaps it's not chains that are the issue, but the ability to abstract clear of the outside world and carry on going.

comment by Richard_Kennaway · 2012-11-20T17:27:43.022Z · LW(p) · GW(p)

Gwern suggested that, if it were possible for civilization to have developed when our species had a lower IQ, then we'd still be dealing with the same problems, but we'd have a lower IQ with which to tackle them. Or, to put it another way, it is unsurprising that living in a civilization has posed problems that our species finds difficult to tackle, because if we were capable of solving such problems easily, we'd probably also have been capable of developing civilization earlier than we did.

And to put it yet another way, by something like a Peter Principle ("people are promoted to their level of incompetence"), we create problems up to our capacity to deal with them. However stupid or intelligent we are, we will always be dealing with problems at the edge of what we can deal with.

This, btw, makes me sceptical about predictions of radical increases in intelligence (of us or of our creations) bringing about paradise.

Replies from: None, Vaniver
comment by [deleted] · 2012-11-21T07:38:53.241Z · LW(p) · GW(p)

we create problems up to our capacity to deal with them. However stupid or intelligent we are, we will always be dealing with problems at the edge of what we can deal with.

Were you thinking of any specific societal problems when you wrote this?

Most societal problems of today had smaller-scale analogues in the past. Foreign relations, warfare, and internal security should have existed at least as long as there have been city-states. Unsustainable development and overpopulation relative to available resources are nothing new; they were even cited in the main post as contributors to Ur's downfall. Likewise, public sickness, waste management, violent and coercive crime, inadequate housing, and unfavorable economic climates would all be familiar to, say, the Indus Valley Civilization. A few examples of modern anthropogenic risks: climate change, unfriendly intelligence explosion, nuclear warfare, nanotech. And then of course negentropy opportunity cost is an old problem we didn't create; we just didn't know about it back in the day.

In short, smart societies make a few new difficult problems, but mostly make larger societies which have larger versions of the old problems.

comment by Vaniver · 2012-11-20T18:17:06.338Z · LW(p) · GW(p)

This, btw, makes me sceptical about predictions of radical increases in intelligence (of us or of our creations) bringing about paradise.

To the extent that a boring place probably isn't paradise, sure. But a world in which almost all of your effort is spent tussling with other minds at your level seems much better than, say, the present world, where much of your effort is spent on the annoyances of corporeal existence.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-11-20T18:55:15.963Z · LW(p) · GW(p)

Yes, things can get better. Better than we can easily imagine. But by that standard, we're already living in the paradise of the past, and it's not exactly happily ever after, is it?

Replies from: johnlawrenceaspden
comment by johnlawrenceaspden · 2012-11-20T19:27:39.589Z · LW(p) · GW(p)

It's ok! Fix death and I'd be cool with it.

Replies from: Douglas_Reay
comment by Douglas_Reay · 2012-11-21T02:21:14.293Z · LW(p) · GW(p)

Do you include in the scope of that 'fix' dealing with problems associated with population, promotion, ambition, recidivist criminals who (after serving a few terms in jail) have time to learn to be good at crime, etc?

Replies from: johnlawrenceaspden
comment by johnlawrenceaspden · 2012-11-22T22:31:19.077Z · LW(p) · GW(p)

We would, as they say, have as long as we liked to sort out those sorts of issues.

Replies from: Douglas_Reay
comment by Douglas_Reay · 2012-11-23T09:38:33.652Z · LW(p) · GW(p)

How does that differ from saying "Given unlimited time to fix all social problems, society will eventually become a paradise" or "The root problem with current society is that we have not yet had sufficient time to fix all the other problems with it"? Couldn't the same be said about any imperfect society? I don't see how it is praise for the state of our current society versus previous societies.

comment by DuncanS · 2012-11-22T00:06:06.955Z · LW(p) · GW(p)

Evolution, as an algorithm, is very much better as an optimizer of an existing design than it is as a creator of a new design. Optimizing the size of the brain of a creature is, for evolution, an easy problem. Making a better, more efficient brain is a much harder problem, and happens slowly, comparatively speaking.

The optimization problem is essentially a kind of budgeting problem. If I have a budget of X calories per day, I can spend some of it on extra kilos of muscle, or on extra grams of brain tissue. Both will cost me the same number of calories, and each brings its own advantages. Since evolution is good at this kind of problem, we can expect that it will correctly find the tradeoff point - the point where the rate of gain of advantage for additional expenditure on ANY organ in the body is exactly the same.

Putting it differently, a cow design could trade a larger brain for smaller muscles, or larger muscles for a smaller brain. The actual cow is found at the point where those tradeoffs are pretty much balanced.
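As a sketch, the tradeoff described above can be written as a constrained optimisation (my formalisation of the argument, not anything from the thread; $F$ is fitness and $c_i$ the calories allocated to organ $i$, with total budget $C$):

```latex
\max_{c_1,\dots,c_n} F(c_1,\dots,c_n)
\quad\text{subject to}\quad \sum_{i=1}^{n} c_i = C .
```

At an interior optimum, the Lagrange condition gives, for every pair of organs $i$ and $j$,

```latex
\frac{\partial F}{\partial c_i} = \frac{\partial F}{\partial c_j} = \lambda ,
```

i.e. the marginal fitness gain per calorie is equalised across all organs, which is exactly the "rate of gain of advantage is exactly the same" condition. A better brain raises $\partial F / \partial c_{\text{brain}}$ at every size, so the balance point shifts toward spending more of the budget on brain.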

A whale has a large brain, but it's quite small in comparison to the whale as a whole. If a whale were to double the size of its brain, it wouldn't make a huge dent in the overall calorie budget. However, evolution's balance of the whale body suggests that it wouldn't be worth it. Making a whale brain that much bigger wouldn't make the whale sufficiently better for it to cost in.

Where this argument basically leads is to turn the conventional wisdom on its head. People say that big brains are better because they are bigger. However, the argument that evolution can balance the size of body structures efficiently and quickly leads to the opposite conclusion. Modern brains are bigger because they are better. Because modern brains are better than they used to be - because evolution has managed to create better brains - it becomes more worthwhile making them bigger. Because brains are better, adding more brain gives you a bigger benefit, so the tradeoff point moves towards larger brain sizes.

Dinosaur brains were very much smaller, on the whole, than the brains of similar animals today. We can infer from this argument that this was because their brains were less effective, which in turn lowered any advantage that might have been gained from making them larger. Consequently, dinosaurs must have been even more stupid than the small size of their brains suggests.

Although there is a nutritional argument for bigger brains in humans - the taming of fire allowed for much more efficient food usage - perhaps there is also some sense in which the human brain has recently become better, which in turn led it to become larger. Speculative, perhaps. But on the larger scale, looking at the sweeping increase in brain sizes across the whole of the geological record, the gradual increase in size has to be read as a qualitative improvement in brains.

Replies from: JoshuaZ
comment by JoshuaZ · 2012-11-22T00:17:54.402Z · LW(p) · GW(p)

Although there is a nutritional argument for bigger brains in humans - the taming of fire allowed for much more efficient food usage - perhaps there is also some sense in which the human brain has recently become better, which in turn led it to become larger.

Human brains have been shrinking.

comment by Luke_A_Somers · 2012-11-26T20:01:06.971Z · LW(p) · GW(p)

it is unsurprising that living in a civilization has posed problems that our species finds difficult to tackle, because if we were capable of solving such problems easily, we'd probably also have been capable of developing civilization earlier than we did.

I'd more say, it's unsurprising that life poses problems our species finds difficult to tackle, because we have moving goalposts of satisfaction in terms of our problems being solved.

comment by chaosmage · 2012-11-26T16:32:04.409Z · LW(p) · GW(p)

We don't know for certain what it was about the culture surrounding the dawn of cities that made that particular combination of trade, writing, specialisation, hierarchy and religion communicable, when similar cultures from previous false dawns failed to spread. We can trace each of those elements to earlier sources, none of them were original to Ur, so perhaps it was a case of a critical mass achieving a self-sustaining reaction.

I suggest that the decisive ingredient was an explicit, somewhat accurate understanding of how children are conceived, and following from this, a concept of fatherhood.

Many hunter-gatherer societies didn't have that when we contacted them. They had all figured out it had something to do with childbearing age and menstruation. Some had narrowed it down to the pregnant woman having recently had sex with a man. But you don't need to know that ejaculation inside the vagina is what counts, and that it matters who ejaculates there, unless you're trying to domesticate mammals.

From my superficial understanding of anthropology, it appears that in hunter-gatherer societies, the men have very little responsibility for the kids. Of course they contribute food and protection, which is commonly shared among the whole group including the kids. They'll teach the boys the essential skills, but any man will teach any boy the same set of skills; there's no personal connection and no specialization of labor. As a man in a hunter-gatherer society, you essentially need not worry about the next generation. And we do find that in these societies, the men (as well as the kids) tend to have a lot of spare time between hunts.

I imagine a hunter-gatherer, experimenting with domestication, first realizing he could be a father. That gives him one hell of an evolutionary advantage, and he's probably not the dumbest member of his group, so he may have good intelligence-related traits that he can now spread more effectively. But I think what's far more important is that this realization creates a lot of new priorities for him, and for everyone he tells about it. He'd naturally start to measure his own success by the well-being of his children, much as the success of mothers was measured before. So he starts to invest much more time (both his own and the kids') into teaching them skills that mothers can't teach because they're busy mothering. He could pass on more knowledge than a hunter-gatherer would, he'd prefer to teach his own kids over others, and - boom - he invents trades, family businesses, division of labor. Now knowledge can accumulate, inventions can be copied and spread, memetic/cultural evolution kicks in. Both the technologies that allow cities, and the refined fighting skills of the nomadic raiders, follow from intensified education.

Education increases expressed IQ. However, it also increases the value of expressed IQ in sexual selection. So I don't think we're quite as dumb as we were when civilization began. But I do think you won't find significant division of labor in any society that doesn't know about domestication of animals.

So when you ask why people accept the comparatively bad living conditions of early civilization, the answer is simple: they do it for the kids. You don't do that when you think that being a man, you can't have any.

comment by JaySwartz · 2012-11-21T02:34:52.506Z · LW(p) · GW(p)

200k years ago, when Homo Sapiens first appeared, fundamental adaptability was the dominant force: the most adaptable, not the most intelligent, survived. While adaptability is a component of intelligence, intelligence is not a component of adaptability. The coincidence with the start of the ice age is consistent with this: the ice age was a relatively minor extinction event, but nonetheless Homo Sapiens appeared and survived where less adaptable life forms did not.

Across the Hominidae family, Homo Sapiens proved to be the most adaptable. During the ice age the likely focus was simply to survive. When a temperate climate returned, some believe, Homo Sapiens - much as the Aztecs and others would later - began to systematically eliminate the competition.

Concurrently, another phenomenon was occurring: Homo Sapiens was learning, steadily increasing its understanding of the world. While little evidence has survived the years, it is reasonable to posit that learning proceeded in much the same fashion as today, new knowledge building on established knowledge. Being less organized than in later eras, it would have progressed more slowly.

Our improved knowledge likely increased our survival rates through the second ice age. When temperate climates returned, the stage was set for the advancement of mankind to organized farming, written language and Ur.

Somewhere in this time frame, intelligence began to overtake adaptability as the dominant force. This also marked the shift from evolutionary pressure to societal pressure as the underlying force behind advancement and survivability. The random nature of evolutionary advances gave way to a more complex society-driven selection process.

It's also important to draw a subtle distinction. The advances were not a function of an increase in general IQ. They were a function of society integrating the concepts envisioned by a subset of high-IQ individuals; i.e., a societal variant of evolutionary adaptability.

comment by Shmi (shminux) · 2012-11-19T20:41:21.109Z · LW(p) · GW(p)

But they had to be free to wander to follow nomadic food sources, and they were limited by access to food that the human body could use to create Docosahexaenoic acid, which is a fatty acid required for human brain development. Originally humans got this from fish living in the lakes and rivers of central Africa. However, about 80,000 years ago, we developed a gene that let us synthesise the same acid from other sources, freeing humanity to migrate away from the wet areas, past the dry northern part, and out into the fertile crescent.

So your point is that the expressed IQ was DHA-bound for those living away from the shore, thus making them not smart enough to develop civilization where the conditions were ripe, right? Why DHA specifically? Wouldn't there be workarounds, given that "IQ is polygenic", if the evolutionary pressure was toward higher IQ? I'm wondering if this is but one of a multitude of possible explanations, and how one would attempt to falsify it.

Replies from: Strange7
comment by Strange7 · 2012-11-20T00:56:44.728Z · LW(p) · GW(p)

The workaround that ended up being selected for was a new DHA synthesis pathway.

comment by A1987dM (army1987) · 2012-11-19T20:05:32.992Z · LW(p) · GW(p)

Yep. As I implied elsewhere, I think that the step between intelligence and civilization is an important though overlooked one in the Great Filter.

Replies from: gwern, John_Maxwell_IV
comment by gwern · 2012-11-19T20:45:12.343Z · LW(p) · GW(p)

That's a paper I'd like to see someone do at some point: given the scaling information about human-level brains in the very interesting recent paper "The remarkable, yet not extraordinary, human brain as a scaled-up primate brain and its associated cost", Herculano-Houzel 2012 (quotes from it are in my essay linked in the OP), and something like the OP and my African points, estimate how close to the break-even point we are - how few calories per day of brain consumption are we away from being able to support civilization development on any timescale at all?
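
For scale, Herculano-Houzel 2012 supplies the two inputs such an estimate would start from: roughly 86 billion neurons in a human brain, at an average cost of about 6 kcal per billion neurons per day. A back-of-the-envelope sketch of the arithmetic (the break-even threshold itself is exactly the unknown being asked about, so nothing below answers the question - it only sets the scale):

```python
# Scaling arithmetic only. The two figures are approximate values from
# Herculano-Houzel 2012; the civilization "break-even" point is unknown.
NEURONS_BILLIONS = 86.0          # approximate human neuron count, in billions
KCAL_PER_BILLION_PER_DAY = 6.0   # approximate metabolic cost per billion neurons

brain_kcal = NEURONS_BILLIONS * KCAL_PER_BILLION_PER_DAY
print(f"whole brain: ~{brain_kcal:.0f} kcal/day")  # ~516 kcal/day

# Calories a hypothetically smaller brain would free up each day:
for kept in (0.95, 0.90, 0.80):
    print(f"at {kept:.0%} of current neurons: ~{brain_kcal * kept:.0f} kcal/day "
          f"(~{brain_kcal * (1 - kept):.0f} kcal/day freed)")
```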

comment by John_Maxwell (John_Maxwell_IV) · 2012-11-20T04:43:06.488Z · LW(p) · GW(p)

Given that the neolithic revolution happened in more than one place, I don't see how it could be a very significant filter. Or are you referring to "civilization" in a sense not achieved by the Incans or the Aztecs? It's interesting to wonder how far the Aztecs & subsequent civilizations could've gone if they hadn't been interrupted by the Europeans.

Replies from: IlyaShpitser, CCC, army1987
comment by IlyaShpitser · 2012-11-20T05:01:02.959Z · LW(p) · GW(p)

The Aztecs were an interesting society. I wonder how much of their gratuitous sacrifice was politically calculated to keep the city states in line, and how much was due to genuine and profound existential anxiety ("we owe the Gods for their continued sacrifice to keep the world alive -- so we better keep sacrificing to them or the sun may not come up tomorrow!")

I don't think Aztecs are a good candidate for an alternative history civ, they feel to me like a failure mode. Incas make more sense (they also had potatoes, quinoa, llamas, etc.)

Replies from: tut
comment by tut · 2012-12-14T18:20:54.067Z · LW(p) · GW(p)

Both Judaism and Hinduism also started out as cosmic maintenance religions, so that might be a stage that civilizations need to pass through rather than a specific failure mode of only one of them.

comment by CCC · 2012-11-21T08:14:25.143Z · LW(p) · GW(p)

That just means that the right conditions were a worldwide (or close to worldwide) phenomenon on Earth. This does not imply that the right conditions for the development of civilisation are necessarily common given the right conditions for the formation of intelligent life.

Unfortunately, we only have one example of a planet having the right conditions for the formation of intelligent life. Drawing statistical inferences from a single example is not a good idea.

comment by A1987dM (army1987) · 2012-11-20T09:47:22.081Z · LW(p) · GW(p)

Given that the neolithic revolution happened in more than one place, I don't see how it could be a very significant filter.

But it probably wouldn't have happened anywhere if there wasn't an interglacial period. My point is that intelligent life is unlikely to develop a technological civilization unless the planet they're on allows them to achieve very high population densities (e.g. by artificially growing more food than otherwise available), which ISTM that Earth before the interglacial period didn't.

Or are you referring to "civilization" in a sense not achieved by the Incans or the Aztecs?

From what I read on http://en.wikipedia.org/wiki/Aztec#Economy they definitely do count as a civilization by my standards.

comment by NancyLebovitz · 2012-11-20T16:50:29.854Z · LW(p) · GW(p)

if it were possible for civilization to have developed when our species had a lower IQ, then we'd still be dealing with the same problems, but we'd have a lower IQ with which to tackle them.

On the other hand, so many of our problems are caused by other people, and some of them are caused by smart people. It took a lot of intelligence to make the financial crisis happen.

Now I'm wondering whether a more equal distribution of intelligence would lead to fewer problems.

Replies from: CCC
comment by CCC · 2012-11-21T08:11:45.644Z · LW(p) · GW(p)

I strongly suspect that fewer idiots would lead to fewer problems (though someone who knows he is an idiot, and listens closely to the advice of more intelligent people, may cause fewer problems than an arrogant but more intelligent person who believes that no-one can give him good advice). However, I don't think that fewer geniuses would dramatically reduce problems, on the basis that a problem caused by a genius is often temporary - like the financial crisis - while a problem solved by a genius - like the invention of X, for a given X - is often solved permanently.

comment by CarlShulman · 2012-12-20T19:28:05.613Z · LW(p) · GW(p)

then we might expect to see something similar to the Flynn effect.

The Flynn Effect has been an order of magnitude too fast to be accounted for by such factors.

comment by JoshuaZ · 2012-11-19T16:03:21.992Z · LW(p) · GW(p)

Technologies allow more technologies to be built. For example, writing bootstraps the ability to pass on knowledge. Similarly, larger populations raise the chance that someone will make a discovery.

The toy model I sometimes use to describe this is a biased coin with a chance of turning up heads of something like 1 - 1/(C(k + n)), where C and k are constants, with C very small and k very large, and n is the number of previous heads. Here a heads denotes a discovery or invention. If, for example, C = 10^-5 and k is a little over 10^5 (so that Ck starts only slightly above 1), then it will take a long time to get the first few heads, but once one has a few discoveries, they will start to become increasingly common. C essentially denotes intelligence, so a smarter species will start getting heads faster.
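
A quick simulation makes the takeoff visible; counting heads in fixed windows of flips shows the slow start followed by acceleration. The constants below are illustrative assumptions, chosen so that Ck starts just above 1 - they are not estimates of anything real:

```python
# Simulation of the biased-coin toy model: p(n) = 1 - 1/(C*(k+n)).
import random

random.seed(0)
C, k = 1e-5, 100_100.0  # hypothetical constants: C*k = 1.001 to start
n = 0                    # number of previous heads (discoveries)

counts = []
for window in range(10):          # ten windows of 100,000 flips each
    heads = 0
    for _ in range(100_000):
        p = 1.0 - 1.0 / (C * (k + n))
        if random.random() < p:
            n += 1
            heads += 1
    counts.append(heads)

print(counts)  # heads per window climb slowly at first, then take off
```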

Of course this sort of thing only works if a species has a chance of getting to civilization at all, which the vast majority don't. But it does suggest that decreased intelligence could still result in a civilization. It doesn't seem implausible that if you took out a few of the genes that occasionally come together to result in Isaac Newtons and Terry Taos, you'd still get progress at a decent pace. Even Newton was doing work that was largely being investigated by others such as Leibniz and Hooke.

Replies from: Decius
comment by Decius · 2012-11-19T20:37:25.200Z · LW(p) · GW(p)

Breakthroughs do cluster, but that's because of the tendency for a group to be working on a lot of related problems at once, and a breakthrough in any one area might resolve a key issue in any number of other areas.

For example, the motor/generator is a moderate breakthrough in the field of mechanics that solves several larger issues in electrical distribution. The relay, created for telegraphy, led to the vacuum tube and then the transistor.

In a purer sense, better smelting practices provided more consistent steel, which allowed the polishing of more precise lenses, developing better telescopes which provided more information about the crystalline structure of metals yielding better metallurgy. The cycle doesn't recurse infinitely because we virtually never have some project that is just waiting on a development that is two steps ahead of current understanding.

Replies from: Vaniver, JoshuaZ
comment by Vaniver · 2012-11-19T23:30:10.188Z · LW(p) · GW(p)

developing better telescopes which provided more information about the crystalline structure of metals

That doesn't sound like the history of solid state physics / materials engineering that I know; what do you have in mind here?

Replies from: Decius
comment by Decius · 2012-11-20T04:35:08.112Z · LW(p) · GW(p)

Sorry - optics' need for metals with certain properties is part of any history of optics, and to understand metallurgy one needs to see that metals are crystalline, which requires optics superior to those that can be created without applied metallurgy.

There's a certain advantage in that much of materials science can be cheated by experimentation without understanding, such that it is possible to work steel without knowing what steel is.

Replies from: Vaniver
comment by Vaniver · 2012-11-20T13:25:21.631Z · LW(p) · GW(p)

I was under the impression that the discovery that metals were crystalline was due to Bragg in 1912, and the wide angles involved don't require significant lens quality.

Metals do have microstructure that's very metallurgically relevant, which can be seen under a microscope (and there lens quality is rather relevant). While understanding the underlying crystalline structure helps the analysis, as you point out the experimentalists were able to find useful alloys and cooling recipes without knowing about the crystalline structure, with some help from knowing the microstructure.

I think the word "crystalline" was what was throwing me off from your description, though it is unclear to me how much advances in optics helped experimental metallurgists.

Replies from: Decius
comment by Decius · 2012-11-20T17:54:52.750Z · LW(p) · GW(p)

Most of the alloying and cooling was developed without even looking at what you call the microstructure. Current-generation optical microscopes are easily capable of observing individual surface crystals under elastic and inelastic deformation.

The effects of a given heat treatment on a given object are fairly simple to measure, but predicting the effect of an untested combination requires deeper understanding. Trial and error can create isolated useful developments, but understanding the next level down allows accurate prediction of interesting developments. For example, the effects of alloying agents in iron remain experimentally determined, rather than predicted.

comment by JoshuaZ · 2012-11-20T03:55:45.545Z · LW(p) · GW(p)

Breakthroughs do cluster, but that's because of the tendency for a group to be working on a lot of related problems at once, and a breakthrough in any one area might resolve a key issue in any number of other areas.

This is an explanation for clustering among modern breakthroughs. But there's a different sort of clustering: discoveries and inventions are happening more and more rapidly. A few thousand years ago they happened at best every few hundred years. By the time one reached the late middle ages they happened every few decades. In the 19th century discoveries and inventions occurred at a breakneck pace. There's a decent argument that things have slowed down again in the last several decades (possibly with a peak around 1900 and a decline since then), but it is this more and more rapid pace at the large scale that suggests this type of model.

Replies from: Decius
comment by Decius · 2012-11-20T05:05:51.485Z · LW(p) · GW(p)

So, there were more than 20 clusters of related discoveries in the 19th century? What were they?

A large number of related discoveries about e.g. electromagnetism should count the same as the large number of related discoveries about food preparation, or chipping flint, or masonry, or architectural engineering.

Replies from: JoshuaZ
comment by JoshuaZ · 2012-11-20T05:19:43.555Z · LW(p) · GW(p)

So, there were more than 20 clusters of related discoveries in the 19th century? What were they?

Well, electricity is one area where there were easily at least 20. Volta made the eponymous pile, Ohm discovered his law, Faraday discovered induction, Maxwell discovered his equations (and noted that the speed of propagation of an electromagnetic field is the observed speed of light), Faraday invented the first generators, Siemens refined them, Seebeck discovered the thermoelectric effect, Edison made a practical lightbulb, Edison built large-scale electric grids, Hertz transmitted radio waves, Marconi used them to transmit signals, Daniell made the first practical batteries (later improved to gravity cells), and lead-acid batteries also date from this period. Etc.

But this is missing part of the primary point: discoveries help out even in areas that are not directly related. Better communication helps all areas. Thus, for example, the ease of modern transportation and communication allowed the late 19th-century transits of Venus to be observed with far more careful coordination than previous transits. And Darwin and other 19th-century naturalists were able to do much of their work because sea travel had become substantially faster and more reliable in the 19th century than earlier. This is part of a general pattern: technologies and developments beget more technologies and insights, even in areas that aren't directly connected.

Replies from: Decius
comment by Decius · 2012-11-20T18:06:28.465Z · LW(p) · GW(p)

If fire and composting each count as one cluster, then electricity, electromagnetic radiation, and the relationship between the two are each one cluster. Also, I think that Newtonian physics and Aristotelian physics count equally as major developments, along with a very large number of developments that have been completely abandoned and forgotten. Combined with the developments that 'everybody knows' now (e.g. how to create and extinguish fires, till soil, make plants edible), I think that the rate of new discoveries has remained roughly proportional to the number of people alive and the degree by which they exceed subsistence living.

Granted, that is a huge increase in absolute rate, but it isn't strictly linked to an increase in intelligence or reasoning abilities.

Replies from: JoshuaZ
comment by JoshuaZ · 2012-11-20T19:39:55.075Z · LW(p) · GW(p)

Even if it is an increase proportional to the population, that still supports a model where increased technology (which allows a larger population) is responsible for further increases. So the upshot is still the same: it is highly plausible in that context that other species had enough intelligence to make civilization but never got the first few lucky technologies.
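
The feedback is easy to see in a toy loop with hypothetical constants, where the technology level sets the population the environment can support and new discoveries arrive in proportion to population:

```python
# Toy technology-population feedback; all constants are arbitrary.
tech = 1.0
trajectory = [tech]
for generation in range(8):
    population = 100 * tech              # technology supports more people
    new_discoveries = 0.01 * population  # more people, more discoveries
    tech += 0.1 * new_discoveries        # discoveries raise technology
    trajectory.append(round(tech, 2))
print(trajectory)  # absolute gains grow every generation: growth compounds
```

Each round's technology enlarges the population that produces the next round's discoveries, so the same per-person discovery rate yields an accelerating absolute rate - no increase in intelligence required.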

Replies from: Decius
comment by Decius · 2012-11-21T07:23:05.046Z · LW(p) · GW(p)

A dolphin's ability to invent novel behaviours was put to the test in a famous experiment by the renowned dolphin expert Karen Pryor. Two rough-toothed dolphins were rewarded whenever they came up with a new behaviour. It took just a few trials for both dolphins to realise what was required. A similar trial was set up with humans. The humans took about as long to realise what they were being trained to do as did the dolphins. For both the dolphins and the humans, there was a period of frustration (even anger, in the humans) before they "caught on". Once they figured it out, the humans expressed great relief, whereas the dolphins raced around the tank excitedly, displaying more and more novel behaviours.

source

And cue the Douglas Adams reference.

Replies from: CAE_Jones
comment by CAE_Jones · 2012-11-21T10:17:08.014Z · LW(p) · GW(p)

I have to wonder how much dolphin anatomy factors into their apparent lack of civilization-building. Then again, I haven't read anything about dolphins developing anything like agriculture (whereas some social insects seem to manage some impressive achievements, such as ants domesticating other insects, farming fungi, and building vast inter-connected colonies). Yet it seems pretty clear that social insects are nothing like intelligent in the way that primates and dolphins are.

Replies from: Decius
comment by Decius · 2012-11-21T17:46:21.755Z · LW(p) · GW(p)

Well, there is the complex hunting behavior, and indications of limited tool use. Why is agriculture special?