The Popularization Bias

post by Wei Dai (Wei_Dai) · 2009-07-17T15:43:30.338Z · LW · GW · Legacy · 54 comments

I noticed that most recommendations in the recent recommended readings thread consist of either fiction or popularizations of specific scientific disciplines. This introduces a potential bias: aspiring rationalists may never learn about some fields or ideas that are important for the art of rationality, just because they've never been popularized.

In my recent post on the fair division of black-hole negentropy, I tried to introduce two such ideas/fields (which may be one too many for a single post :). One is that black holes have entropy quadratic in mass, and therefore are ideal entropy dumps (or equivalently, negentropy mines). This is a well-known result in thermodynamics, plus an obvious application of it. Some have complained that the idea is too sci-fi, but actually the opposite is true. Unlike other perhaps equally obvious futuristic ideas such as cryonics, AI and the Singularity, I've never read or watched a piece of science fiction that explored this one. (BTW, in case it's not clear why black-hole negentropy is important for rationality, it implies that value probably scales superlinearly with material and that huge gains from cooperation can be directly derived from the fundamental laws of physics.)
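To make the quadratic scaling concrete, here is a back-of-the-envelope sketch using the standard Bekenstein-Hawking formula S = 4πGk_BM²/(ħc). The physical constants are standard values; the function name and example masses are mine, for illustration only:

```python
import math

# Sketch: black-hole entropy is quadratic in mass (Bekenstein-Hawking),
# so one merged hole can absorb more entropy than two separate ones.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.0546e-34  # reduced Planck constant, J s
C = 2.998e8        # speed of light, m/s
K_B = 1.381e-23    # Boltzmann constant, J/K
M_SUN = 1.989e30   # solar mass, kg

def bh_entropy(mass_kg):
    """Bekenstein-Hawking entropy S = 4*pi*G*k_B*M^2 / (hbar*c), in J/K."""
    return 4 * math.pi * G * K_B * mass_kg**2 / (HBAR * C)

# Doubling the mass quadruples the entropy (S ~ M^2)...
print(bh_entropy(2 * M_SUN) / bh_entropy(M_SUN))  # 4.0

# ...so pooling mass into one big hole beats keeping two small ones:
print(bh_entropy(2 * M_SUN) > 2 * bh_entropy(M_SUN))  # True
```

The second print is the "gains from cooperation" point in miniature: the combined coalition's entropy capacity strictly exceeds the sum of the parts.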

Similarly, there are many popularizations of topics such as the Prisoner's Dilemma and the Nash Equilibrium in non-cooperative game theory (and even a blockbuster movie about John Nash!), but I'm not aware of any for cooperative game theory.

Much of Less Wrong, and Overcoming Bias before it, can be seen as an attempt to correct this bias. Eliezer's posts have provided fictional treatments or popular accounts of probability theory, decision theory, MWI, algorithmic information theory, Bayesian networks, and various ethical theories, to name a few, and others have continued the tradition to some extent. But since popularization and writing fiction are hard, and not many people have both the skills and the motivation to do them, I wonder if there are still other important ideas/fields that most of us don't know about yet.

So here's my request: if you know of such a field or idea, just name it in a comment and give a reference for it, and maybe say a few words about why it's important, if that's not obvious. Some of us may be motivated to learn about it for whatever reason, even from a textbook or academic article, and may eventually produce a popular account for it.

 

54 comments


comment by whpearson · 2009-07-17T20:13:18.078Z · LW(p) · GW(p)

The No Free Lunch theorems for search could do with a popular write-up.

Basically, to tell people making AIs that they need to reference the world/problems they are trying to deal with.

Replies from: sketerpot, timtyler
comment by sketerpot · 2009-07-17T23:45:11.761Z · LW(p) · GW(p)

There are an awful lot of caveats that apply to the No Free Lunch theorem. Is it really very applicable in practice? If you're just going to use it as a hand-wave concept, I think it's more honest to use TANSTAAFL and make your lack of rigorous mathematical backing clear.

So, can anybody list a few lessons we can draw from the NFL theorem?
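The core NFL statement can at least be checked exhaustively on a tiny domain: averaged over *all* possible objective functions, every fixed query order performs identically. The following is an illustrative sketch of mine (not from the thread), using non-adaptive search orders over three points:

```python
# No-Free-Lunch sketch: averaged over ALL functions f: {0,1,2} -> {0,1},
# any fixed query order needs the same mean number of queries to find
# the global maximum.
from itertools import product

DOMAIN = [0, 1, 2]

def queries_to_max(f, order):
    """Queries a fixed search order needs before hitting f's maximum."""
    best = max(f)
    for step, x in enumerate(order, start=1):
        if f[x] == best:
            return step

def mean_performance(order):
    fns = list(product([0, 1], repeat=len(DOMAIN)))  # all 2^3 = 8 functions
    return sum(queries_to_max(f, order) for f in fns) / len(fns)

print(mean_performance([0, 1, 2]))  # 1.5
print(mean_performance([2, 0, 1]))  # 1.5 -- identical, as NFL predicts
```

This also makes timtyler's reply below legible: the theorem's premise is a uniform distribution over functions, so an Occam prior (real-world problems are not uniformly sampled) breaks it.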

comment by timtyler · 2009-07-17T21:48:50.779Z · LW(p) · GW(p)

Occam's razor means that the no free lunch theorems are practically irrelevant.

comment by Vladimir_Nesov · 2009-07-17T20:22:43.182Z · LW(p) · GW(p)

One must select what's important; there is too much science to tell about it all. "Correcting" popularization bias must consist of steering the selection according to some specific criteria different from the sum-total of popularization in the world. Since what's important to specific people depends heavily on their interests, it's unlikely for there to be a magic bullet that more or less universally improves on available popularized material.

The valid way out of this debacle seems to be to acquire general knowledge: to learn to see what science knows and understand it for yourself, given enough effort. Popularizing this skill instead of popularizing specific content may be a better strategy.

comment by Wei Dai (Wei_Dai) · 2009-07-17T16:38:42.530Z · LW(p) · GW(p)

To start things off, here are my entries:

Replies from: timtyler
comment by timtyler · 2009-07-17T17:16:16.699Z · LW(p) · GW(p)

Hypercomputation seems like a misguided attack on the Church-Turing thesis to me. If nobody can build a hypercomputer - and there's no evidence that anyone ever will be able to - then I am not sure I can see what the point is.

Replies from: timtyler
comment by timtyler · 2009-07-17T21:59:27.207Z · LW(p) · GW(p)

I guess it's because there is no proof that someone won't find a way of computing the uncomputable. It seems unlikely to me - but I suppose there is not much harm in philosophers speculating.

Replies from: timtyler
comment by timtyler · 2009-07-18T07:49:36.042Z · LW(p) · GW(p)

Re: Toby's "Regardless of the actual computational limits of our universe, I have no doubt that the study of hypercomputation will lead to many important theoretical results across computer science, philosophy, mathematics and physics."

Hmm. What have we got so far out of Omegas and Oracles? I expect what we will get out of Hypercomputation will be mostly confusion - since it sounds as though it is a field with a real object of study, when it isn't.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2009-07-19T12:03:15.710Z · LW(p) · GW(p)

Well, one practical result we've got is that we shouldn't program AIs to assume (either implicitly or explicitly) that the universe must be computable. See this discussion between Eliezer and me about this.

Replies from: timtyler
comment by timtyler · 2009-07-20T08:34:01.233Z · LW(p) · GW(p)

Making agents with assumptions about anything which we are not confident of the truth of seems like a dubious strategy.

We are fairly confident of the Church-Turing thesis, though: "Today the thesis has near-universal acceptance" - http://en.wikipedia.org/wiki/Church–Turing_thesis

comment by Wei Dai (Wei_Dai) · 2009-07-30T22:19:05.381Z · LW(p) · GW(p)

The Theory of Bayesian Aggregation - Bayesian Group Agents and Two Modes of Aggregation by Mathias Risse.

ABSTRACT: Suppose we have a group of Bayesian agents, and suppose that they would like for their group as a whole to be a Bayesian agent as well. Moreover, suppose that those agents want the probabilities and utilities attached to this group agent to be aggregated from the individual probabilities and utilities in reasonable ways. Two ways of aggregating their individual data are available to them, viz., ex ante aggregation and ex post aggregation. The former aggregates expected utilities directly, whereas the latter aggregates probabilities and utilities separately. A number of recent formal results show that both approaches have problematic implications. This study discusses the philosophical issues arising from those results. In this process, I hope to convince the reader that these results about Bayesian aggregation are highly significant to decision theorists, but also of immense interest to theorists working in areas such as ethics and political philosophy.

Replies from: Vladimir_Nesov, Vladimir_Nesov
comment by Vladimir_Nesov · 2009-07-31T15:24:35.505Z · LW(p) · GW(p)

Wasn't as enlightening as the abstract made it sound.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2009-08-01T02:08:36.239Z · LW(p) · GW(p)

The results seem quite significant, even if it's not clear what they mean. One possible interpretation is that expected utility maximization is not the correct ideal for group rationality.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-08-01T02:20:32.876Z · LW(p) · GW(p)

Or they just do it totally wrong.

comment by Vladimir_Nesov · 2009-07-31T00:56:52.650Z · LW(p) · GW(p)

Good find, thanks!

comment by Wei Dai (Wei_Dai) · 2009-07-18T04:58:04.924Z · LW(p) · GW(p)

I wonder if I over-corrected upon learning about cooperative game theory. Based on the relative lack of responses here, perhaps there aren't that many nuggets of knowledge left to be picked off the street, so to speak.

I'm curious, was anyone else aware of cooperative game theory, before I mentioned it here?

Replies from: gwern, gworley, cousin_it, conchis, GuySrinivasan
comment by gwern · 2009-07-18T22:52:35.687Z · LW(p) · GW(p)

I'm curious, was anyone else aware of cooperative game theory, before I mentioned it here?

I had vaguely heard of it and the main result you presented, but I didn't find it very interesting - and I still don't, even after your post. (The black hole material was much more interesting.)

In comparison, the first time I read about the Prisoner's Dilemma and the Tragedy of the Commons, my reaction was: 'this is amazing! It provides a new way to look at just about everything - littering on sidewalks, war, traffic & SUVs, cheating on taxes...' For a year or two, I saw everything through that lens.

comment by Gordon Seidoh Worley (gworley) · 2009-07-21T18:27:07.025Z · LW(p) · GW(p)

Yes. Not to sound like a jerk, but I didn't realize it was so poorly known.

On the issue of nuggets of knowledge left, I think it's more so the case that we just don't know where we'll find them or that they aren't already well known. It will take something that will make someone who is aware of the details of some field realize that a popular account is needed because even his/her fellow smart people don't know about it.

comment by cousin_it · 2009-07-21T12:02:07.768Z · LW(p) · GW(p)

I'd read the Wikipedia page before, for some reason it didn't seem very interesting to pursue further.

comment by conchis · 2009-07-21T11:41:22.021Z · LW(p) · GW(p)

Yup. Although I think that the core is possibly a more useful concept than the Shapley value. (I actually had a vague suspicion it could be useful for Toby and Nick Bostrom's work on dealing with moral uncertainty, but never bothered to follow up.)

comment by GuySrinivasan · 2009-07-19T17:29:12.185Z · LW(p) · GW(p)

Yes, when I first learned about the Shapley value, I bothered everyone I knew by telling them all excited-like about it when they obviously didn't much care. :)
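Since the Shapley value keeps coming up here (it is the division rule in Wei's black-hole post), a brute-force sketch may help; the code and example masses are mine, and the value function v(S) = (total mass)² is borrowed from the quadratic-negentropy setting:

```python
# Brute-force Shapley value: average each player's marginal contribution
# over all orderings in which the grand coalition could assemble.
from itertools import permutations
from math import factorial

def shapley(players, v):
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = []
        for p in order:
            before = v(coalition)
            coalition.append(p)
            phi[p] += v(coalition) - before  # marginal contribution of p
    n_fact = factorial(len(players))
    return {p: total / n_fact for p, total in phi.items()}

# Value quadratic in "mass", as with black-hole negentropy:
masses = [1, 2, 3]
phi = shapley(masses, lambda s: sum(s) ** 2)
print(phi)  # {1: 6.0, 2: 12.0, 3: 18.0} -- sums to v(N) = 36
```

Note the superadditivity: each player's share exceeds what it could get alone (e.g. player 1 gets 6 versus a standalone value of 1), which is the "huge gains from cooperation" claim made quantitative.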

comment by Richard_Kennaway · 2009-07-17T18:56:16.784Z · LW(p) · GW(p)

Complexity theory. Back when I learned it, Garey and Johnson was the standard book, but there must be more up to date sources -- perhaps even popular ones (for some less than Harry Potter-sized value of popular).

Replies from: anonym
comment by anonym · 2009-07-18T19:27:04.185Z · LW(p) · GW(p)

Michael Sipser's Introduction to the Theory of Computation is an extremely friendly introduction to the theory of computation, including complexity theory and computability theory. As opposed to Garey and Johnson, it is broader and shallower, covering computability theory as well as complexity theory (incl. space complexity and other non-NP-complete topics), and probably in a much friendlier fashion. It's one of the few compsci books I've ever read that I would describe as a "page turner": it was so interesting and readable that I couldn't put it down, and I still like to pick it up from time to time just to reread sections for pleasure.

[The 1st edition is much cheaper than the 2nd edition for anybody interested in buying ($10-$20 used, versus >$55 used on 2nd edition or $115 new).]

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-07-17T18:18:31.198Z · LW(p) · GW(p)

Unlike other perhaps equally obvious futuristic ideas such as cryonics, AI and the Singularity, I've never read or watched a piece of science fiction that explored this one.

"The Gravity Mine" by Stephen Baxter. http://www.infinityplus.co.uk/stories/gravitymine.htm

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2009-07-18T07:29:27.909Z · LW(p) · GW(p)

That's not a bad story, but the author seems more interested in using black holes as exotic locales with cool "special effects", rather than exploring the implications of their physics. The reader walks away entertained, but not really having learned anything about black-hole thermodynamics.

comment by timtyler · 2009-07-17T16:56:48.011Z · LW(p) · GW(p)

Re: One is that black holes have entropy quadratic in mass, and therefore are ideal entropy dumps (or equivalently, negentropy mines).

What would anyone want a black hole entropy dump for? If you are in orbit around a star, you can just let entropy radiate off as heat. Compared to that, sending it into the nearest black hole would probably require a lot of energy. This seems like a bad idea - so what is the proposed point?

Replies from: Wei_Dai, Tiiba, djcb
comment by Wei Dai (Wei_Dai) · 2009-07-17T17:20:50.174Z · LW(p) · GW(p)

The point is that a black hole is much colder than interstellar space, and its temperature decreases as its mass increases. This coldness implies that it takes much less energy to dump a certain amount of entropy into a black hole than into interstellar space. Of course you probably don't want to ship that entropy across interstellar distances before dumping. That would likely wipe out any savings. You'd create a black hole close by, or build your civilization around an existing one.
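The "much colder" claim is just the standard Hawking formula T = ħc³/(8πGMk_B), which can be checked numerically. The constants are standard; the function name and the choice of a solar-mass example are mine:

```python
import math

# Hawking temperature: inversely proportional to mass, so big holes are cold.
G, HBAR, C, K_B = 6.674e-11, 1.0546e-34, 2.998e8, 1.381e-23
M_SUN = 1.989e30  # kg

def hawking_temperature(mass_kg):
    """Hawking temperature T = hbar*c^3 / (8*pi*G*M*k_B), in kelvin."""
    return HBAR * C**3 / (8 * math.pi * G * mass_kg * K_B)

T = hawking_temperature(M_SUN)
print(T)  # ~6.2e-8 K: far colder than the 2.7 K cosmic background
print(hawking_temperature(10 * M_SUN) < T)  # True: colder still as it grows
```

Since dumping heat at temperature T costs roughly T per unit of entropy disposed of, a sink at ~10⁻⁷ K is about eight orders of magnitude cheaper than radiating into the 2.7 K background, which is the sense in which the hole is a negentropy mine.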

Replies from: timtyler
comment by timtyler · 2009-07-17T17:38:54.752Z · LW(p) · GW(p)

It still doesn't seem to make sense. Building a black hole anywhere near a sentient agent seems like a really, really bad idea. Orbiting around one doesn't help you drop things into it much - because of orbital inertia. The suggestion seems rather like proposing that we dump the planet's excess heat into the Sun - as opposed to radiating it off in all directions. Yes, we could build a heat ray and point it at the sun - but if you think about that for a moment, you will realise why it wouldn't help get rid of entropy, and would actually just make things worse.

The tiny relative temperature difference between the surface of the hole and interstellar space hardly makes much difference if you are many millions of miles away from it. Also, the hole is likely to be surrounded by extremely hot stuff in orbit around it. Are you sure that you have thought this idea through?

Replies from: RolfAndreassen
comment by RolfAndreassen · 2009-07-17T17:59:07.693Z · LW(p) · GW(p)

By the time your civilisation is taking advantage of black holes, it's large enough that even a small temperature difference can scale to quite a bit of negentropy. Further, you don't have to be in orbit, you can build a Dyson shell around the hole at such a distance that the surface gravity is one g. (Or several shells, if people prefer different levels of gravity.) Then there's no orbital velocity to deal with. (And in any case, you could brake by tidal friction and extract some entropy that way.) Or to be shorter, why are you objecting to the practical details of a thought experiment? Nothing about the game theory relies on black holes or the particular exponent 2; it could just as well be mass^1.5, and the analysis would remain the same although the numbers would change a bit.
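Rolf's one-g shell is easy to size: it sits at the radius where GM/r² = g, i.e. r = √(GM/g). The following arithmetic sketch is mine, using a solar-mass hole as the example:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
g = 9.81           # target surface gravity, m/s^2

# Radius at which a solar-mass hole's pull is exactly one g: GM/r^2 = g
r = math.sqrt(G * M_SUN / g)
print(r)  # ~3.7e9 m, about 12 light-seconds -- vastly outside the ~3 km horizon
```

At nearly four million kilometres out, the shell is nowhere near the horizon or any hot accreting matter, which answers part of the "really, really bad idea" worry above.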

Replies from: timtyler
comment by timtyler · 2009-07-17T18:15:12.279Z · LW(p) · GW(p)

How is a Dyson sphere anything other than "in orbit"? Do you not know how they are supposed to work? Incidentally, Dyson spheres are a pretty silly idea as well. Slightly more realistic are rings - e.g. see my http://timtyler.org/the_rings_of_earth/

Replies from: eirenicon
comment by eirenicon · 2009-07-17T19:03:16.833Z · LW(p) · GW(p)

There are multiple types of Dyson sphere. Dyson's original vision, a swarm of satellites, would be in orbit, but the popular version more commonly seen in fiction - a solid shell - would not, any more than the Earth orbits its own core (although any one point on the shell could plausibly be said to orbit the centre, provided the sphere is spinning).

Replies from: billswift, timtyler
comment by billswift · 2009-07-18T09:08:02.459Z · LW(p) · GW(p)

A solid Dyson sphere is a dumb idea; the dynamics are unstable. See Niven's essay on the dynamics of Ringworld for the problems, and realize a sphere would be even worse. I don't remember whether he discussed that in "Bigger than Worlds" or in an essay specifically on building Ringworld; he discussed the dynamics problems in the novels as well.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2009-07-19T21:16:26.130Z · LW(p) · GW(p)

So you have to expend a bit of energy moving it back to the midpoint every so often. What are attitude jets for?

comment by timtyler · 2009-07-17T19:09:41.403Z · LW(p) · GW(p)

In fantasy novels, you mean?

comment by Tiiba · 2009-07-18T06:07:19.391Z · LW(p) · GW(p)

Regarding this discussion, I'm totally confused what people are talking about. It sounds like you want to take some of your excess energy and throw it into a black hole. Wouldn't it be smarter to give it to me? How can energy be "excess"?

Replies from: Wei_Dai, timtyler
comment by Wei Dai (Wei_Dai) · 2009-07-18T09:30:10.707Z · LW(p) · GW(p)

Eliezer has a post that explains some of the background assumed here: http://lesswrong.com/lw/o5/the_second_law_of_thermodynamics_and_engines_of/.

Replies from: Tiiba
comment by Tiiba · 2009-07-19T07:38:22.729Z · LW(p) · GW(p)

I have just finished reading this article. I still have no idea what it is that you intend to do with the black hole, or why it's useful. Seriously, not even an inkling. And I seem to be unique in this regard, which sucks.

The only way that I can think of for a black hole to reduce entropy is if you throw things into it. Give them to me.

Replies from: HalFinney
comment by HalFinney · 2009-07-19T23:01:26.410Z · LW(p) · GW(p)

Tiiba, Wei's earlier post pointed to this article:

http://weidai.com/black-holes.txt

You might also need to know that computation can be done in principle almost without expending energy, and the colder you do the computation, the less energy is wasted. Hence being cold is a good thing, and black holes are very cold.
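Hal's point is Landauer's principle: erasing one bit of information costs at least k_B·T·ln 2 of energy, so the price of computation scales with the temperature of the heat sink. A sketch of mine, comparing room temperature against roughly a solar-mass hole's Hawking temperature:

```python
import math

K_B = 1.381e-23  # Boltzmann constant, J/K

def landauer_cost_joules(temperature_k):
    """Minimum energy to erase one bit at temperature T: k_B * T * ln 2."""
    return K_B * temperature_k * math.log(2)

room = landauer_cost_joules(300.0)   # ~2.9e-21 J per erased bit
bh = landauer_cost_joules(6.2e-8)    # sink near a solar-mass hole's temperature
print(room / bh)  # ~4.8e9: billions of times cheaper per bit near the cold hole
```

The ratio is just T_room/T_hole, but it shows why "being cold is a good thing" translates directly into more computation per joule.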

Replies from: Tiiba
comment by Tiiba · 2009-07-20T03:13:29.892Z · LW(p) · GW(p)

I didn't get it right away, but now that I do, it's pretty ingenious. Let me see if I got it right. Build a big ball in space. If the ball was empty, starlight and cosmic background would heat it up, the inner surface would emit photons, and they would bounce around the shell - so you're back to square one. But the black hole at the center can absorb those photons without becoming hot. And the photons are unusable because they are ambient.

On the other hand, there is now a temperature difference between the inside and the outside. Can it be used to make usable energy?

comment by timtyler · 2009-07-18T06:57:28.230Z · LW(p) · GW(p)

Not energy, entropy. Energy is useful - entropy is useless.

comment by djcb · 2009-07-17T21:45:20.875Z · LW(p) · GW(p)

+1; indeed, this is interesting from a scifi-itch-scratching viewpoint, but I guess we have the next 10^6 years to worry about the details.

Anyway, I like LW for bringing such things to my attention (thanks Wei_Dai!), but apart from being interesting, this doesn't seem like an idea that needs mass-popularization, does it?

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2009-07-18T07:03:18.735Z · LW(p) · GW(p)

You ask a fair question, I think. Here are some potential short-term implications of black-hole negentropy:

  • The far future will most likely not be dominated by an everyone-for-himself type of scenario (like Robin Hanson's Burning the Cosmic Commons). Knowing that, and possibly having a chance to see the far future for yourself, does that affect your short-term goals?
  • There is less need to adopt drastic policies to prevent the Burning the Cosmic Commons scenario.
  • The universe is capable of supporting much more life than we might intuit, even after seeing calculations like the one in Nick Bostrom's Astronomical Waste, which fail to take into account quadratic negentropy. What are the ethical implications of that? I'm not sure yet, but I'd be surprised if there weren't any.
comment by HalFinney · 2009-07-19T23:04:40.377Z · LW(p) · GW(p)

I'd like to see a more popular discussion of Aumann's disagreement theorem (and its follow-ons), and of what I believe is called Kripkean possible-world semantics, the alternative formulation of Bayesian reasoning used in Aumann's original proof. The proof itself is very short, just a couple of sentences, but explaining the possible-world formalism is a big job.

comment by knb · 2009-07-17T18:34:22.653Z · LW(p) · GW(p)

I've never read or watched a piece of science fiction that explored this one.

I believe the Silent Ones in the Golden Age trilogy used black holes for this purpose.

comment by thomblake · 2009-07-17T18:26:20.549Z · LW(p) · GW(p)

Unlike other perhaps equally obvious futuristic ideas such as cryonics, AI and the Singularity, I've never read or watched a piece of science fiction that explored this one.

In Dr. Who, the Time Lords used a black hole as a 'mysterious energy source'.

Replies from: eirenicon, Document
comment by eirenicon · 2009-07-17T18:51:44.664Z · LW(p) · GW(p)

That has as much relevance to black-hole negentropy as Demolition Man does to cryonics. In science fiction, the inability to explain something is indistinguishable from attributing it to magic.

Replies from: thomblake
comment by thomblake · 2009-07-17T18:59:30.927Z · LW(p) · GW(p)

That has as much relevance to black-hole negentropy as Demolition Man does to cryonics.

Meh. Given that the impression was that no science fiction deals with it, I'd count it, just as I'd count Demolition Man as relevant to cryonics.

Replies from: eirenicon
comment by eirenicon · 2009-07-17T19:18:12.988Z · LW(p) · GW(p)

As far as I can recall, the last time we saw a black hole in Doctor Who, the TARDIS pulled another spaceship across its event horizon to safety. Just prior to that, they faced off against the actual literal Devil, who was chained in a hellish inferno inside a moon serviced by telepathic squid-people. I love Doctor Who, but I have a hard time calling it science fiction.

Replies from: thomblake
comment by thomblake · 2009-07-17T19:26:16.783Z · LW(p) · GW(p)

Aha. You're referring to that other show, also coincidentally called Doctor Who. But yes, the original series was just about that silly.

As for the implausibilty of telepathic squid people, just stay out of the dark places of the world and you should be fine for now. Until then, Cthulhu f'thagn.

comment by Document · 2010-11-02T18:45:57.728Z · LW(p) · GW(p)

In Dr. Who, the Time Lords used a black hole as a 'mysterious energy source'.

Same for the Ori in the SG-1 episode Beachhead (transcript here; summary and transcript of prior black-hole episode here and here, which may partly explain the writers' thinking).

comment by timtyler · 2009-07-17T16:58:13.420Z · LW(p) · GW(p)

Re: if it's not clear why black-hole negentropy is important for rationality, it implies that value probably scales superlinearly with material and that huge gains from cooperation can be directly derived from the fundamental laws of physics.

That is supposed to help clear up the issue?!? It has rather the opposite effect here.

comment by timtyler · 2009-07-17T17:21:41.413Z · LW(p) · GW(p)

If anyone else would like to read up on maximum entropy thermodynamics - particularly Dewar's recent work - that would be cool. This material explains much about why self-organising systems (including living ones) behave as they do - in thermodynamic terms. I discuss this here now and again, but - despite the links to Bayes and Jaynes - no-one seems to know very much about it.

A primer: http://en.citizendium.org/wiki/Life/Signed_Articles/John_Whitfield

Replies from: SilasBarta
comment by SilasBarta · 2009-07-17T18:38:20.842Z · LW(p) · GW(p)

That looked to be interesting until I glanced down at Figure 1, which reads:

Entropy and biodiversity are mathematically equivalent, making tropical forests the most entropic [entropy exporting] environments on Earth.

Eeek! Tropical forests the most entropy-exporting? Not, say, the 1000 C regions below the earth's surface? Not volcanoes or geysers?

Replies from: timtyler
comment by timtyler · 2009-07-17T19:05:38.821Z · LW(p) · GW(p)

Volcanoes and geysers are mostly uncommon, intermittent phenomena. Some volcano craters do stay pretty hot, for extended periods, though - it's true.

I'm not sure how to measure the rate of entropy dissipation within the Earth - but I doubt it radiates as much heat from the surface as ultimately comes from the sun.

The insides of nuclear reactors, and other power plants are probably the most entropic places of all - again, per unit area. Whether those count as "environments" could be debated.