Open Thread: October 2009

post by gwern · 2009-10-01T12:49:19.084Z · LW · GW · Legacy · 436 comments

Hear ye, hear ye: commence the discussion of things which have not been discussed.

As usual, if a discussion gets particularly good, spin it off into a posting.

(For this Open Thread, I'm going to try something new: priming the pump with a few things I'd like to see discussed.)

436 comments

Comments sorted by top scores.

comment by CannibalSmith · 2009-10-13T23:28:11.472Z · LW(p) · GW(p)

This post tests how much exposure comments to open threads posted "late" get. If you are reading this then please either comment or upvote. Please don't do both and don't downvote. When the next open thread comes, I'll post another test comment as soon as possible with the same instructions. Then I'll compare the scores.

If the difference is insignificant, a LW forum is not warranted, and open threads are entirely sufficient.

PS: If you don't see a test comment in the next open thread (e.g. I've gone missing), please do post one in my stead. Thank you.

Edit: Remember that if you don't think I deserve the karma, but still don't want to comment, you can upvote this comment and downvote any one or more of my other comments.
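
For what it's worth, here is a minimal sketch of one way to judge "insignificant" once the two scores are in hand, treating each tally as a Poisson count (the example numbers and the |z| > 2 cutoff are illustrative assumptions, not part of the proposal above):

```python
from math import sqrt

def poisson_diff_z(score_a, score_b):
    """Rough z-statistic for the difference of two independent Poisson counts."""
    return (score_a - score_b) / sqrt(score_a + score_b)

# Hypothetical tallies (comments + upvotes) for the early vs. late test comments.
z = poisson_diff_z(score_a=40, score_b=25)
verdict = "significant" if abs(z) > 2 else "insignificant"
print(f"z = {z:.2f} -> difference looks {verdict}")
```

With counts this small, only a fairly large gap clears the bar, which is roughly what the test is asking.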

Replies from: Cyan, Jack, JGWeissman, Alicorn, kpreid, Nick_Tarleton, gwern, thomblake, RobinZ, None, aausch, Tyrrell_McAllister, arundelo, thomblake, AdeleneDawner, AllanCrossman, Vladimir_Nesov
comment by Cyan · 2009-10-14T00:48:03.017Z · LW(p) · GW(p)

don't to both

I not only read it, I spotted a typo. I am the most awesome person ever.

comment by Jack · 2009-10-14T00:42:38.563Z · LW(p) · GW(p)

If the difference is insignificant, a LW forum is not warranted, and open threads are entirely sufficient.

I don't think this is true. One reason to want a forum is to maximize the total views of more narrowly focused posts. If I post a comment in an open thread and it is only of interest to a handful of people here, they might never see it. But if I post in a forum, where the post stays on the page longer and is indexed so that people with my interests can find it, there is a greater likelihood that someone will respond. The proper comparison is between the views a forum post gets and the views an open thread comment gets, not between two open thread comments posted at different times of the month. Plus, some people would like a space where they can post less complete ideas without worrying about getting hit with downvotes.

The way to decide this issue is really simple. Start a forum and see what happens.

(Edit: Also, this is my notice that I read the comment)

Replies from: CannibalSmith
comment by CannibalSmith · 2009-10-14T10:45:15.027Z · LW(p) · GW(p)

One reason against a forum that I can think of is that we'd rather not say low-quality things at all. Maybe we want to force ourselves to put our karma on the line at all times. Maybe we want to deny all opportunity for chatting. Enforce high standards. Discipline ourselves.

comment by JGWeissman · 2010-03-19T01:04:48.649Z · LW(p) · GW(p)

I am replying to this because I saw Nick Tarleton's comment in the recent comments panel, which Nick made because he saw ThomBlake's comment.

Of course, that sort of thing can in fact happen to a normal open thread comment, so it may still be a reasonable test.

comment by Alicorn · 2009-10-14T00:23:05.928Z · LW(p) · GW(p)

I'm reluctant to upvote you for making this test without a karma-equalizing mechanism in place. At the same time, I don't want to mess up your test by failing to reply at all when I did see this comment. So I'm writing this. I feel a little like my good nature has been abused.

Replies from: CannibalSmith
comment by CannibalSmith · 2009-10-14T10:29:31.609Z · LW(p) · GW(p)

Downvote one or more random comments of mine to balance things out.

comment by kpreid · 2009-10-14T00:17:28.865Z · LW(p) · GW(p)

I read the comments feed (and am annoyed that it regularly overflows the only-20-comments limit between checks).

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-10-14T00:19:34.172Z · LW(p) · GW(p)

There is a "Next" button. Also, this counts as my comment.

Replies from: kpreid
comment by kpreid · 2009-10-14T01:33:40.981Z · LW(p) · GW(p)

There is no "Next" button on the comments feed; while there is IIRC a RFC for a formalized "Next page" function, it is not widely implemented.

comment by Nick_Tarleton · 2010-03-19T00:53:57.009Z · LW(p) · GW(p)

I saw thomblake's comment, not this one.

It seems to me that forums provide a better experience for very long-running threads (though such discussions would often warrant top-level posts) (being able to re-root a comment thread under a new post would be a nice feature), and better indexing (ditto both parentheticals).

(FWIW, I tried to establish an unofficial forum for OB in 2008; a maximum of about five people ever used it.)

comment by gwern · 2009-10-13T23:30:00.746Z · LW(p) · GW(p)

I'm commenting only because I saw the comment in the sidebar and wondered who would be posting to a nigh-dead open thread.

comment by thomblake · 2010-03-19T00:12:52.581Z · LW(p) · GW(p)

I wonder if anyone else is reading this...

Replies from: gregconen
comment by gregconen · 2010-03-19T00:55:31.475Z · LW(p) · GW(p)

You should probably make an explicit karma balance post for this.

comment by RobinZ · 2009-10-28T13:23:20.824Z · LW(p) · GW(p)

Wow, there are a lot of people watching the "Recent Comments".

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2009-10-28T19:26:47.123Z · LW(p) · GW(p)

This post tests how much exposure comments to open threads posted "late" get. If you are reading this then please either comment or upvote.

I saw the above quoted request (today, two weeks after it was made) because I saw RobinZ's reply to it (which was made today) at lesswrong.com/comments, got curious about the context of RobinZ's comment, then clicked on its "Parent" link.

Parenthetically, I do not like the idea of running part of this community on "web forum" software (e.g., phpBB) and will not participate unless I have to participate to continue to be part of the community.

Replies from: MichaelBishop
comment by Mike Bishop (MichaelBishop) · 2009-10-28T19:38:52.780Z · LW(p) · GW(p)

I just came across rhollerith's comment.

comment by [deleted] · 2009-10-28T10:14:44.615Z · LW(p) · GW(p)

I read this comment. You may like to note that it was the first comment I saw (I always have my sort set to "Top") and it was quoted in the Google result for this thread, so I couldn't help but do so.

comment by aausch · 2009-10-16T02:47:08.548Z · LW(p) · GW(p)

Check

comment by Tyrrell_McAllister · 2009-10-14T00:55:29.347Z · LW(p) · GW(p)

I saw this comment show up in the Recent Comments bar.

comment by arundelo · 2009-10-14T00:37:52.018Z · LW(p) · GW(p)

Ack.

comment by thomblake · 2009-10-13T23:54:17.313Z · LW(p) · GW(p)

I do check 'recent comments'. Is this supposed to be creating a feedback loop?

Replies from: gwern
comment by gwern · 2009-10-14T00:41:30.879Z · LW(p) · GW(p)

We could see this as the upper bound on exposure for comments posted to an old open thread; it's possible for a really good comment to be posted that invites replies, so logically you'd need to take the feedback loop it might cause into account (if you want to make any generalization about open threads).

comment by AdeleneDawner · 2009-10-13T23:48:38.762Z · LW(p) · GW(p)

I noticed Vlad's comment in the recent comments sidebar, and was curious. Make of that what you will.

comment by AllanCrossman · 2009-10-13T23:29:35.025Z · LW(p) · GW(p)

Saw it, only because I happened to look at recent comments at the time.

comment by Vladimir_Nesov · 2009-10-13T23:38:48.028Z · LW(p) · GW(p)

Or I could ignore this, for obscurity of purpose.

comment by mormon2 · 2009-10-28T16:00:05.273Z · LW(p) · GW(p)

I apologize if this is blunt or already addressed, but it seems to me that the voting system here has a large user-based problem: the karma system has become nothing more than a popularity indicator.

It seems to me that many here vote up or down based on gut-level agreement or disagreement with the comment or post. For example, it is very troubling that some single-line comments of agreement that should, in my opinion, have 0 karma end up with massive amounts, while comments opposing the popular beliefs here are voted down despite being important to the pursuit of rationality.

It was my understanding that karma should be an indicator of importance and a way of eliminating useless information, not just a way of indicating that a post is popular. The popularity of a post is nearly meaningless when you have such a range of experience and inexperience on a blog such as this.

Just a thought; feel free to disagree...

Replies from: RobinZ
comment by RobinZ · 2009-10-28T16:16:17.224Z · LW(p) · GW(p)

I think you're on to something - many commenters (myself included) probably vote based more on agreement or disagreement than on anything else, and this necessarily reinforces the groupthink. If we wanted to fix it, the way to go would be to define standard rules for upvoting and downvoting which reduced the impact of opinion. It cannot be eliminated - if someone says something stupid, for example, saying it should not be rewarded - but a set of clear guidelines could change the karma meter from a popularity score to a filter sorting out the material worth paying attention to.

I think a well-thought-out proposal of such a method could make a reasonable top-level post.

comment by CannibalSmith · 2009-10-01T15:24:55.552Z · LW(p) · GW(p)

So, I'm reading A Fire Upon The Deep. It features books that instruct you how to speedrun your technological progress all the way from sticks and stones to interstellar space flight. Does anything like that exist in reality? If not, it's high time we start a project to make one.

Edit (10 October 2009): This is encouraging.

Replies from: Jack, Aurini, CronoDAS, DanArmak, sketerpot, gwern, gwern, Jack, Jonathan_Graehl
comment by Jack · 2009-10-01T16:53:16.178Z · LW(p) · GW(p)

A lot of stick and stones civilizations that can read, are there?

Agree that it is a cool idea though, does Vinge give more details?

It strikes me that the most crucial aspects of such a book would probably be mechanical engineering (wheels, mills, ship construction, levers and pulleys) and chemical identification (where to find and how to identify lodestones, peat, saltpeter, tungsten) - things no one here is going to have much experience with.

What I'd like to know is what the ideal order of scientific discoveries would be: what would have been possible earlier in retrospect, and which later inventions could have been invented earlier and sped up subsequent innovation the most. Could you teach a sticks-and-stones civilization calculus? What is the earliest you could build a computer? Many countries skipped building landline phone infrastructure and went straight to cellular. Which technologies were necessary intermediate steps and which could be skipped?

Any hypotheses for these questions?

Replies from: AllanCrossman, uselessheuristic, CannibalSmith
comment by AllanCrossman · 2009-10-02T11:15:15.482Z · LW(p) · GW(p)

A lot of stick and stones civilizations that can read, are there?

Not yet.

Replies from: Jack
comment by Jack · 2009-10-05T23:48:18.788Z · LW(p) · GW(p)

Is the likelihood that future sticks and stones civilizations will know how to read such that the first chapter doesn't need to be teaching them how to read the rest of the book? It seems to me that the probability a collapsed civilization is mostly illiterate is high enough to justify some kind of lexical key.

comment by uselessheuristic · 2009-10-02T15:52:48.455Z · LW(p) · GW(p)

In the book it's chemicals (gunpowder) and radios. The application of radios by Vinge's version of non-anthropomorphic intelligences is especially interesting.

What about a "Mote In God's Eye" -style technology bunker? Would having a set of raw materials, instructions, and tomes of information be the ideal setup? Perhaps something along the lines of the Svalbard Seed Vault. What are the most useful artifacts that can survive A) the catastrophe and B) the length of time it takes for the artifacts to be recovered? Such a timeframe could be short or many, many generations long (even geologic time?). Do we want this to potentially survive until the next intelligent being evolves, in the case of total destruction of mankind? What sealing mechanism would still be noticeable and breach-able by a low-tech civilization?

Or do we want to assume there is NO remaining technology and we're attempting to bootstrap from pure knowledge? Either way, I think it would be an interesting problem to solve.

comment by CannibalSmith · 2009-10-02T12:44:49.748Z · LW(p) · GW(p)

Basic electrics are possible as soon as you have decent metalworking. Dynamos are just a bunch of spools of copper wires and magnets. Add some graphite, and you have telephones. ~~Greeks could have made them.~~

Replies from: DanArmak, billswift, Psy-Kosh
comment by DanArmak · 2009-10-02T15:05:17.508Z · LW(p) · GW(p)

A printing press should be easier to make...

comment by billswift · 2009-10-02T14:14:15.566Z · LW(p) · GW(p)

Wire making is easy if you have copper. The real problem is insulating the wire, especially with something flexible enough for winding coils. This is part of the problem of infrastructure - and very few people know enough to really even start working on a serious rebuilding problem, for example after a dinosaur killer impact. I know more than anyone else I have ever met, especially in the areas of food (agriculture and cooking) and shelter (designing, concrete, masonry, carpentry, plumbing, wiring, etc), and even I barely know enough to get started. For example, I don't know of any way to make insulation for wires without an already existing chemical industry, except natural rubber, which would most likely not be available.

Replies from: RobinZ, Jack, Nubulous, CannibalSmith
comment by RobinZ · 2009-11-07T14:00:24.604Z · LW(p) · GW(p)

I've seen cloth-wrapped appliance cords - never tried it, but it might be feasible.

Replies from: billswift
comment by billswift · 2009-11-07T17:16:38.508Z · LW(p) · GW(p)

If you look closer, you'll probably find only the outer protective layers are cloth; I've seen that on a lot of older wiring, but the ones I have seen all had a thin inner layer of rubber right on the copper. Tarred cloth probably would work, as long as the voltages were low enough and there were multiple layers; paper might be even better though. (Most of the old wiring I have seen was a thin layer of rubber next to the copper, then paper wrapping protecting the rubber and separating the individual wires, and the whole bundle protected with fabric.)

comment by Jack · 2009-11-07T09:13:11.930Z · LW(p) · GW(p)

I think I solved the insulation problem. Cockroach bodies should work if you can gather enough.

comment by Nubulous · 2009-10-02T19:15:44.029Z · LW(p) · GW(p)

Wrap cloth around it, or coil a single thread around it, then coat with tar. Lots of organics will turn to "tar" if abused properly, even cooking oils. Also, steel will do instead of copper; you just need a few times as much of it.

On the other hand, is any of this really necessary? If this civilization collapses, there'll be enough ready-made materials lying around to last for a thousand years.

comment by CannibalSmith · 2009-10-02T14:43:30.026Z · LW(p) · GW(p)

Wind each layer sparsely so that wires don't touch and pack insulator (dried leaves) between the layers. Makes for a woefully inefficient spool, but still.

Gotta try this out with scrap metal.

Replies from: billswift
comment by billswift · 2009-10-02T15:23:00.141Z · LW(p) · GW(p)

That would probably work; the only problem is that you would have to know in advance what you were doing - this isn't something that would be tried by an experimenter trying to figure things out, for example.

Replies from: CannibalSmith
comment by CannibalSmith · 2009-10-02T15:50:42.824Z · LW(p) · GW(p)

Hence the book.

comment by Psy-Kosh · 2009-10-02T12:54:41.314Z · LW(p) · GW(p)

Well, did the Greeks have the ability to make decent enough wire in sufficient quantities?

Replies from: gwern, DanArmak, CannibalSmith
comment by gwern · 2009-12-20T03:55:28.566Z · LW(p) · GW(p)

I think they could. Remember the Antikythera mechanism's high quality of fabrication. And fine metal wire was useful for jewelry and art:

comment by DanArmak · 2009-10-02T14:48:24.219Z · LW(p) · GW(p)

I don't know, but even if they could do it, they had no reason to. So we can't really tell.

The real question is - if they really really wanted to and had a book of helpful tips, could they have made decent enough wire? (And could they get copper in sufficient quantities? By Roman times they certainly could.)

Replies from: Psy-Kosh
comment by Psy-Kosh · 2009-10-02T15:17:44.427Z · LW(p) · GW(p)

Could they make it thin enough (even with insulation) to be able to fit large amounts of windings?

ie, assuming they had reason to try, could they do it based on what we know of their capabilities at the time?

Replies from: DanArmak
comment by DanArmak · 2009-10-02T15:39:51.675Z · LW(p) · GW(p)

Incidentally, a radio would be much cheaper to make and almost certainly within their capabilities.

comment by Aurini · 2009-10-01T18:41:57.217Z · LW(p) · GW(p)

This reminds me of an episode of Mythbusters where the crew set up a bunch of MacGyver puzzles for the two hosts - pick a lock with a lightbulb filament, develop film with common household chemicals, and signal a helicopter with a tent and camping supplies.

In all seriousness though, Philosophical Materialism and the Scientific Method are probably the most important things; three years ago I bought my first car for a pack of cigarettes and a $20 Haynes manual. At the time I didn't even know what an alternator was; three months later I'd diagnosed a major electrical problem and performed an engine swap. The manual helped (obviously), but for the most part it was the knowledge that any mechanical device could be reduced to simple causal patterns which allowed me to do this (incidentally, this is a hobby that I strongly recommend to other LW members - you get to put the scientific method into practice in a hands-on manner, and at the end of it you get a car which is slightly less crappy).

I tend to think that the mere knowledge that flying machines are possible will allow the survivors of WWIII to redevelop the prewar tech within a century.

Replies from: randallsquared, Jack, ciphergoth
comment by randallsquared · 2009-10-02T21:21:45.952Z · LW(p) · GW(p)

I tried this with one of my first cars back in the early 90s. It turns out that there are a very large number of things that can go wrong with essentially every step of repairing a car, and I didn't have the money or time to continue replacing parts I'd destroyed or troubleshooting problems I'd caused while trying to fix another problem.

I like programming because it has the same features of tracking down problems, but almost entirely without the autocommit feature of physical reality, as long as you choose to back up and test.

Also, even in the 90s, a computer was far cheaper than a good set of tools.

comment by Jack · 2009-10-05T23:53:53.475Z · LW(p) · GW(p)

Could you explain Philosophical Materialism and the Scientific Method without first having the reader do science? I agree that these might be the most important things, but it isn't clear to me how they can be explained to a civilization that lacks a general scientific vocabulary or the context to interpret things like falsifiability, hypotheses, ontologically fundamental mental entities, etc. Does the most important lesson have to be toward the end of the book?

comment by Paul Crowley (ciphergoth) · 2009-10-02T10:55:26.306Z · LW(p) · GW(p)

incidentally, this is a hobby that I strongly recommend to other LW members - you get to put the scientific method into practice in a hands-on manner, and at the end of it you get a car which is slightly less crappy.

Does the same principle apply to motorcycle maintenance? :-)

Replies from: billswift
comment by billswift · 2009-10-02T14:20:35.403Z · LW(p) · GW(p)

A book I was reading that suggested doing your own minor auto repairs warned strongly against doing motorcycle repairs on anything made after the late 1970s. The author claimed that newer cycles were so tightly integrated, and the tools for working on them so specialized, that you were too likely to get something taken apart that you literally could not reassemble.

Replies from: AngryParsley
comment by AngryParsley · 2009-10-03T21:20:30.391Z · LW(p) · GW(p)

I'd say that's true for modern supersports and superbikes, but a beginner bike like a Kawasaki Ninja EX-250 has very little in the way of electrics or other tightly-integrated mechanisms. Just as an anecdote: I do regular maintenance on my 2006 SV-650/S, but anything more complicated than oil changes on my 1972 Honda CB350 is done by a mechanic. While newer bikes have complicated parts like ECUs and fuel injection, those are usually the most reliable parts. Repairing older motorcycles typically involves scrounging e-bay for parts that are no longer manufactured.

The thing I like most about motorcycles is that they are simple, so it's pretty easy to diagnose any problems. It only takes a minute to tell if you're running lean or rich. Simply starting, hearing, and smelling an engine can tell you whether you just need new piston rings or if you've damaged the crankshaft journal bearings.

If you really want the most mechanically simple vehicle, I'd suggest an old scooter such as a Honda Cub. The set of failure modes for an air-cooled single-cylinder engine is quite small.

comment by DanArmak · 2009-10-01T17:16:30.497Z · LW(p) · GW(p)

What for? There aren't any stick-and-stones cultures around.

Do you assign significant probability to the need for such a book in humanity's future? I don't. It would require that:

  • No technological human societies survive
  • Adults who know the relevant things don't survive
  • Technological artifacts and particularly sources of knowledge (e.g., copies of encyclopedias or entire libraries-on-disk) don't survive

But also that:

  • Some people survive all this
  • Such a book will survive all this and there will be a high chance of a copy being found by survivor groups
  • Survivors will be able to use the book (this requires resources like extra food/manpower to sink into a rebuilding project, and the organization/government to provide them) - and in fact knowledge must be what the survivors mostly lack

Replies from: Gavin, Johnicholas, billswift, CannibalSmith, whpearson
comment by Gavin · 2009-10-02T05:01:53.513Z · LW(p) · GW(p)

There's a huge difference between having the raw knowledge available and having simple step-by-step instructions.

A book created for this express purpose would be an order of magnitude more useful than any number of encyclopedias or even entire libraries. A big challenge would be even knowing what to research--if you don't have the next technology, you may not even know what it will be.

The biggest obstacle is really distribution. What you'd need is a government, church, or NGO to put a copy in every branch or something.

Maybe you could donate a copy to every prison library. Prisons would actually be a really defensible location to stay post-societal collapse . . .

comment by Johnicholas · 2009-10-01T17:45:25.176Z · LW(p) · GW(p)

We can imagine a handbook that is written to be useful for a broad spectrum of possible disastrous situations.

The handbook could be written for post-disaster survivors finding themselves in many possible situations. For example, your first bullet "No technological human societies survive" could be expanded to "(No|Few|Distant|Hostile) technological human societies survive". Indeed, uncertainty about which of the aforementioned possibilities actually hold might be quite probable, given both a civilization-destroying disaster and some survivors.

To some extent, the Long Now's Rosetta project (to build sturdy discs inscribed with examples of many languages) is an example of this sort of handbook.

http://rosettaproject.org/

Replies from: DanArmak
comment by DanArmak · 2009-10-01T20:10:47.389Z · LW(p) · GW(p)

I agree a knowledge repository would be very useful for survivors right after the disaster. But I don't think any scenario is probable that involves a society with a reasonably stable level of technology and food production existing and profiting from such a book.

BTW, the Rosetta project seems to be purely about describing languages so future people can understand them.

For example, your first bullet "No technological human societies survive" could be expanded to "(No|Few|Distant|Hostile) technological human societies survive".

If a few distant technological societies survive, even just one with some reasonable shipping & industry, then I expect they will quickly establish contact with most of the world, if only to exploit natural resources & farming. Most or all tech. economies today rely on many imports of minerals, food, etc. And knowledge and technology would be dispersed quicker with the assistance of this society than by means of such a book.

If a 'hostile' society survives - well, hostile towards whom? Towards all other, non-high-tech survivors? I don't see this as the default attitude of a surviving society that's the most powerful country left on Earth, so without knowing more I hesitate to try to empower whoever they're hostile towards. What did you have in mind here?

Replies from: Johnicholas
comment by Johnicholas · 2009-10-01T20:41:15.637Z · LW(p) · GW(p)

Your first point is that the handbook is not likely to be useful for the purpose of helping reconstruction after a disaster, because the chance of a disaster being total enough to destroy technology, but not total enough to destroy humanity, is small. I agree completely - you have a very strong argument there.

However, you go on to argue that IF a technology-destroying-humanity-sparing disaster occurred, THEN technological societies would quickly establish contact, disperse knowledge, et cetera. In this after-the-disaster reasoning, you're using our present notions of what is likely and unlikely to happen.

Reasoning like this beyond the very, very unlikely occurrence seems fraught with danger. For such an unlikely event to occur, we must have something significantly wrong with our current understanding of the world. If something like that happened, we would revise our understanding, not continue to use it. Anyone writing the handbook would have to plan for a wild array of possibilities.

Instead of focusing on the fact that the handbook is not likely to be used for its intended purpose, consider:

  1. Might it have side benefits, spin-offs from its officially-intended purpose?
  2. Is it harmful?
  3. Is it neat, cool, and fun?
Replies from: DanArmak
comment by DanArmak · 2009-10-01T20:48:19.338Z · LW(p) · GW(p)

If we assume that there is "something significantly wrong with our current understanding of the world" but don't know anything more specific, we can't come to any useful conclusions. There's a huge number of things we could do that we think aren't likely to be useful but where we might be wrong.

So is writing this book something we should do (as the original comment seemed to suggest)? No. But I agree it's something we could do, is very unlikely to be harmful, and is neat and fun into the bargain.

With that said, I'm going back to working on my cool, neat, fun, non-humanity-saving project :-)

comment by billswift · 2009-10-02T14:28:51.540Z · LW(p) · GW(p)

Adults who know the relevant things don't survive

Actually, all you would need for serious problems is for none of the relatively few people who know the essential details of a critical piece of support technology to survive - or at least none in your group or that you otherwise have access to. Since you can't know ahead of time what bits of information you might lose, having references to everything possible only makes good sense, especially given how relatively inexpensive references are now. Cheap insurance against a very unlikely result (and of course, they can also be helpful day-to-day).

Replies from: DanArmak
comment by DanArmak · 2009-10-02T14:58:54.714Z · LW(p) · GW(p)

There's a mixup of two different scenarios here.

What you seem to be talking about is a group of people a few years to a few decades post collapse, who want to operate or rebuild preexisting tech and need a reference work. If they had a copy of wikipedia plus a good technical & reference library, it would probably answer most of their needs. A special book isn't essential.

What I was talking about is a group of people completely lacking pre-collapse knowledge and experience. You can't give them instructions for building a radio because they tend to ask questions like "what's a screwdriver?" and "how can I avoid being burnt as a witch?" That's what a real stones-and-sticks to high-tech guide book needs to address.

Replies from: billswift
comment by billswift · 2009-10-02T15:26:39.305Z · LW(p) · GW(p)

You might think of "my book" as a subset of yours. My book would be more likely to be useful (though hopefully not) and could be expanded to add the material necessary for yours. And your book would be a library in itself; there is no possible way that such a "book" would not span many volumes.

Replies from: DanArmak
comment by DanArmak · 2009-10-02T15:35:13.691Z · LW(p) · GW(p)

A single long "book" would have high quality cross links, well ordered reading sequences, a uniform style, no internal contradictions, etc. In that sense it's a book as opposed to a library collection.

comment by CannibalSmith · 2009-10-02T12:37:52.360Z · LW(p) · GW(p)

Do you assign significant probability to the need for such a book in humanity's future?

Black Swan.

Replies from: DanArmak
comment by DanArmak · 2009-10-02T15:06:17.824Z · LW(p) · GW(p)

Just saying "black swan" isn't enough to give higher probability. If you think I can't assign any meaningful probability at all to this scenario, why?

Replies from: billswift, CannibalSmith
comment by billswift · 2009-10-02T15:29:41.100Z · LW(p) · GW(p)

I don't believe anyone can assign meaningful very small or very large probabilities in most situations. It is one of my long-running disagreements with people here and on OB.

Replies from: DanArmak
comment by DanArmak · 2009-10-02T15:42:09.370Z · LW(p) · GW(p)

There are indeed many known human biases of this kind, plus general inability to predict small differences in probability.

But we can't treat every low probability scenario as being e.g. of p=0.1 or some other constant! What do you suggest then?

Replies from: billswift
comment by billswift · 2009-10-02T17:25:55.473Z · LW(p) · GW(p)

I don't know of a unified way of handling extremely small risks, but there are two things that can be helpful. First, as suggested by Marc Stiegler in "David's Sling", simply recognize explicitly that they are possible; that way, if they do occur, you can get on with dealing with the problem without also having to fight disbelief that it could have happened at all. Second, different people have different perspectives and interests and will treat different low-probability events differently; this dispersion of views and preparation will help ensure that someone is at least somewhat prepared. As I said, neither of these is really enough, but I simply can't see any better options.

comment by CannibalSmith · 2009-10-02T15:48:45.609Z · LW(p) · GW(p)

I'm saying "Black Swan" to compress the following message: We cannot assign probability at all because we don't have statistics. Nevertheless, the stakes are so high that we should be overly cautious. We need the book "just in case". It's a very specific, actionable step in existential threat mitigation. Unlike other measures it requires no new discoveries but just a modest investment of money and time.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-10-02T15:56:11.594Z · LW(p) · GW(p)

You have to assign probabilities anyway. See the amended article:

Considering some event a black swan doesn't give you leave to not assign any probabilities, since making decisions that depend on the plausibility of such an event is still equivalent to assigning the probabilities that make the expected utility calculation yield those decisions.

Replies from: CannibalSmith
comment by CannibalSmith · 2009-10-02T16:20:30.746Z · LW(p) · GW(p)

Okay, okay! How much is our civilization worth? Say, 10^20 dollars. If I had the money, I would be willing to part with 10^6 dollars to develop, manufacture, and distribute the book. Therefore, the probability of the book serving its primary purpose is 10^(-14).
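
Spelled out, and assuming a simple risk-neutral expected-value trade-off (an assumption the comment doesn't state), this is a break-even bound: paying cost C to protect value V is worthwhile only if the probability p of the book mattering satisfies

```latex
p \cdot V \ge C
\quad\Longrightarrow\quad
p \ge \frac{C}{V} = \frac{10^{6}}{10^{20}} = 10^{-14}
```

So 10^(-14) is the smallest probability at which the purchase pays for itself, rather than an estimate of the probability itself.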

Replies from: DanArmak
comment by DanArmak · 2009-10-02T16:44:41.920Z · LW(p) · GW(p)

How much is our civilization worth? Say, 10^20 dollars.

That's meaningless. You can't assign a value in dollars to the continued existence of our civilization. Dollars are only useful for pricing things inside that civilization. (Some people argue for using utilons to price the civilization's existence.)

If I had the money, I would be willing to part with 10^6 dollars to develop, manufacture, and distribute the book. Therefore, the probability of the book serving its primary purpose is 10^(-14).

The amount you're willing to pay is a fact about you, not about the book's usefulness. You're saying you estimate its probability of usefulness at 10^-14. But why?

Replies from: wedrifid
comment by wedrifid · 2009-10-02T16:56:55.498Z · LW(p) · GW(p)

Clearly the market for civilization creation books is efficient.

Replies from: DanArmak
comment by DanArmak · 2009-10-02T17:03:20.680Z · LW(p) · GW(p)

Nice point. Maybe we should instead talk about scenarios where humanity (including us) no longer suffers aging but a collapse still occurs.

Incidentally, I wonder what the market price for writing a civilization-destroying book might be?

Replies from: wedrifid
comment by wedrifid · 2009-10-02T17:06:18.190Z · LW(p) · GW(p)

I believe the going rate is 45 virgins in the afterlife.

comment by whpearson · 2009-10-02T10:03:45.601Z · LW(p) · GW(p)

I see scenarios like the following as not impossible.

90% of the human population dies from a plague/meteor along with the knowledge/sufficient numbers to maintain things like power plants, steel mills and the trappings of modern life. Those people that are left with the knowledge have to spend all their time subsistence farming just to survive.

A few generations later, when the population has increased a bit and subsistence farming has improved in yield due to experience, people want to recreate technology with just the knowledge passed down by word of mouth.

comment by sketerpot · 2009-10-04T18:18:32.104Z · LW(p) · GW(p)

There's a time-traveler's cheat sheet that covers a lot of the basics. (Credit goes to Ryan North.)

Replies from: CannibalSmith
comment by CannibalSmith · 2009-10-04T23:26:18.321Z · LW(p) · GW(p)

TAKE THE CREDIT

Replies from: ata
comment by ata · 2009-10-05T00:36:55.750Z · LW(p) · GW(p)

I'm going to go back in time and take credit for that cheat sheet.

comment by Jack · 2009-10-05T23:55:11.079Z · LW(p) · GW(p)

So, does anyone want to write out a very preliminary table of contents? Other ideas about how such a book would be organized?

Replies from: DanArmak
comment by DanArmak · 2009-10-06T13:04:05.164Z · LW(p) · GW(p)

You have to handle two issues first:

  1. What if your audience can't read? What if they don't understand English, because they're rural Chinese, or because it's 3000 AD?
  2. Supposing that they read the first chapter out of curiosity, you need to convince them it'll be worth their while to read the rest of the book and do what it tells them to, at the cost of time, effort, materials, money and reputation.

You also need to choose a catchy title. I recommend From Sticks and Stones to Atom Bombs: How to Build Your Own World-Destroying Civilization In Only 30 Days!!!

Replies from: Jack
comment by Jack · 2009-10-06T16:27:09.815Z · LW(p) · GW(p)

Right, I'm thinking the first chapter will have to teach numbers, logical connectors, and a basic English vocabulary. Additional vocabulary can be added throughout the book. We'll just have to hope that the reader can understand more or less universal symbols: arrows to point directions, circles to indicate groupings, proximity to indicate labels, etc. Also, a section on anatomy will be less helpful the more they've mutated.

I think arithmetic can probably be taught with reference to dots. So:

" " =

" "= 4

" + " = 4 etc.

Geometry shouldn't be a problem either. The whole thing would have to be heavily illustrated anyway.

Maybe the first couple pages should just depict really happy people using technology paired with stone agers looking miserable.

comment by Jonathan_Graehl · 2009-10-05T23:29:41.188Z · LW(p) · GW(p)

This, along with building an AI that can self-improve by reading instructional material intended for humans, was a cherished childhood fantasy of mine.

Now, I implement machine learning algorithms to be used in dumb statistical NLP systems.

comment by pdf23ds · 2009-10-02T02:46:51.423Z · LW(p) · GW(p)

What's the best way to follow the new comments on a thread you've already read through? How do you keep up with which ones are new? It'd be nice if there were a non-threaded view. RSS feed?

Replies from: Kaj_Sotala, rhollerith_dot_com
comment by Kaj_Sotala · 2009-10-02T11:54:59.006Z · LW(p) · GW(p)

Scanning through the new comments page is probably your best bet, though I wish there was a better solution.

Replies from: matt
comment by matt · 2009-10-11T20:08:28.326Z · LW(p) · GW(p)

Any ideas for a better solution? The devs are busy, but they're listening (and if the devs don't have time, the code is open).

Replies from: pdf23ds, Kaj_Sotala, Jonii
comment by pdf23ds · 2009-10-11T23:19:50.332Z · LW(p) · GW(p)

My idea would be to just have a link to an article-specific "recent comments" page on each article.

(But if they're going to work on anything, they might want to work first on the bug I posted about elsewhere in this thread.)

Replies from: matt
comment by matt · 2009-10-13T03:23:31.937Z · LW(p) · GW(p)

(But if they're going to work on anything, they might want to work first on the bug I posted about elsewhere in this thread.)

Hmm… raised as Issue 194

comment by Kaj_Sotala · 2009-10-27T11:05:06.163Z · LW(p) · GW(p)

My thought would be a "recent posts in your subscribed threads" kind of a feature, as they have on forums. In other words, an ability to add specific posts to a personal watchlist, and then have a page like the "new comments page" that only shows comments to posts on your watchlist.
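
A minimal sketch of the filtering such a watchlist page would need (Python, with hypothetical field names; nothing here reflects the actual LW codebase):

```python
from dataclasses import dataclass

@dataclass
class Comment:
    post_id: str
    author: str
    body: str

def watched_recent_comments(recent, watchlist):
    """Keep only the recent comments whose parent post is on the user's watchlist."""
    watched = set(watchlist)
    return [c for c in recent if c.post_id in watched]

# Hypothetical data: a user watching one post sees only the comments on that post.
recent = [
    Comment("open-thread-october-2009", "gwern", "..."),
    Comment("some-other-post", "Jack", "..."),
]
print(watched_recent_comments(recent, watchlist=["open-thread-october-2009"]))
```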

comment by Jonii · 2009-10-13T03:36:02.976Z · LW(p) · GW(p)

Something like the playback-feature on Google Wave would rock. Some neat way to specify that you only want to see (or highlight) comments that were made after some specific time would also be nice.

Replies from: matt
comment by matt · 2009-10-14T01:46:11.896Z · LW(p) · GW(p)

Something like the playback-feature on Google Wave would rock.

Yes, that would rock. Unfortunately it's not a small feature.

comment by RHollerith (rhollerith_dot_com) · 2009-10-04T20:13:54.107Z · LW(p) · GW(p)

What's the best way to follow the new comments on a thread you've already read through? How do you keep up with which ones are new?

Kaj's suggestion (http://lesswrong.com/comments/) is your best bet, but there is another option that might merit consideration: if you happen to know that a new relevant comment is likely to be authored by JohnJones or Sally_Smith, keep an eye on the following 2 non-threaded views:

http://lesswrong.com/user/JohnJones

http://lesswrong.com/user/Sally_Smith

Pages like those 2 have RSS feeds associated with them, BTW. But, yeah, it would be nice if there were more options.

comment by gwern · 2009-09-27T18:15:28.724Z · LW(p) · GW(p)

One of the old standard topics of OB was cryonics: why it's great even though it's incredibly speculative & relatively expensive, and how we're all fools for not signing up. (I jest, but still.)

Why is there so much less interest in things like caloric restriction? Or even better, intermittent fasting, which doesn't even require cuts in calories? If we're at all optimistic about the Singularity or cryogenic-revival-level-technology being reached by 2100, then aren't those way superior options? They deliver concrete benefits now, for a price that can't be beat, and on the right time-scale*.

Yet I don't think I've seen Robin or Eliezer even once say something like "and if you don't buy the benefits of cryogenic preservation, why on earth aren't you at least doing CR?".

* Assume we're in or close to our teens - as many of the readers are - and would live to 80 or 90 due to our family background; that pushes our death date out to ~2080; assume CR/IF deliver smaller benefits in humans than in lower organisms - say, 20%; that gets us another 18 years, or to 2098, which is close enough to 2100 as to make no difference.
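
Making that arithmetic explicit (a sketch only; the birth year, the ~85-year baseline, and the 20% benefit are the footnote's assumptions, not established figures):

```python
birth_year = 1995          # "in or close to our teens" in 2009
baseline_lifespan = 85     # midpoint of "live to 80 or 90"
cr_benefit = 0.20          # assumed human benefit, smaller than in lower organisms

baseline_death = birth_year + baseline_lifespan   # ~2080
extra_years = cr_benefit * 90                     # 20% of a ~90-year lifespan = 18 years
cr_death = baseline_death + extra_years           # ~2098

print(f"baseline ~{baseline_death}, with CR/IF ~{cr_death:.0f}")
```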

Replies from: gwern, knb, PlaidX, timtyler, LauraABJ, Eliezer_Yudkowsky, AngryParsley, CannibalSmith
comment by gwern · 2009-09-27T18:22:31.384Z · LW(p) · GW(p)

In the same vein, although I fear I tread too close to 'life-hacking' territory here (and I recall the LW community had consciously decided to avoid descending down into the 'cool tips/tools' territory? or am I wrong about that?), I've noticed very little discussion of the various substances labeled 'nootropics'.

We discussed quite a bit how to motivate ourselves and increase the percentage of time spent being 'productive'; shouldn't it be equally fascinating to us that things like modafinil* can eliminate the need for sleep, gaining hours? Even if modafinil's benefits, averaged out, cut the need for sleep only by half or by a quarter, well, it's the rare productivity or mind technique that saves you 4 and a half or 2 and a quarter hours a day.

* which I know for a fact some LWers happily & effectively use

Replies from: CannibalSmith, wedrifid
comment by CannibalSmith · 2009-10-01T14:27:16.395Z · LW(p) · GW(p)

and I recall the LW community had consciously decided to avoid descending down into the 'cool tips/tools' territory?

This is an Open Thread. No restrictions here. Though, I wish we'd replace these with a proper forum that's active throughout the month.

Replies from: AngryParsley, Douglas_Knight, zaph, Cosmos, bogus, matt
comment by AngryParsley · 2009-10-02T04:50:09.693Z · LW(p) · GW(p)

I'm not a fan of having a Less Wrong forum. One of LW's advantages is that it has low volume and high quality. It doesn't take much of my time to read and most of the posts are worth reading. Forums are the opposite: higher volume and lower quality. This makes forums a bigger time sink for everyone: moderators, posters, and readers.

Replies from: zaph
comment by zaph · 2009-10-02T14:52:39.837Z · LW(p) · GW(p)

I think the low volume high quality nature of the LW front page is why a forum would be a bonus. People could hash out more low to mid quality ideas without detracting from the more developed postings that the readers who want to invest less time are looking for. I'm not a fan of a forum in lieu of the current LW format, but as an idea incubator, I think it could be interesting and of use.

comment by Douglas_Knight · 2009-10-12T17:37:45.878Z · LW(p) · GW(p)

I wish we'd replace these with a proper forum that's active throughout the month.

If the open thread were always visible somewhere in the sidebar, would that constitute "a proper forum" for you? or if there were weekly open threads?

comment by zaph · 2009-10-01T19:01:41.409Z · LW(p) · GW(p)

I think a forum here would be fantastic. I don't believe it would detract from the articles, it would just give discussions that have potentially smaller interest bases a chance to still develop.

comment by Cosmos · 2009-10-01T15:58:37.767Z · LW(p) · GW(p)

I definitely agree that a forum would allow for more discussion, particularly of the less-momentous but still-beneficial topics. In particular, I think that discussions of actual strategies people have tried, what has worked and not worked, could actually be highly beneficial. I see them as data we need to collect in order to begin forming some kind of method for actually helping rationalists win in real world situations.

Replies from: Aurini
comment by Aurini · 2009-10-01T18:54:34.825Z · LW(p) · GW(p)

Even a general forum would be great - I wouldn't mind finding out what books and movies the rest of LW enjoys; this place is what turned me onto Torchwood. Though I could understand worries that it might distract from the core purpose of this site.

comment by bogus · 2009-10-11T21:45:18.893Z · LW(p) · GW(p)

AIUI, the forum idea was tried for Overcomingbias.com back when it was a shared-authorship blog. It didn't quite work out.

There's plenty of opportunity to hash out lower-interest points here. In addition to the monthly open threads, just clicking on "Recent posts" in the sidebar will bring up a list of posts which didn't make the front page.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2009-10-12T17:33:33.904Z · LW(p) · GW(p)

AIUI, the forum idea was tried for Overcomingbias.com back when it was a shared-authorship blog. It didn't quite work out.

What are you referring to?

comment by matt · 2009-10-11T21:12:38.489Z · LW(p) · GW(p)

Some life hacking: narrow the distance between "I wish" and "I will". Shared hosting starts in the realm of $5/month. Open source forum software is very available. With fairly basic computer skills and Google you're probably not more than 6hrs away from having the forum you want. Some early research might narrow that gap further.

Replies from: CannibalSmith
comment by CannibalSmith · 2009-10-11T22:40:28.422Z · LW(p) · GW(p)

A forum is people, first and foremost. I see no way I could attract LWers to a forum on a separate site. Besides, that is not what I want at all. I want a forum here.

comment by wedrifid · 2009-10-12T18:04:31.030Z · LW(p) · GW(p)

We discussed quite a bit how to motivate ourselves and increase the percentage of time spent being 'productive'; shouldn't it be equally fascinating to us that things like modafinil* can eliminate the need for sleep, gaining hours?

More fascinating for me is how modafinil improves my motivation.

Replies from: gwern
comment by gwern · 2009-10-12T18:15:21.419Z · LW(p) · GW(p)

Yes, I've noticed that too, but it's hard to say what it is: is it a simple placebo effect, or is it the miser in me saying 'you spent $1.20 on modafinil for today, and dammit you'd better get >1.20 out of it!', or is it the reduction of tiredness, or the sense of lots of time in front of one (I think of Lin Yutang's quote: "A man who has to be punctually at a certain place at five o'clock has the whole afternoon from one to five ruined for him already.")?

Or something else entirely, like that one notices the drop in motivation only when stopping modafinil, and this drop might be due just to recovery from usage? (A slow depletion of dopamine, e.g.)

If it's this last suggestion, then the motivation effect is just a modest version of the amphetamine motivation-then-crash - but what makes modafinil most interesting is that it by and large seems like a 'free lunch', and those are so rare in biology/pharmaceuticals.

Replies from: SilasBarta
comment by SilasBarta · 2009-10-12T18:55:11.511Z · LW(p) · GW(p)

A lot of commenters outside America on this one? You need a prescription for Modafinil in the US.

Replies from: gwern
comment by gwern · 2009-10-12T22:53:18.157Z · LW(p) · GW(p)

You need a prescription for Modafinil in the US.

Yes. Yes, you do.

comment by knb · 2009-10-02T05:16:01.013Z · LW(p) · GW(p)

The real costs of caloric restriction are very high. We experience all sorts of negative symptoms, like lack of attention/lack of sexual function and physical pain when we are hungry. I am quite certain that I couldn't achieve a true CR diet if I tried. Even if I made a strong effort, there is still a fair chance I will wind up in an unhappy medium, in which I don't achieve the benefits because I couldn't pass some threshold at which CR becomes effective.

In fact, for most people, CR is probably impossible. Most of us do not even have the willpower to keep our weights in the "acceptable" range in spite of the fact that we idealize lean, low-fat bodies. We're battling millions of years of evolutionary programming.

However, we might see some of the same benefits from taking resveratrol or the forthcoming sirtuin drugs. Resveratrol is pretty cheap, much cheaper than CR (in terms of suffering), so I bet that would be a better candidate for most people than attempting (and likely failing) CR.

Replies from: matt, gwern
comment by matt · 2009-10-11T21:34:22.192Z · LW(p) · GW(p)

knb: I found it a little hard to separate your experience from your speculations there - could you clarify the meaning of "we experience" vs. "I couldn't achieve a true CR diet if I tried"? I suspect that you're speculating.

CR isn't a line you need to get over - more CRON (CR with Optimal Nutrition) is better: http://www.crsociety.org/files/images/cr-youth.gif

I don't CR as much as I'd like to, but I lost about 18% of my body weight from my set point (at which point my family instructed me not to look any freakishly thinner)… and it was only hard at first. Some of what makes it easier is habit, some is clearing the high GI cycle from your system (once I stopped eating high GI foods I fairly quickly stopped craving high GI foods), but I think most of it is simple life hacking:

  • shop on a full stomach
  • buy good snacks that are not very tasty (nuts, seeds, etc)
  • don't leave any food in plain view in your house or workplace
  • if someone gives/leaves bad food in your house, throw it in the bin as soon as you can
  • plan your meals in advance and shop only for what you've planned to eat
  • etc. - every time you see a temptation you have to spend mental energy to overcome it, so remove them
comment by gwern · 2009-12-20T04:01:42.227Z · LW(p) · GW(p)

knb: but what about IF? You get all the calories you want there. From my college days with the buffet, I remember on more than a few occasions I would simply not eat for a day and then the next day I would gorge. (I wasn't losing weight during this time, just to be clear, and I was also more athletic than my norm.)

Replies from: knb
comment by knb · 2009-12-20T22:23:36.902Z · LW(p) · GW(p)

That's actually really interesting. When I was an undergrad, I "accidentally" used intermittent fasting as well. I was about 20 lbs overweight when I started school one year, and I managed to lose 25 lbs by accident, in spite of the fact that I regularly binged after 24 hours of being too busy to eat.

My (limited) understanding implies this kind of thing is unhealthy and leads to suboptimal mental functioning.

Replies from: gwern
comment by gwern · 2009-12-21T00:54:14.049Z · LW(p) · GW(p)

My (limited) understanding implies this kind of thing is unhealthy and leads to suboptimal mental functioning.

If there's any unhealthiness to it, I didn't notice. It seemed to work out fine with my fencing & Taekwondo.

But mental functioning I really don't know. I ate pretty healthily even in the binging phase, but I know from my N-backing and polyphasic sleep experiments that one can be utterly unaware of even large deficits (or surpluses), and I was using no mental benchmark or task back then, so I would have remained unaware.

comment by PlaidX · 2009-10-06T06:39:43.702Z · LW(p) · GW(p)

I eat between 1200 and 1500 calories a day. I found it surprisingly easy to make the transition.

I've also tried polyphasic sleep, which would be a huge tangible benefit if I could get it to work, but I simply lack the willpower to stick with it through the transition period.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2009-10-06T12:22:58.640Z · LW(p) · GW(p)

Did you try polyphasic sleep before or during CR? I think MV found it only possible during CR.

Replies from: PlaidX
comment by PlaidX · 2009-10-06T22:32:22.220Z · LW(p) · GW(p)

I've tried it several times, both before and during.

comment by timtyler · 2009-10-01T19:49:00.455Z · LW(p) · GW(p)

One of my videos is about the topic. See:

"Tim Tyler: Why dietary energy restriction works"

comment by LauraABJ · 2009-10-12T19:03:00.382Z · LW(p) · GW(p)

I think that's an excellent question. I would guess that it's harder to do CR/regular fasting than to sign up for cryonics, and not many people want to preach what they don't practice. I take flaxseed oil daily, which is perhaps the easiest, if not the best, way to improve long-term health.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-10-01T14:40:13.691Z · LW(p) · GW(p)

1) I can't work and starve at the same time.

2) State of evidence in favor of CR wasn't very good last time I checked. I recall something along the lines of, "Cutting calories by 40% extends the lifespan of (some short-lived creature) by a week, and it's looking like it may extend human lifespan by a week as well."

Replies from: Cosmos, Eliezer_Yudkowsky, Vladimir_Nesov
comment by Cosmos · 2009-10-01T19:30:20.811Z · LW(p) · GW(p)

1) I can't work and starve at the same time.

That assumes you're starving during intermittent fasting. Many practitioners actually find that they are much more clear-headed when they have not eaten recently.

My guess is that you're equating hypoglycemia with hunger. I eat a paleo diet, which has low levels of dietary carbohydrates. This forces the body to use gluconeogenesis to meet its glucose needs. Because you're producing it endogenously, your blood sugar remains completely steady. You only suffer from hypoglycemia when you're dependent upon exogenous sources of glucose, forcing you to eat every few hours. I much prefer the freedom to eat whenever I want.

Replies from: Vladimir_Nesov, i77, Cyan
comment by Vladimir_Nesov · 2009-10-01T19:53:07.813Z · LW(p) · GW(p)

Many practitioners actually find that they are much more clear-headed when they have not eaten recently.

I find that I'm more light-headed when I haven't eaten enough, but it's not the same as clear-headed.

comment by i77 · 2009-10-02T21:31:21.748Z · LW(p) · GW(p)

I eat a paleo diet, which has low levels of dietary carbohydrates. ... I much prefer the freedom to eat whenever I want.

I just wanted to add myself as another data point: I have been low-carb for three months and I can vouch for this. (I also lost 10 kg)

If only I had known this when I was a kid. So many mid-mornings at school, hungry (and suddenly sleepy) because of "healthy" breakfast cereals!

comment by Cyan · 2009-10-01T20:20:08.213Z · LW(p) · GW(p)

There's prior discussion on this subject that you haven't read -- in particular, this.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2009-10-02T00:01:23.162Z · LW(p) · GW(p)

There's even been a little discussion of hypoglycemia.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-10-04T22:49:41.664Z · LW(p) · GW(p)

Update: Gregory Benford (who recently founded a life extension company) says that CR slows down life processes in both flies and humans. You live longer but you're less active. Sounds plausible.

comment by Vladimir_Nesov · 2009-10-01T14:49:16.454Z · LW(p) · GW(p)

I remember that there is a considerable benefit for mice (not a week), but no good evidence for people. On the other hand, there is lots of evidence about correlation of weight with all sorts of diseases, which themselves kill.

Replies from: MichaelETurner, anonym, Eliezer_Yudkowsky
comment by MichaelETurner · 2009-10-01T15:12:42.386Z · LW(p) · GW(p)

I remember that there is this resource called Wikipedia. So I look up Calorie Restriction. I find there's a very detailed research summary there.

On primates, it starts out with this:

"A study on rhesus macaques, funded by the National Institute on Aging, was started in 1989 at the University of Wisconsin-Madison. This study showed that caloric restriction in rhesus monkeys blunts aging and significantly delays the onset of age related disorders such as cancer, diabetes, cardiovascular disease and brain atrophy. The monkeys were enrolled in the study at ages of between 7 and 14 years; at the 20 year point, 80% of the calorically restricted monkeys were still alive, compared to only half of the controls...."

The section on negative effects talks mainly about what happens when nutrition is poor, or when calories are too low to sustain life. My favorite: "A calorie restriction diet can cause extreme hunger that may lead to binge eating behaviour." Uh-huh. Every guide on CR I've read counsels taking a gradual approach, to give your body time to adjust, and people on CR diets often report that the feelings of hunger attenuate.

The so-called CRON approach ("Calorie Restriction with Optimal Nutrition") focuses on preventing malnutrition, as you'd expect from the name, and is decidedly not "starvation", which is obviously an eventually terminal condition.

There is promising, but inconclusive, evidence for a positive effect with human beings. If it works well for monkeys, yeast, fruit flies, nematodes and mice, it's hard to see why it wouldn't work for human beings. But human beings are exceptional in a number of ways, so I suppose it's possible it doesn't work for us.

Replies from: CronoDAS
comment by CronoDAS · 2009-10-01T20:59:45.013Z · LW(p) · GW(p)

But human beings are exceptional in a number of ways, so I suppose it's possible it doesn't work for us.

Indeed. Most mammals tend to have roughly the same number of heartbeats in a lifespan; short-lived mammals such as mice have much faster heartbeats than long-lived mammals such as elephants. Nearly every mammal on the planet (except those that hibernate) has a lifespan of about one billion heartbeats, give or take a few hundred million here and there.

Humans have a lifespan of two billion heartbeats.

Compared to other mammals, we already have a greatly enhanced lifespan. It's quite possible that whatever switch calorie restriction turns on in mice, humans already have turned on by default.
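(A rough sanity check on those round numbers; the resting heart rates and lifespans below are my own assumed ballpark figures, not anything from the comment:)

```python
# Back-of-the-envelope lifetime heartbeat counts.
# Heart rates (beats per minute) and lifespans (years) are assumed ballpark
# figures, chosen only to illustrate the rough scaling.
MINUTES_PER_YEAR = 60 * 24 * 365

def lifetime_beats(bpm, years):
    return bpm * MINUTES_PER_YEAR * years

print(f"mouse:    {lifetime_beats(600, 3):.1e}")   # ~9e8, about a billion
print(f"elephant: {lifetime_beats(30, 65):.1e}")   # ~1e9
print(f"human:    {lifetime_beats(60, 70):.1e}")   # ~2e9, roughly two billion
```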

Replies from: gwern
comment by gwern · 2009-12-20T03:59:24.283Z · LW(p) · GW(p)

It's quite possible that whatever switch calorie restriction turns on in mice, humans already have turned on by default.

This is, incidentally, the exact same argument David Brin gave me. (He also argued that if CR/IF really worked, we ought to know already based on millennia of religious practices that imply CR/IF and said communities' intense interest in health matters such as herbal remedies.)

Replies from: CronoDAS
comment by CronoDAS · 2009-12-20T06:30:32.644Z · LW(p) · GW(p)

That's where I got my argument from, actually.

comment by anonym · 2009-10-03T19:44:18.002Z · LW(p) · GW(p)

There is evidence of benefit for non-human primates.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-10-01T18:15:57.365Z · LW(p) · GW(p)

I thought that mild "obesity" (BMI 25) was associated with lower mortality rates than being thin (due to thin people dying more easily when sick; apparently that body fat actually does do its required job sometimes). Normal weight is probably still better, but is that what CR gets you?
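(For reference, since BMI figures come up repeatedly below: BMI is just mass over height squared,

$$\mathrm{BMI} = \frac{\text{mass in kg}}{(\text{height in m})^2}, \qquad \text{e.g. } \frac{77\ \text{kg}}{(1.75\ \text{m})^2} \approx 25,$$

where the example numbers are made up purely for illustration.)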

Replies from: billswift, timtyler, Vladimir_Nesov, Jordan
comment by billswift · 2009-10-02T14:45:25.776Z · LW(p) · GW(p)

Actually, the lower death rates with moderate rather than lower BMI was an early claim, and it was later shown to be the result of people being thinner because of previously undiagnosed illnesses. I don't remember the source, as I have read several books on the subject; I think it was from Fumento's "The Fat of the Land", but it could have been several others (none of which supported the superiority of moderate over lower BMI until you get down to starvation levels, i.e. a BMI of less than 18).

comment by timtyler · 2009-10-01T19:59:36.459Z · LW(p) · GW(p)

Looking at mortality rates in the general population broken down by BMI gives a poor guide to the effects of dietary energy restriction - since many people get thin through being sick or malnourished.

A fairly typical study on the topic:

"How Much Should We Eat? The Association Between Energy Intake and Mortality in a 36-Year Follow-Up Study of Japanese-American Men"

comment by Vladimir_Nesov · 2009-10-01T19:35:30.405Z · LW(p) · GW(p)

You are probably right, hence the disclaimer that it's unchecked memory. There clearly must be some unknown point after which the diet starts to kill you, and this point may be very human-specific.

comment by Jordan · 2009-10-01T18:37:48.511Z · LW(p) · GW(p)

I don't know about CR, but I've done IF (intermittent fasting) for months at a time while maintaining my normal body weight.

comment by AngryParsley · 2009-10-01T22:16:23.059Z · LW(p) · GW(p)

Even if caloric restriction increases longevity, it doesn't protect you against death due to accident, disease, or violence.

Replies from: timtyler, PhilGoetz
comment by timtyler · 2009-10-03T11:37:09.344Z · LW(p) · GW(p)

Maybe - but protection against heart attack, stroke and cancer is worth quite a bit.

comment by PhilGoetz · 2009-10-01T23:17:05.066Z · LW(p) · GW(p)

A Bayesian gives the win to CR.

Replies from: AngryParsley
comment by AngryParsley · 2009-10-02T03:14:43.103Z · LW(p) · GW(p)

Actually, I think the costs of caloric restriction are higher than cryo and the benefits are less.

I'm a 24 year-old male. According to this actuarial table I can expect to be alive for 52 more years, which puts my death in 2061. I'll use gwern's numbers and assume caloric restriction increases life span by 20% in humans. In that case CR would give me 10 more years of life, moving my funeral out to 2071.
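(Spelling that arithmetic out as a minimal sketch, under the assumption that the 20% is applied to the remaining years from the actuarial table rather than to total lifespan; the inputs are just the figures quoted above:)

```python
# Restating the arithmetic above. Assumption: the 20% lifespan extension
# applies to the remaining years from the actuarial table, not total lifespan.
current_year = 2009
remaining_years = 52                                   # actuarial estimate for a 24-year-old male

baseline_death_year = current_year + remaining_years   # 2061
extra_years = 0.20 * remaining_years                   # about 10
cr_death_year = baseline_death_year + extra_years      # about 2071

print(baseline_death_year, round(extra_years, 1), round(cr_death_year))
```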

CR would only pay off if life-extension/singularity/whatever technology happens in that 10 year span. I'm very confident that advances in curing aging will happen sooner than 2060, so I'm not concerned about dying from old age. I am concerned about dying due to accident, disease, or violence, so I'm signed up for cryonics.

Caloric restriction doesn't cost money, but it does decrease quality of life. Hunger makes it harder for me to have fun. I can't think as clearly. I can't run or cycle as fast. I'm not nearly as productive. Cryonics doesn't require a major lifestyle change and it doesn't hurt my current quality of life.

Replies from: timtyler
comment by timtyler · 2009-10-03T11:39:46.169Z · LW(p) · GW(p)

What eating less energy does to your quality of life depends on how fat you are.

For many people in the west, eating less dietary energy would improve their quality of life - often rather dramatically.

Replies from: AngryParsley
comment by AngryParsley · 2009-10-03T21:47:37.672Z · LW(p) · GW(p)

First, let me explain why caloric restriction isn't for me: I weigh 120lbs and I exercise a lot.

I think you're overstating the benefits of caloric restriction and neglecting to mention other ways to get healthier, such as aerobic exercise. Also, there's a big difference between recommending that fat Americans eat less and recommending that fat Americans do caloric restriction.

comment by CannibalSmith · 2009-10-01T14:35:32.103Z · LW(p) · GW(p)

Body building is extremely at odds with fasting.

Replies from: billswift, Jordan
comment by billswift · 2009-10-02T14:53:52.601Z · LW(p) · GW(p)

True, if you mean body building as "bulking up". But I work with weights partially to keep from losing muscle mass when dieting. If you diet without strength training you lose muscle mass right along with the fat.

comment by Jordan · 2009-10-01T17:11:57.438Z · LW(p) · GW(p)

Not necessarily true, actually. Fasting can release a good deal of growth hormone. It can also keep your insulin response in good condition. Intermittent fasting, in particular, doesn't even decrease the total number of calories a person eats, so could be ideal for body building.

comment by Vladimir_Nesov · 2009-10-11T20:14:29.090Z · LW(p) · GW(p)

Eliezer Yudkowsky and Andrew Gelman on Bloggingheads: Percontations: The Nature of Probability

I haven't watched it yet, but the set-up suggests it could focus a discussion, so should probably be given a top-level post.

Replies from: roland
comment by roland · 2009-10-12T00:40:04.262Z · LW(p) · GW(p)

I watched it and it ends abruptly, so maybe Eliezer is trying to fix that. One interesting thing in the discussion was the Netflix challenge; unfortunately they didn't get much into it. Would a simpler method be able to solve it more efficiently?

comment by PeerInfinity · 2009-10-02T14:14:43.063Z · LW(p) · GW(p)

A link you might find interesting:

The Neural Correlates of Religious and Nonreligious Belief

Summary:

Religious thinking is more associated with brain regions that govern emotion, self-representation, and cognitive conflict, while thinking about ordinary facts is more reliant upon memory retrieval networks, scientists at UCLA and other universities have found. They used fMRI to measure signal changes in the brains of committed Christians and nonbelievers as they evaluated the truth and falsity of religious and nonreligious propositions. For both groups, belief (judgments of "true" vs "false") was associated with greater signal in the ventromedial prefrontal cortex, an area important for self-representation, emotional associations, reward, and goal-driven behavior. "While religious and nonreligious thinking differentially engage broad regions of the frontal, parietal, and medial temporal lobes, the difference between belief and disbelief appears to be content-independent," the study concluded. "Our study compares religious thinking with ordinary cognition and, as such, constitutes a step toward developing a neuropsychology of religion. However, these findings may also further our understanding of how the brain accepts statements of all kinds to be valid descriptions of the world."

comment by SilasBarta · 2009-10-01T14:40:38.256Z · LW(p) · GW(p)

I plan to develop this into a top level post, and it expands on my ideas in this comment, this comment, and the end of this comment. I'm interested in what LWers have to say about it.

Basically, I think the concept of intelligence is somewhere between a category error and a fallacy of compression. For example, Marcus Hutter's AIXI purports to identify the inferences a maximally-intelligent being would make, yet it (and its efficient approximations) has no practical application. The reason (I think) is that it works by finding the shortest hypothesis that fits any data given to it. This means it makes the best inference, on average, over all conceivable worlds it could be placed in. But the No Free Lunch theorems suggest that this means it will be suboptimal compared to any algorithm tailored to a specific world. At the very least, having to be optimal for all of the random and anti-inductive worlds should imply poor performance in this one.

The point is that I think "intelligence" can refer to two useful but very distinct attributes: 1) the ability to find the shortest hypothesis fitting the available data, and 2) having beliefs (a prior probability distribution) about one's world that are closest to (have the smallest KL divergence from) that world. (These attributes roughly correspond to what we intuit as "book smarts" and "street smarts" respectively.) A being can "win" if it does well on 2) even if it's not good at 1), because a good prior already points you to the right hypothesis, which can be more advantageous than searching for short ones.
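(For reference, and in my own notation rather than anything from the original comment: writing p for the world's true distribution and q for the agent's prior, the divergence meant in 2) is

$$D_{\mathrm{KL}}(p \,\|\, q) = \sum_x p(x) \log_2 \frac{p(x)}{q(x)},$$

the expected number of extra bits paid for coding p-distributed data with a code built for q; it is zero exactly when q = p. So sense 2) amounts to having a prior that is already close to the world before any data arrives.)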

Making something intelligent means optimizing the combination of each that it has, given your resources. What's more, no one algorithm can be generally optimal for finding the current world's probability distribution, because that would also violate the NFL theorems.

Organisms on earth have high intelligence in the second sense. Over their evolutionary history they had to make use of whatever regularity they could find in their environment, and the ability to exploit this regularity became "built in". So the history of evolution shows the result of one approach to finding the environment's distribution (ETC), and making an intelligent being means improving upon this method, and programming it to "springboard" from that prior with intelligence in the first sense.

Thoughts?

Replies from: Daniel_Burfoot, Daniel_Burfoot, timtyler, Eliezer_Yudkowsky
comment by Daniel_Burfoot · 2009-10-01T15:33:28.465Z · LW(p) · GW(p)

Basically, I think the concept of intelligence is somewhere between a category error and a fallacy of compression.

This may be tangential to your point, but it's worth remembering that human intelligence has a very special property, which is that it is strongly domain-independent. A person's ability to solve word puzzles correlates with her ability to solve math puzzles. So you can measure someone's IQ by giving her a logic puzzle test, and the score will tell you a lot about the person's general mental capabilities.

Because of that very special property, people feel more or less comfortable referring to "intelligence" as a tangible thing that impacts the real world. If you had to pick between two doctors to perform a life-or-death operation, and you knew that one had an IQ of 100 and the other an IQ of 160, you would probably go with the latter. Most people would feel comfortable with the statement "Harvard students are smarter than high school dropouts", and make real-world predictions based on it (e.g. a Harvard student is more likely to be able to write a good computer program than a high school dropout, even if the former didn't study computer science).

The point is that there's no reason this special domain-independence property of human intelligence should hold for non-human reasoning machines. So while it makes sense to score humans based on this "intelligence" quantity, it might be totally meaningless to attempt to do so for machines.

Replies from: SilasBarta, SilasBarta
comment by SilasBarta · 2009-10-01T16:16:11.711Z · LW(p) · GW(p)

This may be tangential to your point, but it's worth remembering that human intelligence has a very special property, which is that it is strongly domain-independent.

Not so fast. Human intelligence is relatively domain independent. But human minds are constantly exploiting known regularities of the environment (by making assumptions) to make better inferences. These regularities make up a tiny sliver of the Platonic space of generating functions. By (correctly) assuming we're in that sliver, we vastly improve our capabilities compared to if we were AIXIs lacking that knowledge.

Human intelligence appears strongly domain-independent because it generalizes to all the domains that we see. It does not generalize to the full set of computable environments -- no intelligence can do that while still performing as well in each of them as we do in this environment.

Non-human animals are likewise "domain-independently intelligent" for the domains that they exist in. Most humans would die, for example, if dropped in the middle of the desert, ocean, or arctic.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-10-01T16:27:07.848Z · LW(p) · GW(p)

But human minds are constantly exploiting known regularities of the environment (by making assumptions) to make better inferences.

Not just by making assumptions: you can learn (domain-specific) optimizations that don't introduce new info, but improve ability, allowing you to understand more from the info you have (better conceptual pictures for natural science; math).

comment by SilasBarta · 2009-10-02T17:16:19.912Z · LW(p) · GW(p)

Another example of how domain-dependent human intelligence actually is, is optical illusions.

Optical illusions occur when an image violates an assumption your brain makes to interpret visual data, causing it to misinterpret the image. And remember, this is only going slightly outside the boundary of the assumptions your brain makes.

comment by Daniel_Burfoot · 2009-10-01T15:42:00.290Z · LW(p) · GW(p)

But the No Free Lunch theorems suggest that this means it will be suboptimal compared to any algorithm tailored to any specific world.

This is a subtle point. The NFL theorem does prohibit any algorithm from doing well over all possible worlds. But Solomonoff induction does well on any world that has any kind of computable regularity. If there is no computable regularity, then no prior can do well. In fact, the Solomonoff prior does just as well asymptotically as any computable prior.

As is often the case, thinking in terms of codes can clear up the issue. A world is a big data file. Certainly, an Earth-specific algorithm can get good compression rates if it is fed data that comes from Earth. But as the data file gets large, the Solomonoff general-purpose compression algorithm will achieve compression rates that are nearly as good; in the worst case, it just has to prepend the code of the Earth-specific algorithm to its encoded data stream, and it only underperforms by that program size.
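(One standard way to write that bound, stated here as my own gloss rather than a quote: if q is any computable compressor or prior implemented by a program of length ℓ(q) bits on the reference machine, then for every string x

$$\mathrm{len}_{\mathrm{Sol}}(x) \;\le\; \mathrm{len}_q(x) + \ell(q) + O(1), \qquad \text{equivalently} \qquad M(x) \;\ge\; 2^{-\ell(q) - O(1)}\, q(x),$$

so the universal code pays at most a constant penalty, roughly the size of the Earth-specific program, no matter how large the data file grows.)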

The reason AIXI doesn't work in practice is that the "efficient approximations" aren't really efficient, or aren't good approximations.

Replies from: Wei_Dai, SilasBarta
comment by Wei Dai (Wei_Dai) · 2009-10-01T22:27:39.877Z · LW(p) · GW(p)

If there is no computable regularity, then no prior can do well. In fact, the Solomonoff prior does just as well asymptotically as any computable prior.

This seems to be a common belief. But see this discussion I had with Eliezer where I offered some good arguments and counterexamples against it.

The link goes to the middle, most relevant part of the discussion. But if you look at the top of it, I'm not arguing against the Solomonoff approach, but instead trying to find a generalization of it that makes more sense.

I've linked to that discussion several times in my comments here, but I guess many people still haven't seen it. Maybe I should make a top-level post about it?

comment by SilasBarta · 2009-10-01T16:08:39.865Z · LW(p) · GW(p)

This is a subtle point. The NFL theorem does prohibit any algorithm from doing well over all possible worlds. But Solomonoff induction does well on any world that has any kind of computable regularity.

Okay, fair point. But by smearing its optimality over the entire Platonic space of computable functions, it is significantly worse than those algorithms tuned for this world's function. And not surprisingly, AIXI has very little practical application.

If there is no computable regularity, then no prior can do well. In fact, the Solomonoff prior does just as well asymptotically as any computable prior.

And that's unhelpful when, as is likely, you don't hit that asymptote until the heat death of the universe.

The reason AIXI doesn't work in practice is that the "efficient approximations" aren't really efficient, or aren't good approximations.

My point is that the most efficient approximations can't be efficient in any absolute sense. In order to make AIXI useful, you have to feed it information about which functions it can safely skip over -- in other words, feed it intelligence of type 2), the information about its environment that you already gained through other methods. Which just shows that those kinds of intelligence are not the same.

Replies from: Vladimir_Nesov, Daniel_Burfoot
comment by Vladimir_Nesov · 2009-10-01T16:23:52.269Z · LW(p) · GW(p)

Actually Solomonoff induction is insanely fast. Its generality is not just that it learns everything to as good an extent as anything else, but also that typically it learns everything from indirect observations "almost as fast as directly" (not really, but...). The "only problem" is that Solomonoff induction is not an algorithm, and so its "speed" is for all practical purposes a meaningless statement.

Replies from: SilasBarta
comment by SilasBarta · 2009-10-01T17:21:49.812Z · LW(p) · GW(p)

When someone says "very fast, but uncomputable", what I hear is "dragon in garage, but invisible".

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-10-01T17:30:09.464Z · LW(p) · GW(p)

Generalize that to a good chunk of classical math.

Replies from: SilasBarta
comment by SilasBarta · 2009-10-01T17:34:01.265Z · LW(p) · GW(p)

The analog would be to theorem proving. No one claims that knowing the axioms of math gets you to every theorem "very fast" -- because the problem of finding a proof/disproof for an arbitrary proposition is also uncomputable.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-10-01T17:40:57.633Z · LW(p) · GW(p)

A "solution" might be that only proofs matter, while theorems (as formulas) are in general meaningless in themselves, only useful as commentary on proofs.

Replies from: SilasBarta
comment by SilasBarta · 2009-10-01T17:51:16.892Z · LW(p) · GW(p)

Nevertheless, the original point stands: no one says "I've discovered math! Now I can learn the answer to any math problem very fast." In contrast, you are saying that because we have Solomonoff induction, we can infer distributions "very fast".

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-10-01T18:00:02.898Z · LW(p) · GW(p)

In contrast, you are saying that because we have Solomonoff induction, we can infer distributions "very fast".

To be more precise, we can specify the denotation of distributions very close to the real deal from very little data. This technical sense doesn't allow the analogy with theorem-proving, which is about algorithms, not denotation.

Replies from: SilasBarta
comment by SilasBarta · 2009-10-01T18:08:43.116Z · LW(p) · GW(p)

But the analogy is in terms of the "fast but uncomputable" oxymoron.

comment by Daniel_Burfoot · 2009-10-01T23:51:28.580Z · LW(p) · GW(p)

But by smearing its optimality over the entire Platonic space of computable functions, it is significantly worse than those algorithms tuned for this world's function. And not surprisingly, AIXI has very little practical application.

Let's do another thought experiment. Say that humanity has finally resolved to send colonists to nearby star systems. The first group is getting ready to head out to Alpha Centauri.

The plan is that after the colonists arrive and set up their initial civilization, they will assemble a data archive of size T about the new world and send it back to Earth for review. Now it is expensive to send data across light-years, so obviously they want to minimize the number of bits they have to send. So the question is: what data format do the two parties agree on at the moment of parting?

If T is small, then it makes sense to think this issue over quite a bit. What should we expect the data to look like? Will it be images, audio, health reports? If we can figure something out about what the data will look like in advance (ie, choose a good prior), then we can develop a good data format and get short codes.

But if T is large (terabytes) then there's no point in doing that. When the Alpha Centauri people build their data archive, they spend some time analyzing it and figuring out ways to compress it. Finally they find a really good compression format (=prior). Of course, Earth doesn't know the format - but that doesn't matter, since the specification for the format can just be prepended to the transmission.
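(A toy sketch of that accounting; zlib stands in for whatever custom format the colonists settle on, and the size of the prepended format specification is a made-up constant. The only point is that the fixed overhead shrinks relative to T:)

```python
import zlib

SPEC_SIZE = 50_000  # assumed size, in bytes, of the prepended format description

def bytes_sent(T):
    """Total transmission = prepended format spec + compressed archive of size T."""
    pattern = b"alpha centauri colony health report\n"  # stand-in for regular archive data
    archive = (pattern * (T // len(pattern) + 1))[:T]
    return SPEC_SIZE + len(zlib.compress(archive))

for T in (10**5, 10**7, 10**8):
    total = bytes_sent(T)
    print(f"T={T:>11,}  sent={total:>11,}  spec overhead={SPEC_SIZE / total:.1%}")
```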

I think this thought experiment is nice because it reveals the pointlessness of a lot of philosophical debates about Solomonoff, Bayes, etc. Of course the colonists have to choose a prior before the moment of parting, and of course if they choose a good prior they will get short codes. And the Solomonoff distribution may not be perfect in some metaphysical sense, but it's obviously the right prior to choose in the large T regime. Better world-specific formats exist, but their benefit is small compared to T.

Replies from: Johnicholas, SilasBarta
comment by Johnicholas · 2009-10-02T01:37:21.486Z · LW(p) · GW(p)

The choice that they will prepend a description (and the format of the description) is a choice of prior.

comment by SilasBarta · 2009-10-02T03:55:44.487Z · LW(p) · GW(p)

I think this thought experiment is nice because it reveals the pointlessness of a lot of philosophical debates about Solomonoff, Bayes, etc. Of course the colonists have to choose a prior before the moment of parting, and of course if they choose a good prior they will get short codes. And the Solomonoff distribution may not be perfect in some metaphysical sense, but it's obviously the right prior to choose in the large T regime. Better world-specific formats exist, but their benefit is small compared to T.

Well, the thought experiment doesn't accomplish that. Solomonoff induction is not necessarily optimal (and most probably isn't optimal) in your scenario, even and especially for large T. The amount of time it takes for any computable Occamian approximation of S/I to find the optimal encoding is superexponential in the length of the raw source data. So the fact that it will eventually get to a superior or near-superior encoding is little consolation, when Alpha Centauri and Sol will have long burned out before Solomonoff has converged on a solution.

The inferiority of Solomonoff Occamian induction, of iterating up through shorter generating algorithms until the data is matched, is not some metaphysical or philosophical issue, but rather, deals directly with the real-world time constraints that arise in practical situations.

My point is, any practical attempt to incorporate Solomonoff induction must also make use of knowledge of the data's regularity that was found some other way, making it questionable whether Solomonoff induction incorporates everything we mean by "intelligence". This incompleteness also raises the issue of which this-world-specific methods we actually did use to reach the current state of knowledge that makes Bayesian inference effective.

comment by timtyler · 2009-10-01T20:04:57.705Z · LW(p) · GW(p)

One pattern I have noticed: those who think the No Free Lunch theorems are interesting and important are usually the people who talk the most nonsense about them. The first thing people need to learn about those theorems is how useless and inapplicable to most of the real world they are.

Replies from: SilasBarta, timtyler
comment by SilasBarta · 2009-10-01T20:16:31.281Z · LW(p) · GW(p)

So, you're disagreeing that an algorithm that is optimal, on average, over a set of randomly-selected computable environments, will perform worse in any specific environment than an algorithm optimized specifically for that environment?

Because if not, that's all I need to make my point, no matter what subtlety of NFL I'm missing. (Actually, I can probably make it with an even weaker premise, and could have gone without NFL altogether, but it grants some insight on the issue I'm trying to illuminate.)

Replies from: timtyler
comment by timtyler · 2009-10-01T20:25:09.046Z · LW(p) · GW(p)

The NFL deals with a space of all possible problems - while the universe typically presents embedded agents with a subset of those problems that are produced by short programs or small mechanisms. So: the NFL theorems rarely apply to the real world. In the real world, there are useful general-purpose compression algorithms.

Replies from: SilasBarta
comment by SilasBarta · 2009-10-01T20:37:49.722Z · LW(p) · GW(p)

The NFL deals with a space of all possible problems - while the universe typically presents embedded agents with a subset of those problems that are produced by short programs or small mechanisms.

Okay. I stated the NFL-free version of the premise I need. If you agree with that, this point is moot.

In the real world, there are useful general-purpose compression algorithms.

Now I know I'm definitely not using NFL, because I agree with this and it's consistent with the point in my initial post.

Yes, there are useful general-purpose programs: researchers recognize regularities that appear across all types of files (which must exist, because the raw data is rarely purely random). But they identify this regularity before writing the compressor, which then exploits it by (basically) reserving shorter codes for the kinds of data consistent with that regularity.

Likewise, people have identified regularities specific to video files: each frame is very similar to the last. And regularities specific to picture files: each column or row is very similar to its neighbors.
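(A minimal sketch of building one such assumed regularity into an encoder. The signal, the noise level, and the use of zlib are all arbitrary choices for illustration, not any real codec:)

```python
import math
import random
import zlib

random.seed(0)

# A slowly varying, slightly noisy "signal": the regularity assumed in advance
# is that neighboring samples are close to each other.
signal = [int(120 + 100 * math.sin(i / 200) + random.randint(-2, 2)) % 256
          for i in range(20_000)]
raw = bytes(signal)

# Delta encoding: store differences between neighboring samples (mod 256).
# This only helps because the "neighbors are similar" assumption holds here.
delta = bytes((signal[i] - signal[i - 1]) % 256 if i else signal[0]
              for i in range(len(signal)))

print("raw, compressed:  ", len(zlib.compress(raw)))
print("delta, compressed:", len(zlib.compress(delta)))
# The delta stream concentrates on a few small values, so it generally
# compresses better, but only because the regularity was assumed up front.
```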

But what they did not do was write an unbiased, Occamian prior program that went through various files and told them what regularities existed, because finding the shortest compression is uncomputable. Rather, they imported prior knowledge of the distribution of data in certain types of files, gained through some other method (type 2 intelligence in my convention), and tailored the compression algorithm to that distribution.

No "universal, all purpose" algorithm found that knowledge.

comment by timtyler · 2009-10-01T20:34:09.129Z · LW(p) · GW(p)

I should probably give you some proper feedback, as well as caustic comments. The intelligence subdivision looks useful and interesting - though innate intelligence is usually referred to as being 'instinctual'.

However, I was less impressed with the idea that the concept of intelligence lies somewhere between a category error and a fallacy of compression.

Replies from: SilasBarta
comment by SilasBarta · 2009-10-01T20:40:45.104Z · LW(p) · GW(p)

Okay, thanks for the proper feedback :-)

And I may be leaning more toward the "fallacy of compression" side, I'll grant that. But I don't see how you'd disagree with it, since you find the subdivision I outlined to have some potential. If people are unknowingly shifting between two very different meanings of intelligence, that certainly is a fallacy of compression.

Replies from: timtyler
comment by timtyler · 2009-10-01T20:51:26.604Z · LW(p) · GW(p)

Another point: I'm not sure your description of AIXI is particularly great. AIXI works where Solomonoff induction works. Solomonoff induction works pretty well in this world. It might not be perfect - due to reference machine issues - but it is pretty good. AIXI would work very badly in worlds where Solomonoff induction was a misleading guide to its sense data. Its performance in this world doesn't suffer through trying to deal with those worlds - since in those worlds it would be screwed.

Replies from: SilasBarta
comment by SilasBarta · 2009-10-01T21:18:11.900Z · LW(p) · GW(p)

Well, actually you're highlighting the issue I raised in my first post: computable approximations of Solomonoff induction work pretty well ... when fed useful priors! But those priors come from a lot of implicit knowledge about the world that skips over an exponentially large number of shorter hypotheses by the time you get to applying it to any specific problem.

AIXI (and computable approximations), starting from a purely Occamian prior, is stuck iterating through lots of generating functions before it gets to the right one -- infeasibly long. To speed it up you have to feed it knowledge you gained elsewhere (and of course, find a way to represent that knowledge). But at that point, your prior includes a lot more than a penalty for length!
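(To make "infeasibly long" concrete, a deliberately crude counting sketch: it models candidate programs as raw bitstrings for some fixed reference machine, which is my own simplification, and it ignores the extra cost of the per-candidate step budgets that practical proposals use to avoid getting stuck on non-halting programs:)

```python
# Count the candidate programs a naive length-ordered enumeration must consider.
# Programs are modeled as raw bitstrings for a fixed reference machine (a crude
# simplification); a real search would also need a step budget per candidate to
# sidestep non-halting programs, multiplying the work further.
def candidates_up_to(max_bits):
    return sum(2 ** n for n in range(1, max_bits + 1))

for max_bits in (10, 20, 40, 80):
    print(f"programs up to {max_bits:>2} bits: {candidates_up_to(max_bits):.3e}")
```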

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-10-01T16:58:53.713Z · LW(p) · GW(p)

But the No Free Lunch theorems suggest that this means it will be suboptimal compared to any algorithm tailored to any specific world.

NFL theorems are about max-entropy worlds. Solomonoff induction works on highly lawful, simplicity-biased, low-entropy worlds.

If you could actually do Solomonoff induction, you would become at least as smart as a human baby in roughly 0 seconds (some rounding error may have occurred).

Replies from: SilasBarta
comment by SilasBarta · 2009-10-01T17:30:22.117Z · LW(p) · GW(p)

NFL theorems are about max-entropy worlds. Solomonoff induction works on highly lawful, simplicity-biased, low-entropy worlds.

The same (or a similar) point applies. If you limit yourself to the set of lawful worlds and use an Occamian prior, you will start off much worse than an algorithm that implicitly assumes a prior that's close to the true distribution. As Solomonoff induction works its way up through longer algorithms, it will hit some that run into an infinite loop. Even if you program a constraint that gets it past or out of these, the optimality is only present "after a long time", which, in practice, means later than we need or want the results.

If you could actually do Solomonoff induction, you would become at least as smart as a human baby in roughly 0 seconds (some rounding error may have occurred).

What else can you tell us about the implications of being able to compute uncomputable functions?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-10-01T17:36:24.881Z · LW(p) · GW(p)

As Solomonoff induction works its way up through longer algorithms, it will hit some that run into an infinite loop. Even if you program a constraint that gets it past or out of these, the optimality is only present "after a long time", which, in practice, means later than we need or want the results.

You are arguing against a strawman: it's not obvious that there are no algorithms that approximate Solomonoff induction well enough in practical cases. Of course there are silly implementations that are way worse than magical oracles.

Replies from: SilasBarta
comment by SilasBarta · 2009-10-01T17:47:37.856Z · LW(p) · GW(p)

it's not obvious that there are no algorithms that approximate Solomonoff induction well enough in practical cases.

Right, but any such approximation works by introducing a prior about which functions it can skip over. And for such knowledge to actually speed it up, it must involve knowledge (gained separately from S/I) about the true distribution.

But at that point, you're optimizing for a narrower domain, not implementing universal intelligence. (In my naming convention, you're bringing in type 2 intelligence.)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-10-01T17:52:44.057Z · LW(p) · GW(p)

Right, but any such approximation works by introducing a prior about which functions it can skip over.

It introduces a prior, period. Not a prior about "skipping over". Universal induction doesn't have to "run" anything in a trivial manner.

And for such knowledge to actually speed it up...

You can't "speed up" an uncomputable non-algorithm.

Replies from: SilasBarta
comment by SilasBarta · 2009-10-01T18:09:33.171Z · LW(p) · GW(p)

You can't "speed up" an uncomputable non-algorithm.

Okay, we're going in circles. You had just mentioned possible computable algorithms that approximate Solomonoff induction.

it's not obvious that there are no algorithms that approximate Solomonoff induction well enough in practical cases. [emphasis added -- SB]

So, we were talking about approximating algorithms. The point I was making, in response to this argument that "well, we can have working algorithms that are close enough to S/I", was that to do so, you have to bring in knowledge of the distribution gained some other way, at which point it is no longer universal. (And, in which case talk of "speeding up" is meaningful.)

Demonstrating my point that universal intelligence has its limits and must combine with intelligence in a different sense of the term.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-10-01T19:30:12.914Z · LW(p) · GW(p)

You introduce operations on the approximate algorithms (changing the algorithm by adding data), something absent from the original problem. What doesn't make sense is to compare "speed" of non-algorithmic specification with the speed of algorithmic approximations. And absent any approximate algorithms, it's also futile to compare their speed, much less propose mechanisms for their improvement that assume specific structure of these absent algorithms (if you are not serious about exploring the design space in this manner to obtain actual results).

Replies from: SilasBarta
comment by SilasBarta · 2009-10-02T14:35:56.120Z · LW(p) · GW(p)

You introduce operations on the approximate algorithms (changing the algorithm by adding data), something absent from the original problem.

What you call "the original problem" (pure Solomonoff induction) isn't. It's not a problem. It can't be done, so it's a moot point.

What doesn't make sense is to compare "speed" of non-algorithmic specification with the speed of algorithmic approximations

Sure it does. The uncomputable Solomonoff induction has a speed of zero. Non-halting approximations have a speed greater than zero. Sounds comparable to me for the purposes of this discussion.

And absent any approximate algorithms, it's also futile to compare their speed, much less propose mechanisms for their improvement that assume specific structure of these absent algorithms (if you are not serious about exploring the design space in this manner to obtain actual results).

There are approximate algorithms. Even Bayesian inference counts. And my point is that any time you add something to Solomonoff induction to make it useful, you are, directly or indirectly, introducing a prior unique to the search space -- clearly showing the distinctness of type 2 intelligence.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-10-02T15:08:22.651Z · LW(p) · GW(p)

To wrap up (as an alternative to not replying):

  • I don't understand why you'd continue arguing definitions about speed of Solomonoff induction or it being "the original problem". It's clear what we both mean.
  • I believe you are wrong about general statements about what needs to be done to implement approximate Solomonoff induction. Since we don't technically define in what sense this approximation has to be general, there remain possibilities for a good technical definition that preserves "generality" in an approximate implementation.
Replies from: SilasBarta
comment by SilasBarta · 2009-10-02T15:18:27.757Z · LW(p) · GW(p)

I don't understand why you'd continue arguing definitions about speed of Solomonoff induction or it being "the original problem". It's clear what we both mean.

A better question would be why you brought up the issue. We both knew what the other meant before that, but you kept bringing it up.

I believe you are wrong ... there remain possibilities for a good technical definition that preserves "generality" in an approximate implementation.

Okay, well, I'll believe it when I see it. In the mean time, I suspect it will be far more productive to exploit whatever regularity we already know about the environment, and work on building that into the inference program's prior. (Arguably, even the Occamian prior does that by using our hard-won belief in the universe's preference for simplicity!)

comment by gwern · 2009-09-27T18:32:30.391Z · LW(p) · GW(p)

For you non-techies who'd like to be titillated, here's a second bleg about some very speculative and fringey ideas I've been pondering:

What do you think the connection between motivation & sex/masturbation is?

Here's my thought: it's something of a mystery to me why homosexuals seem to be so well represented among the eminent geniuses of Europe & America. The suggestion I like best is that they're not intrinsically more creative thanks to 'female genes' or whatever, but that they can't/won't participate in the usual mating rat-race and so in a Freudian manner channel their extra time into their art or science.

But then I did some googling looking for research on this, and though I didn't turn up much (it's a strangely hard area to search), I ran into some interesting pages on the links between motivation & dopamine, and dopamine & sex:

Which suggest to me an entirely different mechanism: it's not that they have more time, it's that they are having much less sex (even if only with their hand), and this depletes dopamine less & leaves motivation strong to do other things they'd like to do. (Cryptonomicon readers might also be familiar with this theory from one memorable section with Randy.)

So: does anyone know any research testing this? As I said, I couldn't find much.

Replies from: taw, pwno, childofbaud, Morendil
comment by taw · 2009-10-01T13:31:40.701Z · LW(p) · GW(p)

What suggests that homosexuals are getting less sex than heterosexuals in the first place? Naively they are probably having more sex, and more sexual partners than median heterosexual males.

Also, what suggests homosexuals are overrepresented among "eminent geniuses"? Let's use some objective benchmark - how many Nobel Prize winners were homosexuals, and how it compares with society average?

Replies from: SilasBarta, orthonormal, nerzhin
comment by SilasBarta · 2009-10-01T21:27:17.823Z · LW(p) · GW(p)

Along with what orthonormal said, I definitely think that up until ~1960, the Nobel Prize committee was very careful, in all categories, not to give the award to a person of "ill repute", which includes, among other things, being gay. So Nobel Prize winnings wouldn't be informative.

However, you could control for this by checking out how many men won the prize before 1960, and would be suspected of being gay (i.e. old and never-married).

Replies from: taw
comment by taw · 2009-10-02T02:52:28.031Z · LW(p) · GW(p)

Can you think of a better list, or is the entire question non-empirical in practice?

Replies from: gwern
comment by gwern · 2010-03-01T01:55:54.889Z · LW(p) · GW(p)

I would go with general metrics of 'influence' like in Murray's _Human Accomplishment_. It's easier to decide not to give someone a prize because you find them skeevy than it is to ignore their work and accomplishments in practice and to keep them out of the histories and reference works.

comment by orthonormal · 2009-10-01T21:21:40.856Z · LW(p) · GW(p)

What suggests that homosexuals are getting less sex than heterosexuals in the first place?

Genius being easier to claim in retrospect, I think the real claim is that until recent decades, there were plenty of nearly celibate homosexuals (for lack of public opportunities to seek out others, or from internalized stigmas).

Obvious thing to check is the contribution to science and art from other known celibates; plenty more examples (including Erdös) leap to mind.

comment by nerzhin · 2009-10-01T15:37:12.346Z · LW(p) · GW(p)

Some objective benchmark yes, Nobel Prize winners no. There are too few Nobel Prize winners in the first place, the categories aren't obviously the right ones, and the selection process is far too political.

Replies from: taw
comment by taw · 2009-10-01T16:18:55.848Z · LW(p) · GW(p)

There are 789 Nobel Prize winners. We can throw away peace and literature obviously, but the rest don't seem to be that politicized; at least I doubt they care about a scientist's sexual orientation much.

It's as objective as it gets really, and very widely accepted. If there are any known gay Nobel Prize winners, I'm sure gay organizations would mention them somewhere.

Yahoo answers can think of only one allegedly bisexual one, but for all Wikipedia says it might have been just some casual experimentation, as he was married, so he doesn't count as gay.

If this is accurate, it means gays, at least the out-of-the-closet ones, are vastly underrepresented among Nobel Prize winners, definitely conflicting with the gay genius over-representation theory.

Replies from: gwern
comment by gwern · 2010-03-01T01:54:07.575Z · LW(p) · GW(p)

You missed Daniel Carleton Gajdusek, a quick Google tells me. And let's not forget those who weren't Nobelists. I don't think anyone here disputes that Turing deserved a Nobel or Fields medal, but it seems likely to me that he didn't get one because he was gay. It would be hard to correct for discrimination & prejudice like Turing suffered.

comment by pwno · 2009-10-01T17:14:47.167Z · LW(p) · GW(p)

You can narrow that down to: Sexually frustrated people have more motivation to do other things. This makes evolutionary sense. People who are sex-starved want to raise their status to better their odds.

comment by childofbaud · 2009-10-07T15:13:13.134Z · LW(p) · GW(p)

The conjecture you offer here has been floating about in philosophy and psychology circles for some time. It was a view heavily promoted by Freud, who used the term sublimation to describe the diversion of unfulfilled (sexual) desires into constructive pursuits. A search of this term may yield further findings.

Replies from: gwern
comment by gwern · 2009-10-07T16:14:19.483Z · LW(p) · GW(p)

Hm, yes, I was a bit familiar with Freud, but I was hoping for ties to biochemistry; it's one thing to intuit that the mind has n bits of energy & forces sloshing around and if they can't come out in sex they have to come out elsewhere, and entirely another to have a specific, materialist model of what's going on. I haven't found anything for the latter.

comment by Morendil · 2009-10-01T14:09:22.249Z · LW(p) · GW(p)

There's a passing mention in George Ainslie's book on akrasia, /Breakdown of Will/ which struck me as interesting. At the moment I can't recall just what or where. I'll dip into it again and see if I find it again.

comment by gwern · 2009-09-27T17:58:46.903Z · LW(p) · GW(p)

I have something of a technical question; on my personal wiki, I've written a few essays which might be of interest to LWers. They're in Markdown, so you would think I could just copy them straight into a post, but, AFAIK, you have to write posts in that WYSIWYG editor thing. Is there any way around that? (EDIT: Turns out there's an HTML input box, so I can write locally, compile with Pandoc, and insert the results.)

The articles, in no particular order:

(If you have Gitit handy, you can run a mirror of my wiki with a command like darcs get http://www.gwern.net/ && cd www.gwern.net && gitit -f static/gwern.conf.)

Replies from: Alicorn, Jack, Zack_M_Davis, cousin_it, taw, CronoDAS, thomblake
comment by Alicorn · 2009-10-17T13:05:03.048Z · LW(p) · GW(p)

On the subject of banning new books, this objection to the proposal crystallized in my head yesterday evening: Fiction, like society, is capable of social progress. This isn't a completed project. Stopping the production of fiction in its tracks now would leave us with a corpus of stories that under- and misrepresents many groups, and this would become even more of a problem than it already is as those groups gain broader acceptance, rights, and numbers (assuming the population keeps trending up and policy keeps trending socially liberal worldwide).

Replies from: CronoDAS, gwern, cousin_it
comment by CronoDAS · 2009-10-18T05:53:06.655Z · LW(p) · GW(p)

I already mentioned Values Dissonance as a reason to prefer new fiction to old.

I personally ran into this effect with a work written in 1981 - the song "Same Old Lang Syne" has a casual reference to people driving away after splitting a six-pack of beer...

comment by gwern · 2009-10-18T23:04:27.517Z · LW(p) · GW(p)

Fiction, like society, is capable of social progress.

Progress is quite a loaded word, and if you assume fiction will progress, then you are almost assuming your conclusion.

This isn't a completed project. Stopping the production of fiction in its tracks now would leave us with a corpus of stories that under- and misrepresents many groups, and this would become even more of a problem than it already is

Let's make 'progress' concrete. Perhaps progress means that 'the fiction produced every year will feature characters that will statistically ever more closely match current demographics in the United States'.

Why is fiction mirroring demographics important?

Think of science-fiction; should Accelerando feature a carefully balanced cast with a few African-American men & women, 3 or 4 Hispanics of various ethnicities & nationalities, and a number of South-East Asians and old sansei? How would it be improved by such mimicking?

Or think of regular fiction - When William Shakespeare was writing Othello, the number of blacks in England must've been a rounding error; would he have done better to reflect the 100% white composition of England and make Othello an Arab or just a regular white northern European? When David Foster Wallace wrote Infinite Jest, would it be somehow more just or better, and not just more "progressive", if he had randomly noted that Michael Pemulis was of Chinese descent?

Fiction has never mirrored society even crudely, not in racial composition of characters, socio-economic status, career, religious or philosophical beliefs, or any distinction that you would like to honor with the title 'group'. That's the whole point: it's fiction. Not real. To make it ever more accurate this way would be to turn it into journalism, or render it as pointless as Borges's 1:1 map from "Of Exactitude in Science".

Replies from: RobinZ, Alicorn
comment by RobinZ · 2009-10-18T23:20:04.959Z · LW(p) · GW(p)

Or think of regular fiction - When William Shakespeare was writing Othello, the number of blacks in England must've been a rounding error [...]

It may have been small, but I severely doubt "rounding error" is accurate. Do we have a historian in the house?

Edit: In light of Alicorn's remarks, it would be good to have both Italy and England.

Replies from: gwern
comment by gwern · 2009-10-19T00:17:01.333Z · LW(p) · GW(p)

Everything I've read has said that England had, at least until the 1800s, a minuscule black population, and particularly before and during Shakespeare.

Here are some random links on the topic since I don't remember where I read that blacks were exotic & unpopular rarities in England and next to none of the slaves passing through British hands came to the home isles:

This book Black Breeding Machines mentions that blacks were such a small minority in England that when their presence began to bother the Londoners, Queen Elizabeth could simply order them out of the country. And it's worth noting that one of the few mentioned blacks in England is a 'blackamoor' in the Queen's service - reinforcing my rare, exotic characterization.

(And the general lack of material itself argues that there just weren't that many. It's hard to research what didn't exist.)

EDIT: As for Italy, I can only point to a similar sporadic appearance of black servants in Roman and medieval Italian sources, and links like http://www.blackpast.org/?q=perspectives/africa-and-africans-imagination-renaissance-italians-1450-1630 which make me think that if the medieval Italians could have such strange beliefs about Africa and its inhabitants, there couldn't've been very many actual Africans/blacks among them; and if that's true about Italy, which is right there above Africa, what about England, a continent away (so to speak)?

comment by Alicorn · 2009-10-18T23:17:49.623Z · LW(p) · GW(p)

I am not qualified to teach this subject, not even on the 101 "the stuff you are saying appears on bingo cards that anti-bigotry activists use to summarize common ignorance for crying out loud" level it seems to be on. Trying would be unpleasant, probably would have no positive effects on anyone, and would doubtless solidify the reputation I seem to have accumulated as a usually sane person who mysteriously loses her mind when bringing up "politics".

I will, however, note that Othello took place in Italy, not England, and it would be bizarre if it reflected England's demographics.

Replies from: dfranke
comment by dfranke · 2009-10-19T00:18:06.193Z · LW(p) · GW(p)

I think the two of you may be talking past each other here, namely that gwern overlooked the phrase "corpus of stories". What gwern seems to be attacking is the thesis that every individual story should have a racial/cultural balance of characters that mirrors the general population. Your argument that the corpus as a whole should contain a reasonable balance is not one which I think gwern would refute.

Replies from: gwern
comment by gwern · 2009-10-19T00:36:13.171Z · LW(p) · GW(p)

Obviously every story need not be balanced. But it's not obvious to me why the corpus should be balanced, and I can think of reasons why it either doesn't matter or is a good thing (half the attraction of anime for people is, I think, that it borrows enough Western material to be relatively easy to understand, but the overall corpus is still very 'unbalanced' from a US perspective).

Arguments for either position would be good, but Alicorn's original post just says being unbalanced is a problem and anything perpetuating the problem is bad, thus bans/taxes/withdrawal-of-subsidies is bad; I have no positive arguments in favor of new works from her, so I have to content myself with offering criticism and negative arguments in the hopes that she'll offer back.

(Or I could just drop this whole thread, but then I'd leave unsatisfied because I wouldn't know all the flaws with my approach, like the argument about works being enjoyable in different ways like being contemporary.)

Replies from: Alicorn
comment by Alicorn · 2009-10-19T00:50:23.983Z · LW(p) · GW(p)

If you're interested in continuing this conversation with me in particular, I'd prefer to move to a private venue. I really don't like the "mysteriously loses her mind over politics" thing, or the karma nosedive that comes with it, but I'm willing to assume that you as an individual won't interpret me that way.

Replies from: gwern
comment by gwern · 2009-10-19T00:56:55.066Z · LW(p) · GW(p)

I'd really prefer not to. I've made a point of conducting all of my Wikipedia business on the wiki itself, and similarly for mailing lists. There seems to be only one person downvoting you in this thread, and that's easy enough for me to cancel out.

Replies from: Alicorn
comment by Alicorn · 2009-10-19T01:04:34.080Z · LW(p) · GW(p)

The karma is only a secondary concern. It bothers me more than I would like it to that I am seen as suddenly and inexplicably turning irrational whenever stuff about -isms comes up. This is germane here in particular since to continue this conversation, I'd have to talk about (gasp) feeeeeeeeelings.

Replies from: Jack, rwallace
comment by Jack · 2009-10-19T04:09:21.956Z · LW(p) · GW(p)

The comment that claimed you turn irrational has zero karma. My response that it was an ungenerous interpretation is +2. So I'm not sure you should conclude that a significant number of people see you as turning uniquely irrational, but obviously there is no need for you to say anything you don't want to.

Replies from: Alicorn
comment by Alicorn · 2009-10-19T12:58:44.710Z · LW(p) · GW(p)

I don't think it's that many people (although I got the same reaction over the gender kerfuffle; it's not just this one-time thing). But it's enough to make me uncomfortable.

comment by rwallace · 2009-10-19T02:06:46.849Z · LW(p) · GW(p)

What's inexplicable about it? We all turn at least somewhat irrational whenever stuff about -isms comes up. It's human nature. Politics is the mind killer and all. That's why discussion of contemporary politics is discouraged here, or at least was last I heard.

Replies from: Alicorn
comment by Alicorn · 2009-10-19T02:11:20.300Z · LW(p) · GW(p)

Okay, perhaps I'm seen as explicably losing my mind. That's not a whole lot better. I don't like to have conversations with people who start out presuming me insane, even if they have a lovely narrative about exactly how it happened.

Replies from: cousin_it
comment by cousin_it · 2009-10-19T13:11:52.787Z · LW(p) · GW(p)

You're entitled to your emotional reactions, up to and including stonewalling unfavored commenters, but I see this behavior as a blatant self-defense mechanism for your beliefs. Likewise a theist could reject LW's arguments for atheism because oooh those evil people say I'm crazy and it's making me uncomfortable.

Replies from: Alicorn
comment by Alicorn · 2009-10-19T13:22:23.063Z · LW(p) · GW(p)

I don't think I'd characterize calling one's interlocutor crazy as "evil" so much as "mean". I wouldn't expect my theist friends to want to talk to me - about anything, really, much less religion - if I started out presuming them insane because they disagree with me! For the same reason, I don't blame the theists I know from steering clear of this site. It's a hostile environment for them, and they have no reason to enter any hostile environment, including this one. Similarly, I have precious little interest in having nonessential exchanges with or near people who have announced their intention to be mean to me. Calling it a "self-defense mechanism" looks like you think I need some reason to refrain from having conversations with or around you and those who agree with you, apart from predicting that they'll be unfun.

Replies from: thomblake
comment by thomblake · 2009-10-20T15:47:16.319Z · LW(p) · GW(p)

Indeed, categorizing one's enemies as "insane" seems like a bad epistemic move - a bit closer to deciding they're evil mutants.

comment by cousin_it · 2009-10-17T13:14:25.940Z · LW(p) · GW(p)

Translation: we shouldn't discourage new fiction, because we need more fiction that supports my worldview (which by the way happens to be good and true).

Alicorn, no offense intended, but your rationality just seems to switch off when you start talking about your politics. This isn't the first time I've noticed that.

Replies from: Jack
comment by Jack · 2009-10-17T13:46:35.634Z · LW(p) · GW(p)

That's a highly ungenerous interpretation of Alicorn's argument. Her argument holds up no matter what the underrepresented group is. It could be men's rights activists or Ron Paul activists; all the argument requires is that previously small, unpopular, and underrepresented groups become larger, more popular, and better represented. If the world gets more racist, we're going to need more white power books, as much as I would hate such a world. An evaluation of the groups that become popular isn't suggested by the argument.

Replies from: Zack_M_Davis, cousin_it
comment by Zack_M_Davis · 2009-10-17T22:05:12.399Z · LW(p) · GW(p)

Group underrepresentation isn't even necessary, either. A more general form of the argument carries as long as you agree that "[fiction] isn't a completed project[;] [s]topping the production of fiction in its tracks now would leave us with a corpus of stories that" is suboptimal in some way.

Cf. DH7

Replies from: cousin_it
comment by cousin_it · 2009-10-19T13:13:39.969Z · LW(p) · GW(p)

Nope, doesn't work. Why do you think new fiction would make the corpus more optimal in any way?

Replies from: pengvado, Zack_M_Davis
comment by pengvado · 2009-10-19T15:37:03.635Z · LW(p) · GW(p)

Because the criteria of optimality change over time. If civilization ever becomes so static (or so cyclic) that I agree with people from 50 years ago about what makes for a good story, then you can stop writing new fiction. As is, there certainly are some old works that were so good for their own time that they're still worth reading now, despite the differences in values. But I can't fail to notice those differences, and they do detract from my enjoyment unless I'm specifically in the mood for something alien.

Replies from: gwern
comment by gwern · 2009-10-19T16:21:22.105Z · LW(p) · GW(p)

As is, there certainly are some old works that were so good for their own time that they're still worth reading now, despite the differences in values.

If the criteria are always changing & devaluing old works, why do we read things like Gilgamesh or the Iliad or Odyssey? Did they have nigh-infinite value, that they could survive 3k+ years?

Replies from: Jack
comment by Jack · 2009-10-19T16:44:27.272Z · LW(p) · GW(p)

As far as I can tell this is just the "spirit of the times" point restated by people who can't be bothered to read our long-winded exchange.

comment by Zack_M_Davis · 2009-10-19T19:16:47.737Z · LW(p) · GW(p)

It makes the corpus more complete, if nothing else. Of course we don't want to write all possible books; that's just the useless Library of Babel. But that's physically impossible anyway; within the range that we can apprehend, I'm inclined to say that more books about more topics is better.

comment by cousin_it · 2009-10-17T13:50:23.323Z · LW(p) · GW(p)

The concept of "underrepresentation" itself is politically motivated, not just the choice of particular groups.

Replies from: Jack
comment by Jack · 2009-10-17T13:59:46.279Z · LW(p) · GW(p)

I guess. And maybe there is a political critique to be made of Alicorn's argument. But then it needs to be more developed than a snarky translation. There are no obvious ideological blinders in Alicorn's comment, and it certainly doesn't reduce to your translation.

Replies from: cousin_it
comment by cousin_it · 2009-10-17T14:08:47.074Z · LW(p) · GW(p)

Edit: removed screaming. Disregard this comment.

Replies from: Jack
comment by Jack · 2009-10-17T14:29:52.722Z · LW(p) · GW(p)

This is a better paraphrase that captures a political element of the argument. By making "policy keeps trending socially liberal world wide" the opening sentence instead of a final parenthetical, you've certainly made the argument look a lot more political. Congratulations, I guess. It is still a distorted rendering of the initial argument (which was as much about demographic changes as about changes in the allocation of political rights). And it still doesn't come close to reducing to "we need more fiction that supports my worldview". Which of Alicorn's premises does she only hold for political reasons?

comment by Jack · 2009-10-01T17:55:33.347Z · LW(p) · GW(p)

(Edit to say that this is in response to the culture and aesthetics article)

I take there to be a number of different things we want out of a piece of cultural production.

  • Expression of universal aspects of human nature, emotions.

  • Sensory stimuli (why old horror movies aren't scary, older movies have longer shots, and Michael Bay has a career).

  • Shared cultural experience (we like to consume works that are already culturally embedded; we want to share in something nearly everyone experiences. This is why it is worth reading Homer, seeing Star Wars, and listening to the Beatles).

  • Capturing the spirit of the times (we like it when works express what is unique in us, works that capture our sense of place and time, how we're different from our parents, etc. This is why punk music wouldn't have worked in the 18th century, why we have shows like The Wire, and why Rambo's motivations are really confusing for people born after 1980 who never took a modern history course).

Your argument seems to turn on saying that whatever piece of culture you're consuming now, you could be equally satisfied with something older. This seems to be the case with regard to the first criterion, but once one admits the second and the fourth, new production is essential.

Replies from: gwern
comment by gwern · 2009-10-07T23:53:12.628Z · LW(p) · GW(p)

Sensory stimuli (why old horror movies aren't scary, older movies have longer shots, and Michael Bay has a career).

But what extra sensory stimulation do Dan Brown's novels have over Don Quixote? If anything, the medieval printings (to say nothing of the illuminated manuscripts) could be much more elaborate and visually complex, and for every adjective Dan Brown employs, Cervantes uses 10 and throws in an allegorical speech. (I kid, but you know what I mean.)

Further, if we imagine that we had only a few books of high quality in existence (i.e. not a lifetime's worth), and nothing else but this hot new medium of video games, then the technical development must come to an end at some point and then regular production will push us ever closer to the point where the argument resurrects itself. Notice that Nintendo has for two console generations now chosen not to compete on sound or graphics. I don't doubt that there are further innovations in store, but at some point video games will become like novels are now, and movies are fast becoming: a medium whose full limitations are known and anything desired produced.

Capturing the spirit of the times (we like it when works express what is unique in us, works that capture our sense of place and time, how we're different from our parents, etc. This is why punk music wouldn't have worked in the 18th century, why we have shows like The Wire, and why Rambo's motivations are really confusing for people born after 1980 who never took a modern history course).

With enough superior works of #1, we don't need that. But I think you're a little pessimistic. Why couldn't punk have worked in the 19th century?

Religious folks read the Bible and the Koran in every time period for every conceivable purpose and regularly produce new interpretations for their time. Consider the hippie Jesus compared to the medieval Catholic Jesus; or look at higher biblical criticism. One might think that after 1,800 and more years of intensive analysis & exegesis, nothing new could really be said about the text, much less a powerful new interpretation of just about everything that would send shockwaves through Christian & Jewish communities around the world and fundamentally alter many sects - in a way more appropriate for a post-Enlightenment/Industrial Age world.

And of course, Shakespeare keeps being tweaked and reinterpreted to speak to society's current interests.

Replies from: Jack
comment by Jack · 2009-10-08T01:20:08.421Z · LW(p) · GW(p)

But what extra sensory stimulation do Dan Brown's novels have over Don Quixote?

I admit to never having read Don Quixote. I've read Dan Brown and mostly hate him. But it seems pretty obvious to me that Brown's pace is a lot faster, and that's basically what we mean by sensory stimulation for books. It's the equivalent of shorter shots in film. New problems are always popping up, the setting is always changing, etc. And the mind's eye can only adjust to so much additional description. I don't think a longer, more detailed description of a single scene is more stimulating than more basic descriptions of three different scenes.

then the technical development must come to an end at some point and then regular production will push us ever closer to the point where the argument resurrects itself

While this is true of some technologies, I'm not sure it's necessarily true of all mediums. Either way, the technological advancements are permanent. Old black-and-white and color films don't suddenly become equally stimulating once the technology plateaus. This means that the argument doesn't resurrect itself until well after the technology plateaus, as you have to give the industry time to match older accomplishments in the other criteria. In other words, you don't oversaturate society with films until you've matched what is good about Citizen Kane, Casablanca, and Seven Samurai but added high-tech sensory stimulation.

With enough superior works of #1, we don't need that. But I think you're a little pessimistic. Why couldn't punk have worked in the 19th century?

#1 and #4 aren't interchangeable. You can't quell the desire to consume works that speak to our uniqueness and "The Moment" by supplying people with universal works. Try forcing a teenager to listen to their parents' music (there is a surprising revival of classic rock with this generation, but historically music taste has revealed large generational differences).

The scholarly work on the rise of punk music almost always talks about punk as a response to a particular socio-politico-economic condition. Obviously cultural studies isn't a hard science and lacks ideal standards of evidence, but I've found this particular claim convincing. See Subculture: The Meaning of Style by Dick Hebdige. More obviously, the reaction to new music by older generations seems to suggest that what constitutes good music can be temporally relative. I think any invocation of "youth culture" pretty much suggests this.

I'm not sure what the force of your paragraph on reinterpreting the Bible is supposed to be?

Replies from: gwern
comment by gwern · 2009-10-14T15:05:56.314Z · LW(p) · GW(p)

But it seems pretty obvious to me that Brown's pace is a lot faster, and that's basically what we mean by sensory stimulation for books. It's the equivalent of shorter shots in film.

Then shouldn't short story anthologies rule the roost? Those beat out any regular novel for scene changes (each story has several scenes, stories usually aren't long), yet they are almost as commercially suicidal as poems (even quicker than short stories, for that matter). And we don't see much travel fiction like Marco Polo or Ariosto these days.

While this is true of some technologies, I'm not sure it's necessarily true of all mediums. Either way, the technological advancements are permanent. Old black-and-white and color films don't suddenly become equally stimulating once the technology plateaus. This means that the argument doesn't resurrect itself until well after the technology plateaus, as you have to give the industry time to match older accomplishments in the other criteria.

Sure, but this point is only important to prevent people from having an escape hatch: 'Aha! We have plenty of books, sure, but how about movies, video games, etc.?' This point says that the clock is ticking even for them. In order for a new medium or genre to defeat this argument, it would have to be capable of improving itself forever, and at a competitive price point. I don't think this can be done short of the Holodeck or simulated worlds or something, and even then there may be issues. (Consider Pascal's mugging and bounded utility functions - if we create enough art to reach the bound, then we neither need nor want more.)

#1 and #4 aren't interchangeable. You can't quell the desire to consume works that speak to our uniqueness and "The Moment" by supplying people with universal works...I'm not sure what the force of your paragraph on reinterpreting the Bible is supposed to be?

The point is that I think your modalities 1-4 are like saying that there are different incommensurable kinds of utilons, and no number of 1-utilons can make up for a deficit in 3-utilons. The Bible example is specifically intended to show that people can derive all of those utilons from even the narrowest or most worthless resource, and that they can do so apparently ad infinitum (no sign of weariness of the Bible yet...), which all suggests to me that there's really just one utilon.

Replies from: Jack
comment by Jack · 2009-10-14T22:07:08.835Z · LW(p) · GW(p)

Then shouldn't short story anthologies rule the roost? Those beat out any regular novel for scene changes (each story has several scenes, stories usually aren't long), yet they are almost as commercially suicidal as poems (even quicker than short stories, for that matter). And we don't see much travel fiction like Marco Polo or Ariosto these days.

The criterion isn't scenes per page; it's new mental picture per minute of reading time.

Sure, but this point is only important to prevent people from having an escape hatch: 'Aha! We have plenty of books, sure, but how about movies, video games, etc.?' This point says that the clock is ticking even for them.

Conceded. But it's a minor concession. Yes, when we have perfect-as-possible world-simulators, new technology will at some time after that no longer be a driving force of cultural production. When we have perfect computer graphics, camera, and film techniques, technology will no longer drive the production of films once top-level films match earlier productions in the other criteria.

The point is that I think your modalities 1-4 are like saying that there are different incommensurable kinds of utilons, and no number of 1-utilons can make up for a deficit in 3-utilons. The Bible example is specifically intended to show that people can derive all of those utilons from even the narrowest or most worthless resource, and that they can do so apparently ad infinitum (no sign of weariness of the Bible yet...), which all suggests to me that there's really just one utilon.

There are diminishing returns with all the modalities, so you won't maximize total utility by just maximizing one of them. So let's say modality #2 ceases to be relevant because of a technological plateau. In that case people will best maximize their utility by consuming top-of-the-line productions that satisfy large amounts of the desires behind #3, #4, and #1. Modality #3 is mostly contingent on the consumption decisions of everyone else, so put that aside. Then the ideal cultural production will speak to the times and touch on universal themes. These might be rare but will only be possible if cultural production continues indefinitely. Aside from these works, one would want to consume an equal mix of "speaking to the times" works and "universal" works (holding constant a general preference for one over the other). Unless we value universal themes a heck of a lot more than timeliness, this means there is additional need for new cultural production even when that production doesn't speak to universal themes.

I'm still not sure I get the Bible thing. It is true that there are a lot of people who derive a lot of utility from reading the Bible repeatedly. But the people who do this aren't reading the Bible as literature (are there non-theists who just love the Pentateuch? Is the Koran any atheist's "favorite book" on Facebook?). They're getting utility because they think they're reading the work a superbeing wrote to speak to their narrow parochial concerns. These are the only people who come up with modern interpretations, and they do so precisely because the Bible taken at face value says so little about modern concerns. They're trying to make up for the shortcomings of the Bible with regard to #4.

You would rather have us clumsily interpreting Pride and Prejudice so that it seems more relevant to promiscuous, polyamorous culture than just writing new books?

Replies from: gwern
comment by gwern · 2009-10-14T23:43:33.128Z · LW(p) · GW(p)

The criterion isn't scenes per page; it's new mental picture per minute of reading time.

Don't see how that affects my examples. Here's another: how could a book of haiku have a less favorable ratio of 'new mental picture per minute of reading time' than a Dan Brown novel?

Then the ideal cultural production will speak to the times and touch on universal themes. These might be rare but will only be possible if cultural production continues indefinitely. Aside from these works, one would want to consume an equal mix of "speaking to the times" works and "universal" works (holding constant a general preference for one over the other). Unless we value universal themes a heck of a lot more than timeliness, this means there is additional need for new cultural production even when that production doesn't speak to universal themes.

This is your best point so far. Now, diminishing returns don't mean no returns, nor do they necessarily imply converging on any constant (if I remember my limits correctly); but given a finite lifespan, hitting any diminishing returns means a suboptimal set of choices. So we could have thousands of Shakespeares waiting for readers, but if they are all eternal-veritied out, it's still a suboptimal situation.
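(To illustrate the limits point with a textbook example that is not from the original exchange: whether diminishing marginal returns add up without bound or converge to a constant depends on how fast they shrink.)

```latex
% Illustrative only. Marginal returns that shrink harmonically still diverge:
\sum_{n=1}^{\infty} \frac{1}{n} = \infty
% Marginal returns that shrink geometrically converge to a constant:
\sum_{n=1}^{\infty} \frac{1}{2^n} = 1
```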

This definitely blunts my argument. I think I can save it by permitting a small level of current-events production (if you produce too much, then it can't be consumed while current, after all), and there would still be a lot of cost savings - I saw my little sister with a copy of the very popular Pride and Prejudice and Zombies, which is certainly a current-events literary production if ever there was one, yet I'm sure it cost very little to write (Grahame-Smith claims he wrote only 15% of the final text, and the constraints surely made it much easier to write that), and no doubt much less than subsidizing universities to educate hundreds of students in creative writing.

But the people who do this aren't reading the Bible as literature (are there non-theists who just love the Pentateuch? Is the Koran any atheist's "favorite book" on Facebook?).

I'm an atheist, but I'll freely admit I derive tons of pleasure from the Book of Job, to name just one book. And as for the Koran: I was reasonably impressed on my read-through of the translation by its literary qualities, and I have been given to understand that the original Arabic was so highly regarded even by non-believers that Arabic literature can be divided into pre- and post-Koran periods, and that it has since dominated Arabic prosody. Here's a random quick description:

"Besides making a masterful use of language on the level of words and phrases, it contains figures of speech, satire, and irony; employs a variety of narrative and dramatic techniques; and presents characters that, is spite of the sparse personal detail provided about them, come across as vivid figures. For those who can read the Qur'ān in Arabic, the all-pervading rhythm which, in conjunction with the sustained use of what may be called rhymed prose, creates in many sūrahs a spellbinding effect that is impossible to reproduce. There is the characteristic terseness of the Qur'ānic language which makes for some complex constructions, but which is difficult to convey in English without being awkward. The existing translations of the Qur'ān impose a further limitation, for they fall so far short of the highly nuanced original that a detailed study of the Qur'ānic language and style on their basis is well-nigh impossible." http://www.islamic-awareness.org/Quran/Q_Studies/Mirliter.html

(As for Facebook - if you're here, you can construct the social signaling argument for why an atheist would specifically avoid publicizing his appreciation of religious literature, if he can even get past his own hangups in the first place.)

You would rather have us clumsily interpreting Pride and Prejudice so that it seems more relevant to promiscuous, polyamorous culture than just writing new books?

We do that already, very inefficiently, via universities. And see my previous comment on Pride and Prejudice and Zombies... Writing new books is risky, as Jane Austens are rare; critics & interpreters, on the other hand, are plentiful & cheap.

Replies from: gwern, Jack
comment by gwern · 2009-10-16T16:44:46.528Z · LW(p) · GW(p)

And just to show that the Bible-as-literature point isn't just me, here's Richard Dawkins:

“Not entirely, sir. Parts of holy writ have great poetic merit, especially in the English translation known as the King James, or Authorized version of 1611. The cadences of the Book of Ecclesiastes and some of the prophets have seldom been surpassed, sir.”

comment by Jack · 2009-10-17T06:56:38.084Z · LW(p) · GW(p)

Don't see how that affects my examples. Here's another: how could a book of haiku have a less favorable ratio of 'new mental picture per minute of reading time' than a Dan Brown novel?

I mean, maybe they don't. But haiku also don't have twisting plots with anti-matter bombs and ancient religious conspiracies. In general, though, I don't think most short stories or most poems have more favorable ratios than thriller novels. But in any case there are other reasons to prefer thriller novels. The relevant comparison is between thriller novels of today and thriller novels of the past.

I think I can save it by permitting a small level of current-events production (if you produce too much, then it can't be consumed while current, after all),

Right, though of course the entire culture won't want to consume the same set of these works. You'll want to have timely products specific to age group and subculture. Now I don't know where the ideal level of timely cultural production is, but I'm not sure why the market wouldn't have already sorted this out. Publishers, studios, and record companies are all profit-driven organizations, and if they could make more money just re-releasing old works instead of signing new authors and artists, I think they would, since it would save them money. Why shouldn't we think the culture market is efficient? In fact, given how little time most people actually spend consuming Shakespeare (compared to Will Ferrell comedies), it seems to me that timeliness is valued far more than eternal truths.

I'm a fan of the Book of Job too. I also like Genesis. And I have heard the same things about the Koran. But I couldn't possibly read the Bible every day without it seriously diminishing in utility for me. And there are large swaths that are painful to read. I also don't have any particular need for it to have timely or prescient lessons. The people who are getting large portions of their desire for cultural production fulfilled by reading the Bible again and again, day after day, are almost exclusively believers.

As for subsidized universities teaching creative writing, I don't have any reason to think that more creative writing (and, I guess, video game creation and film) students actually translate into more resources wasted producing unnecessary cultural works. Those students are only ever going to get paid for their work if there is market demand for it, and to the extent they spend time producing works when there isn't demand for it, we should just classify that time as leisure time, which benefits overall utility.

Replies from: gwern
comment by gwern · 2009-10-18T18:39:28.935Z · LW(p) · GW(p)

Now I don't know where the ideal level of timely cultural production is, but I'm not sure why the market wouldn't have already sorted this out. Publishers, studios, and record companies are all profit-driven organizations, and if they could make more money just re-releasing old works instead of signing new authors and artists, I think they would, since it would save them money. Why shouldn't we think the culture market is efficient?

I almost forbear from pointing this out, but... we have very good reason to think that the culture market is not efficient. That is, the whole intellectual property regime constitutes massive government intervention & subsidy (as I specifically wrote). If Will Ferrell comedies weren't copyrighted, how much worse do you think they would do against Shakespeare?

(I'll note in passing that publishers like Folger go to great lengths to make their Shakespeare editions copyrighted, by claiming editorial mending (e.g. stitching together plays from the various folios and quartos), by adding in useless essays and retrospectives that the target demographic - students - will never ever read, and so on.)

But I couldn't possibly read the Bible every day without it seriously diminishing in utility for me.

The people who can do so were raised that way. The Bible shows that, to a great degree, the quality and 'endurance' (depth?) of a work is subjective & culturally set. If you were raised in a culture that discouraged (or didn't encourage) new works, do you think you would still be literarily restless and footloose all your life? A different point: perhaps the Bible is not your ideal book, but do you think there does not now exist one for you?

Those students are only ever going to get paid for their work if there is market demand for it, and to the extent they spend time producing works when there isn't demand for it, we should just classify that time as leisure time, which benefits overall utility.

This seems to assume an efficient market again. But wages and employment are notoriously irrational/inefficient portions of the economy (e.g. 'wage stickiness'); if a student has spent 4 years learning creative writing (and even more for a master's), likely going into debt for it, are they really going to admit their mistake and work in some more remunerative field?

No, of course not, either out of sheer stubbornness (to do so would be to admit a massive mistake), or because they love the field. Ergo, an inefficiency where there is an oversupply of English majors. (I believe Robin Hanson has a similar theory: that there are too many musicians, resulting in near-minimum-wage average pay, because it's glamorous/socially-impressive.)

Replies from: Jack
comment by Jack · 2009-10-19T03:24:25.106Z · LW(p) · GW(p)

I almost forbear from pointing this out, but... we have very good reason to think that the culture market is not efficient. That is, the whole intellectual property regime constitutes massive government intervention & subsidy (as I specifically wrote). If Will Ferrell comedies weren't copyrighted, how much worse do you think they would do against Shakespeare?

We actually can test this question. On the internet, copyright laws are so poorly enforced that they might as well not exist. Do you think Will Ferrell movies are downloaded at a lower or higher rate than Shakespeare? Now maybe we think the reason for this is that Will Ferrell comedies are only available for free on the internet, whereas Shakespeare is copyright-free everywhere. But we can compare Will Ferrell movies to older movies that are still under copyright and they'll still do better; maybe not always over the long term, but certainly in the period in which they are timely and relevant.

How exactly are copyright laws supposed to skew the market toward recent works, anyway? Sure, copyright means the production companies need to produce new works and advertise them, but it basically counts as a tax on consuming any work produced in the copyright period. The fact that there is a thriving culture industry despite the existence of copyright termination should count as a reason to think there is a real desire for new production. We might think that the desire is just constructed by the industry through advertising, but the culture industry wouldn't be different in this regard from any other industry.

The people who can do so were raised that way.

Maybe. But my argument is that they just think they're reading the words of God. I think that reason is a lot more compelling but I'm not sure how to settle it. Are there non-religious works that draw the same kind of adoration? If I thought there was a book written by God I would read it as much as possible, too.

If you were raised in a culture that discouraged/didn't-encourage new works, do you think you would still be literarily restless and footloose all your life?

Well, that definitely isn't going to make me want to read one book again and again. If the quality of new works decreased, I probably would read old works more, but only because of the quality disparity, not because I would no longer have a desire to read good, new works. I do wonder, though, if there is a neurodiversity issue here. I have pretty serious ADHD, which might contribute to my having a steeper drop in returns from repeat consumption.

Re: The English major

You're right. Though I think an English degree is mostly an inefficiency because it doesn't get used, not because it does. Still, it is plausible that a resulting surplus of works drives the production price down...

Edit: I'm not sure I have a response, or if I need one. It sort of depends on what would happen to the quality of work in a world without English departments, which I find very difficult to answer.

Replies from: gwern
comment by gwern · 2009-10-19T15:55:48.995Z · LW(p) · GW(p)

Do you think Will Ferrell movies are downloaded at a lower or higher rate than Shakespeare?

Heh. I don't see any feasible way to measure that!

Now maybe we think the reason for this is that Will Ferrell comedies are only available for free on the internet, whereas Shakespeare is copyright-free everywhere. But we can compare Will Ferrell movies to older movies that are still under copyright and they'll still do better; maybe not always over the long term, but certainly in the period in which they are timely and relevant.

Is it fair to simply ignore the long term? It'd be kind of strange to hear advice that bonds are the best investment around 'because stocks aren't paying you anything right now'.

The fact that there is a thriving culture industry despite the existence of copyright termination should count as a reason to think there is a real desire for new production. We might think that the desire is just constructed by the industry through advertising, but the culture industry wouldn't be different in this regard from any other industry.

A local but not global optimum? I just read Ainslie's Breakdown of Will, and it really seems to me like hyperbolic discounting might explain why people go 'ooh, shiny!' about new works even though they shouldn't want to pay the copyright tax.

If the quality of new works decreased, I probably would read old works more, but only because of the quality disparity, not because I would no longer have a desire to read good, new works.

Would you really? In your life, there has surely been a year or two where quality of production has dropped (art isn't so reliable & consistent as to only improve every year); did you shift your reading habits?

Replies from: Jack
comment by Jack · 2009-10-19T16:34:03.060Z · LW(p) · GW(p)

Is it fair to simply ignore the long term? It'd be kind of strange to hear advice that bonds are the best investment around 'because stocks aren't paying you anything right now'.

I'm only ignoring the long term because I'm looking for evidence that the rate at which the market produces new, timely works is reasonably close to what the demand for such works is.

Would you really? In your life, there has surely been a year or two where quality of production has dropped (art isn't so reliable & consistent as to only improve every year); did you shift your reading habits?

My fiction reading habits have very little to do with timeliness concerns. I read new fiction when an author I like produces it. Otherwise my reading is focused on genre books (science fiction, like everyone else here) and classics. The only temporal criterion in my reading is preferring books written recently enough that the style isn't so dated it slows me down. There are some occasions for preferring timely topics, for sure. But the frame for "timely" in these cases tends to be about 5-10 years, so my reading habits won't actually vary from year to year. I don't even have enough information about books to make good selections until end-of-the-year lists come out. And that is part of the problem. There is very little information around that lets one compare books (and music and movies) in any systematic way except in relation to other works that came out that year. Once in a while there is an instant classic, but it is hard to know what the choice works of a year are until a couple of years later.

In short, reading habits don't correspond to quality of output by year because (1) I lack the information to adjust my habits accordingly and (2) the time frame for "recent" is quite a bit more than a year, so there is no need to adjust habits year by year.

Replies from: gwern
comment by gwern · 2010-03-01T01:43:52.736Z · LW(p) · GW(p)

I'm only ignoring the long term because I'm looking for evidence that the rate at which the market produces new, timely works is reasonably close to what the demand for such works is.

Close enough for government work, I suppose:

"Conservative estimates are in the hundreds of thousands - if as few as 600 books have been stripped at each of the closing stores (600 X 182 = 109,200). And thousands of books have been stripped at each store before being shipped back to distribution centers for disposal." http://donatenotdumpster.blogspot.com/2010/01/despite-our-pleas-borders-has-trashed.html

Indeed, the publishing industry thinks nothing of pulping millions of unsold (or libelous) books each year. And there was no outcry in 2003 when 2.5 million romance novels from the publisher Mills & Boon were buried to form the noise-reducing foundation of a motorway extension in Manchester, England. http://www.nytimes.com/2007/03/04/books/review/Schott.t.html

Even the Library of Congress doesn't want to keep copies of everything:

The Library receives some 22,000 items each working day and adds approximately 10,000 items to the collections daily. http://www.loc.gov/about/facts.html

comment by Zack_M_Davis · 2009-10-18T23:16:44.639Z · LW(p) · GW(p)

Re the culture piece: you make some important points, but the "Let's ban new books" thing seems rather self-undermining. If you're going to ban new books, why not ban new blog comments, too? The observation that we have more culture than anyone can know what to do with is hardly original, and your phrasing can't have been the best, so why did you spend all that time writing this piece, when you could have been making money?

My answer to this entire dilemma is just to say that culture isn't about economic consumption: I guess that was entirely your point, but I'm taking a different attitude about it. Writing has been made into a commodity, but writing as such is a means of communication between people. To say that I should not write is to say that I should not speak, and even the least educated and cultured among us says something now and again, so to say that I should not speak is to say that I should not live. Why should I live, when we already have billions of people? Because I want to. I don't care if nothing I do has global, world-shaking effects; I don't care if it's all been known and done before (if not here then somewhere across the many worlds); I want to know; I want to do. This particular conjunction of traits and ideas wants to know and do, even if lots of other superficially similar conjunctions have already known and done many superficially similar things.

I think that society ought not discourage the production of new novels, not because we need more novels around (you're right; we don't), but because I want to live in a world where everyone writes a good novel. No one's life is exactly bitwise identical to someone else's (or they'd really be the same person anyway), so everyone must have something to say that hasn't already been said in exactly the same way. So let's explore the space; the project is by no means complete. Yes, this means that a lot of crap will get written, but I still think it's more fun this way than tiling the galaxy with James Joyce. But de gustibus non est.

Replies from: gwern
comment by gwern · 2009-10-19T01:03:16.343Z · LW(p) · GW(p)

Re the culture piece: you make some important points, but the "Let's ban new books" thing seems rather self-undermining. If you're going to ban new books, why not ban new blog comments, too?

Yeah, it is undermining. But it's funny! You're reading along and then you see "Let's ban new books", which, although a fairly logical extrapolation, is still something that no one would expect to be seriously suggested.

More seriously, as I think I've already argued here, blog comments (and most websites in general) don't affect the weakened argument about removing subsidies. Less Wrong receives no government support (if anyone mirrored us, would we actually sue them or even bother with DMCA takedowns? I note that we don't work under any CC licenses, but that seems more like an omission than anything).

The observation that we have more culture than anyone can know what to do with is hardly original,

Alas, there is nothing new under the sun. (Oops.) But it seems to've been novel enough to most of the people who read it, and it's not like non-philosophers read Schopenhauer any more.

and your phrasing can't have been the best, so why did you spend all that time writing this piece, when you could have been making money?

As a student, my time is worthless! Essays like this may be useful as advertising, or spinning off into assignments; and they're much better than playing Geometry Wars. (Also, is my phrasing that bad? I thought I wrote it pretty well. :()

My answer to this entire dilemma is just to say that culture isn't about economic consumption: I guess that was entirely your point

I was also trying to show that it's 'not about Esthetics' too: if it were, we would expect there to be a lifetime-length canon optimizing your esthetics-per-work count, with occasional tweaks (deletions & additions) by specialists when some work is realized to not be very good or just exceeded by some unincluded work. But that is manifestly not the case.

I think that society ought not discourage the production of new novels, not because we need more novels around (you're right; we don't), but because I want to live in a world where everyone writes a good novel.

So this would fall under the 'externalities' category - people writing novels become better people for it?

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2009-10-19T05:57:24.709Z · LW(p) · GW(p)

(Also, is my phrasing that bad? I thought I wrote it pretty well. :()

Well, I'm glad you wrote it, but I'm not the one complaining that we produce too much text.

if it were, we would expect there to be a lifetime-length canon optimizing your esthetics-per-work count, with occasional tweaks (deletions & additions) by specialists when some work is realized to not be very good or just exceeded by some unincluded work.

I think you're underestimating long tail effects. There is a sense in which we can say that some authors are much better than some others, but people have extremely specific tastes, too: no one canon will suffice, not even canons for particular genres and subgenres. Consider that I like the particular philosophical style of Greg Egan; giving me a list of top "hard science fiction" won't help me. Or consider that one of my favorite short stories ever is Scott Aaronson's "On Self-Delusion and Bounded Rationality." Now, Scott Aaronson isn't a professional fiction writer; I don't think that story was even conventionally published in an official fiction venue; it's not going in any accepted canon. But why should I care? It's going in my canon. Or consider that there's a lot of work on very specific topics that I have reason to believe doesn't exist. So I'll have to create it. Even if most of you wouldn't understand or wouldn't care, well, I'm not living for your sake. Some clever person updated Warhol, you know: "In the future, everyone will be famous to fifteen people."

So this would fall under the 'externalities' category - people writing novels become better people for it?

Um, sure, although I'd phrase it differently. It's not so much "doing this stuff will make you a better person" as "the entire point of this being-a-person business is doing stuff, and it might as well be this as not."

Replies from: gwern
comment by gwern · 2010-02-28T22:34:24.121Z · LW(p) · GW(p)

Consider that I like the particular philosophical style of Greg Egan; giving me a list of top "hard science fiction" won't help me.

This sounds like an acquired taste; if you only came to like Egan's style because it exists, and you would've come to like some style even if Egan had never been...

It's not so much "doing this stuff will make you a better person" as "the entire point of this being-a-person business is doing stuff, and it might as well be this as not."

Well, OK. If writing books is a leisure activity, then why does it need any protection or subsidies? You don't hear many panicked cries that there is a papier-mâché deficiency which needs state intervention.

"No man but a blockhead ever wrote except for money."

comment by cousin_it · 2009-10-01T14:40:04.098Z · LW(p) · GW(p)

The first essay was the best IMO. What do you think about banning net-unproductive websites?

Replies from: gwern
comment by gwern · 2009-10-07T23:58:36.634Z · LW(p) · GW(p)

It would be tremendously difficult, as we can generally agree whether a book is fiction or nonfiction, but 'net-unproductive websites' is unclear, and what subsidies are websites in general receiving that we could scrap? (An actual ban or tax obviously would be even more difficult to implement in a usefully Pigovian way.)

Books have copyrights, universities, direct government grants, etc.; but the Internet is famously disdainful of the former, and mechanisms like the latter two are very rare indeed. (Quick: name an American poet or novelist who took a foundation or university-sponsored sabbatical to work on their website!)

comment by taw · 2009-10-01T15:06:01.855Z · LW(p) · GW(p)

As for your claim that old is as good as new - it's not.

Or consider another medium: movies. Have you seen even a fraction of the IMDB’s Top 250?

Yes, about half of them. Not all were actually good; IMDB has some systemic biases. Good movies are much less common than you claim.

Also, you cannot just decide to skip making mediocre movies (or anything else) and only do the good ones. At best, by halving the number of movies made, you'll halve the number of great movies made. Due to expected positive externalities (directors and so on learning from previous movies how to make better ones), it might lower the number of great movies even more.

If you make a list of the best movies, they tend to be more recent. Looking at IMDB, which I consider very strongly biased towards old movies, the top 250 are from:

  • 1920s - 6
  • 1930s - 15
  • 1940s - 26
  • 1950s - 36
  • 1960s - 24
  • 1970s - 25
  • 1980s - 26
  • 1990s - 36
  • 2000s - 56

This is quite strongly indicative that the movie-making industry is improving (and this effect is underestimated by IMDB quite considerably). On the list of movies I rated 10/10 on IMDB, only 1 out of 28 is not from the 1990s or 2000s.

It's also true for books - progress is not that fast, but I can think of very few really great books earlier than the mid-20th century. Or highly enjoyable music earlier than the last quarter of the 20th century. No solid data here; it might be due to the progress of technology in the case of music, and a better cultural match with me in the case of books.

Replies from: CronoDAS, Technologos, dclayh, gwern
comment by CronoDAS · 2009-10-04T17:14:01.760Z · LW(p) · GW(p)

Random thoughts:

Values Dissonance is a real problem, even when applied over the scale of 50 years. Also, ScienceMarchesOn and even History Marches On. The more things we learn, the more things we can tell stories about.

I've found that, by reading an awful lot of books, I feel like I understand literature and storytelling. On the other hand, I really don't understand music very well. I can't tell what qualities make one piece of music good and another not as good. I can play the piano pretty well, but I can't really improvise or compose. My taste in music (or complete lack thereof) seems to have a great deal to do with the mere exposure effect; I like the kinds of music that I hear a lot and don't like the kinds of music that I hear less of.

Also, one other big difference between much contemporary popular music and much classical music is that a lot of contemporary popular music has lyrics that listeners can understand, and a lot of classical music is entirely instrumental or in foreign languages.

comment by Technologos · 2009-10-02T01:34:43.637Z · LW(p) · GW(p)

Obviously, if we were actually going to work through this data, we would want to know the rate of best-movie ranking rather than the absolute numbers. Just as importantly, we'd want to know the frequency of best-movie ranking relative to the number of movies watched from each decade, so that best-movie rankings aren't simply dependent on availability.

In my experience, of the older movies I have watched, a greater fraction were strongly memorable than of the newer movies I have watched. In part, I suspect this is because I watch older movies intentionally, knowing that they are reputed to be good, whereas I watch newer movies with a somewhat lower bar for putting in the effort (because they are available in theaters, are easier to talk about, etc.).

Replies from: taw
comment by taw · 2009-10-02T02:34:52.346Z · LW(p) · GW(p)

Assuming the best old movies don't get filtered out and stay available, this data is accurate for our purpose.

The IMDB top list is based on Bayes-filtered ratings; it says what proportion of people watching a movie loved it, not how many people watched it. It will be automatically biased towards intentional watching (and therefore old movies), and the bias is in my opinion fairly strong. Still, in spite of this, new movies win.

Replies from: Technologos
comment by Technologos · 2009-10-02T07:44:13.404Z · LW(p) · GW(p)

To be clear, I agree that the list should be biased towards old movies in the manner you describe.

The total number of films created has been rising for a while, however (under the "Theatrical Statistics" report here, for instance). It's not entirely unreasonable to believe that over 3x as many films were made in the 2000s as in the 1930s, though; compare Wikipedia's lists of 1930s films and 2000s films. The latter is dramatically longer.

Like I said, we would want to know the fraction of films making the Top 250 list, not the absolute numbers.
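(A minimal sketch of that computation, for concreteness. The Top 250 counts come from taw's comment above; the per-decade production totals below are made-up placeholders, not real data, and would have to be filled in from something like the MPAA's Theatrical Statistics or Wikipedia's per-decade film lists.)

```python
# Normalize taw's Top 250 counts by the number of films made per decade,
# so the comparison is a rate rather than an absolute count.

top250_by_decade = {
    "1920s": 6, "1930s": 15, "1940s": 26, "1950s": 36, "1960s": 24,
    "1970s": 25, "1980s": 26, "1990s": 36, "2000s": 56,
}

# Hypothetical production totals, for illustration only.
films_made_by_decade = {
    "1920s": 2000, "1930s": 3000, "1940s": 3000, "1950s": 3500, "1960s": 4000,
    "1970s": 4500, "1980s": 5000, "1990s": 7000, "2000s": 10000,
}

for decade in sorted(top250_by_decade):
    hits = top250_by_decade[decade]
    made = films_made_by_decade[decade]
    print(f"{decade}: {hits:>2} of ~{made} films ({hits / made:.2%}) made the Top 250")
```

Depending on the real denominators, the ranking by rate can come out quite differently from the ranking by raw count, which is exactly why the normalization matters.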

Replies from: gwern
comment by gwern · 2009-10-08T00:15:37.460Z · LW(p) · GW(p)

It would also be interesting to apply the methods of _Human Accomplishment_, collating critical lists & histories other than IMDB, such as the rather grandiose "The Best 1,000 Movies Ever Made" from the New York Times. I would very much expect a recency effect.

comment by dclayh · 2009-10-01T20:08:31.121Z · LW(p) · GW(p)

Or highly enjoyable music earlier than the last quarter of the 20th century.

Really? Really? I would put Mozart, Bach or Verdi against absolutely anyone from 1975 to the present.

Replies from: pdf23ds, Alicorn, billswift, taw
comment by pdf23ds · 2009-10-02T07:55:48.516Z · LW(p) · GW(p)

I'm trained as a classical pianist, and I still don't enjoy Mozart, Verdi, Scarlatti, or pretty much any of the other classical-period composers. I love Bach, but I'm not familiar with other baroque composers.

But mainly, I really enjoy romantic and modern classical composers. I'd absolutely agree with the thesis that music has been getting better and better, even limiting oneself to classical music. (Bach is an amazing exception.)

Comparing classical to popular music is very interesting. Perhaps the difference is that classical music requires a very developed ear in order to enjoy it, and so it only appeals to a much smaller subset of people (those with training or high musical talent) while still being comparable or superior in quality to popular music. I would compare it to wine, except there's strong evidence that wine appreciation is almost entirely about status. I'm not sure if there's anything else to compare it to. Programming as an art form?

Replies from: anonym
comment by anonym · 2009-10-03T19:58:43.367Z · LW(p) · GW(p)

I think enjoying poetry or literature is a good comparison. Both take effort and some hard work to be able to appreciate, and both are considered dull and boring by people with no training or study in the relevant discipline. They also unfortunately appeal to some people's shallow sense of "high culture" and thereby encourage inauthentic signaling by lots of people who don't really enjoy them. It's easy to understand that if you have no experience yourself, and your experience with a small number of people who profess enjoyment is that they are engaged in false signaling, you would think there is nothing more to it than that: that everybody who professes passion is just engaged in false signaling.

I'm convinced that most people who took a music appreciation class and studied music theory and ear training for a year, combined with some music lessons, would at the end of that process have a completely different reaction to classical music (assuming they did it all by choice and weren't forced into it by parents).

Replies from: Alicorn
comment by Alicorn · 2009-10-03T20:37:46.441Z · LW(p) · GW(p)

Mightn't that just be because those courses are specifically designed to teach appreciation of those kinds of music? I expect it's probably possible to teach people who don't like rap, or country, to appreciate those genres; but because rap and country don't fit the shallow sense of high culture, no one is motivated to learn to appreciate them if they don't already. There is very little net benefit to learning to appreciate a new kind of music - there is abundant music in most genres, and one can easily fill one's ears with whatever one can most readily enjoy, so you probably don't get more total enjoyment from music by adding to your enjoyed genres. In the case of classical music, the benefit of learning to like it isn't really in the form of enjoyment of classical music; it's in the form of getting to sincerely claim to like classical music, and no longer being left out when highly cultured people discuss classical music.

Replies from: cousin_it, anonym, pdf23ds
comment by cousin_it · 2009-10-04T16:37:05.245Z · LW(p) · GW(p)

There is very little net benefit to learning to appreciate a new kind of music - there is abundant music in most genres, and one can easily fill one's ears with whatever one can most readily enjoy, so you probably don't get more total enjoyment from music by adding to your enjoyed genres.

That argument only works if we aren't allowed to enjoy novelty.

Replies from: Alicorn
comment by Alicorn · 2009-10-04T16:41:25.776Z · LW(p) · GW(p)

We can still enjoy novelty! For instance, I have a near-perfect track record of liking show tunes. There's lots. I can get a steady supply of novelty, supplementing older musicals with the new ones that come out every year and the other sorts of music I like. I don't need to learn to appreciate entire new genres to do it. Unless you mean that appreciating a new genre is a qualitatively different form of novelty? But then learning to appreciate the new genre is self-defeating. By the time you've learned to like it, you've already been exposed to lots of it and it's no longer new.

Replies from: cousin_it
comment by cousin_it · 2009-10-04T17:02:22.820Z · LW(p) · GW(p)

Do you actually feel this aversion? Because it's so... foreign to me. Learning to enjoy a new genre of music is always a fascinating discovery. I hear a curious snippet somewhere and go hmmm, gotta investigate deeper, then 24 hours later I'm swimming in the stuff, following connections, reading and listening... sort of like this (warning, that site is like crack for the right kind of person.)

Replies from: Alicorn
comment by Alicorn · 2009-10-04T17:14:01.960Z · LW(p) · GW(p)

It's not an aversion. If I had nothing better to do, or had a terrible time finding anything new to listen to, I'd be okay with learning about and learning to appreciate classical music. But as it happens, new, immediately fun music enters my life at a pretty satisfactory rate. I added a new artist to my library just yesterday because my roommate played his CD in the car on the way to the grocery store and it sounded neat. There's no reason for me to spend extra time on music that doesn't promptly catch my ear, when I can just hit up friends for personalized recommendations, cruise Pandora, and keep up with the artists I already enjoy - unless I feel like succumbing to the status signals that make classical different from other music!

comment by anonym · 2009-10-04T08:18:51.926Z · LW(p) · GW(p)

In the case of classical music, the benefit of learning to like it isn't really in the form of enjoyment of classical music; it's in the form of getting to sincerely claim to like classical music, and no longer being left out when highly cultured people discuss classical music.

How would you know this given your admittedly limited experience with classical music?

Speaking for myself, there is lots of music that I love listening to, in many different genres, but nothing else has such power to move me as classical music at its best does: for example, the Confutatis from Mozart's Requiem, or the Bach D minor Chaconne, or, in a lighter vein that I think anybody can appreciate and feel moved by, Paganiniana or the Vitali Chaconne.

I love lots of popular music, and probably listen to popular music about as much as I do classical, but there is a certain kind of ecstatic -- almost mystical -- experience that some classical music triggers that I've never gotten with popular music.

Replies from: Alicorn
comment by Alicorn · 2009-10-04T12:19:50.386Z · LW(p) · GW(p)

Okay - so you get special, unique value from classical. Meanwhile, I get special, unique value from Phantom of the Opera. Why should I think that learning to like classical music is more worth my time - given that I'm now left bored by most classical, or think of it as pleasant background noise - than pirating more Andrew Lloyd Webber?

Replies from: anonym
comment by anonym · 2009-10-04T18:21:14.118Z · LW(p) · GW(p)

I'm not so much arguing for learning to like classical music as for learning to understand classical music. I think most people would enjoy it more if they had greater understanding. Classical music is especially rewarding with greater appreciation/understanding, and especially difficult to enjoy with less appreciation/understanding. Perhaps an analogy will convey my point better.

You write fiction, yes? Have you ever studied creative writing, taken a class, read a book on creative writing? Have you ever had an English class with a skilled and passionate teacher that involved analysis of texts that you gained more and more appreciation for after really careful reading and study? Do you feel that the process of becoming a better writer and/or learning to analyze fiction has increased your appreciation and enjoyment of fiction? Most people find that going through those sorts of processes results in much greater enjoyment and appreciation, and they are also able to enjoy fiction that they formerly would have found boring. I think the process is the same for classical music (and jazz as well, for that matter [it's true of any art/music/etc., but to different degrees]).

Expecting to either just "like it" or "find it boring" and thinking of it as being just another genre like rock or pop is like approaching Dostoevsky with the same background/expectations/skills/patience as you would a Tom Clancy novel. The fact that Dostoevsky is more difficult than Clancy, that most people find Dostoevsky boring and Clancy (or an equivalent easy read) engaging, doesn't mean that it's just a matter of taste which you happen to enjoy more. Some things require considerable experience and skill before it is possible to have an informed judgment about them: the literature classics, for example, and classical music.

As for whether it's worth anybody's while to do so, that's an individual choice.

Replies from: Alicorn
comment by Alicorn · 2009-10-04T20:31:36.095Z · LW(p) · GW(p)

You write fiction, yes? Have you ever studied creative writing, taken a class, read a book on creative writing?

Yes. No. No. No.

Have you ever had an English class with a skilled and passionate teacher that involved analysis of texts that you gained more and more appreciation for after really careful reading and study?

Hell no. I have a completely unbroken track record of hating every single book that I have ever read for the first time as a class assignment, and have never found that a book I already liked was improved by this kind of dissection.

Do you feel that the process of becoming a better writer and/or learning to analyze fiction has increased your appreciation and enjoyment of fiction?

Not one bit! I have mostly become a better writer by learning related skills (I was allowed to make up my own second major in undergrad, and therefore literally have a degree in worldbuilding), practicing, and emulating the good parts of what I read. I now have to turn off my critical faculties entirely to enjoy any works of fiction at all, even those that are overall very good, because detecting small flaws in their settings, characterization, handling of social issues, dialogue, use of artistic license, etc. will throw off my ability to not fling the book at a wall. Works that aren't overall good turn on said critical faculty in spite of my best efforts. I can barely have a conversation about a work of fiction anymore without starting to hate it unless I'm just having a completely content-free squee session with an equally enthusiastic friend!

Most people find that going through those sorts of processes results in much greater enjoyment and appreciation, and they are also able to enjoy fiction that they formerly would have found boring.

I guess I'm a mutant?

Expecting to either just "like it" or "find it boring" and thinking of it as being just another genre like rock or pop is like approaching Dostoevsky with the same background/expectations/skills/patience as you would a Tom Clancy novel.

Although I have never read an entire Dostoevsky novel (my reading list is enormous and I haven't gotten around to it), I have really liked the excerpts I've read - immediately, without having to work for it. This is why I plan to read more of his stuff when I get around to it. I've never tried any Tom Clancy. Is he worth reading?

Some things require considerable experience and skill before it is possible to have an informed judgment about them: the literature classics, for example, and classical music.

Maybe this is just my idiosyncrasy, but I think making the reader work hard when this isn't absolutely necessary - in fiction, nonfiction, or anything else - is a failure of clarity, not a masterstroke of subtlety. This isn't to say that you can't still have a good work that makes the reader do some digging to find all the content, but that's true of any flaw - you can also have a good work with a kinda stupid premise, or with a cardboard secondary character, or that completely omits female characters for no good reason, or has any of a myriad of bad but not absolutely damning awfulnesses.

Replies from: dfranke, anonym
comment by dfranke · 2009-10-04T20:59:27.515Z · LW(p) · GW(p)

I've preferred classical music over other genres since preschool. I think that's sufficient to rule out any explanation of my tastes involving signaling, because a preschooler's appreciation of classical music signals nothing to other preschoolers. Neither of my parents was particularly into classical music, so I wasn't reflecting any expectation of theirs either. I'm in agreement with anonym about the value of music education: it has heightened my enjoyment and appreciation of all music: classical especially, but pretty much everything else as well, other than maybe hip-hop.

However, I also agree with you about literature. Every English class that I had to take in middle school through college completely destroyed my ability to enjoy the subject under study for years to come. I used to love Michener until I had to write an essay about his work during my junior year of high school; I haven't been able to face him since. I don't think this contrast reveals anything unusual about my psyche; rather I think it means that the comparison of English education to music education is apples-to-oranges.

Replies from: Alicorn
comment by Alicorn · 2009-10-04T21:12:36.940Z · LW(p) · GW(p)

I don't think I've ever claimed that the only reason anyone would like classical music would be because of signaling. If you liked it as a preschooler, it seems to me that's just your taste, and I'd neither privilege it nor scorn it compared to the taste of someone who, in preschool, liked any other kind of music. I think that the only reason to devote time and energy to learning to like classical music when you don't already - which I doubt you did in preschool - is for signaling purposes.

Replies from: pdf23ds
comment by pdf23ds · 2009-10-04T21:19:45.720Z · LW(p) · GW(p)

Wait, the only reason? Really? I'll certainly admit it's a pretty common reason.

Replies from: Alicorn
comment by Alicorn · 2009-10-04T21:25:58.529Z · LW(p) · GW(p)

Okay, you're right, that was an overstatement. There could be boredom, or course requirements, or curiosity, or things like that.

Replies from: komponisto, pdf23ds
comment by komponisto · 2009-10-06T04:11:14.375Z · LW(p) · GW(p)

What about the desire to make an "aesthetic investment" -- that is, to put in some work upfront in order to reap the rewards of a high-quality experience later on? (Why, I wonder, are people so quick to dismiss the possibility of such rewards?)

As regards signaling as a "common" motivation: maybe this works in continental Europe, or in certain idiosyncratic communities where this kind of music enjoys social prestige. In the mainstream of American society, however, an interest in art music buys you little to no status (particularly as compared with a corresponding interest in similarly elevated forms of other arts, such as literature or painting). To be a devotee of this kind of music is to be a nerd of one of the worst kinds. (It's even considered un-American: witness Bill Clinton's remark that "Jazz is America's classical music".)

You know the cultural asymmetry that C.P. Snow famously described, wherein "well-rounded" educated people are expected to know more about the humanities than the sciences? Well, it's dwarfed into insignificance by the asymmetry that exists between what "cultured" people are expected to know about music versus what they are expected to know about other arts.

So be extra cautious when positing status-signaling explanations for the behavior of art music devotees, particularly in America.

Replies from: anonym
comment by anonym · 2009-10-06T04:47:08.745Z · LW(p) · GW(p)

I've also wondered about the implicit assumption lots of people have that if music were going to yield extreme degrees of pleasure for them, then it would do so without much effort on their part and in short order. I've also noticed the assumption you touch on that because all non-deaf people have a pair of working ears and have known how to use them since childhood, they are all equally capable of judging different types of music and recognizing that they're basically all the same, like different flavors of ice cream.

I think you're spot on about classical music and status in America, at least in my neck of the woods. I work at a well-known company in the SF bay area that has a lot of very smart and very well-educated people, and it would be embarrassing for me to admit at work that Bach is my favorite musician or that classical music is my favorite music. It would be viewed as pathetically old-fashioned and uncool.

ETA: I think the status thing with regard to classical music in the SF area is generational. I'm in my thirties. If I and my peers were a generation older, then I think classical music would be regarded more positively and be less stigmatized. When I go to a classical music event, I see mostly people who are at least a generation older than me. In my workplace, the median age is probably something like 28-34, so the classical music listeners are of my peers' parents' generation. To be honest, I completely agree with the accusations of signaling for the vast majority of people you see at classical music events around here. Few of the (mostly older) people I see at concerts seem like they're there for the music -- they spend their time dozing off, fidgeting with things, people gazing and being gazed at, counting the minutes till intermission and then rushing out at the end without wanting to hear the encores. People my age and younger at concerts seem much more sincere, even if there are so few of them.

comment by pdf23ds · 2009-10-04T21:32:00.150Z · LW(p) · GW(p)

Here's a couple more: desire to learn an instrument (because training often uses mainly classical repertoire), or the recommendation of someone trusted. One could argue the latter is about status, but I don't think it always is.

Replies from: Alicorn
comment by Alicorn · 2009-10-04T21:34:00.209Z · LW(p) · GW(p)

Those are reasons too - good ones, even. And it probably depends on the motivation behind the recommendation whether it's about status.

comment by anonym · 2009-10-04T22:27:49.325Z · LW(p) · GW(p)

Hell no. I have a completely unbroken track record of hating every single book that I have ever read for the first time as a class assignment, and have never found that a book I already liked was improved by this kind of dissection.

Maybe I'm the mutant. I know that your reaction is very common, but I attribute it to bad teaching and/or to students being forced against their will to do something that they will therefore be very likely to hate. When I have been in classes with smart, passionate teachers, and the students were there because they were genuinely curious and not to fill a requirement, I've seen lots of minds get turned on in a way that extended past the end of the course and positively affected their enjoyment afterwards. I've also recommended books like Gardner's The Art of Fiction: Notes on Craft for Young Writers to adult friends who are avid readers and have had only positive feedback, some of it of the 'profoundly changed the way I read for the better' variety.

Maybe this is just my idiosyncrasy, but I think making the reader work hard when this isn't absolutely necessary - in fiction, nonfiction, or anything else - is a failure of clarity, not a masterstroke of subtlety.

I don't think very many people would disagree with you on that as a general principle. I certainly don't. Not all difficulty is gratuitous though.

Replies from: Alicorn
comment by Alicorn · 2009-10-04T22:53:16.408Z · LW(p) · GW(p)

I've never taken an English or literature class voluntarily - it's one of many subjects that I was permanently turned off to in high school and was glad to be rid of after I finished my gen ed requirements in undergrad. (Except English in the sense of fine points of grammar, vocabulary, etc. which I've reclassified as "linguistics" to be comfortable with it.) So maybe I was badly taught. But except for the part where I suffered during required English/literature classes and (maybe) developed a block about a handful of books that I might have liked if I'd run into them on my own, I don't think I'm badly off for not having this sophisticated level of appreciation - given how I react when I exercise what artistic discernment I do have, and given how much fiction I find and enjoy without the help of literature instruction.

Not all difficulty is gratuitous at all! There are plenty of bits of content that would lose their impact if stated directly, for instance. But I think a large portion of the difficulty that is found in so-called classic literature is gratuitous.

Replies from: CronoDAS
comment by CronoDAS · 2009-10-04T23:08:42.845Z · LW(p) · GW(p)

I suggest self-study on the TV Tropes Wiki. ;)

Replies from: Alicorn
comment by Alicorn · 2009-10-04T23:09:54.989Z · LW(p) · GW(p)

Oh, for the love of chocolate-covered strawberries, never again. I spent a week in that sinkhole and am now mostly inoculated - I've read the majority of pages that catch my eye on a casual scan and don't feel compelled to re-read them.

comment by pdf23ds · 2009-10-03T21:39:53.461Z · LW(p) · GW(p)

Mightn't that just be because those courses are specifically to teach appreciation of those kinds of music?

Music theory, no, but the others, yes. (I wouldn't think music theory would increase classical appreciation more than other genres, though.)

There is very little net benefit to learning to appreciate a new kind of music

Disagree. Whatever the genre, more variety means listening is less tiring (because less monotonous) and, on the whole, more edifying. Each genre is enjoyed differently, and stimulates different parts of the mind. And in the specific case of classical music, on the theory that it is deeper and richer than other music (in the same way that set theory is deeper and richer than propositional logic, or Netflix is deeper and richer than Blockbuster) the limit of enjoyment is actually higher.

Replies from: Alicorn, anonym, gwern
comment by Alicorn · 2009-10-03T21:45:41.573Z · LW(p) · GW(p)

I took most of a year of AP music theory in high school (dropped out of it because I was being picked on) and never got the impression that we were learning about anything but archaic, old rules of music followed by dead composers. That, and how to take musical dictation, but none of the examples were contemporary. Was my music theory teacher just incompetent? Did I miss the generally applicable parts by leaving the class early?

And while having a variety of music is definitely good, there's plenty of variety within a genre! It doesn't seem obvious to me that you can get more valuable variety per ounce of effort by taking classes to learn to appreciate more genres than you can by spending time on Pandora.

Replies from: komponisto, Emily, pdf23ds
comment by komponisto · 2010-08-20T09:19:38.731Z · LW(p) · GW(p)

I don't know how I missed this comment at the time, but it demands a reply.

Although it's difficult for me to mentally organize such a reply, because I simultaneously believe all of the following:

  1. You probably were learning "old rules of music followed by dead composers."

  2. That doesn't constitute music theory.

  3. Your teacher's incompetence was likely not personal, but inherited from the discipline of "music theory" as a whole, which in my opinion has a far from satisfactory understanding of its own subject matter.

  4. However, your objections to the class you took are not necessarily related to this criticism of mine; in particular, the fact that that doesn't constitute music theory has nothing to do with whether certain composers happen to be dead or alive.

  5. There was probably considerable value in the curriculum they were trying to teach you, both for what it really was (music history; familiarization with art music as a pursuit, distinct from popular music) and as an indirect, nonexplicit (and frankly inefficient) way of teaching music theory (its traditional purpose).

  6. Hence your attitude is probably misguided, even though I wish you had been taught differently; in effect, you're "right for the wrong reasons".

  7. Musical dictation, though it may have seemed to you like merely one particular topic on the syllabus, really is the shibboleth for demonstrating understanding of music theory.

  8. You would not want examples from contemporary art music in your first introduction to musical dictation. The reason the examples were old is the same reason the examples in your math classes were old, rather than being drawn from contemporary journals.

  9. What you were really complaining about, probably, was that none of the examples were popular. (I really, really hate when people equate "contemporary" with "popular"!)

  10. However, as I mentioned above, it is among the purposes of such a course to familiarize the student with the pursuit of art music, as opposed to popular music (with which they are likely to be familiar already).

  11. In any case, conventional wisdom to the contrary, there aren't "separate magisteria" of music theory; the skill of musical dictation is what it is, and it doesn't matter exactly where the examples were drawn from, so long as they are of the appropriate level of complexity for the student's level. (Otherwise the theory being used is wrong.)

  12. I'm sorry that the course didn't communicate this to you, but there exists considerably more intellectual depth to the pursuit of music -- in particular art music -- than you are likely to have encountered just by living in the general culture, "spending time on Pandora", and the like.

  13. Please bear these points in mind -- in particular, the existence of people like me, who regard the creation of art music as an academic pursuit comparable in sophistication to science or philosophy -- when assessing the implications of your own experience in the domain of music.

Replies from: Morendil
comment by Morendil · 2010-08-20T09:22:29.457Z · LW(p) · GW(p)

What readings and activities would you recommend to someone interested in becoming able to compose music, as opposed to learning how to play any particular instrument?

Replies from: komponisto
comment by komponisto · 2010-08-20T09:33:59.890Z · LW(p) · GW(p)

Reading: An Introduction to Tonal Theory by Peter Westergaard.

Activity: Study scores. Copy them out by hand. (This is actually the traditional method of learning composition, believe it or not; it might be compared to tracing drawings in visual art). Make simplified versions (e.g. write out the "main line", then "main two lines", etc.). Make analyses of works as in Westergaard. And, above all: attempt to compose, and learn by trial and error.

Feel free to inquire further.

Replies from: Morendil
comment by Morendil · 2010-08-20T09:41:12.605Z · LW(p) · GW(p)

Thanks!

comment by Emily · 2009-10-03T23:10:39.080Z · LW(p) · GW(p)

Oh man I miss Pandora since they stopped streaming to the UK. :(

On topic: I had quite a few years of music lessons (though I wasn't really much good) and some musical theory, which I really enjoyed. And I do quite like listening to classical music in a vague sort of way, but I wouldn't say I have an "appreciation" for it: it's not as though I can pick out features or analyse it or anything. So am I appreciating it without a tuned ear, or am I just unaware of the work my bit of theoretical knowledge is doing behind the scenes?

Replies from: pdf23ds
comment by pdf23ds · 2009-10-03T23:15:10.087Z · LW(p) · GW(p)

I'd say appreciation is really just a synonym for enjoyment. You can be a world-class performer without knowing any theory at all.

Replies from: anonym
comment by anonym · 2009-10-04T07:47:16.674Z · LW(p) · GW(p)

Actually, I think appreciation and enjoyment are related but not synonyms. Enjoyment is visceral and emotional; it denotes the sheer pleasure of the experience. Appreciation implies recognition of the elements of the music, why those particular elements were chosen, how they might have been different, etc., as well as the extra enjoyment that comes about as a result of that appreciation. Not that I'm trying to say that's what everybody means by the terms, but that's how I think of them and how I've heard some other people talk about them.

comment by pdf23ds · 2009-10-03T22:39:36.913Z · LW(p) · GW(p)

My music theory course only had a slight emphasis on classical music. (Mainly because classical music is more analyzable with theory, I guess.) Probably your textbook was just old or inferior. But I got very little out of the course anyway.

I'm not suggesting that it's necessarily worth the effort to increase one's appreciation of classical music, given the opportunity cost. (I'm not exactly chomping at the bit to appreciate Ulysses or Gravity's Rainbow, or Hegel or Kant or Foucault or Derrida. Or wine, for that matter.) But the easiest way would probably be to pick a CD with some good classical music on it and listen to it many times through until you start to understand it musically. Courses are likely overkill. When I first started learning Bach (around the age of 10) it made no musical sense to me at all. I forget how long until I started to understand it, so I don't know how long you'd have to listen to start to get it. Maybe too long to bother.

there's plenty of variety within a genre!

Hmmmmmmmmmmm no. Doubt there's a good way to resolve this disagreement.

Replies from: anonym, Alicorn
comment by anonym · 2009-10-04T07:35:34.878Z · LW(p) · GW(p)

As for learning it when coming to it as an adult, I'd recommend resources like Leonard Bernstein's Young People's Concerts (and any of his many writings on music, such as The Joy of Music), as well as Aaron Copland's What to Listen for in Music and works of that nature.

The key point in my opinion is that you have to learn to hear more in the music, to be able to hear and follow the different voices in a fugue, or recognize the development of a theme in sonata allegro form, and this sort of ability only comes about through some offline study and intellectual training that is then applied when listening to music, at which point the knowledge really comes alive.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2009-10-06T00:28:47.618Z · LW(p) · GW(p)

I don't get much enjoyment in consciously recognizing long-horizon forms or themes, although I do enjoy many pieces that heavily rely on them, e.g. Liszt's Sonata in B minor.

I probably lack perspective since I was a decent classical pianist in my youth, but I don't feel like any formal study is necessary to get a full pleasure-soup response to great classical music. Also, tastes vary; I don't enjoy a lot of highly regarded classical music (but there are at least a few hundred hours that are really great for me). I doubt my favorite hundred hours are the same as anyone else's.

Giving up either all music before 1950 or all after would be easy; I'd keep the post-1950 music, since there are still many good contemporary "classical" composers crowded out by the higher-status old masters.

Replies from: anonym
comment by anonym · 2009-10-06T06:44:38.547Z · LW(p) · GW(p)

Well, I like Bach most of all, and I find that as I've learned to hear more, sometimes by following along with a score and doing a bit of musical analysis and kind of hacking away at the piano and butchering parts of the works, I enjoy them more and get more from them. Sometimes it seems that the enjoyment is proportional to how much of it I can keep in mind at the time as I'm listening, how much I can pay attention to (both of which are facilitated through knowledge of the work), as well as how much the distinction between me and the experience disappears and I lose the sense of being a person listening to the music. I enjoy having a kind of high-level cognitive/emotional/musical blueprint of the work as a whole and feeling how the moment relates to the whole, knowing where it's going and remembering where it came from. Just as we can have a greater appreciation for a moment or portion in the life of a historical figure or in a movie by thinking about the future consequences that we're aware of, I think we can have the same kinds of reactions to music where the pieces mean more as a whole than as just the sum of the parts, and on multiple levels. I don't hear that much in Bach relative to what a great and well-studied performer of Bach would, but I hear more than I used to, and far more than I did before I had studied music at all. I think it is probably a matter of personality though, and that many would find my approach would detract from their experience.

Replies from: DanArmak
comment by DanArmak · 2009-10-06T12:32:50.134Z · LW(p) · GW(p)

Puts me in mind of this quote from Pratchett's Soul Music:

Lord Vetinari, the supreme ruler of Ankh Morpork, rather liked music.

People wondered what sort of music would appeal to such a man.

Highly formalized chamber music, possibly, or thunder-and-lightning opera scores.

In fact the kind of music he really  liked  was the kind that never got

played. It ruined music, in his opinion, to torment it by involving it on dried skins, bits of dead cat and lumps of metal hammered into wires and tubes. It ought to stay written down, on the page, in rows of little dots and crotchets, all neatly caught between lines. Only there was it pure. It was when people started doing things with it that the rot set in. Much better to sit quietly in a room and read the sheets, with nothing between yourself and the mind of the composer but a scribble of ink. Having it played by sweaty fat men and people with hair in their ears and spit dribbling out of the end of their oboe ... well, the idea made him shudder.

[...] Then he picked up the third movement of Fondel's Prelude in G Major and settled back to read.

I wonder if anyone does this with music they've never heard played before?

Replies from: anonym, CronoDAS, pdf23ds
comment by anonym · 2009-10-07T04:58:12.083Z · LW(p) · GW(p)

Glenn Gould was said to sometimes analyze and completely memorize works from the sheet music alone before playing them at the piano. His father recalled an instance of him learning an entire concerto from the score alone and then playing it from memory the first time, and Bruno Monsaingeon saw him play an entire movement from memory of a Mendelssohn string quartet after hearing it once on the radio (quoted in Bazzana's book on Gould).

That pales in comparison though to the fourteen-year-old Mozart transcribing from memory after once hearing the secret Miserere of Allegri, a dense polyphonic work that was performed only in the Sistine Chapel and was forbidden by the Vatican to be transcribed or reproduced under penalty of excommunication.

comment by CronoDAS · 2009-10-07T17:59:07.093Z · LW(p) · GW(p)

I sometimes found piles of sheet music sitting around in my high school's music room, and I'd read them once in a while.

comment by pdf23ds · 2009-10-07T05:48:02.909Z · LW(p) · GW(p)

I'm by no means an expert at the piano, but I'm probably halfway there, and I can without too much trouble get the general gist of complex unfamiliar piano music, and I can easily read simple music. I'd say it's pretty much analogous to the ability of expert chess players to play blindfolded, which is definitely a well-attested ability. (The record for simultaneous blindfold matches is around 50, played by Janos Flesch in Budapest in 1960.)

comment by Alicorn · 2009-10-03T23:23:45.336Z · LW(p) · GW(p)

Perhaps there are some genres with more or less variety than others? Or we're counting genres differently?

comment by anonym · 2009-10-04T07:17:44.961Z · LW(p) · GW(p)

I think music theory -- including ear training -- would disproportionately increase classical appreciation (but would improve appreciation for other forms too). The reason is simple: classical music is more complex musically, so it rewards a more discriminating ear and a richer sense of harmony, counterpoint, etc.

There's a lot of popular music that I love and think is very interesting musically, harmonically, etc., but classical music is usually so much deeper and requires much more skill and musical knowledge to create (and also to appreciate). If you want to succeed in the classical world, as a performer or composer, you have to start by the age of 6, you have to be supremely talented, you have to work obsessively until you are accepted into a good conservatory, and then work even harder still. Your entire life is basically nothing but music from a very early age. That was true of Bach, Mozart, Beethoven, etc., and it is still true today. The situation with popular music is completely different. You can pick it up as an adult, and if you're talented, you might still have a successful career. You can pick it up as a teenager, and within a few years have developed enough musically to be on par with almost any other popular band. It seems pretty clear that something that takes decades of study and practice (and involves study of hundreds of years of music history) is going to involve more skill (and make more use of skills acquired) than something that can be achieved in years of study and practice, and when the composer is relying on decades of study and their intimate knowledge of hundreds of years of changes in music theory, counterpoint, etc., it is definitely going to take some work on the part of a listener to do more than skim the surface in terms of enjoyment and appreciation.

comment by gwern · 2010-08-20T07:02:50.503Z · LW(p) · GW(p)

And in the specific case of classical music, on the theory that it is deeper and richer than other music (in the same way that set theory is deeper and richer than propositional logic, or Netflix is deeper and richer than Blockbuster) the limit of enjoyment is actually higher.

This sounds like a really terrible analogy; if anything, it ought to prove the opposite, that rap could beat classical since rap has access to all the instruments and styles classical does (including, let's not forget, the human voice which modern classical usually shuns) and much more (electronica?). So rap is more general than classical like set theory is more general than propositional logic. Or something.

Replies from: komponisto
comment by komponisto · 2010-08-20T08:27:21.612Z · LW(p) · GW(p)

the human voice which modern classical usually shuns

What on Earth are you talking about? Seriously, what data is generating this impression in your mind?

rap has access to...more (electronica?)

Same question. Where did you get the impression that electronic media aren't ...central to the recent history of art music?

The fundamental difference between art music and popular music (including rap) is nothing so superficial as instrumentation. It's complexity of musical structure. Art music is structurally more complex than popular music; just as "art"/"research" X is structurally more complex than "popular" X, for all X.

(By the way, that's different from what makes something art music vs. popular music. That has to do with memetic lineage. An individual work of art music could happen to be less complex than an individual work of popular music. But the differing lineages differ statistically as described here.)

Replies from: gwern
comment by gwern · 2010-08-20T09:05:26.035Z · LW(p) · GW(p)

Seriously, what data is generating this impression in your mind?

Sunday Baroque on NPR.

Is... is this not a good source of modern preferences in classical music?

Also, I feel that we are arguing in different ways about complexity. I'm thinking in terms of total possibilities (e.g. there are only so many 5-minute pieces expressible with 88 keys or whatever), but you seem to have some sort of entropy measure in mind.

Replies from: komponisto
comment by komponisto · 2010-08-20T09:46:50.633Z · LW(p) · GW(p)

Sunday Baroque on NPR.

Is... is this not a good source of modern preferences in classical music?

I completely misunderstood you. I thought you were talking about actual modern music, not modern preferences in Baroque music. (You did say "modern classical".)

For the record, there is an abundance of vocal music from the Baroque period; I don't know how much of it is played on NPR.

Also, I feel that we are arguing in different ways about complexity. I'm thinking in terms of total possibilities (e.g. there are only so many 5-minute pieces expressible with 88 keys or whatever), but you seem to have some sort of entropy measure in mind.

Yes on the latter point. As for the former, I don't understand what you mean. Are you saying something like "there are fewer possible art compositions than rap songs, because art music is limited to 5-minute piano pieces"? That would be absurd, but I can't come up with another meaning.

Replies from: gwern
comment by gwern · 2010-08-20T10:20:09.189Z · LW(p) · GW(p)

As for the former, I don't understand what you mean. Are you saying something like "there are fewer possible art compositions than rap songs, because art music is limited to 5-minute piano pieces"? That would be absurd, but I can't come up with another meaning.

What is absurd about it? It seems pretty apparent to me that rap can generate nearly-arbitrary sounds within the space of humanly-perceivable sounds, while Baroque/classical/classical-style music is limited to the smaller set of what pianos & flutes & etc. can generate.

Replies from: komponisto
comment by Alicorn · 2009-10-01T20:21:54.375Z · LW(p) · GW(p)

This is obviously a matter of taste. I really like Ode to Joy, but that's the only old music that has a ghost of a chance of competing for my affections on a par with my favorite show tunes or other more recent selections. If you like a lot of old music and not a lot of new music, it just means that you either a) have common tastes with people who were rich music patrons in the Golden Age of your choice, or b) are succumbing to some signaling effect having to do with the perceived absolute quality of old dead white musicians' work. If there is something like objective musical quality out there (which is a matter of open debate in aesthetics), it's probably very fuzzy. Maybe Ode to Joy is objectively better than Sk8er Boi, but the jury is out and they don't seem inclined to come back soon.

Replies from: dclayh
comment by dclayh · 2009-10-01T20:29:31.905Z · LW(p) · GW(p)

Obviously it's a matter of taste, yes. (And I do think about the signaling effects of my musical tastes from time to time; it is rather an interesting topic.) I was only putting forth my "no good music has been written since the death of Gershwin"* opinion to contrast with taw's "no good music was written before 1975" opinion, in order to produce a synthesis that would support gwern's original contention that enough art now exists that we needn't subsidize more of it.

*not actually my opinion, but close

comment by billswift · 2009-10-02T14:59:18.552Z · LW(p) · GW(p)

I really like Mozart, but I like a lot of techno, and some industrial and goth bands just about as much (it depends a bit on my mood). And for that matter I like a lot of 1940s and 50s big band music, country and western, and classic rock, when I'm in the right mood.

comment by taw · 2009-10-01T20:26:52.644Z · LW(p) · GW(p)

For me, and as far as I can tell for the vast majority of other people, they're just not terribly enjoyable.

comment by gwern · 2010-08-20T07:08:49.286Z · LW(p) · GW(p)

As Technologos points out, the number of movies made per year seems to have increased considerably, so the fraction of good movies made could have dropped while your numbers remain accurate. (E.g. the 1930s saw 15, so 15 * 3 = 45, not too far from the 2000s's 56.)

Replies from: taw
comment by taw · 2010-08-20T09:07:38.028Z · LW(p) · GW(p)

Average doesn't seem important at all. Also systemic bias - would you seriously argue that if a top-rated movie from the 1930s came out today (with just refurbished technology and such trivia) it would still be a hit? I find this nearly impossible to believe.

Replies from: gwern
comment by gwern · 2010-08-20T09:26:37.101Z · LW(p) · GW(p)

A dropping average suggests (massively) diminishing returns.

And as far as remakes and sequels go? Well, you tell me...

Replies from: taw
comment by taw · 2010-08-20T09:30:37.626Z · LW(p) · GW(p)

I doubt the computational power of an average chip is much higher than in the 1970s. The ones at the top are ridiculously better, but at the same time we had an explosion in the number of really simple chips, so quite likely the average isn't much better. Or at least the median isn't much better. Does it imply lack of progress? (Don't try to find numbers, I might very well be proven wrong; it's just a hypothetical scenario.)

Replies from: gwern
comment by gwern · 2010-08-20T09:45:02.602Z · LW(p) · GW(p)

I think that analogy would be more insightful if you replaced the entries with 'supercomputers' and 'the TOP500'.

comment by CronoDAS · 2009-10-17T07:46:23.058Z · LW(p) · GW(p)

I recently thought of something else related to why one would prefer a "new" book to an old one. There's a certain suspense involved in reading a work in progress. Waiting for the next installment, making guesses at what's going to happen next, discussing your theories with your friends who are all at the same place in the story as you are, and so on, are all things that rarely, if ever, happen with old stories as intensely as they do with new stories. A message board I used to frequent had an extremely long-running discussion of Stephen King's "The Dark Tower" series that died shortly after the final book was published.

In other words, with new stories, you can give someone something to anticipate. Old stories tend to be well-known to the point where everybody already knows what happens, and the anticipation only lasts as long as it takes you to get from the beginning to the end.

Replies from: gwern, eirenicon
comment by gwern · 2009-10-18T18:48:22.514Z · LW(p) · GW(p)

A message board I used to frequent had an extremely long-running discussion of Stephen King's "The Dark Tower" series that died shortly after the final book was published.

Well, being a (former) Dark Tower fan myself, I think that's not necessarily related to the bald fact that the series ended so much as how it ended...

Waiting for the next installment, making guesses at what's going to happen next, discussing your theories with your friends who are all at the same place in the story as you are, and so on, are all things that rarely, if ever, happen with old stories as intensely as they do with new stories.

How much of this, do you think, is due simply to the fact that everyone is coordinated & equally ignorant due to sheer temporal necessity, and how much to the actual 'new' nature of releases?

I remember as a child I loved The Wizard of Oz, but I hadn't the slightest idea that there were sequels. One day, browsing through the very disorganized school library, I found one. I was shocked, and from then on, every few weeks or months as I rummaged, I would find another one. I recall being as thrilled to find one (though out of order) as I think I would have if they were freshly released & bought by the librarian, though they were, gosh, at least 80 years old by this point?

Replies from: CronoDAS
comment by CronoDAS · 2009-10-18T20:32:26.261Z · LW(p) · GW(p)

Well, being a (former) Dark Tower fan myself, I think that's not necessarily related to the bald fact that the series ended so much as how it ended...

I haven't seen people talking about the new Battlestar Galactica series after it ended, either. Often, once "the answer" exists, people stop wondering what it is.

How much of this, do you think, is due simply to the fact that everyone is coordinated & equally ignorant due to sheer temporal necessity, and how much to the actual 'new' nature of releases?

Yeah, I think that's what I'm getting at - you almost never get that kind of coordination when it comes to "old" works.

Replies from: Douglas_Knight, gwern
comment by Douglas_Knight · 2009-10-19T00:19:50.075Z · LW(p) · GW(p)

Yeah, I think that's what I'm getting at - you almost never get that kind of coordination when it comes to "old" works.

I don't think most people care so much about the suspense and discussing the next episode. People do discuss one-shot movies. But it's important that they all watch them at the same time, so that they can time the discussion. Before about 1970, movies were re-released in theaters, and I think this was adequate coordination. I'm not sure why it stopped. VCRs are an obvious answer, but I think re-releases stopped rather earlier. And movies get remade today, which I think is greatly inferior to re-release.

comment by gwern · 2009-10-18T23:15:31.155Z · LW(p) · GW(p)

I haven't seen people talking about the new Battlestar Galactica series after it ended, either. Often, once "the answer" exists, people stop wondering what it is.

This point is surely correct, but you again pick an unfortunate example - I've heard the ending of BSG was even worse than DT's...

you almost never get that kind of coordination when it comes to "old" works.

Which is interesting, since there's nothing stopping a group from reading each & every book only after a set period, thereby reaping the same gains but without issues like, I dunno, the author dying after 20 years & leaving it incomplete. (cough Wheel of Time cough)

The fact that people never do this, even in private, but rather prefer to tear through the entire series at once, suggests to me that this communality isn't worth much. (Aren't book clubs famous for falling apart after a little while?)

Perhaps the fans are just distracting themselves from the agony of waiting for something they love so much & killing time; I knew, before & during the prequels, more than one Star Wars fan who just tried to ignore anything they saw related to SW so they couldn't be bothered by the multi-year waits (out of sight, out of mind...) - they felt the itch you get when pausing a movie or show, or stopping in the middle of a book, but this itch would last for more than just a few minutes.

Replies from: CronoDAS
comment by CronoDAS · 2009-10-19T09:30:22.681Z · LW(p) · GW(p)

This point is surely correct, but you again pick an unfortunate example - I've heard the ending of BSG was even worse than DT's...

Yeah... Maybe Harry Potter is a better one?

(cough Wheel of Time cough)

Brandon Sanderson is finishing up the series based on Jordan's notes and other unpublished information he left behind.

comment by eirenicon · 2009-10-19T13:14:13.947Z · LW(p) · GW(p)

Thanks, it's been a while since I wasted a whole morning on TvTropes. Please link responsibly, people!

Replies from: CronoDAS
comment by CronoDAS · 2009-10-19T19:57:59.689Z · LW(p) · GW(p)

You're welcome.

comment by thomblake · 2009-10-01T14:28:59.821Z · LW(p) · GW(p)

You should be able to just copy and paste the HTML version into the WYSIWYG editor and it will magic something for you.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-10-01T14:45:42.838Z · LW(p) · GW(p)

There is a button in the editor that allows you to enter raw HTML (and it should be easy to construct a regex script to get whatever).

Replies from: gwern
comment by gwern · 2009-10-07T23:29:47.080Z · LW(p) · GW(p)

Hm. I'll try that and pasting next time I feel like a top-post, then.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-10-30T16:49:51.100Z · LW(p) · GW(p)

This is just a comment I can edit to let people elsewhere on the Net know that I am the real Eliezer Yudkowsky.

10/30/09: Ari N. Schulman: You are not being hoaxed.

Replies from: Cyan
comment by Cyan · 2009-10-30T17:02:08.372Z · LW(p) · GW(p)

I'm Spartacus!

comment by Cyan · 2009-10-11T23:47:59.544Z · LW(p) · GW(p)

For them's what are following LW comments but not current OB activity, Eliezer and Robin are getting into it about the necessity of Friendliness in future agents of superhuman intelligence right now.

Replies from: Bo102010
comment by Bo102010 · 2009-10-12T00:09:38.230Z · LW(p) · GW(p)

Morpheus is fighting Neo!

comment by ZoneSeek · 2009-10-05T05:25:55.670Z · LW(p) · GW(p)

Dual n-back is a game that's supposed to increase your IQ up to 40%. http://en.wikipedia.org/wiki/Dual_n_back#Dual_n-back

Some think the effect is temporary; long-term studies are underway. Still, I wouldn't mind having to practice periodically. I've been at it for a few days, and might retry the Mensa test in a while. (I washed out at 113 a few years ago.) Download link: http://brainworkshop.sourceforge.net/

It seems to make sense. Instead of getting a faster CPU, a cheap and easy fix is to get more RAM. In a brain analogy, I've often thought of the "magic number seven": isn't there any way to up that number, to have more working memory? Nicholas Negroponte said something like "Perspective is worth 50 IQ points." I think that's a scope fail, but good perspective, being able to hold more of the problem in your head, might be worth about 30 IQ points.
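
For anyone who hasn't seen the task itself: each round presents two simultaneous stimuli -- a grid position and a spoken letter -- and the player responds whenever the current stimulus in a stream matches the one from n rounds earlier. A toy sketch of just that matching rule, in Python (the 3x3 grid and the letter set are assumed defaults for illustration, not Brain Workshop's actual code):

    import random
    from collections import deque

    def dual_n_back_targets(n=2, rounds=30, seed=0):
        """Generate one session's stimuli and the correct match responses."""
        rng = random.Random(seed)
        recent_pos = deque(maxlen=n + 1)   # last n+1 grid positions
        recent_snd = deque(maxlen=n + 1)   # last n+1 spoken letters
        targets = []
        for t in range(rounds):
            pos = rng.randrange(9)          # cell of a 3x3 grid (visual stream)
            snd = rng.choice("CHKLQRST")    # spoken letter (audio stream)
            recent_pos.append(pos)
            recent_snd.append(snd)
            if t >= n:
                # Correct response: signal "position match" and/or "sound match"
                # exactly when the current stimulus equals the one n rounds back.
                targets.append((recent_pos[0] == pos, recent_snd[0] == snd))
        return targets

    matches = dual_n_back_targets()
    print(sum(p for p, s in matches), "position matches and",
          sum(s for p, s in matches), "sound matches in", len(matches), "scored rounds")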

Replies from: gwern, gwern
comment by gwern · 2011-07-02T20:30:29.295Z · LW(p) · GW(p)

So it's been almost 2 years. Have you taken any IQ tests after practicing?

Replies from: ZoneSeek
comment by ZoneSeek · 2011-12-13T06:12:58.900Z · LW(p) · GW(p)

Sorry, hiatus. No, I haven't been tested recently, and I've slacked off on the DNB; it starts to feel monotonous and frustrating, and I couldn't break through D3B. I'll try and pick it up again when I figure out how to get it to work on Ubuntu.

Replies from: gwern
comment by gwern · 2012-09-18T00:04:12.987Z · LW(p) · GW(p)

Any progress since? (It seems to work fine for me on Debian.)

Replies from: ZoneSeek
comment by ZoneSeek · 2012-10-01T22:23:30.400Z · LW(p) · GW(p)

Took a crack at it again, just now worked out how to change directories in a terminal.

comment by gwern · 2009-12-20T03:43:06.607Z · LW(p) · GW(p)

To shill my DNB FAQ: http://www.gwern.net/N-back%20FAQ

As to temporary: if it's temporary, it's a very long temporary. From personal experience it takes months for my scores to begin to decay more than a few percent, and other people have reported scores unaffected by breaks of weeks or months as well.

The more serious concern, for people who want big boosts, is this: looking over the multiple before-and-after IQ reports I've collated, I have 2 general impressions -- that DNB helps you think quicker, but not better, and that the benefit is limited to around +10-15 points max.

(On a personal note, ZoneSeek, if after a few weeks or months of N-backing you've risen at least 4 levels and you retake the Mensa test, I would be quite interested to know what your new score is.)

comment by DonGeddis · 2009-10-02T20:07:19.189Z · LW(p) · GW(p)

Eliezer and Robin argue passionately for cryonics. Whatever you might think of the chances of some future civilization having the technical ability, the wealth, and the desire to revive each of us -- and how that compares to the current cost of signing up -- one thing that needs to be considered is whether your head will actually make it to that future time.

Ted Williams seems to be having a tough time of it.

Replies from: AngryParsley, AngryParsley
comment by AngryParsley · 2009-10-07T21:58:38.648Z · LW(p) · GW(p)

Alcor has posted a response to Larry Johnson's allegations.

comment by AngryParsley · 2009-10-02T20:39:58.681Z · LW(p) · GW(p)

I'm not sure what to think of Larry Johnson. Some of his claims are normal parts of Alcor's cryopreservation process, but dressed up to sound bad to the layperson. Other parts just seem so outrageous. A monkey wrench? An empty tuna can? Really? He claims that conditions were terrible, which is also unlikely. Alcor is a business and gets inspected by OSHA, the fire department, etc. They even offer free tours to the public. If conditions were so terrible, you'd think they'd have some environmental or safety violations. At the very least, some people who toured the facility would speak up.

The article also claims that Ted Williams was cryopreserved against his will, which is almost certainly not true. Alcor requires that you sign and notarize a last will and testament with two witnesses who are not relatives.

comment by Vladimir_Nesov · 2009-10-15T20:32:35.667Z · LW(p) · GW(p)

Henry Markram's recent TED talk on cortical column simulation. Features philosophical drivel of appalling incoherence.

Replies from: mormon2, timtyler
comment by mormon2 · 2009-10-17T08:30:22.433Z · LW(p) · GW(p)

True, but the Blue Brain project is still very interesting, and it has provided, and hopefully will continue to provide, interesting results. Whether you agree with his theory or not, the technical side of what they are doing is very interesting.

comment by timtyler · 2009-10-15T20:58:39.336Z · LW(p) · GW(p)

Yes - this talk is truly appalling.

comment by billswift · 2009-10-11T15:51:27.363Z · LW(p) · GW(p)

We need a snappy name like "analysis paralysis" that is focused on people who spend all their time studying rather than doing. They (we) intend to do, but never feel like they know enough to start.

comment by Nubulous · 2009-10-11T06:25:31.740Z · LW(p) · GW(p)

I came up with the following while pondering the various probability puzzles of recent weeks, and I found it clarified some of my confusion about the issues, so I thought I'd post it here to see if anyone else liked it:

Consider an experiment in which we toss a coin to choose whether a person is placed into a one-room hotel or duplicated and placed into a two-room hotel. For each resulting instance of the person, we repeat the procedure. And so forth, repeatedly. The graph of this would be a tree in which the persons were edges and the hotels nodes. Each layer of the tree (each generation) would have equal numbers of 1-nodes and 2-nodes (on average, when numerous). So each layer would have 1.5 times as many outgoing edges as incoming, with 2/3 of the outgoing being from 2-nodes.

If we pick a path away from the root, representing the person's future, in each layer we are going to have an even chance of arriving at a 1-node or a 2-node, so our future will contain equal numbers of 1- and 2-hotels. If we pick a path towards the root, representing the person's past, in each layer we have a 2/3 chance of arriving at a 2-node, meaning that our past contained twice as many 2-hotels as 1-hotels.
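
A quick simulation can sanity-check both claims -- a minimal sketch, assuming a fair coin throughout and checking the backward question only for the final layer (every layer is statistically the same); it should print roughly 0.5 for the forward fraction and roughly 0.667 for the backward one:

    import random

    def hotel_puzzle(generations=15, seed_population=1000, path_flips=100000):
        """Check the forward (~1/2) and backward (~2/3) fractions of 2-hotels."""
        # Backward question: of the person-instances alive after `generations`
        # steps, what fraction just emerged from a 2-room hotel? Each instance
        # flips a fair coin: heads -> a 1-room hotel (1 instance comes back out),
        # tails -> a 2-room hotel (2 instances come out).
        population = seed_population
        from_one = from_two = 0
        for _ in range(generations):
            twos = sum(random.random() < 0.5 for _ in range(population))
            ones = population - twos
            from_one, from_two = ones, 2 * twos
            population = from_one + from_two
        backward_frac = from_two / (from_one + from_two)

        # Forward question: follow a single randomly chosen path into the future;
        # each hotel along the way is decided by a fresh fair coin flip.
        forward_frac = sum(random.random() < 0.5 for _ in range(path_flips)) / path_flips
        return forward_frac, backward_frac

    forward, backward = hotel_puzzle()
    print("forward fraction of 2-hotels:  %.3f (expect ~0.5)" % forward)
    print("backward fraction of 2-hotels: %.3f (expect ~0.667)" % backward)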

comment by PeerInfinity · 2009-10-06T03:55:05.139Z · LW(p) · GW(p)

I recently realized that I don't remember seeing any LW posts questioning if it's ever rational to give up on getting better at rationality, or at least on one aspect of rationality that a person is just having too much trouble with.

There have been posts questioning the value of x-rationality, and posts examining the possibility of deliberately being irrational, but I don't remember seeing any posts examining if it's ever best to just give up and stop trying to learn a particular skill of rationality.

For example, someone who is extremely risk-averse, and experiences severe psychological discomfort in situations involving risk, and who has spent years trying to overcome this problem with no success. Should this person keep trying to overcome the risk aversion, or just give up and never leave their comfort zone, focusing instead on strategies for avoiding situations involving risk?

yes, the "someone" I mention above is myself.

and yes, I am asking this hoping that the answer gives me an excuse to be lazy.

Replies from: PeerInfinity, AdeleneDawner, Jack, tut
comment by PeerInfinity · 2009-10-07T17:23:36.664Z · LW(p) · GW(p)

I'm surprised that no one has given the obvious answer yet, which is:

If overcoming the problem really is hopeless, then give up and focus on more productive things, otherwise keep trying.

If it isn't obvious whether it's hopeless or not, then do a more detailed cost/benefit analysis.

Still, I don't remember seeing any LW post that even mentioned that sometimes giving up is an acceptable option. Or maybe I just forgot, or didn't notice.

Replies from: CronoDAS, Jack, pdf23ds, Dagon
comment by CronoDAS · 2009-10-07T17:56:59.962Z · LW(p) · GW(p)

http://lesswrong.com/lw/gx/just_lose_hope_already/ ?

Replies from: PeerInfinity
comment by PeerInfinity · 2009-10-07T18:10:56.025Z · LW(p) · GW(p)

Yes, that link is relevant and helpful, thanks.

It's not specifically about giving up on overcoming a particular irrational behaviour, but I guess the same advice applies.

comment by Jack · 2009-10-07T19:58:16.528Z · LW(p) · GW(p)

This is random and, for all sorts of reasons, possibly a bad idea - but have you ever thought about anti-anxiety medication? It might have side effects that turn you off of it, but it could help you deal with high-risk situations.

(I should disclaim: I'm not a doctor, my knowledge doesn't extend past personal experience and a cog sci minor. Obviously, not medical advice, etc.)

comment by pdf23ds · 2009-10-07T19:21:18.586Z · LW(p) · GW(p)

I personally didn't suggest it because it seemed like it's obvious to you, so the only interesting response would be to deny it for some good reason.

I would note that you shouldn't give up permanently. Maybe wait a year or a few, then see if you've grown in other ways that would make a new attempt more fruitful.

Replies from: PeerInfinity
comment by PeerInfinity · 2009-10-07T19:50:41.174Z · LW(p) · GW(p)

upvoted. good advice. thanks.

comment by Dagon · 2009-10-07T20:41:26.300Z · LW(p) · GW(p)

It's been hinted at a few times, usually in terms of "how to pick goals" rather than "when to give up on goals". AFAIK, never a top-level post of "maybe you should give up and do something easier and/or more productive toward other goals". I think it'd be valuable.

comment by AdeleneDawner · 2009-10-07T05:59:37.453Z · LW(p) · GW(p)

I was hoping this would get more of a response - Peer and I have spent a considerable bit of time talking about this, and it's gotten to the point where other perspectives would be useful.

My opinion is that it is, at a minimum, appropriate for someone in Peer's situation to accept the fact that they are nearly guaranteed to be overwhelmed by emotion, to the point of becoming dangerously irrational, in certain situations, and to take that fact into account in deciding what problems to try to tackle. And, I see it as irrational to feel guilty or panicky about not being able to do more.

Part of the problem, though, is that the risky situations Peer mentioned are SIAI-related, and he seems to see doing anything less than his theoretical best (without taking psychological issues into account) in that context as not just lazy but immoral in some sense.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-10-07T13:26:23.056Z · LW(p) · GW(p)

Peer's comment is too vague and general for any meaningful response, and your comment doesn't add clarity ("Risky situations Peer mentioned are SIAI-related"?).

"Risk aversion"? In one interpretation it's a perfectly valid aspect of preference, not something that needs overcoming. For example, one can value sure-thing $1000 more than 11% probability at $10000.

Replies from: AdeleneDawner
comment by AdeleneDawner · 2009-10-07T14:01:47.997Z · LW(p) · GW(p)

I'm trying not to say anything here that's more Peer's business than mine, so I don't want to use real examples, and I'm not certain enough that I know the details of what's going on in Peer's head to make up examples, but it doesn't appear to be risk-aversion by that definition that's the problem. It's that when he's in what appears to him to be a high-stakes situation (and 'what appears to him to be' is very relevant there - this isn't a calculated response as far as I can tell, and being told by, for example, Michael Vassar that the risk in some situation is worth the reward is nearly useless), he panics, and winds up doing things that make the issue worse in some way - usually in the form of wasting a lot of energy by going around in circles and then eventually backing out of dealing with the situation at all.

Replies from: CronoDAS, PeerInfinity
comment by CronoDAS · 2009-10-07T18:00:38.294Z · LW(p) · GW(p)

Is this what's referred to as "choking under pressure"?

Replies from: PeerInfinity
comment by PeerInfinity · 2009-10-07T18:31:06.206Z · LW(p) · GW(p)

Yes, that seems like a reasonably accurate summary.

comment by PeerInfinity · 2009-10-07T15:08:08.336Z · LW(p) · GW(p)

Everything Adelene has said so far is accurate.

Sorry, but I still haven't thought of a good example that wouldn't take too long to explain.

Another topic that Ade and I have been discussing is the difference between my idealized utility function (in which a major component is "maximize the probability that the Singularity turns out okay"), and whatever it is that actually controls my decisions (in which a major component is "avoid situations where my actions have a significant probability of making things worse")

(I think there was at least one LW post on the topic of the difference between these two utility functions, but I didn't find them after a quick search.)

So to answer Vladimir's question, in my idealized utility function, certainty is not inherently valuable, and I know that when faced with a choice between certainty and uncertainty, I should shut up and multiply. However, my actual utility function has a paralyzing inability to deal with uncertainty.

Other relevant details are:

* severe underconfidence

* lack of experience, common sense, and general sanity

* fear of responsibility

* an inability to deal with (what appear to be) high-stakes situations. A risk of losing $1000 is already enough to qualify as "paralyzingly high stakes".

Replies from: CronoDAS
comment by CronoDAS · 2009-10-07T18:58:37.852Z · LW(p) · GW(p)

severe underconfidence

fear of responsibility

an inability to deal with (what appear to be) high-stakes situations. A risk of losing $1000 is already enough to qualify as "paralyzingly high stakes".

Hmmm... Yeah, anxiety sucks.

You know, physiologically, fear and excitement are very similar. My Psychology 101 textbook mentioned an experiment in which experimental subjects who met a young woman in a situation where the environment was scary (a narrow bridge over a deep chasm) reported her as being more attractive than subjects who met her in a neutral setting. Many people are afraid of public speaking or otherwise performing before an audience. I'm something of an exception, because I find it exciting instead of scary. Maybe some practice at turning fear into excitement could help? I don't know exactly how to do that, but you could try watching scary movies, or riding roller coasters, or playing games competitively, or something like that.

Also, perhaps another possible way to deal is to not care as much about the outcome? Always look on the bright side of life, and all that. Maybe I've just read too much fiction and played too many video games, but it seems like things usually do tend to work out okay. After all, humanity did survive the Cold War without blowing itself up. I don't know how to do this, but if you think you could try to take a more abstract and less personal perspective on whatever is scaring you, it might help.

Replies from: AdeleneDawner
comment by AdeleneDawner · 2009-10-08T01:55:20.653Z · LW(p) · GW(p)

CronoDAS, are you enjoying being useful in this context? Is it more fun than video games? If so, that's important information. Note it.

Replies from: CronoDAS
comment by CronoDAS · 2009-10-08T06:37:45.139Z · LW(p) · GW(p)

Well, one way I do nothing is by reading LessWrong and other blogs, and posting comments. I tend to be hesitant to give authoritative advice about dealing with personal issues, as I'm probably more screwed up than average, but I can still make suggestions. I find it hard to imagine myself as a counselor of any kind, though.

As for "better than video games", sometimes yes, sometimes no. It depends a lot on the particular video game.

Replies from: Vladimir_Nesov, AdeleneDawner
comment by Vladimir_Nesov · 2009-10-08T15:46:38.144Z · LW(p) · GW(p)

Well, one way I do nothing is by reading LessWrong and other blogs, and posting comments.

I feel it's a curiosity stopper to think of browsing the Internet as "doing nothing". You learn, you communicate, you help, you signal your expertise. Find a better understanding of the gist of your motivation and turn it into a sustainable plan for driving your day-to-day activity (in particular for making some money).

Replies from: CronoDAS
comment by CronoDAS · 2009-10-09T08:24:28.543Z · LW(p) · GW(p)

It's not so much "doing nothing" as "something I do for no other reason than it's become part of my standard routine". I think I've become very much driven by habit; I have a tendency to keep playing a video game even after I've decided I don't like it very much and have plenty of others I could be playing.

Replies from: pdf23ds
comment by pdf23ds · 2009-10-09T15:22:35.221Z · LW(p) · GW(p)

Sometimes I play through my videogames repeatedly trying to set time records. (OK, I've only really done that once, for a couple weeks.)

comment by AdeleneDawner · 2009-10-08T13:45:06.248Z · LW(p) · GW(p)

To quote a friend of mine, 'it's pointless to doubt yourself. It only reduces what you can do.'

My meta-suggestion is to find things that you enjoy or care about (not the same thing) enough to put effort into handling them better. Giving advice in general doesn't seem to fall into that category - I don't remember seeing you do it regularly, which is the only measure I really have access to - but you seemed pretty engaged in this case, so there may be an aspect of this situation that you care about more than you would care about a run-of-the-mill situation. If there is, and if you can figure out what it is, you can use that information to find more things of that type, which is likely to be useful - you run into that 'having something to protect' effect.

comment by Jack · 2009-10-07T07:54:31.826Z · LW(p) · GW(p)

Well, given that I don't know what you've actually tried, it's hard to say whether I think you've exhausted your options (though it sounds like this sort of thing might be best served by professional therapy). But sure, if the situation is really that bleak (assuming you have outside confirmation of this), then yeah, give it up. Work on something else. Does your psychological discomfort come with any risk? Or just when particular kinds of things are at risk?

Relatedly, has anyone thought about the relationship between rationality and psychotherapy? It just occurred to me that there might be a lot there.

Replies from: wedrifid, pdf23ds
comment by wedrifid · 2009-10-07T09:12:39.865Z · LW(p) · GW(p)

Relatedly, has anyone thought about the relationship between rationality and psychotherapy? It just occurred to me that there might be a lot there.

It puts the 'R' in REBT.

comment by pdf23ds · 2009-10-07T08:14:36.214Z · LW(p) · GW(p)

Relatedly, has anyone thought about the relationship between rationality and psychotherapy? It just occurred to me that there might be a lot there.

Huh? You mean, like, psychotherapists are unusually irrational people? Or maybe that no rationalist would give any significant credence to any of the clinical psychology theory? Or maybe that a good rationalist will rarely need psychotherapy because their deduction skills are much better than most therapists? Please explain.

To be less snide, I find it quite unlikely that therapy would help PI significantly. (Of course, I know little of his/her specific circumstances.) I think a more fruitful course of action, if PI does want to overcome the problem*, would be to keep trying to overcome it directly, and meanwhile continue to form new, free relationships with a variety of trusted people and see if they can help at all by providing insight or emotional support. Social networks are better than the yellow pages at finding people with relevant insights. And good friends are better than good therapists at emotional support.

* Which isn't to say that PI should keep trying.

Replies from: Jack
comment by Jack · 2009-10-07T17:03:36.175Z · LW(p) · GW(p)

It is possible that therapy isn't usually cost-effective, but I don't know of any study which suggests the therapist market is uniquely distorted. People pay a lot of money for a good therapist, and therapists build their practice by way of referrals. I don't think I have to endorse Freudian psychoanalysis in order to think that talking to an experienced stranger about your problems might be helpful in ways that talking to friends wouldn't be. I don't know the details of Peer's problem (and sorry, Peer, for hijacking this), but his risk aversion might extend to fear of losing social capital and being embarrassed. If that's the case, telling him to go make more friends and tell them about his problems seems to miss the point.

What I meant by a relationship between rationality and psychotherapy is that therapy often involves getting people to be happier by having them behave more rationally. It seems to me that some of the methods and ideas discussed and used here could bear on therapeutic practice. Frankly, better than talking to friends for free (therapy from people you have other relationships with is always going to be more complicated, since there are all sorts of signaling and status issues that will get in the way of an honest dialog) would be talking to rationalist strangers for free. I imagine the Bayesian cult leaders of Eliezer's fiction could charge a nice fee for talking to people and helping them make life decisions free from bias and overcome akrasia. We've all recognized that a lot of the material that gets discussed here looks like less useless self-help. To me, that means that this material might also be less useless other-help.

I sort of doubt it - but it would be great to know if there are any practicing therapists or social workers who read Less Wrong.

Replies from: pdf23ds, AdeleneDawner
comment by pdf23ds · 2009-10-07T19:16:05.804Z · LW(p) · GW(p)

but his risk aversion might extend to fear of losing social capital and being embarrassed. If that's the case, telling him to go make more friends and tell them about his problems seems to miss the point.

Certainly. I didn't get the impression that that was the case from his comment, but perhaps it is.

therapy often involves getting people to be happier by having them behave more rationally.

My main beef with therapy is that it's ineffective at this. (Not in all cases, but more likely in the case of LW members.) It's certainly a noble goal.

I don't think I have to endorse Freudian psychoanalysis in order to think that talking to an experienced stranger about your problems might be helpful in ways that talking to friends wouldn't be.

I think you're saying here that you don't have to endorse any particular methodology in order to think etc. I agree with the conditional, but I somewhat disagree with the consequent.

I write about my personal experience with therapy on my blog, which certainly informs my writings here.

Replies from: Jack
comment by Jack · 2009-10-07T19:50:31.942Z · LW(p) · GW(p)

My main beef with therapy is that it's ineffective at this. (Not in all cases, but more likely in the case of LW members.) It's certainly a noble goal.

I more or less agree with this. I was smarter than my therapist too but it was still helpful for three reasons. First, it forced me to recite my motives, reasons and feelings out loud which made me more conscious of them so that I could actually analyze and evaluate them. Second, the questions she asked prompted new thoughts that I wouldn't have had. Even if the premise of her questions was silly (she wasn't a Freudian but had a tendency to bring up my mother at inopportune times) it still brought forth helpful thoughts. Third, while she was behind me in IQ she had enough experience and knowledge of patterns of behavior to call me on my bullshit. In my experience (and as I understand it, in studies) intelligent people are especially good at rationalizing away behavior and channeling emotional reactions in weird, unhelpful directions.

Anyway, that's what I got out of it. Eventually I think I reached a point of diminishing returns on it (once I could recognize patterns in my behavior, paying money to have someone else do it did seem useless). I still have a problem of putting my conclusions about my own unhealthy, irrational behavior to good use, but that doesn't seem like the kind of thing anyone will be able to help me with.

You're definitely right that therapy is overall too ineffective - which is why I think it could benefit from the insights of this site. I actually think I could get a fair amount out of therapy with an extreme rationalist - and reading your blog, it seems like your problem with therapists is that they're not enough like your average Less Wrong poster.

Replies from: pdf23ds
comment by pdf23ds · 2009-10-07T21:40:47.471Z · LW(p) · GW(p)

Hmm. Maybe I was born unusually introspective, because my therapists never deepened my analysis or called me on bullshit. My experience may be more atypical than I thought.

In my experience (and as I understand it, in studies) intelligent people are especially good at rationalizing away behavior and channeling emotional reactions in weird, unhelpful directions.

I haven't heard of those studies. I'd be interested in any references you have. I'm familiar with the correlation between intelligence and kookiness, but this sounds a bit different, though probably related.

your problem with therapists is that they're not enough like your average less wrong poster.

Heh. Well, sort of. That and, maybe, that I'm just not cut out for therapy.

comment by AdeleneDawner · 2009-10-07T17:41:33.121Z · LW(p) · GW(p)

This doesn't look like a hijack to me. I haven't suggested therapy to Peer, probably because I'm pretty strongly biased against doing so, but now that I think about it, it may be useful to at least consider it.

Carry on. :)

comment by tut · 2009-10-07T06:20:49.285Z · LW(p) · GW(p)

I agree. There are things that are part of you, but that you pretty much have to treat as external facts. Some of those are qualities of your utility function, such as risk aversion. I would not even try to change those.

Others are about abilities, like emotional behaviour, or akrasia of various kinds. Those you can try to change, but sometimes that is not possible, or would cost more than it is worth, and then you just accept them and concentrate on other things.

comment by AndrewKemendo · 2009-10-03T08:49:20.391Z · LW(p) · GW(p)

I never see discussion on what the goals of the AI should be. To me this is far more important than any of the things discussed on a day to day basis.

If there is not a competent theory on what the goals of an intelligent system will be, then how can we expect to build it correctly?

Ostensibly, the goal is to make the correct decision. Yet there is nearly no discussion of what constitutes a correct decision. I see lots of contributors talking about calculating utilons, which demonstrates that most contributors are hedonistic consequentialist utilitarians.

Am I correct then to assume that the implicit goal of the AI for the majority in the community is to aid in the maximization of human happiness?

If so I think there are serious problems that would be encountered and I think that the goal of maximizing happiness would not be accomplished.

Replies from: CronoDAS, timtyler
comment by CronoDAS · 2009-10-03T08:58:21.457Z · LW(p) · GW(p)

"Utilons" are a stand-in for "whatever it is you actually value". The psychological state of happiness is one that people value, but not the only thing. So, yes, we tend to support decision making based on consequentialist utilitarianism, but not hedonistic consequentialist utilitarianism.

See also: Coherent Extrapolated Volition

Replies from: AndrewKemendo, AndrewKemendo
comment by AndrewKemendo · 2009-10-03T14:04:32.224Z · LW(p) · GW(p)

Upon reading of that link (which I imagine is now fairly outdated?) his theory falls apart under the weight of its coercive nature - as the questioner points out.

It is understood that the impact of an AI will be on all of humanity regardless of its implementation if it is used for decision making. As a result, consequentialist utilitarianism still holds a majority-rule position, as the link talks about, which implies that the decisions the AI would make would favor a "utility" calculation. (Spare me the argument about utilons; as an economist I have previously been neck deep in Bentham.)

The discussion at the same time dismisses and reinforces the importance of the debate itself, which seems contradictory. I personally think this is a much more important topic than is generally thought, and I have yet to see a compelling argument otherwise.

From the people (researchers) I have talked to about this specifically, the responses I have gotten are: "I'm not interested in that, I want to know how intelligence works" or "I just want to make it work, I'm interested in the science behind it." And I think this attitude is pervasive. It is ignoring the subject.

comment by AndrewKemendo · 2009-10-03T13:18:10.102Z · LW(p) · GW(p)

"Utilons" are a stand-in for "whatever it is you actually value"

Of course - which makes them useless as a metric.

we tend to support decision making based on consequentialist utilitarianism

Since you seem to speak for everyone in this category - how did you come to the conclusion that this is the optimal philosophy?

Thanks for the link.

comment by timtyler · 2009-10-03T11:43:55.775Z · LW(p) · GW(p)

The topic of what the goals of the AI should be has been discussed an awful lot.

I think the combination of moral philosopher and machine intelligence expert must be appealing to some types of personality.

Replies from: AndrewKemendo
comment by AndrewKemendo · 2009-10-03T13:15:13.380Z · LW(p) · GW(p)

Maybe I'm just dense, but I have been around a while and searched, yet I haven't stumbled upon a top-level post or anything of the like - here, from the FHI or SIAI (other than ramblings about what AI could theoretically give us), on OB, or otherwise - which either breaks it down or gives a general consensus.

Can you point me to where you are talking about?

Replies from: timtyler
comment by timtyler · 2009-10-03T14:23:18.276Z · LW(p) · GW(p)

Probably the bulk of such discussions took place on http://www.sl4.org/

Machines will probably do what they are told to do - and what they are told to do will probably depend a lot on who owns them and on who built them. Apart from that, I am not sure there is much of a consensus.

We have some books on the topic:

Moral Machines: Teaching Robots Right from Wrong - Wendell Wallach

Beyond AI: Creating The Conscience Of The Machine - J. Storrs Hall

...and probably hundreds of threads - perhaps search for "friendly" or "volition".

comment by Morendil · 2009-10-03T07:19:45.420Z · LW(p) · GW(p)

Bayesian reasoning spotted in the wild at Language Log

Replies from: SilasBarta
comment by SilasBarta · 2009-10-03T15:34:07.283Z · LW(p) · GW(p)

More specifically, the Kullback-Leibler divergence, which is even awesomer.

comment by [deleted] · 2009-10-28T10:54:56.184Z · LW(p) · GW(p)

So, there's this set, called W. The non-emptiness of W would imply that many significant and falsifiable conjectures, which we have not yet falsified, are false. What's the probability that W is empty?

(Yep, it's a bead jar guess. Show me your priors. I will not offer clarification unless I find that there's something I meant to be clearer about but wasn't.)

Replies from: Alicorn, cousin_it
comment by Alicorn · 2009-10-28T13:20:17.142Z · LW(p) · GW(p)

How many is "many"?

comment by cousin_it · 2009-10-28T11:50:09.209Z · LW(p) · GW(p)

I say 0.9.

comment by Morendil · 2009-10-26T19:54:27.497Z · LW(p) · GW(p)

Movie: Cloudy with a Chance of Meatballs - I took the kids to see that this week-end and it struck me as a fun illustration of the UnFriendly AI problem.

comment by Z_M_Davis · 2009-10-16T19:58:18.781Z · LW(p) · GW(p)

On reflection, I'm actually going to start spelling my first name again.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2009-10-16T20:01:46.353Z · LW(p) · GW(p)

Hence this new account.

ADDENDUM: I mean, unless we have some name-change feature that I just couldn't find.

SECOND ADDENDUM: To anyone reading this on my userpage, you might be interested in my older comments.

Replies from: komponisto, MBlume, Vladimir_Nesov
comment by komponisto · 2009-10-18T06:17:25.830Z · LW(p) · GW(p)

On reflection, I'm actually going to start spelling my first name again

Why? (If I may ask.)

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2009-10-18T06:48:27.132Z · LW(p) · GW(p)

I'll PM you.

comment by MBlume · 2009-10-16T20:37:25.035Z · LW(p) · GW(p)

unless we have some name-change feature that I just couldn't find.

I've been wishing we had one for a while -- I replicated my Reddit login without really thinking.

comment by Vladimir_Nesov · 2009-10-16T20:10:45.408Z · LW(p) · GW(p)

I guess you could implement one!

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2009-10-16T20:26:21.245Z · LW(p) · GW(p)

Regrettably my meager Python skills are not yet up to the task.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-10-16T20:28:05.719Z · LW(p) · GW(p)

A welcome occasion to learn more?

comment by Nubulous · 2009-10-11T03:25:55.674Z · LW(p) · GW(p)

I came up with the following while pondering the various probability puzzles of recent weeks, and I found it clarified some of my confusion about the issues, so I thought I'd post it here to see if anyone else liked it:

Consider an experiment in which we toss a coin to choose whether a person is placed into a one-room hotel or duplicated and placed into a two-room hotel. For each resulting instance of the person, we repeat the procedure. And so forth, repeatedly.

The graph of this would be a tree in which the persons are edges and the hotels nodes. Each layer of the tree (each generation) would have equal numbers of 1-nodes and 2-nodes (on average, when numerous). So each layer would have 1.5 times as many outgoing edges as incoming, with 2/3 of the outgoing being from 2-nodes.

If we pick a path away from the root, representing the person's future, in each layer we have an even chance of arriving at a 1-node or a 2-node, so our future will contain equal numbers of 1- and 2-hotels. If we pick a path towards the root, representing the person's past, in each layer we have a 2/3 chance of arriving at a 2-node, meaning that our past contained twice as many 2-hotels as 1-hotels.
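
For anyone who wants to check the forward-1/2 versus backward-2/3 asymmetry numerically, here is a minimal Monte Carlo sketch (my own illustration, not part of the original puzzle; the layer and trial counts are arbitrary). It grows the tree, follows a single random forward path for the forward statistic, and samples the final layer for the backward one:

```python
import random

def simulate(num_layers=15, trials=400):
    """Branching process from the comment above: each person flips a fair
    coin and is either placed in a 1-room hotel (one successor) or
    duplicated into a 2-room hotel (two successors)."""
    forward = {1: 0, 2: 0}   # hotel types seen along one random forward path
    backward = {1: 0, 2: 0}  # type of hotel each final-layer person just left

    for _ in range(trials):
        people = [0]        # entries = type of hotel the person just came out of
        path_index = 0      # index of the person whose forward path we follow
        for _ in range(num_layers):
            next_people = []
            next_path_index = None
            for i, _parent in enumerate(people):
                hotel = random.choice((1, 2))       # fair coin
                if i == path_index:
                    forward[hotel] += 1
                    # the followed path continues through one of the successors
                    next_path_index = len(next_people) + random.randrange(hotel)
                next_people.extend([hotel] * hotel)  # 1 or 2 outgoing edges
            people, path_index = next_people, next_path_index
        backward[1] += people.count(1)
        backward[2] += people.count(2)

    print("forward  fraction of 2-hotels:", forward[2] / (forward[1] + forward[2]))    # ~1/2
    print("backward fraction of 2-hotels:", backward[2] / (backward[1] + backward[2])) # ~2/3

simulate()
```

The forward fraction comes out near 1/2 because each step along a path is a fresh fair coin; the backward fraction comes out near 2/3 because sampling persons weights each 2-hotel by its two outgoing edges.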

comment by [deleted] · 2009-10-07T00:53:03.797Z · LW(p) · GW(p)

The Other Presumptuous Philosopher:

It begins pretty much as described here:

It is the year 2100 and physicists have narrowed down the search for a theory of everything to only two remaining plausible candidate theories, T1 and T2 (using considerations from super-duper symmetry). According to T1 the world is very, very big but finite, and there are a total of a trillion trillion observers in the cosmos. According to T2, the world is very, very, very big but finite, and there are a trillion trillion trillion observers. The super-duper symmetry considerations seem to be roughly indifferent between these two theories. The physicists are planning on carrying out a simple experiment that will falsify one of the theories.

...except the simple experiment won't quite falsify one of the theories. You see, the experiment has a trillion different possible outcomes. If T1 is true, the outcome will be a specific possibility that scientists have already calculated. If T2 is true, the outcome will be a random one, distributed uniformly among all possibilities.

Well, the experiment is performed, and the result is the one that's consistent with both theories. For whatever reason, anthropic reasoning is pretty standard in this hypothetical universe, so now, not before but after the experiment, the two theories are considered to be pretty much on par with each other. Enter the Other Presumptuous Philosopher: "Hey guys, we can stop experimenting now, because I can already show to you now, using non-anthropic reasoning, that T1 is about a trillion times more likely to be true than T2!"

My point: the Presumptuous Philosopher argument, though a good argument against certainty of either anthropic or non-anthropic reasoning, isn't a good argument against anything else. It's about as good an argument as "If you think that's true, why don't you bet your life on it?"
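
To make the Other Presumptuous Philosopher's non-anthropic argument concrete, here is the bare Bayes calculation (a sketch of my own; the 50/50 prior and the trillion-outcome experiment are taken from the setup above, the rest is mechanical arithmetic):

```python
from fractions import Fraction

prior_T1 = prior_T2 = Fraction(1, 2)    # super-duper symmetry is indifferent
likelihood_T1 = Fraction(1, 1)          # T1 pins down the observed outcome exactly
likelihood_T2 = Fraction(1, 10**12)     # T2 spreads uniformly over a trillion outcomes

posterior_odds_T1_to_T2 = (prior_T1 * likelihood_T1) / (prior_T2 * likelihood_T2)
print(posterior_odds_T1_to_T2)          # 1000000000000, i.e. a trillion to one for T1
```

The anthropic weighting implied by the setup would presumably boost T2 by the same factor of a trillion, which is what leaves the two theories "pretty much on par" after the experiment.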

comment by pdf23ds · 2009-10-04T21:41:56.244Z · LW(p) · GW(p)

Bug alert: this comment has many children, but doesn't currently have a "view children" link when viewing this entire thread.

comment by [deleted] · 2009-10-04T18:14:07.449Z · LW(p) · GW(p)

I've only been reading Open Threads recently, so forgive me if it's been discussed before.

A band called The Protomen just recently came out with their second rock opera of a planned trilogy of rock operas based on (and we're talking based on) the Megaman video game. The first is The Protomen: Hope Rides Alone, the second one is Act II: The Father of Death.

The first album tells the story of a people who have given up and focuses on the idea of heroism. The second album is more about the creation of the robots and the moral struggles that occur. I suggest you start with "The Good Doctor": http://www.youtube.com/watch?v=HP2NePWJ2pQ

comment by JulianMorrison · 2009-10-03T23:22:29.808Z · LW(p) · GW(p)

Mini heuristic that seems useful but not big enough for a post.

To combat ingroup bias: before deciding which experts to believe, first mentally sort the list of experts by topical qualifications. Allow autodidact skills to count if they have been recognized by peers (publication, citing, collaboration, etc).

comment by CronoDAS · 2009-10-03T06:25:09.502Z · LW(p) · GW(p)

Mind-killer warning.

What is the opinion of everyone here on this? It's an essay of sorts (adapted from a speech) making a case for a guaranteed minimum income.

Replies from: Jordan, Alicorn
comment by Jordan · 2009-10-03T17:42:42.414Z · LW(p) · GW(p)

There's a difference between activities that are inherently desirable to do, just because they are fun/interesting/challenging, and activities that people can become accustomed to and eventually even like. I imagine farming is one of the latter. While I can envision a good number of farmers continuing to farm without the economic incentive to do so, I doubt the replacement rate would be high enough to continue feeding the world.

I also imagine that, even if you abolish money, people would just recreate it, or at least an elaborate bartering system. I know I would personally. Note that there would be just as much desire from the 'consumer' as the 'producer' to recreate currency. Consider, for example, a hypothetical bridge-building group that just likes going around and building bridges for the sake of it. They're the best, and are in high demand. The group is happy to just build bridges as they work their way across the country, until suddenly a city not on their short list contacts them saying, "We desperately need a bridge! We'll do anything! You could live like kings here for months if you just build us a bridge!" It's one thing to want to do something for the joy of it, without remuneration; it's entirely another to actively reject payment. Thus, the cycle starts over again.

Replies from: CronoDAS
comment by CronoDAS · 2009-10-04T03:30:30.892Z · LW(p) · GW(p)

The author addresses this. He's not particularly opposed to paying people to do things; he's opposed to people having to do paid work or starve. The existence of a GMI should make people less willing to do unpleasant jobs for relatively low wages, effectively reducing the supply of unskilled labor. If you can't automate away a job that most people don't like doing, then just pay people the new, higher market rate.

Replies from: Jordan
comment by Jordan · 2009-10-04T08:03:59.443Z · LW(p) · GW(p)

I'm in favor of providing food and health care to anyone that needs it. However, a GMI that rivals minimum wage would probably have much larger consequences, which I'm not convinced anyone could predict.

comment by Alicorn · 2009-10-03T15:18:04.356Z · LW(p) · GW(p)

Awesome link, thanks! I'm not sure about a GMI in the form of money per se, but if there's a way to make it represent (as he suggests) "real wealth", instead of a potentially slow-to-adjust numerical value, then it could work.

comment by wedrifid · 2009-10-02T16:07:21.177Z · LW(p) · GW(p)

My thought of the day: An 'Infinite Improbability Drive' is slightly less implausible than a faster than light engine.

comment by pdf23ds · 2009-10-02T08:53:31.325Z · LW(p) · GW(p)

Is there a complete guide anywhere to comment/post formatting? If so, it should probably be linked on the "About" page or something. I can't figure out how to do HTML entities; is that possible?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-10-02T09:31:32.863Z · LW(p) · GW(p)

There is a comment formatting page on the Wiki. The syntax description says that you can just write HTML entities in the comments directly, but apparently it doesn't work here: ©

On the other hand, simple copy-paste from an entity list page works: ©

comment by RolfAndreassen · 2009-10-01T17:16:25.217Z · LW(p) · GW(p)

I would like to throw out some suggested reading: John Barnes's Thousand Cultures and Meme Wars series. The former deals with the social consequences of smarter-than-human AI, uploading, and what sorts of pills we ought to want to take. The latter deals with nonhuman, non-friendly FOOMs. Both are very good, smart science fiction quite apart from having themes often discussed here.

Replies from: billswift
comment by billswift · 2009-10-02T15:11:54.959Z · LW(p) · GW(p)

I have read "A Million Open Doors" and "A World Made of Glass" and don't remember ANY AI at all in them. And only limited uploading. And are there any Meme Wars novels other than "Kaleidoscope Century" and "Candle"? They were decent but not great stories, but the "memetic virus" background required a serious "suspension of disbelief". Barnes's least unrealistic uploading and FOOM novel was the space-farers in "Mother of Storms".

Replies from: RolfAndreassen
comment by RolfAndreassen · 2009-10-04T20:24:42.728Z · LW(p) · GW(p)

Thousand Cultures: The technology develops through the series. In "The Merchants of Souls" the uploading is the main McGuffin, and in "The Armies of Memory" the AIs are, with the uploads as a good second.

Meme Wars: You are missing "Orbital Resonance" and "The Sky So Big and Black", although the memes as such are background, not the main story element, in both.

comment by zaph · 2009-10-01T17:43:16.817Z · LW(p) · GW(p)

I'll make my more wrong confession here in this thread: I'm a many-worlds skeptic. Or at least I'm deeply skeptical of Egan's law. I won't pretend I'm arguing from any sort of deep QM understanding. I just mean in my sci-fi, what-if thinking about what the implications would be. I truly believe there would be more wacky outcomes in an MWI setting than we see. And I don't mean violations of physical laws; I'm hung up on having to give up the idea of cause and effect in psychology. In MWI, I don't see how it's possible to think there would be cause and effect behind conversations, personal identity, etc. Literally every word, every vocalization, is determined solely by quantum interactions, unless I'm deeply misunderstanding something. This goes against the determinism I hold to be true. I don't see how my next words won't be French, Arabic, Klingon, etc., and I don't see how what I consider to be normal isn't vanishingly unlikely to continue for an indefinite period of time.

I'll admit that work's been busy, so I haven't worked through EY's latest posts, so if there's been some resolution of this in the anthropic threads, I'd appreciate a quick summary. Sorry if this is more of a question than an answer; it's for that reason that I second a forum. I like blogs for articles, but they don't work for discussion as well as forums do, and forums better allow people to post questions.

Replies from: Vladimir_Nesov, Jack, saturn, JamesAndrix
comment by Vladimir_Nesov · 2009-10-01T17:49:08.685Z · LW(p) · GW(p)

This is a confusion about free will, not many-worlds.

Replies from: zaph
comment by zaph · 2009-10-01T18:16:51.016Z · LW(p) · GW(p)

I would describe my view on the free will question as basically being Dennett's in Elbow Room and Freedom Evolves. But that seems to be confounded by what I expect to be the utter randomness that would emerge from the MWI. I don't worry about having free will; I am concerned about having some sort of causal chain in my actions. I don't disavow that I'm confused, I just don't think I'm confused over free will.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-10-01T19:40:39.190Z · LW(p) · GW(p)

There is a deep similarity, which I expected to carry over: in both cases, you have some subjective feeling, and in both cases the nature of the physical substrate in which you exist doesn't matter in the slightest for the explanation of why you have that feeling. The feeling has a cognitive explanation that screens off the physical explanation. Thus, you can be confused about the physical explanation, but not confused about your question, since you have a cognitive explanation.

Replies from: zaph
comment by zaph · 2009-10-01T20:53:20.759Z · LW(p) · GW(p)

I'm not sure I quite follow. So I have the feeling of confusion, which I attribute to not understanding the ramifications of the physical explanation of quantum effects that the MWI provides. What's the cognitive explanation for this?

comment by Jack · 2009-10-01T19:06:17.501Z · LW(p) · GW(p)

Your claim is that MWI predicts things we don't see. If this is true then it is a really big deal - you'd be able to show that MWI was not just falsifiable (which is still a contentious issue) but already falsified. Suffice it to say, someone would have noticed this.

Anyway it is true that MWI does entail that there is some non-zero possibility that your next words will be in Klingon. But the possibility is so small that the universe is likely to end many, many times over before it ever happens. Unfortunately, this does suggest you have to give up your notion of robust, metaphysical causation, since (1) shit ain't determined and (2) there are no objects (the usual units of causation), just overlapping fields. There are some efforts to maintain serious causal stories despite this, but since no one really knew what was meant by causation before quantum mechanics, this doesn't seem like that big a loss.

In any case, these sacrifices are purely philosophical, MWI changes nothing about what experiences you should expect (except possibly in regards to anthropic issues) and makes no new predictions about run of the mill everyday physics.

Replies from: zaph
comment by zaph · 2009-10-01T19:23:58.045Z · LW(p) · GW(p)

Hi Jack,

Anyway it is true that MWI does entail that there is some non-zero possibility that your next words will be in Klingon. But the possibility is so small that the universe is likely to end many, many times over before it ever happens.

This all could just be an issue of me being massively off on the probabilities, but aren't there a greater number of possibilities that my next words will not be in English than in English, and therefore a greater probability that what I would say would not be in English? And in this particular example, there are a number of universes that have branched off in which I would have spoken Klingon. I'm not understanding the limitation that would demonstrate that there are more universes where I spoke English instead (i.e. why would there be a bell curve distribution with English sentences being the most frequent outcome?)

And I do want to more clearly re-iterate that I'm not talking about Everett's formal proof, but the purely philosophical ramifications you mention (and also, I haven't got some earth shattering thesis waiting in the wings, I'm just describing my confusion). QM is fact, and MWI is a way of interpreting it. For whatever reason, I'm interested in that interpretation. So chalk it up to me thinking through a dumb question. I don't believe I've falsified a mainstream QM theory. I do feel I've demonstrated to my satisfaction that I don't fully understand the metaphysical implications of MWI. It sounds easier to just chalk it up to "it's the equations", but I do find the potential implications interesting.

Replies from: CronoDAS, Jack
comment by CronoDAS · 2009-10-01T21:53:25.780Z · LW(p) · GW(p)

This all could just be an issue of me being massively off on the probabilities, but aren't there a greater number of possibilities that my next words will not be in English than in English, and therefore a greater probability that what I would say would not be in English? And in this particular example, there are a number of universes that have branched off in which I would have spoken Klingon. I'm not understanding the limitation that would demonstrate that there are more universes where I spoke English instead (i.e. why would there be a bell curve distribution with English sentences being the most frequent outcome?)

Not all "possibilities", as you describe them, are equally likely. If I enter 2+2 into my calculator, and MWI is correct, there would be some worlds in which some transistors don't behave normally (because of thermal noise, cosmic rays, or whatever), bits flip themselves, and the calculator ends up displaying some number that isn't "4". The calculator can display lots of different numbers, and 4 is only one of them, but in order for any other number to appear, something weird had to have happened - and by weird, I mean "eggs unscrambling themselves" kind of weird. (Transistors are much smaller than chicken eggs, so flipped bits in a calculator are more like a microscopic egg unscrambling itself, but you get the idea.)

MWI basically says that, yes, someone will win the quantum lottery, but it won't be you.

Replies from: zaph
comment by zaph · 2009-10-01T22:11:39.791Z · LW(p) · GW(p)

This and the other probability discussions above have greatly helped me to understand what MWI was getting at. I wasn't fully grasping what the limitations were - that MWI isn't describing limitless possibilities all happening with equal weight.

comment by Jack · 2009-10-01T20:40:03.117Z · LW(p) · GW(p)

but aren't there a greater number of possibilities that my next words will be not be in English than in English, and therefore a greater probability that what I would say would not be in English?

No. So QM says that at time t every subatomic particle in your brain has a superposition - a field which gives the probability that the particle will be found at each location in the field. There is no end to the field, but only a very small area will have a non-negligible probability magnitude. Now scale up to the atomic level. Atoms will similarly have superpositions - these superpositions are dictated by the superpositions of the subatomic particles which make up the atom. You can keep scaling up. The larger the scale, the lower the chances of anything crazy happening, because for an entire atom to be discovered on the other side of the room every particle it is made up of would have to have tunneled ten feet at the same time to the same place. This is true for the molecules that make up the entire brain mass. Whatever molecular/structural conditions make you an English speaker at time t are very likely to remain in place at time t2, since their superposition is just a composite of the superpositions of their parts (well, not really; my understanding is that it is way more complicated than that - suffice it to say that the chances of many particles being discovered away from the peaks of their wavefunctions are much lower than the chance of finding a single electron outside the peak of its wavefunction).

For our purposes many worlds just says all of the possible outcomes happen. The chances you should assign to experiencing any one of these possibilities are just the chances you should assign to finding yourself in the world in which that possibility happens. Since in nearly all Everett branches you will still be speaking English (nearly all of the particles will have remained in approximately the same place) you should predict that you will never experience un mundo donde personas hablan espanol sin razones!

Heh. Right now, I'm pretty sure QM does preclude robust, folk understandings of causation. But tell me, what is it that causation gives you that you want so badly?

Replies from: zaph
comment by zaph · 2009-10-01T22:09:28.099Z · LW(p) · GW(p)

Thanks again - this is the type of explanation that helps me to much better understand the possibilities MWI was addressing. And causation just gives me the reasonable expectation that physics models and biology theories do adequately model our world, without my having to worry about spooky action throwing too big of a monkey wrench into things.

Replies from: Jack
comment by Jack · 2009-10-01T22:34:05.519Z · LW(p) · GW(p)

Sure. And don't worry about causation; you can make inferences and predictions just fine without it.

comment by saturn · 2009-10-01T19:11:04.517Z · LW(p) · GW(p)

It's true that MWI doesn't absolutely rule out the possibility that your next words might be in another language, but neither does any other QM interpretation. They all predict just the amount of wackiness that we see.

Replies from: zaph
comment by zaph · 2009-10-01T19:28:46.162Z · LW(p) · GW(p)

The other interpretations allow for the possibility, but MWI seems to argue for it to definitely occur, in some universe branch.

I think it's the "wacky but not TOO wacky" world that I find pretty fascinating in QM. I just haven't seen a description that just seemed to nail it for me. Obviously, YMMV.

comment by JamesAndrix · 2009-10-01T18:29:29.267Z · LW(p) · GW(p)

I don't quite understand what you're confused about. Why would MWI make you start talking in anything but English?

If you flip a hypothetical fair random coin 1000 times, you'll almost certainly get something around 500 heads and 500 tails. Getting anything like 995 heads would be rare.

The coin can be entirely nondeterministic in how it flips, and still be reliable in this regard.
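
To put a number on "rare" (my own arithmetic, not part of the original comment), here is the tail probability for the 995-heads case:

```python
from math import comb

# Probability of getting 995 or more heads in 1000 fair coin flips.
tail = sum(comb(1000, k) for k in range(995, 1001))
print(tail / 2**1000)   # roughly 8e-289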

Replies from: zaph
comment by zaph · 2009-10-01T18:48:43.561Z · LW(p) · GW(p)

Well, there's no physical limitation against me speaking something other than my birth language. Using the coin analogy, my tongue position, lip position, and airflow out of my throat are the variables. Those variables, across all distributions, can produce any human word. Across infinity, there will be worlds where I'm speaking my birth language for my next statement, and others where I'm not. MWI seems to me to eliminate the prior state from having an influence on the next state of my language machine. If all probabilities do occur in MWI, I see the probability of me continuing to speak English as being the 995-heads case (which is still possible, I just see it as unlikely). I don't think MWI "makes" me do anything, I just think the implication is that all possible worlds become reality. It really comes down to the prior state's apparent lack of influence; that's what confuses me. Once that's gone, I just see causality in human actions going out the window.

Replies from: orthonormal
comment by orthonormal · 2009-10-01T21:44:33.670Z · LW(p) · GW(p)

You're confused about probability, causality in QM, and anthropics. (Note in particular that your objection can't be particular to MWI, since even in a collapse theory, the wacky things could happen).

MWI seems to me to eliminate the prior state from having an influence on the next state of my language machine.

The current state of your brain corresponds to a particular (small neighborhood of) configurations, and most of the wavefunction-mass that is in this neighborhood flows to a relatively small subset of configurations (i.e. ones where your next sentence is in English, or gibberish, rather than in perfect Klingon); this, precisely, is what causality actually means.

Yes, there is some probability that quantum fluctuations will cause your throat cells to enunciate a Klingon speech, without being prompted by a patterned command from your brain. But that probability is on the order of 10^-100 at most.

And there is some probability, given the structure of your brain, that your nerves would send precisely the commands to make that happen; but given that you don't actually know the Klingon speech, that probability too is on the order of 10^-100.

The upshot of MWI in this regard is that very few of your future selves will see wacky incredibly-improbably-ordered events happen, and so you recover your intuition that you will not, in fact, see wacky things. It's just that an infinitesimal fraction of your future selves will be surprised.

Replies from: zaph
comment by zaph · 2009-10-01T22:08:21.791Z · LW(p) · GW(p)

Thanks, this really helps to clarify the picture for me.

comment by CannibalSmith · 2009-10-01T15:45:43.539Z · LW(p) · GW(p)

Open threads should not be promoted, because.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-10-01T15:52:31.066Z · LW(p) · GW(p)

Promoted articles as they are also serve a purpose: they screen low-value articles from a "feed for a busy reader". What you describe is also a good suggestion, but instead of redefining "promoted", a better way to implement it is to add a subcategory of promoted self-sufficient entry-level articles, and place them on the front page.

comment by [deleted] · 2009-10-04T02:22:55.337Z · LW(p) · GW(p)

We should compile a "rationalist Bible" of sorts: a book explaining, in down-to-earth terms, how various irrational or irrationalistic modes of thought and behaviors lead to undesirable things, and denouncing these things as "sinful" or "wrong"; and explaining how various easy-to-do things lead to increased rationality and therefore desirable things, and extolling these things as "virtuous".

The ideal result is that we have a book that people would live their lives by just as willingly as they live their lives by the Bible, and that people would find just as truthful and wise as the Bible.

And, of course, we should release a series of editions, and, obviously, not claim divine inspiration.