Rationality Quotes July 2013

post by Vaniver · 2013-07-02T16:21:59.219Z · LW · GW · Legacy · 429 comments

Another month has passed and here is a new rationality quotes thread. The usual rules are:

  • Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
  • Do not quote yourself.
  • Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson.
  • No more than 5 quotes per person per monthly thread, please.

429 comments

Comments sorted by top scores.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-07-01T19:11:50.725Z · LW(p) · GW(p)

"If you don't know how to turn off the safety, being unable to fire the gun is the intended result."

-- NotEnoughBears

Replies from: cody-bryce, AndHisHorse, bouilhet
comment by cody-bryce · 2013-07-03T02:56:14.843Z · LW(p) · GW(p)

If quotes from LessWrong, OB, or HPMoR are out of scope, it seems a little odd that things said in HPMoR discussion on a forum run by you, which merely happens not to be either of those two, are in scope.

Replies from: ArisKatsaris, SaidAchmiz
comment by ArisKatsaris · 2013-07-03T09:39:37.115Z · LW(p) · GW(p)

If quotes from LessWrong, OB, or HPMoR are out of scope, it seems a little odd that things said in HPMoR discussion on a forum run by you, which merely happens not to be either of those two, are in scope.

The idea of the rule is to not have this thread be an echo chamber for LessWrong and Yudkowsky quotes. As a sister site, Overcoming Bias falls under the same logic (though I think, given that LessWrong's origin in Overcoming Bias recedes ever further into the past, I wouldn't mind that rule getting relaxed for more recent Overcoming Bias entries).

But either way, I haven't seen that many LessWrong members participate in r/HPMOR, or that many r/HPMOR members participate in LessWrong, so I think it makes sense to NOT ban r/HPMOR quotes from this thread...

Replies from: wedrifid
comment by wedrifid · 2013-07-03T17:09:23.899Z · LW(p) · GW(p)

As a sister site, Overcoming Bias falls under the same logic (though I think, given that LessWrong's origin in Overcoming Bias recedes ever further into the past, I wouldn't mind that rule getting relaxed for more recent Overcoming Bias entries).

We succeeded in getting rid of the Overcoming Bias ban for several months a couple of years ago. Unfortunately someone reverted to an old version and since then it's stuck. Traditions are a nuisance to change.

Replies from: Vaniver
comment by Vaniver · 2013-07-03T17:41:49.252Z · LW(p) · GW(p)

We succeeded in getting rid of the Overcoming Bias ban for several months a couple of years ago. Unfortunately someone reverted to an old version and since then it's stuck. Traditions are a nuisance to change.

If I make this post next month, I'll get rid of the ban. Should that also mean Robin Hanson is fair game?

[Edit] I realized that waiting was silly since I made this month's. It's not clear to me whether or not Hanson quotes should be fair game, though; with the current policy, quoting gems from the comments (like NotEnoughBears's quote) works but we shouldn't get deluged by Hanson quotes.

comment by Said Achmiz (SaidAchmiz) · 2013-07-03T05:55:23.628Z · LW(p) · GW(p)

I don't think Eliezer runs r/HPMOR/ ...

Replies from: wedrifid
comment by wedrifid · 2013-07-03T06:26:11.642Z · LW(p) · GW(p)

I don't think Eliezer runs r/HPMOR/ ...

It seems like he does. I've only gone to the site once, but the time I did (a few days ago) I saw drama about Eliezer censoring something on the subreddit, and people observing that this is why fan forums are better when not run by the author himself.

Replies from: Kawoomba
comment by Kawoomba · 2013-07-03T07:08:05.368Z · LW(p) · GW(p)

He's a moderator there, but he's not the top moderator, i.e. he acts at the whim of two moderators with more seniority who could remove him at any time.

Replies from: Dorikka
comment by Dorikka · 2013-07-04T20:25:56.332Z · LW(p) · GW(p)

who could remove him at any time.

I doubt this.

Replies from: Davidmanheim
comment by Davidmanheim · 2013-07-08T03:16:43.576Z · LW(p) · GW(p)

OK.

What evidence would cause you to change your mind?

Replies from: Dorikka
comment by Dorikka · 2013-07-13T12:49:28.032Z · LW(p) · GW(p)

Other authors being booted from forums discussing the stories that they wrote (whether original fiction or fanfic).

Replies from: Kawoomba
comment by Kawoomba · 2013-07-13T12:58:28.965Z · LW(p) · GW(p)

You don't need to be a moderator to participate in a forum.

For an example, see user Dorikka.

Replies from: Dorikka
comment by Dorikka · 2013-07-21T01:59:29.684Z · LW(p) · GW(p)

But it would still be evidence, no? grin

More seriously, replace "booted" with "having their moderatorship revoked or something of similar/greater severity" to produce a more accurate comment.

comment by AndHisHorse · 2013-08-02T00:39:29.144Z · LW(p) · GW(p)

I've seen this quote multiple times, and particularly after reading this post, I find myself needing to add the same clarification, lest the quote be misused.

The idea it contains applies if and only if there is a designer able to predict your actions with a high degree of certainty. And even then, it's useful advice if and only if you agree with the designer's intent.

comment by bouilhet · 2013-07-02T21:55:41.049Z · LW(p) · GW(p)

How is this so? Surely, as a general proposition, ignorance and intention are much more loosely correlated than the quote suggests. What if the statement were altered slightly: "If (after great effort and/or reflection and/or prayer) you (still) don't know..." Does it still make sense to speak of intention? Or if the point is that the failure to solve a simple problem indicates a will to fail, well then the author has more faith in human will than I do--and IMO greatly underestimates the possible ways of not-knowing.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-07-02T22:12:03.812Z · LW(p) · GW(p)

You're misreading the quote. The intention is on the part of the person who designed the gun, not the person who's trying to fire it.

Replies from: bouilhet
comment by bouilhet · 2013-07-02T23:07:39.616Z · LW(p) · GW(p)

Thanks for clarifying. The wording seems odd to me, but I get it now.

comment by Pablo (Pablo_Stafforini) · 2013-07-01T22:38:53.017Z · LW(p) · GW(p)

Far from being the smartest possible biological species, we are probably better thought of as the stupidest possible biological species capable of starting a technological civilization. We filled that niche because we got there first, not because we are in any sense optimally adapted to it.

Nick Bostrom, Superintelligence: the Coming Machine Intelligence Revolution, chap. 2

Replies from: elharo, ciphergoth, Benito
comment by elharo · 2013-07-03T10:10:47.147Z · LW(p) · GW(p)

Assuming we're the stupidest possible biological species capable of starting a technological civilization seems almost (though not quite) as wrong as asserting we're the smartest such. In both cases we're generalizing from a sample size of one.

For instance, I can imagine a technological civilization that was stupid enough to wipe itself out in a nuclear war, which we've so far managed to avoid; or to destroy its environment far worse than we have. I can also imagine a society that might be able to reach 18th or 19th century levels of tech but couldn't handle calculus or differential geometry.

Replies from: Desrtopa, RobbBB, DanielLC
comment by Desrtopa · 2013-07-15T18:56:03.838Z · LW(p) · GW(p)

Assuming we're the stupidest possible biological species capable of starting a technological civilization seems almost (though not quite) as wrong as asserting we're the smartest such. In both cases we're generalizing from a sample size of one.

Well, considering it took us thousands to hundreds of thousands of years (depending on whether you buy that certain, more chronologically recent adaptations played a significant role) to start developing the rudiments of technological civilization, after evolving all the biological assets of intelligence that we have now, I think it's pretty fair to infer that we're not that far above the minimum bar.

A species whose intelligence was far in excess of that necessary to be capable of technological civilization could probably have produced individuals capable of kickstarting the process in every generation once they found themselves in an environment capable of supporting it. By that measure, we as a species proved quite resoundingly lacking.

Replies from: army1987
comment by A1987dM (army1987) · 2013-07-16T09:20:09.929Z · LW(p) · GW(p)

Well, considering it took us thousands to hundreds of thousands of years (depending on whether you buy that certain, more chronologically recent adaptations played a significant role) to start developing the rudiments of technological civilization, after evolving all the biological assets of intelligence that we have now,

The end of the last glacial period might have had something to do with it.

Replies from: Desrtopa
comment by Desrtopa · 2013-07-16T11:19:41.715Z · LW(p) · GW(p)

Still thousands of years even if we suppose the window was closed before then.

comment by Rob Bensinger (RobbBB) · 2013-07-08T08:28:17.382Z · LW(p) · GW(p)

Assuming we're the stupidest possible biological species capable of starting a technological civilization seems almost (though not quite) as wrong as asserting we're the smartest such.

I agree they're both very wrong, but I don't think the levels of wrongness are as close as you suggest. The former sounds much, much wronger to me. We're much more likely to be close to the dumb end than close to the smart end.

comment by DanielLC · 2013-07-23T21:12:42.403Z · LW(p) · GW(p)

It's not that we've seen a large number of biological species capable of starting a technological civilization and we're dumbest of them. It's that we've seen a large number of biological species incapable of starting a technological civilization and we're only slightly smarter than any of them.

We know we're at the boundary of being able to start a technological civilization because we can do it and we've seen biological species within an epsilon neighborhood that cannot.

comment by Paul Crowley (ciphergoth) · 2013-07-02T12:36:33.587Z · LW(p) · GW(p)

Looking forward to reading that. This idea is definitely older than this chapter, though; would be interested to know who first made this observation and when.

EDIT:

Humans are not optimized for intelligence. Rather, we are the first and possibly dumbest species capable of producing a technological civilization.

-- Reducing Long-Term Catastrophic Risks from Artificial Intelligence (in the PDF, not the summary)

comment by Ben Pace (Benito) · 2013-07-22T22:22:37.050Z · LW(p) · GW(p)

Not sure the quote is right: http://lesswrong.com/lw/fk4/how_minimal_is_our_intelligence/

Replies from: Desrtopa
comment by Desrtopa · 2013-07-22T22:35:18.496Z · LW(p) · GW(p)

To which I made roughly the same comment then that I did this time.

comment by David_Gerard · 2013-07-02T16:13:17.955Z · LW(p) · GW(p)

Unix was not designed to stop its users from doing stupid things, as that would also stop them from doing clever things.

-- Doug Gwyn

Replies from: Stabilizer, elharo
comment by Stabilizer · 2013-07-22T08:22:00.719Z · LW(p) · GW(p)

This design philosophy may also help explain why the United States has generated some of the most useful innovations of the last century.

comment by elharo · 2013-07-03T10:03:29.293Z · LW(p) · GW(p)

In that case Unix was misdesigned. Proper design stops its users from doing stupid things and enables them to do clever things. It makes the right thing obvious and easy and the wrong thing difficult to impossible.

Replies from: SaidAchmiz, DanArmak, RobinZ, tzok, DanielLC, Lethalmud, None
comment by Said Achmiz (SaidAchmiz) · 2013-07-03T15:45:37.795Z · LW(p) · GW(p)

It seems like your comment misses the point of the Unix philosophy, which is that the designers do not undertake to know in advance exactly which user actions are "stupid" and which are "clever". Unix is supposed to be a solid framework in which you can do things; figuring out what's stupid and what's clever is left to the user. It is an expression of fundamental designer trust in the user.

comment by DanArmak · 2013-07-03T19:10:27.280Z · LW(p) · GW(p)

A car can be driven on the road, or it can go onto a sidewalk and kill pedestrians and the driver. In comparison, a train or trolley can't easily go off its rails.

Is a car misdesigned because it is an open-ended, unconstrained tool? Not necessarily; you must weigh the costs of other possibilities against their benefits.

Unix is deliberately built as a general open-ended collection of tools. It enables many more things to be done than other systems which start by presuming a list of things the user wants to do. And some of the things it enables are mostly harmful, although they too are sometimes useful. It's not misdesigned; it makes a design tradeoff that is the correct one in some situations, and not in others.

comment by RobinZ · 2013-07-08T22:15:16.723Z · LW(p) · GW(p)

I think the chief obstacle to preventing stupidity without preventing cleverness is that there are clever ideas you haven't thought of yet that sound stupid. There's also the fact that what is stupid under one set of circumstances is clever under others.

Suppose I was designing a new car radiator, and I decided that I wanted to prevent idiots from, say, filling it with motor oil, so I built a system which would prevent the addition of anything other than water or antifreeze. Then suppose the radiator sprang a leak. At this point, the owner of the car might want to use the old shade-tree mechanic's trick of putting an egg in the radiator that will flow to the leak and produce a plug (or, more intelligently, a synthetic compound that does it more effectively and with less likely damage to the machinery) ... but they can't, because of the thing I did to prevent stupidity.

comment by tzok · 2013-07-03T15:24:02.414Z · LW(p) · GW(p)

These two concepts do not contradict each other. Unix can allow "doing stupid things" AND at the same time make them "difficult to impossible". So the conclusion that Unix was misdesigned is not correct, at least not based on your definition.

comment by DanielLC · 2013-07-23T21:15:56.001Z · LW(p) · GW(p)

When the right thing can be made obvious and easy, it generally is. That in turn makes the wrong thing difficult. However, short of giving it an AI, there is no way for Unix to tell a clever thing from a wrong thing, so it lets you do both, but only if you're logged in as root.

comment by Lethalmud · 2013-07-04T15:20:53.261Z · LW(p) · GW(p)

Being able to design stupid things is an important skill for any designer. Steering away from it tends to reduce your process to cached thoughts.

Replies from: savageorange
comment by savageorange · 2013-07-06T00:08:34.917Z · LW(p) · GW(p)

Upvoted, but I feel that it could be more clear: You're focused on the idea of "Make new mistakes instead of trying to repeat previous successes", right?

That is, commit new stupidities instead of old stupidities or old successes.

comment by [deleted] · 2013-07-18T03:04:11.230Z · LW(p) · GW(p)

You can still hammer your thumb with a hammer. That doesn't make it a badly designed tool for driving nails.

In Unix, all you have to refrain from doing is rm -rf /, and you will be fine.

comment by Paul Crowley (ciphergoth) · 2013-07-02T12:59:11.182Z · LW(p) · GW(p)

"Erudition can produce foliage without bearing fruit."

-- Georg Christoph Lichtenberg

Replies from: RobertChange, christopheg
comment by RobertChange · 2013-07-29T21:53:00.861Z · LW(p) · GW(p)

Original for Reference: "Gelehrsamkeit schießt leicht in die Blätter, ohne Frucht zu tragen."

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2013-07-30T09:06:28.412Z · LW(p) · GW(p)

Thank you! Or as Google Translate has it, "Scholarship has slightly into the leaves, without carrying fruit."

comment by christopheg · 2013-07-16T12:15:27.210Z · LW(p) · GW(p)

Who says fruit is to be preferred to foliage?

I often wonder about something along these lines when speaking of education. Are students learning to get a job (fruit) or for culture (foliage)? Should choosing between one or the other be up to the student or to society? I believe the most common answer is: we study for a job, and the choice is made by society. But I, for one, cannot so easily dismiss the question. It has too much to do with the meaning of life: are people living to work/act or to understand/love?

That's obviously not the only way to interpret this quote; the most obvious reading would probably be a simple statement that knowledge can be flashy but still sterile. Anyway, like most good quotes it is ambiguous, and hence may lead to fruitful thinking.

comment by Zubon · 2013-07-02T22:15:14.774Z · LW(p) · GW(p)

He senses in his gut that he did the right thing by showing up. As with all gut feelings, only time will tell whether this is pathetic self-delusion.

Neal Stephenson, Cryptonomicon

Replies from: fbreuer, Kaj_Sotala
comment by fbreuer · 2013-07-04T04:14:02.842Z · LW(p) · GW(p)

The more immediate question is, however: Does his positive gut reaction enable him to engage more openly with the situation, thus deriving greater value from it than he might have done otherwise?

comment by Kaj_Sotala · 2013-07-09T15:24:06.358Z · LW(p) · GW(p)

As with all [beliefs], only time will tell whether this is pathetic self-delusion.

comment by Alejandro1 · 2013-07-03T13:28:06.934Z · LW(p) · GW(p)

My experience as a marriage counselor taught me that for a discussion of a disagreement to be productive, the parties have to have a shared understanding of what is being debated. If a husband thinks a marital debate is about leaving the toilet seat up or not, and the wife thinks it is about why her husband never listens to, appreciates or loves her the way he should, expect fireworks and frustration. If you are in an argument that you think is about government debt and it's going nowhere, it may be because the person you are debating isn't really arguing about the current level of government debt. Rather, they are arguing about the size of government.

If you get into a debate that is ostensibly about the level of government debt, try the following tactic (or try it on yourself in your own mind): If your opponent says that government debt is too high and we therefore need to cut public spending, ask whether s/he has EVER favored under ANY economic conditions a nice, fat increase in public spending. If you are debating someone who says that government debt is no big deal and that we should be increasing public spending, ask if s/he has EVER favored under ANY economic conditions a big, fat cut in public spending. You are going to get a no answer most of the time; maybe almost all the time.

…Is that wrong? No, it’s just frustrating when you are arguing about one thing and the other person is arguing about something else (or, when BOTH of you are actually arguing about something other than what on the face of it you think you are arguing about). The solution?: Drop the charade and get down to business. How big government should be is an essential political argument for the members of a society to have, so why not just have it up front?

--Keith Humphreys

(I hope that the general point is appreciated instead of starting a politics discussion! I think this kind of proxy argument is a very common failure mode in all areas of life.)

Replies from: Jiro
comment by Jiro · 2013-07-04T09:23:48.173Z · LW(p) · GW(p)

I don't think the conclusion follows.

It's entirely consistent to believe that the level of something is too high and has been too high for a long time, yet to not oppose it in principle.

The correct question to detect if that's really their objection is not "have they ever thought that the level is too low"--the correct question is "would they ever under any circumstances think that the level is too low". Of course, you're not going to get as many "no" answers with that as with your original formulation.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2013-07-12T15:33:31.217Z · LW(p) · GW(p)

It's entirely consistent to believe that the level of something is too high and has been too high for a long time, yet to not oppose it in principle.

It may be consistent, but is it common? Especially in political debates?

comment by Creutzer · 2013-07-20T09:28:36.102Z · LW(p) · GW(p)

"As I looked out into the night sky, across all those infinite stars, it made me realize how insignificant they are."

Peter Cook

Not, perhaps, a rationality quote per se, but a delightful subversion of a harmful commonplace.

Replies from: None
comment by [deleted] · 2013-07-25T14:12:42.349Z · LW(p) · GW(p)

What do you mean by "harmful commonplace"?

Replies from: simplicio
comment by simplicio · 2013-07-25T18:00:36.911Z · LW(p) · GW(p)

The standard version is that in looking at the stars we realize our own insignificance. Apart from the sheer non-sequitur from "of comparatively small dimensions" to "insignificant" (to whom?!), such tropes may serve as a sort of moral anaesthetic: "Taking the Hubble View, does it really, fundamentally matter if I steal money from my investors?"

The general problem is that of making leaps from empty empirical facts to (almost certainly mistaken or self-serving) moral conclusions.

Replies from: None
comment by [deleted] · 2013-07-25T19:05:35.371Z · LW(p) · GW(p)

I notice I am confused, and I do get a sense of insignificance/wonderment when looking at the night sky.

Are there actually people who use the size of the universe to justify moral nihilism?

Replies from: Creutzer, simplicio
comment by Creutzer · 2013-07-26T05:16:56.374Z · LW(p) · GW(p)

I don't think it's usually employed to justify moral nihilism so much as to tell people to shut up and not take human problems so seriously - when in fact human problems are all that matters. It strikes me as a secular cognate of the way religion frequently calls for "humility".

comment by simplicio · 2013-07-26T14:27:45.521Z · LW(p) · GW(p)

Maybe the specific example I cite is a bit farfetched, but the general principle of "ex naturalistic fallacy quodlibet" is sound.

comment by cody-bryce · 2013-07-02T18:17:44.169Z · LW(p) · GW(p)

Truth would quickly cease to be stranger than fiction, once we got as used to it.

H.L. Mencken

Replies from: Randy_M
comment by Randy_M · 2013-07-02T20:02:55.089Z · LW(p) · GW(p)

Used to truth? Or used to fiction?

Replies from: Vaniver, cody-bryce
comment by Vaniver · 2013-07-02T20:50:14.588Z · LW(p) · GW(p)

Truth.

comment by cody-bryce · 2013-07-02T21:09:38.812Z · LW(p) · GW(p)

Correct.

comment by James_Miller · 2013-07-01T17:31:59.688Z · LW(p) · GW(p)

"Here are the ten major principles of rapid skill acquisition:

  1. Choose a lovable project.
  2. Focus your energy on one skill at a time.
  3. Define your target performance level.
  4. Deconstruct the skill into subskills.
  5. Obtain critical tools.
  6. Eliminate barriers to practice.
  7. Make dedicated time for practice.
  8. Create fast feedback loops.
  9. Practice by the clock in short bursts.
  10. Emphasize quantity and speed."

The First 20 Hours: How to Learn Anything . . . Fast! by Josh Kaufman.

Replies from: bentarm
comment by bentarm · 2013-07-25T19:29:44.694Z · LW(p) · GW(p)

Have you read the book?

My suspicion is that over 90% of its worth is in an additional rule, which isn't one of these: "commit to practising something for 20 hours before starting to apply these principles". My guess: 20 hours of dedicated practice is just way longer than people tend to think it is, and you'd be surprised how much you learn in 20 hours without making an effort to do any of the rest of the 10 things.

Replies from: James_Miller
comment by James_Miller · 2013-07-26T04:07:50.341Z · LW(p) · GW(p)

Yes I did read the book.

comment by Vaniver · 2013-07-01T16:57:29.034Z · LW(p) · GW(p)

We shall not grow wiser before we learn that much that we have done was very foolish.

-- F. A. Hayek

Replies from: DanArmak
comment by DanArmak · 2013-07-05T12:15:43.663Z · LW(p) · GW(p)

And perhaps not after that, either.

comment by CronoDAS · 2013-07-04T19:33:31.630Z · LW(p) · GW(p)

There are those among us - among you, too, I observe - who glorify the wonders of the natural world with a kind of glassy-eyed fanaticism and urge a return to that purer, more innocent state. This testifies to nothing other than the fact that those who recommend the satisfactions of living in harmony with nature have never had to do it. Nature is evil. Nature is conflict, violence, betrayal; worms that crawl through the skin and breed in the gut; thorns that poison; snakes that fight in writhing, heaving masses until all lie dead from one another's poison. From nature we learned to tear the flesh off the bone and suck out the blood - and to enjoy it. Do you want to return to that state? I do not.
...
I have known Nature. I have known Civilization. Civilization is better.

-- Donna Ball (writing as Donna Boyd), The Passion

Replies from: PhilGoetz, MixedNuts
comment by PhilGoetz · 2013-07-16T15:28:34.408Z · LW(p) · GW(p)

This is factually false. I know the subculture of Americans who are most passionate about going back to nature, and they do it. The unrealism in their attitude derives not from ignorance of nature, but from being able to go back to nature while under the protection of American law and mores, so that they don't have to band together in tribes for protection, compete with other tribes for land, and do the whole tribal bickering and conformity thing.

It's all about population density. Primitive life is pretty great if you have low population density--one person per square mile is about right in much of North America. But the population always grows until you have conflict.

Spending 9 hours a day, 5 days a week, sitting in a cubicle staring at a monitor and typing in numbers is horrible in its own ways, which the author prefers to accommodate and ignore.

(There are no poisonous thorns in North America. And when you see two snakes in "writhing, heaving masses", they're probably mating.)

Replies from: Lumifer, CronoDAS, Swimmer963, None
comment by Lumifer · 2013-07-16T16:35:20.640Z · LW(p) · GW(p)

This is factually false.

What exactly was claimed to be a fact and how do you know it's false?

Primitive life is pretty great if you have low population density

Um. Really? What do you call primitive life, then? Does it include contemporary medicine, for example?

Replies from: PhilGoetz
comment by PhilGoetz · 2013-07-16T21:52:37.902Z · LW(p) · GW(p)

"This testifies to nothing other than the fact that those who recommend the satisfactions of living in harmony with nature have never had to do it." That "fact" is false, and sets up a straw man in the place of the views and preferences of people who know what they're talking about.

Replies from: gwern
comment by gwern · 2013-08-02T01:26:24.067Z · LW(p) · GW(p)

That "fact" is false, and sets up a straw man in the place of the views and preferences of people who know what they're talking about.

In what sense is traveling with modern equipment, vaccinated and raised in an industrial society,

while under the protection of American law and mores, so that they don't have to band together in tribes for protection, compete with other tribes for land, and do the whole tribal bickering and conformity thing

all of which depends crucially on a vast technological economy and society, 'living in harmony with nature'?

They aren't living in harmony with nature because their brief highly sanitized encounters are structured and make use of countless highly unnatural products & tools, and so that is not a strawman.

comment by CronoDAS · 2013-07-16T21:33:15.436Z · LW(p) · GW(p)

Me, I'll take air conditioning, indoor plumbing, mosquito control, and antibiotics any day...

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-07-16T22:04:25.908Z · LW(p) · GW(p)

Spending 9 hours a day, 5 days a week, sitting in a cubicle staring at a monitor and typing in numbers is horrible in its own ways, which the author prefers to accommodate and ignore.

I 100% agree with this. As a kid, I used to daydream about going and living by myself in the wilderness, partly because sitting in a classroom all day was so awful. (The other aspect is that I didn't like people much when I was 10). I've compromised by finding a job where I don't have to sit down and type numbers into a computer...at least, not much. Also I like people a lot more now.

comment by [deleted] · 2013-07-16T15:40:50.992Z · LW(p) · GW(p)

I have a sneaking suspicion that's not what the OP meant by "Nature."

comment by MixedNuts · 2013-07-04T21:45:43.287Z · LW(p) · GW(p)

That sounds like fun, from a LaVeyan-ish perspective. Fighting and killing are more exciting than singing Kumbaya. Does she just not like raw meat?

Replies from: Estarlio, CronoDAS
comment by Estarlio · 2013-07-04T22:59:11.064Z · LW(p) · GW(p)

Because the consequences of losing are so terrible, people tend to avoid serious fighting if they can. Being hunted - a far more likely state - is decidedly un-fun.

Replies from: DanielLC, MixedNuts
comment by DanielLC · 2013-07-23T21:19:36.498Z · LW(p) · GW(p)

Being hunted is just as likely as hunting. It's just that being hunted is much worse than hunting is good.

Being in the state of trying to avoid being hunted is also un-fun.

comment by MixedNuts · 2013-07-05T07:31:22.448Z · LW(p) · GW(p)

It's definitely terrible and to be avoided if at all possible, but it is kind of fun. We can and do get back a small part of that feeling with roller coasters and action movies and fighting sports.

Replies from: Estarlio, Richard_Kennaway
comment by Estarlio · 2013-07-07T02:01:39.204Z · LW(p) · GW(p)

Do you have data on how prevalent this is?

As a martial artist, and as someone who's been in fear of getting the crap knocked out of them in the past, this just doesn't line up with my experience. There's a degree of focus that goes on in fights that largely excludes feelings of excitement; it's not like being on a rollercoaster. At least not for me. Fighting feels more like floating, if it can be said to be like anything; I just get incredibly tuned in and a lot stronger than usual.

Admittedly I don't think everyone experiences it like that, some people probably do enjoy it.

Replies from: PhilGoetz, AndHisHorse
comment by PhilGoetz · 2013-07-16T15:22:24.213Z · LW(p) · GW(p)

In the Middle Ages it was more respectable to talk about how much you enjoyed killing people, and some people did, though I can't remember any references.

comment by AndHisHorse · 2013-08-02T00:34:09.145Z · LW(p) · GW(p)

I would suspect that sparring in a martial arts context - the product of years of training and practicing specific, restrained moves, in which the objective is not to harm the opponent but to demonstrate superior technique - is rather different, emotionally, from a life-or-death struggle or even a fight between two combatants working off instinct and experience, neither of whom have been conditioned to associate that particular kind of fighting with a safe, controlled environment.

That said, I agree with you that there's a matter of individual variation. The people who receive the strongest adrenaline high from fighting, however, are probably not the ones asked to return to the martial arts academy.

comment by Richard_Kennaway · 2013-07-19T13:46:19.745Z · LW(p) · GW(p)

It's definitely terrible and to be avoided if at all possible, but it is kind of fun.

"Nothing in life is so exhilarating as to be shot at without result.”

Winston Churchill (from his years as a war correspondent).

comment by CronoDAS · 2013-07-04T22:05:56.853Z · LW(p) · GW(p)

It's actually from the prologue of a romance novel, and the narrator is a werewolf.

comment by PhilGoetz · 2013-07-16T14:48:18.464Z · LW(p) · GW(p)

I'll be more enthusiastic about encouraging thinking outside the box when there's evidence of any thinking going on inside it.

-- Terry Pratchett, alt.fan.pratchett

comment by dspeyer · 2013-07-01T20:20:30.098Z · LW(p) · GW(p)

Sometimes the most remarkable things seem commonplace. I mean, when you think about it, jet travel is pretty freaking remarkable. You get in a plane, it defies the gravity of an entire planet by exploiting a loophole with air pressure, and it flies across distances that would take months or years to cross by any means of travel that has been significant for more than a century or three. You hurtle above the earth at enough speed to kill you instantly should you bump into something, and you can only breathe because someone built you a really good tin can that has seams tight enough to hold in a decent amount of air. Hundreds of millions of man-hours of work and struggle and research, blood, sweat, tears, and lives have gone into the history of air travel, and it has totally revolutionized the face of our planet and societies.

But get on any flight in the country, and I absolutely promise you that you will find someone who, in the face of all that incredible achievement, will be willing to complain about the drinks.

The drinks, people.

--Harry Dresden, Summer Knight, Jim Butcher

Replies from: SaidAchmiz, DSherron, christopheg
comment by Said Achmiz (SaidAchmiz) · 2013-07-02T13:11:36.758Z · LW(p) · GW(p)

Here's the thing about air-travel-related complaints.

Air travel is really unpleasant. Oh sure, it's technologically impressive, but the actual experience is terrible: sitting in a cramped space for hours on end, being in close proximity to so many other people; the pressure changes and the noise; the long, tiring process of arriving for your flight, which often takes longer than the actual flight and is quite stressful; the humiliating and absurd security procedures, which these days look more and more like ways for the government to gratuitously exercise its power...

So we've got this really impressive means of travel, which our society seems to have conspired to make as unpleasant as humanly possible. Ok, maybe it's all excusable and inevitable, just for the sheer amazingness of "ooh, we're FLYING through the AIR and so FAST!" etc. But then, after we pay the airline such impressive amounts of money for this amazing-but-unpleasant convenience, they don't deign to even serve us good drinks?

And what do the drinks have to do with how technologically impressive flight is, anyway? Are the people responsible for the drinks also the people who build, maintain, and fly the planes? What, are the drinks the pilot's responsibility, and he just can't be bothered, what with all that keeping the plane upright that he has to do? Did the Boeing engineer have "serve good drinks" on his to-do list, but just plain didn't get to it, tired as he was from all that "making sure the wings don't fall off" he had to do? No! The people responsible for the drinks had one damn job! And they're doing it badly! And then when people complain, they have the gall to evade responsibility by attempting to take credit for all that amazing science and engineering?!

In short, the quote is analogous to:

"I mean, when you think about it, our society is pretty freaking remarkable. We have computers, and indoor plumbing, and hundreds of channels on cable. Hundreds of millions of man-hours of work and struggle and research, blood, sweat, tears, and lives have gone into the history of all of our modern conveniences, and it has totally revolutionized the face of our planet and societies.

But look anywhere in the world, and I absolutely promise you that you will find someone who, in the face of all that incredible achievement, will be willing to complain about being mugged.

Being mugged, people."

Yeah, "everything is amazing so why are you complaining about this unrelated bad thing" is a fallacy. At this rate, all complaints about everything, ever, are apparently unwarranted.

Replies from: Kawoomba, Jiro, fractalman
comment by Kawoomba · 2013-07-02T13:30:23.479Z · LW(p) · GW(p)

You're using this remarkable set of interacting or interdependent components of interlinked hypertext documents in a global system of interconnected computer networks powered by a flow of electric charge to whine about a rationality quote! How quaint.

comment by Jiro · 2013-07-02T21:41:35.919Z · LW(p) · GW(p)

Well, there's the scenario where one person does both the engineering and the drinks, but only has a limited amount of effort in his job to exert, and he chooses to devote all of that effort to engineering the plane and only a tiny portion of it to ensuring the quality of the drinks. That scenario is obviously absurd.

But if you slightly modify that, person->company and effort->money, that's pretty much what's going on. The company has a limited amount of money to spend, and spending most of it on engineering and almost nothing on drinks has similar dynamics to a single worker who's choosing to spend all his time on engineering and almost nothing on drinks. Even if the company internally contains several workers and the engineer and the drink maintainer are different people.

comment by fractalman · 2013-07-07T07:05:24.182Z · LW(p) · GW(p)

Back to the original quote for a bit... Dresden actually complains quite a bit. But after dealing with flaming monkey poo (literally), a White Court vampire as a friend, using a cleaning spell to deal with some giant scorpions, and who knows how many dead bodies (some of which were animated), drinks seem really, really shallow to him. Not to mention he's trying hard not to think too much about how, if he lets his magic the least bit off the leash, it will crash the plane. (Something about complicated technology seems to override the rule "cannot accomplish what you don't believe in accomplishing".)

Moving back to real life, someone is willing to complain about the drinks while someone else is being mugged.

Furthermore, if the person's REAL complaint is about the unpleasant security measures, cramped seats, and air pressure changes, complaining about the drinks, even if the complaint gets the drinks to improve, will not really optimize much.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-07T15:14:13.535Z · LW(p) · GW(p)

Furthermore, if the person's REAL complaint is about the unpleasant security measures, cramped seats, and air pressure changes, complaining about the drinks, even if the complaint gets the drinks to improve, will not really optimize much.

Well, my real complaint is about both/all of those things. It is possible to have multiple complaints, you know; and also it is possible to improve more than one thing, ever.

Moving back to real life, someone is willing to complain about the drinks while someone else is being mugged.

But this generalizes. Someone is willing to complain about being mugged while someone else is being violently assaulted. Someone is willing to complain about being violently assaulted while someone else is imprisoned and tortured. And so on...

There's no law that says we have to find The Worst Problem, devote all our resources to fixing it, and totally ignore every other problem that humanity has while The Worst Problem persists. Such a policy would lead to a rather horrifying world.

As always, a relevant xkcd.

Replies from: Jiro
comment by Jiro · 2013-07-07T17:38:42.737Z · LW(p) · GW(p)

There's no law that says we have to find The Worst Problem, devote all our resources to fixing it, and totally ignore every other problem that humanity has while The Worst Problem persists. Such a policy would lead to a rather horrifying world.

Something similar has been seriously argued here for donations to charity: you should donate all your money to the single charity that would do the most good (unless you're a millionaire who can donate so much money that the charity will reduce the size of the problem to below the size of another problem).

http://lesswrong.com/lw/elo/a_mathematical_explanation_of_why_charity/ http://lesswrong.com/lw/gtm/when_should_you_give_to_multiple_charities/ http://lesswrong.com/lw/aid/heuristics_and_biases_in_charity/

Some of the comments have good arguments against this, however.
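
The core of that argument is simple arithmetic: as long as your donations are too small to change any charity's marginal cost-effectiveness, the total good you do is linear in how you split your money, and a linear function is maximized by putting everything on the single best option. A toy sketch in Python (the charity names and impact figures below are invented purely for illustration):

    # Toy model: constant marginal good per dollar for three hypothetical charities.
    # The linearity assumption holds only for donors too small to move these rates.
    impact_per_dollar = {"charity_A": 3.0, "charity_B": 5.0, "charity_C": 1.5}

    budget = 1000.0

    def total_good(allocation):
        """Total good done, given dollars allocated to each charity."""
        return sum(impact_per_dollar[c] * dollars for c, dollars in allocation.items())

    # Splitting the budget evenly:
    split = {c: budget / 3 for c in impact_per_dollar}
    # Giving everything to the single most effective charity:
    best = max(impact_per_dollar, key=impact_per_dollar.get)
    all_in = {c: (budget if c == best else 0.0) for c in impact_per_dollar}

    print(total_good(split))   # ~3166.67
    print(total_good(all_in))  # 5000.0 -- the corner solution wins

The millionaire caveat above corresponds to dropping the linearity assumption: a large enough donor changes the marginal rates, and the all-in answer no longer automatically holds.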

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-07T18:09:05.241Z · LW(p) · GW(p)

Quite so, and I agree with this argument in the charity case; I just don't think it generalizes to a strategy for dealing with Problems In General.

comment by DSherron · 2013-07-01T23:58:16.526Z · LW(p) · GW(p)

That honestly seems like some kind of fallacy, although I can't name it. I mean, sure, take joy in the merely real, that's a good outlook to have; but it's highly analogous to saying something like "Average quality of life has gone up dramatically over the past few centuries, especially for people in major first world countries. You get 50-90 years of extremely good life - eat generally what you want, think and say anything you want, public education; life is incredibly great. But talk to some people, I absolutely promise you that you will find someone who, in the face of all that incredible achievement, will be willing to complain about [starving kid in Africa|environmental pollution|dying peacefully of old age|generally any way in which the world is suboptimal]."

That kind of outlook not only doesn't support any kind of progress, or even just utility maximization, it actively paints the very idea of making things even better as presumptuous and evil. It does not serve for something to be merely awe-inspiring; I want more. I want to not just watch a space shuttle launch (which is pretty cool on its own), but also have a drink that tastes better than any other in the world, with all of my best friends around me, while engaged in a thrilling intellectual conversation about strategy or tactics in the best game ever created. While a wizard turns us all into whales for a day. On a spaceship. A really cool spaceship. I don't just want good; I want the best. And I resent the implication that I'm just ungrateful for what I have. Hell, what would all those people that invested the blood, sweat, and tears to make modern flight possible say if they heard someone suggesting that we should just stick to the status quo because "it's already pretty good, why try to make it better?" I can guarantee they wouldn't agree.

Replies from: James_K, Kaj_Sotala, NancyLebovitz, dspeyer
comment by James_K · 2013-07-02T05:38:12.025Z · LW(p) · GW(p)

Nonetheless it is important to have a firm grasp on the progress we have already attained. It's easy to go from "we haven't made any real progress" to "real progress is impossible". And so we should acknowledge the achievements we have made to date, while always striving to build on them.

comment by Kaj_Sotala · 2013-07-02T04:39:30.033Z · LW(p) · GW(p)

You're right that it would indeed be a mistake to say "things are already great, let's stop here". But then, "things are really awful, so let's get better" doesn't sound quite right either. The attitude I would lean towards, and which I think is compatible with the quote, is "things are already pretty awesome, how could we make them even more awesome?".

Replies from: DSherron
comment by DSherron · 2013-07-02T14:02:23.748Z · LW(p) · GW(p)

The ideal attitude for humans with our peculiar mental architecture probably is one of "everything is amazing, also let's make it better", just because of how happiness ties into productivity. But that would be the correct attitude regardless of the actual state of the world. There is no such thing as an "awesome" world state, just a "more awesome" relation between two such states. Our current state is beyond the wildest dreams of some humans, and hell incarnate in comparison to what humanity could achieve. It is a type error to say "this state is awesome"; you have to say "more awesome" or "less awesome" compared to something else.

Also, such behavior is not compatible with the quote. The quote advocates ignoring real suboptimal sections of the world and instead basking in how much better the world is than it used to be. How are you supposed to make the drinks better if you're not even allowed to admit they're not perfect? I could, with minor caveats, get behind "things are great, let's make them better", but that's not what the quote said. The quote advocates pretending that we've already achieved perfection.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2013-07-03T09:47:16.353Z · LW(p) · GW(p)

There is no such thing as an "awesome" world state, just a "more awesome" relation between two such states.

Sure. But "things are pretty awesome" is faster to say than "our current world is more awesome than most of the worlds that have existed in history".

The quote advocates pretending that we've already achieved perfection.

That's a valid interpretation of the quote, but not the only one. The way I read it, specifically the way it focused on the drinks and the word "complain", it wasn't so much saying that we should pretend we've already achieved perfection, but rather that we should keep in mind what's worth feeling upset over and what isn't. In other words, don't waste your time complaining about drinks to anyone who could hear, but instead focus your energies on something that you can actually change and which actually matters.

comment by NancyLebovitz · 2013-07-03T16:49:01.393Z · LW(p) · GW(p)

I don't think the comparison is to complaining about very bad things happening elsewhere, it's more like "we've got it so much easier than our forebears, why do people still complain about misspellings on the internet? They should be grateful they have an internet."

One fallacy is that the person who says this sort of thing fails to realize that complaining about complaining is still complaining.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-07-03T17:38:45.902Z · LW(p) · GW(p)

Though people have complained about stuff that isn't perfect now even when the imperfect stuff was less imperfect than things had previously been pretty much as far back as we have records, so complaining about that isn't necessarily an instance of the thing being complained about.

Said less obscurely: if we assign the label kvetching to complaining about things even in the face of continual improvement, complaining about kvetching is not necessarily kvetching, since kvetching has continued unabated for generations.

comment by dspeyer · 2013-07-02T03:01:24.163Z · LW(p) · GW(p)

I'm not saying we should settle for anything. Certainly not.

But to forget the awesomeness that already exists is a mistake with consequences. When looking at the big picture, it's important to realize that our current trajectory is upwards. When planning for something like space travel, it's important to remember that air travel sounded just as crazy a hundred years ago. And when thinking about thinking, it's worth remembering that this same effect will hit whatever awesome thing we think of next.

Replies from: DSherron, Eugine_Nier
comment by DSherron · 2013-07-02T13:52:00.185Z · LW(p) · GW(p)

Sure, I agree with that. But you see, that's not what the quote said. It's actually not even related to what the quote said, except in very tenuous ways. The quote condemned people complaining about drinks on an airplane; that was the whole point of mentioning the technology at all. I take issue with the quote as stated, not with every somewhat similar-sounding idea.

comment by Eugine_Nier · 2013-07-04T05:51:41.738Z · LW(p) · GW(p)

Another consequence is to see that all this talk of a post-scarcity society is nonsense.

Replies from: Desrtopa
comment by Desrtopa · 2013-07-04T06:57:50.694Z · LW(p) · GW(p)

Why?

We may quickly come to take major developments for granted, but that doesn't mean that future developments can't, for example, restructure society so that nobody needs to work.

People might quickly come to think of it as normal, but that doesn't mean things would still be basically the same as they were before.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-07-07T06:05:45.602Z · LW(p) · GW(p)

We may quickly come to take major developments for granted, but that doesn't mean that future developments can't, for example, restructure society so that nobody needs to work.

What do you mean by "nobody needs to work"? The standard meaning is that nobody needs to work to provide everyone with a "decent" standard of living. The problem is that popular conceptions of what constitutes a "decent" standard of living change as the average standard itself changes.

Replies from: Desrtopa
comment by Desrtopa · 2013-07-07T06:37:53.669Z · LW(p) · GW(p)

I mean that everyone will have access to an abundance of resources without needing to perform any labor.

In terms of material goods and resources, it's possible for technological advances to reach a point where any human labor is more or less irrelevant in terms of total productivity.

Replies from: Viliam_Bur, Eugine_Nier
comment by Viliam_Bur · 2013-07-10T07:44:50.214Z · LW(p) · GW(p)

Unless we have computers that can organize the work of other computers, there will still be some human work necessary. I mean, we can have a machine that mines coal, and then the humans don't have to mine coal. But then we need humans to operate this machine, repair this machine, invent a better machine, and perhaps do some research about what we can do after we run out of the coal to mine. The day this meta-labor is not necessary is pretty much the day of Singularity.

There is also something strange about this process. It eliminates the cognitively trivial work first, which increases the entry cost to the job market. I mean, in the past a person could start with some trivial work, such as moving things from place A to place B. You could have a retarded person do that and contribute to the society meaningfully. These kinds of jobs will be gone first; and some of the highly-qualified jobs will be necessary until the Singularity.

I can imagine a world where everyone works, and I can try to imagine a post-Singularity utopia where nobody needs to work. The difficult part is the interval between the two -- for example, a situation where 95% of people would not have to work at all, and the remaining 5% would have to spend decades studying hard just to be able to do something meaningful in their jobs, because all the simpler tasks have already been automated. At the same time, the 95% would most likely watch the working 5% enviously, making sure they don't have any significant reward for their sacrifices, because that would be against our egalitarian instincts.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-07-10T09:58:59.069Z · LW(p) · GW(p)

I can imagine a world where everyone works, and I can try to imagine a post-Singularity utopia where nobody needs to work. The difficult part is the interval between that

Are we at the beginning of that period now, in the developed world? Is that why we have an underclass of people living their lives on welfare -- there simply isn't enough work needed, of the sort that they are capable of, that can't be done more cheaply with machines?

Replies from: Viliam_Bur, army1987
comment by Viliam_Bur · 2013-07-10T11:25:08.921Z · LW(p) · GW(p)

I am not qualified to answer this, but it seems that way to me. I am also not saying that this is the only or the greatest problem, just that it already exists.

Let's start with a naive question: How is it possible that so many people are unemployed and yet there are so many things that should be done but no one (or not nearly enough people) is doing them?

This is typically answered by: Not everything that is useful is also profitable. Some things are not done because it is not possible (or not easy enough for an average person, with all the natural lack of strategy) to make money doing them. All those unemployed people are trying to get some more money; of course they will not choose activities they can't make money from.

But this is not a complete answer. First, there are many non-profitable activities, and yet many people are doing them. So perhaps the causality is not (merely) "can make money -> does the work", but (also, significantly) "does many things -> makes money on average". (Or using the signalling hypothesis: Middle-class people are more likely to do non-profitable activities, because it signals that they make enough money to live decently and don't need the extra penny.) Second, if lack of money were the only problem, then every problem could be fixed by getting some funding. However, I suspect that if you get funding, the first people to come will be those who are already employed, and they will come if your offer is better (better paid or more interesting). The naive assumption would be that the unemployed people would get there first, as soon as your offer is better than being unemployed. -- At this point I admit that I have actually never tried creating jobs specifically for unemployed people, so this is just a guess. My experience suggests that when something needs to be done, the busiest people volunteer first. (Which is probably the trait that makes them so busy.)

As a specific example, ten or fifteen years ago everyone wanted to have a webpage, and the whole Java EE business did not exist yet; you could make decent money with just HTML, or HTML with a little PHP. At one point I had far more offers than I could handle, and they were well paid. At that same moment, there was something like 15% or 20% unemployment in my country. Well, the unemployed people remained unemployed, as I was slowly making one web page after another. It was the lack of education and/or skills that prevented them from taking my work; I would not have tried to stop them. These days there is a lack of Java EE programmers, and almost everyone is asking me whether I know one, because their company needs one. But when I was a high school teacher and tried to teach children programming, a lot of them resisted, because it's "boring and useless". To avoid a possible connotation: I am not saying all children are like this, nor even that most of them are. I have also taught teenagers programming by e-mail, because they wanted to learn but lived in some small town without good teachers, found some of my blogs, and contacted me. Also, I am not concluding that unemployed people deserve it; not being strategic is a natural human condition. I am just saying that when there is an abundance of jobs alongside high unemployment, lack of skills seems to be the cause, although many people would say otherwise.

In the past there were many things that a person without an education could do with very brief training. Today there are not enough such jobs, compared with the number of people who formally completed some mandatory education but never studied anything deeply enough. And it goes against people's intuition; it's like: "Are you saying that a good, honest, hard-working person is useless today?" And the answer, unfortunately, is: if they don't have the necessary skills, and are not strategic enough to gain them, then they cannot contribute meaningfully to the economy today. And it feels completely evil. -- We could create some economically meaningless jobs for these people, to give them some status and an illusion of purpose. (Actually, we are already doing this, but perhaps we should do it more.) But it would not solve the problem of the lack of highly skilled people. We wouldn't have unemployment anymore, but we still wouldn't have enough web pages, a cure for cancer, or whatever. (The problem could actually get even worse, if those meaningless jobs became attractive to skilled people too.)

Also, in a situation where there is not enough work, reducing the work week feels like a natural solution; but it's not. Instead of having 20% unemployment, how about reducing the work week from 5 days to 4, so everyone can have a job? If the work does not require any education or skill, this solution may work. But what about jobs that require a lot of education? You may have 80% of the work week, but you still need 100% of the education. Society as a whole would have to spend more study-hours to produce the same number of work-hours. And the more technologically developed we get, the less we will need to work, but the more we will need to study. -- Using the veil of ignorance, the best solution would be to have a few people study hard and work hard all their lives, and the rest of society just have fun all the time; welcome to Omelas, the city of maximum total happiness. Again, this feels contrary to our concept of justice. It would be fair if people who worked hard could have more fun. But it is more efficient if people who have worked hard continue to work even harder, because they already have the skill and the experience. And by "work" I mean education and, uhm, "luminous" work.
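
To put toy numbers on that last point (all figures below are hypothetical, chosen only to make the arithmetic visible): if each skilled worker needs a fixed education investment no matter how many hours per week they then work, a shorter work week means more workers, and therefore more total study-hours, for the same output.

    # Toy model: education is a fixed per-worker cost, independent of weekly hours.
    EDUCATION_HOURS_PER_WORKER = 10_000  # hypothetical
    WORK_HOURS_NEEDED_PER_WEEK = 4_000   # society's total weekly demand; also hypothetical

    for days_per_week in (5, 4):
        hours_per_worker = 8 * days_per_week
        workers_needed = WORK_HOURS_NEEDED_PER_WEEK / hours_per_worker
        total_education = workers_needed * EDUCATION_HOURS_PER_WORKER
        print(f"{days_per_week}-day week: {workers_needed:.0f} workers, "
              f"{total_education:,.0f} education-hours")

    # 5-day week: 100 workers, 1,000,000 education-hours
    # 4-day week: 125 workers, 1,250,000 education-hours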

comment by A1987dM (army1987) · 2013-07-10T10:30:44.220Z · LW(p) · GW(p)

See this and the links therein.

comment by Eugine_Nier · 2013-07-09T03:24:26.639Z · LW(p) · GW(p)

I mean that everyone will have access to an abundance of resources without needing to perform any labor.

What do you mean by "abundance"?

Replies from: Desrtopa
comment by Desrtopa · 2013-07-09T03:28:17.553Z · LW(p) · GW(p)

More than they could possibly use up for any practical, non-signalling related purposes.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-07-09T03:50:28.034Z · LW(p) · GW(p)

You may want to reread the original quote.

Replies from: army1987
comment by A1987dM (army1987) · 2013-07-09T14:02:16.623Z · LW(p) · GW(p)

Drinks do also have practical, non-signalling-related purposes.

But yeah, a society where one of the main things you'd have to miss out on if you didn't work is decent drinks on a plane would definitely count as post-scarcity by my standards.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-07-13T05:37:52.586Z · LW(p) · GW(p)

So would you say the developed world is currently a post-scarcity society?

Replies from: TimS, army1987
comment by TimS · 2013-07-13T05:43:11.452Z · LW(p) · GW(p)

Your question implies you think that the main complaints in the developed world involve decent drinks on planes and similarly non-dire concerns. Not sure I agree with that implication.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-07-14T20:44:39.420Z · LW(p) · GW(p)

Pick a dire concern from the developed world today; now, how would you explain to an average westerner of ~200 years ago why that concern is dire?

Replies from: Jiro, TimS, Caspian
comment by Jiro · 2013-07-15T01:42:47.906Z · LW(p) · GW(p)

"I'm concerned about nuclear war. It's like the wars you know, but it's a lot more deadly and whole areas can be left uninhabitable for centuries."

"I'm concerned about dying of cancer. Cancer is a disease that many people eventually get once we have reduced the rate of dying from other things."

"I'm concerned about the NSA reading my email. You don't have email 200 years ago, but surely you understand how bad it is for the government to spy on people. Imagine that every time you wrote someone a letter, the government hired a scribe to copy it and filed it so they could read it whenever they wanted."

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-07-15T02:27:33.818Z · LW(p) · GW(p)

The first and last problems on your list aren't related to scarcity. As for the second one:

"I'm concerned about dying of cancer. Cancer is a disease that many people eventually get once we have reduced the rate of dying from other things."

You left out the part where you get them to understand why this is dire. If you told them the life expectancy of the typical member of a developed country, they'd assume you were describing a utopian society.

Replies from: Jiro, wedrifid, JoshuaZ
comment by Jiro · 2013-07-15T04:06:47.404Z · LW(p) · GW(p)

I think that someone from 200 years ago would readily understand that people don't want to die, and that having a longer life expectancy and then dying is still not as good as not dying. Yes, there's always the possibility that they may think that dying is good, but it isn't, really; that's just a sour-grapes rationalization that we only make in the first place because death sucks.

I'd also point out that nuclear war and NSA spying can only happen in a developed society, because it takes a lot of resources to do those things. 200 years ago we were simply incapable of making a nuclear weapon, and even if space aliens had dropped the plans for one in their lap, they wouldn't have been able to build one; it takes a huge infrastructure to make one, which does indeed imply having overcome many scarcity limitations.

Replies from: TimS
comment by TimS · 2013-07-15T04:31:29.568Z · LW(p) · GW(p)

There's a lot going on in the conversation right now.

I just want to note that you are having a conversation about a slightly different topic than what army1987 was talking about - I think Eugine_Nier is right that many of your examples are not about scarcity per se.

comment by wedrifid · 2013-07-15T03:27:45.869Z · LW(p) · GW(p)

The first and last problem on your list aren't related to scarcity.

This seems to be a problem with your question, not the answer.

Replies from: JoshuaZ
comment by JoshuaZ · 2013-07-15T03:33:00.455Z · LW(p) · GW(p)

Eugine's question is in the context of a larger conversation.

Replies from: TimS, wedrifid
comment by TimS · 2013-07-15T04:15:04.140Z · LW(p) · GW(p)

Sure, but he is conflating utopian and post-scarcity. It's not obvious to me that they are isomorphic.

comment by wedrifid · 2013-07-15T04:14:58.726Z · LW(p) · GW(p)

Eugine's question is in the context of a larger conversation.

Indeed, and said larger conversation includes TimS expressing confusion about how the question relates to the rest of the conversation. That being the case, it is an error to suggest (or imply) that the answers to the question are non sequiturs simply because Jiro answered the question rather than using it as a chance to support some scarcity-related position or other.

comment by JoshuaZ · 2013-07-15T03:03:53.125Z · LW(p) · GW(p)

I'm confused by why your comment got downvoted. Not only is it correct in the context that scarcity is what is under discussion, but the point that modern developed societies resemble what someone in the past would likely have considered a utopia should be uncontroversial. Long lifespans and good medical care are among the things mentioned in the original book "Utopia". Other historical utopian literature has this aspect as well, along with emphasizing education and low infant mortality. New Atlantis would be a prominent example.

comment by TimS · 2013-07-14T21:01:38.666Z · LW(p) · GW(p)

I don't understand your question. I'm not sure I even understand the relevance of your question to the topic of post-scarcity and what post-scarcity might be like.

It seems pretty easy to explain current serious problems to people from the far past or far future (I'm not sure which you mean). Drinks on airplanes are just not a serious problem - it might be hard to explain non-serious problems to people from very different cultural contexts.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-07-19T07:06:25.312Z · LW(p) · GW(p)

My point is that if one were to ask someone ~100-200 years ago to imagine a post-scarcity society, they'd imagine something that resembles our current society; yet we don't think of ourselves as post-scarcity. Similarly, I doubt the societies of ~100-200 years in the future will think of themselves as post-scarcity, even if they'd seem that way to us at first glance.

Replies from: Jiro, TheOtherDave
comment by Jiro · 2013-07-19T17:42:27.021Z · LW(p) · GW(p)

If I asked someone from 100-200 years ago to imagine a post-scarcity society, I'd expect them to say something like "you can have as much of ___ as you want". Furthermore, I think they'd clearly understand the difference between "have more of it than we get now" and "have as much as we want", whether it's lifespan, food and shelter, or anything else. I don't see why someone from that time period would think a "post-scarcity" society means a society that merely has less scarcity.

"Someone from the past would say our level of something is far beyond what they would have hoped for" doesn't equate to "someone from the past would say that our level of something is post-scarcity". Presuming they speak English and the meaning of the term "post-scarcity" can be explained to them, I don't see why they would confuse the two.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-07-19T18:35:47.225Z · LW(p) · GW(p)

I would expect a typical member of my society, given the prompt "A post-scarcity society is one where you can have as much of _ as you want" and instructions to fill in the blank, to offer things like "food", "housing," "consumer goods", "entertainment," "leisure", "Internet access," "health care", etc.

Some of those I would not expect a typical member of my society's 1813 ancestors to offer.

I would not expect a typical member of my society to offer things like "emotional nurturing," "challenge," "work that needs to be done," "friendship," "love", "knowledge," "years of life"... but I would not be greatly surprised by those answers from any given individual. If I woke up from a coma N years from now and those answers were typical, I would conclude that society had changed significantly.

I would be surprised by answers like "suffering," "the color blue", "emptiness", "corporeal existence," "qualia", "mortality." If I woke up from a coma N years from now and those answers were typical, I would conclude that society had become something unrecognizable.

Replies from: Jiro, wedrifid, Eugine_Nier
comment by Jiro · 2013-07-19T18:58:35.082Z · LW(p) · GW(p)

There's a difference between "more than I thought I could get" and "as much as I want", though.

Eugine seems to think that someone from the past would call our society post-scarcity because it provides more of some things than he would have hoped for, rather than as much as he could possibly want. I think that given the definition of a post-scarcity society as one where you can get as much of something as you want, someone from the past would not consider our society to be a post-scarcity society, since it's very clear that some things--even things that he himself wants--are in limited supply.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-07-21T04:11:17.051Z · LW(p) · GW(p)

There's a difference between "more than I thought I could get" and "as much as I want", though.

My point is that the concept of "post-scarcity" is meaningless. It only seems meaningful because our intuitions conflate the two, or rather the amount of something someone wants at any given time is just a little more than what he thinks he can get. Of course, once the amount he thinks he can get changes, the amount he wants will also change, but at the time the amount he wanted really was that small.

Replies from: Jiro
comment by Jiro · 2013-07-21T15:49:51.303Z · LW(p) · GW(p)

I don't believe that. People even before modern times talked about living forever.

comment by wedrifid · 2013-07-19T19:23:16.351Z · LW(p) · GW(p)

I would be surprised by answers like "suffering," "the color blue", "emptiness", "corporeal existence," "qualia", "mortality." If I woke up from a coma N years from now and those answers were typical, I would conclude that society had become something unrecognizable.

The "corporeal existence" one actually fits well with what future people may consider a scarce luxury.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-07-20T00:59:17.298Z · LW(p) · GW(p)

Sure, I can imagine a future for which that's true. Ditto suffering, mortality, and qualia. The others are a bit beyond my imagination, but I suspect if I sat down and worked at it for a while I could come up with something.

Replies from: wedrifid
comment by wedrifid · 2013-07-20T05:12:48.075Z · LW(p) · GW(p)

Sure, I can imagine a future for which that's true. Ditto suffering, mortality, and qualia.

The difference is that those wishes have to be contrived and would be considered insane (or confused) by local standards. Corporeal existence is something that people with current human values are likely to consider a luxury in plausible transhuman circumstances.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-07-20T05:35:32.771Z · LW(p) · GW(p)

Hm.

I can imagine a future in which the default mode of existence for most people is incorporeal (say, as uploads), and being downloaded into a physical body is a luxury. I can imagine a future in which the default mode of existence for most people lacks subjective experience (again, say, as uploads which mostly run "on autopilot," somewhat like a trance state, perhaps because computing subjective experience is expensive relative to computing other behavior), and being run with subjective experience is a luxury. (I don't presume p-zombiehood here; I expect there to be demonstrable differences between these states.)

Neither of those strike me as requiring insanity or confusion. Whether the corresponding scenarios are contrived or plausible I'm not prepared to argue; they don't seem differentially one or the other to me, but I'll accept other judgments. (If your grounds for believing them differentially contrived are articulable, I'm interested; you might convince me.)

Suffering and mortality, I'll grant you, require me to essentially posit fashion, which can equally well (or poorly) justify anything.

comment by Eugine_Nier · 2013-07-21T04:06:55.702Z · LW(p) · GW(p)

I would not expect a typical member of my society to offer things like "emotional nurturing," "challenge," "work that needs to be done," "friendship," "love", "knowledge," "years of life"...

Some of those answers would be far more common in certain past eras.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-07-21T14:22:35.387Z · LW(p) · GW(p)

In what eras would you expect a typical respondent to have provided which of those answers?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-07-25T03:01:19.725Z · LW(p) · GW(p)

Well, "love" would have been more common during the late 60's-early 70's to state one obvious example.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-07-25T06:07:23.481Z · LW(p) · GW(p)

Just to be clear: do you expect that a typical respondent during the late 60's-early 70's, given the prompt "A post-scarcity society is one where you can have as much of _ as you want" and instructions to fill in the blank, would reply "love"?

comment by TheOtherDave · 2013-07-19T15:48:33.708Z · LW(p) · GW(p)

I suspect that in 1813 there were people who worried about whether they would find themselves without enough food, shelter, medicine, or defense from hostile outsiders.

If I described to them the level of food, shelter, medicine, and defense that their counterparts in 2013 had available, I expect they would go "Wow! That's amazing! Why, with that much abundance, I would never worry again!"

If I then explained to them how often their counterparts in 2013 worried about whether they would find themselves without enough food, shelter, medicine, or defense from hostile outsiders, I would expect several reactions. One is incredulity. Another is some variant of "well, I guess some people are never satisfied." A third is "Huh. Yeah, I guess 'enough abundance' is something we approach only asymptotically."

If I explained to them the other stuff their counterparts in 2013 worried about, and how anxious they sometimes became about such things, I'd expect a similar range of reactions.

For my own part, I think " 'Enough abundance' is something we approach only asymptotically." is a pretty accurate summary.

So, sure. As we progress from "even wealthy people routinely suffer from insufficient food, shelter, medicine, and defense" to "even middle-class people routinely suffer from IFSMaD" to "poor people routinely suffer from IFSMaD" to "people suffer from IFSMaD only in exceptional circumstances" to "nobody I've ever met has ever heard of anyone who has ever suffered from IFSMaD", we will undoubtedly identify other sources of suffering and we will worry about those.

Whether we are at that point in a "post-scarcity" environment or not is largely a semantic question.

comment by Caspian · 2013-07-15T14:22:51.677Z · LW(p) · GW(p)

Getting back to post-scarcity for people who choose not to work, and what resources they would miss out on, a big concern would be not having a home. Clearly this is much more of a concern than drinks on flights. The main reason it is not considered a dire concern is that people's ability to choose not to work is not considered that vital.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-07-19T06:51:23.441Z · LW(p) · GW(p)

So get welfare or whatever other related social program is available in your area.

Replies from: Caspian
comment by Caspian · 2013-07-19T11:32:29.569Z · LW(p) · GW(p)

That's not intended for people who could work but choose not to. They require you to regularly apply for employment. The applications themselves can be stressful and difficult work if you don't like self-promotion.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-07-21T04:01:39.084Z · LW(p) · GW(p)

The applications themselves can be stressful and difficult work if you don't like self-promotion.

Only if you care about whether you get the job.

comment by A1987dM (army1987) · 2013-07-13T09:39:10.984Z · LW(p) · GW(p)

Not quite, but almost. (Are you alleging that the unemployed on welfare can afford intercontinental flights, though not ones with good drinks? [EDIT: But yeah, for an unemployed person there seldom are practical, non-signalling reasons to need intercontinental flights. I could probably come up with better examples if I were less sleep-deprived.])

Replies from: elharo
comment by elharo · 2013-07-13T10:52:01.423Z · LW(p) · GW(p)

The last time I was unemployed I took an intercontinental flight from NYC to SFO for a job interview. I'd classify that as a practical, non-signalling reason. :-)

Replies from: CAE_Jones
comment by CAE_Jones · 2013-07-13T15:47:42.611Z · LW(p) · GW(p)

Hypothesis: you had savings for such a situation, or got aid from someone else?

(I would also classify it as practical, non-signalling, given the current information. :) )

comment by christopheg · 2013-07-02T07:37:26.655Z · LW(p) · GW(p)

I'm certainly cynical, but I see the point of complaining about the drinks.

Not all airplane tickets are sold at the same price, but basically everybody on the plane gets the same share of progress, science, technology, and human labour and sweat.

How, then, to account for the pricing difference?

The drinks, people.

comment by dspeyer · 2013-07-01T20:25:02.397Z · LW(p) · GW(p)

The Milky-Way galaxy is mind-bogglingly big.

Eh," you say, "100,000 light years in diameter, give or take a few."

Listen, pal: just because you can measure something in light years doesn't mean you truly understand how big it really is.

By the time you carve our galaxy up into units you have actual, personal experience with, you'll have to start using numbers that you won't live long enough to count to.

That's okay. The galaxy doesn't care. In fact, not caring is one of the things it does best.

That, and being really, really, really big.

--Howard Taylor

Replies from: fractalman
comment by fractalman · 2013-07-07T07:34:59.317Z · LW(p) · GW(p)

Our PLANET is mind-numbingly big. If you don't believe me, go to the Grand Canyon and look down. Did I say go to the Grand Canyon? Make that HIKE to the Grand Canyon from Yellowstone National Park. Still not convinced? ROW across the ocean to China. Bonus points if you can hit Japan without a GPS.

So in a twisted sort of sense, the Milky Way galaxy is less mind-bogglingly big: our [or at least my] built-in distance-comprehension hardware shorts out so quickly when attempting to deal with it that we don't really even notice, and so we switch to rigorous numbers, which do not have this short-circuiting problem.

Replies from: MixedNuts, dspeyer
comment by MixedNuts · 2013-07-08T06:08:44.884Z · LW(p) · GW(p)

It seems comprehensibly big. It would take between three and four years to walk around the Earth, walking for a sustainable number of hours at a reasonable pace every day, if you could walk around it in a straight line.

Replies from: DanielLC, fractalman
comment by DanielLC · 2013-07-23T21:22:59.862Z · LW(p) · GW(p)

Walk on the surface of a sphere, in a straight line?

Replies from: pragmatist
comment by pragmatist · 2013-07-23T21:28:04.953Z · LW(p) · GW(p)

A straight line in elliptic geometry, presumably.

Replies from: DanielLC
comment by DanielLC · 2013-07-23T21:57:47.225Z · LW(p) · GW(p)

That's called a "geodesic". I'm not sure why they don't just call it a "line", but they don't.

comment by fractalman · 2013-07-08T08:55:12.930Z · LW(p) · GW(p)

[joke mode] congratulations, you just walked into the ocean. [/joke mode]

Now, how about looking down at the Grand Canyon floor from the glass platform, to engage your visual cortex?

comment by dspeyer · 2013-07-08T18:00:32.410Z · LW(p) · GW(p)

I think that shorting out effect is what is meant by "mind-bogglingly".

People have walked from Yellowstone to the Grand Canyon. I couldn't do it myself, but I can read their accounts and understand them.

Earth is big, but our minds are amazed, not boggled. It's with the galaxy that we just start thinking "system error".

Replies from: Kawoomba
comment by Kawoomba · 2013-07-08T18:09:37.085Z · LW(p) · GW(p)

An easy way to bridge such distances is to construct a lot of intermediate steps. Take the Milky Way, containing 100 to 400 billion stars (let's take 250 billion). The problem of grasping 250 billion stars, starting from just our sun, is not too dissimilar from imagining someone with 250 billion dollars, starting from just one. Lots of intermediate steps: so and so many dollars for a current-generation smart phone, so and so many smart phones for, say, a villa, so and so many villas to buy, say, Microsoft. Of course different examples work differently well, but you get the picture, I suppose.

Incidentally, the number of US citizens, multiplied by a thousand, is higher than the number of stars in the Milky Way, so if you find yourself a good way of visualizing the former, you can transfer that understanding to the latter, then just unpack the "thousand".

Nothing interesting, not even the size of our Hubble volume, is more than a couple dozen orders of magnitude away, which makes it -- in my opinion -- quite accessible even to our widdle bwains.

Replies from: solipsist, Desrtopa
comment by solipsist · 2013-07-08T18:17:28.434Z · LW(p) · GW(p)

Take the Milky Way, containing 100 to 400 billion stars (let's take 250 billion).

...

Incidentally, the number of US citizens is higher than the number of stars in the Milky Way, so if you find yourself a good way of visualizing the former, you can transfer that understanding to the latter.

So, there are more than 100 billion US citizens?

Replies from: Kawoomba
comment by Kawoomba · 2013-07-08T18:18:27.782Z · LW(p) · GW(p)

Thanks for noting, corrected.

Replies from: solipsist
comment by solipsist · 2013-07-08T20:09:20.615Z · LW(p) · GW(p)

You're welcome.

Replies from: Kawoomba
comment by Kawoomba · 2013-07-08T20:17:50.302Z · LW(p) · GW(p)

To clarify:

The point is that a few orders of magnitude can be visualized / grasped just by adding another step to the ladder, chopping off only as large a step as you can take at a time.

Then even a whole lotta orders of magnitude just become a short sequence of steps, going off of concepts you find more familiar.

I often start with 10^3 as "number of students in my high school"; I have a distinct image of a school photo in the school yard with everyone on it. After that, e.g., the number of images (each showing one yard-full of students) in a photo album. Number of photo albums that could fit on an Ikea shelf. Number of Ikea shelves in a library. Et cetera; that alone should get you to 10^10 or so.

Suddenly the steep mountain slope has a stairway, and doesn't seem quite so daunting anymore.
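That ladder is easy to make explicit; here is a toy Python sketch, where each step size is a rough guess rather than an exact figure:

    # Toy sketch of the ladder above; each factor is a rough assumption.
    ladder = [
        ("students in one school photo", 1_000),
        ("photos in an album", 100),
        ("albums on an Ikea shelf", 100),
        ("shelves in a library", 1_000),
    ]
    total = 1
    for name, factor in ladder:
        total *= factor
        print(f"x {factor:>5,} {name} -> 10^{len(str(total)) - 1}")
    # four familiar steps already reach 10^10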

Replies from: army1987
comment by A1987dM (army1987) · 2013-07-09T03:07:12.436Z · LW(p) · GW(p)

I often start with 10^3 as "number of students in my high school"; I have a distinct image of a school photo in the school yard with everyone on it. After that, e.g., the number of images (each showing one yard-full of students) in a photo album. Number of photo albums that could fit on an Ikea shelf. Number of Ikea shelves in a library. Et cetera; that alone should get you to 10^10 or so.

Imagining grains of sand can get you to bigger numbers faster.

comment by Desrtopa · 2013-07-15T19:00:14.567Z · LW(p) · GW(p)

Nothing interesting, not even the size of our Hubble volume, is more than a couple dozen orders of magnitude away, which makes it -- in my opinion -- quite accessible even to our widdle bwains.

A couple dozen orders of magnitude of nearly anything will tend to stretch beyond human borders of intuitive comprehension in either direction.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-07-15T20:04:29.843Z · LW(p) · GW(p)

A couple dozen orders of magnitude = 1 mole (roughly). The relationship between a single molecule and a handful of the macroscopic substance.

Replies from: bogdanb, Desrtopa
comment by bogdanb · 2013-07-16T01:08:08.497Z · LW(p) · GW(p)

Yes, I can handle numbers in terms of orders of magnitude. But I challenge you to picture yourself the size of a molecule, sitting "on the floor", looking towards your real body, and visualize what you would see without doing any calculations.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-07-16T12:21:31.218Z · LW(p) · GW(p)

I'm not sure what the thought experiment is. For me to be shrunk to the size of a molecule, all of the molecules I am made of would have to be shrunk, as would the light waves I see by, leaving my perception of my body unchanged. I don't think this is the scenario you mean, but I don't know in what way to change this to make it the one you mean.

Replies from: bogdanb
comment by bogdanb · 2013-07-16T22:25:49.265Z · LW(p) · GW(p)

I just meant in a semi-magical, non-physical way, only for visualising scale. Like a computer simulation of the world that scales up everything other than you twenty orders of magnitude, then uses some hacked-in rendering convention that lets you “see” without trouble from stuff like wavelengths.

Or if you want something more physical-like, imagine looking from "floor level" at a human statue 10 million light-years (relative to our c) in size, of correct proportions and colors (but no universe-crushing gravity), in a non-relativistic universe (to get around light-speed issues). Do you think you could tell the difference between that and a 10,000-light-year one without seeing them side by side or using instruments?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-07-18T15:53:55.326Z · LW(p) · GW(p)

I just meant in a semi-magical, non-physical way, only for visualising scale. Like a computer simulation of the world that scales up everything other than you twenty orders of magnitude, then uses some hacked-in rendering convention that lets you “see” without trouble from stuff like wavelengths.

Then I'd see something like the ball-and-stick models that chemists build. We already know the shapes of molecules, and the photographs made of them in the last few years look just like that.

Replies from: bogdanb
comment by bogdanb · 2013-07-18T17:06:44.031Z · LW(p) · GW(p)

OK, sorry. It appears I’ve rolled a critical failure in communication :-)

I wasn’t referring to the small scale structure, just the ability to comprehend scale. Something like the way that when you’re at the foot of the mountain, the brain doesn’t really capture the difference between a 1km-tall and a 8-km tall one. Or how the distinction between a 10-story building and a 100-story one isn’t really manifest in the mind unless they’re side by side. Now take that and multiply both scales by enough orders of magnitude to span molecule-to-human scales.

Let me try a better example. Take this image. Without using symbolic math (i.e. actually figuring orders of magnitude and doing arithmetic with them), what can your brain do that simultaneously includes numbers of the scales “the width of one of the galaxy’s arms”, “the diameter of one of the stars” and “the height of a person on one of the planets”?

I mean, I don’t have to resort to math to know that ten people in a normal car would be crowded, or that a bucket of nails are hard to fit in a typical person’s pockets. I can have an intuitive comprehension (albeit inaccurate) of how much work might be needed to dig a small ditch. But I have no intuitive feel for similar problems posed at astronomical scales other than “intuition overflow, use math”. E.g., I’ve no chance of estimating the number of people needed to crowd just the solar system, let alone the galaxy, within a couple of orders of magnitude, unless I actually do at least a few back-of-the-envelope calculations.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-07-18T21:58:41.739Z · LW(p) · GW(p)

I think our intuitions work differently.

Something like the way that when you’re at the foot of the mountain, the brain doesn’t really capture the difference between a 1km-tall and a 8-km tall one.

I've walked up a 1 km hill. 8 km is Everest. I've only seen mountains that big in pictures.

Or how the distinction between a 10-story building and a 100-story one isn’t really manifest in the mind unless they’re side by side.

10 storeys is the height of some of the more substantial buildings (other than skyscrapers) in central London. 100 is a skyscraper. I'm not sure there are any buildings that tall in London.

Let me try a better example. Take this image. Without using symbolic math (i.e. actually figuring orders of magnitude and doing arithmetic with them), what can your brain do that simultaneously includes numbers of the scales “the width of one of the galaxy’s arms”, “the diameter of one of the stars” and “the height of a person on one of the planets”?

From general knowledge I'm guessing 1000 to 10000 ly for the thickness of an arm and 100,000 miles for the diameter of a star. Then it's just counting zeroes. 1 ly is 10^13 km, which is 10^13 miles. So that's 11 zeroes from the star to the arm, and 8 zeroes from a person to a star: 100,000 miles = 100,000 km, and a person is 2 m, which is equal to 1 m. ("If anyone asks, I did not tell you it was ok to do math like this.")

Ok, I'm figuring orders of magnitude and doing arithmetic with them, but that is intuitive to me.

For numbers of zeroes up to 15, a while back I posted some handy visualisations which I can't find, so here they are again. Take the solid copper earth conductor from some mains cable, which is around 1mm^2 cross-section, and cut a little piece just 1mm long. Roll it between your fingertips. That's a cubic millimetre. In your other hand pick up a 1 litre bottle of milk. You're looking at a million. One million of those copper fragments will fill the bottle. (They will weigh 10 kg, and if you do any weight training, you'll know what a 10 kg weight feels like.) One billion of them is enough to fill the space between the top of a largish dining table and the floor (3/4m high, top surface 1m by 4/3 m). One trillion will fill a few lanes of an Olympic swimming pool (50m long, 10m wide, 2m deep). Get another factor of 1000 by using coarse sand (0.1mm grain size) instead of diced copper wire, and that's 10^15.

But I have no intuitive feel for similar problems posed at astronomical scales other than “intuition overflow, use math”. E.g., I’ve no chance of estimating the number of people needed to crowd just the solar system, let alone the galaxy, within a couple of orders of magnitude, unless I actually do at least a few back-of-the-envelope calculations.

As I say, there isn't a boundary to me between intuition and calculation. As in, 10^24 just is, to me, about a mole, the relationship between one molecule and a handful of stuff. It's also a lower bound on the number of operations of individual transistors you can expect a computer to perform without a single error. A billion transistors clocked a billion times a second for a million seconds, a million seconds being 1/30 of a year, or 12 days.
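That last bit of arithmetic is easy to check; a short Python sketch using only the numbers stated above:

    # Check of the claim above: 10^9 transistors x 10^9 Hz x 10^6 s.
    transistors, clock_hz, seconds = 1e9, 1e9, 1e6
    print(f"{transistors * clock_hz * seconds:.0e} operations")  # 1e+24
    print(f"{seconds / 86400:.1f} days in a million seconds")    # 11.6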

Replies from: bogdanb
comment by bogdanb · 2013-07-18T22:19:55.345Z · LW(p) · GW(p)

Yes, it’s possible our intuitions simply function differently.

I do the same kinds of calculations, more or less intuitively. I can juggle zeros too if I need to. But my point is that for most human-scale things I don't need to do that. Maybe it's just learned behavior; I'm sure an astrophysicist has better intuitions in his area of expertise. The fact that intuition triggers even in situations that are not often encountered seems to indicate there's more to it than that, though.

comment by Desrtopa · 2013-07-15T20:07:53.223Z · LW(p) · GW(p)

Of course, a molecule is rather notoriously outside the scale of our ability to visualize; it's small enough that our hardwired understanding of how materials are supposed to behave simply ceases to apply.

comment by Zubon · 2013-07-03T23:03:03.791Z · LW(p) · GW(p)

Xander: Yep, vampires are real. A lot of 'em live in Sunnydale. Willow 'll fill you in.

Willow: I know it's hard to accept at first.

Oz: Actually, it explains a lot.

One of the stronger examples of Bayesian updating in fiction, from Buffy the Vampire Slayer season 2, episode 13

Replies from: roystgnr
comment by roystgnr · 2013-07-10T22:41:37.664Z · LW(p) · GW(p)

Hmm... this isn't exactly a Bayesian update, though.

Bayesian update: you have prior probabilities for theories A, B, C, D; you get new evidence for D, and you use Bayes' rule to decide how to move posterior probability to D.

Oz: you have prior probabilities for theories A, B, and C; you hear a new theory D that you hadn't previously considered, and you recalculate the influence of previous evidence to see how much credence you should give D.

This quote isn't a pure example of the distinction between "getting new evidence" and "considering a new theory", since obviously "my friends believe in D" is also new evidence, but there seems to be more of the latter than the former going on.

It's weird that we don't seem to have a term describing what kind of update the "considering a new theory" process is. It's not something that would ever be done by an ideal Bayesian agent with infinite computing resources, but it's unavoidable for us finite types.

Replies from: RolfAndreassen, bentarm
comment by RolfAndreassen · 2013-07-12T15:32:01.590Z · LW(p) · GW(p)

Oz: you have prior probabilities for theories A, B, and C; you hear a new theory D that you hadn't previously considered, and you recalculate the influence of previous evidence to see how much credence you should give D.

This seems slightly off both in terms of what (the writer intends us to infer) is going on in Oz's head, and what ought to be going on. First, it seems that Oz may have considered vampires or other supernatural explanations, but dismissed them using the absurdity heuristic, or perhaps what we can call the "Masquerade heuristic" - that's where people who live in a fictional world full of actual vampires and demons and whatnot nevertheless heurise as though they lived in ours. (Aside: Is 'heurise' a reasonable verbing of "use heuristics"?) Upon hearing that his friends take the theory seriously (plus perhaps whatever context caused them to make these remarks), he reconsiders without the absurdity penalty.

Second, what should be going on is that Oz has theories A, B, C with probabilities adding up to 1-epsilon, where epsilon is the summed probability of "all those explanations which I haven't had time to explicitly consider as theories". Just because he's never explicitly formulated D and formally assigned a probability to it doesn't mean it doesn't have an implicit one. Once it is picked out of hypothesis space, he can detach it from the other previously unconsidered theories, formally assign an initial probability much smaller than epsilon, and update from there. Of course this is not realistic as a matter of human psychology, but what I'm arguing is that "I never thought of theory X before" does not actually demonstrate that "Oh yeah, theory X makes a lot of sense" is not a Bayesian update. It just means that the updater hasn't had the processing power to fully enumerate the space of available theories.
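The contrast between the two processes is easy to see in a toy model; all the numbers below are invented for illustration, not anyone's actual credences:

    # Toy model (invented numbers): Bayes-updating on evidence vs. pulling
    # a new theory D out of the epsilon of never-considered hypotheses.

    def bayes_update(priors, likelihoods):
        # Reweight each hypothesis by the likelihood it assigns to the
        # evidence, then renormalize over the explicit hypothesis set.
        unnorm = {h: priors[h] * likelihoods[h] for h in priors}
        z = sum(unnorm.values())
        return {h: p / z for h, p in unnorm.items()}

    priors = {"A": 0.50, "B": 0.30, "C": 0.19}  # explicitly considered
    epsilon = 0.01                              # everything never thought of

    # Picking D out of hypothesis space: detach a chunk of epsilon for it,
    # then re-score the already-seen evidence against all four theories.
    priors["D"] = epsilon / 2
    likelihoods = {"A": 0.10, "B": 0.20, "C": 0.15, "D": 0.90}
    print(bayes_update(priors, likelihoods))  # D rises from 0.005 to ~0.03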

comment by bentarm · 2013-07-25T20:44:27.516Z · LW(p) · GW(p)

Does Oz already know that he's a werewolf at this point? That would seem to bring "vampires exist" into the realm of plausible hypotheses.

comment by grendelkhan · 2013-07-05T21:55:57.586Z · LW(p) · GW(p)

"You're like an infant!" Tosco sneered. "Still humming at night about your poor lost momma and the terrible thing men do to their cos? Grow up and face the real world."

"I have," Carlo replied. "I faced it, and now I'm going to change it."

Greg Egan, The Eternal Flame, ch. 38

Replies from: PhilGoetz
comment by PhilGoetz · 2013-07-16T15:18:54.942Z · LW(p) · GW(p)

Argument by straw man and false dichotomy. The world is not made up entirely of cynical people who accept the system around them, and Ender Wiggins.

Putting forward the proposition, as fiction so often does, that all the world needs is heroes with the courage to change it, is destructive, as it diverts the efforts of the heroic down unproductive paths. Sure, they should want to change the world--but the only context this kind of quote ever occurs in is one where the hero is uncompromising and eventually wins out due to his moral, physical, and/or mental superiority.

comment by [deleted] · 2013-07-01T21:53:59.058Z · LW(p) · GW(p)

“The wonder and horror of epidemiology, is that it’s not enough to just measure one thing very accurately. To get the right answer, you may have to measure a great many things very accurately.”

-- Jerry Avorn, quoted here.

comment by Vaniver · 2013-07-01T16:56:28.811Z · LW(p) · GW(p)

I wish that I may never think the smiles of the great and powerful a sufficient inducement to turn aside from the straight path of honesty and the convictions of my own mind.

-- David Ricardo

comment by PhilGoetz · 2013-07-16T14:50:30.086Z · LW(p) · GW(p)

There's more pressure on a vet to get it right. People say "it was god's will" when granny dies, but they get angry when they lose a cow.

  • Terry Pratchett, alt.fan.pratchett again
Replies from: army1987
comment by A1987dM (army1987) · 2013-07-20T23:06:11.065Z · LW(p) · GW(p)

What? Putting down pets or livestock isn't that uncommon, whereas people go way out of their way (I seem to recall Robin Hanson mentioning a two-digit percentage of the US GDP, though I can't seem to find it) to prolong human lives long after they're no longer worth living.

Replies from: David_Gerard
comment by David_Gerard · 2013-07-27T14:01:44.293Z · LW(p) · GW(p)

Discworld is set in a time roughly parallel to the late 1700s or early 1800s. Medicine didn't really work, and livestock were significant capital.

comment by elharo · 2013-07-03T10:00:50.404Z · LW(p) · GW(p)

When you tear out a man's tongue, you are not proving him a liar, you're only telling the world that you fear what he might say.

Tyrion Lannister in George R.R. Martin's A Clash of Kings

Replies from: Viliam_Bur, Lethalmud
comment by Viliam_Bur · 2013-07-03T14:55:52.796Z · LW(p) · GW(p)

Most importantly, you are telling the world that anyone saying the same thing is at risk of losing their tongue, regardless of the correctness of the information.

That makes it cheaper for people to argue against the information than to argue for it.

And that increases the chance that people will finally consider him a liar.

Replies from: roystgnr, knb
comment by roystgnr · 2013-07-03T18:05:53.658Z · LW(p) · GW(p)

That makes it cheaper for people to argue against the information than to argue for it.

Not necessarily. It makes it cheaper for people to argue against whatever slim fraction of the information they can put up as a strawman without risking their own tongues. But it's hard to put up a real argument against an opposition that you can't really even quote.

And that increases that chance that people will finally consider him a liar.

Not if that strawman is easily blown away by whatever samizdat eventually conveys the full information.

Yvain explains some of the mechanisms better than I could in points 5 through 7 here:

http://squid314.livejournal.com/333353.html

comment by knb · 2013-07-10T07:11:06.466Z · LW(p) · GW(p)

The effectiveness of silencing someone really depends on how common such silencing is for a given regime. For example, if a regime silences all critics (regardless of whether they tell the truth or lie), an individual act of censorship doesn't carry any information about whether the censored info was true or false.

On the other hand, tons of claims are made against the US government every day, and no action is taken against almost all of them. If the government suddenly acted to silence one conspiracy theorist, far more attention would be paid to his claims, and the action would likely backfire.

Replies from: DanielLC
comment by DanielLC · 2013-07-23T21:04:22.459Z · LW(p) · GW(p)

This leads to an interesting possibility for a misinformation campaign: Let people speculate wildly. Silence the guy who says what you want your enemies to think.

Unfortunately, you can only do that so much before it gets noticed.

comment by Lethalmud · 2013-07-04T15:14:56.423Z · LW(p) · GW(p)

Spoilers, man...

Replies from: JoshuaZ
comment by JoshuaZ · 2013-07-04T16:01:24.747Z · LW(p) · GW(p)

How is that quote a spoiler? Also, how long does a work need to be out before spoilers are no longer an issue? Is it ok if I tell you that Macbeth dies at the end?

Replies from: Alejandro1, Kaj_Sotala, Dorikka, ciphergoth, bentarm
comment by Alejandro1 · 2013-07-05T20:58:40.069Z · LW(p) · GW(p)

Charitably, it might be viewed as a minor spoiler in that it implies that the character is alive in that book, which is not the first one of the series. (Although that is not a necessary implication: he could possibly be saying it in someone else's flashback, for example.)

Replies from: JoshuaZ
comment by JoshuaZ · 2013-07-05T22:27:13.041Z · LW(p) · GW(p)

Charitably, it might be viewed as a minor spoiler in that it implies that the character is alive in that book, which is not the first one of the series.

Hmm, that's a good point, given that Game of Thrones does have a high death rate of major characters.

comment by Kaj_Sotala · 2013-07-09T15:20:33.553Z · LW(p) · GW(p)

Also, how long does a work need to be out before spoilers are no longer an issue?

"Spoilers for a work are okay after this time has passed" is an okay heuristic in a community where everyone can reasonably be expected to familiarize themselves with the work as soon as possible after it has become available - and nowhere else. You cannot generally expect that simply time having passed from the publication of a work means that people are familiar with its content.

The actual question one wants to ask is "am I communicating with an audience where I can reasonably expect that people are either already familiar with the work, or do not care about this particular detail about this particular work being spoiled". This is a hard question in general, and sometimes "has this work been out long enough for spoilers not to be an issue" works as an adequate substitute question for it, but only sometimes.

comment by Dorikka · 2013-07-04T20:27:11.986Z · LW(p) · GW(p)

Macbeth dies at the end

Damn you.

comment by Paul Crowley (ciphergoth) · 2013-07-04T20:52:47.870Z · LW(p) · GW(p)

Have you seen The Passion yet?

Replies from: CronoDAS
comment by CronoDAS · 2013-07-04T21:20:49.075Z · LW(p) · GW(p)

Some tellings of the story include the Resurrection; others don't. (Notably, "Jesus Christ Superstar" doesn't.)

comment by bentarm · 2013-07-04T22:24:33.505Z · LW(p) · GW(p)

This comment on the recent Reddit thread about intellectual jokes goes one better (and actually made me laugh out loud the first time I read it).

comment by bouilhet · 2013-07-02T23:22:27.375Z · LW(p) · GW(p)

The conscientious. - It is more comfortable to follow one's conscience than one's reason: for it offers an excuse and alleviation if what we undertake miscarries--which is why there are always so many conscientious people and so few reasonable ones.

-- Nietzsche

comment by Tenoke · 2013-07-18T21:38:25.541Z · LW(p) · GW(p)

“The future is always ideal: The fridge is stocked, the weather clear, the train runs on schedule and meetings end on time. Today, well, stuff happens.”

  • Hara Estroff Marano on procrastination in Psychology Today as cited here
comment by AShepard · 2013-07-01T23:56:09.576Z · LW(p) · GW(p)

If (as those of us who make a study of ourselves have been led to do) each man, on hearing a wise maxim immediately looked to see how it properly applied to him, he would find that it was not so much a pithy saying as a whiplash applied to the habitual stupidity of his faculty of judgment. But the counsels of Truth and her precepts are taken to apply to the generality of men, never to oneself; we store them up in our memory not in our manners, which is most stupid and unprofitable.

Michel de Montaigne, Essays, "On habit"

Replies from: MixedNuts
comment by MixedNuts · 2013-07-02T07:32:43.256Z · LW(p) · GW(p)

Does it actually help? My usual reactions are "Ha, yeah, I totally do that. Silly human foibles eh?", "Screw you, anonymous proverb author, just because you don't mention what makes this a least-bad option doesn't make it worse", or "Yeah, that's the problem. Do you have a solution?".

Replies from: Vaniver
comment by Vaniver · 2013-07-02T16:53:11.283Z · LW(p) · GW(p)

Does it actually help?

Yes. One option is to use it as a memorable trigger - "Oh, I'm making mistake X, like the proverb" - and then amend behavior. (This is one of the reasons why it's worth trying to word proverbs as memorably as possible - rhyming helps quite a bit. If your actions you want to jigger, then do not fail to set a trigger! Sometimes it works better than others.)

A superior option is, upon seeing the maxim, to contemplate it fully, and plan out now how it could be avoided in some way, and then practice that offline.

In general, though, de Montaigne is highlighting the general thrust of Less Wrong. Knowing the ways in which people in general make mistakes is most useful to you if you use that knowledge to prevent yourself from making that mistake, and a general mistake people make is to not do that!

Replies from: Kenny
comment by Kenny · 2013-07-18T11:28:13.244Z · LW(p) · GW(p)

Or "If it's your actions that you want to jigger, do not fail to set a trigger!".

comment by Stabilizer · 2013-07-01T22:00:15.171Z · LW(p) · GW(p)

On any important topic, we tend to have a dim idea of what we hope to be true, and when an author writes the words we want to read, we tend to fall for it, no matter how shoddy the arguments. Needy readers have an asymptote at illiteracy; if a text doesn't say the one thing they need to read, it might as well be in a foreign language. To be open-minded, you have to recognize, and counteract, your own doxastic hungers.

-Dennett's Law of Needy Readers, Daniel Dennett

Replies from: Stabilizer
comment by Stabilizer · 2013-07-01T22:02:31.478Z · LW(p) · GW(p)

This law according to Dennett is an extension of Schank's Law:

Because people understand by finding in their memories the closest possible match to what they are hearing and use that match as the basis of comprehension, any new idea will be treated as a variant of something the listener has already thought of or heard. Agreement with a new idea means a listener has already had a similar thought and well appreciates that the speaker has recognized his idea. Disagreement means the opposite. Really new ideas are incomprehensible. The good news is that for some people, failure to comprehend is the beginning of understanding. For most, of course, it is the beginning of dismissal.

-Roger Schank

Replies from: Richard_Kennaway, RolfAndreassen, Eugine_Nier
comment by Richard_Kennaway · 2013-07-03T14:01:48.550Z · LW(p) · GW(p)

any new idea will be treated as a variant of something the listener has already thought of or heard.

From a Bayesian point of view, this is as it must be. People have priors and will assess anything new as a diff (of log-odds) from those priors. Even understanding what you are saying, before considering whether to update towards it, is subject to this. You will always be understood as saying whatever interpretation of your words is the least surprising to your audience.

BTW, this is standard in natural language processing (which is what a lot of Schank's AI work was in). When a sentence is ambiguous, choose the least surprising interpretation, the one containing the least information relative to your current knowledge.

The narrower your audience's priors, the more of a struggle it will be for them to hear you; the narrower your priors, the more you will struggle to hear them.

Having shown how Schank's Law is but an instance of Bayesian inference, I trust you will all find it acceptably unsurprising. :)
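The disambiguation rule translates directly into code; a minimal sketch, with invented candidate readings and probabilities:

    # Minimal sketch (invented numbers): pick the least surprising reading,
    # i.e. the one with minimal surprisal -log(p) under current priors.
    import math

    readings = {
        "bank = river bank": 0.05,
        "bank = financial institution": 0.95,
    }
    best = min(readings, key=lambda r: -math.log(readings[r]))
    print(best)  # the least surprising interpretation wins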

comment by RolfAndreassen · 2013-07-02T01:02:26.325Z · LW(p) · GW(p)

[A]ny new idea will be treated as a variant of something the listener has already thought of or heard.

This does raise the question of how anyone learns anything in the first place. :)

Replies from: TheOtherDave, Desrtopa, Estarlio, Viliam_Bur
comment by TheOtherDave · 2013-07-02T03:25:18.823Z · LW(p) · GW(p)

Don't underestimate the power of variations.

When shaping behavior in animals, we start with something the animal does naturally and differentially reward natural variations. Evolution of biological systems also involves differential selection of naturally occurring variations on existing systems. So it's certainly possible to get "something new" out of mere "variants of something [that already existed]".

That said, many cognitive systems do also seem capable of insight, which seems to be a completely different kind of process. Dennett and Schank here seem to be dismissing the very possibility of insight, though I assume they are doing so rhetorically.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2013-07-02T15:40:34.902Z · LW(p) · GW(p)

What has a baby that does not understand speech "heard before", that it can form variations on? Evolution is fine, but you do need a theory of abiogenesis, or in this case aontogenesis - knowledge-from-nothing-ness, in the vernacular.

Replies from: TheOtherDave, Vaniver
comment by TheOtherDave · 2013-07-02T16:39:35.968Z · LW(p) · GW(p)

Babies are not clean slates; there exist innate behaviors. We can get into a theoretical discussion of where these behaviors came from if you like, but I don't need a theoretical justification to observe that babies do in fact do things they haven't been taught to do.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2013-07-02T19:34:58.688Z · LW(p) · GW(p)

Quite so, but this contradicts the original idea that everything is a variant of something that has been heard before.

Replies from: shminux
comment by shminux · 2013-07-02T19:54:22.700Z · LW(p) · GW(p)

I interpret "heard before" to include "programmed in your genetics".

Replies from: TheOtherDave
comment by TheOtherDave · 2013-07-02T21:41:36.755Z · LW(p) · GW(p)

This.

comment by Vaniver · 2013-07-02T19:16:35.064Z · LW(p) · GW(p)

While I agree with TheOtherDave's point, I'm not sure it's necessary. A baby doesn't understand new sounds the first time it hears them, but may understand them the hundredth time it hears them - at which point it does have quite a bit of experience, both of hearing those noises in some situations and of not hearing them in others. Then, once it has learned the general skill of acquiring words, it can correctly learn new words quickly, sometimes even after hearing a single use - but that's drawing on its previous experience of learning thousands of words.

comment by Desrtopa · 2013-07-02T20:30:06.972Z · LW(p) · GW(p)

Naturally we go through a period of believing everything we're told when we're kids, and transition to comparing everything we hear to what we've already heard before as we grow up.

(This is an inexact approximation, but in my more cynical moments it strikes me as only very slightly inexact.)

comment by Estarlio · 2013-07-03T11:36:38.727Z · LW(p) · GW(p)

It depends how great the variance is. It sounds better to say that people benefit from having the things they're learning related to familiar topics.

comment by Viliam_Bur · 2013-07-03T10:45:09.848Z · LW(p) · GW(p)

This does raise the question of how anyone learns anything in the first place.

Perhaps most people learn like this: they already have an idea X. Then they hear a very similar idea Y, so they accept it, although they interpret it as X. But once they have agreed that Y is their idea, and they hear it repeatedly, they gradually become aware of Y as something slightly different from X. Thus they make another inferential step.

Perhaps many people are willing to learn only when it does not feel like learning.

comment by Eugine_Nier · 2013-07-03T05:04:27.882Z · LW(p) · GW(p)

A less cynical take on this is that people compare what they hear to their previous experience (stored in compressed form) and accept or reject it depending on how well it matches.

comment by jsbennett86 · 2013-07-12T04:43:26.778Z · LW(p) · GW(p)

There's something here that doesn't make sense... Let's go and poke it with a stick.

The Doctor - Doctor Who

Replies from: elharo
comment by elharo · 2013-07-15T10:30:31.978Z · LW(p) · GW(p)

Good one, though it would be nice to cite the exact episode. A little googling and I think this is from "Amy's Choice" (Episode 5.7)

Also, I'd try to avoid ellipses in a quote unless you are in fact leaving something out. I suspect here you just meant it to reflect the doctor's speech pattern, but it's a bit confusing.

Replies from: RolfAndreassen, wedrifid
comment by RolfAndreassen · 2013-07-19T01:03:17.185Z · LW(p) · GW(p)

The convention I was taught is that "This... and that" is quoting someone who pauses after 'This', while "This [...] and that" indicates that I elided something. This seems to me both useful and clear.

comment by wedrifid · 2013-07-15T10:58:17.145Z · LW(p) · GW(p)

Also, I'd try to avoid ellipses in a quote unless you are in fact leaving something out. I suspect here you just meant it to reflect the doctor's speech pattern, but it's a bit confusing.

"Word... Word" has a different meaning to "Word ... word". The usage in this comment would not confuse many and replacing the ellipsis with a period would change the meaning of the quote. As it happens most sources I can find don't include the ellipsis so the addition would be a mistake. So I agree with you in this instance and agree in general with a slight modification to "adding ellipses to a quote".

comment by lukeprog · 2013-07-11T19:42:37.699Z · LW(p) · GW(p)

Extinguished philosophies lie about the cradle of every science as the strangled snakes beside that of Hercules.

John McCarthy, adapting a line by T.H. Huxley

Replies from: simplicio
comment by simplicio · 2013-07-15T16:34:22.534Z · LW(p) · GW(p)

I'm fine with this quote as long as the conclusion is not "So let's just do science without any philosophy!"

Because usually that just means doing science with unexamined philosophical assumptions while deluding yourself that you're being objective. This goes badly; e.g., Copenhagen interpretation, neurobabble ("Libet experiment proves you have no free will!").

Replies from: SaidAchmiz, PhilGoetz
comment by Said Achmiz (SaidAchmiz) · 2013-07-15T16:58:20.158Z · LW(p) · GW(p)

Your comment, with which I agree, inspired me to post this quote.

comment by PhilGoetz · 2013-07-16T14:57:41.130Z · LW(p) · GW(p)

You need to make a stronger case. Your neurobabble is the result of trying to do philosophy after science rather than of trying to do science without philosophizing. The Copenhagen interpretation (which is, by the by, still in vogue outside of LessWrong, please stop the groupthink) allowed people to get on with their science instead of getting bogged down in its bewildering philosophical implications. Ignoring philosophy was the right thing to do. So I see both your examples as proofs that we should do science without any philosophy.

Replies from: simplicio, TimS
comment by simplicio · 2013-07-16T16:04:20.894Z · LW(p) · GW(p)

The Copenhagen interpretation (which is, by the by, still in vogue outside of LessWrong, please stop the groupthink) allowed people to get on with their science instead of getting bogged down in its bewildering philosophical implications.

This is a perfect example of the crypto-philosophy of "we're not doing philosophy". Copenhagen is a philosophical interpretation of QM, which makes metaphysical claims about wavefunctions coming into existence and then collapsing. If anything could be called the aphilosophical approach, it would be the Feynman "shut up and calculate" interpretation of QM, but that leads to problems too.

This is not really about MWI versus Copenhagen - it's more of a meta-issue. This is about how scientists sleepwalked themselves into a philosophical theory about QM without fully realizing they were doing philosophy at all.

Replies from: TimS, Juno_Watt, nshepperd
comment by TimS · 2013-07-16T17:10:11.275Z · LW(p) · GW(p)

I agree with your main point, but I have a nit-picky side question:

Copenhagen is a philosophical interpretation of QM, which makes metaphysical claims about wavefunctions coming into existence and then collapsing.

In what sense is the Copenhagen interpretation making "metaphysical" claims about wavefunctions coming into existence and then collapsing? My sense was that proponents are making a straightforward physical claim, on par with physical claims made by non-QM atomic theory. Copenhagen has not been empirically proved (or empirically disproved), but that does not make it metaphysical.

In other words, I think you might be using "metaphysical" as a synonym for "nonsensical."

comment by Juno_Watt · 2013-07-17T09:24:52.359Z · LW(p) · GW(p)

The CI as such is minimal in its commitments, and is not committed to the existence (or non-existence) of a real wave function. Your comment seems to reflect EY's habit of conflating the CI with Objective Reduction.

ETA:

This is about how scientists sleepwalked themselves into a philosophical theory about QM without fully realizing they were doing philosophy at all.

Bohr and Heisenberg were in fact quite self-aware about their philosophical presumptions.

comment by nshepperd · 2013-07-17T15:20:52.829Z · LW(p) · GW(p)

As far as I can tell the Copenhagen Interpretation basically is a shut-up-and-calculate interpretation. It's an operational theory that is only capable of predicting subjective-ish experimental results, and doesn't make claims about the "contents of reality". That is to say, all its predictions are of the form "if I did [EXPERIMENT] I would observe a result according to [DISTRIBUTION]". Which is somewhat respectable (although what exactly counts as an observation is naturally ill-defined, since the theory doesn't encompass the observer itself).

The real problem is that having the CI as the majority view sucks people into philosophical positions where you're not allowed to even wonder what reality is made of, or how these observations are manifested. See: "the EPR experiment proved there's no such thing as reality, right?"

Replies from: None
comment by [deleted] · 2013-07-18T02:02:11.245Z · LW(p) · GW(p)

I disagree. CI is, generously, semi-bad anthropocentric epistemology.

CI specifically mentions two fundamental interactions: the DeWitt equation and Collapse.

A "shut up and calculate" interpretation is the Ensemble one.

comment by TimS · 2013-07-16T15:12:12.302Z · LW(p) · GW(p)

I think scientists who truly understand the principles of Structures of Scientific Revolutions would be better at noticing when changes were needed, helping make the changes, or at least not getting in the way.

Instead, most practicing scientists talk about physical realism in a way that even hardcore reductionists think is naive. And that affects how the general populace treats scientific results, which in turn affects how science itself is treated.

In short, I think if scientists were more philosophically sophisticated, they would help the public be more sophisticated.

Edit: simplicio makes a more important and slightly similar point.

comment by Zubon · 2013-07-09T23:19:17.970Z · LW(p) · GW(p)

[As the] percentage of the US population carrying cameras everywhere they go, every waking moment of their lives [has gone from "almost none" to "almost all,"] in the last few years, with very little fanfare, we've conclusively settled the questions of flying saucers, lake monsters, ghosts, and Bigfoot.

xkcd explains that the absence of evidence is evidence of absence.

Replies from: ChristianKl, Jayson_Virissimo, Eugine_Nier
comment by ChristianKl · 2013-07-23T11:51:06.868Z · LW(p) · GW(p)

Given the number of drones that fly around these days, the question of UFOs is settled. There are plenty of objects flying around that nobody can accurately identify.

Especially when it comes to hobbyist drones, there are models that really look like flying saucers.

comment by Jayson_Virissimo · 2013-07-13T06:20:42.379Z · LW(p) · GW(p)

There is only an absence of evidence if you ignore all the pictures that are purportedly of those objects.

Replies from: bogdanb
comment by bogdanb · 2013-07-16T02:01:24.489Z · LW(p) · GW(p)

No, it’s absence of evidence if you notice that we have ready access to high-resolution videos of innumerable rare events and elusive animals.

And yet the best footage of UFOs, ghosts, and Bigfoot still consists of some blurry, hazy, shiny, or dark blob smeared somewhere in a couple of frames of a shaky video at the absolute limit of the camera, exactly the same as forty years ago. Which is exactly what you’d expect to see if these were in fact normal things and optical artifacts that are perfectly explainable when they’re actually close enough to see.

comment by Eugine_Nier · 2013-07-13T05:58:02.498Z · LW(p) · GW(p)

Except some of this footage does in fact record what's claimed to be flying saucers, lake monsters, ghosts, and Bigfoot. Don't confuse "there is no evidence despite people looking", with "there is no evidence but no one has looked", or worse "I'm so sure there is no evidence I'm not even going to bother checking whether anyone else has found any".

Replies from: TimS
comment by TimS · 2013-07-13T06:18:51.843Z · LW(p) · GW(p)

You think Randall Munroe is making this mistake?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-07-14T20:38:23.460Z · LW(p) · GW(p)

Yes, a combination of this and him suffering from the bias that makes it hard to notice flaws in arguments whose conclusions one agrees with.

Replies from: TimS
comment by TimS · 2013-07-14T21:08:32.426Z · LW(p) · GW(p)

My impression is that people have expended roughly constant effort searching for Bigfoot from 1960 to now. Based on advances in modern camera technology (especially ubiquitous smartphones), evidence collection is cheaper and easier now than in 1960 (or even 1980).

I understood Munroe to be asserting that easier evidence collection and constant levels of effort imply that we should expect higher quality evidence now than in the past (if Bigfoot exists). In fact, the quality of evidence for Bigfoot is substantially similar now and in 1960. That's pretty strong evidence that Bigfoot does not exist.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-07-15T02:20:23.479Z · LW(p) · GW(p)

In fact, the quality of evidence for Bigfoot is substantially similar now and in 1960.

Have you actually checked this? If so, could you describe your procedure? Also, this argument applies less to ghosts and UFOs.

Replies from: TimS, bogdanb
comment by TimS · 2013-07-15T02:47:45.572Z · LW(p) · GW(p)

Have you actually checked this?

Very very cursory investigation - I went to Wikipedia, and noticed the lack of a "Yo stupid, Bigfoot is real" section.

Also this argument applies less to ghosts and UFOs.

The argument applies to any phenomenon that can be unambiguously recorded. Given the ease of recording relatively high quality video / audio / IR, assertions of a recordable phenomenon without unambiguous recordings imply the assertions are false.

If proponents do not assert recordable phenomena, the rise-of-smartphones argument is not as powerful, but such assertions are much more vulnerable to invisible-undetectable-dragon issues.

Replies from: Jiro, ChristianKl, ChristianKl
comment by Jiro · 2013-07-15T04:11:06.377Z · LW(p) · GW(p)

I once heard an argument on a skeptical podcast: Why do we never see any Bigfoot roadkill?

Replies from: wedrifid
comment by wedrifid · 2013-07-15T04:43:47.480Z · LW(p) · GW(p)

I once heard an argument on a skeptical podcast: Why do we never see any Bigfoot roadkill?

That's easy. Bigfoot must be stronger and more durable than cars. The real question is why we don't find Bigfoot faeces with traces of partially digested steel, oil and rubber from Bigfoot's vehicular combat success.

Replies from: bogdanb
comment by bogdanb · 2013-07-16T02:30:53.828Z · LW(p) · GW(p)

That’s just government misinformation. They steal all evidence and mind-wipe the witnesses. Then they use the DNA for top-secret research trying to make a super-soldier. Makes a lot more sense than the ridiculous rubber digestion. Everyone knows it's the chupacabra that eats rubber.

comment by ChristianKl · 2013-07-23T11:49:12.540Z · LW(p) · GW(p)

The argument applies to any phenomena that can be unambiguously recorded. Given the ease of recording relatively high quality video / audio / IR, assertions of a recordable phenomena without unambiguous recordings implies the assertions are false.

There are plenty of recorded sightings of UFOs on YouTube. Probably more videos than existed 50 years ago.

comment by ChristianKl · 2013-07-23T11:37:14.117Z · LW(p) · GW(p)

Very very cursory investigation - I went to Wikipedia, and noticed the lack of "Yo stupid, Bigfoot is real" section.

That doesn't imply in any way that the evidence that now exists is of the same strength as it was 50 years ago. It just doesn't.

Replies from: TimS
comment by TimS · 2013-07-23T15:16:44.462Z · LW(p) · GW(p)

Let me expand: I assume Wikipedia would link or discuss the best evidence for Bigfoot. It does link evidence, and the quality of that evidence is poor. As of this writing, the most recent evidence listed in Wikipedia is a 2007 photo that forest rangers assert is a bear with mange and a 2008 YouTube video link to what pro-Bigfoot groups apparently admit is a hoax.

None of this is higher quality evidence than the famous 1967 film.

Imagine a ten minute encounter with a para-normal / unexplained phenomenon. In 1967 or 1980, it is essentially luck whether the observer has decent quality recording equipment and time to get into position to make a good recording. In 2013, the smartphone-per-person density is such that we should expect the vast majority of sightings of Bigfoot, or UFOs, or whatever, to be recorded by smartphone video cameras.

90% or more of those recordings will be crap, but the sheer volume of possible recordings implies that we should expect to see some very high quality evidence by now. And we haven't (more precisely, Wikipedia has no such link, which I think is equivalent in these circumstances).

Further, this analysis ignores the fairly large number of people actively searching for para-normal / unexplained phenomena.

Replies from: Kindly
comment by Kindly · 2013-07-23T15:21:24.104Z · LW(p) · GW(p)

Do we observe this explosion-of-recorded-evidence phenomenon with real but weird things (e.g. some rare, bizarre-looking bug)?

Replies from: TimS
comment by TimS · 2013-07-23T15:31:03.063Z · LW(p) · GW(p)

Excellent question - I don't know the answer. This link is suggestive, and the last picture includes a species discovered via upload to Flickr.

In general, Googling "newly discovered species" suggests that many new species are found in relatively exotic locations. Bigfoot is supposed to live in the forests of the NW United States. I don't consider that location exotic because those forests are heavily populated and very accessible to people (compared to the bottom of the ocean or deep jungle of Brazil or Africa).

comment by Kaj_Sotala · 2013-07-02T04:28:54.174Z · LW(p) · GW(p)

The language of the totalist environment is characterized by the thought-terminating cliché. The most far-reaching and complex of human problems are compressed into brief, highly reductive, definitive-sounding phrases, easily memorized and easily expressed. These become the start and finish of any ideological analysis.

-- Robert Jay Lifton, Thought Reform and the Psychology of Totalism

comment by [deleted] · 2013-07-03T12:51:40.338Z · LW(p) · GW(p)

All magic is science! You just don't know what you're doing, so you call it magic! And well, it's... Ridiculous.

Princess Bubblegum in Adventure Time.

Replies from: DanielLC
comment by DanielLC · 2013-07-23T21:27:09.148Z · LW(p) · GW(p)

Neat, but the vocabulary isn't accurate. Studying magic is science. Using this knowledge to make magic work for you is technology. Wild magic just does its own thing, so it's neither.

Replies from: None
comment by [deleted] · 2013-07-24T10:29:00.159Z · LW(p) · GW(p)

Well, "magic" is in common speech shorhand for "I have no idea how this works and I don't think anyone else has either." Science in common speech is "someone somewhere smarter than me knows how this works." (barring No One Knows what Sciece doesn't Know).

The problem with calling things magic is that it serves as a Semantic Stopsign.

And I must paraphrase Sam Hughes' Ra, a story in which "magic" is a newly discovered branch of physics: calling magic 'magic' was an incredibly bad decision in the first place.

comment by satt · 2013-07-12T00:37:16.545Z · LW(p) · GW(p)

People tend to roll their eyes a bit when business school grads like me start saying things about “management is measurement” and so on, but the fact is that a) if you don’t measure something, how are you going to find out whether it’s changed or not? and b) if you don’t want to find out whether something’s changing or not, in what sense can you actually claim to care about it?

Daniel Davies

Replies from: cody-bryce, ChristianKl
comment by cody-bryce · 2013-07-12T14:57:16.199Z · LW(p) · GW(p)

When we roll our eyes at business school grads, it isn't because we don't believe in measuring anything. It's the same eyeroll that the 10 O'Clock news gets when they report the newest study linking molasses and cancer, which has nothing to do with my lack of belief in studies about cancer.

comment by ChristianKl · 2013-07-23T11:51:37.013Z · LW(p) · GW(p)

I thought quite a bit about how to measure whether I'm good at Salsa dancing on a particular night. I haven't found a measurement that's adequate.

I could use a measurement like: "How close do women dance with me?" If a woman enjoys dancing with me, she's likely to dance closer than if she doesn't. If I however measure my dancing skill on that variable, I'm likely to dance with some women in a way that's too close for them and makes them uncomfortable.

I could use a metric such as counting how often a woman asks for my name. If I however use that metric, I probably won't be the first to ask for a name, to increase the chances that the woman asks on her own.

If I'm using a metric such as being asked by women to dance, I'm less likely to ask on my own.

If I would hand a woman a sheet after a dance to rate my dancing, I would probably be seen as strange.

The average business school grad probably isn't doing very much Quantified Self on his own life. He doesn't know much about actually measuring what he cares about.

Women are not going to enjoy dancing with me more when I try to intellectually control their enjoyment through a tight feedback loop on some proxy variable that I use to measure it. It just doesn't work that way.

On the other hand, if I'm empathic, if I'm in a happy mood and get outside of my head, I'm more likely to succeed in making women enjoy dancing with me.

The idea that being in your head and being focused on specific measurements is the only way to care is just flawed.

Replies from: scaphandre, satt
comment by scaphandre · 2013-07-29T03:44:46.059Z · LW(p) · GW(p)

In your life, salsa dancing ability is definitely not the sole metric you wish to be optimizing for.

Things you presumably want to optimize might be something like personal happiness, bettering the world or wherever you find meaning.

If one truly wanted to drop resources into optimizing salsa ability, I'd imagine filming the dance floor from a few cellphones every week, uploading the video to YouTube and paying a few experts on a salsa forum to give the dancers a rating and feedback would give a somewhat valid metric that you could go about tracking, quantifying and optimizing.

But I presume that that is not the primary goal of most salsa-goers. I guess that people go to salsa dancing nights because they are fun, good exercise and you get to socialize with a group of guys and girls who want to dance with girls and guys.

Can you try tracking happiness? Sure, why not. Have a prompt to record happiness appear at random intervals, or write a journal to note big highs or lows. Then questions like "do things like salsa increase my happiness more than things like video games" or whatever become addressable in a slightly more informed way.

I agree with you that your mind should not be on contrived proxy goals while you are salsa dancing. Better to be enjoying the salsa. But I disagree with the implication that because many metrics are tangential to the 'true' goal, careful measurement is flawed. It is still the fun/happiness that you care about, just now you are doing a smarter job of tracking it.

Replies from: ChristianKl
comment by ChristianKl · 2013-07-29T10:49:51.416Z · LW(p) · GW(p)

Can you try tracking happiness? Sure, why not. Have a prompt to record happiness appear at random intervals, or write a journal to note big highs or lows.

Actually, I do. Most days I put down a number from the interval 0-100 to rate my happiness.

I don't think that the number is informative when it comes to my Salsa dancing despite the fact that I'm someone who did Quantified Self TV interviews in Germany that involve showing me dancing Salsa. The thing I found is that it's important for me to drink water directly after arriving home from Salsa dancing. Otherwise I might lose up to one kg of body weight the next day from the missing water I sweated out.

But back to the topic. The fact that I do have some formal measurement shows me very well the limits of those measurements when it comes to making most decisions.

If one truly wanted to drop resources into optimizing salsa ability, I'd imagine filming the dance floor from a few cellphones every week, uploading the video to YouTube and paying a few experts on a salsa forum to give the dancers a rating and feedback would give a somewhat valid metric that you could go about tracking, quantifying and optimizing.

If the goal is impressive dancing that wows spectators, that might be a way to go. If your goal is to dance in a way that your dance partner enjoys, that's not directly related to how it looks on video.

comment by satt · 2013-07-24T02:53:03.926Z · LW(p) · GW(p)

On the other hand, if I'm empathic, if I'm in a happy mood and get outside of my head I'm more likely to have success in making woman enjoy dancing with me.

And there's your measurement! (But then the school I graduated from taught quantum mechanics instead of Taylorism, so I may have an unusually expansive idea of what constitutes a measurement.)

Replies from: ChristianKl
comment by ChristianKl · 2013-07-24T15:59:18.984Z · LW(p) · GW(p)

Even in quantum mechanics people do have numbers as a result of their measurements.

It's not about trusting your intuition and relying on something like empathy.

comment by khafra · 2013-07-10T11:29:12.493Z · LW(p) · GW(p)

Statistically speaking, if you pick up a seashell and don't hold it to your ear, you can probably hear the ocean.

Replies from: nshepperd
comment by nshepperd · 2013-07-10T13:36:46.965Z · LW(p) · GW(p)

Umm, is it me being sleepy, or did he get P(I picked up a seashell) and P(I'm near the ocean) mixed up in the equation? P(near the ocean | evidence) shouldn't be inversely proportional to P(near the ocean). [ETA: Randall fixed it now.]

Replies from: wedrifid
comment by wedrifid · 2013-07-10T13:55:03.122Z · LW(p) · GW(p)

Umm, is it me being sleepy, or did he get P(I picked up a seashell) and P(I'm near the ocean) mixed up in the equation? P(near the ocean | evidence) shouldn't be inversely proportional to P(near the ocean).

Well spotted. Bayes' rule is p(A | B) = p(B | A) * p(A) / p(B). The cartoon seems to have mixed up p(A) and p(B), just as you note.
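
For concreteness, a minimal worked version of the corrected form (the numbers here are purely illustrative assumptions of mine, not values from the comic):

\[ P(\text{ocean} \mid \text{shell}) = \frac{P(\text{shell} \mid \text{ocean}) \, P(\text{ocean})}{P(\text{shell})} = \frac{0.5 \times 0.1}{0.06} \approx 0.83 \]

The prior p(near the ocean) belongs in the numerator and the marginal p(picked up a seashell) in the denominator; swapping them, as the comic apparently originally did, makes the posterior scale inversely with the prior, which is exactly the oddity nshepperd spotted.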

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-07-11T00:05:44.128Z · LW(p) · GW(p)

The cartoon looks right to me...

Replies from: Qiaochu_Yuan, ciphergoth
comment by Qiaochu_Yuan · 2013-07-11T00:22:25.535Z · LW(p) · GW(p)

It's been fixed. I think it was previously wrong. The comic thread seems to support this conclusion.

comment by AlanCrowe · 2013-07-03T15:46:24.064Z · LW(p) · GW(p)

Madmen we are, but not quite on the pattern of those who are shut up in a madhouse. It does not concern any of them to discover what sort of madness afflicts his neighbor, or the previous occupants of his cell; but it matters very much to us. The human mind is less prone to go astray when it gets to know to what extent, and in how many directions, it is itself liable to err, and we can never devote too much time to the study of our aberrations.

Bernard de Fontenelle, 1686

Found in a book review

comment by Yahooey · 2013-07-03T15:24:27.910Z · LW(p) · GW(p)

It is terrible to see how a single unclear idea, a single formula without meaning, lurking in a young man’s head, will sometimes act like an obstruction.

— Charles Sanders Peirce

comment by CronoDAS · 2013-07-02T08:00:18.751Z · LW(p) · GW(p)

If you find yourself in a hole, stop digging.

-- Denis Healey

Replies from: christopheg, CronoDAS
comment by christopheg · 2013-07-02T09:10:08.567Z · LW(p) · GW(p)

Reminds me of this one from Terry Pratchett:

"All you get if you are good at digging holes it's a bigger shovel."

Replies from: DanielLC
comment by DanielLC · 2013-07-23T21:32:31.170Z · LW(p) · GW(p)

Shouldn't that be: "All you get if you are good at digging holes is a bigger shovel."?

Replies from: christopheg
comment by christopheg · 2013-07-24T08:20:33.411Z · LW(p) · GW(p)

Thanks for fixing my broken English.

There are actually several quotes expressing the same idea in different Terry Pratchett books, every one of them much better than what I remembered. I dug up these two:

In Wyrd Sisters you have (Granny Weatherwax speaking): “The reward you get for digging holes is a bigger shovel.”

And another one from "Carpe Jugulum" that I like even better (also Granny Weatherwax speaking): "The reward for toil had been more toil. If you dug the best ditches, they gave you a bigger shovel."

comment by CronoDAS · 2013-07-02T08:07:06.999Z · LW(p) · GW(p)

I've also seen this quote attributed to Will Rogers, but it seems to be unconfirmed.

Replies from: Zubon
comment by Zubon · 2013-07-02T22:23:08.319Z · LW(p) · GW(p)

Wikipedia has a couple of citations giving it to Healey, although that is hardly definitive. The First Law of Holes has its own subsection on his Wikipedia page.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-07-03T15:09:34.577Z · LW(p) · GW(p)

I don't know if Healey was the first to say it, but he definitely said it. I heard him (on the radio) at the time.

comment by Pablo (Pablo_Stafforini) · 2013-07-01T22:35:43.852Z · LW(p) · GW(p)

[O]ur moral judgments are less reliable than many would hope, and this has specific implications for methodology in normative ethics. Three sources of evidence indicate that our intuitive ethical judgments are less reliable than we might have hoped: a historical record of accepting morally absurd social practices; a scientific record showing that our intuitive judgments are systematically governed by a host of heuristics, biases, and irrelevant factors; and a philosophical record showing deep, probably unresolvable, inconsistencies in common moral convictions. I argue that this has the following implications for moral theorizing: we should trust intuitions less; we should be especially suspicious of intuitive judgments that fit a bias pattern, even when we are intuitively confident that these judgments are not a simple product of the bias; we should be especially suspicious of intuitions that are part of inconsistent sets of deeply held convictions; and we should evaluate views holistically, thinking of entire classes of judgments that they get right or wrong in broad contexts, rather than dismissing positions on the basis of a small number of intuitive counterexamples.

Nick Beckstead, On the Overwhelming Importance of Shaping the Far Future, Rutgers University, New Brunswick, 2013, p. 19

comment by oooo · 2013-07-09T04:12:46.559Z · LW(p) · GW(p)

We cooperate to compete, and a high level of fellow feeling makes us better able to unite to destroy outsiders.

--Robert Bigelow

Replies from: PhilGoetz
comment by PhilGoetz · 2013-07-16T15:14:56.212Z · LW(p) · GW(p)

Reminds me of Konrad Lorenz' observation that the strength of love in mammalian species is proportional to their ability to inflict harm on each other.

comment by baiter · 2013-07-04T13:20:38.184Z · LW(p) · GW(p)

If wishes were horses, beggars would ride.

English proverb

Replies from: Cthulhoo, Xachariah, DanielLC
comment by Cthulhoo · 2013-07-04T13:55:24.934Z · LW(p) · GW(p)

It should also be noted that if one doesn't start wishing for a horse, the probability of obtaining one decreases further.

I know this is meant to be a call to action instead of contemplation, but sometimes I've heard it quoted as meaning: "Be an adult, stop wishing for very-difficult-to-obtain things", and this is a statement I don't agree with.

comment by Xachariah · 2013-07-20T04:26:20.279Z · LW(p) · GW(p)

If wishes were horses we'd all be eating steak.

  • Jayne Cobb, Objects in Space, Firefly
Replies from: Kindly
comment by Kindly · 2013-07-20T14:41:41.751Z · LW(p) · GW(p)

If wishes were fishes, we'd all cast nets.

Gurney Halleck in Dune by Frank Herbert

Replies from: None
comment by [deleted] · 2013-07-21T16:21:06.591Z · LW(p) · GW(p)

If wishes were ingots, beggars would smelt.

The titular anthropomorphic wombat in Digger by Ursula Vernon.

comment by DanielLC · 2013-07-23T21:17:40.912Z · LW(p) · GW(p)

If wishes were horses, then My Little Pony would be about wishes. Who wants to watch a cartoon about wishes?

Replies from: linkhyrule5
comment by linkhyrule5 · 2013-07-24T02:34:36.569Z · LW(p) · GW(p)

Ahem..

Replies from: DanielLC
comment by DanielLC · 2013-07-24T02:56:17.831Z · LW(p) · GW(p)

If wishes were horses, Puella Magi Madoka Magica would be My Little Pony?

comment by Eugine_Nier · 2013-07-02T03:47:59.179Z · LW(p) · GW(p)

At college in 1980, my Government Studies prof also served as Secretary of the Socialist Workers Party of Minnesota (the real one, not the DFL). We clashed over Robert Mugabe, just coming to power in Zimbabwe, he asserting it spelled salvation and I, that it spelled ruin.

I e-mailed him a year or two ago, asking if I could get a retroactive grade increase since my predictions had proven more accurate than his. His explanation was that he truly believed Mugabe was an agrarian reformer whose program of taking land from Whites to give to Blacks would benefit the country; but things just hadn't worked out as hoped.

I didn't bother to send him the famous Heinlein quote about Bad Luck. And I didn't really expect the grade change. But it certainly was satisfying to say "I told you so" 30 years later.

JD

comment by Diver_Dan · 2013-07-05T15:25:38.911Z · LW(p) · GW(p)

Always drive within your competence, at a speed which is appropriate to the circumstances so that you can stop safely in the distance that you can see to be clear.

  • Roadcraft: The Police Drivers' Handbook

In driving, as in life.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-07-05T23:07:40.372Z · LW(p) · GW(p)

This advice really only applies in contexts where the risks of failure substantially outweigh the rewards of success. This isn't true in many contexts; if they're approximately equally balanced, it makes sense to attempt to work slightly above your level of competence in order to improve your skill, and if the rewards of success substantially outweigh the risks of failure it makes sense to be even more risk-loving.

Replies from: Diver_Dan
comment by Diver_Dan · 2013-07-06T19:40:41.269Z · LW(p) · GW(p)

I think that you may have misunderstood the point that I was trying to make. I am not advocating excessive caution. Rather, I value self-knowledge and knowledge of the environment and the people you interact with in that environment. Obviously, a certain amount of margin of error should be included in any decision making.

It has been my experience as a driving instructor that most pupils are entirely too cautious, especially on faster roads, where going too slowly may cause a following vehicle to attempt an unsafe overtaking manoeuvre.

comment by Richard_Kennaway · 2013-07-04T10:20:47.681Z · LW(p) · GW(p)

I've been rich and I’ve been poor. Rich is better.

Variously attributed.

Replies from: DanArmak, army1987
comment by DanArmak · 2013-07-06T13:28:41.319Z · LW(p) · GW(p)

It's very easy for a rich person to become poor: just give all you have away. It's very hard for a poor person to become rich: almost all of them try, and very few succeed.

If people found, on reflection, that being poor was better than being rich, then they would give their wealth away. We don't observe this.

Therefore I believe being rich is better, even without the benefit of personal experience.

Replies from: Kaj_Sotala, Yahooey
comment by Kaj_Sotala · 2013-07-09T15:04:33.098Z · LW(p) · GW(p)

There could be a hedonic treadmill effect: as you get richer, you get more things, but eventually you get used to it and it stops being better than your old life. But you still don't want to give your wealth away, because you have gotten used to having more stuff, and you're not sure that you would get used back to your old way of life the way you get used to your new one.

comment by Yahooey · 2013-07-06T21:20:01.453Z · LW(p) · GW(p)

My superficial knowledge of Seneca and the Stoics doesn't allow me to debate the premise fully. It does tell me that the argument that it is better can be debated. That people prefer to be rich does not make it better.

An aside: A rich man that gives away his wealth is not equal to a person that is poor from the start or has lost his riches. The person that gives it away keeps his connections, earns respect and, generally, is in a position to re-earn a fortune.

Replies from: DanArmak
comment by DanArmak · 2013-07-06T21:40:33.504Z · LW(p) · GW(p)

That people prefer to be rich does not make it better.

It's enough for a strong presumption of it being better, pending evidence to the contrary.

Taboo "better": there are preferences as belief, and preferences as revealed in actions. Actions are clearly in favour of being rich.

On the side of beliefs, there are certainly religions and ethical theories that say being poor is better. Personally, I strongly disagree with both this and many other beliefs of all such theories that I know about, not to mention religions.

There are of course ethical systems that say that while being rich may be good, giving away your wealth to charity is better still. Even plain self-interested consequentialism may tell you that you should give your money, perhaps to fight existential risk or to help develop FAI. I certainly agree that there is a tradeoff to be made; I'm only pointing out that in itself, rich is better than poor.

As for the Stoics, I too am not deeply familiar with their philosophy. But it seems to me that any concrete problems generated by wealth can be rather easily solved in practice by using some of that wealth.

Replies from: Yahooey
comment by Yahooey · 2013-07-07T00:22:53.846Z · LW(p) · GW(p)

It's enough for a strong presumption of it being better, pending evidence to the contrary.

There is plenty of evidence that behaviour is not always rational, which in my mind shifts the burden of proof.

Replies from: DanArmak, Watercressed
comment by DanArmak · 2013-07-07T08:26:53.989Z · LW(p) · GW(p)

It's true that people sometimes behave instrumentally-irrationally in the sense that they don't take the correct steps to reach their goal of happiness. But that fact, alone, is relatively weak evidence: people are a little irrational, not completely wrong about what makes them happy.

Your reply can be read very generally ("behavior is not always rational, therefore it's not positively correlated with desired results"). Please specify what you meant more precisely.

Replies from: Yahooey
comment by Yahooey · 2013-07-07T09:20:32.363Z · LW(p) · GW(p)

I'm saying that the argument that most people are doing something is not proof that what they are doing is better. In other words, the fact that most rich people choose not to give away all of their fortune is not proof that being rich is better than being poor. Why they choose not to give it all away cannot be inferred from their actions.

Personally I would state that this is a false dichotomy and that Rich is better than Poor because it is not-Poor. It isn't necessarily the best state of not-Poor.

Replies from: DanArmak
comment by DanArmak · 2013-07-07T13:04:27.836Z · LW(p) · GW(p)

I'm saying that the argument that most people are doing something is not proof that what they are doing is better.

It's evidence that what they are doing is, or leads to, something being better. And in the cases where it isn't, we can point to a specific mechanism that subverts the general rule (e.g.: addiction).

Personally I would state that this is a false dichotomy and that Rich is better than Poor because it is not-Poor. It isn't necessarily the best state of not-Poor.

You seem to be talking about having a middle amount of money.

Whereas I'm saying a simple thing: for any two amounts of money X, Y where X > Y, all else being equal, it is better to have X (more) and not Y (less). And in particular, it's better to have lots of money (rich) than very little (poor).

comment by Watercressed · 2013-07-07T02:57:01.882Z · LW(p) · GW(p)

What do you mean by not rational? People reporting higher satisfaction when they're rich even though they feel less happiness?

comment by A1987dM (army1987) · 2013-07-04T10:42:58.393Z · LW(p) · GW(p)

That's not the case for all the people who have been poor and have been rich (see e.g. certain lottery winners).

I guess it largely depends on how one became rich, as well as how one spends the money.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-07-04T12:23:09.510Z · LW(p) · GW(p)

Rich can be worse than poor, knowledge can be worse than ignorance, sickness can be better than health, and death can be better than life. But none of these are the way to bet.

It is also worth considering the relevant causal graph. Wealth --> Happiness allows of such exceptions. But what do they look like in terms of the causal graph Wealth --> Happiness <-- Character? If someone can't handle a sudden accession of money, is it the money or their personal failings that should be blamed? If you see a friend in that situation, do you advise them to get rid of their money or learn to handle it better?

Replies from: army1987
comment by A1987dM (army1987) · 2013-07-13T17:59:24.699Z · LW(p) · GW(p)

If you see a friend in that situation, do you advise them to get rid of their money or learn to handle it better?

It depends on how good each option would be if it succeeded, and how likely it would be to succeed.

comment by Pablo (Pablo_Stafforini) · 2013-07-01T22:37:07.621Z · LW(p) · GW(p)

Einstein’s theory of relativity suggests that there is no fact of the matter as to when “now” is. Any measurement of time is relative to the perspective of an observer. In other words, if you are traveling very fast, the clocks of others are speeding up from your point of view. You will spend a few years in a spaceship but when you return to earth thousands or millions of years will have passed. Yet it seems odd, to say the least, to discount the well-being of people as their velocity increases. Should we pay less attention to the safety of our spacecraft, and thus the welfare of our astronauts, the faster those vehicles go? If, for instance, we sent off a spacecraft at near the velocity of light, the astronauts would return to earth, hardly aged, millions of years hence. Should we—because of positive discounting—not give them enough fuel to make a safe landing? And if you decline to condemn them to death, how are they different from other “residents” in the distant future?

Tyler Cowen, ‘Caring about the Distant Future: Why it Matters and What it Means’, University of Chicago Law Review, vol. 74, no. 1 (Winter, 2007), p. 10

Replies from: dspeyer, BlazeOrangeDeer, B_For_Bandana, shminux
comment by dspeyer · 2013-07-02T03:07:18.054Z · LW(p) · GW(p)

They are different because when we pack the spaceship with fuel, we control with reasonable certainty whether they make a safe landing or not. As for our millions-of-years descendants, it's very hard to make any statement about us affecting them with >51% confidence (except, "we shouldn't exterminate ourselves").

A lot of what looks like time discounting is really uncertainty discounting.

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2013-07-03T18:24:36.735Z · LW(p) · GW(p)

A lot of what looks like time discounting is really uncertainty discounting.

Cowen is explicitly discussing time discounting. As he writes, "Should we—because of positive discounting—not give them enough fuel to make a safe landing?" (emphasis added) There may of course be other reasons for treating these people differently, including uncertainty about the long-term future, but Cowen is not focusing on these reasons here.

Replies from: fractalman
comment by fractalman · 2013-07-07T19:32:39.228Z · LW(p) · GW(p)

It feels like a terrible example for examining the effects of relativity on utility functions regarding time-discounting; the typical human utility function is going to result in something that approximates Utility(fuel) = stepfunction(fuel purchased - "100% fuel") at around 99-100% fuel, regardless of time-discounting. It's a case of [lands successfully] versus [runs out of fuel 10 seconds too soon and crashes, killing everyone in the rocket].

If you're time-discounting heavily enough not to notice that spike, and fuel is somehow the most expensive part of the whole operation, then you're probably discounting heavily enough that you're better off launching two rockets on one-way trips with about 25-50% fuel each, depending on the specifics of the rocket.

In other words, the example fails to probe to the real heart of the matter, because it doesn't matter whether I use an Einsteinian reference frame or a Newtonian one: my answer is the same, either 100% fuel or very little fuel.
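
As a minimal formalization of the step function gestured at above (notation mine, not from the original comment): writing f for the fraction of the fuel needed for a safe landing that is actually purchased, and \Theta for the Heaviside step function,

\[ U(f) \approx \Theta(f - 1) \]

Any positive time-discount factor merely rescales the heights of the two plateaus; it never moves the jump at f = 1, which is why the discount rate barely affects the fueling decision in this example.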

comment by BlazeOrangeDeer · 2013-07-02T17:34:16.725Z · LW(p) · GW(p)

if you are traveling very fast, the clocks of others are speeding up from your point of view.

This is backwards. Everyone in an inertial frame thinks other people's clocks are slower. Acceleration is what causes the opposite, e.g. turning the spaceship around to come back.

Replies from: pragmatist
comment by pragmatist · 2013-07-02T19:24:27.585Z · LW(p) · GW(p)

You're right that Cowen got it backwards, but you're wrong about this:

Acceleration is what causes the opposite, e.g. turning the spaceship around to come back

Acceleration is not the cause. The reason the astronauts age less is that the path they follow through space-time corresponds to a smaller proper time than the path followed by people who remain on the Earth, and the proper time along a path is what a clock following that path measures. So it's a geometrical fact about the difference between the two paths that causes the asymmetrical aging, not the acceleration of the astronauts.

To make this obvious, it is possible to set up a scenario where another group of astronauts leaves Earth and then returns, accelerating the exact same amount as the first group, but following a path with larger proper time. This second group of astronauts will age more than the first group, even though the accelerations involved were the same.

A lot of elementary presentations of relativity identify acceleration as the relevant factor in twin paradox type cases, but this is wrong (or, more charitably, not entirely right).
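
For reference, a minimal sketch of the formula behind this point (special relativity, flat spacetime, coordinates of the Earth's inertial frame): the proper time a clock accumulates along a worldline is

\[ \tau = \int \sqrt{1 - \frac{v(t)^2}{c^2}} \, dt \]

The stay-at-home twin has v = 0 throughout, so their proper time equals coordinate time; the astronauts spend most of the trip at large v, so their integrand sits well below 1 and their total comes out smaller. Nothing in the integral singles out the accelerated segments: the whole velocity profile of the path determines the aging, which is why two groups with identical accelerations can still age differently.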

Replies from: shminux, BlazeOrangeDeer
comment by shminux · 2013-07-02T20:12:53.366Z · LW(p) · GW(p)

Just to chime in: in Special Relativity, in a simply connected Minkowski spacetime, acceleration is required for differential aging, so "Acceleration is not the cause" is misleading. Not that it is relevant to the issue of positive discounting.

Replies from: pragmatist
comment by pragmatist · 2013-07-02T20:22:54.432Z · LW(p) · GW(p)

But you can get differential aging without any difference in acceleration, so it does seem right to say that acceleration is not the cause of the differential aging. An analogy: Suppose you have two substances in the same lab that are burning at different rates, and you want to figure out the cause of the difference in burn rates. It would be wrong to say that the difference is due to the presence of oxygen in the lab, even though it is true that there would be no differential burning (or any burning at all) without oxygen.

ETA: Perhaps this just devolves into a semantic debate about what we mean when we say "the cause". In the Pearlian framework it seems more natural to talk of multiple causally relevant factors without singling one out as "the" cause. And I admit that the presence or absence of acceleration is a causally relevant factor in the twin paradox. I guess my point was that "acceleration" is not the best explanation for the differential aging. There exists a more fundamental explanation that accounts for many more cases (i.e. when neither observer is inertial, or when the space-time is multiply connected), and allows a precise calculation of the extent of the effect. I think it's a useful heuristic to single out the most explanatory causal factor as "the cause" if you want to play that game, but like I said, that's a semantic point.

Replies from: shminux
comment by shminux · 2013-07-02T20:37:37.936Z · LW(p) · GW(p)

But you can get differential aging without any difference in acceleration

You cannot. The duration and/or magnitude and/or direction of acceleration has to be different for the two worldlines to be different.

Replies from: pragmatist
comment by pragmatist · 2013-07-02T20:43:10.278Z · LW(p) · GW(p)

Check out this diagram for an example of two different worldlines (A and B) without any difference in duration, magnitude or spatial direction of acceleration. The accelerated segments are in red.

Replies from: cousin_it, shminux
comment by cousin_it · 2013-07-22T18:32:28.853Z · LW(p) · GW(p)

Thanks for this! I had the same misconception as shminux.

comment by shminux · 2013-07-02T21:16:38.635Z · LW(p) · GW(p)

Thanks! I stand corrected. The timing of acceleration also matters. I should have known better. Anyway, I agree that

The reason the astronauts age less is that the path they follow through space-time corresponds to a smaller proper time than the path followed by people who remain on the Earth, and the proper time along a path is what a clock following that path measures.

It just seems like a tautology to me (the difference in aging is due to the difference in subjective clocks). To cause this difference one has to make the worldlines diverge, and this means a difference in acceleration profiles.

What I initially was unhappy about is the statement

You're right that Cowen got it backwards, but you're wrong about this:

Acceleration is what causes the opposite, e.g. turning the spaceship around to come back

That last statement is perfectly correct.

comment by BlazeOrangeDeer · 2013-07-02T19:59:35.203Z · LW(p) · GW(p)

I wasn't claiming it was the whole story, but thanks for giving more info. I maybe should have said that you can't have that situation without changing trajectories, but I thought acceleration was a simpler way to summarize.

comment by B_For_Bandana · 2013-07-02T01:50:30.341Z · LW(p) · GW(p)

I agree in principle, but I have basically no confidence in my ability to figure out what to do to help people in the future. There are two obstacles: random error and bias. Random error, because predicting the future is hard. And bias, because any policy I decide I like could be justified as being good for the future people, and that assertion couldn't be refuted easily. The promise of helping even an enormous number of people in the future amounts to Pascal's Wager, where donating to this or that charity or working on this or that research is like choosing this or that religion; all the possibilities cancel out and I have no reliable guide to what to actually do.

Admittedly this is all "I failed my art" stuff rather than the other way around, but well, it's still true.

comment by shminux · 2013-07-01T23:44:20.015Z · LW(p) · GW(p)

Yet it seems odd, to say the least, to discount the well-being of people as their velocity increases.

Is it some kind of non-sequitur? How is it related to positive discounting?

if you decline to condemn them to death, how are they different from other “residents” in the distant future?

Probably because some are more real and others are less so.

Replies from: Jakeness
comment by Jakeness · 2013-07-03T00:31:26.470Z · LW(p) · GW(p)

if you decline to condemn them to death, how are they different from other “residents” in the distant future?

Probably because some are more real and others are less so.

Can you explain in more detail what you mean by this?

Replies from: shminux
comment by shminux · 2013-07-03T07:03:20.316Z · LW(p) · GW(p)

It's pretty reasonable to care about the live people you know more than about some from potential future generations.

comment by PhilR · 2013-07-23T11:48:16.492Z · LW(p) · GW(p)

The man who first declared that "seeing" was "believing" laid his finger (whether he knew it himself or not) on one of the fundamental follies of humanity. The easiest of all evidence to receive is the evidence that requires no other judgment to decide on it than the judgment of the eye—and it will be, on that account, the evidence which humanity is most ready to credit, as long as humanity lasts.

Wilkie Collins, Man and Wife, Chapter the Twentieth

Replies from: Richard_Kennaway, Eugine_Nier
comment by Richard_Kennaway · 2013-07-26T08:26:58.877Z · LW(p) · GW(p)

The thing that is important is the thing that is not seen.

Antoine de Saint-Exupéry, "The Little Prince"

comment by Eugine_Nier · 2013-07-25T02:54:17.676Z · LW(p) · GW(p)

Well, it was a very good heuristic up until photography and then television were invented.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-07-26T08:15:30.845Z · LW(p) · GW(p)

Well, it was a very good heuristic up until photography and then television were invented.

This predates television entirely and photography all but entirely.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-07-29T06:29:14.365Z · LW(p) · GW(p)

Ok, different meanings of the word "see".

comment by Qiaochu_Yuan · 2013-07-22T07:33:34.571Z · LW(p) · GW(p)

There's a big difference between performing an action and endorsing the theory that the action is good.

Anna Salamon (paraphrase)

Replies from: wedrifid
comment by wedrifid · 2013-07-22T17:48:25.173Z · LW(p) · GW(p)

Anna Salamon (paraphrase)

Do we allow quotes from lesswrong users and CFAR instructors now?

A policy that disallows Robin Hanson quotes but permits quotes from Anna Salamon would seem peculiar to me.

Replies from: army1987
comment by A1987dM (army1987) · 2013-07-23T13:32:58.911Z · LW(p) · GW(p)

Whatever the actual rule is, the next time it should be spelled out explicitly.

comment by Pablo (Pablo_Stafforini) · 2013-07-01T22:46:20.065Z · LW(p) · GW(p)

For a few years, I attended a meeting called Animal Behavior Lunch where we discussed new animal behavior articles. All of the meetings consisted of graduate students talking at great length about the flaws of that week’s paper. The professors in attendance knew better but somehow we did not manage to teach this. The students seemed to have a strong bias to criticize. Perhaps they had been told that “critical thinking” is good. They may have never been told that appreciation should come first. I suspect failure to teach graduate students to see clearly the virtues of flawed research is the beginning of the problem I discuss here: Mature researchers who don’t do this or that because they have been told not to do it (it has obvious flaws) and as a result do nothing.

Seth Roberts, ‘Something is better than nothing’, Nutrition, vol. 23, no. 11 (November, 2007), p. 912

Replies from: gwern, Zubon
comment by gwern · 2013-07-02T00:03:31.191Z · LW(p) · GW(p)

Roberts, naturally, has a substantial interest in avoiding any criticism, and the work of people like Ioannidis and the eternal life of publication bias say that, if anything, we are insufficiently critical...

Replies from: Eliezer_Yudkowsky, Pablo_Stafforini
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-07-02T00:40:21.380Z · LW(p) · GW(p)

I think we're looking at the wrong kind of criticism. Like, the kind of criticism you can make with almost equal ease of results that will and won't turn out to replicate later.

comment by Pablo (Pablo_Stafforini) · 2013-07-02T00:37:07.398Z · LW(p) · GW(p)

As you know, I agree with you that Roberts is incorrigibly biased, and I liked your earlier post on this. But I think we can be critical in the sense you have in mind, and still try to cultivate the attitude that I take Roberts to be hinting at. Perhaps this is not very clear in the passage I chose to quote though.

Replies from: TimS
comment by TimS · 2013-07-02T00:48:41.754Z · LW(p) · GW(p)

From an outside view, how can we distinguish this virtue-of-flawed-research from insiders refraining from criticizing each other for the sake of the reputation of the research field?

Replies from: Estarlio
comment by Estarlio · 2013-07-02T12:14:47.335Z · LW(p) · GW(p)

Virtue-of-flawed-research insiders won't refrain from criticising the flaws, but they will follow up on them with further studies expanding on a point or fixing a methodology.

The problem that Roberts might be criticising is the sort of thinking that goes: I've made a criticism, now we can forget about the thing.

comment by Zubon · 2013-07-02T22:25:41.220Z · LW(p) · GW(p)

One of the more useful class discussions I had consciously started with the opposite. The first question was what was good and useful in the week's reading. We proceeded to criticism, but starting with "is there anything useful here?" made the discussion more useful and positive.

comment by James_Miller · 2013-07-01T17:14:07.913Z · LW(p) · GW(p)

Things won are done; joy's soul lies in the doing.

William Shakespeare, Troilus and Cressida, Act 1, Scene 2. I found the quote in The Happiness Hypothesis where this book's author wrote "Pleasure comes more from making progress toward goals than from achieving them."

comment by katydee · 2013-07-24T22:26:00.873Z · LW(p) · GW(p)

Somebody told me how frightening it was how much topsoil we are losing each year, but I told that story around the campfire and nobody got scared.

Jack Handey

comment by Richard_Kennaway · 2013-07-26T08:09:20.439Z · LW(p) · GW(p)

Most times, when a feller's sellin' heaven, he ain't got no heaven to sell.

Penny Arcade on Pascal's Wager.

comment by Estarlio · 2013-07-02T13:07:53.208Z · LW(p) · GW(p)

Vision without action is a daydream. Action without vision is a nightmare.

I believe that one's meant to be a Japanese proverb.

comment by AShepard · 2013-07-02T00:02:16.678Z · LW(p) · GW(p)

We readily inquire, 'Does he know Greek or Latin?' 'Can he write poetry and prose?' But what matters most is what we put last: 'Has he become better and wiser?' We ought to find out not merely who understands most but who understands best. We work merely to fill the memory, leaving the understanding and the sense of right and wrong empty. Just as birds sometimes go in search of grain, carrying it in their beaks without tasting to stuff it down the beaks of their young, so too do our schoolmasters go foraging for learning in their books and merely lodge it on the tip of their lips, only to spew it out and scatter it on the wind.

Michel de Montaigne, Essays, "On schoolmasters' learning"

comment by Ben Pace (Benito) · 2013-07-22T22:01:43.214Z · LW(p) · GW(p)

Often a person uses some folk proverb to explain a behavioral event even though, on an earlier occasion, this same person used a directly contradictory folk proverb to explain the same type of event. For example, most of us have heard or said, “look before you leap.” Now there’s a useful, straightforward bit of behavioral advice—except that I vaguely remember admonishing on occasion, “he who hesitates is lost.” And “absence makes the heart grow fonder” is a pretty clear prediction of an emotional reaction to environmental events. But then what about “out of sight, out of mind”? And if “haste makes waste,” why do we sometimes hear that “time waits for no man”? How could the saying “two heads are better than one” not be true? Except that “too many cooks spoil the broth.” If I think “it’s better to be safe than sorry,” why do I also believe “nothing ventured, nothing gained”? And if “opposites attract,” why do “birds of a feather flock together”? I have counseled many students to “never to put off until tomorrow what you can do today.” But I hope my last advisee has never heard me say this, because I just told him, “cross that bridge when you come to it.”

The enormous appeal of clichés like these is that, taken together as implicit “explanations” of behavior, they cannot be refuted. No matter what happens, one of these explanations will be cited to cover it. No wonder we all think we are such excellent judges of human behavior and personality. We have an explanation for anything and everything that happens. Folk wisdom is cowardly in the sense that it takes no risk that it might be refuted.

  • Keith Stanovich, How to Think Straight About Psychology
Replies from: beoShaffer, Vaniver, Morendil
comment by beoShaffer · 2013-07-23T02:32:32.647Z · LW(p) · GW(p)

dupe

Replies from: Benito
comment by Ben Pace (Benito) · 2013-07-23T08:55:43.200Z · LW(p) · GW(p)

Thanks.

comment by Vaniver · 2013-07-23T00:22:49.343Z · LW(p) · GW(p)

I view those more as helpful labels for general trends. In many situations, there are pressures pushing against each other, and lending weight to one (by mentioning its general label) can push someone off-balance towards a better position. As they say, everything in moderation. ;)

comment by Morendil · 2013-07-22T23:55:12.765Z · LW(p) · GW(p)

taken together as implicit “explanations” of behavior

Huge assumption here. That's not what they are. There's a much more insightful discussion of what they are in Hofstadter's latest, Surfaces and Essences.

comment by CronoDAS · 2013-07-02T07:54:17.042Z · LW(p) · GW(p)

To err is human; to really foul things up requires a computer.

-- Bill Vaughan, accidentally anticipating the dangers of UFAI in 1969

Replies from: Username
comment by Username · 2013-07-02T19:19:10.238Z · LW(p) · GW(p)

You can also turn that around.

To succeed is human; to really make a difference requires a computer.

Suffice to say that AGI is a really big lever.

Replies from: cody-bryce
comment by cody-bryce · 2013-07-03T02:50:10.801Z · LW(p) · GW(p)

There's a saying about India, "Whatever you can rightly say about India, the opposite is also true."

Replies from: ygert, itaibn0
comment by ygert · 2013-07-03T09:42:13.269Z · LW(p) · GW(p)

"Whatever you can rightly say about India, the opposite is false."

Replies from: DanArmak, Eliezer_Yudkowsky
comment by DanArmak · 2013-07-03T22:01:19.908Z · LW(p) · GW(p)

No, no, no. "There exists a statement you can rightly say about India, whose opposite is false."

comment by itaibn0 · 2013-07-06T20:34:33.041Z · LW(p) · GW(p)

To extend on ygert's comment, if the opposite of a statement about India is true, that makes it less right to say. Note that this also applies to ambiguous statements with no clear truth-value and to statements whose opposites are not their negation.

comment by JQuinton · 2013-07-11T21:18:36.912Z · LW(p) · GW(p)

To admit you were wrong is to declare that you are wiser now than you were before

I can't find the original source for this, but I got it from an image floating around Facebook.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2013-07-12T15:15:12.776Z · LW(p) · GW(p)

Well yes, this is true, but one may reasonably prefer a high steady state over an increase to the current level. It's better to have A in the past and A now, than B in the past and A now. The increase is only to be preferred if it is from B to A+, which does not follow from the admission of error.

comment by shminux · 2013-07-07T08:11:00.829Z · LW(p) · GW(p)

Asked to make a 30-second case for On Constitutional Disobedience — his 2013 book that advocates abolishing the U.S. Constitution — Louis Michael Seidman, a constitutional law professor at Georgetown University Law Center, says:

"There's no good reason why we should be bound by decisions made hundreds of years ago by people who are long dead, knew nothing about modern America, and had moral and political views that no sensible person would hold today."

Replies from: ygert, Yahooey, Larks
comment by ygert · 2013-07-07T08:52:31.789Z · LW(p) · GW(p)

I think the main thing that can be said in defense of keeping the Constitution is simply that it is a Schelling point. We need some foundation for our system of laws, and which one do you choose? There are arguments for many options, and I'm not saying the Constitution is necessarily the best. But due to what you might call a historical accident, the Constitution is where we are now. That makes it a Schelling point among all the possible foundations for a system of laws.

Replies from: simplicio
comment by simplicio · 2013-07-08T20:48:59.274Z · LW(p) · GW(p)

Very true, although where the USA is now is really not "the Constitution" simpliciter, so much as "the Constitution + all case law."

comment by Yahooey · 2013-07-07T12:11:30.879Z · LW(p) · GW(p)

The Constitution can be amended; therefore Americans are not bound by decisions made hundreds of years ago. There were 12 amendments passed in the 20th century, the last of which was an amendment that was proposed in 1789 and ratified in 1992.

Replies from: somervta
comment by somervta · 2013-07-19T13:29:28.866Z · LW(p) · GW(p)

cough 30 second case cough

comment by Larks · 2013-07-23T10:09:34.005Z · LW(p) · GW(p)

Why is this a rationality quote?

Replies from: Benito, Zaine
comment by Ben Pace (Benito) · 2013-07-23T10:23:50.519Z · LW(p) · GW(p)

Sunk Cost? Also, Tsuyoku Naritai - we can do so much better with the knowledge currently available to us.

Replies from: Larks
comment by Larks · 2013-07-24T09:47:05.683Z · LW(p) · GW(p)

This isn't a sunk cost. It's not like we used up a large fraction of our paper supply writing the Constitution. Rather, it's a precommitment, a contract, and a Schelling point. There are good reasons to be bound by those, so the quote is false.

comment by Zaine · 2013-07-24T10:10:35.755Z · LW(p) · GW(p)

It's a quote against which one can test their rationality, maybe?

  • When its framers died or when it was written has no relevance; only its merit in guiding a government is relevant.
  • Their moral and political views don't matter either, unless contained in the present US Constitution; this seems like argumentum ad hominem at first glance, but one needs to check the claim before evaluating its persuasiveness.
  • One must argue that knowledge of modern America confers enough of a benefit to forming a working governmental body that scrapping and rewriting the entire U.S. Constitution is preferable to the amendment process.

In an effort to steelman: perhaps the Professor meant to indicate that with the advent of the internet, a representative democracy is no longer the most effective means of running a government by the people, for the people, and of the people. If he was feeling radical, he may have been hinting at how political science has developed as a discipline since the Enlightenment era when the principles founding the U.S. government were theorised; perhaps the best solution is a flexible one, able to adapt to the political system most effective at running an efficient government while still remaining resistant to tyranny. Exempli gratia a futarchy for four years, some form of crypto-direct democracy for eight years, a modified version of Finland's government for ten years, etcetera.

comment by shminux · 2013-07-29T22:40:04.871Z · LW(p) · GW(p)

There can certainly be no question of malice or premeditation on the part of the computers; they merely do whatever requires the least amount of effort, just as water will inevitably flow downhill and not up. But while water may be easily dammed, it is far more difficult to control all the possible deviations of intelligent machines.

Stanislaw Lem, The Futurological Congress (1971)

comment by cody-bryce · 2013-07-22T01:47:23.160Z · LW(p) · GW(p)

Every book is a children's book if the kid can read

Mitch Hedberg

comment by Risto_Saarelma · 2013-07-02T08:51:21.077Z · LW(p) · GW(p)

The biggest difference between literary fiction set in the future and Science fiction is that literary fictioneers don't really believe in the future. History is merely a spiral of ever widening crap, and we are on the brink of the abyss. Any opinions otherwise must be exterminated.

Instructor in The Guardian comment section

Replies from: Kyre, Document, Document
comment by Kyre · 2013-07-02T23:40:03.136Z · LW(p) · GW(p)

Story I heard from a bookshop clerk about the (sadly deceased) Iain M. Banks. He was being interviewed on the South Bank Show and the interviewer asked, in a slightly condescending manner, "why did you start writing science fiction?", and he replied "I wanted to make sure I was good enough first."

comment by Document · 2013-08-03T05:56:48.715Z · LW(p) · GW(p)

"The biggest difference between literary fiction set in the future and Science fiction is that science fictioneers don't really believe in the future. History is merely an adventure story of good guys beating adversity, and we are on the brink of a glorious new age. Any opinions otherwise must be sneered at."

Alternatively:

The future is not the realization of our hopes and dreams, a warning to mend our ways, an adventure to inspire us, nor a romance to touch our hearts. The future is just another place in spacetime.

comment by Document · 2013-07-02T23:13:31.931Z · LW(p) · GW(p)

...So every dystopia and cautionary tale is "literature" and inferior, while generic pew-pew space opera is always produced by "real" belief in the future and never pure profit motive or anything of the sort?

(Edit: or status motivation, nostalgia/desire to emulate something one likes, "lasers are cool" aestheticism, or whatever other reasons.)

Replies from: BloodyShrimp
comment by BloodyShrimp · 2013-07-02T23:18:08.820Z · LW(p) · GW(p)

I'd guess the quotee wouldn't call generic space opera "science fiction" either. I sure wouldn't, myself.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-03T00:14:44.578Z · LW(p) · GW(p)

Indeed. Space opera is Romanticism; science fiction is Enlightenment (tvtropes link).

Replies from: Nornagest
comment by Nornagest · 2013-07-03T00:21:17.377Z · LW(p) · GW(p)

I'm not sure I'd go quite that far. The Culture novels are space opera, for example, yet they fall on the Enlightenment side of the divide; likewise for Star Trek, modulo a few more Romantic episodes and themes. Star Wars, however, is very, very Romantic, and it's probably the first thing that comes to mind for most people when you bring up space opera.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-03T01:12:47.063Z · LW(p) · GW(p)

There's definitely a spectrum. I'm with you on the Culture, but I'm not sure I agree about Star Trek. If Star Trek is space opera, what qualifies as science fiction...?

Replies from: Kaj_Sotala, Nornagest, William_Quixote
comment by Kaj_Sotala · 2013-07-03T10:01:24.618Z · LW(p) · GW(p)

If Star Trek is space opera, what qualifies as science fiction...?

To me, the defining element of "real" science fiction is that it actually explores the possible consequences of worlds where the science is different (either because people have made inventions that don't exist in our world, because the laws of physics themselves are different, or - for scifi based on the social sciences - because the society is different; a lot of cyberpunk is arguably more sociological than hard-science in focus), taking that as its starting point.

So I would say that anything that actually takes the science or scientific question as a starting point qualifies. Star Trek is infamous for doing the opposite - the writers would actually write scripts with dialogue like "Mr. Data, the [tech] so that the [tech] doesn't blow up", and then somebody would replace the [tech] tags with scientific-sounding words that fit into the wanted context.

Replies from: hylleddin, SaidAchmiz
comment by hylleddin · 2013-07-09T04:21:05.455Z · LW(p) · GW(p)

Of course, many works traditionally labeled fantasy also prefer to explore the consequences of worlds with different physics (HPMoR, for example). I've heard this called "Hard fantasy".

comment by Said Achmiz (SaidAchmiz) · 2013-07-03T14:47:14.415Z · LW(p) · GW(p)

If you include sociological sf in your definition (which, I agree, you absolutely should), then Star Trek seems to qualify. The utopian society of the Federation is one of the key background facts of the franchise (assumed in TOS, explored in more depth in TNG, then deconstructed to some degree in DS9).

You're right about the technobabble, of course. However, that's often just a mask for the actual exploration of sociological/psychological concepts. And there was technological/hard-scientific stuff too: TOS's "City on the Edge of Forever" was a classic time travel story. The Borg were an examination of transhumanism (a biased one, of course, but still). The Mirror Universe episodes are another example.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2013-07-09T14:51:32.567Z · LW(p) · GW(p)

That's why I emphasized the "taking that as its starting point" bit.

The old joke about space opera is that they're Westerns in space, with space ships substituted for horses and laser guns substituted for ordinary guns. Now if a writer literally created a series by doing that - saying "hey, let's make a Western in space and make these substitutions" - they wouldn't be exploring the social consequences of space ships and laser guns, they'd just be adding fancy tropes on a story in order to make it seem more cool.

Now Star Trek is definitely not this bad. Many Star Trek episodes do seem to have been written with the purpose of exploring the consequence of something. But overall - especially with the more recent series - it does feel like there are more episodes that were conceived as a way of telling a cool story first, with the technological/social elements being added as the extra spice, rather than as serious exploration of the elements.

Of course, this is a spectrum and not a clear-cut split, and Star Trek is more sci-fi than many other series. But if I had to choose, I'd say it's closer to the "space opera" end than the "sci-fi" end. (In general, I can think of very few TV series that I'd really put in the "sci-fi" end - most "real" sci-fi tends to be written rather than televised, in my experience. Though The Prisoner would qualify.)

Replies from: Jiro
comment by Jiro · 2013-07-09T15:24:44.437Z · LW(p) · GW(p)

"It's not real science fiction, it's just a ___ in space" is a common and tempting meme, but I'm not sure it holds together when you think about it. Imagine if we were to apply this reasoning to other genres, maybe even to Westerns themselves. "It's not a Western, it's just a romance with horses and cowboys. To be a real Western the Western setting must have some effect on the plot, but a romance can happen in any context. Just replace 'barmaid' with 'minimum wage clerk'" "It's not a police drama, it's a Western, they're just chasing each other using cars instead of horses. You're not exploring the social consequences of the fact that they're in a 21st century police station rather than just a sheriff in the West".

Or do it more narrowly. "Sure, it has some science fiction elements, but most of it is still a Western. The cloning device has social consequences that affect the story, but the spaceships and lasers don't. It could just as easily be a story that has a cloning device but is set in the modern era without any spaceships or lasers."

Actually, I find it hard to think of many stories where spaceships and lasers would have an effect at all. I realize, of course, that spaceships and lasers are just examples, but generally, spaceships and lasers don't have any effect on the story that couldn't have been done in a Western--spaceships let you travel faster, but the universe is larger than the setting of a Western, with the net effect being the same--there are some near destinations and some hard-to-get-to destinations.

Is The X-Files nothing but a police procedural because investigations and government coverups could happen without there being space aliens? (Sure, some of the specific investigations and coverups require aliens, but the same basic thing could be done without them.)

Replies from: Nornagest, Kaj_Sotala
comment by Nornagest · 2013-07-09T18:36:21.731Z · LW(p) · GW(p)

"It's not a police drama, it's a Western, they're just chasing each other using cars instead of horses. You're not exploring the social consequences of the fact that they're in a 21st century police station rather than just a sheriff in the West".

...to be fair, I've described a number of movies in exactly those terms before. You can find stylistic descendants of the Western all over modern action movies, even though the original genre is pretty much dead by now.

I was talking about style and themes more than about plot requirements, though, and I think it's a mistake to ignore those in a discussion of genre boundaries.

comment by Kaj_Sotala · 2013-07-09T15:43:54.806Z · LW(p) · GW(p)

Imagine if we were to apply this reasoning to other genres, maybe even to Westerns themselves.

The problem with this argument is that the "real sci-fi is about exploring the consequences of alternative worlds" definition is really a rather anomalous way of defining a genre - genres tend to be more commonly defined by the tropes that they employ. A police drama doesn't have "exploring the social consequences of being in a 21st century police station" as a necessary condition of the genre in the same way that "real" sci-fi has "exploring the consequences of alternative worlds" as a necessary condition. (And note the scare quotes on "real" sci-fi, because by the common definition of sci-fi, it's all about the tropes as well, and the thing about having to explore new ideas is only a niche definition employed by a small group of people who take their sci-fi far more seriously than they should. Yes, I include myself in that group.)

Or do it more narrowly. "Sure, it has some science fiction elements, but most of it is still a Western. The cloning device has social consequences that affect the story, but the spaceships and lasers don't. It could just as easily be a story that has a cloning device but is set in the modern era without any spaceships or lasers."

But of course, if the story really is about the cloning device, then it would be just as much sci-fi even if it was set in the modern era.

Actually, I find it hard to think of many stories where spaceships and lasers would have an effect at all. I realize, of course, that spaceships and lasers are just examples, but generally, spaceships and lasers don't have any effect on the story that couldn't have been done in a Western--spaceships let you travel faster, but the universe is larger than the setting of a Western, with the net effect being the same--there are some near destinations and some hard-to-get-to destinations.

You couldn't really do generation ships in a Western, or explore the effects of travel at relativistic speeds, or the consequences of colder wars, or...

The potential room for exploration does seem smaller for laser guns, but it could be relevant in e.g. a police story - maybe it's harder to identify the laser gun that was used to kill someone than it is to identify a traditional murder weapon, which leaves a bullet in the body.

Replies from: Jiro
comment by Jiro · 2013-07-09T16:13:23.737Z · LW(p) · GW(p)

genres tend to be more commonly defined by the tropes that they employ.

Yes, that was part of my point. Why are we suddenly departing from this for science fiction?

But of course, if the story really is about the cloning device, then it would be just as much sci-fi even if it was set in the modern era.

But you wouldn't say "25% of this story is about the cloning and 75% is about the bus rides and handguns, so this modern day story is only 25% sci-fi". Yet replace bus rides and handguns with spaceships and lasers, and make it a series, and suddenly it's "not very much sci-fi" because not very much of the technological elements affect society.

(Especially if it's a series. Something that may appear every week in a series--because it's a series--may not have social effects every week.)

You couldn't really do generation ships in a Western

But in practice, a series that is not completely based around generation ships won't have them most of the time. It's unrealistic to say that Star Trek isn't real sci-fi unless each episode with a spaceship has a generation ship or other element that shows the spaceships are having a social effect.

maybe it's harder to identify the laser that was used to kill someone than it is with a traditional weapon

Is that a social consequence, though? Or just a consequence? I can easily think of modern analogs for such a thing, after all.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2013-07-09T16:53:41.233Z · LW(p) · GW(p)

Why are we suddenly departing from this for science fiction?

Because the "sci-fi is about exploring consequences" definition is useful for identifying a cluster of stories that some people (me included) find particularly interesting and enjoyable, and purely trope-based definitions wouldn't identify that cluster correctly.

But you wouldn't say "25% of this story is about the cloning and 75% is about the bus rides and handguns, so this modern day story is only 25% sci-fi". Yet replace bus rides and handguns with spaceships and lasers, and make it a series, and suddenly it's "not very much sci-fi" because not very much of the technological elements affect society.

(Especially if it's a series. Something that may appear every week in a series--because it's a series--may not have social effects every week.)

I can give a rough estimate of whether or not a story gives a sense of doing the kind of novel exploration of concepts that I haven't seen done before. If a series only very rarely gives that kind of a sense, then it's not very sci-fi.

But in practice, a series that is not completely based around generation ships won't have them most of the time. It's unrealistic to say that Star Trek isn't real sci-fi unless each episode with a spaceship has a generation ship or other element that shows the spaceships are having a social effect.

I don't think I ever said that the exploration had to always be about the spaceships? Plenty of other concepts that Star Trek could explore as well.

Is that a social consequence, though? Or just a consequence?

Sociological sci-fi was defined to be about social consequences, but sci-fi in general doesn't have to be about them in particular. Could be e.g. the logical consequences as well - Asimov had a bunch of stories about the Three Laws of Robotics that were essentially just logic puzzles.

Replies from: Jiro
comment by Jiro · 2013-07-09T22:30:03.730Z · LW(p) · GW(p)

I don't think I ever said that the exploration had to always be about the spaceships? Plenty of other concepts that Star Trek could explore as well.

So something can have many strange or futuristic elements, but only one or a few of those elements needs to have an effect in any one story for it to count as sci-fi?

If that's the case, then even the Star Trek movies count as sci-fi. Even the first movie has a time traveller affecting history, Vulcan being blown up, and a transporter rigged up to go a very long distance. It's hard to do those in a modern day story or a Western without something very contrived.

Heck, even Star Wars counts. It has the Force. Having a world where mysticism works is a big change that has noticeable effects on what the characters can do, and how the audience would react to them. Plenty of people here, watching a similar story taking place in the modern world where mysticism has no reproducible effects, would think that Yoda is a charlatan and that Luke should flee to keep his rationality intact (This goes double because in the real world there's no such thing as a combat skill that only a few dozen people in all of existence are capable of learning. You'd have a very hard time writing Star Wars as a Western without wondering why Darth Vader shouldn't be fought by a posse instead of by a single hero.)

comment by Nornagest · 2013-07-03T01:32:22.485Z · LW(p) · GW(p)

That's a good question. Probably not one that has an answer you can get a decent majority of SF fans to agree on, unfortunately.

My take on it is that you're not going to have much luck defining genres in terms of static attributes; they're more like loosely bound clusters in the space of themes, tropes, and influences. Star Trek's clearly inheriting from older SF-genre stories -- Forbidden Planet, certain Larry Niven novels, all sorts of stuff if you break it down to individual episodes -- so I'm comfortable calling it that.

Space opera, meanwhile, points to a thread that's interwoven with SF but not encompassed by it. It influences a lot of media that also use SF themes and which I'd feel comfortable filing under both categories: the Culture books, Battlestar Galactica, and so forth. But it also influences some that don't; Star Wars draws from planetary romance, heroic fantasy, and samurai movies, but not much pure SF, so I might call it space opera but not science fiction. Or I might not, depending on how wide a net I want to cast with the term.

comment by William_Quixote · 2013-07-05T00:44:07.256Z · LW(p) · GW(p)

Vinge

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-05T01:37:16.843Z · LW(p) · GW(p)

I don't see any obvious reason for not counting A Fire Upon the Deep as space opera, actually. Maybe it's not a spectrum after all!

comment by Eugine_Nier · 2013-07-02T03:40:09.389Z · LW(p) · GW(p)

I remember a discussion of autonomous man from Locke where I put out the obvious objection that it hypothesized a man who originated “perfet formes, Limb’d and full grown: out of the ground up-rose” which is not very useful because it has no relation to reality.

Mary

comment by shminux · 2013-07-29T20:52:49.556Z · LW(p) · GW(p)

Scott Aaronson on optimal philanthropy (quoted somewhat out of context):

Suppose you had asked yourself, as a teenager, “how should I live my life so as to maximize my impact on reducing the most widespread, obvious forms of human suffering today, like childhood deaths from malaria?” And then set out, as an earnest utilitarian, to implement your answer? What would the result look like?

It seems clear that your life would look nothing at all like Mother Teresa’s, or that of any other traditional “saint.” But it might look a helluva lot like Bill Gates’s. That is, the best strategy might well be to spend the first half of your career making billions of dollars almost any way you could—stealing other people’s ideas, making deals only to backstab your partners later, locking customers in to buggy, inferior products, whatever—and then to spend the second half giving your billions away, thinking very hard about how to maximize the impact of each grant.

comment by Zubon · 2013-07-02T22:15:30.924Z · LW(p) · GW(p)

This was well done, and fairly done too, for anything that wins is fair in war, and the greatest victory is the one that takes the fewest blows.

Stranger-Come-Knocking on why rationalists win life-or-death fights in The Heroes by Joe Abercrombie

comment by Pablo (Pablo_Stafforini) · 2013-07-01T22:36:15.507Z · LW(p) · GW(p)

We live during the hinge of history. Given the scientific and technological discoveries of the last two centuries, the world has never changed as fast. We shall soon have even greater powers to transform, not only our surroundings, but ourselves and our successors. If we act wisely in the next few centuries, humanity will survive its most dangerous and decisive period. Our descendants could, if necessary, go elsewhere, spreading through this galaxy.

Derek Parfit, On What Matters, vol. 2, Oxford, 2011, p. 616

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-07-01T23:09:08.407Z · LW(p) · GW(p)

Repost.

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2013-07-01T23:42:53.928Z · LW(p) · GW(p)

Thanks, retracted.

comment by Said Achmiz (SaidAchmiz) · 2013-07-15T16:57:26.146Z · LW(p) · GW(p)

Philosophers, I have said, should study AI. Should AI workers study philosophy? Yes, unless they are content to reinvent the wheel every few days. When AI reinvents a wheel, it is typically square, or at best hexagonal, and can only make a few hundred revolutions before it stops. Philosophers' wheels, on the other hand, are perfect circles, require in principle no lubrication, and can go in at least two directions at once. Clearly a meeting of minds is in order.

-- Daniel Dennett

Replies from: PhilGoetz
comment by PhilGoetz · 2013-07-16T14:53:14.613Z · LW(p) · GW(p)

I'd be interested in any specific examples of things AI workers can learn from philosophy at the present time. There has been at least one instance in the past: AI workers in the 1960s should have read Wittgenstein's discussion of games to understand a key problem with building symbolic logic systems that have an atomic symbol correspond to each dictionary word. But I can't think of any other instances.

Replies from: threewestwinds, Daniel_Burfoot, TimS
comment by threewestwinds · 2013-07-27T09:58:05.561Z · LW(p) · GW(p)

Timeless decision theory, what I understand of it, bears a remarkable resemblance to Kant's Categorical Imperative. I'm re-reading Kant right now (it's been half a decade), but my primary recollection was that the categorical imperative boiled down to "make decisions not on your own behalf, but as though you decided for all rational agents in your situation."

Some related criticisms of EDT are weirdly reminiscent of Kant's critiques of other moral systems based on predicting the outcome of your actions. "Weirdly reminiscent of" rather than "reinventing" intentionally, but I try not to be too quick to dismiss older thinkers.

comment by Daniel_Burfoot · 2013-07-16T21:52:07.386Z · LW(p) · GW(p)

AI workers in the 1960s should have read Wittgenstein's discussion of games to understand a key problem with building symbolic logic systems that have an atomic symbol correspond to each dictionary word.

Can you elaborate on this? It sounds fascinating. I confess I can't make heads or tails of Wittgenstein.

Replies from: pragmatist
comment by pragmatist · 2013-07-24T12:40:31.869Z · LW(p) · GW(p)

Wittgenstein, in his discussion of games (specifically, his idea that concepts are delineated by fuzzy "family resemblance", rather than necessary and sufficient membership criteria) basically makes the same points as Eliezer does in these posts.

Representative quotes:

Consider for example the proceedings that we call "games". I mean board-games, card-games, ball-games, Olympic games, and so on. What is common to them all? -- Don't say: "There must be something common, or they would not be called 'games'" -- but look and see whether there is anything common to all. -- For if you look at them you will not see something that is common to all, but similarities, relationships, and a whole series of them at that. To repeat: don't think, but look! -- ...

And the result of this examination is: we see a complicated network of similarities overlapping and criss-crossing: sometimes overall similarities.

I can think of no better expression to characterize these similarities than "family resemblances"; for the various resemblances between members of a family: build, features, colour of eyes, gait, temperament, etc. etc. overlap and criss-cross in the same way. And I shall say: 'games' form a family...

"All right: the concept of number is defined for you as the logical sum of these individual interrelated concepts: cardinal numbers, rational numbers, real numbers, etc.; and in the same way the concept of a game as the logical sum of a corresponding set of sub-concepts." --It need not be so. For I can give the concept 'number' rigid limits in this way, that is, use the word "number" for a rigidly limited concept, but I can also use it so that the extension of the concept is not closed by a frontier. And this is how we do use the word "game". For how is the concept of a game bounded?

comment by TimS · 2013-07-16T15:14:52.897Z · LW(p) · GW(p)

Moral philosophy in general is under-appreciated in FAI discussion in this community.

LW Metaethics Sequence : Solving Actual Moral Dilemmas as Inventing Peano Arithmetic : Inventing Artificial Intelligence. In short, an important and insightful first step. Hardly a conclusive resolution of the outstanding issues.

But if we want Friendly AI, we need to be able to tell it how to resolve moral disputes somehow. I have no idea if recent moral philosophy (post-1980) has the solutions, but I feel that even folks around here underestimate the severity of the problems implied by the Orthogonality Thesis.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-07-22T11:07:24.937Z · LW(p) · GW(p)

Could you please be more specific and give me one example of an actual moral dilemma that is solved by moral philosophy and could be a useful lesson for metaethics?

comment by arborealhominid · 2013-07-16T21:22:11.777Z · LW(p) · GW(p)

Whatever you think can't be done, somebody will come along and do it.

Thelonious Monk

Replies from: Vaniver
comment by Vaniver · 2013-07-16T21:57:45.347Z · LW(p) · GW(p)

I would be happier with this quote if the emphasis were on "think," because impossibility proofs are possible sometimes.

Replies from: arborealhominid
comment by arborealhominid · 2013-07-17T00:25:22.603Z · LW(p) · GW(p)

The emphasis I used was in the original, but I agree that it would work better with the emphasis on "think."

Replies from: aphyer
comment by aphyer · 2013-07-18T05:13:14.631Z · LW(p) · GW(p)

Not sure I agree with that. Emphasis on "think" undercuts the point: I wouldn't say that I "think you can't jump over the moon", even though I do not have a formal proof of impossibility handy for that, I'd just say "you can't do that."

In fact, I almost like it better without the word "think" at all: "Whatever can't be done, someone will come along and do it." YMMV, though.

comment by Jayson_Virissimo · 2013-07-10T05:35:20.052Z · LW(p) · GW(p)

Whatever I have up till now accepted as most true and assured I have gotten either from the senses or through the senses. But from time to time I have found that the senses deceive, and it is prudent never to trust completely those who have deceived us even once.

-- Rene Descartes, Meditations On First Philosophy

comment by Jayson_Virissimo · 2013-07-10T05:23:12.897Z · LW(p) · GW(p)

Ars longa, vita brevis, occasio praeceps, experimentum periculosum, iudicium difficile.

-- Hippocrates

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2013-07-10T05:25:17.876Z · LW(p) · GW(p)

Something like "the art is long [to learn], life is short, opportunity precipitous, experiment perilous, judgment difficult", but it should be pretty obvious to English speakers.

comment by gressettd · 2013-07-03T23:51:16.070Z · LW(p) · GW(p)

Demonizing the other is the prelude to the subsequent doing of demonic things to that other.

comment by elharo · 2013-07-03T10:26:02.811Z · LW(p) · GW(p)

I’ll mention what I’ll call the “radio theory” of brains. Imagine that you are a Kalahari Bushman and that you stumble upon a transistor radio in the sand. You might pick it up, twiddle the knobs, and suddenly, to your surprise, hear voices streaming out of this strange little box. If you’re curious and scientifically minded, you might try to understand what is going on. You might pry off the back cover to discover a little nest of wires. Now let’s say you begin a careful, scientific study of what causes the voices. You notice that each time you pull out the green wire, the voices stop. When you put the wire back on its contact, the voices begin again. The same goes for the red wire. Yanking out the black wire causes the voices to get garbled, and removing the yellow wire reduces the volume to a whisper. You step carefully through all the combinations, and you come to a clear conclusion: the voices depend entirely on the integrity of the circuitry. Change the circuitry and you damage the voices.

Proud of your new discoveries, you devote your life to developing a science of the way in which certain configurations of wires create the existence of magical voices. At some point, a young person asks you how some simple loops of electrical signals can engender music and conversations, and you admit that you don’t know—but you insist that your science is about to crack that problem at any moment.

Your conclusions are limited by the fact that you know absolutely nothing about radio waves and, more generally, electromagnetic radiation. The fact that there are structures in distant cities called radio towers—which send signals by perturbing invisible waves that travel at the speed of light—is so foreign to you that you could not even dream it up. You can’t taste radio waves, you can’t see them, you can’t smell them, and you don’t yet have any pressing reason to be creative enough to fantasize about them. And if you did dream of invisible radio waves that carry voices, who could you convince of your hypothesis? You have no technology to demonstrate the existence of the waves, and everyone justifiably points out that the onus is on you to convince them.

So you would become a radio materialist. You would conclude that somehow the right configuration of wires engenders classical music and intelligent conversation. You would not realize that you’re missing an enormous piece of the puzzle.

I’m not asserting that the brain is like a radio—that is, that we’re receptacles picking up signals from elsewhere, and that our neural circuitry needs to be in place to do so—but I am pointing out that it could be true. There is nothing in our current science that rules this out. Knowing as little as we do at this point in history, we must retain concepts like this in the large filing cabinet of ideas that we cannot yet rule in favor of or against. So even though few working scientists will design experiments around eccentric hypotheses, ideas always need to be proposed and nurtured as possibilities until evidence weighs in one way or another.

--David Eagleman, Incognito: The Secret Lives of the Brain, Random House, pp. 221-222

Replies from: Eliezer_Yudkowsky, DanArmak, mwengler, Martin-2
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-07-03T21:55:48.134Z · LW(p) · GW(p)

If you could damage wires in a certain way and make the voices forget how to pronounce nouns, eliminate their short-term but not long-term memory, damage their color words, and so on, you would have a solid case for the wires doing internal, functional information-processing in causal arrangements which permitted the final output to be permuted in ways that corresponded to perturbing particular causal nodes. In much the same way, a calculator might be thought to be a radio if you are ignorant of its internals, but if you have a hypothesis that the calculator contains a binary half-adder and you can perturb particular transistors and see wrong answers in a way that matches what the half-adder hypothesis predicts for perturbing that transistor, you have shown the answers are generated internally rather than externally. In a world where we can directly monitor a cat's thalamus and reconstruct part of its visual processing field, the radio hypothesis is not just privileging a hypothesis without evidence, it is frantically clinging to a hypothesis with strong contrary evidence in denial of a hypothesis with detailed confirming evidence.
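
To make the half-adder test concrete, here is a minimal sketch (hypothetical Python; the gate layout and the "stuck node" fault are illustrative assumptions, not any real calculator's circuitry). The point is that the internal-computation hypothesis predicts a specific, node-dependent pattern of wrong answers, which a receiver-of-external-signals hypothesis does not:

```python
def half_adder(a, b, stuck_xor=None):
    """Return (sum_bit, carry_bit); optionally force the XOR node's output."""
    s = (a ^ b) if stuck_xor is None else stuck_xor  # the perturbed "transistor"
    c = a & b                                        # the AND gate is untouched
    return s, c

# Prediction of the half-adder hypothesis: forcing the XOR node to 0 breaks
# the sum bit exactly when a != b, and never affects the carry bit.
for a in (0, 1):
    for b in (0, 1):
        good, bad = half_adder(a, b), half_adder(a, b, stuck_xor=0)
        print(a, b, good, bad, "mismatch" if good != bad else "ok")
```

If the observed errors match that predicted pattern, the answers are being generated inside the box; a pure "radio" would have no reason to fail in this structured, node-specific way.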

Replies from: Pfft, BloodyShrimp
comment by Pfft · 2013-07-06T00:44:22.485Z · LW(p) · GW(p)

(I don't think the cat experiments are very conclusive here. As far as I know, the functions that have been identified in the early visual system are things like edge detection and motion detection. But such functions are used for video compression. So not only could a radio set perform them in principle, an ordinary digital TV set already does.)
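
For concreteness, a minimal sketch of the kind of edge detection at issue (hypothetical Python using numpy; the kernel and test image are illustrative assumptions, not a model of the cat experiments). A Sobel-style convolution responds strongly at intensity edges, the same low-level feature video codecs exploit:

```python
import numpy as np

# A Sobel-style kernel responds to horizontal intensity gradients -- the kind
# of low-level feature shared by early visual processing and video codecs.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])

def filter2d(image, kernel):
    """Naive valid-mode 2D cross-correlation."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical edge: dark left half, bright right half.
img = np.zeros((5, 6))
img[:, 3:] = 1.0
print(filter2d(img, sobel_x))  # strong response near the edge, zero elsewhere
```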

comment by BloodyShrimp · 2013-07-03T23:01:52.887Z · LW(p) · GW(p)

I don't think this is quite where the analogy was. The brain's information-processing features you describe seem to be analogous to the radio's volume and clarity... it seems Eagleman was trying to compare the radio's content not to the brain's content, but to consciousness or something. At least, that's the best steelmanning attempt I've got.

comment by DanArmak · 2013-07-03T19:04:30.360Z · LW(p) · GW(p)

This isn't an ancient pre-scientific text; it was written in 2011. I completely disagree with the claim that:

I’m not asserting that the brain is like a radio—that is, that we’re receptacles picking up signals from elsewhere, and that our neural circuitry needs to be in place to do so—but I am pointing out that it could be true. There is nothing in our current science that rules this out. Knowing as little as we do at this point in history, we must retain concepts like this in the large filing cabinet of ideas that we cannot yet rule in favor of or against.

There's also nothing in our current science that rules out a teapot orbiting the sun. That does not mean a hypothesis with no evidence for it should be elevated to the level of serious discussion.

There is no reason to think the brain could possibly be receiving "marching orders" from elsewhere, and we absolutely should discard this concept and rule firmly against it. And the same goes for any other equally unfounded ideas that this is an allegory for.

ideas always need to be proposed and nurtured as possibilities until evidence weighs in one way or another.

No, because there is an infinity of ideas you could consider. You must wait until evidence weighs sufficiently in favor of some one idea to elevate it above the others, before considering it at all.

comment by mwengler · 2013-07-20T16:06:29.527Z · LW(p) · GW(p)

Some of the things you would discover would include that in some locations the voices don't show up. Investigating that, you would find that deep in caves they were gone. If you had access to the materials radios are made from, you would discover that in a metal box the voices don't show up. You would infer from this that the voices are coming from outside and are somehow picked up by the box. You might also discover by putting pieces of radios together differently that you could get your own voice to come out of the speaker by hooking up two speakers in series with the power source.

My point is that you would learn a lot more about what is really going on than this long quote suggests.

comment by Martin-2 · 2013-07-09T03:32:12.095Z · LW(p) · GW(p)

I like the premise. Last month's Douglas Hofstadter quote comes to mind. Some problems:

At some point, a young person asks you how some simple loops of electrical signals can engender music and conversations... you insist that your science is about to crack that problem at any moment.

Why would I insist this? I don't even know how the electrical signals (the what?!) change the volume. I just know how to make the wires change the volume, and I know how to make them change the music too.

You would conclude that somehow the right configuration of wires engenders classical music and intelligent conversation. You would not realize that you’re missing an enormous piece of the puzzle.

Some inquisitive Bushman I turned out to be. This is still a very magical radio.

Also, I think a clever Bushman could figure out that the radio is transmitting sounds from somewhere else. It is the reality after all so there are clues. He hears a person talking when no one's there; the circuitry is too simple to write symphonies and simulate most human discussion; the radio doesn't work in caves...

comment by Qiaochu_Yuan · 2013-07-25T01:27:02.011Z · LW(p) · GW(p)

I'm not clear whether this morally violates the third rule (some clarification on this would be appreciated), but I liked this quote a lot so here goes.

Economics is a theory of what the world would be like if it were run by System 2.

Michael Vassar

Replies from: Vaniver
comment by Vaniver · 2013-07-25T02:24:09.673Z · LW(p) · GW(p)

At present, I would recommend against quoting anyone currently or previously employed by MIRI or CFAR, but I think it may be worth having a conversation (in its own discussion thread) about what the rules for the Rationality Quotes thread should be.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-07-25T02:29:01.865Z · LW(p) · GW(p)

Fair enough.

comment by Davidmanheim · 2013-07-08T03:22:29.430Z · LW(p) · GW(p)

Not a quote, but a word I have needed for a long time to refer to people who are stubbornly anti-rational:

Mumpsimus /ˈməmpsiməs/ - Noun

1) A traditional custom or notion adhered to although shown to be unreasonable. 2) A person who obstinately adheres to such a custom or notion.

Replies from: simplicio
comment by simplicio · 2013-07-08T20:45:58.030Z · LW(p) · GW(p)

I don't think that IPA pronunciation is plausible given the spelling. English rarely stresses a schwa. I would probably pronounce it /'mʌmpsɪməs/ or less likely /mʌmp'siməs/.

FYI, you are probably being downvoted because this looks like another arrow in the tribalism quiver, rather than advice on how to actually become more rational. Many are the traditions, customs or notions that I have seen "proven" unreasonable in my life, which were either not unreasonable, or not yet plausibly replaceable by more reasonable ones.

Replies from: Davidmanheim
comment by Davidmanheim · 2013-07-08T22:40:28.145Z · LW(p) · GW(p)

I appreciate the comment on pronunciation; dictionary.com and wiktionary disagree with me, but I got the pronunciation from Google. I assume you're correct.

I think it's reasonable to say that this quotes list should be restricted to quotes that help you become more rational, but if so, there should be some indication that this is the goal. Despite that, the idea applies to anyone, one's self included, who persists in some action shown conclusively to be incorrect (such as pronouncing nuclear "Nookular"). There may be a reason to do so, but that does not change the idea. It is attributed to either Erasmus or his contemporary Richard Pace.

Wikipedia has the story:

"The term originates from a story about a priest who misread sumpsimus [latin:"we have received"] as mumpsimus. After being told about his mistake he stated that he had been using mumpsimus for a number of years and was not about to change, saying "I've got so used to using the word mumpsimus that I'll just go on saying it that way.""

Replies from: simplicio
comment by simplicio · 2013-07-09T14:12:55.922Z · LW(p) · GW(p)

Such as pronouncing nuclear "Nookular"

Your test case for "conclusively shown to be incorrect" should probably not be a pronunciation, given that pronunciations are basically matters of convention that inevitably change over time. How do you pronounce "February?" Or for that matter "laugh?" The mismatch between spelling and pronunciation in these words is not some crazy whim of English speakers for inserting extra letters; we really used to pronounce those letters and now we don't because the fashion changed.

With regard to rationality, the point is that this quote is going to be a "force" for anti-rationality with a large subset of readers. New names to call one's opponents rarely conduce to the best thinking; more often they merely serve to make both our minds and our social groups yet more insular. This is not always true, but it's the way to bet.

Replies from: army1987
comment by A1987dM (army1987) · 2013-07-09T14:19:32.526Z · LW(p) · GW(p)

The mismatch between spelling and pronunciation in these words is not some crazy whim of English speakers for inserting extra letters

Sometimes it is, for example the S in "island" was added to make it look more like "isle", from which it did not originate.

Replies from: wedrifid
comment by wedrifid · 2013-07-09T16:46:59.713Z · LW(p) · GW(p)

Sometimes it is, for example the S in "island" was added to make it look more like "isle", from which it did not originate.

Wait, what? I'd always assumed...

Replies from: army1987
comment by A1987dM (army1987) · 2013-07-13T17:41:58.209Z · LW(p) · GW(p)

In that case they probably incorrectly believed island to be derived from isle (same with the H in Anthony, which comes from the Roman personal name Antonius and not the Greek anthos ‘flower’), so I'm not sure I'd call it a whim... but certain letters were added at the ends of words that would otherwise be too short, and IIRC (I can't seem to find a cite right now) money was spelled with an O and give with an E because the sequence un and a final v would look bad in blackletter.

comment by baiter · 2013-07-03T08:39:30.791Z · LW(p) · GW(p)

"O great age of generous love and time of a new man! Not the poor, dark, disfigured creature cramped by his falsehood, a liar from the cradle, flogged by poverty, smelling bad from cowardice, deeper than a latrine in jealousy, dead as a cabbage to feeling, a maggot to beauty, a shrimp to duty, spinning the same thread of cocoon preoccupation from his mouth. Without tears to weep or enough expendable breath to laugh; cruel, frigging, parasitic, sneaking, grousing, anxious, and sluggardly. Drilled like a Prussian by the coarse hollering of sergeant fears."

-- Saul Bellow, The Adventures of Augie March

comment by katydee · 2013-07-24T22:25:08.919Z · LW(p) · GW(p)

For mad scientists who keep brains in jars, here's a tip: why not add a slice of lemon to each jar, for freshness?

Jack Handey

Replies from: cody-bryce
comment by cody-bryce · 2013-07-29T01:08:06.053Z · LW(p) · GW(p)

Can you explain why you wanted to post this?

Replies from: katydee
comment by katydee · 2013-07-29T02:00:55.600Z · LW(p) · GW(p)

Certainly: I thought it was funny.