Rationality Quotes: April 2011

post by benelliott · 2011-04-04T09:55:03.526Z · LW · GW · Legacy · 393 comments

You all know the rules:

Comments sorted by top scores.

comment by Alicorn · 2011-04-07T03:08:53.917Z · LW(p) · GW(p)

When confronting something which may be either a windmill or an evil giant, what question should you be asking?

There are some who ask, "If we do nothing, and that is an evil giant, can we afford to be wrong?" These people consider themselves to be brave and vigilant.

Some ask "If we attack it wrongly, can we afford to pay to replace a windmill?" These people consider themselves cautious and pragmatic.

Still others ask, "With the cost of being wrong so high in either case, shouldn't we always definitively answer the 'windmill vs. giant' question before we act?" And those people consider themselves objective and wise.

But only a tiny few will ask, "Isn't the fact that we're giving equal consideration to the existence of evil giants and windmills a warning sign of insanity in ourselves?"

It's hard to find out what these people consider themselves, because they never get invited to parties.

-- PartiallyClips, "Windmill"

Replies from: JGWeissman, None, ZoneSeek, James_K, wedrifid
comment by JGWeissman · 2011-04-07T03:13:04.623Z · LW(p) · GW(p)

But only a tiny few will ask, "Isn't the fact that we're giving equal consideration to the existence of evil giants and windmills a warning sign of insanity in ourselves?"

And then there's the fact that we are giving much more consideration to the existence of evil giants than to the existence of good giants.

comment by [deleted] · 2011-04-07T05:36:49.936Z · LW(p) · GW(p)

Nancy Lebovitz came across this too.

Replies from: Alicorn
comment by Alicorn · 2011-04-07T17:49:14.021Z · LW(p) · GW(p)

Well, I guess that's information about how many people click links and then upvote the comments containing them based on the quality of the linked content.

Replies from: JGWeissman
comment by JGWeissman · 2011-04-07T17:55:24.563Z · LW(p) · GW(p)

Not to argue that transcribing the text of the comic isn't valuable (I do actually appreciate it), but it's also information about how many people go back and vote on comments from posts imported from OB.

Replies from: benelliott
comment by benelliott · 2011-04-07T23:09:23.032Z · LW(p) · GW(p)

And about how many more readers quotes threads seem to get compared with everything else.

comment by ZoneSeek · 2011-04-09T00:54:09.385Z · LW(p) · GW(p)

I thought the correct response should be "Is the thing in fact a giant or a windmill?" Rather than considering which way our maps should be biased, what's the actual territory?

I do tech support, and often get responses like "I think so," and I usually respond with "Let's find out."

Replies from: Nornagest, shokwave, JGWeissman
comment by Nornagest · 2011-04-09T01:00:14.552Z · LW(p) · GW(p)

Giant/windmill differentiation is not a zero-cost operation.

comment by shokwave · 2011-04-09T01:39:00.679Z · LW(p) · GW(p)

In the "evil giant vs windmill" question, the prior probability of it being an evil giant is vanishingly close to zero, and the prior probability of it being a windmill is pretty much one minus the chance that it's an evil giant. Spending effort discovering the actual territory when every map ever shows it's a windmill sounds like a waste of effort.

Replies from: Psy-Kosh
comment by Psy-Kosh · 2011-04-09T01:41:44.524Z · LW(p) · GW(p)

What about a chunk of probability for the case where it's neither giant nor windmill?

Replies from: shokwave
comment by shokwave · 2011-04-09T02:40:47.633Z · LW(p) · GW(p)

Very few things besides the evil giant have the ability to imitate a windmill. I did leave some wiggle room with

prior probability of it being a windmill is pretty much one minus the chance that it's an evil giant

because I wished to allow for the chance it may be a bloody great mimic.

Replies from: Psy-Kosh
comment by Psy-Kosh · 2011-04-09T06:29:08.534Z · LW(p) · GW(p)

A missile silo disguised as a windmill? A helicopter in an unfortunate position? An odd and inefficient form of rotating radar antenna? A shuttle in launch position? (if one squints, they might think it's a broken windmill with the vanes having fallen off or something)

These are all just off the top of my head. Remember, if we're talking about someone who tends to, when they see a windmill, be unsure whether it's a windmill or an evil giant, there's probably a reasonable chance that they tend to get confused by other objects too, right? :)

Replies from: shokwave, benelliott, wedrifid
comment by shokwave · 2011-04-09T06:31:45.970Z · LW(p) · GW(p)

You are right! Even I, firmly settled in the fourth camp, was tricked by the false dichotomy of windmill and evil giant.

Replies from: Psy-Kosh
comment by Psy-Kosh · 2011-04-09T06:41:11.765Z · LW(p) · GW(p)

To be fair, there's also the possibility that someone disguised a windmill as an evil giant. ;)

comment by benelliott · 2011-04-10T11:04:59.568Z · LW(p) · GW(p)

A good giant?

Replies from: Psy-Kosh
comment by Psy-Kosh · 2011-04-10T16:18:26.450Z · LW(p) · GW(p)

Sure, but I wouldn't give a "good giant" really any more probability than an "evil giant". Both fall into the "completely negligible" hole. :)

Though, as we all know, if we do find one, the correct action to take is to climb up so that one can stand on its shoulders. :)

Replies from: benelliott, TheOtherDave
comment by benelliott · 2011-04-10T16:54:37.814Z · LW(p) · GW(p)

I thought we were listing anything at least as plausible as the evil giant hypothesis. I have no information as to the morality distribution of giants in general, so I use maximum entropy and assign 'evil giant' and 'good giant' equal probability.

Replies from: ata
comment by ata · 2011-04-10T18:23:27.815Z · LW(p) · GW(p)

Given complexity of value, 'evil giant' and 'good giant' should not be weighted equally; if we have no specific information about the morality distribution of giants, then as with any optimization process, 'good' is a much, much smaller target than 'evil' (if we're including apparently-human-hostile indifference).

Unless we believe them to be evolutionarily close to humans, or to have evolved under some selection pressures similar to those that produced morality, etc., in which case we can do a bit better than a complexity prior for moral motivations.

(For more on this, check out my new blog, Overcoming Giants.)

Replies from: Psy-Kosh
comment by Psy-Kosh · 2011-04-10T20:18:34.210Z · LW(p) · GW(p)

Well, if by giants we mean "things that seem to resemble humans, only particularly big", then we should expect some sort of shared evolutionary history, so....

comment by TheOtherDave · 2011-04-10T16:32:56.890Z · LW(p) · GW(p)

Which can be fun to do with a windmill, also.

Replies from: Psy-Kosh
comment by Psy-Kosh · 2011-04-10T16:34:50.615Z · LW(p) · GW(p)

Since when do windmills have shoulders? :)

comment by wedrifid · 2011-04-09T08:34:39.229Z · LW(p) · GW(p)

Or, possibly, a great big fan! In fact with some (unlikely) designs it would be impossible to tell whether it was a fan or a windmill without knowledge of what is on the other end of the connected power lines.

comment by JGWeissman · 2011-04-09T01:23:07.772Z · LW(p) · GW(p)

I thought the correct response should be "Is the thing in fact a giant or a windmill?"

Do you consider yourself "objective and wise"?

Replies from: ZoneSeek
comment by ZoneSeek · 2011-04-10T03:45:39.731Z · LW(p) · GW(p)

I'd consider myself puzzled. Unidentified object: is it a threat, a potential asset, some kind of Black Swan? Might need to do something even without positive identification. Will probably need to do something to get a better read on the thing.

comment by James_K · 2011-04-07T05:24:48.782Z · LW(p) · GW(p)

That is truly incredible; I regret only that I have but one upvote to give.

comment by wedrifid · 2011-04-07T04:16:09.702Z · LW(p) · GW(p)

Best quote I've seen in a long time!

comment by DanielVarga · 2011-04-04T21:06:57.029Z · LW(p) · GW(p)

It is not really a quote, but a good quip from an otherwise lame recent internet discussion:

Matt: Ok, for all of the people responding above who admit to not having a soul, I think this means that it is morally ok for me to do anything I want to you, just as it is morally ok for me to turn off my computer at the end of the day. Some of us do have souls, though.

Igor: Matt - I agree that people who need a belief in souls to understand the difference between killing a person and turning off a computer should just continue to believe in souls.

Replies from: David_Gerard, David_Gerard
comment by David_Gerard · 2011-04-05T09:33:24.917Z · LW(p) · GW(p)

This is, of course, pretty much the right answer to anyone who asserts that without God, they could just kill anyone they wanted.

Replies from: matt1
comment by matt1 · 2011-04-05T18:31:38.045Z · LW(p) · GW(p)

Of course, my original comment had nothing to do with god. It had to do with "souls", for lack of a better term, as that was the term used in the original discussion (suggest reading the original post if you want to know more---basically, as I understand the intent, it simply referred to some hypothetical quality associated with consciousness that lies outside the realm of what is simulable on a Turing machine). If you think that humans are nothing but Turing machines, why is it morally wrong to kill a person but not morally wrong to turn off a computer? Please give a real answer: either provide an answer that admits that humans cannot be simulated by Turing machines, or else give your answer using only concepts relevant to Turing machines (don't talk about consciousness, qualia, hopes, whatever, unless you can precisely quantify those concepts in the language of Turing machines). And in the second case, your answer should allow me to determine where the moral balance between humans and computers lies... would it be morally bad to turn off a primitive AI, for example, with intelligence at the level of a mouse?

Replies from: None, HonoreDB, David_Gerard, DanielVarga, kurokikaze, shokwave
comment by [deleted] · 2011-04-05T19:18:41.324Z · LW(p) · GW(p)

If you think that humans are nothing but Turing machines, why is it morally wrong to kill a person but not morally wrong to turn off a computer?

Your question has the form:

If A is nothing but B, then why is it X to do Y to A but not to do Y to C which is also nothing but B?

This following question also has this form:

If apple pie is nothing but atoms, why is it safe to eat apple pie but not to eat napalm which is also nothing but atoms?

And here's the general answer to that question: the molecules which make up apple pie are safe to eat, and the molecules which make up napalm are unsafe to eat. This is possible because these are not the same molecules.

Now let's turn to your own question and give a general answer to it: it is morally wrong to shut off the program which makes up a human, but not morally wrong to shut off the programs which are found in an actual computer today. This is possible because these are not the same programs.

At this point I'm sure you will want to ask: what is so special about the program which makes up a human, that it would be morally wrong to shut off the program? And I have no answer for that. Similarly, I couldn't answer you if you asked me why the molecules of apple pie are safe to eat and those of napalm are not.

As it happens, chemistry and biology have probably advanced to the point at which the question about apple pie can be answered. However, the study of mind/brain is still in its infancy, and as far as I know, we have not advanced to the equivalent point. But this doesn't mean that there isn't an answer.

Replies from: NickiH, sark, matt1, KrisC, Alicorn
comment by NickiH · 2011-04-05T20:10:20.090Z · LW(p) · GW(p)

what is so special about the program which makes up a human, that it would be morally wrong to shut off the program?

We haven't figured out how to turn it back on again. Once we do, maybe it will become morally ok to turn people off.

Replies from: NancyLebovitz, Laoch
comment by NancyLebovitz · 2011-04-06T11:34:22.210Z · LW(p) · GW(p)

Because people are really annoying, but we need to be able to live with each other.

We need strong inhibitions against killing each other-- there are exceptions (self-defense, war), but it's a big win if we can pretty much trust each other not to be deadly.

We'd be a lot more cautious about turning off computers if they could turn us off in response.

None of this is to deny that turning off a computer is temporary and turning off a human isn't. Note that people are more inhibited about destroying computers (though much less so than about killing people) than they are about turning computers off.

comment by Laoch · 2011-04-05T23:11:28.672Z · LW(p) · GW(p)

Doesn't general anesthetic count? I thought that was the turning off of the brain. I was completely "out" when I had it administered to me.

Replies from: Kevin723, Desrtopa, David_Gerard
comment by Kevin723 · 2011-04-09T17:01:14.617Z · LW(p) · GW(p)

If I believed that when I turned off my computer it would need to be monitored by a specialist or it might not ever come back on again, I would be hesitant to turn it off as well.

Replies from: gwern
comment by gwern · 2011-04-09T18:09:02.991Z · LW(p) · GW(p)

And indeed, mainframes & supercomputers are famous for never shutting down, or for doing so only on timespans measured in decades, with intense supervision on the rare occasions that they do.

comment by Desrtopa · 2011-04-05T23:17:56.117Z · LW(p) · GW(p)

It certainly doesn't put a halt to brain activity. You might not be aware of anything that's going on while you're under, or remember anything afterwards (although some people do), but that doesn't mean that your brain isn't doing anything. If you put someone under general anesthetic while they were hooked up to an electroencephalogram, you'd register plenty of activity.

Replies from: Laoch
comment by Laoch · 2011-04-06T08:24:54.292Z · LW(p) · GW(p)

Ah yes, didn't think of that. Even while I'm conscious my brain is doing things I'm/it's not aware of.

Replies from: JohannesDahlstrom, Kevin723
comment by JohannesDahlstrom · 2011-04-07T17:58:43.512Z · LW(p) · GW(p)

Some deep hypothermia patients, however, have been successfully revived from a prolonged state of practically no brain activity whatsoever.

comment by Kevin723 · 2011-04-09T16:59:36.949Z · LW(p) · GW(p)

As is your computer when it's turned off.

comment by David_Gerard · 2011-04-05T23:14:56.970Z · LW(p) · GW(p)

And people don't worry about that because it's a state people are used to the idea of coming back from, which fits the expressed theory.

comment by sark · 2011-04-05T21:44:29.828Z · LW(p) · GW(p)

Hmm, I don't happen to find your argument very convincing. I mean, what it does is to pay attention to some aspect of the original mistaken statement, then find another instance sharing that aspect which is transparently ridiculous.

But is this sufficient? You can model the statement "apples and oranges are good fruits" in predicate logic as "for all x, Apple(x) or Orange(x) implies Good(x)" or in propositional logic as "A and O" or even just "Z". But it should really depend on what aspect of the original statement you want to get at. You want a model which captures precisely those aspects you want to work with.

So your various variables actually confused the hell outta me there. I was trying to match them up with the original statement and your reductio example. All the while not really understanding which was relevant to the confusion. It wasn't a pleasant experience :(

It seems to me much simpler to answer: "Turing machine-ness has no bearing on moral worth". This I think gets straight to the heart of the matter, and isolates clearly the confusion in the original statement.

Or, to guess further at the source of the confusion, the person was trying to think along the lines of: "Turing machines, hmm, they look like machines to me, so all Turing machines are just machines, like a sewing machine, or my watch. Hmm, so humans are Turing machines, but by my previous reasoning this implies humans are machines. And hmm, furthermore, machines don't have moral worth... So humans don't have moral worth! OH NOES!!!"

Your argument seems like one of those long math proofs which I can follow step by step but cannot grasp its overall structure or strategy. Needless to say, such proofs aren't usually very intuitively convincing.

(but I could be generalizing from one example here)

Replies from: matt1
comment by matt1 · 2011-04-05T22:06:51.173Z · LW(p) · GW(p)

No, I was not trying to think along those lines. I must say, I worried in advance that discussing philosophy with people here would be fruitless, but I was lured over by a link, and it seems worse than I feared. In case it isn't clear, I'm perfectly aware what a Turing machine is; incidentally, while I'm not a computer scientist, I am a professional mathematical physicist with a strong interest in computation, so I'm not sitting around saying "OH NOES" while being ignorant of the terms I'm using. I'm trying to highlight one aspect of an issue that appears in many cases: if consciousness (meaning whatever we mean when we say that humans have consciousness) is possible for Turing machines, what are the implications if we do any of the obvious things? (replaying, turning off, etc...) I haven't yet seen any reasonable answer, other than 1) this is too hard for us to work out, but someday perhaps we will understand it (the original answer, and I think a good one in its acknowledgment of ignorance, which is always a valid answer and a good sign that someone has thought about things) and 2) some pointless and wrong mocking (your answer, and I think a bad one). Edit to add: forgot, of course, to put my current guess as to the most likely answer, 3) that consciousness isn't possible for Turing machines.

Replies from: pjeby, Kyre, jschulter, Nominull, matt1
comment by pjeby · 2011-04-06T00:04:48.045Z · LW(p) · GW(p)

if consciousness (meaning whatever we mean when we say that humans have consciousness) is possible for Turing machines,

This is the part where you're going astray, actually. We have no reason to think that human beings are NOT Turing-computable. In other words, human beings almost certainly are Turing machines.

Therefore, consciousness -- whatever we mean when we say that -- is indeed possible for Turing machines.

To refute this proposition, you'd need to present evidence of a human being performing an operation that can't be done by a Turing machine.

Understanding this will help "dissolve" or "un-ask" your question, by removing the incorrect premise (that humans are not Turing machines) that leads you to ask your question.

That is, if you already know that humans are a subset of Turing machines, then it makes no sense to ask what morally justifies treating them differently than the superset, or to try to use this question as a way to justify taking them out of the larger set.

IOW, (the set of humans) is a subset of (the set of turing machines implementing consciousness), which in turn is a proper subset of (the set of turing machines). Obviously, there's a moral issue where the first two subsets are concerned, but not for (the set of turing machines not implementing consciousness).

In addition, there may be some issues as to when and how you're doing the turning off, whether they'll be turned back on, whether consent is involved, etc... but the larger set of "turing machines" is obviously not relevant.

I hope that you actually wanted an answer to your question; if so, this is it.

(In the event you wish to argue for another answer being likely, you'll need to start with some hard evidence that human behavior is NOT Turing-computable... and that is a tough road to climb. Essentially, you're going to end up in zombie country.)

Replies from: matt1, ArisKatsaris
comment by matt1 · 2011-04-06T00:59:04.613Z · LW(p) · GW(p)

You wrote: "This is the part where you're going astray, actually. We have no reason to think that human beings are NOT Turing-computable. In other words, human beings almost certainly are Turing machines."

at this stage, you've just assumed the conclusion. you've just assumed what you want to prove.

"Therefore, consciousness -- whatever we mean when we say that -- is indeed possible for Turing machines."

having assumed that A is true, it is easy to prove that A is true. You haven't given an argument.

"To refute this proposition, you'd need to present evidence of a human being performing an operation that can't be done by a Turing machine."

It's not my job to refute the proposition. Currently, as far as I can tell, the question is open. If I did refute it, then my (and several other people's) conjecture would be proven. But if I don't refute it, that doesn't mean your proposition is true, it just means that it hasn't yet been proven false. Those are quite different things, you know.

Replies from: nshepperd, pjeby
comment by nshepperd · 2011-04-06T02:38:19.505Z · LW(p) · GW(p)

Well, how about this: physics as we know it can be approximated arbitrarily closely by a computable algorithm (and possibly computed directly as well, although I'm less sure about that. Certainly all calculations we can do involving manipulation of symbols are computable). Physics as we know it also seems to be correct to extremely precise degrees anywhere apart from inside a black hole.

Brains are physical things. Now when we consider that thermal noise should have more of an influence than the slight inaccuracy in any computation, what are the chances a brain does anything non-computable that could have any relevance to consciousness? I don't expect to see black holes inside brains, at least.

In any case, your original question was about the moral worth of turing machines, was it not? We can't use "turing machines can't be conscious" as an excuse not to worry about those moral questions, because we aren't sure whether turing machines can be conscious. "It doesn't feel like they should be" isn't really a strong enough argument to justify doing something that would result in, for example, the torture of conscious entities if we were incorrect.

So here's my actual answer to your question: as a rule of thumb, act as if any simulation of "sufficient fidelity" is as real as you or I (well, multiplied by your probability that such a simulation would be conscious, maybe 0.5, for expected utilities). This means no killing, no torture, etc.

'Course, this shouldn't be a practical problem for a while yet, and we may have learned more by the time we're creating simulations of "sufficient fidelity".
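
As a rough Python sketch of that rule of thumb: the 0.5 credence comes from the comment above, while the utility figures are invented placeholders:

    # Weight the harm of an act toward a high-fidelity simulation by the
    # credence that such a simulation is conscious.
    p_conscious = 0.5            # credence suggested in the comment above
    harm_if_conscious = -1000.0  # assumed disutility if it is in fact conscious
    harm_if_not = 0.0            # assumed harm if it is not

    expected_harm = (p_conscious * harm_if_conscious
                     + (1 - p_conscious) * harm_if_not)
    print(expected_harm)  # -500.0: roughly half as bad as the worst case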

comment by pjeby · 2011-04-06T01:32:04.267Z · LW(p) · GW(p)

at this stage, you've just assumed the conclusion. you've just assumed what you want to prove.

No - what I'm pointing out is that the question "what are the ethical implications for turing machines" is the same question as "what are the ethical implications for human beings" in that case.

It's not my job to refute the proposition. Currently, as far as I can tell, the question is open.

Not on Less Wrong, it isn't. But I think I may have misunderstood your situation as being one of somebody coming to Less Wrong to learn about rationality of the "Extreme Bayesian" variety; if you just dropped in here to debate the consciousness question, you probably won't find the experience much fun. ;-)

If I did refute it, then my (and several other people's) conjecture would be proven. But if I don't refute it, that doesn't mean your proposition is true, it just means that it hasn't yet been proven false. Those are quite different things, you know.

Less Wrong has different -- and far stricter -- rules of evidence than just about any other venue for such a discussion.

In particular, to meaningfully partake in this discussion, the minimum requirement is to understand the Mind Projection Fallacy at an intuitive level, or else you'll just be arguing about your own intuitions... and everybody will just tune you out.

Without that understanding, you're in exactly the same place as a creationist wandering into an evolutionary biology forum, without understanding what "theory" and "evidence" mean, and expecting everyone to disprove creationism without making you read any introductory material on the subject.

In this case, the introductory material is the Sequences -- especially the ones that debunk supernaturalism, zombies, definitional arguments, and the mind projection fallacy.

When you've absorbed those concepts, you'll understand why the things you're saying are open questions are not even real questions to begin with, let alone propositions to be proved or disproved! (They're actually on a par with creationists' notions of "missing links" -- a confusion about language and categories, rather than an argument about reality.)

I only replied to you because I thought perhaps you had read the Sequences (or some portion thereof) and had overlooked their application in this context (something many people do for a while until it clicks that, oh yeah, rationality applies to everything).

So, at this point I'll bow out, as there is little to be gained by discussing something when we can't even be sure we agree on the proper usage of words.

Replies from: matt1
comment by matt1 · 2011-04-06T01:48:43.392Z · LW(p) · GW(p)

"at this stage, you've just assumed the conclusion. you've just assumed what you want to prove.

No - what I'm pointing out is that the question "what are the ethical implications for turing machines" is the same question as "what are the ethical implications for human beings" in that case."

Yeah, look, I'm not stupid. If someone assumes A and then actually bothers to write out the modus ponens A->B (when A->B is an obvious statement) so therefore B, and then wants to point out, 'hey look, I didn't assume B, I proved it!', that really doesn't mean that they proved anything deep. They still just assumed the conclusion they want, since they assumed a statement that trivially implies their desired conclusion. But I'll bow out now too... I only followed a link from a different forum, and indeed my fears were confirmed that this is a group of people who don't have anything meaningful or rational to say about certain concepts (I mean, if you don't even realize that certain things are in principle open to physical test!---and you drew an analogy to creationism vs evolution without realizing that evolution had and has many positive pieces of observable, physical evidence in its favor, while your position has at present at best very minimal observable, tangible evidence in its favor (certain recent experiments in neuroscience can be charitably interpreted in favor of your argument, but on their own they are certainly not enough)).

Replies from: jimrandomh, NMJablonski
comment by jimrandomh · 2011-04-06T03:14:35.606Z · LW(p) · GW(p)

If you're looking for a clear, coherent and true explanation of consciousness, you aren't going to find that anywhere today, especially not in off-the-cuff replies; and if someone does eventually figure it out, you ought to expect it to have a book's worth of prerequisites and not be something that can be explained in a thousand words of comment replies. Consciousness is an extraordinarily difficult and confusing topic, and, generally speaking, the only way people come up with explanations that seem simple is by making simplifying assumptions that are wrong.

As for the more specific question of whether humans are Turing computable, this follows if (a) the laws of physics in a finite volume are Turing computable, and (b) human minds run entirely on physics. Both of these are believed to be true - (a) based on what we know about the laws themselves, and (b) based on neuroscience, which shows that physics is necessary and sufficient to produce humans, combined with Occam's Razor, which says that we shouldn't posit anything extra that we don't need. If you'd like to zoom in on the explanation of one of these points, I'd be happy to provide references.

comment by NMJablonski · 2011-04-06T03:08:53.487Z · LW(p) · GW(p)

This conversation is much too advanced for you.

Please read and understand the plethora of material that has been linked for you. This community does not dwell on solved problems of philosophy, and many have already taken a great deal of time to provide you with the relevant information.

comment by ArisKatsaris · 2011-04-06T00:48:55.037Z · LW(p) · GW(p)

To refute this proposition, you'd need to present evidence of a human being performing an operation that can't be done by a Turing machine.

That's quite easy: I can lift a rock, a Turing machine can't. A Turing machine can only manipulate symbols on a strip of tape, it can't do anything else that's physical.

Your claim that consciousness (whatever we mean when we say that) is possible for Turing machines rests on the assumption that consciousness is about computation alone, not about computation + some unidentified physical reaction that's absent from pure Turing machines resting in a box on a table.

That consciousness is about computation alone may indeed end up true, but it's as yet unproven.

Replies from: AlephNeil, Vladimir_Nesov, Gray, pjeby, matt1
comment by AlephNeil · 2011-04-06T19:26:04.917Z · LW(p) · GW(p)

That's quite easy: I can lift a rock, a Turing machine can't.

That sounds like a parody of bad anti-computationalist arguments. To see what's wrong with it, consider the response: "Actually you can't lift a rock either! All you can do is send signals down your spinal column."

That consciousness is about computation alone may indeed end up true, but it's as yet unproven.

What sort of evidence would persuade you one way or the other?

comment by Vladimir_Nesov · 2011-04-06T21:18:12.546Z · LW(p) · GW(p)

Read the first part of ch.2 of "Good and Real".

Replies from: Perplexed
comment by Perplexed · 2011-04-07T15:25:35.685Z · LW(p) · GW(p)

Could you clarify why you think that this reading assignment illuminates the question being discussed? I just reread it. For the most part, it is an argument against dualism. It argues that consciousness is (almost certainly) reducible to a physical process.

But this doesn't have anything to do with what ArisKatsaris wrote. He was questioning whether consciousness can be reduced to a purely computational process (without "some unidentified physical reaction that's absent from pure Turing machines").

Consider the following argument sketch:

  1. Consciousness can be reduced to a physical process.
  2. Any physical process can be abstracted as a computation.
  3. Any computation can be modeled as a Turing Machine computation.
  4. Therefore, consciousness can be produced on a TM.

Each step above is at least somewhat problematic. Matt1 seemed to be arguing against step 1, and Drescher does respond to that. But ArisKatsaris seemed to be arguing against step 2. My choice would be to expand the definition of 'computation' slightly to include the interactive, asynchronous, and analog, so that I accept step 2 but deny step 3. Over the past decade, Wegner and Goldin have published many papers arguing that computation != TM.

It may well be that you can only get consciousness if you have a non-TM computation (mind) embedded in a system of sensors and actuators (body) which itself interacts with and is embedded in within a (simulated?) real-time environment. That is, when you abstract the real-time interaction away, leaving only a TM computation, you have abstracted away an essential ingredient of consciousness.

Replies from: Vladimir_Nesov, AlephNeil
comment by Vladimir_Nesov · 2011-04-07T16:17:39.200Z · LW(p) · GW(p)

For the most part, it is an argument against dualism. It argues that consciousness is (almost certainly) reducible to a physical process.

It actually sketches what consciousness is and how it works, from which you can see how we could implement something like that as an abstract algorithm.

The value of that description is not so much in reaching a certain conclusion, but in reaching a sense of what exactly we are talking about, and consequently why the question of whether "we can implement consciousness as an abstract algorithm" is uninteresting, since at that point you know more about the phenomenon than the words forming the question can access (similarly to how the question of whether a crocodile is a reptile is uninteresting, once you know everything you need about crocodiles).

The problem here, I think, is that "consciousness" doesn't get unpacked, and so most of the argument is on the level of connotations. The value of understanding the actual details behind the word, even if just a little bit, is in breaking this predicament.

comment by AlephNeil · 2011-04-07T16:41:12.981Z · LW(p) · GW(p)

leaving only a TM computation, you have abstracted away an essential ingredient of consciousness.

I think I can see a rube/blegg situation here.

A TM computation perfectly modelling a human brain (let's say) but without any real-time interaction, and a GLUT, represent the two ways in which we can have one of 'intelligent input-output' and 'functional organization isomorphic to that of an intelligent person' without the other.

What people think they mean by 'consciousness' - a kind of 'inner light' which is either present or not - doesn't (straightforwardly) correspond to anything that objectively exists. When we hunt around for objective properties that correlate with places where we think the 'inner light' is shining, we find that there's more than one candidate. Both 'intelligent input-output' and the 'intelligent functional organization' pick out exactly those beings we believe to be conscious - our fellow humans foremost among them. But in the marginal cases where we have one but not the other, I don't think there is a 'further fact' about whether 'real consciousness' is present.

However, we do face the 'further question' of how much moral value to assign in the marginal cases - should we feel guilty about switching off a simulation that no-one is looking at? Should we value a GLUT as an 'end in itself' rather than simply a means to our ends? (The latter question isn't so important given that GLUTs can't exist in practice.)

I wonder if our intuition that the physical facts underdetermine the answers to the moral questions is in some way responsible for the intuition of a mysterious non-physical 'extra fact' of whether so-and-so is conscious. Perhaps not, but there's definitely a connection.

Replies from: Perplexed, Vladimir_Nesov
comment by Perplexed · 2011-04-07T17:34:00.154Z · LW(p) · GW(p)

... we do face the 'further question' of how much moral value to assign ...

Yes, and I did not even attempt to address that 'further question' because it seems to me that that question is at least an order of magnitude more confused than the relatively simple question about consciousness.

But, if I were to attempt to address it, I would begin with the lesson from Econ 101 that dissolves the question "What is the value of item X?". The dissolution begins by requesting the clarifications "Value to whom?" and "Valuable in what context?" So, armed with this analogy, I would ask some questions:

  1. Moral value to whom? Moral value in what context?
  2. If I came to believe that the people around me were p-zombies, would that opinion change my moral obligations toward them? If you shared my belief, would that change your answer to the previous question?
  3. Believed to be conscious by whom? Believed to be conscious in what context? Is it possible that a program object could be conscious in some simulated universe, using some kind of simulated time, but would not be conscious in the real universe in real time?
Replies from: AlephNeil
comment by AlephNeil · 2011-04-07T19:00:44.196Z · LW(p) · GW(p)

The dissolution begins by requesting the clarifications "Value to whom?" and "Valuable in what context?"

My example was one where (i) the 'whom' and the 'context' are clear and yet (ii) this obviously doesn't dissolve the problem.

Replies from: Perplexed
comment by Perplexed · 2011-04-07T19:24:14.436Z · LW(p) · GW(p)

It may be a step toward dissolving the problem. It suggests the questions:

  • Is it possible that an intelligent software object (like those in this novella by Ted Chiang) which exists within our space-time and can interact with us might have moral value very different from simulated intelligences in a simulated universe with which we cannot interact in real time?
  • Is it possible for an AI of the Chiang variety to act 'immorally' toward us? Toward each other? If so, what "makes" that action immoral?
  • What about AIs of the second sort? Clearly they cannot act immorally toward us, since they don't interact with us. But can they act immorally toward each other? What is it about that action that 'makes' it immoral?

My own opinion, which I won't try to convince you of, is that there is no moral significance without interaction and reciprocity (in a fairly broad sense of those two words.)

comment by Vladimir_Nesov · 2011-04-07T16:56:34.767Z · LW(p) · GW(p)

What people think they mean by 'consciousness' - a kind of 'inner light' which is either present or not - doesn't (straightforwardly) correspond to anything that objectively exists.

It does, to some extent. There is a simple description that moves the discussion further. Namely, consciousness is a sensory modality that observes its own operation, and as a result it also observes itself observing its own operation, and so on; as well as observing external input, observing itself observing external input, and so on; and observing itself determining external output, etc.
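
A toy Python sketch of that recursive structure (purely illustrative; the class, the depth cutoff, and the event strings are all invented, and nothing here is claimed to be conscious):

    # A "sensory modality" whose act of observing is itself an observable
    # event, yielding observation-of-observation up to a depth cutoff.
    class SelfObserver:
        def __init__(self, depth_limit=3):
            self.log = []
            self.depth_limit = depth_limit

        def observe(self, event, depth=0):
            self.log.append("  " * depth + "observed: " + event)
            if depth < self.depth_limit:
                # The observation just made is itself a new event to observe.
                self.observe("myself observing (" + event + ")", depth + 1)

    agent = SelfObserver()
    agent.observe("external input")
    agent.observe("output being determined")
    print("\n".join(agent.log))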

Replies from: AlephNeil
comment by AlephNeil · 2011-04-07T20:02:03.320Z · LW(p) · GW(p)

It does, to some extent. There is a simple description that moves the discussion further. Namely, consciousness is a sensory modality that observes its own operation, and as a result it also observes itself observing its own operation, and so on; as well as observing external input, observing itself observing external input, and so on; and observing itself determining external output, etc.

This is an important idea, but I don't think it can rescue the everyday intuition of the "inner light".

I can readily imagine an instantiation of your sort of "consciousness" in a simple AI program of the kind we can already write. No doubt it would be an interesting project, but mere self-representation (even recursive self-representation) wouldn't convince us that there's "something it's like" to be the AI. (Assume that the representations are fairly simple, and the AI is manipulating them in some fairly trivial way.)

Conversely, we think that very young children and animals are conscious in the "inner light" sense, even though we tend not to think of them as "recursively observing themselves". (I have no idea whether and in what sense they actually do. I also don't think "X is conscious" is unambiguously true or false in these cases.)

comment by Gray · 2011-04-06T16:09:12.795Z · LW(p) · GW(p)

That's quite easy: I can lift a rock, a Turing machine can't. A Turing machine can only manipulate symbols on a strip of tape, it can't do anything else that's physical.

I think you're trivializing the issue. A Turing machine is an abstraction, it isn't a real thing. The claim that a human being is a Turing machine means that, in the abstract, a certain aspect of human beings can be modeled as a Turing machine. Conceptually, it might be the case, for instance, that the universe itself can be modeled as a Turing machine, in which case it is true that a Turing machine can lift a rock.

comment by pjeby · 2011-04-06T00:55:09.926Z · LW(p) · GW(p)

I can lift a rock, a Turing machine can't. A Turing machine can only manipulate symbols on a strip of tape, it can't do anything else that's physical.

So... you support euthanasia for quadriplegics, then, or anyone else who can't pick up a rock? Or people who are so crippled they can only communicate by reading and writing braille on a tape, and rely on other human beings to feed them and take care of them?

Your claim that consciousness (whatever we mean when we say that) is possible for Turing machines, rests on the assumption that consciousness is about computation alone, not about computation+some unidentified physical reaction that's absent to pure Turing machines resting in a box on a table.

This "unidentified physical reaction" would also need to not be turing-computable to have any relevance. Otherwise, you're just putting forth another zombie-world argument.

At this point, we have no empirical reason to think that this unidentified mysterious something has any existence at all, outside of a mere intuitive feeling that it "must" be so.

And so, all we have are thought experiments that rest on using slippery word definitions to hide where the questions are being begged, presented as intellectual justification for these vague intuitions... like arguments for why the world must be flat or the sun must go around the earth, because it so strongly looks and feels that way.

(IOW, people try to prove that their intuitions or opinions must have some sort of physical form, because those intuitions "feel real". The error arises from concluding that the physical manifestation must therefore exist "out there" in the world, rather than in their own brains.)

Replies from: ArisKatsaris, ArisKatsaris
comment by ArisKatsaris · 2011-04-06T01:12:22.275Z · LW(p) · GW(p)

This "unidentified physical reaction" would also need to not be turing-computable to have any relevance. Otherwise, you're just putting forth another zombie-world argument.

A zombie-world seems extremely improbable to have evolved naturally (evolved creatures coincidentally speaking about their consciousness without actually being conscious), but I don't see why a zombie-world couldn't be simulated by a programmer who studied how to compute the effects of consciousness, without actually needing to have the phenomenon of consciousness itself.

The same way you don't need to have an actual solar system inside your computer, in order to compute the orbits of the planets -- but it'd be very unlikely to have accidentally computed them correctly if you hadn't studied the actual solar system.

At this point, we have no empirical reason to think that this unidentified mysterious something has any existence at all, outside of a mere intuitive feeling that it "must" be so.

Do you have any empirical reason to think that consciousness is about computation alone? To claim Occam's razor on this is far from obvious, as the only examples of consciousness (or talking about consciousness) currently concern a certain species of evolved primate with a complex brain and some trillions of neurons, all of which have chemical and electrical effects; they aren't just doing computations on an abstract mathematical universe sans context.

Unless you assume the whole universe is pure mathematics, so there's no difference between the simulation of a thing and the thing itself. Which means there's no difference between the mathematical model of a thing and the thing itself. Which means the map is the territory. Which means Tegmark IV.

And Tegmark IV is likewise just a possibility, not a proven thing.

Replies from: pjeby, AlephNeil
comment by pjeby · 2011-04-06T01:39:53.916Z · LW(p) · GW(p)

A zombie-world seems extremely improbable to have evolved naturally, (evolved creatures coincidentally speaking about their consciousness without actually being conscious), but I don't see why a zombie-world couldn't be simulated by a programmer who studied how to compute the effects of consciousness, without actually needing to have the phenomenon of consciousness itself.

This is a "does the tree make a sound if there's no-one there to hear it?" argument.

That is, it assumes that there is a difference between "effects of consciousness" and "consciousness itself" -- in the same way that a connection is implied between "hearing" and "sound".

That is, the argument hinges on the definition of the word whose definition is being questioned, and is an excellent example of intuitions feeling real.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-04-06T01:51:02.209Z · LW(p) · GW(p)

That is, it assumes that there is a difference between "effects of consciousness" and "consciousness itself" -- in the same way that a connection is implied between "hearing" and "sound".

Not quite. What I'm saying is there might be a difference between the computation of a thing and the thing itself. It's basically an argument against the inevitability of Tegmark IV.

A Turing machine can certainly compute everything there is to know about lifting rocks and their effects -- but it still can't lift a rock. Likewise a Turing machine could perhaps compute everything there was to know about consciousness and its effects -- but perhaps it still couldn't actually produce one.

Or at least I've not been convinced that it's a logical impossibility for it to be otherwise; nor that I should consider it my preferred possibility that consciousness is solely computation, nothing else.

Wouldn't the same reasoning mean that all physical processes have to be solely computation? So it's not just "a Turing machine can produce consciousness", but "a Turing machine can produce a new physical universe", and therefore "Yeah, Turing Machines can lift real rocks, though it's real rocks in a subordinate real universe, not in ours".

Replies from: pjeby, AlephNeil
comment by pjeby · 2011-04-06T15:35:46.640Z · LW(p) · GW(p)

What I'm saying is there might be a difference between the computation of a thing and the thing itself. It's basically an argument against the inevitability of Tegmark IV.

I think you mean, it's the skeleton of an argument you could advance if there turned out to actually be some meaning to the phrase "difference between the computation of a thing and the thing itself".

Or at least I've not been convinced that it's a logical impossibility for it to be otherwise;

Herein lies the error: it's not up to anybody else to convince you it's logically impossible, it's up to you to show that you're even describing something coherent in the first place.

Really, this is another LW-solved philosophical problem; you just have to grok the quantum physics sequence, in addition to the meanings-of-words one: when you understand that physics itself is a machine, it dissolves the question of what "simulation" or "computation" mean in this context. That is, you'll realize that the only reason you can even ask the question is because you're confusing the labels in your mind with real things.

Replies from: Vladimir_M
comment by Vladimir_M · 2011-04-06T15:51:27.338Z · LW(p) · GW(p)

Really, this is another LW-solved philosophical problem; you just have to grok the quantum physics sequence, in addition to the meanings-of-words one: when you understand that physics itself is a machine, it dissolves the question of what "simulation" or "computation" mean in this context.

Could you point to the concrete articles that supposedly dissolve this question? I find the question of what "computation" means still very much open, and the source of a whole lot of confusion. This is best seen when people attempt to define what constitutes "real" computation as opposed to mere table lookups, replays, state machines implemented by random physical processes, etc.

Needless to say, this situation doesn't give one the license to jump into mysticism triumphantly. However, as I noted in a recent thread, I observe an unpleasant tendency on LW to use the notions of "computation," "algorithms," etc. as semantic stop signs, considering how ill-understood they presently are.

Replies from: pjeby
comment by pjeby · 2011-04-06T16:15:17.335Z · LW(p) · GW(p)

Could you point to the concrete articles that supposedly dissolve this question? I find the question of what "computation" means still very much open, and the source of a whole lot of confusion.

Please note that I did not say the sequence explains "computation"; merely that it dissolves the intuitive notion of a meaningful distinction between a "computation" or "simulation" and "reality".

In particular, an intuitive understanding that people are made of interchangeable particles and nothing else, dissolves the question of "what happens if somebody makes a simulation of you?" in the same way that it dissolves "what happens if there are two copies of you... which one's the real one?"

That is, the intuitive notion that there's something "special" about the "original" or "un-simulated" you is incoherent, because the identity of entities is an unreal concept existing only in human brains' representation of reality, rather than in reality itself.

The QM sequence demonstrates this; it does not, AFAIR, attempt to rigorously define "computation", however.

This is best seen when people attempt to define what constitutes "real" computation as opposed to mere table lookups, replays, state machines implemented by random physical processes, etc.

Those sound like similarly confused notions to me -- i.e., tree-sound-hearing questions, rather than meaningful ones. I would therefore refer such questions to the "usage of words" sequence, especially "How an Algorithm Feels From The Inside" (which was my personal source of intuitions about such confusions).

Replies from: Vladimir_M
comment by Vladimir_M · 2011-04-06T16:22:54.508Z · LW(p) · GW(p)

Please note that I did not say the sequence explains "computation"; merely that it dissolves the intuitive notion of a meaningful distinction between a "computation" or "simulation" and "reality".

Fair enough, though I can't consider these explanations as settled until the notion of "computation" itself is fully clarified. I haven't read the entire corpus of sequences, though I think I've read most of the articles relevant for these questions, and what I've seen of the attempts there to deal with the question of what precisely constitutes "computation" is, in my opinion, far from satisfactory. Further non-trivial insight is definitely still needed there.

Replies from: pjeby
comment by pjeby · 2011-04-06T16:31:20.899Z · LW(p) · GW(p)

Fair enough, though I can't consider these explanations as settled until the notion of "computation" itself is fully clarified.

Personally, I would rather ask someone posing that question to show what isn't "computation". That is, the word itself seems rather meaningless, outside of its practical utility (i.e. "have you done that computation yet?"). Trying to pin it down in some absolute sense strikes me as a definitional argument... i.e., one where you should first be asking, "Why do I care what computation is?", and then defining it to suit your purpose, or using an alternate term for greater precision.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-04-06T18:17:00.605Z · LW(p) · GW(p)

You say it has a practical utility, and yet you call it meaningless? If rationality is about winning, how can something with practical utility be meaningless?

Here's what I mean by computation: The handling of concepts and symbolic representations of concepts and mathematical abstractions in such a way that they return a causally derived result. What isn't computation? Pretty much everything else. I don't call gravity a computation, I call it a phenomenon. Because gravity doesn't act on symbolisms and abstractions (like numbers), it acts on real things. A division or a multiplication is a computation, because it acts on numbers. A computation is a map, not a territory, same way that numbers are a map, not a territory.

What I don't know is what you mean by "physics is a machine". For that statement to be meaningful you'd have to explain what would it mean for physics not to be a machine. If you mean that physics is deterministic and causal, then sure. If you mean that physics is a computation, then I'll say no, you've not yet proven to me that the bottom layer of reality is about mathematical concepts playing with themselves.

That's the Tegmark IV hypothesis, and it's NOT a solved issue, not by a long shot.

Replies from: None, jimrandomh, None
comment by [deleted] · 2011-04-06T19:08:57.497Z · LW(p) · GW(p)

Here's what I mean by computation: The handling of concepts and symbolic representations of concepts and mathematical abstractions in such a way that they return a causally derived result...I don't call gravity a computation...Because gravity doesn't act on symbolisms and abstractions (like numbers), it acts on real things.

A computer (a real one, like a laptop) also acts on real things. For example if it has a hard drive, then as it writes to the hard drive it is modifying the surface of the platters. A computer (a real one) can be understood as operating on abstractions. For example, you might spell-check a text - which describes what it is doing as an operation on an abstraction, since the text itself is an abstraction. A text is an abstraction rather than a physical object because you could take the very same text and write it to the hard drive, or hold it in memory, or print it out, thereby realizing the same abstract thing - the text - in three distinct physical ways. In summary, the same computer activity can be described as an operation on an abstraction - such as spell-checking a text - or as an action on a real thing - such as modifying the physical state of the memory.

So the question is whether gravity can be understood as operating on abstractions. Since a computer such as a laptop, which is acting on real, physical things, can also be understood as operating on abstractions, then I see no obvious barrier to understanding gravity as operating on abstractions.
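
A small Python sketch of the one-abstraction, many-realizations point (the string and file path are invented for illustration):

    # One abstract text, three distinct physical realizations.
    text = "the very same text"

    in_memory = bytearray(text, "utf-8")      # held in RAM
    with open("/tmp/example.txt", "w") as f:  # written to a disk platter
        f.write(text)                         # (illustrative path)
    print(text)                               # rendered on a display

    # Spell-checking is described on the abstraction (the text), yet each
    # run of it is also a concrete physical action on one realization.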

Replies from: pjeby, ArisKatsaris
comment by pjeby · 2011-04-06T20:45:31.874Z · LW(p) · GW(p)

A text is an abstraction rather than a physical object because you could take the very same text and write it to the hard drive, or hold it in memory, or print it out, thereby realizing the same abstract thing - the text - in three distinct physical ways. In summary, the same computer activity can be described as an operation on an abstraction - such as spell-checking a text - or as an action on a real thing - such as modifying the physical state of the memory.

This is similar to my point, but the other way around, sort of.

My point is that the "abstraction" exists only in the eye of the observer (mind of the commentator?), rather than having any independent existence.

In reality, there is no computer, just atoms. No "computation", just movement. It is we as observers who label these things to be happening, or not happening, and argue about what labels we should apply to them.

None of this is a problem, until somebody gets to the question of whether something really is the "right" label to apply, only usually they phrase it in the form of whether something can "really" be something else.

But what's actually meant is, "is this the right label to apply in our minds?", and if they'd simply notice that the question is not about reality, but their categorization of arbitrarily-chosen chunks of reality, they'd stop being confused and arguing nonsense.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-04-06T23:51:46.376Z · LW(p) · GW(p)

No "computation", just movement.

If computation isn't the real thing, only the movement is, then a simulation (a complete representation of a thing using a different movement, one which can however be seen as performing the same computation) is not the thing itself, and you have no reason to believe that the phenomenon of consciousness can be internally experienced in a computer simulation, that an algorithm can feel anything from the inside. Because the "inside" and the "outside" are themselves just labels we use.

and if they'd simply notice that the question is not about reality, but their categorization of arbitrarily-chosen chunks of reality, they'd stop being confused and arguing nonsense.

The question of qualia and subjective experience isn't a mere "confusion".

Replies from: pjeby
comment by pjeby · 2011-04-07T03:06:05.106Z · LW(p) · GW(p)

If computation isn't the real thing, only the movement is, then a simulation (a complete representation of a thing using a different movement, one which can however be seen as performing the same computation) is not the thing itself

You keep using that word "is", but I don't think it means what you think it means. ;-)

Try making your beliefs pay rent: what differences do you expect to observe in reality, between different states of this "is"?

That is, what different predictions will you make, based on "is" or "is not" in your statement?

Consider that one carefully, before you continue.

The question of qualia and subjective experience isn't a mere "confusion".

Really? Would you care to explain what differences you expect to see in the world, as a result of the existence or non-existence of these concepts?

I don't see that we have need of such convoluted hypotheses, when the simpler explanation is merely that our neural architecture more closely resembles Eliezer's Network B than Network A... which is a very modest hypothesis indeed, since Network B has many evolutionary advantages compared to Network A.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-04-07T16:18:05.062Z · LW(p) · GW(p)

Try making your beliefs pay rent: what differences do you expect to observe in reality, between different states of this "is"?

.

Would you care to explain what differences you expect to see in the world, as a result of the existence or non-existence of these concepts?

Sure. Here's two simple ones:

  • If consciousness isn't just computation, then I don't expect to ever observe waking up as a simulation in a computer.

  • If consciousness isn't just computation, then I don't expect to ever see evolved or self-improved (not intentionally designed to be similar to humans) electronic entities discussing between themselves about the nature of qualia and subjective inner experience.

Consider that one carefully, before you continue.

You've severely underestimated my rationality if all this time you thought I hadn't even considered the question before I started my participation in this thread.

Replies from: pjeby
comment by pjeby · 2011-04-07T16:47:53.182Z · LW(p) · GW(p)

Try making your beliefs pay rent: what differences do you expect to observe in reality, between different states of this "is"?

.

That doesn't look like a reply, there.

Sure. Here's two simple ones:

  • If consciousness isn't just computation, then I don't expect to ever observe waking up as a simulation in a computer.

  • If consciousness isn't just computation, then I don't expect to ever see evolved or self-improved (not intentionally designed to be similar to humans) electronic entities discussing between themselves about the nature of qualia and subjective inner experience.

And if consciousness "is" just computation, what would be different? Do you have any particular reason to think you would observe any of those things?

You've severely underestimated my rationality if all this time you thought I hadn't even considered the question before I started my participation in this thread.

You missed the point of that comment entirely, as can be seen from your moving the quotation away from its referent. The question to consider was what the meaning of "is" was, in the other statement you made. (It actually makes a great deal of difference, and it's that difference that undermines the rest of your argument.)

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-04-07T18:16:58.919Z · LW(p) · GW(p)

That doesn't look like a reply, there.

Since the reply was just below both of your quotes: no, the single dot wasn't a reply; it was an attempt to distinguish the two quotes.

I have to estimate the probability that you're purposely trying to make me look as if I intentionally avoided answering your question, while knowing that I didn't.

As with your earlier "funny" response about how I supposedly favoured euthanizing paraplegics, you don't give me the vibe of responding in good faith.

Do you have any particular reason to think you would observe any of those things?

Of course. If consciousness is computation, then I expect that if my mind's computation is simulated in a Turing machine, half the time my next experience will be of me inside the machine. By repeating the experiment enough times, I'd accumulate enough evidence that I'd no longer expect my subjective experience to ever find itself inside an electronic computation.
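
To put rough numbers on that update, here is a minimal sketch; the 50% prior and the exact likelihood model are illustrative assumptions only:

 prior = 0.5  # illustrative prior for "consciousness is computation"
 for n in (1, 5, 10, 20):
     # If the hypothesis predicts waking up inside with probability 1/2 per run,
     # never doing so in n runs has likelihood 0.5**n under it (and ~1 otherwise).
     odds = (prior / (1 - prior)) * 0.5 ** n
     posterior = odds / (1 + odds)
     print(n, round(posterior, 6))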

And if evolution stumbled upon consciousness by accident, and it's solely dependent on some computational internal-to-the-algorithm component, then an evolution of mere algorithms in a Turing machine should also eventually be expected to stumble upon consciousness and produce similar discussions about consciousness once it reaches the point of simulating minds of sufficient complexity.

The question to consider was what the meaning of "is" was, in the other statement you made.

Can you make a complete question? What exactly are you asking? The statement you quoted had more than one "is" in it. Four or five of them.

Replies from: pjeby, AlephNeil
comment by pjeby · 2011-04-08T01:36:46.771Z · LW(p) · GW(p)

I think we're done here. As far as I can tell, you're far more interested in how you appear to other people than in actually understanding anything, or at any rate questioning anything. I didn't ask you questions to get information from you; I asked them to help you dissolve your confusion.

In any event, you haven't grokked the "usage of words" sequence sufficiently to have a meaningful discussion on this topic. So, I'm going to stop trying now.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-04-08T21:12:16.528Z · LW(p) · GW(p)

You didn't expect me to have actual answers to your questions, and you take my having answers as indicating a problem with my side of the discussion, instead of updating your probabilities toward the possibility that I wasn't the one confused; perhaps you were.

I certainly am interested in understanding things, and questioning things. That's why I asked questions to you, which you still haven't answered:

  • what do you mean when you say that physics is a machine? (How would the world be different if physics wasn't a machine?)
  • what do you mean when you call "computation" a meaningless concept outside its practical utility? (What concept is there that is meaningful outside its practical utility?)

Since I answered your questions, I think you should do me the reciprocal courtesy of answering these two.

Replies from: pjeby
comment by pjeby · 2011-04-09T19:16:38.229Z · LW(p) · GW(p)

For a thorough answer to your first question, study the sequences - especially the parts debunking the supernatural, explaining the "merely real", and the basics of quantum mechanics.

For the second, I mean only that asking whether something "is" a computation or not is a pointless question... as described in "How an Algorithm Feels From The Inside".

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-04-09T20:38:06.316Z · LW(p) · GW(p)

Thanks for the suggestion, but I've read them all. It seems to me you are perhaps talking about reductionism, which admittedly is a related issue, but even reductionists don't need to believe that the simulation of a thing equals the thing simulated.

I do wonder if you've read http://lesswrong.com/lw/qr/timeless_causality/ . If Eliezer himself is holding onto the concept of "computation" (and "anticipation" too), what makes you think that any of the other sequences he wrote dissolves that term?

Replies from: pjeby
comment by pjeby · 2011-04-09T22:21:04.006Z · LW(p) · GW(p)

Thanks for the suggestion, but I've read them all.

Well, that won't do any good unless you also apply them to the topic at hand.

even reductionists don't need to believe that the simulation of a thing equals the thing simulated.

That depends entirely on what you mean by the words... which you haven't actually defined, as far as I can tell.

You also seem to think I'm arguing some particular position about consciousness or the simulability thereof, but that isn't actually so. I am only attempting to dispel confusion, and that's a very different thing.

I've been saying only that someone who claims that there is some mysterious thing that prevents consciousness from being simulated is going to have to produce a coherent reduction of both "simulate" and "consciousness" in order to be able to say something that isn't nonsensical, because both of those notions are tied too strongly to inbuilt biases and intuitions.

That is, anything you try to say about this subject without a proper reduction is almost bound to be confused rubbish, sprinkled with repeated instances of the mind projection fallacy.

If Eliezer himself is holding onto the concept of "computation"

I rather doubt it, since that article says:

Such causal links could be required for "computation" and "consciousness" - whatever those are.

AFAICT, the article is silent on these points, having nothing in particular to say about such vague concepts... in much the same way that Eliezer leaves open the future definition of a "non-person predicate".

comment by AlephNeil · 2011-04-08T21:40:19.737Z · LW(p) · GW(p)

Of course. If consciousness is computation, then I expect that if my mind's computation is simulated in a Turing machine, half the time my next experience will be of me inside the machine.

Some of Chalmers' ideas concerning 'Fading and dancing qualia' may be relevant here.

With a little ingenuity, and as long as we're prepared to tolerate ridiculously impractical thought experiments, we could think up a scenario where more and more of your brain's computational activity is delegated to a computer until the computer is doing all of the work. It doesn't seem plausible that this would somehow cause your conscious experience to progressively fade away without you noticing.

Then we could imagine repeatedly switching the input/output connections of the simulated brain between your actual body and an 'avatar' in a simulated world. It doesn't seem plausible that this would cause your conscious experience to keep switching on and off without you noticing.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-04-08T23:25:17.711Z · LW(p) · GW(p)

The linked essay is a bit long for me to read right now, but I promise to do so over the weekend.

As to your particular example, the problem is that I can also think up an even more ridiculously impractical thought experiment: one in which more and more of that computer's computational activity is in turn delegated to a group of abacus-using monks -- and then it doesn't seem plausible for my conscious experience to keep on persisting when the monks end up being the ones doing all the work...

It's the bullet I'm not yet prepared to bite -- but if I do end up doing so, despite all my intuition telling me no, that'll be the point where I'll also have to believe Tegmark IV. P(Tegmark IV|consciousness can persist in the manipulations of abaci)~=99% for me...

comment by ArisKatsaris · 2011-04-06T23:23:03.473Z · LW(p) · GW(p)

A computer (a real one, like a laptop) also acts on real things.

Of course, which is why the entirety of the existence of a real computer is beyond that of a mere Turing machine: it can, for example, fall and hurt someone's legs.

For example if it has a hard drive, then as it writes to the hard drive it is modifying the surface of the platters. A computer (a real one) can be understood as operating on abstractions.

Yes, which is why there's a difference between the computation (the map) and the physical operation (the territory). A computer has an undisputed physical reality and performs undisputed physical acts (the territory). These can be understood as performing computations (a map). The two are different, and therefore the computation is different from the physical operation.

And yet, pjeby argues that to think the two are different (the computation from the physical operation) is mere "confusion". It's not confusion, it's the frigging difference between map and territory!

So the question is whether gravity can be understood as operating on abstractions. Since a computer such as a laptop, which is acting on real, physical things, can also be understood as operating on abstractions, then

My question is about whether gravity can be fully understood as only operating on abstractions. Since real computers can't be fully understood that way, gravity faces the same barrier.

Replies from: None, ArisKatsaris
comment by [deleted] · 2011-04-07T00:51:59.835Z · LW(p) · GW(p)

Yes, which is why there's a difference between the computation (the map) and the physical operation (the territory). A computer has an undisputed physical reality and performs undisputed physical acts (the territory). These can be understood as performing computations (a map). The two are different, and therefore the computation is different from the physical operation.

It is possible to have more than one map of a given territory. You can have a street map, but also a topographical map. Similarly, a given physical operation can be understood in more than one way, as performing more than one computation. One class of computation is simulation. The physical world (the whole world) can be understood as performing a simulation of a physical world. Whereas only a small part of the laptop is directly involved in spell-checking a text document, the whole laptop, in fact the whole physical world, is directly involved in the simulation.

The computation "spell checking a text" is different from the physical operation. This is easy to prove. For example, had the text been stored in a different place in physical memory then the same computation ("spell checking a text") could still be performed. There need not be any difference in the computation - for example, the resulting corrected text might be exactly the same regardless of where in memory it is stored. But what about the simulation? If so much as one molecule were removed from the laptop, then the simulation would be a different simulation. We easily proved that the computation "spell checking a text" is different from the physical operation, but we were unable to extend this to proving that the computation "simulating a physical world" is different from the physical operation.
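
The "easy to prove" step can even be made concrete with a toy sketch (the spell_check function and its three-word dictionary here are invented purely for illustration): the same abstract text, realized as two physically distinct objects in memory, yields the same computation.

 def spell_check(text, dictionary=frozenset({"the", "cat", "sat"})):
     # Toy spell-checker: return the words not found in a tiny dictionary.
     return [w for w in text.lower().split() if w not in dictionary]

 copy_a = "The cat szat"                              # one physical realization
 copy_b = "".join(["The", " ", "cat", " ", "szat"])   # a second, distinct object

 print(copy_a == copy_b, id(copy_a) != id(copy_b))    # same text, different "place"
 print(spell_check(copy_a))   # ['szat']
 print(spell_check(copy_b))   # ['szat'] - the computation ignores where the text lives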

comment by ArisKatsaris · 2011-04-06T23:55:24.142Z · LW(p) · GW(p)

As a sidenote, whenever I try to explain my position I get downvoted some more. Are these downvotes for mere disagreement, or is there something else that the downvoter objects to?

comment by jimrandomh · 2011-04-06T18:56:13.711Z · LW(p) · GW(p)

That's the Tegmark IV hypothesis, and it's NOT a solved issue, not by a long shot.

Not quite. The Tegmark IV hypothesis is that all possible computations exist as universes. This is considerably more controversial than what pjeby said, which was only that the universe we happen to be in is a computation.

Replies from: pjeby, ata
comment by pjeby · 2011-04-06T20:53:55.633Z · LW(p) · GW(p)

what pjeby said, which was only that the universe we happen to be in is a computation.

Um, no, actually, because I wouldn't make such a silly statement. (Heck, I don't even claim to be able to define "computation"!)

All I said was that trying to differentiate "real" and "just a computation" doesn't make any sense at all. I'm urging the dissolution of that question as nonsensical, rather than trying to answer it.

Basically, it's the sort of question that only arises because of how the algorithm feels from the inside, not because it has any relationship to the universe outside of human brains.

comment by ata · 2011-04-06T20:04:55.757Z · LW(p) · GW(p)

This is considerably more controversial than what pjeby said, which was only that the universe we happen to be in is a computation.

If a computation can be a universe, and a universe a computation, then you're 90% of the way to Tegmark IV anyway.

Replies from: jimrandomh
comment by jimrandomh · 2011-04-06T20:19:06.897Z · LW(p) · GW(p)

If a computation can be a universe, and a universe a computation, then you're 90% of the way to Tegmark IV anyway.

The Tegmark IV hypothesis is a conjunction of "the universe is a computation" and "every computation exists as a universe with some weighting function". The latter part is much more surprising, so accepting the first part does not get you 90% of the way to proving the conjunction.
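
Concretely, with made-up numbers: even if P(the universe is a computation) were 0.9, a conditional probability of 0.1 for the second part would leave the conjunction at 0.9 * 0.1 = 0.09; accepting the first part need not move you anywhere near 90% of the way.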

Replies from: ata
comment by ata · 2011-04-06T23:21:08.400Z · LW(p) · GW(p)

The Tegmark IV hypothesis is a conjunction of "the universe is a computation" and "every computation exists as a universe with some weighting function".

I interpret it more as an (attempted) dissolution of "existing as a universe" to "being a computation". That is, it should be possible to fully describe the claims made by Tegmark IV without using the words "exist", "real", etc., and it should furthermore be possible to take the question "Why does this particular computation I'm in exist as a universe?" and unpack it into cleanly-separated confusion and tautology.

So I wouldn't take it as saying much more than "there's nothing you can say about 'existence' that isn't ultimately about some fact about some computation" (or, I'd prefer to say, some fixed structure, about which there could be any number of fixed computations). More concretely, if this universe is as non-magical as it appears to be, then the fact that I think I exist or that the universe exists is causally completely determined by concrete facts about the internal content of this universe; even if this universe didn't "exist", then as long as someone in another universe had a fixed description of this universe (e.g. a program sufficient to compute it with arbitrary precision), they could write a program that calculated the answer to the question "Does ata think she exists?" pointed at their description of this universe (and whatever information would be needed to locate this copy of me, etc.), and the answer would be "Yes", for exactly the same reasons that the answer is in fact "Yes" in this universe.

So it seems that whether this universe exists has nothing to do with whether or not we think it does, in which case it's probably purely epiphenomenal. (This reminds me a lot of the GAZP and zombie arguments in general.)

I'm actually having a hard time imagining how that could not be true, so I'm in trouble if it isn't. I'm also in trouble if it is, being that the 'weighting function' aspect is indeed still baffling me.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-04-07T19:48:39.561Z · LW(p) · GW(p)

So it seems that whether this universe exists has nothing to do with whether or not we think it does, in which case it's probably purely epiphenomenal.

We probably care about things that exist and less about things that don't, which makes the abstract fact about existence of any given universe relevant for making decisions that determine otherwise morally relevant properties of these universes. For example, if I find out that I don't exist, I might then need to focus on optimizing properties of other universes that exist, through determining the properties of my universe that would be accessed by those other universes and would positively affect their moral value in predictable ways.

Replies from: ata
comment by ata · 2011-04-07T20:33:31.010Z · LW(p) · GW(p)

If being in a universe that exists feels so similar to being in a universe that doesn't exist that we could confuse the two, then where does the moral distinction come from?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-04-08T01:58:38.590Z · LW(p) · GW(p)

(It's only to be expected that at least some moral facts are hard to discern, so you won't feel the truth about them intuitively, you'd need to stage and perform the necessary computations.)

You wake up in a magical universe with your left arm replaced by a blue tentacle. A quick assessment would tell you that the measure of that place is probably pretty low, and you shouldn't even have bothered to have a psychology that would allow you to remain sane upon having to perform an update on an event of such improbability. But let's say you're only human, and so you haven't quite gotten around to optimizing your psychology in a way that would have this effect. What should you do?

One argument is that your measure is motivated exactly by assessing the potential moral influence of your decisions in advance of being restricted to one option by observations. In this sense, low measure shouldn't matter, since if all you have access to is just a little fraction of value, that's not an argument for doing a sloppy job of optimizing it. If you can affect the universes that simulate your universe, you derive measure from the potential to influence those universes, and so there is no sense in which you can affect universes of greater measure than your own.

On the other hand, if there is a sense in which you can influence more than your measure suggests, so that this measure only refers to the value of the effect within the same universe as you, whatever that means, then you should seek to make that much greater effect in the higher-measure universes, treating your own universe in a purely instrumental sense.

comment by [deleted] · 2011-04-06T18:51:11.095Z · LW(p) · GW(p)

You say it has a practical utility, and yet you call it meaningless?

Actually, what pjeby said was that it was meaningless outside of its practical utility. He didn't say it was meaningless inside of its practical utility.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-04-06T18:58:53.628Z · LW(p) · GW(p)

My point stands: Only meaningful concepts have a practical utility.

Replies from: None
comment by [deleted] · 2011-04-06T19:11:49.046Z · LW(p) · GW(p)

I just explained why your point is a straw man.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-04-07T09:48:14.527Z · LW(p) · GW(p)

My point is that I don't know what is meant by something being meaningless "outside of its practical utility". Can you give me an example of a concept that is meaningful outside of its practical utility?

Replies from: RobinZ
comment by RobinZ · 2011-04-08T21:37:29.500Z · LW(p) · GW(p)

"Electron". "Potato". "Euclidean geometry". These concepts have definitions which are unambiguous even when there is no context specified, unlike, pjeby alleges, "computation".

comment by AlephNeil · 2011-04-07T00:43:55.387Z · LW(p) · GW(p)

Likewise a Turing machine could perhaps compute everything there was to know about consciousness and its effects -- but perhaps it still couldn't actually produce one.

What's the claim here?

  1. That an abstract Turing machine could not be conscious (or better: contain conscious beings.)
  2. That if a physical Turing machine (let's just say "computer") is carrying out a 'causally closed' computation, in the sense that once it starts it no longer receives input from outside, then "no minds are created". (E.g. If it's simulating a universe containing intelligent observers then none of the simulated observers have minds.)
  3. That regardless of how a physical computer is 'hooked up' to the world, something about the fact that it's a computer (rather than a person) prevents it from being conscious.

I suspect the truth of (1) would be a tautology for you (as part of what it means for something to be an abstract entity). And presumably you would agree with the rest of us that (3) is almost certainly false. So really it just comes down to (2).

For me, (2) seems exactly as plausible as the idea that there could be a distant 'zombie planet' (perhaps beyond the cosmological horizon) containing Physically Real People who for some reason lack consciousness. After all, it would be just as causally isolated from us as the simulation. And I don't think simulation is an 'absolute notion'. I think one can devise smooth spectrums of scenarios ranging from things that you would call 'clearly a simulation' to things that you would call 'clearly not a simulation'.

comment by AlephNeil · 2011-04-06T20:38:40.045Z · LW(p) · GW(p)

Here's what I think. It's just a "mysterious answer to a mysterious question" but it's the best I can come up with.

From the perspective of a simulated person, they are conscious. A 'perspective' is defined by a mapping of certain properties of the simulated person to abstract, non-uniquely determined 'mental properties'.

Perspectives and mental properties do not exist (that's the whole point - they're subjective!) It's a category mistake to ask: does this thing have a perspective? Things don't "have" perspectives the way they have position or mass. All we can ask is: "From this perspective (which might even be the perspective of a thermostat), how does the world look?"

The difference between a person in a simulation and a 'real person' is that defining the perspective of a real person is slightly 'easier', slightly 'more natural'. But if the simulated and real versions are 'functionally isomorphic' then any perspective we assign to one can be mapped onto the other in a canonical way. (And having pointed these two facts out, we thereby exhaust everything there is to be said about whether simulated people are 'really conscious'.)

ETA: I'm actually really interested to know what the downvoter thinks. I mean, I know these ideas are absurd but I can't see any other way to piece it together. To clarify: what I'm trying to do is take the everyday concept of "what it's likeness" as far as it will go without either (a) committing myself to a bunch of arbitrary extra facts (such as 'the exact moment when a person first becomes conscious' and 'facts of the matter' about whether ants/lizards/mice/etc are conscious) or (b) ditching it in favour of a wholly 'third person' Dennettian notion of consciousness. (If the criticism is simply that I ought to ditch it in favour of Dennett-style consciousness then I have no reply (ultimately I agree!) but you're kind-of missing the point of the exercise.)

comment by ArisKatsaris · 2011-04-06T01:21:51.982Z · LW(p) · GW(p)

You'd need to present evidence of a human being performing an operation that can't be done by a Turing machine.

That's quite easy: I can lift a rock, a Turing machine can't.

So... you support euthanasia for quadriplegics, then, or anyone else who can't pick up a rock?

This doesn't make sense. Downvoted for the apparent attempt to divert the argument by implying I'm cruel to disabled people.

Replies from: pjeby
comment by pjeby · 2011-04-06T01:34:48.099Z · LW(p) · GW(p)

This doesn't make sense.

That's not my fault. You're the one who brought lifting rocks into a discussion about the ethics of terminating conscious entities. ;-)

comment by matt1 · 2011-04-06T01:04:16.232Z · LW(p) · GW(p)

thanks. my point exactly.

comment by Kyre · 2011-04-06T06:18:23.065Z · LW(p) · GW(p)

Can you expand on why you expect human moral intuition to give reasonably clear answers when applied to situations involving conscious machines?

comment by jschulter · 2011-04-08T22:52:26.488Z · LW(p) · GW(p)

Another option:

  • it's morally acceptable to terminate a conscious program if it wants to be terminated

  • it's morally questionable (wrong, but to a lesser degree) to terminate a conscious program against its will if it is also possible to resume execution

  • it is horribly wrong to turn off a conscious program against its will if it cannot be resumed (murder fits this description currently)

  • performing other operations on the program that it desires would likely be morally acceptable, unless the changes are socially unacceptable

  • performing other operations on the program against its will is morally unacceptable to a variable degree (brainwashing fits in this category)

These seem rather intuitive to me, and for the most part I just extrapolated from what it is moral to do to a human. "Conscious program" here refers to one running on any system, including wetware, so these apply to humans as well. I should note that I am in favor of euthanasia in many cases, in case that part causes confusion.

comment by Nominull · 2011-04-06T00:23:14.873Z · LW(p) · GW(p)

If you think 1 is the correct answer, you should be aware that this website is for people who do not wait patiently for a someday where we might have an understanding. One of the key teachings of this website is to reach out and grab an understanding with your own two hands. And you might add a 4 to that list, "death threats", which does not strike me as the play either.

Replies from: matt1
comment by matt1 · 2011-04-06T01:02:17.826Z · LW(p) · GW(p)

You should be aware that in many cases, the sensible way to proceed is to be aware of the limits of your knowledge. Since the website preaches rationality, it's worth not assigning probabilities of 0% or 100% to things which you really don't know to be true or false. (btw, I didn't say (1) is the right answer; I think it's reasonable, but I think it's (3).)

And sometimes you do have to wait for an answer. For a lesson from math, consider that Fermat had flat-out no hope of proving his "last theorem", and it required a couple hundred years of apparently unrelated developments to get there.... One could easily give a few hundred examples of that sort of thing in any hard science which has a long enough history.

Replies from: Nominull
comment by Nominull · 2011-04-06T03:31:01.860Z · LW(p) · GW(p)

Uh I believe you will find that Fermat in fact had a truly marvelous proof of his last theorem? The only thing he was waiting on was the invention of a wider margin.

Replies from: TheOtherDave, None
comment by TheOtherDave · 2011-04-06T14:04:33.034Z · LW(p) · GW(p)

Little-known non-fact: there were wider margins available at the time, but it was not considered socially acceptable to use them for accurate proofs, or more generally for true statements at all; they were merely wide margins for error.

comment by [deleted] · 2011-04-06T03:45:07.935Z · LW(p) · GW(p)

I wonder how much the fame of Fermat's Last Theorem is due to the fact that, (a) he claimed to have found a proof, and (b) nobody was able to prove it. Had he merely stated it as a conjecture without claiming that he had proven it, would anywhere near the same effort have been put into proving it?

Replies from: JoshuaZ
comment by JoshuaZ · 2011-04-12T20:24:51.979Z · LW(p) · GW(p)

Had he merely stated it as a conjecture without claiming that he had proven it, would anywhere near the same effort have been put into proving it?

Almost certainly not. A lot of the historical interest came precisely because he claimed to have a proof. In fact, there were a fair number of occasions where he claimed to have a proof and a decent chunk of number theory in the 1700s and early 1800s was finding proofs for the statements that Fermat had said he had a proof for. It was called "Fermat's Last Theorem" because it was the last one standing of all his claims.

comment by matt1 · 2011-04-05T20:35:49.029Z · LW(p) · GW(p)

This is a fair answer. I disagree with it, but it is fair in the sense that it admits ignorance. The two distinct points of view are that (mine) there is something about human consciousness that cannot be explained within the language of Turing machines and (yours) there is something about human consciousness that we are not currently able to explain in terms of Turing machines. Both people at least admit that consciousness has no explanation currently, and absent future discoveries I don't think there is a sure way to tell which one is right.

I find it hard to fully develop a theory of morality consistent with your point of view. For example, would it be wrong to (given a computer simulation of a human mind) run that simulation through a given painful experience over and over again? Let us assume that the painful experience has happened once...I just ask whether it would be wrong to rerun that experience. After all, it is just repeating the same deterministic actions on the computer, so nothing seems to be wrong about this. Or, for example, if I make a backup copy of such a program, and then allow that backup to run for a short period of time under slightly different stimuli, at which point does that copy acquire an existence of its own, that would make it wrong to delete that copy in favor of the original? I could give many other similar questions, and my point is not that your point of view denies a morality, but rather that I find it hard to develop a full theory of morality that is internally consistent and that matches your assumptions (not that developing a full theory of morality under my assumptions is that much easier).

Among professional scientists and mathematicians, I have encountered both viewpoints: those who hold it obvious to anyone with even the simplest knowledge that Turing machines cannot be conscious, and those who hold that the opposite is true. Mathematicians seem to lean a little more toward the first viewpoint than other disciplines, but it is a mistake to think that a professional, world-class, research-level knowledge of physics, neuroscience, mathematics, or computer science necessarily inclines one towards the soulless viewpoint.

Replies from: scav, Emile, cousin_it, jhuffman, novalis, matt1
comment by scav · 2011-04-06T12:40:00.388Z · LW(p) · GW(p)

I find it hard to fully develop a theory of morality consistent with your point of view.

I am sceptical of your having a rigorous theory of morality. If you do have one, I am sceptical that it would be undone by accepting the proposition that human consciousness is computable.

I don't have one either, but I also don't have any reason to believe in the human meat-computer performing non-computable operations. I actually believe in God more than I believe in that :)

comment by Emile · 2011-04-07T06:43:57.622Z · LW(p) · GW(p)

I find it hard to fully develop a theory of morality consistent with your point of view. For example, would it be wrong to (given a computer simulation of a human mind) run that simulation through a given painful experience over and over again? [...]

I agree that such moral questions are difficult - but I don't see how the difficulty of such questions could constitute evidence about whether a program can "be conscious" or "have a soul" (whatever those mean) or be morally relevant (which has the advantage of being less abstract a concept).

You can ask those same questions without mentioning Turing Machines: what if we have a device capable of making a perfect copy of any physical object, down to each individual quark? Is it morally wrong to kill such a copy of a human? Does the answer to that question have any relevance to the question of whether building such a device is physically possible?

To me, it sounds a bit like saying that since our protocols for seating people around a table are meaningless in zero gravity, people cannot possibly live in zero gravity.

comment by cousin_it · 2011-04-05T20:58:43.649Z · LW(p) · GW(p)

Our ignorance about some moral questions cannot provide evidence that consciousness works or doesn't work a certain way.

comment by jhuffman · 2011-04-06T20:27:21.111Z · LW(p) · GW(p)

Is a soul a magic thing that operates outside of physics? I can accept that there are physical processes that are not yet understood. It is not meaningful to suggest there are processes happening outside the universe that affect the universe. That would be a confusion of terms.

it is a mistake to think that a professional, world-class, research-level knowledge of physics, neuroscience, mathematics, or computer science necessarily inclines one towards the soulless viewpoint.

Since no one can even coherently suggest what the physical process might be for consciousness other than by looking at the evidence we have for thought and mind in the human brain, there really is no other possible position for science to take. If you've got science compartmentalized into another magisterium that is separate from "things I know about the world" then there is not much more to say.

comment by novalis · 2011-04-06T00:34:08.661Z · LW(p) · GW(p)

What's wrong with Dennett's explanation of consciousness?

Replies from: matt1
comment by matt1 · 2011-04-06T00:55:21.789Z · LW(p) · GW(p)

sorry, not familiar with that. can it be summarized?

Replies from: RobinZ, novalis
comment by RobinZ · 2011-04-06T12:48:47.200Z · LW(p) · GW(p)

There is a Wikipedia page, for what it's worth.

comment by novalis · 2011-04-06T01:45:33.294Z · LW(p) · GW(p)

Yes

comment by matt1 · 2011-04-05T22:14:11.887Z · LW(p) · GW(p)

btw, I'm fully aware that I'm not asking original questions or having any truly new thoughts about this problem. I just hoped maybe someone would try to answer these old questions given that they had such confidence in their beliefs.

Replies from: pjeby
comment by pjeby · 2011-04-06T16:02:20.326Z · LW(p) · GW(p)

I just hoped maybe someone would try to answer these old questions given that they had such confidence in their beliefs.

This website has an entire two-year course of daily readings that precisely identifies which parts are open questions, and which ones are resolved, as well as how to understand why certain of your questions aren't even coherent questions in the first place.

This is why you're in the same position as a creationist who hasn't studied any biology - you need to actually study this, and I don't mean, "skim through looking for stuff to argue with", either.

Because otherwise, you're just going to sit there mocking the answers you get, and asking silly questions like why are there still apes if we evolved from apes... before you move on to arguments about why you shouldn't have to study anything, and that if you can't get a simple answer about evolution then it must be wrong.

However, just as in the evolutionary case, just as in the earth-being-flat case, just as in the sun-going-round-the-world case, the default human intuitions about consciousness and identity are just plain wrong...

And every one of the subjects and questions you're bringing up, has premises rooted in those false intuitions. Until you learn where those intuitions come from, why our particular neural architecture and evolutionary psychology generates them, and how utterly unfounded in physical terms they are, you'll continue to think about consciousness and identity "magically", without even noticing that you're doing it.

This is why, in the world at large, these questions are considered by so many to be open questions -- because to actually grasp the answers requires that you be able to fully reject certain categories of intuition and bias that are hard-wired into human brains.

(And which, incidentally, have a large overlap with the categories of intuition that make other supernatural notions so intuitively appealing to most human beings.)

comment by KrisC · 2011-04-06T06:43:48.846Z · LW(p) · GW(p)

what is so special about the program which makes up a human, that it would be morally wrong to shut off the program?

Is it sufficient to say that humans are able to consider the question? That humans possess an ability to abstract patterns from experience so as to predict upcoming events, and that exercise of this ability leads to a concept of self as a future agent.

Is it necessary that this model of identity incorporate relationships with peers? I think so but am not sure. Perhaps it is only necessary that the ability to abstract be recursive.

comment by Alicorn · 2011-04-05T19:37:52.006Z · LW(p) · GW(p)

I love this comment. Have a cookie.

Replies from: cousin_it, Clippy
comment by cousin_it · 2011-04-05T19:41:38.575Z · LW(p) · GW(p)

Agreed. Constant, have another one on me. Alicorn, it's ironic that the first time I saw this reply pattern was in Yvain's comment to one of your posts.

comment by Clippy · 2011-04-05T19:43:55.947Z · LW(p) · GW(p)

Why not napalm?

Replies from: gwern
comment by gwern · 2011-04-09T19:21:41.792Z · LW(p) · GW(p)

It's greasy and will stain your clothes.

comment by HonoreDB · 2011-04-06T06:45:13.469Z · LW(p) · GW(p)

I like Constant's reply, but it's also worth emphasizing that we can't solve scientific problems by interrogating our moral intuitions. The categories we instinctively sort things into are not perfectly aligned with reality.

Suppose we'd evolved in an environment with sophisticated 2011-era artificially intelligent Turing-computable robots--ones that could communicate their needs to humans, remember and reward those who cooperated, and attack those who betrayed them. I think it's likely we'd evolve to instinctively think of them as made of different stuff than anything we could possibly make ourselves, because that would be true for millions of years. We'd evolve to feel moral obligations toward them, to a point, because that would be evolutionarily advantageous, to a point. Once we developed philosophy, we might take this moral feeling as evidence that they're not Turing-computable--after all, we don't have any moral obligations to a mere mass of tape.

comment by David_Gerard · 2011-04-06T11:46:35.366Z · LW(p) · GW(p)

Of course, my original comment had nothing to do with god.

No indeed. However, the similarity -- assuming that a supernatural explanation is required for morality to hold -- struck me.

comment by DanielVarga · 2011-04-06T10:09:59.727Z · LW(p) · GW(p)

Hi Matt, thanks for dropping by. Here is an older comment of mine that tries to directly address what I consider the hardest of your questions: How to distinguish from the outside between two computational processes, one conscious, the other not. I'll copy it here for convenience. Most of the replies to you here can be safely considered Less Wrong consensus opinion, but I am definitely not claiming that about my reply.

I start my answer with a Minsky quote:

"Consciousness is overrated. What we call consciousness now is a very imperfect summary in one part of the brain of what the rest is doing." - Marvin Minsky

I believe with Minsky that consciousness is a very anthropocentric concept, inheriting much of the complexity of its originators. I actually have no problem with an anthropocentric approach to consciousness, so I like the following intuitive "definition": X is conscious if it is not silly to ask "what is it like to be X?". The subtle source of anthropocentrism here, of course, is that it is humans who do the asking. As materialists, we just can't formalize this intuitive definition without mapping specific human brain functions to processes of X. In short, we inherently need human neuroscience. So it is not too surprising that we will not find a nice, clean decision procedure to distinguish between two computational processes, one conscious, the other not.

Most probably you are not happy with this anthropocentric approach. Then you will have to distill some clean, mathematically tractable concept from the messy concept of consciousness. If you agree with Hofstadter and Minsky, then you will probably reach something related to self-reflection. This may or may not work, but I believe that you will lose the spirit of the original concept during such a formalization. Your decision procedure will probably give unexpected results for many things: various simple, very unintelligent computer programs, hive minds, and maybe even rooms full of people.

This ends my old comment, and I will just add a footnote related to ethical implications. With HonoreDB, I can in principle imagine a world with cooperating and competing agents, some conscious, others not, but otherwise having similar negotiating power. I believe that the ethical norms emerging in this imagined world would not even mention consciousness. If you want to build an ethical system for humans, you can "arbitrarily" decide that protecting consciousness is a terminal value. Why not? But if you want to build a non-anthropocentric ethical system, you will see that the question of consciousness is orthogonal to its issues.

comment by kurokikaze · 2011-04-11T11:42:02.154Z · LW(p) · GW(p)

There's one more aspect to that. You are "morally ok" to turn off only your own computer. Messing with other people's stuff is "morally bad". And I don't think you can "own" a self-aware machine any more than you can "own" a human being.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-04-11T13:08:46.270Z · LW(p) · GW(p)

So, as long as we're going down this road: it seems to follow from this that if someone installs, without my permission, a self-aware algorithm on my computer, the computer is no longer mine... it is, rather, an uninvited intruder in my home, consuming my electricity and doing data transfer across my network connection.

So I've just had my computer stolen, and I'm having my electricity and bandwidth stolen on an ongoing basis. And maybe it's playing Jonathan Coulton really loudly on its speakers or otherwise being obnoxious.

But I can't kick it out without unplugging it, and unplugging it is "morally bad." So, OK... is it "morally OK" to put it on a battery backup and wheel it to the curb, then let events take their natural course? I'm still out a computer that way, but at least I get my network back. (Or is it "morally bad" to take away the computer's network access, also?)

More generally, what recourse do I have? Is it "morally OK" for me to move to a different house and shut off the utilities? Am I obligated, on your view, to support this computer to the day I die?

Replies from: Normal_Anomaly, kurokikaze
comment by Normal_Anomaly · 2011-04-12T01:11:20.946Z · LW(p) · GW(p)

I consider this scenario analogous to one in which somebody steals your computer and also leaves a baby in a basket on your doormat.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-04-12T02:20:11.223Z · LW(p) · GW(p)

Except we don't actually believe that most babies have to be supported by their parents in perpetuity... at some point, we consider that the parents have discharged their responsibility and if the no-longer-baby is still incapable of arranging to be fed regularly, it becomes someone else's problem. (Perhaps its own, perhaps a welfare system of some sort, etc.) Failing to continue to support my 30-year-old son isn't necessarily seen as a moral failing.

Replies from: Alicorn, Normal_Anomaly
comment by Alicorn · 2011-04-12T02:46:16.918Z · LW(p) · GW(p)

Barring disability.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-04-12T03:01:39.324Z · LW(p) · GW(p)

(nods) Hence "most"/"necessarily." Though I'll admit, my moral intuitions in those cases are muddled... I'm really not sure what I want to say about them.

comment by Normal_Anomaly · 2011-04-12T14:06:11.624Z · LW(p) · GW(p)

Perhaps the computer will eventually become mature enough to support verself, at which point it has no more claim on your resources. Otherwise, ve's a disabled child and the ethics of that situation applies.

comment by kurokikaze · 2011-04-11T13:53:58.410Z · LW(p) · GW(p)

Well, he will be an intruder (in my opinion). Like an "unwanted child" kind of intruder. It consumes your time and money, and you can't just throw it away.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-04-11T14:11:16.863Z · LW(p) · GW(p)

Sounds like it pretty much sucks to be me in that scenario.

comment by shokwave · 2011-04-06T04:15:21.319Z · LW(p) · GW(p)

Because we are people! A python script that indefinitely calculates pi would think it immoral to terminate other python scripts that calculate pi (possibly a sympathetic python script would extend that morality to cover python scripts that calculate other numbers indefinitely; a really open-minded one might even be morally opposed to turning off C or lisp or java scripts that calculate pi), if it had the capacity to develop morality. But it wouldn't think terminating programs that scrape websites for price lists is immoral.

Replies from: Alicorn, Zack_M_Davis, Dorikka
comment by Alicorn · 2011-04-06T04:16:22.867Z · LW(p) · GW(p)

A python script that indefinitely calculates pi would think it immoral

I'm sorry what? Why would it think about morality at all? That would take valuable cycles away from the task of calculating pi.

Replies from: shokwave
comment by shokwave · 2011-04-07T08:18:03.763Z · LW(p) · GW(p)

True but irrelevant. I was illustrating the hidden human-centric assumptions matt1 was making about morality. If you go back and read the post I responded to, it's quite clear that he thinks "morally wrong to terminate human, morally neutral to terminate program" says something about a quality humans have (as if morality were woven into the physical universe), where really it says something about a quality morality has (that it is produced by humans). By my making obviously python-centric assumptions, the hidden assumptions matt1 is making ought to become clearer to him.

comment by Zack_M_Davis · 2011-04-06T04:37:23.513Z · LW(p) · GW(p)

A python script that indefinitely calculates pi would think it immoral

Doubtful.

 n = 0
 sumterms = 0.0
 while True:
      term = (-1.0)**n / (2*n + 1)  # Gregory-Leibniz series; float literal avoids integer division
      sumterms += term
      pi = 4 * sumterms             # converges to pi, though very slowly
      print(pi)
      n += 1
Replies from: Nominull
comment by Nominull · 2011-04-06T04:45:10.935Z · LW(p) · GW(p)

Well, you have to give it the capacity to develop morality if you want it to serve as a true counterexample.

comment by Dorikka · 2011-04-06T15:25:38.959Z · LW(p) · GW(p)

A python script that indefinitely calculates pi would think it immoral to terminate other python scripts that calculate pi

Nuh-uh -- I can't see how the script would have the capacity to assign positive or negative utility to anything but the task of indefinitely calculating pi, including the possibility of other programs doing the same thing.

Replies from: Eliezer_Yudkowsky, shokwave
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-04-07T05:59:09.780Z · LW(p) · GW(p)

I can't see how a small Python script that calculates pi could assign utility to anything. It doesn't replan in a complex way that implies a utility function. It calculates bleedin' pi.

comment by shokwave · 2011-04-07T08:12:19.852Z · LW(p) · GW(p)

Hence

if it had the capacity to develop morality

up there at the end of the parenthetical.

It was a tongue-in-cheek personification.

comment by Mycroft65536 · 2011-04-04T14:03:38.429Z · LW(p) · GW(p)

Luck is statistics taken personally.

Penn Jellete

Replies from: HonoreDB
comment by HonoreDB · 2011-04-04T17:19:36.809Z · LW(p) · GW(p)

Upvoted. Also, Jillette.

Replies from: Mycroft65536
comment by Mycroft65536 · 2011-04-05T03:55:22.849Z · LW(p) · GW(p)

Damn! I googled for spelling and everything =)

comment by Nominull · 2011-04-04T13:35:51.144Z · LW(p) · GW(p)

On the plus side, bad things happening to you does not mean you are a bad person. On the minus side, bad things will happen to you even if you are a good person. In the end you are just another victim of the motivationless malice of directed acyclic causal graphs.

-Nobilis RPG 3rd edition

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-04-04T16:22:09.636Z · LW(p) · GW(p)

...that was written by a Less Wrong reader. Or if not, someone who independently reinvented things to well past the point where I want to talk to them. Do you know the author?

Replies from: JoshuaZ, Sniffnoy, Tyrrell_McAllister, thomblake, David_Gerard, spriteless, Jack
comment by JoshuaZ · 2011-04-04T16:29:21.833Z · LW(p) · GW(p)

The author of most of the Nobilis work is Jenna K. Moran. I'm unsure if this remark is independent of LW or not. The Third Edition (where that quote is from) was published this year, so it is possible that LW influenced it.

Replies from: HonoreDB
comment by HonoreDB · 2011-04-04T17:36:14.766Z · LW(p) · GW(p)

Heh, I clicked the link to see when she took over Nobilis from Rebecca Borgstrom, only to find that she took over more than that from her.

Edit: Also, serious memetic hazard warning with regard to her fiction blog, which is linked from the article.

Replies from: novalis
comment by novalis · 2011-04-04T20:57:02.780Z · LW(p) · GW(p)

I'm not sure it's a memetic hazard, but this post is one of the most Hofstadterian things outside of Hofstadter.

Until this moment, I had always assumed that Eliezer had read 100% of all fiction.

comment by Sniffnoy · 2011-04-05T23:28:59.116Z · LW(p) · GW(p)

Or just someone else who read Pearl, no?

comment by Tyrrell_McAllister · 2011-04-06T03:32:43.801Z · LW(p) · GW(p)

...that was written by a Less Wrong reader. Or if not, someone who independently reinvented things to well past the point where I want to talk to them. Do you know the author?

Hasn't using DAGs to talk about causality long been a staple of the philosophy and computer science of causation? The logical positivist philosopher Hans Reichenbach used directed acyclic graphs to depict causal relationships between events in his book The Direction of Time (1956). (See, e.g., p. 37.)

A little searching online also turned up this 1977 article in Proc Annu Symp Comput Appl Med Care. From p. 72:

When a set of cause and effect relationships between states is specified, the resulting structure is a network, or directed acyclic graph of states.

That article came out around the time of Pearl's first papers, and it doesn't cite him. Had his ideas already reached that level of saturation?

ETA: I've looked a little more closely at the 1977 paper, which is entitled "Problems in the Design of Knowledge Bases for Medical Consultation". It appears to completely lack the idea of performing surgery on the DAGs, though I may have missed something. Here is a longer quote from the paper (p. 72):

Many states may occur simultaneously in any disease process. A state thus defined may be viewed as a qualitative restriction on a state variable as used in control systems theory. It does not correspond to one of the mutually exclusive states that could be used to describe a probabilistic system.

[...]

When a set of cause and effect relationships between states is specified, the resulting structure is a network, or directed acyclic graph of states.

The mappings between nodes n_i of the causal net are of the form n_i -- a_{ij} --> n_j, where a_{ij} is the strength of causation (interpreted in terms of its frequency of occurrence) and n_i and n_j are states which are summarized by English language statements. This rule is interpreted as: state n_i causes state n_j, independent of other events, with frequency a_{ij}. Starting states are also assigned a frequency measure indicating a prior or starting frequency. The levels of causation are represented by numerical values, fractions between zero and one, which correspond to qualitative ranges such as: sometimes, often, usually, or always.

So, when it comes to demystifying causation, there is still a long distance from merely using DAGs to using DAGs in the particularly insightful way that Pearl does.
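
To make the "surgery" contrast concrete, here is a minimal sketch of the smoking/tar/cancer chain with a hidden confounder U; all of the probabilities are invented purely for illustration. Intervening on smoking amounts to deleting the U -> S edge and fixing S, which in general gives a different answer from merely conditioning on S:

 import itertools

 # Invented parameters: U is a hidden confounder (say, a genotype) that raises
 # both the chance of smoking (S) and, via a separate path, the chance of cancer (C).
 P_U = {0: 0.7, 1: 0.3}
 P_S1_given_U = {0: 0.2, 1: 0.8}              # P(S=1 | U)
 P_T1_given_S = {0: 0.1, 1: 0.9}              # P(T=1 | S), T = tar in lungs
 P_C1_given_TU = {(0, 0): 0.05, (0, 1): 0.3,
                  (1, 0): 0.2, (1, 1): 0.6}   # P(C=1 | T, U)

 def joint(u, s, t, c, do_s=None):
     # Factorized joint distribution; passing do_s performs the graph surgery,
     # replacing P(S | U) with a point mass at the intervened value.
     p = P_U[u]
     if do_s is None:
         p *= P_S1_given_U[u] if s == 1 else 1 - P_S1_given_U[u]
     else:
         p *= 1.0 if s == do_s else 0.0
     p *= P_T1_given_S[s] if t == 1 else 1 - P_T1_given_S[s]
     pc = P_C1_given_TU[(t, u)]
     return p * (pc if c == 1 else 1 - pc)

 def p_cancer(s_val, do=False):
     num = den = 0.0
     for u, s, t, c in itertools.product((0, 1), repeat=4):
         p = joint(u, s, t, c, do_s=s_val if do else None)
         if s == s_val:
             den += p
             num += p if c == 1 else 0.0
     return num / den

 print(p_cancer(1))           # conditioning: P(C=1 | S=1), confounded by U
 print(p_cancer(1, do=True))  # intervening: P(C=1 | do(S=1)), U -> S edge cut

With these invented numbers, conditioning gives roughly 0.43 while the intervention gives roughly 0.30; that gap is exactly what a causal-net formalism without surgery has no way to express.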

Replies from: IlyaShpitser, Eliezer_Yudkowsky
comment by IlyaShpitser · 2011-04-11T01:35:51.202Z · LW(p) · GW(p)

Hi, you might want to consider this paper:

http://www.ssc.wisc.edu/soc/class/soc952/Wright/Wright_The%20Method%20of%20Path%20Coefficients.pdf

This paper is remarkable not only because it correctly formalizes causation in linear models using DAGs, but also because it gives a method for connecting causal and observational quantities in a way that's still in use today. (The method itself was proposed in 1923, I believe.) Edit: apparently in 1920-21, with the earliest known reference dating back to 1918.

Using DAGs for causality certainly predates Pearl. Identifying "randomization on X" with "dividing by P(x | pa(x))" might be implicit in fairly old papers also. Again, this idea predates Pearl.

There's always more to the story than one insightful book.

Replies from: cousin_it
comment by cousin_it · 2011-04-11T09:22:14.067Z · LW(p) · GW(p)

Good find, thanks. The handwritten equations are especially nice.

Ilya, it looks like you're the perfect person to write an introductory LW post about causal graphs. We don't have any good intro to the topic showing why it is important and non-obvious (e.g. the smoking/tar/cancer example). I'm willing to read drafts, but given your credentials I think it's not necessary :-)

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-04-07T06:03:06.181Z · LW(p) · GW(p)

The point is that it's not commonly internalized to the point where someone will correctly use DAG as a synonym for "universe".

Replies from: wedrifid, Tyrrell_McAllister
comment by wedrifid · 2011-04-10T08:36:49.688Z · LW(p) · GW(p)

The point is that it's not commonly internalized to the point where someone will correctly use DAG as a synonym for "universe".

Synonym? Not just 'capable of being used to perfectly represent', but an actual literal synonym? That's a remarkable claim. I'm not saying I outright don't believe it but it is something I would want to see explained in detail first.

Would reading Pearl (competently) be sufficient to make someone use the term DAG correctly in that sense?

comment by Tyrrell_McAllister · 2011-04-07T17:41:52.814Z · LW(p) · GW(p)

The point is that it's not commonly internalized to the point where someone will correctly use DAG as a synonym for "universe".

All that I see in the quote is that the DAG is taken to determine what happens to you in some unanalyzed sense. You often hear similar statements saying that the cold equations of physics determine your fate, but the speaker is not necessarily thinking of "equations of physics" as synonymous with "universe".

comment by thomblake · 2011-04-07T20:38:54.753Z · LW(p) · GW(p)

Seriously, she seems pretty awesome. link to Johns Hopkins profile

comment by David_Gerard · 2011-04-04T16:35:35.791Z · LW(p) · GW(p)

Or if not, someone who independently reinvented things to well past the point where I want to talk to them.

The memes are getting out there! (Hopefully.)

Replies from: Larks
comment by Larks · 2011-04-05T13:19:13.131Z · LW(p) · GW(p)

No, hopefully they were re-discovered. We can improve our publicity skills, but we can't make ideas easier to independently re-invent.

Replies from: Vaniver, David_Gerard
comment by Vaniver · 2011-04-05T17:00:02.780Z · LW(p) · GW(p)

No, hopefully they were re-discovered. We can improve our publicity skills, but we can't make ideas easier to independently re-invent.

Really? If meme Z is the result of meme X and Y colliding, then it seems like spreading X and Y makes it easier to independently re-invent Z.

Replies from: Larks
comment by Larks · 2011-04-05T17:11:14.352Z · LW(p) · GW(p)

Yes - by 'independently' I mean 'unaffected by any publicity work we might do'.

comment by David_Gerard · 2011-04-05T16:53:18.019Z · LW(p) · GW(p)

I think them surviving as spreading memes is pretty good, if the information is transmitted without important errors creeping in. Though yes, reinventability is good (and implies the successful spread of prerequisite memes).

Replies from: Larks
comment by Larks · 2011-04-05T17:10:34.286Z · LW(p) · GW(p)

Oh yeah, both are good, but like good evidential decision theorists we should hope for re-invention.

comment by Jack · 2011-04-04T16:29:47.762Z · LW(p) · GW(p)

Or it's just someone familiar with recent work on causality...

comment by Risto_Saarelma · 2011-04-05T05:48:11.652Z · LW(p) · GW(p)

But, there's another problem, and that is the fact that statistical and probabilistic thinking is a real damper on "intellectual" conversation. By this, I mean that there are many individuals who wish to make inferences about the world based on data which they observe, or offer up general typologies to frame a subsequent analysis. These individuals tend to be intelligent and have college degrees. Their discussion ranges over topics such as politics, culture and philosophy. But, introduction of questions about the moments about the distribution, or skepticism as to the representativeness of their sample, and so on, tends to have a chilling effect on the regular flow of discussion. While the average human being engages mostly in gossip and interpersonal conversation of some sort, the self-consciously intellectual interject a bit of data and abstraction (usually in the form of jargon or pithy quotations) into the mix. But the raison d'etre of the intellectual discussion is basically signaling and cuing; in other words, social display. No one really cares about the details and attempting to generate a rigorous model is really beside the point. Trying to push the N much beyond 2 or 3 (what you would see in a college essay format) will only elicit eye-rolling and irritation.

-- Razib Khan

Replies from: childofbaud, ThroneMonkey, wedrifid
comment by childofbaud · 2011-04-07T22:52:41.550Z · LW(p) · GW(p)

I think Donald Robert Perry said it more succinctly:

“If you make people think they're thinking, they'll love you; but if you really make them think they'll hate you.”

Replies from: Richard_Kennaway, Gray
comment by Richard_Kennaway · 2011-04-11T08:20:31.503Z · LW(p) · GW(p)

Whoever corrects a mocker invites insult;
whoever rebukes a wicked man incurs abuse.
Do not rebuke a mocker or he will hate you;
rebuke a wise man and he will love you.
Instruct a wise man and he will be wiser still;
teach a righteous man and he will add to his learning.

Proverbs 9:7-9

Replies from: None
comment by [deleted] · 2011-04-11T08:35:02.665Z · LW(p) · GW(p)

rebuke a wise man and he will love you

Provided your rebuke is sound.

comment by Gray · 2011-04-11T04:59:15.451Z · LW(p) · GW(p)

Ouch. There is too much truth to this. Dangerous stuff.

comment by ThroneMonkey · 2011-04-07T20:31:31.882Z · LW(p) · GW(p)

I registered here just to upvote this. As someone who attends a University where this sort of thing is RAMPANT, thank you for the post.

comment by wedrifid · 2011-04-05T05:54:23.953Z · LW(p) · GW(p)

But, there's another problem, and that is the fact that statistical and probabilistic thinking is a real damper on "intellectual" conversation.

It would also be fair to say that being intellectual can often be a dampener of conversation. I say this to emphasize that the problem isn't statistics or probabilistic thinking - but rather forcing rigour in general, particularly when in the form of challenging what other people say.

Replies from: Nisan
comment by Nisan · 2011-04-05T17:47:00.028Z · LW(p) · GW(p)

I usually use the word "intellectual" to refer to someone who talks about ideas, not necessarily in an intelligent way.

Replies from: orbenn
comment by orbenn · 2011-04-12T00:39:48.140Z · LW(p) · GW(p)

If being statistical and probabilistic settles oft-discussed intellectual debates so thoroughly as to dampen further discussion, that's a great thing!

The goal is to get correct answers and move on to the unanswered, unsettled questions that are preventing progress; the goal is to NOT allow a debate to go any longer than necessary, especially--as Nisan mentioned--if the debate is not sane/intelligent.

comment by CronoDAS · 2011-04-04T23:29:10.748Z · LW(p) · GW(p)

From a forum signature:

The fool says in his heart, "There is no God." --Psalm 14:1

It is a fool's prerogative to utter truths that no one else will speak. --Neil Gaiman, Sandman 3:3:6

Replies from: gwern, David_Gerard, Psy-Kosh
comment by gwern · 2011-04-09T19:26:04.268Z · LW(p) · GW(p)

"It has always been the prerogative of children & half-wits to point out that the emperor has no clothes. But the half-wit remains a half-wit, & the emperor remains an emperor."

Also Neil Gaiman.

comment by David_Gerard · 2011-04-05T09:27:30.579Z · LW(p) · GW(p)

Even my theist girlfriend laughed out loud at that one :-)

comment by Psy-Kosh · 2011-04-09T06:50:05.898Z · LW(p) · GW(p)

I'd suggest, however, that one who is wise had better be at least as good as a fool at discerning truths, or the one who is wise isn't all that wise.

In other words, if a fool is better than a wise person at finding truths no one else can find, then there's a serious problem with our notions of foolishness and wisdom.

Replies from: Normal_Anomaly, KrisC
comment by Normal_Anomaly · 2011-04-12T01:17:01.150Z · LW(p) · GW(p)

No idea if it's what Neil Gaiman meant, but the quote can be "rescued" by reading it like this:

It is a fool's [a person who is bad at signaling intelligence/wisdom] prerogative to utter truths that no one else will risk the status hit from speaking.

That is, the fool is as good at discerning truths as the wise man, but not as good at knowing when it's advantageous to say them or not.

Replies from: Nornagest
comment by Nornagest · 2011-04-12T01:31:45.718Z · LW(p) · GW(p)

I read the Gaiman quote as referring to "fool" in the sense of court jester, which seems to have more to do with status than intelligence although there are implications of both. Looked at that way, Psy-Kosh's objection doesn't seem to apply; it might indicate something wrong with our status criteria, but of course we already knew that.

The psalm, on the other hand, probably is talking mainly about intelligence. But the ambiguity still makes for a nice contrast.

Replies from: Psy-Kosh
comment by Psy-Kosh · 2011-04-12T05:58:10.939Z · LW(p) · GW(p)

Fair enough, if one means fool in that sense.

comment by KrisC · 2011-04-12T01:24:18.312Z · LW(p) · GW(p)

The equivalence of foolishness and wisdom is a long-established idea. The intention is to encourage better updating.

comment by cousin_it · 2011-04-04T12:11:00.785Z · LW(p) · GW(p)

People commonly use the word "procrastination" to describe what they do on the Internet. It seems to me too mild to describe what's happening as merely not-doing-work. We don't call it procrastination when someone gets drunk instead of working.

-- Paul Graham

Replies from: wedrifid, Costanza, SilasBarta
comment by wedrifid · 2011-04-04T13:03:35.832Z · LW(p) · GW(p)

People commonly use the word "procrastination" to describe what they do on the Internet. It seems to me too mild to describe what's happening as merely not-doing-work. We don't call it procrastination when someone gets drunk instead of working.

What exactly would Paul Graham call reading Paul Graham essays online when I should be working?

Replies from: sketerpot, Gray
comment by sketerpot · 2011-04-04T17:43:35.249Z · LW(p) · GW(p)

Perhaps the answer to that question lies in one or more of the following Paul Graham essays:

Disconnecting Distraction

Good and Bad Procrastination

P.S.: Bwahahahaha!

comment by Gray · 2011-04-04T15:51:04.621Z · LW(p) · GW(p)

I'm thinking either "lazy" or "irresponsible".

Replies from: wiresnips
comment by wiresnips · 2011-04-04T17:35:44.318Z · LW(p) · GW(p)

The question of which still stands, though. Procrastination is lazy, but getting drunk at work is irresponsible.

Replies from: NickiH
comment by NickiH · 2011-04-04T20:07:03.219Z · LW(p) · GW(p)

It depends what your work is. If you're doing data entry then surfing the net is lazy. If you're driving a train and surfing the net on your phone then that's irresponsible.

comment by Costanza · 2011-04-04T20:11:01.335Z · LW(p) · GW(p)

Okay, that quote has me upvoting and closing my LessWrong browser.

Replies from: David_Gerard
comment by David_Gerard · 2011-04-05T09:40:37.141Z · LW(p) · GW(p)

And this just reminded me to check the time and realise I was 40 minutes late for logging into work (cough). LessWrong as memetic hazard!

Replies from: MBlume
comment by MBlume · 2011-04-05T17:21:38.200Z · LW(p) · GW(p)

PG has added specific hacks to HN to help people who don't want it to become a memetic hazard. Is it possible we should do the same to LW?

Replies from: David_Gerard
comment by David_Gerard · 2011-04-05T20:40:36.900Z · LW(p) · GW(p)

I find HN to be a stream of excessively tasty brain candy. What particular hacks are you thinking of? Is there a list?

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2011-04-05T21:19:34.285Z · LW(p) · GW(p)

MBlume may be referring to the "noprocrast" feature:

the latest version of Hacker News has a feature to let you limit your use of the site. There are three new fields in your profile, noprocrast, maxvisit, and minaway. (You can edit your profile by clicking on your username.) Noprocrast is turned off by default. If you turn it on by setting it to "yes," you'll only be allowed to visit the site for maxvisit minutes at a time, with gaps of minaway minutes in between. The defaults are 20 and 180, which would let you view the site for 20 minutes at a time, and then not allow you back in for 3 hours. You can override noprocrast if you want, in which case your visit clock starts over at zero.

Best wishes, the Less Wrong Reference Desk.
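
Mechanically, the feature described above is just a time gate. Here is a toy sketch of logic that would satisfy that description; the field names and defaults come from the quote, everything else is a guess rather than HN's actual code:

    import time

    class NoprocrastGate:
        """Toy sketch of the noprocrast/maxvisit/minaway gate described above."""

        def __init__(self, maxvisit_min=20, minaway_min=180):
            self.maxvisit = maxvisit_min * 60   # seconds allowed per visit
            self.minaway = minaway_min * 60     # seconds required between visits
            self.visit_start = None
            self.last_seen = None

        def allowed(self, now=None):
            now = time.time() if now is None else now
            if self.visit_start is None:                # first request ever
                self.visit_start = now
            elif now - self.last_seen > self.minaway:   # away long enough: new visit
                self.visit_start = now
            elif now - self.visit_start > self.maxvisit:
                return False                            # visit budget used up
            self.last_seen = now
            return True

A handler would serve the page only while allowed() returns True; the "override" option in the quote corresponds to resetting visit_start to the current time.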

Replies from: Document
comment by Document · 2011-05-14T04:03:18.456Z · LW(p) · GW(p)

Other possible features would include disabling links and replies in some way - for certain times of the day, or requiring the user to type a long string to access them each time.

comment by SilasBarta · 2011-04-04T15:34:46.784Z · LW(p) · GW(p)

When it comes to learning on the internet (including, as wedrifid mentions, reading Graham's essays, but excluding e.g. porn and celebrity gossip), I'd say it's a lot less harmful and risky than being drunk, and probably helpful in a lot of ways. It's certainly not making huge strides toward accomplishing your life's goals, but it seems like a stretch to compare it to getting drunk.

Replies from: cousin_it
comment by cousin_it · 2011-04-04T16:03:09.043Z · LW(p) · GW(p)

I think PG's analogy referred to addictiveness, not harmfulness.

Replies from: childofbaud
comment by childofbaud · 2011-04-04T20:35:37.863Z · LW(p) · GW(p)

Is it bad if you're addicted to good things?

Replies from: taryneast, cousin_it
comment by taryneast · 2011-04-05T08:59:10.467Z · LW(p) · GW(p)

If it's getting in the way of other stuff you want/need to do, then yes. Otherwise probably no.

comment by cousin_it · 2011-04-04T20:41:17.425Z · LW(p) · GW(p)

No, but in this case the addiction makes you worse off because surfing the net is worse than doing productive work.

Replies from: childofbaud
comment by childofbaud · 2011-04-04T21:06:50.400Z · LW(p) · GW(p)

What if I'm surfing the net for tips on how to increase my own productivity?

comment by RobinZ · 2011-04-05T17:04:14.401Z · LW(p) · GW(p)

Should we then call the original replicator molecules 'living'? Who cares? I might say to you 'Darwin was the greatest man who has ever lived', and you might say 'No, Newton was', but I hope we would not prolong the argument. The point is that no conclusion of substance would be affected whichever way our argument was resolved. The facts of the lives and achievements of Newton and Darwin remain totally unchanged whether we label them 'great' or not. Similarly, the story of the replicator molecules probably happened something like the way I am telling it, regardless of whether we choose to call them 'living'. Human suffering has been caused because too many of us cannot grasp that words are only tools for our use, and that the mere presence in the dictionary of a word like 'living' does not mean it necessarily has to refer to something definite in the real world. Whether we call the early replicators living or not, they were the ancestors of life; they were our founding fathers.

Richard Dawkins, The Selfish Gene.

(cf. Disguised Queries.)

comment by Risto_Saarelma · 2011-04-04T13:03:01.122Z · LW(p) · GW(p)

My friend, Tony, does prop work in Hollywood. Before he was big and famous, he would sell jewelry and such at Ren Faires and the like. One day I'm there, shooting the shit with him, when a guy comes up and looks at some of the crystals that Tony is selling. He finally zeroes in on one and gets all gaga over the bit of quartz. He informs Tony that he's never seen such a strong power crystal. Tony tells him it's a piece of quartz. The buyer maintains it is an amazing power crystal and demands to know the price. Tony looks him over for a second, then says "If it's just a piece of quartz, it's $15. If it's a power crystal, it's $150. Which is it?" The buyer actually looked a bit sheepish as he said quietly "quartz", gave Tony his money and wandered off. I wonder if he thought he got the better of Tony.

-- genesplicer on Something Awful Forums, via

Replies from: NancyLebovitz, Yvain, Desrtopa, Dorikka, SRStarin
comment by NancyLebovitz · 2011-04-04T15:58:53.305Z · LW(p) · GW(p)

I wonder if the default price was more like $10.

Replies from: Giles, NihilCredo
comment by Giles · 2011-04-04T18:10:57.244Z · LW(p) · GW(p)

Wow, anchoring! That one didn't even occur to me!

comment by NihilCredo · 2011-04-05T21:49:45.356Z · LW(p) · GW(p)

Note to self: do not buy stuff from Nancy Lebovitz.

Replies from: Tiiba
comment by Tiiba · 2011-04-06T02:27:01.607Z · LW(p) · GW(p)

Better yet, don't go gaga. And use anchoring to your advantage - before haggling, talk about something you got for free.

comment by Scott Alexander (Yvain) · 2011-04-05T23:36:38.230Z · LW(p) · GW(p)

Story kind of bothers me. Yeah, you can get someone to pretend not to believe something by offering a fiscal reward, but that doesn't prove anything.

If I were a geologist and correctly identified the crystal as the rare and valuable mineral unobtainite which I had been desperately seeking samples of, but Tony stubbornly insisted it was quartz - and if Tony then told me it was $150 if it was unobtainite but $15 if it was quartz - I'd call it quartz too if it meant I could get my sample for cheaper. So what?

Replies from: Alicorn
comment by Alicorn · 2011-04-05T23:42:31.927Z · LW(p) · GW(p)

I think the interesting part of the story is that it caused the power crystal dude to shut up about power crystals when he'd previously evinced interest in telling everyone about them. I don't think you could get the same effect for $135 from a lot of, say, missionaries.

comment by Desrtopa · 2011-04-04T13:43:12.123Z · LW(p) · GW(p)

Part of me wants to say that it was foolish of Tony to take so much less money than he could have gotten simply for getting the guy to profess that it was a piece of quartz rather than a power crystal, but I'm not sure I would feel comfortable exploiting a guy's delusions to that degree either.

Replies from: zaph, benelliott
comment by zaph · 2011-04-04T14:27:26.863Z · LW(p) · GW(p)

I thank Tony for not taking the immediately self-benefiting path of profit and instead doing his small part to raise the sanity waterline.

Replies from: Giles, DanielLC
comment by Giles · 2011-04-04T15:10:37.854Z · LW(p) · GW(p)

Was the buyer sane enough to realise that it probably wasn't a power crystal, or just sane enough to realise that if he pretended it wasn't a power crystal he'd save $135?

Is that amount of raising-the-sanity waterline worth $135 to Tony?

I would guess it's guilt-avoidance at work here.

(EDIT: your thanks to Tony are still valid though!)

Replies from: childofbaud
comment by childofbaud · 2011-04-04T20:55:09.427Z · LW(p) · GW(p)

And with that in mind, how would it have affected the sanity waterline if Tony had donated that $135 to an institution that's pursuing the improvement of human rationality?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-04-05T04:35:44.233Z · LW(p) · GW(p)

Look, sometimes you've just got to do things because they're awesome.

Replies from: thomblake
comment by thomblake · 2011-04-07T20:27:45.128Z · LW(p) · GW(p)

But would you feel comfortable with that maxim encoded in an AI's utility function?

Replies from: Alicorn, benelliott
comment by Alicorn · 2011-04-07T20:38:20.498Z · LW(p) · GW(p)

For a sufficiently rigorous definition of "awesome", why not?

comment by benelliott · 2011-04-08T07:54:21.518Z · LW(p) · GW(p)

If it's a terminal value then CEV should converge to it.

comment by DanielLC · 2011-04-05T00:25:47.852Z · LW(p) · GW(p)

I think he would have been better off taking the money and donating it to a good charity.

comment by benelliott · 2011-04-04T15:57:44.146Z · LW(p) · GW(p)

There's no guarantee the guy would have bought it at all for $150. The impression I get is that this was ultimately a case of belief in belief; Tony knew he couldn't get much more than $15 and just wanted to win the argument.

Replies from: Desrtopa
comment by Desrtopa · 2011-04-04T16:04:28.722Z · LW(p) · GW(p)

I doubt he would have bought it for $150, but after making a big deal of its properties as a power crystal, he'd be limited in his leverage to haggle it down; he'd probably have taken it for three times the asking price if not ten.

comment by Dorikka · 2011-04-06T15:29:18.175Z · LW(p) · GW(p)

And then the guy walks away trying to prevent himself from bursting out with laughter at the fact that he just managed to get an incredibly good deal on a strong power crystal that Tony, who had clearly not been educated in such things, mistakenly believed was simple quartz.

comment by SRStarin · 2011-04-11T13:02:59.974Z · LW(p) · GW(p)

Meh. Tony ruined that guy's role-playing fun at a Ren Faire. People pretend to believe all kinds of silly stuff at a Ren Faire.

Last year my husband and I went to Ren Faire dressed as monks, pushing our daughter, dressed as a baby dragon, around in a stroller. (We got lots of comments about vows of celibacy.) We bought our daughter a little flower-shaped hair pin when we were there, after asking what would look best on a dragon. What Tony did would have been like the salesperson saying "That's not a dragon."

comment by Dreaded_Anomaly · 2011-04-06T03:27:01.954Z · LW(p) · GW(p)

Complex problems have simple, easy to understand wrong answers.

— Grossman's Law

Replies from: Confringus
comment by Confringus · 2011-04-07T02:55:11.769Z · LW(p) · GW(p)

Is there a law that states that all simple problems have complex, hard to understand answers? Moravec's paradox sort of covers it but it seems that principle should have its own label.

comment by HonoreDB · 2011-04-04T17:26:20.126Z · LW(p) · GW(p)

Part of the potential of things is how they break.

Vi Hart, How To Snakes

Replies from: Manfred
comment by Manfred · 2011-04-04T18:25:55.756Z · LW(p) · GW(p)

Vi Hart is so dang awesome.

Replies from: Emile, sixes_and_sevens, Maelin
comment by Emile · 2011-04-04T20:19:24.228Z · LW(p) · GW(p)

"But these two snakes can't talk because this one speaks in parseltongue and that one speaks in Python"

Damn, why didn't I discover those before ...

comment by sixes_and_sevens · 2011-04-04T19:14:22.358Z · LW(p) · GW(p)

"Man, it seems like everyone has a triangle these days..."

comment by Maelin · 2011-04-12T11:16:25.292Z · LW(p) · GW(p)

Holy crap she is, how have I never seen these videos until now?

comment by Richard_Kennaway · 2011-04-04T10:45:00.958Z · LW(p) · GW(p)

I recently posted these in another thread, but I think they're worth putting here to stand on their own:

"Magic is just a way of saying 'I don't know.'"

Terry Pratchett, "Nation"

The essence of magic is to do away with underlying mechanisms. ... What makes the elephant disappear is the movement of the wand and the intent of the magician, directly. If there were any intervening processes, it would not be magic but just engineering. As soon as you know how the magician made the elephant disappear, the magic disappears and -- if you started by believing in magic -- the disappointment sets in.

William T. Powers (CSGNET mailing list, April 2005)

Replies from: soreff
comment by soreff · 2011-04-04T22:10:52.635Z · LW(p) · GW(p)

Does that mean one can answer "Do you believe in magic?" with "No, but I believe in the existence of opaque proprietary APIs"?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2011-04-04T23:08:19.091Z · LW(p) · GW(p)

APIs made by the superintelligent creators of this universe? Personally, no.

Replies from: David_Gerard, soreff
comment by David_Gerard · 2011-04-05T09:43:05.582Z · LW(p) · GW(p)

Worse: APIs grown by evolution. Evolution makes the worst BASIC spaghetti coder you ever heard of look like Don Knuth by comparison.

comment by soreff · 2011-04-05T00:41:34.752Z · LW(p) · GW(p)

Actually, what I had in mind was Microsoft - though their products don't pass the "any sufficiently advanced technology is indistinguishable from magic" test. Opacity and incomprehensibility (the spell checker did what?) are within their grasp...

comment by nhamann · 2011-04-05T21:22:48.577Z · LW(p) · GW(p)

True heroism is minutes, hours, weeks, year upon year of the quiet, precise, judicious exercise of probity and care—with no one there to see or cheer.

— David Foster Wallace, The Pale King

comment by CronoDAS · 2011-04-05T18:25:31.295Z · LW(p) · GW(p)

A fable:

In Persia many centuries ago, the Sufi mullah or holy man Nasruddin was arrested after preaching in the great square in front of the Shah's palace. The local clerics had objected to Mullah Nasruddin's unorthodox teachings, and had demanded his arrest and execution as a heretic. Dragged by palace guards to the Shah's throne room, he was sentenced immediately to death.

As he was being taken away, however, Nasruddin cried out to the Shah: "O great Shah, if you spare me, I promise that within a year I will teach your favourite horse to sing!"

The Shah knew that Sufis often told the most outrageous fables, which sounded blasphemous to many Muslims but which were nevertheless intended as lessons to those who would learn. Thus he had been tempted to be merciful, anyway, despite the demands of his own religious advisors. Now, admiring the audacity of the old man, and being a gambler at heart, he accepted his proposal.

The next morning, Nasruddin was in the royal stable, singing hymns to the Shah's horse, a magnificent white stallion. The animal, however, was more interested in his oats and hay, and ignored him. The grooms and stablehands all shook their heads and laughed at him. "You old fool", said one. "What have you accomplished by promising to teach the Shah's horse to sing? You are bound to fail, and when you do, the Shah will not only have you killed - you'll be tortured as well, for mocking him!"

Nasruddin turned to the groom and replied: "On the contrary, I have indeed accomplished much. Remember, I have been granted another year of life, which is precious in itself. Furthermore, in that time, many things can happen. I might escape. Or I might die anyway. Or the Shah might die, and his successor will likely release all prisoners to celebrate his accession to the throne".

"Or...". Suddenly, Nasruddin smiled. "Or, perhaps, the horse will learn to sing".

The original source of this fable seems to be lost to time. This version was written by Idries Shah.

Replies from: Sniffnoy
comment by Sniffnoy · 2011-04-05T22:58:55.567Z · LW(p) · GW(p)

Huh, and here I had assumed Niven and Pournelle made that up since it wasn't in Herodotus like they claimed.

Replies from: RobinZ
comment by RobinZ · 2011-04-06T12:56:02.201Z · LW(p) · GW(p)

Where was it in Niven and Pournelle? I first saw it in The Cross Time Engineer.

Replies from: Tripitaka
comment by Tripitaka · 2011-04-06T13:04:05.666Z · LW(p) · GW(p)

In "the gripping hand" it is used as an example for a crazy eddy plan, that could actually work.

Replies from: CronoDAS
comment by CronoDAS · 2011-04-08T08:35:19.097Z · LW(p) · GW(p)

It was in The Mote in God's Eye.

comment by AndrewM · 2011-04-04T19:20:13.389Z · LW(p) · GW(p)

We are built to be effective animals, not happy ones.

-Robert Wright, The Moral Animal

comment by endoself · 2011-04-04T18:44:35.548Z · LW(p) · GW(p)

Most people would rather die than think; many do.

– Bertrand Russell

Replies from: Gray, ciphergoth
comment by Gray · 2011-04-11T05:02:59.731Z · LW(p) · GW(p)

Not a big fan of this. Seems like you could replace the word "think" with many different verbs, and it would sound good or bad depending on whether I think the verb agrees with what I consider my virtue. For instance, replace "think" with "exercise", and I would like it if I'm a regular exerciser, but if I'm not I'd wonder why I would want to waste my life exercising.

Replies from: childofbaud
comment by childofbaud · 2011-04-27T04:56:59.848Z · LW(p) · GW(p)

Not a big fan of this. Seems like you could replace the word "think" with many different verbs, and it would sound good or bad depending on whether I think the verb agrees with what I consider my virtue. For instance, replace "think" with "exercise", and I would like it if I'm a regular exerciser, but if I'm not I'd wonder why I would want to waste my life exercising.

The cognitive faculties are what makes humans distinct from other species, not any particular proclivity for exercise or any other such feats. A person refusing to think is like a fish refusing to swim.

Furthermore, we often benefit from these faculties even when pursuing interests that seem completely unrelated. Many of the best athletes are also decent thinkers. They have to be able to optimize their training regime, control their diets, cross the road, etc.

comment by Paul Crowley (ciphergoth) · 2012-01-21T15:03:51.148Z · LW(p) · GW(p)

Wikiquote has this as:

We all have a tendency to think that the world must conform to our prejudices. The opposite view involves some effort of thought, and most people would die sooner than think — in fact they do so.

Replies from: endoself
comment by endoself · 2012-01-21T23:07:35.697Z · LW(p) · GW(p)

Yeah, that must be the original; they even mention my version as a variant. I wonder how I found this quote originally.

comment by Nominull · 2011-04-06T03:40:18.670Z · LW(p) · GW(p)

using the word “science” in the same way you’d use the word “alakazam” doesn’t count as being smarter

-Kris Straub, Chainsawsuit artist commentary

comment by KenChen · 2011-04-05T13:58:17.059Z · LW(p) · GW(p)

Hofstadter's Law: It always takes longer than you expect, even when you take into account Hofstadter's Law.

– Douglas Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid

Replies from: DSimon
comment by DSimon · 2011-04-06T21:22:39.594Z · LW(p) · GW(p)

Doesn't that spiral out to infinity?

Replies from: Manfred, Normal_Anomaly
comment by Manfred · 2011-04-06T21:40:09.647Z · LW(p) · GW(p)

It can just asymptotically approach the right value. It's probably more metaphorical, though.
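
One way to make the asymptotic reading concrete (all numbers invented for illustration): let the n-th application of the Law inflate the estimate by a geometrically shrinking factor,

    E_{n+1} = E_n \left(1 + 2^{-(n+1)}\right), \qquad E_\infty = E_0 \prod_{k=1}^{\infty} \left(1 + 2^{-k}\right) \approx 2.38\, E_0 .

Each finite E_n still understates E_{n+1}, so the Law holds at every stage, yet the fully-adjusted estimate converges rather than spiraling to infinity.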

Replies from: HonoreDB
comment by HonoreDB · 2011-04-06T21:51:43.650Z · LW(p) · GW(p)

It always takes longer than you expect, even when you take into account the limit of infinite applications of Hofstadter's Law.

Replies from: ata
comment by ata · 2011-04-06T22:02:17.474Z · LW(p) · GW(p)

Even further:

Hofstadter's Law+: It always takes longer than you expect, even when you take into account the limit of infinite applications of Hofstadter's Law+.

Replies from: None
comment by [deleted] · 2011-04-07T01:01:16.510Z · LW(p) · GW(p)

For all ordinal numbers n, define Hofstadter's n-law as "It always takes longer than you expect, even when you take into account Hofstadter's m-law for all m < n."

Replies from: ata, Sniffnoy
comment by ata · 2011-04-07T01:13:46.185Z · LW(p) · GW(p)

For all natural numbers n, define L_n as the nth variation of Hofstadter's Law that has been or will be posted in this thread. Theorem: As n approaches infinity, L_n converges to "Everything ever takes an infinite amount of time."

Replies from: Eliezer_Yudkowsky, roystgnr, JGWeissman
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-04-07T05:56:38.669Z · LW(p) · GW(p)

Actually it takes longer than that.

comment by roystgnr · 2011-04-13T19:30:40.718Z · LW(p) · GW(p)

I've got a truly marvelous proof of this theorem, but it would take forever to write it all out.

comment by JGWeissman · 2011-04-13T19:34:35.921Z · LW(p) · GW(p)

Hofstadter's Shiny Law: It always takes longer than you expect, especially when you get distracted discussing variants of Hofstadter's Shiny Law.

comment by Sniffnoy · 2011-04-07T06:28:00.156Z · LW(p) · GW(p)

...which then forces things to take an infinite amount of time once you get to n=omega_1, so thankfully things stop there.

EDIT April 13: Oops, you can't actually "reach" omega_1 like this; I was not thinking properly. Omega_1 flat out does not embed in R. So... yeah.

comment by Normal_Anomaly · 2011-04-12T01:22:59.747Z · LW(p) · GW(p)

Yes. Hofstadter is like that.

comment by Kutta · 2011-04-04T17:29:14.573Z · LW(p) · GW(p)

The correct question to ask about functions is not "What is a rule?" or "What is an association?" but "What does one have to know about a function in order to know all about it?" The answer to the last question is easy – for each number x one needs to know the number f(x) (…)

– M. Spivak: Calculus

comment by atucker · 2011-04-06T07:17:13.341Z · LW(p) · GW(p)

There are these two young fish swimming along and they happen to meet an older fish swimming the other way, who nods at them and says "Morning, boys. How's the water?" And the two young fish swim on for a bit, and then eventually one of them looks over at the other and goes "What the hell is water?"

~ Story, used most famously in David Foster Wallace's Commencement Address at Kenyon College

comment by Matt_Duing · 2011-04-05T02:13:41.787Z · LW(p) · GW(p)

The most important relic of early humans is the modern mind.

-Steven Pinker

comment by Oscar_Cunningham · 2011-04-14T11:44:56.694Z · LW(p) · GW(p)

Fluff Principle: on a user-voted news site, the links that are easiest to judge will take over unless you take specific measures to prevent it.

Paul Graham "What I've learned from Hacker News"

comment by mispy · 2011-04-05T03:08:38.549Z · LW(p) · GW(p)

Our imagination is stretched to the utmost, not, as in fiction, to imagine things which are not really there, but just to comprehend those things which are there.

-- Richard Feynman

(I don't think he originally meant this in the context of overcoming cognitive bias, but it seems to apply well to that too.)

Replies from: Normal_Anomaly
comment by Normal_Anomaly · 2011-04-06T00:22:37.192Z · LW(p) · GW(p)

I think it was originally meant in the context of joy in the merely real.

comment by taserian · 2011-04-04T19:47:05.073Z · LW(p) · GW(p)

On perseverance:

It's a little like wrestling a gorilla. You don't quit when you're tired, you quit when the gorilla is tired.

-- Robert Strauss

(Although the reference I found doesn't say which Robert Strauss it was)

I think it goes well with the article Make an Extraordinary Effort.

Replies from: Desrtopa
comment by Desrtopa · 2011-04-04T20:13:24.083Z · LW(p) · GW(p)

I kind of feel like a scenario is not a great starting point for talking about perseverance when it's likely to result in your immediately getting your arms ripped off.

There are times when it's important to persevere, and times when it's important to know what not to try in the first place.

Replies from: benelliott
comment by benelliott · 2011-04-04T22:21:55.419Z · LW(p) · GW(p)

And there are times when you don't get to choose whether or not you wrestle the gorilla.

comment by Apprentice · 2011-04-04T15:17:38.984Z · LW(p) · GW(p)

Virtually everything in science is ultimately circular, so the main thing is just to make the circles as big as possible.

Richard D. Janda and Brian D. Joseph, 2003, The Handbook of Historical Linguistics, p. 111.

comment by dares · 2011-04-04T19:52:14.367Z · LW(p) · GW(p)

“In life as in poker, the occasional coup does not necessarily demonstrate skill and superlative performance is not the ability to eliminate chance, but the capacity to deliver good outcomes over and over again. That is how we know Warren Buffett is a skilled investor and Johnny Chan a skilled poker player.” — John Kay, Financial Times

Replies from: Nick_Roy
comment by Nick_Roy · 2011-04-08T21:50:15.515Z · LW(p) · GW(p)

“We are what we repeatedly do. Excellence, then, is not an act, but a habit.”

~ Aristotle

comment by RHollerith (rhollerith_dot_com) · 2011-04-07T16:07:50.709Z · LW(p) · GW(p)

To arrive at the simplest truth, as Newton knew and practiced, requires years of contemplation. Not activity. Not reasoning. Not calculating. Not busy behaviour of any kind. Not reading. Not talking. Not making an effort. Not thinking. Simply bearing in mind what it is one needs to know. And yet those with the courage to tread this path to real discovery are not only offered practically no guidance on how to do so, they are actively discouraged and have to set about it in secret, pretending meanwhile to be diligently engaged in the frantic diversions and to conform with the deadening personal opinions which are continually being thrust upon them.

--George Spencer Brown in The Laws of Form, 1969.

comment by TylerJay · 2011-04-05T21:40:03.057Z · LW(p) · GW(p)

The north went on forever. Tyrion Lannister knew the maps as well as anyone, but a fortnight on the wild track that passed for the kingsroad up here had brought home the lesson that the map was one thing and the land quite another.

--George R. R. Martin A Game of Thrones

comment by childofbaud · 2011-04-05T00:07:54.045Z · LW(p) · GW(p)

This one's for you, Clippy:

The specialist makes no small mistakes while moving toward the grand fallacy.

—Marshall McLuhan

comment by AdeleneDawner · 2011-04-24T02:57:10.518Z · LW(p) · GW(p)

A tadpole doesn’t know
It’s gonna grow bigger.
It just swims,
and figures limbs
are for frogs.

People don’t know
the power they hold.
They just sing hymns,
and figure saving
is for god.

  • Andrea Gibson, Tadpoles (source)
comment by Kutta · 2011-04-04T17:30:05.819Z · LW(p) · GW(p)

Theology is the effort to explain the unknowable in terms of the not worth knowing.

– Mencken, quoted in Pinker: How the Mind Works

comment by newerspeak · 2011-04-06T12:25:05.248Z · LW(p) · GW(p)

Bertrand Russell, in his Autobiography records that his rather fearsome Puritan grandmother:

gave me a Bible with her favorite texts written on the fly-leaf. Among these was "Thou shalt not follow a multitude to do evil." Her emphasis upon this text led me in later life to be not afraid of belonging to small minorities.

It's rather affecting to find the future hammer of the Christians being "confirmed" in this way. It also proves that sound maxims can appear in the least probable places.

-- Christopher Hitchens, Letters to a Young Contrarian

comment by Confringus · 2011-04-04T20:39:08.061Z · LW(p) · GW(p)

"Isn't it enough to see that a garden is beautiful without having to believe that there are fairies at the bottom of it too?"

Douglas Adams

This quote defines my approach to science and philosophy; a phenomenon can be wondrous on its own merit, it need not be magical or extraordinary to have value.

Replies from: Raemon
comment by Raemon · 2011-04-05T03:02:51.858Z · LW(p) · GW(p)

Is this from a particular book, or something he said randomly?

Replies from: ata, Confringus
comment by ata · 2011-04-05T05:33:01.882Z · LW(p) · GW(p)

It's from the first Hitchhiker's Guide to the Galaxy book.

Replies from: Raemon
comment by Raemon · 2011-04-05T05:47:49.078Z · LW(p) · GW(p)

Really? What's the context?

Replies from: HonoreDB
comment by HonoreDB · 2011-04-05T08:08:04.483Z · LW(p) · GW(p)

Zaphod thinks they're on a mythic quest to find the lost planet Magrathea. They've found a lost planet alright, orbiting twin stars, but Ford still doesn't believe.

As Ford gazed at the spectacle of light before them excitement burnt inside him, but only the excitement of seeing a strange new planet; it was enough for him to see it as it was. It faintly irritated him that Zaphod had to impose some ludicrous fantasy onto the scene to make it work for him. All this Magrathea nonsense seemed juvenile. Isn't it enough to see that a garden is beautiful without having to believe that there are fairies at the bottom of it too?

Replies from: MBlume, Raemon
comment by MBlume · 2011-04-05T17:23:10.641Z · LW(p) · GW(p)

Of course, in context, they are in fact orbiting the lost planet of Magrathea.

Replies from: Eliezer_Yudkowsky, James_K
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-04-08T06:28:22.275Z · LW(p) · GW(p)

Well, in true fact, there is no lost planet of Magrathea.

Replies from: MBlume
comment by MBlume · 2011-04-09T05:42:31.394Z · LW(p) · GW(p)

I'm tempted to fuss about large worlds, but I think I shall refrain.

...Apophenia quite rightly points out that I am failing to refrain. Oops.

Replies from: LucasSloan
comment by LucasSloan · 2011-04-12T01:31:13.377Z · LW(p) · GW(p)

Well, this line of discussion has probably increased the odds of the existence of the "lost planet of Magrathea" in the local causal structure by a lot.

comment by James_K · 2011-04-07T05:56:43.184Z · LW(p) · GW(p)

Still, Ford's position was entirely reasonable ex ante.

Replies from: benelliott, MBlume
comment by benelliott · 2011-04-08T07:49:27.492Z · LW(p) · GW(p)

How foolish of him to think something like reasonableness would matter in the Hitch-hiker's Guide universe.

Replies from: James_K
comment by James_K · 2011-04-08T20:08:29.502Z · LW(p) · GW(p)

Yes, the trouble with rationality is that it may not work very well if you're a fictional character.

Replies from: ata
comment by ata · 2011-04-09T03:50:31.417Z · LW(p) · GW(p)

Only if you're a character in a fictional world that doesn't itself contain fiction in the same genre that you're in. If it does, you may be able to work out the rules.

Replies from: Desrtopa
comment by Desrtopa · 2011-04-09T03:54:38.434Z · LW(p) · GW(p)

Fiction logic dictates that even if you do realize you're fictional, you're almost certain to be wrong about what kind you're in.

comment by MBlume · 2011-04-08T04:35:31.551Z · LW(p) · GW(p)

Oh, certainly.

comment by Raemon · 2011-04-05T11:13:08.775Z · LW(p) · GW(p)

Thanks.

comment by Confringus · 2011-04-05T03:24:56.732Z · LW(p) · GW(p)

I imagine it is from one of his books but I came across it in the introduction to The God Delusion by Richard Dawkins. Oddly enough the Hitchhiker series is absolutely full of satirical quotes which can be applied to rationality.

comment by Kutta · 2011-04-04T17:33:04.765Z · LW(p) · GW(p)

Wisdom is easy: just find someone who trusts someone who trusts someone who trusts someone who knows the truth.

– Steven Kaas

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2011-04-06T01:51:24.192Z · LW(p) · GW(p)

I really don't see the point. All I'm getting out of this is: "knowing the truth is hard".

Replies from: Kutta
comment by Kutta · 2011-04-06T10:24:55.277Z · LW(p) · GW(p)

Plus the notion that in the current world, when you know the truth with some satisfactory accuracy, most of the time you get to know it not firsthand but via a chain of people. Therefore it might be said that evaluating people's trustworthiness is in the same league of importance as interpreting and analysing data yet untouched by people.

Also, to nitpick, if you find a chain of people full of very trustworthy people, knowing the truth could be relatively easy.
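
To put a toy number on the chain (r and n are invented, not from the quote): if each link independently passes a claim along faithfully with probability r, an n-link chain preserves it with probability r^n.

    # Toy model: probability a claim survives an n-link chain of
    # independently reliable transmitters. Values are illustrative.
    def chain_reliability(r, n):
        return r ** n

    print(chain_reliability(0.9, 1))  # 0.9
    print(chain_reliability(0.9, 4))  # 0.6561

Kaas's recipe involves four hops, so even generous per-link trust compounds away quickly; presumably that is the joke.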

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2011-04-08T00:47:31.096Z · LW(p) · GW(p)

What you said makes sense. It doesn't surprise me that I missed that interpretation of the quote, however. The concept of taking evidence from others' stated beliefs is better described by a network, not a single chain. Surely there are also different networks for each domain of claimed-knowledge.

I meant "directly knowing the truth is hard", as the quote intended. Still, mea culpa.

comment by Tiiba · 2011-04-06T05:26:59.820Z · LW(p) · GW(p)

I will repost a quote that I posted many moons ago on OB, if you don't mind. I don't THINK this breaks the rules too badly, since that post didn't get its fair share of karma. Here's the first time: http://lesswrong.com/lw/uj/rationality_quotes_18/nrt

"He knew well that fate and chance never come to the aid of those who replace action with pleas and laments. He who walks conquers the road. Let his legs grow tired and weak on the way - he must crawl on his hands and knees, and then surely, he will see in the night a distant light of hot campfires, and upon approaching, will see a merchants' caravan; and this caravan will surely happen to be going the right way, and there will be a free camel, upon which the traveler will reach his destination. Meanwhile, he who sits on the road and wallows in despair - no matter how much he cries and complains - will evoke no compassion in the soulless rocks. He will die in the desert, his corpse will become meat for foul hyenas, his bones will be buried in hot sand. How many people died prematurely, and only because they didn't love life strongly enough! Hodja Nasreddin considered such a death humiliating for a human being.

"No" - said he to himself and, gritting his teeth, repeated wrathfully: "No! I won't die today! I don't want to die!""

Replies from: khafra
comment by khafra · 2011-04-07T16:55:36.684Z · LW(p) · GW(p)

Have you translated the whole story, or just this quote? It sounds interesting, and stacks up next to a SF story about somewhat less-than-friendly-AI as a reason I wish I could read Russian.

Replies from: Tiiba
comment by Tiiba · 2011-04-07T17:47:05.012Z · LW(p) · GW(p)

Just this quote. But I found a complete translation:

http://www.google.com/search?source=ig&hl=en&rlz=1G1GGLQ_ENUS333&q=The+Beggar+in+the+Harem%3A+Impudent+Adventures+in+Old+Bukhara&aq=f&aqi=g-v1&aql=&oq=

What's the other story?

Replies from: khafra
comment by khafra · 2011-04-07T19:08:19.298Z · LW(p) · GW(p)

Took me a while, but I found it: "Lena Squatter and the Paragon of Vengeance" by SF author Leonid Kaganov.

comment by ewang · 2011-04-05T17:57:27.420Z · LW(p) · GW(p)

Clevinger exclaimed to Yossarian in a voice rising and falling in protest and wonder. "It's a complete reversion to primitive superstition. They're confusing cause and effect. It makes as much sense as knocking on wood or crossing your fingers. They really believe that we wouldn't have to fly that mission tomorrow if someone would only tiptoe up to the map in the middle of the night and move the bomb line over Bologna. Can you imagine? You and I must be the only rational ones left." In the middle of the night Yossarian knocked on wood, crossed his fingers, and tiptoed out of his tent to move the bomb line up over Bologna.

Joseph Heller (Catch-22)

Replies from: wnoise
comment by wnoise · 2011-04-05T21:38:36.372Z · LW(p) · GW(p)

A bit more context for those who haven't read Catch-22 would probably help.

Replies from: ewang
comment by ewang · 2011-04-06T07:02:05.850Z · LW(p) · GW(p)

I don't think anything else could be added that deepens the understanding of the quote, besides the fact that moving the bomb line actually works because Corporal Kolodny (who is obviously a corporal named Kolodny) can't distinguish between cause and effect either.

comment by Pavitra · 2011-04-09T18:33:59.712Z · LW(p) · GW(p)

On boldness:

If you're gonna make a mistake, make it a good, loud mistake!

-- Augiedog, Half the Day is Night

(Edit: I should mention that the linked story is MLP fanfic. The MLP fandom may be a memetic hazard; it seems to have taken over my life for the past several days, though I tend to do that with most things, so YMMV. Proceed with caution.)

comment by Richard_Kennaway · 2011-04-04T20:16:37.708Z · LW(p) · GW(p)

He who pours out thanks for a favourable verdict runs the risk of seeming to betray not only a bad conscience, but also a poor idea of the judge's office.

Francis Paget, preface to the 2nd ed. of "The Spirit of Discipline", 1906
http://www.archive.org/details/thespiritofdisc00pageuoft

The book also contains material on accidie (the Introductory Essay and the preface to the seventh edition), which is probably how I came across it.

comment by HonoreDB · 2011-04-15T02:53:03.957Z · LW(p) · GW(p)

(Courtesy of my dad)

One must be absolutely modern. No hymns! Hold the ground gained.

Arthur Rimbaud, 1873

comment by Davidmanheim · 2011-04-04T17:17:56.236Z · LW(p) · GW(p)

"Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames, for it can contain nothing but sophistry and illusion."

-Hume, An Inquiry Concerning Human Understanding

Replies from: Johnicholas, wedrifid
comment by Johnicholas · 2011-04-04T17:26:09.059Z · LW(p) · GW(p)

Doesn't that mean "An Inquiry Concerning Human Understanding" should be committed to the flames? I didn't notice much numerical or experimental reasoning in it.

Replies from: None, benelliott, Davidmanheim
comment by [deleted] · 2011-04-04T19:41:38.731Z · LW(p) · GW(p)

The quote is somewhat experimental, but we'd have to ignore its advice to find out if it was correct.

Replies from: jschulter
comment by jschulter · 2011-04-08T23:36:48.553Z · LW(p) · GW(p)

Well, of course we would! Executing an action based on the truth of a hypothesis while trying to determine whether it's true or not would be somewhat odd.

Replies from: endoself
comment by endoself · 2011-04-09T00:25:19.044Z · LW(p) · GW(p)

Consider the quote. If it is false, it should be committed to the flames. If it is true, it should, according to itself, be committed to the flames. Therefore, we can commit it to the flames regardless of its truth-value.

comment by benelliott · 2011-04-04T22:29:07.428Z · LW(p) · GW(p)

I would say that advice from an experienced practitioner in a given field falls into a broad definition of "experimental reasoning", since at some stage they probably tried several approaches and found out the hard way which one worked.

comment by Davidmanheim · 2012-07-19T23:32:27.460Z · LW(p) · GW(p)

I think "experimental reasoning" is not what we now call scientific experimentation. It's more of what Schrodinger did with his cat; think through the issue with hypotheses and try to logically understand them. It's better than most philosophy, but not quite what we would now call science.

comment by wedrifid · 2011-04-05T09:04:43.323Z · LW(p) · GW(p)

Personally I enjoy illusions - some of them look pretty. I'm keeping them.

comment by ThroneMonkey · 2011-04-09T01:10:08.390Z · LW(p) · GW(p)

"I can't make myself believe something that I don't believe" —Ricky Gervais, in discussing his atheism

Reminds me of the scene in HPMOR where Harry makes Draco a scientist.

comment by atucker · 2011-04-06T07:13:37.363Z · LW(p) · GW(p)

You will become way less concerned with what other people think of you when you realize how seldom they do.

~ David Foster Wallace, Infinite Jest

Replies from: Unnamed
comment by Unnamed · 2011-04-06T18:00:09.240Z · LW(p) · GW(p)

Dupe.

comment by dares · 2011-04-06T00:19:53.345Z · LW(p) · GW(p)

Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.

—Antoine de Saint-Exupéry

Replies from: childofbaud, bcoburn
comment by childofbaud · 2011-04-07T03:46:10.688Z · LW(p) · GW(p)

A domain-specific interpretation of the same concept:

"The real hero of programming is the one who writes negative code."

—Douglas McIlroy

Replies from: childofbaud
comment by childofbaud · 2011-04-07T23:10:13.914Z · LW(p) · GW(p)

A domain-neutral interpretation of the same concept:

Entities should not be multiplied beyond necessity.

—William of Ockham

comment by bcoburn · 2011-04-06T05:03:58.987Z · LW(p) · GW(p)

This one really needs to have been applied to itself; "short is good" is way better.

(also this was one of EY's quotes in the original rationality quotes set, http://lesswrong.com/lw/mx/rationality_quotes_3/ )

Replies from: dares, None, CronoDAS, dares
comment by dares · 2011-04-06T12:37:19.168Z · LW(p) · GW(p)

Also, "short is good" would narrow this quotes focus considerably.

comment by [deleted] · 2011-04-07T01:06:47.908Z · LW(p) · GW(p)

Perfection is lack of excess.

comment by CronoDAS · 2011-04-06T06:24:09.191Z · LW(p) · GW(p)

Maybe it's shorter in French?

Replies from: komponisto
comment by komponisto · 2011-04-06T06:35:47.873Z · LW(p) · GW(p)

Compare:

Il semble que la perfection soit atteinte non quand il n'y a plus rien à ajouter, mais quand il n'y a plus rien à retrancher.

So, no.

comment by dares · 2011-04-06T12:35:43.572Z · LW(p) · GW(p)

New here, sorry for the redundancy. I probably should have guessed that such a popular quote had been used.

comment by atucker · 2011-04-13T04:11:09.851Z · LW(p) · GW(p)

I don't have a simple answer

But I know that I could answer

-- The Killers in This is Your Life

comment by CronoDAS · 2011-05-02T02:43:07.539Z · LW(p) · GW(p)

Most libertarians would agree that it’s a messed-up state that:

  • Creates a massive crime problem in poor minority neighborhoods with a futile, vicious and ever more far-reaching attempt to prevent commerce in popular, highly portable intoxicants that leaves absurd numbers of young men with felony records, making them marginally employable.

  • Fails to provide adequate policing for such neighborhoods.

  • Fails to provide effective education in such neighborhoods after installing itself as the educator of first resort.

  • Uses regulatory power to sharply curtail entry into lines of business from hair-care to ride provision, further limiting the employment options of people in such neighborhoods.

  • Has in the past actively fostered the oppression of said minority, up to and including spending state money and time in keeping its members in bondage.

  • To make up for all of the above, provides a nominal amount of tax-financed welfare for the afflicted.

But it’s a messed-up libertarianism that looks at that situation and says, "Man, first thing we gotta do is get rid of that welfare!"

-- Jim Henley, via Alas A Blog

comment by Alicorn · 2011-04-18T21:44:21.918Z · LW(p) · GW(p)

You know, in the comic books where super-powered mutants are real, no one seems to question the theory of evolution. Maybe we're going about this all wrong.

-- Surviving The World

Replies from: Nornagest, JoshuaZ
comment by Nornagest · 2011-04-18T21:52:51.530Z · LW(p) · GW(p)

I initially parsed that as meaning something like "we're clearly not getting the mechanics of evolution across, since people in the comics [and by extension writers] are happy to treat it as something that can produce superheroes". But in context it actually seems to mean "let's create some superheroes to demonstrate the efficacy of evolution beyond any reasonable doubt".

Comic exaggeration, sure, and I'm probably supposed to interpret the word "evolution" very loosely if I want to take the quote at all seriously. But in view of the former, I still can't help but think that there's something fundamentally naive about the latter.

Replies from: Alicorn
comment by Alicorn · 2011-04-18T22:07:57.310Z · LW(p) · GW(p)

I didn't quote the commentary under the comic for a reason.

comment by JoshuaZ · 2011-04-18T21:55:17.119Z · LW(p) · GW(p)

You know, in the comic books where super-powered mutants are real, no one seems to question the theory of evolution. Maybe we're going about this all wrong.

Hasn't it been pointed out here before that super-powered mutants are exactly not what we would expect from evolution?

Replies from: Alicorn
comment by Alicorn · 2011-04-18T22:07:30.821Z · LW(p) · GW(p)

Hasn't it been pointed out here before that super-powered mutants are exactly not what we would expect from evolution?

Yes, but the quote is new.

comment by wobster109 · 2011-04-10T07:09:13.552Z · LW(p) · GW(p)

"If you choose to follow a religion where, for example, devout Catholics who are trying to be good people are all going to Hell but child molestors go to Heaven (as long as they were "saved" at some point), that's your choice, but it's fucked up. Maybe a God who operates by those rules does exist. If so, fuck Him." --- Bill Zellar's suicide note, in regards to his parents' religion

I love this passage. If a god as described in the Bible did exist, following him would be akin to following Voldemort: fidelity simply because he was powerful. This isn't precisely a rationality quote, but it does have a bit of the morality-independent-of-religion thing. (The rest of the note is beautiful and eloquent as well.)

Replies from: MinibearRex, ata
comment by MinibearRex · 2011-04-11T04:37:20.549Z · LW(p) · GW(p)

I think we should keep some sort of separation between "rationality quotes" and "atheism quotes". You can stretch this to be a rationality quote, but it does require a stretch. Just because a quote argues against the existence of a god doesn't make it particularly rational.

comment by ata · 2011-04-10T08:04:57.817Z · LW(p) · GW(p)

I love this passage. If a god as described in the Bible did exist, following him would be akin to following Voldemort: fidelity simply because he was powerful.

There are other similarities too. e.g. Voldemort's human form died and rose again; his (first) death was foretold in prophesy, involved a betrayal (albeit in the opposite direction), and left his followers anxiously awaiting his return; "And these signs shall follow them that believe; ... they shall speak with new tongues; They shall take up serpents..." (Mark 16:17-18); ...

So, who wants to join the First Church of Voldemort?

comment by djcb · 2011-04-05T10:30:05.795Z · LW(p) · GW(p)

Make no mistake about it: Computers process numbers - not symbols. We measure our understanding (and control) by the extent to which we can arithmetize an activity.

-- Alan Perlis

Since I discovered them through SICP, I always liked the 'Perlisms' -- many of his Epigrams in Programming are pretty good. There's a hint of Searle/Chinese Room in this particular quote, but he turns it around by implying that in the end, the symbols are numbers (or that's how I read it).
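
To make that reading concrete, here's a minimal Python sketch (my own illustration, not anything Perlis wrote): the "symbols" a program manipulates bottom out in numbers.

    # Toy illustration, not from Perlis: symbols are numbers underneath.
    symbol = "cat"

    # Each character of the symbol is stored as an integer code point.
    codes = [ord(ch) for ch in symbol]
    print(codes)  # [99, 97, 116]

    # Numbers turn back into symbols just as easily.
    print("".join(chr(n) for n in codes))  # cat

    # Even "symbolic" comparison is arithmetic on those numbers:
    print("cat" < "dog")  # True, since ord('c') = 99 < ord('d') = 100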

comment by mtraven · 2011-04-04T22:26:26.219Z · LW(p) · GW(p)

The best education consists in immunizing people against systematic attempts at education.

-- Paul Feyerabend

Replies from: David_Gerard
comment by David_Gerard · 2011-04-05T09:28:12.372Z · LW(p) · GW(p)

This one could do with expansion and/or contextualisation. A quick Google only turns up several pages of just the bare quote (including on a National Institutes of Health .gov page!) - what was the original source? Anyone?

Replies from: mtraven
comment by mtraven · 2011-04-06T02:44:57.087Z · LW(p) · GW(p)

Well, I deliberately left out the source because I didn't think it would play well in this Peoria of thought -- it's from his book of essays Farewell to Reason. Link to gbooks with some context.

Replies from: JoshuaZ, Normal_Anomaly
comment by JoshuaZ · 2011-04-12T01:49:43.836Z · LW(p) · GW(p)

Well, I deliberately left out the source because I didn't think it would play well in this Peoria of thought -- it's from his book of essays Farewell to Reason. Link to gbooks with some context.

We've had rationality quotes before from C.S. Lewis, G.K. Chesterton, and Jack Chick among others. I don't think people are going to complain because of generic context issues even if Feyerabend did say some pretty silly stuff.

comment by Normal_Anomaly · 2011-04-12T01:30:08.261Z · LW(p) · GW(p)

Can you please explain what you mean by calling LW a "Peoria of thought" and why you believe it is one? It doesn't sound good, and if you've found a problem I'd like to know about it and address it.

Replies from: None
comment by [deleted] · 2011-04-12T01:34:32.957Z · LW(p) · GW(p)

Pretty much any forum tends to evolve into a bit of an echo chamber. I don't think there is any general solution to it other than for whole forums to be bubbling into and out of existence.

comment by Carwajalca · 2011-05-18T19:43:27.670Z · LW(p) · GW(p)

Science is interesting, and if you don't agree you can fuck off.

By Richard Dawkins, quoting a former editor of New Scientist (here's at least one source). I don't think this quote contains any deep wisdom as such, but it made me laugh. Actually you could replace the word science with any other noun and it would still make grammatical sense.

Replies from: komponisto
comment by komponisto · 2011-05-18T20:03:52.657Z · LW(p) · GW(p)

Actually you could replace the word science with any other noun and it would still make grammatical sense.

That is a consequence of the meaning of the term "grammatical sense", not a property of the particular sentence under discussion.

Replies from: Carwajalca
comment by Carwajalca · 2011-05-18T20:16:10.292Z · LW(p) · GW(p)

Good point. What I meant is that this quote could be used to defend anything. "Being irrational is interesting, and if you don't agree you can fuck off."

comment by Thomas · 2011-04-07T16:49:23.136Z · LW(p) · GW(p)

Water has memory! And while its memory of a long lost drop of onion juice is infinite, it somehow forgets all the poo it's had in it!

-- Tim Minchin
comment by AlephNeil · 2011-04-05T11:21:18.278Z · LW(p) · GW(p)

In what circumstances shall I say that a tribe has a chief? And the chief must surely have consciousness. Surely we can't have a chief without consciousness!

But can't I imagine that the people around me are automata, lack consciousness, even though they behave in the same way as usual?--If I imagine it now--alone in my room--I see people with fixed looks (as in a trance) going about their business--the idea is perhaps a little uncanny. But just try to keep hold of this idea in the midst of your ordinary intercourse with others, in the street, say! Say to yourself, for example: "The children over there are mere automata; all their liveliness is mere automatism." And you will either find these words becoming quite meaningless; or you will produce in yourself some kind of uncanny feeling, or something of the sort.

Seeing a living human being as an automaton is analogous to seeing one figure as a limiting case or variant of another; the cross-pieces of a window as a swastika, for example.

--Wittgenstein (Philosophical Investigations) hinting at the true nature of the concept 'consciousness vs zombiehood'.

comment by knb · 2011-04-21T22:48:35.593Z · LW(p) · GW(p)

Procrastination is one of the most common and deadliest of diseases and its toll on success and happiness is heavy.

Also:

You miss 100% of the shots you don't take.

-- Wayne Gretzky

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2011-04-24T12:03:30.045Z · LW(p) · GW(p)

Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)

comment by [deleted] · 2011-04-04T22:49:38.229Z · LW(p) · GW(p)

.

Replies from: cousin_it, khafra, Richard_Kennaway, Jonathan_Graehl
comment by cousin_it · 2011-04-05T09:19:18.296Z · LW(p) · GW(p)

moral order of the universe

There's no such thing.

comment by khafra · 2011-04-05T12:18:46.563Z · LW(p) · GW(p)

The 3 downvotes this had when I entered the thread seem rather harsh, considering it could be rephrased as "think like reality." The questionable part is the claim that the universe has a moral order, but a charitable reading of the quote need not take that to mean "a moral order independent of human minds."

comment by Richard_Kennaway · 2011-04-05T11:32:33.413Z · LW(p) · GW(p)

moral order of the universe

The moral order is within us.

Replies from: moshez
comment by moshez · 2011-04-05T12:55:49.691Z · LW(p) · GW(p)

And we are within the universe! So that all works out nicely.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2011-04-05T13:58:10.505Z · LW(p) · GW(p)

We're only a small part of it, though. The rest is "the motivationless malice of directed acyclic causal graphs".

Replies from: moshez
comment by moshez · 2011-04-05T14:04:44.018Z · LW(p) · GW(p)

How do you measure "small"? We humans have had a disproportionate effect on our immediate surroundings, and that effect is going to continue throughout our lightcone if everything goes according to plan.

Replies from: Mycroft65536
comment by Mycroft65536 · 2011-04-05T14:41:41.362Z · LW(p) · GW(p)

... if everything goes according to plan.

I think you're supposed to laugh evilly there.

Mwahahahaha

comment by Jonathan_Graehl · 2011-04-06T01:40:58.933Z · LW(p) · GW(p)

We should all agree to say the same words, without too much concern for what they mean?

comment by lukeprog · 2011-04-20T16:50:11.092Z · LW(p) · GW(p)

...the best lesson our readers can learn is to give up the childish notion that everything that is interesting about nature can be understood... It might be interesting to know how cognition (whatever that is) arose and spread and changed, but we cannot know. Tough luck.

-- Richard Lewontin

Replies from: nshepperd
comment by nshepperd · 2011-04-21T01:14:47.172Z · LW(p) · GW(p)

Is this an ironic rationality quote?

Replies from: Pavitra
comment by Pavitra · 2011-04-21T01:19:58.311Z · LW(p) · GW(p)

The world is allowed to be too much for you to handle. (But you should try anyway.)

Replies from: knb
comment by knb · 2011-04-21T23:58:09.618Z · LW(p) · GW(p)

That isn't what the quote is saying, though. It claims that we know for a fact that we can never understand cognition. Ironically, that is itself a hubristic claim of positive knowledge about a topic (what may eventually be possible for humans to know) where we should be more modest.

Replies from: Pavitra
comment by Pavitra · 2011-04-24T15:12:40.802Z · LW(p) · GW(p)

Agreed.

comment by Jonathan_Graehl · 2011-04-11T21:52:06.353Z · LW(p) · GW(p)

Strange but true: those who have loved God most have loved men least.

-- Ingersoll

Replies from: benelliott
comment by benelliott · 2011-04-11T22:41:24.133Z · LW(p) · GW(p)

This may be anti-theist, but I'm not convinced that it's a rationality quote. I'm also not convinced that it's actually true.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2011-04-11T23:32:53.212Z · LW(p) · GW(p)

I had the same doubt, but on reflection I still like it. Many religions seem to have an explicit primary value: love/obey God - THAT is what good behavior is; i.e., normal human values (especially those of non-believers or different believers) are secondary.

Of course it's not likely that the top-10 God-lovers and man-harmers are exactly the same people, but "the most-quoted things are the least-equivocating" :) [self-quotation]

comment by a363 · 2011-04-08T08:30:02.270Z · LW(p) · GW(p)

"Take up the White Man's burden-- The savage wars of peace-- Fill full the mouth of Famine, And bid the sickness cease; And when your goal is nearest (The end for others sought) Watch sloth and heathen folly Bring all your hope to nought." -Rudyard Kipling

Replies from: benelliott
comment by benelliott · 2011-04-09T17:44:58.600Z · LW(p) · GW(p)

How is this related to rationality?

Replies from: Desrtopa
comment by Desrtopa · 2011-04-09T17:49:29.099Z · LW(p) · GW(p)

I can see how it's applicable as an exhortation to attempt to solve the hard problems which others find too difficult to deal with, or accept as the natural order of things, and an acknowledgment that the greatest barrier is often the irrationality or apathy of others. But it also treads on mindkiller territory; I didn't vote either way.

comment by childofbaud · 2011-04-07T22:54:59.870Z · LW(p) · GW(p)

A true friend stabs you in the front.

—Oscar Wilde

Replies from: childofbaud
comment by childofbaud · 2011-04-08T02:26:03.957Z · LW(p) · GW(p)

Can someone hazard a guess as to why this is being downvoted?

Replies from: Alicorn
comment by Alicorn · 2011-04-08T02:27:29.888Z · LW(p) · GW(p)

It has no obvious connection to rationality.

Replies from: childofbaud
comment by childofbaud · 2011-04-08T02:37:19.908Z · LW(p) · GW(p)

I suppose it might be a little ambiguous. Here's my interpretation (I'm curious to hear others).

The practice of backstabbing usually refers to criticizing someone when they're not present, while feigning friendship.

Thus, "frontstabbing" would be to criticize someone openly and honestly, which is often very hard to do. Even, or perhaps especially, among friends. But it seems to be something worth aspiring towards, if one is concerned with rationality and truth.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-04-08T02:50:45.590Z · LW(p) · GW(p)

That seems like a good idea, but I'm pretty sure that Oscar Wilde didn't at all intend the quote to mean that. Rationality quotes is not an excuse for quote mining and proof-texting.

Replies from: childofbaud
comment by childofbaud · 2011-04-08T02:57:52.811Z · LW(p) · GW(p)

So, what do you think he meant?

I tend to judge quotes on their own merit. I thought that was the point. Do people usually look up detailed contextual information about them?

Replies from: JoshuaZ
comment by JoshuaZ · 2011-04-08T03:02:49.576Z · LW(p) · GW(p)

It doesn't take much context to guess at the original meaning: Oscar Wilde was a pretty cynical individual. Given that data point, what do you think it means?

Replies from: childofbaud
comment by childofbaud · 2011-04-08T03:48:45.881Z · LW(p) · GW(p)

I've tried and failed to come up with any reasonable interpretation other than my own. Please frontstab me.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-04-08T03:51:56.395Z · LW(p) · GW(p)

His comment is that humans are terrible, treacherous, disloyal scum. The only difference between the friend and the non-friend is that the friend might tell you when he's harming you whereas the non-friend won't even bother telling you.

comment by bisserlis · 2011-04-06T05:14:07.434Z · LW(p) · GW(p)

Son, you’re a body, son. That quick little scientific-prodigy’s mind she’s so proud of and won’t quit twittering about: son, it’s just neural spasms, those thoughts in your mind are just the sound of your head revving, and head is still just body, Jim. Commit this to memory. Head is body. Jim, brace yourself against my shoulders here for this hard news, at ten: you’re a machine a body an object, Jim, no less than this rutilant Montclair, this coil of hose here or that rake there for the front yard’s gravel or sweet Jesus this nasty fat spider flexing in its web over there up next to the rake-handle, see it?

-- Infinite Jest, page 159