Rationality Quotes June 2013

post by Thomas · 2013-06-03T03:08:50.803Z · LW · GW · Legacy · 792 comments

Another month has passed and here is a new rationality quotes thread. The usual rules apply.

Comments sorted by top scores.

comment by sediment · 2013-06-02T19:58:49.817Z · LW(p) · GW(p)

Hofstadter on the necessary strangeness of scientific explanations:

It is no accident, I would maintain, that quantum mechanics is so wildly counterintuitive. Part of the nature of explanation is that it must eventually hit some point where further probing only increases opacity rather than decreasing it. Consider the problem of understanding the nature of solids. You might wonder where solidity comes from. What if someone said to you, "The ultimate basis of this brick's solidity is that it is composed of a stupendous number of eensy weensy bricklike objects that themselves are rock-solid"? You might be interested to learn that bricks are composed of micro-bricks, but the initial question - "What accounts for solidity?" - has been thoroughly begged. What we ultimately want is for solidity to vanish, to dissolve, to disintegrate into some totally different kind of phenomenon with which we have no experience. Only then, when we have reached some completely novel, alien level will we feel that we have really made progress in explaining the top-level phenomenon.

[...]

I first saw this thought expressed in the stimulating book Patterns of Discovery by Norwood Russell Hanson. Hanson attributes it to a number of thinkers, such as Isaac Newton, who wrote, in his famous work Opticks: "The parts of all homogeneal hard Bodies which fully touch one another, stick together very strongly. And for explaining how this may be, some have invented hooked Atoms, which is begging the Question." Hanson also quotes James Clerk Maxwell (from an article entitled "Atom"): "We may indeed suppose the atom elastic, but this is to endow it with the very property for the explanation of which... the atomic constitution was originally assumed." Finally, here is a quote Hanson provides from Werner Heisenberg himself: "If atoms are really to explain the origin of color and smell of visible material bodies, then they cannot possess properties like color and smell." So, although it is not an original thought, it is useful to bear in mind that greeness disintegrates.

— from the postscript to Heisenberg's Uncertainty Principle, in Metamagical Themas: Questing for the Essence of Mind and Pattern (his lovely book of essays from his column in Scientific American)

Replies from: fburnaby, None, NancyLebovitz
comment by fburnaby · 2013-06-03T11:22:10.132Z · LW(p) · GW(p)

Why Opium produces sleep: ... Because there is in it a dormitive power.

Molière, Le Malade Imaginaire (1673), Act III, sc. iii.

Replies from: DysgraphicProgrammer
comment by DysgraphicProgrammer · 2013-06-03T14:20:29.535Z · LW(p) · GW(p)

A lesson here is that if you ask "Why X?" then any answer of the form "Because [a restatement of X]" is not actually progress toward understanding.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-06-04T13:18:41.964Z · LW(p) · GW(p)

Synonyms are not good for explaining... because there is no explanatory power in them.

Replies from: ZankerH
comment by ZankerH · 2013-06-04T19:50:43.229Z · LW(p) · GW(p)

I found your post funny... because it amused me.

Replies from: DanArmak
comment by DanArmak · 2013-06-08T19:21:29.517Z · LW(p) · GW(p)

I upvoted your comment, because I wished for it to have more upvotes.

Replies from: Kawoomba
comment by Kawoomba · 2013-06-08T19:26:15.497Z · LW(p) · GW(p)

Sometimes a downvote will lead to more overall upvotes than an upvote would have. Just like you can increase the probability of a sentence being quoted by including a typo, on purpose (try it!). Mind games!

Replies from: DanArmak
comment by DanArmak · 2013-06-08T19:33:07.467Z · LW(p) · GW(p)

OK, I'm trying it on your comment.

Replies from: TheOtherDave, Kawoomba
comment by TheOtherDave · 2013-06-08T19:40:30.082Z · LW(p) · GW(p)

Unfortunately, even if the effect is real, hanging a lantern on it probably neutralizes it.

comment by Kawoomba · 2013-06-08T19:47:26.456Z · LW(p) · GW(p)

Shooting the messenger! :-(

Alas, our poor community! It’s too frightened to look at itself. LessWrong is no longer the land where we were born; it’s the land where we’ll die. Where no one ever smiles except for the fool who knows nothing. Where sighs, groans, and shrieks rip through the air but no one notices. Where violent sorrow is a common emotion. When the funeral bells ring, people no longer ask who died. Good men die before the flowers in their caps wilt. They die before they even fall sick.

Exeunt.

(In all earnestness, it works better with comments for which no downvotes would be expected -- unlike mine. In my experience the counter-voting will then often overcompensate for the initial downvote. So downvote your friends, but only the high status ones on their best comments! It's a bit like upvoting by proxy, except the proxy is a fellow LWer you're secretly puppeteering!)

Replies from: bentarm
comment by bentarm · 2013-06-30T21:18:04.834Z · LW(p) · GW(p)

Is this even possible? How would someone know that a comment has been downvoted once it had been voted back up to 0 points?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-07-01T05:05:59.817Z · LW(p) · GW(p)

Hover your mouse over the "n points" text.

comment by [deleted] · 2013-06-26T07:55:03.917Z · LW(p) · GW(p)

"greeness" -> "greenness"

comment by NancyLebovitz · 2013-06-19T12:05:17.640Z · LW(p) · GW(p)

"If atoms are really to explain the origin of color and smell of visible material bodies, then they cannot possess properties like color and smell." So, although it is not an original thought, it is useful to bear in mind that greeness disintegrates.

Does this imply that there's no bottom level, just layer after layer of explanations with each layer being very different from the ones above? If there is a bottom level below which no further explanation is possible, can you tell whether you've reached it?

Replies from: ParanoidAltoid, TheOtherDave
comment by ParanoidAltoid · 2013-10-21T20:48:34.409Z · LW(p) · GW(p)

"If atoms are really to explain the origin of color and smell of visible material bodies, then they cannot possess properties like color and smell."

I want to point out that in this post, you were quoting sediment quoting Hofstadter who was referencing Hanson's quoting of Heisenberg. Pretty sure even Inception didn't go that deep.

comment by TheOtherDave · 2013-06-19T12:37:58.722Z · LW(p) · GW(p)

The principle here is that an attribute x of an entity A is not explained by reference to a constituent entity B that has the same property. The strength of an arch is a property of arches, for example, not of the things from which arches are constituted.

That doesn't imply that there must be a B in the first place, merely that whether there is or not, referring to B.x in order to explain A.x leaves x unexplained. (Of course, if there is no B, referring to B.x has other problems as well.)

I suspect the "top"/"bottom"/"level" analogy is misleading here. I would be surprised if there were a coherent "bottom level," actually. But if there is, I suppose the sign that I've reached it is that all the observable attributes it has are fully explainable without reference to other "levels," and all the observable attributes of other "levels" are fully (if impractically) explainable in terms of it.

At any level of description, there are observable attributes of entities that are best explained by reference to other levels of description, but I'm not sure there's always a clear rank-ordering of those levels.
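
A minimal sketch of the A.x / B.x pattern TheOtherDave describes, using the brick example from the Hofstadter quote; the class names and the solid attribute here are purely illustrative, not anything from the thread:

```python
# "Explaining" an attribute of A by pointing to the same attribute in a
# constituent B just pushes the attribute down a level; it never says what
# the attribute consists of.
class MicroBrick:
    solid = True  # B.x: asserted as a primitive, not explained


class Brick:
    def __init__(self, n_parts: int = 1000) -> None:
        self.parts = [MicroBrick() for _ in range(n_parts)]

    @property
    def solid(self) -> bool:
        # A.x is "derived" from B.x, but "solid" remains an unanalyzed primitive.
        return all(part.solid for part in self.parts)


print(Brick().solid)  # True -- and the question "what is solidity?" is still begged
```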

comment by aribrill (Particleman) · 2013-06-03T04:04:26.736Z · LW(p) · GW(p)

Why is there that knee-jerk rejection of any effort to "overthink" pop culture? Why would you ever be afraid that looking too hard at something will ruin it? If the government built a huge, mysterious device in the middle of your town and immediately surrounded it with a fence that said, "NOTHING TO SEE HERE!" I'm pretty damned sure you wouldn't rest until you knew what the hell that was -- the fact that they don't want you to know means it can't be good.

Well, when any idea in your brain defends itself with "Just relax! Don't look too close!" you should immediately be just as suspicious. It usually means something ugly is hiding there.

Replies from: OrphanWilde, NancyLebovitz, Risto_Saarelma, linkhyrule5, DaveK
comment by OrphanWilde · 2013-06-03T09:58:56.878Z · LW(p) · GW(p)

Ah, David Wong. A few movies in the post-9/11 era begin using terrorism and asymmetric warfare as a plot point? Proof that Hollywood no longer favors the underdog. Meanwhile he ignores... Daredevil, Elektra, V for Vendetta, X-Men, Kickass, Punisher, and Captain America, just to name the superhero movies I've seen which buck the trend he references, and within the movies he himself mentions, he intentionally glosses over 90% of the plots in order to make his point "stick." In some cases (James Bond, Sherlock Holmes) he treats the fact that the protagonists win as the proof that they weren't the underdog at all (something which would hold in reality but not in fiction, and a standard which he -doesn't- apply when it suits his purpose, a la his comments about the first three Die Hard movies being about an underdog whereas the most recent movie isn't).

Yeah. Not all that impressed with David Wong. His articles always come across as propaganda, carefully and deliberately choosing what evidence to showcase. And in this case he's deliberately treating the MST3K Mantra as some kind of propaganda-hiding tool? Really?

These movies don't get made because Hollywood billionaires don't want to make movies about underdogs, as he implies - Google "underdog movie"; this trope is still a mainstay of movies. They get made because they sell. To the same people consuming movies like The Chronicles of Riddick or The Matrix Trilogy. Movies which revolve around badass underdogs.

(Not that this directly relates to your quote, but I find David Wong to be consistently so deliberate about producing propaganda out of nothing that I cannot take him seriously as a champion of rationality.)

Replies from: Vaniver
comment by Vaniver · 2013-06-04T03:04:03.236Z · LW(p) · GW(p)

Not that this directly relates to your quote, but I find David Wong to be consistently so deliberate about producing propaganda out of nothing that I cannot take him seriously as a champion of rationality.

It is worth pointing out that this page is about quotes, not people, or even articles. I thought the quote was worth upvoting for:

Well, when any idea in your brain defends itself with "Just relax! Don't look too close!" you should immediately be just as suspicious. It usually means something ugly is hiding there.

comment by NancyLebovitz · 2013-06-03T14:39:49.901Z · LW(p) · GW(p)

Why is there that knee-jerk rejection of any effort to "overthink" pop culture? Why would you ever be afraid that looking too hard at something will ruin it?

I think it's because enjoying fiction involves being in a trance, and analyzing the fiction breaks the trance. I suspect that analysis is also a trance, but it's a different sort of trance.

Replies from: army1987, sediment
comment by A1987dM (army1987) · 2013-06-09T12:49:29.019Z · LW(p) · GW(p)

The term for that is suspension of disbelief.

comment by sediment · 2013-06-04T00:20:10.749Z · LW(p) · GW(p)

Any chance you could expand on "analysis is also a trance"?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-06-04T06:01:55.658Z · LW(p) · GW(p)

I don't know about anyone else, but if I'm analyzing, my internal monologue is the main thing in my consciousness.

Replies from: Baughn, FeepingCreature
comment by Baughn · 2013-06-04T23:19:36.843Z · LW(p) · GW(p)

Your what?

No, I'm not letting it go this time. I've heard people talking about internal monologues before, but I've never been quite sure what those are - I'm pretty sure I don't have one. Could you try to define the term?

Replies from: Eliezer_Yudkowsky, CCC, Eugine_Nier, NancyLebovitz, OrphanWilde, itaibn0, Qiaochu_Yuan, None
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-06T05:22:37.479Z · LW(p) · GW(p)

Gosh. New item added to my list of "Not everyone does that."

...I have difficulty imagining what it would be to be like someone who isn't the little voice in their own head, though. Seriously, who's posting that comment?

Replies from: TheOtherDave, Will_Newsome, Nisan, Desrtopa, ialdabaoth, Ratcourse, Estarlio, Baughn
comment by TheOtherDave · 2013-06-06T05:48:24.084Z · LW(p) · GW(p)

I may be in a somewhat unique position to address this question, as one of the many many many weird transient neurological things that happened to me after my stroke was a period I can best describe as my internal monologue going away.

So I know what it's like to be the voice in my head, and what it's like not to be.

And it's still godawful difficult to describe the difference in words.

One way I can try is this: have you ever experienced the difference between "I know what I'm going to say, and here I am saying it" and "words are coming out of my mouth, and I'm kind of surprised by what I'm hearing myself say"?

If so, I think I can say that losing my "little voice" is similar to that difference.
If not, I suspect the explanation will be just as inaccessible as the phenomenon it purported to explain, but I can try again.

Replies from: CCC, Armok_GoB, army1987, Bobertron
comment by CCC · 2013-06-06T08:39:28.887Z · LW(p) · GW(p)

One way I can try is this: have you ever experienced the difference between "I know what I'm going to say, and here I am saying it" and "words are coming out of my mouth, and I'm kind of surprised by what I'm hearing myself say"?

...no, I haven't. I'm always in the state of "I know what I'm going to say, and here I am saying it" (sometimes modified very soon afterwards by "on second thoughts, that was a very poor way to phrase it and I've probably been misunderstood").

Replies from: ciphergoth, TheOtherDave, ESRogs
comment by Paul Crowley (ciphergoth) · 2013-06-06T08:55:14.065Z · LW(p) · GW(p)

...what? Wow!

I'm dying to know whether we're stumbling on a difference in the way we think or the way we describe what we think, here. To me, the first state sounds like rehearsing what I'm going to say in my head before I say it, which I only do when I'm racking my brains on e.g. how to put something tactfully, whereas the latter sounds like what I do in conversation all the time, which is simply to let the words fall out of my mouth and find out what I've said.

Replies from: CCC, Kaj_Sotala, ialdabaoth
comment by CCC · 2013-06-06T09:23:41.286Z · LW(p) · GW(p)

My internal monologue is a lot faster than the words can get out of my mouth (when I was younger, I tried to speak as fast as I think, with the result that no-one could understand me; of course, to speak that fast, I needed to drop significant parts of most of the words, which didn't help). I don't always plan out every sentence in advance; but thinking about it, I think I do plan out every phrase in advance, relying on the speed of my internal monologue to produce the next phrase before or at worst very shortly after I complete the current phrase. (It often helps to include a brief pause at the end of a phrase in any case). It's very much a just-in-time thing.

If I'm making a special effort to be tactful, then I'll produce and consider a full sentence inside my head before saying it out loud.

Incidentally, I'm also a member of Toastmasters, and one thing that Toastmasters has is impromptu speaking, when a person is asked to give a one-to-two minute speech and is told the topic just before stepping up to give the speech. The topic could be anything (I've had "common sense", "stick", and "nail", among others). Most people seem to be scared of this, apparently seeing it as an opportunity to stand up and be embarrassed; I find that I enjoy it. I often start an impromptu speech with very little idea of how it's going to end; I usually make some sort of pun about the topic (I changed 'common sense' into a very snooty, upper-crust type of person complaining about commoners with money - 'common cents'), and often talk more-or-less total nonsense.

But, through the whole speech, I always know what I am saying. I am not surprised by my own words (no matter how surprised other people may be by the idea of 'common cents'). I don't think I know how to be surprised at what I am saying. (Of course, my words are not always well-considered, in hindsight; and sometimes I will be surprised at someone else's interpretation of my words, and be forced to explain that that's not what I meant)

Replies from: somervta, CCC
comment by somervta · 2013-06-08T07:24:25.370Z · LW(p) · GW(p)

I'm the same - except occasionally, when I'm 'flowing' in conversation, I'll find that my inner monologue fails to produce what I think it can, and my mouth just halts without input from it

Replies from: CCC
comment by CCC · 2013-06-09T18:06:21.471Z · LW(p) · GW(p)

I find that happens to me sometimes when I talk in Afrikaans; my Afrikaans vocabulary is poor enough that I often get halfway through a sentence and find that I can't remember the word for what I want to say.

Replies from: army1987
comment by A1987dM (army1987) · 2013-06-09T21:53:37.565Z · LW(p) · GW(p)

It occasionally happens to me in any language. I usually manage to rephrase the sentence on the fly or to replace the word with something generic like “thing” and let the listener figure it out from the context, without much trouble.

comment by CCC · 2013-06-09T18:10:56.101Z · LW(p) · GW(p)

Something that occurred to me on this topic; reading has a lot to do with the inner monologue. Writing is, in my view, a code of symbols on a piece of paper (or a screen) which tell the reader what their inner monologue should say. Reading, therefore, is the voluntary (and temporary) replacement of the reader's internal monologue with an internal monologue supplied and encoded by the author.

At least, that's what happens when I read. Do other people have the same experience?

Replies from: NancyLebovitz, army1987
comment by NancyLebovitz · 2013-06-19T12:58:10.320Z · LW(p) · GW(p)

Inner monologue test:

I. like. how. when. you. read. this. the. little. voice. in. your. head. takes. pauses..

Does anyone find that the periods don't make the sentence sound different?

Replies from: CCC
comment by CCC · 2013-06-20T09:31:19.187Z · LW(p) · GW(p)

I. like. how. when. you. read. this. the. little. voice. in. your. head. takes. pauses..

Let's make it a poll:

When you read NancyLebovitz's sentence (quoted above) do the periods make it sound different?

[pollid:470]

(If anyone picks any option except 'Yes' or 'No', could you please elaborate?)

Replies from: army1987, ialdabaoth
comment by A1987dM (army1987) · 2013-06-21T23:19:56.784Z · LW(p) · GW(p)

Hypothesis: Since I am more used to reading sentences without a full stop after each word than sentences like that, of course I will read the former more quickly -- because it takes less effort.

Experiment to test this hypothesis: Ilikehowwhenyoureadthisthelittlevoiceinyourheadspeaksveryquickly.

Result of the experiment: at least for me, my hypothesis is wrong. YMMV.

Replies from: NancyLebovitz, CCC, wedrifid
comment by NancyLebovitz · 2013-06-24T14:11:01.524Z · LW(p) · GW(p)

As far as I can tell, I started reading the test phrase more slowly than normal, then "shifted gears" and sped up, perhaps to faster than normal.

Replies from: Benquo
comment by Benquo · 2013-06-24T15:00:39.329Z · LW(p) · GW(p)

Same here, for both test sentences.

comment by CCC · 2013-06-22T07:10:52.025Z · LW(p) · GW(p)

The little voice in my head speaks quickly for that experimental phrase, yes. It should be taking slightly longer to decode - since the information on word borders is missing - which suggests that the voice in my head is doing special effects. I think that that is becausewordslikethis can be used in fiction as the voice of someone who is speaking quickly; so if the voice in my head speeds up when reading it, then that makes the story more immersive.

comment by wedrifid · 2013-06-22T01:25:46.278Z · LW(p) · GW(p)

Result of the experiment: at least for me, my hypothesis is wrong. YMMV.

Hypothesisconfirmedforme.Perhapstoomanyhourslisteningtoaudiobooksatfivetimesspeed. Normalspeedheadvoicejustseemssoslow.

Replies from: army1987
comment by A1987dM (army1987) · 2013-07-13T18:23:00.609Z · LW(p) · GW(p)

That sounds in my head like the voice in Italian TV ads for medicines reading the disclaimers required (I guess) by law (ultra-fast words, but pauses between sentences of nearly normal length).

comment by ialdabaoth · 2013-06-20T09:44:42.051Z · LW(p) · GW(p)

(If anyone picks any option except 'Yes' or 'No', could you please elaborate?)

I can parse it both ways. Actually, on further experimentation, it appears to be tied directly to my eye-scanning speed! If I force my eyes to scan over the line quickly from left-to-right, I read it without pause; if I read the way I normally do (by staring at the 'When' to take a "snapshot" of I, like, how, when, you, and read all at once; then staring at the space between "little" and "voice" to take a snapshot of this, the, little, voice, in, and your all at once, then staring at the "pauses" to take a snapshot of head, takes, and pauses), then the pauses get inserted - but not as normal sentence stops; more like... a clipped robot.

Replies from: CCC
comment by CCC · 2013-06-20T10:05:43.953Z · LW(p) · GW(p)

Huh. You read in a different way to what I do; I normally scan the line left-to-right. And I insert the pauses when I do so.

It sounds like a clipped robot to me too.

Replies from: ialdabaoth
comment by ialdabaoth · 2013-06-20T14:03:52.849Z · LW(p) · GW(p)

Yeah, something clicked while I was reading an old encyclopedia sometime around age 7; I remember it quite vividly. My brain started being able to process chunks of text at a time instead of single words, so I could sort of focus on the middle of a short sentence or phrase and read the whole thing at once. I went from reading at about one-quarter conversation speed, to about ten times conversation speed, over the course of a few minutes. I still don't quite understand what the process was that enabled the change; I just sort of experienced it happening.

One trade-off is that I don't have full conscious recall of each word when I read things that quickly - but I do tend to be able to pull up a reasonable paraphrasing of the information later if I need to.

Replies from: CCC, Baughn
comment by CCC · 2013-06-21T08:23:01.623Z · LW(p) · GW(p)

I can see both pros and cons to this talent. The pro is obvious; faster reading. The con is that it may cause trouble parsing subtly-worded legal contracts; the sort where one misplaced word may potentially land up with both parties arguing the matter in court. Or anything else where exact wording is important, like preparing a wish for a genie.

Of course, since it seems that you can choose when to use this, um, snapshot reading and when not to, you can gain the full benefit of the pros most of the time while carefully removing the cons in any situation where they become important.

comment by Baughn · 2014-01-21T17:22:38.917Z · LW(p) · GW(p)

I call that "skimming", but maybe that's something else?

comment by A1987dM (army1987) · 2013-06-09T21:50:30.154Z · LW(p) · GW(p)

Assuming you're literally talking about subvocalization, it depends on what I'm reading (I do it more with poetry than with academic papers), on how quickly I'm reading (I don't do that as much when skimming), on whether I know what the author's voice sounds like (in which case I subvocalize in their voice -- which slows me down a great deal if I'm reading stuff by someone who speaks slowly and with a strong foreign accent e.g. Benedict XVI), and possibly on something else I'm overlooking at the moment.

Replies from: CCC
comment by CCC · 2013-06-10T09:07:25.274Z · LW(p) · GW(p)

I do not notice that I am subvocalising when I read, even when I am looking for it (I tested this on the wiki page that you linked to). I do notice, however, that it mentions that subvocalising is often not detectable by the person doing the subvocalising.

More specifically, if I place my hand lightly on my throat while reading, I feel no movement of the muscles; and I am able to continue reading while swallowing.

So, no, I don't think I'm talking about subvocalising. I'm talking about an imaginary voice in my head that narrates my thought processes.

Hmmm... my inner monologue does not tend to speak in the voice of someone whose voice I know. I can get it to speak in other peoples' voices, or in what I imagine other people's voices to sound like, if I try to, but it defaults to a sort of neutral gear which, now that I think about it, sounds like a voice but not quite like my (external) voice. Similar, but not the same. (And, of course, the way that I hear my voice when I speak differs from how I hear it when recorded on tape - my inner monologue sounds more like the way I hear my voice, but still somewhat different)

...this is strange. I don't know who my inner monologue sounds like, if anyone.

Replies from: army1987
comment by A1987dM (army1987) · 2013-06-10T09:20:44.736Z · LW(p) · GW(p)

Hmmm... my inner monologue does not tend to speak in the voice of someone whose voice I know. I can get it to speak in other peoples' voices, or in what I imagine other people's voices to sound like, if I try to, but it defaults to a sort of neutral gear which, now that I think about it, sounds like a voice but not quite like my (external) voice. Similar, but not the same. (And, of course, the way that I hear my voice when I speak differs from how I hear it when recorded on tape - my inner monologue sounds more like the way I hear my voice, but still somewhat different)

Mine usually sounds more or less like I'm whispering.

Replies from: CCC
comment by CCC · 2013-06-11T09:31:51.211Z · LW(p) · GW(p)

My inner monologue definitely doesn't sound like whispering; it's a voice, speaking normally.

I think I can best describe it by saying that it sounds more like I imagine myself sounding than like I actually sound to myself; but I suspect that's recursive, i.e. I imagine myself sounding like that because that's what my inner monologue sounds like.

Replies from: hylleddin
comment by hylleddin · 2013-06-13T01:34:54.439Z · LW(p) · GW(p)

Does your inner voice sound different depending on your mood or emotional state?

Replies from: CCC
comment by CCC · 2013-06-13T09:20:43.286Z · LW(p) · GW(p)

Yes. If my mood or emotional state is sufficiently severe, then my inner voice will sound different; both in choice of phrasing and in tone of voice.

It's not an audible voice, as such; I think the best way that I can describe it is to say that it's very much like a memory of a voice, except that it's generated on-the-fly instead of being, well, remembered. As such, it has most of the properties of an audible voice (except actual audibility) - including such markers as 'tone of voice'. This tone changes with my emotional state in reasonable ways; that is, if I am sufficiently angry, then my inner voice may take on an angry, menacing tone.

If my emotional state is not sufficiently severe, then I am unable to notice any change in my inner-voice tone. I also note that my spoken voice shows a noticeable change of tone at significantly lower emotional severity than my inner voice does.

Replies from: army1987
comment by A1987dM (army1987) · 2013-06-15T07:25:57.744Z · LW(p) · GW(p)

It's not an audible voice, as such; I think the best way that I can describe it is to say that it's very much like a memory of a voice, except that it's generated on-the-fly instead of being, well, remembered.

I was about to say that it's the same for me, but then I remember that at least for me actual memories of voices can be very vivid (especially in hypnagogic state or when I'm reading stuff written by that person), whereas my inner voice seldom is. (And memories of voices can also be generated on-the-fly -- I can pick a sentence and imagine a bunch of people I know each saying it, even if I can't remember hearing any of them actually ever saying that sentence.)

Replies from: CCC
comment by CCC · 2013-06-17T09:35:49.463Z · LW(p) · GW(p)

Huh. Either my memories of voices are less vivid than yours, or my inner monologue is more vivid. Quite possibly both.

Of course, when I remember someone saying something, it can include information aside from the voice (e.g. where it happened, the surroundings at the time) which is never included in my inner monologue. I consider these details to be separate from the voice-memory; the voice-memory is merely a part of the whole "what-he-said" memory.

Replies from: army1987
comment by A1987dM (army1987) · 2013-07-13T19:05:43.801Z · LW(p) · GW(p)

BTW, I think I have one kind of memory for people's timbre, rate of speech, volume, accent, etc., and one for sequences of phonemes, and when recalling what a person sounded like when saying a given sentence I combine the two on the fly.

comment by Kaj_Sotala · 2013-06-10T11:23:47.422Z · LW(p) · GW(p)

My experience is that I generally have some kind of fuzzy idea of what I'm going to say before I say it. When I actually speak, sometimes it comes out as a coherent and streamlined sentence whose contents I figure out as I speak it. At other times - particularly if I'm feeling nervous, or trying to communicate a complicated concept that I haven't expressed in speech before - my fuzzy idea seems to disintegrate at the moment I start talking, and even if I had carefully rehearsed a line many times in my mind, I forget most of it. Out comes either what feels to me like an incoherent jumble, or a lot of "umm, no, wait".

Writing feels a lot easier, possibly because I have the stuff-that-I've-already-written right in front of me and I only need to keep the stuff that I'm about to say in memory, instead of also needing to constantly remind myself about what I've said so far.

ETA: Here's an earlier explanation of what writing sometimes feels like to me.

comment by ialdabaoth · 2013-06-06T09:53:14.871Z · LW(p) · GW(p)

The parts of your brain that generate speech and the part that generates your internal sense-of-self are less integrated than CCC's. An interesting experiment might be to stop ascribing ownership to your words when you find yourself surprised by them - i.e., instead of framing the phenomenon as "I said that", frame it as "my brain generated those words".

Learn to recognize that the parts of your brain that handle text generation and output are no more "you" than the parts of your brain that handle motor reflex control.

EDIT: Is there a problem with this post?

Replies from: wedrifid, FeepingCreature, Kawoomba
comment by wedrifid · 2013-06-06T10:25:34.223Z · LW(p) · GW(p)

Learn to recognize that the parts of your brain that handle text generation and output are no more "you" than the parts of your brain that handle motor reflex control.

No! The parts of my brain that handle text generation are the only parts that... *slap*... Ow. Nevermind. It seems we have reached an 'understanding'.

Replies from: TheOtherDave, Kawoomba
comment by TheOtherDave · 2013-06-06T12:56:32.498Z · LW(p) · GW(p)

Right!
I mean, I do realize you're being funny, but pretty much exactly this.

I don't recommend aphasia as a way of shock-treating this presumption, but I will admit it's effective. At some point I had the epiphany that my language-generating systems were offline but I was still there; I was still thinking the way I always did, I just wasn't using language to do it.

Which sounds almost reasonable expressed that way, but it was just about as creepy as the experience of moving my arm around normally while the flesh and bone of my arm lay immobile on the bed.

Replies from: FeepingCreature
comment by FeepingCreature · 2013-06-07T23:07:20.683Z · LW(p) · GW(p)

A good way I've found to reach this state is to start to describe a concept in your internal monologue but "cancel" the monologue right at the start - the concept will probably have been already synthesized and will just be hanging around in your mind, undescribed and unspoken but still recognizable.

[edit] Afaict the key step is noticing that you've started a monologue, and sort of interrupting yourself mentally.

Replies from: TheOtherDave, TheOtherDave, Eugine_Nier
comment by TheOtherDave · 2013-06-09T18:18:06.991Z · LW(p) · GW(p)

So, FWIW, after about 20 minutes spent trying to do this I wasn't in a recognizably different state than I was when I started. I can kind of see what you're getting at, though.

Replies from: FeepingCreature
comment by FeepingCreature · 2013-06-10T03:50:01.144Z · LW(p) · GW(p)

Right, I mean as a way of realizing that there's something noticeable going on in your head that precedes the internal monologue. I wrote that comment wrong. Sorry for wasting your time.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-10T15:35:33.438Z · LW(p) · GW(p)

Ah! I get you now. (nods) Yeah, that makes sense.

comment by TheOtherDave · 2013-06-07T23:25:06.475Z · LW(p) · GW(p)

That's... hm.
I'm not sure I know what you mean.
I'll experiment with behaving as if I did when I'm not in an airport waiting lounge and see what happens.

comment by Eugine_Nier · 2013-06-12T06:24:47.135Z · LW(p) · GW(p)

A good way I've found to reach this state is to start to describe a concept in your internal monologue but "cancel" the monologue right at the start - the concept will probably have been already synthesized and will just be hanging around in your mind, undescribed and unspoken but still recognizable.

I've had this happen to me semi-accidentally, the resulting state is extremely unpleasant.

comment by Kawoomba · 2013-06-06T10:33:52.655Z · LW(p) · GW(p)

A smash equilibrium.

comment by FeepingCreature · 2013-06-07T23:31:23.202Z · LW(p) · GW(p)

EDIT: Is there a problem with this post?

It's a bit rude to try to change others' definition of themselves unasked.

Replies from: ialdabaoth
comment by ialdabaoth · 2013-06-08T00:08:49.223Z · LW(p) · GW(p)

  1. Where does that intersect with "that which can be destroyed by the truth, should be"?

  2. "I'm dying to know whether we're stumbling on a difference in the way we think or the way we describe what we think, here." wasn't asking?

Replies from: FeepingCreature
comment by FeepingCreature · 2013-06-08T00:25:22.900Z · LW(p) · GW(p)

  1. The problem is that "what is part of you" at the interconnectedness-level of the brain is largely a matter of preference, imo; that is, treating it as truth implies taking a more authoritative position than is reasonable. Same goes for 2) - there's a difference between telling somebody what you think and outright stating that their subjective self-image is factually incorrect.

Replies from: ialdabaoth
comment by ialdabaoth · 2013-06-08T00:31:42.615Z · LW(p) · GW(p)

there's a difference between telling somebody what you think and outright stating that their subjective self-image is factually incorrect.

I appear to be confused.

Are you implying that subjective self-image is something that we should respect rather than analyze?

Replies from: FeepingCreature
comment by FeepingCreature · 2013-06-08T02:26:09.284Z · LW(p) · GW(p)

I think there's a difference between analysis and authoritative-sounding statements like "X is not actually a part of you, you are wrong about this", especially when it comes to personal attributes like selfness, especially in a thread demonstrating the folly of the typical-mind assumption.

Replies from: ialdabaoth
comment by ialdabaoth · 2013-06-08T02:31:24.491Z · LW(p) · GW(p)

Interesting. It was not my intent to sound any more authoritative than typical. Are there particular signals that indicate abnormally authoritarian-sounding statements that I should watch out for? And are there protocols that I should be aware of here that determine who is allowed to sound more or less authoritarian than whom, and under what circumstances?

Replies from: FeepingCreature, TheOtherDave
comment by FeepingCreature · 2013-06-08T03:13:48.203Z · LW(p) · GW(p)

I should have mentioned this earlier, but I did not downvote you so this is somewhat conjectured. In my opinion it's not a question of who but of topic - specifically, and this holds in a more general sense, you might want to be cautious when correcting people about beliefs that are part of their self-image. Couch it in terms like "I don't think", "I believe", "in my opinion", "personally speaking". That'll make it sound less like you think you know their minds better than they do.

comment by TheOtherDave · 2013-06-09T18:20:51.217Z · LW(p) · GW(p)

FWIW, I understood you in the first place to be saying that this was a choice, and it was good to be aware of it as a choice, rather than making authoritarian statements about what choice to make.

comment by Kawoomba · 2013-06-06T10:06:39.270Z · LW(p) · GW(p)

Learn to recognize that the parts of your brain that handle text generation and output are no more "you" than the parts of your brain that handle motor reflex control.

I'd certainly call them much more significant to my identity than e.g. my deltoid muscle, or some motor function parts of my brain.

Replies from: ialdabaoth
comment by ialdabaoth · 2013-06-07T03:44:34.349Z · LW(p) · GW(p)

I'd certainly call them much more significant to my identity than e.g. my deltoid muscle, or some motor function parts of my brain.

It may be useful to recognize that this is a choice, rather than an innate principle of identity. The parts that speak are just modules, just like the parts that handle motor control. They can (and often do) run autonomously, and then the module that handles generating a coherent narrative stitches together an explanation of why you "decided" to cause whatever they happened to generate.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-06-07T09:27:34.923Z · LW(p) · GW(p)

This sounds like a theory of identity as epiphenomenal homunculus. A module whose job is to sit there weaving a narrative, but which has no effect on anything outside itself (except to make the speech module utter its narrative from time to time). "Mr Volition", as Greg Egan calls it in one of his stories. Is that your view?

Replies from: ialdabaoth
comment by ialdabaoth · 2013-06-07T09:42:40.620Z · LW(p) · GW(p)

More or less, yes. It does have some effect on things outside itself, of course, in that its 'narrative' tends to influence our emotional investment in situations, which in turn influences our reactions.

Replies from: Richard_Kennaway, Estarlio
comment by Richard_Kennaway · 2013-06-07T17:47:48.341Z · LW(p) · GW(p)

It seems to me that the Mr. Volition theory suffers from the same logical flaw as p-zombies. How would a non-conscious entity, a p-zombie, come to talk about consciousness? And how does an epiphenomenon come to think it's in charge, how does it even arrive at the very idea of "being in charge", if it was never in charge of anything?

An illusion has to be an illusion of something real. Fake gold can exist only because there is such a thing as real gold. There is no such thing as fake mithril, because there is no such thing as real mithril.

Replies from: ialdabaoth, TheOtherDave, khafra, CCC, FeepingCreature, Juno_Watt, Qiaochu_Yuan
comment by ialdabaoth · 2013-06-07T18:55:36.666Z · LW(p) · GW(p)

By that analogy, then, fake gods can exist only because there is such a thing as real gods; fake ghosts can only exist because there is such a thing as real ghosts; fake magic can only exist because there is such a thing as real magic.

It's perfectly possible to be ontologically mistaken about the nature of one's world.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-06-08T06:24:34.843Z · LW(p) · GW(p)

By that analogy, then, fake gods can exist only because there is such a thing as real gods; fake ghosts can only exist because there is such a thing as real ghosts; fake magic can only exist because there is such a thing as real magic.

It's perfectly possible to be ontologically mistaken about the nature of one's world.

Indeed. There is real agency, so people have imagined really big agents that created and rule the world. People's consciousness persists, even after the interruptions of sleep, and they imagine it persists even after death. People's actions appear to happen purely by their intention, and they imagine doing arbitrary things purely by intention. These are the real things that the fakes, pretences, or errors are based on.

But how do the p-zombie and the homunculus even get to the point of having their mistaken ontology?

Replies from: ialdabaoth
comment by ialdabaoth · 2013-06-08T06:40:52.343Z · LW(p) · GW(p)

The p-zombie doesn't, because the p-zombie is not a logically consistent concept. Imagine if there was a word that meant "four-sided triangle" - that's the level of absurdity that the 'p-zombie' idea represents.

On the other hand, the epiphenomenal consciousness (for which I'll accept the appellation 'homunculus' until a more consistent and accurate one occurs to me) is simply mistaken in that it is drawing too large a boundary in some respects, and too small a boundary in others. It's drawing a line around certain phenomena and ascribing a causal relationship between those and its own so-called 'agency', while excluding others. The algorithm that draws those lines doesn't have a particularly strong map-territory correlation; it just happens to be one of those evo-psych things that developed and self-reinforced because it worked in the ancestral environment.

Note that I never claimed that "agency" and "volition" are nonexistent on the whole; merely that the vast majority of what people internally consider "agency" and "volition", aren't.

EDIT: And I see that you've added some to the comment I'm replying to, here. In particular, this stood out:

People's consciousness persists, even after the interruptions of sleep, and they imagine it persists even after death.

I don't believe that "my" consciousness persists after sleep. I believe that a new consciousness generates itself upon waking, and pieces itself together using the memories it has access to as a consequence of being generated by "my" brain; but I don't think that the creature that will wake up tomorrow is "me" in the same way that I am. I continue to use words like "me" and "I" for two reasons:

  1. Social convenience - it's damn hard to get along with other hominids without at least pretending to share their cultural assumptions

  2. It is, admittedly, an incredibly persistent illusion. However, it is a logically incoherent illusion, and I have upon occasion pierced it and seen others pierce it, so I'm not entirely inclined to give it ontological reality with p=1.0 anymore.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-08T15:15:38.164Z · LW(p) · GW(p)

Do you believe that the creature you are now (as you read this parenthetical expression) is "you" in the same way as the creature you are now (as you read this parenthetical expression)?

If so, on what basis?

Replies from: ialdabaoth
comment by ialdabaoth · 2013-06-08T20:17:00.859Z · LW(p) · GW(p)

Yes(ish), on the basis that the change between me(expr1) and me(expr2) is small enough that assigning them a single consistent identity is more convenient than acknowledging the differences.

But if I'm operating in a more rigorous context, then no; under most circumstances that appear to require epistemological rigor, it seems better to taboo concepts like "I" and "is" altogether.

Replies from: TheOtherDave, None, Richard_Kennaway
comment by TheOtherDave · 2013-06-08T21:35:02.897Z · LW(p) · GW(p)

(nods) Fair enough.

I share something like this attitude, but in normal non-rigorous contexts I treat me-before-sleep and me-after-sleep as equally me in much the same way as you do me(expr1) and me(expr2).

More generally, my non-rigorous standard for "me" is such that all of my remembered states when I wasn't sleeping, delirious, or younger than 16 or so unambiguously qualify for "me"dom, despite varying rather broadly amongst themselves. This is mostly because the maximum variation along salient parameters among that set of states seems significantly smaller than the minimum variations between that set and the various other sets of states I observe others demonstrating. (If I lived in a community seeded by copies of myself-as-of-five-minutes ago who could transfer memories among one another, I can imagine my notion of "I" changing radically.)

Replies from: ialdabaoth
comment by ialdabaoth · 2013-06-08T21:40:44.794Z · LW(p) · GW(p)

More generally, my non-rigorous standard for "me" is such that all of my remembered states when I wasn't sleeping, delirious, or younger than 16 or so unambiguously qualify for "me"dom, despite varying rather broadly amongst themselves. This is mostly because the maximum variation along salient parameters among that set of states seems significantly smaller than the minimum variations between that set and the various other sets of states I observe others demonstrating. (If I lived in a community seeded by copies of myself-as-of-five-minutes ago who could transfer memories among one another, I can imagine my notion of "I" changing radically.)

Nice! I like that reasoning.

I personally experience a somewhat less coherent sense of self, and what sense of self I do experience seems particularly maladaptive to my environment, so we definitely seem to have different epistemological and pragmatic goals - but I think we're applying very similar reasoning to arrive at our premises.

comment by [deleted] · 2013-06-08T21:13:09.811Z · LW(p) · GW(p)

So in the following sentence...

"I am a construction worker"

Can you taboo 'I' and "am' for me?

Replies from: ialdabaoth
comment by ialdabaoth · 2013-06-08T21:21:18.482Z · LW(p) · GW(p)

"I am a construction worker"

This body works construction.

Jobs are a particularly egregious case where tabooing "is" seems like a good idea - do you find the idea that people "are" their jobs a particularly useful encapsulation of the human experience? Do you, personally, find yourself fully encapsulated by the ritualized economic actions you perform?

Replies from: None
comment by [deleted] · 2013-06-08T21:25:00.363Z · LW(p) · GW(p)

This body works construction.

But if 'I' differ day to day, then doesn't this body differ day to day too?

Do you, personally, find yourself fully encapsulated by the ritualized economic actions you perform?

I am fully and happily encapsulated by my job, though I think I may have the only job where this is really possible.

Replies from: ialdabaoth
comment by ialdabaoth · 2013-06-08T21:37:35.902Z · LW(p) · GW(p)

But if 'I' differ day to day, then doesn't this body differ day to day too?

Certainly. How far do you want to go? Maps are not territories, but some maps provide useful representations of territories for certain contexts and purposes.

The danger represented by "I" and "is" comes from their tendency to blow away the map-territory relation, and convince the reader that an identity exists between a particular concept and a particular phenomenon.

comment by Richard_Kennaway · 2013-06-08T20:59:33.669Z · LW(p) · GW(p)

Is the camel's nose the same thing as his tail? Are the nose and the tail parts of the same thing? What needs tabooing is "same" and "thing".

Replies from: ialdabaoth
comment by ialdabaoth · 2013-06-08T21:23:23.297Z · LW(p) · GW(p)

What needs tabooing is "same" and "thing".

I have also found that process useful (although like 'I', there are contexts where it is very cumbersome to get around using them).

comment by TheOtherDave · 2013-06-07T22:50:53.490Z · LW(p) · GW(p)

An illusion has to be an illusion of something real. Fake gold can exist only because there is such a thing as real gold. There is no such thing as fake mithril, because there is no such thing as real mithril.

Suppose I am standing next to a wall so high that I am left with the subjective impression that it just goes on forever and ever, with no upper bound. Or next to a chasm so deep that I am left with the subjective impression that it's bottomless.

Would you say these subjective impressions are impossible?
If possible, would you say they aren't illusory?

My own answer would be that such subjective impressions are both illusory and possible, but that this is not evidence of the existence of such things as real bottomless pits and infinitely tall walls. Rather, they are indications that my imagination is capable of creating synthetic/composite data structures.

comment by khafra · 2013-06-07T19:32:08.925Z · LW(p) · GW(p)

There is no such thing as fake mithril, because there is no such thing as real mithril.

Mesh mail "mithril" vest, $335.

Setting aside the question of whether this is fake iron man armor, or a real costume of the fake iron man, or a fake costume designed after the fake iron man portrayed by special effects artists in the movies, I think an illusion can be anything that triggers a category recognition by matching some of the features strongly enough to trigger the recognition, while failing to match on a significant amount of the other features that are harder to detect at first.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-06-08T06:22:33.142Z · LW(p) · GW(p)

Mesh mail "mithril" vest, $335.

That's not fake mithril, it's pretend mithril.

I think an illusion can be anything that triggers a category recognition by matching some of the features strongly enough to trigger the recognition

To have the recognition, there must have already been a category to recognise.

comment by CCC · 2013-06-08T14:35:02.640Z · LW(p) · GW(p)

How would a non-conscious entity, a p-zombie, come to talk about consciousness?

A tape recorder is a non-conscious entity. I can get a tape recorder to talk about consciousness quite easily.

Or are you asking how it would decide to talk about consciousness? It's a bit ambiguous.

comment by FeepingCreature · 2013-06-07T23:36:21.497Z · LW(p) · GW(p)

I think it's not an epiphenomenon, it's just wired in more circuitously than people believe. It has effects; it just doesn't have some effects that we tend to ascribe to it, like decisionmaking and highlevel thought.

comment by Juno_Watt · 2013-06-08T13:34:47.357Z · LW(p) · GW(p)

How would a non-conscious entity, a p-zombie, come to talk about consciousness?

By functional equivalence. A zombie Chalmers is bound to utter sentences asserting its possession of qualia; a zombie Dennett will utter sentences denying the same.

The only get-out is to claim that it is not really talking at all.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-06-10T09:15:16.907Z · LW(p) · GW(p)

The epiphenomenal homunculus theory claims that there's nothing but p-zombies, so there are no conscious beings for them to be functionally equivalent to. After all, as the alien that has just materialised on my monitor has pointed out to me, no humans have zardlequeep (approximate transcription), and they don't go around insisting that they do. They don't even have the concept to talk about.

Replies from: Juno_Watt
comment by Juno_Watt · 2013-06-16T13:19:44.504Z · LW(p) · GW(p)

The theory that there is nothing but zombies runs into the difficulty of explaining why many of them would believe they are non-zombies. The standard p-zombie argument, that you can have qualia-less functional duplicates of non-zombies, does not have that problem.

Replies from: Locaha, Richard_Kennaway
comment by Locaha · 2013-06-16T13:51:37.154Z · LW(p) · GW(p)

The theory that there is nothing but zombies runs into the much bigger difficulty of explaining to myself why I'm a zombie. When I poke myself with a needle, I sure as hell have the qualia of pain.

And don't tell me it's an illusion - any illusion is a qualia by itself.

Replies from: Juno_Watt
comment by Juno_Watt · 2013-06-16T17:14:08.030Z · LW(p) · GW(p)

Don't tell me, tell Dennett.

comment by Richard_Kennaway · 2013-06-16T14:07:24.054Z · LW(p) · GW(p)

The standard p-zombie argument still has a problem explaining why p-zombies claim to be conscious. It leaves no role for consciousness in explaining why conscious humans talk of being conscious. It's a short road (for a philosopher) to then argue that consciousness plays no role, and we're back with consciousness as either an epiphenomenon or non-existent, and the problem of why -- especially when consciousness is conceded to exist, but cause nothing -- the non-conscious system claims to be conscious.

Replies from: nshepperd, Juno_Watt
comment by nshepperd · 2013-06-17T01:21:56.843Z · LW(p) · GW(p)

Even worse is the question of how the word "conscious" can possibly even refer to this thing that is claimed to be epiphenomenal, since the word can't have been invented in response to the existence or observations of consciousness (since there aren't any observations). And in fact there is nothing to allow a human to distinguish between this thing and every other thing that has never been observed, so in a way the claim that a person is "conscious" is perfectly empty.

ETA: Well, of course one can argue that it is defined intensionally, like "a unicorn is a horse with a single horn extending from its head, and [various magical properties]" which does define a meaningful predicate even if a unicorn has never been seen. But in that case any human's claim to have a consciousness is perfectly evidence-free, since there are no observations of it with which to verify that it (to the extent that you can even refer to a particular unobservable thing) has the relevant properties.

comment by Juno_Watt · 2013-06-16T17:13:02.057Z · LW(p) · GW(p)

The standard p-zombie argument still has a problem explaining why p-zombies claim to be conscious. It leaves no role for consciousness in explaining why conscious humans talk of being conscious.

Yes. Thats the standard epiphenomenalism objection.

It's a short road (for a philosopher) to then argue that consciousness plays no role,

Often a bit too short.

comment by Qiaochu_Yuan · 2013-06-07T23:38:59.382Z · LW(p) · GW(p)

How would a non-conscious entity, a p-zombie, come to talk about consciousness?

I scrawl on a rock "I am conscious." Is the rock talking about consciousness?

Replies from: Richard_Kennaway, nshepperd
comment by Richard_Kennaway · 2013-06-08T06:18:41.685Z · LW(p) · GW(p)

No, you are.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-06-08T06:40:42.064Z · LW(p) · GW(p)

I run a program that randomly outputs strings. One day it outputs the string "I am conscious." Is the program talking about consciousness? Am I?
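
For concreteness, a minimal sketch of the kind of program described here, assuming uniform draws over the 95 printable ASCII characters; with a 15-character target such as "I am conscious.", each draw matches with probability (1/95)^15, roughly 2×10^-30:

```python
import random
import string

ALPHABET = string.printable[:95]   # the 95 printable ASCII characters
TARGET = "I am conscious."

def random_string(length: int) -> str:
    # One draw: each character chosen independently and uniformly.
    return "".join(random.choice(ALPHABET) for _ in range(length))

# Chance that any single draw happens to be TARGET: vanishingly small.
p_hit = (1 / len(ALPHABET)) ** len(TARGET)
print(f"probability per draw: {p_hit:.1e}")   # ~2.2e-30

# A few sample outputs; none will be TARGET in any realistic run.
for _ in range(3):
    print(random_string(len(TARGET)))
```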

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-06-08T06:43:16.176Z · LW(p) · GW(p)

No, see nshepperd's comment.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-06-08T08:43:41.104Z · LW(p) · GW(p)

Maybe I'm being unnecessarily cryptic. My point is that when you say that something is "talking about consciousness," you're assigning meaning to what is ultimately a particular sequence of vibrations of the air (or a particular pattern of pigment on a rock, or a particular sequence of ASCII characters on a screen). I don't need a soul to "talk about souls," and I don't need to be conscious to "talk about consciousness": it just needs to happen to be the case that my mouth emits a particular sequence of vibrations in the air that you're inclined to interpret in a particular way (but that interpretation is in your map, not the territory).

In other words, I'm trying to dissolve the question you're asking. Am I making sense?

Replies from: Richard_Kennaway, Juno_Watt
comment by Richard_Kennaway · 2013-06-08T09:31:13.510Z · LW(p) · GW(p)

In other words, I'm trying to dissolve the question you're asking. Am I making sense?

Not yet. I really think you need to read the GLUT post that nshepperd linked to.

I don't need a soul to "talk about souls," and I don't need to be conscious to "talk about consciousness"

You do need to have those concepts, though, and concepts cannot arise without there being something that gave rise to them. That something may not have all the properties one ascribes to it (e.g. magical powers), but discovering that that one was mistaken about some aspects does not allow one to conclude that there is no such thing. One still has to discover what the right account of it is.

If consciousness is an illusion, what experiences the illusion?

it just needs to happen to be the case that my mouth emits a particular sequence of vibrations in the air

This falls foul of the GAZP v. GLUT thing. It cannot "just happen to be the case". When you pull out for attention the case where a random process generates something that appears to be about consciousness, out of all the other random strings, you've used your own concept of consciousness to do that.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-06-08T19:02:53.170Z · LW(p) · GW(p)

I've read GLUT. Have you read The Zombie Preacher of Somerset?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-06-10T09:01:39.130Z · LW(p) · GW(p)

Have you read The Zombie Preacher of Somerset?

I think so; at least, I have now. (I don't know why someone would downvote your comment, it wasn't me.) So, something went wrong in his head, to the point that asking "was he, or was he not, conscious" is too abstract a question to ask. Nowadays, we'd want to do science to someone like that, to try to find out what was physically going on.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-06-10T18:03:36.972Z · LW(p) · GW(p)

Sure, I'm happy with that interpretation.

comment by Juno_Watt · 2013-06-08T13:13:45.610Z · LW(p) · GW(p)

I don't need to be conscious to "talk about consciousness":

That is not obvious. You do need to be a language-user to use language, you do need to know English to communicate in English, and so on. If consciousness involves things like self-reflection and volition, you do need to be conscious to intentionally use language to express your reflections on your own consciousness.

comment by nshepperd · 2013-06-08T01:02:34.168Z · LW(p) · GW(p)

In the same way that a philosophy paper does... yes. Of course, the rock is just a medium for your attempt at communication.

Replies from: Randaly
comment by Randaly · 2013-06-08T01:30:08.453Z · LW(p) · GW(p)

I write a computer program that outputs every possible sequence of 16 characters to a different monitor. Is the monitor which outputs 'I am conscious' talking about consciousness in the same way the rock is? Whose attempt at communication is it a medium for?
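For concreteness, a sketch of that enumeration (the alphabet, the ordering, and the padding of the sentence to 16 characters are my own arbitrary choices): every string appears at a position fixed purely by the ordering, so nothing inside the program distinguishes the "conscious" one from its neighbours.

```python
import string

ALPHABET = string.ascii_uppercase + string.ascii_lowercase + " ."
LENGTH = 16

def index_of(s):
    """Position of a 16-character string in the lexicographic enumeration
    over ALPHABET -- determined by the ordering alone, not by meaning."""
    assert len(s) == LENGTH
    idx = 0
    for ch in s:
        idx = idx * len(ALPHABET) + ALPHABET.index(ch)
    return idx

# Enumerating all len(ALPHABET)**16 strings is infeasible, but we can say
# exactly which monitor would display this one:
print(index_of("I am conscious.."))  # "I am conscious." padded to 16 characters
```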

Replies from: nshepperd, ialdabaoth
comment by nshepperd · 2013-06-08T05:21:02.371Z · LW(p) · GW(p)

Your decision to point out the particular monitor displaying this message as an example of something imparts information about your mental state in exactly the same way that your decision to pick a particular sequence of 16 characters out of platonia to engrave on a rock does.

See also: GAZP vs. GLUT, on GLUTs.

comment by ialdabaoth · 2013-06-08T01:35:04.539Z · LW(p) · GW(p)

Whose attempt at communication is it a medium for?

The reader's. Pareidolia is a signal-processing system's attempt to find a signal.

On a long enough timeline, all random noise generators become hidden word puzzles.

comment by Estarlio · 2013-06-08T15:14:49.175Z · LW(p) · GW(p)

Why would we have these modules that seem quite complex, and likely to negatively affect fitness (thinking's expensive), if they don't do anything? What are the odds of this becoming prevalent without a favourable selection pressure?

Replies from: ialdabaoth, TheOtherDave
comment by ialdabaoth · 2013-06-08T20:19:10.423Z · LW(p) · GW(p)

High, if they happen to be foundational.

Sometimes you get spandrels, and sometimes you get systems built on foundations that are no longer what we would call "adaptive", but that can't be removed without crashing systems that are adaptive.

comment by TheOtherDave · 2013-06-08T19:04:03.120Z · LW(p) · GW(p)

Evo-psych just-so stories are cheap.

Here's one: it turns out that ascribing consistent identity to nominal entities is a side-effect of one of the most easily constructed implementations of "predict the behavior of my environment." Predicting the behavior of my environment is enormously useful, so the first mutant to construct this implementation had a huge advantage. Pretty soon everyone was doing it, and competing for who could do it best, and we had foreclosed the evolutionary paths that allowed environmental prediction without identity-ascribing. So the selection pressure for environmental prediction also produced (as an incidental side-effect) selection pressure for identity-ascribing, despite the identity-ascribing itself being basically useless, and here we are.

I have no idea if that story is true or not; I'm not sure what I'd expect to see differentially were it true or false. My point is more that I'm skeptical of "why would our brains do this if it weren't a useful thing to do?" as a reason for believing that everything my brain does is useful.

comment by TheOtherDave · 2013-06-06T13:17:07.976Z · LW(p) · GW(p)

(nods) Yeah, OK. Take 2.

It's also broadly similar to the difference between explicit and implicit knowledge. Have you ever practiced a skill enough that it goes from being something where you hold the "outline" of the skill in explicit memory as you perform it, to being something where you simply perform it without that "outline"? For example, driving to an unfamiliar location and thinking "ok, turn right here, turn left here" vs. just turning in the correct direction at each intersection, or something similar to that?

Replies from: CCC
comment by CCC · 2013-06-08T11:12:27.366Z · LW(p) · GW(p)

Yes, I have. Driving is such a skill; when I was first learning to drive, I had to think about driving ("...need to change gear, which was the clutch again? Ordered CBA, so on the left..."). Now that I am more practiced, I can just think about changing gear and change gear, without having to examine my actions in so much detail. Which allows my internal monologue to wander off in other directions.

On a couple of occasions, as a result of this thread, I've tried just quietening down my internal monologue - just saying nothing for a bit - and observing my own thought processes. I find that the result is that I pay a lot more attention to audio cues - if I hear a bird in the distance, I picture a bird. There are associations going on inside my head that I'd never paid much attention to before.

comment by ESRogs · 2013-06-08T01:20:25.343Z · LW(p) · GW(p)

Is this still true under significant influence of alcohol?

Replies from: CCC
comment by CCC · 2013-06-08T11:13:48.024Z · LW(p) · GW(p)

I wouldn't know, I don't drink alcohol.

Replies from: ESRogs
comment by ESRogs · 2013-06-09T07:01:00.673Z · LW(p) · GW(p)

Well, if you ever did want to experience what TheOtherDave describes, that might be a good way to induce it.

Replies from: CCC
comment by CCC · 2013-06-09T18:02:27.130Z · LW(p) · GW(p)

I've found I can quiet my internal monologue if I try. (It's tricky, though; the monologue starts up again at the slightest provocation - I try to observe my own thought processes without the monologue, and as soon as something odd happens, the internal monologue says "That's odd... ooops.")

I'm not sure if I can talk without the monologue automatically starting up again, but I'll try that first.

comment by Armok_GoB · 2013-06-10T21:08:21.826Z · LW(p) · GW(p)

I was going to add another data point, but I'm not sure the one I got can even be called that: I have no consistent memory on this subject. I am notoriously horrible at luminosity and introspection. When I do try to ask my brain, I receive a model/metaphor based on what I already know of neuroscience, which may or may not contain data I couldn't access otherwise, and which is presented as a machine I can manipulate in the hopes of trying to manipulate the states of distant brains. The machine is clearly based on whatever concepts happen to be primed, and the results would probably be completely different in every way if I tried this an hour later. Note that the usage of the word "I" here is inconsistent and ill-defined. This might be related to the fact that this brain is self-diagnosed with possible ego-death (in the good way).

Edit: it is also noticeable that, as seems to be the case with most attempts at introspection, the act of observation strongly and adversely influences the functioning of the relevant circuitry, in this case heavily altering my speech patterns.

Replies from: hylleddin
comment by hylleddin · 2013-06-13T01:09:53.579Z · LW(p) · GW(p)

Huh. The way you describe attempting introspection is exactly the way our brain behaves when we try to access any personal memories outside of working memory. This doesn't seem to be as effective as whatever the typical way is, as our personal memory is notoriously atrocious compared with others'.

I don't seem to have any sort of ego death. Vigil might have something similar, though.

Replies from: Armok_GoB
comment by Armok_GoB · 2013-06-13T16:24:35.028Z · LW(p) · GW(p)

Hmm, this seems related to another datapoint: reportedly, when I'm asked about my current mood while distracted, I answer "I can't remember".

A more tenuously related datapoint is that in fiction, I try to design BMIs around emulating having memorized GLUTs.

And another thing, come to think of it: I do have abnormal memory function in a bunch of ways.

Basically; maybe a much larger chunk of my cognition passes through memory machinery for some reason?

Replies from: hylleddin, army1987
comment by hylleddin · 2013-06-13T18:01:54.772Z · LW(p) · GW(p)

A more tenuously related datapoint is that in fiction, I try to design BMIs around emulating having memorized GLUTs.

What are GLUTs? I'm guessing you're not talking about Glucose Transporters.

Basically; maybe a much larger chunk of my cognition passes through memory machinery for some reason?

This seems like a plausible hypothesis. Alternatively, perhaps your working memory is less differentiated from your long-term memory.

Hmm, this seems related to another datapoint: reportedly, when I'm asked about my current mood while distracted, I answer "I can't remember".

Hm. I have the same reaction if I'm asked what I'm thinking about, but I don't think it's because my thoughts are running through my long-term memory, so much as my train of thought usually gets flushed out of working memory when other people are talking.

Replies from: Armok_GoB, TheOtherDave
comment by Armok_GoB · 2013-06-13T18:42:50.030Z · LW(p) · GW(p)

GLUT=Giant Look-Up Table. Basically, implementing multiplication by memorizing the multiplication tables up to 2 147 483 647.
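As a toy illustration (the size cap is mine; the full table up to 2 147 483 647 would of course be absurdly large): a GLUT trades all computation for retrieval, so "multiplying" is just looking the answer up.

```python
# A toy Giant Look-Up Table for multiplication: every answer is precomputed,
# so answering a query involves retrieval only, no arithmetic at query time.
LIMIT = 100  # a full table up to 2_147_483_647 would need ~4.6e18 entries

GLUT = {(a, b): a * b for a in range(LIMIT + 1) for b in range(LIMIT + 1)}

def glut_multiply(a, b):
    """Look the product up instead of computing it."""
    return GLUT[(a, b)]

assert glut_multiply(12, 7) == 84
```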

Hmm, that's an interesting theory. They are not necessarily mutually exclusive.

And no, I'm not talking about trying to remember what happened a few seconds ago. I mean direct sensory experiences; as in, someone holds up 3 fingers in the darkness and asks "how many fingers am I holding up right now" and I answer "I can't remember" instead of "I can't see".

comment by TheOtherDave · 2013-06-13T18:04:54.836Z · LW(p) · GW(p)

What are GLUTs?

Giant Look-Up Table

comment by A1987dM (army1987) · 2013-06-15T07:28:17.445Z · LW(p) · GW(p)

A more tenuously related datapoint is that in fiction, I try to design BMIs around emulating having memorized GLUTs.

What are BMIs? I'm guessing you're not talking about body mass indexes.

:-)

Replies from: Armok_GoB
comment by Armok_GoB · 2013-06-15T15:58:05.983Z · LW(p) · GW(p)

Brain-machine interface.

comment by A1987dM (army1987) · 2013-06-09T12:55:18.849Z · LW(p) · GW(p)

One way I can try is this: have you ever experienced the difference between "I know what I'm going to say, and here I am saying it" and "words are coming out of my mouth, and I'm kind of surprised by what I'm hearing myself say"?

BTW, my internal monologue usually sounds quite different from what I actually say in most casual situations: for example, it uses less dialectal/non-standard language and more technical terms. (IOW, it resembles the way I write more than the way I speak. So, "I know what I'm going to say, and here I am saying it" is my default state when writing, and "words are coming out of my mouth, and I'm kind of surprised by what I'm hearing myself say" is the state I'm most often in when speaking.) Anyone else find the same?

Replies from: OrphanWilde
comment by OrphanWilde · 2013-06-12T16:25:37.033Z · LW(p) · GW(p)

That's pretty close to how I operate, except the words are more like the skeletons of the thoughts than the thoughts themselves, stripped of all the internal connotation and imagery that provided 99% of the internal meaning.

comment by Bobertron · 2013-06-06T22:10:17.631Z · LW(p) · GW(p)

So I know what it's like to be the voice in my head, and what it's like not to be.

Well, which one do you prefer?

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-07T01:38:27.362Z · LW(p) · GW(p)

Oh, that's hard. The latter was awful, but of course most of that was due to all the other crap that was going on at the time. If I take my best shot at adjusting for that... well, I am most comfortable being the voice in my head. But not-being the voice in my head has an uncomfortable gloriousness associated with it. I doubt the latter is sustainable, though.

comment by Will_Newsome · 2013-06-10T12:19:40.249Z · LW(p) · GW(p)

When you're playing a sport... wait, maybe you don't... okay, when you're playing an instrum—hm. Surely there is a kinesthetic skill you occasionally perform, during which your locus of identity is not in your articulatory loop? (If not, fixing that might be high value?) And you can imagine being in states similar to that much of the time? I would imagine intense computer programming sessions would be more kinesthetic than verbal. Comment linked to hints at what my default thinking process is like.

Replies from: khafra
comment by khafra · 2013-06-10T15:13:44.373Z · LW(p) · GW(p)

When I'm playing music or practicing martial arts, and I'm doing it well, I'm usually in a state of flow--not exactly self-aware in the way I usually think of it.

When I'm working inside a computer or motorcycle, I think I'm less self-aware, and what I'm aware of is my manipulating actuators, and the objects that I need to manipulate, and what I need to do to them.

When I'm sitting in my armchair, thinking "who am I?" this is almost entirely symbolic, and I feel more self-aware than at the other times.

So, I think having my locus of identity in my articulatory loop is correlated with having a strong sense of identity.

I'm not sure whether my sense of identity would be weaker there, and stronger in a state of kinesthetic flow, if I spent more time sparring than sitting.

comment by Nisan · 2013-06-18T17:24:50.314Z · LW(p) · GW(p)

I wouldn't want to identify with the voice in my head. It can only think one thought at a time; it's slow.

Replies from: CCC
comment by CCC · 2013-06-18T18:38:11.552Z · LW(p) · GW(p)

How many things can you think of at once? I'm curious now.

Replies from: Nisan
comment by Nisan · 2013-06-19T16:29:33.451Z · LW(p) · GW(p)

I'm not sure how to answer that question. But when I think verbally I often lose track of the bigger picture of what I'm doing and get bogged down on details or tangents.

comment by Desrtopa · 2013-06-12T15:52:18.698Z · LW(p) · GW(p)

...I have difficulty imagining what it would be like to be someone who isn't the little voice in their own head, though. Seriously, who's posting that comment?

I play other people's voices through my head as I imagine what they would say (or are saying, when I interpret text,) but I don't have my own voice in my head as an internal monologue, and I think of "myself" as the conductor, which directs all the voices.

Replies from: hylleddin
comment by hylleddin · 2013-06-13T00:21:48.412Z · LW(p) · GW(p)

What happens when you are not thinking about what anyone else is saying or would say?

Replies from: Desrtopa
comment by Desrtopa · 2013-06-13T17:42:26.435Z · LW(p) · GW(p)

I think in terms of ideas and impulses, not voices. I can describe an impulse as if it had been expressed in words, but when it's going through my head, it's not.

I'd be kind of surprised if people who have internal monologues need an inner voice telling them "I'm so angry, I feel like throwing something!" in order to recognize that they feel angry and have an urge to throw something. I just recognize urges directly, including ones which are more subtle and don't need to be expressed externally, without needing to mediate them through language.

It definitely hasn't been my experience that not thinking in terms of a distinct inner "voice" makes it hard for me to pin down my thoughts; I have a much easier time following my own thought processes than most people I know.

Replies from: hylleddin, NancyLebovitz, CCC
comment by hylleddin · 2013-06-13T18:14:56.757Z · LW(p) · GW(p)

I'd be kind of surprised if people who have internal monologues need an inner voice telling them "I'm so angry, I feel like throwing something!" in order to recognize that they feel angry and have an urge to throw something. I just recognize urges directly, including ones which are more subtle and don't need to be expressed externally, without needing to mediate them through language.

In our case at least, you are correct that we don't need to vocalize impulses. Emotions and urges seem to run on a different, concurrent modality.

Do ideas and impulses both use the same modality for you?

Replies from: Desrtopa
comment by Desrtopa · 2013-06-13T18:45:15.854Z · LW(p) · GW(p)

Maybe not quite the same, but the difference feels smaller than that between impulse and language.

To me, words are what I need to communicate with other people, not something I need to represent complex ideas within my own head.

I can represent a voice in my head if I choose to, but I don't find much use for it.

comment by NancyLebovitz · 2013-06-19T13:04:35.654Z · LW(p) · GW(p)

Not quite the same thing, but I've discovered that "I feel ragged around the edges" is my internal code for "I need B12".

One part of therapy for some people is giving them a vocabulary for their emotions.

comment by CCC · 2013-06-13T19:18:09.861Z · LW(p) · GW(p)

I'd be kind of surprised if people who have internal monologues need an inner voice telling them "I'm so angry, I feel like throwing something!" in order to recognize that they feel angry and have an urge to throw something.

I can recognise that I'm angry without the voice. When I'm angry, the inner voice will often be saying unflattering things about the object of my anger; something along the lines of "Aaaaaargh, this is so frustrating! I wish it would just work like it's supposed to!" Wordless internal angry growls may also happen.

comment by ialdabaoth · 2013-06-10T12:52:26.352Z · LW(p) · GW(p)

It's something like watching a movie. You can see hands typing and words appearing on the screen, but you aren't precisely thinking them. You can feel lips moving and hear words forming in the air, but you aren't precisely thinking them. They're just things your body is doing, like walking. When you walk, you don't consciously think of each muscle to move, do you? Most of the time you don't even think about putting one foot in front of the other; you just think about where you're going (if that) and your motor control does the rest.

For some people, verbal articulation works the same way. Words get formed, maybe even in response to other peoples' words, but it's not something you're consciously acting on; those processes are running on their own without conscious input.

Replies from: CCC
comment by CCC · 2013-06-13T09:33:47.973Z · LW(p) · GW(p)

I find this very strange.

When I walk, yes, I don't consciously think of every muscle; but I do decide to walk. I decide my destination, I decide my route. (I may, if distracted, fall by force of habit into a default route; on noticing this, I can immediately override).

So... for someone without the internal monologue... how much do you decide about what you say? Do you just decide what subject to speak about, what opinions to express, and leave the exact phrasing up to the autopilot? Or do you not even decide that - do you sit there and enjoy the taste of ice cream while letting the conversation run entirely by itself?

Replies from: PhilR
comment by PhilR · 2013-06-13T10:16:24.808Z · LW(p) · GW(p)

Didn't think this was going to be my first contribution to LessWrong, but here goes (hi, everybody, I'm Phil!)

I came to what I like to think was a realisation useful to my psychological health a few months ago when I was invited to realise that there is more to me than my inner monologue. That is, I came to understand that identifying myself as only the little voice in my head was not good for me in any sense. For one thing, my body is not part of my inner monologue, ergo I was a fat guy, because I didn't identify with it and therefore didn't care what I fed it on. For another, one of the things I explicitly excluded from my identity was the subprocess that talks to people. I had (and still have) an internal monologue, but it was at best only advisory to the talking process, so you can count me as one of the people for whom conversation is not something I'm consciously acting on. Result: I didn't consider the person people meet and talk to to be "me", but (as I came to understand), nevertheless I am held responsible for everything he says and does.

My approach to this was somewhat luminous avant (ma lecture de) la lettre: I now construe my identity as consisting of at least two sub-personalities. There is one for my inner monologue, and one for the version of me that people get to meet and talk to. I call them Al and Greg, respectively, so that by giving them names I hopefully remember that neither alone is Phil. So, to answer CCC's question: Al is Greg's lawyer, and Greg is Al's PR man. When I'm alone, I'm mostly Al, cogitating and opining and whatnot to the wall, with the occasional burst of non-verbal input from Greg that amounts to "That's not going to play in (Peoria|the office|LessWrong comment threads)". On the other hand, when other people are around, I'm mostly Greg, conversating in ways that Al would never have thought of, and getting closer and closer to an impersonation of Robin Williams depending on prettiness and proximity of the ladies in the room. Al could in theory sit back and let Greg do his thing, but he's usually too busy facepalming or yelling "SHUT UP SHUT UP SHUT UP SHUT UP" in a way that I can't hear until I get alone again.

The problem I used to have was that I was all on Al's side. I'd berate myself (that is, I'd identify with Al berating Greg) incessantly for paranoid interpretations of the way people reacted to what I said, without ever noticing that, y'know what, people do generally seem to like Greg, and Greg is also me.

comment by Ratcourse · 2013-06-10T14:28:37.093Z · LW(p) · GW(p)

Single data point, but: I can alternate between inner monologue (heard [in somebody else's voice, not mine(!)]) and no monologue (mainly social activity - say stuff, then catch myself saying it and keep going) - stuff just happens. When the inner monologue is present, it seems I'm constructing in real time what I imagine the future to be and then adapting to that. I can feel as if my body moved without moving it, but I don't use that for thinking (mainly kinesthetic imagination or whatever). I can force myself to see images and, at the fringe, close to sleep, can make up symphonies in my mind, but I don't use them to think.

comment by Estarlio · 2013-06-18T19:11:59.342Z · LW(p) · GW(p)

Who's speaking the voice in your head? Seems like another layer of abstraction.

Replies from: gwern
comment by gwern · 2013-06-18T20:49:00.031Z · LW(p) · GW(p)

Obviously the speaker is the homunculus that makes Eliezer conscious rather than a p-zombie.

comment by Baughn · 2014-01-21T17:04:20.289Z · LW(p) · GW(p)

Who's posting that comment?

A collective of neural hardware collectively calling itself "Baughn". Everyone gets some input.

comment by CCC · 2013-06-05T07:36:26.908Z · LW(p) · GW(p)

I have an internal monologue. It's a bit like a narrator in my head, narrating my thoughts.

I think - and this is highly speculative on my part - that it's a sign of thinking mainly with the part of the brain that handles language. Whenever I take one of those questionnaires designed to tell whether I use mainly the left or right side of my brain, I land very heavily on the left side - analytical, linguistic, mathematical. I can use the other side if I want to; but I find it surprisingly easy to become almost a caricature of a left-brain thinker.

My internal monologue quite probably restricts me to (mainly) ideas that are easily expressed in English. Up until now, I could see this as a weakness, but I couldn't see any easy way around it. (One advantage of the internal monologue, on the other hand, is that I usually find it easy to speak my thoughts out loud, because they're already in word form.)

But now, you tell me that you don't seem to have an internal monologue. Does this mean that you can easily think of things that are not easily expressed in English?

Replies from: Baughn
comment by Baughn · 2013-06-06T00:46:51.301Z · LW(p) · GW(p)

Well.. I can easily think of things I subsequently have serious trouble expressing in any language, sure. Occasionally through reflection via visuals (or kinesthetics, or..), but more often not using such modalities at all.

(See sibling post)

Replies from: arundelo, CCC
comment by CCC · 2013-06-06T08:45:51.991Z · LW(p) · GW(p)

Okay, visual I can understand. I don't use it often, but I do use it on occasion. Kinesthetic, I use even less often, but again I can more-or-less imagine how that works. (Incidentally, I also have a lot of trouble catching a thrown object. This may be related.)

But this 'no modalities at all'... this intrigues me. How does it work?

All I know is some ways in which it doesn't work.

Replies from: Eugine_Nier, Baughn
comment by Eugine_Nier · 2013-06-07T02:55:11.847Z · LW(p) · GW(p)

But this 'no modalities at all'... this intrigues me. How does it work?

I can't speak for Baughn, but as for myself, sometimes it feels like I know ahead of time what I'm going to say as my inner voice, and sometimes this results in me not actually bothering to say it.

comment by Baughn · 2014-01-10T16:06:35.620Z · LW(p) · GW(p)

I went on vacation during this discussion, and completely lost track of it in the process - oops. It's an interesting question, though. Let me try to answer.

First off, using a sensory modality for the purpose of thinking. That's something I do, sure enough; for instance, right now I'm "hearing" what I'm saying at the same time as I'm writing it. Occasionally, if I'm unsure of how to phrase something, I'll quickly loop through a few options; more often, I'll do that without bothering with the "hearing" part.

When thinking about physical objects, sometimes I'll imagine them visually. Sometimes I won't bother.

For planning, etc. I never bother - there's no modality that seems useful.

That's not to say I don't have an experience of thinking. I'm going to explain this in terms of a model of thought[1] that's been handy for me (because it seems to fit me internally, and also because it's handy for models in fiction-writing where I'm modifying human minds), but keep in mind that there is a very good chance it's completely wrong. You might still be able to translate it to something that makes sense to you.

..basically, the workspace model of consciousness combined with a semi-modular brain architecture. That is to say, where the human mind consists of a large number of semi-independent modules, and consciousness is what happens when those modules are all talking to each other using a central workspace. They can also go off and do their own thing, in which case they're subconscious.

Now, some of the major modules here are sensory. For good reason; being aware of your environment is important. It's not terribly surprising, then, that the ability to loop information back - feeding internal data into the sensory modules, using their (massive) computational power to massage it - is useful, though it also involves what would be hallucinations if I wasn't fully aware it's not real. It's sufficiently useful that, well, it seems like a lot of people don't notice there's anything else going on.
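A toy rendering of that workspace picture (the module names, their trigger rules, and the one-slot workspace are all inventions of mine for illustration, not claims about real brains): each module reads whatever is currently broadcast in the shared workspace and may post something back, and the "conscious" content at any moment is simply whatever occupies the workspace.

```python
from collections import deque

class Module:
    """A semi-independent processor: reads the current broadcast and may
    post a new message to the shared workspace."""
    def __init__(self, name, react):
        self.name = name
        self.react = react  # maps broadcast text -> reply text, or None

    def step(self, broadcast):
        return self.react(broadcast)

workspace = deque(maxlen=1)           # whatever sits here is globally available
workspace.append("loud noise outside")

modules = [
    Module("hearing",  lambda msg: "sounded like a car horn" if "noise" in msg else None),
    Module("planning", lambda msg: "check the window" if "car horn" in msg else None),
    Module("motor",    lambda msg: "walking to the window" if "window" in msg else None),
]

for _ in range(3):                     # a few cycles of modules talking via the workspace
    broadcast = workspace[0]
    for m in modules:
        reply = m.step(broadcast)
        if reply is not None:
            workspace.append(reply)    # the new message replaces the old broadcast
            print(f"{m.name}: {reply}")
            break
```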

Non-sensory modes of thought, now... sensory modes are frequently useful, but not always. When they aren't, they're noise. In that case - and I didn't quite realise that was going on until now - I'm not just not hallucinating an internal monologue, but in fact entirely disconnecting my senses from my conscious experience. It's a bit hard to tell, since they're naturally right there if I check, but I can be extremely easy to surprise at times.

Instead, I have an experience of... everything else. All the modules normally involved with thinking, except the sensory ones. Well, probably not all of them at once, but missing the sensory modules appears to be a sufficiently large outlier that the normal churn becomes insignificant...

Did that help? Hm. Maybe if you think about said "churn"; it's not like you always use every possible method of thought you're capable of, at the same time. I'm just including sensory modalities in the list of hot-swappable ones?

...

This is hard.

One more example, I suppose. I mentioned that, while I was writing this, I hallucinated my voice reading it; this appears to be necessary to actually writing. Not for deciding on the meaning I'm trying to get across, but in order to serialise it as English. Not quite sure what's going on there, since I don't seem to be doing it ahead of time - I'm doing it word by word.

1: https://docs.google.com/document/d/1yArXzSQUqkSr_eBd6JhIECdUKQoWyUaPHh_qz7S9n54/edit#heading=h.ug167zx6z472 may or may not be useful in figuring out what I'm talking about; it's a somewhat more long-winded use of the model. It also has enormous macroplot spoilers for the Death Game SAO fanfic, which.. you probably don't care about.

Replies from: CCC
comment by CCC · 2014-01-21T08:31:31.300Z · LW(p) · GW(p)

Okay, let me summarise your statement so as to ensure that I understand it correctly.

In short, you have a number of internal functional modules in the brain; each module has a speciality. There will be, for example, a module for sight; a module for hearing; a module for language, and so on. Your thoughts consist - almost entirely - of these modules exchanging information in some sort of central space.

The modules are, in effect, having a chat.

Now, you can swap these modules out quite a bit. When you're planning what to type, for example, it seems you run that through your 'hearing' module, in order to check that the word choice is correct; you know that this is not something which you are actually hearing, and thus are in no danger of treating it as a hallucination, but as a side effect of this your hearing module isn't running through the actual input from your ears, and you may be missing something that someone else is saying to you. (I imagine that sufficiently loud or out-of-place noises are still wired directly to your survival subsystem, though, and will get your attention as normal).

But you don't have to use your hearing module to think with. Or your sight module. You have other modules which can do the thinking, even when those modules have nothing to do. When your sensory modules have nothing to add, you can and do shut them out of the main circuit, ignoring any non-urgent input from those modules.

Your modules communicate by some means which are somehow independent of language, and your thoughts must be translated through your hearing module (which seems to have your language module buried inside it) in order to be described in English.


This is very different to how I think. I have one major module - the language module (not the hearing module, there's no audio component to this, just a direct language model) which does almost all my thinking. Other modules can be used, but it's like an occasional illustration in a book - very much not the main medium. (And also like an illustration in that it's usually visual, though not necessarily limited to two dimensions).

When it comes to my internal thoughts, all modules that are not my language model are unimportant in comparison. I suspect that some modules may be so neglected as to be near nonexistent, and I wonder what those modules could be.

My sensory modules appear to be input-only. I can ignore them, but I can't seem to consciously run other information into them. (I still dream, which I imagine indicates that I can subconsciously run other information through my sensory modules)


This leaves me with three questions:

  • Aside from your sensory modules, what other module(s) do you have?
  • Am I correct in thinking that you still require at least one module in order to think (but that can be any one module)?
  • When your modules share information, what form does that information take?

I imagine these will be difficult to translate to language, but I am very curious as to what your answers will be.

Replies from: Baughn
comment by Baughn · 2014-01-21T16:26:18.555Z · LW(p) · GW(p)

Your analysis is pretty much spot on.

It's interesting to me that you say your hearing and language modules are independent. I mean, it's reasonably obvious that this has to be possible - deaf people do have language - but it's absolutely impossible for me to separate the two, at least in one direction; I can't deal with language without 'hearing' it.

And I just checked; it doesn't appear I can multitask and examine non-language sounds while I'm using language, either. For comparison, I absolutely can (re)use e.g. visual modules while I'm writing this, although it gets really messy if I try to do so while remaining conscious of what they're doing - that's not actually required, though.

Aside from your sensory modules, what other module(s) do you have?

Well... my introspection isn't really good enough to tell, and it's really more of a zeroth-approximation model than something I have a lot of confidence in. That said, I suspect the question doesn't have an answer even in principle; that there's no clear border between two adjacent subsystems, so it depends on where you want to draw the line. It doesn't help that some elements of my thinking almost certainly only exist as a property of the communication between other systems, not as a physical piece of meat in itself, and I can't really tell which is which.

Am I correct in thinking that you still require at least one module in order to think (but that can be any one module)?

I think if it was just one, I wouldn't really be conscious of it. But that's not what you asked, so the answer is "Probably yes".

When your modules share information, what form does that information take?

I'm very tempted to say "conscious experience", here, but I have no real basis for that other than a hunch. I'm not sure I can give you a better answer, though. Feelings, visual input (or "hallucinations"), predictions of how people or physical systems will behave, plans - not embedded in any kind of visualization, just raw plans - etc. etc. And before you ask what that's like, it's a bit like asking what a Python dictionary feels like.. though emotions aren't much involved, at that level; those are separate.

The one common theme is that there's always at least one meta-level of thought associated. Not just "Here's a plan", but "Here's a plan, and oh by the way, here's what everyone else in the tightly knit community you like to call a brain thinks of the plan. In particular, "memory" here just pattern-matched it to something you read in a novel, which didn't work, but then again a different segment is pointing out that fictional evidence is fictional."

...without the words, of course.

So the various ideas get bounced back and forth between various segments of my mind, and that bouncing is what I'm aware of. Never the base idea, but all the thinking about the idea... well, it wouldn't really make sense to be "aware of the base idea" if I wasn't thinking about it.

Sight is something else again. It certainly feels like I'm aware of my entire visual field, but I'm at least half convinced that's an illusion. I'm in a prime position to fool myself about that.

Replies from: CCC
comment by CCC · 2014-01-21T18:52:37.082Z · LW(p) · GW(p)

It's interesting to me that you say your hearing and language modules are independent.

This may be related to the fact that I learnt to read at a very young age; when I read, I run my visual input through my language module; the visual model pre-processes the input to extract the words, which are then run through the language module directly.

At least, that's what I think is happening.

Running the language module without the hearing module a lot, and from a young age, probably helped quite a bit to separate the two.

Aside from your sensory modules, what other module(s) do you have?

Well... my introspection isn't really good enough to tell, and it's really more of a zeroth-approximation model than something I have a lot of confidence in. That said, I suspect the question doesn't have an answer even in principle; that there's no clear border between two adjacent subsystems, so it depends on where you want to draw the line. It doesn't help that some elements of my thinking almost certainly only exist as a property of the communication between other systems, not as a physical piece of meat in itself, and I can't really tell which is which.

Hmph. Disappointing, but thanks for answering the question.

I think I was hoping for more clearly defined modules than appears to be the case. Still, what's there is there.

When your modules share information, what form does that information take?

I'm very tempted to say "conscious experience", here, but I have no real basis for that other than a hunch. I'm not sure I can give you a better answer, though. Feelings, visual input (or "hallucinations"), predictions of how people or physical systems will behave, plans - not embedded in any kind of visualization, just raw plans - etc. etc. And before you ask what that's like, it's a bit like asking what a Python dictionary feels like.. though emotions aren't much involved, at that level; those are separate.

The one common theme is that there's always at least one meta-level of thought associated. Not just "Here's a plan", but "Here's a plan, and oh by the way, here's what everyone else in the tightly knit community you like to call a brain thinks of the plan. In particular, "memory" here just pattern-matched it to something you read in a novel, which didn't work, but then again a different segment is pointing out that fictional evidence is fictional."

...without the words, of course.

So the various ideas get bounced back and forth between various segments of my mind, and that bouncing is what I'm aware of. Never the base idea, but all the thinking about the idea... well, it wouldn't really make sense to be "aware of the base idea" if I wasn't thinking about it.

Now, this is interesting. I'm really going to have to go and think about this for a while. You have a kind of continual meta-commentary in your mind, thinking about what you're thinking, cross-referencing with other stuff... that seems like a useful talent to have.

It also seems that, by concentrating more on the individual modules and less on the inter-module communication, I pretty much entirely missed where most of your thinking happens.

One question comes to mind; you mention 'raw plans'. You've correctly predicted my obvious question - what raw plans feel like - but I still don't really have much of a sense of it, so I'd like to poke at that a bit if you don't mind.

So; how are these raw plans organised?

Let us say, for example, that you need to plan... oh, say, to travel to a library, return one set of books, and take out another. Would the plan be a series of steps arranged in order of completion, or a set of subgoals that need to be accomplished in order (subgoal one: find the car keys); or would the plan be simply a label saying 'LIBRARY PLAN' that connects to the memory of the last time you went on a similar errand?


As for me, I have a few different ways that I can formulate plans. For a routine errand, my plan consists of the goal (e.g. "I need to go and buy bread") and a number of habits (which, now that I think about it, hardly impinge on my conscious mind at all; if I think about it, I know where I plan to go to get bread, but the answer's routine enough that I don't usually bother). When driving, there are points at which I run a quick self-check ("do I need to buy bread today? Yes? Then I must turn into the shopping centre...")

For a less routine errand, my plan will consist of a number of steps to follow. These will be arranged in the order I expect to complete them, and I will (barring unexpected developments or the failure of any step) follow the steps in order as specified. If I were to write down the steps on paper, they would appear horrendously under-specified to a neutral observer; but in the privacy of my own head, I know exactly which shop I mean when I simply specify 'the shop'; both the denotations and connotations intended by every word in my head are there as part of the word.

If the plan is one that I particularly look forward to fulfilling, I may run through it repeatedly, particularly the desirable parts ("...that icecream is going to taste so good..."). This all runs through my language system, of course.


Sight is something else again. It certainly feels like I'm aware of my entire visual field, but I'm at least half convinced that's an illusion. I'm in a prime position to fool myself about that.

I have a vague memory of having read something that suggested that humans are not aware of their entire visual field, but that there is a common illusion that people are, agreeing with your hypothesis here. I vaguely suspect that it might have been in one of the 'Science of the Discworld' books, but I am uncertain.

comment by Eugine_Nier · 2013-06-06T05:14:05.194Z · LW(p) · GW(p)

Obligatory link to Yvain's article on the topic.

comment by NancyLebovitz · 2013-06-05T03:29:48.637Z · LW(p) · GW(p)

A very high proportion of what I call thinking is me talking to myself. I have some ability to imagine sounds and images, but it's pretty limited. I'm better with kinesthesia, but that's mostly for thinking about movement.

What's your internal experience composed of?

Replies from: Baughn
comment by Baughn · 2013-06-06T00:45:37.260Z · LW(p) · GW(p)

That varies.. quite a lot.

While I'm writing fiction there'll be dialogue, the characters' emotions and feelings, visuals of the scenery, point-of-view visuals (often multiple angles at the same time), motor actions, etc. It's a lot like lucid dreaming, only without the dreaming. Occasionally monologues, yes, but those don't really count; they're not mine.

While I'm writing this there is, yes, a monologue. One that's just-in-time, however; I don't normally bother to listen to a speech in my head before writing it down. Not for this kind of thing; more often for said fiction, where I'll do that to better understand how it reads.

Mostly I'm not writing anything, though.

Most of the time, I don't seem to have any particular internal experience at all. I just do whatever it is I'm doing, and experience that, but unless it's relatively complex there doesn't seem to be much call for pre-action reflections. (Well, of course I still feel emotions and such, but.. nothing monologue-like, in any modality. Hope that makes sense.)

A lot of the time I have (am conscious of) thoughts that don't correspond to any sensory modality whatsoever. I have no idea how I'd explain those.

If I'm working on a computer program.. anything goes, but I'll typically borrow visual capacity to model graph structures and such. A lot of the modalities I'd use there, I don't really have words for, and it doesn't seem worthwhile to try inventing them; doing so usefully would turn this into a novel.

Replies from: CCC
comment by CCC · 2013-06-06T08:56:01.334Z · LW(p) · GW(p)

While I'm writing this there is, yes, a monologue. One that's just-in-time, however; I don't normally bother to listen to a speech in my head before writing it down.

That's the internal monologue. Mine is also often just-in-time (not always, of course). I can listen to it in my head a whole lot faster than I can talk, type, or write, so sometimes I'll start out just-in-time at the start of the sentence and then my internal monologue has to regularly wait for the typing/writing/speaking to catch up before I can continue.

For example, in this post, when I clicked the 'reply' button I had already planned out the first two sentences of the above post (before the first bracket). The contents of the first bracket were added when I got to the end of the second sentence, and then edited to add the 'of course'. The next sentence was added in sections, built up and then put down and occasionally re-edited as I went along (things like replacing 'on occasion' with 'sometimes').

Most of the time, I don't seem to have any particular internal experience at all. I just do whatever it is I'm doing, and experience that, but unless it's relatively complex there doesn't seem to be much call for pre-action reflections.

Hmmm. Living in the moment. I'm curious; how would you go about (say) planning for a camping trip? Not so much 'what would you do', but 'how would you think about it'?

comment by OrphanWilde · 2013-06-04T23:33:59.797Z · LW(p) · GW(p)

Can't speak for Nancy, but I think I know what she refers to.

Different people have different thought... processes, I guess is the word. My brother's thought process is, by his description, functional; he assigns parts of his mind tasks, and gets the results back in a stack. (He's pretty good at multi-tasking, as a result.) My own thought process is, as Nancy specifies, an internal monologue; I'm literally talking to myself. (Although the conversation is only partially English. It's kind of like... 4Chan. Each "line" of dialogue is associated with an "image" (in some cases each word is, depending on the complexity of the concept encoded in it), which is an abstract conceptualization.) If you've ever read a flow-of-consciousness book, that's kind of like a low-resolution version of what's going on in my brain, and, I presume, hers.

I've actually discovered at least one other "mode" I can switch my brain into - I call it Visual Mode. Whereas normally my attention is very tunnel vision-ish (I can track only one object reliably), I can expand my consciousness (at the cost of eliminating the flow-of-consciousness that is usually my mind) and be capable of tracking multiple objects in my field of vision. (I cannot, for some reason, actually move my eyes while in this state; it breaks my concentration and returns me to a "normal" mode of thought.) I'm capable of thinking in this state, but oddly, incapable of tracking or remembering what those thoughts are; I can sustain a full conversation which I will not remember, at all, later.

Replies from: Baughn, FeepingCreature
comment by Baughn · 2013-06-06T00:51:10.795Z · LW(p) · GW(p)

Hm, the obvious question there is: "How do you know you can sustain a full conversation, if you don't remember it at all later?" (..edit: With other people? Er, right. Somehow I was assuming it was an internal conversation.)

I've got some idea what you're talking about, though - focusing my consciousness entirely on sensory input. More useful outside of cities, and I don't have any kind of associated amnesia, but it seems similar to how I'd describe the state otherwise.

Neither your brother's nor your own thought processes otherwise seem to be any kind of match for mine. It's interesting that there's this much variation, really.

Otherwise.. see sibling post for more details.

comment by FeepingCreature · 2013-06-07T23:21:22.245Z · LW(p) · GW(p)

I've actually discovered at least one other "mode" I can switch my brain into - I call it Visual Mode.

I can do a weaker version of this - basically, by telling my brain to "focus on the entire field of your perception" as if it was a single object. As far as I am aware, it doesn't do any of the mental effects you describe for me. It's very relaxing though.

comment by itaibn0 · 2013-06-06T22:15:53.638Z · LW(p) · GW(p)

Add one to the sample size. My thought process is also mostly lacking in sensory modality. My thoughts do have a large verbal component, but they are almost exclusively for planning things that I could potentially say or write.

Rather than trying to justify how this works to the others, I will instead ask my own questions: How can words help in creating thoughts? In order to generate a sentence in your head, surely you must already know what you want to say. And if you already know what you have to say, what's the point of saying it? I presume you cannot jump to the next thought without saying the previous one in full. With my own ability to generate sentences, that would be a crippling handicap.

Replies from: CCC, FeepingCreature
comment by CCC · 2013-06-13T09:46:13.032Z · LW(p) · GW(p)

How can words help in creating thoughts?

My thoughts are largely made up of words. Although some internal experimentation has shown that my brain can still work when the internal monologue is silent, I still associate 'thoughts' very, very strongly with 'internal monologue'.

I think that, while thoughts can exist without words, the words make the thoughts easier to remember; thus, the internal monologue is used as part of a 'write-to-long-term-storage' function. (I can write images and feelings as well; but words seem to be my default write-mode).

Also, the words - how shall I put this - the words solidify the thought. They turn the thought into something that I can then take and inspect for internal integrity. Something that I can check for errors; something that I can think about, instead of something that I can just think. Images can do the same, but take more working-memory space to hold and are thus harder to inspect as a whole.

I presume you cannot jump to the next thought without saying the previous one in full.

I don't think I've ever tried. I can generate sentences fast enough that it's not a significant delay, though. I suspect that this is simply due to long practice in sentence construction. (Also, if I'm not going to actually say it out loud, I don't generally bother to correct it if it's not grammatically correct).

comment by FeepingCreature · 2013-06-07T23:16:27.059Z · LW(p) · GW(p)

I presume you cannot jump to the next thought without saying the previous one in full.

Personally, I can do this to degrees. I can skip verbalizing a concept completely, but it feels like inserting a hiccup into my train of thought (pardon the mixed analogy). I can usually safely skip verbalizing all of it; that is, it feels like I have a mental monologue but upon reflection it went by too fast to actually be spoken language so I assume it was actually some precursor that did not require full auditory representation. I usually only use full monologues when planning conversations in advance or thinking about a hard problem.

As far as I can tell, the process helps me ensure consistency in my thoughts by making my train of thought easier to hold on to and recall, and also enables coherence checking by explicitly feeding my brain's output back into itself.

Replies from: itaibn0
comment by itaibn0 · 2013-06-10T14:33:21.630Z · LW(p) · GW(p)

Now I'm worrying that I might have been exaggerating. Although you are implicitly describing your thoughts as being verbal, they seem to work in a way similar to mine.

ETA: More information: I still believe I am less verbal than you. In particular, I believe my thoughts become less verbal when thinking about hard problems, rather than becoming more verbal as in your case. However, my statement about my verbal thoughts being "almost exclusively for planning things that I could potentially say or write" is a half-truth; a lot of it is more along the lines that sometimes when I have an interesting thought I imagine explaining it to someone else. Some confounding factors:

  • There is a continuum here from completely nonverbal to having connotations of various words and grammatical structures to being completely verbal. I'm not sure when it should count as having an internal monologue.

  • Asking myself whether a thought was verbal naturally leads me to create a verbalization of it, while not asking myself this creates a danger of not noticing a verbal thought.

  • I'm basing this a lot on introspection done while I am thinking about this discussion, which would make my thoughts more verbal.

comment by Qiaochu_Yuan · 2013-06-06T05:48:26.563Z · LW(p) · GW(p)

Wikipedia article. I'm really curious how you would describe your thoughts if you don't describe them as an internal monologue. Are you more of a visual thinker?

comment by [deleted] · 2013-06-05T22:43:33.875Z · LW(p) · GW(p)

When I think about stuff, often I imagine a voice speaking some of the thoughts. This seems to me to be a common, if not nearly universal, experience.

Replies from: BerryPick6, Kaj_Sotala, Baughn
comment by BerryPick6 · 2013-06-10T12:04:42.343Z · LW(p) · GW(p)

I only really think using voices. Whenever I read, if I'm not 'hearing' the words in my head, nothing stays in.

comment by Kaj_Sotala · 2013-06-10T11:46:39.570Z · LW(p) · GW(p)

Do you actually hear the voice? I often have words in my head when I think about things, but there isn't really an auditory component. It's just words in a more abstract form.

Replies from: None, tgb
comment by [deleted] · 2013-06-11T02:54:45.832Z · LW(p) · GW(p)

I wouldn't say I literally hear the voice; I can easily distinguish it from sounds I'm actually hearing. But the experience is definitely auditory, at least some of the time; I could tell you whether the voice is male or female, what accent they're speaking in (usually my own), how high or low the voice is, and so on.

I definitely also have non-auditory thoughts as well. Sometimes they're visual, sometimes they're spatial, and sometimes they don't seem to have any sensory-like component at all. (For what it's worth, visual and spatial thoughts are essential to the way I think about math.)

Replies from: syllogism
comment by syllogism · 2013-06-11T04:41:44.053Z · LW(p) · GW(p)

If you want to poke at this a bit, one way could be to test what sort of interferences disrupt different activities for you, compared to a friend.

I'm thinking of the bit in "Surely you're joking" where Feynman finds that he can't talk and maintain a mental counter at the same time, while a friend of his can -- because his friend's mental counter is visual.

Replies from: Baughn
comment by Baughn · 2014-01-21T17:30:33.521Z · LW(p) · GW(p)

Neat. I can do it both ways... actually, I can name at least four different ways of counting:

  • "Raw" counting, without any sensory component; really just a sense of magnitude. Seems to be a floating-point, with a really small number of bits; I usually lose track of the exact number by, oh, six.

  • Verbally. Interferes with talking, as you'd expect.

  • Visually, using actual 2/3D models of whatever I'm counting. No interference, but a strict upper limit, and interferes with seeing - well, usually the other way around. The upper limit still seems to be five-six picture elements, but I can arrange them in various ways to count higher; binary, for starters, but also geometrically or.. various ways.

  • Visually, using pictures of decimal numbers. That interferes with speaking when updating the number, but otherwise sticks around without any active maintenance, at least so long as I have my eyes closed. I'm still limited to five-six digits, though... either decimal or hexadecimal works. I could probably figure out a more efficient encoding if I worked at it.

comment by tgb · 2013-06-10T14:59:20.640Z · LW(p) · GW(p)

I, for one, actually hear the voice. It's quite clear. Not loud like an actual voice, but a "so loud I can't hear myself think" moment has never literally happened to me, since the voice seems more like it's on its own track, parallel to my actual hearing. I would never get it confused with actual sounds, though I can't really separate hearing it from making it to be sure of that.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2013-06-11T05:38:35.617Z · LW(p) · GW(p)

but a "so loud I can't hear myself think" moment has never literally happened to me since the voice seems more like its on its own track, parallel to my actual hearing.

That's interesting! Because I have definitely had "so loud I can't hear myself think" moments (even though I don't literally hear thoughts) - just two days ago, I had to ask somebody to stop talking for a while so that I could focus.

Replies from: tgb
comment by tgb · 2013-06-11T14:56:22.037Z · LW(p) · GW(p)

Being distracted is one thing - I mean literally not being able to hear my thoughts, in the manner that I might not be able to hear what you said if a jet was taking off nearby. This was to emphasize that even though I perceive them as sounds, there is 'something' different about them from sounds-from-ears that seems to prevent them from audibly mingling. Loud noises can still make me lose track of what I was thinking and break focus.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2013-06-11T16:03:47.066Z · LW(p) · GW(p)

Hmm. Now that I think of it, I'm not sure to what extent it was just distraction and to what extent a literal inability to hear my thoughts. Could've been exclusively one, or parts of both.

comment by Baughn · 2013-06-06T00:54:34.514Z · LW(p) · GW(p)

I added more detail in a sibling post, but it can't be that universal; I practically never do that at all, basically only for thoughts that are destined to actually be spoken. (Or written, etc.)

Actually, I believe I used to do so most of the time (...about twenty years ago, before the age of ten), but then made a concerted effort to stop doing so on the basis that pretending to speak aloud takes more time. Those memories may be inaccurate, though.

Replies from: ChristianKl
comment by ChristianKl · 2013-06-06T21:15:14.733Z · LW(p) · GW(p)

I added more detail in a sibling post, but it can't be that universal; I practically never do that at all, basically only for thoughts that are destined to actually be spoken.

It's very universal, but some people shut down their awareness of the process. It's like people who say they don't dream. They just don't remember it. Most people can't perceive their own heartbeat. It can take some effort to build awareness.

What's your internal reaction when someone insults you?

Replies from: itaibn0, Estarlio
comment by itaibn0 · 2013-06-06T22:21:43.369Z · LW(p) · GW(p)

You're claiming that you understand his thought better than he does. That is a severe accusation and is not epistemologically justified. Also, I can't recall off the top of my head any time somebody insulted me; I think my reaction would depend on the context, but I don't see why it would involve imagined words.

Replies from: ChristianKl
comment by ChristianKl · 2013-06-06T23:12:23.947Z · LW(p) · GW(p)

That is a severe accusation and is not epistemologically justified.

How do you know that there's no epistemological justification?

So, how do I know? Empirical experience at NLP seminars. At the beginning plenty of people say that they don't have an internal dialogue, that they can't view mental images, or that they can't perceive emotions within their own body.

It's something that usually gets fixed in a short amount of time.

Around two months ago I was chatting with a girl who had two voices in her head. One that did big-picture thinking and another that did analytic thinking. She herself wasn't consciously aware that one of the voices came from the left and the other from the right.

After I told her which voice came from which direction, she checked and I was right. I can't diagnose what Baughn does with internal dialogue in the same depth through online conversation, but there's nothing that stops me from putting forth general observations about people who believe that they have no internal dialogue until they are taught to perceive it.

I think my reaction would depend on the context, but I don't see why it will involve imagined words.

Yes, you don't see imagined words. That's kind of the point of words. You either hear them or don't hear them. If you try to see them you will fail. If you try to perceive your internal dialog that way you won't see any internal dialog.

But why did I pick that example? It's emotional. Being insulted frequently gets people to reflect on themselves and the other person. They might ask themselves: "Why did he do that?" or answer to themselves "No, he has no basis for making that claim." In addition, judgement is usually done via words.

I'm however not sure whether I can build up enough awareness in Baughn via text-based online conversation that he can pick up his mental dialog.

Also, I can't recall off the top of my head any time somebody insulted me,

If you don't have strong internal dialog it doesn't surprise me that you aren't good at recalling a type of event that usually goes with strong internal dialog.

Replies from: Baughn, NancyLebovitz
comment by Baughn · 2014-01-21T17:33:54.666Z · LW(p) · GW(p)

Hm~

Those are interesting claims, but I think you misunderstood a little. I do have an internal monologue, sometimes; I just don't bother to use it, a lot of the time. It depends on circumstances.

Replies from: ChristianKl
comment by ChristianKl · 2014-01-21T22:17:43.514Z · LW(p) · GW(p)

You moved in the span of half a year from: "I'm pretty sure I don't have a internal monologue, I don't know what the term is supposed to mean." to "I do have an internal monologue, sometimes".

That's basically my point. With a bit of direction, there's something in your mind that you could recognize as an internal monologue.

Of course once you recognize it, you aren't in the state anymore where you would say: "I'm pretty sure I don't have a internal monologue." That's typical for those kinds of issues.

I was basically right with my claim that "I'm pretty sure I don't have a internal monologue." is wrong, and did what it took for you to recognize it. itaibn0 claimed that the claim was epistemologically unjustified. It was substantiated and turned out to be right.

Replies from: Baughn
comment by Baughn · 2014-01-22T00:26:32.052Z · LW(p) · GW(p)

Actually, I would have made the same claim half a year ago. The only difference is that I have a different model of what the words "internal monologue" mean - that, and I've done some extra modelling and introspection for a novel.

Replies from: ChristianKl
comment by ChristianKl · 2014-01-22T13:27:33.559Z · LW(p) · GW(p)

Yes, now you have a mental model that allows you to believe "I do have an internal monologue, sometimes"; back then you didn't. What I wrote was intended to create that model in your mind.

To me it seems like it worked. It's also typical that people backport their mental models into the past when they remember what happened in the past.

comment by NancyLebovitz · 2013-06-09T14:31:06.762Z · LW(p) · GW(p)

So, how do I know? Empirical experience at NLP seminars. At the beginning plenty of people say that they don't have an internal dialogue, that they can't view mental images, or that they can't perceive emotions within their own body.

It's something that usually gets fixed in a short amount of time.

How does it get fixed?

Replies from: ChristianKl
comment by ChristianKl · 2013-06-10T04:51:56.012Z · LW(p) · GW(p)

First, different people use different systems with different underlying strengths. Some people, like Tesla, can visualize a chair, and the chair they visualize is perceived by them the same way as a real chair. You don't get someone who doesn't think he can visualize pictures to that level of visualization ability by doing a few tricks.

In general you do something that triggers a reaction in someone. You observe the person and when she has the image or has the dialog you stop and tell the person to focus their attention on it. There are cases where that's enough.

There are also cases where a person has a real reason why they repress a certain way of perception. A person with a strong emotional trauma might have completely stopped relating to emotions within their body to escape the emotional pain. Then it's necessary for the person to get into a state where they are resourceful enough to face the pain so that they can process it.

A third layer would consist of various suggestions that it's possible to perceive something new, both at a conscious level and at a deep metaphoric level.

comment by Estarlio · 2013-06-06T23:31:09.720Z · LW(p) · GW(p)

What's your internal reaction when someone insults you?

I feel like I'm floating. Adrenaline rush, the same feeling I used to get when fights were imminent as a kid.

Replies from: ChristianKl
comment by ChristianKl · 2013-06-06T23:32:38.074Z · LW(p) · GW(p)

How do you know how you want to respond to the insult? What mental strategy did you use the last time you were insulted?

Replies from: Estarlio
comment by Estarlio · 2013-06-07T00:12:46.726Z · LW(p) · GW(p)

I just do what I feel like. And my feelings are generally in line with my previous experiences with the other person. If I feel like they're a reasonable person and generally nice then I feel like giving them the benefit of the doubt; if I feel like they're a total toss-pot then I'm liable to fire back at them. There's so much cached thought that's felt rather than verbalised at that point that it's pretty much a reaction.

comment by FeepingCreature · 2013-06-07T23:27:05.063Z · LW(p) · GW(p)

Hijacking this thread to ask if anybody else experiences this - when I watch a movie told from the perspective of a single character or with a strong narrator, my internal monologue/narrative will be in that character's/narrator's tone of voice and expression for the next hour or two. Anybody else?

Replies from: Ratcourse, itaibn0, taelor, Ronak, shminux
comment by itaibn0 · 2013-06-08T15:18:16.567Z · LW(p) · GW(p)

I find that sometimes, after reading for a long time, the verbal components of my thoughts resemble the writing style of what I read.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-06-19T13:06:54.800Z · LW(p) · GW(p)

Sometimes, after reading something with a strong narrative voice, I'll want to think in the same style, but realize I can't match it.

comment by taelor · 2013-06-19T23:15:44.681Z · LW(p) · GW(p)

Not exactly what you are asking for, but I've found that if I spend an extended period of time (usually around a week) heavily interacting with a single person or group of people, I'll start mentally reading things in their voice(s).

comment by Ronak · 2013-06-10T17:40:18.319Z · LW(p) · GW(p)

While reading books. Always particular voices for every character. So much so that I can barely sit through adaptations of books I've read. And my opinion of a writer always drops a little bit when I meet him/her and the voice in my head just makes more sense for that style.

comment by Shmi (shminux) · 2013-06-07T23:32:20.944Z · LW(p) · GW(p)

Sure. Or after listening to a charismatic person for some time.

comment by Risto_Saarelma · 2013-06-06T11:06:40.722Z · LW(p) · GW(p)

Why is there that knee-jerk rejection of any effort to "overthink" pop culture?

Maybe people who are sensitive to social signaling unconsciously translate it into "I thought up this unobvious thing about this thing because I am smarter than you", and then write it off as being an asshole about stuff that's supposed to be communal fun?

Replies from: Osiris
comment by Osiris · 2013-06-06T13:06:24.869Z · LW(p) · GW(p)

It is not healthy to believe that every curtain hides an Evil Genius (I speak here as a person who lived in the USSR). Given the high failure rate of EVERY human work, I'd say that most secrets in the movie industry have to do with saving bad writing and poor execution with clever marketing and setting up other conflicts people could watch besides the pretty explosions. It's not about selling Imperialism and Decadence to a country that's been accused of both practically since its formation (sorry if you're American and only noticed these accusations now, in the 21st century), or trying to force people into some new world order-style government where a dictator takes care of every need. Though, I must admit that I wonder about Michael Bay's agenda sometimes...

Tony Stark isn't JUST a rich guy with a WMD. He messes up. He fails his friends and loved ones. He is in some way the lowest point in each of our lives, given some nobility. In spite of all those troubles, the fellow stands up and goes on with his life, gets better and tries to improve the world. David Wong seems to have missed the POINT of a couple of movies (how about the message of empowerment-through-determination in Captain America? The fellow must still earn his power as a "runt"), and even worse tries to hold conspiracy-theory thinking up as rationality.

So, maybe, the knee-jerk reaction is wise, because overanalyzing something made to entertain tends to be somewhat similar to seeing shapes in the clouds. Sometimes, Iron Man is just Iron Man.

Replies from: ChristianKl
comment by ChristianKl · 2013-06-06T21:05:52.305Z · LW(p) · GW(p)

You don't need to believe there was intent to spread negative values in order to conclude that spreading negative values is bad.

Replies from: Osiris
comment by Osiris · 2013-06-06T22:13:52.815Z · LW(p) · GW(p)

Hopefully, the positive values are greater in number than the negative ones, if one is not certain which ones are which--and I see quite a few positive values in recent superhero movies.

comment by linkhyrule5 · 2013-11-21T05:53:37.589Z · LW(p) · GW(p)

Seems to me that the problem is, well, precisely as stated: overthinking. It's the same problem as with close reading: look too closely at a sample of one and you'll start getting noise, things the author didn't intend and that were ultimately caused by, oh, what he had for breakfast on a Tuesday ten months ago and not some ominous plan.

Replies from: Desrtopa
comment by Desrtopa · 2013-11-21T06:09:56.973Z · LW(p) · GW(p)

On the other hand, where do you draw the line between reasonable analysis and overthinking? I mean, you can read into a text things which only your own biases put there in the first place; then again, the director of Birth of a Nation allegedly didn't intend to produce a racist film. I've argued plenty of times myself that you can clearly go too far, and critics often do, but while the creator determines everything that goes into their work, their intent, as far as they can describe it, is just the rider on the elephant, and the elephant leaves tracks where it pleases.

Replies from: linkhyrule5, Jiro
comment by linkhyrule5 · 2013-11-21T07:44:34.961Z · LW(p) · GW(p)

Well, this is hardly unique to literary critique. If/When we solve the general problem of finding signal in noise we'll have a rigorous answer; until then we get to guess.

comment by Jiro · 2013-11-21T07:31:37.126Z · LW(p) · GW(p)

If someone intends to draw an object with three sides, but they don't know that an object with three sides is a triangle, have they intended to draw a triangle? Whether the answer is yes or no is purely a matter of semantics.

Replies from: linkhyrule5
comment by linkhyrule5 · 2013-11-21T08:05:21.626Z · LW(p) · GW(p)

Yes, but the question "should we censure this movie/book because it causes harm to (demographic)" is not a question of semantics.

comment by DaveK · 2013-06-10T10:53:31.006Z · LW(p) · GW(p)

Well, I really enjoy music, but I made the deliberate choice to not learn about music (in terms of notes, chords, etc.). The reason being that what I get from music is a profound experience, and I was worried that knowledge of music in terms of reductionist structure might change the way I experience hearing music. (Of course some knowledge inevitably seeps in.)

comment by TeMPOraL · 2013-06-01T16:48:25.077Z · LW(p) · GW(p)

Akin's Laws of Spacecraft Design are full of amazing quotes. My personal favourite:

6) (Mar's Law) Everything is linear if plotted log-log with a fat magic marker.

(See also an interesting note from HN's btilly on this law)
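To see the joke numerically, here's a minimal sketch (the particular power law, noise level, and plotting choices are all made up for illustration):

```python
import numpy as np
import matplotlib.pyplot as plt

# Made-up data: a power law y = 3 * x^1.7 with heavy multiplicative noise.
rng = np.random.default_rng(0)
x = np.logspace(0, 4, 60)
y = 3 * x**1.7 * rng.lognormal(sigma=0.5, size=x.size)

# On log-log axes the noisy data still hugs a straight line:
# slope ~ exponent, intercept ~ log10 of the prefactor.
slope, intercept = np.polyfit(np.log10(x), np.log10(y), deg=1)
print(f"fitted slope ~ {slope:.2f} (true exponent: 1.7)")

plt.loglog(x, y, "o", markersize=3)
plt.loglog(x, 10**intercept * x**slope, linewidth=6)  # the fat magic marker
plt.xlabel("x")
plt.ylabel("y")
plt.show()
```

The fat line hides the scatter, which is exactly the point of the law: on log-log axes almost anything monotone looks linear if you draw it thickly enough.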

comment by NancyLebovitz · 2013-06-25T14:01:01.712Z · LW(p) · GW(p)

http://scienceblogs.com/insolence/2013/06/04/stanislaw-burzynski-versus-the-bbc/#comment-262541

The movie "Apollo 13" does a fair job of showing how rapidly the engineers in Houston devised the kludge and documented it, but because of time constraints of course they can’t show you everything. NASA is a stickler for details. (Believe me, I’ve worked with them!) They don’t just rapid prototype something that people’s lives will depend upon. Overnight, they not only devised the scrubber adapter built from stuff in the launch manifest, they also tested it, documented it, and sent up stepwise instructions for constructing it. In a high-maturity organization, once you get into the habit of doing that, it doesn’t really take that long. Something that always puzzles me when I meet cowboy engineers who insist that process will just slow them down unacceptably. I tell them that hey, if NASA engineers could design, build, test, and document a CO2 scrubber adapter made from common household items overnight, you can damn well put in a comment when you check in your code changes.

comment by Shmi (shminux) · 2013-06-09T19:21:58.245Z · LW(p) · GW(p)

you can't wait around for someone else to act. I had been looking for leaders, but I realised that leadership is about being the first to act.

Edward Snowden, the NSA surveillance whistle-blower.

comment by James_Miller · 2013-06-01T15:15:46.866Z · LW(p) · GW(p)

Imagine you are sitting on this plane now. The top of the craft is gone and you can see the sky above you. Columns of flame are growing. Holes in the sides of the airliner lead to freedom. How would you react?

You probably think you would leap to your feet and yell, "Let's get the hell out of here!" If not this, then you might assume you would coil into a fetal position and freak out. Statistically, neither of these is likely. What you would probably do is far weirder...

In any perilous event, like a sinking ship or towering inferno, a shooting rampage or a tornado, there is a chance you will become so overwhelmed by the perilous overflow of ambiguous information that you will do nothing at all...

about 75 percent of people find it impossible to reason during a catastrophic event or impending doom.

You Are Not So Smart by David McRaney, pp. 55, 56, and 58.

Replies from: itaibn0, army1987, MixedNuts
comment by itaibn0 · 2013-06-01T15:38:55.075Z · LW(p) · GW(p)

Considering the probability that I will encounter such a high-impact, fast-acting disaster, and the expected benefit of acting on a shallowly thought-out gut reaction, I feel no need to remove this bias from myself.

Replies from: James_Miller
comment by James_Miller · 2013-06-01T19:20:06.472Z · LW(p) · GW(p)

Since you have taken the time to make a comment on this website I presume you get some pleasure from thinking about biases. The next time you are on an airplane perhaps you would find it interesting to work through how you should respond if the plane starts to burn.

Replies from: BillyOblivion, itaibn0
comment by BillyOblivion · 2013-06-07T05:15:34.591Z · LW(p) · GW(p)

Interestingly enough there is some evidence--or at least assertions by people who've studied this sort of thing--that doing this sort of problem solving ahead of time tends to reduce the paralysis.

When you get on a plane, go into a restaurant, when you're wandering down the street or when you go someplace new think about a few common emergencies and just think about how you might respond to them.

comment by itaibn0 · 2013-06-01T21:31:03.560Z · LW(p) · GW(p)

Yes, you're right. In fact, I did think about this situation. I think the best strategy is to enter the brace position recommended in the safety guide and to stay still, while gathering as much information as possible and obeying any person who takes on a leadership role. This sort of reasoning can be useful because it is fun to think about, because it makes for interesting conversation, or because it might reveal an abstract principle that is useful somewhere else. My point is to demonstrate a VOI calculation and to show that although this behavior seems irrational on its own, in the broader context the strategy of being completely unprepared for disaster is a good one. Still, the fact that people act in this particular maladaptive way is interesting, and so I got something out of your quote.
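To sketch the shape of such a value-of-information calculation (the quantities here are placeholders for illustration, not claims about actual risk):

$$\mathrm{EV}(\text{prepare}) \approx p(\text{disaster}) \cdot \big[p(\text{survive}\mid\text{prepared}) - p(\text{survive}\mid\text{unprepared})\big] \cdot V \;-\; C$$

where $V$ is the value placed on surviving and $C$ is the cost of the preparation. With something like $p(\text{disaster}) \sim 10^{-7}$ per flight, the sign of the result turns almost entirely on how large one takes $V$ to be relative to $C$; the formula organizes the question but doesn't settle it.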

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-01T22:00:31.615Z · LW(p) · GW(p)

"When two planes collided just above a runway in Tenerife in 1977, a man was stuck, with his wife, in a plane that was slowly being engulfed in flames. He remembered making a special note of the exits, grabbed his wife's hand, and ran towards one of them. As it happened, he didn't need to use it, since a portion of the plane had been sheared away. He jumped out, along with his wife and the few people who survived. Many more people should have made it out. Fleeing survivors ran past living, uninjured people who sat in seats literally watching for the minute it took for the flames to reach them." - http://io9.com/the-frozen-calm-of-normalcy-bias-486764924

Replies from: mjankovic, itaibn0
comment by mjankovic · 2013-06-03T19:20:29.663Z · LW(p) · GW(p)

Speaking as someone who's been through that, I don't think that the article gives a complete picture. Part of the problem in such instances appears to be (particularly per reports from newer generations) the feeling of unreality: the only times we tend to see such situations is when we're sitting comfortably, so a lot of us are essentially conditioned to sit comfortably during such events.

However, this does tend to get better with some experience of such situations.

comment by itaibn0 · 2013-06-01T23:34:39.996Z · LW(p) · GW(p)

See, I thought the plane was still in the air. Now I understand that the brace position is useless. This is why "gathering as much information as possible" is part of my plan. Unfortunately, with such a preliminary plan, there's a good chance I won't realise this quickly enough and become one of the passive casualties. As I stated earlier, I don't mind this.

Replies from: katydee
comment by katydee · 2013-06-01T23:43:37.446Z · LW(p) · GW(p)

As I stated earlier, I don't mind this.

As things one could not mind go, literally dying in a fire seems unlikely to be a good choice.

Replies from: khafra, itaibn0
comment by khafra · 2013-06-02T18:54:35.418Z · LW(p) · GW(p)

So does leaving a box with $1,000 in it on the table.

comment by itaibn0 · 2013-06-03T13:21:19.232Z · LW(p) · GW(p)

What's involved here is dying in a fire in a hypothetical situation.

Replies from: DSherron
comment by DSherron · 2013-06-12T17:40:55.689Z · LW(p) · GW(p)

No. Please, just no. This is the worst possible form of fighting the hypothetical. If you're going to just say "it's all hypothetical, who cares!" then please do everyone a favor and just don't even bother to respond. It's a waste of everyone's time, and incredibly rude to everyone else who was trying to have a serious discussion with you. If you make a claim, and your reasoning is shown to be inconsistent, the correct response is never to pretend it was all just a big joke the whole time. Either own up to having made a mistake (note: having made a mistake in the past is way higher status than making a mistake now. Saying "I was wrong" is just another way to say "but now I'm right". You will gain extra respect on this site from noticing your own mistakes as well.) or refute the arguments against your claim (or ask for clarification or things along those lines). If you can't handle doing either of those then tap out of the conversation. But seriously, taking up everyone's time with a counter-intuitive claim and then laughing it off when people try to engage you seriously is extremely rude and a waste of everyone's time, including yours.

Replies from: itaibn0
comment by itaibn0 · 2013-06-12T21:53:17.213Z · LW(p) · GW(p)

You're completely right. I retract my remark.

Replies from: DSherron
comment by DSherron · 2013-06-13T17:26:28.851Z · LW(p) · GW(p)

And then sometimes I'm reminded why I love this site. Only on LessWrong does a (well-founded) rant about bad form or habits actually end up accomplishing the original goal.

Replies from: Tiltmom
comment by Tiltmom · 2013-06-14T04:07:02.949Z · LW(p) · GW(p)

Only on LessWrong would I hope to never see a statement that begins, 'Only on LessWrong'.

comment by A1987dM (army1987) · 2013-06-15T19:34:24.112Z · LW(p) · GW(p)

Actually, freezing up is precisely what I-here-in-my-room imagine I-on-a-plane-in-flames would do.

comment by MixedNuts · 2013-06-04T22:14:26.834Z · LW(p) · GW(p)

I find this confusing. Ambiguity is paralysing (though in what circumstances the freeze response isn't stupid, I have no idea), but it's hard to see what response other than "RUN" this would cause. It's not like having to find words that'll placate a hostile human, or reinvent first aid on the fly.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-06-06T04:40:33.685Z · LW(p) · GW(p)

(though in what circumstances the freeze response isn't stupid, I have no idea)

When you're hoping the saber-tooth tiger won't notice you.

comment by Zubon · 2013-06-02T20:33:40.378Z · LW(p) · GW(p)

Sorry? Of course he was sorry. People were always sorry. Sorry they had done what they had done, sorry they were doing what they were doing, sorry they were going to do what they were going to do; but they still did whatever it was. The sorrow never stopped them; it just made them feel better. And so the sorrow never stopped. ...

Sorrow be damned, and all your plans. Fuck the faithful, fuck the committed, the dedicated, the true believers; fuck all the sure and certain people prepared to maim and kill whoever got in their way; fuck every cause that ended in murder and a child screaming.

Against a Dark Background by Iain M. Banks.

Replies from: simplicio
comment by simplicio · 2013-06-11T21:08:28.407Z · LW(p) · GW(p)

I read this as a poetic invocation against utilitarian sacrifices. It seems to me simultaneously wise on a practical level and bankrupt on a theoretical level.

What about the special case of people prepared to be maimed and killed in order to get in someone's way? I guess it depends whether you share goals with the latter someone.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-06-12T12:10:38.852Z · LW(p) · GW(p)

If I don't share goals with someone, or more strongly, if I consider their goals evil... then I will see their meta actions differently, because in the end the meta actions are just a tool for something else. If some people build a perfect superintelligent paperclip maximizer, I will hate the fact that they were able to overcome procrastination, that they succeeded in overcoming their internal conflicts, that they made good strategical decisions about getting money and smart people for their project, etc.

So perhaps the quote could be understood as a complaint against people in the valley of bad rationality. Smart enough to put their plans successfully in action; yet too stupid to understand that their plans will end hurting people. Smart enough to later realize they made a mistake and feel sorry; yet too stupid to realize they shouldn't make a similar kind of plan with similar kinds of mistakes again.

comment by arborealhominid · 2013-06-06T15:31:47.436Z · LW(p) · GW(p)

The word gentleman originally meant something recognisable: one who had a coat of arms and some landed property. When you called someone 'a gentleman' you were not paying him a compliment, but merely stating a fact. If you said he was not 'a gentleman' you were not insulting him, but giving information. There was no contradiction in saying that John was a liar and a gentleman; any more than there now is in saying that James is a fool and an M.A. But then there came people who said- so rightly, charitably, spiritually, sensitively, so anything but usefully- 'Ah, but surely the important thing about a gentleman is not the coat of arms and the land, but the behaviour? Surely he is the true gentleman who behaves as a gentleman should? Surely in that sense Edward is far more truly a gentleman than John?' They meant well. To be honourable and courteous and brave is of course a far better thing than to have a coat of arms. But it is not the same thing. Worse still, it is not a thing everyone will agree about. To call a man 'a gentleman' in this new, refined sense, becomes, in fact, not a way of giving information about him, but a way of praising him: to deny that he is 'a gentleman' becomes simply a way of insulting him. When a word ceases to be a term of description and becomes merely a term of praise, it no longer tells you facts about the object; it only tells you about the speaker's attitude to that object. (A 'nice' meal only means a meal the speaker likes.) A gentleman, once it has been spiritualised and refined out of its old coarse, objective sense, means hardly more than a man whom the speaker likes. As a result, gentleman is now a useless word. We had lots of terms of approval already, so it was not needed for that use; on the other hand if anyone (say, in a historical work) wants to use it in its old sense, he cannot do so without explanations. It has been spoiled for that purpose.

  • C.S. Lewis (emphasis my own)
Replies from: novalis, None
comment by novalis · 2013-06-07T18:40:33.918Z · LW(p) · GW(p)

When a word ceases to be a term of description and becomes merely a term of praise, it no longer tells you facts about the object; it only tells you about the speaker's attitude to that object.

This is because a speaker's attitude towards an object is not formed by the speaker's perception of the object; it is entirely arbitrary. Wait, no, that's not right.

And anyway, the previous use of the term "gentleman" was, in some sense, worse. Because while it had a neutral denotation ("A gentleman is any person who possesses these two qualities"), it had a non-neutral connotation.

Replies from: shminux
comment by Shmi (shminux) · 2013-06-10T18:16:45.817Z · LW(p) · GW(p)

it had a non-neutral connotation.

That would be true if the word "gentle" meant the same thing then as it does now. Which it didn't.

gentle (adj.) early 13c., "well-born," from Old French gentil "high-born, noble, of good family".

The word originally comes from the ancient (not modern) meaning of Hebrew goy: nation.

EDIT: the last statement is incorrect, see replies.

Replies from: novalis, JoshuaFox, army1987
comment by novalis · 2013-06-10T18:37:43.880Z · LW(p) · GW(p)

From your link: Sense of "gracious, kind" (now obsolete) first recorded late 13c.; that of "mild, tender" is 1550s.

This is, of course, exactly what the halo effect would predict; a word that means "good" in some context should come to mean "good" in other contexts. The same effect explains the euphemism treadmill, as a word that refers to a disfavored group is treated as an insult.

comment by JoshuaFox · 2013-06-10T19:02:56.411Z · LW(p) · GW(p)

"Gentleman," "gentle" etc do not come from Hebrew.

Maybe you are thinking about the fact that "gentile" comes from the sense "someone from one of the nations (other than Israel)," just as Hebrew goy originally meant "nation" (including the nation of Israel or any other), and came to mean "someone from one of the (other) nations."

"Gentile" was formed as a calque from Hebrew.

But none of these come from a Hebrew root. Rather, they all come from the Latin gens, gentis "clan, tribe, people," thence "nation." Same root as gene, for that matter.

Replies from: shminux
comment by Shmi (shminux) · 2013-06-10T19:38:08.560Z · LW(p) · GW(p)

Right, my bad, it was translated from Hebrew, but does not come directly from it:

The term gentiles is derived from Latin, used for contextual translation, and not an original Hebrew or Greek word from the Bible.

comment by A1987dM (army1987) · 2013-06-11T10:20:03.701Z · LW(p) · GW(p)

EDIT: the last statement is incorrect, see replies.

You can make it correct but still informative by replacing “originally comes from” with “was originally a calque of”.

comment by [deleted] · 2013-06-10T17:37:48.625Z · LW(p) · GW(p)

So Lewis grants that people really can be brave, honorable, and courteous, but then denies that calling someone so is descriptive?

This passage doesn't make any sense.

Replies from: Estarlio
comment by Estarlio · 2013-06-10T17:50:01.826Z · LW(p) · GW(p)

I suspect his attitude is more along the lines of 'noise to signal ratio too high.'

comment by BT_Uytya · 2013-06-03T09:34:39.603Z · LW(p) · GW(p)

The Baroque Cycle by Neal Stephenson proves to be a very good, intelligent book series.

“Why does the tide rush out to sea?”

“The influence of the sun and the moon.”

“Yet you and I cannot see the sun or the moon. The water does not have senses to see, or a will to follow them. How then do the sun and moon, so far away, affect the water?”

“Gravity,” responded Colonel Barnes, lowering his voice like a priest intoning the name of God, and glancing about to see whether Sir Isaac Newton were in earshot.

“That’s what everyone says now. ’Twas not so when I was a lad. We used to parrot Aristotle and say it was in the nature of water to be drawn up by the moon. Now, thanks to our fellow-passenger, we say ‘gravity.’ It seems a great improvement. But is it really? Do you understand the tides, Colonel Barnes, simply because you know to say ‘gravity’?”

Daniel Waterhouse and Colonel Barnes in Solomon’s Gold

Replies from: Thomas
comment by Thomas · 2013-06-03T10:10:57.902Z · LW(p) · GW(p)

Do you understand the tides, Colonel Barnes, simply because you know to say ‘gravity’?”

Yes, because saying 'gravity' in fact means the Newtonian gravitational law. Aristotle had no idea that, e.g., the product of the two masses is involved here.
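For reference, the law being invoked is the familiar inverse-square one:

$$F = G\,\frac{m_1 m_2}{r^2}$$

where $F$ is the attractive force between masses $m_1$ and $m_2$ separated by distance $r$, and $G$ is the gravitational constant. Knowing "gravity" in this sense means knowing that specific functional form, not just the word.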

Replies from: Richard_Kennaway, BT_Uytya
comment by Richard_Kennaway · 2013-06-03T11:04:38.364Z · LW(p) · GW(p)

Does Colonel Barnes? If not, he is just repeating a word he has learned to say. Rather like some people today who have learned to say "entanglement", or "signalling", or "evolution", or...

Replies from: None
comment by [deleted] · 2013-06-04T13:10:14.704Z · LW(p) · GW(p)

Except in this case he's actually saying 'gravity' in the right context, and besides, people in general aren't expected to know Newton's laws (or general relativity, etc.) to know basically how gravity works.

Although I'd like to know what his answer was to the last question...

Replies from: BT_Uytya, Estarlio
comment by BT_Uytya · 2013-06-05T07:55:06.500Z · LW(p) · GW(p)

I will gladly post the rest of the conversation because it reminds me of a question I pondered for a while.

"Do you understand the tides, Colonel Barnes, simply because you know to say ‘gravity’?”

“I’ve never claimed to understand them.”

“Ah, that is very wise practice.”

“All that matters is, he does,” Barnes continued, glancing down, as if he could see through the deck-planks.

“Does he then?”

“That’s what you lot have been telling everyone. <> Sir Isaac’s working on Volume the Third, isn’t he, and that’s going to settle the lunar problem. Wrap it all up.”

“He is working out equations that ought to agree with Mr. Flamsteed’s observations.”

“From which it would follow that Gravity’s a solved problem; and if Gravity predicts what the moon does, why, it should apply as well to the sloshing back and forth of the water in the oceans.”

“But is to describe something to understand it?”

“I should think it were a good first step.”

“Yes. And it is a step that Sir Isaac has taken. The question now becomes, who shall take the second step?”

After that they started to discuss the differences between Newton's and Leibniz's theories. Newton is unable to explain why gravity can go through the earth, like light through a pane of glass. Leibniz takes a more fundamental approach (roughly speaking, he claims that everything consists of cellular automata).

Daniel: “<...> Leibniz’s philosophy has the disadvantage that no one knows, yet, how to express it mathematically. And so he cannot predict tides and eclipses, as Sir Isaac can.”

“Then what good is Leibniz’s philosophy?”

“It might be the truth,” Daniel answered.

I find this theme of Baroque Cycle fascinating.

I was somewhat haunted by a similar question: in the strict Bayesian sense, the notions of "explain" and "predict" are equivalent, but what about Alfred Wegener, father of plate tectonics? His theory of continental drift (in some sense) explained the shapes of continents and the fossil data, but was rejected by mainstream science because it lacked a mechanism for the drift.

In some sense, Wegener was able to predict, but unable to explain.

One can easily imagine some weird data easily described (and predicted) by a very simple mathematical formula, and yet I don't consider this to be an explanation. Something is lacking here; my curiosity just doesn't accept bare formulas as answers.

I suspect that this situation arises because of the very small prior probability of the formula being true. But is that really it?

Replies from: DanArmak, ChristianKl, nshepperd, army1987
comment by DanArmak · 2013-06-08T17:59:59.101Z · LW(p) · GW(p)

Stanislaw Lem wrote a short story about this. (I don't remember its name.)

In the story, English detectives are trying to solve a series of cases where bodies are stolen from morgues and are later discovered abandoned at some distance. There are no further useful clues.

They bring in a scientist, who determines that there is a simple mathematical relationship that relates the times and locations of these incidents. He can predict the next incident. And he says, therefore, that he has "solved" or "explained" the mystery. When asked what actually happens - how the bodies are moved, and why - he simply doesn't care: perhaps, he suggests, the dead bodies move by themselves - but the important thing, the original question, has been answered. If someone doesn't understand that a simple equation that makes predictions is a complete answer to a question, that someone simply doesn't understand science!

Lem does not, of course, intend to give this as his own opinion. The story never answers the "real" mystery of how or why the bodies move; the equation happens to predict that the sequence will soon end anyway.

Replies from: BT_Uytya
comment by BT_Uytya · 2013-06-09T09:46:52.608Z · LW(p) · GW(p)

Amusingly, I read this story, but completely forgot about it. The example here is perfect. Probably I should re-read it.

For those interested: http://en.wikipedia.org/wiki/The_Investigation

comment by ChristianKl · 2013-06-06T18:18:40.310Z · LW(p) · GW(p)

One can easily imagine some weird data easily described (and predicted) by a very simple mathematical formula, and yet I don't consider this to be an explanation. Something is lacking here; my curiosity just doesn't accept bare formulas as answers.

I suspect that this situation arises because of the very small prior probability of the formula being true. But is that really it?

I think the situation happens because of bias. Demonstrating an empirical effect to be real takes work. Finding an explanation of an effect also takes work. It's very seldom in science that both happen at exactly the same time.

There are a lot of drugs that are designed in a way where we think that the drug works by binding to specific receptors. Those explanations aren't very predictive for telling you whether a prospective drug works. Once it's shown that a drug actually works, it's often the case that we don't fully understand why it works.

Replies from: BT_Uytya
comment by BT_Uytya · 2013-06-09T10:19:34.265Z · LW(p) · GW(p)

It's very seldom in science that both happen at exactly the same time.

Interesting.

I imagined a world where Wegener appeared, out of the blue, with all that data about geological strata and fossils (nobody had noticed any of that before), and declared that it was all because of continental drift. That was anticlimactic and unsatisfactory.

I imagined a world with a great unsolved mystery: all that data about geological strata and fossils. For a century, nobody was able to explain it. Then Wegener appeared, and pointed out that the shapes of the continents are similar, and that perhaps it's all because of continental drift. That was more satisfactory, and I suspect that most of the remaining traces of disappointment are due to hindsight bias.

I think that there are several factors causing that:

1) Story-mode thinking

2) Suspicions concerning the unknown person claiming to solve the problem nobody has ever heard of.

3) (now it's my working hypothesis) The idea that some phenomena are 'hard' to reduce, and some are 'easy':

I know that the fall of an apple can be explained in terms of atoms, reduced to the fundamental interactions. Most things can. I know that we are unable to explain the fundamental interactions yet, so equations-without-understanding are justified.

So, if I learn about some strange phenomenon, I believe that it can be easily explained in terms of atoms. Now suppose that it turned out to be a very hard problem, and nobody managed to reduce it to something more fundamental. Now I feel that I should be satisfied with bare equations, because doing anything more is hard. Maybe in a century.

This isn't a complete explanation, but it feels like a step in the right direction.

comment by nshepperd · 2013-06-05T15:17:37.598Z · LW(p) · GW(p)

"For whatever reason, " seems like it should be a legitimate hypothesis, as much as ", therefore ". The former technically being the disjunction of all variations of the latter with possible reasons substituted in.

But, then again, at the point when we are saying "for whatever reason, [X]", we are saying that because we haven't been able to think of the correct explanation yet—that is, because we haven't been creative enough, a bounded rationality issue. So we're perhaps not really in a position to evaluate a disjunction of all possible reasons.

comment by Estarlio · 2013-06-07T18:08:31.460Z · LW(p) · GW(p)

It strikes me that his understanding of gravity is on the same level as saying that everything attracts everything else, which is after all not much of a step up from saying that it's in the nature of water to be attracted to the moon - just a more general phrasing.

You can make more specific predictions if you know that everything attracts everything, and you know more about the laws of planetary motion and so on, and the gravitational constant and the decay rate and so on; but the basic knowledge of gravity by itself doesn't let you do those things. If your predictions afterwards are the same as your predictions going in, can you really claim to understand something better?

Seems to me you need to network ideas and start linking them up to data before you can really start to claim to understand stuff better.

comment by BT_Uytya · 2013-06-03T10:19:49.601Z · LW(p) · GW(p)

Probably I should've added some context to this conversation. One of the themes of the Baroque Cycle is that Newton described his gravitational law, but said nothing about why reality is the way it is. This bugs Daniel, and he rests his hopes upon Leibniz, who tries to explain reality on a more fundamental level (monads).

This conversation is an "Explain/Worship/Ignore" thing as well as a "Teacher's password" thing.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-06-03T12:18:10.892Z · LW(p) · GW(p)

The reason Newton's laws are an improvement over Aristotelian "the nature of water is etc." is that Newton lets you make predictions, while Aristotle does not. You could ask "but WHY does gravity work like so-and-so?", but that doesn't change the fact that Newton's laws let you predict orbits of celestial objects, etc., in advance of seeing them.

Replies from: Nornagest
comment by Nornagest · 2013-06-05T08:26:09.885Z · LW(p) · GW(p)

That's certainly the conventional wisdom, but I think the conventional wisdom sells Aristotle and his contemporaries a little short. Sure, speaking in terms of water and air and fire and dirt might look a little silly to us now, but that's rather superficial: when you get down to the experiments available at the time, Aristotelian physics ran on properties that genuinely were pretty well correlated, and you could in fact use them to make reasonably accurate predictions about behavior you hadn't seen from the known properties of an object. All kosher from a scientific perspective so far.

There are two big differences I see, though neither implies that Aristotle was telling just-so stories. The first is that Aristotelian physics was mainly a qualitative undertaking, not a quantitative one -- the Greeks knew that the properties of objects varied in a mathematically regular way (witness Eratosthenes' clever method of calculating Earth's circumference), but this wasn't integrated closely into physical theory. The other has to do with generality: science since Galileo has applied as universally as possible, though some branches reduced faster than others, but the Greeks and their medieval followers were much more willing to ascribe irreducible properties to narrow categories of object. Both end up placing limits on the kinds of inferences you'll end up making.

comment by NoSignalNoNoise (AspiringRationalist) · 2013-06-01T23:19:51.631Z · LW(p) · GW(p)

Bad things don't happen to you because you're unlucky. Bad things happen to you because you're a dumbass.

  • That 70s Show
Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-02T19:18:22.922Z · LW(p) · GW(p)

Single bad things happen to you at random. Iterated bad things happen to you because you're a dumbass. Related: "You are the only common denominator in all of your failed relationships."

Replies from: khafra, Kawoomba, NancyLebovitz, ChristianKl
comment by khafra · 2013-06-03T11:28:58.158Z · LW(p) · GW(p)

Corollaries: The more of a dumbass you are, the less well you can recognize common features in iterated bad things. So dumbasses are, subjectively speaking, just unlucky.

Replies from: AlanCrowe
comment by AlanCrowe · 2013-06-03T12:47:46.421Z · LW(p) · GW(p)

The corollary is more useful than the theorem:-) If I wish to be less of a dumbass, it helps to know what it looks like from the inside. It looks like bad luck, so my first job is to learn to distinguish bad luck from enemy action. In Eliezer's specific example that is going to be hard because I need to include myself in my list of potential enemies.

Replies from: Eliezer_Yudkowsky
comment by Kawoomba · 2013-06-02T19:33:42.200Z · LW(p) · GW(p)

Also, oxygen. (Edit: "You are the only common denominator in all of your failed relationships." is misleading, hiding all the other common elements.)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-06-02T22:15:18.226Z · LW(p) · GW(p)

What we want to find is the denominator common to all of your failed relationships, but absent from the successful relationships that other people have (the presumed question being "why do all my relationships fail, but Alice, Bob, Carol, etc. have successful ones?"). Oxygen doesn't fit the bill.

Replies from: Error, Kawoomba
comment by Error · 2013-06-03T12:46:59.284Z · LW(p) · GW(p)

It could also be that Alice, Bob, and Carol's relationships appear more successful than they are. We do tend to hide our failures when we can.

I've heard the failed-relationships quote before, but hadn't seen it generalized to bad things in general. I like that one. Useful corollary: "Iterated bad things are evidence of a pattern of errors that you need to identify and fix."

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-06-03T15:41:07.050Z · LW(p) · GW(p)

Of course, "bad things", and even more so "iterated bad things", have to be viewed relative to expectations, and at the proper level of abstraction. Explanation:

Right level of abstraction

"I punched myself in the face six times in a row, and each time, it hurt. But this is not mere bad luck! I conclude that I am bad at self-face-punching! I must work on my technique, such that I may be able to punch myself in the face without ill effect." This is the wrong conclusion. The right conclusion is "abstain from self-face-punching".

Substitute any of the following for "punching self in face":

  • Extreme sports
  • Motorcycle riding
  • Fad diets
  • Prayer

Right expectations

"I've tried five brands of water, and none of them tasted like chocolate candy! My water-brand-selection algorithm must be flawed. I will have to be even more careful about picking only the fanciest brands of water." Again this is the wrong conclusion. The right conclusion is "This water is just fine and there was nothing wrong with my choice of brand. I simply shouldn't have such ridiculous expectations."

Substitute any of the following for "brands of water" / "taste like chocolate candy":

  • Sex partners / knew all the ways to satisfy my needs without me telling them
  • Computer repair shops / fixed my computer for free after I spilled beer on it, and also retrieved all my data [full disclosure: deep-seated personal gripe]
  • Diets / enabled me to lose all requisite weight and keep it off forever
Replies from: Error, Cthulhoo
comment by Error · 2013-06-03T16:57:08.058Z · LW(p) · GW(p)

Computer repair shops / fixed my computer for free after I spilled beer on it, and also retrieved all my data [full disclosure: deep-seated personal gripe]

Ah, I've been in that job. My favorite in the stupid-expectations department was a customer who expected us to lie about the cause of a failure on the work order, so that his insurance company would cover the repair. When we refused, he made his own edits to his copy of the work order... and a few days later brought the machine back (I forget why) and handed us the edited order.

We photocopied it (without telling him) and filed it with our own copy. That was entertaining when the insurance company called.

comment by Cthulhoo · 2013-06-03T16:41:32.958Z · LW(p) · GW(p)

This can be easily generalized as an algorithm.

  • Something repeatedly goes wrong
  • Identify correctly your prior hypothesis
  • Identify the variables involved
  • Check/change the variables
  • Observe the result (apply Bayes when needed)
  • Repeat if necessary

Scientific method applied to everyday life, if you want :)
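A minimal sketch of what the "apply Bayes when needed" step might look like, with an invented hypothesis and made-up numbers (nothing here is prescribed by the steps above):

```python
# H: "the repeated failures are caused by something I'm doing"
# vs. not-H: "it's just bad luck". All probabilities are made up.

def posterior(prior_h, p_fail_given_h, p_fail_given_not_h, n_failures):
    """P(H) after observing n_failures independent bad outcomes."""
    p_h = prior_h
    p_not_h = 1 - prior_h
    for _ in range(n_failures):
        p_h *= p_fail_given_h
        p_not_h *= p_fail_given_not_h
    return p_h / (p_h + p_not_h)

# Each failure is twice as likely under H (0.8) as under not-H (0.4):
for n in range(6):
    print(n, round(posterior(0.2, 0.8, 0.4, n), 3))
# One failure barely moves the needle; five in a row push P(H) to ~0.89.
```

The same loop also covers the "repeat if necessary" step: keep feeding in observations and the posterior keeps updating.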

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-06-04T05:59:47.697Z · LW(p) · GW(p)

The thing is, some of the steps are very vague. If you have a bad case of insufficient clue, what's the cure?

Replies from: Cthulhoo
comment by Cthulhoo · 2013-06-04T08:13:31.351Z · LW(p) · GW(p)

The thing is, some of the steps are very vague

You're right of course; this was meant to be fully general. Details should be tuned to each specific instance.

If you have a bad case of insufficient clue, what's the cure?

I'm not sure I understand what you mean, but I guess you're thinking about cases where you can't have a "perfect experimental setup" to collect information. Well, in this case one should do the best one can with the information one has (though information can also be collected from other external sources, of course). Sometimes there's simply not enough information to identify with sufficient certainty the best course of action, so you have to go with your best guess (after a risk/reward evaluation, if you want).

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-06-04T10:31:19.574Z · LW(p) · GW(p)

Sorry, I wasn't very clear.

I meant that if you have a deep misunderstanding of what's going on, as here, what do you do about it?

Replies from: Cthulhoo
comment by Cthulhoo · 2013-06-04T14:42:24.688Z · LW(p) · GW(p)

Well, it's somewhat hidden in steps 2 and 3. You have to be able to correctly state your hypothesis and to identify all the possible variables. Consider chocolate water: your hypothesis is "There exist some brands of water that taste like chocolate candy". Let's say for whatever reason you start with a prior probability p for this hypothesis. You then try some brands, find that none tastes like chocolate candy, and should therefore apply Bayes and emerge with a lower posterior. What's much more effective, though, is evaluating the evidence you already have that induced you to believe the original hypothesis. What made you think that water could taste like chocolate? A friend told you? Did it appear in the news? In the more concrete cases:

  • Sex partners : Why did you expect them to be able to satisfy you without your input? What is your source? Porn movies?
  • Computer repair shops : Why did you expect people to work for free?
  • Diets : Have you talked to a professional? Gathered massive anecdotal evidence?
comment by Kawoomba · 2013-06-04T11:38:20.420Z · LW(p) · GW(p)

"You are the only common denominator in all of your failed relationships." != "Why do all my relationships fail?"

Both you and others have relationships, both "failed" and "not-failed" (for some value of failed). The statement "You are the only common denominator in all of your failed relationships" is clearly false, even if comparing to others who have successful ones in search of differentiating factors. The "only" is the problem even then.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-06-04T15:50:10.566Z · LW(p) · GW(p)

The intended formulation, I should think, is "You are the only denominator guaranteed to be common to all of your failed relationships" (which is to say that it might be a contingent fact about your particular set of failed relationships that it has some more common denominators, but for any set of all of any particular person's failed relationships, that person will always, by definition, be common to them all).

Even this might be false when taken literally... so perhaps we need to qualify it just a bit more:

"You are the only interesting denominator guaranteed to be common to all of your failed relationships." (i.e. if we consider only those factors along which relationships-in-general differ from each other, i.e. those dimensions in relationship space which we can't just ignore).

That, I think, is a reasonable, charitable reading of the original quote.

Replies from: Kawoomba
comment by Kawoomba · 2013-06-04T16:03:12.673Z · LW(p) · GW(p)

It's not nitpicking on my side; there are plenty of people who tend to blame themselves for anything going wrong, even when it was outside their control. Maybe they lived in a neighborhood incompatible with themselves, especially pre-social media. Think of 'nerds' stranded in classes without peers. Sure, their behavior was involved in the success or failure of their relationships (how could it not have been?). However, a mindset and pseudo-wise aphorisms such as "you are the only common denominator in all of your failed relationships" would be fueling an already destructive fire of gnawing self-doubt with more gasoline.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-06-04T19:33:10.368Z · LW(p) · GW(p)

I agree. This sort of thing...

Maybe they lived in a neighborhood incompatible with themselves, especially pre-social media. Think of 'nerds' stranded in classes without peers.

can be viewed as a case of "wrong level of abstraction" as I alluded to here.

I think what we have here is two possible sources of error, diametrically opposed to each other. Some people refuse to take responsibility for their failures, and it is at them that "you are the only common denominator ..." is aimed. Other people blame themselves even when they shouldn't, as you say. Let us not let one sort of error blind us to the existence of the other.

When it comes to constructing or selecting rationality quotes, we should keep in mind that what we're often doing is attempting to point out and correct some bias, which means that the relevance of the quote is obviously constrained by whether we have that bias at all, or perhaps have the opposite bias instead.

comment by NancyLebovitz · 2013-06-09T14:18:13.241Z · LW(p) · GW(p)

There is such a thing as bad luck, though perhaps it's less in play in relationships than in most areas of life.

I think that if you keep having relationships that keep failing in the same way, it's a stronger signal than if they just fail.

comment by ChristianKl · 2013-06-06T21:43:55.512Z · LW(p) · GW(p)

Alternatively, iterated bad things happen because someone is out to get you and messes constantly with what you are trying to do.

comment by B_For_Bandana · 2013-06-01T20:45:22.969Z · LW(p) · GW(p)

Stepan Arkadyevitch subscribed to a liberal paper, and read it. It was not extreme in those views, but advocated those principles the majority held. And though he was not really interested in science or art or politics, he strongly adhered to such views on all those subjects as the majority, including his paper, advocated, and he changed them only when the majority changed them; or more correctly, he did not change them, but they themselves imperceptibly changed in him.

Stepan Arkadyevitch never chose principles or opinions, but these principles and opinions came to him, just as he never chose the shape of a hat or coat, but took those that others wore. And, living as he did in fashionable society, through the necessity of some mental activity, developing generally in a man's best years, it was as indispensable for him to have views as to have a hat. If there was any reason why he preferred liberal views rather than the conservative direction which many of his circle followed, it was not because he found a liberal tendency more rational, but because he found it better suited to his mode of life.

The liberal party declared that everything in Russia was wretched; and the fact was that Stepan Arkadyevitch had a good many debts and was decidedly short of money. The liberal party said that marriage was a defunct institution and that it needed to be remodeled, and in fact domestic life afforded Stepan Arakadyevitch very little pleasure, and compelled him to lie, and to pretend, which was contrary to his nature. The liberal party said, or rather allowed it to be understood, that religion is only a curb on the barbarous portion of the community, and in fact Stepan Arkadyevitch could not bear the shortest prayer without pain in his knees, and he could not comprehend the necessity of all these high-sounding words about the other world when it was so very pleasant to live in this one.

  • Leo Tolstoy, Anna Karenina

The personal is political!

Replies from: simplicio
comment by simplicio · 2013-06-11T22:03:40.218Z · LW(p) · GW(p)

Stepan is a smart chap. He has realized (perhaps unconsciously)

  • that one's political views are largely inconsequential,
  • that it's nonetheless socially necessary to have some,
  • that developing popular and coherent political views oneself is expensive,

and so has outsourced them to a liberal paper.

One might compare it to hiring a fashion consultant... except it's cheap to boot!

comment by Estarlio · 2013-06-12T22:30:01.002Z · LW(p) · GW(p)

"Oh, you could do it all by magic, you certainly could. You could wave a wand and get twinkly stars and a fresh-baked loaf. You could make fish jump out of the sea already cooked. And then, somewhere, somehow, magic would present its bill, which was always more than you could afford.

That’s why it was left to wizards, who knew how to handle it safely. Not doing any magic at all was the chief task of wizards - not “not doing magic” because they couldn’t do magic, but not doing magic when they could do and didn’t. Any ignorant fool can fail to turn someone else into a frog. You have to be clever to refrain from doing it when you knew how easy it was.

There were places in the world commemorating those times when wizards hadn’t been quite as clever as that, and on many of them the grass would never grow again."

-- Terry Pratchett, Going Postal

comment by tingram · 2013-06-03T05:23:24.005Z · LW(p) · GW(p)

It is said, for example, that a man ten times regrets having spoken, for the once he regrets his silence. And why? Because the fact of having spoken is an external fact, which may involve one in annoyances, since it is an actuality. But the fact of having kept silent! Yet this is the most dangerous thing of all. For by keeping silent one is relegated solely to oneself, no actuality comes to a man's aid by punishing him, by bringing down upon him the consequences of his speech. No, in this respect, to be silent is the easy way. But he who knows what the dreadful is, must for this very reason be most fearful of every fault, of every sin, which takes an inward direction and leaves no outward trace. So it is too that in the eyes of the world it is dangerous to venture. And why? Because one may lose. But not to venture is shrewd. And yet, by not venturing, it is so dreadfully easy to lose that which it would be difficult to lose in even the most venturesome venture, and in any case never so easily, so completely as if it were nothing...one's self. For if I have ventured amiss--very well, then life helps me by its punishment. But if I have not ventured at all--who then helps me?

--Soren Kierkegaard, The Sickness Unto Death

Replies from: tgb
comment by tgb · 2013-06-03T18:38:48.977Z · LW(p) · GW(p)

That's an interesting opening claim, that we regret having spoken more often than we regret having kept silent. In particular, it brings to mind studies of the elderly's regrets in life, most of which are not-having-dones rather than having-dones. These two aren't incompatible: if we remain silent 20 times for every time we speak, we still regret our silences more in total than our speech, even if we regret each having-spoken 10 times as much as each not-having-spoken (twenty small regrets outweigh one regret ten times their size). Still, though, there seems to be some disagreement.

Replies from: tingram
comment by tingram · 2013-06-03T22:08:10.708Z · LW(p) · GW(p)

Obviously the fact that it's translated complicates things, and I don't know anything about Danish. But I think the first sentence is meant to be a piece of folk wisdom akin to "Better to remain silent and be thought a fool, than to open your mouth and remove all doubt." That is, he's not really concerned with the relative proportions of regret, but with the idea that it's better (safer, shrewder) to keep your counsel than to stake out a position that might be contradicted. In light of the rest of the text, this is the reading of the line that makes the most sense to me: equivocation and bet-hedging in the name of worldly safety are a symptom of the sin of despair. Compare:

Possibility then appears to the self ever greater and greater, more and more things become possible, because nothing becomes actual. At last it is as if everything were possible--but this is precisely when the abyss has swallowed up the self.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-06-04T12:12:47.934Z · LW(p) · GW(p)

Possibility then appears to the self ever greater and greater, more and more things become possible, because nothing becomes actual. At last it is as if everything were possible--but this is precisely when the abyss has swallowed up the self.

Reminds me of standards processes and project proposals that produce ever more elaborate specifications that no-one gets round to implementing.

comment by Woodbun · 2013-06-05T05:27:29.747Z · LW(p) · GW(p)

...the machines will do what we ask them to do and not what we ought to ask them to do. In the discussion of the relation between man and powerful agencies controlled by man, the gnomic wisdom of the folk tales has a value far beyond the books of our sociologists.

-- Norbert Wiener

comment by James_Miller · 2013-06-01T15:24:39.321Z · LW(p) · GW(p)

you would be foolish to accept what people believed for “thousands of years” in many domains of natural science. When it comes to the ancients or the moderns in science always listen to the moderns. They are not always right, but overall they are surely more right, and less prone to miss the mark. In fact, you may have to be careful about paying too much attention to science which is a generation old, so fast does the “state of the art” in terms of knowledge shift.

Razib Khan

Replies from: TeMPOraL
comment by TeMPOraL · 2013-06-01T16:45:06.924Z · LW(p) · GW(p)

Similar thought:

16) The previous people who did a similar analysis did not have a direct pipeline to the wisdom of the ages. There is therefore no reason to believe their analysis over yours. There is especially no reason to present their analysis as yours.

-- Akin's Laws of Spacecraft Design

comment by novalis · 2013-06-07T18:33:30.935Z · LW(p) · GW(p)

"It’s actually hard to see when you’ve fucked up, because you chose all your actions in a good-faith effort and if you were to run through it again you’ll just get the same results. I mean, errors-of-fact you can see when you learn more facts, but errors-of-judgement are judged using the same brain that made the judgement in the first place." - Collin Street

comment by Alejandro1 · 2013-06-02T17:37:52.744Z · LW(p) · GW(p)

"I call that 'the falling problem'. You encounter it when you first study physics. You realize that, if you were ever dropped from a plane without a parachute, you could calculate with a high degree of accuracy how long it's take to hit the ground, your speed, how much energy you'll deposit into the earth. And yet, you would still be just as dead as a particularly stupid gorilla dropped the same distance. Mastery of the nature of reality grants you no mastery over the behavior of reality. I could tell you your grandpa is very sick. I could tell you what each cell is doing wrong, why it's doing wrong, and roughly when it started doing wrong. But I can't tell them to stop."

"Why can't you make a machine to fix it?"

"Same reason you can't make a parachute when you fall from the plane."

"Because it's too hard?"

"Nothing is too hard. Many things are too fast."

(beat)

"I think I could solve the falling problem with a jetpack. Can you try to get me the parts?"

"That's all I do, kiddo."

--SMBC

Replies from: shminux
comment by Shmi (shminux) · 2013-06-02T19:25:46.255Z · LW(p) · GW(p)

IDG the punchline...

Replies from: TheOtherDave, tgb
comment by TheOtherDave · 2013-06-02T19:40:22.092Z · LW(p) · GW(p)

I wouldn't call it a punchline, exactly... I mean, it's not a joke. But in the comic it's likely a parent and child talking, and the subtext I infer is that parenting is a process of giving one's children the tools with which to construct superior solutions to life problems.

Replies from: David_Gerard, shminux
comment by David_Gerard · 2013-06-03T11:10:20.321Z · LW(p) · GW(p)

parenting is a process of giving one's children the tools with which to construct superior solutions to life problems.

How I would love to quote you next month. This is pretty much my approach in a sentence.

comment by Shmi (shminux) · 2013-06-02T20:01:10.933Z · LW(p) · GW(p)

Thanks!

comment by tgb · 2013-06-03T18:31:37.171Z · LW(p) · GW(p)

For me, the real punchline is in the 'votey image' you get by hovering over the red dot at the bottom.

Replies from: ZankerH
comment by ZankerH · 2013-06-04T20:04:58.457Z · LW(p) · GW(p)

THE JETPACK IS A METAPHOR FOR OVERTHROWING THE GOVERNMENT

comment by Pablo (Pablo_Stafforini) · 2013-06-02T02:40:27.759Z · LW(p) · GW(p)

Th[e] strategy [of preferring less knowledge and intelligence due to their high cognitive costs] is exemplified by the sea squirt larva, which swims about until it finds a suitable rock, to which it then permanently affixes itself. Cemented in place, the larva has less need for complex information processing, whence it proceeds to digest part of its own brain (its cerebral ganglion). Academics can sometimes observe a similar phenomenon in colleagues who are granted tenure.

Nick Bostrom

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-02T02:50:27.113Z · LW(p) · GW(p)

It is perhaps worth noting that a similar comment was made by Dennett:

“The juvenile sea squirt wanders through the sea searching for a suitable rock or hunk of coral to cling to and make its home for life. For this task, it has a rudimentary nervous system. When it finds its spot and takes root, it doesn't need its brain anymore, so it eats it! It's rather like getting tenure.”

...in 1991 or so.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-02T19:16:36.555Z · LW(p) · GW(p)

I remember this as a famous proverb; it may predate Dennett.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-02T19:36:39.830Z · LW(p) · GW(p)

Apparently it does... a few minutes of googling turned up a cite to Rodolfo Llinas (1987), who referred to it as "a process paralleled by some human academics upon obtaining university tenure."

Replies from: khafra
comment by khafra · 2013-06-03T11:30:19.188Z · LW(p) · GW(p)

Has the life cycle of the sea squirt ever been notably used to describe something other than the reaction of an academic to tenure?

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-03T12:25:19.656Z · LW(p) · GW(p)

Hah! Um... hm. A quick perusal of Google results for "sea squirt -tenure" gets me some moderately interesting stuff about their role as high-sensitivity harbingers for certain pollutants, and something about invasive sea-squirt species in harbors. But nothing about their life-cycle per se. I give a tentative "no."

comment by tingram · 2013-06-03T01:25:56.054Z · LW(p) · GW(p)

From the remarkable opening chapter of Consciousness Explained:

One should be leery of these possibilities in principle. It is also possible in principle to build a stainless-steel ladder to the moon, and to write out, in alphabetical order, all intelligible English conversations consisting of less than a thousand words. But neither of these are remotely possible in fact and sometimes an impossibility in fact is theoretically more interesting than a possibility in principle, as we shall see.

--Daniel Dennett

Replies from: Vaniver, B_For_Bandana
comment by Vaniver · 2013-06-04T03:18:50.111Z · LW(p) · GW(p)

It is also possible in principle to build a stainless-steel ladder to the moon, and to write out, in alphabetical order, all intelligible English conversations consisting of less than a thousand words.

While I agree with the general point that it's important to consider impossibilities in fact, I'm not quite sure I agree where he's drawing the line between fact and principle. Does the compressive strength of stainless steel, and the implied limit on the height of a ladder constructed of it, not count as a restriction in principle?

Replies from: khafra
comment by khafra · 2013-06-04T11:44:13.935Z · LW(p) · GW(p)

It just takes some imagination. Hollow out both the Earth and the Moon to reduce their gravitational pull; support the ladder with carbon nanotube filaments; stave off collapse by pushing it around with high-efficiency ion impulse engines; etc.

I agree, though, that philosophers often make too much of the distinction between "logically impossible" and "physically impossible." There's probably no in principle possible way to hollow out the Earth significantly while retaining its structure; etc.

Replies from: DanArmak, tingram, Kawoomba
comment by DanArmak · 2013-06-08T19:05:27.570Z · LW(p) · GW(p)

support the ladder with carbon nanotube filaments

So basically, build a second ladder out of some other material that's feasible (unlike steel), and then just tie the steel ladder to it so it doesn't have to bear any weight.

comment by tingram · 2013-06-04T13:50:23.695Z · LW(p) · GW(p)

I think that often "logically possible" means "possible if you don't think too hard about it". Which is exactly Dennett's point in context: the idea that you are a brain in a vat is only conceivable if you don't think about the computing power that would be necessary for a convincing simulation.

Replies from: ChristianKl, Juno_Watt
comment by ChristianKl · 2013-06-06T21:23:09.639Z · LW(p) · GW(p)

Which is exactly Dennett's point in context: the idea that you are a brain in a vat is only conceivable if you don't think about the computing power that would be necessary for a convincing simulation.

Dreams can be quite convincing simulations that don't need that much computing power.

The worlds that people who do astral traveling perceive can be quite complex. Complex enough to convince people who engage in that practice that they really are on an astral plane. Does that mean that the people are really on an astral plane and aren't just imagining it?

Replies from: Caspian, tingram
comment by Caspian · 2013-06-08T03:01:17.423Z · LW(p) · GW(p)

The way I like to think about it is that convincingness is a 2-place function - a simulation is convincing to a particular mind/brain. If there's a reasonably well-defined interface between the mind and the simulation (e.g. the 5 senses and maybe a couple more) then it's cheating to bypass that interface and make the brain more gullible than normal, for example by introducing chemicals into the vat for that purpose.

From that perspective, dreams are not especially convincing compared to experience while awake; rather, dreamers are especially convincible.

Dennett's point seems to be that a lot of computing power would be needed to make a convincing simulation for a mind as clear-thinking as a reader who was awake. Later in the chapter he talks about other types of hallucinations.

Replies from: ChristianKl
comment by ChristianKl · 2013-06-08T19:09:12.367Z · LW(p) · GW(p)

The way I like to think about it is that convincingness is a 2-place function - a simulation is convincing to a particular mind/brain. If there's a reasonably well-defined interface between the mind and the simulation (e.g. the 5 senses and maybe a couple more)

The 5 senses are brain events. They aren't input channels into the brain. Take taste. How many different flavors of food can you perceive through your sense of taste? More than five. Why? Your brain takes data from your nose, your tongue, and your memory and fits them together into something that you perceive as taste.

Your conscious qualia perception gives you no direct access to the raw data that your nose or tongue sends to your brain.

If someone is open to receiving suggestions and you give him a hypnotic suggestion that an apple tastes like an orange, you can then wake him. If he eats the thing he will tell you that the apple is an orange. He might even get angry when someone tells him that the thing isn't an orange, because it obviously tastes like an orange.

it's cheating to bypass that interface and make the brain more gullible than normal, for example by introducing chemicals into the vat for that purpose.

You don't need to introduce any chemicals. Millions of years of evolution have trained brains to have an extremely high prior for thinking that they aren't "brains in a vat".

Doubting your own perception is an incredibly hard cognitive task.

There are experiments where an experimenter uses a single electrode to trigger a subject to do a particular task, like raising his arm. If the experimenter afterwards asks the subject why he raised the arm, the subject makes up a story and believes that story. It takes effort for the experimenter to convince the subject that he made up the story and that there was no reason he raised his arm.

comment by tingram · 2013-06-07T02:07:15.397Z · LW(p) · GW(p)

I suggest you read the opening chapter of Consciousness Explained. Someone's posted it online here.

Replies from: ChristianKl
comment by ChristianKl · 2013-06-07T09:49:54.117Z · LW(p) · GW(p)

Dennett seems to quote no actual scientific paper in the paragraph, or otherwise show that he really knows what the brain does.

You don't need to provide detailed feedback to the brain. Dennett should be well aware that humans have a blind spot in their eyes and the brain makes up information to fill the blind spot.

It's the same with suggesting to a brain in a vat that it's acting in the real world. The brain makes up the information that's missing to provide the experience of being in the real world.

To produce a strong hallucination (as I understand him, Dennett equates a strong hallucination with a complex one), you might need a channel through which you can insert information into the brain, but you don't need to provide every detail. Missing details get made up by the brain.

Replies from: Leonhart
comment by Leonhart · 2013-06-11T21:52:05.679Z · LW(p) · GW(p)

Dennett should be well aware that humans have a blind spot in their eyes and the brain makes up information to fill the blind spot.

No, Dennett explicitly denies that the brain makes up information to fill the blind spot. This is central to his thesis. He creates a whole concept called 'figment' to mock this notion.

His position is that nothing within the brain's narrative generators expects, requires, or needs data from the blind spot; hence, in consciousness, the blind spot doesn't exist. No gaps need to be filled in, any more than HJPEV can be aware that Eliezer has removed a line that he might, counterfactually, have spoken.

For a hallucination to be strong does not require the hallucination to have great internal complexity. It suffices that the brain happen to not ask too many questions.

Replies from: ChristianKl
comment by ChristianKl · 2013-06-12T09:04:11.460Z · LW(p) · GW(p)

For a hallucination to be strong does not require the hallucination to have great internal complexity.

That's a question of the definition of strong. But it seems that I read Dennett too charitably for that purpose. He defines it as:

Another conclusion it seems that we can draw from this is that strong hallucinations are simply impossible! By a strong hallucination I mean a hallucination of an apparently concrete and persisting three-dimensional object in the real world — as contrasted to flashes, geometric distortions, auras, afterimages, fleeting phantom-limb experiences, and other anomalous sensations. A strong hallucination would be, say, a ghost that talked back, that permitted you to touch it, that resisted with a sense of solidity, that cast a shadow, that was visible from any angle so that you might walk around it and see what its back looked like.

Given that definition, Dennett just seems wrong.

He continues saying:

Reports of very strong hallucinations are rare

I know multiple people in real life who report hallucinations of that strength. If you want an online source, the Tulpa forum has plenty of people who manage to have strong hallucinations of Tulpas.

The Tulpa way seems to take months or a year. If you have a strongly hypnotically suggestible person, a good hypnotist can create such a hallucination in less than an hour.

Replies from: Leonhart
comment by Leonhart · 2013-06-12T21:12:05.217Z · LW(p) · GW(p)

I think I must be misreading you. I'm puzzled that you believe this about hallucinations - that it's possible for the brain to devote enough processing power to create a "strong" hallucination in the Dennettian sense - but upthread, you seemed to be saying that dreams did not require such processing power. Dreams are surely the canonical example, for people who believe that whole swaths of world-geometry are actually being modelled, rendered and lit inside of their heads? After all, there is nothing else to be occupying the brain's horsepower; no conflicting signal source.

If I may share with you my own anecdote; when asleep, I often believe myself to be experiencing a fully sensory, qualia-rich environment. But often as I wake, there is an interim moment when I realise - it seems to be revealed - that there never was a dream. There was only a little voice making language-like statements to itself - "now I am over here now I am talking to Bob the scenery is so beautiful how rich my qualia are".

I think Dennett's position is just this; that there never was a dream, only a series of answers to spurious questions, which don't have to be consistent because nothing was awake to demand consistency.

Do you think he's wrong about dreams, too, or are you saying that waking hallucinations are importantly different? I had a quick look at the Tulpa forum and am unimpressed so far. Could you point to any examples you find particularly compelling?

If you have a strongly hypnotically suggestible person, a good hypnotist can create such a hallucination in less than an hour.

Ok, so I flat out don't believe that. If waking consciousness was that unstable, a couple of hours of immersive video gaming would leave me psychotic; and all it would take to see angels would be a mildly-well-delivered Latin Mass, rather than weeks of fasting and self-flagellation.

I'll go read about it, though.

Replies from: RobbBB, Kaj_Sotala
comment by Rob Bensinger (RobbBB) · 2013-06-16T18:11:36.395Z · LW(p) · GW(p)

there is an interim moment when I realise - it seems to be revealed - that there never was a dream. There was only a little voice making language-like statements to itself

I don't think I've ever had an experience quite like that. I've perhaps had experiences that are transitional between images and propositions -- I'm thinking by visualizing a little story to myself, and the images themselves are seamlessly semantic, like I'm on the inside of a novel and the narration is a deep component of the concrete flow of events. But to my knowledge I've never felt a sudden revelation that my mental images were 'only a little voice making language-like statements to itself', à la Dennett's suggestion that all experiences are just judgments.

Perhaps we're conceptualizing the same experience after-the-fact in different ways. Or perhaps we just have different phenomenologies. A lot of people have suggested (sometimes tongue-in-cheek) that Dennett finds his own wilder hypotheses credible because he has an unusually linguistic, abstract, qualitatively impoverished phenomenology. (Personally, I wouldn't be surprised if that's a little bit true, but I think it's a small factor compared to Dennett's philosophical commitments.)

Replies from: Juno_Watt, NancyLebovitz
comment by Juno_Watt · 2013-06-16T18:55:58.961Z · LW(p) · GW(p)

A lot of people have suggested (sometimes tongue-in-cheek) that Dennett finds his own wilder hypotheses credible because he has an unusually linguistic, abstract, qualitatively impoverished phenomenology.

He is known to be a wine connoisseur. Sidney Shoemaker once asked him why he doesn't just read the label.

comment by NancyLebovitz · 2013-06-19T12:25:10.820Z · LW(p) · GW(p)

I've occasionally had dreams where elements have backstories--- I just know something about something in my dream, without having any way of having found it out.

Replies from: ArisKatsaris, RobbBB, TheOtherDave
comment by ArisKatsaris · 2013-06-19T12:51:01.196Z · LW(p) · GW(p)

This is common, I think, or at least I've seen other people discuss it before ( http://adamcadre.livejournal.com/172934.html ), and it fits my own experience as well. From which I had the rather obvious-in-hindsight insight that the experience of knowledge is itself just another sort of experience, just another type of qualia, just like color or sound.

In dreams knowledge doesn't need to have an origin-via-discovery, same way that dream images don't need to originate in our eyes, and dream sounds don't need to originate in vibrations of our ear drums...

comment by Rob Bensinger (RobbBB) · 2013-06-19T16:25:38.457Z · LW(p) · GW(p)

Is this any different from how it feels to know something in waking life, in cases where you've forgotten where you learned it?

Replies from: Bunthut, NancyLebovitz
comment by Bunthut · 2021-12-21T22:33:09.080Z · LW(p) · GW(p)

Probably far too late to this thread, but I had multiple experiences relevant to it.

Once I had a dream and then, in the dream, I remembered I had dreamt this exact thing before, and wondered if I was dreaming now, and everything looked so real and vivid that I concluded I was not.

I can create a kind of half-dream, where I see random images and moving sequences, each at most 3 seconds or so long, in succession. I am quite drowsy but not sleeping, and I am aware in the back of my head that they are only schematic and vague.

I would say the backstories in dreams are different in that they can be clearly nonsensical. E.g. I hold and look at a glass relief; there is no movement at all, and I know it to be a movie. I know nothing of its content, and I don't believe the image of the relief to be in the movie.

comment by NancyLebovitz · 2013-06-19T23:05:02.894Z · LW(p) · GW(p)

It's hard to be sure, but I think dream elements have less of a feeling of context for me. On the other hand, is the feeling of context the side effect of having more connections to my web of memories, or is it just another tag?

comment by TheOtherDave · 2013-06-19T12:44:19.515Z · LW(p) · GW(p)

(nods) Me too. I've also had the RPG-esque variation where I've had a split awareness of the dream... I am aware of the broader narrative context, but I am also experiencing being a character in the narrative who is not aware. E.g., I know that there's something interesting behind that door, and I'm walking around the room, but I can't just go and open that door because I don't actually know that in my walking-around-the-room capacity.

comment by Kaj_Sotala · 2013-06-19T14:38:08.475Z · LW(p) · GW(p)

I'm puzzled that you believe this about hallucinations - that it's possible for the brain to devote enough processing power to create a "strong" hallucination in the Dennettian sense - but upthread, you seemed to be saying that dreams did not require such processing power.

It is perfectly consistent to both believe that (some people) can have fully realistic mental imagery, and that (most people's) dreams tend to exhibit sub-realistic mental imagery.

I have one friend who claims to have eidetic mental imagery, and I have no reason to doubt her. Thomas Metzinger discusses in Being No-One the notion of whether the brain can generate fully realistic imagery, and holds that it usually cannot, but notes the existence of eidetic imaginers as an exception to the rule.

Replies from: Leonhart
comment by Leonhart · 2013-06-19T21:59:29.086Z · LW(p) · GW(p)

Thanks for the cite: sadly, on clicking through, I get a menacing error message in a terrifying language, so evidently you can't share it that way? You are quite right that it's consistent. It's just that it surprised my model, which was saying "if realistic mental imagery is going to happen anywhere, surely it's going to be dreams, that seems obviously the time-of-least-contention-for-visual-workspace."

I'm beginning to wonder whether any useful phenomenology at all survives the Typical Mind Fallacy. Right now, if somebody turned up claiming that their inner monologue was made of butterscotch and unaccountably lapsed into Klingon from three to five PM on weekdays, I'd be all "cool story bro".

Replies from: CCC, Kaj_Sotala
comment by CCC · 2013-06-20T10:02:16.330Z · LW(p) · GW(p)

and unaccountably lapsed into Klingon from three to five PM on weekdays

Hmmm. Well, I don't speak Klingon, but I am bilingual (English/Afrikaans); my inner monologue runs in English all the time in general but, after reading this, I decided to try running it in Afrikaans for a bit. Just to see what happens. Now, my Afrikaans is substantially poorer than my English (largely, I suspect, due to lack of practice).

My inner monologue switches languages very quickly on command; however, there are some other interesting differences that happen. First of all, my inner monologue is rather drastically slowed down. I have a definite sense of having to wait for my brain to look up the right word to describe the concept I mean; that is, there is a definite sense that I know what I am thinking before I wrap it in the monologue. (This is absent when my internal monologue is in the default English; possibly because my English monologue is fast enough that I don't notice the delay). I think that that delay is the first time that I've noticed anticipatory thinking in my own head without the monologue.

There's also grammatical differences between the two languages; an English sentence translated to Afrikaans will come out with a different word order (most of the time). This has its effect on my internal monologue as well; there's a definite sense of the meanings being delivered to my language centres (or at least to the word-looking-up part thereof) in the order that would be correct for an English sentence, and the language centre having to hold certain meanings in a temporary holding space (or something) until I get to the right part of the sentence.

I also notice that my brain slips easily back into the English monologue; that's no doubt due mainly to force of habit, and did not come as a surprise.

comment by Kaj_Sotala · 2013-06-20T05:49:47.692Z · LW(p) · GW(p)

Thanks for the cite: sadly, on clicking through, I get a menacing error message in a terrifying language, so evidently you can't share it that way?

That's odd, it works on three different browsers and two different machines for me. I guess there's some geographical restriction. Here's a PDF instead, then; I was citing what's page 45 by the book's page numbering and page 60 by the PDF's.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-06-20T07:48:36.144Z · LW(p) · GW(p)

Curiously, the first time I clicked the Google Books link, I got the "Yksi sormus hallitsemaan niitä kaikkia..." message (Finnish, roughly "One Ring to rule them all..."; not an exact transcription), but the second time, it let me in.

comment by Juno_Watt · 2013-06-19T17:52:17.649Z · LW(p) · GW(p)

I think that often "logically possible" means "possible if you don't think too hard about it".

Agreed

comment by Kawoomba · 2013-06-04T11:52:21.978Z · LW(p) · GW(p)

There's probably no in principle possible way to hollow out the Earth significantly while retaining its structure;

My tulpa, which belongs to a Kardashev 3b civilization (but has its own penpal tulpas higher up), disagrees.

For example, you can construct a gravitational shell around the earth to guard against collapse by compensating the gravity. Use superglue so the wabbits and stones don't start floating. Edit: This is incorrect, stupid Tulpa. More like Kardashev F!

Replies from: Richard_Kennaway, khafra
comment by Richard_Kennaway · 2013-06-04T12:10:41.377Z · LW(p) · GW(p)

I think your tulpa is playing tricks on you. A shell around the Earth will have no effect on the interactions of bodies within it, or their interactions with everything outside the shell.

Replies from: Kawoomba
comment by Kawoomba · 2013-06-04T12:13:18.160Z · LW(p) · GW(p)

It could counteract the gravitational pull which would cause the surface of a hollow Earth to collapse otherwise. Edit: It would not :-(

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-06-04T12:16:07.158Z · LW(p) · GW(p)

A spherically symmetric shell has no effect on the gravitational field inside. It will not pull the surface of a hollow Earth outwards.
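
The shell theorem is easy to check numerically. Below is a minimal Python sketch (not from the original thread; unit masses, G = 1, and all names are illustrative assumptions): it samples a uniform spherical shell and averages the inverse-square pulls on a point inside it.

```python
# Minimal sketch, plain Python 3 stdlib only. Monte Carlo check of Newton's
# shell theorem: the net gravitational pull of a uniform spherical shell on a
# point strictly inside it averages out to zero.
import math
import random

def net_force_inside(shell_radius=1.0, point=(0.3, 0.2, -0.4), samples=200_000):
    """Average inverse-square force (G = 1, unit masses) from a uniform shell."""
    fx = fy = fz = 0.0
    px, py, pz = point
    for _ in range(samples):
        # Draw a uniformly distributed point on the shell (normalized Gaussian trick).
        gx, gy, gz = random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1)
        norm = math.sqrt(gx * gx + gy * gy + gz * gz)
        sx, sy, sz = (shell_radius * g / norm for g in (gx, gy, gz))
        dx, dy, dz = sx - px, sy - py, sz - pz
        r = math.sqrt(dx * dx + dy * dy + dz * dz)
        # Inverse-square attraction toward the sampled shell element.
        fx += dx / r**3
        fy += dy / r**3
        fz += dz / r**3
    return fx / samples, fy / samples, fz / samples

print(net_force_inside())  # each component comes out near zero (sampling noise only)
```

Each force component hovers around zero, whereas a point outside the shell would feel the full pull of the shell's mass as if concentrated at its centre.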

Replies from: Kawoomba
comment by Kawoomba · 2013-06-04T12:25:01.033Z · LW(p) · GW(p)

You're correct. There's other ways to guard against collapse of an empty shell, it's a similar scenario to guarding against collapse of a Dyson sphere.

comment by khafra · 2013-06-04T12:05:04.896Z · LW(p) · GW(p)

Hey, that's a great idea--lots of little black hole-fueled satellites in low-earth orbit, suspending the crust so it doesn't collapse in on itself. I think we can build this ladder, after all!

edit: I think this falls prey to the shell theorem if they're in a geodesic orbit, but not if they're using constant acceleration to maintain their altitude, and vectoring their exhaust so it doesn't touch the Earth.

comment by B_For_Bandana · 2013-06-03T01:36:03.485Z · LW(p) · GW(p)

I'm someone who still finds subjective experience mysterious, and I'd like to fix that. Does that book provide a good, gut-level, question-dissolving explanation?

Replies from: TheOtherDave, tingram, nigerweiss
comment by TheOtherDave · 2013-06-03T02:41:24.219Z · LW(p) · GW(p)

I've had that conversation with a few people over the years, and I conclude that it does for some people and not others. The ones for whom it doesn't generally seem to think of it as a piece of misdirection, in which Dennett answers in great detail a different question than the one that was being asked. (It's not entirely clear to me what question they think he answers instead.)

That said, it's a pretty fun read. If the subject interests you, I'd recommend sitting down and writing out as clearly as you can what it is you find mysterious about subjective experience, and then reading the book and seeing if it answers, or at least addresses, that question.

Replies from: DanArmak
comment by DanArmak · 2013-06-08T19:11:41.171Z · LW(p) · GW(p)

The ones for whom it doesn't generally seem to think of it as a piece of misdirection, in which Dennett answers in great detail a different question than the one that was being asked. (It's not entirely clear to me what question they think he answers instead.)

He seems to answer the question of why humans feel and report that they are conscious; why, in fact, they are conscious. But I don't know how to translate that into an explanation of why I am conscious.

The problem that many people (including myself) feel to be mysterious is qualia. I know indisputably that I have qualia, or subjective experience. But I have no idea why that is, or what that means, or even what it would really mean for things to be otherwise (other than a total lack of experience, as in death).

A perfect and complete explanation of the behavior of humans still doesn't seem to bridge the gap from "objective" to "subjective" experience.

I don't claim to understand the question. Understanding it would mean having some idea over what possible answers or explanations might be like, and how to judge if they are right or wrong. And I have no idea. But what Dennett writes doesn't seem to answer the question or dissolve it.

Replies from: bojangles, TheOtherDave
comment by bojangles · 2013-06-08T20:32:17.652Z · LW(p) · GW(p)

Here's how I got rid of my gut feeling that qualia are both real and ineffable.

First, phrasing the problem:

Even David Chalmers thinks there are some things about qualia that are effable. Some of the structural properties of experience - for example, why colour qualia can be represented in a 3-dimensional space (hue, saturation, and brightness) - might be explained by structural properties of light and the brain, and might be susceptible to third-party investigation.
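
As a concrete illustration of that three-dimensional structure, here is a minimal Python sketch (standard library only; the colour values are made-up examples, not from the original comment) that expresses the same colours in RGB and in hue/saturation/value coordinates, with value playing the role of brightness.

```python
# Minimal sketch: the same colours in two 3-dimensional coordinate systems,
# RGB and HSV (hue, saturation, value). Python stdlib only.
import colorsys

examples = {"firetruck red": (0.9, 0.1, 0.1), "leaf green": (0.1, 0.8, 0.2)}
for name, rgb in examples.items():
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    print(f"{name}: RGB={rgb} -> hue={h:.2f}, saturation={s:.2f}, value={v:.2f}")
```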

What he would call ineffable are the intrinsic properties of experience. With regard to colour-space, think of spectrum inversion. When we look at a firetruck, the quale I see is the one you would call "green" if you could access it, but since I learned my colour words by looking at firetrucks, I still call it "red".

If you think this is coherent, you believe in ineffable qualia: even though our colour-spaces are structurally identical, the "atoms" of experience additionally have intrinsic natures (I'll call these eg. RED and GREEN) which are non-causal and cannot be objectively discovered.

You can show that ineffable qualia (experiential intrinsic natures, independent of experiential structure) aren't real by showing that spectrum inversion (changing the intrinsic natures, keeping the structure) is incoherent.

An attempt at a solution:

Take another experiential "spectrum": pleasure vs. displeasure. Spectrum inversion is harder, I'd say impossible, to take seriously in this case. If someone seeks out P, tells everyone P is wonderful, laughs and smiles when P happens, and even herself believes (by means of mental representations or whatever) that P is pleasant, then it makes no sense to me to imagine P really "ultimately" being UNPLEASANT for her.

Anyway, if pleasure-displeasure can't be noncausally inverted, then neither can colour-qualia. The three colour-space dimensions aren't really all you need to represent colour experience. Colour experience doesn't, and can't, ever occur isolated from other cognition.

For example: seeing a lot of red puts monkeys on edge. So imagine putting a spectrum-inverted monkey in a (to us) red room, and another in a (to us) green room.

If the monkey in the green (to it, RED') room gets antsy, or the monkey in the red (to it, GREEN') room doesn't, then that means the spectrum-inversion was causal and ineffable qualia don't exist.

But if the monkey in the green room doesn't get antsy, or the monkey in the red room does, then it hasn't been a full spectrum inversion. RED' without antsiness is not the same quale as RED with antsiness. If all the other experiential spectra remain uninverted, it might even look surprisingly like GREEN. But to make the inversion successfully, you'd have to flip all the other experiential spectra that connect with colour, including antsiness vs. serenity, and through that, pleasure vs. displeasure.

This isn't knockdown, but it convinced me.

Replies from: DavidAgain, DanArmak
comment by DavidAgain · 2013-06-19T13:18:57.958Z · LW(p) · GW(p)

I'm not sure pleasure/pain is that useful, because 1) they have such an intuitive link to reaction/function, and 2) they might be meta-qualities: a similar sensation of pain can be strongly unpleasant, entirely tolerable or even enjoyable depending on other factors.

What you've done with colours is take what feels like a somewhat arbitrary/ineffable quale and declare it inextricably associated with one that has direct behavioural terms involved. Your talk of what's required to 'make the inversion successfully' is misleading: what if the monkey has GREEN and antsiness rather than RED and antsiness?

It seems intuitive to assume 'red' and 'green' remain the same in normal conditions: but I'm left totally lost as to what 'red' would look like to a creature that could see a far wider or narrower spectrum than the one we can see. Or for that matter to someone with limited colour-blindness. There seems to me to be the Nagel 'what is it like to be a bat?' problem, and I've never understood how that dissolves.

It's been a long time since I read Dennett, but I was in the camp of 'not answering the question, while being fascinating around the edges and giving people who think qualia are straightforward pause for thought'. No-one's ever been able to clearly explain how his arguments work to me, to the point that I suggest that either I or they are fundamentally missing something.

If the hard problem of consciousness has really been solved I'd really like to know!

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-19T13:35:37.052Z · LW(p) · GW(p)

Consider the following dialog:
A: "Why do containers contain their contents?"
B: "Well, because they are made out of impermeable materials arranged in such a fashion that there is no path between their contents and the rest of the universe."
A: "Yes, of course, I know that, but why does that lead to containment?"
B: "I don't quite understand. Are you asking what properties of materials make them impermeable, or what properties of shapes preclude paths between inside and outside? That can get a little technical, but basically it works like this --"
A: "No, no, I understand that stuff. I've been studying containment for years; I understand the simple problem of containment quite well. I'm asking about the hard problem of containment: how does containment arise from those merely mechanical things?"
B: "Huh? Those 'merely mechanical things' are just what containment is. If there's no path X can take from inside Y to outside Y, X is contained by Y. What is left to explain?"
A: "That's an admirable formulation of the hard problem of containment, but it doesn't solve it."

How would you reply to A?

Replies from: Juno_Watt, Kawoomba, DavidAgain
comment by Juno_Watt · 2013-06-19T17:59:00.966Z · LW(p) · GW(p)

There's nothing left to explain about containment. There's something left to explain about consc.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-19T18:39:17.069Z · LW(p) · GW(p)

Would you expect that reply to convince A?
Or would you just accept that A might go on believing that there's something important and ineffable left to explain about containment, and there's not much you can do about it?
Or something else?

Replies from: shminux, Juno_Watt
comment by Shmi (shminux) · 2013-06-24T20:46:10.174Z · LW(p) · GW(p)

If you were a container, you would understand the wonderful feeling of containment, the insatiable longing to contain, the sweet anticipation of the content being loaded, the ultimate reason for containing, and other incomparably wonderful and tortuous qualia no non-container can enjoy. Not being one, all you can understand is the mechanics of containment, a pale shadow of the rich and true containing experience.

OK, maybe I'm getting a bit NSFW here...

comment by Juno_Watt · 2013-06-24T18:53:33.823Z · LW(p) · GW(p)

Would you expect that reply to convince A?

It is for A to state what the remaining problem actually is. And qualiphiles can do that:

D: I can explain how conscious entities respond to their environments, process information and behave. What more is there?

C: How it all looks from the inside -- the qualia.

comment by Kawoomba · 2013-06-19T14:21:03.089Z · LW(p) · GW(p)

That's funny: David again and the other David, arguing about the hard versus the "soft" problem of consciousness. Have you two lost your original?

I think A and B are sticking different terminology on a similar thing. A laments that the "real" problem hasn't been solved; B points out that it has been, to the extent that it can be solved. Yet in a way they tread common ground:

A believes there are aspects of the problem of con(tainment|sciousness) that didn't get explained away by a "mechanistic" model.

B believes that a (probably reductionist) model suffices, "this configuration of matter/energy can be called 'conscious'" is not fundamentally different from "this configuration of matter/energy can be called 'a particle'". If you're content with such an explanation for the latter, why not the former? ...

However, with many Bs I find that even accepting a matter-of-fact workable definition of "these states correspond to consciousness" is used as a stop sign more so than as a starting point.

Just as A insists that further questions exist, so should B, and many of those questions would be quite similar, to the point of practically dissolving the initial difference.

Off the top of my head: If the experience of qualia is a potential side-effect of physical objects, is it configuration-dependent or does everything have it in some raw, unprocessed form? Is it just that the qualia we experience are modulated and processed by virtue of the relevant matter (brain) being in a state which can organize memories, reflect on its experiences etc.?

Anthropic considerations apply: Even if anything had a "value" for "subjective experience", we would know only about our own, and probably only ascribe that property to similar 'things' (other humans or highly developed mammals). But is it just because those can reflect upon that property? Are waterfalls conscious, even if not sentient? "What an algorithm feels like on the inside" - any natural phenomenon is executing algorithms just the same as our neurons and glial cells do. Is it because we can ascribe correspondences between structure in our brain and external structures, i.e. models? We can find the same models within a waterfall, simply by finding another mapping function.

So is it the difference between us and a waterfall that enables the capacity for qualia, something to do with communication, memory, planning? It's not clear why qualia should depend on "only things that can communicate can experience qualia", for example. That sounds more like an anthropic concern: Of course we can understand another human relate its qualia experience better than a waterfall could -- if it did experience it. Occam's Razor may prefer "everything can experience" to "only very special configurations of matter can experience", keeping in mind that the internal structure of a waterfall is just as complex as a human brain.

It seems to me that A is better in tune with the many questions that remain, while B has more of an engineer's mindset, a la "I can work with that, what more do I want?". "Here be dragons" is what follows even the most dissolv-y explanation of qualia, and trying to stay out of those murky waters isn't a reason to deny their existence.

Replies from: TheOtherDave, TheOtherDave
comment by TheOtherDave · 2013-06-19T15:38:47.168Z · LW(p) · GW(p)

Have you two lost your original?

I can no longer remember if there was actually an active David when I joined, or if I just picked the name on a lark. I frequently introduce myself in real life as "Dave -- no, not that Dave, the other one."

Replies from: Desrtopa
comment by Desrtopa · 2013-06-19T15:46:29.954Z · LW(p) · GW(p)

I always assumed that the name was originally to distinguish you from David Gerard.

comment by TheOtherDave · 2013-06-19T15:24:05.797Z · LW(p) · GW(p)

Sure, I agree that there may be systems that have subjective experience but do not manifest that subjective experience in any way we recognize or understand.
Or, there may not.

In the absence of any suggestion of what might be evidence one way or the other, in the absence of any notion of what I would differentially expect to observe in one condition over the other, I don't see any value to asking the question. If it makes you feel better if I don't deny their existence, well, OK, I don't deny their existence, but I really can't see why anyone should care one way or the other.

In any case, I don't agree that the B's studying conscious experience fail to explore further questions. Quite the contrary, they've made some pretty impressive progress in the last five or six decades towards understanding just how the neurobiological substrate of conscious systems actually works. They simply don't explore the particular questions you're talking about here.

And it's not clear to me that the A's exploring those questions are accomplishing anything.

If the experience of qualia is a potential side-effect of physical objects, is it configuration-dependent or does everything have it in some raw, unprocessed form?

So, A asks "If containment is a potential side-effect of physical objects, is it configuration-dependent or does everything have it in some raw, unprocessed form?"
How would you reply to A?

My response is something like "We know that certain configurations of physical objects give rise to containment. Sure, it's not impossible that "unprocessed containment" exists in other systems, and we just haven't ever noticed it, but why are you even asking that question?"

comment by DavidAgain · 2013-06-19T13:54:04.387Z · LW(p) · GW(p)

But I don't think conscious experience (qualia if you like) has been explained. I think we have some pretty good explanations of how people act, but I don't see how that pierces through to consciousness as experienced, and to linked questions such as 'what is it like to be a bat?' or 'how do I know my green isn't your red?'

It would help if you could sum up the merely mechanical things that are 'just what consciousness is' in Dennett's (or your!) sense. I've never been clear on what confident materialists are saying on this: I'm sometimes left with the impression that they're denying that we have subjective experience, sometimes that they're saying it's somehow an inherent quality of other things, sometimes that it's an incidental byproduct. All of these seem to be problematic to me.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-19T15:05:46.974Z · LW(p) · GW(p)

It would help if you could sum up the merely mechanical things that are 'just what consciousness is' in Dennett's (or your!) sense.

I don't think it would, actually.

The merely mechanical things that are 'just what consciousness is' in Dennett's sense are the "soft problem of consciousness" in Chalmers' sense; I don't expect any amount of summarizing or detailing the former to help anyone feel like the "hard problem of consciousness" has been addressed, any more than I expect any amount of explanation of materials science or topology to help A feel like the "hard problem of containment" has been addressed.

But, since you asked: I'm not denying that we have subjective experiences (nor do I believe Dennett is), and I am saying that those experiences are a consequence of our neurobiology (as I believe Dennett does). If you're looking for more details of things like how certain patterns of photons trigger increased activation levels of certain neural structures, there are better people to ask than me, but I don't think that's what you're looking for.

As for whether they are an inherent quality or an incidental byproduct of that neurobiology, I'm not sure I even understand the question. Is being a container an inherent quality of being composed of certain materials and having certain shape, or an incidental byproduct? How would I tell?

And: how would you reply to A?

Replies from: Juno_Watt, DavidAgain
comment by Juno_Watt · 2013-06-19T18:02:37.498Z · LW(p) · GW(p)

and I am saying that those experiences are a consequence of our neurobiology

That's such a broad statement, it could cover some forms of dualism.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-19T18:37:12.044Z · LW(p) · GW(p)

Agreed.

comment by DavidAgain · 2013-06-19T15:51:04.801Z · LW(p) · GW(p)

I may not remember Chalmers' soft problem well enough for that reference to help, either!

If experiences are a consequence of our neurobiology, fine. Presumably a consequence that itself has consequences: experiences can be used in causal explanations? But it seems to me that we could explain how a bat uses echolocation without knowing what echolocation looks like (sounds like? feels like?) to a bat. And that we could determine how well people distinguish wavelengths of light etc. without knowing what the colour looks like to them.

It seems subjective experience is just being ignored: we could identify that an AI could carry out all sorts of tasks that we associate with consciousness, but I have no idea when we'd say 'it now has conscious experiences'. Or whether we'd talk about degrees of conscious experience, or whatever. This is obviously ethically quite important, if not that directly pertinent to me, and it bothers me that I can't respond to it.

With a container, you describe various qualities and that leaves the question 'can it contain things': do things stay in it when put there. You're adding a sort of purpose-based functional classification to a physical object. When we ask 'is something conscious', we're not asking about a function that it can perform. On a similar note, I don't think we're trying to reify something (as with the case where we have a sense of objects having ongoing identity, which we then treat as a fundamental thing and end up asking if a ship is the same after you replace every component of it one by one). We're not chasing some over-abstracted ideal of consciousness, we're trying to explain an experienced reality.

So to answer A, I'd say 'there is no fundamental property of 'containment'. It's just a word we use to describe one thing surrounded by another in circumstances X and Y. You're over-idealising a useful functional concept'. The same is not true of consciousness, because it's not (just) a function.

It might help if you could identify what, in light of a Dennett-type approach, we can identify as conscious or not. I.e. plants, animals, simple computers, top-level computers, theoretical super-computers of various kinds, theoretical complex networks divided across large areas so that each signal from one part to another takes weeks...

Replies from: TheOtherDave, TheOtherDave, TheOtherDave
comment by TheOtherDave · 2013-06-19T17:32:07.511Z · LW(p) · GW(p)

I'm splitting up my response to this into several pieces because it got long. Some other stuff:

what, in light of a Dennett-type approach, we can identify as conscious or not.

The process isn't anything special, but OK, since you ask.

Let's assert for simplicity that "I" has a relatively straightforward and consistent referent, just to get us off the ground. Given that, I conclude that I am at least sometimes capable of subjective experience, because I've observed myself subjectively experiencing.

I further observe that my subjective experiences reliably and differentially predict certain behaviors. I do certain things when I experience pain, for example, and different things when I experience pleasure. When I observe other entities (E2) performing those behaviors, that's evidence that they, too, experience pain and pleasure. Similar reasoning applies to other kinds of subjective experience.

I look for commonalities among E2 and I generalize across those commonalities. I notice certain biological structures are common to E2 and that when I manipulate those structures, I reliably and differentially get changes in the above-referenced behavior. Later, I observe additional entities (E3) that have similar structures; that's evidence that E3 also demonstrates subjective experience, even though E3 doesn't behave the way I do.

Later, I build an artificial structure (E4) and I observe that there are certain properties (P1) of E2 which, when I reproduce them in E4 without reproducing other properties (P2), reproduce the behavior of E2. I conclude that P1 is an important part of that behavior, and P2 is not.

I continue this process of observation and inference and continue to draw conclusions based on it. And at some point someone asks "is X conscious?" for various Xes:

I.e. plants, animals, simple computers, top-level computers, theoretical super-computers of various kinds, theoretical complex networks divided across large areas so that each signal from one part to another takes weeks...

If I interpret "conscious" as meaning having subjective experience, then for each X I observe it carefully and look for the kinds of attributes I've attributed to subjective experience... behaviors, anatomical structures, formal structures, etc... and compare it to my accumulated knowledge to make a decision.

Isn't that how you answer such questions as well?

If not, then I'll ask you the same question: what, in light of whatever non-Dennett-type approach you prefer, can we identify as conscious or not?

Replies from: DavidAgain
comment by DavidAgain · 2013-06-19T22:23:05.265Z · LW(p) · GW(p)

Ok, well given that responses to pain/pleasure can equally be explained by more direct evolutionary reasons, I'm not sure that the inference from action to experience is very useful. Why would you ever connect these things with experience rather than other, more directly measurable things?

But the point is definitely not that I have a magic bullet or easy solution: it's that I think there's a real and urgent question - are they conscious - which I don't see how information about responses etc. can answer. Compare to the cases of containment, or heat, or life - all the urgent questions are already resolved before those issues are even raised.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-19T23:32:00.292Z · LW(p) · GW(p)

As I say, the best way I know of to answer "is it conscious?" about X is to compare X to other systems about which I have confidence-levels about its consciousness and look for commonalities and distinctions.

If there are alternative approaches that you think give us more reliable answers, I'd love to hear about them.

Replies from: DavidAgain
comment by DavidAgain · 2013-06-19T23:39:11.941Z · LW(p) · GW(p)

I have no reliable answers! And I have low meta-confidence levels (in that it seems clear to me that people and most other creatures are conscious, but I have no confidence in why I think this).

If the Dennett position still sees this as a complete bafflement but thinks it will be resolved with the so-called 'soft' problem, I have less of an issue than I thought I did. Though I'd still regard the view that the issue will become clear as one of hope rather than evidence.

comment by TheOtherDave · 2013-06-19T17:31:56.244Z · LW(p) · GW(p)

I'm splitting up my response to this into several pieces because it got long. Some other stuff:

Presumably a consequence that itself has consequences: experiences can be used in causal explanations?

I expect so, sure. For example, I report having experiences; one explanation of that (though hardly the only possible one) starts with my actually having experiences and progresses forward in a causal fashion.

But it seems to me that we could explain how a bat uses echolocation without knowing what echolocation looks like (sounds like? feels like?) to a bat.

Sure, there are many causal explanations of many phenomena, including but not limited to how bats use echolocation, that don't posit subjective experience as part of their causal chain. For example, humans do all kinds of things without the subjective experience of doing them.

And that we could determine how well people distinguish wavelengths of light etc. without knowing what the colour looks like to them.

Certainly.

It seems subjective experience is just being ignored

In the examples you give, yes, it is being ignored. So? Lots of things are being ignored in those examples... mass, electrical conductivity, street address, level of fluency in Russian, etc. If these things aren't necessary to explain the examples, there's nothing wrong with ignoring them.

On the other hand, if we look at an example for which experience ought to be part of the causal chain (for example, as I note above, reporting having those experiences), subjective experience is not ignored. X happens, as a consequence of X a subjective experience Y arises, as a consequence of Y a report Z arises, and so forth. (Of course, for some reports we do have explanations that don't presume Y... e.g., confabulation, automatic writing, etc. But that needn't be true for all reports. Indeed, it would be surprising if it were.)

"But we don't know what Xes give rise to the Y of subjective experience, so we don't fully understand subjective experience!" Well, yes, that's true. We don't fully understand fluency in Russian, either. But we don't go around as a consequence positing some mysterious essence of Russian fluency that resists neurobiological explanation... though two centuries ago, we might have done so. Nor should we. Neither should we posit some mysterious essence of subjective experience.

"But subjective experience is different! I can imagine what a mechanical explanation of Russian fluency would be like, but I can't imagine what a mechanical explanation of subjective experience would be like." Sure, I understand that. Two centuries ago, the notion of a mechanical explanation of Russian fluency would raise similar incredulity... how could a machine speak Russian? I'm not sure how I could go about answering such incredulity convincingly, but I don't thereby conclude that machines can't speak Russian. My incredulity may be resistant to my reason, but it doesn't therefore compel or override my reason.

Replies from: DavidAgain
comment by DavidAgain · 2013-06-19T22:04:44.212Z · LW(p) · GW(p)

I have a lot of sympathy for this. The most plausible position for reductive materialism is simply that at some future scientific point this will become clear. But this is inevitably a statement of faith, rather than an acknowledgement of current achievement. It's very hard to compare current apparent mysteries to solved mysteries - I do get that. Having said that, I can't even see what the steps on the way to explaining consciousness would be, and claiming there is no such thing seems not to be an option (unlike 'life', 'free will' etc.), whereas in most other cases the objection rests on not seeing how the full extent could be achieved: a machine might speak crap Russian in some circumstances, etc.

Also, if a machine can speak Russian, you can check that. I don't know how we'd check a machine was conscious.

BTW, when I said 'it seems subjective experience is just being ignored', I meant ignored in your and Dennett's arguments, not in specific explanations. I have nothing against analysing things in ways that ignore consciousness, if they work.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-19T23:35:48.255Z · LW(p) · GW(p)

I don't know what the mechanical explanation would look like, either. But I'm sufficiently aware of how ignorant my counterparts two centuries ago would have been of what a mechanical explanation for speaking Russian would look like that I don't place too much significance on my ignorance.

I agree that testing whether a system is conscious or not is a tricky problem. (This doesn't just apply to artificial systems.)

Replies from: DavidAgain
comment by DavidAgain · 2013-06-19T23:43:57.078Z · LW(p) · GW(p)

Indeed: though artificial systems are more intuitively difficult as we don't have as clear an intuitive expectation.

You can take an outside view and say 'this will dissolve like the other mysteries'. I just genuinely find this implausible, if only because you can take steps towards the other mysteries (speaking bad Russian occasionally) and because you have a clear empirical standard (Russians). Whereas for consciousness I don't have any standard for identifying another's consciousness: I do it only by analogy with myself and by the implausibility of me having an apparently causal element that others who act similarly to me lack.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-19T23:54:43.462Z · LW(p) · GW(p)

I agree that the "consciousness-detector" problem is a hard problem. I just can't think of a better answer than the generalizing-from-commonalities strategy I discussed previously, so that's the approach I go with. It seems capable of making progress for now.

And I understand that you find it implausible. That said, I suspect that if we solve the "soft" problem of consciousness well enough that a typical human is inclined to treat an artificial system as though it were conscious, it will start to seem more plausible.

Perhaps it will be plausible and incorrect, and we will happily go along treating computers as conscious when they are no such thing. Perhaps we're currently going along treating dogs and monkeys and 90% of humans as conscious when they are no such thing.

Perhaps not.

Either way, plausibility (or the absence of it) doesn't really tell us much.

Replies from: DavidAgain
comment by DavidAgain · 2013-06-20T10:44:21.276Z · LW(p) · GW(p)

Yes. This is what worries me: I can see more advances making everyone sure that computers are conscious, but my suspicion is that this will not be logical. Take the same processor and I suspect the chances of it being seen as conscious will rise sharply if it's put in a moving machine, rise sharply again for a humanoid, again for a face/voice, and again for something physically indistinguishable from a human.

The problem with generalising from commonalities is that I have precisely one direct example of consciousness. Although having said that, I don't find epiphenomenal accounts convincing, so it's reasonable for me to think that, as my statements about qualia seem to follow causally from experiencing said qualia, other people don't have a totally separate framework for their statements about qualia. I wouldn't be that confident though, and it gets harder with artificial consciousness.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-20T13:09:59.666Z · LW(p) · GW(p)

Sure. By the same token, if you take me, remove my ability to communicate, and encase me in an opaque cylinder, nobody will recognize me as a being with subjective experience. Or, for that matter, as a being with the ability to construct English sentences.

We are bounded intellects reasoning under uncertainty in a noisy environment. We will get stuff wrong. Sometimes it will be important stuff.

it's reasonable for me to think that, as my statements about qualia seem to follow causally from experiencing said qualia, other people don't have a totally separate framework for their statements about qualia.

I agree. And, as I said initially, I apply the same reasoning not only to the statements I make in English, but to all manner of behaviors that "seem to rise from my qualia," as you put it... all of it is evidence in favor of other organisms also having subjective experience, even organisms that don't speak English.

I wouldn't be that confident though,

How confident are you that I possess subjective experience?
Would that confidence rise significantly if we met in person and you verified that I have a typical human body?

and it gets harder with artificial consciousness.

Agreed.

Replies from: DavidAgain
comment by DavidAgain · 2013-06-20T14:33:30.818Z · LW(p) · GW(p)

Consciousness does seem different in that we can have a better and better understanding of all the various functional elements, but we're 1) left with a sort of argument from analogy for others having qualia, and 2) even if we can resolve (1), I can't see how we can start to know whether my green is your red, etc.

I can't think of many comparable cases: certainly I don't think containership is comparable. You and I could end up looking at the AI in the moment before it destroys/idealises/both the world and say 'gosh, I wonder if it's conscious'. This is nothing like the casuistic 'but what is it about this container that gives it its containerness'. I think we're on the same point here, though?

I'm intuitively very confident you're conscious: and yes, seeing you were human would help (in that one of the easiest ways I can imagine you weren't conscious is that you're actually a computer designed to post about things on Less Wrong. This would also explain why you like Dennett - I've always suspected he's a qualia-less robot too! ;-))

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-20T15:11:18.465Z · LW(p) · GW(p)

Yes, I agree that we're much more confused about subjective experience than we are about containership.

We're also more confused about subjective experience than we are about natural language, about solving math problems, about several other aspects of cognition. We're not _un_confused about those things, but we're less confused than we used to be. I expect us to grow still less confused over time.

I disagree about the lack of comparable cases. I agree about containers; that's just an intuition pump. But the issues that concern you here arise for any theoretical construct for which we have only indirect evidence. The history of science is full of such things. Electrons. Black holes. Many worlds. Fibromyalgia. Phlogiston. Etc.

What makes subjective experience different is not that we lack the ability to perceive it directly; that's pretty common. What makes it different is that we can perceive it directly in one case, as opposed to the other stuff where we perceive it directly in zero cases.

Of course, it's also different from many of them in that it matters to our moral reasoning in many cases. I can't think of a moral decision that depends on whether phlogiston exists, but I can easily think of a moral decision that depends on whether cows have subjective experiences. OTOH, it still isn't unique; some people make moral decisions that depend on the actuality of theoretical constructs like many worlds and PTSD.

Replies from: DavidAgain
comment by DavidAgain · 2013-06-20T15:23:08.636Z · LW(p) · GW(p)

Fair enough. As an intuition pump, for me at least, it's unhelpful: it gave the impression that you thought that consciousness was merely a label being mistaken for a thing (like 'life' as something beyond its parts).

Only having indirect evidence isn't the problem. For a black hole, I care about the observable functional parts. I wouldn't be being sucked towards it and being crushed while going 'but is it really a black hole?' A black hole is like a container here: what matter are the functional bits that make it up. For consciousness, I care if a robot can reason and can display conscious-type behaviour, but I also care if it can experience and feel.

Many worlds could be comparable if there is evidence that means that there are 'many worlds' but people are vague about if these worlds actually exist. And you're right, this is also a potentially morally relevant point.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-20T16:20:19.410Z · LW(p) · GW(p)

Insofar as people infer from the fact of subjective experience that there is some essence of subjective experience that is, as you say, "beyond its parts" (and their patterns of interaction), I do in fact think they are mistaking a label for a thing.

Replies from: DavidAgain
comment by DavidAgain · 2013-06-20T16:27:30.992Z · LW(p) · GW(p)

I dunno about essences. The point is that you can observe lots of interactions of neurons and behaviours and be left with an argument from analogy to say "they must be conscious because I am and they are really similar, and the idea that my consciousness is divorced from what I do is just wacky".

You can observe all the externally observable, measurable things that a black hole or container can do, and then if someone argues about essences you wonder if they're actually referring to anything: it's a purely semantic debate. But you can observe all the things a fish, or tree, or advanced computer can do, predict it for all useful purposes, and still not know if it's conscious. This is bothersome. But it's not to do with essences, necessarily.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-20T17:12:36.818Z · LW(p) · GW(p)

Insofar as people don't infer something else, beyond the parts of (for example) my body and their pattern of interactions, that accounts for (for example) my subjective experience, I don't think they are mistaking a label for a thing.

Replies from: DavidAgain
comment by DavidAgain · 2013-06-20T17:21:41.032Z · LW(p) · GW(p)

Well, until we know how to identify if something/someone is conscious, it's all a bit of a mystery: I couldn't rule out consciousness being some additional thing. I have an inclination to do so because it seems unparsimonious, but that's it.

comment by TheOtherDave · 2013-06-19T17:31:41.938Z · LW(p) · GW(p)

I'm splitting up my response to this into several pieces because it got long.

The key bit, IMHO:

So to answer A, I'd say 'there is no fundamental property of 'containment'. It's just a word we use to describe one thing surrounded by another in circumstances X and Y.

And I would agree with you.

and that leaves the question 'can it contain things': [..] The same is not true of consciousness, because it's not (just) a function.

"No," replies A, "you miss the point completely. I don't ask whether a container can contain things; clearly it can, I observe it doing so. I ask how it contains things. What is the explanation for its demonstrated ability to contain things? Containership is not just a function," A insists, "though I understand you want to treat it as one. No, containership is a fundamental essence. You can't simply ignore the hard question of "is X a container?" in favor of thinking about simpler, merely functional questions like "can X contain Y?". And, while we're at it," A coninues, "what makes you think that an artificial container, such as we build all the time, is actually containing anything rather than merely emulating containership? Sure, perhaps we can't tell the difference, but that doesn't mean there isn't a difference."

I take it you don't find A's argument convincing, and neither do I, but it's not clear to me what either of us could say to A that A would find at all compelling.

Replies from: DavidAgain, TimS
comment by DavidAgain · 2013-06-19T21:56:50.894Z · LW(p) · GW(p)

Maybe we couldn't, but A is simply asserting that containership is a concept beyond its parts, whereas I'm appealing directly to experience: the relevance of this is that whether something has experience matters. Ultimately for any case, if others just express bewilderment at your concepts and apparently don't get what you're talking about, you can't prove it's an issue. But at any rate, most people seem to have subjective experience.

Being conscious isn't a label I apply to certain conscious-type systems that I deem 'valuable' or 'true' in some way. Rather, I want to know what systems should be associated with the clearly relevant and important category of 'conscious'.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-19T23:28:28.474Z · LW(p) · GW(p)

My thoughts about how I go about associating systems with the expectation of subjective experience are elsewhere and I have nothing new to add to it here.

As regards you and A... I realize that you are appealing directly to experience, whereas A is merely appealing to containment, and I accept that it's obvious to you that experience is importantly different from containment in a way that makes your position importantly non-analogous to A's.

I have no response to A that I expect A to find compelling... they simply don't believe that containership is fully explained by the permeability and topology of containers. And, you know, maybe they're right... maybe some day someone will come up with a superior explanation of containerhood that depends on some previously unsuspected property of containers and we'll all be amazed at the realization that containers aren't what we thought they were. I don't find it likely, though.

I also have no response to you that I expect you to find compelling. And maybe someday someone will come up with a superior explanation of consciousness that depends on some previously unsuspected property of conscious systems, and I'll be amazed at the realization that such systems aren't what I thought they were, and that you were right all along.

Replies from: DavidAgain
comment by DavidAgain · 2013-06-19T23:48:33.926Z · LW(p) · GW(p)

Are you saying you don't experience qualia and find them a bit surprising (in a way you don't for containerness)? I find it really hard to not see arguments of this kind as a little disingenuous: is the issue genuinely not difficult for some people, or is this a rhetorical stance intended to provoke better arguments, or awareness of the weakness of current arguments?

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-20T00:01:11.854Z · LW(p) · GW(p)

I have subjective experiences. If that's the same thing as experiencing qualia, then I experience qualia.

I'm not quite sure what you mean by "surprising" here... no, it does not surprise me that I have subjective experiences, I've become rather accustomed to it over the years. I frequently find the idea that my subjective experiences are a function of the formal processes my neurobiology implements a challenging idea... is that what you're asking?

Then again, I frequently find the idea that my memories of my dead father are a function of the formal processes my neurobiology implements a challenging idea as well. What, on your view, am I entitled to infer from that?

Replies from: DavidAgain
comment by DavidAgain · 2013-06-20T10:46:36.972Z · LW(p) · GW(p)

Yes, I meant surprising in light of other discoveries/beliefs.

On memory: is it the conscious experience that's challenging (in which case it's just a sub-set of the same issue) or do you find the functional aspects of memory challenging? Even though I know almost nothing about how memory works, I can see plausible models of how it could work, unlike consciousness.

comment by TimS · 2013-06-19T17:52:03.468Z · LW(p) · GW(p)

Isn't our objection to A's position that it doesn't pay rent in anticipated experience? If one thinks there is a "hard problem of consciousness" such that different answers would cause one to behave differently, then one must take up the burden of identifying what the difference would look like, even if we can't create a measuring device to find it just now.

Sure, perhaps we can't tell the difference, but that doesn't mean there isn't a difference.

If A means that we cannot determine the difference in principle, then there's nothing we should do differently. If A means that a measuring device does not currently exist, he needs to identify the range of possible outputs of the device.

Replies from: NancyLebovitz, Juno_Watt
comment by NancyLebovitz · 2013-06-19T22:54:59.724Z · LW(p) · GW(p)

Isn't our objection to A's position that it doesn't pay rent in anticipated experience?

This may be a situation where that's a messy question. After all, qualia are experience. I keep expecting experiences, and I keep having experiences. Do experiences have to be publicly verifiable?

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-19T23:42:04.683Z · LW(p) · GW(p)

If two theories both lead me to anticipate the same experience, the fact that I have that experience isn't grounds for choosing among them.

So, sure, the fact that I keep having experiences is grounds for preferring a theory of subjective-experience-explaining-but-otherwise-mysterious qualia over a theory that predicts no subjective experience at all, but not necessarily grounds for preferring it to a theory of subjective-experience-explaining-neural-activity.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-06-25T02:59:40.387Z · LW(p) · GW(p)

If two theories both lead me to anticipate the same experience,

They don't necessarily once you start talking about uploads, or the afterlife for that matter.

comment by Juno_Watt · 2013-06-19T18:29:58.028Z · LW(p) · GW(p)

Different answers to the HP would undoubtedly change our behaviour, because they would indicate that different classes of entity have feelings, which bears on morality. Indeed, it is pretty hard to think of anything more impactful.

The measuring device for conscious experience is consciousness, which is the whole problem.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-19T18:43:19.743Z · LW(p) · GW(p)

Sure. But in this sense, false believed answers to the HP are no different from true believed answers.... that is, they would both potentially change our behavior the way you describe.

I suspect that's not what TimS meant.

Replies from: Kawoomba, Juno_Watt
comment by Kawoomba · 2013-06-19T19:06:22.722Z · LW(p) · GW(p)

Sure. But in this sense, false believed answers to the HP are no different from true believed answers.... that is, they would both potentially change our behavior the way you describe.

That is the case for most any belief you hold (unless you mean "in the exact same way", not as "change behavior"). You may believe there's a burglar in your house, and that will impact your actions, whether it be false or true. If you believe it's more likely that there is a burglar, you are correct in acting upon that belief even if it turns out to be incorrect. It's not AIXI's fault if it believes in the wrong thing for the right reasons.

In that sense, you can choose an answer for example based on complexity considerations. In the burglar example, the answer you choose (based on data such as crime rate, cat population etc.) can potentially be further experimentally "verified" (the probability increased) as true or false, but even before such verification, your belief can still be strong enough to act upon.

After all, you do act upon your belief that "I am not living in a simulation which will eventually judge and reward me only for the amount of cheerios I've eaten". It also doesn't lead to different expected experiences at the present time, yet you choose to act as if it were true. A prior based on complexity considerations alone, yet strong enough to act upon. Same when thinking about whether the sun has qualia ("hot hot hot hot hot").

(Bit of a hybrid fusion answer also meant to refer to our neighboring discussion branch.)

Cheerio!

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-19T19:15:12.231Z · LW(p) · GW(p)

Yes, I agree with all of this.

Replies from: Kawoomba
comment by Kawoomba · 2013-06-19T19:35:48.059Z · LW(p) · GW(p)

In the absence of any suggestion of what might be evidence one way or the other, in the absence of any notion of what I would differentially expect to observe in one condition over the other, I don't see any value to asking the question. If it makes you feel better if I don't deny their existence, well, OK, I don't deny their existence, but I really can't see why anyone should care one way or the other.

Well, in the case of "do landslides have qualia", Occam's Razor could be used to assign probabilities just the same as we assign probabilities in the "cheerio simulation" example. So we've got methodology, we've got impact, enough to adopt a stance on the "psychic unity of the cosmos", no?

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-19T19:46:43.886Z · LW(p) · GW(p)

I'm having trouble following you, to be honest.

My best guess is that you're suggesting that, with respect to systems that do not manifest subjective experience in any way we recognize or understand, Occam's Razor provides grounds to be more confident that they have subjective experience than that they don't.
If that's what you mean, I don't see why that should be.
If that's not what you mean, can you rephrase the question?

Replies from: Kawoomba
comment by Kawoomba · 2013-06-19T20:04:57.262Z · LW(p) · GW(p)

I think it's conceivable if not likely that Occam's Razor would favor or disfavor qualia as a property of more systems than just those that seem to show or communicate them in terms we're used to. I'm not sure which, but it is a question worth pondering, with an impact on how we view the world, and accessible through established methodology, to a degree.

I'm not advocating assigning a high probability to "landslides have raw experience", I'm advocating that it's an important question, the probability of which can be argued. I'm an advocate of the question, not the answer, so to speak. And as such opposed to "I really can't see why anyone should care one way or the other".

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-19T20:42:10.460Z · LW(p) · GW(p)

Ah, I see.

So, I stand by my assertion that in the absence of evidence one way or the other, I really can't see why anyone should care.

But I agree that to the extent that Occam's Razor type reasoning provides evidence, that's a reason to care.

And if it provided strong evidence one way or another (which I don't think it does, and I'm not sure you do either) that would provide a strong reason to care.

Replies from: Eugine_Nier, Juno_Watt
comment by Eugine_Nier · 2013-06-25T03:03:37.316Z · LW(p) · GW(p)

I stand by my assertion that in the absence of evidence one way or the other,

I have evidence in the form of my personal experience of qualia. Granted, I have no way of showing you that evidence, but that doesn't mean I don't have it.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-25T05:17:24.682Z · LW(p) · GW(p)

Agreed that the ability to share evidence with others is not a necessary condition of having evidence. And to the extent that I consider you a reliable evaluator of (and reporter of) evidence, your report is evidence, and to that extent I have a reason to care.

comment by Juno_Watt · 2013-06-24T18:43:42.994Z · LW(p) · GW(p)

So, I stand by my assertion that in the absence of evidence one way or the other, I really can't see why anyone should care.

The point has been made that we should care because qualia have moral implications.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-24T21:31:44.206Z · LW(p) · GW(p)

Moral implications of a proposition in the absence of evidence one way or another for that proposition are insufficient to justify caring.
If I actually care about the experiences of minds capable of experiences, I do best to look for evidence for the presence or absence of such experiences.
Failing such evidence, I do best to concentrate my attention elsewhere.

Replies from: Juno_Watt
comment by Juno_Watt · 2013-06-25T17:59:18.277Z · LW(p) · GW(p)

It's possible to have both a strong reason to care and weak evidence, i.e. when the moral hazard depends on some doubtful proposition. People often adopt precautionary principles in such scenarios.

Replies from: None, TheOtherDave
comment by [deleted] · 2013-06-25T20:24:37.368Z · LW(p) · GW(p)

It's possible to have both a strong reason to care and weak evidence, i.e. when the moral hazard depends on some doubtful proposition.

I don't think that's the situation here though. That sounds like a description of this situation: (imagine) we have weak evidence that 1) snakes are sapient, and we grant that 2) sapience is morally significant. Therefore (perhaps) we should avoid wanton harm to snakes.

Part of why this argument might make sense is that (1) and (2) are independent. Our confidence in (2) is not contingent on the small probability that (1) is true: whether or not snakes are sapient, we're all agreed (let's say) that sapience is morally significant.

On the other hand, the situation with qualia is one where we have weak evidence (suppose) that A) qualia are real, and we grant that B) qualia are morally significant.

The difference here is that (B) is false if (A) is false. So the fact that we have weak evidence for (A) means that we can have no stronger (and likely, we must have yet weaker) evidence for (B).
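One way to formalise this, on the assumption that (B) can only be true if (A) is:

P(B) = P(A and B) ≤ P(A)

so whatever probability we assign to "qualia are real" is an upper bound on the probability we can assign to "qualia are morally significant".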

Replies from: TheOtherDave, Juno_Watt
comment by TheOtherDave · 2013-06-25T22:34:17.398Z · LW(p) · GW(p)

Does the situation change significantly if "the situation with qualia" is instead framed as A) snakes have qualia and B) qualia are morally significant?

Replies from: None
comment by [deleted] · 2013-06-25T23:13:39.410Z · LW(p) · GW(p)

Yes, if the implication of (A) is that we're agreed on the reality of qualia but are now wondering whether or not snakes have them. No, if (A) is just a specific case of the general question 'are qualia real?'. My point was probably put in a confusing way: all I meant to say was that Juno seemed to be arguing as if it were possible to be very confident about the moral significance of qualia while being only marginally confident about their reality.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-26T02:55:32.245Z · LW(p) · GW(p)

(nods) Makes sense.

comment by Juno_Watt · 2013-06-26T17:13:31.057Z · LW(p) · GW(p)

What? Are you saying we have weak evidence for qualia even in ourselves?

Replies from: None
comment by [deleted] · 2013-06-26T17:24:19.721Z · LW(p) · GW(p)

What I think of the case for qualia is beside the point; I was just commenting on your 'moral hazard' argument. There you said that even if we assume that we have only weak evidence for the reality of qualia, we should take the possibility seriously, since we can be confident that qualia are morally significant. I was just pointing out that this argument is made problematic by the fact that our confidence in the moral significance of qualia can be no stronger than our confidence in their reality, and therefore by assumption must be weak.

Replies from: Juno_Watt, shminux
comment by Juno_Watt · 2013-06-26T18:49:32.852Z · LW(p) · GW(p)

our confidence in the moral significance of qualia can be no stronger than our confidence in their reality,

But of course it can. I can be much more confident in

(P -> Q)

than I am in P. For instance, I can be highly confident that if I won the lottery, I could buy a yacht.
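A minimal way to make this precise, if the conditional is read as material implication:

P(P -> Q) = P(not-P) + P(P and Q) ≥ P(not-P)

so when P is unlikely (winning the lottery), P(P -> Q) can be close to 1 however little confidence I have in P itself; confidence in a conditional is not capped by confidence in its antecedent.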

comment by Shmi (shminux) · 2013-06-26T18:45:46.041Z · LW(p) · GW(p)

I am guessing that Juno_Watt means that strong evidence for our own perception of qualia makes them real enough to seriously consider their moral significance, whether or not they are "objectively real".

comment by TheOtherDave · 2013-06-25T20:14:38.642Z · LW(p) · GW(p)

Yes, they often do.
On your view, is there a threshold of doubtfulness of a proposition below which it is justifiable to not devote resources to avoiding the potential moral hazard of that proposition being true, regardless of the magnitude of that moral hazard?

Replies from: Juno_Watt
comment by Juno_Watt · 2013-06-26T19:52:49.066Z · LW(p) · GW(p)

I don't think it's likely my house will catch fire, but I take out fire insurance. OTOH, if I don't set a lower bound I will be susceptible to Pascal's muggings.

comment by Juno_Watt · 2013-06-24T18:45:57.491Z · LW(p) · GW(p)

He may have meant something like "Qualiaphobia implies we would have no experiences at all". However, that all depends on what you mean by experience. I don't think the Expected Experience criterion is useful here (or anywhere else).

comment by DanArmak · 2013-06-08T20:51:44.617Z · LW(p) · GW(p)

I realize that non-materialistic "intrinsic qualities" of qualia, which we perceive but which aren't causes of our behavior, are incoherent. What I don't fully understand is why I have any qualia at all. Please see my sibling comment.

Replies from: bojangles
comment by bojangles · 2013-06-08T21:32:26.482Z · LW(p) · GW(p)

Tentatively:

If it's accepted that GREEN and RED are structurally identical, and that in virtue of this they are phenomenologically identical, why think that phenomenology involves anything, beyond structure, which needs explaining?

I think this is the gist of Dennett's dissolution attempts. Once you've explained why your brain is in a seeing-red brain-state, why this causes a believing-that-there-is-red mental representation, then a meta-reflection-about-believing-there-is-red functional process, etc., why think there's anything else?

Replies from: DanArmak
comment by DanArmak · 2013-06-09T09:05:13.374Z · LW(p) · GW(p)

Phenomenology doesn't involve anything beyond structure. But my experience seems to.

comment by TheOtherDave · 2013-06-08T19:37:50.315Z · LW(p) · GW(p)

(nods) Yes, that's consistent with what I've heard others say.

Like you, I don't understand the question and have no idea of what an answer to it might look like, which is why I say I'm not entirely clear what question you/they claim is being answered. Perhaps it would be more correct to say I'm not clear how it differs from the question you/they want answered.

Mostly I suspect that the belief that there is a second question to be answered that hasn't been is a strong, pervasive, sincere, compelling confusion, akin to "where does the bread go?". But I can't prove it.

Relatedly: I remember, many years ago, attending a seminar where a philosophy student protested to Dennett that he didn't feel like the sort of process Dennett described. Dennett replied "How can you tell? Maybe this is exactly what the sort of process I'm describing feels like!"

I recognize that the traditional reply to this is "No! The sort of process Dennett describes doesn't feel like anything at all! It has no qualia, it has no subjective experience!"

To which my response is mostly "Why should I believe that?" An acceptable alternative seems to be that subjective experience ("qualia", if you like) is simply a property of certain kinds of computation, just as the ability to predict the future location of a falling object ("prescience", if you like) is a property of certain kinds of computation.

To which one is of course free to reply "but how could prescience -- er, I mean qualia -- possibly be an aspect of computation??? It just doesn't make any sense!!!" And I shrug.

Sure, if I say in English "prescience is an aspect of computation," that sounds like a really weird thing to say, because "prescience" and "computation" are highly charged words with opposite framings. But if I throw out the English words and think about computing the state of the world at some future time, it doesn't seem mysterious at all, and such computations have become so standard a part of our lives we no longer give them much thought.

When computations that report their subjective experience become ubiquitous, we will take the computational nature of qualia for granted in much the same way.

Replies from: DanArmak
comment by DanArmak · 2013-06-08T20:42:41.877Z · LW(p) · GW(p)

Thanks for your reply and engagement.

How can you tell? Maybe this is exactly what the sort of process I'm describing feels like!

I agree. We already know what we feel like. Once we know empirically what kind of process we are, we can indeed conclude that "that's what that kind of process feels like".

What I don't understand is why being some kind of process feels like anything at all. Why it seems to myself that I have qualia in the first place.

I do understand why it makes sense for an evolved human to have such beliefs. I don't know if there is a further question beyond that. As I said, I don't know what an answer would even look like.

Perhaps I should just accept this and move on. Maybe it's just the case that "being mystified about qualia" is what the kind of process that humans are is supposed to feel like! As an analogy, humans have religious feelings with apparently dedicated neurological underpinnings. Some humans feel the numinous strongly, and they ask for an answer to the Mystery of God, which to them appears as obvious as any qualia.

However, an answer that would be more satisfactory (if possible) would be an exploration and an explanation of mind-space and its accompanying qualia. Perhaps if I understood the actual causal link from which kind of process I am, to which qualia I have, part of the apparent mystery would go away.

Does being like some other kind of process "feel like" anything? Like what? Would it be meaningful for me to experience something else without becoming something else? Are the qualia of a cat separate from being a cat? Or would I have to have a cat-mind and forget all about being human and verbal and DanArmak to experience the qualia of a cat, at which point I'd be no different than any existing cat, and which I wouldn't remember on becoming human again?

When computations that report their subjective experience become ubiquitous, we will take the computational nature of qualia for granted in much the same way.

I agree. To clarify, I believe all of these propositions:

  • Full materialism
  • Humans are physical systems that have self-awareness ("consciousness") and talk about it
  • That isn't a separate fact that could be otherwise (p-zombies); it's highly entangled with how human brains operate
  • Other beings, completely different physically, would still behave the same if they instantiated the same computation (this is pretty much tautological)
  • If the computation that is myself is instantiated differently (as in an upload or em), it would still be conscious and report subjective experience (if it didn't, it would be a very poor emulation!)
  • If I am precisely cloned, I should anticipate either clone's experience with 50% probability; but after finding out which clone I am, I would not expect to suddenly "switch" to experiencing being the other clone. I also would not expect to somehow experience being both clones, or anything else. (I'm less sure about this because it's never happened yet. And I don't understand quantum mechanics, so I can't properly appreciate the arguments that say we're already being split all the time anyway. Nevertheless, I see no sensible alternative, so I still accept this.)
Replies from: FeepingCreature, Armok_GoB, TheOtherDave
comment by FeepingCreature · 2013-06-16T15:09:55.429Z · LW(p) · GW(p)

If I am precisely cloned, I should anticipate either clone's experience with 50% probability

Shouldn't you anticipate being either clone with 100% probability, since both clones will make that claim and neither can be considered wrong?

Replies from: DanArmak
comment by DanArmak · 2013-06-16T18:13:26.840Z · LW(p) · GW(p)

What I meant is that some time after the cloning, the clones' lives would become distinguishable. One of them would experience X, while the other would experience ~X. Then I would anticipate experiencing X with 50% probability.

If they live identical lives forever, then I can anticipate "being either clone" or as I would call it, "not being able to tell which clone I am".

Replies from: FeepingCreature
comment by FeepingCreature · 2013-06-16T21:22:59.461Z · LW(p) · GW(p)

My first instinctive response is "be wary of theories of personal identity where your future depends on a coin flip". You're essentially saying "one of the clones believes that it is your current 'I' experiencing 'X', and it has a 50% chance of being wrong". That seems off.

I think to be consistent, you have to anticipate experiencing both X and ~X with 100% probability. The problem is that the way anticipation works with probability depends implicitly on there only being one future self that things can happen to.

Replies from: DanArmak
comment by DanArmak · 2013-06-17T09:59:56.906Z · LW(p) · GW(p)

You're essentially saying "one of the clones believes that it is your current 'I' experiencing 'X', and it has a 50% chance of being wrong".

No, I'm not saying that.

I'm saying: first both clones believe "anticipate X with 50% probability". Then one clone experiences X, and the other ~X. After that they know what they experienced, so of course one updates to believe "I experienced X with ~1 probability" and the other "I experienced ~X with ~1 probability".

I think to be consistent, you have to anticipate experiencing both X and ~X with 100% probability.

I think we need to unpack "experiencing" here.

I anticipate there will be a future state of me, which has experienced X (= remembers experiencing X), with 50% probability.

If X takes nontrivial time, such that one can experience "X is going on now", then I anticipate ever experiencing that with 50% probability.

Replies from: FeepingCreature
comment by FeepingCreature · 2013-06-17T13:54:24.176Z · LW(p) · GW(p)

I anticipate there will be a future state of me, which has experienced X (= remembers experiencing X), with 50% probability.

What I meant is that some time after the cloning, the clones' lives would become distinguishable. One of them would experience X, while the other would experience ~X.

But that means there is always (100%) a future state of you that has experienced X, and always (100%) a separate future state that has experienced ~X. I think there's some similarity here to the problem of probability in a many-worlds universe, except in this case both versions can still interact. I'm not sure how that affects things myself.

Replies from: DanArmak
comment by DanArmak · 2013-06-17T18:06:22.650Z · LW(p) · GW(p)

You're right, there's a contradiction in what I said. Here's how to resolve it.

At time T=1 there is one of me, and I go to sleep. While I sleep, a clone of me is made and placed in an identical room. At T=2 both clones wake up. At T=3 one clone experiences X. The other doesn't (and knows that he doesn't).

So, what should my expected probability for experiencing X be?

At T=3 I know for sure, so it goes to 1 for one clone and 0 for the other.

At T=2, the clones have woken up, but each doesn't know which he is yet. Therefore each expects X with 50% probability.

At T=1, before going to sleep, there isn't a single number that is the correct expectation. This isn't because probability breaks down, but because the concept of "my future experience" breaks down in the presence of clones. Neither 50% nor 100% is right.

50% is wrong for the reason you point out. 100% is also wrong, because X and ~X are symmetrical. Assigning 100% to X means 0% to ~X.

So in the presence of expected future clones, we shouldn't speak of "what I expect to experience" but "what I expect a clone of mine to experience" - or "all clones", or "p proportion of clones".
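To make the bookkeeping concrete, here is a minimal Python sketch (the clone labels and the choice of which clone sees X are invented purely for illustration):

    from fractions import Fraction

    # Illustrative assignment: one copy experiences X at T=3, the other doesn't.
    clones = {"clone_A": True, "clone_B": False}

    # T=2: each awakened copy is uncertain which label it carries,
    # so it spreads its credence evenly over the two possibilities.
    which_clone_am_i = {name: Fraction(1, 2) for name in clones}
    p_x = sum(which_clone_am_i[name] for name, sees_x in clones.items() if sees_x)
    print("T=2, P(I will experience X) =", p_x)  # 1/2 for each copy

    # T=3: each copy observes its own outcome and updates to certainty.
    for name, sees_x in clones.items():
        print("T=3,", name, "P(experienced X) =", 1 if sees_x else 0)

    # T=1 (before the split): there is no single well-defined number;
    # "my future experience" picks out two future copies, one of which
    # sees X and one of which doesn't.

The code just restates the three time-steps above; nothing in it depends on how the copies are physically realised.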

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-17T18:56:42.101Z · LW(p) · GW(p)

Suppose I'm ~100% confident that, while we sleep tonight, someone will paint a blue dot on either my forehead or my husband's but not both. In that case, I am ~50% confident that I will see a blue dot, I am ~100% confident that one of us will see a blue dot, I am ~100% confident that one of us will not see a blue dot.

If someone said that seeing a blue dot and not-seeing a blue dot are symmetrical, so assigning ~100% confidence to "one of us will see a blue dot" means assigning ~0% to "one of us will not see a blue dot", I would reply that they are deeply confused. The noun phrase "one of us" simply doesn't behave that way.

In the scenario you describe, the noun phrase "I" doesn't behave that way either.

I'm ~100% confident that I will experience X, and I'm ~100% confident that I will not experience X.

Replies from: ialdabaoth, DanArmak
comment by ialdabaoth · 2013-06-17T19:01:01.021Z · LW(p) · GW(p)

In the scenario you describe, the noun phrase "I" doesn't behave that way either.

I'm ~100% confident that I will experience X, and I'm ~100% confident that I will not experience X.

I really find that subscripts help here.

comment by DanArmak · 2013-06-17T22:14:16.358Z · LW(p) · GW(p)

In your example, you anticipate your own experiences, but not your husband's experiences. I don't see how this is analogous to a case of cloning, where you equally anticipate both.

If someone said that seeing a blue dot and not-seeing a blue dot are symmetrical, so assigning ~100% confidence to "one of us will see a blue dot" means assigning ~0% to "one of us will not see a blue dot", I would reply that they are deeply confused.

I'm not saying that "[exactly] one of us will see a blue dot" and "[neither] one of us will not see a blue dot" are symmetrical; that would be wrong. What I was saying was that "I will see a blue dot" and "I will not see a blue dot" are symmetrical.

I'm ~100% confident that I will experience X, and I'm ~100% confident that I will not experience X.

All the terminologies that have been proposed here - by me, and you, and FeepingCreature - are just disagreeing over names, not real-world predictions.

I think the quoted statement is at the very least misleading because it's semantically different from other grammatically similar constructions. Normally you can't say "I am ~1 confident that [Y] and also ~1 confident that [~Y]". So "I" isn't behaving like an ordinary object. That's why I think it's better to be explicit and not talk about "I expect" at all in the presence of clones.

My comment about "symmetrical" was intended to mean the same thing: that when I read the statement "expect X with 100% probability", I normally parse it as equivalent to "expect ~X with 0% probability", which would be wrong here. And X and ~X are symmetrical by construction in the sense that every person, at every point in time, should expect X and ~X with the same probability (whether you call it "both 50%" like I do, or "both 100%" like FeepingCreature prefers), until of course a person actually observes either X or ~X.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-18T00:44:39.917Z · LW(p) · GW(p)

In your example, you anticipate your own experiences, but not your husband's experiences. I don't see how this is analogous to a case of cloning, where you equally anticipate both.

In my example, my husband and I are two people, anticipating the experience of two people. In your example, I am one person, anticipating the experience of two people. It seems to me that what my husband and I anticipate in my example is analogous to what I anticipate in your example.

But, regardless, I agree that we're just disagreeing about names, and if you prefer the approach of not talking about "I expect" in such cases, that's OK with me.

comment by Armok_GoB · 2013-06-10T23:51:48.345Z · LW(p) · GW(p)

One thing you seem to know but keep forgetting is the distinction between your current state and recorded memories. Memories use extreme amounts of lossy and biased compression, and some of your confusion seems to come from looking at your current experience while explicitly thinking about this stuff and then generalizing it as something continuous over time and something applicable to a wider range of mental states than it actually is.

comment by TheOtherDave · 2013-06-08T22:33:38.842Z · LW(p) · GW(p)

Perhaps if I understood the actual causal link from which kind of process I am, to which qualia I have, part of the apparent mystery would go away

Sure, that makes sense.

As far as I know, current understanding of neuroanatomy hasn't identified the particular circuits responsible for that experience, let alone the mechanism whereby the latter cause the former. (Of course, the same could be said for speaking English.)

But I can certainly see how having such an explanation handy might help if I were experiencing the kind of insistent sense of mysteriousness you describe (for subjective experience or for speaking English).

Replies from: None
comment by [deleted] · 2013-06-08T22:38:56.944Z · LW(p) · GW(p)

As far as I know, current understanding of neuroanatomy hasn't identified the particular circuits responsible for that experience

Hmm, to your knowledge, has the science of neuroanatomy ever discovered any circuits responsible for any experience?

Replies from: ialdabaoth, TheOtherDave
comment by ialdabaoth · 2013-06-08T22:49:52.071Z · LW(p) · GW(p)

Quick clarifying question: How small does something need to be for you to consider it a "circuit"?

Replies from: None
comment by [deleted] · 2013-06-09T00:49:57.635Z · LW(p) · GW(p)

It's more a matter of discreteness than smallness: I would say I need to be able to identify the loop.

Replies from: ialdabaoth
comment by ialdabaoth · 2013-06-09T01:04:39.950Z · LW(p) · GW(p)

Second clarifying question, then: Can you describe what 'identifying the loop' would look like?

Replies from: None
comment by [deleted] · 2013-06-09T01:10:08.005Z · LW(p) · GW(p)

Well, I'm not sure. I'm not confident there are any neural circuits, strictly speaking. But I suppose I don't have anything much more specific than 'loop' in mind: it would have to be something like a path that returns to an origin.

comment by TheOtherDave · 2013-06-08T23:46:22.529Z · LW(p) · GW(p)

In the sense of the experience not happening if that circuit doesn't work, yes.
In the sense of being able to give a soup-to-nuts story of how events in the world result in a subjective experience that has that specific character, no.

Replies from: None
comment by [deleted] · 2013-06-09T01:35:46.076Z · LW(p) · GW(p)

I guess I mean: has the science of neuroanatomy discovered any circuits whatsoever?

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-09T03:57:50.166Z · LW(p) · GW(p)

I am having trouble knowing how to answer your question, because I'm not sure what you're asking.
We have identified neural structures that are implicated in various specific things that brains do.
Does that answer your question?

Replies from: None
comment by [deleted] · 2013-06-09T06:15:13.099Z · LW(p) · GW(p)

I'm not very up to date on neurobiology, and so when I saw your comment that we had not found the specific circuits for some experience I was surprised by the implication that we had found that there are neural circuits at all. To my knowledge, all we've got is fMRI captures showing changes in blood flow which we assume to be correlated in some way with synaptic activity. I wondered if you were using 'circuit' literally, or if you intended a reference to the oft-used brain-computer metaphor. I'm quite interested to know how appropriate that metaphor is.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-09T06:41:44.170Z · LW(p) · GW(p)

Ah! Thanks for the clarification. No, I'm using "circuit" entirely metaphorically.

comment by tingram · 2013-06-03T01:42:43.352Z · LW(p) · GW(p)

I think it does. It really is a virtuoso work of philosophy, and Dennett helpfully front-loaded it by putting his most astonishing argument in the first chapter. Anecdotally, I was always suspicious of arguments against qualia until I read what Dennett had to say on the subject. He brings in plenty of examples from philosophy, from psychological and scientific experiments, and even from literature to make things nice and concrete, and he really seems to understand the exact ways in which his position is counter-intuitive and makes sure to address the average person's intuitive objections in a fair and understanding way.

comment by nigerweiss · 2013-06-06T09:39:32.274Z · LW(p) · GW(p)

I've read some of Dennett's essays on the subject (though not the book in question), and I found that, for me, his ideas did help to make consciousness a good deal less mysterious. What actually did it for me was doing some of my own reasoning about how a 'noisy quorum' model of conscious experience might be structured, and realizing that, when you get right down to it, the fact that I feel as though I have subjective experience isn't actually that surprising. It'd be hard to design a human-style system that didn't have a similar internal behavior that it could talk about.

comment by sediment · 2013-06-01T13:53:19.164Z · LW(p) · GW(p)

What I have been calling nefarious rhetoric recurs in a rudimentary form also in impromptu discussions. Someone harbors a prejudice or an article of faith or a vested interest, and marshals ever more desperate and threadbare arguments in defense of his position rather than be swayed by reason or face the facts. Even more often, perhaps, the deterrent is just stubborn pride: reluctance to acknowledge error. Unscientific man is beset by a deplorable desire to have been right. The scientist is distinguished by a desire to be right.

— W. V. Quine, Quiddities: An Intermittently Philosophical Dictionary (a whimsical and fun read)

Replies from: simplicio, wrongish
comment by simplicio · 2013-06-11T23:06:59.775Z · LW(p) · GW(p)

Usually I find myself deploying nefarious rhetoric when I believe something on good evidence but have temporarily forgotten the evidence (this is very embarrassing and happens to me a lot).

comment by wrongish · 2013-06-04T20:22:07.790Z · LW(p) · GW(p)

Scientists are people too. It's folly to imagine that scientists are less prone to bias and pride than non-scientists.

Replies from: Desrtopa, sediment
comment by Desrtopa · 2013-06-04T20:59:17.038Z · LW(p) · GW(p)

It's folly to suppose that they're not prone at all, but not so foolish to suppose either that their training makes them less biased, or that being less biased makes people more likely to become scientists.

Replies from: wrongish
comment by wrongish · 2013-06-04T22:44:43.210Z · LW(p) · GW(p)

Ever heard the phrase "Science progresses one funeral at a time"? Who do you think coined that phrase? Hint: It wasn't trash collectors.

If scientists were really as open-minded and ego-free as you claim, they wouldn't spend their lives defending work from their youth.

Replies from: SodiumExplodium
comment by SodiumExplodium · 2013-06-05T03:32:52.051Z · LW(p) · GW(p)

I think the quote is alluding to the capital-'S' Scientist rather than to any particular group of humans. In theory, a Scientist wants to be right, while actual human scientists want to have been right.

comment by sediment · 2013-06-04T20:56:17.468Z · LW(p) · GW(p)

Yeah, the only thing I don't like about the quote is that it has an unappealing us-vs.-them quality to it, as if the divide between rational people and irrational people were totally clean-cut. I posted it regardless because of the nice turn of phrase at the end.

Replies from: scav
comment by scav · 2013-06-05T14:07:08.756Z · LW(p) · GW(p)

Of course, when you are trying to get more of "them" to be "us", it's worth pointing out what "they" are doing wrong. It's not like anyone without brain damage is born and destined to be an "unscientific man" for life.

comment by alexvermeer · 2013-06-16T20:32:59.315Z · LW(p) · GW(p)

The recognition of confusion is itself a form of clarity.

T.K.V. Desikachar

comment by cody-bryce · 2013-06-03T17:56:40.185Z · LW(p) · GW(p)

Not having all the information you need is never a satisfactory excuse for not starting the analysis.

-Akin's Laws of Spacecraft Design

comment by JQuinton · 2013-06-04T17:26:42.763Z · LW(p) · GW(p)

My sense of the proper way to determine what is ethical is to make a distinction between a smuggler of influence and a detective of influence. The smuggler knows these six principles and then counterfeits them, brings them into situations where they don’t naturally reside.

The opposite is the sleuth’s approach, the detective’s approach to influence. The detective also knows what the principles are, and goes into every situation aware of them looking for the natural presence of one or another of these principles.

  • Robert Cialdini at the blog Bakadesuyo explaining the difference between ethical persuasion and the dark arts
comment by Benya (Benja) · 2013-06-30T18:51:27.565Z · LW(p) · GW(p)

Graffiti on the wall of an Austrian public housing block:

White walls — high rent.

(German original: "Weiße Wände — hohe Mieten". I'm not actually sure it's true, but my understanding is that rent in public housing does vary somewhat with quality and it seems plausible that graffiti could enter into it. And to make the implicit explicit, the reason it seems worth posting here is how it challenges the tenants' — and my — preconceptions: You may think that from a purely selfish POV you should not want graffiti on your house, but it's quite possible that the benefits to you are higher than the costs.)

Replies from: paulfchristiano, army1987, None, None, Richard_Kennaway
comment by paulfchristiano · 2013-06-30T21:24:12.668Z · LW(p) · GW(p)

This makes sense as part of a price discrimination scheme that is probably made very complicated legally (if the landlord is a monopolist, then both you and they might prefer that they have a crappy product to offer at low cost, but it is often hard to offer a crappier product for legal reasons), or as a costly signal of poverty (if you are poor, you are willing to make your house dirtier in exchange for money---of course most of the costs can also be signaling, since having white walls is a costly signal of wealth). My guess would be that these kinds of models are too expressive to have predictive power, but this at least seems like a clean case.

Signaling explanations often seem to have this vaguely counter-intuitive form, e.g. you might think that from a selfish point of view you would want your classes to be more easily graded. But alas...

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-07-01T04:56:47.237Z · LW(p) · GW(p)

if the landlord is a monopolist [...] but often it is hard to offer a crappier product for legal reasons

Well, this is public housing, so the landlord is the government, and thus is likely to both have monopoly power and not be subject to the same laws as a private landlord.

Replies from: army1987
comment by A1987dM (army1987) · 2013-07-01T10:32:53.004Z · LW(p) · GW(p)

If I guess correctly the reasons why a government would pass a law against renting excessively crappy houses, I don't think it would exempt itself from it.

comment by A1987dM (army1987) · 2013-07-01T10:30:48.415Z · LW(p) · GW(p)

You may think that from a purely selfish POV you should not want graffiti on your house

Er... Why? The only reasons for that I can think of are aesthetics (but you can't ‘should' that), economic value (but that only applies to landlords, not tenants) and signalling (but people who know what building I live in already know me well, so I can afford countersignalling to them).

Replies from: Estarlio
comment by Estarlio · 2013-07-01T12:32:06.349Z · LW(p) · GW(p)

Broken Windows? (If you live in an aesthetically unpleasing area, then people are more likely to trash the place.)

comment by [deleted] · 2015-06-18T08:36:12.678Z · LW(p) · GW(p)

I often see really ingenious graffiti in Vienna. My favorite was somewhere in the 9th district: someone wrote "peace to the huts, war to the palaces" and then someone corrected it to "peace to the huts, and to the palaces". I found it amusing because it sounded like a graffiti battle between anarchists and Catholics.

comment by [deleted] · 2015-06-18T03:09:33.866Z · LW(p) · GW(p)

Wow.

I wonder if the graffiti artist is part of the housing community, or someone with a special interest in political art targeting rent-seekers.

The deleted account that has posted below makes a concise and informative contribution, if anyone's interested in checking it out. I wonder why it's deleted...

comment by Richard_Kennaway · 2013-07-01T10:43:17.515Z · LW(p) · GW(p)

A better house in a better neighbourhood costs more. How is this news?

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-01T12:53:01.395Z · LW(p) · GW(p)

I believe the implication is "I am doing you a favor by spraying graffiti on your apartment building, because that will cause your rent to decrease."

I don't know if this is actually true, but that's what I take to be the intent.

Replies from: None, Richard_Kennaway
comment by [deleted] · 2015-07-12T05:01:23.396Z · LW(p) · GW(p)

So, assuming they don't get caught, the time penalty isn't significant, and they don't value the original aesthetics of the property over its defaced state by more than the price differential, it's utility-maximising for renters to deface their property, or nearby properties, so that the property is less attractive to others?

comment by Richard_Kennaway · 2013-07-01T14:19:25.769Z · LW(p) · GW(p)

That is a lot to squeeze from four words. FWIW, they struck me as a snarl of rage against people who have more money than the perpetrators.

As a tenant in such an apartment building I would reply that nice white walls and a nice neighbourhood is the entire point of paying that rent, and that anyone who wants to live in a slum should go and find one, preferably a long way from me.

Replies from: SaidAchmiz, Jiro, ChristianKl
comment by Said Achmiz (SaidAchmiz) · 2013-07-01T15:42:34.250Z · LW(p) · GW(p)

It's not just the words you're squeezing, it's the medium — the fact that the words are written in graffiti.

I agree that a nice neighborhood is the point of paying rent, and your comment about people who want to live in slums, etc. I'm not sure that graffiti by itself constitutes neighborhood-not-niceness, but of course it's correlated with lots of other things, and there's the broken windows theory, etc.

comment by Jiro · 2013-07-01T15:42:22.873Z · LW(p) · GW(p)

To a poor person, having walls at all is more important than having white walls.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-07-01T17:01:31.344Z · LW(p) · GW(p)

To a poor person, having walls at all is more important than having white walls.

To a poor person, having a car at all is more important than having one with no dents in the panels. I don't see that as justifying vandalising the cars at a second-hand dealer by night so as to pick one up cheap the next day.

But we're working from just four words of graffiti here, from an unknown author, and the site Google led me to from the original German text is dead.

comment by ChristianKl · 2013-07-29T13:08:23.925Z · LW(p) · GW(p)

This happens in the context of gentrification. In a city like Berlin, rents in the cool neighborhoods rise, and some people have to leave their neighborhood because of the rising rents.

Putting graffiti on walls is a way to counteract this trend.

At least it is from the point of view of people who want to justify that they are being moral when they illegally spray graffiti on other people's houses.

comment by [deleted] · 2013-06-03T21:36:11.301Z · LW(p) · GW(p)

It’s hard to tell the difference between "Nobody ever complains about this car because it’s reliable" and “Nobody complains about this car because nobody buys this car."

-- Shamus Young

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-06-07T11:33:38.908Z · LW(p) · GW(p)

Thanks for the link.

Here's another good quote:

But if your solution to a problem is “don’t make mistakes”, then it’s not a solution. If you’re worried about falling off a cliff, the solution isn’t to walk along the edge very carefully, it’s to get away from the edge.

Replies from: army1987
comment by A1987dM (army1987) · 2013-06-15T13:32:40.898Z · LW(p) · GW(p)

If you’re worried about falling off a cliff, the solution isn’t to walk along the edge very carefully, it’s to get away from the edge.

It depends on why you were walking there in the first place.

Replies from: roystgnr
comment by roystgnr · 2013-06-15T23:52:27.839Z · LW(p) · GW(p)

I think you were downvoted too hastily. Seriously, imagine that instead of driving very carefully last week, I phoned my destination to say "Sorry I'm going to miss your wedding, but the only route to the venue is next to a cliff!" Would "Great solution!" be an expected or an accurate response?

Replies from: army1987, army1987
comment by A1987dM (army1987) · 2013-06-16T00:02:20.195Z · LW(p) · GW(p)

I think you were downvoted too hastily.

I don't think a single downvote is that significant. I've seen so many inexplicable downvotes that I would suspect there's a bot that every time a new comment is posted generates a random number between 0 and 1 and downvotes the comment if the random number is less than 0.05 -- if I could see any reason why anyone would do that.

Replies from: wedrifid, ialdabaoth
comment by wedrifid · 2013-06-16T06:50:29.209Z · LW(p) · GW(p)

I don't think a single downvote is that significant. I've seen so many inexplicable downvotes that I would suspect there's a bot that every time a new comment is posted generates a random number between 0 and 1 and downvotes the comment if the random number is less than 0.05 -- if I could see any reason why anyone would do that.

Now that you've suggested it...

comment by ialdabaoth · 2013-06-16T07:29:25.514Z · LW(p) · GW(p)

I don't think a single downvote is that significant. I've seen so many inexplicable downvotes that I would suspect there's a bot that every time a new comment is posted generates a random number between 0 and 1 and downvotes the comment if the random number is less than 0.05 -- if I could see any reason why anyone would do that.

It actually depends on how many total votes the post in question has accumulated, and how much karma the user in question has accumulated.

A completely new user doesn't need to worry about anomalous downvotes, because they're too new to have a reputation. Likewise, a well-established user doesn't need to worry about anomalous downvotes, because they get lost in the underflow. I'd say somewhere around the 200 mark it can become problematic to one's reputation; and, of course, one or two consecutive anomalous downvotes that act as the first votes on a given post can easily set a trend to bury that post before anyone has a chance to usefully comment on it.

Replies from: army1987
comment by A1987dM (army1987) · 2013-06-16T07:34:43.336Z · LW(p) · GW(p)

(If much more than 5% of the comments in a thread/by a user are downvoted, then it's probably not my hypothetical bot's fault.)
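
Tangentially, the 5% figure makes that check easy to run. Here is a minimal Python sketch of the calculation, assuming the hypothetical bot downvotes each new comment independently with probability 0.05; the function name and the example numbers are illustrative, not anything reported in this thread:

```python
# Minimal sketch: how surprising would k downvotes out of n comments be
# if only the hypothetical p = 0.05 random-downvote bot were at work?
from math import comb

def prob_at_least_k_downvotes(n_comments, k_downvotes, p=0.05):
    """P(X >= k) for X ~ Binomial(n_comments, p)."""
    return sum(comb(n_comments, i) * p**i * (1 - p)**(n_comments - i)
               for i in range(k_downvotes, n_comments + 1))

# Example (illustrative numbers): 4 downvoted comments out of 20 has
# probability ~1.6% under the bot-only hypothesis, so something (or
# someone) else is probably responsible.
print(prob_at_least_k_downvotes(20, 4))
```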

comment by A1987dM (army1987) · 2013-06-23T11:35:29.676Z · LW(p) · GW(p)

(OTOH, depending on what kind of connection is there between you and the bride and groom and what kind of people they are, a half-joking “how the hell did you choose to get married here of all places” might have been in order.) :-)

comment by lukeprog · 2013-06-08T18:14:40.478Z · LW(p) · GW(p)

If you're not making mistakes, you're not taking risks, and that means you're not going anywhere. The key is to make mistakes faster than the competition, so you have more chances to learn and win.

John W. Holt (previously quoted here, but not in a Rationality Quotes thread)

comment by taelor · 2013-06-08T04:36:24.276Z · LW(p) · GW(p)

The hidden thought embedded in most discussions of conspiracy theories is this: The world is being controlled by evil people; so, if we can get rid of them, the world can revert to control by good people, and things will be great again. This thought is false. The world is not controlled by any group of people – evil or good – and it will not be. The world is a large, chaotic mess. Those groups which do exert some control are merely larger pieces in the global mix.

-- Paul Rosenberg

Replies from: Viliam_Bur, ChristianKl
comment by Viliam_Bur · 2013-06-10T17:46:47.297Z · LW(p) · GW(p)

I don't know if there are short words for this, but seems to me that some people generally assume that "things, left alone, naturally improve" and some people assume that "things, left alone, naturally deteriorate".

The first option seems like optimism, and the second option seems like pessimism. But there is a catch! In real life, many things have good aspects and bad aspects. Now the person who is "optimistic about the future of things left alone" must find a reason why things are worse than expected. (And vice versa, the person who is "pessimistic about the future of things left alone" must find a reason why things are better.) In both cases, a typical explanation is human intervention. Which means that this kind of optimism is prone to conspiracy theories. (And this kind of pessimism is prone to overestimate the benefits of human actions.)

For example, in education: For a "pessimist about spontaneous future" things are easy -- people are born stupid, and schools do a decent job at making them smarter; of course, the process is not perfect. For an "optimist about spontaneous future", children should be left alone to become geniuses (some quote by Rousseau can be used to support this statement). Now the question is, why do we have a school system, whose only supposed consequence is converting these spontaneous geniuses into ordinary people? And here you go: Society needs sheep, etc.

Analogously, in politics: For some people, human nature is scary, and the fact that we can have thousands or even millions of people in the same city, without a genocide happening every night, is a miracle of civilization. For other people, everything bad in the world is caused by some evil conspirators who either don't care or secretly enjoy human suffering.

This does not mean that there are no conspiracies ever, no evil people, no systems made worse by human tampering. I just wanted to point out that if you expect things to improve spontaneously (which seems like a usual optimism, which is supposedly a good thing), the consequences of your expectations alone, when confronted with reality, can drive you to conspiracy theories.

Replies from: ChristianKl, OrphanWilde, Armok_GoB
comment by ChristianKl · 2013-06-13T22:03:49.327Z · LW(p) · GW(p)

For other people, everything bad in the world is caused by some evil conspirators who either don't care or secretly enjoy human suffering.

I don't think that accurately describes a position of someone like Alex Jones.

You can care about people and still push the fat man over the bridge, and then try to keep the fact that you did so secret, because you live in a country where the prevailing Christian values dictate that pushing the fat man over the bridge is a sin.

There are a bunch of conspiracy theories where there is an actual conflict of values and present elites are just evil according to the moral standards that the person who started the conspiracy theory has.

Take education. If you look at EU educational reform after the Bologna Process, there are powerful political forces who want to optimize education to let universities teach skills that are valuable to employers. On the other hand you do have people on the left who think that universities should teach critical thinking and create a society of individuals who follow the ideals of the Enlightenment.

There's a real conflict of values.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-06-14T09:21:39.509Z · LW(p) · GW(p)

there are powerful political forces who want to optimize education to let universities teach skills that are valuable to employers. On the other hand you do have people on the left who think that universities should teach critical thinking and create a society of individuals who follow the ideals of the Enlightenment. There's a real conflict of values.

In this specific conflict, I would prefer having two kinds of school -- universities and polytechnics -- each optimized for one of the purposes, and let the students decide.

Seems to me that conflicts of values are worse when a unified decision has to be made for everyone. (Imagine that people would start insisting that only one subject can be ever taught at schools, and then we would have a conflict of values whether the subject should be English or Math. But that would be just a consequence of a bad decision at meta level.)

But yeah, I can imagine a situation with a conflict of values that cannot be solved by letting everyone pick their choice. And then the powerful people can push their choice, without being open about it.

Replies from: ChristianKl
comment by ChristianKl · 2013-06-14T10:44:21.442Z · LW(p) · GW(p)

Seems to me that conflicts of values are worse when a unified decision has to be made for everyone. (Imagine that people would start insisting that only one subject can be ever taught at schools, and then we would have a conflict of values whether the subject should be English or Math. But that would be just a consequence of a bad decision at meta level.)

You do have this in a case like teaching the theory of evolution.

You have plenty of people who are quite passionate about making a unified decision to teach everyone the theory of evolution, including the children of parents who don't believe in the theory of evolution.

Germany has compulsory schooling. Some fundamentalist Christians don't want their children in public schools. If you discuss the issue with the people who have political power, you find that those people don't want those children to be taught some strange fundamentalist worldview that includes things like young-earth creationism. They want the children to learn the basic paradigm that people in German society follow.

On the other hand, I'm not sure whether you can get a motivation like that from reading the newspaper. Everyone who's involved in the newspaper believes that it's worth teaching children the theory of evolution, so it's not worth writing a newspaper article about it.

Is it a secret persecution of fundamentalist Christians? The fundamentalist Christians from whom the government takes away the children for "child abuse" because the children don't go to school feel persecuted. On the other hand, the politicians in question don't really feel like they are persecuting fundamentalist Christians.

The ironic thing about it is that compulsory schooling was introduced in Germany for the stated purpose of turning children into "good Christians".

In a case like evolution, do you sincerely believe that the intellectual elite should use their power to push a Texan public school to teach evolution even if the parents of the children and the local board of education don't want it?

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-06-14T11:02:00.459Z · LW(p) · GW(p)

The ironic thing about it is that compulsory schooling was introduced in Germany for the stated purpose of turning children into "good Christians".

Yeah, when people in power create tools to help them maintain the power, if those tools are universal enough, they will be reused by the people who get the power later.

In a case like evolution, do you sincerely believe that the intellectual elite should use their power to push a Texan public school to teach evolution even if the parents of the children and the local board of education don't want it?

The trade-offs need to be discussed rationally. The answer would probably be "yes", but there are some negative side effects. For example, you create a precedent for other elites to push their agenda. (Just like those Christians did with compulsory education.) Maybe a third option could be found. (Something like: Don't say what schools have to teach, but make the exams independent of schools. Make knowledge of evolution necessary to pass a biology exam. Make it public when students or schools or cities are "failing in biology".)

Replies from: Eugine_Nier, ChristianKl
comment by Eugine_Nier · 2013-06-15T06:27:52.495Z · LW(p) · GW(p)

Something like: Don't say what schools have to teach, but make the exams independent of schools.

Why have governments control exams at all? Have different certifying authorities, and let employers decide which authorities' diplomas they accept.

Replies from: Osiris
comment by Osiris · 2013-06-15T06:39:25.759Z · LW(p) · GW(p)

That could work! On the other hand, it may set up a situation where a person who is only guilty of being raised in the wrong place may never get a decent job. Wonder what can be done to prevent that as much as possible?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-06-15T06:54:31.013Z · LW(p) · GW(p)

On the other hand, it may set up a situation where a person who is only guilty of being raised in the wrong place may never get a decent job.

And this differs from the status quo, how?

Replies from: Osiris
comment by Osiris · 2013-06-18T04:56:46.262Z · LW(p) · GW(p)

I was under the impression you wanted to improve things significantly. Hence why I mentioned that issue--and it IS an issue.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-06-21T06:26:20.187Z · LW(p) · GW(p)

My point is that a child's parents are more likely to make good decisions for the child than education bureaucrats.

Replies from: CCC
comment by CCC · 2013-06-21T08:36:15.603Z · LW(p) · GW(p)

That depends on the parents. Yes, many parents (including mine and, presumably, yours) have the best interests of the child at heart, and have the knowledge and ability to be able to serve those interests quite well.

This is not, however, true of all parents. There's no entrance exam for parenthood. Thus:

  • Some parents are directly abusive to their children (including: many parents who abuse alcohol and/or drugs)
  • Some parents are total idiots; even if they have the best interests of the child at heart, they have no idea what to do about it
  • Some parents are simply too mired in poverty; they can't afford food for their children, never mind schooling
  • Some parents are, usually through no fault of their own, dead while their children are still young
  • Some parents are absent for some reason (possibly an acrimonious divorce? Possibly in order to find employment?)

An education bureaucrat, on the other hand, is a person hand-picked to make decisions for a vast number of children. Ideally, he is picked for his ability to do so; that is, he is not a total idiot, directly abusive, dead, or missing, and he has a reasonable budget to work with. He also has less time to devote to making a decision per child.

Replies from: Jiro, Eugine_Nier
comment by Jiro · 2013-06-21T21:45:53.600Z · LW(p) · GW(p)

That's like claiming that bicycling is better than driving cars, as long as "driving cars" includes cases where the cars are missing or broken.

If the parents are missing, dead, abusive, or total idiots (depending on how severe the "total" idiocy is), they can be replaced by adoptive or foster parents. You would need to compare bureaucrats to parents-with-replacement, not to parents-without-replacement, to get a meaningful comparison.

Replies from: Osiris
comment by Osiris · 2013-06-23T03:16:45.092Z · LW(p) · GW(p)

A question: How many people are so attached to being experts at parenting that they would rather see children jobless, unhappy, or dead than educated by experts in a particular field (whether biology or social studies)? Those are the people I worry about, when I imagine a system in which parents/government could decide all the time what their children learn and from what institution. For every parent or official that changes their religion just to get children into the best schools, willing to give up every alliance just to get the tribe's offspring a better chance at life, and happy to give up their own authority in the name of a growing child's happiness, there are many, many more who are not so caring and fair, I fear.

Experts in a field are far more likely to want to educate children better BECAUSE the above attachment to beliefs, politics, and authority is not, in their minds, in competition with their care for the children (or, at least, shouldn't be, if those same things depend upon their knowledge). So, rather than saying we trust business, government, or one's genetic donors, shouldn't we be trying to make it so that the best teachers are trusted, period? Or, am I missing the point?

Replies from: Jiro
comment by Jiro · 2013-06-25T04:09:42.478Z · LW(p) · GW(p)

A question: How many people are so attached to being experts at parenting that they would rather see children jobless, unhappy, or dead than educated by experts in a particular field (whether biology or social studies)?

That's a very odd question because you're phrasing it as a hypothetical, thus forcing the logical answer to be "yes, being taught by an expert is better than having the child dead", but you're giving no real reason to believe the hypothetical is relevant to the real world. If experts could teleport to the moon, should we replace astronauts with them?

So, rather than saying we trust business, government, or one's genetic donors, shouldn't we be trying to make it so that the best teachers are trusted, period?

If you seriously believe what that is implying, that argument wouldn't just apply to education. Why shouldn't we just take away all children at birth (or grow them in the wombs of paid volunteers and prohibit all other childbearing) to have them completely raised by experts, not just educated by them?

Replies from: Osiris
comment by Osiris · 2013-06-25T08:33:12.083Z · LW(p) · GW(p)

Would it benefit the children more than being raised by the parents? Then the answer would be "yes." Many people throughout history attempted to have their children raised by experts alone, so it is not without precedent, for all its strangeness. Nobles in particular entrusted their children to servants, tutors, and warriors, rather than seek to provide everything needed for a healthy (by their standards) childhood themselves. Caring about one's offspring may include realizing that one needs lots of help.

By the way, I did not intend to cut off an avenue of exploration, here--merely to point out that the selection processes for business, government, and mating do not have anything to do with getting a better teacher or a person good at deciding what should be taught. If that does destroy some potential solution, I hope you forgive me, and would love to hear of that solution so I may change.

comment by Eugine_Nier · 2013-06-25T02:37:29.024Z · LW(p) · GW(p)

An education bureaucrat, on the other hand, is a person hand-picked to make decisions for a vast number of children.

You have an extremely over-idealistic view of how the education bureaucracy (or any bureaucracy for that matter) works.

For evolutionary reasons, parents have a strong desire to do what's best for their child, bureaucracies on the other hand have all kinds of motivations (especially perpetuating the bureaucracy).

he is not a total idiot, directly abusive, dead, or missing,

You haven't dealt with bureaucracy much, have you?

he has a reasonable budget to work with.

There are a lot of failing school systems with large budgets. Throwing money at a broken system doesn't give you a working system, it gives you a broken system that wastes even more money.

Replies from: CCC
comment by CCC · 2013-06-25T09:21:34.581Z · LW(p) · GW(p)

For evolutionary reasons, parents have a strong desire to do what's best for their child, bureaucracies on the other hand have all kinds of motivations (especially perpetuating the bureaucracy).

Evolution is satisfied if at least some of the children live to breed. There are several possible strategies that parents can follow here; having many children and encouraging promiscuity would satisfy evolutionary reasons and likely do so better than having few children and ensuring that they are properly educated. Evolutionary reasons are insufficient to ensure that what happens is good for the children; evolutionary reasons are satisfied by the presence of grandchildren.

he has a reasonable budget to work with.

There are a lot of failing school systems with large budgets. Throwing money at a broken system doesn't give you a working system, it gives you a broken system that wastes even more money.

Yes. That means that the problems in those systems are not money; the problems in those systems lie elsewhere, and need to be dealt with separately.

he is not a total idiot, directly abusive, dead, or missing,

You haven't dealt with bureaucracy much, have you?

...not that much, no. I would kind of expect that, when dealing with someone who will be making decisions that affect vast numbers of children, people will make some effort to consider the long-term effects of such choices. (I realise that, in some cases, this will involve words like 'indoctrination'; there can be a dark side to long-term planning).

This may be over-idealistic on my part. The way I see it, though, it is not the bureaucrat's job to be better at making decisions for children than the best parent, or even than the average parent. It is the bureaucrat's job to create a floor; to ensure that no child is treated worse than a certain level.

Replies from: Eugine_Nier, Viliam_Bur
comment by Eugine_Nier · 2013-06-27T02:31:14.205Z · LW(p) · GW(p)

The way I see it, though, it is not the bureaucrat's job to be better at making decisions for children than the best parent, or even than the average parent. It is the bureaucrat's job to create a floor; to ensure that no child is treated worse than a certain level.

It doesn't (and can't) work this way in practice. In practice what happens is that there is a disagreement between the bureaucracy and the parents. In that case whose views should prevail? If you answer "the bureaucracy's", your floor is now also a ceiling; if you answer "the parents' ", you've just gutted your floor. If you want to answer "the parents' if they're average or better and the bureaucracy's otherwise", then the question becomes whose job it is to make that judgement, and we're back to the previous two cases.

comment by Viliam_Bur · 2013-07-22T07:26:09.079Z · LW(p) · GW(p)

I would kind of expect that, when dealing with someone who will be making decisions that affect vast numbers of children, people will make some effort to consider the long-term effects of such choices.

I am not sure why exactly it does not work this way, but as a matter of fact, it does not. Specifically, I am thinking about the department of education in Slovakia. As far as I know, it works approximately like this: There are two kinds of people there, elected and unelected.

The elected people (not sure if only the minister, or more people) only care about the short-term impression they make on their voters. They usually promise to "reform the school system" without being more specific, which is always popular, because everyone knows the system is horrible. There is no system behind the changes; it is usually a random drift of "we need one less hour of math, and one more hour of English, because languages are important" and "we need one less hour of English and one more hour of math, because former students can't do any useful stuff", plus some new paperwork for teachers.

The unelected people don't give a shit about anything. They just sit there, take their money, and expect to sit there for the next decades. They have zero experience with teaching, and they don't care. They just invent more paperwork for teachers, because the existing paperwork then explains why their jobs are necessary (someone must collect all the data, retype it into Excel, and create reports). The minister usually has no time, or does not care enough, to understand their work, optimize it, and fire those who are not needed. It is very easy for a bureaucrat to create work for themselves, because paperwork recursively creates more paperwork. These people are not elected, so they don't fear the voters; and the minister is dependent on their cooperation, so they don't fear the minister.

comment by ChristianKl · 2013-06-14T11:26:52.914Z · LW(p) · GW(p)

For example, you create a precedent for other elites to push their agenda.

Maybe elites that push their agenda have a much better chance of keeping their power than those that don't? I'm not sure how much setting precedents limits future elites.

Something like: Don't say what schools have to teach, but make the exams independent of schools. Make knowledge of evolution necessary to pass a biology exam. Make it public when students or schools or cities are "failing in biology".

Basically you try to make the system more complicated to still get what you want but make people feel less manipulated.

Complicated and opaque systems lead to conspiracy theories.

comment by OrphanWilde · 2013-06-11T20:00:22.911Z · LW(p) · GW(p)

Pessimists can also believe that education started out decent and has deteriorated to the point where it's worse than nothing.

In addition to Armok's alternatives, there's also those who believe the tendency is a reversion to the mean (the mean being the mean because it's a natural equilibrium, perhaps).

comment by Armok_GoB · 2013-06-11T00:03:53.658Z · LW(p) · GW(p)

And what about those that tend to assume things stay the same/revert to only changing on geological timescales, or those that assume it keeps moving in a linear way?

comment by ChristianKl · 2013-06-13T22:17:46.005Z · LW(p) · GW(p)

Conspiracy theorists of the world, believers in the hidden hands of the Rothschilds and the Masons and the Illuminati, we skeptics owe you an apology. You were right. The players may be a little different, but your basic premise is correct: The world is a rigged game. We found this out in recent months, when a series of related corruption stories spilled out of the financial sector, suggesting the world's largest banks may be fixing the prices of, well, just about everything.

Matt Taibbi, opening paragraph of [Everything Is Rigged: The Biggest Price-Fixing Scandal Ever](http://www.rollingstone.com/politics/news/everything-is-rigged-the-biggest-financial-scandal-yet-20130425#ixzz2W8WJ4Vix)

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-06-14T08:31:02.314Z · LW(p) · GW(p)

I smell a rat. Googling Matt Talibi (actually Taibbi) does not suggest that he was ever one of these "skeptics". It's a rhetorical flourish, nothing more.

Replies from: ChristianKl
comment by ChristianKl · 2013-06-14T09:58:08.941Z · LW(p) · GW(p)

Matt isn't a mainstream journalist. On the other hand he writes about stuff that you can easily document instead of writing about Rothschilds, Masons and the Illuminati.

He isn't the kind of person who cares about symbolic issues such as whether the members of the Bohemian grove do mock human sacrifices.

In the post I link to he makes his case by arguing facts.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-06-14T12:14:33.391Z · LW(p) · GW(p)

In the post I link to he makes his case by arguing facts.

He may even be right, and the Paul Rosenberg article is lightweight and appears on what looks like a kook web site. But it seems to me that there's no real difference between their respective conclusions.

Replies from: ChristianKl
comment by ChristianKl · 2013-06-15T18:16:14.121Z · LW(p) · GW(p)

Rosenberg writes:

So, when they say, “No one saw this crisis coming,” they may be telling the truth, at least as far as they know it. Neither they nor anyone in their circles would entertain such thoughts. Likewise, they may not see the next crisis until it hits them.

Plenty of big banks did make money by betting on the crisis. There were a lot of cases where banks sold their clients products that the banks knew would go south.

Realising that there are important political things that don't happen in the open is a meaningful conclusion. Matt isn't in a position where he can make claims for which he doesn't have to provide evidence.

In 2011 Julian Assange told the press that the US government has an API it can use to query whatever data it likes from Facebook. On the Skeptics StackExchange website there is a question asking whether there's evidence for Assange's claim or whether he made it up. It doesn't mention the possibility that Assange was just referring to nonpublic information. The orthodox skeptic simply rejects claims without public proof.

Two years later we now have decent evidence, via PRISM, that the US has that capability. In 2011 Julian knew it was happening because he had the kind of connections you need to get that knowledge, but he had no way of proving it.

If you know Italian or German, Leoluca Orlando racconta la mafia / Leoluca Orlando erzählt die Mafia is a great book that provides a paradigm of how to operate in a political system with conspiracies.

Leoluca Orlando was mayor of Palermo, which is the capital of Sicily, and fought the Mafia there. That means he has good credentials for saying something about how to deal with it.

He starts his book with the sentence:

I know it. But I don't have evidence.

Throughout the book he says things about the Sicilian Mafia that he can't prove but that he knows. In his world in which he had to take care to avoid getting murdered by the Mafia and at the same time fight it, that's just how the game works.

He also makes the point that it's very important to have politicians who follow a moral code.

The book ends by saying that the new Mafia now consists of people in high finance.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-15T18:32:11.493Z · LW(p) · GW(p)

On this site, it's probably worth clarifying that "evidence" here refers to legally admissible evidence, lest we go down an unnecessary rabbit hole.

comment by Shmi (shminux) · 2013-06-10T17:00:43.731Z · LW(p) · GW(p)

My dad used to run a business and whenever they needed a temp, he'd always line up 5-10 interviewees, to check out how they looked.

And then hire the ugliest.

Aside from keeping my mother off his back, he reasoned that if the temp had kept good employment, and it wasn't for her looks, she must be ok.

From the comments on the article about jobs for the good-looking.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-06-15T19:52:18.669Z · LW(p) · GW(p)

This is a nice calculation with a fairly simple causal diagram. The basic point is that if you think people are repeatedly hired either for their looks or for being a good worker, then among the pool of people who are repeatedly hired, looks and good work are negatively correlated.
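
To see the selection effect concretely, here is a minimal simulation sketch. It assumes looks and work quality are independent standard normals in the population, and that a person keeps getting hired if either trait is above a threshold; all names and thresholds are illustrative assumptions, not taken from the comment or the article:

```python
# Minimal sketch of a collider/selection effect: conditioning on
# "hired for looks OR hired for good work" makes two independent
# traits look negatively correlated among the hired.
import random

def sample_correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

random.seed(0)
people = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100_000)]

# In the whole population, looks and work quality are independent (~0.0).
print(sample_correlation(*zip(*people)))

# Among those hired for looks OR work (either trait above threshold),
# the correlation comes out noticeably negative.
hired = [(looks, work) for looks, work in people if looks > 1 or work > 1]
print(sample_correlation(*zip(*hired)))
```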

Replies from: army1987
comment by Osiris · 2013-06-06T13:35:05.436Z · LW(p) · GW(p)

“Those who will not reason, are bigots, those who cannot, are fools, and those who dare not, are slaves.” --Lord Byron.

All too often, those who are least rational in their best moments are the greatest supporters of using one's head, if only to avoid too early a demise. I wonder how many years Lord Byron gained from rational thought, and which of the risks he took were taken because he was good at betting...

comment by elharo · 2013-06-03T23:25:56.697Z · LW(p) · GW(p)

the designers of a theoretical technology in any but the most predictable of areas should identify its assumptions and claims that have not already been tested in a laboratory. They should design not only the technology but also a map of the uncertainties and edge cases in the design and a series of such experiments and tests that would progressively reduce these uncertainties. A proposal that lacks this admission of uncertainties coupled with designs of experiments that will reduce such uncertainties should not be deemed credible for the purposes of any important decision. We might call this requirement a requirement for a falsifiable design.

--Nick Szabo, Falsifiable design: A methodology for evaluating theoretical technologies

Replies from: ChristianKl
comment by ChristianKl · 2013-06-06T18:07:28.667Z · LW(p) · GW(p)

If something is purely theoretical, you can't test it in the lab. You need to move beyond theory to start testing how a technology really works.

There are cases where a technology might be dangerous and you want to stay with the theoretical analysis of the problem for some time. In other cases you don't want to do falsifiable design, but rather put the technology into reality as soon as possible.

comment by Bruno_Coelho · 2013-06-01T19:13:25.845Z · LW(p) · GW(p)

Students are often quite capable of applying economic analysis to emotionally neutral products such as apples or video games, but then fail to apply the same reasoning to emotionally charged goods to which similar analyses would seem to apply. I make a special effort to introduce concepts with the neutral examples, but then to challenge students to ask why emotionally charged goods should be treated differently.

-- R. Hanson

Replies from: Vaniver
comment by Vaniver · 2013-06-02T17:28:20.906Z · LW(p) · GW(p)

I'm under the impression that all EY / RH quotes are discouraged, as described in this comment tree, which suggests the following rule should be explicitly amended to be broader:

Do not quote from Less Wrong itself, Overcoming Bias, or HPMoR.

comment by Thomas · 2013-06-01T11:43:02.113Z · LW(p) · GW(p)

I will destroy my enemies by converting them to friends!

  • Maimonides
Replies from: Luke_A_Somers, jazmt, JoshuaZ
comment by Luke_A_Somers · 2013-06-01T14:32:41.144Z · LW(p) · GW(p)

Source? It's pithy, yet not on the usual quote compilations that I checked.

Replies from: Baughn, Emile, Kawoomba, Thomas
comment by Baughn · 2013-06-05T09:17:49.570Z · LW(p) · GW(p)

Sounds like Takamachi Nanoha to me.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-10T09:42:31.129Z · LW(p) · GW(p)

That's more along the lines of, "I will convert my enemies to friends by STARLIGHT BREAKER TO THE FACE".

Offhand I can't think of a single well-recorded real-life historical instance where this has ever worked.

Replies from: simplicio, CronoDAS, Baughn
comment by simplicio · 2013-06-11T23:14:05.388Z · LW(p) · GW(p)

Substitute "friends" with "trading partners" and the outlook improves though.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-12T05:14:29.339Z · LW(p) · GW(p)

Fair, the British were totally befriending their way through history for a while.

comment by CronoDAS · 2013-06-12T07:17:45.641Z · LW(p) · GW(p)

Offhand I can't think of a single well-recorded real-life historical instance where this has ever worked.

"Befriending" by force? Well, post-WWII Japan worked out pretty well for the United States. As for dealing with would-be enemies by actually befriending them, Alexander Nevsky sucked up to the Mongols and ended up getting a much better deal for Russia than many of the other places the Mongols invaded.

comment by Baughn · 2013-06-10T20:33:02.800Z · LW(p) · GW(p)

That's what her reputation turned out like, and what TSAB propaganda likes to claim. It's not what she actually did. Let me count the befriendings:

  • Alisa Bannings. The sole "Nanoha-style befriending": Nanoha punched her to make her stop bothering Suzuka, after which they somehow became friends. No starlight breaker, though.

  • Alicia. Mostly Alicia was the one beating up Nanoha. It's true that Nanoha eventually defeated her in a climactic battle, after first sort-of-befriending her along more normal lines; however, Nanoha's victory in that battle isn't what finally turned Alicia. That's down to the actions of her insane, brain-damaged mother.

  • Vita. Neither motivation nor loyalty ever wavered.

  • Reinforce. Decided to work with Nanoha after Hayate asked her to. Nanoha's starlight breaker was helpful for temporarily weakening the defence program, but was not instrumental in the actual motivation change.

  • Vivio. ...do I really need to go there?

Her reputation for converting enemies is not undeserved, but she's not converting them by defeating them; she's converting and defeating them. Amusingly, the movies (which are officially TSAB propaganda) show marginal causation where there's only correlation.

Oh, and explicitly because people have asked me not to, you're hereby invited to the rizon/#nanoha irc channel. I'm relatively confident you won't show up, which is good - it has a tendency to distract authors when I do this. :P

Replies from: Alicorn
comment by Alicorn · 2013-06-10T23:13:16.216Z · LW(p) · GW(p)

Did you confuse Alicia with Fate?

Replies from: Baughn
comment by Baughn · 2013-06-11T08:24:33.772Z · LW(p) · GW(p)

No.

I'm just opinionated on the subject.

Replies from: Eliezer_Yudkowsky, Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-12T05:27:05.496Z · LW(p) · GW(p)

MAHOU SHOUJO TRANSHUMANIST NANOHA

"Girl," whispered Precia. The little golden-haired girl's eyes were fluttering open, amid the crystal cables connecting the girl's head to the corpse within its stasis field. "Girl, do you remember me?"

It took the girl some time to speak, and when she did, her voice was weak. "Momma...?"

The memories were there.

The brain pattern was there.

Her daughter was there.

"Momma...?" repeated Alicia, her voice a little stronger. "Why are you crying, Momma? Did something happen? Where are we?"

Precia collapsed across her daughter, weeping, as some part of her began to believe that the long, long task was finally over.

Replies from: Leonhart
comment by Leonhart · 2013-06-12T14:02:41.533Z · LW(p) · GW(p)

So, in case anyone is still confused about the point of the Quantum Physics Sequence, it was to help future mad scientists love their reconstructed daughters properly :)

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-12T14:31:07.887Z · LW(p) · GW(p)

An Idiot Plot is any plot that goes away if the characters stop being idiots. A Muggle Plot is any plot which dissolves in the presence of transhumanism and polyamory. That exact form is surprisingly common; e.g. from what I've heard, canon!Twilight has two major sources of conflict, Edward's belief that turning Bella into a vampire will remove her soul, and Bella waffling between Edward and Jacob. I didn't realize it until Baughn pointed it out, but S1 Nanoha - not that I've watched it, but I've read fanfictions - counts as a Muggle Plot because the entire story goes away if Precia accepts the pattern theory of identity.

Replies from: Jiro, shminux, Leonhart, Baughn, NancyLebovitz
comment by Jiro · 2013-06-12T18:28:03.922Z · LW(p) · GW(p)

I would find it unhelpful to describe as a "Muggle Plot" any plot that depends on believing one side of an issue where there is serious, legitimate, disagreement.

(Of course, you may argue that there is no serious, legitimate disagreement on theories of identity, if you wish.)

I also find it odd that polyamory counts but not, for instance, plots that fail when you assume other rare preferences of people. Why isn't a plot that assumes that the main characters are heterosexual considered a Muggle Plot just as much as one which assumes they are monogamous? What about a plot that fails if incest is permitted (Star Wars could certainly have gone very differently.) If a plot assumes that the protagonist likes strawberry ice cream, and it turned out that the same percentage of the population hates strawberry ice cream as is polygamous, would that now be a Muggle Plot too?

Replies from: Vaniver, MugaSofer
comment by Vaniver · 2013-06-13T00:08:51.883Z · LW(p) · GW(p)

I also find it odd that polyamory counts but not, for instance, plots that fail when you assume other rare preferences of people. Why isn't a plot that assumes that the main characters are heterosexual considered a Muggle Plot just as much as one which assumes they are monogamous?

I think the idea is not so much "rare preference" as "constrained preference," where that constraint is not relevant / interesting to the reader. Looking at gay fiction, there's lots of works in settings where homosexuality is forbidden, and lots of works in settings where homosexuality is accepted. A plot that disappears if you tried to move it to a setting where homosexuality is accepted seems too local; I've actually mostly grown tired of reading those because I want them to move on and get to something interesting. I imagine that's how it feels for a polyamorist to read Bella's indecision.

To use the ice cream example, imagine trying to read twenty pages on someone in an ice cream shop, agonizing over whether to get chocolate or strawberry. "Just get two scoops already!"

Replies from: Eliezer_Yudkowsky, Jiro
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-13T00:38:40.887Z · LW(p) · GW(p)

Excellent reply. I'm pretty sure I'd feel the same way if I was reading a story where A wants to be with only B, B wants to be with only A, neither of them want to be with C, but it's just never occurred to them that monogamy is an option.

Replies from: ciphergoth, Multiheaded
comment by Paul Crowley (ciphergoth) · 2013-06-13T12:30:42.473Z · LW(p) · GW(p)

Better to say "B wishes A would not sleep with others, A wishes B would not sleep with others, but..". Monogamy is the state of disallowing other partners, not just not having them.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-13T14:16:28.816Z · LW(p) · GW(p)

I'll accept this definition, but would like a word to describe my marriage in that case.

I'm quite confident that if we ever wanted to open the relationship up to romantic/sexual relationships with third parties, we would have that conversation and negotiate the terms of it, so I'm reluctant to describe us as disallowing other partners. But I currently describe us as monogamous, because, well, we are.

Describing us as polyamorous when neither of us is interested in romantic/sexual relationships with third parties seems as ridiculous as describing a gay man as bisexual because he's not forbidden to have sex with women.

So how ought I refer to relationships like ours, on your view?

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2013-06-13T15:51:41.140Z · LW(p) · GW(p)

I'd describe that as monogamous. You're saying that you think you'd be able to negotiate a new rule if circumstances arose, but the current rule is monogamy.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-13T16:22:03.936Z · LW(p) · GW(p)

Mm. OK, with that connotation of "disallowing", I would agree. It's not the connotation I would expect to ordinarily come to mind in conversation, and in particular your statements about "B wishes A would not sleep with others" emphasized a different understanding of "disallowing" in my mind.

Replies from: army1987, ciphergoth
comment by A1987dM (army1987) · 2013-06-15T18:59:57.482Z · LW(p) · GW(p)

Have you (implicitly or explicitly) promised each other not to have sex with anyone else for the time being (even though the promise is renegotiable)? For example, would it be OK with you if your husband went to (say) a conference abroad and had a one-night stand with someone there without telling you until afterwards? That'd sound like a stronger condition than "B wishes A would not sleep with others" -- I wish my grandma didn't smoke, but given that she's never promised me not to smoke...

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-15T20:02:25.138Z · LW(p) · GW(p)

If he had sex with someone without telling me until afterwards, I would be very surprised, and it would suggest that our relationship doesn't work the way I thought it did. I wouldn't be OK with that change/revelation, and would need to adjust until I was OK with it.

If he bought a minivan without telling me, all of the above would be true as well.

But it simply isn't true that I wish he wouldn't buy a minivan, nor is it true that I wish he wouldn't sleep with others.

And if he came to me today and said "I want to sleep with so-and-so," that would be a completely different situation. (Whether I would be OK with it would depend a lot on so-and-so.)

It's possible that, somewhere in the last 20 years, he promised me he wouldn't sleep with anyone else. Or, for that matter, buy a minivan. If so, I've forgotten (if it was an implicit promise, I might not even have noticed), and it doesn't matter to me very much either way.

Replies from: army1987
comment by A1987dM (army1987) · 2013-06-16T16:46:24.544Z · LW(p) · GW(p)

If he had sex with someone without telling me until afterwards, I would be very surprised, and it would suggest that our relationship doesn't work the way I thought it did. I wouldn't be OK with that change/revelation, and would need to adjust until I was OK with it.

If so, I wouldn't consider it much of a stretch to call it monogamous.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-16T17:52:42.973Z · LW(p) · GW(p)

Nor would I, as I said initially.

What I considered a stretch was accepting ciphergoth's definition of monogamy, given that my marriage is monogamous, because "We disallow other partners" didn't seem to accurately describe my monogamous marriage. (Similarly, "We disallow the purchase of minivans" seems equally inaccurate.)

Then came ciphergoth's clarification that he simply meant by "disallow" that right this moment it isn't allowed, even though if we expressed interest in changing the rule the rule would change and at that time it would be allowed. That seems like a weird usage of "disallow" to me (consider a dialog like "You aren't allowed to do X." "Oh, OK. Can I do X?" "Yeah, sure.", which is permitted under that usage, for example), but I agreed that under that usage it's true that we're not allowed other partners.

I hope that clears things up.

comment by Paul Crowley (ciphergoth) · 2013-06-14T09:07:25.115Z · LW(p) · GW(p)

Right, but those are the obvious circumstances where a couple who were not monogamous might become so.

comment by Multiheaded · 2013-06-13T01:24:39.174Z · LW(p) · GW(p)

(The more plausible reason being that C is just coercing them both.)

comment by Jiro · 2013-06-13T02:41:06.968Z · LW(p) · GW(p)

Explaining it as a complaint about a constrained preference does negate the heterosexual example, but I could easily tweak the example a bit: I could still ask why "Muggle Plots" doesn't include plots that assume a character isn't bisexual. And my incest example applies without even any tweaks--I'm not pointing out that Star Wars would be different if characters accepted incestuous relationships and no other kind, I'm pointing out that Star Wars would be different if characters accepted incestuous relationships in addition to the ones they do now--that is, if their preference was less constrained. So why is it that a plot that depends on the unacceptability of incest doesn't count as a Muggle Plot?

Replies from: Armok_GoB, Vaniver
comment by Armok_GoB · 2013-06-13T16:54:39.845Z · LW(p) · GW(p)

Having read the rest of the conversation... I'd say that yes, I have a mild "dammit, weren't condoms invented in this universe long enough ago for these issues to have gone away?!" reaction to Star Wars, but only after reconsidering it in the light of Homestuck. Which, by the way, provides an excellent example in the alien Trolls considering both heterosexuality and incest-taboos in the kids to be trite annoyances.

I'm going out on a limb here, and saying that Muggle Plot is not a property of a plot, or even a plot-reader pair, but rather an emotion that can be felt in response to a plot, and which is scalar, with a rough heuristic being that it's stronger the more salient the option that'd make the plot go away is in whatever communities you participate in.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-06-15T06:44:30.611Z · LW(p) · GW(p)

I'd say that yes, I have a mild "dammit, weren't condoms invented in this universe long enough ago for these issues to have gone away?!"

Why? Remember: adaptation executors, not fitness maximizers. And if condoms have been around for long enough for people to adapt to them, the first adaptation would be to no longer find condomed sex pleasurable or fulfilling.

comment by Vaniver · 2013-06-13T02:53:04.580Z · LW(p) · GW(p)

So why is it that a plot that depends on the unacceptability of incest doesn't count as a Muggle Plot?

I suspect the constraint against incest seems relevant to Eliezer. (The concept as I outlined it is subjective, and I suspect the association with "transhumanism + polyamory" is difficult to pin down without a reference to Eliezer or clusters he's strongly associated with.)

comment by MugaSofer · 2013-06-12T23:28:24.647Z · LW(p) · GW(p)

I also find it odd that polyamory counts but not, for instance, plots that fail when you assume other rare preferences of people. Why isn't a plot that assumes that the main characters are heterosexual considered a Muggle Plot just as much as one which assumes they are monogamous?

Because poly evangelism? It certainly seems like something people decide is a good idea rather than some sort of innate preference difference.

But if that were true, I would have to admit that monogamy is probably a bad idea, and that would be sad :(

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-12T23:55:34.569Z · LW(p) · GW(p)

(shrug) My husband and I live in a largely poly-normative social environment, and are monogamous. We don't object, we simply aren't interested. It still makes "oh noes! which lover do I choose! I want them both!" plots seem stupid, though. ("if you want them both, date them both... what's the difficulty here?")

So, no, acknowledging that polyamory is something some people decide is a good idea doesn't force me to "admit" that monogamy is a bad idea.

Admittedly, I'm also not sure why it would be sad if it did.

Replies from: MugaSofer
comment by MugaSofer · 2013-06-14T14:19:21.505Z · LW(p) · GW(p)

Because social norms, of course.

Actually, I was pretty tired when I wrote that, but that's what I think I meant.

(I'll note that most monogamous people whose opinions I hear on this think polyamory is almost always a bad idea, although possibly OK for a rare minority. But if relationships are usually a good idea, and polyamory isn't usually actively bad, then polyamory=more relationships=good, goes the 1:00 AM logic.)

comment by Shmi (shminux) · 2013-06-12T15:08:44.832Z · LW(p) · GW(p)

Re pattern identity theory:

I’m against any irreversible destruction of knowledge, thoughts, perspectives, adaptations, or ideas, except possibly by their owner.

Deleting the last copy of an em in existence should be prosecuted as murder, not because doing so snuffs out some inner light of consciousness (who is anyone else to know?), but rather because it deprives the rest of society of a unique, irreplaceable store of knowledge and experiences, precisely as murdering a human would.

Scott Aaronson in The Ghost in the Quantum Turing Machine.

Replies from: wedrifid
comment by wedrifid · 2013-06-12T15:28:09.664Z · LW(p) · GW(p)

Re pattern identity theory:

The first paragraph of the quote is about pattern identity theory. Unfortunately the second paragraph is actually something of a muddling of pattern identity with the separate issue of basing moral/ethical/legal considerations only on the externalities experienced by the survivors. Specifically, making it about 'depriving the rest of society' distracts from the (hopefully) primary point that it is the pattern that matters more so than spooky stuff about an instance.

comment by Leonhart · 2013-06-12T15:02:37.420Z · LW(p) · GW(p)

Nice one. Though one could perhaps recover most of the Nanoha storyline by giving Precia Capgras delusion, unless by "transhumanism" you include the assumption that organic disorders would be trivially fixed (albeit I don't think Precia had anyone around to diagnose her?)

I'm not sure if that would make it more or less tragic.

Replies from: Baughn
comment by Baughn · 2013-06-12T17:52:19.663Z · LW(p) · GW(p)

Right, that's my standard head-canon on the subject.

Precia was very badly hurt by the accident, and had to leave society because - for some reason - resurrecting Alicia the way she did was severely illegal. As a result, there was no-one around to double-check her conclusions, or spot the brain damage.

comment by Baughn · 2013-06-12T17:47:51.831Z · LW(p) · GW(p)

My personal head-canon says that Precia, who ought to know better, was afflicted with a particular type of brain damage that prevented her from recognizing her own daughter. She was, effectively, insane.

Given that the cause of both Alicia's first death and Precia's insanity was an inadvisable engineering experiment that she is explicitly stated to have been against, this makes Precia a tragic figure in her own right.

comment by NancyLebovitz · 2013-06-19T10:53:51.761Z · LW(p) · GW(p)

Edward's belief that turning Bella into a vampire will remove her soul

Does worrying about that sort of thing suggest that Edward actually has a soul?

Replies from: ArisKatsaris
comment by ArisKatsaris · 2013-06-19T12:16:27.449Z · LW(p) · GW(p)

Does worrying about that sort of thing suggest that Edward actually has a soul?

BUFFY THE VAMPIRE SLAYER SPOILERS (up to season 4)

Rira gubhtu Fcvxr jnf fbhyyrff ng gur gvzr ur sryy va ybir jvgu Ohssl, V qba'g guvax ur jbhyq jnag gb erzbir ure fbhy, fvapr gung jbhyq shaqnzragnyyl punatr ure.

Bs pbhefr Va gur Ohssl-irefr, univat be abg univat n fbhy unf dhvgr pyrne rssrpgf (ynpxvat n fbhy zrnag lbh prnfr gb unir nal zbenyf, gubhtu lbh pna fgvyy srry ybir gbjneqf crbcyr lbh xabj), naq jr frr n pyrne qvssrerapr orgjrra crefba-jvgu-fbhy if gur fnzr crefba-jvgubhg-fbhy. V qbhog gung'f gur pnfr va gur Gjvyvtug-irefr...

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-11T09:27:18.747Z · LW(p) · GW(p)

...as the only upvoter, I suspect nobody else got that.

comment by Emile · 2013-06-03T12:13:49.257Z · LW(p) · GW(p)

After a bit of googling, I don't think it's a quote by Maimonides.

The closest I could find is this passage of the Babylonian Talmud:

Come and hear: If a friend requires unloading, and an enemy loading, one's [first] obligation is towards his enemy, in order to subdue his evil inclinations. Now if you should think that [relieving] the suffering of an animal is Biblically [enjoined], [surely] the other is preferable! — Even so, [the motive] 'in order to subdue his evil inclination' is more compelling.

comment by Kawoomba · 2013-06-01T14:49:17.051Z · LW(p) · GW(p)

That's because it's usually attributed to Abe Lincoln, with an exception.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-01T17:49:58.363Z · LW(p) · GW(p)

That's kind of amusing, considering that Lincoln is also famous for destroying his enemies the other way.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-06-01T20:33:56.685Z · LW(p) · GW(p)

He tried the nice way first...

Replies from: wedrifid
comment by wedrifid · 2013-06-03T07:23:06.843Z · LW(p) · GW(p)

He tried the nice way first...

This would seem to further weaken the quote in as much as it is evidence that the tactic doesn't work.

Replies from: Osiris, TobyBartels, Luke_A_Somers
comment by Osiris · 2013-06-19T10:24:03.826Z · LW(p) · GW(p)

Just because your enemies will not always be your friends does not mean it is useless to TRY to convert them into friends. It is, like most things, a bet. One must know, beforehand, if it is WORTH it to try.

I would say it's a useful quote because it provides an alternative to the usual "smash them as soon as they oppose you" deal going on.

Replies from: wedrifid, NancyLebovitz
comment by wedrifid · 2013-06-19T10:52:58.484Z · LW(p) · GW(p)

Just because your enemies will not always be your friends does not mean it is useless to TRY to convert them into friends. It is, like most things, a bet. One must know, beforehand, if it is WORTH it to try.

Nevertheless, the statement to which I replied remains evidence against rather than evidence for. You are of course welcome to support the sentiment despite the anecdote in question---such things aren't typically considered to be strong evidence either way.

comment by NancyLebovitz · 2013-06-19T10:36:44.344Z · LW(p) · GW(p)

I would say it's a useful quote because it provides an alternative to the usual "smash them as soon as they oppose you" deal going on.

It may also be better than the even more common "deal with them as you can, but don't expect they'll ever be on your side".

comment by TobyBartels · 2013-06-19T05:04:04.355Z · LW(p) · GW(p)

I don't know in what context Lincoln said this (if he really said it), but the tactic worked very well for him at the convention in the summer of 1860. (In those days, the conventions would start without people knowing who would be nominated. But often you had an idea, and Lincoln was a long shot.) All of the other candidates then joined Lincoln's cabinet (his ‘Team of Rivals’).

comment by Luke_A_Somers · 2013-06-03T11:15:16.682Z · LW(p) · GW(p)

Did not work in one notable case, to which the quote may or may not have originally been applied.

Of course it doesn't apply all the time.

comment by Thomas · 2013-06-01T14:49:04.975Z · LW(p) · GW(p)

Found it on the Forbes site a week or so ago. Then I googled it further and found some more occurrences. Interestingly, the quote is usually attributed to Abraham Lincoln. But he was certainly not the first with this nifty idea.

comment by Yaakov T (jazmt) · 2013-06-12T14:45:28.758Z · LW(p) · GW(p)

Does anyone know the original source in Maimonides' writings?

comment by JoshuaZ · 2013-06-12T05:34:56.937Z · LW(p) · GW(p)

I'm not sure where this is from, and the idea is good, but it doesn't sound like Maimonides. He was extremely willing to declare that those who disagreed with him were drunks, whoremongers and idolators. Rambam would rarely have talked about his own personal goals anyway; it really isn't his style. I'm skeptical that this is a genuine quote from him.

comment by elharo · 2013-06-16T13:31:02.196Z · LW(p) · GW(p)

women rarely regret having a child, even one they thought they didn’t want. But as Katie Watson, a bioethicist at Northwestern University’s Feinberg School of Medicine, points out, we tell ourselves certain stories for a reason. “It’s psychologically in our interest to tell a positive story and move forward,” she says. “It’s wonderfully functional for women who have children to be glad they have them and for women who did not have children to enjoy the opportunities that afforded them.”

--Joshua Lang, New York Times, June 12, 2013, What Happens to Women Who Are Denied Abortions?

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-06-17T05:12:04.239Z · LW(p) · GW(p)

I was also under the impression that the process of giving birth to a child triggers hormonal changes of some kind (involving oxytocin?) in the mother that help induce maternal bonding.

comment by Osiris · 2013-06-06T13:24:19.271Z · LW(p) · GW(p)

“Reality provides us with facts so romantic that imagination itself could add nothing to them.” --Jules Verne.

The fellow had a brilliant grasp of how to make scientific discovery interesting, and I think people could learn a thing or two from reading his stuff, still.

comment by CasioTheSane · 2013-06-05T06:43:48.497Z · LW(p) · GW(p)

The paucity of skepticism in the world of health science is staggering. Those who aren't insufferable skeptical douchebags are doing it wrong.

-Stabby the Raccoon

comment by tingram · 2013-06-03T01:31:31.511Z · LW(p) · GW(p)

He [the Inner Game player] reasons that since by definition the commonplace is what is experienced most often, the talent to be able to appreciate it is extremely valuable.

--W. Timothy Gallwey, Inner Tennis: Playing the Game

comment by katydee · 2013-06-01T20:52:27.361Z · LW(p) · GW(p)

When you have to shoot, shoot. Don't talk.

Tuco, The Good, The Bad, and the Ugly

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2013-06-02T01:08:02.490Z · LW(p) · GW(p)

A great line, but it's a dupe.

Replies from: katydee
comment by katydee · 2013-06-02T03:03:21.312Z · LW(p) · GW(p)

Ah! Humblest apologies, retracted.

comment by TobyBartels · 2013-06-19T04:58:09.698Z · LW(p) · GW(p)

I just watched Oz the Great and Powerful, the big-budget fanfic prequel film to The Wizard of Oz. Hardly a rationalist movie, but there was some nice boosting of science and technology where I didn't expect it. So here's the quotation:

I’m not that kind of wizard. Where I come from there aren’t any real wizards, except one, Thomas Edison. He could look into the future and make it real. […] That's the kind of wizard I want to be.

(There's more, but this is all that I could get out of the Internet and my memory.)

Replies from: Nornagest
comment by Nornagest · 2013-06-19T05:04:11.522Z · LW(p) · GW(p)

I haven't seen the movie, but that sounds awfully familiar. It doesn't sound consistent with the Oz books or any of the big-name fanfic out there (Wicked, etc.), but I wonder if it might have shown up in some similar context.

comment by elharo · 2013-06-13T20:54:58.442Z · LW(p) · GW(p)

Another potential detour on the road to truth is the nature of statistical variation and people’s tendency to misjudge through overgeneralization. Often in the fitness world, someone who appears to have above-average physical characteristics or capabilities is assumed to be a legitimate authority. The problem with granting authority to appearance is that a large part of an individual’s expression of such above-average physical characteristics and capabilities could simply be the result of wild variations across the statistical landscape. For instance, if you look out over a canopy of trees, you will probably notice a lone tree or two rising up above the rest – and it’s completely within human nature to notice things that stand out in such a way. In much the same manner, we take notice of individuals who possess superior physical capabilities, and when we do, there is a strong tendency to identify these people as sources of authority.

To make matters worse, many people who happen to possess such abnormal physical capabilities frequently misidentify themselves as sources of authority, taking credit for something that nature has, in essence, randomly dropped in their laps. In other words, people are intellectually prepared to overlook the role of statistical variation in attributing authority.

-- Doug McGuff, M.D., and John Little, Body by Science, pp. x-xi

comment by satt · 2013-06-01T12:07:26.403Z · LW(p) · GW(p)

Hindsight is blindsight. The very act of looking back on events once you know their outcome, or even trying to imagine their outcome, makes it, by definition, impossible to view such events objectively.

— Mark Salter & Trevor H. Turner, Community Mental Health Care: A practical guide to outdoor psychiatry

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-01T12:55:12.633Z · LW(p) · GW(p)

Though you can still find subjects who don't know the outcome, ask them for their predictions, and compare those predictions with subjects who are told the outcome to find the size of the hindsight bias.
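To make "the size of the hindsight bias" concrete, here is a minimal sketch of that comparison (my own illustration; the numbers are invented for the example, not taken from any study):

```python
# Illustrative only: invented numbers, not data from any actual experiment.
# Group A ("foresight") estimates P(outcome) without knowing what happened.
# Group B ("hindsight") is told the outcome occurred, then asked what they
# "would have" predicted. The gap between the group means is the bias.

foresight = [0.30, 0.45, 0.25, 0.40, 0.35]
hindsight = [0.60, 0.55, 0.70, 0.50, 0.65]

def mean(xs):
    return sum(xs) / len(xs)

bias = mean(hindsight) - mean(foresight)
print(f"foresight mean: {mean(foresight):.2f}")
print(f"hindsight mean: {mean(hindsight):.2f}")
print(f"estimated hindsight bias: {bias:+.2f}")
```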

comment by elharo · 2013-06-11T00:38:01.805Z · LW(p) · GW(p)

Linguistic traditions force us to think of body and mind as separate and distinct entities. Everyday notions like free will and moral responsibility contain underlying contradictions. Language also uses definitions and forms of the verb to be in ways that force us to think of classes of things as clearly defined (Is a fetus a human being or not?), when in fact every classification scheme has fuzzy boundaries and continuous gradations.

--Thomas M Georges, Digital Soul, 2004, p. 14

Replies from: Richard_Kennaway, simplicio
comment by Richard_Kennaway · 2013-06-17T08:29:59.459Z · LW(p) · GW(p)

I don't have a pithy parallel quote from Korzybski to put alongside this (pithiness was not his style), but the ideas here are exactly in accordance with Korzybski on "elementalism" (treating as separate and distinct entities things that are not, including body vs. mind), over/under defined terms (verbal definitions lacking extensionality), reification of categories, and the rejection of the is of identity.

comment by simplicio · 2013-06-17T04:59:02.599Z · LW(p) · GW(p)

I don't know that I'd recommend thinking of body and mind as identical (as in identity theory in phil mind).

The proper relation is probably better thought of as instantiation of a mind by a brain, in a similar way to how transistors instantiate addition and subtraction.

It matters because if you think mind=brain then you may come to some silly philosophical conclusions, like that a mind that does exactly what yours does (in terms of inputs and outputs to the rest of the body) but, say, runs on silicon, is "not the same mind" or "not a real mind."

comment by Pablo (Pablo_Stafforini) · 2013-06-02T02:53:28.901Z · LW(p) · GW(p)

With machine intelligence and other technologies such as advanced nanotechnology, space colonization should become economical. Such technology would enable us to construct “von Neumann probes” – machines with the capability of traveling to a planet, building a manufacturing base there, and launching multiple new probes to colonize other stars and planets. A space colonization race could ensue. Over time, the resources of the entire accessible universe might be turned into some kind of infrastructure, perhaps an optimal computing substrate (“computronium”). Viewed from the outside, this process might take a very simple and predictable form – a sphere of technological structure, centered on its Earthly origin, expanding uniformly in all directions at some significant fraction of the speed of light. What happens on the “inside” of this structure – what kinds of lives and experiences (if any) it would sustain – would depend on initial conditions and the dynamics shaping its temporal evolution. It is conceivable, therefore, that the choices we make in this century could have extensive consequences.

Nick Bostrom

Replies from: Kyre
comment by Kyre · 2013-06-03T07:41:42.885Z · LW(p) · GW(p)

If you haven't seen it I can recommend Stuart Armstrong's talk at Oxford on the Fermi paradox and Von Neumann probes. Before I saw this I was thinking in a fuzzy way about "colonization waves" of probes going from star to star ...

comment by Pablo (Pablo_Stafforini) · 2013-06-02T02:48:29.176Z · LW(p) · GW(p)

If you turn on your television and tune it between stations, about 10 percent of that black-and-white speckled static you see is caused by photons left over from the birth of the universe. What grater proof of the reality of the Big Bang–you can watch it on TV.

Jim Holt

Replies from: Roxolan, RolfAndreassen, Dorikka
comment by Roxolan · 2013-06-02T06:52:39.676Z · LW(p) · GW(p)

Would the static look any different if it was 0% though?

Replies from: Manfred, B_For_Bandana
comment by Manfred · 2013-06-03T06:15:21.026Z · LW(p) · GW(p)

Yes, it wouldn't be peaked at about 3 GHz. Since television only goes up to about 1 GHz, this means more noise at higher channels after accounting for other sources.

comment by B_For_Bandana · 2013-06-03T00:30:27.575Z · LW(p) · GW(p)

There would be less?

comment by RolfAndreassen · 2013-06-03T14:38:59.001Z · LW(p) · GW(p)

Can you actually do this experiment on a modern TV? I know how to change the channels on mine, but I have no idea how you would "tune" it.

Replies from: kpreid
comment by kpreid · 2013-07-14T17:45:54.152Z · LW(p) · GW(p)
  1. Selecting a channel is tuning; each channel has a specific frequency and the TV knows what frequencies the channel numbers stand for. But what you can't do is tune to a frequency that isn't assigned to any channel, so you would have to select a channel on which no station in your area is broadcasting. (A rough sketch of this channel-to-frequency mapping appears at the end of this comment.)

  2. You would have to be using an analog TV tuner (which is now obsolete, if you're in the US); digital TV has a much less direct relationship between received radio photons and displayed light photons. On the upside, it's really easy to find a channel where no station is broadcasting, now :) (though actually, I don't know what the new allocation of the former analog TV bands is and whether there would be anything broadcasting on them).

(I've recently gotten an interest in radio technology; feel free to ask more questions even if you're just curious.)
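
A rough sketch of the channel-to-frequency idea in point 1 (my own illustration; the band edges are the standard US NTSC allocations as I remember them, so treat the exact numbers as approximate):

```python
# Approximate US analog (NTSC) TV channel-to-frequency mapping, from memory.
# "Tuning to channel N" just means the tuner listens on channel N's fixed
# 6 MHz slice of spectrum; there is no continuous frequency dial to turn.

def channel_band_mhz(ch):
    """Return the approximate (low, high) band edges in MHz for a channel."""
    if 2 <= ch <= 4:       # low VHF, first group
        low = 54 + (ch - 2) * 6
    elif 5 <= ch <= 6:     # low VHF, second group (gap at 72-76 MHz)
        low = 76 + (ch - 5) * 6
    elif 7 <= ch <= 13:    # high VHF
        low = 174 + (ch - 7) * 6
    elif 14 <= ch <= 69:   # UHF (later allocation)
        low = 470 + (ch - 14) * 6
    else:
        raise ValueError("not a US analog TV channel in this simplified map")
    return low, low + 6

for ch in (2, 6, 7, 13, 14, 69):
    print(f"channel {ch}: {channel_band_mhz(ch)} MHz")
```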

comment by Dorikka · 2013-06-11T23:31:30.210Z · LW(p) · GW(p)

This grater.

Replies from: army1987
comment by A1987dM (army1987) · 2013-06-15T13:38:44.908Z · LW(p) · GW(p)

?

Replies from: gjm
comment by gjm · 2013-06-15T14:39:40.424Z · LW(p) · GW(p)

S/he is making a pun of the typo: "what grater proof..." instead of "what greater proof...". (I don't find it a very funny pun myself.)

comment by katydee · 2013-06-01T20:42:20.976Z · LW(p) · GW(p)

A little less conversation, a little more action!

Elvis Presley

Replies from: cody-bryce
comment by cody-bryce · 2013-06-03T17:53:36.863Z · LW(p) · GW(p)

One needs the right balance between conversation and action, and overall, it's probably too much of the latter and too little of the former in this world.

Replies from: savageorange, katydee
comment by savageorange · 2013-06-07T11:40:40.674Z · LW(p) · GW(p)

Or more precisely:

The problem with the world is fools and fanatics are so sure of themselves, and wiser people so full of doubts. -- Bertrand Russell

..Most actors don't think enough, and most thinkers don't act enough. cf. Dunning-Kruger effect.

Extroverts and Introverts typically line up with those two categories quite neatly, and in my observation tend to associate mainly with people of similar temperament (allowing them to avoid much of the pressure to be more balanced they'd find in a less homogenous social circle). I believe that this lack of balanced interaction is the real source of the problem. We need balanced pressure to both act and think competently, but the inherent discomfort makes most people unwilling to voluntarily seek it out (if they even become aware that doing so is beneficial).

comment by katydee · 2013-06-03T18:50:05.652Z · LW(p) · GW(p)

I'm not sure I agree in the general case, and I think that among LessWrongers things are certainly unbalanced in the other direction.

comment by [deleted] · 2013-06-01T15:11:33.708Z · LW(p) · GW(p)

Fundamental ideas play the most essential role in forming a physical theory. Books on physics are full of complicated mathematical formulae. But thought and ideas, not formulae, are the beginning of every physical theory. The ideas must later take the mathematical form of a quantitative theory, to make possible the comparison with experiment.

-- Albert Einstein

Replies from: satt
comment by satt · 2013-06-29T16:36:08.735Z · LW(p) · GW(p)

I have had to employ a fair number of technical concepts and use some mathematical operations, but the concepts have also been explained in non-technical terms and the mathematical results have been given intuitive explanation. It is hoped that the non-technical reader will not be put off by the formalities. The importance of the formal results lies ultimately in their relevance to normal communication and to things that people argue about and fight for.

— Amartya Sen, On Economic Inequality, p. vii

comment by rahul · 2013-06-03T13:39:11.332Z · LW(p) · GW(p)

From David Shields' Reality Hunger:

Once, after running deep into foul territory to make an extraordinary catch to preserve a victory, he was asked, “When did you know you were going to catch the ball?” Ichiro replied, “When I caught it."

comment by Vaniver · 2013-06-02T22:37:50.611Z · LW(p) · GW(p)

The first duty of life is to assume a pose. What the second duty is, no one has yet discovered.

--Oscar Wilde on signalling.

comment by Cthulhoo · 2013-06-02T08:31:42.173Z · LW(p) · GW(p)

- Thank you, thank you Lord, for preserving my virginity!

- You bloody idiot! Do you think God, to keep you a virgin, will drown the whole city of Florence?

(Architect Melandri to Noemi, the girl he is in love with, who thinks the flood of 1966 was sent as an answer to her prayers)

My Friends, Act II [roughly translated by me]

Replies from: Osiris
comment by Osiris · 2013-06-02T13:12:42.334Z · LW(p) · GW(p)

This is yet another reason why a God that answers prayers is far, far crueler than an indifferent Azathoth. Imagine the weight of guilt that must settle on a person if they prayed for the wrong thing and God answered!

On another note, that girl must not be very picky, if God has to destroy a whole city to keep her a virgin...(please don't blast me for this!)

comment by Pablo (Pablo_Stafforini) · 2013-06-02T02:46:39.063Z · LW(p) · GW(p)

[T]here can be no way of justifying the substantive assumption that all forms of altruism, solidarity and sacrifice really are ultra-subtle forms of self-interest, except by the trivializing gambit of arguing that people have concern for others because they want to avoid being distressed by their distress. And even this gambit […] is open to the objection that rational distress-minimizers could often use more efficient means than helping others.

Jon Elster

Replies from: sketerpot, OrphanWilde
comment by sketerpot · 2013-06-02T10:14:33.891Z · LW(p) · GW(p)

Even if altruism turns out to be a really subtle form of self-interest, what does it matter? An unwoven rainbow still has all its colors.

Replies from: Eliezer_Yudkowsky, Pablo_Stafforini
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-02T19:33:17.108Z · LW(p) · GW(p)

Rational distress-minimizers would behave differently from rational altruists. (Real people are somewhere in the middle and seem to tend toward greater altruism and less distress-minimization when taught 'rationality' by altruists.)

Replies from: syllogism, Richard_Kennaway, Locaha, pinyaka
comment by syllogism · 2013-06-04T19:04:39.494Z · LW(p) · GW(p)

That could be because rationality decreases the effectiveness of distress minimisation techniques other than altruism.

Replies from: Baughn
comment by Baughn · 2013-06-05T00:42:40.237Z · LW(p) · GW(p)

..because it makes you try to see reality as it is?

In me, it's also had the effect of reducing empathy. (Helps me not go crazy.)

Replies from: syllogism
comment by syllogism · 2013-06-05T09:41:12.976Z · LW(p) · GW(p)

Well, for me, believing myself to be a type of person I don't like causes me great cognitive dissonance. The more I know about how I might be fooling myself, the more I have to actually adjust to achieve that belief.

For instance, it used to be enough for me that I treat my in-group well. But once I understood that that was what I was doing, I wasn't satisfied with it. I now follow a utilitarian ethics that's much more materially expensive.

comment by Richard_Kennaway · 2013-06-05T09:56:43.877Z · LW(p) · GW(p)

Are they being taught 'rationality' by altruists or 'altruism' by rationalists? Or 'rational altruism' by rational altruists?

comment by Locaha · 2013-06-05T12:26:31.359Z · LW(p) · GW(p)

Shouldn't the methods of rationality be orthogonal to the goal you are trying to achieve?

comment by pinyaka · 2013-06-03T14:03:58.228Z · LW(p) · GW(p)

Perhaps this training simply focuses attention on the distress to be alleviated by altruism. Learning that your efforts at altruism aren't very effective might be pretty distressing.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-03T18:52:56.721Z · LW(p) · GW(p)

That seems to verge on the trivializing gambit, though.

Replies from: pinyaka
comment by pinyaka · 2013-06-03T22:55:29.946Z · LW(p) · GW(p)

I guess I don't see the problem with the trivializing gambit. If it explains altruism without needing to invent a new kind of motivation why not use it?

Replies from: Psy-Kosh
comment by Psy-Kosh · 2013-06-06T00:31:55.904Z · LW(p) · GW(p)

Why would actual altruism be a "new kind" of motivation? What makes it a "newer kind" than self interest?

Replies from: pinyaka
comment by pinyaka · 2013-06-06T18:11:24.973Z · LW(p) · GW(p)

I meant that everyone I've discussed the subject with believes that self-interest exists as a motivating force, so maybe "additional" would have been a better descriptor than "new."

Replies from: Psy-Kosh
comment by Psy-Kosh · 2013-06-07T15:48:56.851Z · LW(p) · GW(p)

Hrm... But "self-interest" is itself a fairly broad category, including many sub-categories like emotional state, survival, fulfillment of curiosity, self-determination, etc... Seems like it wouldn't be that hard a step, given the evolutionary pressures there have been toward cooperation and such, for it to be implemented via actually caring about the other person's well-being, instead of it secretly being just a concern for your own. It'd perhaps be simpler to implement that way. It might be partly implemented by the same emotional reinforcement system, but that's not the same thing as saying that the only thing you care about is your own reinforcement system.

Replies from: pinyaka
comment by pinyaka · 2013-06-08T01:14:22.413Z · LW(p) · GW(p)

Well, the trivializing gambit here would be to just say that "caring about another person" just means that your empathy circuitry causes you to feel pain when you observe someone in an unfortunate situation and so your desire to help is triggered ultimately by the desire to remove this source of distress.

I'm not sure how concern for another's well-being would actually be implemented in a system that only has a mechanism for caring solely about its own well-being (i.e. how the mechanism would evolve). The push for cooperation probably came about more because we developed the ability to model the internal states of other critters like ourselves so that we could mount a better offense or defense. The simplest mechanism would just be to use a facial expression or posture to cause us to feel a toned-down version of what we would normally feel when we had the same expression or posture (you're looking for information, not to literally feel the same thing at the same intensity - when the biggest member of your pack is aggressing at you, you probably want the desire to run away or submit to override the empathetic aggression).

It's worth noting (for me) that this doesn't diminish the importance of empathy, and it doesn't mean that I don't really care about others. I think that caring for others is ultimately rooted in self-centeredness but, like depth perception, is probably a pre-installed circuit in our brains (a type I system) that we can't really remove totally without radically modifying the hardware. Caring about another person is as much a part of me as being able to recognize their face. The specific mechanism is only important when you're trying to do something specific with your caring circuits (or trying to figure out how to emulate them).

comment by Pablo (Pablo_Stafforini) · 2013-06-02T11:32:43.915Z · LW(p) · GW(p)

It may not matter pragmatically but it still matters scientifically. Just as you want to have a correct explanation of rainbows, regardless of whether this explanation has any effects on our aesthetic appreciation of them, so too you want to have a factually accurate account of apparently altruistic behavior, independently of whether this matters from a moral perspective.

Replies from: ChristianKl
comment by ChristianKl · 2013-06-06T21:42:42.118Z · LW(p) · GW(p)

Science is about predicting things, not about explaining them. If a theory has no additional predictive value, then it's not scientifically valuable.

In this case I don't see the added predictive value.

comment by OrphanWilde · 2013-06-02T22:12:59.315Z · LW(p) · GW(p)

There's the alternative "gambit" of describing it in terms of signaling. There's the alternative "gambit" of describing it in terms of wanting to live in the best possible universe. There's the alternative "gambit" of ascribing altruism to the emotional response it invokes in the altruistic individual.

I find the quote false on its face, in addition to being an appeal to distaste.

Replies from: simplicio
comment by simplicio · 2013-06-11T21:28:46.773Z · LW(p) · GW(p)

There's the alternative "gambit" of ascribing altruism to the emotional response it invokes in the altruistic individual.

Careful, there are some tricky conceptual waters here. Strictly, anything I want to do can be ascribed to my emotional response to it, because that's how nature made us pursue goals. "They did it because of the emotional response it invoked" is roughly analogous to "They did it because their brain made them do it."

The cynical claim would be that if people could get the emotional high without the altruistic act (say, by taking a pill that made them think they did it), they'd just do that. I don't think most altruists would, though. There are cynical explanations for that fact, too ("signalling to yourself leads to better signalling to others") but they begin to lose their air of streetwise wisdom and sound like epicycles.

Replies from: OrphanWilde
comment by OrphanWilde · 2013-06-12T01:48:52.919Z · LW(p) · GW(p)

Are you suggesting emotions are necessary to goal-oriented behavior?

There should be some evidence for that claim; we have people with diminished emotional capacity in a wide range of forms. Do individuals with alexithymia demonstrate impaired goal-oriented behaviors?

I think there's more to emotion as a motive system than the brain as a motive force. People can certainly choose to stop taking certain drugs which induce emotional highs. 10% of people who start taking heroin are able to keep their consumption levels "moderate" or lower, as compared to 90% for something like tobacco, according to one random and hardly authoritative internet site - the precise numbers aren't terribly important. Perhaps such altruists, like most people, deliberately avoid drugs like heroin for this reason?

comment by Eugine_Nier · 2013-06-04T05:47:51.284Z · LW(p) · GW(p)

Idealism increases in direct proportion to one's distance from the problem.

-- John Galsworthy

comment by Halfwit · 2013-06-02T22:03:14.982Z · LW(p) · GW(p)

"Why do people worry about mad scientists? It's the mad engineers you have to watch out for." - Lochmon

Replies from: DanielLC, kpreid
comment by DanielLC · 2013-06-03T03:25:40.434Z · LW(p) · GW(p)

Considering the "mad scientists" keep building stuff, perhaps the question is "Why do people keep calling mad engineers mad scientists?"

Replies from: tgb
comment by tgb · 2013-06-03T18:44:06.753Z · LW(p) · GW(p)

Comic

Replies from: Caspian
comment by Caspian · 2013-06-08T01:19:00.668Z · LW(p) · GW(p)

I want to use one of those phrases in conversation. Either grfgvat n znq ulcbgurfvf be znxvat znq bofreingvbaf (spoilers de-rot13ed)

Also I found the creator's page for the comic http://cowbirdsinlove.com/46

comment by kpreid · 2013-07-14T18:01:00.107Z · LW(p) · GW(p)

"These are Komarran terrorists. Madmen—you can't negotiate with them!"

“But I don't think [spoiler] is a madman. He's not even a mad scientist. He's merely a very upset engineer.”

— Miles Vorkosigan, Komarr by Lois McMaster Bujold

comment by satt · 2013-06-01T12:02:40.454Z · LW(p) · GW(p)

As far as I'm concerned, insight, intuition, and recognition are all synonymous.

Herbert Simon

Replies from: wedrifid
comment by wedrifid · 2013-06-01T19:21:02.424Z · LW(p) · GW(p)

As far as I'm concerned, insight, intuition, and recognition are all synonymous.

Calling different but somewhat related things the same when they are not does not warrant "rationality quote" status.

Replies from: satt, Kawoomba
comment by satt · 2013-06-02T01:41:38.624Z · LW(p) · GW(p)

I acknowledge & respect this criticism, but for two reasons I maintain Simon had a worthwhile insight(!) here that bears on rationality:

  1. Insight, intuition & recognition aren't quite the same, but they overlap greatly and are closely related.

  2. Simon's comment, although not literally true, is a fertile hypothesis that not only opens eyeholes into the black boxes of "insight" & "intuition", but produces useful predictions about how minds solve problems.

I should justify those. Chapter 4 of Simon's The Sciences of the Artificial, "Remembering and Learning: Memory as Environment for Thought", is relevant here. It uses chess as a test case:

[...] one part of the grandmaster's chess skill resides in the 50,000 chunks stored in memory, and in the index (in the form of a structure of feature tests) that allows him to recognize any one of these chunks on the chess board and to access the information in long-term memory that is associated with it. The information associated with familiar patterns may include knowledge about what to do when the pattern is encountered. Thus the experienced chess player who recognizes the feature called an open file thinks immediately of the possibility of moving a rook to that file. The move may or may not be the best one, but it is one that should be considered whenever an open file is present. The expert recognizes not only the situation in which he finds himself, but also what action might be appropriate for dealing with it. [...]

When playing a "rapid transit" game, at ten seconds a move, or fifty opponents simultaneously, going rapidly from one board to the next, a chess master is operating mostly "intuitively," that is, by recognizing board features and the moves that they suggest. The master will not play as well as in a tournament, where about three minutes, on the average, can be devoted to each move, but nonetheless will play relatively strong chess. A person's skill may decline from grandmaster level to the level of a master, or from master to expert, but it will by no means vanish. Hence recognition capabilities, and the information associated with the patterns that can be recognized, constitute a very large component of chess skill.⁵ [The footnote refers to a paper in Psychological Science.]

The seemingly mysterious insights & intuitions of the chessmaster derive from being able to recognize many memorized patterns. This conclusion applies to more than chess; Simon's footnote points to a champion backgammon-playing program based on pattern recognition, and a couple of pages before that he refers to doctors' reliance on recognizing many features of diseases to make rapid medical diagnoses.

From what I've seen this even holds true in maths & science, where people are raised to the level of geniuses for their insights & intuitions. Here's cousin_it noticing that Terry Tao's insights constitute series of incremental, well-understood steps, consistent with Tao generating insights by recognizing familiar features of problems that allow him to exploit memorized logical steps. My conversations with higher ability mathematicians & physicists confirm this; when they talk through a problem, it's clear that they do better than me by being better at recognizing particular features (such as symmetries, or similarities to problems with a known solution) and applying stock tricks they've already memorized to exploit those features. Stepping out of cognitive psychology and into the sociology & history of science, the near ubiquity of multiple discovery in science is more evidence that insight is the result of external cues prompting receptive minds to recognize the applicability of an idea or heuristic to a particular problem.

The reduction of insight & intuition to recognition isn't wholly watertight, as you note, but the gains from demystifying them by doing the reduction more than outweigh (IMO) the losses incurred by this oversimplification. There are also further gains because the insight-is-intuition-is-recognition hypothesis results in further predictions & explanations:

  • Prediction: long-term practice is necessary for mastery of a sufficiently complicated domain, because the powerful intuition indicative of mastery requires memorization of many patterns so that one can recognize those patterns.

  • Prediction: consistently learning new domain-specific patterns (so that one can recognize them later) should, with a very high probability, engender mastery of that domain. (Putting it another way: long-term practice, done correctly, is sufficient for mastery.)

  • Explanation of why "[i]n a couple of domains [chess and classical music composition] where the matter has been studied, we do know that even the most talented people require approximately a decade to reach top professional proficiency" (TSotA, p. 91).

  • Prediction: "When a domain reaches a point where the knowledge for skillful professional practice cannot be acquired in a decade, more or less, then several adaptive developments are likely to occur. Specialization will usually increase (as it has, for example, in medicine), and practitioners will make increasing use of books and other external reference aids in their work" (TSotA, p. 92).

  • Prediction: "It is probably safe to say that the chemist must know as much as a diligent person can learn in about a decade of study" (TSotA, p. 93).

  • Explanation of Eliezer's experience with being deep: the people EY spoke to perceived him as deep (i.e. insightful) but EY knew his remarks came from a pre-existing system of intuitions (transhumanism and knowledge of cognitive biases) which allowed him to immediately respond to (or "complete") patterns as he recognized them.

  • Explanation of how intensive childhood training produced some famous geniuses and domain experts (the Polgár sisters, William James Sidis, John Stuart Mill, Norbert Wiener).

  • Prediction: "This accumulation of experience may allow people to behave in ways that are very nearly optimal in situations to which their experience is pertinent, but will be of little help when genuinely novel situations are presented" ("On How to Decide What to Do", p. 503).

  • Prediction: one can write a computer program that plays a game or solves a problem by mechanically recognizing relevant features of the input and making cached feature-specific responses. (A minimal sketch of such a program appears at the end of this comment.)

I know I've gone on at length here, but your criticism deserved a comprehensive reply, and I wanted to show I wasn't just being flippant when I quoted Simon. I agree he was hyperbolic, but I reckon his hyperbole was sufficiently minor & insightful as to be RQ-worthy.
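
Here is a minimal sketch of that last prediction (my own toy illustration, not Simon's; the feature names and cached responses are invented): a "player" that does nothing but recognize features of a position and emit the response stored with each pattern.

```python
# Toy illustration of recognition-driven play: no search, no reasoning,
# just feature detectors paired with cached, feature-specific responses.
# The features and advice below are invented for the example.

def open_file(position):
    return "open e-file" in position

def exposed_king(position):
    return "king on open diagonal" in position

# Cached responses associated with each recognizable pattern, in priority order.
PATTERN_RESPONSES = [
    (exposed_king, "consider a check along the open diagonal"),
    (open_file,    "consider moving a rook to the open file"),
]

def suggest_move(position):
    """Return the cached suggestion for the first recognized pattern, if any."""
    for recognize, response in PATTERN_RESPONSES:
        if recognize(position):
            return response
    return "no familiar pattern recognized; fall back on slow, explicit search"

# A "position" here is just a set of feature labels a recognizer has produced.
print(suggest_move({"open e-file"}))
print(suggest_move({"open e-file", "king on open diagonal"}))
print(suggest_move({"nothing special"}))
```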

Replies from: wedrifid
comment by wedrifid · 2013-06-02T04:25:38.921Z · LW(p) · GW(p)

Independent of whether the particular quote is labelled a rationality quote, Simon had an undeniable insight in the linked article and your explanation thereof is superb! To the extent that this level of research, organisation and explanation seems almost wasted on a comment. I'll look forward to reading your future contributions (be they comments or, if you have a topic worth explaining, posts).

comment by Kawoomba · 2013-06-01T19:26:31.976Z · LW(p) · GW(p)

The interview that's linked with the name is excellent, though. In an AI context ("as far as I [the AI guy] am concerned"), the quote makes more sense.

Replies from: wedrifid
comment by wedrifid · 2013-06-01T19:29:20.818Z · LW(p) · GW(p)

The interview that's linked with the name is excellent, though. In an AI context, the quote makes more sense.

I'd upvote a link to the article if it were posted in an open thread. I downvote it (and all equally irrational 'rationalist quotes') when they are presented as such here.

Replies from: Kawoomba
comment by Kawoomba · 2013-06-01T19:35:57.624Z · LW(p) · GW(p)

Yeah, I sometimes struggle with that: taken at face value, the quote is of course trivially wrong. However, it can be steelmanned in a few interesting ways. Then again, so can a great many random quotes. If, say, EY posted that quote, people may upvote after thinking of a steelmanned version. Whereas with someone else, fewer readers will bother, and downvote since at a first approximation the statement is wrong. What do, I wonder?

(Example: "If you meet the Buddha on the road, kill him!" - Well downvoted, because killing is wrong! Or upvoted, because e.g. even "you may hold no sacred beliefs" isn't sacred? Let's find out.)

comment by Osiris · 2013-06-25T08:50:01.655Z · LW(p) · GW(p)

"Never let your sense of morals get in the way of doing what's right." --Isaac Asimov

All too often, an intuition creates mistakes which rationality must remedy, when one is presented with a complex problem in life. No fault of the intuition, of course--it is merely the product of nature.

Replies from: Richard_Kennaway, Eugine_Nier
comment by Richard_Kennaway · 2013-06-25T12:19:56.444Z · LW(p) · GW(p)

Sometimes, rationality creates mistakes which intuition must modify. Rationality, too, is merely a product of nature.

I don't know the context of the Asimov quote, but it is not clear that the two things he is contrasting match up, in either order, to rationality and intuition.

comment by Eugine_Nier · 2013-06-30T06:12:24.826Z · LW(p) · GW(p)

The problem with Asimov's advice is that without context it seems to be telling people to ignore ethical injunctions, which is actually horrendous advice.

A better piece of advice would be "If you find your morals get in the way of doing what's right, consider that evidence that you're probably mistaken about the rightness of the action in question."

Replies from: wedrifid
comment by wedrifid · 2013-06-30T12:51:48.487Z · LW(p) · GW(p)

The problem with Asimov's advice is that without context it seems to be telling people to ignore ethical injunctions, which is actually horrendous advice.

Ethical injunctions and morals are similar but not the same thing. Also note that "sense of morals" seems to be referring to intuitions-without-consideration which is different again.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-07-01T05:00:46.500Z · LW(p) · GW(p)

Ethical injunctions and morals are similar but not the same thing.

LW jargon. Neither Asimov nor the intended audience would necessarily make that distinction.

Also note that "sense of morals" seems to be referring to intuitions-without-consideration which is different again.

Not really once you consider where said intuitions come from.

Replies from: wedrifid
comment by wedrifid · 2013-07-01T05:10:35.850Z · LW(p) · GW(p)

LW jargon. Neither Asimov nor the intended audience would necessarily make that distinction.

The jargon introduction was yours, not Asimov's or mine, and your interpretation of his advice as telling people to ignore ethical injunctions is uncharitable as a reading of his intent and mistaken as a claim about how the LW concept applies.

Not really once you consider where said intuitions come from.

Yes, really. I don't know what you are basing this 'consideration' on.

An example of following Asimov's advice would be someone with a strong moral sense that homosexuality is wrong, but a strong egalitarian philosophy, choosing to overcome the moral sense and refusing to stone the homosexual to death despite the instinctive and socially reinforced moral revulsion.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-07-02T03:23:48.785Z · LW(p) · GW(p)

The jargon introduction was yours, not Asimov's or mine

Yes, and I was using it to be technical. You seem to be trying to argue that Asimov couldn't have meant "ethical injunction" since he wrote "morality".

your interpretation of his advice as telling people to ignore ethical injunctions is uncharitable as a reading of his intent

I didn't say anything about his intent; I'm talking about how someone told not to let their "sense of morals get in the way of doing what's right" is likely to behave when attempting to act on the advice. As for intent I'm guessing Asimov's (and apparently yours judging by your example) is to interpret "sense of morals" as [a moral intuition Asimov (or wedrifid) disagrees with] and "doing what's right" as [a moral intuition Asimov (or wedrifid) agrees with].

mistaken as a claim about how the LW concept applies.

I think you are the one mistaken. Remember, the point at which an ethical injunction is most important is the point when disobeying it feels like the right thing to do.

Replies from: Osiris, wedrifid
comment by Osiris · 2013-07-10T23:50:08.407Z · LW(p) · GW(p)

Not everyone who speaks about morality automatically sinks down into nonsense and intuition, into the depths of accusations and praise for particular persons, however strange the language they use. Sometimes, speaking about morality means speaking about rationality, surviving and thriving, etc. It may be a mistake to think that Asimov was entirely ignorant of the philosophies this website promotes, given his work in science and the quotes one finds from his interviews, letters, and stories.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-07-12T06:29:05.205Z · LW(p) · GW(p)

Not everyone who speaks about morality automatically sinks down into nonsense and intuition, into the depths of accusations and praise for particular persons, however strange the language they use.

I never said anything otherwise. My point was that Asimov was trying to make a distinction between "morality" and "doing what's right". The implication being that thinking in terms of the latter will produce better behavior than thinking in terms of the former. My point is that this is not at all the case.

comment by wedrifid · 2013-07-02T05:00:45.466Z · LW(p) · GW(p)

Yes, and I was using it to be technical.

Using a technical term incorrectly and then retorting with "LW jargon." when corrected is either disingenuous or, conceivably, severely muddled thinking.

you seem to be trying to argue that Asimov couldn't have meant "ethical injunction" since he wrote "morality".

I'm saying that he in fact didn't mean "ethical injunction" in that context and also that his intended audience would not have believed that he was referring to that.

As for intent I'm guessing Asimov's (and apparently yours judging by your example) is to interpret "sense of morals" as [a moral intuition Asimov (or wedrifid) disagrees with] and "doing what's right" as [a moral intuition Asimov (or wedrifid) agrees with].

No, not remotely correct. You may note that the example explicitly mentions two different values held by the actor and describes a particular way of resolving the conflict.

comment by Shmi (shminux) · 2013-06-11T19:09:23.558Z · LW(p) · GW(p)

... from my perspective, this process of “breaking off” answerable parts of unanswerable riddles, then trying to answer those parts, is the closest thing to philosophical progress that there is.

Scott Aaronson in The Ghost in the Quantum Turing Machine.

Replies from: None
comment by [deleted] · 2013-06-11T19:14:11.572Z · LW(p) · GW(p)

I would be extremely surprised to learn that there were any unanswerable riddles of any kind.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-06-11T19:15:08.565Z · LW(p) · GW(p)

I cashed out "unanswerable" to "should be dissolved."

Replies from: None
comment by [deleted] · 2013-06-11T19:25:43.398Z · LW(p) · GW(p)

That's a good thought. I take 'should be dissolved' to mean that the appropriate attitude towards an apparent question is not to try to answer it on its own terms, but to provide some account that undermines the question. I suppose Aaronson means that given a body of interrelated concepts and questions, philosophical progress amounts to isolating those that can and should be answered on their own terms from those that can't. On this reading, there are no 'unanswerable' questions, only ill-formed ones.

That makes sense to me.

Replies from: shminux
comment by Shmi (shminux) · 2013-06-11T19:32:12.401Z · LW(p) · GW(p)

there are no 'unanswerable' questions, only ill-formed ones.

He talks specifically about the concept of free will (emphasis below is mine):

By “free will,” I’ll mean a metaphysical attribute that I hold to be largely outside the scope of science—and which I can’t even define clearly, except to say that, if there’s an otherwise-undefinable thing that people have tried to get at for centuries with the phrase “free will,” then free will is that thing! More seriously, as many philosophers have pointed out, “free will” seems to combine two distinct ideas: first, that your choices are “free” from any kind of external constraint; and second, that your choices are not arbitrary or capricious, but are “willed by you.” The second idea—that of being “willed by you”—is the one I consider outside the scope of science, for the simple reason that no matter what the empirical facts were, a skeptic could always deny that a given decision was “really” yours, and hold the true decider to have been God, the universe, an impersonating demon, etc. I see no way to formulate, in terms of observable concepts, what it would even mean for such a skeptic to be right or wrong.

So "unanswerable" does not necessarily mean "should be dissolved", but rather that it's not clear what answering such a question "would even mean". The "breaking-off" process creates questions which can have meaningful answers. The original question may remain "undissolved", but some relevant interesting questions become answerable.

Replies from: None
comment by [deleted] · 2013-06-11T19:56:47.737Z · LW(p) · GW(p)

I see no way to formulate, in terms of observable concepts, what it would even mean for such a skeptic to be right or wrong.

Hmm, but why should Aaronson restrict himself to understanding the skeptic's objection in terms of observable concepts (I assume he means something like 'empirical concepts')? I mean, we have good reason to operate within empiricism where we can, but it seems to me you're not allowed to let your methodology screen off a question entirely. That's bad philosophical practice.

Replies from: shminux, shminux
comment by Shmi (shminux) · 2013-06-11T20:19:17.732Z · LW(p) · GW(p)

why should Aaronson restrict himself to understanding the skeptic's objection in terms of observable concepts

Because that is what "answerable" means to a scientist?

Replies from: None
comment by [deleted] · 2013-06-11T20:57:57.082Z · LW(p) · GW(p)

Because that is what "answerable" means to a scientist?

I guess I could just rephrase the question this way: why should Aaronson get to assume he should be able to understand the skeptic's objection in terms of, say, physics or biology? We have very good reasons to think we should answer things with physics or biology where we can, but we can't let methodology screen off a question entirely.

Replies from: shminux
comment by Shmi (shminux) · 2013-06-11T21:00:57.500Z · LW(p) · GW(p)

Sorry, I don't understand your rephrasing. Must be the inference gap between a philosopher and a scientist.

Replies from: None
comment by [deleted] · 2013-06-11T21:25:05.872Z · LW(p) · GW(p)

Must be the inference gap between a philosopher and a scientist.

I don't think so, I think I was just unclear. It's perfectly fine of course for Aaronson to say 'if I can't understand part of the problem of free will within a scientific methodology, I'm going to set it aside.' But it's not okay for him to say 'if I can't understand part of the problem of free will within a scientific methodology, we should all just set it aside as unanswerable' unless he has some argument to that effect. Hardcore naturalism is awesome, but we don't get it by assumption.

Replies from: shminux
comment by Shmi (shminux) · 2013-06-11T21:55:07.886Z · LW(p) · GW(p)

if I can't understand part of the problem of free will within a scientific methodology, we should all just set it aside as unanswerable'

Hmm, I don't believe that he is saying anything like that.

comment by Shmi (shminux) · 2013-06-11T20:27:12.153Z · LW(p) · GW(p)

That's bad philosophical practice.

True, I agree that philosophers are uniquely equipped to see an "unanswerable" riddle as a whole, having learned the multitude of attempts to attack such a riddle from various directions throughout history. However, one of the more useful tasks I see a philosopher doing with her unique perspective is what Scott Aaronson suggests: "break off an answerable question", figure out which branch of the natural sciences is best equipped to tackle it, and pass it along to the area experts. Pass it along, not pretend to solve it, because most philosophers (with rare exceptions) are not area experts and so are not qualified to truly solve the "answerable questions". The research area can be math, physics, chemistry, linguistics, neuroscience, psychology, etc.

Replies from: None
comment by [deleted] · 2013-06-11T21:06:34.393Z · LW(p) · GW(p)

However, one of the more useful tasks I see a philosopher doing with her unique perspective is what Scott Aaronson suggests: "break off an answerable question", figure out which branch of the natural sciences is best equipped to tackle it, and pass it along to the area experts.

Absolutely, we agree on that, though I think the philosophical work doesn't end there, since area experts are generally ill equipped to evaluate their answer in terms of the original question.

Replies from: shminux
comment by Shmi (shminux) · 2013-06-11T21:53:11.179Z · LW(p) · GW(p)

No disagreement there, either. As long as after this evaluation the philosopher in question does not pretend that she helped the scientists to do their job better. If she simply applies the answer received to the original question and carves out another solvable piece of the puzzle to be farmed out to an expert, I have no problem with that.

comment by TeMPOraL · 2013-06-09T17:43:13.111Z · LW(p) · GW(p)

I think it's entirely wrong for Americans to sympathize with Boston victims while disregarding and in many cases outright denying the existence of victims of drone strikes. It's hypocrisy at its finest and especially rich coming from self-proclaimed Christians.

That is exactly the problem with nationalism.

I suspect you're probably saying that it's understandable for Americans only to feel the reality of this kind of cruelty when it affects "their own", and my response is that it may be understandable, but then so are the mechanisms of cancer.

-- HN's Vivtek in discussion about nationalism.

Replies from: simplicio
comment by simplicio · 2013-06-17T05:07:27.090Z · LW(p) · GW(p)

The author may "have a point" as they say, but it doesn't qualify as a rationality quote by my lights; more of a rhetoric quote. One red flag is

in many cases outright denying the existence of victims of drone strikes.

Who denies their existence?

Replies from: shminux, wedrifid, Eliezer_Yudkowsky
comment by Shmi (shminux) · 2013-06-17T18:05:28.089Z · LW(p) · GW(p)

Who denies their existence?

I'm pretty sure that what was meant is "innocent victims". While still a stretch, the discussion would then shift to the meaning of "innocent", rather than insinuating that the US military is so inept it cannot shoot straight and makes up stuff to cover it.

comment by wedrifid · 2013-06-17T16:39:32.842Z · LW(p) · GW(p)

The author may "have a point" as they say, but it doesn't qualify as a rationality quote by my lights; more of a rhetoric quote.

This seems accurate. The quote is a bunch of applause lights and appeals to identity strung together to support a political agenda. Sure, I entirely support the particular political agenda in question, but just because it is 'my team' being supported doesn't make the process of shouting slogans noteworthy rationality material.

If the religion based shaming line and the "in many cases outright denying the existence of victims of drone strikes" hyperbole were removed or replaced then the quote could have potential.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-17T14:26:52.355Z · LW(p) · GW(p)

Principle of charity: "Denial of existence" is to be taken as meaning "Don't think about, don't care about, don't act based on, don't know how many there were" and not "When explicitly asked if drone strikes have victims, say 'no'."

Replies from: Richard_Kennaway, wedrifid
comment by Richard_Kennaway · 2013-06-17T16:37:30.156Z · LW(p) · GW(p)

There is already a word for "don't think about, don't care about, don't act based on, don't know how many there were". That word is "disregarding", which is used in the original quote. It then adds, a fortiori, "and in many cases outright denying the existence of victims of drone strikes". In that context, it cannot mean anything but "explicitly say that there are no victims", and in addition, that this has actually happened in "many" cases.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-17T17:02:08.218Z · LW(p) · GW(p)

Hm, good point. I still suspect it's metaphorical. Then again, in a world where Fox News is currently saying how Edward Snowden may be a Chinese double agent, it may also be literal and truthful.

Replies from: Tyrrell_McAllister, Eugine_Nier
comment by Tyrrell_McAllister · 2013-06-18T18:06:48.736Z · LW(p) · GW(p)

By "in many cases outright denying the existence of victims of drone strikes", I think that the author meant "in many cases (i.e., many strikes), outright denying that some of the victims are in fact victims."

The author is probably referring to the reported policy of considering all military-age males in a strike-zone to be militants (and hence not innocent victims). I take the author to be claiming that (1) non-militant military-age male victims of drone strikes exist in many cases, and (2) the reported policy amounts to "outright denying the existence" of those victims.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-06-19T13:28:31.427Z · LW(p) · GW(p)

That's how I read it. The claim isn't that no one was killed by drone strikes, it's that no one innocent was killed, so there are no victims.

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2013-06-19T19:57:16.512Z · LW(p) · GW(p)

Yes. Furthermore, the "many cases" doesn't refer to many people who think that there has never been an innocent victim of a drone strike. Rather, the "many cases" refers to the (allegedly) many innocent victims killed whose existence (as innocents) was denied by reclassifying them as militants.

comment by Eugine_Nier · 2013-06-18T01:37:43.163Z · LW(p) · GW(p)

Fox News is currently saying how Edward Snowden may be a Chinese double agent

And the reason this hypothesis is so unlikely as to be not worth considering is:

Replies from: gwern, CCC
comment by gwern · 2013-06-18T02:00:35.315Z · LW(p) · GW(p)

And the reason this hypothesis is so unlikely as to be not worth considering is:

During the Cold War, the US and British governments were shot through with hundreds of double agents for the Soviets, to an almost ludicrous extent (eg. Kim Philby apparently almost became head of MI6 before being unmasked); and of course, due to the end of the Cold War & access to Russian archives, we now have a much better idea of everything that was going on and can claim a reasonable degree of certainty as to who was a double agent and what their activities were.

With those observations in mind: can you name a single one of those double-agents who went public as a leaker as Snowden has done?

If you can name only one or two such people, and if there were, say, hundreds of regular whistleblowers over the Cold War (which seems like a reasonable figure given all the crap like MKULTRA), then the extreme unlikelihood of the Fox hypothesis seems clear...

comment by CCC · 2013-06-18T07:27:53.854Z · LW(p) · GW(p)

If America needs a double agent from a hostile foreign power to merely point out to the media that their government may be doing something that some might find questionable, then America's got far bigger problems than a few spies.

Replies from: wedrifid
comment by wedrifid · 2013-06-18T07:57:50.557Z · LW(p) · GW(p)

If America needs a double agent from a hostile foreign power to merely point out to the media that their government may be doing something that some might find questionable, then America's got far bigger problems than a few spies.

And if a hostile government cares more about the democratic civil liberties of Americans than Americans do, then there is an even bigger problem. (The actual benefit to China of the particular activity chosen for the 'double agent' is negligible.)

comment by wedrifid · 2013-06-17T15:52:28.291Z · LW(p) · GW(p)

Principle of charity: "Denial of existence" is to be taken as meaning "Don't think about, don't care about, don't act based on, don't know how many there were" and not "When explicitly asked if drone strikes have victims, say 'no'."

Giving charity is fine. However the principle of charity does not extend to obliging that we applaud, share and propose as inspirational guidelines those things that require such charity in order to not be nonsense.

Replies from: TimS
comment by TimS · 2013-06-17T16:14:27.223Z · LW(p) · GW(p)

Doesn't that depend on the amount of work the reader needs to find a charitable reading? And whether the author would completely endorse the charitable reading?

One could probably charitably read a rationalist-friendly message into the public speeches of Napoleon on the nobility of dying in battle, but it likely would require a lot of intellectual contortions, and Napoleon almost certainly would not endorse the result. So we shouldn't applaud that charitable reading.

But I think the charitable reading of the quote from the OP is straightforward enough that the need to apply the principle of charity is not an independent reason to reject the quote. Simplicio's rejection could be complete and coherent even if he had applied the principle of charity - essentially, drawing a distinction between "rhetoric" and "rationality principle."

It might be that the distinction often makes reference to usage of the principle of charity, but that is different from refusing to apply the principle to a rationality quote.

Replies from: wedrifid
comment by wedrifid · 2013-06-17T16:23:10.170Z · LW(p) · GW(p)

Doesn't that depend on the amount of work the reader needs to find a charitable reading?

Yes.

And whether the author would completely endorse the charitable reading?

A little bit.

It might be that the distinction often makes reference to usage of the principle of charity, but that is different from refusing to apply the principle to a rationality quote.

It is the case that when I see a quote that is being defended by appeal to the principle of charity I will be more inclined to downvote said quote than if I had not seen such a justification. As best as I can tell this is in accord with the evidence that such statements provide and my preferences about what kind of quotes I encounter in the 'rationalist quotes' thread. This is not the same thing as 'refusing to apply the principle of charity'.

Replies from: TimS
comment by TimS · 2013-06-17T16:32:17.813Z · LW(p) · GW(p)

Fair enough. I think the nub of our disagreement is whether the author must endorse the interpretation for it to be considered a "charitable" reading. I think the answer is yes.

If the interpretation is an improvement but the author wouldn't endorse it, I think it is analytically clearer to avoid calling that a "charitable" reading, and instead directly call it steelmanning. There's no reason to upvote a rationality quote that requires steelmanning (and many, many reasons not to). But if we all know what the author "really meant," it seems reasonable to upvote based on that meaning.

That said, I recognize that it is very easy to mistakenly identify a charitable reading as the consensus reading (i.e. to steelman when you meant to read charitably).

Replies from: wedrifid
comment by wedrifid · 2013-06-17T16:52:52.812Z · LW(p) · GW(p)

I think the nub of our disagreement is whether the author must endorse the interpretation for it to be considered a "charitable" reading. I think the answer is yes.

I agree.

If the interpretation is an improvement but the author wouldn't endorse, I think it is analytically clearer to avoid calling that a "charitable" reading, and instead directly call it steelmanning.

That's a good distinction and a sufficient (albeit not necessary) cause to call it steelmanning.

There's no reason to upvote a rationality quote that requires steelmanning (and many, many reasons not to). But if we all know what the author "really meant," it seems reasonable to upvote based on that meaning.

In this case "in many cases outright denying the existence of victims of drone strikes" is rather strong and unambiguous language. It is clear that the author is just exaggerating his claims to enhance emotional emphasis but we have to also acknowledge that he went out of his way to say 'outright denying' when he could have said something true instead. He 'really meant' to speak a falsehood for persuasive effect.

What Eliezer did was translate from the language of political rhetoric into what someone might say if they were making a rationalist quote instead. That's an excellent thing to do to such rhetoric but if that is required then the quote shouldn't be in this thread in the first place. Maybe we can have a separate thread for "rationalist translations of inspirational or impressive quotes". (Given the standard of what people tend to post as rationalist quotes we possibly need one.)

Replies from: TimS
comment by TimS · 2013-06-17T17:05:52.890Z · LW(p) · GW(p)

After considering Richard_Kennaway's point, I'm coming to realize that Eliezer's interpretation is not "charitable" because it isn't clear that the original speaker would endorse Eliezer's reading.

Maybe we can have a separate thread for "rationalist translations of inspirational or impressive quotes."

Since this is what Rationality Quotes has apparently turned into, I'm not sure that the thread type is worth trying to save.

Replies from: wedrifid
comment by wedrifid · 2013-06-18T08:16:52.855Z · LW(p) · GW(p)

I get the impression from the analysis we have done that I am likely to essentially agree with most of your judgements regarding charitability.

comment by CasioTheSane · 2013-06-05T06:43:31.210Z · LW(p) · GW(p)

Once we accept that knowledge is tentative, and that we are probably going to improve our knowledge in important ways when we learn more about the world, we are less likely to reject new information that conflicts with our present ideas.

-Dr. Raymond Peat

comment by Skeeve · 2013-06-03T11:03:41.855Z · LW(p) · GW(p)

The secret is to make wanting the truth your entire identity, right. If your persona is completely stripped down to just "All I care about is the facts", then the steps disappear, the obstacles are gone. Tyrannosaurus was a scavenger? Okay! And then you walk right up to it without hesitation. The evidence says the killer was someone else? Okay, see you later sir, sorry for the inconvenience, wanna go bowling later now that we're on a first name basis? And so on. Just you and a straight path to the truth. That is how you become perfect.

Replies from: pinyaka, Qiaochu_Yuan, Dorikka, RolfAndreassen
comment by pinyaka · 2013-06-03T15:38:10.337Z · LW(p) · GW(p)

Also from Subnormality: the perils of AI foom.

comment by Qiaochu_Yuan · 2013-06-06T21:53:23.547Z · LW(p) · GW(p)

This seems like a silly identity to have. When does someone who just wants the truth ever act, other than for the purpose of acquiring truth?

comment by Dorikka · 2013-06-11T23:29:18.317Z · LW(p) · GW(p)

Obligatory note so that people don't get undesired value drift from a particular usage of English.

comment by RolfAndreassen · 2013-06-03T14:32:49.136Z · LW(p) · GW(p)

"Scavenger" is a slippery term. A hyena is a scavenger; that does not mean that a rabbit ought to walk into easy reach of its jaws.

Replies from: Skeeve
comment by Skeeve · 2013-06-03T14:45:39.217Z · LW(p) · GW(p)

I agree with you; the context from earlier in the strip was about reading a study with evidence pointing to the T-rex being a timid scavenger, and then getting transported back in time and seeing a T-rex acting timid.

comment by Manfred · 2013-06-02T00:25:00.332Z · LW(p) · GW(p)

Sometimes it feels like everyone's / being a dick.
But they're not, / it's just you being a dick to every one.
Some days it seems like nothing / works right.
But its fine, / you're probably using it wrong.

You wanna change the world you better / start with yourself.
Charity starts at home in the / skin you're in.
I'm not saying you should go and / change your face.
But if it bothers you that much / you should get a nose job.

I'm talking about what lies beneath / the black and white
There's a mass of gray,
it is called your brain.

Imani Coppola

Replies from: wedrifid
comment by wedrifid · 2013-06-02T05:36:52.129Z · LW(p) · GW(p)

Sometimes it feels like everyone's / being a dick. But they're not, / it's just you being a dick to every one.

A nice ideal. It'd be better world than this one if it were true.

Sometimes if it feels like everyone's being a dick it is actually because you are being not enough of a dick to everyone (at times when you ought to). Ever been to high school? Or, you know, interacted significantly with humans. Or even studied rudimentary game theory with which to establish priors for the likely behaviour of other agents conditional on your own.

The world is not fair. Reject the Just World fallacy.

Some days it seems like nothing / works right. But its fine, / you're probably using it wrong.

Sometimes things don't work because you chose bad things (or people) to work with. If something isn't working either do it differently or do something else entirely that is better.

Personal responsibility is great, and rejecting 'victim' thinking is beneficial. But self delusion is not required and is not (always) beneficial.

comment by James_Miller · 2013-06-01T15:37:56.812Z · LW(p) · GW(p)

Since, as lukeprog writes, one of the methods for becoming happier is to "Develop the habit of gratitude", here is a quote of stuff to be thankful for: "

  • The taxes I pay because it means that I am employed

  • The clothes that fit a little too snug because it means I have enough to eat

  • My shadow who watches me work because it means I am out in the sunshine

  • A lawn that has to be mowed, windows that have to be washed, and gutters that need fixing because it means I have a home

  • The spot I find at the far end of the parking lot because it means I am capable of walking

  • All the complaining I hear about our government because it means we have the freedom of speech

  • The lady behind me in church who sings off key because it means that I can hear

  • The huge pile of laundry and ironing because it means my loved ones are nearby

  • The alarm that goes off in the early morning because it means that I'm alive"

Replies from: TheOtherDave, Xachariah, DanArmak, Nisan, BillyOblivion, ChristianKl
comment by TheOtherDave · 2013-06-01T15:48:42.366Z · LW(p) · GW(p)

I would still have enough to eat if my clothes fit, I would still have a home if my lawn were self-mowing, I would still be able to hear if she sang more tunefully, I would still be alive if I didn't set my alarm, etc. Taking advantage of these sorts of moments as opportunities to practice gratitude is a fine practice, but it's far better to practice gratitude for the thing I actually want (enough to eat, a home, hearing, life, etc.) than for the indicators of it I'd prefer to be rid of.

Replies from: James_Miller
comment by James_Miller · 2013-06-01T16:11:00.078Z · LW(p) · GW(p)

The goal is to turn something that would otherwise cause you distress into a tool of your own happiness. When something bad happens to you, seek a legitimate reason why it's a sign of something positive in your life.

Replies from: Jiro
comment by Jiro · 2013-06-01T17:04:33.633Z · LW(p) · GW(p)

The idea that we try to optimize happiness in the sense you imply is a simplification. Blissful ignorance provides happiness, but most people don't consider it a worthy goal. Yet this suggestion is basically "try to achieve blissful ignorance, rather than not liking bad things". It does not follow that because X is not possible without Y, and Y is good, therefore X is good. Trying to believe that X is good on these grounds is some variation of willful blindness and blissful ignorance.

Replies from: James_Miller
comment by James_Miller · 2013-06-01T17:25:49.581Z · LW(p) · GW(p)

Happiness is a state of mind, not a condition of the territory.

Blissful ignorance provides happiness

True by tautology.

It does not follow that because X is not possible without Y, and Y is good, therefore X is good. Trying to believe that X is good on these grounds is some variation of willful blindness and blissful ignorance.

I completely agree. But the following is correct:

X is not possible without Y, and Y makes me happy; therefore, when I encounter X, I, as a rational person who seeks useful emotions and wishes to raise my level of happiness, would benefit from being able to use the relationship between X and Y to raise my happiness, even if my brain would lower my happiness if it encountered X without considering the relationship between X and Y.

Replies from: Jiro
comment by Jiro · 2013-06-02T02:41:56.833Z · LW(p) · GW(p)

No rational person (at least no rational person without extremely atypical priorities) "wishes to raise his level of happiness". Few people think that an ideal state for them to be in would be to be drugged into perfect happiness. This suggestion is basically drugging yourself into happiness without the drugs, but keeping the salient aspect of drugs--namely, that the happiness has no connection with there being a desirable situation in the outside world.

Replies from: William_Quixote, NancyLebovitz, James_Miller, roystgnr
comment by William_Quixote · 2013-06-02T23:00:48.223Z · LW(p) · GW(p)

You may be thinking your priorities are more typical than they are. A straightforward utilitarian might think it's a reasonable view / goal. There are lots of people out there.

As a more general point, rationality doesn't speak to end goals; it speaks to achieving those goals. See the orthogonality thesis.

comment by NancyLebovitz · 2013-06-09T14:13:38.855Z · LW(p) · GW(p)

People who are depressed can quite reasonably want to raise their level of happiness-- their baseline is below what makes sense for their situation.

There's a difference between wanting to raise one's level of happiness and wanting to raise it as high as possible.

comment by James_Miller · 2013-06-02T02:52:45.142Z · LW(p) · GW(p)

I didn't mean to imply that a rational person should be willing to pay any possible price to raise his happiness.

comment by roystgnr · 2013-06-04T18:21:39.829Z · LW(p) · GW(p)

Drugs reduce the amount of concern you have for the real world. Taking greater notice of necessary relationships between observations increases the amount of concern you have for the real world.

comment by Xachariah · 2013-06-09T01:47:42.034Z · LW(p) · GW(p)

I'm fairly certain that's not how you're supposed to develop a habit of gratitude. It's not about doublethinking yourself into believing you like things that you dislike; it's to help you notice more things you like.

I've been doing a gratitude journal. I write three short notes from the last day where I was thankful for something a person did (e.g., saving me a brownie or something). Then I take the one that makes me happiest and write a one-paragraph description of what occurred and how I felt, such that writing the paragraph makes me relive the moment. Then I write out a note (that is usually later transcribed) to a past person in my gratitude journal.

When I think of that person or think back to that day, I'm immediately able to recall any nice things they did that I wrote down. Also, as I go through my life, I'm constantly looking for things to be thankful for, and notice and remember them more easily.

If you do something like in the quote, it seems more likely that you'll remember negative things (that you pretend to be positive). It goes against the point of the exercise.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-06-09T14:14:47.553Z · LW(p) · GW(p)

Here's another way to do gratitude wrong: thinking about the good things turns into "this is what can be lost".

comment by DanArmak · 2013-06-01T18:10:02.455Z · LW(p) · GW(p)

The alarm that goes off in the early morning because it means that I'm alive

That just doesn't sound appropriate. It's as if you're saying, the alarm means I have to live through another day which I'll hate, but it's still better than not living at all, and that's the best thing I can find to be happy about every morning!

You might as well say: I'm glad I'm sick, because that means I'm not dead yet.

Replies from: CCC
comment by CCC · 2013-06-02T15:36:56.816Z · LW(p) · GW(p)

If you hate every day, then you need to make some changes to your life. Finding a job that you enjoy might be a good first step.

Replies from: Raemon
comment by Raemon · 2013-06-02T15:42:27.590Z · LW(p) · GW(p)

The point of the entire post was to be thankful for things that you normally think of as annoying.

comment by Nisan · 2013-06-02T16:16:58.993Z · LW(p) · GW(p)

Here is another quote by Borges of stuff to be thankful for. English.

comment by BillyOblivion · 2013-06-07T05:11:26.810Z · LW(p) · GW(p)

Pain is good, it tells you you're still alive.

All in all though, I'd rather have the alive w/out the pain. At least as far as I know.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-06-09T05:21:09.286Z · LW(p) · GW(p)

All in all though, I'd rather have the alive w/out the pain.

That depends on precisely what is meant by living without pain.

Replies from: BillyOblivion
comment by BillyOblivion · 2013-06-10T15:26:23.249Z · LW(p) · GW(p)

Head is an achin' and knees are abraded
Plates in my neck and stitches updated
Toes are a cracking and Tendons inflamed
These are a few of my favorite pains

But yes, the author of those books is mostly correct: there are some kinds of pain that serve a useful warning function. Those are good and we should be grateful.

Others are artifacts of historical stupidity. I've learned those lessons and reminding me of them is useless.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-06-12T05:45:01.188Z · LW(p) · GW(p)

Others are artifacts of historical stupidity. I've learned those lessons and reminding me of them is useless.

Then why do you keep ignoring them?

comment by ChristianKl · 2013-06-06T22:11:18.401Z · LW(p) · GW(p)

The lady behind me in church who sings off key because it means that I can hear

It also means that you are in church.

A lawn that has to be mowed, windows that have to be washed, and gutters that need fixing because it means I have a home

A lawn is not required to have a home, and mowing one certainly isn't. Windows don't need constant cleaning.

The alarm that goes off in the early morning because it means that I'm alive"

It's possible to wake up without an alarm.

comment by James_Miller · 2013-06-01T14:58:07.918Z · LW(p) · GW(p)

We prefer wrong information to no information.

The Art of Thinking Clearly by Rolf Dobelli, p. 33.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-01T17:51:45.249Z · LW(p) · GW(p)

I prefer to quantify my lack of information and call it a prior. Then it's even better than wrong information!

Replies from: Kawoomba
comment by Kawoomba · 2013-06-01T18:37:43.798Z · LW(p) · GW(p)

The numerical value of the prior itself doesn't tell how much information -- or lack thereof -- is incorporated into the prior.

What's a simple way to state how certain you are about a prior, i.e. how stable it is against large updates based on new information? Error bars or something related don't necessarily do the job -- you might be very sure that the true Pr (EDIT: that was poorly phrased, probability is in the mind etc., what was meant is the eventual Pr you end up with once you've hypothetically parsed all possible information, the limit) is between 0.3 and 0.5, i.e. new information will rarely result in a posterior outside that range, even if the size of the range (wrongly) suggests that the prior is based on little information. Is there something more intuitive than Pr(0.3<Pr(A)<0.5) = high?

Replies from: Manfred, RolfAndreassen, CCC
comment by Manfred · 2013-06-02T00:09:18.919Z · LW(p) · GW(p)

Part 1:

The idea of having a "true probability" can be extremely misleading. If I flip a coin but don't look at it, I may call it a 50% probability of tails, but reality is sitting right there in my hand with probability 100%. The probability is not in the external world - the coin is already heads or tails. The probability is just 50% because I haven't looked at the coin yet.

What sometimes confuses people is that there can be things in the world that we often think of as probabilities, and those can have a true value. For example, if I have an urn with 30 black balls and 70 white balls, and I pull a ball from the urn, I'll get a black ball about 30 times out of 100. This isn't "because the true probability is 30%" - that's an explanation that just points to a new fundamental property to explain. It's because the urn is 30% black balls, and I hadn't looked at where all the balls were yet.

Using probabilities is an admission of ignorance, of incomplete information. You don't assign the coin a probability because it's magically probabilistic, you use probabilities because you haven't looked at the coin yet. There's no "true probability" sitting out there in the world waiting for you to discover it, there's only a coin that's either heads or tails. And sometimes there are urns with different mixtures of balls, though of course if you can look inside the urn it's easy to pick the ball you want.

Part 2:

Okay, so there's no "externally objective, realio trulio probability" to compare our priors to, so how about asking how much our probability will move after we get the next bit of information?

Let's use some examples. Say I'm taking a poll. And I want to know what the probability is that people will vote for the Purple Party. So I ask 10 people. Now, 10 is a pretty small sample size, but say 3 out of 10 will vote for the purple party. So I estimate that the probability is a little more than 3/10. Now, the next additional person I ask will cause me to change my probability by about 10% of its current value. But after I poll 1000 people, asking the next person barely changes my probability estimate. Stability!

This actually works pretty well.

If you wanted to split up your hypothesis space about the poll results into mutually exclusive and exhaustive pieces (which is generally a good idea), you would have a million different hypotheses, because there are a million (well, 1,000,001) different possible numbers of Purple Party supporters. So for example there would be separate hypotheses for 300,000 Purple Party supporters vs. 300,001. Giving each of these hypotheses their own probability is sufficient to talk about the kind of stability you want. If the probabilities are concentrated on a few possible numbers, then your poll is really stable.

And a good thing that it works out, because the probabilities of those million hypotheses are all of the information you have about this poll!

Note that this happens without any mention of "true probability." We chose those million hypotheses because there are realio trulio a million different possible answers. A narrow distribution over these hypotheses represents certainty not about some true probability, but about the number of actual people out in the actual world, wearing actual purple.

So thank goodness a probability distribution over the external possibilities is all ya' need, because it's all ya' got in this case.
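
A minimal numeric sketch of the polling point above (my own illustration, not Manfred's): under a Beta-Binomial model with a uniform prior over the unknown fraction of Purple Party voters, observing k "yes" answers out of n polled gives a Beta(k+1, n-k+1) posterior with mean (k+1)/(n+2). The code below just shows how much a single extra answer moves that mean at n=10 versus n=1000 -- the "stability" in question, with no appeal to a true probability.

```python
# A sketch of the polling example under a Beta-Binomial model (my assumption;
# the comment itself only talks about hypotheses over voter counts).
# Uniform prior + k "yes" out of n polled  =>  posterior Beta(k+1, n-k+1).

def posterior_mean(k, n):
    """Posterior mean of the Purple Party fraction: (k + 1) / (n + 2)."""
    return (k + 1) / (n + 2)

for k, n in [(3, 10), (300, 1000)]:
    before = posterior_mean(k, n)
    after = posterior_mean(k + 1, n + 1)   # the next person polled says "yes"
    print(f"n={n:4d}: {before:.4f} -> {after:.4f}  (shift {after - before:+.4f})")

# n=  10: 0.3333 -> 0.3846  (shift +0.0513)   one answer still moves the estimate a lot
# n=1000: 0.3004 -> 0.3011  (shift +0.0007)   the distribution has become stable
```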

Replies from: Kawoomba
comment by Kawoomba · 2013-06-02T06:50:55.116Z · LW(p) · GW(p)

Thanks, the "true probability" phrasing was misleading, I should've reread my comment before submitting. Probability is in the mind etc., what I referred to was "the probability you'd eventually end up with, having incorporated all relevant information, the limit", which is still in your mind, but as close to "true" as you'll get.

So you can of course say Pr(Box is empty | I saw it's empty) = x and Pr(Box is empty | I saw it's empty and I got to examine its inner surfaces with my hand) = y, then list all similar hypotheses about the box being empty conditioned on various experiments, and compare x, y etc. to get a notion of the stability of your prior.

However, such a listing is quite tedious, and countably infinite as well, even if it's the only full representation of your box-is-empty belief conditioned on all possible information.

The point was that "my prior about the box being empty is low / high / whatever" doesn't give any information about whether you've just guesstimated it -- or -- whether you're very sure about your value and will likely discount (for the most part) any new information showing the contrary as being a fluke, or a trick. A magician seemingly countering gravity with a levitation trick only marginally lowers your prior on how gravity works.

Now when you actually talk to someone, you'll often convey priors about many things, but less often how stable you deem those priors to be. "This die is probably loaded" ... the 'probably' refers to your prior, but it does not refer to how fast that prior could change. Maybe it's a die that a friend who collects loaded dice is presenting to you, so if you check it you'll be quickly convinced if it's not loaded. Maybe it's your trusted loaded die from childhood which you've used thousands of times, and if it doesn't appear to be loaded on the next few throws, you'll still consider it to be loaded.

Yet in both cases you'd say "the die is probably loaded". How do you usefully convey the extra information about the stability of your prior? "The die is probably loaded, and my belief in that isn't likely to change", so to speak? Not a theoretical definition of stability - only listing all your beliefs can represent that - but, as in the grandparent, a simple and intuitive way of conveying that important extra information about stability, and a plea to start conveying that information.
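
One simple way to convey that extra stability information is to report something like the pseudo-counts of evidence behind the prior. A minimal sketch follows, with my own illustrative numbers, and with a heads-biased coin swapped in for the loaded die so the whole belief fits in a single Beta distribution; nothing here comes from the thread itself. Both agents report the same probability of 0.9, but the same five contrary observations move them very differently.

```python
# A sketch of "same prior, different stability", using Beta pseudo-counts.
# The coin stands in for the loaded die (my simplification), and the specific
# numbers are illustrative only.

def beta_mean(a, b):
    """Mean of a Beta(a, b) belief about the heads rate."""
    return a / (a + b)

priors = {
    "friend's novelty coin (guess)": (9, 1),      # mean 0.9, weak grounding
    "trusted coin from childhood":   (900, 100),  # mean 0.9, strong grounding
}
new_tails = 5  # new evidence: five tails in a row

for label, (a, b) in priors.items():
    post_mean = beta_mean(a, b + new_tails)  # tails add to the second pseudo-count
    print(f"{label}: prior mean {beta_mean(a, b):.3f} -> posterior mean {post_mean:.3f}")

# friend's novelty coin (guess): prior mean 0.900 -> posterior mean 0.600
# trusted coin from childhood: prior mean 0.900 -> posterior mean 0.896
```

So "probably loaded, on about ten throws' worth of evidence" versus "probably loaded, on about a thousand throws' worth" carries exactly the stability information that the bare 0.9 leaves out.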

Replies from: wedrifid, khafra
comment by wedrifid · 2013-06-02T08:03:47.406Z · LW(p) · GW(p)

Thanks, the "true probability" phrasing was misleading, I should've reread my comment before submitting. Probability is in the mind etc., what I referred to was "the probability you'd eventually end up with, having incorporated all relevant information, the limit", which is still in your mind, but as close to "true" as you'll get.

Relevant resource: Probability is subjectively objective.

comment by khafra · 2013-06-02T19:47:53.332Z · LW(p) · GW(p)

Now when you actually talk to someone, you'll often convey priors about many things, but less often how stable you deem those priors to be. "This die is probably loaded" ... the 'probably' refers to your prior, but it does not refer to how fast that prior could change. Maybe it's a die that a friend who collects loaded dice is presenting to you, so if you check it you'll be quickly convinced if it's not loaded. Maybe it's your trusted loaded die from childhood which you've used thousands of times, and if it doesn't appear to be loaded on the next few throws, you'll still consider it to be loaded.

I believe this is a model space problem. We're looking at a toy bayesian reasoner that can be easily modeled in a human mind, predicting how it will update its hypotheses about dice in response to evidence like the same number coming up too often. Our toy bayesian, of course, assigns probability 0 to encountering evidence like "my trusted expert friend says it's loaded," so that wouldn't change its probabilities at all. But that's not a flaw in bayesian reasoning; it's a flaw in the kind of bayesian reasoner that can be easily modeled in a human mind.

This doesn't demonstrate that human reasoning that works doesn't have a bayesian core. E.g., I don't know how I would update my probabilities about a die being loaded if, say, my left arm turned into a purple tentacle and started singing "La Bamba." But it does show that even an ideal reasoner can't always out-predict a computationally limited one, if the computationally limited one has access to a much better prior and/or a whole lot more evidence.

comment by RolfAndreassen · 2013-06-03T04:42:02.403Z · LW(p) · GW(p)

Error bars usually indicate a Gaussian distribution, not a flat one. If you said P=0.4 +- 0.03, that indicates that your probability of the final probability estimate ending up outside the 0.3-0.5 range is less than a percent. This seems to meet your requirements.

If that doesn't suffice, it seems that you need a full probability distribution, specifying the probability of every possible value of P.
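
For what it's worth, the "less than a percent" figure checks out under that reading. A quick calculation, assuming the ±0.03 denotes one standard deviation of a Gaussian centred on 0.4:

```python
# Tail probability outside [0.3, 0.5] for a Normal(0.4, 0.03) estimate.
# Uses only the standard library; the Gaussian reading of the error bars is
# the assumption stated in the comment above.
import math

mean, sigma = 0.4, 0.03
half_width = 0.1                       # the range 0.3-0.5 is symmetric about the mean
z = half_width / sigma                 # about 3.33 standard deviations
outside = math.erfc(z / math.sqrt(2))  # two-tailed probability of landing outside
print(f"P(outside 0.3-0.5) = {outside:.4%}")   # roughly 0.09%, well under a percent
```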

comment by CCC · 2013-06-02T15:44:34.304Z · LW(p) · GW(p)

Is there something more intuitive than Pr(0.3<Pr(A)<0.5) = high?

Describing probabilities in terms of a mean and an approximate standard deviation, perhaps? Low standard deviation would translate to high certainty.

comment by NancyLebovitz · 2013-06-28T01:05:54.049Z · LW(p) · GW(p)

What we need is some sort of system in which any proposed complication is viewed as more bothersome than earlier complications.

--Scott Adams, I Want My Cheese

comment by lukeprog · 2013-06-24T21:53:19.903Z · LW(p) · GW(p)

Reading through some AI literature, I stumbled upon a nicely concise statement of the core of decision theory, from Lindley (1985):

...there is essentially only one way to reach a decision sensibly. First, the uncertainties present in the situation must be quantified in terms of values called probabilities. Second, the various consequences of the courses of action must be similarly described in terms of utilities. Third, that decision must be taken which is expected — on the basis of the calculated probabilities — to give the greatest utility. The force of 'must', used in three places there, is simply that any deviation from the precepts is liable to lead the decision maker into procedures which are demonstrably absurd.

Of course, maximizing expected utility has its own absurd consequences (e.g. Pascal's Mugging), so decision theory is not yet "finished."
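For readers who want the three steps in the quote made concrete, here is a toy sketch; the actions, probabilities, and utilities are all invented for illustration, and nothing in the code comes from Lindley.

```python
# Lindley's recipe in miniature: probabilities for uncertainties, utilities for
# consequences, then pick the action with the greatest expected utility.
# All numbers below are made up for the example.

def expected_utility(lotteries):
    """Sum of probability * utility over an action's possible outcomes."""
    return sum(p * u for p, u in lotteries)

actions = {
    # (probability of outcome, utility of outcome) -- here: rain vs. no rain
    "carry umbrella": [(0.3, 5), (0.7, 8)],
    "leave umbrella": [(0.3, -10), (0.7, 10)],
}

for name, lotteries in actions.items():
    print(f"{name}: EU = {expected_utility(lotteries):.1f}")

best = max(actions, key=lambda a: expected_utility(actions[a]))
print("decision:", best)   # carry umbrella (EU 7.1 beats 4.0)
```

The Pascal's Mugging worry mentioned above is about what happens to this same sum when a tiny probability gets multiplied by an astronomically large utility.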

comment by NancyLebovitz · 2013-06-19T10:31:51.275Z · LW(p) · GW(p)

You could have all the human information in the world about all the human things in it, and a post-Singularity AI able to semantically interpret it all. And yet still not understand why events happen, what a leader should want to happen, or what is actually going to happen next. But the great powers of the modern world–states and civic institutions alike–must always pretend to be on the road to mastering all of that, and they must pretend that mastery will derive from information, analysis and science, not from choices and beliefs and values.

Timothy Burke

comment by [deleted] · 2013-06-03T00:40:59.615Z · LW(p) · GW(p)

The criminal misuse of time was pointing out the mistakes. Catching them--noticing them--that was essential. If you did not in your own mind distinguish between useful and erroneous information, then you were not learning at all, you were merely replacing ignorance with false belief, which was no improvement.

Orson Scott Card, Ender's Shadow

comment by Zubon · 2013-06-02T20:28:33.620Z · LW(p) · GW(p)

It was a very ordinary tragedy, she supposed, but no less a cause for regret because it was so common. Like a hint, a foretaste of grief, it was an original, even unique experience for everyone it affected, no matter how often it had happened in the past to others.

And how did you avoid it?

Against a Dark Background by Iain M. Banks. The context differs, but it reminded me of the folks working to eliminate death.

Replies from: Zubon
comment by Zubon · 2013-06-05T00:15:28.699Z · LW(p) · GW(p)

I should perhaps explain that perceived connection. I see it in two pieces.

One is a counterpart to Joy in the Merely Real. Just because something is commonplace does not mean it is not wonderful. Just because something is commonplace does not mean it is not horrible. The end of each conscious life is a distinct tragedy, even if it happens 100 times per minute. Every one counts.

The other is a case against rationalization. Looking for a greater meaning or epic poetry in death ignores the basic problem that it is bad. A million deaths is a million tragedies, not a statistic. Shut up and multiply. We all come from cultures that spent millennia developing rationalizations for the inevitability of death. If a solution is possible, and possible within our lifetimes, the proper response is to find it rather than growing effusive about "a great and tragic beauty."

(And, of course, how do you avoid it?)

comment by arundelo · 2013-06-16T22:26:12.934Z · LW(p) · GW(p)

In the exact moment after I'd realized that what Blaine said was true, that I'd cribbed a laugh from someone else's creativity and inspiration, my ego kicked in. And, I mean, my real ego. Not ego's sociopathic cousin, hubris, which would have made me defensive, aggressive and ultimately rationalize the theft. No, the good kind of ego, the kind that wanted success and fame and praise on my own merits, no matter how long it took.

-- Patton Oswalt

comment by bouilhet · 2013-06-03T16:59:04.534Z · LW(p) · GW(p)

Geulincx, from his own annotations to his Ethics (1665):

...our actions are as it were a mirror of Reason and God's law. If they reflect Reason, and contain in themselves what Reason dictates, then they are virtuous and praiseworthy; but if they distort Reason's reflection in themselves, then they are vicious and blameworthy. This has no effect on Reason, or God's law, which are no more beautiful or more ugly for it. Likewise, a thing represented in a mirror remains the same whether the mirror is true and faithfully represents it, or whether it is false and twists and distorts the likeness of the thing. The mirror does not distort the likeness of the thing reflected in the thing itself, but in itself - that is, in the mirror itself. Hence, corruption and ugliness belong with the mirror itself, not with the thing reflected. Similarly, we are also said to break God's law, to trample on it, to pervert it, and so on, but this takes place in ourselves, not in the law itself, so that the whole of the ugliness remains in ourselves, and nothing of it belongs with the law itself.

comment by Pablo (Pablo_Stafforini) · 2013-07-01T22:45:58.724Z · LW(p) · GW(p)

For a few years, I attended a meeting called Animal Behavior Lunch where we discussed new animal behavior articles. All of the meetings consisted of graduate students talking at great length about the flaws of that week’s paper. The professors in attendance knew better but somehow we did not manage to teach this. The students seemed to have a strong bias to criticize. Perhaps they had been told that “critical thinking” is good. They may have never been told that appreciation should come first. I suspect failure to teach graduate students to see clearly the virtues of flawed research is the beginning of the problem I discuss here: Mature researchers who don’t do this or that because they have been told not to do it (it has obvious flaws) and as a result do nothing.

Seth Roberts, ‘Something is better than nothing’, Nutrition, vol. 23, no. 11 (November, 2007), p. 912

comment by NancyLebovitz · 2013-06-24T14:33:15.987Z · LW(p) · GW(p)

"Collectability" is something of a self-refuting prophecy.

---fuzzyfuzzyfungus

Collectability might only be self-refuting for mass-produced items.

A more detailed history of Beanie Babies

Cynicism about price guides

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-06-25T02:18:08.305Z · LW(p) · GW(p)

"Collectability" is something of a self-refuting prophecy.

It also has an aspect of self-fulfilling prophecy.

Which one applies depends on how easy it is to make new instances of the collectable in question.

comment by B_For_Bandana · 2013-06-03T00:25:45.171Z · LW(p) · GW(p)

It was an ancient calculation: in hard times it paid to sacrifice the vulnerable young, and to keep alive mature individuals who might breed again in an upturn.

But the infant was almost old enough to feed herself. Just a little longer and she would have survived to independence. And this was Ultimate's baby: the first one she had had, and perhaps the only one she would ever be permitted to have. Ancient drives warred. It was a failure of adaptation, this battling of one instinct against another.

It was a primordial calculus, an ancient story told over and over again, in Purga's time, in Juna's, for uncounted grandmothers lost and unimaginable in the dark. But for Ultimate, here at the end of time, the dilemma hurt as much as if it had just been minted in the fires of hell.

  • Stephen Baxter, Evolution

Application to rationality? An exceptionally poetic reminder that tragedies endlessly repeated are still tragedies, each time.

comment by Pablo (Pablo_Stafforini) · 2013-06-02T02:45:09.162Z · LW(p) · GW(p)

One of the notable things about discussing the interpretation of quantum mechanics with physicists and with philosophers is that it is the physicists who propose philosophically radical ways of interpreting a theory, and the philosophers who propose changing the physics. One might reasonably doubt that the advocates of either strategy are always fully aware of its true difficulty.

David Wallace

comment by Multiheaded · 2013-06-15T19:46:57.239Z · LW(p) · GW(p)

...We now live — that is to say, one or two of the most advanced nations of the world now live — in a state in which the law of the strongest seems to be entirely abandoned as the regulating principle of the world's affairs: nobody professes it, and, as regards most of the relations between human beings, nobody is permitted to practise it. When anyone succeeds in doing so, it is under cover of some pretext which gives him the semblance of having some general social interest on his side. This being the ostensible state of things, people flatter themselves that the rule of mere force is ended; that the law of the strongest cannot be the reason of existence of anything which has remained in full operation down to the present time. However any of our present institutions may have begun, it can only, they think, have been preserved to this period of advanced civilisation by a well-grounded feeling of its adaptation to human nature, and conduciveness to the general good. They do not understand the great vitality and durability of institutions which place right on the side of might; how intensely they are clung to; how the good as well as the bad propensities and sentiments of those who have power in their hands, become identified with retaining it; how slowly these bad institutions give way, one at a time, the weakest first, beginning with those which are least interwoven with the daily habits of life; and how very rarely those who have obtained legal power because they first had physical, have ever lost their hold of it until the physical power had passed over to the other side.

...The truth is, that people of the present and the last two or three generations have lost all practical sense of the primitive condition of humanity; and only the few who have studied history accurately, or have much frequented the parts of the world occupied by the living representatives of ages long past, are able to form any mental picture of what society then was. People are not aware how entirely, in former ages, the law of superior strength was the rule of life; how publicly and openly it was avowed, I do not say cynically or shamelessly — for these words imply a feeling that there was something in it to be ashamed of, and no such notion could find a place in the faculties of any person in those ages, except a philosopher or a saint. History gives a cruel experience of human nature, in showing how exactly the regard due to the life, possessions, and entire earthly happiness of any class of persons, was measured by what they had the power of enforcing; how all who made any resistance to authorities that had arms in their hands, however dreadful might be the provocation, had not only the law of force but all other laws, and all the notions of social obligation against them; and in the eyes of those whom they resisted, were not only guilty of crime, but of the worst of all crimes, deserving the most cruel chastisement which human beings could inflict.

(cont. below)

Replies from: ArisKatsaris, cody-bryce, Multiheaded
comment by ArisKatsaris · 2013-06-18T11:43:32.822Z · LW(p) · GW(p)

Downvoted, unread. This is the place for quotes, not essays. (And if you object that there's no rule about the size of the quotes, I'll downvote you again)

Replies from: wedrifid, Estarlio, ialdabaoth
comment by wedrifid · 2013-06-18T17:12:29.137Z · LW(p) · GW(p)

(And if you object that there's no rule about the size of the quotes, I'll downvote you again)

This pre-emptive chastisement seems unnecessary. My egalitarian instinct objects to the social move it represents.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-18T17:48:11.319Z · LW(p) · GW(p)

I'm intrigued by this comment. Can you say more about what leads you to make it, given your usual expressed attitude towards appeals to egalitarian instincts?

Replies from: wedrifid
comment by wedrifid · 2013-06-18T20:38:00.098Z · LW(p) · GW(p)

I'm intrigued by this comment. Can you say more about what leads you to make it,

I made it because my instincts warned me that the more forthright declaration "That was unnecessarily dickish, silence fool!" would not be well received. The reason I had the desire to express that sentiment at all was the use of an unnecessary threat.

By way of illustration, consider if I had said to you (publicly and in an aggressive tone) "If I ever catch you beating your husband I'm going to report you to the police!". That would be a rather odd thing for me to say because I haven't seen evidence of you beating your husband. Me making the threat insinuates that you are likely to beat your husband and also places me in a position of dominance over you such that I can determine your well-being conditional on you complying with my desires. If you in fact were to engage in domestic violence then it would be appropriate for me to use social force against you but since you haven't (p > 0.95) and aren't likely to it would be bizarre if I started throwing such threats around.

given your usual expressed attitude towards appeals to egalitarian instincts?

I'm not sure what you mean. Explain and/or give an example of such an expression? My model of me rather strongly feels the egalitarian instinct and is a vocal albeit conditional supporter of it. Perhaps the instances of appeals to egalitarian instinct that you have in mind are those that I consider to be misleading or disingenuous appeals to the egalitarian instinct to achieve personal social goals? I can imagine myself opposing such instances particularly vehemently.

Replies from: TheOtherDave, army1987
comment by TheOtherDave · 2013-06-18T21:16:23.711Z · LW(p) · GW(p)

Perhaps the instances of appeals to egalitarian instinct that you have in mind are those that I consider to be misleading or disingenuous appeals to the egalitarian instinct to achieve personal social goals?

Yeah, that seems plausible.

it would be bizarre if I started throwing such threats around.

True.

comment by A1987dM (army1987) · 2013-06-22T15:33:41.332Z · LW(p) · GW(p)

My intuitions respond to “And if you object that there's no rule about the size of the quotes, I'll downvote you again” and “If I ever catch you beating your husband I'm going to report you to the police!” in radically different ways. Not entirely sure why.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-22T16:46:29.428Z · LW(p) · GW(p)

For my part, my social intuitions respond differently to those cases for two major reasons, as far as I can tell. First, I seem to have a higher expectation that someone will respond to an explanation of a downvote with a legalistic objection than that they will beat their spouses, so responding to the possibility of the former seems more justified. In fact, the former seems entirely plausible, while the second seems unlikely. Second, being accused of the latter seems much more serious than being accused of the former, and require correspondingly stronger justification.

All of which seems reasonable to me on reflection.

All of that said, my personal preference is typically for fewer explicit surface signals of challenge in discussion. For example, if I were concerned that someone might respond to the above that actually, averaged over all of humanity, spouse-beating is far more common than legalistic objections, I'd be far more likely to say something like "(This is of course a function of the community I'm in; in different communities my expectations would differ.)" than something like "And if you reply that spouse-beating is actually more common than legalistic objections, I will downvote you." Similarly, if I anticipated a challenge that nobody is accusing anyone of anything, I might add a qualifier like "(pre-emptively hypothetically)" to "accused," rather than explicitly suggest the challenge and then counter it.

But I acknowledge that this is a personal preference, not a community norm, and I acknowledge that sometimes making potential challenges explicit and responding to them has better long-term consequences than covert subversion of those challenges.

Replies from: army1987
comment by A1987dM (army1987) · 2013-06-23T08:51:28.973Z · LW(p) · GW(p)

I think that what's happened is that my brain took “(And if you object that there's no rule about the size of the quotes, I'll downvote you again)” to be equivalent to “(Yes, I know that there's no rule about the size of the quotes, but still)” with the mock threat added for stylistic/dry humour effect (possibly as a result of me having stuff like this in the past).

Also, someone considering the possibility that an objection will be made to their own comment is self-deprecating in a way that someone considering the possibility that a random person will abuse their spouse isn't.

comment by Estarlio · 2013-06-20T10:25:44.438Z · LW(p) · GW(p)

Downed for being unnecessarily violent and confrontational to someone who wasn't doing anything worthy of such a response....

comment by ialdabaoth · 2013-06-18T11:52:02.291Z · LW(p) · GW(p)

Upvoted because you ACTUALLY GAVE A REASON why you downvoted, providing the OP with useful feedback.

Replies from: wedrifid
comment by wedrifid · 2013-06-18T12:07:01.247Z · LW(p) · GW(p)
comment by cody-bryce · 2013-06-17T02:43:17.399Z · LW(p) · GW(p)

You may have missed the idea of a quote here.

comment by Multiheaded · 2013-06-15T19:47:04.300Z · LW(p) · GW(p)

...Some will object, that a comparison cannot fairly be made between the government of the male sex and the forms of unjust power which I have adduced in illustration of it, since these are arbitrary, and the effect of mere usurpation, while it on the contrary is natural. But was there ever any domination which did not appear natural to those who possessed it? There was a time when the division of mankind into two classes, a small one of masters and a numerous one of slaves, appeared, even to the most cultivated minds, to be natural, and the only natural, condition of the human race. No less an intellect, and one which contributed no less to the progress of human thought, than Aristotle, held this opinion without doubt or misgiving; and rested it on the same premises on which the same assertion in regard to the dominion of men over women is usually based, namely that there are different natures among mankind, free natures, and slave natures; that the Greeks were of a free nature, the barbarian races of Thracians and Asiatics of a slave nature. But why need I go back to Aristotle? Did not the slave-owners of the Southern United States maintain the same doctrine, with all the fanaticism with which men cling to the theories that justify their passions and legitimate their personal interests? Did they not call heaven and earth to witness that the dominion of the white man over the black is natural, that the black race is by nature incapable of freedom, and marked out for slavery? some [1] even going so far as to say that the freedom of manual labourers is an unnatural order of things anywhere. Again, the theorists of absolute monarchy have always affirmed it to be the only natural form of government; issuing from the patriarchal, which was the primitive and spontaneous form of society, framed on the model of the paternal, which is anterior to society itself, and, as they contend, the most natural authority of all. Nay, for that matter, the law of force itself, to those who could not plead any other has always seemed the most natural of all grounds for the exercise of authority. Conquering races hold it to be Nature's own dictate that the conquered should obey the conquerors, or as they euphoniously paraphrase it, that the feebler and more unwarlike races should submit to the braver and manlier. The smallest acquaintance with human life in the middle ages, shows how supremely natural the dominion of the feudal nobility overmen of low condition appeared to the nobility themselves, and how unnatural the conception seemed, of a person of the inferior class claiming equality with them, or exercising authority over them. It hardly seemed less so to the class held in subjection. The emancipated serfs and burgesses, even in their most vigorous struggles, never made any pretension to a share of authority; they only demanded more or less of limitation to the power of tyrannising over them. So true is it that unnatural generally means only uncustomary, and that everything which is usual appears natural. The subjection of women to men being a universal custom, any departure from it quite naturally appears unnatural. But how entirely, even in this case, the feeling is dependent on custom, appears by ample experience. Nothing so much astonishes the people of distant parts of the world, when they first learn anything about England, as to be told that it is under a queen; the thing seems to them so unnatural as to be almost incredible. 
To Englishmen this does not seem in the least degree unnatural, because they are used to it; but they do feel it unnatural that women should be soldiers or Members of Parliament. In the feudal ages, on the contrary, war and politics were not thought unnatural to women, because not unusual; it seemed natural that women of the privileged classes should be of manly character, inferior in nothing but bodily strength to their husbands and fathers.

...When we consider how vast is the number of men, in any great country, who are little higher than brutes, and that this never prevents them from being able, through the law of marriage, to obtain a victim, the breadth and depth of human misery caused in this shape alone by the abuse of the institution swells to something appalling. Yet these are only the extreme cases. They are the lowest abysses, but there is a sad succession of depth after depth before reaching them. In domestic as in political tyranny, the case of absolute monsters chiefly illustrates the institution by showing that there is scarcely any horror which may not occur under it if the despot pleases, and thus setting in a strong light what must be the terrible frequency of things only a little less atrocious. Absolute fiends are as rare as angels, perhaps rarer: ferocious savages, with occasional touches of humanity, are however very frequent: and in the wide interval which separates these from any worthy representatives of the human species, how many are the forms and gradations of animalism and selfishness, often under an outward varnish of civilisation and even cultivation, living at peace with the law, maintaining a creditable appearance to all who are not under their power, yet sufficient often to make the lives of all who are so, a torment and a burthen to them! It would be tiresome to repeat the commonplaces about the unfitness of men in general for power, which, after the political discussions of centuries, everyone knows by heart, were it not that hardly anyone thinks of applying these maxims to the case in which above all others they are applicable, that of power, not placed in the hands of a man here and there, but offered to every adult male, down to the basest and most ferocious. It is not because a man is not known to have broken any of the Ten Commandments, or because he maintains a respectable character in his dealings with those whom he cannot compel to have intercourse with him, or because he does not fly out into violent bursts of ill-temper against those who are not obliged to bear with him, that it is possible to surmise of what sort his conduct will be in the unrestraint of home. Even the commonest men reserve the violent, the sulky, the undisguisedly selfish side of their character for those who have no power to withstand it. The relation of superiors to dependents is the nursery of these vices of character, which, wherever else they exist, are an overflowing from that source. A man who is morose or violent to his equals, is sure to be one who has lived among inferiors, whom he could frighten or worry into submission. If the family in its best forms is, as it is often said to be, a school of sympathy, tenderness, and loving forgetfulness of self, it is still oftener, as respects its chief, a school of wilfulness, overbearingness, unbounded selfish indulgence, and a double-dyed and idealised selfishness, of which sacrifice itself is only a particular form: the care for the wife and children being only care for them as parts of the man's own interests and belongings, and their individual happiness being immolated in every shape to his smallest preferences. What better is to be looked for under the existing form of the institution? We know that the bad propensities of human nature are only kept within bounds when they are allowed no scope for their indulgence. 
We know that from impulse and habit, when not from deliberate purpose, almost everyone to whom others yield, goes on encroaching upon them, until a point is reached at which they are compelled to resist. Such being the common tendency of human nature; the almost unlimited power which present social institutions give to the man over at least one human being — the one with whom he resides, and whom he has always present — this power seeks out and evokes the latent germs of selfishness in the remotest corners of his nature — fans its faintest sparks and smouldering embers — offers to him a licence for the indulgence of those points of his original character which in all other relations he would have found it necessary to repress and conceal, and the repression of which would in time have become a second nature.

--Selections from John Stuart Mill's The Subjection of Women (1869). It's probably best read in its entirety: an amazing work ahead of its time, but written in a wall-of-text style that's difficult to abridge for quotes.

(Optional exercise: apply Mill's points on the sociopolitical situation of women in the 19th century to the situation of children today.)

[1] And by "some" Mill likely means Carlyle.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-06-16T05:10:30.608Z · LW(p) · GW(p)

Optional exercise: apply Mill's points on the sociopolitical situation of women in the 19th century to the situation of children today.

Are you trying to provide a reductio ad absurdum of Mill's argument, or do you honestly favor treating 5-year-olds as legal adults?

Replies from: Izeinwinter, army1987
comment by Izeinwinter · 2013-06-19T10:46:38.401Z · LW(p) · GW(p)

Ehh... Today's children are often subject to much more limited familial authority than were 19th-century women. It is, for example, illegal to use physical force on them in a great many places.

comment by A1987dM (army1987) · 2013-06-22T18:11:25.835Z · LW(p) · GW(p)

How come six people downvoted this? While I can think of a few relevant differences between women then and children today, it's not obvious to me that those differences are so obvious to everybody as to justify unexplained downvotes.

Replies from: ialdabaoth
comment by ialdabaoth · 2013-06-22T18:43:33.083Z · LW(p) · GW(p)

My guess is extended pattern-matching on Eugine Nier's typical post content, along with a huge helping of annoyance with the excluded middle.

"This is a reductio ad absurdum of Mill's argument"

"you honestly favor treating 5-year olds as legal adults".

Are these honestly the only two possible readings of the original post? If not, is it more likely - based on past history of all parties - to assume that Eugine Nier honestly could not conceive of a third option, or merely that a rhetorical tactic was being employed to make their opponent look bad?

Based on what is most likely occurring (evaluated, of course, differently by each person reading), is this post a flower or a weed?

Then you tend the garden.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-06-25T02:50:35.698Z · LW(p) · GW(p)

Are these honestly the only two possible readings of the original post? If not, is it more likely - based on past history of all parties - to assume that Eugine Nier honestly could not conceive of a third option, or merely that a rhetorical tactic was being employed to make their opponent look bad?

Well, based on Multiheaded's previous posts I wouldn't be too surprised if he favored treating 5-year-olds as legal adults. It's possible he wants us to notice the differences between women and children and see why Mill's argument doesn't apply in the latter case, but I find this unlikely given Multiheaded's commenting history. In any case, this is the logical conclusion of his argument as stated; I was merely pointing this out. Of course, if you regard pointing out the implications of someone else's argument as a dishonest rhetorical tactic, I see why you'd object.

Replies from: ialdabaoth
comment by ialdabaoth · 2013-06-25T02:52:43.677Z · LW(p) · GW(p)

Oh, hello! I wondered why my karma was starting to go down again. Welcome back!

Replies from: wedrifid
comment by wedrifid · 2013-06-25T03:54:51.373Z · LW(p) · GW(p)

Oh, hello! I wondered why my karma was starting to go down again. Welcome back!

I am downvoting this and all future complaining about Eugine that is not provoked by immediate context. Too many (i.e. about a third) of your comments (and even posts) are attempts to shame people who chose to downvote you. In addition, instances like this one that are snide and sarcastic are particularly distasteful.

I incidentally suggest that giving Eugine just cause (as well as additional emotional incentive) to downvote you is unlikely to be the optimal strategy for reducing the number of downvotes you receive.

On a more empathetic note I know that the task of maintaining the moral high ground and mobilizing the tribe to take action to change the behaviour of a rival is a delicate and difficult one and often a cause for frustration and even disillusionment. A possibility you may consider is that you could accept the minor status hit for excessive complaining but take great care to make sure that each individual complaint is as graceful and inoffensive to third parties (such as myself) as possible. If you resist the urge to insert those oh so tempting additional barbs then you will likely find that you have far more leeway in terms of how much complaining people will accept from you and are more likely to receive the support of said third parties' egalitarian instincts.

Note: The preceding paragraph is purely instrumental advice and should not be interpreted as normative endorsement (or dis-endorsement) of that particular "Grey-Arts" strategy. (But I would at least give unqualified normative endorsement of replacing "complaining + bitchiness" with "complaining + tact" in most cases.)

Replies from: ialdabaoth
comment by ialdabaoth · 2013-06-25T12:20:55.065Z · LW(p) · GW(p)

*nod* Unfortunately, I am terrible at these sorts of plays. Thank you for your criticism, and I'll attempt to behave more gracefully in the future.

EDIT: I'm going to go ahead and trigger your downvotes, now, because reviewing the situation, I feel like I need to speak in my own defense.

I consistently lose forty to fifty karma over the course of a few minutes, once every few days. Posts with no conceivable reason for a downvote get downvoted anyway. And I do not, as you put it, "shame people who chose to downvote me". I mostly ask for an explanation of why I got downvoted, so that I can improve. The ONLY time I have explicitly tried to shame someone who downvoted me was Eugine, and only after spending a very long time examining the situation and coming to the conclusion (p > 0.95) that Eugine was downvoting EVERYTHING I say, just because.

If you feel that that deserves further retributive downvoting, you are free to perform it to your heart's content; I am powerless to stop you.

Replies from: army1987
comment by A1987dM (army1987) · 2013-07-13T18:43:45.849Z · LW(p) · GW(p)

p > 0.95

That sounds overconfident.

comment by hampuslj · 2013-06-10T12:13:56.007Z · LW(p) · GW(p)

Does history record any case in which the majority was right?

— Robert A. Heinlein, Time Enough for Love

Replies from: Larks, wedrifid, simplicio, army1987
comment by Larks · 2013-06-11T13:35:17.561Z · LW(p) · GW(p)

Not explicitly, precisely because it is the norm. But it records a great many times when minorities have been wrong.

comment by wedrifid · 2013-06-10T20:21:34.774Z · LW(p) · GW(p)

Does history record any case in which the majority was right?

Yes.

comment by simplicio · 2013-06-10T17:42:08.596Z · LW(p) · GW(p)

Yup.

comment by A1987dM (army1987) · 2013-06-11T11:41:49.258Z · LW(p) · GW(p)

The colour of the sky? What direction a rock goes if you drop it from near the ground?

comment by Kawoomba · 2013-06-01T19:38:52.305Z · LW(p) · GW(p)

If you meet the Buddha on the road, kill him!

Lin Chi

Replies from: TimS, ArisKatsaris, hedges, Benito, Kyre
comment by TimS · 2013-06-01T20:22:11.672Z · LW(p) · GW(p)

I'm sure I could interpret a rationalist message from that quote, in the same way that I could derive a reasonable moral system based solely on the Book of Revelation. But that doesn't imply that my reading is intended by the author, or a plausible reading of the text.

Replies from: Eliezer_Yudkowsky, ChristianKl
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-02T19:17:30.861Z · LW(p) · GW(p)

In this case it does seem plausible that a rationalist message was intended.

comment by ChristianKl · 2013-06-06T22:08:32.083Z · LW(p) · GW(p)

Maybe the real issue is that it takes background knowledge to know what the quote means within Buddhism? Without that background knowledge the sentence doesn't convey much meaning.

comment by ArisKatsaris · 2013-06-01T23:18:59.631Z · LW(p) · GW(p)

Lin Chi is a jerk.

The Buddha

Replies from: TimS, ChristianKl
comment by TimS · 2013-06-02T02:10:41.622Z · LW(p) · GW(p)

Nietzsche is dead

God
:)

Replies from: Desrtopa, jklsemicolon
comment by Desrtopa · 2013-06-02T02:37:11.031Z · LW(p) · GW(p)

I've heard this quoted a lot, but I can't find the original source.

Replies from: TimS
comment by TimS · 2013-06-02T02:52:28.848Z · LW(p) · GW(p)

I'm surprised to find anything on the source of a joke, but this thread suggests it originated some time in the 1960s.

Replies from: wedrifid
comment by wedrifid · 2013-06-03T07:38:56.042Z · LW(p) · GW(p)

I'm surprised to find anything on the source of a joke, but this thread suggests it originated some time in the 1960s.

My impression was that Desrtopa was making an atheist jest.

comment by jklsemicolon · 2013-06-02T02:33:10.267Z · LW(p) · GW(p)

(In the Recent Comments sidebar, this looked like:

Nietzsche is dead God

which is rather different!)

Replies from: TimS
comment by TimS · 2013-06-02T02:36:09.310Z · LW(p) · GW(p)

I saw that, and it ruins the joke a bit. Sigh.

FWIW, I really like Nietzsche.

Replies from: wedrifid
comment by wedrifid · 2013-06-03T07:19:42.765Z · LW(p) · GW(p)

I saw that, and it ruins the joke a bit. Sigh.

Prepend 'God' with hyphens. "Nietzsche is dead. --God" works with or without the line break.

comment by ChristianKl · 2013-06-06T22:01:43.733Z · LW(p) · GW(p)

That wouldn't be the kind of thing Buddha used to say.

comment by hedges · 2013-06-01T20:41:51.364Z · LW(p) · GW(p)

If you find the truth, continue the search for it regardless.

Forget about arriving at the truth; rather, practice the methods that bring you closer to truths.

The intended meaning has something to do with the Buddhist concept that the practice of Buddhism (basically meditation) is the realization of Buddhahood, and that instead of accepting any Buddha you meet, you must simply continue your practice.

comment by Ben Pace (Benito) · 2013-06-02T19:26:44.999Z · LW(p) · GW(p)

By the way, Sam Harris wrote an essay starting with this quote, called 'Killing the Buddha'.

http://www.samharris.org/site/full_text/killing-the-buddha/

comment by Kyre · 2013-06-03T07:06:04.889Z · LW(p) · GW(p)

Didn't we do this last month ?

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-07-22T08:19:04.349Z · LW(p) · GW(p)

Apparently the Buddha has reincarnated, so we need to kill him again. It's like playing World of Warcraft.

comment by Kawoomba · 2013-06-12T20:29:01.328Z · LW(p) · GW(p)

Socrates: I know that I know nothing.

Chaerephon to the Oracle of Delphi (not the programming language): Is anyone wiser than Socrates?

Pythia: No human is wiser!1!

Alternatively:

Ygritte: You know nuthing, Jon Snuh.

comment by hedges · 2013-06-12T18:16:17.967Z · LW(p) · GW(p)

Yesterday we obeyed kings and bent our necks to emperors. Today we kneel only to truth.

— Kahlil Gibran

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-06-15T19:50:39.637Z · LW(p) · GW(p)

This seems empirically false.

Replies from: hedges
comment by hedges · 2013-06-15T20:01:27.862Z · LW(p) · GW(p)

It almost certainly is, but does that matter? It is a slogan for any time when the powers that be are diminished by the truth.

Replies from: Nominull, ArisKatsaris
comment by Nominull · 2013-06-16T08:47:57.641Z · LW(p) · GW(p)

Today we kneel only to hypocrisy.

comment by ArisKatsaris · 2013-06-18T11:46:14.145Z · LW(p) · GW(p)

Yes, it matters if we are deluding ourselves into thinking ourselves better than we are. False self-gratification prevents us from actually improving.