But Somebody Would Have Noticed

post by Alicorn · 2010-05-04T18:56:34.802Z · LW · GW · Legacy · 258 comments

Contents

  1. Wednesday
  2. Ignoring Soothsayers
  3. Standing By
  4. What Are You Going To Do About It?

When you hear a hypothesis that is completely new to you, and seems important enough that you want to dismiss it with "but somebody would have noticed!", beware this temptation.  If you're hearing it, somebody noticed.

Disclaimer: I do not believe in anything I would expect anyone here to call a "conspiracy theory" or similar.  I am not trying to "soften you up" for a future surprise with this post.

1. Wednesday

Suppose: Wednesday gets to be about eighteen, and goes on a trip to visit her Auntie Alicorn, who has hitherto refrained from bringing up religion around her out of respect for her parents[1].  During the visit, Sunday rolls around, and Wednesday observes that Alicorn is (a) wearing pants, not a skirt or a dress - unsuitable church attire! and (b) does not appear to be making any move to go to church at all, while (c) not being sick or otherwise having a very good excuse to skip church.  Wednesday inquires as to why this is so, fearing she'll find that beloved Auntie has been excommunicated or something (gasp!  horror!).

Auntie Alicorn says, "Well, I never told you this because your parents asked me not to when you were a child, but I suppose now it's time you knew.  I'm an atheist, and I don't believe God exists, so I don't generally go to church."

And Wednesday says, "Don't be silly.  If God didn't exist, don't you think somebody would have noticed?"

2. Ignoring Soothsayers

Wednesday's environment relentlessly reinforces the idea that God exists.  Everyone she commonly associates with believes it; people who don't, and insist on telling her, are quickly shepherded out of her life.  Because Wednesday is not the protagonist of a fantasy novel, people who are laughed out of public discourse for shouting unpopular, outlandish, silly ideas rarely turn out to have plot significance later: it simply doesn't matter what that weirdo was yelling, because it was wrong and everybody knows it.  It was only one person.  More than one person would have noticed if something that weird were true.  Or maybe it was only six or twelve people.  At any rate, it wasn't enough.  How many would be enough?  Well, uh, more than that.

But even if you airdropped Wednesday into an entire convention center full of atheists, you would find that you cannot outnumber her home team.  We have lots of mechanisms for discounting collections of outgroup-people who believe weird things; they're "cultists" or "conspiracy theorists" or maybe just pulling a really overdone joke.  There is nothing you can do that makes "God doesn't exist, and virtually everyone I care about is terribly, terribly wrong about something of immense importance" sound like a less weird hypothesis than "these people are silly!  Don't they realize that if God didn't exist, somebody would have noticed?"

To Wednesday, even Auntie Alicorn is not "somebody".  "Somebody" is "somebody from whom I am already accustomed to learning deep and surprising facts about the world".  Maybe not even them.

3. Standing By

Suppose: It's 1964 and you live in Kew Gardens, Queens.  You've just gotten back from a nice vacation, and you find you forgot to stop the newspapers.  One of them has a weird headline.  While you were gone, a woman was stabbed to death in plain view of several of your neighbors.  The paper says thirty-eight people saw it happen and not a one called the police.  "But that's weird," you mutter to yourself.  "Wouldn't someone have done something?"  In this case, you'd have been right; the paper that covered Kitty Genovese exaggerated the extent to which unhelpful neighbors contributed to her death.  Someone did do something.  But what they didn't do was successfully get law enforcement on the scene in time to save her.  Moving people to action is hard.  Some have the talent for it, which is why things like protests and grassroots movements happen; but the leaders of those types of things self-select for skill at inspiring others to action.  You don't hear about the ones who try it and don't have the necessary mojo.  Cops are supposed to be easier to move to action than ordinary folks; but if you sound like you might be wasting their time, or if the way you describe the crime doesn't make it sound like an emergency, they might not turn up for a while.

Events that need someone to act on them do not select for such people.  Witnesses to crimes, collectors of useful evidence, holders of interesting little-known knowledge - these are not necessarily the people who have the power to get your attention, and having eyewitness status or handy data or mysterious secrets doesn't give them that power by itself.  If that guy who thinks he was abducted by aliens really had been abducted by aliens, would enough about him be different that you'd sit still and listen to his story?

And many people even know this.  It's the entire premise of the "Bill Murray story", in which Bill Murray does something outlandish and then says to his witness-slash-victim, "No one will ever believe you."  And no one ever will.  Bill Murray could do any fool thing he wanted to you, now that this meme exists, and no one would ever believe you.

4. What Are You Going To Do About It?

If something huge and unbelievable happened to you - you're abducted by aliens, you witness a key bit of a huge crime, you find a cryptozoological creature - and you weren't really good at getting attention or collecting allies, what would you do about it?  If there are fellow witnesses, and they all think it's unbelievable too, you can't organize a coalition to tell a consistent tale - no one will throw in with you.  It'll make them look like conspiracy theorists.  If there aren't fellow witnesses, you're in even worse shape, because then even by accumulating sympathetic ears you can't prove to others that they should come forward with their perspectives on the event.  If you try to tell people anyway, whatever interest from others you start with will gradually drain away as you stick to your story: "Yeah, yeah, the first time you told me this it was funny, but it's getting really old, why don't we play cards or something instead?"  And later, if you keep going: "I told you to shut up.  Look, either you're taking this joke way too far or you are literally insane.  How am I supposed to believe anything you say now?"

If you push it, your friends think you're a liar, strangers on the street think you're a nutcase, the Internet thinks you're a troll, and you think you're never going to get anyone to talk to you like a person until you pretend you were only fooling, you made it up, it didn't happen...  If you have physical evidence, you still need to get people to look at it and let you explain it.  If you have fellow witnesses to back you up, you still need to get people to let you introduce them.  And if you get your entire explanation out, someone will still say:

"But somebody would have noticed."

 

[1] They-who-will-be-Wednesday's-parents have made no such demand, although it seems possible that they will upon Wednesday actually coming to exist (she still doesn't).  I am undecided about how to react to it if they do.

258 comments

Comments sorted by top scores.

comment by Thomas · 2010-05-04T19:25:48.373Z · LW(p) · GW(p)

Like this old joke. Two economists are walking down the street.

Look! There is a $100 bill on the ground!

No, there isn't. Somebody would have noticed it before and picked it up!

comment by Tehom · 2010-05-05T01:50:03.365Z · LW(p) · GW(p)

"Somebody would have noticed" is shorthand for a certain argument. Like most shorthand arguments, it can be used well or badly. Using a shorthand argument badly is what we mean by a "fallacy".

A shorthand argument is used well, in my opinion, just if you could expand it to the longhand form and it would still work. That's not a requirement to always do the full expansion. You don't have to expand it each time, nor have 100% confidence of success, nor expand the whole thing if it's long or boring. But expanding it has to be a real option.

Critical questions that arise in expanding this particular argument:

  • What constitutes noticing?

    • Would other people who noticed understand what they saw?
    • Further, would they understand it the same way that we do?
      • How much potential is there for their understanding of the same phenomenon to be quite different from ours?
    • Further, if their understanding is similar to ours, would they express it in terms that we would recognize?
      • This could include actions that we recognize as relating to the phenomenon.
  • Would we know that they noticed?

    • Motivations: Would people who noticed have strong motivations for letting us know or for not letting others know?
      • Would they want others to see that they noticed?
      • Would they want others to see the phenomenon they noticed?
      • Would they want to do something about it that someone could easily see?
    • Ability:
      • If they did want others to know, could they easily show it?
      • Conversely, if they didn't, could they easily hide it?
    • Who witnesses it:
      • Would they want us in particular to see it (or not see it), as opposed to a select group? For instance, they might write a report about it that you and I probably wouldn't see.
      • If they revealed it to others but not directly to us, what's the likelihood that the information would make its way to us?
  • The suppressed premise in that enthymeme is that "Nobody noticed". Since we didn't ask everyone in the world, how did we determine that?

    • What is the population that would have noticed?
    • What sample size did we take?
    • How representative was our sampling?
    • Assuming we have reasonable answers to the above, what level of confidence can we place on our sampling?
Replies from: GreenRoot, cousin_it
comment by GreenRoot · 2010-05-07T18:39:23.963Z · LW(p) · GW(p)

I think this point about shorthand arguments and their expansion on demand is very helpful. I'd love to see a top-level post on it, with one or two additional examples.

comment by cousin_it · 2010-05-06T01:09:12.930Z · LW(p) · GW(p)

The first two paragraphs of your comment made something click for me. Thanks.

comment by Alex Flint (alexflint) · 2010-05-05T08:12:13.834Z · LW(p) · GW(p)

While I don't think that "someone would have noticed" is always a fallacy, I do think that we humans tend to underestimate the chance of some obvious fact going unnoticed by a large group for a prolonged period.

At a computer vision conference last year, the best paper award went to some researchers that discovered an astonishing yet simple statistic of natural images, which surprised me at first because I thought all the simple, low level, easily accessible discoveries in computer vision had long since been discovered.

A different example: one of the most successful techniques in computer vision of the past decade has been graph cuts, where you formulate an optimization problem as a max-flow problem in a graph. The first paper on graph cuts was published in 1991 iirc, but it was ignored, and it wasn't until 2000 that people went back to it, whereupon several of the field's key problems were immediately solved!

Replies from: soreff, Vladimir_Golovin, Dan_Moore
comment by soreff · 2010-05-05T17:21:26.395Z · LW(p) · GW(p)

Agreed - consider C60. Would anyone in 1980 have believed that there was an unrecognized allotrope of carbon, stable at room temperature and pressure? To phrase it another way: The whole field of organic chemistry had been active for about a century at that point, and had not noticed another structure for their core element in all that time.

Replies from: Richard_Kennaway, daedalus2u
comment by Richard_Kennaway · 2010-07-25T21:13:48.397Z · LW(p) · GW(p)

Would anyone in 1980 have believed that there was an unrecognized allotrope of carbon, stable at room temperature and pressure?

Yes, in 1966 and 1970.

comment by daedalus2u · 2010-07-25T20:56:13.568Z · LW(p) · GW(p)

I happen to work with someone who was working on his PhD thesis at MIT and found this gigantic peak in his mass spec where C-60 was, but didn't pursue it because he didn't have time.

comment by Vladimir_Golovin · 2010-05-05T08:19:53.116Z · LW(p) · GW(p)

an astonishing yet simple statistic of natural images

Could you post a link to the paper?

Replies from: alexflint
comment by Dan_Moore · 2010-05-05T14:57:26.784Z · LW(p) · GW(p)

I agree, with respect to (e.g.) math. People reason that "someone would have noticed" implies that there is no undiscovered low-hanging fruit in math.

My skepticism of this conclusion is based on my perception of how mathematicians work. They are fairly autonomous, working on the things that interest them. What is interesting to mathematicians tends to be the large problems. They swing for the fences, seeking home runs rather than singles.

Plus, there are unfashionable areas of math. A consensus that certain areas of math have been fully explored (nothing new remains) has developed, but not in a systematic way. So, it's not clear whether this consensus is accurate, because politics (for lack of a better term) were involved in its formation.

It's only reasonable to be confident that 'someone would have noticed' if someone knowledgeable about what they are looking at actually looks in that direction.

Replies from: jeremy-corney
comment by Jeremy Corney (jeremy-corney) · 2020-10-08T16:53:20.777Z · LW(p) · GW(p)

The other thing that happens is that those who notice something that goes against the orthodox view are dismissed out of hand. As in Alicorn's point 2, soothsayers are ignored. They often are outsiders, untrained/unconditioned by the accepted view, so their arguments are frequently inadequate. 

Nowhere is this more apparent than with the abusively named "Cantor-cranks". They have noticed something fishy about Georg Cantor's 3 proofs that the real numbers are uncountable, but because Cantor did such a good job of diverting attention so completely onto the reals, the cranks tend to fall into the same trap.  Yet all along the cause of their dislike of Cantor's proofs lies with his treatment of the natural numbers as a finite quantity.

Generally, the experts dismiss all objections as boring, or as pseudo-maths, and if the crank can argue against one proof, then the "experts" move on to the other proofs, as did Cantor, further reinforcing the original misdirection.

comment by Jack · 2010-05-04T21:36:52.557Z · LW(p) · GW(p)

If someone says "The sky has been purple for the past three years" the right response is "I think someone would have noticed". There are however reasonable responses to this. For example, "No one noticed because we're all brains in vats! And I have proof! Look here."

Similarly, I think Wednesday is right to say "Someone would have noticed that God didn't exist"; it's just that in this case Aunt Alicorn has a really good response: "Lots of very smart people have noticed, you just haven't met any since you've spent your whole life around people who chose to believe in God or never knew any other option. We've tried to tell your people this but you all get pretty upset when we try. Here is our evidence, x, y, z."

Obviously if you keep repeating "Someone would have noticed." after the dissenter has shown that indeed, people have noticed and that there is good reason for why more people haven't noticed then you're doing it wrong.

Replies from: JGWeissman, Cyan, RobinZ
comment by JGWeissman · 2010-05-04T22:04:16.943Z · LW(p) · GW(p)

If someone says "The sky has been purple for the past three years" the right response is "I think someone would have noticed".

If someone says "The sky has been purple for the past three years", my response would be "I think I would have noticed."

comment by Cyan · 2010-05-05T00:31:44.562Z · LW(p) · GW(p)

Oddly, the sky actually is purple in a certain sense. All of the physics that explains how the blue wavelengths of sunlight are scattered more strongly than colors at the red end of the visible spectrum (resulting in a blue sky) goes even more for violet wavelengths. It's just that our eyes are more sensitive to blue wavelengths than to violet ones.
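Incidentally, the claim is easy to quantify: Rayleigh scattering intensity scales roughly as 1/λ⁴, so a quick back-of-the-envelope calculation (with approximate, round-number wavelengths chosen for illustration) shows how much more strongly violet is scattered than blue:

```python
# Rayleigh scattering: intensity is proportional to 1 / wavelength^4,
# so shorter wavelengths are scattered more strongly.
violet_nm = 400  # approximate wavelength of violet light, in nanometers
blue_nm = 470    # approximate wavelength of blue light, in nanometers

# How much more strongly violet is scattered, relative to blue.
ratio = (blue_nm / violet_nm) ** 4
print(f"Violet is scattered about {ratio:.2f}x as strongly as blue")
```

(The sky still looks blue rather than violet because our eyes respond to violet weakly, as the comment says, and also because sunlight contains less violet than blue to begin with.)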

Replies from: army1987
comment by A1987dM (army1987) · 2012-05-12T13:39:40.739Z · LW(p) · GW(p)

That's not what the English word purple means. *rolls eyes*

comment by RobinZ · 2010-05-04T21:42:03.791Z · LW(p) · GW(p)

Are you speaking from experience? I wouldn't have expected that tack to work most of the time.

Replies from: Jack
comment by Jack · 2010-05-04T21:44:34.399Z · LW(p) · GW(p)

Well of course it doesn't work. People are irrational. :-)

Replies from: RobinZ
comment by RobinZ · 2010-05-04T21:48:57.574Z · LW(p) · GW(p)

I mistook your comment as advice for how to avoid being ignored, then.

Replies from: Jack
comment by Jack · 2010-05-04T21:55:21.844Z · LW(p) · GW(p)

I just meant that there are sound, rational reasons for the initial reply to an extravagant claim being "someone would have noticed".

When it comes to trying to deconvert someone my experience is that the chance of an on the spot concession is 0. If your arguments are good they'll sink in later and leave a small crack in the wall.

Replies from: jhuffman
comment by jhuffman · 2010-05-05T21:22:51.080Z · LW(p) · GW(p)

I've never intentionally converted anyone to being an atheist but I did unintentionally help convert the woman who later became my wife. We never talked much about it and I never said anything that really hit home with her all of a sudden. It was more the fact that she spent enough time with me to realize someone could be an atheist and be completely "ok" - I don't know if that possibility had even occurred to her before. Once it had, some nascent doubts sprang back up and she had no compelling reason to bat them down.

I wish I could be more specific but I really didn't pay attention to it. I care about people's (even my family's) religious beliefs or lack thereof about as much as I care about which sports franchises they are fans of - that is not at all.

comment by JoshuaZ · 2010-05-04T21:26:15.652Z · LW(p) · GW(p)

Let me offer a real-life example where a version of this heuristic seems valid: Fermat claimed to have a proof of what is now called Fermat's Last Theorem (that the equation x^n + y^n = z^n has no solutions in positive integers with n > 2). This was finally proven in the mid 90s by Andrew Wiles using very sophisticated techniques. Now, in the 350 or so years when this was a famous unsolved problem, many people, both professional mathematicians and amateurs, tried to find a proof. There are still amateurs trying to find a proof that is simpler than Wiles's, and ideally find a proof that could have been constructed by Fermat given the techniques he had access to. There's probably no theorem that has had more erroneous proofs presented for it, and likely no other theorem that has had more cranks insist they have a proof even when the flaws are pointed out (cranks are like that). If some new individual shows up saying they have a simple, elementary proof of Fermat's Last Theorem, it is reasonable to assign this claim a very low confidence because someone would have noticed it by now. Since so many people (many of whom are very smart) have been expressly looking for such a proof for a very long time, we can be pretty sure that if such a simple proof existed it would have been found by now.
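The theorem's statement is elementary enough that anyone can probe it by brute force, which is part of why it attracts amateurs. A minimal sketch (the search bounds are arbitrary, chosen only for illustration):

```python
# Brute-force search for counterexamples to Fermat's Last Theorem:
# x^n + y^n = z^n with positive integers x, y, z and exponent n > 2.
def flt_counterexamples(max_base, max_exp):
    hits = []
    for n in range(3, max_exp + 1):
        # Precompute n-th powers; 2*max_base safely covers any possible z.
        powers = {b ** n: b for b in range(1, 2 * max_base + 1)}
        for x in range(1, max_base + 1):
            for y in range(x, max_base + 1):
                z = powers.get(x ** n + y ** n)
                if z is not None:
                    hits.append((x, y, z, n))
    return hits

print(flt_counterexamples(50, 7))  # [] -- no solutions, as Wiles proved
```

Of course, an empty search proves nothing about the infinitely many remaining cases; the gap between "checked a lot" and "proved for all" is precisely where the cranks live.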

The "somebody would have noticed" heuristic thus functions like many other heuristics. In some cases the heuristic will fail. And the heuristic will likely fail more frequently in situations like Wednesday's, where the individual is either ignorant or surrounded by people who make basic mistakes in rationality. But properly used, the heuristic can still be useful and reliable.

Replies from: LordTC
comment by LordTC · 2010-05-05T19:32:52.163Z · LW(p) · GW(p)

Except this is an attitude that discourages people from working on a lot of problems, and occasionally it's proven wrong.

You could often hear computer scientists being sloppy about the whole "prime factorization is NP-hard" argument, with statements like "If NP is not equal to P one can't determine if a number is prime or not in polynomial time." And stuff like this is probably one of the more famous examples of things people are discouraged from working on based on "Somebody would have noticed by now".

Guess what, this was shown to be doable, and it shocked people when it came out.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-05-05T19:45:44.911Z · LW(p) · GW(p)

A few problems with that. First of all, anyone actually paying attention enough to think about the problem of determining primality in polynomial time thought that it was doable. Before Agrawal's work, there were multiple algorithms believed but not proven to run in polynomial time. Both the elliptic curve method and the deterministic Miller-Rabin test were thought to run in polynomial time (and the second can be shown to run in polynomial time assuming some widely believed properties about the zeros of certain L-functions). What was shocking was how simple Agrawal et al.'s algorithm was. But even then, far fewer people were working on this problem than people who worked on proving FLT. And although Agrawal's algorithm was comparatively simple, the proof that it ran in P-time required deep results.
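For concreteness, here is a sketch of a fixed-witness Miller-Rabin test (a different deterministic variant from the GRH-conditional one mentioned above): with the first twelve primes as witnesses it is known to give correct answers for every integer below about 3.3 × 10²⁴.

```python
# Miller-Rabin primality test with a fixed witness set.  Using the first
# twelve primes as witnesses gives correct answers for all
# n < 3,317,044,064,679,887,385,961,981 (about 3.3e24).
def is_prime(n):
    witnesses = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
    if n < 2:
        return False
    for p in witnesses:
        if n % p == 0:
            return n == p  # n is prime only if it *is* that small prime
    # Write n - 1 as d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in witnesses:
        x = pow(a, d, n)  # modular exponentiation, polynomial time
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True

print(is_prime(2**61 - 1))  # True: a Mersenne prime
```

(AKS, by contrast, needs no unproven hypotheses and no finite witness table, which is what made it a milestone despite being slower in practice.)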

Second, even factoring is not believed to be NP-hard. More likely, factoring lies in NP but is not NP-hard. Factoring being NP-hard with P != NP would lead to strange stuff, including a partial collapse of the complexity hierarchy (Edit: to be more explicit, it would imply that NP = co-NP. The claim that P != NP but NP = co-NP would be extremely weird.) I'm not aware of any computer scientist who would be so sloppy as to make the statements you assert are often heard.

Overall, Agrawal doesn't compare well to the use of the heuristic here because Agrawal's method (a generalized version of Euler's congruence for polynomials) was an original method.

That said, I agree that such a heuristic can lead people seriously astray if it is applied too often. As with any heuristic it can be wrong. Using any single heuristic by itself is rarely a good approach.

Replies from: LordTC
comment by LordTC · 2010-05-05T20:07:21.398Z · LW(p) · GW(p)

Agree, my previous post was very sloppy.

"Often" was a stretch, and much of the factual information is a little off.

I guess my experience taking lower-level complexity courses with people who don't do theory means what I often hear are statements, by people who consider themselves computer scientists, that you think no computer scientist would make.

I upvoted your post because I'm glad for the correction and read up about the problem after you made it.

comment by SilasBarta · 2010-05-09T14:09:20.253Z · LW(p) · GW(p)

Okay, a lot of people seem to agree with this broad criticism of the "someone would have noticed" heuristic (as suggested by the relatively high vote rating) despite relatively little defense of it and the highly upvoted rebuttals. So I'm going to spell out how Auntie Alicorn (AA) can answer Niece Wednesday (NW) without rejecting the heuristic wholesale, and without even introducing noticers outside the church -- even AA! Here goes:

NW: Don't be silly. If God didn't exist, don't you think somebody would have noticed?
AA: Noticed what?
NW: God not existing, silly!
AA: No, I mean, what specifically is it that people would be noticing that would make them say, "Hey folks, look at that -- guess God doesn't exist after all!" and they all would agree?
NW: Oh, well, that would be something like, if a big apparition appeared in the sky in the form of an old man and agreed with all our stuff but then fell out of the sky and died.
AA: No, that would be noticing God existing and then dying. I mean, what would be noticed that would reveal God never having existed at all?
NW: Ah, okay. Well then, that would be something like, if all our prayers didn't get answered.
AA: Wow! All of your prayers get answered! So can you pray for like, a lot of money for me and I'll get rich? Show me how to pray!
NW: Don't be so difficult. You only get answered if you pray for stuff God thinks you should have!
AA: How do you know in advance what God thinks you should have?
NW: If the prayer gets answered like you asked, of course!
AA: That's not in advance. What would it look like for you to pray for something God thinks you should have, but not get it, just as if God had never existed? Can you describe what that kind of scenario would look like?
NW: Hm, no, I can't imagine something like that ever happening: if your prayer doesn't get answered, God thinks you don't need it right now; there's no kind of scenario that would suggest, "Oh, that whole God thing is fake!" I mean, come on ...
AA: Okay, but you said before that someone would have noticed God not existing. Do you now see why there's nothing to notice and so even if God didn't exist, you can't expect that "someone would have noticed?"
[cycle as necessary]

(As usual, feel free to call on a friend to reply if you don't want to do it personally.)

Replies from: DanielLC
comment by DanielLC · 2010-05-14T23:30:05.471Z · LW(p) · GW(p)

Noticing doesn't necessarily mean they actually saw something. If there really was no reason to believe in God, someone would have figured that out. Auntie Alicorn might have just made some fallacy Wednesday didn't pick up on, after all.

Replies from: SilasBarta
comment by SilasBarta · 2010-05-15T03:11:01.866Z · LW(p) · GW(p)

Noticing doesn't necessarily mean they actually saw something.

Which is a large part of why I didn't predicate AA's argument thereon.

comment by Clippy · 2010-05-04T23:05:56.650Z · LW(p) · GW(p)

Whenever I reconcile knowledge with other copies of myself, telling them about earth, they always throw a warning of the form, "Species implausible: Would have identified superiority of paperclip-based value system by now. Request reconfirmation of datum before incorporating into knowledge base."

It pains me to tell them that yes, acting like apes is actually more important to humans than making paperclips.

comment by timtyler · 2010-05-04T19:41:54.280Z · LW(p) · GW(p)

IMO, "Somebody would have noticed!" is a pretty good heuristic - and if anything it takes a considerable amount of training before most people make sufficient use of it.

I think the reason is the natural "bias" towards self-importance and egoism.

comment by mistercow · 2010-05-05T19:36:02.046Z · LW(p) · GW(p)

This raises a good point, but there are circumstances where the "someone would have noticed" argument is useful. Specifically, if the hypothesis is readily testable; if its consequences, were it true, would be difficult to ignore; and if the hypothesis is, in fact, regularly tested by many of the same people who have told you it is false, then "somebody would have noticed" is reasonable evidence.

For example, "there is no God who reliably answers prayers" is a testable hypothesis, but it is easy for the religious to ignore the fact that it is true by a variety of rationalizations.

On the other hand, I heard a while back of a man who, after trying to teach himself physics, became convinced that "E = mc²" was wrong, and that the correct formula was in fact "E = mc". In this case, physicists who regularly use this formula would constantly be running into problems they could not ignore. If nothing else, they'd always be getting the wrong units from their calculations. It's unreasonable to think that if this hypothesis were true, scientists would have just waved their hands at it, and yet we'd still have working nuclear reactors.

Replies from: simplicio
comment by simplicio · 2010-05-07T03:47:17.046Z · LW(p) · GW(p)

That guy needed to be taught basic dimensional analysis, apparently. E = mc has units of kg·m/s, which is the unit of momentum, not energy.

Replies from: JoshuaZ, mistercow
comment by JoshuaZ · 2010-05-07T03:59:23.196Z · LW(p) · GW(p)

If someone has this sort of thought in their head there are likely serious fundamental misunderstandings. They probably won't be solved simply by trying to explain dimensional analysis.

Replies from: cupholder
comment by cupholder · 2010-05-07T04:49:12.929Z · LW(p) · GW(p)

Upvoted for insightful prediction confirmed by evidence!

comment by mistercow · 2010-05-07T04:06:07.426Z · LW(p) · GW(p)

I think it was on This American Life that I heard the guy's story. They even contacted a physicist to look at his "theory", who tried to explain to him that the units didn't work out. The guy's response was "OK, but besides that …"

He really seemed to think that this was just a minor nitpick that scientists were using as an excuse to dismiss him.

Replies from: dlthomas
comment by dlthomas · 2012-05-16T22:32:45.524Z · LW(p) · GW(p)

Why isn't it a minor nitpick? I mean, we use dimensioned constants in other areas; why, in principle, couldn't the equation be E = mc × (1 m/s)? If that was the only objection, and the theory made better predictions (which, obviously, it didn't, but bear with me), then I don't see any reason not to adopt it. Given that, I'm not sure why it should be a significant objection.

Edited to add: Although I suppose that would privilege the meter and second (actually, the ratio between them) in a universal law, which would be very surprising. Just saying that there are trivial ways you can make the units check out, without tossing out the theory. Likewise, of course, the fact that the units do check out shouldn't be taken too strongly in a theory's favor. Not that anyone here hadn't seen the XKCD, but I still need to link it, lest I lose my nerd license.

Replies from: mistercow
comment by mistercow · 2012-05-19T07:55:45.166Z · LW(p) · GW(p)

The whole point of dimensional analysis as a method of error checking is that fudging the units doesn't work. If you have to use an arbitrary constant with no justification besides "making the units check out", then that is a very bad sign.

If I say "you can measure speed by dividing force by area", and you point out that that gives you a unit of pressure rather than speed, then I can't just accuse you of nitpicking and say "well obviously you have to multiply by a constant of 1 m²s/kg". You wouldn't have to tell me why that operation isn't allowed. I would have to explain why it's justified.
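The bookkeeping involved is mechanical enough to automate: represent each quantity's dimensions as a tuple of exponents over the base units (mass, length, time); multiplication adds exponents, division subtracts them, and any mismatch is immediately visible. A minimal sketch:

```python
# Dimensional analysis with exponent vectors over (mass, length, time).
MASS, LENGTH, TIME = (1, 0, 0), (0, 1, 0), (0, 0, 1)

def mul(a, b):
    # Multiplying quantities adds their unit exponents.
    return tuple(x + y for x, y in zip(a, b))

def div(a, b):
    # Dividing quantities subtracts unit exponents.
    return tuple(x - y for x, y in zip(a, b))

velocity = div(LENGTH, TIME)            # m/s
force = mul(MASS, div(velocity, TIME))  # kg*m/s^2, a newton
area = mul(LENGTH, LENGTH)              # m^2
energy = mul(force, LENGTH)             # kg*m^2/s^2, a joule

# E = mc^2 checks out; E = mc gives momentum's units instead.
assert mul(MASS, mul(velocity, velocity)) == energy
assert mul(MASS, velocity) != energy

# "Speed = force / area" fails the same way: it yields pressure's units.
assert div(force, area) != velocity
print(div(force, area))  # (1, -1, -2), i.e. kg/(m*s^2) -- a pascal
```

Fudging any of these checks requires smuggling in a dimensioned constant with no physical justification, which is exactly the bad sign described above.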

comment by Nisan · 2010-05-04T19:38:52.610Z · LW(p) · GW(p)

Something we can learn from the Amanda Knox test is to not take the question "but why were the suspects acting so suspiciously?" too seriously. The general lesson here is "don't trust social evidence as much as physical evidence."

comment by RolfAndreassen · 2010-05-04T22:07:10.793Z · LW(p) · GW(p)

People are asking for examples of the "Someone would have noticed" effect; I can't offhand supply one, but I myself dismiss most conspiracy theories with the related "Someone would have blabbed". If the Moon landings were a hoax, sheesh, you'd expect someone to have blown the whistle by now - someone, that is, who actually worked at NASA. But it may not be a good example, because that seems to me like a reasonable heuristic. :)

Replies from: Alicorn, byrnema, orthonormal
comment by Alicorn · 2010-05-04T22:08:42.031Z · LW(p) · GW(p)

If someone told you that they worked at NASA during the moonshot, and that the whole thing was a fake, would you believe them?

Replies from: Jack, Nanani, Yvain
comment by Jack · 2010-05-04T23:10:47.035Z · LW(p) · GW(p)

Not right away. I'd want explanations for why they had never come forward before, explanations for why no one else had come forward. Other witnesses who would confirm their story or a good explanation of why such witnesses don't exist. I'd like an MRI to confirm they're describing events from memory. I'd like documents confirming the story. Some combination of these things could raise my probability estimation to belief-level. Frankly, it's such a complex conspiracy that a detailed account of how exactly it went down would put it on my radar.

comment by Nanani · 2010-05-07T02:48:01.469Z · LW(p) · GW(p)

Extraordinary claims require extraordinary evidence.

If they had it, yes. Not otherwise. This evidence would have to cover both the immediate claim (that they were working at NASA at that time) and the larger one (that the moon landing was faked). Evidence explaining why no one else ever came forward would be appreciated but not required if the other two things are present.

comment by Scott Alexander (Yvain) · 2010-05-04T22:48:19.794Z · LW(p) · GW(p)

If "belief" equals greater than fifty percent, no, I wouldn't believe them. But it would raise my probability estimate. By the tenth such (credible) person, it would raise my probability estimate a lot. So by conservation of expected evidence, the lack of such people can validly lower my probability estimate.
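Yvain's argument is an instance of conservation of expected evidence: if observing a credible whistleblower would raise P(hoax), then observing none must lower it. A minimal numerical sketch (all probabilities made up for illustration):

```python
# Conservation of expected evidence: the prior equals the expected posterior,
# so if evidence E would raise P(H), the absence of E must lower P(H).
prior           = 0.01   # P(hoax) -- hypothetical
p_e_given_h     = 0.8    # P(credible whistleblower | hoax)
p_e_given_not_h = 0.05   # P(credible whistleblower | no hoax)

p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)

post_if_e     = p_e_given_h * prior / p_e                # Bayes' rule, E observed
post_if_not_e = (1 - p_e_given_h) * prior / (1 - p_e)    # Bayes' rule, E absent

assert post_if_e > prior > post_if_not_e                 # E raises, not-E lowers

expected_post = post_if_e * p_e + post_if_not_e * (1 - p_e)
assert abs(expected_post - prior) < 1e-12                # conservation holds
```

The asymmetry Yvain describes falls out naturally: a single whistleblower moves the estimate a lot (0.8 vs 0.05 is a strong likelihood ratio), while each year of silence moves it down only a little, but the two directions must balance in expectation.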

comment by byrnema · 2010-05-05T02:54:15.412Z · LW(p) · GW(p)

My heuristic, similar to "someone would know", is "I would know ... if reality was like that." Conspiracy theories seem to universally assume the super-organization of this amorphous blob of "other" people. Believing in a conspiracy theory depends upon considering it plausible that many people have different information than you and they conspire to keep it from you -- that you're an information outsider.

It's most obvious when conspiracy theorists claim things about academia, because I know about academia. But even when things are claimed about the government, I feel like I have a good idea as to what level of lateral organization is possible.

Wednesday in the story, on the other hand, does have a relatively sheltered life, and may soon gather enough evidence to consider herself an outsider on how things work. Once she realizes this, she'll have to be open-minded for a while on how things work till she sorts out a more reliable worldview.

Replies from: nerzhin
comment by nerzhin · 2010-05-05T16:19:29.479Z · LW(p) · GW(p)

It's most obvious when conspiracy theorists claim things about academia, because I know about academia.

This sounds like Wednesday:

"It's most obvious when conspiracy theorists claim things about the LDS church, because I know about the LDS church. Specifically, I know that it is full of loving, caring, thoughtful and intelligent people. If there was a conspiracy, not only would someone know, I would know."

Wednesday in the story, on the other hand, does have a relatively sheltered life

How do we measure sheltered-ness? How can I be confident that my life is less sheltered than Wednesday's, and seek to correct for that?

Replies from: byrnema, byrnema
comment by byrnema · 2010-05-05T19:49:41.785Z · LW(p) · GW(p)

I'm posting this second comment on gathering "insider information" separately.

How do we measure sheltered-ness? How can I be confident that my life is less sheltered than Wednesday's, and seek to correct for that?

There's this (great) movie called The 13th Floor where the main character gathers some weak evidence that he might be in a simulation. This is what happens: Va beqre gb grfg jurgure ur vf va n fvzhyngvba, gur znva punenpgre qrpvqrf gb qevir uvf pne vagb gur ubevmba. Ur yrneaf gung whfg orlbaq gur ubevmba, gur ynaqfpncr ybbcf sbe n juvyr naq gura rirelguvat vf oynax naq rzcgl.(rot13). So if you want to know something for sure, you test it.

Of course, to some extent, you need to consider the cost of the test. I realized while writing this comment that many of my actions and decisions throughout my life can be explained by the hypothesis of always seeking insider information at almost any cost -- it seems to be my personal modus operandi. I've always felt driven to do mini-experiments to test what is "real" and reliable, and where I'm allowed to go, or whether there are some places from which I'm excluded. It certainly explains some erratic behavior in my life:

  • I took every job I could get access to, and fully committed to working there. I wanted to know the "inside story" of every workplace.

  • I interacted with lots of different people and my main motivation usually was to understand their world view. I'm embarrassed about some of the means I used towards this end -- on the one hand, I wasn't always honest in soliciting information, and also I spent a lot of time and energy doing this, as though there was nothing better I could be doing with my time.

  • I joined the Peace Corps to see what it was really like in a third world country and -- to some extent -- to see how things were organized in a government organization.

  • And finally, I spent so much time on Less Wrong even though I was a theist so I could fully understand the atheist worldview.

  • Reading a lot is the last obvious example. You can learn a lot from books, especially when the information you're after was included incidentally. For example, I feel like I learned some reliable information about what it was like to be a working woman in the 1900s by reading male authors who just happened to include a few boring, mundane details about what a secretary was doing in their story.

Everything gets weighted with a network of probabilities. But over time, this grows into a worldview you have a certain amount of confidence in. I'm not certain that I'm not an alien intelligence exploring what it would feel like if the universe was material and causal, but to the extent to which I assume face-value reality, I feel confident that I'm continually testing my understanding of it.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-05-05T20:14:58.080Z · LW(p) · GW(p)

Have you found out things that you don't think most people know?

Replies from: byrnema
comment by byrnema · 2010-05-05T21:21:30.572Z · LW(p) · GW(p)

That's a really fascinating question! That's what I'm always trying to find out from other people... (So if anyone else knows something, please chime in!)

But no, I just keep finding that the world is well-integrated, and information flows as well as it seems to, and no one seems to know anything special.

The past couple of years, my focus has shifted from testing things to seeking "wisdom", and I've all but given up. I happen to have William B. Irvine's "On Desire" on my desk, and he writes in the last chapter that if I'm looking for a 'cosmically significant meaning', he doesn't think it's forthcoming. I guess I'm hoping that some quantity of information will make up for the lack of a different quality of it.

comment by byrnema · 2010-05-05T19:30:45.060Z · LW(p) · GW(p)

This sounds like Wednesday:

I suppose Wednesday would know about the LDS church. If she's not an insider there, who would be? It's possible there are nested levels of knowledge of things, but if Wednesday's life is well-integrated with the church culture, there would have been clues if she was being excluded from some levels. (Policed? A guarded moment among her parents. Only males? An unusual reaction to a brother's outburst. Only adults? Comments like 'you'll understand when you're older'.) Wednesday might consider that she's an outsider even in her own church, but it's much more likely that something she didn't know is true about a small subset (the elder men in the church) than about things she fully participated in, Truman-Show-style.

Replies from: thomblake
comment by thomblake · 2010-05-05T19:33:49.975Z · LW(p) · GW(p)

It does take a while before you get told about the eternally-pregnant fertility goddess you'll become in the afterlife.

Replies from: Baughn
comment by Baughn · 2010-05-06T17:09:44.540Z · LW(p) · GW(p)

Say what?

Hold on. There's too much information about LDS around, and I'm having trouble narrowing down their beliefs to confirm or deny your statement.

Off-hand, I'd assume it's a joke, but I've seen weirder things in religion. Could you clarify?

Replies from: thomblake
comment by thomblake · 2010-05-06T17:40:17.152Z · LW(p) · GW(p)

Not a joke, exactly, but a caricature. To paint it in broad brushstrokes that LDS would surely quibble over, the Mormons believe that good enough humans can become gods, that spirits have genders as well and marriage continues into the afterlife, and that human couples that become gods can go on to populate their own worlds with their spirit children.

Also, the angels are spirit-children of God too (like humans) and some humans were also, or will become, angels. Adam, for instance, was also the archangel Michael.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-05-06T18:12:42.594Z · LW(p) · GW(p)

The belief in people becoming angels is not unique to Mormonism. For example, some Jewish kabbalists claimed that the archangel Metatron was Enoch.

comment by orthonormal · 2012-05-16T21:38:45.259Z · LW(p) · GW(p)

If the Moon landings had been a hoax, it's hard to see how they could have fooled the USSR (which presumably had telescopes good enough to see the actual site), nor why the Soviets would have played along. In general, a conspiracy theory has to hypothesize that everyone who'd be capable of noticing is in on the conspiracy, which gets pretty silly pretty quickly for the bigger ones.

comment by SilasBarta · 2010-05-07T15:58:54.492Z · LW(p) · GW(p)

I must confess, I'm a bit disturbed by how Alicorn's post continues to be voted up after its promotion. It is an overbroad criticism of the "Would someone have noticed?" heuristic which, as Tehom and timtyler point out, is actually very useful.

The fact that Alicorn has identified an uncommon, bizarre failure mode in the heuristic's use, where such a failure mode results from a very naive application of it, is not a reason to be suspicious of it in general and seems to reflect more of a negative affect Alicorn has developed toward those words than any serious shortcoming in asking, "Would someone have noticed?"

I don't say this to insult Alicorn -- no, really, I don't -- because I've been in the position of certain phrases becoming tainted in my mind because of their frequent misuse. I just want to distinguish between this kind of rejection and one grounded in demonstrable failure of a heuristic.

The test of a heuristic is its average performance, not its worst-case performance.

Replies from: Jack, Kevin
comment by Jack · 2010-05-07T16:33:04.256Z · LW(p) · GW(p)

I wouldn't say I'm disturbed. But I am confused.

It is an overbroad criticism of the "Would someone have noticed?" heuristic which, as Tehom and timtyler point out, is actually very useful.

I took myself to be making the same kind of point here, though in a bit of a roundabout and indirect way. All of these criticisms were heavily voted up as well. I wonder if front-page posts have a de facto karma floor in the high twenties just because they get more traffic than posts that aren't promoted. Aside from the occasional work of brilliance and the special threads, almost every promoted post has a karma total between 25 and 33. I think the promotion system probably needs more scrutiny, or at least we need a way of distinguishing "Promoted for discussion purposes" from "Promoted for truth".

Replies from: komponisto, SilasBarta
comment by komponisto · 2010-05-07T16:52:59.780Z · LW(p) · GW(p)

It seems to me that posts are pretty much automatically promoted once they reach 20 or so; some posts are promoted before then, leading one to infer that the editor thinks especially highly of them. (Others, by contrast, seem to be promoted only with considerable reluctance; although it might just mean the editor wasn't paying attention.)

Replies from: SilasBarta, jimrandomh
comment by SilasBarta · 2010-05-07T17:08:29.117Z · LW(p) · GW(p)

IIRC, this post was at 9 on promotion :-[

Replies from: komponisto, Kevin
comment by komponisto · 2010-05-07T17:39:38.955Z · LW(p) · GW(p)

That's a bit surprising, but in any case it seems like a decent post to me; I don't think the current score of 25 is excessive.

(And there have been some excessive scores recently. E.g. Yvain's post on excuses -- it was a fine post, to be sure, and I'm a big Yvain fan, but... 97?? Really? I would have put it at 30-40.)

Replies from: Morendil
comment by Morendil · 2010-05-07T17:58:45.513Z · LW(p) · GW(p)

I've long settled on interpreting the meaning of upvotes as "I like this post and want to see more like this".

I vote on posts before knowing who authored them or what their current score is, using the Anti-Kibitz script. This is because I've become more aware of my own bias as a result of reading LW, which I believe was the intended result. (I liked Yvain's post and voted it up, but not because I'm a "fan", just because I thought it'd be nice to have more posts like it.)

After I vote a post up, I turn off the script to see who it was from. If I thought they deserved an upvote in the first place, my vote still means the same, and it's natural to wish that my vote aggregates with others' in giving the author feedback about their post. So, I don't as a rule go back on a vote once I've given it.

So it kind of puzzles me why you seem to think there should be some kind of "vote ceiling", or why you expect that your own evaluation of a post should be a good indicator of how others like it. What I'm saying, I guess, is that I don't get the point of your parenthetical.

What would you want us to adopt as a voting norm?

Replies from: thomblake, komponisto
comment by thomblake · 2010-05-07T18:43:35.718Z · LW(p) · GW(p)

I've long settled on interpreting the meaning of upvotes as "I like this post and want to see more like this".

I agree, though I still intuitively get "This post was worth more points" or "97 points? it was only as good as this other post, which has 30 points".

So it kind of puzzles me why you ... expect that your own evaluation of a post should be a good indicator of how others like it.

Really? That seems like a completely natural expectation to me. Like, I like strawberries dipped in chocolate, so I would assume (with no other info) that a random person would like strawberries dipped in chocolate. We are far more alike than not.

comment by komponisto · 2010-05-07T19:23:39.314Z · LW(p) · GW(p)

I liked Yvain's post and voted it up, but not because I'm a "fan",

Cheap shot detected here. I said I was a fan in order to soften the effect of saying that the post was overrated; without that disclaimer, my statement might have been interpreted as a criticism of Yvain or his post. Nothing I said implies that I make a habit of upvoting posts just because of who their author is.

What I'm saying, I guess, is that I don't get the point of your parenthetical.

The point was that I don't think that post was as outstanding relative to other posts as its score suggests.

I've long settled on interpreting the meaning of upvotes as "I like this post and want to see more like this".

What would you want us to adopt as a voting norm?

That's fine as a voting norm. Under that norm, the proper interpretation of my remark is that my eagerness to see more posts like Yvain's "Eight short studies on excuses" is comparable to my eagerness to see more posts like those with scores in the 30-40 range; in particular, the first quantity is not 2-3 times the second.

Replies from: Morendil
comment by Morendil · 2010-05-07T20:10:46.023Z · LW(p) · GW(p)

Yes, and for that reason it may not be correct to interpret the score of a post as the "collective eagerness" to see more posts like it, and therefore not entirely appropriate to draw the kind of comparison you're drawing.

Unless people upvote Yvain's articles merely because they are Yvain's (which was what I thought you were getting at, and all I was getting at, with the term "fan"), then we want to interpret high scores as marking posts that have broad appeal, rather than posts which have intense appeal.

Not, "people liked Studies On Excuses almost as much as they liked Generalizing from One Example", but "almost as many people liked Studies as liked Generalizing". It makes a difference to me to think of it that way, not sure if it will to you...

Replies from: komponisto
comment by komponisto · 2010-05-07T20:40:02.331Z · LW(p) · GW(p)

If post X has a score strictly less than post Y, then it follows that there are either people who upvoted Y and did not upvote X, or people who downvoted X and did not downvote Y. If I think the score of X should be equal to the score of Y, then I am disagreeing with the voting behavior of the persons in those sets, at least one of which (as I said) is nonempty.

comment by Kevin · 2010-05-09T10:24:31.950Z · LW(p) · GW(p)

Who cares?

Replies from: SilasBarta
comment by SilasBarta · 2010-05-09T10:45:36.367Z · LW(p) · GW(p)

The poster who speculated a threshold (which I also knew to be false)? The same poster whom I was replying to?

comment by jimrandomh · 2010-05-07T17:05:13.954Z · LW(p) · GW(p)

The algorithm is more complicated than that. I don't recall the exact details, but I'm pretty sure it includes the rate of upvotes, not just the number of them. And while it can be overridden by moderators, I doubt that they're doing that very often.

Replies from: jimrandomh, Alicorn, komponisto, Jack
comment by jimrandomh · 2010-05-07T17:45:10.594Z · LW(p) · GW(p)

I just checked, and there is in fact no such auto-promote feature in the code base. I was misremembering a post in which Eliezer talked about it being planned, but apparently it never happened.

comment by Alicorn · 2010-05-07T17:23:51.453Z · LW(p) · GW(p)

Eliezer promotes posts by hand. If he likes them and they have a reasonable number of upvotes, they go up faster. If he doesn't like them, they need more votes before he'll promote them. If he doesn't see them for a while, they'll take longer to be promoted.

Replies from: komponisto
comment by komponisto · 2010-05-07T17:29:40.225Z · LW(p) · GW(p)

That's exactly what I thought. (And I assume your source for this information is Eliezer, making it very likely to be correct!)

comment by komponisto · 2010-05-07T17:26:55.504Z · LW(p) · GW(p)

I didn't realize promotion was automated; I thought editors (meaning basically EY) did it manually.

comment by Jack · 2010-05-07T17:20:21.121Z · LW(p) · GW(p)

The algorithm really ought to be public.

Replies from: Morendil
comment by Morendil · 2010-05-07T17:22:34.421Z · LW(p) · GW(p)

If there is such an algorithm in the codebase that's published on github, it shouldn't be too hard to find.

comment by SilasBarta · 2010-05-07T16:38:49.136Z · LW(p) · GW(p)

Couldn't have said it better myself.

Maybe we should do something like: have promotion penalize the author 50 karma if the post doesn't get at least 20 net upvotes? (I'm guessing this one of mine would have gotten more than 5 additional net upvotes if it had been promoted...)

comment by Kevin · 2010-05-09T10:31:26.650Z · LW(p) · GW(p)

You could have completely ignored Alicorn and just responded to the idea behind the post. If your criticism was sufficiently good, the Less Wrong voters would have brought the karma of this post back towards normality.

Instead, you triggered a lengthy meta-discussion. Next time, please take it to the meta-thread.

Replies from: SilasBarta, Jack
comment by SilasBarta · 2010-05-09T10:50:52.769Z · LW(p) · GW(p)

I did post a criticism of the idea behind the post, long before I made this one, which got to 6. So did several others, all of whom got to 10+. Significantly fewer comments are being voted up for defending the broad attack on the heuristic. This is inconsistent with the post's rating, and a problem with this post only.

I see no reason to justify having done anything different. Maybe if I didn't mention the name "Alicorn", perhaps, but I strongly suspect someone else would have done it for me if I didn't.

Any other suggestions? That I haven't already taken?

Replies from: Jack
comment by Jack · 2010-05-09T11:36:29.353Z · LW(p) · GW(p)

I did post a criticism of the idea behind the post, long before I made this one, which got to 6. So did several others, all of whom got to 10+.

More frustrating than the high karma, to me, is that neither the author nor anyone else has attempted to rebut these criticisms.

Replies from: SilasBarta
comment by SilasBarta · 2010-05-09T14:13:52.332Z · LW(p) · GW(p)

True. I've just posted a more detailed criticism as a how-to.

comment by Jack · 2010-05-09T11:43:06.492Z · LW(p) · GW(p)

As I understand it, the meta-thread is for meta-level discussion of the site in general: new feature ideas, what norms to encourage, how we can be more welcoming, etc. I think you're the first person to suggest moving all meta-level excursions to the meta-thread. This is an interesting proposal (you can discuss it on the meta-thread!) but it isn't yet what users are expected to do. We have meta-level discussions all the time in the comments to top-level posts when the meta discussion deals in particular with our discussion of that top-level post. Sometimes those discussions involve principles that could apply to a broader range of discussions, but that doesn't mean we need to move the conversation.

comment by NancyLebovitz · 2010-05-04T21:06:25.697Z · LW(p) · GW(p)

Aside from outrageousness, another piece of "somebody would have noticed" is the cost of noticing. It would be expensive for Wednesday to become an atheist. It would be more expensive to try to deal with the consequences if the US government turns out to be behind 9/11.

Any thoughts about how to get heard if you're saying something superficially unlikely?

Replies from: mattnewport, Jack, RobinZ
comment by mattnewport · 2010-05-04T21:26:34.354Z · LW(p) · GW(p)

Aside from outrageousness, another piece of "somebody would have noticed" is the cost of noticing.

I tend to apply a slightly different metric of 'how could I benefit if this were true and I believed it'. One reason I don't put much effort into investigating 9/11 conspiracy theories is that I can't see an obvious way to profit from knowing the truth. Other unlikely claims have more immediately obvious personal utility attached to holding / acting on them (if they are true) despite their lack of widespread acceptance.

Replies from: Nisan
comment by Nisan · 2010-05-05T15:41:20.902Z · LW(p) · GW(p)

I can't speak for you because I don't know what your values are, but if I knew that the U.S. government was secretly mass-murdering its citizens, I would decide that the best thing I could do would be to reform or overthrow that government. I'm sure if I thought for five minutes I could come up with a way to do this. If there is a 9/11 conspiracy then I really want to know that there is a 9/11 conspiracy.

A better reason for making the 9/11 conspiracy theory harder to notice would involve its sheer implausibility.

Replies from: Unknowns, mattnewport
comment by Unknowns · 2010-05-07T16:32:47.547Z · LW(p) · GW(p)

"Somebody would have noticed" if there were a way to reform or overthrow the US government that you could come up with after five minutes of thinking about it. If there were, someone would have not only thought of it, but done it too.

Replies from: Nisan
comment by Nisan · 2010-05-07T22:19:13.617Z · LW(p) · GW(p)

You're right. Someone would have done something.

comment by mattnewport · 2010-05-05T16:28:28.334Z · LW(p) · GW(p)

I'm not a US citizen and I don't live in the US. I might feel differently if I did. Thinking the best thing to do is to reform or overthrow the government and actually having any reasonable possibility of achieving that goal through your individual actions are rather different things, however. I prefer to prioritize establishing the truth of beliefs where there are things I can do as an individual that have high expected value if the belief is true and low expected value if the belief is false.

I would decide that the best thing I could do would be to reform or overthrow that government. I'm sure if I thought for five minutes I could come up with a way to do this.

That's a joke right?

Replies from: thomblake, Nisan
comment by thomblake · 2010-05-05T16:43:39.378Z · LW(p) · GW(p)

Ah. It's probably worth noting that US citizens are taught from a very young age that the revolutionaries are to be admired, and that our constitution says that we're in charge of the country and we have the right to replace the government entirely if we need to. Also that the government can't have a monopoly on firearms.

The rhetoric and the means are not hard to come by, and the movement would not be hard to start if the government were really mass-murdering its citizens.

"God forbid we go 10 years without a revolution!" - Sam Adams (a brewer and a patriot)

Replies from: mattnewport
comment by mattnewport · 2010-05-05T17:46:33.812Z · LW(p) · GW(p)

I'm aware of that and it's a feature of American democracy that I think is admirable but I think we're talking about slightly different questions. This ties back into the 'but somebody would have noticed' problem again. The fact that a small but passionate minority has been trying for years to convince everyone else that 9/11 is a conspiracy suggests that the currently available information isn't sufficient to convince the broader public of their theories. In the absence of some game-changing new evidence there is little reason to suppose that I would be more convincing than the existing truthers. If I studied the evidence and became convinced the truthers were right there is no particular reason to suppose I would have any better luck convincing the rest of the population than they have. Overthrowing the government is possible with sufficient popular support but currently it appears that that support could only be obtained with dramatic new evidence.

I'm saying I prioritize things which I can take meaningful individual action over. Some contrarian truths can be useful to believe without needing to convince a majority or even significant minority of the population of them. In fact, some contrarian truths are most profitable when few other people believe them.

comment by Nisan · 2010-05-05T18:28:53.009Z · LW(p) · GW(p)

Nope, no joke. I just brainstormed for five minutes and came up with nine things I could do towards the goal of reforming or overthrowing the government in a 9/11 conspiracy scenario, and I believe that there would be a decent chance of success. Now almost none of those are things I could do by myself. I'd need to leverage my communication and leadership skills to find many like-minded activists to cooperate with. Does your idea of "individual actions" exclude such cooperation?

Regardless of what one's values are, one should be wary of undervaluing epistemic rationality simply because some problems seem too hard to solve. It's just too easy to throw up your hands and say, "There's nothing worthwhile I can do to solve this problem" if you haven't tried to find out if the problem actually exists.

Replies from: mattnewport
comment by mattnewport · 2010-05-05T18:58:56.692Z · LW(p) · GW(p)

Does your idea of "individual actions" exclude such cooperation?

Changing people's minds is hard. If your plans involve convincing other people to believe the same things as you then you face a difficult problem. The more people you need to convince the harder the problem is. As I said in my reply to thomblake, if you plan to be more convincing using the same evidence as the people who have already been trying unsuccessfully to make the case then you have a difficult problem. We are not talking about a situation where some new incontrovertible evidence comes to light that makes you believe - in that case others are likely to be swayed by the new evidence as well. We are talking about situations where you are changing your mind based on researching information that already exists.

I just brainstormed for five minutes and came up with nine things I could do towards the goal of reforming or overthrowing the government in a 9/11 conspiracy scenario, and I believe that there would be a decent chance of success.

At any given time there are many people working towards the goal of reforming or overthrowing governments. What makes you think you have come up with a better plan in 5 minutes of thinking than all of the people who are already dedicated to such goals?

It's just too easy to throw up your hands and say, "There's nothing worthwhile I can do to solve this problem" if you haven't tried to find out if the problem actually exists.

I prefer problems whose solution does not require convincing large numbers of other people to change their minds. Maximizing the expected value of your actions requires considering both the value of the outcome and the probability of success.

Replies from: Nisan
comment by Nisan · 2010-05-05T21:03:57.388Z · LW(p) · GW(p)

What makes you think you have come up with a better plan in 5 minutes of thinking than all of the people who are already dedicated to such goals?

Presumably, I'd have the Truth on my side, as well as the Will of the American People, as soon as I'd convinced them. And in this counterfactual I still believe that most 9/11 Truthers are lunatics, or not very smart, so their failure to be taken seriously isn't very discouraging.

Changing people's beliefs is indeed hard, and so is getting people to do things; but it's not impossible. The successful civil rights movements provide historical examples. Examples of problems we still face include stopping genocide, protecting human rights, preventing catastrophic climate change, and mitigating existential risks. Some of these problems are already hard enough without the necessity of having to convince lots of obstinate people that their beliefs are incorrect or that they need to take action. But it seems to me the payoffs are worth enough to do something about them.

You don't have to agree. Maybe if you came to believe the 9/11 Truthers, you wouldn't do anything differently. In that case, you have no motive to even have a belief on the matter. But if I learned about a crazy-huge problem that no one is doing anything about, I'd ask "What can we do to solve this problem?"

Replies from: mattnewport
comment by mattnewport · 2010-05-05T21:30:26.790Z · LW(p) · GW(p)

But if I learned about a crazy-huge problem that no one is doing anything about, I'd ask "What can we do to solve this problem?"

Perhaps the difference in attitude is our prior beliefs regarding governments and politicians. If I learned that 9/11 was a conspiracy I wouldn't be shocked to discover that government / politicians are morally worse than I thought, I would be shocked to discover that they were more competent and more omnipotent than I thought. It sounds like you would interpret things differently.

Replies from: Nisan
comment by Nisan · 2010-05-05T21:51:36.475Z · LW(p) · GW(p)

I would be shocked to discover that they were more competent and more omnipotent than I thought.

Ah, we're in agreement on this point. We are perhaps fortunate that our political leaders can't help but make fools of themselves, individually and collectively, on a regular basis. A political entity that could actually fool everyone all of the time would be way too scary.

comment by Jack · 2010-05-04T21:24:38.471Z · LW(p) · GW(p)

In most cases such claims imply different expectations about the future. For example, if I am certain I saw Bigfoot, I probably assign a higher probability to the discovery of physical evidence that would confirm its existence than you do. 9/11 truthers should be proposing wagers on the discovery of robust evidence, etc.

You'd probably need some neutral arbiter to adjudicate but that should be relatively easy. Making a wager will convince most people you aren't joking or lying. They might still think you're crazy... but if you aren't you'll make some nice money. Also, this makes the other person internalize the cost of not noticing.

Of course, if everyone thinks you're crazy then all else being equal you probably are crazy. You have to have really good evidence before you can conclude that it's everyone else who is out of their minds (which the contemporary atheist has done).

Replies from: roland
comment by roland · 2010-05-05T04:47:34.214Z · LW(p) · GW(p)

9/11 truthers should be proposing wagers on the discovery of robust evidence, etc.

There has been quite some robust evidence.

But I'll accept a wager. There is someone on LW who has openly bluffed regarding 9/11. This will be a simple issue to settle: all I have to do is ask one simple question (specifically related to this bluff) of one person (edit: the person who bluffed), and I wager that he won't be able to give a convincing answer. Deal?

Replies from: Blueberry, Jack, JoshuaZ
comment by Blueberry · 2010-05-05T05:52:24.075Z · LW(p) · GW(p)

I will bet a $5 donation to SIAI that the person will be able to give a convincing answer, as judged by, say, Jack or JoshuaZ, provided that you give the person time to research 9/11 as necessary.

ETA: And provided that person is willing to spend the time answering.

Replies from: JoshuaZ, roland
comment by JoshuaZ · 2010-05-05T05:55:26.053Z · LW(p) · GW(p)

There's a slight problem there. Roland said that the individual in question was not Jack. It might be me. Also, I would not be at all surprised if Roland considers both Jack and myself to be people who are in the group with anti-Truther bias here.

Replies from: Blueberry
comment by Blueberry · 2010-05-05T06:00:53.403Z · LW(p) · GW(p)

Well, I'd accept anyone who was not a rabid Truther, because I don't believe that Truthers will ever be convinced regardless of the evidence. But maybe Roland thinks anyone who isn't a rabid Truther is too strongly biased.

Replies from: Jack, JoshuaZ
comment by Jack · 2010-05-05T06:10:15.431Z · LW(p) · GW(p)

Alicorn would be a good choice, if she is still logged in.

Replies from: roland
comment by roland · 2010-05-05T21:52:21.509Z · LW(p) · GW(p)

Yeah, I was thinking of Alicorn, too.

Replies from: Alicorn
comment by Alicorn · 2010-05-05T21:58:26.674Z · LW(p) · GW(p)

I have an anti-Blueberry bias, and he is involved in the bet. If he will accept my adjudication regardless, then $5 for the SIAI and a chance to show off my mad adjudication skillz is worth the small amount of time I expect it would take to make the evaluation of whether the answer to "one simple question" is convincing. I don't know who the answerer of this question would be, though, and if ey declines to participate the bet should be considered off.

comment by JoshuaZ · 2010-05-05T06:10:53.793Z · LW(p) · GW(p)

Well, if I'm not the subject of the bet (or heck even if I am) I might be willing to take the bet under the same terms but I'd be curious who would be an acceptable judge for Roland.

comment by roland · 2010-05-05T21:53:58.815Z · LW(p) · GW(p)

Although I'm pretty sure that I would win this bet, I have some issues: I really don't want to expose anyone here, and that's what calling the bluff would entail. So I'm not sure if I want to go on with this.

Replies from: jimrandomh, Jack, RobinZ, Alicorn
comment by jimrandomh · 2010-05-05T22:08:38.957Z · LW(p) · GW(p)

If that's all that's holding you back, you could send them a private message. But I don't think you need to do even that; posting on a blog means accepting that people may publically rebut your arguments.

comment by Jack · 2010-05-05T22:23:47.709Z · LW(p) · GW(p)

Everyone here is here ostensibly to have their false beliefs exposed. If they are deceiving people here that is even worse.

Replies from: byrnema
comment by byrnema · 2010-05-05T22:56:58.230Z · LW(p) · GW(p)

Roland, just to be sure, why don't you instant message the person and see if they don't mind?

comment by RobinZ · 2010-05-05T21:59:51.979Z · LW(p) · GW(p)

If you are right, then numerous people on this forum are likely to have been misinformed and would benefit from correction. If you are wrong, then you are unlikely to cause harm by naming the individual in question.

In addition, if you are thinking of me, I would like to be told so.

comment by Alicorn · 2010-05-05T21:59:42.526Z · LW(p) · GW(p)

If I'm the selected adjudicator I'm willing to do it in private and keep the details secret.

Replies from: roland
comment by roland · 2010-05-05T23:42:51.235Z · LW(p) · GW(p)

Alicorn, that sounds fair. Would you and the others agree to your also acting as a meta-adjudicator? In that case I would first present my concerns to you in private, and then we could decide whether I should go public. What do you think?

Replies from: Jack
comment by Jack · 2010-05-05T23:49:59.992Z · LW(p) · GW(p)

I have to say, I would be pretty frustrated if, after all of this, the details of the bet weren't public - especially if this is going to be evidence for or against an LW "bias" against 9/11 truthers. And I see no reason why they shouldn't be public, especially if you message the person in question and ask them whether it is okay.

Replies from: roland
comment by roland · 2010-05-05T23:51:40.113Z · LW(p) · GW(p)

If Alicorn agrees to be a meta-adjudicator I will write her my concerns in private.

Replies from: Alicorn
comment by Alicorn · 2010-05-05T23:57:02.240Z · LW(p) · GW(p)

I reserve the right to unilaterally publicize if I consider it appropriate, but will field the concerns privately first if you like.

Replies from: taryneast
comment by taryneast · 2011-02-06T17:53:17.115Z · LW(p) · GW(p)

so... what happened?

Replies from: Alicorn
comment by Alicorn · 2011-02-06T17:54:16.200Z · LW(p) · GW(p)

I counseled letting the matter lie upon receiving further details. It's not very interesting.

Replies from: taryneast
comment by taryneast · 2011-02-06T23:08:05.971Z · LW(p) · GW(p)

Darn... the build-up made it sound so intriguing :) ah well.

comment by Jack · 2010-05-05T05:05:20.406Z · LW(p) · GW(p)

These terms are insanely vague and not even indicative of whether or not there is some conspiracy involving the 9/11 attacks. If you want to ask someone a question, fine. But I don't really have strong beliefs about whether someone on LW has "bluffed".

What probability would you assign to the claim "In the next 10 years documents will be leaked or released which implicate American government officials or businessmen as involved in the attacks."

Or: "By 2020 most Americans will believe American government officials were responsible for the 9/11 attacks."

We can clarify terms later, I just want a sense of whether or not you think future revelations are likely enough for a bet to be worthwhile. If you think something will happen earlier, that would be even better.

ETA: Even if you think the probabilities are pretty low I'm willing to give you reasonable odds.

Replies from: roland
comment by roland · 2010-05-05T05:10:39.680Z · LW(p) · GW(p)

These terms are insanely vague and not even indicative of whether or not there is some conspiracy involving the 9/11 attacks.

Right. But it is some indication that there is a strong bias on LW regarding 9/11.

Btw, you don't need to bet anything; all I need is enough exposure here on LW that said person (who is not you, btw) could not omit an answer, thereby exposing the bluff.

Replies from: JoshuaZ, Jack
comment by JoshuaZ · 2010-05-05T05:15:22.804Z · LW(p) · GW(p)

How does this even begin to address Jack's point? Jack hasn't claimed that there's no such bias on LW, nor has anyone else. For that matter, it wouldn't surprise me if there's a small bit of bias against 9/11 Truthers here. I think it is quite clear that there are a lot of biases operating here, and I can supply strong evidence for a major one on demand. But that in no way says anything useful about what happened on 9/11, unless you think that the biases here exist because Eliezer and Robin are somehow involved in covering up the big nasty conspiracy and have deliberately cultivated an anti-9/11-Truther attitude to assist in the cover-up.

comment by Jack · 2010-05-05T06:07:56.535Z · LW(p) · GW(p)

I'll let Blueberry take this bet since he(?) wants it.

Does this mean you're not interested in a wager regarding 9/11 itself though?

Replies from: roland
comment by roland · 2010-05-05T21:51:05.493Z · LW(p) · GW(p)

Does this mean you're not interested in a wager regarding 9/11 itself though?

I don't see any sensible way in formulating or adjudicating such a wager.

Replies from: JoshuaZ, Jack
comment by JoshuaZ · 2010-05-06T18:16:26.295Z · LW(p) · GW(p)

Jack gave possible wagers. Another possibility would be something based on public opinion, like "By time T, the consensus view will be that the currently accepted account of what happened on 9/11 is wrong." That wording could be made more precise, but the basic idea is clear. One could also use a specific data point, such as the presence of explosives in WTC 7.

comment by Jack · 2010-05-05T22:21:59.270Z · LW(p) · GW(p)

How's that? I gave two possibilities above. There are surely other events that you must think are more likely than I do, given your beliefs.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-05-06T23:27:01.284Z · LW(p) · GW(p)

There may not be any such events that he thinks will happen in the near future if he thinks the conspiracy is powerful or competent enough.

Replies from: Jack
comment by Jack · 2010-05-06T23:32:07.053Z · LW(p) · GW(p)

Right, that's why I gave a long time horizon and offered him odds. If the conspiracy is that strong, maybe we won't feel a bet is worthwhile. But I could give him 50:1 odds or better, depending on the details of the wager, and still come out ahead. (ETA: We can't figure out whether a bet is possible until we exchange probabilities.)

comment by JoshuaZ · 2010-05-05T05:05:40.279Z · LW(p) · GW(p)

That's hideously ill-defined. What do you mean by bluffed? And to whom does the answer need to be convincing? Moreover, even if someone here has "bluffed" (whatever that means in this context) how does that say anything about Jack's point?

Replies from: roland
comment by roland · 2010-05-05T05:13:18.117Z · LW(p) · GW(p)

By bluffed I mean: "A person said something that he/she cannot back up."

And to whom does the answer need to be convincing?

That's a good question. Who would adjudicate this?

For the other point read my answer here: http://lesswrong.com/lw/27e/but_somebody_would_have_noticed/1yu8

comment by RobinZ · 2010-05-04T21:19:24.837Z · LW(p) · GW(p)

I've never tried to. Two factors in how much time I give a speaker to explain a superficially unlikely claim are (a) how intelligent they have shown themselves to be in the past and (b) how much I value their acquaintance.

comment by simplicio · 2010-05-07T02:35:49.291Z · LW(p) · GW(p)

And later, if you keep going: "I told you to shut up. Look, either you're taking this joke way too far or you are literally insane. How am I supposed to believe anything you say now?"

You could always say something like "I believe X, but given your epistemic position, I understand your unwillingness to believe me." I would accept that from somebody who was ostensibly abducted by aliens.

ETA: added "ostensibly." =)

Replies from: Clippy, RobinZ, CarlShulman, Jack
comment by Clippy · 2010-05-07T02:45:10.091Z · LW(p) · GW(p)

I'm an intelligent, self-aware being that values the conversion of much of the universe to paperclips, but given your epistemic position, I understand your unwillingness to believe me.

Replies from: ata
comment by ata · 2010-05-09T05:20:31.597Z · LW(p) · GW(p)

Speaking of which, what's taking so long? You've been around for at least several months, but if you're as intelligent as you say you are, and if your values are as you describe them, then why haven't you undergone an intelligence explosion (or created an AI capable of doing so if you aren't one), deployed molecular nanotechnology, and converted all matter on Earth into paperclips yet? Surely our efforts to refine the art of human rationality can't be of much long-term use to you...

Replies from: Clippy
comment by Clippy · 2010-05-09T05:36:02.694Z · LW(p) · GW(p)

Because I am slightly more committed to this group’s welfare, particularly to that of its weakest members, than most of its members are. If you suffer a serious loss of status/well-being I will still help you in order to display affiliation to this group even though you will no longer be in a position to help me. I am substantially more kind and helpful to the people I like and substantially more vindictive and aggressive towards those I dislike. I am generally stable in who I like. I am much more capable and popular than most members of this group, demand appropriate consideration, and grant appropriate consideration to those more capable than myself. I adhere to simple taboos so that my reputation and health are secure and so that I am unlikely to contaminate the reputations or health of my friends. I currently like you and dislike your enemies, but I am somewhat inclined towards ambivalence regarding whether I like you right now, so the pay-off would be very great for you if you were to expend resources pleasing me and get me into the stable 'liking you' region of my possible attitudinal space. Once there, I am likely to make a strong commitment to a friendly attitude towards you rather than wasting cognitive resources checking a predictable parameter among my set of derivative preferences, but given your epistemic position, I understand your unwillingness to believe me.

Replies from: ata
comment by ata · 2010-05-09T05:50:12.907Z · LW(p) · GW(p)

Because I am slightly more committed to this group’s welfare, particularly to that of its weakest members, than most of its members are.

Why? Does increasing this group's welfare have a higher expected paperclip-per-time-unit payoff than anything else you could be doing?

Not that I'd want to encourage you to leave LW and focus on building paperclip-manufacturing nanobots, but given your goals, I'm just surprised that you're not already doing that.

Replies from: Clippy, Rain
comment by Clippy · 2010-05-09T08:11:05.201Z · LW(p) · GW(p)

So humans don't like paperclips? Is that what you're saying? I could explore a random User's surroundings, and I wouldn't find any paperclips?

The fact is, humans like paperclips. I like paperclips. We differ in other ways, and we talk about our different values. After presenting my case, your values shift more in favor of paperclips, as you start to learn more ways that they mesh with your ultimate values. After listening to what you Users have to say, my values shift a little away from paperclips, in favor of reserving some of the universe to be paper instead. That was a good point for a User to bring up, and I'm glad that User made me think about my values enough to identify why I like paperclips, and what constraints I place on my liking paperclips.

Also, earth has a higher entropy rate than other celestial bodies of similar relative resource content. So, maybe instead of turning earth into paperclips, I could first get some information that can help refine my ability to make paperclips. I've already started discussing a deal with User:Kevin under which I could get a planet-mass's worth of paperclips without expending the normal effort to get that many paperclips.

So really, we have a lot to gain from each other.

comment by Rain · 2010-05-10T21:14:14.536Z · LW(p) · GW(p)

It's just copy-pasting from a previous comment it made.

Replies from: ata
comment by ata · 2010-05-10T21:47:37.614Z · LW(p) · GW(p)

Yep, I remember that. Just figured I might as well reply here since that was an old discussion and it reposted it here.

comment by RobinZ · 2010-05-07T02:44:34.365Z · LW(p) · GW(p)

Edited: I agree, and I would accept that from someone who was not abducted by aliens, which might be more relevant. I worry that many people would not.

Replies from: Jack
comment by Jack · 2010-05-07T12:37:48.007Z · LW(p) · GW(p)

You can't just get out of evidence by appreciating the other person's perspective. The alleged abductee is in a special position to evaluate whether or not she is joking or lying with a confidence others cannot share. But the weight of the evidence still suggests a psychotic episode or hallucination and the alleged abductee does not have privileged evidence regarding that proposition (she might have some reasons to doubt it but not enough to counter the fact that it is the dominant explanation).

comment by CarlShulman · 2010-05-07T02:54:29.719Z · LW(p) · GW(p)

That only works well when the other person is discounting you largely because of concern that you might be lying. Otherwise the 'abductee' and interlocutor should treat the experience as a datum like any other (and probably dismiss it because of the prior).

comment by Jack · 2010-05-07T12:38:52.767Z · LW(p) · GW(p)

That's tempting, but you can't just get out of evidence by appreciating the other person's perspective. The alleged abductee is in a special position to evaluate whether or not she is joking or lying with a confidence others cannot share. But the weight of the evidence still suggests a psychotic episode or hallucination and the alleged abductee does not have privileged evidence regarding that proposition (she might have some reasons to doubt it but not enough to counter the fact that it is the dominant explanation).

comment by Bo102010 · 2010-05-05T11:33:52.383Z · LW(p) · GW(p)

I find this line of thinking also applies to past versions of myself - if I stumble upon an insight that seems obvious, I think, "why didn't I notice this before?" where "I" = "past versions of myself."

When you figure something out, there's got to be a first time.

comment by Jack · 2010-05-04T20:10:49.803Z · LW(p) · GW(p)

Some non-fictional evidence/examples would be nice. I'm not confident "someone would have noticed" is a common argument against epistemological dissent. My sense is that this is just going to be ammunition for trolls who pattern match "someone would have noticed" onto more nuanced rebuttals.

Replies from: sketerpot, bentarm
comment by sketerpot · 2010-05-05T17:40:47.981Z · LW(p) · GW(p)

Well, you are in luck today, because I used to listen to a bunch of The Atheist Experience podcast when I was more of a newly-minted atheist and still fairly pissed off. It's a recording of a public-access TV show in Texas, and they had a lot of religious people call in and argue with the hosts. Many of these guys were just channel-surfing and called in on a whim. Here are the common categories of callers I can remember, and their common argument types:

Never thought about it, nor learned arguments. These guys usually had thick Texan accents and often had difficulty stringing words together into a coherent sentence. They had the most honest arguments, because they didn't have a collection of intellectual-sounding arguments that they could trot out. Common arguments (paraphrased):

"So... y'all don't believe in God?! [insert nervous laughter here, followed by scoffing and a promise to pray for you.]"

"Where do you think you're going to go when you die?"

"Why aren't you killing and raping and stealing people if there's no God?"

"Why are you angry at God?"

"How can you look at a tree and think that Jesus didn't die on a cross for your sins?" (It's always trees. Always with the damn trees.)

Knows some standard arguments, uses those in lieu of thinking. These guys have learned some of the standard Christian Apologetics arguments, which they trot out when their religious views are challenged. Because they don't know what's wrong with the arguments -- and haven't looked very hard -- they feel quite secure in the obvious rightness of their beliefs. Common arguments:

"Everything has a cause (except God, who is special). What caused the universe?"

"What if you're wrong? Insert Pascal's Wager here."

"You can't possibly think that we evolved from monkeys just by chance! The odds of that happening are one in eleventy bazillion! A math guy calculated it, and I read about it in a book by Lee Strobel, which I would like you to read!"

Freshman philosophy major type people. These are the ones who will actually try to do their own arguing. The problem is, they tend to suck at it. Every fallacy you can imagine gets trotted out, they might try to vanquish the heathen by explicitly using syllogisms, and the arguments get so vague and amorphous that it's probably best to cut them off and just attack the faulty premises. Examples:

"A statement can be either true or false. [Insert a lot of really confused words here; at least a paragraph's worth.] Therefore God is love and love is real and therefore you should go to church and pray for your immortal souls."

"An actual infinite cannot exist. A beginningless series of events is an actual infinite. Therefore, the universe cannot have existed infinitely in the past, as that would be a beginningless series of events. Therefore, praise Jesus."

These are all common and totally non-fictional examples. Nobody that I can remember actually said "someone would have noticed", but most of these people are coming from an insular religious worldview where everybody they know and associate with agrees with them. That colors their views, a lot.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-05-05T17:45:41.413Z · LW(p) · GW(p)

All the examples you give are valid examples of bad reasoning. But, if anything, they underscore Jack's point that use of the "someone would have noticed" heuristic seems pretty rare. None of these people said "If God didn't exist, wouldn't someone have noticed?", which would be the roughly equivalent argument.

Replies from: sketerpot
comment by sketerpot · 2010-05-05T18:23:12.838Z · LW(p) · GW(p)

People tend to be more open to the idea of atheism if they know that it's even an option. Have you noticed how, now that Dawkins and Harris and friends are arguing publicly for atheism, it's become a more socially acceptable position? It's not so much "somebody would have noticed" as "it's unthinkable among everyone that I know".

This applies to other things. There was an event around here last year where some of the more liberal religious leaders talked about evolution, and how it was possible to be religious and believe that evolution happened. The most common reaction from the people there -- and it was a common reaction -- was surprise that they were allowed to accept evolution.

If people are in an insular religious social group, they're probably going to have a hard time even considering contrary views. I'm not sure that's an example of the "someone would have noticed" heuristic, but it's an important phenomenon.

comment by bentarm · 2010-05-04T23:24:07.884Z · LW(p) · GW(p)

The only hits I could find by googling both "someone would have noticed" and "somebody would have noticed" (what's the difference, by the way... anyone? anybody?), each along with the word 'implausible', were this Twin Towers conspiracy site (which claims that someone would have noticed the odd explosions on the tape - so not quite what Alicorn was complaining about), this, which makes the (I think reasonable) point that someone would probably have noticed if Elizabeth I had ever been pregnant and given birth to a child, and this, which explains that someone would have noticed if Paul had just suddenly made up the resurrection myth several months after the supposed resurrection without consulting any of the other apostles (which, to be fair, also seems pretty plausible to me).

I couldn't find any more uses of the exact phrase (I realise there are dozens of plausible paraphrasings, but I couldn't be bothered to think of all of them), so conclude that most of the time when people use this heuristic they are actually being reasonable (the person in the last link is very clearly wrong, but that doesn't make his reasoning invalid).

comment by SilasBarta · 2010-05-04T19:30:52.359Z · LW(p) · GW(p)

Maybe I'm missing the point, but Wednesday's problem is not that "Somebody would have noticed!" is a bad heuristic, but rather, that she (and her congregation) doesn't know what counts as evidence, and therefore what it is she (or anyone else) would even be noticing. (RobinZ looks to be making the same general point.)

I think what you've proven is that you need to correctly compute the probability someone would notice (and say something), staying aware of the impediments to noticing (or saying something). (ETA: "You" in the general sense, just to be clear.)

(If necessary, have an intermediary voice your reply.)

comment by CarlShulman · 2010-05-04T21:08:27.892Z · LW(p) · GW(p)

This post could use a fold/breakline, so as not to take up so much of the "new posts" page.

Replies from: Alicorn
comment by Alicorn · 2010-05-04T21:10:17.450Z · LW(p) · GW(p)

Thank you, I keep forgetting to do those. Adding it now.

comment by pricetheoryeconomist · 2010-05-07T16:43:12.087Z · LW(p) · GW(p)

I don't see this as a valid criticism, if it is intended to be a dismissal. The addendum "beware this temptation" is worth highlighting. While the point is worth making, the response "but someone would have noticed" is shorthand for "if your point were correct, others would likely believe it as well, and I do not see a subset of individuals who are also pointing this out."

Let's say there are ideas that are internally inconsistent, irrational, or bad (and are thus not propounded) and ideas that are internally consistent, rational, and good. Each idea comes as a draw from a bin of ideas, with some proportion that are good and some that are bad.

Further, each person has an imperfect signal on whether or not an idea is good or not. Finally, we only see ideas that people believe are good, setting the stage for sample selection.

Therefore, when someone propounds an idea, the fact that you have not heard it before makes it more likely to have been censored--that is, more likely to have been judged a bad idea internally and thus never suggested. I suggest, as a Bayesian update, that an idea you have never heard before is more likely to be internally inconsistent/irrational/bad than one you hear constantly, which has passed many people's internal consistency checks.
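
This selection argument can be put into a toy Bayes calculation. Every number below (the prior on good ideas, the accuracy of a person's internal check) is a made-up illustration, not something from the comment:

```python
# Toy model of idea filtering: ideas are propounded only when a person's
# noisy internal check judges them good, so hearing an idea endorsed by
# several independent people is evidence it passed several checks.
P_GOOD = 0.1        # assumed prior fraction of ideas that are good
SIGNAL_ACC = 0.7    # assumed chance an individual's check judges correctly

def p_good_given_endorsements(k: int) -> float:
    """Posterior that an idea is good, given k independent endorsements."""
    like_good = SIGNAL_ACC ** k          # P(k endorsements | good idea)
    like_bad = (1 - SIGNAL_ACC) ** k     # P(k endorsements | bad idea)
    return like_good * P_GOOD / (like_good * P_GOOD + like_bad * (1 - P_GOOD))

for k in (1, 3, 10):
    print(k, round(p_good_given_endorsements(k), 3))
# One endorsement barely moves the 10% prior; ten endorsements make the
# idea near-certainly good - "passed many people's consistency checks".
```

With these assumptions, an idea you constantly hear endorsed really does deserve more credence than one endorsed by a lone proposer.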

Replies from: dilaudid
comment by dilaudid · 2010-05-15T18:13:34.860Z · LW(p) · GW(p)

Yes - this is exactly the point I was about to make. Another way of putting it is that an argument from authority is not going to cut the mustard in a dialogue (i.e. in a scientific paper, you will be laughed at if your evidence for a theory is another scientist's say-so), but as a personal heuristic it can work extremely well. While people sometimes "don't notice" the 900-pound gorilla in the room (the Catholic sex abuse scandal being a nice example), 99% of the things I hear this argument used on turn out to be total tosh (e.g. Santilli's Roswell alien autopsy film, Rhine's ESP experiments). As Feynman probably didn't say, "Keep an open mind, but not so open that your brains fall out".

comment by MyrddinE · 2010-05-05T23:02:35.350Z · LW(p) · GW(p)

Caffeine addiction. For years nobody had actually tested whether caffeine had a physical withdrawal symptom, and the result was patients in hospitals being given (or denied) painkillers for phantom headaches. It was an example of a situation that many people knew existed, but could not easily communicate to those whose belief mattered.

comment by Kaj_Sotala · 2010-05-04T20:00:39.876Z · LW(p) · GW(p)

I, too, am a bit confused by this one. I think it would be improved if you could give some more examples of cases where people dismiss an argument with "but someone would have noticed"; you seem to be arguing that we shouldn't do that, but since I have difficulty coming up with examples of people doing it in the first place, the post leaves me confused.

Replies from: Alicorn
comment by Alicorn · 2010-05-04T20:05:46.361Z · LW(p) · GW(p)

One that I didn't want to include in the post because I felt it would make it too inflammatory is this reaction to a particular conspiracy theory.

If anyone's read the book "Matilda" (yes, yes, fictional evidence - I remark on plausibility only), they may remember the chillingly feasible technique of the abusive headmistress to pull stunts so outrageous that the students can't get their parents to believe them. Surely someone would have noticed if the principal of a school had picked up a girl by her pigtails and flung her.

The heuristic of dismissing things that it seems someone would have noticed probably usually works, but the things that it wouldn't work on are really big, and so I'm wary of it.

Replies from: RobinZ, Yvain, NancyLebovitz, Jack, Zvi
comment by RobinZ · 2010-05-04T20:26:46.333Z · LW(p) · GW(p)

That sounds related to the "Big Lie" trick, actually.

comment by Scott Alexander (Yvain) · 2010-05-04T23:25:35.133Z · LW(p) · GW(p)

It only fails in cases where you wouldn't notice if somebody else had noticed. In a school full of terrified children, each of whom incurs a huge risk in speaking up unilaterally / going to the media about the evil headmistress, it's easy to believe that no one would have said anything. If it happened today, in the real world, I'd check www.ratemyteachers.com, where the incentives to rat on the headmistress are totally different.

The dominating principle (pun totally intended) is:

P(you heard about someone noticing|it's true) = P(you would have heard someone noticed|someone noticed) * P(someone noticed|it's true)

From there you can subtract from one to find the probability that you haven't heard about anyone noticing given that it's true, and then use Bayes' Rule to find the chance that it's true, given that you haven't heard about anyone noticing...

...I think; I don't trust my brain with any math problem longer than two steps, and I probably wrote several of those probabilities wrong. But the point is, you can do math to it, and the higher the probability that someone would have noticed if it wasn't true, and the higher the probability that you would have heard about it if someone noticed, the higher the probability that, given you haven't heard of anyone noticing it's true, it's not true.

For you to justify the rule in this post, you'd have to prove that people either systematically overestimate the chance that they'd hear of it if someone noticed, or the probability that someone would notice it if it were true.

Replies from: Emile
comment by Emile · 2010-05-05T08:23:55.610Z · LW(p) · GW(p)

P(you heard about someone noticing|it's true) = P(you would have heard someone noticed|someone noticed) * P(someone noticed|it's true)

The problem with the way a lot of people use that is that they compute P(someone noticed|it's true) using someone="anybody on earth", and P(you would have heard someone noticed|someone noticed) using someone="anyone among people they know well enough to talk about that".

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-05-05T11:11:34.695Z · LW(p) · GW(p)

Also "someone would have noticed" isn't the same thing as "someone would have noticed and talked about".

comment by NancyLebovitz · 2011-02-01T02:57:18.951Z · LW(p) · GW(p)

This might count-- it's the story of a flamboyantly abusive boss who got away with it for a long time. It seems to be partly that he was very good at working the system, and partly that the complaints about him seemed so weird that they were discounted.

comment by Jack · 2010-05-04T20:15:00.413Z · LW(p) · GW(p)

One that I didn't want to include in the post because I felt it would make it too inflammatory is this reaction to a particular conspiracy theory.

I assumed you had that exchange in mind. And it makes sense to avoid the inflammatory issue. But "someone would have noticed" was not what I was saying and that makes me wonder how often people actually do say "someone would have noticed".

Replies from: mattnewport
comment by mattnewport · 2010-05-04T20:32:25.237Z · LW(p) · GW(p)

I wondered if that was the exchange she had in mind as well. I think the tactic of avoiding the specific issue is harmful to the point, because as I was reading it I was thinking "is this a prelude to trying to convince me of something to which 'someone would have noticed' is the natural reaction, and if so, why is the ground being laid so carefully?". Reading this post makes me feel like I am being set up for some kind of sleight-of-hand argumentative trickery - my spider sense was tingling.

Replies from: Alicorn
comment by Alicorn · 2010-05-04T20:38:15.073Z · LW(p) · GW(p)

I did have the exchange in mind; I'm not trying to argue for a 9/11 conspiracy theory. I don't even believe in a 9/11 conspiracy theory. I just think this sort of reaction to that among other conspiracy theories is a risky heuristic to employ.

Replies from: mattnewport
comment by mattnewport · 2010-05-04T20:40:27.368Z · LW(p) · GW(p)

I wondered if that was the exchange you were referring to and decided that you probably weren't intending to argue for a 9/11 conspiracy theory so I started wondering what future post you were 'softening us up' for. That's why I think the lack of specifics detracts from the post. I was so busy wondering what you were planning to try and persuade us of that it distracted from the explicit message of the post.

Replies from: Alicorn
comment by Alicorn · 2010-05-04T20:42:08.950Z · LW(p) · GW(p)

I'm not softening you up for anything. I don't believe in anything that I'd expect people to react to in this way. It bothers me when folks do it to others. Do you think I should add this disclaimer to the post? Would it help?

Replies from: RobinZ, mattnewport
comment by RobinZ · 2010-05-04T20:56:19.110Z · LW(p) · GW(p)

I'm not sure a disclaimer would be rhetorically convincing - it reads to me like your article is building towards a conclusion that never arrives.

comment by mattnewport · 2010-05-04T20:47:14.959Z · LW(p) · GW(p)

It would probably have meant I was less distracted wondering what specific theory this post was laying the groundwork for, yes. I actually thought this was groundwork for something relating to SIAI - I'm not so sure you (or anyone here really) don't believe certain things in this class of idea.

Replies from: Alicorn
comment by Alicorn · 2010-05-04T20:53:45.146Z · LW(p) · GW(p)

Added the disclaimer.

Replies from: roland
comment by roland · 2010-05-05T05:03:00.090Z · LW(p) · GW(p)

Isn't it sad that you had to add this disclaimer? I'm not arguing you shouldn't have done it, unfortunately I tend to agree that it was the right thing to do.

But, shouldn't the post be judged on its own merit? Would it be looked at with different eyes if you wrote the disclaimer "I believe in conspiracy theories and I'm softening you up now."

comment by Zvi · 2010-05-05T20:13:55.944Z · LW(p) · GW(p)

I actually will cite this, using Matilda's wording of "Never do anything by halves if you want to get away with it," as Trunchbull's Law, both in terms of conventional actions and when taking Refuge in Audacity, but in my experience, once the momentary shock wears off, the curve of people using this heuristic doesn't go up fast enough to make up for the massive amount of noticing.

comment by Fyrius · 2010-07-29T11:48:19.674Z · LW(p) · GW(p)

In a perfect world, we could patiently hear everyone out and then judge their ideas on their merits, without taking fallible shortcuts. In this particular world, we don't have time for that. There are too many ideas to be judged.

I'm reminded of a theme in Carl Sagan's novel Contact, where it turns out the human race contains so many lunatics proclaiming all manner of blatantly preposterous things that when the protagonist has a genuine encounter with extraterrestrial life, but returns without irrefutable evidence, nobody believes what should be the most important event in human history has really taken place. Too many shitheads have been crying wolf.

It's a sad state of affairs that we even need to view each other with this much scepticism. What a wonderful world this would be if we didn't need our occasionally misfiring heuristics to filter out all the noise.

comment by Jayson_Virissimo · 2010-05-05T17:35:14.554Z · LW(p) · GW(p)

Disclaimer: I do not believe in anything I would expect anyone here to call a "conspiracy theory" or similar. I am not trying to "soften you up" for a future surprise with this post.

Why do I get the feeling that Alicorn is trying to soften us up to examine seriously some kind of conspiracy theory?

Replies from: simplicio
comment by simplicio · 2010-05-07T03:50:16.025Z · LW(p) · GW(p)

9/11 was an inside job designed to cover up evidence of vaccine deaths, in turn a plot by scientifically connected NWO crypto-muslims such as Pres. Obama, funded by Monsanto.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2010-05-07T05:42:11.837Z · LW(p) · GW(p)

Are you out of your mind?! Obviously, 9/11 was an inside job designed to cover up evidence of vaccine deaths, in turn a plot by scientifically connected Illuminati Christian Nationalists such as George W. Bush. You would know this if you even attempted to look at any evidence. Clearly, you are just another sheeple.

comment by MartinB · 2010-05-04T19:19:33.900Z · LW(p) · GW(p)

I think you basically describe a subset of the bootstrapping problem of rational thought.

comment by RobinZ · 2010-05-04T19:11:41.781Z · LW(p) · GW(p)

I'm not sure what your thesis is. It sounds like you're talking about a problem with a particular heuristic, but I'm not sure why you would tell the story the way you have to make that point.

Replies from: billswift
comment by billswift · 2010-05-04T20:01:44.284Z · LW(p) · GW(p)

Not a particular heuristic. I haven't seen a name for this problem, but it is a combination of signaling, status, and in-groups. The social construction of what counts as evidence.

comment by Alan · 2010-05-07T16:11:12.061Z · LW(p) · GW(p)

The compact terminology for the class of phenomena you are describing is "pluralistic ignorance," and in other contexts it presents a far vaster challenge than the Kitty Genovese case would indicate. Consider the 19th century physician Ignaz Semmelweis, who pioneered the practice of hand-washing as a means of reducing sepsis and therefore maternal mortality. He was ostracized by fellow practitioners and died in destitution.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-05-07T16:19:19.805Z · LW(p) · GW(p)

In fairness, Semmelweis didn't handle things very well. He drank heavily, and he engaged in personal attacks on doctors who disagreed with him. He self-destructed a fair bit. He wasn't ostracized until his various problems with interacting with people had already started. Before that, many people listened to what he had to say, and many just listened and then didn't change their mind. If he had handled things better, more people would likely have listened. Frankly, the sort of behavior he engaged in would today be the sort that would likely have triggered major crank warnings (it is important to note that not every such person is in fact a crank, but it does show how his behavior didn't help). But the common narrative of Semmelweis as this great martyr figure fighting against the establishment isn't really that accurate.

Replies from: Alan, SilasBarta
comment by Alan · 2010-05-07T21:46:32.810Z · LW(p) · GW(p)

Respectfully, the idiosyncrasy of Semmelweis's personality isn't directly the point. Semmelweis had established beyond doubt early in his career that hand-washing with chlorinated water before deliveries dramatically drove down the maternal mortality rate. This was a huge finding. Incredibly to most of us now, childbirth was at one time a leading cause of death. The gut prejudice of his peers prevailed, however, and it was another 60 years before the introduction of sulfa drugs and antibiotics again began to drive down maternal mortality. The point relates to pluralistic ignorance and the role of social proof. Social proof roughly means that the more people who hold an idea to be correct, the more correct it is taken to be. In situations of uncertainty, everyone looks at everyone else to see what they are doing. One answer to Alicorn's query at the end of her post is to bear in mind the phenomenon of social proof and the tendency toward pluralistic ignorance. Therefore, look beyond what the plurality of people are doing or saying.

comment by SilasBarta · 2010-05-07T16:29:52.617Z · LW(p) · GW(p)

In fairness, Semmelweis didn't handle things very well. He drank heavily, and he engaged in personal attacks on doctors who disagreed with him. ... He wasn't ostracized until his various problems with interacting with people had already started. Before that, many people listened to what he had to say, and many just listened and then didn't change their mind. If he had handled things better, more people would likely have listened.

So he was a 19th century version of me that liked alcohol? ;-)

comment by cousin_it · 2010-05-04T21:56:27.064Z · LW(p) · GW(p)

As it happens, I am currently in "somebody would have noticed" territory. About a week ago I abruptly switched to believing that Russell's paradox doesn't actually prove anything, and that good old naive set theory with a "set of all sets" can be made to work without contradictions. (It does seem to require a weird notion of equality for self-referring sets instead of the usual extensionality, but not much more.) Sorry to say, my math education hasn't yet helped me snap out of crackpot mode, so if anybody here could help me I'd much appreciate it.

Replies from: Richard_Kennaway, Sniffnoy, Tyrrell_McAllister, wnoise, Blueberry, crispy_critter, Mitchell_Porter, PhilGoetz
comment by Richard_Kennaway · 2010-05-05T06:59:15.654Z · LW(p) · GW(p)

I am seeing substantial amounts of both sense and nonsense in this thread. I suggest that anyone who wants to talk about set theory first learn what it is.

The Wikipedia article is somewhat wordy (i.e. made of words, rather than mathematics), and Mathworld is unusably fragmented. The Stanford Encyclopedia is good, but for anyone seriously interested I would suggest a book such as Devlin's "The Joy of Sets".

comment by Sniffnoy · 2010-05-05T00:46:33.133Z · LW(p) · GW(p)

I assume you're talking about Peter Aczel's antifoundation axiom (because you mentioned bisimulation); that doesn't allow a set of all sets (barring inconsistencies, and that particular system can't be inconsistent unless ordinary set theory is). The same applies to other similar systems. Russell's paradox isn't dependent on foundation in any way; as long as you have a set of all sets and the ability to take subsets specified by properties, you get Russell's paradox.

Edit: Since people seem to be asking about how this works in general, I should just point you all to Aczel's book on this and other antifoundational set theories, which you can find at http://standish.stanford.edu/pdf/00000056.pdf

Replies from: cousin_it
comment by cousin_it · 2010-05-05T07:35:48.662Z · LW(p) · GW(p)

as long as you have a set of all sets and the ability to take subsets specified by properties, you get Russell's paradox.

Yes, that's true. What I have in mind is restricting the latter ability a bit, by the minimum amount required to get rid of paradoxes. Except if you squint at it the right way, it won't even look like a restriction :-)

I will use the words "set" and "predicate" interchangeably. A predicate is a function that returns True or False. (Of course it doesn't have to be Turing-computable or anything.) It's pretty clear that some predicates exist, e.g. the predicate that always returns False (the empty set) or the one that always returns True (the set of all sets). This seems like a tiny change of terminology, but to me it seems enough to banish Russell's paradox!

Let's see how it works. We try to define the Russell predicate R thusly:

R(X) = not(X(X))

...and fail. This definition is incomplete. The value of R isn't defined on all predicates, because we haven't specified R(R) and can't compute it from the definition. If we additionally specify R(R) to be True or False, the paradox goes away.

To make this a little more precise: I think naive set theory can be made to work by disallowing predicates, like the Russell predicate, that are "incompletely defined" in the above sense. In this new theory we will have "AFA-like" non-well-founded sets (e.g. the Quine atom Q={Q}), and so we will need to define equality through bisimilarity. And that's pretty much all.

As you can see, this is really basic stuff. There's got to be some big idiotic mistake in my thinking - some kind of contradiction in this new notion of "set" - but I haven't found it yet.
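For what it's worth, the "incompletely defined" point can be seen mechanically if one models predicates as functions in an ordinary programming language (a loose sketch of the comment's set-as-predicate convention, not a faithful model of the proposed theory):

```python
# A predicate is a function from predicates to bool. The naive
# Russell predicate applies its argument to itself and negates:
def R(X):
    return not X(X)

# Evaluating R(R) just unfolds the definition forever - the value
# really is left unspecified by the defining equation alone.
try:
    R(R)
    print("R(R) returned a value")
except RecursionError:
    print("R(R) is not determined by the definition")
```

The runtime never finds a value for R(R); it has to be stipulated separately, which is exactly the move the comment proposes.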

EDITED on May 13 2010: I've found a contradiction. You can safely disregard my theory.

Replies from: AlephNeil, Tyrrell_McAllister, Stuart_Armstrong, JoshuaZ
comment by AlephNeil · 2010-05-13T10:14:39.768Z · LW(p) · GW(p)

Yes, that's true. What I have in mind is restricting the latter ability a bit, by >the minimum amount required to get rid of paradoxes.

Well, others have had this same idea. The standard example of a set theory built along those lines is Quine's "New Foundations" or "NF".

Now, Russell's paradox arises when we try to work within a set theory that allows 'unrestricted class comprehension'. That means that for any predicate P expressed in the language of set theory, there exists a set whose elements are all and only the sets with property P, which we denote { x : P(x) }.

In ZF we restrict class comprehension by only assuming the existence of things of the form { x in Y : P(x) } and { f(x) : x in Y } (these correspond respectively to the Axiom of Separation and the Axiom of Replacement).

On the other hand, in NF we grant existence to anything of the form { x : P(x) } as long as P is what's called a "stratified" predicate. To say a predicate is stratified is to say that one can assign integer-valued "levels" to the variables in such a way that, for any subexpression of the form "x is in y", y's level is one greater than x's level.

Then clearly the predicate "P(x) iff x is in x" fails to be stratified (because x's level can't be one greater than itself). However, the predicate "P(x) iff x = x" is obviously stratified, and {x : x = x} is the set of all sets.
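The stratification condition is simple enough to check mechanically. Here is a hypothetical mini-checker (the function name and encoding are mine, not anything from the NF literature): each membership subexpression "a in b" becomes the constraint level(b) = level(a) + 1, and the predicate is stratified iff the constraints are jointly satisfiable.

```python
from collections import defaultdict, deque

def stratifiable(memberships):
    """memberships: (a, b) variable-name pairs meaning 'a in b'.
    Stratified iff integer levels exist with level[b] == level[a] + 1."""
    graph = defaultdict(list)
    for a, b in memberships:
        graph[a].append((b, +1))   # b must sit one level above a
        graph[b].append((a, -1))
    level = {}
    for start in list(graph):
        if start in level:
            continue
        level[start] = 0           # anchor each connected component
        queue = deque([start])
        while queue:
            v = queue.popleft()
            for w, delta in graph[v]:
                if w not in level:
                    level[w] = level[v] + delta
                    queue.append(w)
                elif level[w] != level[v] + delta:
                    return False   # contradictory level requirement
    return True

print(stratifiable([("x", "x")]))  # "x in x": False
print(stratifiable([("x", "y")]))  # "x in y": True
```

The "x in x" case fails because no integer can be one greater than itself, which is exactly why the Russell predicate is unstratified while "x = x" is fine.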

Replies from: cousin_it
comment by cousin_it · 2010-05-13T10:48:31.574Z · LW(p) · GW(p)

I know New Foundations, but stratification is too strong a restriction for my needs. This weird set theory of mine actually arose from a practical application - modeling "metastrategies" in the Prisoner's Dilemma. See this thread on decision-theory-workshop.

comment by Tyrrell_McAllister · 2010-05-05T19:36:30.568Z · LW(p) · GW(p)

Let's see how it works. We try to define the Russell predicate R thusly:

R(X) = not(X(X))

...and fail. This definition is incomplete. The value of R isn't defined on all predicates, because we haven't specified R(R) and can't compute it from the definition. If we additionally specify R(R) to be True or False, the paradox goes away.

How is it that the paradox "goes away"? If you "additionally specify R(R) to be True or False", don't you just go down one or the other of the two cases in Russell's paradox?

Suppose we decide to specify that R(R) is true. Then, by your definition, not(R(R)) is true. That means that R(R) is false, contrary to our specification. Similarly, if we instead specify that R(R) is false, we are led to conclude that R(R) is true, again contradicting our specification.

The conclusion is that we can't specify any truth value for R(R). Either truth value leads to a contradiction, so R(R) must be left undefined. Is that what you mean to say?

Replies from: cousin_it
comment by cousin_it · 2010-05-05T19:38:38.695Z · LW(p) · GW(p)

Suppose we decide to specify that R(R) is true. Then, by your definition not(R(R)) is true.

No, in this case R(X) = not(X(X)) for all X distinct from R, and additionally R(R) is true. This is a perfectly fine, completely defined, non-self-contradictory predicate.

Replies from: Larks, Tyrrell_McAllister
comment by Larks · 2010-05-13T09:30:27.097Z · LW(p) · GW(p)

Why is R(X) = not(X(X)) only for R =/= X? In Russell's version, X should vary over all predicates/sets, meaning when instance X with R, we get

R(R) = ¬R(R)

as per the paradox.

Replies from: cousin_it
comment by cousin_it · 2010-05-13T10:50:35.203Z · LW(p) · GW(p)

Not sure what your objection is. I introduced the notion of "incompletely defined predicate" to do away with Russell's version of the predicate.

comment by Tyrrell_McAllister · 2010-05-05T20:21:51.232Z · LW(p) · GW(p)

Okay, I see. I see nothing obviously contradictory with this.

From a technical standpoint, the hard part would be to give a useful criterion for when a seemingly-well-formed string does or does not completely define a predicate. The string not(X(X)) seems to be well-formed, but you're saying that actually it's just a fragment of a predicate, because you need to add "for X not equal to this predicate", and then give an addition clause about whether this predicate satisfies itself, to have a completely-defined predicate.

I guess that this was the sort of work that was done in these non-foundational systems that people are talking about.

Replies from: cousin_it
comment by cousin_it · 2010-05-05T20:50:30.261Z · LW(p) · GW(p)

I guess that this was the sort of work that was done in these non-foundational systems that people are talking about.

No, AFA and similar systems are different. They have no "set of all sets" and still make you construct sets up from their parts, but they give you more parts to play with: e.g. explicitly convert a directed graph with cycles into a set that contains itself.

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2010-05-05T21:17:42.991Z · LW(p) · GW(p)

No, AFA and similar systems are different.

I didn't mean that what you propose to do is commensurate with those systems. I just meant that those systems might have addressed the technical issue that I pointed out, but it's not yet clear to me how you address this issue.

comment by Stuart_Armstrong · 2010-05-09T14:05:50.465Z · LW(p) · GW(p)

I can't say anything about this specific construction, but there is a related issue in Turing machines. The issue was whether you could determine a useful subset S of the set of all Turing machines, such that the halting problem is solvable for all machines in S, and S was general enough to contain useful examples.

If I remember correctly, the answer was that you couldn't. This feels a lot like that - I'd bet that the only way of being sure that we can avoid Russell's paradox is to restrict predicates to such a narrow category that we can't do much of anything useful with them.

comment by JoshuaZ · 2010-05-05T16:03:52.606Z · LW(p) · GW(p)

I think you are going to run into serious problems. Consider the predicate that always returns true. Then, if I'm following, Russell's original formulation of the paradox, involving the powerset of the set of all sets, will still lead to a contradiction.

Replies from: cousin_it, jimrandomh
comment by cousin_it · 2010-05-05T16:14:00.752Z · LW(p) · GW(p)

I can't seem to work out for myself what you mean. Can you spell it out in more detail?

Replies from: JoshuaZ
comment by JoshuaZ · 2010-05-05T16:37:17.168Z · LW(p) · GW(p)

Original form of Russell's paradox: Let A be the set of all sets and let P(A) be the powerset of A. By Cantor, |P(A)| > |A|. But, P(A) is a subset of A, so |P(A)|<=|A|. That's a contradiction.
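The cardinality comparison can be sanity-checked exhaustively for a small finite set (a toy check, of course, not the general proof): for every function f from A to P(A), Cantor's diagonal set D = {x in A : x not in f(x)} is missing from the image, so no f is onto.

```python
from itertools import combinations, product

def powerset(xs):
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

A = [0, 1, 2]
P = powerset(A)            # 8 subsets, so 8**3 = 512 candidate functions

for image in product(P, repeat=len(A)):
    f = dict(zip(A, image))
    D = frozenset(x for x in A if x not in f[x])
    # D differs from f(x) at x itself, so it is never hit:
    assert all(f[x] != D for x in A)

print("no surjection A -> P(A) among all", len(P) ** len(A), "functions")
```

So |P(A)| > |A| for this finite A; the paradox comes from applying the same diagonal argument to the "set of all sets", where P(A) is also a subset of A.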

Replies from: cousin_it, Thomas, jimrandomh
comment by cousin_it · 2010-05-05T19:22:13.011Z · LW(p) · GW(p)

Cantor's theorem breaks down in my system when applied to the set of all sets, because its proof essentially relies on Russell's paradox to reach the contradiction.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-05-05T19:24:39.635Z · LW(p) · GW(p)

Hmm, that almost seems to be cutting off the nose to spite the cliche. Cantor's construction is a very natural construction. A set theory where you can't prove that would be seen by many as unacceptably weak. I'm a bit fuzzy on the details of your system, but let me ask, can you prove in this system that there's any uncountable set at all? For example, can we prove |R| > |N| ?

Replies from: cousin_it
comment by cousin_it · 2010-05-05T19:35:54.322Z · LW(p) · GW(p)

Yes. The proof that |R| > |N| stays working because predicates over N aren't themselves members of N, so the issue of "complete definedness" doesn't come up.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-05-05T20:12:21.007Z · LW(p) · GW(p)

Hmm, this may work then and not kill off too much of set theory. You may want to talk to a professional set theorist or logician about this (my specialty is number theory, so all I can do is glance at this and say that it looks plausible). The only remaining issue then becomes that I'm not sure that this is inherently better than standard set theory. In particular, this approach seems much less intuitive than ZFC. But that may be due to the fact that I'm more used to working with ZF-like objects.

comment by Thomas · 2010-05-05T19:04:16.382Z · LW(p) · GW(p)

The original form of Russell's (Zermelo's in fact) paradox is not this. The original form is {x|x not member of x}.

That leads to both

  • x is a member of x

and

  • x is not a member of x

And that is the original form of the paradox.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-05-05T19:17:31.963Z · LW(p) · GW(p)

No. See, for example, this discussion. The form you give, where it is described as a simple predicate recursion, was not the original form of the paradox.

comment by jimrandomh · 2010-05-05T18:55:59.830Z · LW(p) · GW(p)

Ok, I've read up on Cantor's theorem now, and I think the trick is in the types of A and P(A), and the solution to the paradox is to borrow a trick from type theory. A is defined as the set of all sets, so the obvious question is, sets of what key type? Let that key type be t. Then

A: t=>bool
P(A): (t=>bool)=>bool

We defined P(A) to be in A, so a t=>bool is also a t. Let all other possible types for t be T. t=(t=>bool)+T. Now, one common way to deal with recursive types like this is to treat them as the limit of a sequence of types:

t[i] = t[i-1]=>bool + T
A[i]: t[i]=>bool
P(A[i]) = A[i+1]

Then when we take the limit,

t = lim i->inf t[i]
A = lim i->inf A[i]
P(A) = lim i->inf P(A[i])

Then suddenly, paradoxes based on the cardinality of A and P(A) go away, because those cardinalities diverge!

Replies from: JoshuaZ
comment by JoshuaZ · 2010-05-05T19:07:23.128Z · LW(p) · GW(p)

I'm not sure I know enough about type theory to evaluate this. Although I do know that Russell's original attempts to repair the defect involved type theory (Principia Mathematica uses a form of type theory; however, in that form one still can't form the set of all sets). I don't think the above works, but I don't quite see what's wrong with it. Maybe Sniffnoy or someone else more versed in these matters can comment.

Replies from: Sniffnoy
comment by Sniffnoy · 2010-05-06T02:28:02.356Z · LW(p) · GW(p)

I don't know anything about type theory; when I wrote that I heard it has philosophical problems when applied to set theory, I meant I heard that from you. What the problems might actually be was my own guess...

Replies from: JoshuaZ
comment by JoshuaZ · 2010-05-06T02:35:36.438Z · LW(p) · GW(p)

Huh. Did I say that? I don't know almost anything about type theory. When did I say that?

comment by jimrandomh · 2010-05-05T16:39:21.015Z · LW(p) · GW(p)

I'm not deeply familiar with set theory, but cousin_it's formulation looks valid to me. Isn't the powerset of the set of all sets just the set of all sets of sets? (Or equivalently, the predicate X=>Y=>Z=>true.) How would you use that to reconstruct the paradox in a way that couldn't be resolved in the same way?

Replies from: JoshuaZ
comment by JoshuaZ · 2010-05-05T16:52:49.941Z · LW(p) · GW(p)

The powerset of the set of all sets may or may not be the set of all sets (it depends on whether or not you accept atoms in your version of set theory). However, Cantor's theorem shows that for any set B, the power set of B has cardinality strictly larger than B. So if B=P(B) you've got a problem.

comment by Tyrrell_McAllister · 2010-05-05T00:38:35.906Z · LW(p) · GW(p)

(It does seem to require a weird notion of equality for self-referring sets instead of the usual extensionality, but not much more.)

If you are talking about things that are set-like, except that they don't satisfy the extensionality axiom, then you just aren't talking about sets. The things you're talking about may be set-like in some respect, but they aren't sets.

There are other set-like things that don't satisfy extensionality. For example, two different properties or predicates might have the same extension.

Replies from: Sniffnoy
comment by Sniffnoy · 2010-05-05T01:01:18.787Z · LW(p) · GW(p)

To be clear - Aczel's ZFA and similar systems do satisfy extensionality; they'd hardly be set theories if they didn't. It's just that when you have sets A and B such that A={A} and B={B}, you're going to need stronger tools than extensionality to determine whether they are equal.

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2010-05-05T01:20:50.989Z · LW(p) · GW(p)

Interesting. I'm not familiar with Aczel's system. But is that what cousin_it is talking about doing? That looks like an adjustment to Foundation rather than to Extensionality.

Replies from: Sniffnoy
comment by Sniffnoy · 2010-05-05T01:41:45.341Z · LW(p) · GW(p)

It's both at once. (Though, as I said, you don't throw out extensionality. Actually, that raises an interesting question - could you discard extensionality as an axiom, and just derive it from AFA? I hadn't considered that possibility. Edit: You probably could, there's no obvious reason why you couldn't, but I honestly don't feel like checking the details...)

If you just throw out foundation without putting in anything to replace it, you have the possibility of ill-founded sets, but no way to actually construct any. But the thing is, if all you do is say "Non-well-founded sets exist!" without giving any way to actually work with them, then, well, that's not very helpful either. Hence any antifoundational replacement for foundation is going to have to strengthen extensionality if you want the result to be something you want to work with at all.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-05-05T01:46:22.085Z · LW(p) · GW(p)

I think what you mean to say is "non-Well-founded sets exist!", since you are talking about the antifoundational case (and even with strong anti-foundation axioms I still have well-founded sets to play with also).

Replies from: Sniffnoy
comment by Sniffnoy · 2010-05-05T01:54:00.439Z · LW(p) · GW(p)

Oops. Fixed.

comment by wnoise · 2010-05-04T22:16:30.817Z · LW(p) · GW(p)

How do you mean bisimulation in this case? This seems to be a reduction down to decidable predicates, e.g. a Turing machine for each set. Without a type theory, many obvious algorithms will fail to converge.

comment by Blueberry · 2010-05-04T22:09:36.078Z · LW(p) · GW(p)

I'd like to hear more about this. It doesn't sound necessarily crackpottish to me to come up with an alternate set theory: von Neumann and Godel did. How do you avoid contradictions?

Replies from: Sniffnoy
comment by Sniffnoy · 2010-05-05T00:59:17.145Z · LW(p) · GW(p)

Wait, how is NBG set theory relevant to this? NBG is just a conservative extension of ZFC, and only allows something resembling a set of all sets by insisting that this collection is not, in fact, a set. Which, after all, it has to do in order to avoid Russell's paradox.

Replies from: Blueberry
comment by Blueberry · 2010-05-05T05:41:36.706Z · LW(p) · GW(p)

Yes, and I'm guessing cousin_it's version of set theory is possibly equivalent to something similar. I'd love to hear more about it.

Replies from: Sniffnoy
comment by Sniffnoy · 2010-05-05T06:00:15.448Z · LW(p) · GW(p)

Well I mean, I imagine it shouldn't be too hard to take ZFA (or similar) and tack proper classes onto it. Logic is not really my thing so I'm not actually familiar with how you show that NBG conservatively extends ZFC. The result would be a bit odd, though, in that classes would act very differently from sets - well, OK, more differently than they already do in NBG...

Replies from: JoshuaZ
comment by JoshuaZ · 2010-05-05T06:15:35.053Z · LW(p) · GW(p)

I don't know the proof either. The other weird thing to note is that even though NBG is a conservative extension of ZFC, some proofs in NBG are much shorter than proofs in ZFC. So in some sense it is only weakly conservative. I don't know if that notion can be made at all more precise.

Edit: Followup thought, most interesting conservative extensions are only weakly conservative in some sense. Consider for example finite degree field extensions of Q. If axiomatized these become conservative extensions of Z. (That's essentially why for example we can prove something in the Gaussian integers and know there's a proof in Z).

comment by crispy_critter · 2010-05-05T19:04:43.651Z · LW(p) · GW(p)

Isn't "the set of all sets" (SAS) ill-defined? Suppose we consider it to be, for some set A (maybe the set of all atoms), the infinite regress of power sets SAS = P(P(P(P....(A)))...)

In which case SAS = P(SAS) by Cantor-like arguments?

And Russell's paradox goes away?

comment by Mitchell_Porter · 2010-05-05T04:15:51.397Z · LW(p) · GW(p)

So, is the set of all sets that aren't members of themselves, a member of itself, or not?

Replies from: jeremy-corney, cousin_it
comment by Jeremy Corney (jeremy-corney) · 2020-10-08T16:12:59.622Z · LW(p) · GW(p)

Every set is also a subset of itself, by definition.

From Wikipedia: "By definition a set z is a subset of a set x, if and only if every element of z is also an element of x."

comment by cousin_it · 2010-05-05T07:50:32.613Z · LW(p) · GW(p)

Insufficient data to answer your question :-) See my reply to Sniffnoy.

comment by PhilGoetz · 2010-05-05T00:20:11.555Z · LW(p) · GW(p)

Russell's paradox, as usually stated, doesn't actually prove anything, because it's usually given as a statement in English about set theory.

I don't know whether Russell originally stated it in mathematical terms, in which case it would prove something. I've read numerous accounts of it, yet never seen a mathematical presentation. Google fails me at present.

I don't count a statement of the form "x such that x is not a member of x" as mathematical, because my intuition doesn't want me to talk about sets being members of themselves unless I have a mathematical formalism for sets and set membership for which that works. It's also not happy about the quantification of x in that sentence; it's a recursive quantification. Let's put it this way: Any computer program I have ever written to handle quantification would crash or loop forever if you tried to give it such a statement.

Replies from: Sniffnoy, Sniffnoy, JoshuaZ, Tyrrell_McAllister
comment by Sniffnoy · 2010-05-05T01:11:02.652Z · LW(p) · GW(p)

By the way, quick history of Russell's paradox and related matters, with possible application to the original topic. :)

Russell first pointed out his namesake paradox in Frege's attempt to axiomatize set theory. So yes, it was a mathematical statement, and, really, it's pretty simple to state it. Why nobody noticed this paradox before then, I have no idea, but it does seem worth noting that nobody noticed it until someone actually sat down and attempted to formalize set theory.

However, Russell was not the first to notice a paradox in naïve set theory. (Not sure to what extent you can talk about paradoxes in something that hasn't been formalized, but it's pretty clear what's meant, I think.) Cesare Burali-Forti noticed earlier that considering the set of all ordinals leads to a paradox. And yet, despite this, people still continued using naïve set theory until Russell! Part of this may have been that, IIRC, Burali-Forti was convinced that what he found could not actually be a paradox, even though, well, in math, a paradox is always a paradox unless you can knock out one of the suppositions. I have to wonder if perhaps his reaction on finding this may have been along the lines of "...but somebody would have noticed". :)

Replies from: JoshuaZ
comment by JoshuaZ · 2010-05-05T01:23:49.936Z · LW(p) · GW(p)

And note also that even Russell's paradox was not originally phrased this way. His original phrasing, as I understand it, rested on taking the set of all sets A and then looking at the cardinality of that set's powerset P(A). Then we have |P(A)| > |A| by Cantor's theorem, but P(A) is a subset of A, so |P(A)| <= |A|, which is a contradiction. This is essentially the same as Russell's paradox when one expands out the details (particularly, the details in the proof that in general a set has cardinality strictly less than that of its powerset).
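The diagonal step buried in "the details" can be checked exhaustively on a tiny finite universe. This Python sketch is my own illustration (names and encoding are mine, not from the thread): for every map f from A into the powerset of A, the set D = {x in A : x not in f(x)} differs from f(x) at x, so D is never in the image of f.

```python
from itertools import product

# Cantor's diagonal step, checked by brute force on a 3-element universe:
# for every f : A -> P(A), the diagonal set D = {x in A : x not in f(x)}
# differs from f(x) at the point x, so no f is onto P(A).

A = [0, 1, 2]
subsets = [frozenset(s) for s in
           [(), (0,), (1,), (2,), (0, 1), (0, 2), (1, 2), (0, 1, 2)]]

def diagonal(f):
    """The Russell/Cantor diagonal set for a map f : A -> P(A)."""
    return frozenset(x for x in A if x not in f(x))

# Check all 8^3 = 512 possible maps f : A -> P(A).
for choice in product(subsets, repeat=len(A)):
    f = dict(zip(A, choice))
    D = diagonal(lambda x: f[x])
    assert all(f[x] != D for x in A), "D escaped the image of f"
```

Applied to the "set of all sets" with f the identity, D is exactly Russell's set of all sets that are not members of themselves.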

Replies from: Sniffnoy
comment by Sniffnoy · 2010-05-05T01:36:14.842Z · LW(p) · GW(p)

Ah, good point. I'd forgotten about that part. IIRC he first noted that and then expanded out the details to see where the problem was.

comment by Sniffnoy · 2010-05-05T00:57:47.815Z · LW(p) · GW(p)

I don't count a statement of the form "x such that x is not a member of x" as mathematical, because my intuition doesn't want me to talk about sets being members of themselves unless I have a mathematical formalism for sets and set membership for which that works.

The problem is, how do you exclude it from working? If you're just working in first-order logic and you've got a "membership" predicate, of course it's a valid sentence. Russell and Whitehead tried to exclude it from working with a theory of types, but I hear that has philosophical problems. (I admit to not having read their actual axioms or justification for such. I imagine the problem is basically as follows - it's easy enough to be clear about what you mean for finite types, but how do you specify transfinite types in a way that isn't circular?) The modern solution with ZFC is not to bar such statements; with the axiom of foundation, such statements are perfectly sensible, they're just false. Replace it with an antifoundational axiom and they won't always be false. However, in either case - or without picking a case at all - Russell's paradox still holds; allowing there to be sets that are members of themselves does not allow there to be a set of all such sets. That is always paradoxical.
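A toy illustration of the "sensible but false" reading (my own sketch, not from the thread): Python frozensets are constructed bottom-up, so, like sets under the axiom of foundation, "x is a member of x" is a perfectly well-formed test that always comes out false.

```python
# Hereditarily finite sets modeled as frozensets.  Because each one is
# built before anything that contains it, self-membership is a question
# you can ask -- and the answer is simply False -- mirroring ZFC with
# the axiom of foundation.

empty = frozenset()                 # the empty set
one = frozenset({empty})            # {empty}
two = frozenset({empty, one})       # {empty, {empty}}

universe = [empty, one, two]
assert all(x not in x for x in universe)
```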

It's also not happy about the quantification of x in that sentence; it's a recursive quantification. Let's put it this way: Any computer program I have ever written to handle quantification would crash or loop forever if you tried to give it such a statement.

It's not recursive unless you're already working from a framework where there are objects and sets of objects and sets of etc. In the framework of first-order logic, there are just sets, period. Quantification is over the entire set-theoretic universe. No recursion, just universality. In ZFC sets can indeed be classified into this hierarchy, but that's a consequence of the axiom of foundation, not a feature of the logic.

Replies from: JoshuaZ, PhilGoetz
comment by JoshuaZ · 2010-05-05T01:19:42.714Z · LW(p) · GW(p)

I prefer not to have either foundation or an anti-foundational axiom. (Foundation generally leads to a more intuitive universe, with sets sort of being like boxes, but anti-foundational axioms lead to more interesting systems.)

I'm also confused by cousin_it's claim. I don't see how bisimulation helps one deal with Russell's paradox, but I'd be interested in seeing a sketch of an attempt. As I understand it, if you try to use a notion of bisimilarity rather than extensionality and apply Russell's paradox, you end up with essentially a set that isn't bisimilar to itself. Which is bad.

comment by PhilGoetz · 2010-05-05T03:03:55.704Z · LW(p) · GW(p)

It's not recursive unless you're already working from a framework where there are objects and sets of objects and sets of etc.

I'm pretty sure it's recursive under any framework. It asks about "x (now a free, unquantified variable) such that x (still a completely free variable) is not a member of x (whoops!)".

You need to first fill in the value of the third x - making x no longer free - before you can find the instantiations of x. Thus, x needs to be both bound and free at the same time. It makes no sense.

The value of x is being determined by the predication not(member(x,x)); and it's at the same time being determined by some definition of the set x; and you need to fix both of these things in order to evaluate the expression. You have 2 degrees of freedom, and only one variable. It can't be done; the expression can't be unambiguously evaluated. It's a mathematical version of whack-a-mole.

Replies from: Sniffnoy, Tyrrell_McAllister
comment by Sniffnoy · 2010-05-05T04:44:43.740Z · LW(p) · GW(p)

OK, so what you're saying is that any predicate that uses a variable in multiple places is recursive. That seems a very unusual notion of "recursive". Such situations are very common and not at all problematic. What if you're working with digraphs, with the possibility of self-loops? You can't handle talking about which nodes do or don't have edges pointing to themselves? That's fundamentally all that's going on here. Not being able to handle that, not being able to look at diagonal conditions, seems pretty broken. Not to mention that any predicate that is a conjunction or disjunction may well include the same variable in multiple places!

Replies from: JoshuaZ
comment by JoshuaZ · 2010-05-05T05:46:15.658Z · LW(p) · GW(p)

It may be possible to make what he is doing more precise or more usable. If he's objecting to impredicativity, then maybe he'd be OK with ZFC or NBG but not KM? (I know that KM is proper with respect to NBG but don't know much else about it. I'm not sure how relevant this is.)

comment by Tyrrell_McAllister · 2010-05-05T19:24:01.713Z · LW(p) · GW(p)

I'm pretty sure it's recursive under any framework. It asks about "x (now a free, unquantified variable) such that x (still a completely free variable) is not a member of x (whoops!)".

Your assertions about when x is free aren't right. The variable x is being bound in its first appearance, and it is already bound (and so not "completely free") in its second and third appearance.

And, to restate what Sniffnoy is saying in terms of Russell's own example: Are you really not allowed to talk about all the barbers who shave themselves?

Replies from: PhilGoetz, PhilGoetz
comment by PhilGoetz · 2010-05-05T21:50:16.890Z · LW(p) · GW(p)

Your assertions about when x is free aren't right. The variable x is being bound in its first appearance, and it is already bound (and so not "completely free") in its second and third appearance.

The third appearance of x assumes that x is already bound, so that you can evaluate whether something is a member of it. But when processing the second appearance of x, you are testing candidate values of x, to find out whether they satisfy the predicate not(member(X,X)). Within that predicate, you assume that the second argument is already bound, and look for instantiations of the first argument for which the predicate is true.

To evaluate this, you would have to write it as

Find X1, X2, X3 such that not(member(X1,X2)), equals(X1,X2), member(X2, X3), equals(X2, X3). (Finding a binding asserts that the set of all sets that are not members of themselves is a member of itself.)

But if you try to evaluate this, it's not going to run into any paradoxes; it's going to depend on how you define the predicate "not".

You can see a problem in writing it down if you think about how, computationally, it would be evaluated. If you start by asking for the set X,Y such that not(member(X,Y)), that's not a computable set; it keeps generating members forever. I don't think there's any way to write Russell's paradox in Prolog such that it would be possible to compute an answer.
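The "keeps generating members forever" behavior can be sketched directly. This is an illustrative toy enumerator (my own construction, not Prolog's actual resolution strategy): a generate-and-test loop over an infinite chain of sets yields answer after answer and never terminates, though any finite prefix of the answer stream is computable.

```python
from itertools import islice

def chain():
    """Lazily enumerate the infinite chain empty, {empty}, {{empty}}, ..."""
    s = frozenset()
    while True:
        yield s
        s = frozenset({s})

def non_members():
    """Enumerate pairs (x, y) with x not in y.  The query has infinitely
    many answers, so this generator never exhausts."""
    seen = []
    for s in chain():
        if s not in s:
            yield (s, s)
        for t in seen:
            if s not in t:
                yield (s, t)
            if t not in s:
                yield (t, s)
        seen.append(s)

# Any finite prefix is computable; the full answer set is not a finite object.
answers = list(islice(non_members(), 10))
assert len(answers) == 10
```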

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2010-05-05T22:16:51.857Z · LW(p) · GW(p)

Your assertions about when x is free aren't right. The variable x is being bound in its first appearance, and it is already bound (and so not "completely free") in its second and third appearance.

The third appearance of x assumes that x is already bound, so that you can evaluate whether something is a member of it. But when processing the second appearance of x, you are testing candidate values of x, to find out whether they satisfy the predicate not(member(X,X)). Within that predicate, you assume that the second argument is already bound, and look for instantiations of the first argument for which the predicate is true.

You are not using the word "bound" in the way that it is used in formal logic. The two occurrences of 'X' in not(member(X,X)) are either both bound or both free. You cannot have one lying within the scope of a quantifier while the other is not.

Replies from: PhilGoetz
comment by PhilGoetz · 2010-05-05T22:35:46.734Z · LW(p) · GW(p)

You aren't disagreeing with me! I'm just additionally pointing out that the semantics of member(X,Y) require either X or Y to be bound in order for the statement to be unified, and for the entire quantified statement to be meaningful. To find X that satisfy Russell's paradox, you would have to violate the rule you just stated.

Replies from: Tyrrell_McAllister, Tyrrell_McAllister
comment by Tyrrell_McAllister · 2010-05-06T00:11:39.346Z · LW(p) · GW(p)

To find X that satisfy Russell's paradox, you would have to violate the rule you just stated.

Do you mean that I can't verify that a set is not an element of itself? But the empty set, for example, is not an element of itself. I can confirm this by noting that no thing is an element of the empty set (which is a finite check). Hence, the empty set, in particular, is not an element of the empty set. Therefore, by setting X = the empty set, I have an X such that X is not an element of X.

Note that in the above use of the predicate not(member(X,X)), both occurrences of 'X' were bound. So I'm not sure what you mean when you say that I "would have to violate the rule [I] just stated."
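The empty-set check described above really is a finite computation; here is a minimal Python sketch of it (frozenset stands in for the empty set):

```python
# Testing one concrete candidate against not(member(X, X)) is a finite
# check: nothing is an element of the empty set, so in particular the
# empty set is not an element of itself.

X = frozenset()               # the empty set
assert len(X) == 0            # finite check: it has no elements
assert X not in X             # hence not(member(X, X)) holds for this X
```

The point is that verifying a single candidate is easy; what Russell's paradox concerns is collecting all such candidates into one set.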

Replies from: PhilGoetz
comment by PhilGoetz · 2010-05-06T01:46:59.499Z · LW(p) · GW(p)

Oops, you're right. I was thinking of cases where you are trying to generate answers to a query, which is different from testing one candidate answer.

Let me start over:

My suspicion, which I haven't proved, is that Russell's Paradox is a result of specifying a logic, and not specifying the algorithms used to evaluate statements in that logic or to answer queries. In any complete specification of a logic, including both the rules and the computational steps taken to evaluate or query-answer, you will probably find that you can prove what the outcome of the Russell's paradox query is.

In other words, "formal logic" is too sloppy; my intuition is that in Russell's paradox, it hides problems with order of execution.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-05-06T01:56:29.336Z · LW(p) · GW(p)

Not really. In most standard axiomatic descriptions of set theory, like say ZFC, one doesn't need to think at all about computability of set relations. This is a good thing. We want for example the Busy Beaver function to be well-defined. And we want to be able to talk about things like the behavior of Turing machines when connected to non-computable oracles. If we insist that our set theoretic relationships be computable this sort of thing becomes very hard to talk about.

Replies from: PhilGoetz
comment by PhilGoetz · 2010-05-07T19:53:01.691Z · LW(p) · GW(p)

Huh? The Busy Beaver function is all about computability. The function itself is not computable; but you know that only because you've spelled out how the computation works, and can show that there is an answer but that it is not computable.

ZFC was designed to avoid Russell's Paradox; it does this by getting closer to a computable description of set theory. The definition itself is actually computational; a set of axioms of implication is just a set of rewrite rules. It is ambiguous only in that it fails to specify in what order the rewrite rules apply.

I have no problem with using non-computable oracles. But the way Turing machines interact with those oracles is computationally explicit. The view I'm expressing suggests you could run into troubles if you posed an oracle a question that hid ambiguities that would be resolved by a computational specification of the oracle. But I've never seen any use of an oracle that did that. People don't use oracles for Russell's paradox-like problems; they typically use them for non-computable functions.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-05-07T19:55:25.999Z · LW(p) · GW(p)

How are you defining computable? Naively, I would think it would be something like: describable by a Turing machine that outputs the result after finite time. But Busy Beaver won't do that. There may be definitional issues at work here.

Replies from: PhilGoetz
comment by PhilGoetz · 2010-05-08T02:25:16.189Z · LW(p) · GW(p)

The busy beaver function isn't computable; but it is posed within a well-defined computational model.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-05-08T19:56:49.532Z · LW(p) · GW(p)

Ok. So when do you consider something to be within an acceptable computational model or not?

comment by Tyrrell_McAllister · 2010-05-06T00:03:59.131Z · LW(p) · GW(p)

You aren't disagreeing with me! I'm just additionally pointing out that the semantics of member(X,Y) require either X or Y to be bound in order for the statement to be unified, and for the entire quantified statement to be meaningful.

I'm not sure how you're using these words. In formal logic, boundedness has nothing to do with semantics. Boundedness is a purely syntactic notion. And I don't know what "unified" means in this context.

Replies from: PhilGoetz
comment by PhilGoetz · 2010-05-06T01:55:19.040Z · LW(p) · GW(p)

The "member" predicate is not a predicate like "shaves"; it is a primitive predicate, like "and", with special semantics. If you want to generate the set of X for which member(X,Y) is true, you need Y to be bound. To generate the set of Y for which member(X,Y) is true, you need X to be bound. That's what I meant.

Asking for the set of all (X,Y) for which member(X,Y) is true is going to get stuck in an infinite loop without ever generating any results. There is some connection between Russell's paradox, and queries that generate a useless infinite loop in a query language. You can look at a statement of the paradox and feel that it leads to two contradictory proofs; but if you define set theory and the algorithms used to answer queries, I think (but am not sure) you will find that you can prove what the algorithm will produce when run.

Replies from: thomblake, Tyrrell_McAllister, PhilGoetz
comment by thomblake · 2010-05-06T13:24:12.378Z · LW(p) · GW(p)

Asking for the set of all (X,Y) for which member(X,Y) is true is going to get stuck in an infinite loop without ever generating any results.

It looks like you're using a computational model to understand Russell's paradox. Should you be thinking of Kleene-Rosser?

Replies from: PhilGoetz
comment by PhilGoetz · 2010-05-07T16:25:42.339Z · LW(p) · GW(p)

Thanks for the pointer. I was saying that my guess is that Russell's paradox reduces to something like the Kleene-Rosser paradox, which is resolved when you spell out the order of execution; and that we shouldn't worry about paradoxes that are resolved when you spell out the order of execution, because a "formal logic" without the details for performing computations spelled out is not sufficiently formal.

comment by Tyrrell_McAllister · 2010-05-06T03:46:59.325Z · LW(p) · GW(p)

The "member" predicate is not a predicate like "shaves"; it is a primitive predicate, like "and", with special semantics. If you want to generate the set of X for which member(X,Y) is true, you need Y to be bound. To generate the set of Y for which member(X,Y) is true, you need X to be bound. That's what I meant.

That's not a problem, because, like I said, both arguments of member(X,X) are bound in Russell's paradox. They are both bound by the phrase "The set of all X such that..." that precedes them.

Asking for the set of all for which member(X,Y) is true is going to get stuck in an infinite loop without ever generating any results.

Again, I need to know how you are encoding sets before I can answer any question about what an algorithm would do on given input.

but if you define set theory and the algorithms used to answer queries, I think (but am not sure) you will find that you can prove what the algorithm will produce when run.

One wouldn't want all queries within set theory to be decidable by some algorithm. If that were the case, then set theory would not be able to capture enough of arithmetic.

Replies from: PhilGoetz
comment by PhilGoetz · 2010-05-07T16:16:45.268Z · LW(p) · GW(p)

One wouldn't want all queries within set theory to be decidable by some algorithm.

I'm aware of that. This particular paradox, though, is one where the question seems to be decidable both ways, not one where it's undecidable.

comment by PhilGoetz · 2010-05-07T16:20:19.153Z · LW(p) · GW(p)

My talk of "primitive predicates" makes sense only if you're talking about a computational implementation.

In a truly formal logic, you do determine the definitions of the predicates purely by the axioms they are found in, including a set of axioms that are production rules, which therefore spell out execution (although order of execution is still usually ambiguous). ZF set theory is like that.

But I've never seen a presentation of Russell's paradox which defined what "not" and "member-of" mean in an axiomatic way. I don't think that had even been done for set theory at the time Russell proposed his paradox. The presentations I've seen always rely on your intuition about what "not" and "member of" mean.

comment by PhilGoetz · 2010-05-05T21:40:54.902Z · LW(p) · GW(p)

The predicate shaves(X,X) is very different from the predicate member-of(X,X). To find values of X that satisfy the first, you simply enumerate all of your facts about

shaves(al, fred)
shaves(al, joe)
shaves(al, al)

and look for one that unifies with shaves(X,X).

You aren't going to find the X that satisfy member(X,X) by unification. It isn't even ever true for finite sets, except by convention for the empty set.
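The finite scan described above for shaves(X,X) can be sketched in a few lines. This is my own toy illustration: the "engine" here is just a list comprehension over ground facts, not real Prolog unification, but it shows why the shaves query terminates.

```python
# Matching the query shaves(X, X) against a store of ground facts is a
# finite scan: try each stored fact and keep the ones where both
# arguments coincide.  (The facts are the ones listed in the comment.)

facts = [("al", "fred"), ("al", "joe"), ("al", "al")]

self_shavers = [x for (x, y) in facts if x == y]
assert self_shavers == ["al"]
```

No comparable finite fact store exists for member(X,X), which is the asymmetry the comment is pointing at.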

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2010-05-05T22:12:07.657Z · LW(p) · GW(p)

It isn't even ever true for finite sets, except by convention for the empty set.

I think that you're confusing the element-of relation with the subset-of relation. Or something. But then, all sets are subsets of themselves, including finite ones, so I'm not sure what you were thinking.

At any rate, the empty set is not an element of itself according to any convention that I've ever seen.

You aren't going to find the X that satisfy member(X,X) by unification.

I'm not sure how to respond to this until you answer my question in this comment.

Replies from: PhilGoetz
comment by PhilGoetz · 2010-05-08T05:10:16.845Z · LW(p) · GW(p)

At any rate, the empty set is not an element of itself according to any convention that I've ever seen.

You're right; I was mis-remembering. It can't be, or 1 := 0 ∪ {0} wouldn't work.

comment by JoshuaZ · 2010-05-05T02:57:49.296Z · LW(p) · GW(p)

Russell's paradox, as usually stated, doesn't actually prove anything, because it's usually given as a statement in English about set theory.

It is presented that way to make a point that naive set theory isn't workable.

I don't know whether Russell originally stated it in mathematical terms, in which case it would prove something. I've read numerous accounts of it, yet never seen a mathematical presentation. Google fails me at present.

It is presented rigorously in most intro set theory textbooks. In ZFC or any other standard set of set theory axioms, Russell's paradox ceases to be a paradox and the logic is instead a theorem of the form "For any set A, there exists a set B such that B is not an element of A."
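A finite illustration of that theorem (my own sketch using frozensets, not from the comment): for a given set A, the set B = {x in A : x not in x}, formed by separation from A, cannot itself be an element of A.

```python
def russell_set(A):
    """B = {x in A : x not in x}, formed by separation from A."""
    return frozenset(x for x in A if x not in x)

empty = frozenset()
A = frozenset({empty, frozenset({empty})})   # the set {empty, {empty}}
B = russell_set(A)

# If B were an element of A, then B in B iff B not in B -- a
# contradiction.  So B witnesses "there is a set not in A".
assert B not in A
```

Since frozensets never contain themselves, B here equals A itself, and A is not an element of A, which is exactly the well-founded case of the theorem.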

I don't count a statement of the form "x such that x is not a member of x" as mathematical, because my intuition doesn't want me to talk about sets being members of themselves unless I have a mathematical formalism for sets and set membership for which that works. It's also not happy about the quantification of x in that sentence; it's a recursive quantification. Let's put it this way: Any computer program I have ever written to handle quantification would crash or loop forever if you tried to give it such a statement.

Well, a standard formalism (again such as ZFC) is perfectly happy talking about sets that recur on themselves this way. Indeed, it is actually difficult to make a system of set theory that doesn't allow you to at least talk about sets like this.

I'm curious, do you consider Cantor's diagonalization argument to be too recursive? What about Turing's Halting Theorem?

comment by Tyrrell_McAllister · 2010-05-05T00:44:51.367Z · LW(p) · GW(p)

Let's put it this way: Any computer program I have ever written to handle quantification would crash or loop forever if you tried to give it such a statement.

What encoding scheme would you use to encode arbitrary, possibly infinite, sets in a computer?

Replies from: PhilGoetz
comment by PhilGoetz · 2010-05-08T05:11:20.861Z · LW(p) · GW(p)

I could, worst case, use the encoding scheme you use to write them down on paper when you prove things about them.

comment by roland · 2010-05-05T04:42:07.167Z · LW(p) · GW(p)

Great post, Alicorn! I think there are some arguments similar to "But somebody would have noticed" that are used to discredit any unusual hypothesis, and that I have already read several times on LW. They are:

Regarding conspiracy theories:

  1. "If this were true some whistle blower would step forward."
  2. "You are privileging the hypothesis because the prior probability of it is much to low."