Posts

How would you talk a stranger off the ledge? 2012-01-23T14:52:05.813Z
Would the world be better off without 50% of the people in it? 2010-12-14T05:19:10.076Z

Comments

Comment by MoreOn on Rationality Quotes January 2013 · 2013-05-08T05:20:39.949Z · LW · GW

I am firmly atheist right now, lounging in my mom's warm living room in a comfy armchair, tippity-typing on my keyboard. But when I go out to sea, alone, and the weather turns, a storm picks up, and I'm caught out after dark, and thanks to a rusty socket only one bow light works... well, then I pray to every god I know, starting with Poseidon, and sell my soul to the devil while I'm at it.

I'm not sure why I do it.

Maybe that's what my brain does to occupy the excess processing time? In high school, back when I still had it memorized, I used to recite the litany against fear. But that's not quite it. When the waves toss my little boat around and I ask myself why I'm praying, the answer invariably comes out: "It's never made things worse. So the Professor God isn't punishing me for my weakness. Who knows... maybe it will work? Even if not, prayer beats panic as a system idle process."

Comment by MoreOn on How would you talk a stranger off the ledge? · 2012-01-23T17:19:59.623Z · LW · GW

I'd love to redirect everyone in my blast radius who's ever mentioned suicide to a hotline, but somehow I think that's the first thing just about anyone says when someone mentions suicide... to the point where "get professional help" is synonymous with "I don't want to deal with this personally."

In a similar vein, do suicide hotlines actually work? I'm reading up on them right now, and I found this alarming article, which basically says that the call centers sometimes screw up, but that overall hotlines work sort of well, and that lapses need to be fixed with better training. I can't find any specifics about what that training entails; I'd love to read about what those hotline volunteers actually say to the strangers who call in.

Comment by MoreOn on Why Our Kind Can't Cooperate · 2012-01-01T19:51:39.708Z · LW · GW

“If I agree, why should I bother saying it? Doesn’t my silence signal agreement enough?”

That’s been my non-verbal reasoning for years now! Not just here: everywhere. People have been telling me, with varying degrees of success, that I never even speak except to argue. To those who have been successful in getting through to me, I would respond, “Maybe it sounds like I’m arguing, but you’re WRONG. I’m not arguing!”

Until I read this post, I wasn’t even aware that I was doing it. Yikes!

Comment by MoreOn on [SEQ RERUN] Why is the Future So Absurd? · 2011-08-21T02:07:03.623Z · LW · GW

Gotcha. I wasn't aware that there had been more discussion about sequence reruns than that one thread.

Comment by MoreOn on Magic Tricks Revealed: Test Your Rationality · 2011-08-20T21:40:04.214Z · LW · GW

If that teacher's students were absolutely not expecting a lie, then another out-of-the-box question based on physics they should understand wouldn't trick them a second time: the trust has been broken. On the other hand, if the problem is their inability to be creative enough, they won’t become creative just because they learned not to trust the teacher.

My high school physics teacher liked tricking us. To demonstrate a point about reflection off light and dark surfaces, he covered up the laser pointer while shining it at a black binder. He put a compass next to a magnet to throw us off. These tricks were rare enough that we didn’t expect them every time, but we also knew not to blindly trust his setup. Still, there were plenty of people who fell for them every time.

And then came the torque wheel, a gyroscopic bicycle wheel almost exactly like the one in this video. My first reaction, based on the physics I did understand at the time, was, “That’s impossible!” Then the teacher told us it wasn’t a trick. He wouldn’t lie, but my reaction was still, “That’s impossible!” If I remember correctly, my hypothesis involved a hinge at the end of a solid string. Eventually, the teacher just had me hold the wheel and spun it… and the friggin’ thing moved on its own!!! I even checked that the axle and the rim didn’t contain any magicary before I was able to admit, “Huh, I guess it is possible.”

A couple of years after that, another physics teacher inadvertently placed a compass on top of a table that had the classroom computer inside it. He then had us learn N/S/E/W by pointing. I was the only moron in a 200-person class pointing to the “wrong” North.

Comment by MoreOn on [SEQ RERUN] Why is the Future So Absurd? · 2011-08-20T21:28:27.405Z · LW · GW

Discuss the post here (rather than in the comments to the original post).

This comment by alexflint doesn't look like it got much exposure back when sequence reruns were first discussed.

Maybe the template shouldn't be instructing people to leave comments here?

Comment by MoreOn on If You Demand Magic, Magic Won't Help · 2011-08-20T07:47:14.368Z · LW · GW

So, magic is easy. Then everyone else is doing it, too. (And you're spending a good portion of your learning curve struggling with the magical equivalent of flipping a light switch.) Easy magic is even more mundane than difficult magic.

By comparison, how many times today have you thought, "Wow! I'm really glad I have eyesight!" Well, now you have. But it's not something you go around thinking all the time. Why do you expect that you'd think "Wow! I'm really glad I have easy magic!" any more frequently?

Comment by MoreOn on Joy in Discovery · 2011-08-20T03:01:13.308Z · LW · GW

The problem with routine discoveries, like my most recent discovery of how a magic trick works or the QED euphoria I get after nailing down a proof, is that the feeling doesn't last long. I can't output 5 proofs or solutions an hour.

Comment by MoreOn on Availability · 2011-08-20T01:45:27.368Z · LW · GW

Subjects thought that accidents caused about as many deaths as disease.

Lichtenstein et aliōrum research subjects were 1) college students and 2) members of a chapter of the League of Women Voters. Students judged accidents to be 1.62 times more likely than diseases; league members judged them 11.6 times more likely (geometric means). Sadly, no standard deviation was given. The true value is 15.4. Note that only 57% of students and 79% of league members got the direction right, which further biased the geometric averages down.

There were some messed-up answers. For example, students thought that tornadoes killed more people than asthma, when in fact asthma kills 20x more people than tornadoes. All accidents are about as likely as stomach cancer (well, 1.19x more likely), but they were judged to be 29 times more likely. Pairs like these were a minority, though; subjects were generally only bad at guessing which cause of death was more frequent when the ratio was less than 2:1. (The paper includes graphs of these judgments.)
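To make the geometric-mean point concrete, here's a toy calculation (a sketch with made-up judgment numbers, not the study's data): subjects who get the direction wrong report ratios below 1, and even a couple of those drag the group's geometric mean far below the true ratio.

```python
# Toy sketch, not the study's data: geometric mean of hypothetical
# accident-vs-disease likelihood ratios (values > 1 mean "accidents more likely").
from math import exp, log

def geometric_mean(ratios):
    return exp(sum(log(r) for r in ratios) / len(ratios))

right_direction = [5, 20, 50, 100]           # all got the direction right
with_errors = right_direction + [0.5, 0.2]   # two subjects picked the wrong direction

print(round(geometric_mean(right_direction), 1))  # ~26.6
print(round(geometric_mean(with_errors), 1))      # ~6.1, dragged down by the two wrong-direction answers
```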

The following excerpt is from Judged Frequency Of Lethal Events by Lichtenstein, Slovic, Fischhoff, Layman and Combs.

Instructions. The subjects' instructions read as follows:

Each item in part one consists of two different possible causes of death. The question you are to answer is: Which cause of death is more likely? We do not mean more likely for you, we mean more likely in general, in the United States.

Consider all the people now living in the United States—children, adults, everyone. Now supposing we randomly picked just one of those people. Will that person more likely die next year from cause A or cause B? For example: Dying in a bicycle accident versus dying from an overdose of heroin. Death from each cause is remotely possible. Our question is, which of these two is the more likely cause of death?

For each pair of possible causes of death, A and B, we want you to mark on your answer sheet which cause you think is MORE LIKELY. Next, we want you to decide how many times more likely this cause of death is, as compared with the other cause of death given in the same item. The pairs we use vary widely in their relative likelihood. For one pair, you may think that the two causes are equally likely. If so, you should write the number 1 in the space provided for that pair. Or, you may think that one cause of death is 10 times, or 100 times, or even a million times as likely as the other cause of death. You have to decide: How many times as likely is the more likely cause of death? Write the number in the space provided. If you think it's twice as likely, write 2. If it's 10 thousand times as likely, write 10,000, and so forth.

There were more instructions about relative likelihoods and scales. And there was a glossary to help the people understand some categories.

All accidents: includes any kind of accidental event; excludes diseases and natural disasters (floods, tornadoes, etc.).

All cancer: includes leukemia.

Cancer of the digestive system: includes cancer of stomach, alimentary tract, esophagus, and intestines.

Excess cold: freezing to death or death by exposure.

Nonvenomous animal: dogs, bears, etc.

Venomous bite or sting: caused by snakes, bees, wasps, etc.

Note that there was nothing about “old age” anywhere. There is no such thing as “death by old age,” but I’ll risk generalizing from my own example to say that some people think there is. And even those who know there isn’t might think, despite the instructions, “Oh, darnit, I forgot that old people count, too.”

I wish I’d tested myself BEFORE reading the correct answer. As near as I could tell, I would’ve been correct about homicide vs. suicide, but wrong about diseases vs. accidents (“Old people count, too!” facepalm). I wouldn’t even bother guessing the relative frequency. I didn’t have a clue.

When I need to know the number of square feet in an acre, or the world population, it takes me seconds to get from the question to the answer. I dutifully spent ~20 minutes googling the CDC website, looking for this. It wasn’t even a heroic effort, but it’s not an effort I, or most other people, would casually expend on every question that starts with, “Huh, I wonder….” (We should, but we don’t.)

As for what I found: I dare you, click on my link and see table 9 (http://www.cdc.gov/NCHS/data/nvsr/nvsr58/nvsr58_19.pdf). Did you? If you did, you would’ve seen that Zubon was right in this comment. Accidents win by quite a margin in the 15-44 demographic. I couldn’t find 1978 data, but I’d expect it to be similar (Lichtenstein et al.’s tables are no help because they pool all age groups).

I spent the last two hours looking at these tables. Ask me anything! … I won’t be able to answer. Unless I have the CDC tables in front of me, I might not even do much better on Lichtenstein et aliōrum questionnaire than a typical subject (well, at least I know that tornado deaths happen with some frequency and measles deaths practically don’t—I’ll get that question right). I suppose that people who haven’t looked at the CDC tables get all of their information from fragmented reports like “Drive safely! Traffic accidents are the leading cause of death among teenagers who […]!” or “Buy our drug! […] is the leading cause of death in […] over 55!” or “5-star exhaust pipe crash safety rating!” Humans aren’t good at integrating these fragments.

Memory is a bad guide to probability estimates. But what’s the alternative? Should we carry tables around with us?

Personally, I hope that someday the data that’s already out there in the public domain will be made easily accessible. I hope that finding the relative frequencies of measles-related and tornado-related deaths will be as quick as finding the number of square feet in an acre or the world population, and that political squabbles will focus on whether or not certain data should be in the public domain (“You can’t force hospitals to put their data online! That violates the patients’ right to privacy!” “Well, but….”)

Note: repost from SEQ RERUN.

Comment by MoreOn on Introduction to the Sequence Reruns · 2011-08-20T01:43:44.458Z · LW · GW

Seconded.

At this point, [SEQ RERUN] posts get very few responses. Barely any discussion happens in them. Might as well post comments on the original post and hope someone responds in a couple of months.

Comment by MoreOn on Trust in Math · 2011-08-20T00:06:50.873Z · LW · GW

"Huh, if I didn't spot this flaw at first sight, then I may have accepted some flawed congruent evidence too. What other mistaken proofs do I have in my head, whose absurdity is not at first apparent?"

Has this question ever been answered? It is one of those things I go around worrying about.

Comment by MoreOn on Magic Tricks Revealed: Test Your Rationality · 2011-08-19T22:47:00.629Z · LW · GW

Bwahahahahahaha! I'll admit I kinda freaked out at first.

Comment by MoreOn on [SEQ RERUN] Availability · 2011-08-19T19:57:47.391Z · LW · GW

Subjects thought that accidents caused about as many deaths as disease.

Lichtenstein et aliōrum research subjects were 1) college students and 2) members of a chapter of the League of Women Voters. Students judged accidents to be 1.62 times more likely than diseases; league members judged them 11.6 times more likely (geometric means). Sadly, no standard deviation was given. The true value is 15.4. Note that only 57% of students and 79% of league members got the direction right, which further biased the geometric averages down.

There were some messed-up answers. For example, students thought that tornadoes killed more people than asthma, when in fact asthma kills 20x more people than tornadoes. All accidents are about as likely as stomach cancer (well, 1.19x more likely), but they were judged to be 29 times more likely. Pairs like these were a minority, though; subjects were generally only bad at guessing which cause of death was more frequent when the ratio was less than 2:1. (The paper includes graphs of these judgments.)

The following excerpt is from Judged Frequency Of Lethal Events by Lichtenstein, Slovic, Fischhoff, Layman and Combs.

Instructions. The subjects' instructions read as follows:

Each item in part one consists of two different possible causes of death. The question you are to answer is: Which cause of death is more likely? We do not mean more likely for you, we mean more likely in general, in the United States.

Consider all the people now living in the United States—children, adults, everyone. Now supposing we randomly picked just one of those people. Will that person more likely die next year from cause A or cause B? For example: Dying in a bicycle accident versus dying from an overdose of heroin. Death from each cause is remotely possible. Our question is, which of these two is the more likely cause of death?

For each pair of possible causes of death, A and B, we want you to mark on your answer sheet which cause you think is MORE LIKELY. Next, we want you to decide how many times more likely this cause of death is, as compared with the other cause of death given in the same item. The pairs we use vary widely in their relative likelihood. For one pair, you may think that the two causes are equally likely. If so, you should write the number 1 in the space provided for that pair. Or, you may think that one cause of death is 10 times, or 100 times, or even a million times as likely as the other cause of death. You have to decide: How many times as likely is the more likely cause of death? Write the number in the space provided. If you think it's twice as likely, write 2. If it's 10 thousand times as likely, write 10,000, and so forth.

There were more instructions about relative likelihoods and scales. And there was a glossary to help the people understand some categories.

All accidents: includes any kind of accidental event; excludes diseases and natural disasters (floods, tornadoes, etc.).

All cancer: includes leukemia.

Cancer of the digestive system: includes cancer of stomach, alimentary tract, esophagus, and intestines.

Excess cold: freezing to death or death by exposure.

Nonvenomous animal: dogs, bears, etc.

Venomous bite or sting: caused by snakes, bees, wasps, etc.

Note that there was nothing about “old age” anywhere. There is no such thing as “death by old age,” but I’ll risk generalizing from my own example to say that some people think there is. And even those who know there isn’t might think, despite the instructions, “Oh, darnit, I forgot that old people count, too.”

I wish I’d tested myself BEFORE reading the correct answer. As near as I could tell, I would’ve been correct about homicide vs. suicide, but wrong about diseases vs. accidents (“Old people count, too!” facepalm). I wouldn’t even bother guessing the relative frequency. I didn’t have a clue.

When I need to know the number of square feet in an acre, or the world population, it takes me seconds to get from the question to the answer. I dutifully spent ~20 minutes googling the CDC website, looking for this. It wasn’t even a heroic effort, but it’s not an effort I, or most other people, would casually expend on every question that starts with, “Huh, I wonder….” (We should, but we don’t.)

As for what I found: I dare you, click on my link and see table 9 (http://www.cdc.gov/NCHS/data/nvsr/nvsr58/nvsr58_19.pdf). Did you? If you did, you would’ve seen that Zubon was right in this comment. Accidents win by quite a margin in the 15-44 demographic. I couldn’t find 1978 data, but I’d expect it to be similar (Lichtenstein et al.’s tables are no help because they pool all age groups).

I spent the last two hours looking at these tables. Ask me anything! … I won’t be able to answer. Unless I have the CDC tables in front of me, I might not even do much better on Lichtenstein et aliōrum questionnaire than a typical subject (well, at least I know that tornado deaths happen with some frequency and measles deaths practically don’t—I’ll get that question right). I suppose that people who haven’t looked at the CDC tables get all of their information from fragmented reports like “Drive safely! Traffic accidents are the leading cause of death among teenagers who […]!” or “Buy our drug! […] is the leading cause of death in […] over 55!” or “5-star exhaust pipe crash safety rating!” Humans aren’t good at integrating these fragments.

Memory is a bad guide to probability estimates. But what’s the alternative? Should we carry tables around with us?

Personally, I hope that someday the data that’s already out there in the public domain will be made easily accessible. I hope that finding the relative frequencies of measles-related and tornado-related deaths will be as quick as finding the number of square feet in an acre or the world population, and that political squabbles will focus on whether or not certain data should be in the public domain (“You can’t force hospitals to put their data online! That violates the patients’ right to privacy!” “Well, but….”)

Comment by MoreOn on Preference For (Many) Future Worlds · 2011-08-19T02:50:52.986Z · LW · GW

People have been gambling for millennia. Most of the people who have lost bets have done so without killing themselves. Much can be learned from this. For example, that killing yourself is worse than not killing yourself. This intuition is one that should follow over to ‘quantum’ gambles rather straightforwardly.

You weren't one of those people.

That non-ancestor of yours who played Quantum Russian Roulette with fifteen others is dead from your perspective, his alleles slightly underrepresented in the gene pool. In fact, if there were an allele for "QRoulette? Sure!" that had caused these 16 people to gamble, then it lost 15+ of its copies from your ancestral population. Post-gambling suicide really isn't a good evolutionary strategy.

But, from the perspective of your non-ancestor, he got to live in his own perfect little world with 16x his wealth (barring being crippled--but then, he'd only been up against 4 bits).

Comment by MoreOn on Preference For (Many) Future Worlds · 2011-08-19T02:34:21.823Z · LW · GW

Reality wouldn't be mean to you on purpose.

Of course there would be worlds where something would have gone horribly wrong if you won the lottery. But there's no reason for you to expect that you'd wake up in one of those worlds because you won the lottery. The difference between your "horribly wrong" worlds (don't care about money/ inflation / no money) and wedrifid's (lost the lottery and became crippled) is that waking up in wedrifid's is caused by one's participation in the lottery.

Comment by MoreOn on The Mystery of the Haunted Rationalist · 2011-02-26T01:47:22.885Z · LW · GW

A Toilet Flush Monster would climb out of my toilet whenever I flushed at night. If I could get back into bed, completely covered by a blanket, before it fully climbed out (i.e., before the tank filled with water and stopped making noises), then I was safe. All lights had to be off the whole time, or else the monster could see me.

Don't laugh.

In one of my childhood's flashes of clarity, I must have wondered how I knew about the monster if I'd never actually seen it. So one day I watched the toilet flush, and no monster came out. I checked with the lights off and with the lights on, and nothing. Since then, I've been able to go to the bathroom with the lights on.

Well, I defeated the monster. But I'm still a little afraid of using a flashlight at night, or stepping into the floodlight when there's a lot of darkness around. So the monster vacated the toilet, but continues to haunt me.

PS: For fear that my statement may be misinterpreted: I don't actually believe in the monster, duh! But I still show symptoms of the Toilet Flush Monster disease.

Comment by MoreOn on Making Beliefs Pay Rent (in Anticipated Experiences) · 2011-02-26T00:31:18.918Z · LW · GW

Taboo'ed. See edit.

Although I have a bone to pick with the whole "belief in belief" business, right now I'll concede that people actually do carry around beliefs that don't lead to anticipated experiences. Wulky Wilkinsen being a "post-utopian" (as interpreted from my current state of knowing nothing about Wulky Wilkinsen or post-utopians) is a belief that doesn't pay any rent at all, not even with a paper that says "moneeez."

Comment by MoreOn on Use curiosity · 2011-02-25T23:38:11.515Z · LW · GW

The fact that I haven't noticed the same thing in casual conversations either speaks volumes about my conversation skills (or lack thereof), or suggests that maybe not all people are as trigger-happy on the ignore button as you suggest.

Comment by MoreOn on Making Beliefs Pay Rent (in Anticipated Experiences) · 2011-02-25T23:34:40.781Z · LW · GW

Two people have semantically different beliefs.

Both beliefs lead them to anticipate the same experience.

EDIT: In other words, two people might think they have different beliefs, but when it comes to anticipated experiences, they have similar enough beliefs about the properties of sound waves, falling trees, recorders, and so on that they anticipate the same experience.

Comment by MoreOn on Making Beliefs Pay Rent (in Anticipated Experiences) · 2011-02-25T23:29:11.171Z · LW · GW

That said, I don't actually know anyone for whom this is true.

I don't know too many theist janitors, either. Doesn't mean they don't exist.

From my perspective, it sucks to be them. But once you're them, all you can do is minimize your misery by finding some local utility maximum and staying there.

Comment by MoreOn on Making Beliefs Pay Rent (in Anticipated Experiences) · 2011-02-25T23:24:56.584Z · LW · GW

If my tenants paid rent with a piece of paper that said "moneeez" on it, I wouldn't call it paying rent.

In your view, don't all beliefs pay rent in some anticipated experience, no matter how bad that rent is?

Comment by MoreOn on Making Beliefs Pay Rent (in Anticipated Experiences) · 2011-02-25T23:19:03.824Z · LW · GW

"Smart and beautiful" Joe is being Pascal's-mugged by his own beliefs. His anticipated experiences lead to exorbitantly high utility. When failure costs (relatively) little, it subtracts little utility by comparison.

I suppose you could use the same argument for the lottery-playing Joe. And you would realize that people like Joe, on average, are worse off. You wouldn't want to be Joe. But once you are Joe, his irrationality looks different from the inside.
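To spell out the arithmetic (with numbers I'm making up purely for illustration): as long as Joe imagines a huge payoff and books the rejections as cheap, the expected utility of acting on his belief comes out positive from the inside.

```python
# Made-up numbers, just to illustrate the "mugged by his own beliefs" arithmetic.
p_success = 0.01      # Joe's (hypothetical) chance that an approach works
payoff = 1_000        # the exorbitant utility Joe anticipates if it does
cost = 5              # how little a brush-off feels like it costs him

expected_utility = p_success * payoff - (1 - p_success) * cost
print(round(expected_utility, 2))   # 5.05 > 0: from the inside, the gamble looks worth taking
```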

Comment by MoreOn on Making Beliefs Pay Rent (in Anticipated Experiences) · 2011-02-25T18:45:42.714Z · LW · GW

But why do beliefs need to pay rent in anticipated experiences? Why can’t they pay rent in utility?

If some average Joe believes he’s smart and beautiful, and that gives him utility, is that necessarily a bad thing? Joe approaches a girl in a bar, dips his sweaty fingers in her iced drink, cracks a piece of ice in his teeth, pulls it out of his mouth, shoves it in her face for demonstration, and says, “Now that I’ve broken the ice—”

She thinks: “What a butt-ugly idiot!” and gets the hell away from him.

Joe goes on happily believing that he’s smart and beautiful.

For myself, the answer is obvious: my beliefs are means to an end, not ends in themselves. They’re utility producers only insofar as they help me accomplish utility-producing operations. If I buy a stock believing that its price will go up, I’d better hope my belief pays its rent in correct anticipation, or else it goes out the door.

But for Joe? If he has utility-pumping beliefs, then why not? It’s not like he would get any smarter or prettier by figuring out he’s been a butt-ugly idiot this whole time.

Comment by MoreOn on How to Not Lose an Argument · 2011-02-25T18:08:30.030Z · LW · GW

More generally you cannot rigorously prove that for all integers n > 0, P(n) -> P(n+1) if it is not true, and in particular if P(1) does not imply P(2).

Sorry, I can't figure out what you mean here. Of course you can't rigorously prove something that's not true.

I have a feeling that our conversation boils down to the following:

Me: There exists a case where induction fails at n=2.

You: For any given case, the fact that the induction doesn’t fail at n=2 doesn’t mean the induction doesn’t fail. Conversely, if an induction fails, that doesn’t mean it fails at n=2. You have to look carefully at why and where it fails instead of defaulting to “it works at n=2, therefore it works.”

Is that correct, or am I misinterpreting?

Anyway, let's suppose you're making a valid point. Do you think my interlocutors were arguing this very point? Or do you think they were arguing to put me back in my place, as TheOtherDave suggests, or that there was some similar human issue that had nothing to do with the actual argument?

Comment by MoreOn on How to Not Lose an Argument · 2011-02-23T22:27:18.458Z · LW · GW

"I refuse to cede you the role of instructor by letting you define the hypothetical."

You know, come to think of it, that's actually a very good description of the second person... who is, by the way, my dad.

I am a lot more successful if I adopt the stance of "I am thinking about a problem that interests me," and if they express interest, explaining the problem as something I am presenting to myself, rather than to them. Or, if they don't, talking about something else.

This hasn't ever occurred to me, but I'll try it the next time a similar situation arises.

Comment by MoreOn on How to Not Lose an Argument · 2011-02-23T22:19:28.452Z · LW · GW

But why can you take a horse from the overlap? You can if the overlap is non-empty. Is the overlap non-empty? It has n-1 horses, so it is non-empty if n-1 > 0. Is n-1 > 0? It is if n > 1. Is n > 1? No, we want the proof to cover the case where n=1.

That's exactly what I was trying to get them to understand.

Do you think that they couldn't, and that's why they started arguing with me on irrelevant grounds?

Comment by MoreOn on How to Not Lose an Argument · 2011-02-23T21:52:33.085Z · LW · GW

.... The first n horses and the second n horses have an overlap of n-1 horses that are all the same color. So the first and the last horse have to be the same color. Sorry, I thought that was obvious.

I see your point, though. This time, I was trying to reduce the word count because the audience is clearly intelligent enough to make that leap of logic. I can say the same for both of my "opponents" described above, because both of them are well above average intellectually. I honestly don't remember whether I took that extra step in real life. If I didn't, do you think that was the issue both people had with my proof?

I have a feeling that the second person's problem with it wasn't nitpicking over details, though. I feel like something else made him angry.

Comment by MoreOn on How to Not Lose an Argument · 2011-02-23T21:42:44.599Z · LW · GW

I suspect that I lost the second person way before horses even became an issue. When he started picking on my words, "horses" and "different world" and "hypothetical person" didn't really matter anymore. He was just angry. What he was saying didn't make sense from that point on. For whatever reason, he stopped responding to logic.

But I don't know what I said to make him this angry in the first place.

Comment by MoreOn on How to Not Lose an Argument · 2011-02-23T21:39:39.370Z · LW · GW

I don't think I ever got to my "ultimate" conclusion (that all of the operations that appear in step n must appear in the basis step).

I was trying to use this example where the proof failed at n=2 to show that it's possible in principle for a (specific other) proof to fail at n=2. Higher-order basis steps would be necessary only if there were even more operations.

Comment by MoreOn on How to Not Lose an Argument · 2011-02-23T21:35:28.492Z · LW · GW

Induction based on n=1 works sometimes, but not always. That was my point.

The problem with the horses of one color problem is that you are using sloppy verbal reasoning that hides an unjustified assumption that n > 1.

I'm not sure what you mean. I thought I stated it each time I was assuming n=1 and n=2.

Comment by MoreOn on How to Not Lose an Argument · 2011-02-23T20:39:12.304Z · LW · GW

Most of the comments in this discussion focused on topics that are emotionally significant for your "opponent." But here's something that happened to me twice.

I was trying to explain to two intelligent people (separately) that mathematical induction should start with the second step, not the first. In my particular case, a homework assignment had us do induction on the rows of a lower triangular matrix as it was being multiplied by various vectors; the first row only had multiplication, the second row both multiplication and addition. I figured it was safer to start with a more representative row.

When a classmate disagreed with me, I found this example on Wikipedia. His counter-argument was that this wasn't a case of induction failing at n=2. He argued that the hypothesis was worded incorrectly, akin to the proof that a cat has nine tails. I voiced my agreement with him: “one horse of one color” is only semantically similar to “two horses of one color,” and the two are in fact as different as “No cat (1)” and “no cat (2).” I tried to get him to come to this conclusion on his own. Midway through, he caught me and said that I was misinterpreting what he was saying.
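For concreteness, here's a minimal sketch (my own toy code, with hypothetical horse names) of exactly where the standard proof leans on the overlap: the inductive step needs the middle n-1 horses to exist, and at n=1 they don't.

```python
# Toy sketch of the fallacious "all horses are one color" step. The proof colors
# the first n horses and the last n horses separately, then transfers "same color"
# across their overlap -- the middle n-1 horses -- which must be non-empty.
def overlap_size(herd):
    first_n = herd[:-1]
    last_n = herd[1:]
    return len(set(first_n) & set(last_n))

print(overlap_size(["Alice", "Bob"]))           # 0: at n=1 the glue is missing,
                                                # so "1 horse, 1 color" never reaches n=2
print(overlap_size(["Alice", "Bob", "Carol"]))  # 1: from n=2 onward the step is fine
```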

The second person is not a mathematician, but he understands the principles of mathematical induction (I made sure of that before telling him about horses). And this led to one of the most frustrating arguments I'd ever had in my life. Here's our approximate, abridged dialogue (sans the colorful language):

Me: One horse is of one color. Suppose every n horses are of one color. Add the n+1st horse, and take n out of those horses. They’re all of one color by assumption. Remove 1 horse and take the one that’s been left out. You again have n horses, so they must be of one color. Therefore, all horses are of one color.

Him: This proof can't be right because its result is wrong.

Me: But then, suppose we do the same proof, but starting with n=2 horses. That proof would be correct.

Him: No, it won’t be, because the result is still wrong. Horses have different colors.

Me: Fine, then. Suppose this is happening in a different world. For all you know, all horses there can be of one color.

Him: There’re no horses in a different world. This is pointless. (by this time, he was starting to get angry).

Me: Okay! It’s on someone’s ranch! In this world! If you go look at this person’s horses, every two you can possibly pick are of the same color. Therefore, all of his horses are of the same color.

Him: I don’t know anyone whose horses are of the same color. So they’re not all of one color, and your proof is wrong.

Me: It’s a hypothetical person. Do you agree, for this hypothetical person—

Him: No, I don’t agree because this is a hypothetical person, etc, etc. What kind of stupid problems do you do in math, anyway?

Me: (having difficulties inserting words).

Him: Since the result is wrong, the proof is wrong. Period. Stop wasting my time with this pointless stuff. This is stupid and pointless, etc, etc. Whoever teaches you this stuff should be fired.

Me: (still having difficulties inserting words) … Wikipe—…

Him: And Wikipedia is wrong all the time, and it’s created by regular idiots who have too much time on their hands and don’t actually know jack, etc, etc. Besides, one horse can have more than one color. Therefore, all math is stupid. QED.

THE END.

To the best of my knowledge, neither of these two people were emotionally involved with mathematical induction. Both of them were positively disposed at the beginning of the argument. Both of them are intelligent and curious. What on Earth went wrong here?

^ This is one of the reasons why I shouldn’t start arguments about theism, if I can’t even convince people of a mathematical technicality like this.

Comment by MoreOn on [deleted post] 2011-02-21T22:23:53.695Z

So what you're basically saying is that EDT is vulnerable to Simpson's Paradox?

But then, aren't all conclusions drawn from incomplete data sets potentially at risk from unobserved causal factors? And complete data sets are ridiculously hard (if not impossible) to obtain anyway.
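For what it's worth, here's a toy numerical sketch (my own made-up counts) of the Simpson's-Paradox worry: a treatment can look worse in the pooled data yet better within every subgroup, if some unobserved factor drives both who gets treated and how they fare.

```python
# Toy counts, purely illustrative: (treated successes, treated total,
# untreated successes, untreated total) per severity group.
groups = {
    "mild":   (81, 87, 234, 270),
    "severe": (192, 263, 55, 80),
}

for name, (ts, tt, us, ut) in groups.items():
    print(name, round(ts / tt, 2), ">", round(us / ut, 2))  # treated wins in each group

ts = sum(g[0] for g in groups.values()); tt = sum(g[1] for g in groups.values())
us = sum(g[2] for g in groups.values()); ut = sum(g[3] for g in groups.values())
print("pooled", round(ts / tt, 2), "<", round(us / ut, 2))  # yet loses once the groups are pooled
```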

Comment by MoreOn on Enjoying musical fashion: why not? · 2011-02-21T22:09:10.786Z · LW · GW

I'm sure that you're absolutely technically correct in what you said, but I had to reread it 5 times just to figure out what you meant, and I'm still not sure.

Are you saying that the strategy of indiscriminately liking whatever's popular will lead to worse outcomes because of random effects, as in this experiment that showed that popularity is largely random? Then you're right--because what are the chances that your preferences exactly match the popular choice?

On the other hand, if it so happens that you end up liking something that's popular and you couldn't tell it apart from something similar in a blind test, is it in any way bad that you're getting utility out of it?

Comment by MoreOn on The Hidden Complexity of Wishes · 2011-02-21T19:36:56.432Z · LW · GW

"I wish that the genie could understand a programming language."

Then I could program it unambiguously. I obviously wouldn't be able to program my mother out of the burning building on the spot, but at least there would be a host of other wishes I could make that the genie wouldn't be able to screw up.

Comment by MoreOn on Enjoying musical fashion: why not? · 2011-02-21T18:41:52.342Z · LW · GW

I think alexflint's point is something along the lines of "it's okay to like popular things just because they're popular."

Comment by MoreOn on Ability to react · 2011-02-19T07:01:07.482Z · LW · GW

Thanks for bringing this up. Now that you've said it, I think I'd observed something similar about myself. Like you, I find it far easier to solve internal problems than external. In SCUBA class, I could sketch the inner mechanism of the 2nd stage, but I'd be the last to put my equipment together by the side of the pool.

Your description maps really well onto introversion and extraversion. I searched for psychology articles on extraversion, introversion, and learning styles. A lot of research has been done in that area. For example:

Through the use of EPQ vs. LSQ and CSI questionnaires (see FOOTNOTE below), Furnham (1992) found that extraverts are far more active and far less reflective in their learning. They don’t need to chew over the information before they act on it.

Jackson and Lawty-Jones (1996) confirmed those findings with a similar study but fewer questionnaires (only EPQ vs. LSQ).

Zhang (2001) administered yet more questionnaires (TSI vs. SVSDS) to find that, unsurprisingly, having a social personality makes you more likely to want to employ an external thinking style—that is, to interact with others.

More studies used more questionnaires to find the same thing—e.g., Furnham, 1996; Furnham, Jackson, and Miller, 1999; and many others, I’m sure.

The above seem to answer Swimmer963’s question: extraverted people are the ones who are better at applying the knowledge they have quickly, on the spot, and in collaborative situations. Introverted people need time to reflect. Caveat: this conclusion is based on questionnaire studies, where people described their behavior instead of demonstrating it.

Unfortunately, I couldn’t find a single good experiment that addressed this question directly. But I did find this one…

Suda and Fouts (1980) set up an experiment in which a sixth-grader was led to believe that a little girl in the room next door had fallen off a chair. The sixth-grader then faced a choice: go into the girl’s room (active help), go into the experimenter’s room (passive help), or continue with the “apparent” experiment (something about children’s drawings of people). If the sixth-grader tried to help, the experimenter would return. When a peer was present, the peer was a confederate instructed not to initiate interactions or helping behavior.

I wish they'd included a table of their results in the article. Here’s what I managed to glean from the blurb: overall, more extraverts helped. Extraverts tended to help actively, by going to the girl’s room themselves. Only a couple of introverts tried to help actively; most of those who chose to help at all did so passively.

During the interviews afterwards, half of the introverted kids said that they didn’t actively help because it might’ve been “wrong” to stop drawing.

What conclusions or interpretations can we draw from this experiment, aside from the obvious? Introverted kids might not be as good at reacting to the world around them as extraverted kids. This might be the very same dynamic that leads introverted adults to do worse in the real-world “people situation” of the Quebec test than they do on the written Ontario test.

FOOTNOTE:

A popular classification of personality traits in the articles I’ve read comes from the Eysenck Personality Questionnaire (EPQ). Personality is measured across three dimensions: Extraversion vs. Introversion, Neuroticism vs. Stability, and Psychoticism vs. Socialisation (see the Wikipedia article).

Honey and Mumford’s (1982) Learning Style Questionnaire (LSQ) identifies four learning styles: Activists jump into the problem at hand. “They revel in short-term crisis fire fighting,” as Furnham puts it. Reflectors are careful and methodical; they prefer to stand back and analyze everything carefully before they act. Theorists tend to synthesize the facts they observe into coherent theories. And Pragmatists want what they learn to be practical and applicable, preferably immediately.

Whetten and Cameron’s (1984) Cognitive Style Instrument (CSI) considers learning styles from a slightly different angle than the LSQ, by analyzing how people gather information (perceptive vs. receptive), evaluate information (systematic vs. intuitive), and respond to information (active vs. reflective). The last parameter is the most interesting in this case: it describes whether people act on information quickly (active) or prefer to reflect on it before taking action (reflective).

Sternberg and Wagner’s (1992) Thinking Styles Inventory (TSI) asks 65 questions to classify people across 13 thinking styles. Two of them are external and internal: people who think externally are eager to use their knowledge to interact with people, and those who think internally prefer to work independently.

Short-Version Self-Directed Search (SVSDS) assesses personality types across 6 scales, one of which is social.

REFERENCES:

(1) Furnham A. Personality and Learning Style - a Study of 3 Instruments. Personality and Individual Differences 1992 APR;13(4):429-438.

(2) Jackson C, Lawty-Jones M. Explaining the overlap between personality and learning style. Personality and Individual Differences 1996 MAR;20(3):293-300.

(3) Zhang LF. Thinking styles and personality types revisited. Personality and Individual Differences 2001 OCT 15;31(6):883-894.

(4) Furnham A. The FIRO-B, the learning style questionnaire, and the five-factor model. Journal of Social Behavior and Personality 1996 JUN;11(2):285-299.

(5) Furnham A, Jackson CJ, Miller T. Personality, learning style and work performance. Personality and Individual Differences 1999 DEC;27(6):1113-1122.

(6) Suda W, Fouts G. Effects of Peer Presence on Helping in Introverted and Extroverted Children. Child Dev 1980;51(4):1272-1275.

Comment by MoreOn on Rationalization · 2011-02-18T01:24:48.190Z · LW · GW

But, given that we have a grand total of one data point, I can't narrow it down to a single answer.

Exactly!

Given just one data point, every explanation for why we didn't observe water boiling at 100 degrees C is just an excuse for why it should have boiled at 100 anyway. To honestly answer this question, we would have to have performed additional experiments.

But we already had a conclusion we were supposed to reach--a truth by definition, in our case. Reaching that conclusion under our imperfect circumstances required rationalization.

Comment by MoreOn on Variable Question Fallacies · 2011-02-12T00:38:34.618Z · LW · GW

Well, in that case, the Earth doesn't really go around the sun; it just goes around the center of this galaxy on this weird wiggly orbit, and the sun happens to always be in a certain position with respect to... ouch! See what I did? I babbled myself into ineptness by trying to be "absolutely technically correct." I just can't. Even if I finished that "absolutely technically correct" sentence, I'd probably be wrong in some other way I haven't even imagined yet.

So let's accept the fact that not everything true that gets said is "absolutely technically correct." (True with respect to The Simple Truth; ugh, this semantics is tiring, so I'll quit.)

The not-technically-correct truth for Hunga Huntergatherer and the not-technically-correct truth for Amara Astronomer seem to verbally contradict each other in the same way that Albert::sound verbally contradicts Barry::sound. Is the solution that one is false and the other is true? You take the side of Amara Astronomer (and so do I) because the maps in our heads match her view better than the other.

The fact that these two notions seem contradictory is not because they are contradictory, but because our minds are trying to map them both into the same spot.

Your solution brings us back to analyzing maps. Its analogue is defining Albert::sound to be correct. I don't believe that the point of the article was to define truth. It's practically impossible to do so (see my fumble above). I think the point of the article was that contradictions in our ill-defined language (and concepts and maps that come with it) do not imply contradictions in reality.

Comment by MoreOn on The Scales of Justice, the Notebook of Rationality · 2011-02-04T14:07:39.281Z · LW · GW

You're right, of course.

I'd written the above before I read this defense of researchers, before I knew to watch myself when I'm defending research subjects. Maybe I was too much in shock to actually believe that people would honestly think that.

Comment by MoreOn on Rationalization · 2010-12-26T07:32:51.413Z · LW · GW

Try answering this without any rationalization:

In my middle school science lab, a thermometer showed me that water boiled at 99.5 degrees C and not 100. Why?

Comment by MoreOn on The Correct Contrarian Cluster · 2010-12-25T01:33:05.715Z · LW · GW

Why would you expect someone who has a high correct contrarian factor in one area to have it in another?

Bad beliefs do seem to travel in packs (according to Penn and Teller, and Eliezer, anyhow). Lots of alien conspiracy nuts are government conspiracy nuts as well. That's not surprising, because bad beliefs are easy to pick up and they seem to be tribally maintained, by the same tribes that maintain other bad beliefs.

But good beliefs? Really good ones? They're difficult. They take years. If you don't know of Less Wrong (or similar) as a source of good beliefs, you probably only have one set of good beliefs in your narrow area (like economics or quantum physics but not both). And you know what? You shouldn't be expected to have more, if that's the one set that you use to affect the world.

Barring a few people with interdisciplinary interests, I would expect the economists who are best at predicting the stock market to answer “What's a many-worlds interpretation?” to Eliezer's question.

Comment by MoreOn on Possibility and Could-ness · 2010-12-19T20:15:47.640Z · LW · GW

The bigger something is, the more predetermined it gets.

But I assume that whenever a classical coin is flipped, there was an earlier quantum, world-splitting event which resulted in two worlds

Then your classical coin is a quantum coin that simply made its decision before you observed it. The outcome of a toss of a real classical coin would be the result of so many quantum events that you might as well consider the toss predetermined (my post above elaborates).

Are there thermodynamic coin flips too?

The exact same goes for a thermodynamic coin flip, except a lot fewer quantum events determine the outcome of this one.

In both these cases, each quantum event would split worlds up. But given how many of them happen, each non-quantum coin toss creates 2^(that many) new worlds (here I'm naively assuming binary splits only). In how many of those worlds has the coin landed heads, and in how many has it landed tails? If 99.998% of your zombies in other worlds, as well as you in this one, had observed the coin landing heads, then the outcome was really close to predetermined.

Comment by MoreOn on Possibility and Could-ness · 2010-12-19T20:06:31.241Z · LW · GW

I apologize. That's not how I meant it. All events are quantum, and they add up to reality. What I meant was, is free will lost in the addition?

This intuition is hellishly difficult to describe, but the authors of Quantum Russian Roulette and this post on Quantum Immortality seemed to have it, as well as half the people I’ve ever heard mention Schrödinger's cat. It’s the reason why the life of the person or cat in question is tied to a single quantum event, as opposed to a roll of a classical die that’s determined by a whole lot of quantum events.

Our decisions are tied to the actions of bijillions of quarks.

By analogy, consider tossing fair quantum coins. What’s the probability that between 45% and 55% of the coins would land heads? Obviously that depends on the number of coins. If you toss only 1 coin, that probability is p=0. If you toss 2 coins, p=0.5. As coins --> inf, p --> 1.

The “degrees of probabilistic freedom” are reduced as you increase the number of random actions. The outcome becomes more and more determined.
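A quick numerical sketch of that intuition (my own illustration with a fair-coin binomial): the probability that the heads fraction lands between 45% and 55% starts at zero for one coin, is one half for two, and climbs toward one as the number of coins grows.

```python
# Sketch: P(0.45 <= heads/n <= 0.55) for n fair coin tosses.
from math import comb

def p_between(n, lo=0.45, hi=0.55):
    return sum(comb(n, k) for k in range(n + 1) if lo <= k / n <= hi) / 2 ** n

for n in (1, 2, 100, 1000):
    print(n, round(p_between(n), 4))
# 1 -> 0.0, 2 -> 0.5, 100 -> ~0.73, 1000 -> ~0.999: the outcome looks ever more "determined"
```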

Comment by MoreOn on Possibility and Could-ness · 2010-12-19T18:48:25.379Z · LW · GW

My free will is in choosing which world my consciousness would observe. If I have that choice, I have free will.

There are instances when I don’t have free will. Sprouting wings is physically improbable: I’d estimate the chance of it happening at epsilon, within the constraints of physics, and even then only as a result of random chance, so this option doesn’t really figure in my tree diagram of choices. Likewise, if quantum immortality is correct, then observing myself dying is physically impossible. (But what if the only way not to die were to sprout wings?)

Random actions are also an example of lack of free will. Suppose we’re playing Russian roulette, except we’re shooting our feet and not our heads. Quantum immortality would not kick in to save your foot. So once you pull the trigger, you have no choice over whether your foot gets shot or not. No free will there.

I suppose if a die had consciousness, it would be going through a similar decision process. Instead of a die, imagine a person embedded in a die-like cube with numbered sides. If this human die could affect the outcome of the roll, that’s free will. If not, that’s a random action.

Either way, I’m done arguing about this, because we’re not addressing the main problem with my proposal: all of the above should only work if a decision is a quantum event. I haven’t read anywhere that another world can split off as a result of a non-quantum event. Correct me if I’m wrong.

Comment by MoreOn on Possibility and Could-ness · 2010-12-19T04:42:33.934Z · LW · GW

No, I haven’t. I’ve derived my views entirely from this post, plus the article above.

Since you mentioned “The Fabric Of Reality,” I tried looking it up on Less Wrong, and failing that, found its Wikipedia entry. I know not to judge a book by its Wikipedia page, but I still fail to see the similarity. Please enlighten me if you don't mind.

The following are statements about my mind-state, not about what is:

I don’t see why my view would be incapable of distinguishing free decisions from randomly determined ones. I’d go with naïve intuition: if I knowingly choose X and not Y, then I’d better be prepared for X’s logical outcome Z. If I choose X expecting W, then I’m either wrong (and/or stupid), or X is a random choice.

As for moral responsibility, that’s even simpler. If I caused outcome Z in world A, then I’m morally responsible in proportion to my knowledge that Z would happen, plus some constant. If I pressed a button labeled W not knowing what it does and a building nearby blew up because of it, then my responsibility = some constant. If I pressed X knowing it would blow up a building nearby, then my responsibility > some constant. Better yet, take me to a real-world court: I shouldn't be any more or less responsible for my actions if this view were correct than I would be in the world as it's currently understood.

Same goes for all my alternate-world zombies.

Comment by MoreOn on Possibility and Could-ness · 2010-12-19T03:23:56.775Z · LW · GW

Zombie-me's are the replicas of me in alternate worlds. They aren't under my conscious control, thus they're "zombies" from my perspective.

Except, in my understanding, they are created every time I make a choice, in proportion to the probability that I would choose Y over X. That is, if there's a 91% chance that I'd choose X, then in 91% of the worlds the zombie-me's have chosen X, and in the remaining 9% they've chosen Y.

Again, caveat: I don't think physics and probability were meant to be interpreted this way.

Comment by MoreOn on Newcomb's Problem and Regret of Rationality · 2010-12-18T20:04:45.422Z · LW · GW

4) Eliezer: just curious about how you deal with paradoxes about infinity in your utility function. If for each n, on day n you are offered to sacrifice one unit of utility that day to gain one unit of utility on day 2n and one unit on day 2n+1 what do you do? Each time you do it you seem to gain a unit of utility, but if you do it every day you end up worse than you started.

dankane, Eliezer answered your question in this comment, and maybe somewhere else too that I don't know of yet.
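Out of curiosity, the quoted paradox is easy to sanity-check numerically (a sketch of my own, not part of Eliezer's answer): each accepted offer is +1 on net, yet the running balance of someone who accepts every day never rises above where it started.

```python
# Sketch: sacrifice 1 utility every day n, receive 1 on day 2n and 1 on day 2n+1.
def balance_after(T):
    total = 0
    for day in range(1, T + 1):
        total -= 1          # today's sacrifice
        if day >= 2:        # day d receives the single payoff scheduled by n = d // 2
            total += 1
    return total

print([balance_after(T) for T in (1, 10, 100, 1000)])   # [-1, -1, -1, -1]
```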

Comment by MoreOn on Possibility and Could-ness · 2010-12-18T19:32:27.463Z · LW · GW

At the risk of drawing wrong conclusions from physics I don't understand, I propose this model of free will within a lawful universe:

As I stand there thinking whether or not I should eat a banana, I can be confident that there's a world where a zombie-me is already eating a banana, and another world where a zombie-me has walked away from a banana.

As I stand near the edge of a cliff, there's a world where a zombie-me has jumped off the cliff to test quantum immortality, and Inspector Darwin has penciled in a slightly lower frequency for my alleles. But there is no world in which I’ve jumped, sprouted wings, and survived.

Comment by MoreOn on Outside the Laboratory · 2010-12-18T18:52:13.839Z · LW · GW

scientific inquiry with the choice of subject matter motivated by theism is of lower quality than science done without that motivation.

Absolutely. Hence the warning flag. A scientist expecting to find evidence of God doesn't just have freeloading beliefs, but beliefs that pay rent in wrong expectations. That's akin to a gambling economist.

best scientists ... tend to be less theistic.

I'd say it's good evidence in favor of P(good science | scientist is theist) < P(good science). Of course, your point about correlation not implying causation is valid, too.

Someone in the discussion once said that atheism on average adds ~40 points to IQ (I might be remembering incorrectly). I suppose high IQ is correlated with both excellence as a scientist and the ability to reconsider and abandon theism if the question ever comes up.

My specific interest is whether or not atheism alone makes scientists better.

Comment by MoreOn on Outside the Laboratory · 2010-12-18T16:28:33.523Z · LW · GW

Fixed. Thanks. I didn't realize that my statement read, "A priori reasoning can only be justified if it's a posteriori."

Edit: so what about my actual statement? Or, are we done having this discussion?