Posts

[LINK] Being No One (~50 min talk on the self-model in your brain) 2012-01-16T18:18:33.149Z
Guardian article highlights observational biases in Knox investigators 2011-10-08T07:41:26.164Z
On the Human 2010-11-10T09:27:00.447Z

Comments

Comment by machrider on [post redacted] · 2012-01-26T03:29:54.281Z · LW · GW

"UFO" has a colloquial sense that does, in fact, mean aliens (or trans-dimensional beings or what have you). I would posit that this is the sense of the word Eliezer used in the quoted text.

Comment by machrider on The Noddy problem · 2012-01-13T06:49:25.683Z · LW · GW

I read a lot of C&H growing up, and looking back at it, I'm surprised at how many interesting ideas it contains. I wonder how much of my present self was shaped by having these ideas implanted at age 8 or 9...

Comment by machrider on More intuitive explanations! · 2012-01-07T06:48:08.944Z · LW · GW

Steven Strogatz did a series of blog posts at NY Times going through a variety of math concepts from elementary school to higher levels. (They are presented in descending date order, so you may want to start at the end of page 2 and work your way backwards.) Much of the information will be old hat to LWers, but it is often presented in novel ways (to me, at least).

Specifically related to this post, the visual proof of the Pythagorean theorem appears in the post Square Dancing.

Comment by machrider on Rationality quotes January 2012 · 2012-01-02T17:27:53.825Z · LW · GW

The fact is that there are many battles worth fighting, and strong skeptics are fighting one (or perhaps a few) of them. (As I was disgusted to see recently, human sacrifice apparently still happens.) However, I also think it's ok to say that battle is not the one that interests you. You don't have the capacity to be a champion for all possible good causes, so it's good that there is diversity of interest among people trying to improve the human condition.

Comment by machrider on How to label thoughts nonverbally · 2011-12-18T12:45:56.968Z · LW · GW

Thanks for the clarification, I see what you mean. The distinction between repetitive, droning thoughts and actively reasoning about the problem makes sense.

Comment by machrider on How to label thoughts nonverbally · 2011-12-18T11:13:37.481Z · LW · GW

I think eugman is referring more to the negative thoughts that cycle through a depressed person's head on a regular basis. They're messages that remind you that you're a failure, you let people down, you're not going anywhere, and they play through your brain during almost all your waking hours.

The negative thoughts you described are the ones that healthy people encounter in real, negative situations that must be dealt with. In that case, rumination is appropriate and finding rational solutions is desirable. But when your brain is essentially buggy and constantly replaying cached (often incorrect or wildly out-of-proportion) negative beliefs, it might be entirely appropriate to forcibly jump to another track instead of dwelling on them.

Put another way, in a depressed brain, rumination and focus on the "problem" is the default mode of operation. Sometimes it eventually yields positive solutions, but frequently it's more of a death spiral. Short circuiting that kind of process seems entirely reasonable to me.

Comment by machrider on [SEQ RERUN] Stop Voting For Nincompoops · 2011-12-12T06:31:41.310Z · LW · GW

Are there any good examples of the long strategy working? Ron Paul seemed like a potential case of exactly that, and in 2008 he was rallying support on the internet and raking in serious political campaign contributions. He got a small chunk of the popular vote and raised the profile of libertarianism a little. However, a few years later the media have still apparently decided that he is unelectable and give him far less coverage than the "mainstream" candidates. (I'm not a Ron Paul fan myself, but he should appeal to the fiscal conservative base and he seems to be a man of integrity.)

Comment by machrider on [SEQ RERUN] Stop Voting For Nincompoops · 2011-12-12T05:09:30.816Z · LW · GW

I read it, and I disagree. I think it's irrational to expect everyone to do what he suggests, and it only works if everyone does it.

Edit: Using the word "strategic" is probably misleading. Eliezer proposes a particular strategy: vote for someone you actually like, regardless of popularity or perceived likelihood of winning. It's still a strategy, and voting is still a game. So the argument isn't really about whether or not to vote "strategically", it's about which strategy one should use.

In my original comment I argue for the meta-strategy of changing the electoral system to one that isn't as broken as plurality systems are. I also argue that, given the current system, it still makes sense to vote for the least evil candidate who has a shot at winning.

Comment by machrider on [SEQ RERUN] Stop Voting For Nincompoops · 2011-12-12T03:30:00.099Z · LW · GW

It might just be that I disagree with him, but I find this post out of character for Eliezer. He argues against being strategic or using game theoretical approaches, which is surprising to me. How can that possibly make sense? Shouldn't I try to maximize the value of my vote given my expectations of the game I'm playing and the people I'm playing with/against? Essentially, I think he's arguing for an idealistic solution instead of a pragmatic one.

I guess I should admit that, in a perfect world, voting for whom you actually want, regardless of perceived popularity, might work well. However, it seems more important to me, having identified that the electoral system seems to consistently produce these kinds of results, to try to identify the problem. Is the problem really with the voters, or is it inherent in the structure of the rules?

What should democracy produce, ideally? It should produce election results that closely mirror what people actually want. It turns out that the plurality voting system, which we use in most places in the US, is well known to support a two-party stranglehold as a failure mode. It is very likely to produce an outcome which leaves most people unsatisfied. Why not work on fixing the system that produces this result instead of just hoping for everyone in the country to suddenly agree to play the game by different rules? (In San Francisco, we use "instant runoff" voting rules that produce an outcome more in line with what people actually want. Of course, it's not perfect.)

Essentially my question is, why would you insist that people shouldn't vote strategically, when it is clearly in their best interests to do so? If you strongly believe (for example) Rick Perry would be a threat to your well being, why would you go vote for a third party instead of doing your best to ensure Perry doesn't win?
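The "instant runoff" rules mentioned above can be sketched in a few lines. This is a toy count only: it assumes every ballot ranks every candidate, and it breaks elimination ties arbitrarily, whereas real IRV statutes specify those details.

```python
from collections import Counter

def instant_runoff(ballots):
    """Toy instant-runoff count. Each ballot is a list of candidates,
    most preferred first. Repeatedly eliminate the candidate with the
    fewest first-choice votes until someone holds a majority."""
    remaining = {c for ballot in ballots for c in ballot}
    while True:
        # Count each ballot for its highest-ranked surviving candidate.
        tally = Counter({c: 0 for c in remaining})
        tally.update(next(c for c in ballot if c in remaining)
                     for ballot in ballots)
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > len(ballots):
            return leader
        remaining.discard(min(tally, key=tally.get))

# 4 voters prefer A, 3 prefer B, 2 prefer C with B as second choice:
# C is eliminated first, those votes transfer to B, and B wins 5-4
# even though A led on first-choice votes.
ballots = [["A", "B", "C"]] * 4 + [["B", "C", "A"]] * 3 + [["C", "B", "A"]] * 2
print(instant_runoff(ballots))  # → B
```

Note how the outcome differs from plurality, where A's 4 first-choice votes would have won outright.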

Comment by machrider on [POLL] LessWrong census, mindkilling edition [closed, now with results] · 2011-12-12T02:04:54.744Z · LW · GW

> What percentage of educated Westerners would you guess are to the right (as operationalized below) of you on economic questions?

Sorry, I find this survey terrible. I don't know how to answer most of the questions. Questions like the above require me to have more knowledge than I personally have (about the internal state of billions of educated Westerners). You are supposed to do this work for us by asking 5 to 10 representative questions with which we can strongly agree/strongly disagree, etc, and then use that information to categorize responders.

The way this survey is written I don't even feel comfortable submitting my response, because the percentages are wild guesses. Further, I don't even know what it means to be "left" or "right" on race and gender issues. Also, the categories in the first part contain multiple, sometimes conflicting labels. It's really hard to know how to respond to those, as well.

I say all this as someone with concrete political beliefs! If you asked me specific questions, I would happily answer them. But I'm not comfortable speculating about the political beliefs of people occupying an entire hemisphere.

Comment by machrider on Rationality Quotes December 2011 · 2011-12-09T00:35:30.342Z · LW · GW

This is the subtext implied in the saying, "A Lannister always pays his debts," from A Game of Thrones by George R. R. Martin. It is frequently applied in the context of compensating someone for helping one of the Lannisters, but it also functions as a warning against misdeeds.

Comment by machrider on How rationality can make your life more awesome · 2011-11-29T08:26:40.489Z · LW · GW

This is a good summary, but a post like this is greatly strengthened by links to external resources to justify or expand upon the claims it makes. If I didn't know anything about the topic, some of the text would be unclear to me, and I would want the ability to click around and learn more. For example:

  • What is the sunk cost fallacy? (Link to wikipedia/LWwiki)
  • There is some recent evidence about rationality as a treatment for depression

Also, I think one of the first reactions a typical person will have is, "Rationality? Of course I'm rational." To start from square one on this topic, you have to explain to people that, surprisingly enough, they aren't. Politely, of course. Then you can start talking about why it's important to work on.

All that said, I think the examples given are great; they're salient problems for most people, and you can make a good case that rationality will improve one's outcomes for those problems.

Comment by machrider on [SEQ RERUN] Unbounded Scales, Huge Jury Awards, & Futurism · 2011-11-10T04:51:13.927Z · LW · GW

The link to the post is incorrect; it points to the previous rerun, should point here: http://lesswrong.com/lw/li/unbounded_scales_huge_jury_awards_futurism/

Edit: It has been fixed. Thanks. :)

Comment by machrider on What visionary project would you fund? · 2011-11-09T22:18:57.562Z · LW · GW

Doesn't that depend on heart attacks being a function of age rather than a function of time? Anti-aging doesn't necessarily mean anti-arterial-plaque-buildup. I do agree that entire classes of problems might go away though, which would be amazing.

Comment by machrider on A clever argument for buying lottery tickets · 2011-11-05T02:05:44.091Z · LW · GW

I don't believe so, but maybe someone smarter than me can explain this. The magic 4% of a million = $40k value should indeed factor in, but it shouldn't dominate the expected value to the degree you're suggesting.

Comment by machrider on A clever argument for buying lottery tickets · 2011-11-05T01:43:38.963Z · LW · GW

Let's try a different angle:

> Then, with 4% interest on my $160k yearly, it would take me about 5.5 years to accumulate that million dollars, or 11000 hours.

So over 5.5 years, you theoretically earned $1,220,000: a million in savings plus $40k living expenses for 5.5 years. The effective hourly wage is $110.91.

At an effective hourly wage of about $110, your expected lottery ticket return is 1.0 hours, not 1.1.
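The arithmetic above can be checked directly. The figures ($40k/year living expenses, $1M saved, 11,000 working hours over 5.5 years) are taken from the parent comment, not independently sourced:

```python
# Effective hourly wage = total money earned / total hours worked.
expenses_per_year = 40_000
years = 5.5
hours = 11_000  # ~2000 working hours/year over 5.5 years

total_earned = 1_000_000 + expenses_per_year * years  # savings + living costs
hourly = total_earned / hours
print(round(hourly, 2))  # → 110.91
```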

Comment by machrider on A clever argument for buying lottery tickets · 2011-11-04T23:28:10.756Z · LW · GW

I believe you left out the opportunity cost of spending the $100 on a ticket instead of letting it accrue 4% interest. That is, you compared $100 in today's dollars to $100 in 2017 dollars, but it should've been $124 in 2017 dollars.
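The $124 figure is just compound interest on the ticket price. Using the ~5.5-year horizon from the sibling comment:

```python
# Opportunity cost of spending $100 now instead of investing it:
# $100 compounded at 4%/year over ~5.5 years grows to about $124.
principal = 100
rate = 0.04
years = 5.5
future_value = principal * (1 + rate) ** years
print(round(future_value, 2))  # → 124.07
```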

Comment by machrider on [gooey request-for-help] I don't know what to do. · 2011-09-29T22:54:59.707Z · LW · GW

Thanks so much for the detailed response.

Comment by machrider on [gooey request-for-help] I don't know what to do. · 2011-09-29T06:10:13.488Z · LW · GW

Wow, that OkCupid result is surprising. It has not been my experience. What are you doing that causes people to reach out to you in a friendly (rather than romantic) way on there? (Or are you the one reaching out?)

And I agree with regard to the intellectual standard, especially if you consider your intelligence a defining characteristic. Reading the discussion here (and not having much to contribute) has... recontextualized my own self-image.

Comment by machrider on 9/11 as mindkiller · 2011-09-14T10:04:07.303Z · LW · GW

Still, it seems reasonable to point out the opportunity cost of spending a couple trillion dollars on a misguided war effort. It is true that the economy would be in better shape without those expenditures, and it's also probably true that US federal budget constraints would be different as a result. (However, the money may still have been spent elsewhere rather than on scientific research.)

Comment by machrider on Rationality Quotes September 2011 · 2011-09-03T07:29:26.079Z · LW · GW

100 years is nothing in the evolution of a civilization, though. The time between the agricultural revolution and the discovery of evolution is not a typical period in the history of humanity.

Comment by machrider on Rationality Quotes August 2011 · 2011-08-03T00:33:47.436Z · LW · GW

Perhaps a better word would have been 'elegant'.

Comment by machrider on Ethics and rationality of suicide · 2011-05-08T07:13:22.586Z · LW · GW

Suicide hotline operators will sometimes call the police on you...

Comment by machrider on San Francisco Meetup 4/28 · 2011-04-25T11:43:07.703Z · LW · GW

I haven't been able to get to any of the east bay meetups yet, so I'm excited to see this in SF. I'll do my best to be available for it. With all the talk about the NYC group, I keep thinking "What could SF do?"

Comment by machrider on Swords and Armor: A Game Theory Thought Experiment · 2010-10-13T02:58:16.632Z · LW · GW

Pursuing this stupidity to its logical conclusion, I just did an elimination match with 16 rounds. Start with all combinations and cull the weakest member every round. Here's the result: http://pastie.org/1217255

Note the culling is sometimes arbitrary if there's a tie for last place. By pass 14, we have a 3-way tie between blue/blue, blue/green, and green/yellow. Those may very well be the best three combinations, or close to it.

Final version of program here: http://pastie.org/1217284

(Removed randomness and instead factored the probability of evasion directly into damage. This lets me use smaller numbers and runs much faster. I verified that the results didn't change as a result.)
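For readers who can't reach the pastie links, here is a minimal reconstruction of the culling procedure described above: round-robin every pass (each combo fights every other, in both attack orders), then drop the combo with the fewest wins. The sword damage and armor reduction values below are placeholders, since the post's actual stats aren't quoted in this thread, so the surviving combo here is illustrative only.

```python
import itertools

COLORS = ["red", "green", "blue", "yellow"]
# Placeholder stats -- NOT the original post's numbers.
DAMAGE = {"red": 6, "green": 5, "blue": 4, "yellow": 3}  # sword damage
REDUCE = {"red": 3, "green": 2, "blue": 1, "yellow": 0}  # armor reduction

def damage_per_hit(sword, armor):
    """Expected damage a sword deals through an armor, floor of 1."""
    return max(1, DAMAGE[sword] - REDUCE[armor])

def winner(a, b, hp=1000):
    """Fight combo a (sword, armor) vs combo b; a attacks first."""
    hp_a, hp_b = hp, hp
    while True:
        hp_b -= damage_per_hit(a[0], b[1])
        if hp_b <= 0:
            return a
        hp_a -= damage_per_hit(b[0], a[1])
        if hp_a <= 0:
            return b

def elimination(combos):
    """Each pass, run all ordered pairings and cull the fewest-wins combo
    (ties for last place broken arbitrarily, as noted in the comment)."""
    survivors = list(combos)
    while len(survivors) > 1:
        wins = {c: 0 for c in survivors}
        for a, b in itertools.permutations(survivors, 2):
            wins[winner(a, b)] += 1
        survivors.sort(key=lambda c: wins[c])
        survivors = survivors[1:]  # drop the weakest
    return survivors[0]

combos = list(itertools.product(COLORS, COLORS))
print(elimination(combos))  # → ('red', 'red') with these placeholder stats
```

With these made-up numbers red/red strictly dominates, so it survives every cull; the interesting non-transitive dynamics in the thread come from the post's real stats.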

Comment by machrider on Swords and Armor: A Game Theory Thought Experiment · 2010-10-13T02:27:35.158Z · LW · GW

Agreed, re: the limitations of my method. As you suggested, I ran another pass using only the top 7 candidates (wins >= 19 in my previous comment). Here are the results:

3: blue/red
5: blue/green
7: blue/blue
7: green/green
7: green/red
9: green/blue
11: green/yellow

Choosing the top 10 (wins >= 17 from before):

7: blue/red
7: red/green
9: green/green
9: green/red
11: blue/blue
11: blue/green
11: blue/yellow
11: green/blue
11: yellow/yellow
13: green/yellow

Yellow/yellow pops up as a surprise member of the 5-way tie for second place. The green sword is less effective once you introduce these new members. There are probably a lot of surprises if you keep varying the members you allow. And all of this still assumes a normal distribution, which is unlikely.

Comment by machrider on Swords and Armor: A Game Theory Thought Experiment · 2010-10-13T01:36:32.827Z · LW · GW

I'm thinking the iteration count just confuses things. With a high enough HP value, we should be able to eliminate "luck". So here's a pass with 1 iteration and 20 million initial HP:

2: red/blue
8: red/red
13: yellow/blue
13: yellow/red
15: red/yellow
15: yellow/green
17: blue/yellow
17: red/green
17: yellow/yellow
19: blue/red
19: green/blue
19: green/green
19: green/red
19: green/yellow
21: blue/blue
23: blue/green

Comment by machrider on Swords and Armor: A Game Theory Thought Experiment · 2010-10-13T00:03:53.429Z · LW · GW

Deleted earlier comment due to a bug in the code.

Here's the result of a naive brute-force program that assumes a uniform distribution of opponents (i.e. any combo is equally likely), sorted by number of wins:

185: red/blue
269: red/red
397: yellow/blue
407: yellow/red
438: red/yellow
464: red/green
471: yellow/green
483: yellow/yellow
512: blue/yellow
528: green/green
539: green/red
561: green/blue
567: green/yellow
578: blue/red
635: blue/green
646: blue/blue

The program is here: http://pastie.org/1217024 (pipe through sort -n)

It performs 30 iterations of all 16 vs 16 matchups. Note that the player that attacks first has an advantage, so doing all 16 vs 16 balances that out (everyone is player 1 as often as he is player 2).

I signed up today to comment in this thread, so don't mock me too heavily. :)

Edit: Bumped iterations to 30 and hit points to 80,000 to try to smooth out randomness in the results.