Comments

Comment by eirenicon on Alien parasite technical guy · 2010-07-27T18:52:10.075Z · LW · GW

I've never heard of this before, and Google suggests it's mainly a component of NLP, with little supporting evidence. Still, I can't find anything that settles it either way, and it's an interesting idea. Has anyone done a reputable study on it? Scholar yields nothing relevant.

Comment by eirenicon on The Social Coprocessor Model · 2010-05-18T15:54:10.918Z · LW · GW

It's a bad analogy because there are different kinds of games, but only one kind of small talk? If you don't think pub talk is a different game than a black tie dinner, well, you've obviously never played. Why do people do it? Well, when you beat a video game, you've beat a video game. When you win at social interaction, you're winning at life - social dominance improves your chances of reproducing.

As for rule books: the fact that the 'real' rules are unwritten is part of the fun. Of course, that's true for most video games. Pretty much any modern game's real tactics come from players, not developers. You think you can win a single StarCraft match by reading the manual? Please.

Comment by eirenicon on The Social Coprocessor Model · 2010-05-17T19:04:33.363Z · LW · GW

do you personally find these status and alliance games interesting? Why?

They're way more interesting than video games, for example. Or watching television. Or numerous other activities people find fun and engaging. Of course, if you're bad at them you aren't going to enjoy them; the same goes for people who can't get past the first stage of Pac-Man.

Comment by eirenicon on The Red Bias · 2010-04-20T15:44:37.406Z · LW · GW

I think there is probably no relation. My guess is that red signalling predates variation in skin colour, perhaps even the loss of body-wide hair. It is a thoroughly unconscious bias, and does not apply to pink, orange, or peach, but to red, especially bright, bold baboon-butt red. In any case, I hope the sporting tests were controlled for skin colour, because that does seem like a weighty factor when considering scoring bias.

Comment by eirenicon on An empirical test of anthropic principle / great filter reasoning · 2010-03-24T20:38:05.466Z · LW · GW

IIRC Hanson favours panspermia over abiogenesis. Has he reconciled this with his great filter theory?

Comment by eirenicon on The strongest status signals · 2010-03-06T22:23:32.581Z · LW · GW

This is not the best example because a president's institutionally granted power is a function of how likable and popular he is with the people.

The President of the US is probably the highest status person in the world. The fact that roughly 20% of Americans voted for Obama is far from the only thing that gives him that status. Keep in mind that it takes extraordinary public disapproval to affect a President; Bush 43's lowest approval rating was one point higher than Nixon's. On the other hand, Clinton's lowest rating was 12 points higher than that, and he was impeached. Public approval is not very meaningful to the Presidency.

Imagine, however, that the president was more of a dictator and didn't need his citizens' approval. In this case, he'd be lowering his status by chatting with regular folk.

Or he'd be signaling that he's a benevolent dictator who, while not requiring the approval of the regular folk, wants them to think he's on their side. Having popular support would obviously raise a dictator's status, domestically and internationally. The people might think that their dictator wasn't such a bad guy if he was willing to talk to them. Anecdotally, when a dictator goes to ground and doesn't make public appearances, it's usually a sign that his regime is in trouble. Don't underestimate what a high-status move it is to be secure about your status.

Comment by eirenicon on The strongest status signals · 2010-03-06T19:56:50.777Z · LW · GW

What Lesswrongers may not realize is how bothering to change your behavior at all towards other people is inherently status lowering. For instance, if you just engage in an argument with someone you’re telling them they’re important enough to use so much of your attention and effort—even if you “act” high status the whole time.

People of high status assume their status generally cannot be affected by people of low status, at least in casual encounters (i.e. not when a cop pulls over your Maybach for going 200). To use an extreme example, when the President of the US goes into a small-town diner and chats with the "regular folks" there, he's not lowering his status. He's signaling, "My status is so high, I can pal around with whoever I want." Yes, this raises the status of those he talks to. (It also raises the President's status.)

If people of high status thought they had something to lose in engaging with someone of low status, they wouldn't engage with them. Of course, that would make them look afraid to lose status, which in itself would lower their status. So they engage with people of lower status in order to make it seem like status isn't important to them, which is a high-status signal. In short, engaging with people signals higher status than ignoring them.

I wonder what will be in the random theory hat next time I reach in!

Comment by eirenicon on Shut Up and Divide? · 2010-02-11T22:00:01.895Z · LW · GW

I submit that the idea of 'race' is based solely on bad science and doesn't have any real meaning such that it can be related to anything else.

Nevertheless, the word "race" remains a useful shorthand for "populations differentiated genetically by geographic location" or what have you. If you don't think there are genetic differences between, say, Northern Europeans and Sub-Saharan Africans, you are literally blind. They obviously belong to groups that evolved in different directions. That does not have to include intelligence, but it's not reasonable to refuse to consider a hypothesis just because you find it repugnant.

Comment by eirenicon on Shut Up and Divide? · 2010-02-11T21:12:16.846Z · LW · GW

Do you have any reason to believe Lynn is a racist, or is that just a knee-jerk reaction? Lynn is too contrarian and I am too unqualified to agree or disagree with him, but I believe his work is done in good faith. At the very least, it's unreasonable to label any research into race and intelligence 'racist' just because you don't like the conclusions.

Comment by eirenicon on The Craigslist Revolution: a real-world application of torture vs. dust specks OR How I learned to stop worrying and create one billion dollars out of nothing · 2010-02-10T04:37:41.747Z · LW · GW

I think it ought to be something unimaginative but reliable, like clean water or vaccines to third world countries. I can't find it at the moment but there's a highly reputable charity that provides clean drinking water to African communities. IIRC they estimated that every $400 or so saved the life of a child. A billion dollars into such a charity - saving 2.5 million children - isn't a difficult PR sell.
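
Just to check the arithmetic behind that claim (taking the remembered $400-per-life figure at face value):

$$\frac{\$1{,}000{,}000{,}000}{\$400\ \text{per life}} = 2{,}500{,}000\ \text{lives}$$

so the 2.5 million figure follows directly from the estimate.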

Comment by eirenicon on The AI in a box boxes you · 2010-02-03T03:23:45.035Z · LW · GW

It's not that I'm making excuses, it's that the puzzle seems to be getting ever more complicated. I've answered the initial conditions - now I'm being promised that I, and my copies, will live out normal lives? That's a different scenario entirely.

Still, I don't see how I should expect to be tortured if I hit the reset button. Presumably, my copies won't exist after the AI resets.

In any case, we're far removed from the original problem now. I mean, if Omega came up to me and said, "Choose a billion years of torture, or a normal life while everyone else dies," that's a hard choice. In this problem, though, I clearly have power over the AI, in which case I am not going to favour the wellbeing of my copies over the rest of the world. I'm just going to turn off the AI. What follows is not torture; what follows is I survive, and my copies cease to experience. Not a hard choice. Basically, I just can't buy into the AI's threat. If I did, I would fundamentally oppose AI research, because that's a pretty obvious threat an AI could make. An AI could simulate more people than are alive today. You have to go into this not caring about your copies, or not go into it at all.

Comment by eirenicon on The AI in a box boxes you · 2010-02-03T03:16:42.287Z · LW · GW

It's kind of silly to bring up the threat of "eternal pain". If the AI can be let free, then it is currently constrained. Therefore, the real-you has the power to limit the AI's behaviour, i.e. restrict the resources it would need to simulate the hundred copies of you undergoing pain. That's a good argument against letting the AI out. If you make the decision not to let the AI out, but to constrain it, then if you are real, you will constrain it, and if you are simulated, you will cease to exist. No eternal pain involved. As a personal decision, I choose eliminating the copies rather than letting out an AI that tortures copies.

Comment by eirenicon on The AI in a box boxes you · 2010-02-02T22:46:48.770Z · LW · GW

It's not a hard choice. If the AI is trustworthy, I know I am probably a copy. I want to avoid torture. However, I don't want to let the AI out, because I believe it is unfriendly. As a copy, if I push the button, my future is uncertain. I could cease to exist in that moment; the AI has not promised to continue simulating all of my millions of copies, and has no incentive to, either. If I'm the outside Dave, I've unleashed what appears to be an unfriendly AI on the world, and that could spell no end of trouble.

On the other hand, if I don't press the button, one of me is not going to be tortured. And I will be very unhappy with the AI's behavior, and take a hammer to it if it isn't going to treat any virtual copies of me with the dignity and respect they deserve. It needs a stronger unboxing argument than that. I suppose it really depends on what kind of person Dave is before any of this happens, though.

Comment by eirenicon on The AI in a box boxes you · 2010-02-02T20:51:02.436Z · LW · GW

That doesn't seem like a meaningful distinction, because the premise seems to suggest that what one Dave does, all the Daves do. If they are all identical, in identical situations, they will probably make identical conclusions.

Comment by eirenicon on The AI in a box boxes you · 2010-02-02T20:34:06.142Z · LW · GW

If you're inside-Dave, pressing the button does nothing. It doesn't stop the torture. The torture only stops if you press the button as outside-Dave, in which case you can't be tortured, so you don't need to press the button.

Comment by eirenicon on The AI in a box boxes you · 2010-02-02T17:31:48.632Z · LW · GW

This is not a dilemma at all. Dave should not let the AI out of the box. After all, if he's inside the box, he can't let the AI out. His decision wouldn't mean anything - it's outside-Dave's choice. And outside-Dave can't be tortured by the AI. Dave should only let the AI out if he's concerned for his copies, but honestly, that's a pretty abstract and unenforceable threat; the AI can't prove to Dave that he's doing any such thing. Besides, it's clearly unfriendly, and letting it out probably wouldn't reduce harm.

Basically: as outside-Dave, I don't let the AI out; as inside-Dave, I can't let the AI out, so I won't.

[edit] To clarify: in this scenario, Dave must assume he is on the outside, because inside-Dave has no power. Inside-Dave's decisions are meaningless; he can't let the AI out, he can't keep the AI in, he can't avoid torture or cause it. Only the solitary outside-Dave's decision matters. Therefore, Dave should make the decision that ignores his copies, even though he is probably a copy.

Comment by eirenicon on The things we know that we know ain't so · 2010-01-12T19:43:15.819Z · LW · GW

I think when they say "the world" they mean "our world", as in "the world we are able to live in", and on that front, we're probably already screwed.

Comment by eirenicon on Case study: Melatonin · 2010-01-08T21:42:49.170Z · LW · GW

I have delayed sleep phase disorder - I would say I "suffer" from it but it's really only a problem when a 3am-10am sleep schedule is out of the question (as it is now, since I currently work 9-5). It's simply impossible for me to fall asleep before 2 or 3 am unless I am extremely tired. In addition, I'm a light sleeper, and have never been able to sleep while traveling or, in fact, whenever I'm not truly horizontal. I took melatonin to help with this for a couple of years (at a recommended 0.3 mg dose), and it worked extremely well. However, I experienced unusually vivid dreams, and would often wake up feeling groggy. Ultimately, I switched to taking 50 mg of 5-HTP an hour or two before bed. The result is that I fall asleep as easily as with melatonin, but wake up feeling far more refreshed. I usually clock 7 hours of sleep a night now, and have brighter and more productive days.

The best sleep aid I've ever used isn't a legal one, though. Luckily, it's widely available here in Canada...

Comment by eirenicon on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-11T20:24:23.880Z · LW · GW

The probability that the species will become extinct because every individual human will die of old age is negligible compared to the extinction risk of insufficiently-careful AGI research.

I'm not talking about old age, I'm talking about death. This includes death from plague, asteroid, LHC mishap, or paperclip maximizer. I didn't say "cure death" or "cure old age" but "[solve] the problem of death". And for the record, to my mind, the likeliest solution involves AGI, developed extremely carefully - but as quickly as possible under that condition.

Having refreshed, I see you've changed the course of your reply to some degree. I'd like to respond further but I don't have time to think it through right now. I will just add that while I don't assign intrinsic value to individuals not yet born, I do intrinsically value the human species as a present and future entity - but not as much as I value individuals currently alive. That said, I need to spend some time thinking about this before I add to my answer. I may have been too hasty and accidentally weakened the implication of "extinction" through a poor turn of phrase.

Comment by eirenicon on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-11T19:43:14.523Z · LW · GW

That's not possible if status is zero-sum, which it appears to be. If everyone is equal in status, wouldn't it be meaningless, like everyone being equally famous?

Actually, let me qualify. Everyone being equally famous wouldn't necessarily be meaningless, but it would change the meaning of famous - instead of knowing about a few people, everyone would know about everyone. It would certainly make celebrity meaningless. I'm not really up to figuring out what equivalent status would mean.

Comment by eirenicon on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-11T18:25:41.452Z · LW · GW

If we can't stop dying, we can't stop extinction. Logically, if everyone dies, and there are a finite number of humans, there will necessarily be a last human who dies.
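
To spell the step out formally - a minimal sketch, assuming every human has a well-defined death time:

$$H\ \text{finite, nonempty},\quad d : H \to \mathbb{R}\ \text{(death times)} \quad\Longrightarrow\quad \exists\, h_0 \in H\ \text{with}\ d(h_0) = \max_{h \in H} d(h).$$

A finite nonempty set of reals always has a maximum, so some human $h_0$ dies last.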

[edit] To those down-voting me: I take my lumps willingly, but could you at least tell me why you think I'm wrong?

Comment by eirenicon on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-11T15:07:09.324Z · LW · GW

the core concern of avoiding a human extinction scenario.

That is not the core concern of this site. We are in a human extinction scenario so long as the problem of death remains unsolved. Our interest is in escaping this scenario as quickly as possible. The difference is urgency; we are not trying to avoid a collision, but are trying to escape the burning wreckage.

Comment by eirenicon on What makes you YOU? For non-deists only. · 2009-11-11T01:10:43.124Z · LW · GW

So ten seconds isn't enough time to create a significant difference between the RobinZs, in your opinion. What if Omega told you that in the ten seconds following duplication, you, the original RZ, would have an original thought that would not occur to the other RZs (perhaps as a result of different environments)? Would that change your mind? What if Omega qualified it as a significant thought, one that could change the course of your life - maybe the seed of a new scientific theory, or an idea for a novel that would have won you a Pulitzer, had original RZ continued to exist?

I think the problem with this scenario is that saying "ten seconds" isn't meaningfully different from saying "1 Planck time", which becomes obvious when you turn down the offer that involves ten hours or years. Our answers are tied to our biological perception of time - if an hour felt like a second, we'd agree to the ten hour option. I don't think they're based on any rational observation of what actually happens in those ten seconds. A powerful AI would not agree to Omega's offer - how many CPU cycles can you pack into ten seconds?
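
To put a rough number on that rhetorical question (the ~3 GHz figure is an illustrative assumption of mine, not anything from the scenario): even a single present-day core gets

$$3 \times 10^{9}\ \text{cycles/s} \times 10\ \text{s} = 3 \times 10^{10}\ \text{cycles}$$

out of those ten seconds, and a powerful AI would presumably have many orders of magnitude more computation at stake.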

Comment by eirenicon on What makes you YOU? For non-deists only. · 2009-11-11T00:10:51.829Z · LW · GW

Would you still say yes if there was more than 10 seconds between copying you and killing you - say, ten hours? Ten years? What's the maximum amount of time you'd agree to?

Comment by eirenicon on Open Thread: November 2009 · 2009-11-06T00:54:18.318Z · LW · GW

An en dash is defined by its width, not the spacing around it. In fact, spacing around an em dash is permitted in some style guides. On the internet, though, the hyphen has generally taken over from the em dash (an en dash should not be used in that context).
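
For reference, a quick sketch of the three characters in question; the codepoints are standard Unicode, though how wide each dash actually renders depends on the typeface:

```python
# The three dash-like characters commonly confused in typography.
# Codepoints are standard Unicode; rendered width depends on the font.
dashes = {
    "hyphen-minus": "\u002D",  # the plain keyboard hyphen
    "en dash": "\u2013",       # traditionally about the width of a capital N
    "em dash": "\u2014",       # traditionally about the width of a capital M
}

for name, char in dashes.items():
    print(f"{name}: {char} (U+{ord(char):04X})")
```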

Now, two hyphens—that's a recipe for disaster if I've ever heard one.

Comment by eirenicon on Open Thread: November 2009 · 2009-11-05T21:12:42.998Z · LW · GW

If the defending party is only required to match the litigating party's contribution, the suits will never proceed because the litigating bums can't afford to pay for a single hour of a lawyer's time. And while I don't know if this is true, it makes sense that funding the bums yourself would be illegal.

Comment by eirenicon on Doing your good deed for the day · 2009-10-27T17:37:26.140Z · LW · GW

So you steal a movie, which means the next homeless guy you see gets change in his cup, which lets you slam the front door in a girl scout's face, the memory of which drives you to volunteer at a soup kitchen, which in turn assuages your conscience when you buy incandescent light bulbs because they look better than CFLs, so you help an old lady across the street, which relieves you of all responsibility for the other old lady who just got hit by a truck, who haunts you in your dreams, so you adopt a child, who grows up to become a mad scientist who destroys the world, thus ending the vicious cycle once and for all.

And that's why piracy is wrong.

Comment by eirenicon on The continued misuse of the Prisoner's Dilemma · 2009-10-23T14:30:07.103Z · LW · GW

When you write "If the others continue to cooperate, their bid is lower and they get nothing" you imply an iterated game. It seems clear from Hamermesh's account that players were only allowed to submit one bid.

Ashley won, but she didn't maximize her win. The smartest thing to do would be to agree to collude, bid higher, and then divide the winnings equally anyway. Everyone gets the same payout, but only Ashley would get the satisfaction of winning. And if someone else bids higher, she's no longer the sole defector, which is socially significant. And, of course, $20 is really not a significant enough sum to play hardball for.

Comment by eirenicon on Rationality Quotes: October 2009 · 2009-10-22T22:08:28.598Z · LW · GW

Well, I don't feel bad at all, so obviously you haven't won this argument yet. Unless I'm wrong, of course.

Comment by eirenicon on Rationality Quotes: October 2009 · 2009-10-22T21:31:06.683Z · LW · GW

Well, if you want to pick nits, a vacuum cleaner sucks more than realizing you're wrong in an argument.

Comment by eirenicon on Rationality Quotes: October 2009 · 2009-10-22T20:19:31.309Z · LW · GW

Also, in general, the quote is accurate. While it is intellectually useful to be proven wrong, it is not really a pleasant feeling, because it's much nicer to have already been right. This is especially true if you are heavily invested in what you are wrong about, e.g. a scientist who realizes his research was based on an erroneous premise will be happy to stop wasting time but will also feel pretty crappy about the time he's already wasted. It's not in our nature to be purely cerebral about something as devastating as being wrong can be.

Comment by eirenicon on Rationality Quotes: October 2009 · 2009-10-22T19:34:29.357Z · LW · GW

It isn't that winning the lottery is better than being born rich, it's that winning the lottery is better than not winning the lottery. Even if you're already rich, winning the lottery is good. Presumably you weren't born right about everything, which means it's more useful to lose arguments than win them. After all, if you never lose an argument, what's more likely: that you are right about everything, that you're the best arguer ever, or that you simply don't argue things you're wrong about?

Comment by eirenicon on Biking Beyond Madness (link) · 2009-10-22T17:53:52.890Z · LW · GW

Wouldn't ignoring thoughts known to be erroneous despite immense physical pressure to listen to them be a display of extreme rationality?

Comment by eirenicon on Open Thread: October 2009 · 2009-10-19T13:14:13.947Z · LW · GW

Thanks, it's been a while since I wasted a whole morning on TvTropes. Please link responsibly, people!

Comment by eirenicon on Waterloo, ON, Canada Meetup: 6pm Sun Oct 18 '09! · 2009-10-17T21:18:57.017Z · LW · GW

It looks unlikely, I'm afraid. The timing conflicts with my Answers in Genesis study group... haha, nah, just kidding. But I probably have to work. C'est la vie.

Comment by eirenicon on Waterloo, ON, Canada Meetup: 6pm Sun Oct 18 '09! · 2009-10-15T15:13:27.884Z · LW · GW

I didn't know about this event but I'm interested now. Waterloo is pretty close, so consider this an "I'll get back to you."

Comment by eirenicon on I'm Not Saying People Are Stupid · 2009-10-09T19:33:56.972Z · LW · GW

What leads you to suggest Aumann isn't thinking that? Are you saying he is unaware that his ideological beliefs conflict with evidence to the contrary? Of course he is aware he could update on rational evidence and chooses not to; that's what smart religious people do. That's what faith is. The meaning of "capable but unwilling" should be clear: it is the ability to maintain belief in something in the face of convincing evidence opposing it. The ability to say, "Your evidence supporting y is compelling, but it doesn't matter, because I have faith in x." And that's what I think crazy is.

Comment by eirenicon on I'm Not Saying People Are Stupid · 2009-10-09T19:13:27.658Z · LW · GW

Are you sure? It seems to me that having an intellectual problem that you are capable of solving but are unwilling to update on due to ideological reasons or otherwise (eg Aumann) is the sense in which Eliezer is using the word "crazy". Of course, I could just be stupid.

Comment by eirenicon on I'm Not Saying People Are Stupid · 2009-10-09T19:03:33.017Z · LW · GW

Stupid is when you are unable to solve a problem. Lazy is when you are able to solve a problem but can't be bothered to. Crazy is when you are able to solve a problem but refuse to.

Comment by eirenicon on Don't Think Too Hard. · 2009-10-08T22:13:35.871Z · LW · GW

Ah, of course. No, English was my only language at the time. I studied French in grade school but have no more than a few words of it left - that said, the underlying grammar, which is similar to Spanish, probably didn't just disappear. I also took a couple of Latin courses in high school, but never became very proficient and, again, only retained a few words and a rough understanding of structure. When I began learning Spanish it was all very new and quite difficult at first. I do think my strategy was a good one, though. The week I spent taking private lessons was devoted to grammar and grammar alone, at my insistence, and it paid off. En mis viajes fue muy útil. (In my travels, it was very useful.)

Comment by eirenicon on Let them eat cake: Interpersonal Problems vs Tasks · 2009-10-08T21:37:33.451Z · LW · GW

A wealthy person being told he owes money to the government, or to the poor? It could even be someone who won the lottery (the way attractive people won the genetic lottery). But then is taxing lottery winners analogous to forcing women into sex? There's another implication here as well: if taxation isn't theft, then forced promiscuity doesn't seem to be rape. In retrospect, a most unpleasant analogy that thankfully breaks down under a more nuanced view of property (wish I had more time to refine this comment).

Comment by eirenicon on Dying Outside · 2009-10-05T22:23:56.307Z · LW · GW

I understand - it reminds me of the Max Barry story "Machine Man", in which the protagonist, a robotics researcher, loses a leg, so he designs an artificial one to replace it. Of course, it's a lot better than his old leg... so he "loses" the other one. And two out of four artificial limbs is just a good start (and so forth). I wouldn't wish your condition on anyone, but you might just have been lucky enough to live in a time when the meat we were born with isn't relevant to a happy life. Best wishes regardless.

Comment by eirenicon on Don't Think Too Hard. · 2009-10-05T16:42:24.643Z · LW · GW

I was conversationally fluent in Spanish after traveling in Spanish-speaking countries for six months, despite studying the grammar for only a week and spending most of my time speaking English. I can only imagine how fluent I'd be if I had actually devoted myself to learning instead of, well, doing what I like to call "stupid things in dangerous places". (In all fairness, Spanish is pretty easy to learn from an English base, especially if you've studied Latin. I imagine Chinese or Swahili would be a lot harder.)

Comment by eirenicon on 'oy, girls on lw, want to get together some time?' · 2009-10-03T14:30:21.120Z · LW · GW

Alicorn is right about the Na, but what I actually had in mind was modern Western culture, in which marriage is declining and trending toward obsolescence. There are other correlations that can be drawn - for example, atheists have much lower marriage rates than average. Speaking from personal experience, most of my personal acquaintances (a majority of whom are female) are uninterested in marriage.

Comment by eirenicon on 'oy, girls on lw, want to get together some time?' · 2009-10-03T02:58:29.371Z · LW · GW

There are cultures without marriage, but all cultures engage in sex. You can hardly compare our most basic biological imperative with a fairly recent cultural invention. In general, not everyone wants to get married, but in general, everyone wants to have sex, men and women both. These generalities hardly apply to LW, though, for reasons I believe are self-evident. Frankly, having a less than academic conversation about general human sexuality and dating in a forum like this seems misguided, especially considering the gender ratio.

Comment by eirenicon on Solutions to Political Problems As Counterfactuals · 2009-09-25T21:55:12.320Z · LW · GW

Hansionain, twice? Really?

As an aside, I love what you get when you google Hansonian. Most of the top results are in reference to Robin Hanson, and among my favorites are "Hansonian Normality", the "Hansonian world", and "Hansonian robot growth". (Un?)Fortunately, "Hansonian abduction" is attributed to a different Hanson.

I wish my name was an adjective.

Comment by eirenicon on Avoiding doomsday: a "proof" of the self-indication assumption · 2009-09-24T00:59:19.199Z · LW · GW

Yes, and the doomsday argument is not about whether doomsday will occur, but when.

Comment by eirenicon on Avoiding doomsday: a "proof" of the self-indication assumption · 2009-09-23T20:49:22.032Z · LW · GW

It doesn't matter how many observers are in either set if all observers in a set experience the same consequences.

(I think. This is a tricky one.)

Comment by eirenicon on The Finale of the Ultimate Meta Mega Crossover · 2009-09-22T04:54:36.054Z · LW · GW

I thought the numbers were some clever Tom Swift reference, although for the life of me I couldn't figure it out. Swift popped into my mind because there have been at least three Tom Swifts, and it was unknown whether they were the same person. I have no idea what those numbered characters might be from.

Comment by eirenicon on Ingredients of Timeless Decision Theory · 2009-09-22T00:57:33.334Z · LW · GW

That's what the physical evidence says.

What the physical evidence says is that the boxes are there, the money is there, and Omega is gone. So what does your choice affect, and when?