Posts

Discussion: Which futures are good enough? 2013-02-24T00:06:44.941Z
[LINK] Levels of Ethics 2011-02-07T01:41:00.545Z
Forager Anthropology 2010-07-28T05:48:13.761Z
Against the standard narrative of human sexual evolution 2010-07-23T05:28:40.817Z
Some Thoughts Are Too Dangerous For Brains to Think 2010-07-13T04:44:12.287Z
Unknown knowns: Why did you choose to be monogamous? 2010-06-26T02:50:22.302Z

Comments

Comment by WrongBot on What are you working on? June 2012 · 2014-02-25T19:53:09.983Z · LW · GW

Hahahaha, nope.

Comment by WrongBot on Open Thread, December 2-8, 2013 · 2013-12-07T00:13:12.857Z · LW · GW

I'm planning to run a rationality-friendly table-top roleplaying game over IRC and am soliciting players.

The system is Unknown Armies, a game of postmodern magic set in a creepier, weirder version of our own world. Expect to investigate crimes, decipher the methods behind occult rituals, interpret symbols, and slowly go mad. This particular game will follow the misadventures of a group of fast food employees working for an occult cabal (well, more like a mailing list) that wants to make the world a better place.

Sessions will be 3-4 hours once a week over IRC, Google Hangouts, Skype, or whatever people are most comfortable with. There are slots for two to three players; email me at sburnstein@gmail.com if you're interested or if I can answer any questions about the game.

Comment by WrongBot on I attempted the AI Box Experiment again! (And won - Twice!) · 2013-09-06T16:20:28.718Z · LW · GW

It may be possible to take advantage of multiple levels of reality within the game itself to confuse or trick the gatekeeper. For instance, must the experiment only be set in one world? Can there not be multiple layers of reality within the world you create? I feel that elaborating on this any further is dangerous. Think carefully about what this advice is trying to imply.

This is a pretty clever way of defeating precommitments. (Assuming I'm drawing the correct inferences.) How central was this tactic to your approach, if you're willing to comment?

Comment by WrongBot on Harry Potter and the Methods of Rationality discussion thread, part 21, chapters 91 & 92 · 2013-07-05T19:27:25.473Z · LW · GW

The bad guy(s) relative to Harry. Hermione coming back is important whichever way his morality goes.

Comment by WrongBot on Harry Potter and the Methods of Rationality discussion thread, part 21, chapters 91 & 92 · 2013-07-05T03:54:26.812Z · LW · GW

I'm about 95% confident Eliezer wouldn't do such a thing.

Comment by WrongBot on Harry Potter and the Methods of Rationality discussion thread, part 21, chapters 91 & 92 · 2013-07-04T20:46:36.797Z · LW · GW

By "wins" I just meant "beats the bad guy(s)".

Comment by WrongBot on Harry Potter and the Methods of Rationality discussion thread, part 21, chapters 91 & 92 · 2013-07-04T17:19:25.228Z · LW · GW

Quirrell is That Fucker.

Heavy spoilers for Nonjon's excellent A Black Comedy follow.

Va N Oynpx Pbzrql, Qnivq Zbaebr vf gur ragvgl perngrq jura gur Ubepehk va Evqqyr'f qvnel fhpprffshyyl erfheerpgf vgfrys, jvgu gur gjvfg gung vg jnf perngrq hfvat nyy bs Ibyqrzbeg'f 'cbfvgvir' rzbgvbaf. Guhf Qnivq Zbaebr vf bccbfrq gb Ibyqrzbeg, unf uvf zrzbevrf naq fxvyyf, naq frrf gung fgbel'f Uneel nf n cbgragvny gbby, nyyl, be rira rdhny.

Fbhaq snzvyvne? Gur anzr whfg znxrf vg boivbhf.

Comment by WrongBot on Harry Potter and the Methods of Rationality discussion thread, part 21, chapters 91 & 92 · 2013-07-04T17:12:43.184Z · LW · GW

Hermione will be resurrected before the conclusion of this story.

(Given that Harry wins and souls aren't real.)

Comment by WrongBot on Public Service Announcement Collection · 2013-06-28T03:47:38.535Z · LW · GW

PSA: There is an actual physical sensation that accompanies religious experiences. If you feel the presence of a being of awesome power and an unusual sensation of... fullness?... in your chest, don't panic or start believing in a god or anything crazy.

It's a physiological thing that happens to people, especially in altered states (drugs, sleep deprivation, etc.), and it doesn't mean anything.

Comment by WrongBot on Discussion: Which futures are good enough? · 2013-02-24T21:38:40.839Z · LW · GW

Right, but not all trade-offs are equal. Thinking-rainbows-are-pretty and self-determination are worth different amounts.

Comment by WrongBot on Discussion: Which futures are good enough? · 2013-02-24T02:42:06.565Z · LW · GW

Thank you for pointing this out; I've apparently lost the ability to read. Post edited.

Comment by WrongBot on Just One Sentence · 2013-01-05T02:11:56.023Z · LW · GW

"If you perform experiments to determine the physical laws of our universe, you will learn how to make powerful weapons."

It's all about incentives.

Comment by WrongBot on Politics Discussion Thread December 2012 · 2012-12-04T22:25:16.464Z · LW · GW

Please don't do this.

Comment by WrongBot on Factions, inequality, and social justice · 2012-12-04T22:12:38.743Z · LW · GW

The Keeley book you linked has been discredited. See here, e.g.

Comment by WrongBot on 2012 Less Wrong Census/Survey · 2012-11-03T23:41:54.301Z · LW · GW

Took it and laughed several times.

Comment by WrongBot on Logical Pinpointing · 2012-11-01T21:17:08.132Z · LW · GW

In people's brains, and in papers written by philosophy students.

Comment by WrongBot on [Poll] Less Wrong and Mainstream Philosophy: How Different are We? · 2012-10-18T06:53:08.776Z · LW · GW

Sorry for the very belated reply, but I was struggling to find the words to describe exactly what I meant. Luckily, Eliezer has already done most of it for me in his latest post.

Thing A exists with respect to Thing B iff Thing A and Thing B are both part of the same causal network. So ArisKatsaris was half-right, but things outside our past and future light cones can be said to exist with respect to us if they have a causal relationship with anything that is inside our past and future light cones.
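The "same causal network" criterion can be sketched as a toy graph-connectivity check. This is purely illustrative (the names and the undirected-link simplification are my assumptions, not anything from the original comment): model causal links as edges, and say A exists with respect to B iff they share a connected component.

```python
from collections import defaultdict, deque

def exists_with_respect_to(links, a, b):
    """Toy model: a 'causal network' as an undirected graph of causal
    links. A exists with respect to B iff they lie in the same
    connected component, i.e. some chain of causal links joins them."""
    graph = defaultdict(set)
    for x, y in links:
        graph[x].add(y)
        graph[y].add(x)
    # Breadth-first search from a, looking for b.
    seen, queue = {a}, deque([a])
    while queue:
        node = queue.popleft()
        if node == b:
            return True
        for nbr in graph[node] - seen:
            seen.add(nbr)
            queue.append(nbr)
    return a == b  # everything exists with respect to itself

# An event outside our light cones still "exists with respect to us"
# if it causally touches anything that is inside them:
links = [("us", "past_event"), ("past_event", "far_event")]
```

Here `exists_with_respect_to(links, "us", "far_event")` is true even though `"far_event"` is linked to us only indirectly, while a node with no causal path to us would come out false.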

Comment by WrongBot on [Poll] Less Wrong and Mainstream Philosophy: How Different are We? · 2012-09-26T19:59:07.801Z · LW · GW

Other: Existence is a two-valued function, not one-valued.

Comment by WrongBot on How to deal with someone in a LessWrong meeting being creepy · 2012-09-07T22:38:52.987Z · LW · GW

Creepy behavior has an evolutionary purpose, just like all human behavior.

Humans are adaptation-executors, not fitness-maximizers. Evolution may have crafted me into a person who wants to sit at home alone all day and play video games, but sitting at home alone all day and playing video games doesn't offer me a fitness advantage.

(I don't actually want to sit at home alone all day and play video games. At least, not every day.)

Comment by WrongBot on Have you changed your mind lately? On what? · 2012-06-05T23:06:02.328Z · LW · GW

I work in video games, so my experience isn't at all typical of programming more generally. The big issues are that:

  • Development priorities and design are driven by marketing.
  • Lots of time is spent doing throwaway work for particular demos. I (and many others) wasted a couple weeks hacking together a scripted demo for E3 that will never be seen again.
  • The design for my portion of the project has changed directions numerous times, and each new version of the feature has been implemented in a rush, so we still have bits of code from five iterations ago hanging around, causing bugs.
  • Willingness to work extremely long hours (70+/week) is a badge of pride. I'm largely exempt from this because I'm a contractor and paid hourly, but my salaried coworkers frequently complain about not seeing enough of their families. On the other hand, some of them are grateful to have an excuse to get away from their families.
  • The downside of being a contractor is that I don't get benefits like health insurance, sick days, paid time off, etc.

Many of these issues are specific to the games industry and my employer particularly, and shouldn't be considered representative of programming in general. Quality of life in the industry varies widely.

Comment by WrongBot on Have you changed your mind lately? On what? · 2012-06-05T22:55:34.090Z · LW · GW

The people I work with are mostly not dickheads and the pay is reasonable. It's the mountain of ugly spaghetti code I'm expected to build on top of that kills me. There's no time to do refactors, of course.

Comment by WrongBot on Have you changed your mind lately? On what? · 2012-06-05T22:54:13.996Z · LW · GW

When a deadline is near, all best software practices are thrown out of the window. Later in the project, a deadline is always near.

This is precisely the problem. Not really much more to add.

Comment by WrongBot on Have you changed your mind lately? On what? · 2012-06-05T02:47:12.911Z · LW · GW

Up until a month or so ago, I was convinced I'd landed my dream job. If I had a soul, it would be crushed now.

Which is not to say that it's awful, not by any means. I've just gained a new perspective on the value of best practices in software development.

Comment by WrongBot on What are you working on? June 2012 · 2012-06-03T18:47:08.815Z · LW · GW

I'm working on a video game in which the player controls a horde of orcs and attempts to take over the world. The game is narrated, however, from the perspective of the humans who are being crushed. You get lots of opportunities to commit atrocities, of course.

As far as mechanics go, it's simplified grand strategy tuned so that a typical game takes no more than a couple hours. Haven't gone much beyond design writeups and figuring out my platform (Unity), but I'm only about five hours of actual work into the project.

My realistic goal for the game is for it to serve as a portfolio piece (I work as a game designer). My stretch goals are to get it into the IGF and/or release it on Steam/Desura/etc.

Comment by WrongBot on Rationality Quotes June 2012 · 2012-06-03T18:33:00.489Z · LW · GW

Anger is pretty easy, too. All I have to do is remember a time I was wronged and focus on the injustice of it. Not very fun, though.

Comment by WrongBot on Rationality Quotes May 2012 · 2012-05-03T18:28:07.679Z · LW · GW

Karma isn't (necessarily) about punishment. Downvotes often just mean "I'd prefer to see fewer comments like this."

Comment by WrongBot on SMBC comic: poorly programmed average-utility-maximizing AI · 2012-04-06T19:57:18.189Z · LW · GW

Tapping out, inferential distance too wide.

Comment by WrongBot on Decision Theories: A Semi-Formal Analysis, Part II · 2012-04-06T19:28:51.890Z · LW · GW

Please stop making these little explanatory comments. They're obnoxious, unhelpful, and their karma ratings should indicate to you that they are not well-liked.

Comment by WrongBot on SMBC comic: poorly programmed average-utility-maximizing AI · 2012-04-06T19:26:06.683Z · LW · GW

Your edit demonstrates that you really don't get consequentialism at all. Why would making a good tradeoff (one miserable child in exchange for paradise for everyone else) lead to making a terrible one (a tiny bit of happiness for one person in exchange for death for someone else)?

Comment by WrongBot on SMBC comic: poorly programmed average-utility-maximizing AI · 2012-04-06T17:22:09.575Z · LW · GW

Omelas is a goddamned paradise. Omelas without the tortured child would be better, yeah, but Omelas as described is still better than any human civilization that has ever existed. (For one thing, it only contains one miserable child.)

Comment by WrongBot on Minicamps on Rationality and Awesomeness: May 11-13, June 22-24, and July 21-28 · 2012-03-29T23:07:43.743Z · LW · GW

Before the bootcamp, I'd just barely managed to graduate college and didn't have the greatest prospects for finding a job. (Though to be fair, I was moving to SF and it was a CS degree.)

At the bootcamp, I founded (and then folded) a startup with other bootcampers, which was profoundly educational and cost a couple months of time and <$100.

Now, <1 year after the bootcamp, I'm doing programming and design work on the new SimCity, which is as close to a dream job for me as could reasonably be expected to exist.

I can't attribute all my recent success to the bootcamp, because I was pretty awesome beforehand, but it really did dramatically improve my effectiveness in a number of domains (my girlfriend is grateful for the fashion tips I picked up, for example). Other specific things I've found useful include meditation, value of information calculations, and rejection therapy.

Comment by WrongBot on Harry Potter and the Methods of Rationality discussion thread, part 11 · 2012-03-23T06:53:02.208Z · LW · GW

Harry could destroy his own reputation in order to save Hermione, by (for example) threatening to forever abandon Wizarding Britain. He is a beloved celebrity, after all, and it would be bad press for the Wizengamot if the Boy-Who-Lived defected to France.

Not sure how likely his dark side is to go for a self-sacrificing ploy, though.

Comment by WrongBot on Harry Potter and the Methods of Rationality discussion thread, part 11 · 2012-03-22T19:41:16.148Z · LW · GW

Fair enough. The canon definition of a squib is specifically a non-magical child of wizarding parents. I'd assumed the Grangers had wizarding blood further back than that, making them genetically identical to squibs but not meeting the definition of the term as used in wizarding culture.

Comment by WrongBot on Harry Potter and the Methods of Rationality discussion thread, part 11 · 2012-03-22T18:55:34.963Z · LW · GW

The Grangers are squibs?!

Comment by WrongBot on Is community-collaborative article production possible? · 2012-03-21T23:04:00.078Z · LW · GW

Yeah, that would definitely allay my concerns. I think it's the uncertainty surrounding the number and quality of other entrants that bothers me.

Comment by WrongBot on Harry Potter and the Methods of Rationality discussion thread, part 11 · 2012-03-21T22:55:15.511Z · LW · GW

Maybe the thestral blood added permanence, because death is permanent? If it was replacing blueberries I doubt it was the key magical ingredient, and so its effect may not be directly related to its properties.

ETA: And that's also why this version of the potion is so much more dangerous. It has Death in it.

Comment by WrongBot on Is community-collaborative article production possible? · 2012-03-21T22:40:43.985Z · LW · GW

See Raemon's comment. The Dark Arts are involved, mere honesty is no defense.

Comment by WrongBot on Is community-collaborative article production possible? · 2012-03-21T21:59:10.657Z · LW · GW

Thank you for explaining to me what I was thinking. This is exactly my concern.

Comment by WrongBot on Is community-collaborative article production possible? · 2012-03-21T21:26:09.787Z · LW · GW

This is all great except for the contest part, which I might currently have moderate ethical objections to. In general I'm concerned by contests which are held as an alternative to just paying someone to do the work for you; I objected to the contest that SI used to select their new logo (which is great) for the same reasons.

Essentially what you're doing is asking some unknown number of people to work for highly unpredictable pay, which is most likely (assuming at least a half-dozen entries) to be no pay at all. This tactic makes lots of financial sense, and I understand why it would appeal to a cash-strapped non-profit, but it seems to me that if you're going to ask someone to do work for your benefit, you should pay them for it. This ideal gets slightly muddier when it comes to non-profits, because I certainly don't think there's anything wrong with asking people to volunteer their time. Perhaps it's the uncertainty that's bothering me; it's as though you're asking people to gamble with their time.

So perhaps it's ethically equivalent to a charity-sponsored raffle, which I don't object to. Is my reasoning wrong, or am I just inconsistent? I'm not sure.

Comment by WrongBot on Harry Potter and the Methods of Rationality discussion thread, part 11 · 2012-03-20T06:27:54.108Z · LW · GW

Thank you. Alas, my credibility shall be forever tainted.

Comment by WrongBot on Harry Potter and the Methods of Rationality discussion thread, part 11 · 2012-03-20T05:32:41.807Z · LW · GW

It would be less confusing (to me, possibly others), if you abbreviated Albus Percival Brian Wulfric Dumbledore's name as AD. (My personal preference for APBWD should not be catered to.)

Comment by WrongBot on Muehlhauser-Goertzel Dialogue, Part 1 · 2012-03-20T05:10:01.416Z · LW · GW

Dead brains are like burned libraries.

I don't often agree with you, but you just convinced me we're on the same side.

Comment by WrongBot on Open Thread, March 16-31, 2012 · 2012-03-19T23:32:47.231Z · LW · GW

My preference would be for one post per major idea, however short or long that ends up.

Please keep posting mathy stuff here, I find it extremely interesting despite not having much of a math background.

Comment by WrongBot on Reply to Yvain on 'The Futility of Intelligence' · 2012-03-17T23:12:00.312Z · LW · GW

Those questionnaires are not a particularly good introduction to the LW/SI memespace. I worry that he is therefore making a poor first impression on our behalf, reducing the odds that these people will end up contributing to existential risk reduction and/or friendliness research.

Comment by WrongBot on Reply to Yvain on 'The Futility of Intelligence' · 2012-03-17T19:27:37.050Z · LW · GW

I was going to upvote this comment until I got to the last line. XiXiDu's email campaign is almost certainly doing more harm than good.

Comment by WrongBot on What are YOU doing against risks from AI? · 2012-03-17T19:22:58.134Z · LW · GW

Right now I'm on a career path that will lead to me making lots of money with reasonable probability. I intend to give at least 10% of my income to existential risk reduction (FHI or SI, depending on the current finances of each) for the foreseeable future.

I wish I could do more. I'm probably smart/rational enough to contribute to FAI work directly in at least some capacity. But while that work is extremely important, it doesn't excite me, and I haven't managed to self-modify in that direction yet, though I'm working on it. Historically, I've been unable to motivate myself to do unexciting things for long periods of time (and that's another self-modification project).

I'm not doing more because I am weak. This is one of the primary motivations for my desire to become stronger.

Comment by WrongBot on Harry Potter and the Methods of Rationality discussion thread, part 11 · 2012-03-17T19:15:00.339Z · LW · GW

Flitwick is probably also out as an Imperius candidate, being a former international dueling champion and all.

Comment by WrongBot on Some Thoughts Are Too Dangerous For Brains to Think · 2012-03-17T19:03:00.746Z · LW · GW

In the intervening time I've also been convinced that I have ADD, or at least something that looks like it. My executive function is usually pretty decent.

Comment by WrongBot on I've had it with those dark rumours about our culture rigorously suppressing opinions · 2012-03-16T23:16:27.253Z · LW · GW

My comments on this topic after the first one were a mistake. Apologies for feeding the troll.

Comment by WrongBot on I've had it with those dark rumours about our culture rigorously suppressing opinions · 2012-03-16T23:02:25.389Z · LW · GW

You want to normalize domestic violence and make it legal. That's the only reasonable inference I can draw from what you've written.

Pro tip: I'm a dude. Does that falsify anything you believe?