Less Wrong: Open Thread, September 2010

post by matt · 2010-09-01T01:40:49.411Z · LW · GW · Legacy · 628 comments

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

628 comments

Comments sorted by top scores.

comment by Kaj_Sotala · 2010-09-02T21:04:37.492Z · LW(p) · GW(p)

It seems to me, based on purely anecdotal experience, that people in this community are unusually prone to feeling that they're stupid if they do badly at something. Scott Adams' The Illusion of Winning might help counteract becoming too easily demotivated.

Let's say that you and I decide to play pool. We agree to play eight-ball, best of five games. Our perception is that what follows is a contest to see who will do something called winning.

But I don't see it that way. I always imagine the outcome of eight-ball to be predetermined, to about 95% certainty, based on who has practiced that specific skill the most over his lifetime. The remaining 5% is mostly luck, and playing a best of five series eliminates most of the luck too.

I've spent a ridiculous number of hours playing pool, mostly as a kid. I'm not proud of that fact. Almost any other activity would have been more useful. As a result of my wasted youth, years later I can beat 99% of the public at eight-ball. But I can't enjoy that sort of so-called victory. It doesn't feel like "winning" anything.

It feels as meaningful as if my opponent and I had kept logs of the hours we each had spent playing pool over our lifetimes and simply compared. It feels redundant to play the actual games.

I see the same thing with tennis, golf, music, and just about any other skill, at least at non-professional levels. And research supports the obvious, that practice is the main determinant of success in a particular field.

As a practical matter, you can't keep logs of all the hours you have spent practicing various skills. And I wonder how that affects our perception of what it takes to be a so-called winner. We focus on the contest instead of the practice because the contest is easy to measure and the practice is not.

Complicating our perceptions is professional sports. The whole point of professional athletics is assembling freaks of nature into teams and pitting them against other freaks of nature. Practice is obviously important in professional sports, but it won't make you taller. I suspect that professional sports demotivate viewers by sending the accidental message that success is determined by genetics.

My recommendation is to introduce eight-ball into school curricula, but in a specific way. Each kid would be required to keep a log of hours spent practicing on his own time, and there would be no minimum requirement. Some kids could practice zero hours if they had no interest or access to a pool table. At the end of the school year, the entire class would compete in a tournament, and they would compare their results with how many hours they spent practicing. I think that would make real the connection between practice and results, in a way that regular schoolwork and sports do not. That would teach them that winning happens before the game starts.

Yes, I know that schools will never assign eight-ball for homework. But maybe there is some kid-friendly way to teach the same lesson.

ETA: I don't mean to say that talent doesn't matter: things such as intelligence matter more than Adams gives them credit for, AFAIK. But I've noticed in many people (myself included) a definite tendency to overvalue intelligence relative to practice.

Replies from: None, jimrandomh, Daniel_Burfoot, None, Houshalter, Jonathan_Graehl, Wei_Dai
comment by [deleted] · 2010-09-03T03:59:53.483Z · LW(p) · GW(p)

people in this community are unusually prone to feeling that they're stupid if they do badly at something

I suspect this is a result of the tacit assumption that "if you're not smart enough, you don't belong at LW". If most members are anything like me, this combined with the fact that they're probably used to being "the smart one" makes it extremely intimidating to post anything, and extremely de-motivational if they make a mistake.

In the interests of spreading the idea that it's ok if other people are smarter than you, I'll say that I'm quite certainly one of the less intelligent members of this community.

I've noticed in many people (myself included) a definite tendency to overvalue intelligence relative to practice.

Practice and expertise tend to be domain-specific - Scott isn't any better at darts or chess after playing all that pool. Even skills like metacognition tend not to transfer outside the specific domain you've learned them in. Intelligence is one of the only things that gives you a general problem solving/task completion ability.

Replies from: xax
comment by xax · 2010-09-03T21:07:19.886Z · LW(p) · GW(p)

Intelligence is one of the only things that gives you a general problem solving/task completion ability.

Only if you've already defined intelligence as not domain-specific in the first place. Conversely, meta-cognition about a person's own learning processes could help them learn faster in general, which has many varied applications.

comment by jimrandomh · 2010-09-03T13:30:47.316Z · LW(p) · GW(p)

It seems to me, based on purely anecdotal experience, that people in this community are unusually prone to feeling that they're stupid if they do badly at something.

This is certainly true of me, but I try to make sure that the positive feeling of having identified the mistakes and improved outweighs the negative feeling of having needed the improvement. Tsuyoku Naritai!

comment by Daniel_Burfoot · 2010-09-03T03:47:37.295Z · LW(p) · GW(p)

I don't mean to say that talent doesn't matter: things such as intelligence matter more than Adams gives them credit for

I think the relative contribution of intelligence vs. practice varies substantially depending on the nature of the particular task. A key problem is to identify tasks as intelligence-dominated (the smart guy always wins) vs. practice-dominated (the experienced guy always wins).

As a first observation about this problem, notice that clearly definable or objective tasks (chess, pool, basketball) tend to be practice-dominated, whereas more ambiguous tasks (leadership, writing, rationality) tend to be intelligence-dominated.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-09-03T08:38:20.135Z · LW(p) · GW(p)

I think the relative contribution of intelligence vs. practice varies substantially depending on the nature of the particular task.

This is true. Intelligence research has shown that intelligence is more useful for more complex tasks, see e.g. Gottfredson 2002.

comment by [deleted] · 2010-09-09T02:33:41.744Z · LW(p) · GW(p)

I like this anecdote.

I never valued intelligence relative to practice, thanks to an upbringing that focused pretty heavily on the importance of effort over talent. I'm more likely to feel behind, insufficiently knowledgeable to the point that I'm never going to catch up. I don't see why it's necessarily a cheerful observation that practice makes a big difference to performance. It just means that you'll never be able to match the person who started earlier.

comment by Houshalter · 2010-09-02T22:28:40.122Z · LW(p) · GW(p)

Yes, I know that schools will never assign eight-ball for homework. But maybe there is some kid-friendly way to teach the same lesson.

Make them play some kind of simplified RPG until they realise the only achievement is how much time they put into doing mindless repetitive tasks.

Replies from: mattnewport, Sniffnoy
comment by mattnewport · 2010-09-02T22:34:37.899Z · LW(p) · GW(p)

Make them play some kind of simplified RPG until they realise the only achievement is how much time they put into doing mindless repetitive tasks.

I imagine lots of kids play Farmville already.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-09-03T08:53:06.262Z · LW(p) · GW(p)

Those games don't really improve any sort of skill, though, and neither does anyone expect them to. To teach kids this, you need a game where you as a player pretty much never stop improving, so that having spent more hours on the game actually means you'll beat anyone who has spent less.

Go might work.

Replies from: rwallace
comment by rwallace · 2010-09-03T12:57:16.046Z · LW(p) · GW(p)

There are schools that teach Go intensively from an early age, so that a 10-year-old student from one of those schools is already far better than a casual player like me will ever be, and it just keeps going up from there. People don't seem to get tired of it.

Every time I contemplate that, I wish all the talent thus spent could be spent instead on schools providing similarly intensive teaching in something useful like science and engineering. What could be accomplished if you taught a few thousand smart kids to be dan-grade scientists by age 10 and kept going from there? I think it would be worth finding out.

Replies from: Christian_Szegedy, NihilCredo, timtyler
comment by Christian_Szegedy · 2010-09-08T07:08:36.792Z · LW(p) · GW(p)

I agree with you. I also think that there are several reasons for that:

First, competitive games (intellectual or physical sports) are easier to select and train for, since the objective function is much clearer.

The other reason is more cultural: if you train your child for something more useful like science or mathematics, then people will say: "Poor kid, do you try to make a freak out of him? Why can't he have a childhood like anyone else?" Traditionally, there is much less opposition against music, art or sport training. Perhaps they are viewed as "fun activities."

Thirdly, it also seems that academic success is a function of more variables: communication skills, motivation, perspective, taste, wisdom, luck, etc. So early training gives much less of a head start than in a more constrained area like sports or music, where it is almost mandatory for success (in some of those areas, age 10, or even 6, is almost too late to begin seriously).

comment by NihilCredo · 2010-09-06T01:43:50.911Z · LW(p) · GW(p)

A somewhat related, impactful graph.

Of course, human effort and interest is far from perfectly fungible. But your broader point retains a lot of validity.

Replies from: Houshalter
comment by Houshalter · 2010-09-06T03:22:36.918Z · LW(p) · GW(p)

Yes, but what would it matter if 200 billion hours were spent refining Wikipedia? There is only so much knowledge you can pump into it. I don't think that's a fair comparison.

Replies from: AdeleneDawner
comment by AdeleneDawner · 2010-09-06T10:42:29.551Z · LW(p) · GW(p)

So what else could we also accomplish? I didn't read it as 'wikipedia could be 2,000 times better', but 'we could have 2,000 wikipedia-grade resources'. (Which is probably also not true - we'd run out of low-hanging fruit. Still.)

comment by timtyler · 2010-09-08T06:41:15.272Z · LW(p) · GW(p)

Go is useful, I figure. As games go, it is one of the best. Perhaps computer games will one day surpass it - but, in many ways, that hasn't happened yet.

comment by Sniffnoy · 2010-09-03T19:32:38.875Z · LW(p) · GW(p)

There's a large difference between the "leveling up" in such games, where you gain new in-game capabilities, and actually getting better, where your in-game capabilities stay the same but you learn to use them more effectively.

ETA: I guess perhaps a better way of saying it is, there's a large difference between the causal chains time->winning, and time->skill->winning.

comment by Jonathan_Graehl · 2010-09-24T23:00:35.229Z · LW(p) · GW(p)

I'm guilty of a sort of fixation on IQ (not actual scores or measurements of it). I have an unhealthy interest in food, drugs and exercises (physical and mental) that are purported to give some incremental improvement. I see this in quite a few folks here as well.

To actually accomplish something, more important than these incremental IQ differences are: effective high-level planning and strategy, practice, time actually spent trying, finding the right collaborators, etc.

I started playing around with some IQ-test-like games lately and was initially a little let down with how low my performance (percentile, not absolute) was on some tasks at first. I now believe that these tasks are quite specifically trainable (after a few tries I may improve suddenly, and beyond that I could steadily increase my performance with work, but choose not to), and that the population actually includes quite a few well-practiced high-achievers. At least, I prefer to console myself with such thoughts.

But, seeing myself scored as not-so-smart in some ways, I started to wonder what difference it makes to earn a gold star that says you compute faster than others, if you don't actually do anything with it. Most people probably grow out of such rewards at a younger age than I did.

comment by Wei Dai (Wei_Dai) · 2010-09-02T23:51:21.005Z · LW(p) · GW(p)

But I've noticed in many people (myself included) a definite tendency to overvalue intelligence relative to practice.

I'm not sure I agree with that. In what areas do you see overvalue of intelligence relative to practice and why do you think there really is overvalue in those areas?

I've noticed for example that people's abilities to make good comments on LW do not seem to improve much with practice and feedback from votes (beyond maybe the first few weeks or so). Does this view represent an overvalue of intelligence?

Replies from: Kaj_Sotala, wedrifid
comment by Kaj_Sotala · 2010-09-03T08:45:02.305Z · LW(p) · GW(p)

In what areas do you see overvalue of intelligence relative to practice and why do you think there really is overvalue in those areas?

I should probably note that my overvaluing of intelligence is more of an alief than a belief. Mostly it shows up if I'm unable to master (or at least get a basic proficiency in) a topic as fast as I'd like to. For instance, on some types of math problems I get quickly demotivated and feel that I'm not smart enough for them, when the actual problem is that I haven't had enough practice on them. This is despite the intellectual knowledge that I could master them, if I just had a bit more practice.

I've noticed for example that people's abilities to make good comments on LW do not seem to improve much with practice and feedback from votes (beyond maybe the first few weeks or so). Does this view represent an overvalue of intelligence?

That sounds about right, though I would note that there's a huge amount of background knowledge that you need to absorb on LW. Not just raw facts, either, but ways of thinking. The lack of improvement might partially be because some people have absorbed that knowledge when they start posting and some haven't, and absorbing it takes such a long time that the improvement happens too slowly to notice.

comment by wedrifid · 2010-09-03T09:25:10.797Z · LW(p) · GW(p)

I've noticed for example that people's abilities to make good comments on LW do not seem to improve much with practice and feedback from votes (beyond maybe the first few weeks or so). Does this view represent an overvalue of intelligence?

That's interesting. I hadn't got that impression but I haven't looked too closely at such trends either. There are a few people whose comments have improved dramatically, but the difference seems to be social development and not necessarily their rational thinking - so perhaps you have a specific kind of improvement in mind.

I'm interested in any further observations on the topic by yourself or others.

comment by Wei Dai (Wei_Dai) · 2010-09-10T07:27:28.128Z · LW(p) · GW(p)

An Alternative To "Recent Comments"

For those who may be having trouble keeping up with "Recent Comments" or finding the interface a bit plain, I've written a Greasemonkey script to make it easier/prettier. Here is a screenshot.

Explanation of features:

  • loads and threads up to 400 most recent comments on one screen
  • use [↑] and [↓] to mark favored/disfavored authors
  • comments are color coded based on author/points (pink) and recency (yellow)
  • replies to you are outlined in red
  • hover over [+] to view single collapsed comment
  • hover over/click [^] to highlight/scroll to parent comment
  • marks comments read (grey) based on scrolling
  • shows only new/unread comments upon refresh
  • date/time are converted to your local time zone
  • click comment date/time for permalink

To install, first get Greasemonkey, then click here. Once that's done, use this link to get to the reader interface.

ETA: I've placed the script in the public domain. Chrome is not supported.

Replies from: Wei_Dai, NihilCredo, ata, andreas, Morendil, wedrifid
comment by Wei Dai (Wei_Dai) · 2010-09-10T08:35:57.088Z · LW(p) · GW(p)

Here's something else I wrote a while ago: a script that gives all the comments and posts of a user on one page, so you can save them to a file or search more easily. You don't need Greasemonkey for this one, just visit http://www.ibiblio.org/weidai/lesswrong_user.php

I put in a 1-hour cache to reduce server load, so you may not see the user's latest work.

comment by NihilCredo · 2010-09-17T20:56:36.431Z · LW(p) · GW(p)

May I suggest submitting the script to userscripts.org? It will make it easier for future LessWrong readers to find it, as well as detectable by Greasefire.

comment by ata · 2010-09-10T21:12:22.496Z · LW(p) · GW(p)

Nice! Thanks.

Edit: "shows only new/unread comments upon refresh" — how does it determine readness?

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-09-10T21:59:10.456Z · LW(p) · GW(p)

Any comment that has been scrolled off the screen for 5 seconds is considered read. (If you scroll back, you can see that the text and border have turned from black to gray.) If you scroll to the bottom and stay there for 5 seconds, all comments are marked read.
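
For anyone curious about the mechanics, here is a minimal hypothetical sketch of that kind of scroll-based read tracking - not the actual script; the selector, the 'read' class, and the assumption that each comment element has an id are invented for illustration:

```javascript
// Hypothetical sketch of scroll-based read marking (not the real script).
// Assumes each comment is a '.comment' element with an id; class names are invented.
var pendingTimers = {};

function checkComments() {
  var comments = document.querySelectorAll('.comment:not(.read)');
  var atBottom = window.innerHeight + window.scrollY >= document.body.scrollHeight;
  for (var i = 0; i < comments.length; i++) {
    (function (el) {
      var offScreen = el.getBoundingClientRect().bottom < 0; // above the viewport
      if (offScreen || atBottom) {
        if (!pendingTimers[el.id]) {
          // Start a 5-second countdown the first time the comment leaves view.
          pendingTimers[el.id] = setTimeout(function () {
            el.classList.add('read'); // CSS elsewhere turns read comments grey
          }, 5000);
        }
      } else if (pendingTimers[el.id]) {
        // Scrolled back into view before 5 seconds elapsed: cancel the countdown.
        clearTimeout(pendingTimers[el.id]);
        delete pendingTimers[el.id];
      }
    })(comments[i]);
  }
}

window.addEventListener('scroll', checkComments);
```

The real script presumably differs in the details, but a 5-second timer plus cancel-on-scroll-back captures the behaviour described above.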

comment by andreas · 2010-09-10T21:08:56.400Z · LW(p) · GW(p)

Thanks for coding this!

Currently, the script does not work in Chrome (which supports Greasemonkey out of the box).

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-09-10T22:07:44.252Z · LW(p) · GW(p)

From http://dev.chromium.org/developers/design-documents/user-scripts

  • Chromium does not support @require, @resource, unsafeWindow, GM_registerMenuCommand, GM_setValue, or GM_getValue
  • GM_xmlhttpRequest is same-origin only

My script uses 4 out of these 6 features, and also cross-domain GM_xmlhttpRequest (the comments are actually loaded from a PHP script hosted elsewhere, because LW doesn't seem to provide a way to grab 400 comments at once), so it's going to have to stay Firefox-only for the time being.
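
For readers who haven't used it, a cross-domain fetch through Greasemonkey's API looks roughly like the sketch below; the endpoint URL and the assumption that it returns JSON are placeholders, not the script's real details:

```javascript
// Hypothetical illustration of Greasemonkey's cross-domain request API.
// The URL and response format are invented; only the call shape is real.
GM_xmlhttpRequest({
  method: 'GET',
  url: 'http://example.org/lw_recent_comments.php?count=400',
  onload: function (response) {
    var comments = JSON.parse(response.responseText);
    // ... thread, color-code and render the comments here ...
  },
  onerror: function () {
    alert('Could not load recent comments.');
  }
});
```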

Oh, in case anyone developing LW is reading this, I release my script into the public domain, so feel free to incorporate the features into LW itself.

comment by Morendil · 2010-09-10T13:45:31.168Z · LW(p) · GW(p)

Would you consider making display of author names and points a toggle and hidden by default, à la Anti-Kibitzer?

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-09-10T18:50:21.148Z · LW(p) · GW(p)

I've added some code to disable the author/points-based coloring when Anti-Kibitzer is turned on in your account preferences. (Names and points are already hidden by the Anti-Kibitzer.) Here is version 1.0.1.

More feature requests or bug reports are welcome.

comment by wedrifid · 2010-09-10T07:58:41.582Z · LW(p) · GW(p)

Sounds fantastic!

Err... but the link is broken.

comment by Spurlock · 2010-09-01T12:33:27.242Z · LW(p) · GW(p)

Not sure what the current state of this issue is, apologies if it's somehow moot.

I would like to say that I strongly feel Roko's comments and contributions (save one) should be restored to the site. Yes, I'm aware that he deleted them himself, but it seems to me that he acted hastily and did more harm to the site than he probably meant to. With his permission (I'm assuming someone can contact him), I think his comments should be restored by an admin.

Since he was such a heavy contributor, and his comments abound(ed) on the sequences (particularly Metaethics, if memory serves), it seems that a large chunk of important discussion is now full of holes. To me this feels like a big loss. I feel lucky to have made it through the sequences before his egress, and I think future readers might feel left out accordingly.

So this is my vote that, if possible, we should proactively try to restore his contributions up to the ones triggering his departure.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-09-01T15:29:13.187Z · LW(p) · GW(p)

He did give permission to restore the posts (I didn't ask about comments), when I contacted him originally. There remains the issue of someone being technically able to restore these posts.

Replies from: matt
comment by matt · 2010-09-02T04:16:28.562Z · LW(p) · GW(p)

We have the technical ability, but it's not easy. We wouldn't do it without Roko's and Eliezer's consent, and a lot of support for the idea. (I wouldn't expect Eliezer to consent to restoring the last couple of days of posts/comments, but we could restore everything else.)

Replies from: wedrifid, Douglas_Knight
comment by wedrifid · 2010-09-02T04:22:04.854Z · LW(p) · GW(p)

It occurs to me that there is a call for someone unaffiliated to maintain a (scraped) backup of everything that is posted in order to prevent such losses in the future.

comment by Douglas_Knight · 2010-09-17T18:14:37.752Z · LW(p) · GW(p)

Surely it would be easy to restore just Roko's posts, leaving his comments dead.

Also, if you don't end up restoring them, it's rather awkward that he's in the top contributors list, with a practically dead link.

Replies from: matt
comment by matt · 2010-09-17T20:20:56.057Z · LW(p) · GW(p)

It's doable. Are you now talking to the wrong person?

[ETA: Sorry - reading that back it was probably rude - I meant to say something closer to "It's doable, but I still need Eliezer's okay before I'll do anything."]

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-09-27T05:20:10.562Z · LW(p) · GW(p)

Okay granted. I also think this would be a good idea. Actually, I'd be against having an easy way to delete all contributions in general, it's too easy to wreck things that way.

Replies from: wedrifid
comment by wedrifid · 2010-09-27T06:11:42.489Z · LW(p) · GW(p)

Actually, I'd be against having an easy way to delete all contributions in general, it's too easy to wreck things that way.

Are you saying that the only person who should be conveniently able to remove other people's contributions is you?

People's comments are their own. It is unreasonable to keep them up if the author chooses otherwise. Fortunately, things that have been posted on the internet tend to be hard to destroy. Archives can be created and references made to material that has been removed (for example, see RationalWiki). This means that a blogger cannot expect to be able to remove their words from the public record, even though they can certainly stop publishing them themselves, removing their ongoing implied support of those words.

I actually do support keeping an archive of contributions and it would be convenient if LW had a way to easily restore lost content. It would have to be in a way that was either anonymized ("deleted user"?) or gave some clear indication that the post is by "past-Roko", or "archived-Roko" rather than pretending that it is by the author himself, in the present tense. That is, it would acknowledge the futility of deleting information on the internet but maintain common courtesy to the author. There is no need to disempower the author ourselves by removing control over their own account when the very nature of the internet makes the deleting efforts futile anyway.

Replies from: XiXiDu, David_Gerard
comment by XiXiDu · 2010-09-27T11:41:39.269Z · LW(p) · GW(p)

What is necessary is just that EY thinks about a way to tell people why something had to be deleted, without referring in detail to what has been deleted, and why they should trust him on that. I see that freedom of speech has to end somewhere. Are we going to publish detailed blueprints for bio weapons? No. I just don't see how EY wants to accomplish that when, as in the case of the Roko incident, you cannot even talk about what has been deleted in an abstract way.

Convince me not to spread the original posts and comments as much as I can? How are you going to do that? I already posted another comment yesterday with the files, which I deleted again after thinking about it. The danger is just too far-off and fuzzy to keep me from playing with the content in question without thinking twice.

What I mean is that I personally have no problem with censorship, if I can see why it had to be done.

Replies from: khafra
comment by khafra · 2010-09-27T15:42:04.119Z · LW(p) · GW(p)

I've been thinking about it by moving domains: Imagine that, instead of communicating by electromagnetic or sound waves, we encoded information into the DNA of custom microbes and exchanged them. Would there be any safe way to talk about even the specifics of why a certain bioweapon couldn't be discussed?

I don't think there is. At some point in weaponized conversation, there's a binary choice between inflicting it on people and censoring it.

Replies from: Alicorn, XiXiDu
comment by Alicorn · 2010-09-27T15:43:16.790Z · LW(p) · GW(p)

Imagine that, instead of communicating by electromagnetic or sound waves, we encoded information into the DNA of custom microbes and exchanged them.

Like the descoladores!

Replies from: khafra
comment by khafra · 2010-09-27T16:19:06.792Z · LW(p) · GW(p)

Hah, I didn't realize someone else had already imagined it. Generalizing from multiple, independently-generated fictional evidence?

comment by XiXiDu · 2010-09-27T18:56:13.185Z · LW(p) · GW(p)

Awesome reply, thanks :-)

Didn't know about this either, thanks Alicorn.

I wonder how the SIAI is going to resolve that problem if it caused nightmares inside the SIAI itself. Is EY going to solve it all by himself? If he was going to discuss it, then with whom, since he doesn't know who's strong enough beforehand? That's just crazy. Time will end soon anyway, so why worry I guess. Bye.

comment by David_Gerard · 2010-12-06T12:47:44.571Z · LW(p) · GW(p)

Archives can be created and references made to material that has been removed (for example, see RationalWiki).

And not just the example you cite. RationalWiki has written an entire MediaWiki extension specifically for the purpose of saving snapshots of Web pages, as people trying to cover their tracks happens a lot on some sites we run regular news pages on (Conservapedia, Citizendium).

Memory holing gets people really annoyed, because it's socially extremely rude. It's the same problem as editing a post to make a commentator look foolish. There may be general good reasons for memory holing, but it must be done transparently - there is too much precedent for presuming bad faith unless otherwise proven.

Replies from: gwern
comment by gwern · 2010-12-06T18:38:08.114Z · LW(p) · GW(p)

Seems like a heavy-weight solution. I'd just use http://webcitation.org/ (probably combined with my little program, archiver).

Replies from: David_Gerard
comment by David_Gerard · 2010-12-07T12:00:50.955Z · LW(p) · GW(p)

A simple mechanism to put the saved evidence in the same place as the assertions concerning it, rather than out in the cloud, is not onerous in practice. Mind you, most of the disk load for RW is the images ...

comment by homunq · 2010-09-01T15:52:49.237Z · LW(p) · GW(p)

I had a top-level post which touched on an apparently-forbidden idea downvoted to a net of around -3 and then deleted. This left my karma pinned (?) at 0 for a few months. I am not sure of the reasons for this, but suspect that the forbidden idea was partly to blame.

My karma is now back up to where I could make a top-level post. Do people think that a discussion forum on the moderation and deletion policies would be beneficial? I do, even if we all had to do silly dances to avoid mentioning the specifics of any forbidden idea(s). In my opinion, such dances are both silly and unjustified; but I promise that I'd do them and encourage them if I made such a post, out of respect for the evident opinions of others, and for the asymmetrical (though not one-sided) nature of the alleged danger.

I would not be offended if someone else "took the idea" and made such a post. I also wouldn't mind if the consensus is that such a post is not warranted. So, what do you think?

Replies from: Perplexed, PhilGoetz, xamdam, None, Airedale, Emile, billswift
comment by Perplexed · 2010-09-01T18:47:49.780Z · LW(p) · GW(p)

Do people think that a discussion forum on the moderation and deletion policies would be beneficial?

I would like to see a top-level post on moderation policy. But I would like for it to be written by someone with moderation authority. If there are special rules for discussing moderation, they can be spelled out in the post and commenters can abide by them.

As a newcomer here, I am completely mystified by the dark hints of a forbidden topic. Every hypothesis I can come up with as to why a topic might be forbidden founders when I try to reconcile it with the fact that the people doing the forbidding are not stupid.

Self-censorship to protect our own mental health? Stupid. Secrecy as a counter-intelligence measure, to safeguard the fact that we possess some counter-measure capability? Stupid. Secrecy simply because being a member of a secret society is cool? Stupid, but perhaps not stupid enough to be ruled out. On the other hand, I am sure that I haven't thought of every possible explanation.

It strikes me as perfectly reasonable if certain topics are forbidden because discussion of such topics has historically been unproductive, has led to flame wars, etc. I have been wandering around the internet long enough to understand and even appreciate somewhat arbitrary, publicly announced moderation policies. But arbitrary and secret policies are a prescription for resentment and for time wasted discussing moderation policies.

Edit: typo correction - insert missing words

Replies from: wnoise, homunq
comment by wnoise · 2010-09-01T19:50:16.885Z · LW(p) · GW(p)

Self-censorship to protect our own mental health? Stupid.

My gloss on it is that this is at best a minor part, though it figures in.

The topic is an idea that has horrific implications that are supposedly made more likely the more one thinks about it. Thinking about it in order to figure out what it may be is a bad idea because you may come up with something else. And if the horrific is horrific enough, even a small rise in the probability of it happening would be very bad in expectation.

More on why many won't think it dangerous at all. This doesn't directly point anything out, but any details do narrow the search-space: V fnl fhccbfrqyl orpnhfr lbh unir gb ohl va gb fbzr qrpvqrqyl aba-znvafgernz vqrnf gung ner pbzzba qbtzn urer.

I personally don't buy this, and think the censorship is an overblown reaction. Accepting it is definitely not crazy, however, especially given the stakes, and I'm willing to self-censor to some degree, even though I hate the heavy-handed response.

Replies from: cata, homunq
comment by cata · 2010-09-01T20:00:09.960Z · LW(p) · GW(p)

Another perspective: I read the forbidden idea, understood it, but I have no sense of danger because (like the majority of humans) I don't really live my life in a way that's consistent with all the implications of my conscious rational beliefs. Even though it sounded like a convincing chain of reasoning to me, I find it difficult to have a personal emotional reaction or change my lifestyle based on what seem to be extremely abstract threats.

I think only people who are very committed rationalists would find that there are topics like this which could be mental health risks. Of course, that may include much of the LW population.

Replies from: Perplexed, Kaj_Sotala
comment by Perplexed · 2010-09-01T20:47:58.603Z · LW(p) · GW(p)

How about an informed consent form:

  • (1) I know that the SIAI mission is vitally important.
  • (2) If we blow it, the universe could be paved with paper clips.
  • (3) Or worse.
  • (4) I hereby certify that points 1 & 2 do not give me nightmares.
  • (5) I accept that if point 3 gives me nightmares that points 1 and 2 did not give me, then I probably should not be working on FAI and should instead go find a cure for AIDS or something.
Replies from: Snowyowl, wedrifid
comment by Snowyowl · 2010-09-02T13:27:43.365Z · LW(p) · GW(p)

I feel you should detail point (1) a bit more (explain in more detail what the SIAI intends to do), but I agree with the principle. Upvoted.

comment by wedrifid · 2010-09-02T03:10:12.171Z · LW(p) · GW(p)

I like it!

Although 5 could be easily replaced by "Go earn a lot of money in a startup, never think about FAI again but still donate money to SIAI because you remember that you have some good reason to that you don't want to think about explicitly."

comment by Kaj_Sotala · 2010-09-02T20:57:51.649Z · LW(p) · GW(p)

I read the idea, but it seemed to have basically the same flaw as Pascal's wager does. On that ground alone it seemed like it shouldn't be a mental risk to anyone, but it could be that I missed some part of the argument. (Didn't save the post.)

Replies from: timtyler
comment by timtyler · 2010-09-08T06:57:28.044Z · LW(p) · GW(p)

My analysis was that it described a real danger. Not a topic worth banning, of course - but not as worthless a danger as the one that arises in Pascal's wager.

comment by homunq · 2010-09-01T23:27:12.162Z · LW(p) · GW(p)

My gloss on it is that this is at best a minor part, though it figures in.

I think that, even if this is a minor part of the reasoning for those who (unlike me) believe in the danger, it could easily be the best, most consensus* basis for an explicit deletion policy. I'd support such a policy, and definitely think a secret policy is stupid for several reasons.

*no consensus here will be perfect.

comment by homunq · 2010-09-01T19:29:12.453Z · LW(p) · GW(p)

I think it's safe to tell you that your second two hypotheses are definitely not on the right track.

comment by PhilGoetz · 2010-09-01T16:26:39.194Z · LW(p) · GW(p)

If there's just one topic that's banned, then no. If it's increased to 2 topics - and "No riddle theory" is one I hadn't heard before - then maybe. Moderation and deletion is very rare here.

I would like moderation or deletion to include sending an email to the affected person - but this relies on the user giving a good email address at registration.

Replies from: Emile, homunq
comment by Emile · 2010-09-01T16:40:01.382Z · LW(p) · GW(p)

If it's increased to 2 topics - and "No riddle theory" is one I hadn't heard before - then maybe.

I'm pretty sure that "riddle theory" is a reference to Roko's post, not a new banned topic.

comment by homunq · 2010-09-01T16:32:40.491Z · LW(p) · GW(p)

My registration email is good, and I received no such email. I can also be reached under the same user name using English wikipedia's "contact user" function (which connects to the same email.)

Suggestions like your email idea would be the main purpose of having the discussion (here or on a top-level post). I don't think that some short-lived chatter would change a strongly-held belief, and I have no desire nor capability of unseating the benevolent-dictator-for-life. However, I think that any partial steps towards epistemic glasnost, such as an email to deleted post authors or at least their ability to view the responses to their own deleted post, would be helpful.

comment by xamdam · 2010-09-05T19:21:48.599Z · LW(p) · GW(p)

Do people think that a discussion forum on the moderation and deletion policies would be beneficial?

Yes. I think that lack of policy 1) reflects poorly on the objectivity of moderators, even if in appearance only, and 2) diverts too much energy into nonproductive discussions.

Replies from: Relsqui
comment by Relsqui · 2010-09-16T22:08:42.711Z · LW(p) · GW(p)

reflects poorly on the objectivity of moderators

As a moderator of a moderately large social community, I would like to note that moderator objectivity is not always the most effective way to reach the desired outcome (an enjoyable, productive community). Yes, we've compiled a list of specific actions that will result in warnings, bans, and so forth, but someone will always be able to think of a way to be an asshole which isn't yet on our list--or which doesn't quite match the way we worded it--or whatever. To do our jobs well, we need to be able to use our judgment (which is the criterion for which we were selected as moderators).

This is not to say that I wouldn't like to see a list of guidelines for acceptable and unacceptable LW posts. But I respect the need for some flexibility on the editing side.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-09-20T15:25:21.437Z · LW(p) · GW(p)

Any thoughts about whether there are differences between communities with a lot of specific rules and those with a more general "be excellent to each other" standard?

Replies from: Relsqui
comment by Relsqui · 2010-09-20T18:30:28.672Z · LW(p) · GW(p)

That's a really good question; it makes me want to do actual experiments with social communities, which I'm not sure how you'd set up. Failing that, here are some ideas about what might happen:

Moderators of a very strictly rule-based community might easily find themselves in a walled garden situation just because their hands are tied. (This is the problem we had in the one I mentioned, before we made a conscious decision to be more flexible.) If someone behaves poorly, they have no justification to wield to eject that person. In mild cases they'll tolerate it; in major cases, they'll make an addition to the rules to cover the new infraction. Over time the rules become an unwieldy tome, intimidating users who want to behave well, reducing the number of people who actually read them, and increasing the chance of accidental infractions. Otherwise-useful participants who make a slip get a pass, leading to cries of favoritism from users who'd had the rules brought down on them before--or else they don't, and the community loses good members.

This suggests a corollary of my earlier admonition for flexibility: What written rules there are should be brief and digestible, or at least accompanied by a summary. You can see this transition by comparing the long form of one community's rules, complete with CSS and anchors that let you link to a specific infraction, and the short form which is used to give new people a general idea of what's okay and not okay.

The potential flaw in the "be excellent to each other" standard is disagreement about what's excellent--either amongst the moderators, or between the moderators and the community. For this reason, I'd expect it to work better in smaller communities with fewer of either. (This suggests another corollary--smaller communities need fewer written rules--which I suspect is true but with less confidence than the previous one.) If the moderators disagree amongst themselves, users will rightly have no idea what's okay and isn't; when they're punished for something which was okay before, they'll be frustrated and likely resentful, neither of which is conducive to a pleasant environment. If the moderators agree but the users disagree with their consensus, well, one set or the other will have to change.

Of course, in online communities, simple benevolent dictatorships are a popular choice. This isn't surprising, given that there is often exactly one person with real power (e.g. server access), which they may or may not choose to delegate. Two such channels I'm in demonstrate the differences in the above fairly well, if not perfectly (I'm not in any that really relies on a strict code of rules). One is very small (about a dozen people connected as I write this), and has exactly one rule*: "Be awesome." The arbiter of awesome is the channel owner. Therefore, the channel is a collection of people who suit him. Since there is no other principle we claim to hold to (no standard against which to measure the dictator), and he's not a jerk (obviously I don't think so, since I'm still there), it works perfectly well.

The other is the one whose rules I linked earlier. It's fairly large, but not enormous (~375 people connected right now). There are a few people who technically have power, but one to whom the channel "belongs" (the author of the work it's a fan community of). Because he has better things to do than keep an eye on it, he delegates responsibility to ops who are selected almost entirely for one quality: he predicts that they will make moderation decisions he approves of. Between that criterion and an active side channel for discussing policy, we mostly avoid the problems of moderator disagreement, and the posted rules ensure that there are very few surprises for the users.

A brief digression: That same channel owner actually did do an experiment in the moderation of a social community. He wanted to know if you could design an algorithm for a bot to moderate an IRC channel, with the goal of optimizing the signal to noise ratio; various algorithms were discussed, and one was implemented. I would call it a tentative success; the channel in question does have very good SNR when active, but it moves slowly; the trivial chatter wasn't replaced with insight, it was just removed. Also, the channel bot is supplemented by human mods, for the rare cases when the bot's enforcement is being circumvented.

The algorithm he went with is not my favorite of the ones proposed, and I'd love to see a more rigorous experiment done--the trick would be acquiring ready bodies of participants.

Anyway. If instead of experimenting on controlled social groups, we surveyed existing groups that had survived, I think we'd find a lot of small communities with no or almost no codified rules, and then a mix of rules and judgment as they got larger. There would be a cap on the quantity of written rules that were actually enforced in any size of community, and I wouldn't expect to see even one that relied 100% on a codified ruleset with no enforcer judgment at all.

(Now I kind of want to research some communities and write an article about this, although I don't think it'd be particularly relevant for LW.)

*I'm told there is actually a second one: "No capitals in the topic." This is more of a policy than a behavioral rule, though, and it began as an observation of the way things actually were.

comment by [deleted] · 2010-09-16T22:39:14.386Z · LW(p) · GW(p)

A minute in Konkvistador's mind:

Again the very evil mind shattering secret, why do I keep running into you?

This is getting old, lots of people seem to know about it. And a few even know the evil soul wrecking idea.

The truth is out there, my monkey brains can't cope with the others having a secret they're not willing to share, they may bash my skull in with a stone! I should just mass PM the guys who know about the secret in an inconspicuous way. They will drop hints, they are weak. Also traces of the relevant texts have to still be on-line.

That job advert seems to be the kind a rather small subset of organizations would put out.

That is just paranoid don't even think about that.

XXX asf ag agdlqog hh hpoq fha r wr rqw oipa wtrwz wrz wrhz. W211!!

Yay posting on Lesswrong feels like playing Call of Cthulhu!

....

These are supposed to be not only very smart, but very rational people, people you have a high opinion of, who seem to take the idea very seriously. They may be trying to manipulate you. There may be a non-trivial possibility of them being right.

....

I suddenly feel much less enthusiastic about life extension and cryonics.

Replies from: thomblake
comment by thomblake · 2010-09-16T22:53:45.440Z · LW(p) · GW(p)

I do have access to the forbidden post, and have no qualms about sharing it privately. I actually sought it out actively after I heard about the debacle, and was very disappointed when I finally got a copy to find that it was a post that I had already read and dismissed.

I don't think there's anything there, and I know what people think is there, and it lowered my estimation of the people who took it seriously, especially given the mean things Eliezer said to Roko.

Replies from: None
comment by [deleted] · 2010-09-16T23:06:05.180Z · LW(p) · GW(p)

Can I haz evil soul crushing idea plz?

But to be serious: yes, if I find the idea foolish, then the fact that people here take it seriously reduces my optimism as well, just as much as malice on the part of the Lesswrong staff or just plain real dark secrets would, since I take clippy to be a serious and very scary threat (I hope you don't take too much offence clippy, you are a wonderful poster). I should have stated that too. But to be honest it would be much less fun knowing the evil soul crushing self-fulfilling prophecy (tm); the situation around it is hilarious.

What really catches my attention however is the thought experiment of how exactly one is supposed to quarantine a very very dangerous idea. Since in the space of all possible ideas, I'm quite sure there are a few that could prove very toxic to humans.

The LW members that take it seriously are doing a horrible job of it.

Replies from: NancyLebovitz, thomblake
comment by NancyLebovitz · 2010-09-20T16:24:12.409Z · LW(p) · GW(p)

Upvoted for the cat picture.

comment by thomblake · 2010-09-16T23:08:58.476Z · LW(p) · GW(p)

Indeed, in the classic story, it was an idea whose time had come, and there was no effective means of quarantining it. And when it comes to ideas that have hit the light of day, there are always going to be those of us who hate censorship more than death.

comment by Airedale · 2010-09-01T16:35:21.685Z · LW(p) · GW(p)

I think such discussion wouldn't necessarily warrant its own top-level post, but I think it would fit well in a new Meta thread. I have been meaning to post such a thread for a while, since there are also a couple of meta topics I would like to discuss, but I haven't gotten around to it.

comment by Emile · 2010-09-01T16:37:54.767Z · LW(p) · GW(p)

Do people think that a discussion forum on the moderation and deletion policies would be beneficial?

I don't. Possible downsides are flame wars among people who support different types of moderation policies (and there are bound to be some - self-styled rebels who pride themselves on challenging the status quo and going against groupthink are not rare on the net), and I don't see any possible upsides. Having a Benevolent Dictator For Life works quite well.

See this on Meatball Wiki, that has quite a few pages on organization of Online Communities.

Replies from: homunq
comment by homunq · 2010-09-01T17:58:07.150Z · LW(p) · GW(p)

I don't want a revolution, and don't believe I'll change the mind of somebody committed not to thinking too deeply about something. I just want some marginal changes.

I think Roko got a pretty clear explanation of why his post was deleted. I don't think I did. I think everyone should. I suspect there may be others like me.

I also think that there should be public ground rules as to what is safe. I think it is possible to state such rules so that they are relatively clear to anyone who has stepped past them, somewhat informative to those who haven't, and not particularly inviting of experimentation. I think that the presence of such ground rules would allow some discussion as to the danger or non-danger of the forbidden idea and/or as to the effectiveness or ineffectiveness of suppressing it. Since I believe that the truth is "non-danger" and "ineffectiveness", and the truth will tend to win the argument over time, I think that would be a good thing.

Replies from: timtyler, Emile, JGWeissman
comment by timtyler · 2010-09-02T08:26:15.005Z · LW(p) · GW(p)

The second rule of Less Wrong is, you DO NOT talk about Forbidden Topics.

Replies from: homunq
comment by homunq · 2010-09-02T09:33:06.956Z · LW(p) · GW(p)

Your sarcasm would not be obvious if I didn't recognize your username.

Replies from: timtyler
comment by timtyler · 2010-09-02T09:57:02.964Z · LW(p) · GW(p)

Hmm - I added a link to the source, which hopefully helps to explain.

Replies from: homunq
comment by homunq · 2010-09-02T15:41:03.857Z · LW(p) · GW(p)

Quotes can be used sarcastically or not.

Replies from: timtyler
comment by timtyler · 2010-09-02T19:51:14.427Z · LW(p) · GW(p)

I don't think I was being sarcastic. I won't take the juices out of the comment by analysing it too completely - but a good part of it was the joke of comparing Less Wrong with Fight Club.

We can't tell you what materials are classified - that information is classified.

comment by Emile · 2010-09-02T08:21:43.883Z · LW(p) · GW(p)

I think Roko got a pretty clear explanation of why his post was deleted. I don't think I did.

It's probably better to solve this by private conversation with Eliezer, than by trying to drum up support in an open thread.

Too much meta discussion is bad for a community.

Replies from: homunq
comment by homunq · 2010-09-02T09:30:06.414Z · LW(p) · GW(p)

The thing I'm trying to drum up support for is an incremental change in current policy; for instance, a safe and useful version of the policy being publicly available. I believe that's possible, and I believe it is more appropriate to discuss this in public.

(Actually, since I've been making noise about this, and since I've promised not to reveal it, I now know the secret. No, I won't tell you, I promised that. I won't even tell who told me, even though I didn't promise not to, because they'd just get too many requests to reveal it. But I can say that I don't believe in it, and also that I think [though others might disagree] that a public policy could be crafted which dealt with the issue without exacerbating it, even if it were real.)

Replies from: khafra
comment by khafra · 2010-09-02T14:05:56.433Z · LW(p) · GW(p)

How much evidence for the existence of a textual Langford Basilisk would you require before considering it a bad idea to write about it in detail?

comment by JGWeissman · 2010-09-01T18:13:37.291Z · LW(p) · GW(p)

the truth is "non-danger"

Normally yes, but this case involves a potentially adversarial agent with intelligence and optimizing power vastly superior to your own, and which cares about your epistemic state as well as your actions.

Replies from: homunq, None
comment by homunq · 2010-09-01T18:49:44.385Z · LW(p) · GW(p)

Look, my post addressed these issues, and I'd be happy to discuss them further, if the ground rules were clear. Right now, we're not having that discussion; we're talking about whether that discussion is desirable, and if so, how to make it possible. I think that the truth will out; if you're right, you'll probably win the discussion. So although we disagree on danger, we should agree on discussing danger within some well-defined ground rules which are comprehensibly summarized in some safe form.

Replies from: wedrifid
comment by wedrifid · 2010-09-02T03:14:49.314Z · LW(p) · GW(p)

I think that the truth will out

Really? Go read the sequences! ;)

comment by [deleted] · 2010-09-16T23:19:38.457Z · LW(p) · GW(p)

Hell? That's it?

comment by billswift · 2010-09-05T11:07:30.683Z · LW(p) · GW(p)

Thanks. More reason to waste less time here.

Intelligence is your primary means of survival.

People who keep secrets are your enemies.

Keeping secrets is a waste of time.

I have been reading OB and LW from about a month of OB's founding, but this site has been slipping for over a year now. I don't even know what specifically is being discussed; not even being able to mention the subject matter of the banned post, and having secret rules, is outstandingly stupid. Maybe I'll come back again in a bit to see if the "moderators" have grown up.

Replies from: NihilCredo
comment by NihilCredo · 2010-09-05T11:53:35.232Z · LW(p) · GW(p)

As a rather new reader, my impression has been that LW suffers from a moderate case of what in the less savory corners of the Internet would be known as CJS (circle-jerking syndrome).

At the same time, if one is willing to play around this aspect (which is as easy as avoiding certain threads and comment trees), there are discussion possibilities that, to the best of my knowledge, are not matched anywhere else - specifically, the combination of a low effort-barrier to entry, a high average thought-to-post ratio, and a decent community size.

comment by Liron · 2010-09-01T05:50:16.547Z · LW(p) · GW(p)

I made this site last month: areyou1in1000000.com

Replies from: Snowyowl
comment by Snowyowl · 2010-09-01T11:54:33.603Z · LW(p) · GW(p)

It seems that I am not one in a million. Pity.

Replies from: Erik, Oscar_Cunningham
comment by Erik · 2010-09-03T07:38:53.401Z · LW(p) · GW(p)

At least you're not alone.

comment by Oscar_Cunningham · 2010-09-01T12:56:03.231Z · LW(p) · GW(p)

Me neither. :(

comment by Kaj_Sotala · 2010-09-01T16:46:23.740Z · LW(p) · GW(p)

Neuroskeptic's Help, I'm Being Regressed to the Mean is the clearest explanation of regression to the mean that I've seen so far.

Replies from: Snowyowl, Vladimir_M
comment by Snowyowl · 2010-09-02T13:24:54.173Z · LW(p) · GW(p)

Wow. I thought I understood regression to the mean already, but the "correlation between X and Y-X" is so much simpler and clearer than any explanation I could give.
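
To spell that out with a minimal sketch (my notation, assuming the two measurements X and Y have the same variance $\sigma^2$ and correlation $\rho$):

\[
\operatorname{Cov}(X,\, Y - X) \;=\; \operatorname{Cov}(X, Y) - \operatorname{Var}(X) \;=\; (\rho - 1)\,\sigma^2 \;\le\; 0
\]

So unless the two measurements are perfectly correlated, a high score on X predicts a negative value of Y - X purely as a statistical artifact, which is exactly the negative correlation the post points at.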

comment by Vladimir_M · 2010-09-02T04:00:14.572Z · LW(p) · GW(p)

When I tried making sense of this topic in the context of the controversies over IQ heritability, the best reference I found was this old paper:

Brian Mackenzie, Fallacious use of regression effects in the I.Q. controversy, Australian Psychologist 15(3):369-384, 1980

Unfortunately, the paper failed to achieve any significant impact, probably because it was published in a low-key journal long before Google, and it's now languishing in complete obscurity. I considered contacting the author to ask if it could be put online for open access -- it would be definitely worth it -- but I was unable to find any contact information; it seems like he retired long ago.

There is also another paper with a pretty good exposition of this problem, which seems to be a minor classic, and is still cited occasionally:

Lita Furby, Interpreting regression toward the mean in developmental research, Developmental Psychology, 8(2):172-179, 1973

comment by DSimon · 2010-09-07T20:28:02.703Z · LW(p) · GW(p)

I'm interested in video game design and game design in general, and also in raising the rationality waterline. I'd like to combine these two interests: to create a rationality-focused game that is entertaining or interesting enough to become popular outside our clique, but that can also effectively teach a genuinely useful skill to players.

I imagine that it would consist of one or more problems which the player would have to be rational in some particular way to solve. The problem has to be:

  • Interesting: The prospect of having to tackle the problem should excite the player. Very abstract or dry problems would not work; very low-interaction problems wouldn't work either, even if cleverly presented (i.e. you could do Newcomb's problem as a game with plenty of lovely art and window dressing... but the game itself would still only be a single binary choice, which would quickly bore the player).

  • Dramatic in outcome: The difference between success and failure should be great. A problem in which being rational gets you 10 points but acting typically gets you 8 points would not work; the advantage of applying rationality needs to be very noticeable.

  • Not rigged (or not obviously so): The player shouldn't have the feeling that the game is designed to directly reward rationality (even though it is, in a sense). The player should think that they are solving a general problem with rationality as their asset.

  • Not allegorical: I don't want to raise any likely mind-killing associations in the player's mind, like politics or religion. The problem they are solving should be allegorical to real world problems, but to a general class of problems, not to any specific problems that will raise hackles and defeat the educational purpose of the game.

  • Surprising: The rationality technique being taught should not be immediately obvious to an untrained player. A typical first session should involve the player first trying an irrational method, seeing how it fails, and then eventually working their way up to a rational method that works.

A lot of the rationality-related games that people bring up fail some of these criteria. Zendo, for example, is not "dramatic in outcome" enough for my taste. Avoiding confirmation bias and understanding something about experimental design makes one a better Zendo player... but in my experience not as much as just developing a quick eye for pattern recognition and being able to read the master's actions.

Anyone here have any suggestions for possible game designs?

Replies from: humpolec, SilasBarta, Emile, khafra, Perplexed, steven0461, Oscar_Cunningham
comment by humpolec · 2010-09-07T21:00:57.621Z · LW(p) · GW(p)

RPGs (and roguelikes) can involve a lot of optimization/powergaming; the problem is that powergaming could be called rational already. You could

  • explicitly make optimization a part of the game's storyline (as opposed to it being unnecessary (usually games want you to satisfice, not maximize) and in conflict with the story)
  • create some situations where the obvious rules-of-thumb (gather strongest items, etc.) don't apply - make the player shut up and multiply
  • create situations in which the real goal is not obvious (e.g. it seems like you should power up as always, but the best choice is to focus on something else)

Sorry if this isn't very fleshed-out, just a possible direction.

comment by SilasBarta · 2010-09-07T22:11:35.685Z · LW(p) · GW(p)

Here's an idea I've had for a while: Make it seem, at first, like a regular RPG, but here's the kicker -- the mystical, magic potions don't actually do anything distinguishable from chance.

(For example, you might have some herb combination that "restores HP", but whenever you use it, you strangely lose HP that more than cancels what it gave you. If you think this would be too obvious, rot13: In the game Earthbound, bar vgrz lbh trg vf gur Pnfrl Wbarf ong, naq vgf fgngf fnl gung vg'f ernyyl cbjreshy, ohg vg pna gnxr lbh n ybat gvzr gb ernyvmr gung vg uvgf fb eneryl gb or hfryrff.)

Set it in an environment like 17th-century England where you have access to the chemicals and astronomical observations they had (but give them fake names to avoid tipping off users, e.g., metallia instead of mercury/quicksilver), and are in the presence of a lot of thinkers working off of astrological and alchemical theories. Some would suggest stupid experiments ("extract aurum from urine -- they're both yellow!") while others would have better ideas.

To advance, you have to figure out the laws governing these things (which would be isomorphic to real science) and put this knowledge to practical use. The insights that had to be made back then are far removed from the clean scientific laws we have now, so it would be tough.

It would take a lot of work to e.g. make it fun to discover how to use stars to navigate, but I'm sure it could be done.

Replies from: humpolec, steven0461, CronoDAS
comment by humpolec · 2010-09-08T12:21:47.298Z · LW(p) · GW(p)

For example, you might have some herb combination that "restores HP", but whenever you use it, you strangely lose HP that more than cancels what it gave you.

What if instead of being useless (by having an additional cancelling effect), magical potions etc. had no effect at all? If HP isn't explicitly stated, you can make the player feel like he's regaining health (e.g. by some visual cues), but in reality he'd die just as often.

comment by steven0461 · 2010-09-07T22:41:07.798Z · LW(p) · GW(p)

I think many types of game carry an implicit convention that they're only going to be fun if you follow the obvious strategies on auto-pilot and don't optimize too much or try to behave in ways that would make sense in the real world. Breaking this convention without explicitly labeling the game as competitive or a rationality test will mostly just be annoying.

The idea of having a game resemble real-world science is a good one and not one that as far as I know has ever been done anywhere near as well as seems possible.

Replies from: SilasBarta
comment by SilasBarta · 2010-09-07T23:37:48.826Z · LW(p) · GW(p)

Good point. I guess the game's labeling system shouldn't deceive you like that, but it would need to have characters who promote non-functioning technology, after some warning that, e.g., not everyone is reliable and these people aren't the tutorial.

Replies from: DSimon
comment by DSimon · 2010-09-08T21:28:02.227Z · LW(p) · GW(p)

Best I think would be if the warning came implicitly as part of the game, and a little ways into it.

For example: The player sees one NPC Alex warn another NPC Joe that failing to drink the Potion of Feather Fall will mean he's at risk of falling off a ledge and dying. Joe accepts the advice and drinks it. Soon after, Joe accidentally falls off a ledge and dies. Alex attempts to rationalize this result away, and (as subtly as possible) shrugs off any attempts by the player to follow conversational paths that would encourage testing the potion.

Player hopefully then goes "Huh. I guess maybe I can't trust what NPCs say about potions" without feeling like the game has shoved the answer at them, or that the NPCs are unrealistically bad at figuring stuff out.

Replies from: SilasBarta
comment by SilasBarta · 2010-09-09T12:48:14.608Z · LW(p) · GW(p)

Exactly -- that's the kind of thing I had in mind: the player has to navigate through rationalizations and be able to throw out unreliable claims despite bold attempts to protect them from being proven wrong.

So is this game idea something feasible and which meets your criteria?

Replies from: DSimon
comment by DSimon · 2010-09-09T14:55:33.683Z · LW(p) · GW(p)

I think so, actually. When I start implementation, I'll probably use an Interactive Fiction engine as another person on this thread suggested, because (a) it makes implementation a lot easier and (b) I've enjoyed a lot of IF but I haven't ever made one of my own. That would imply removing a fair amount of the RPG-ness in your original suggestion, but the basic ideas would still stand. I'm also considering changing the setting to make it an alien world which just happens to be very much like 17th century England except filled with humorous Rubber Forehead Aliens; maybe the game could be called Standing On The Eyestalks Of Giants.

On the particular criteria:

  • Interesting: I think the setting and the (hopefully generated) buzz would build enough initial interest to carry the player through the first frustrating parts where things don't seem to work as they are used to. Once they get the idea that they're playing as something like an alien Newton, that ought to push up the interest curve again a fair amount.

  • Not (too) allegorical: Everybody loves making fun of alchemists. Now that I think of it, though, maybe I want to make sure the game is still allegorical enough to modern-day issues so that it doesn't encourage hindsight bias.

  • Dramatic/Surprising: IF has some advantages here in that there's an expectation already in place that effects will be described with sentences instead of raw HP numbers and the like. It should be possible to hit the balance where being rational and figuring things out gets the player significant benefits (Dramatic), but the broken theories being used by the alien alchemists and astrologists are convincing enough to fool the player at first into thinking certain issues are non-puzzles (Surprising).

  • Not rigged: Assuming the interface for modelling the game world's physics and doing experiments is sophisticated enough, this should prevent the feeling that the player can win by just finding the button marked "I Am Rational" and hitting it. However, I think this is the trickiest part programming-wise.

I'm going to look into IF programming a bit to figure out how implementable some of this stuff is. I won't and can't make promises regarding timescale or even completability, however: I have several other projects going right now which have to take priority.

Replies from: SilasBarta, Mass_Driver
comment by SilasBarta · 2010-09-09T15:40:43.775Z · LW(p) · GW(p)

Thanks, I'm glad I was able to give you the kind of idea you were looking for, and that someone is going to try to implement this idea.

I'm also considering changing the setting to make it an alien world which just happens to be very much like 17th century England

Good -- that's what I was trying to get at. For example, you would want a completely different night sky; you don't want the gamer to be able to spot the Big Dipper (or Southern Cross for our Aussie friends) and then be able to use existing ephemeris data. The planet should have a different tilt, or perhaps be the moon of another planet, so the player can't just say, "LOL, I know the heliocentric model, my planet is orbiting the sun, problem solved!"

Different magnetic field too, so they can't just say, "lol, make a compass, it points north".

I'm skeptical, though, about how well text-based IF can accomplish this -- the text-only interface is really constraining, and would have to tell the user all of the salient elements explicitly. I would be glad to help on the project in any way I can, though I'm still learning complex programming myself.

Also, something to motivate the storyline: you need to come up with better cannonballs for the navy (i.e. you have to identify what increases a metal's yield energy). Or come up with a way of detecting counterfeit coins.

comment by Mass_Driver · 2010-09-30T17:02:03.173Z · LW(p) · GW(p)

Let me know if you would like help with the writing, either in terms of brainstorming, mapping the flow, or even just copyediting.

comment by CronoDAS · 2010-09-08T22:00:19.701Z · LW(p) · GW(p)

To advance, you have to figure out the laws governing these things (which would be isomorphic to real science) and put this knowledge to practical use. The insights that had to be made back then are far removed from the clean scientific laws we have now, so it would be tough.

Or you could just go look up the correct answers on gamefaqs.com.

Replies from: JGWeissman
comment by JGWeissman · 2010-09-08T22:06:46.870Z · LW(p) · GW(p)

So the game should generate different sets of fake names each time it is run, and have some variance in the forms of clues and which NPCs give them.
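
A tiny sketch of what that per-run scrambling might look like (Python; the item names and effects are made up for illustration, not taken from any actual game):

    import random

    EFFECTS = ["restores HP", "poisons you", "does nothing", "boosts strength"]
    LABELS = ["metallia tonic", "azure draught", "ashen philtre", "sun tincture"]

    def new_session(seed=None):
        """Each playthrough gets its own secret mapping from labels to effects."""
        rng = random.Random(seed)
        effects = EFFECTS[:]
        rng.shuffle(effects)
        return dict(zip(LABELS, effects))

    print(new_session())  # a walkthrough from someone else's run tells you nothing

Clue variance would work the same way: pick which NPCs are informed, and how their hints are phrased, from the per-session random state rather than from fixed data.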

Replies from: CronoDAS
comment by CronoDAS · 2010-09-08T22:08:56.362Z · LW(p) · GW(p)

Ever played Nethack? ;)

Replies from: JGWeissman
comment by JGWeissman · 2010-09-08T22:15:07.340Z · LW(p) · GW(p)

Yes, a little, but I never really got into it. As I recall, Nethack didn't do what I suggest so much as not tell you what certain things are until you magically identify them.

Replies from: DSimon
comment by DSimon · 2010-09-08T22:36:04.517Z · LW(p) · GW(p)

Well, there are other ways in NetHack to identify things besides the "identify" spell (which itself must be identified anyways). You can:

  • Try it out on yourself. This is often definitive, but also often dangerous. Say you drink a potion: it might be a healing potion... or it might be poison... or it might be fruit juice. A 1/3 chance of existential failure for a given experiment is crappy odds; knowledge isn't that valuable.

  • Get an enemy to try it. Intelligent enemies will often know the identities of scrolls and potions you aren't yet familiar with. Leaving a scroll or potion on the ground and seeing what the next dwarf that passes by does with it can be informative.

  • Try it out on an enemy. Potions can be shattered over an enemy's head instead of being drunk; this is safer than drinking it yourself, though you may not notice the effects as readily, and it's annoyingly easy to miss and just waste the potion on the wall behind the monster.

  • Various other methods that can at least narrow down the identification: have your pet walk on it to see if it's cursed, offer to sell it to a shopkeep to get an idea of how valuable it is, dip things in unknown potions to see if some obvious effect (e.g. corrosion) occurs, scratch at the ground with unknown wands to see if sparks/flames are created and if so what kind, kick things to see if they are heavy or light, and so on and so on...

The reason NetHack isn't already the Ideal Experimental Method Game is because once you learn what the right experiments are, you can just use them repeatedly each game; the qualitative differences between magical items are always the same, and it's just a matter of rematching label to effect for each new session.

On the other hand, for newbie players, where the experimental process might be exciting and novel... well, usually they're too busy experiencing Yet Another Silly Death to play scientist thoroughly. Heck, a lot of the early deaths will be directly due to un-clever experimentation, which discourages a scientific mindset.

Curiosity killed the cat... indirectly, with a shiny unlabeled Amulet of Strangulation.

And anyways, hardly anybody figures out the solutions to NetHack on their own. The game is just too punishing for that, and the cheatsfiles are too easily available online. (Any NetHack ascendants here who didn't ever look stuff up online?)

Replies from: Alicorn, Gunnar_Zarncke, CronoDAS
comment by Alicorn · 2010-09-08T23:03:27.997Z · LW(p) · GW(p)

This reminds me of something I did in a D&D game once. My character found three unidentified cauldronsful of potions, so she caught three rats and dribbled a little of each on a different rat. One rat died, one turned to stone, and one had no obvious effects. (She kept the last rat and named it Lucky.)

Replies from: CronoDAS
comment by CronoDAS · 2010-09-08T23:07:53.420Z · LW(p) · GW(p)

Did you try using the two lethal potions as weapons?

Replies from: Alicorn
comment by Alicorn · 2010-09-08T23:15:58.139Z · LW(p) · GW(p)

I didn't get ahold of vials that would shatter on impact before the game fizzled out (a notorious play-by-post problem). I did at one time get to use Lucky as a weapon, though. Sadly, my character was not proficient with rats.

Replies from: CronoDAS
comment by CronoDAS · 2010-09-08T23:52:12.783Z · LW(p) · GW(p)

It's a rat-flail!

Replies from: Alicorn
comment by Alicorn · 2010-09-08T23:53:06.701Z · LW(p) · GW(p)

Nah, I used him as a thrown weapon. (He was fine and I retrieved him later.)

comment by Gunnar_Zarncke · 2022-06-11T22:34:00.550Z · LW(p) · GW(p)

Nethack as ML training environment: https://nethackchallenge.com/ 

comment by CronoDAS · 2010-09-08T23:10:08.000Z · LW(p) · GW(p)

The reason NetHack isn't already the Ideal Experimental Method Game is because once you learn what the right experiments are, you can just use them repeatedly each game; the qualitative differences between magical items are always the same, and it's just a matter of rematching label to effect for each new session.

Yes. That's why

So the game should generate different sets of fake names each time it is run, and have some variance in the forms of clues and which NPCs give them.

isn't quite the perfect solution: you can still look up a "cookbook" set of experiments to distinguish between Potion That Works and Potion That Will Get You Killed.

Replies from: Raemon, JGWeissman
comment by Raemon · 2010-09-10T00:31:42.045Z · LW(p) · GW(p)

To be fair, in real life it's perfectly okay that once you determine the right set of experiments to run to analyze a particular phenomenon, you can usually use similar experiments to figure out similar phenomena. I'm less worried about infinite replay value and more worried about the game being fun the first time through.

comment by JGWeissman · 2010-09-08T23:18:25.138Z · LW(p) · GW(p)

Cookbook experiments will suffice if you are handed potions that may have a good effect or that may kill you. But if you have to figure out how to mix the potion yourself, this is much more difficult. Learning the cookbook experiments could be the equivalent of learning chemistry.

comment by Emile · 2010-09-08T22:29:11.027Z · LW(p) · GW(p)

Note also the Wiki page, with links to previous threads (I just discovered it, and I don't think I had noticed the previous threads. This one seems better!)

One interesting game topic could be building an AI. Make it look like a nice and cutesy adventure game, with possibly some little puzzles, but once you flip the switch, if you didn't get absolutely everything exactly right, the universe is tiled with paperclips/tiny smiley faces/tiny copies of Eliezer Yudkowsky. That's more about SIAI propaganda than rationality, though.

One interesting thing would be to exploit the conventions of video games but make actual winning require seeing through those conventions. For example, have a score, and certain actions give you points, with nice shiny feedback and satisfying "shling!" sounds, while some actions are vitally important but not rewarded by any feedback.

For example (to stay with the "build an AI" example), say you can hire scientists, and each scientist's profile page lists plenty of impressive certifications (stats like "experiment design", "analysis", "public speaking", etc.) plus some filler text about what they did their thesis on and boring stuff like that (think: the stats get big icons at the top, while the filler text looks like boring background material). Once you hire the scientists, you get various bonuses (money, prestige points, experiments), but the only factor of any importance at the end of the game is whether the scientist is "not stupid", and the only way to tell that is from various tell-tale signs of "stupid" in the "boring" filler text -- things like (also) having a degree in theology, or having published a paper on homeopathy ... stuff that would indeed be a bad sign for a scientist, but that nothing in the game ever tells you is bad.

So basically the idea would be that the rules of the game you're really playing wouldn't be the ones you would think at first glance, which is a pretty good metaphor for real life too.
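
A rough sketch of the scientist-hiring idea above (Python; all the names, numbers, and "tells" are invented for illustration): the prominently displayed stats are noise, and the one variable that matters can only be inferred from the filler text.

    import random

    BAD_SIGNS = ["a second degree in theology", "a published paper on homeopathy"]
    DULL_FILLER = ["a thesis on fluid dynamics", "a keynote at a statistics conference"]

    def make_scientist(rng):
        not_stupid = rng.random() < 0.6
        filler = rng.choice(DULL_FILLER if not_stupid else BAD_SIGNS)
        return {
            # Big icons at the top of the profile -- irrelevant to the ending:
            "experiment design": rng.randint(1, 10),
            "public speaking": rng.randint(1, 10),
            # Boring background text -- the only place the real signal lives:
            "bio": f"Also notable for {filler}.",
            "_not_stupid": not_stupid,   # hidden from the player entirely
        }

    rng = random.Random(7)
    candidates = [make_scientist(rng) for _ in range(5)]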

It needs to be well-designed enough so that it's not "guessing the programmer's password", but that should be possible.

Making a game around experiment design would be interesting too - have some kind of physics / chemistry / biology system that obeys some rules (mostly about transformations, not some "real" physics with motion and collisions etc.), have game mechanics that allow you to do something like experimentation, and have a general context (the feedback you get, what other characters say, what you can buy) that points towards a slightly wrong understanding of reality. This is bouncing off Silas's ideas: things that people say are good for you may not really be so, etc.

Here again, you can exploit the conventions of video games to mislead the player. For example, red creatures like eating red things, blue creatures like eating blue things, etc. - but the rule doesn't always hold.

Replies from: DSimon, PeerInfinity, Emile
comment by DSimon · 2010-09-08T22:55:28.341Z · LW(p) · GW(p)

Here again, you can exploit the conventions of video games to mislead the player.

I think this is a great idea. Gamers know lots of things about video games, and they know them very thoroughly. They're used to games that follow these conventions, and they're also (lately) used to games that deliberately avert or meta-comment on these conventions for effect (e.g. Achievement Unlocked), but there aren't too many games I know of that set up convincingly normal conventions only to reveal that the player's understanding is flawed.

Eternal Darkness did a few things in this area. For example, if your character's sanity level was low, you the player might start having unexpected troubles with the interface: e.g. the game would refuse to save on the grounds that "It's not safe to save here", the game would pretend that it was just a demo of the full game, the game would try to convince you that you had accidentally muted the television (though the screaming sound effects would still continue), and so on. It's too bad that those effects, fun as they were, were (a) very strongly telegraphed beforehand, and (b) used only for momentary hallucinations, not to indicate that the original understanding the player had was actually the incorrect one.

Replies from: Raemon
comment by Raemon · 2010-09-09T02:39:21.074Z · LW(p) · GW(p)

The problem is that, simply put, such games generally fail on the "fun" meter.

There is a game called "The Void," which begins with the player dying and going to a limbo like place ("The Void"). The game basically consists of you learning the rules of the Void and figuring out how to survive. At first it looks like a first person shooter, but if you play it as a first person shooter you will lose. Then it sort of looks like an RPG. If you play it as an RPG you will also lose. Then you realize it's a horror game. Which is true. But knowing that doesn't actually help you to win. What you eventually have to realize is that it's a First Person Resource Management game. Like, you're playing StarCraft from first person as a worker unit. Sort of.

The world has a very limited resource (Colour) and you must harvest, invest and utilize Colour to solve all your problems. If you waste any, you will probably die, but you won't realize that for hours after you made the initial mistake.

Every NPC in the game will tell you things about how the world works, and every one of those NPCs (including your initial tutorial) is lying to you about at least one thing.

The game is filled with awesome flavor, and a lot of awesome mechanics. (Specifically mechanics I had imagined independently and wanted to make my own game regarding). It looked to me like one of the coolest sounding games ever. And it was amazingly NOT FUN AT ALL for the first four hours of play. I stuck with it anyway, if for no other reason than to figure out how a game with such awesome ideas could turn out so badly. Eventually I learned how to play, and while it never became fun it did become beautiful and poignant and it's now one of my favorite games ever. But most people do not stick with something they don't like for four hours.

Toying with a player's expectations sounds cool to the people who understand how the toying works, but is rarely fun for the player themselves. I don't think that's an insurmountable obstacle, but if you're going to attempt to do this, you need to really fathom how hard it is to work around. Most games telegraph everything for a reason.

Replies from: Emile
comment by Emile · 2010-09-09T14:52:35.512Z · LW(p) · GW(p)

Huh, sounds very interesting! So my awesome game concept would give rise to a lame game, eh?

*updates*

I hadn't heard of that game, I might try it out. I'm actually surprised a game like that was made and commercially published.

Replies from: Raemon, NihilCredo
comment by Raemon · 2010-09-09T23:19:22.021Z · LW(p) · GW(p)

It's a good game, just with a very narrow target audience. (This site is probably a good place to find players who will get something out of it, since you have a higher-than-average percentage of people willing to take a lot of time to think about and explore a cerebral game.)

Some specific lessons I'd draw from that game and apply here:

  1. Don't penalize failure too hard. The Void's single biggest issue (for me) is that even when you know what you're doing you'll need to experiment and every failure ends with death (often hours after the failure). I reached a point where every time I made even a minor failure I immediately loaded a saved game. If the purpose is to experiment, build the experimentation into the game so you can try again without much penalty (or make the penalty something that is merely psychological instead of an actual hampering of your ability to play the game.)

  2. Don't expect players to figure things out without help. There's a difference between a game that teaches people to be rational and a game that simply causes non-rational people to quit in frustration. Whenever there's a rational technique you want people to use, spell it out. Clearly. Over and over (because they'll miss it the first time).

The Void actually spells out everything as best they can, but the game still drives players away because the mechanics are simply unlike any other game out there. Most games rely on an extensive vocabulary of skills that players have built up over years, and thus each instruction only needs to be repeated once to remind you of what you're supposed to be doing. The Void repeats instructions maybe once or twice, and it simply isn't enough to clarify what's actually going on. (The thing where NPCs lie to you isn't even relevant till the second half of the game. By the time you get to that part you've either accepted how weird the game is or you've quit already).

My sense is that the best approach would be to start with a relatively normal (mechanics-wise) game, and then have NPCs that each encourage specific applications of rationality, but each of which has a rather narrow mindset and so may give bad advice for specific situations. But your "main" friend continuously reminds you to notice when you are confused, and consider which of your assumptions may be wrong. (Your main friend will eventually turn out to be wrong/lying/unhelpful about something, but only the once and only towards the end when you've built up the skills necessary to figure it out).

Huh, sounds very interesting! So my awesome game concept would give rise to a lame game, eh?

This was my experience with The Void exactly. Basically all the mechanics and flavors were things I had come up with on my own that I wanted to make games out of, and I'm really glad I played The Void first, because I might have wasted a huge chunk of time making a really bad game if I hadn't gotten to learn from their mistakes.

comment by NihilCredo · 2010-09-25T22:29:33.352Z · LW(p) · GW(p)

It was made by a Russian developer better known for its previous effort, Pathologic, a somewhat more classical first-person adventure game (albeit very weird and beautiful, with artistic echoes from Brecht to Dostoevskij), but with a similar problem of being murderously hard and deceptive - starving to death is quite common. Nevertheless, in Russia Pathologic had acceptable sales and excellent critical reviews, which is why Ice-Pick Lodge could go on with a second project.

comment by PeerInfinity · 2010-09-09T17:23:22.228Z · LW(p) · GW(p)

"once you flip the switch, if you didn't get absolutely everything exactly right, the universe is tiled with paperclips/tiny smiley faces/tiny copies of Eliezer Yudkowsky."

See also: The Friendly AI Critical Failure Table

And I think all of the other suggestions you made in this comment would make an awesome game! :D

Replies from: Emile
comment by Emile · 2010-09-09T18:00:20.669Z · LW(p) · GW(p)

Ooh, I had forgotten about that table - GURPS Friendly AI is also of interest.

comment by Emile · 2010-09-08T23:21:35.389Z · LW(p) · GW(p)

Riffing off my weird biology / chemistry thing: a game based on the breeding of weird creatures, by humans freshly arrived on the planet (add some dimensional travel if you want to justify weird chemistry - I'm thinking of Tryslmaistan).

The catch is (spoiler warning!), the humans got the wrong rules for creature breeding, and some plantcrystalthingy they think is the creatures' food is actually part of their reproduction cycle, where some essential "genetic" information passes.

And most of the things that look like in-game help and tutorials are actually wrong, and based on a model that's more complicated than the real one (it's just a model that's closer to earth biology).

comment by khafra · 2010-09-09T11:01:37.203Z · LW(p) · GW(p)

I'm not sure if Transformice counts as a rationalist game, but it appears to be a bunch of multiplayer coordination problems, and the results seem to support ciphergoth's conjecture on intelligence levels.

Replies from: Emile
comment by Emile · 2010-09-09T12:03:46.711Z · LW(p) · GW(p)

Transformice is awesome :D A game hasn't made me laugh that much for a long time.

And it's about interesting, human things, like crowd behaviour and trusting the "leader" and being thrust in a position of responsibility without really knowing what to do ... oh, and everybody dying in funny ways.

comment by Perplexed · 2010-09-07T22:44:57.874Z · LW(p) · GW(p)

Dramatic in outcome:

One way to achieve this is to make it a level-based puzzle game. Solve the puzzle suboptimally, and you don't get to move on. Of course, that means that you may need special-purpose programming at each level. On the other hand, you can release levels 1-5 as freeware, levels 6-20 as Product 1.0, and levels 21-30 as Product 2.0.

Not allegorical:

The puzzles I am thinking of are in the field of game theory, so the strategies will include things like not cooperating (because you don't need to in this case), making and following through on threats, and similar "immoral" actions. Some people might object on ethical or political grounds. I don't really know how to answer except to point out that at least it is not a first-person shooter.

Surprising

Game theory includes many surprising lessons - particularly things like the handicap principle, voluntary surrender of power, rational threats, and mechanism design. Coalition games are particularly counter-intuitive, but, with experience, intuitively understandable.

But you can even teach some rationality lessons before getting into games proper. Learn to recognize individuals, for example. Not all cat-creatures you encounter are the same character. You can do several problems involving probabilities and inference before the second player ever shows up.

comment by steven0461 · 2010-09-07T21:53:11.027Z · LW(p) · GW(p)

Text adventures seem suitable for this sort of thing, and are relatively easy to write. They're probably not as good for mass appeal, but might be OK for mass nerd appeal. For these purposes, though, I'm worried that rationality may be too much of a suitcase term, consisting of very different groups of subskills that go well with very different kinds of game.

Replies from: CronoDAS
comment by CronoDAS · 2010-09-10T01:17:41.229Z · LW(p) · GW(p)

Another thing that's relatively easy to create is a Neverwinter Nights module, but you're pretty much stuck with the D&D mechanics if you go that route.

comment by Oscar_Cunningham · 2010-09-09T14:16:32.506Z · LW(p) · GW(p)

One idea I'd like to suggest would be a game where the effectiveness of the items a player has changes randomly hour by hour. Maybe an MMO with players competing against each other, so that they can communicate information about which items are effective. Introduce new items with weird effects every so often so that players have to keep an eye on their long-term strategy as well.

Replies from: DSimon
comment by DSimon · 2010-09-09T14:57:55.534Z · LW(p) · GW(p)

I think a major problem with that is that most players would simply rely upon the word on the street to tell them what was currently effective, rather than performing experiments themselves. Furthermore, changes in only "effectiveness" would probably be too easy to discover using a "cookbook" of experiments (see the NetHack discussion in this thread).

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2010-09-09T15:30:33.498Z · LW(p) · GW(p)

I'm thinking that the parameters should change just quickly enough to stop a consensus from forming (maybe it could be driven by negative feedback, so that once enough people are playing one strategy it becomes ineffective). Make using a cookbook expensive. Winning should be difficult, and only just the right combination will succeed.
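
One way that negative feedback could be wired up, as a sketch (Python; the constants are placeholders, not a balanced design): an item's effectiveness decays in proportion to how many players currently rely on it.

    def update_effectiveness(effectiveness, usage_share, k=0.5, floor=0.1):
        """Negative feedback: the more of the player base uses an item, the weaker it gets.

        usage_share is the fraction of players using the item this hour (0..1);
        k and floor are illustrative tuning knobs.
        """
        return max(floor, effectiveness * (1.0 - k * usage_share))

    eff = 1.0
    for hour, share in enumerate([0.1, 0.3, 0.8, 0.9]):  # consensus forming around the item
        eff = update_effectiveness(eff, share)
        print(hour, round(eff, 2))

Once enough of the population piles onto one strategy, its payoff falls below the alternatives, so last week's "cookbook" answer goes stale on its own.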

Replies from: DSimon, taryneast
comment by DSimon · 2010-09-09T18:18:33.732Z · LW(p) · GW(p)

I think this makes sense, but can you go into more detail about this:

Make using a cookbook expensive.

I didn't mean a cookbook as an in-game item (I'm not sure if that's what you were implying...), I meant the term to mean a set of well-known experiments which can simply be re-run every time new results are required. If the game can be reduced to that state, then a lot of its value as a rationality teaching tool (and also as an interesting game, to me at least) is lost. How can we force the player to have to come up with new ideas for experiments, and see some of those ideas fail in subtle ways that require insight to understand?

My tendency is to want to solve this problem by just making a short game, so that there's no need to figure out how to create a whole new, interesting experimental space for each session. This would be problematic in an MMO, where replayability is expected (though there have been some interesting exceptions, like Uru).

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2010-09-09T19:56:12.756Z · LW(p) · GW(p)

Ah, I meant: "Make each item valuable enough that using several just to work out how effective each one is would be a fatal mistake." Instead you would have to keep track of how effective each one was, or watch the other players for hints.

comment by taryneast · 2011-03-23T07:32:21.342Z · LW(p) · GW(p)

Hmmm - changing things frequently means you'll have some negative knock-on effects. You'll be penalising anybody who doesn't game as often - e.g. people with a life. You stand a chance of alienating a large percentage of the audience, which is not a good idea.

comment by Relsqui · 2010-09-16T22:37:14.070Z · LW(p) · GW(p)

I'm a translator between people who speak the same language, but don't communicate.

People who act mostly based on their instincts and emotions, and those who prefer to ignore or squelch those instincts and emotions[1], tend to have difficulty having meaningful conversations with each other. It's not uncommon for people from these groups to end up in relationships with each other, or at least working or socializing together.

On the spectrum between the two extremes, I am very close to the center. I have an easier time understanding the people on each side than their counterparts do, it frustrates me when they miscommunicate, and I want to help. This includes general techniques (although there are some good books on that already), explanations of words or actions which don't appear to make sense, and occasional outright translation of phrases ("When they said X, they meant what you would have called Y").

Is this problem, or this skill, something of interest to the LW community at large? In the several days I've been here it's come up on comment threads a couple times. I have some notes on the subject, and it would be useful for me to get feedback on them; I'd like to some day compile them into a guide written for an audience much like this one. Do you have questions about how to communicate with people who think very much unlike you, or about specific situations that frustrate you? Would you like me to explain what appears to be an arbitrary point of etiquette? Anything else related to the topic which you'd like to see addressed?

In short: "I understand the weird emotional people who are always yelling at you, but I'm also capable of speaking your language. Ask me anything."


[1] These are both phrased as pejoratively as I could manage, on purpose. Neither extreme is healthy.

Replies from: beriukay, Morendil, Rain
comment by beriukay · 2010-09-25T22:09:37.399Z · LW(p) · GW(p)

One issue I've frequently stumbled across is the people who make claims that they have never truly considered. When I ask for more information, point out obvious (to me) counterexamples, or ask them to explain why they believe it, they get defensive and in some cases quite offended. Some don't ever want to talk about issues because they feel like talking about their beliefs with me is like being subjected to some kind of Inquisition. It seems to me that people of this cut believe that to show you care about someone, you should accept anything they say with complete credulity. Have you found good ways to get people to think about what they believe without making them defensive? Do I just have to couch all my responses in fuzzy words? Using weasel words has always seemed disingenuous to me -- but maybe it's worth it, if it gets someone to actually consider the opposition when I say things like "Idunno, I'm just saying it seems to me, and I might be wrong, that maybe gays are people and deserve all the rights that people get, you know what I'm saying?"

Replies from: Relsqui
comment by Relsqui · 2010-09-26T03:05:53.785Z · LW(p) · GW(p)

I've been on the other side of this, so I definitely understand why people react that way--now let's see if I understand it well enough to explain it.

For most people, being willing to answer a question or identify a belief is not the same thing as wanting to debate it. If you ask them to tell you one of their beliefs and then immediately try to engage them in justifying it to you, they feel baited and switched into a conflict situation, when they thought they were having a cooperative conversation. You've asked them to defend something very personal, and then are acting surprised when they get defensive.

Keep in mind also that most of the time in our culture, when one person challenges another one's beliefs, it carries the message "your beliefs are wrong." Even if you don't state that outright--and even in the probably rare cases when the other person knows you well enough to understand that isn't your intent--you're hitting all kinds of emotional buttons which make you seem like an aggressor. This is the result of how the other person is wired, but if you want to be able to have this kind of conversation, it's in your interest to work with it.

The corollary to the implied "your beliefs are wrong" is "I know better than you" (because that's how you would tell that they're wrong). This is an incredibly rude signal to send to--well, anyone, but especially to another adult. Your hackles probably rise too when someone signals that they're superior to you and you don't agree; this is the same thing.

The point, then, is not that you need to accept what people you care about say with credulity. It's that you need to accept it with respect. You do not have any greater value than the person you're talking to (even if you are smarter and more rational), just like they don't have any greater value than you (even if they're richer and more attractive). Even if you really were by some objective measure a better person (which is, as far as I can tell, a useless thing to consider), they don't think so, and acting like it will get you nowhere.

Possibly one of the hardest parts of this to swallow is that, when you're choosing words for the purpose of making another person remain comfortable talking to you, whether their beliefs are a good reflection of reality is not actually important. Obviously they think so, and merely contradicting them won't change that (nor should it). So if you sound like you're just trying to convince them that they're wrong, even if that isn't what you mean to do, they might just feel condescended to and walk away.

None of this means that you can't express your own beliefs vehemently ("gay people deserve equal rights!"). It just means that when someone expresses one of theirs, interrogating them bluntly about their reasons--especially if they haven't questioned them before--is more likely to result in defensiveness than in convincing them or even productive debate. This may run counter to your instincts, understandably, but there it is.

No fuzzy words in the world will soften your language if their inflection reveals intensity and superiority. Display real respect, including learning to read your audience and back off when they're upset. (You can always return to the topic another time, and in fact, occasional light conversations will probably do a better job with this sort of person than one long intense one.) If you aren't able to show genuine respect, well, I don't blame them for refusing to discuss their beliefs with you.

comment by Morendil · 2010-09-17T07:17:21.829Z · LW(p) · GW(p)

Yes please.

Does the term "bridger" ring a bell for you? (It's from Greg Egan's Diaspora, in case it doesn't, and you'd have to read it to get why I think that would be an apt name for what you're describing.)

Replies from: Relsqui
comment by Relsqui · 2010-09-17T07:37:56.994Z · LW(p) · GW(p)

It doesn't, and I haven't, although I can infer at least a little from the term itself. Your call if you want to try and explain it or wait for me to remember, find a library that has it, acquire it, and read it before understanding. ;)

Is there any specific subject under that umbrella which you'd like addressed? Narrowing the focus will help me actually put something together.

Replies from: Morendil
comment by Morendil · 2010-09-17T08:30:56.681Z · LW(p) · GW(p)

The Wikipedia page explains a little about Bridgers.

I'm afraid if I knew how to narrow this topic down I'd probably be writing it up myself. :)

Replies from: Relsqui
comment by Relsqui · 2010-09-17T16:24:58.575Z · LW(p) · GW(p)

Hmm. I'm wary of the analogy to separate species; humans treat each other enough like aliens as it is. But so noted, thank you.

comment by Rain · 2010-09-30T14:45:40.118Z · LW(p) · GW(p)

I wanted to say thank you for providing these services. I like performing the same translations, but it appears I'm unable to be effective in a text medium, requiring immediate feedback, body language, etc. When I saw some of your posts on old articles, apparently just as you arrived, I thought to myself that you would genuinely improve this place in ways that I've been thinking were essential.

Replies from: Relsqui
comment by Relsqui · 2010-09-30T18:25:44.783Z · LW(p) · GW(p)

Thanks! That's actually really reassuring; that kind of communication can be draining (a lot of people here communicate naturally in a way which takes some work for me to interpret as intended). It is good to hear that it seems to be doing some good.

comment by MartinB · 2010-09-01T02:37:53.231Z · LW(p) · GW(p)

[tl;dr: quest for some specific cryo data references]

I'm preparing to do my own, deeper evaluation of cryonics. For that I've been reading through many of the case reports on the Alcor and CI pages. Due to my geographic situation I am particularly interested in the feasibility of actually getting a body from Europe (Germany) over to their respective facilities. The reports are quite interesting and provide lots of insight into the process, but what I'm still looking for are the unsuccessful reports: cases in which a signed-up member was not brought in due to legal interference, next-of-kin decisions, and the like. Is anyone aware of a detailed log of those? I would also like to see how many signed-up clients are lost due to the circumstances of their death.

Replies from: Document
comment by Document · 2010-10-08T05:26:45.749Z · LW(p) · GW(p)

Can't help with your question, but speaking of Europe....

comment by Will_Newsome · 2010-09-03T11:02:19.540Z · LW(p) · GW(p)

I want to write a post about an... emotion, or pattern of looking at the world, that I have found rather harmful to my rationality in the past. The closest thing I've found is 'indignation', defined at Wiktionary as "An anger aroused by something perceived as an indignity, notably an offense or injustice." The thing is, I wouldn't consider the emotion I feel to be 'anger'. It's more like 'the feeling of injustice' in its own right, without the anger part. Frustration, maybe. Is there a word that means 'frustration aroused by a perceived indignity, notably an offense or injustice'? Like, perhaps the emotion you may feel when you think about how pretty much no one in the world or no one you talk to seems to care about existential risks. Not that you should feel the emotion, or whatever it is, that I'm trying to describe -- in the post I'll argue that you should try not to -- but perhaps there is a name for it? Anyone have any ideas? Should I just use 'indignation' and then define what I mean in the first few sentences? Should I use 'adjective indignation'? If so, which adjective? Thanks for any input.

Replies from: Airedale, jimrandomh, Eliezer_Yudkowsky, None, komponisto, wedrifid, None, David_Allen, steven0461, SilasBarta
comment by Airedale · 2010-09-03T15:08:20.197Z · LW(p) · GW(p)

The words righteous indignation in combination are sufficiently well-recognized as to have their own Wikipedia page. The page also says that righteous indignation has overtones of religiosity, which seems like a reason not to use it in your sense. It also says that it is akin to a "sense of injustice," but at least for me, that phrase doesn't have as much resonance.

Edited to add this possibly relevant/interesting link I came across, where David Brin describes self-righteous indignation as addictive.

Replies from: Perplexed
comment by Perplexed · 2010-09-03T16:20:22.128Z · LW(p) · GW(p)

which seems like a reason not to use it in your sense.

Strikes me as exactly the reason you should use it. What you are describing is indignation, it is righteous, and it is counterproductive in both rationalists and less rational folks for pretty much the same reasons.

Replies from: Airedale
comment by Airedale · 2010-09-03T16:53:34.268Z · LW(p) · GW(p)

I meant that the religious connotations might not be a reason to use the term if Will is trying to come up with the most accurate term for what he’s describing. To the extent the term is tied up in Christianity, it may not convey meaning in the way Will wants – although the more Will explains how he is using the term, the less problematic this would be. And I agree that what you say suggests an interesting way that Will can appropriate a religious term and make some interesting compare-and-contrast type points.

comment by jimrandomh · 2010-09-03T19:08:17.001Z · LW(p) · GW(p)

I noticed this emotion cropping up a lot when I read Reddit, and stopped reading it for that reason. It's too easy to, for example, feel outraged over a video of police brutality, but not notice that it was years ago and in another state and already resolved.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-09-03T19:07:18.212Z · LW(p) · GW(p)

Sounds related to the failure class I call "living in the should-universe".

Replies from: Will_Newsome
comment by Will_Newsome · 2010-09-03T22:51:03.434Z · LW(p) · GW(p)

It seems to be a pretty common and easily corrected failure mode. Maybe you could write a post about it? I'm sure you have lots of useful cached thoughts on the matter.

Added: Ah, I'd thought you'd just talked about it at LW meetups, but a Google search reveals that the theme is also in Above-Average AI Scientists and Points of Departure.

comment by [deleted] · 2010-09-09T02:52:36.267Z · LW(p) · GW(p)

Righteous indignation is a good term for it.

I, personally, see it as one of the emotional capacities of a healthy person. Kind of like lust. It can be misused, it can be a big time-waster if you let it occupy your whole life, but it's basically a sign that you have enough energy. If it goes away altogether, something may be wrong.

I had a period a few years ago of something like anhedonia. The thing is, I also couldn't experience righteous indignation, or nervous worry, or ordinary irritability. It was incredibly satisfying to get them back. I'm not a psychologist at all, but I think of joy, anger, and worry (and lust) as emotions that require energy. The miserably lethargic can't manage them.

So that's my interpretation and very modest defense of righteous indignation. It's not a very practical emotion, but it is a way of engaging personally with the world. It motivates you in the minimal way of making you awake, alert, and focused on something. The absence of such engagement is pretty horrible.

comment by komponisto · 2010-09-04T00:54:13.024Z · LW(p) · GW(p)

Interestingly enough, this sounds like the emotion that (finally) induced me to overcome akrasia and write a post on LW for the first time, which initiated what has thus far been my greatest period of development as a rationalist.

It's almost as if this feeling is to me what plain anger is to Harry Potter(-Evans-Verres): something which makes everything seem suddenly clearer.

It just goes to show how difficult the art of rationality is: the same technique that helps one person may hinder another.

comment by wedrifid · 2010-09-03T12:09:08.684Z · LW(p) · GW(p)

Should I just use 'indignation' and then define what I mean in the first few sentences?

That could work well when backed up with a description of just what you will be using the term to mean.

I will be interested to read your post - from your brief introduction here I think I have had similar observations about emotions that interfere with thought, independent of raw overwhelm from primitives like anger.

comment by [deleted] · 2010-09-05T10:11:18.714Z · LW(p) · GW(p)

I've seen "moral indignation," which might fit (though I think "indignation" still implies anger). I've also heard people who feel that way describe the object of their feelings as "disgusting" or "offensive," so you could call it "disgust" or "being offended." Of course, those people also seemed angry. Maybe the non-angry version would be called "bitterness."

As soon as I wrote the paragraph above, I felt sure that I'd heard "moral disgust" before. I googled it and the second link was this. I don't know about the quality of the study, but you could use the term.

comment by David_Allen · 2010-09-04T06:02:09.685Z · LW(p) · GW(p)

In myself, I have labeled the rationality blocking emotion/behavior as defensiveness. When I am feeling defensive, I am less willing to see the world as it is. I bind myself to my context and it is very difficult for me to reach out and establish connections to others.

I am also interested in ideas related to rationality and the human condition. Not just about the biases that arise from our nature, but about approaches to rationality that work from within our human nature.

I have started an analysis of Buddhism from this perspective. At its core (ignoring the obvious mysticism), I see sort of a how-to guide for managing the human condition. If we are to be rational we need to be willing to see the world as it is, not as we want it to be.

comment by steven0461 · 2010-09-03T19:03:02.866Z · LW(p) · GW(p)

outrage?

comment by SilasBarta · 2010-09-08T15:26:04.548Z · LW(p) · GW(p)

Pardon the self-promotion, but that sounds like the feeling of recognizing a SAMEL, i.e. that there is some otherwise-ungrounded inherent deservedness of something in the world.

(SAMEL = subjunctive acausal means-end link, elaborated in the article)

comment by steven0461 · 2010-09-07T04:58:42.607Z · LW(p) · GW(p)

In the spirit of "the world is mad" and for practical use, the NYT has an article titled Forget What You Know About Good Study Habits.

Replies from: Matt_Simpson
comment by Matt_Simpson · 2010-09-07T15:57:41.406Z · LW(p) · GW(p)

Something I learned myself that the article supported: taking tests increases retention.

Something I learned from the article: varying study location increases retention.

comment by matt · 2010-09-01T01:27:30.373Z · LW(p) · GW(p)

Singularity Summit AU
Melbourne, Australia
September 7, 11, 12 2010

More information including speakers at http://summit.singinst.org.au.
Register here.

Replies from: wedrifid, meta_ark, Clippy
comment by wedrifid · 2010-09-02T02:46:15.610Z · LW(p) · GW(p)

Wow. Next Tuesday and in my hometown! Nice.

comment by meta_ark · 2010-09-02T11:58:24.418Z · LW(p) · GW(p)

Sigh... I would consider flying down from Sydney to go to it, but sadly I'm in a show that whole week and have to miss out entirely. Ah well. Hopefully they'll have the audio online, but I would have loved to mingle with people who share my worldview.

comment by Clippy · 2010-09-01T01:58:15.087Z · LW(p) · GW(p)

I don't live in Australia.

Replies from: Clippy
comment by Clippy · 2010-09-01T02:48:53.009Z · LW(p) · GW(p)

Correction: I live in Australia.

Replies from: MartinB
comment by MartinB · 2010-09-01T12:35:26.876Z · LW(p) · GW(p)

You plan to attend?

Replies from: Clippy
comment by Clippy · 2010-09-01T17:18:21.960Z · LW(p) · GW(p)

No, I don't live in Australia ... except in whatever sense is necessary for you humans not to hate me c=/

Replies from: khafra, MartinB
comment by khafra · 2010-09-01T17:32:20.826Z · LW(p) · GW(p)

Here's an important lesson in human social signalling.

comment by MartinB · 2010-09-01T19:00:40.898Z · LW(p) · GW(p)

Now you confuse me. Elaborate.

Replies from: Clippy
comment by Clippy · 2010-09-01T19:40:59.414Z · LW(p) · GW(p)

Well, I said I don't live in Australia (and I don't live in Australia), and I got -8 points, and so I said I live in Australia instead, and that also got me negative points. I don't know what I'm supposed to say for you humans to not dislike me!!!

Replies from: Morendil, WrongBot
comment by Morendil · 2010-09-01T20:17:36.830Z · LW(p) · GW(p)

Downvotes don't mean "I don't like you", they mean "I'd like to see fewer comments like this one".

Replies from: Clippy, wedrifid
comment by Clippy · 2010-09-01T20:33:02.508Z · LW(p) · GW(p)

Well ... they make me feel unliked (_/

Replies from: wnoise
comment by wnoise · 2010-09-01T21:22:15.226Z · LW(p) · GW(p)

Is that an emoticon of a partially unbent paperclip? How gruesome!

Replies from: Clippy
comment by Clippy · 2010-09-02T02:41:30.963Z · LW(p) · GW(p)

It's just an abstract depiction. It's not like those awful videos you humans allow on the internet that show a paperclip being repeatedly bent until it has a fatigue failure. Yuck!

comment by wedrifid · 2010-09-02T02:45:03.755Z · LW(p) · GW(p)

Although I suspect they sometimes mean "I'd like to see fewer comments like this one where 'like' includes 'by this author' because I don't like him!"

Replies from: Morendil
comment by Morendil · 2010-09-02T05:29:21.505Z · LW(p) · GW(p)

There's a solution for that. (At least a partial solution. As a long-time AK user I find that I can more and more reliably identify a few commenters by their style. Even so I typically vote on substance alone.)

Replies from: wedrifid
comment by wedrifid · 2010-09-02T05:30:51.136Z · LW(p) · GW(p)

As a long-time AK user I find that I can more and more reliably identify a few commenters by their style.

Do you often confuse me with Clippy? I would have thought my comments were for most part reasonably distinctive...

Replies from: Morendil
comment by Morendil · 2010-09-02T06:08:15.110Z · LW(p) · GW(p)

Just that once. That comment was, roughly paraphrased, "would you like me if I said X", which was quite Clippy-esque. Less so when seen in context.

comment by WrongBot · 2010-09-01T20:11:37.454Z · LW(p) · GW(p)

Your ability to convincingly signal ape-distress has become quite impressive. I am slightly more scared of you than I once was.

comment by NancyLebovitz · 2010-09-12T12:10:40.037Z · LW(p) · GW(p)

I just discovered (when looking for a comment about an Ursula Vernon essay) that the site search doesn't work for comments which are under a "continue this thread" link. This makes site search a lot less useful, and I'm wondering if that's a cause of other failed searches I've attempted here.

Replies from: jimmy
comment by jimmy · 2010-09-16T07:10:14.528Z · LW(p) · GW(p)

I've noticed this too. There's no easy way to 'unfold all', is there?

comment by billswift · 2010-09-01T10:16:22.840Z · LW(p) · GW(p)

The key to persuasion or manipulation is plausible appeal to desire. The plausibility can be pretty damned low if the desire is strong enough.

comment by beriukay · 2010-09-25T21:50:05.830Z · LW(p) · GW(p)

I participated in a survey directed at atheists some time ago, and the report has come out. They didn't mention me by name, but they referenced me in their 15th endnote, which regarded questions they said were spiritual in nature. Specifically, the question was whether we believe in the possibility of human minds existing outside of our bodies. From the way they worded it, apparently I was one of the few non-spiritual people who believed there were perfectly naturalistic mechanisms for separating consciousness from bodies.

comment by beriukay · 2010-09-11T04:23:50.964Z · LW(p) · GW(p)

I'm taking a grad level stat class. One of my classmates said something today that nearly made me jump up and loudly declare that he was a frequentist scumbag.

We were asked to show that a coin toss fit the criteria of some theorem that talked about mapping subsets of a sigma algebra to form a well-defined probability. Half the elements of the set were taken care of by default (the whole set S and its complement { }), but we couldn't make any claims about the probability of getting Heads or Tails from just the theorem. I was content to assume the coin was fair, or at least assign some likelihood distribution.

But not my frequentist archnemesis! He let it be known that he would level half the continent if the probability of getting Heads wasn't determined by his Expectation divided by the number of events. The number of events. Of an imaginary coin toss. Determine that toss' probability.
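For concreteness, here is a minimal sketch of the setup being described (my reconstruction; the theorem itself isn't named in the anecdote):

```latex
\Omega = \{H, T\}, \qquad
\mathcal{F} = \{\varnothing, \{H\}, \{T\}, \Omega\}, \qquad
P(\varnothing) = 0, \quad P(\Omega) = 1, \quad
P(\{H\}) = p, \quad P(\{T\}) = 1 - p .
```

Any p in [0, 1] gives a well-defined probability measure, which is the point: the axioms leave p free, so setting p = 1/2 (or putting a distribution over p) is a modelling choice, not something an imaginary long-run frequency hands you.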

It occurs to me that there was a lot of set up for very little punch line in that anecdote. If you are unamused, you are in good company. I ordered R to calculate an integral for me today, and it politely replied: "Error in is.function(FUN) : 'FUN' is missing"

comment by b1shop · 2010-09-02T17:01:47.710Z · LW(p) · GW(p)

I just listened to Robin Hanson's pale blue dot interview. It sounds like he focuses more on motives than I do.

Yes, if you give most/all people a list of biases, they will use it less like a list of potential pitfalls and more like a list of accusations. Yes, most, if not all, aren't perfect truth-seekers for reasons that make evolutionary sense.

But I wouldn't mind living in a society where using biases/logical fallacies results in a loss of status. You don't have to be a truth-seeker to want to seem like a truth-seeker. Striving to overcome bias still seems like a good goal.

Edit: For example, someone can be a truth-seeking scientist whether they're doing it to answer questions or doing it for the chicks.

comment by Morendil · 2010-09-01T13:23:13.497Z · LW(p) · GW(p)

The journalistic version:

[T]hose who abstain from alcohol tend to be from lower socioeconomic classes, since drinking can be expensive. And people of lower socioeconomic status have more life stressors [...] But even after controlling for nearly all imaginable variables - socioeconomic status, level of physical activity, number of close friends, quality of social support and so on - the researchers (a six-member team led by psychologist Charles Holahan of the University of Texas at Austin) found that over a 20-year period, mortality rates were highest for those who had never been drinkers, second-highest for heavy drinkers and lowest for moderate drinkers.

The abstract from the actual study (on "Late-Life Alcohol Consumption and 20-Year Mortality"):

Controlling only for age and gender, compared to moderate drinkers, abstainers had a more than 2 times increased mortality risk, heavy drinkers had 70% increased risk, and light drinkers had 23% increased risk. A model controlling for former problem drinking status, existing health problems, and key sociodemographic and social-behavioral factors, as well as for age and gender, substantially reduced the mortality effect for abstainers compared to moderate drinkers. However, even after adjusting for all covariates, abstainers and heavy drinkers continued to show increased mortality risks of 51 and 45%, respectively, compared to moderate drinkers. Findings are consistent with an interpretation that the survival effect for moderate drinking compared to abstention among older adults reflects 2 processes. First, the effect of confounding factors associated with alcohol abstention is considerable. However, even after taking account of traditional and nontraditional covariates, moderate alcohol consumption continued to show a beneficial effect in predicting mortality risk.

(Maybe the overlooked confounding factor is "moderation" by itself, and people who have a more relaxed, middle-of-the-road attitude towards life's pleasures tend to live longer?)

Replies from: Vladimir_M, cousin_it, Vladimir_M, jimrandomh
comment by Vladimir_M · 2010-09-02T05:49:40.021Z · LW(p) · GW(p)

The study looks at people over 55 years of age. It is possible that there is some sort of selection effect going on -- maybe decades of heavy drinking will weed out all but the most alcohol-resistant individuals, so that those who are still drinking heavily at 55-60 without ever having been harmed by it are mostly immune to the doses they're taking. From what I see, the study controls for past "problem drinking" (which they don't define precisely), but not for people who drank heavily without developing a drinking problem, yet at some point found they couldn't handle it any more and decided on their own to cut back.

Also, it should be noted that papers of this sort use pretty conservative definitions of "heavy drinking." In this paper, it's defined as more than 42 grams of alcohol per day, which amounts to about a liter of beer or three small glasses of wine. While this level of drinking would surely be risky for people who are exceptionally alcohol-intolerant or prone to alcoholism, lots of people can handle it without any problems at all. It would be interesting to see a similar study that would make a finer distinction between different levels of "heavy" drinking.

comment by cousin_it · 2010-09-02T21:16:16.744Z · LW(p) · GW(p)

These are fine conclusions to live by, as long as moderate drinking doesn't lead you to heavy drinking, cirrhosis and the grave. Come visit Russia to take a look.

comment by Vladimir_M · 2010-09-02T20:50:57.230Z · LW(p) · GW(p)

The discussion of the same paper on Overcoming Bias has reminded me of another striking correlation I read about recently:
http://www.marginalrevolution.com/marginalrevolution/2010/07/beer-makes-bud-wiser.html

It seems that for whatever reason, abstinence does correlate with lower performance on at least some tests of mental ability. The question is whether the controls in the study cover all the variables through which these lower abilities might have manifested themselves in practice; to me it seems quite plausible that the answer could be no.

Replies from: Morendil
comment by Morendil · 2010-09-12T15:56:41.973Z · LW(p) · GW(p)

A hypothesis: drinking is social, and enjoying others' company plays a role in survival (perhaps in learning too?).

At this point, the link between abstinence and social isolation is merely hypothetical. But given the extensive history of group drinking – it’s what we do when we come together – it seems likely that drinking in moderation makes it easier for us to develop and nurture relationships. And it’s these relationships that help keep us alive.

comment by jimrandomh · 2010-09-01T13:33:39.694Z · LW(p) · GW(p)

That's very interesting, but I'm not sure I trust the article's statistics, and I don't have access to the full text. Could someone take a closer look and confirm that there are no shenanigans going on?

comment by whpearson · 2010-09-01T11:52:32.785Z · LW(p) · GW(p)

I'm writing a post on systems to govern resource allocation. Is anyone interested in having input into it, or just proofreading it?

This is the intro/summary:

How do we know what we know? This is an important question; however, there is another question which is in some ways more fundamental: why did we choose to devote resources to knowing those things in the first place?

For a physical entity, the production of knowledge takes resources that could be used for other things, so the problem expands to how to use resources in general. This I'll call the resource allocation problem (RAP). It is a widespread problem, occurring in the design of organisations as well as computer systems.

The problem is this: we want to allocate resources in a fashion that enables us to achieve our goals. What makes the problem interesting is that making a decision about how to allocate resources itself takes resources. This makes formalising optimal solutions to the problem seemingly impossible.

However, you can formalise potential near-optimality: that is, look at how to design systems that can change the amount of resources allocated to their different activities with the minimum of overhead.
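One way to write down the trade-off being described (my formalisation for concreteness, not something from the post): if π is a policy for re-allocating resources, V(π) the value of the allocations it produces, and C(π) the resources it spends on the deliberation itself, then the target is roughly

```latex
\pi^{\ast} \;\approx\; \arg\max_{\pi}\; \bigl[\, V(\pi) - C(\pi) \,\bigr],
```

with the catch that estimating V and C is itself a resource-consuming activity, which is why only near-optimality, with minimal overhead spent on the deliberation term, is on the table.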

Replies from: Snowyowl, Oscar_Cunningham, xamdam
comment by Snowyowl · 2010-09-01T12:35:46.431Z · LW(p) · GW(p)

This sounds interesting and relevant. Here's my input: I read this back in 2008 and I am summarising it from memory, so I may make a few factual errors. But I read that one of the problems facing large Internet companies like Google is the size of their server farms, which need cooling, power, space, etc. Optimising the algorithms used can help enormously. A particular program was responsible for allocating system resources so that the systems which were operating were running at near full capacity, and the rest could be powered down to save energy. Unfortunately, this program was executed many times a second, to the point where the savings it created were much less than the power it used. The fix was simply to execute it less often. Running the program took about the same amount of time no matter how many inefficiencies it detected, so it was not worth checking the entire system for new problems if you only expected to find one or two.

My point: To reduce resources spent on decision-making, make bigger decisions but make them less often. Small problems can be ignored fairly safely, and they may be rendered irrelevant once you solve the big ones.
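A minimal sketch of that batching rule in Python (all names and numbers here are hypothetical illustrations, not from the anecdote): accumulate detected inefficiencies cheaply, and only pay the fixed cost of the expensive allocation pass once the expected savings exceed it.

```python
ALLOCATOR_COST = 50.0            # fixed cost of one rebalancing pass (hypothetical units)
SAVINGS_PER_INEFFICIENCY = 2.0   # average waste removed per detected problem (hypothetical)

pending_inefficiencies = []      # problems noticed since the last rebalance

def rebalance(cluster_state):
    """Placeholder for the expensive allocation pass: pack work onto fewer
    machines and power down the idle ones."""
    cluster_state["idle_machines_powered_down"] = True
    return cluster_state

def note_inefficiency(problem):
    # Cheap bookkeeping on every event; no expensive decision is made here.
    pending_inefficiencies.append(problem)

def maybe_rebalance(cluster_state):
    # Batch small decisions: only run the allocator once the expected benefit
    # of acting on the accumulated problems exceeds the cost of running it.
    expected_savings = SAVINGS_PER_INEFFICIENCY * len(pending_inefficiencies)
    if expected_savings > ALLOCATOR_COST:
        cluster_state = rebalance(cluster_state)
        pending_inefficiencies.clear()
    return cluster_state
```

The same shape of rule covers the simpler fix in the anecdote (run the allocator every N seconds instead of on every event): both amortise a fixed decision cost over many small problems.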

comment by Oscar_Cunningham · 2010-09-01T13:07:42.614Z · LW(p) · GW(p)

I was having similar thoughts the other day while watching a reality TV show where designers competed for a job from Philippe Starck. Some of them spent ages trying to think of a suitable project, and then didn't have enough time to complete it; some of them launched into the first plan they had and it turned out rubbish. Clearly they needed some meta-planning. But how much? Well, they'll need to do some meta-meta planning...

I'd be happy to give your post a read through.

ETA: The buck stops immediately, of course.

comment by xamdam · 2010-09-01T20:57:10.961Z · LW(p) · GW(p)

Upvoted for importance of subject - looking forward to the post. Have you read up on Information Foraging?

Replies from: whpearson
comment by whpearson · 2010-09-10T12:23:27.181Z · LW(p) · GW(p)

I'm going to be discussing the organisational design level, rather than a strategic or tactical level of resource management.

comment by billswift · 2010-09-01T10:27:38.370Z · LW(p) · GW(p)

In "The Shallows", Nicholas Carr makes a very good argument that replacing deep reading books, with the necessarily shallower reading online or of hypertext in general, causes changes in our brains which makes deep thinking harder and less effective.

Thinking about "The Shallows" later, I realized that laziness and other avoidance behaviors will also tend to become ingrained in your brain, at the expense of the self-direction/self-discipline behaviors they are replacing.

Another problem with the Web, that wasn't discussed in "The Shallows", is that hypertext channels you to the connections the author chooses to present. Wide and deep reading, such that you make the information presented yours, gives you more background knowledge that helps you find your own connections. It is in the creation of your own links within your own mind that information is turned into knowledge.

Carr actually has two other general theses in the book: that neural plasticity to some degree undercuts the more extreme claims of evolutionary psych (which I have some doubts about and am doing further reading on), and a pretty silly closing argument about the implausibility of AI. Fortunately, his main argument about the problems with using hypertext is totally independent of these two.

Replies from: PhilGoetz, JohnDavidBustard
comment by PhilGoetz · 2010-09-01T16:34:46.432Z · LW(p) · GW(p)

I haven't read Nicholas Carr, but I've seen summaries of some of the studies used to claim that book reading results in more comprehension than hypertext reading. All the ones I saw are bogus. They all use, for the hypertext reading, a linear extract from a book, broken up into sections separated by links. Sometimes the links are placed in somewhat arbitrary places. Of course a linear text can be read more easily linearly.

I believe hypertext reading is deeper, and that this is obvious, almost true by definition. Non-hypertext reading is exactly 1 layer deep. Hypertext lets the reader go deeper. Literally. You can zoom in on any topic.

A more fair test would be to give students a topic to study, with the same material, but some given books, and some given the book material organized and indexed in a competent way as hypertext.

Wide and deep reading, such that you make the information presented yours, gives you more background knowledge that helps you find your own connections.

Hypertext reading lets you find your own connections, and lets you find background knowledge that would otherwise simply be edited out of a book.

Replies from: allenwang, xamdam, jacob_cannell, zero_call
comment by allenwang · 2010-09-01T21:04:58.330Z · LW(p) · GW(p)

It seems to me that the main reason most hypertext sources seem to produce shallower reading is not the hypertext itself, but that the barriers to publication are so low that the quality of most written work online is usually much lower than that of printed material. For example, this post is something that I might have spent 3 minutes thinking about before posting, whereas a printed publication would have much more time to mature and also many more filters, such as publishers, to take out the noise.

It is more likely that book reading seems deeper because the quality is better.

Also, it wouldn't be difficult to test this hypothesis with print and online newspapers, since they both contain the same material.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-09-02T21:28:09.562Z · LW(p) · GW(p)

It seems to me like "books are slower to produce than online material, so they're higher quality" would belong to the class of statements that are true on average but close to meaningless in practice. There's enormous variance in the quality of both digital and printed texts, and whether you absorb more good or bad material depends more on which digital/print sources you seek out than on whether you prefer digital or print sources overall.

Replies from: SilasBarta, zero_call
comment by SilasBarta · 2010-09-07T21:18:35.077Z · LW(p) · GW(p)

Agree completely. While most of what's on the internet is low-quality, it's easy to find the domains of reliably high-quality thought. I've long felt that I get more intellectual stimulation from a day of reading blogs than I've gotten from a lifetime of reading printed periodicals.

comment by zero_call · 2010-09-06T21:50:44.467Z · LW(p) · GW(p)

It's not that books take longer to produce, it's that books just tend to have higher quality, and a corollary of that is that they frequently take longer to produce. Personally I feel fairly certain that the average quality of my online reading is substantially lower than offline reading.

comment by xamdam · 2010-09-01T20:55:10.035Z · LW(p) · GW(p)

I believe hypertext reading is deeper, and that this is obvious, almost true by definition. Non-hypertext reading is exactly 1 layer deep. Hypertext lets the reader go deeper. Literally. You can zoom in on any topic.

It has deeper structure, but that is not necessarily user-friendly. A great textbook will have different levels of explanation, an author-designed depth-diving experience. Depending on the author, the material, you, and the local Wikipedia quality, that might be a better or worse learning experience.

Hypertext reading lets you find your own connections, and lets you find background knowledge that would otherwise simply be edited out of a book.

Yep, definitely a benefit, but not without a trade-off. Often a good author will set you up with connections better than you can.

Replies from: PhilGoetz
comment by PhilGoetz · 2010-09-20T16:14:35.425Z · LW(p) · GW(p)

Often a good author will set you up with connections better than you can.

But not better than a good hypertext author can.

Replies from: xamdam
comment by xamdam · 2010-09-20T18:31:37.678Z · LW(p) · GW(p)

If the hypertext is intentionally written as a book, which is generally not the case.

comment by jacob_cannell · 2010-09-01T21:15:54.395Z · LW(p) · GW(p)

I like allenwang's reply below, but there is another consideration with books.

Long before hyperlinks, books evolved comprehensive indices and references, and these allow humans to relatively easily and quickly jump between topics in one book and across books.

Now are the jumps we employ on the web faster? Certainly. But the difference is only quantitative, not qualitative, and the web version isn't enormously faster.

comment by zero_call · 2010-09-06T21:56:52.734Z · LW(p) · GW(p)

Hypertext reading has a strong potential, but it also has negative aspects that you don't have as much with standard books. For example, it's much easier to get distracted or side-tracked with a lot of secondary information that might not even be very important.

comment by JohnDavidBustard · 2010-09-01T15:43:05.243Z · LW(p) · GW(p)

It is very difficult to distinguish rationalisations of the discomfort of change from actual consequences. If this belief that hypertext leads to a less sophisticated understanding than reading a book is true, what measurable behaviour would change?

comment by eugman · 2010-09-10T12:51:28.159Z · LW(p) · GW(p)

Can anyone suggest any blogs giving advice for serious romantic relationships? I think a lot of my problems come from a poor theory of mind for my partner, so things like The Five Love Languages and material on attachment styles have been useful.

Thanks.

Replies from: Relsqui, rhollerith_dot_com, Violet
comment by Relsqui · 2010-09-16T21:31:45.511Z · LW(p) · GW(p)

I have two suggestions, which are not so much about romantic relationships as they are about communicating clearly; given your example and the comments below, though, I think they're the kind of thing you're looking for.

The Usual Error is a free ebook (or nonfree dead-tree book) about common communication errors and how to avoid them. (The "usual error" of the title is assuming by default that other people are wired like you--basically the same as the typical psyche fallacy.) It has a blog as well, although it doesn't seem to be updated much; my recommendation is for the book.

If you're a fan of the direct practical style of something like LW, steel yourself for a bit of touchy-feeliness in UE, but I've found the actual advice very useful. In particular, the page about the biochemistry of anger has been really helpful for me in recognizing when and why my emotional response is out of whack with the reality of the situation, and not just that I should back off and cool down, but why it helps to do so. I can give you an example of how this has been useful for me if you like, but I expect you can imagine.

A related book I'm a big fan of is Nonviolent Communication (no link because its website isn't of any particular use; you can find it at your favorite book purveyor or library). Again, the style is a bit cloying, but the advice is sound. What this book does is lay out an algorithm for talking about how you feel and what you need in a situation of conflict with another person (where "conflict" ranges from "you hurt my feelings" to gang war).

I think it's noteworthy that following the NVC algorithm is difficult. It requires finding specific words to describe emotions, phrasing them in a very particular way, connecting them to a real need, and making a specific, positive, productive request for something to change. For people who are accustomed to expressing an idea by using the first words which occur to them (almost everyone), this requires flexing mental muscles which don't see much use. I think of myself as a good communicator, and it's still hard for me to follow NVC when I'm upset. But the difficulty is part of the point--by forcing you to stop and rethink how you talk about the conflict, it forces you to see it in a way that's less hindered by emotional reflex and more productive towards understanding what's going on and finding a solution.

Neither of these suggestions requires that your partner also read them, but it would probably help. (It just keeps you from having to explain a method you're using.)

If you find a good resource for this which is a blog, I'd be interested in it as well. Maybe obviously, this topic is something I think a lot about.

Replies from: eugman, pjeby
comment by eugman · 2010-09-20T15:49:51.720Z · LW(p) · GW(p)

Both look rather useful, thanks for the suggestions. Also, Google Books has Nonviolent Communication.

Replies from: Relsqui
comment by Relsqui · 2010-09-20T17:17:59.519Z · LW(p) · GW(p)

You're welcome, and thanks--that's good to know. I'll bookmark it for when it comes up again.

comment by pjeby · 2010-09-16T22:57:30.348Z · LW(p) · GW(p)

I rather liked the page about how we're made of meat.

Thanks for the cool link!

Replies from: Relsqui
comment by Relsqui · 2010-09-16T23:32:49.992Z · LW(p) · GW(p)

You're welcome! Glad you like it. I'm a fan of that particular page as well--it's probably the technique from that book that I refer to/think about explicitly second most, after the usual error itself. It's valuable to be able to separate the utility of hearing something to gain knowledge from that of hearing something you already know to gain reassurance--it just bypasses a whole bunch of defensiveness, misunderstanding, or insecurity that doesn't need to be there.

comment by RHollerith (rhollerith_dot_com) · 2010-09-10T18:40:43.235Z · LW(p) · GW(p)

I could point to some blogs whose advice seems good to me, but I won't because I think I can help you best by pointing only to material (alas no blogs though) that has actually helped me in a serious relationship -- there being a huge difference in quality between advice of the form "this seems true to me" and advice of the form "this actually helped me".

What has helped me in my relationships more than any other information is the non-speculative part of the consensus among evolutionary psychologists on sexuality, because it provides a vocabulary for expressing hypotheses about particular situations I was facing, and a way to winnow the field of prospective hypotheses and bits of advice I find online down to the ones worth testing. In other words, ev psych allows me to dismiss many ideas so that I do not incur the expense of testing them.

I needed a lot of free time, however, to master that material. Probably the best way to acquire it is to read the chapters on sex in Robert Wright's Moral Animal. I read that book slowly and carefully over 12 months or so, and it was definitely worth the time and energy. Actually, the material in Moral Animal on friendship (reciprocal altruism) is very much applicable to serious relationships too, and the material on sex and friendship together forms about half the book.

Before I decided to master basic evolutionary psychology in 2000, the advice that helped me the most was from John Gray, author of Men Are From Mars, Women Are From Venus.

Analytic types will mistrust author and speaker John Gray because he is glib and charismatic (the Maharishi or such who founded Transcendental Meditation once offered to make Gray his successor and the inheritor of his organization) but his pre-year-2000 advice is an accurate map of reality IMHO. (I probably only skimmed Mars and Venus, but I watched long televised lectures on public broadcasting that probably covered the same material.)

comment by Violet · 2010-09-10T12:56:01.139Z · LW(p) · GW(p)

Do you really need a "theory of mind" for that?

Our partners are not a foreign species. Communicate lots in an open and honest manner with hir and try to understand what makes that particular person tick.

Replies from: JoshuaZ, eugman
comment by JoshuaZ · 2010-09-10T13:03:47.578Z · LW(p) · GW(p)

Yes, you do. Many people who have highly developed theories of mind seem to underestimate how much unconscious processing they are doing -- processing that is profoundly difficult for people whose theories of mind are less developed. People who are mildly on the autism spectrum in particular (generally below the threshold of diagnosis) often have a lot of difficulty with this sort of unconscious processing, but can do a much better job if given a lot of explicit rules or heuristics.

Replies from: eugman
comment by eugman · 2010-09-10T13:08:27.766Z · LW(p) · GW(p)

Thank you. I believe I may fall in this category. I am highly quantitative and analytical, often to my detriment.

comment by eugman · 2010-09-10T13:06:59.190Z · LW(p) · GW(p)

Yes. You are assuming ze has a high level of introspection which would facilitate communication. This isn't always the case.

comment by gwern · 2010-09-08T13:07:52.608Z · LW(p) · GW(p)

Relevant to our akrasia articles:

If obese individuals have time-inconsistent preferences then commitment mechanisms, such as personal gambles, should help them restrain their short-term impulses and lose weight. Correspondence with the bettors confirms that this is their primary motivation. However, it appears that the bettors in our sample are not particularly skilled at choosing effective commitment mechanisms. Despite payoffs of as high as $7350, approximately 80% of people who spend money to bet on their own behaviour end up losing their bets.

http://www.marginalrevolution.com/marginalrevolution/2010/09/should-you-bet-on-your-own-ability-to-lose-weight.html

Replies from: Sniffnoy
comment by Sniffnoy · 2010-09-08T20:29:39.625Z · LW(p) · GW(p)

I recall someone claiming here earlier that they could do anything if they bet they could, though I can't find it right now. Useful to have some more explicit evidence about that.

comment by realitygrill · 2010-09-03T04:18:06.969Z · LW(p) · GW(p)

This is perhaps a bit facetious, but I propose we try to contact Alice Taticchi (Miss World Italy 2009) and introduce her to LW. Reason? She said she'd "bring without any doubt my rationality", among other things, when asked what qualities she would bring to the competition.

comment by Morendil · 2010-09-02T22:21:35.808Z · LW(p) · GW(p)

I have argued in various places that self-deception is not an adaptation evolved by natural selection to serve some function. Rather, I have said self-deception is a spandrel, which means it’s a structural byproduct of other features of the human organism. My view has been that features of mind that are necessary for rational cognition in a finite being with urgent needs yield a capacity for self-deception as a byproduct. On this view, self-deception wasn’t selected for, but it also couldn’t be selected out, on pain of losing some of the beneficial features of which it’s a byproduct.

Neil Van Leeuwen, Why Self-Deception Research Hasn’t Made Much Progress

comment by Daniel_Burfoot · 2010-09-01T02:48:18.234Z · LW(p) · GW(p)

Anyone here working as a quant in the finance industry, and have advice for people thinking about going into the field?

Replies from: kim0, xamdam
comment by kim0 · 2010-09-01T09:08:23.192Z · LW(p) · GW(p)

I am, and I am planning to leave it for higher, more average pay. From my viewpoint, it is terribly overrated and undervalued.

Replies from: Daniel_Burfoot
comment by Daniel_Burfoot · 2010-09-01T16:18:01.456Z · LW(p) · GW(p)

Can you expand on this? Do you think your experience is typical?

Replies from: kim0
comment by kim0 · 2010-09-03T08:19:18.852Z · LW(p) · GW(p)

Most places I have worked, the reputation of the job has been quite different from the actual job. I have compared my experiences with those of friends and colleagues, and they are relatively similar. Having an M.Sc. in physics and lots of programming experience made it possible for me to have more different kinds of engineering jobs, and thus more varied experience.

My conclusion is that the anthropic principle holds for me in the workplace, so that each time I experience Dilbertesque situations, they are representative of typical work situations. So yes, I do think my work situation is typical.

My current job doing statistical analysis for stock analysts pays $73,000, while the average pay elsewhere is $120,000.

comment by xamdam · 2010-09-01T03:34:23.810Z · LW(p) · GW(p)

Ping Arthur Breitman on FB or LinkedIn. He is part of the NYC LW meetup, and a quant at Goldman.

comment by Pavitra · 2010-09-23T05:30:32.574Z · LW(p) · GW(p)

In light of the news that apparently someone or something is hacking into automated factory control systems, I would like to suggest that the apocalypse threat level be increased from Guarded (lots of curious programmers own fast computers) to Elevated (deeply inconclusive evidence consistent with a hard takeoff actively in progress).

Replies from: jimrandomh
comment by jimrandomh · 2010-09-23T20:23:40.758Z · LW(p) · GW(p)

It looks a little odd for a hard takeoff scenario - it seems to be prevalent only in Iran, it seems configured to target a specific control system, and it uses 0-days wastefully (I see a claim that it uses four 0-days and 2 stolen certificates). On the other hand, this is not inconsistent with an AI going after a semiconductor manufacturer and throwing in some Iranian targets as a distraction.

My preference ordering is friendly AI, humans, unfriendly AI; my probability ordering is humans, unfriendly AI, friendly AI.

comment by simplicio · 2010-09-12T05:10:27.976Z · LW(p) · GW(p)

In light of XFrequentist's suggestion in "More Art, Less Stink," would anyone be interested in a post consisting of a summary & discussion of Cialdini's Influence?

This is a brilliant book on methods of influencing people. But it's not just Dark Arts - it also includes defense against the Dark Arts!

Replies from: jimmy, CronoDAS
comment by jimmy · 2010-09-16T07:15:09.967Z · LW(p) · GW(p)

I just finished reading that book. It is mostly from a "defense against" perspective.

Reading the chapter names provides a decent [extremely short] summary, and I expect that you're already aware that the things they name are forms of influence. That said, when I read through it, there were a lot of "Aha!" moments when I realized something I'd seen was actually a well-thought-out 'weapon of influence' -- and now my new hobby is saying "Chapter 3: Commitment and Consistency!" every time I see it used as persuasion.

The whole book is hard to put down, and makes me want to quote part of it to the nearest person in about every paragraph or two.

I'd consider writing such a post, but I'm not sure how to compress it -- the very basics should be obvious to the regulars here, but the details take time to flesh out.

comment by CronoDAS · 2010-09-12T06:34:26.092Z · LW(p) · GW(p)

Yes, I would like such a post.

comment by blogospheroid · 2010-09-02T12:32:25.099Z · LW(p) · GW(p)

Idea - Existential risk fighting corporates

People of normal IQ are advised to work at our normal day jobs -- the best competency that we have -- and, after setting aside enough money for ourselves, contribute to the prevention of existential risk. That is a good idea if the skills of the people here are getting their correct market value, and if there is such a diversity of skills that they cannot form a sensible corporation together.

Also, consider that as we make the world's corporations more agile, we bring closer the moment where an unfriendly optimization process might just be let loose.

But just consider the small probability that some of the rationalists here come together as a non-profit corporation to contribute to mitigating existential risk. There are many reasons our kind cannot cooperate, and the fact is that coordination is hard.

But if we could, then with the latest in decision theory, argument diagrams (1, 2, 3), and internal futarchy (after the corporation gets big), we could create a corporation that wins. There are many people from the world of software here. Within the corporation itself, there is no need to stick to legacy systems. We could interact with the best of coordination software and keep the corporation "sane".

We can create products and services like any for-profit corporation and sell them at market rates, but use the surplus to mitigate existential risk. In other words, it is difficult, but in the Everett branches where x-rationalists manage a synergistic outcome, it might be possible to strengthen the funding of existential risk mitigation considerably.

Some criticisms of this idea which I could think of

  • The corporation becomes a lost cause. Goodhart's law kicks in and the original purpose of forming the corporation is lost.
  • People are polite in situations where no important decisions are being made (such as an internet forum like Less Wrong), but if actual productivity is involved, they might get hostile when someone lowers their corporate karma. Perfect internet buddies might become co-workers who hate each other's guts.
  • The argument that there is no possibility of synergy: the present situation, where rational people, spread over the world in different situations, are money-pumping from the less rational people around them, is better.
  • People outside the corporation might mentally slot existential risk as a kooky topic that "that creepy company talks about all the time" and not see it as a genuine issue that diverse persons from different walks of life are interested in.

and so on..

But still, my question is: shouldn't we at least consider the possibilities of synergy in the manner indicated?

Replies from: wedrifid
comment by wedrifid · 2010-09-02T13:45:11.653Z · LW(p) · GW(p)

This would be more likely to work if you completely took out the 'for existential risk' part. Find a way to cooperate with people effectively "to make money". No need to get religion all muddled up in it.

comment by JamesAndrix · 2010-09-02T01:49:42.824Z · LW(p) · GW(p)

I would like to see more on fun theory. I might write something up, but I'd need to review the sequence first.

Does anyone have something that could turn into a top level post? Or even an open thread comment?

Replies from: JohnDavidBustard, komponisto
comment by JohnDavidBustard · 2010-09-02T08:10:52.487Z · LW(p) · GW(p)

I used to be a professional games programmer and designer and I'm very interested in fun. There are a couple of good books on the subject: A Theory of Fun and Rules of Play. As a designer I spent many months analyzing sales figures for both computer games and other conventional toys. The patterns within them are quite interesting: for example, children's toys pass from amorphous learning tools (bright objects and blobby humanoids), through mimicking parents (accurate baby dolls), to mimicking older children (sexualised dolls and makeup). My ultimate conclusion was that fun takes many forms whose source can ultimately be reduced to what motivates us. In effect, fun things are mental hacks of our intrinsic motivations. I gave a couple of talks on my take on what these motivations are. I'd be happy to repeat this material here (or upload and link to the videos if people prefer).

Replies from: Mass_Driver, JamesAndrix
comment by Mass_Driver · 2010-09-02T17:13:26.610Z · LW(p) · GW(p)

I found Rules of Play to be little more than a collection of unnecessary (if clearly-defined) jargon and glittering generalities about how wonderful and legitimate games are. Possibly an alien or non-neurotypical who had no idea what a game was might gather some idea of games from reading the book, but it certainly didn't do anything for me to help me understand games better than I already do from playing them. Did I miss something?

Replies from: JohnDavidBustard
comment by JohnDavidBustard · 2010-09-02T17:41:35.054Z · LW(p) · GW(p)

Yes, I take your point. There isn't a lot of material on fun, and game design analysis is often very genre-specific. I like Rules of Play not so much because it provides great insight into why games are fun, but more as a first step towards being a bit more rigorous about what game mechanics actually are. There is definitely a lot further to go, and there is a tendency to ignore the cultural and psychological motivations (e.g. why being a gangster and free-roaming mechanics work well together) in favour of analysing abstract games.

However, it is fascinating to imagine a minimal game; in fact, some of the most successful game titles have stripped the interactions down to their most basic motivating mechanics (Farmville or Diablo, for example).

To provide a concrete example, I worked on a game (Medievil Resurrection) where the player controlled a crossbow in a minigame. By adjusting the speed and acceleration of the mapping between joystick and bow, the sensation of controlling it passed through distinct stages. As the parameters approached the sweet spot, my mind (and that of other testers) experienced a transition from feeling I was controlling the bow indirectly to feeling like I was holding the bow. Deviating slightly around this value adjusted its perceived weight, but there was a concrete point at which this sensation was lost.

Although Rules of Play does not cover this kind of material, it did feel to me like an attempt to examine games in a more general way, so that these kinds of element could be extracted from their genre-specific contexts and understood in isolation.

comment by JamesAndrix · 2010-09-02T17:02:02.205Z · LW(p) · GW(p)

Will upvote

comment by komponisto · 2010-09-02T02:15:57.303Z · LW(p) · GW(p)

I've long had the idea of writing a sequence on aesthetics; I'm not sure if and when I'll ever get around to it, however. (I have a fairly large backlog of post ideas that have yet to be realized.)

comment by PeerInfinity · 2010-09-20T20:19:31.901Z · LW(p) · GW(p)

Is there enough interest for it to be worth creating a top level post for an open thread discussing Eliezer's Coherent Extrapolated Volition document? Or other possible ideas for AGI goal systems that aren't immediately disastrous to humanity? Or is there a top level post for this already? Or would some other forum be more appropriate?

comment by gwern · 2010-09-12T13:50:03.730Z · LW(p) · GW(p)

The Onion parodies cyberpunk by describing our current reality: http://www.theonion.com/articles/man-lives-in-futuristic-scifi-world-where-all-his,17858/

comment by datadataeverywhere · 2010-09-10T20:12:03.890Z · LW(p) · GW(p)

An observer is given a box with a light on top, and given no information about it. At time t0, the light on the box turns on. At time tx, the light is still on.

At time tx, what information can the observer be said to have about the probability distribution of the duration of time that the light stays on? Obviously the observer has some information, but how is it best quantified?

For instance, the observer wishes to guess when the light will turn off, or find the best approximation of E(X | X > tx-t0), where X ~ duration of light being on. This is guaranteed to be a very uninformed guess, but some guess is possible, right?

The observer can establish a CDF of the probability of the light turning off at time t; for t <= tx, p=0. For t > tx, 0 < p < 1, assuming that the observer can never be certain that the light will ever turn off. What goes on in between is the interesting part, and I haven't the faintest idea how to justify any particular shape for the CDF.
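One candidate way to quantify it (a sketch under an explicit assumption, not the uniquely correct answer): write Δ = tx − t0 for the elapsed on-time and T for the total on-duration, and assume a heavy-tailed "ignorance" prior p(T) ∝ 1/T² (improper, but it yields a proper posterior once conditioned on T > Δ):

```latex
p(T \mid T > \Delta) = \frac{\Delta}{T^{2}} \quad (T > \Delta),
\qquad
F(t) = P(\text{light off by } t \mid \text{still on at } t_x)
     = 1 - \frac{\Delta}{t - t_0} \quad (t > t_x).
```

Under that assumption the median total duration is 2Δ (the light is as likely as not to stay on for as long again as it already has), while E(T), and hence E(X | X > tx − t0), diverges, which matches the intuition that the expectation really is a very uninformed guess. Any proper prior with a built-in time scale would tame the expectation, but choosing that scale is exactly the information the observer doesn't have.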

comment by JamesAndrix · 2010-09-05T20:22:54.590Z · LW(p) · GW(p)

Finally prompted by this, though it would be too off-topic there:

http://lesswrong.com/lw/2ot/somethings_wrong/

The ideas really started forming around the recent 'public relations' discussions.

If we want to change people's minds, we should be advertising.

I do like long drawn out debates, but most of the time they don't accomplish anything and even when they do, they're a huge use of personal resources.

There is a whole industry centered around changing people's minds effectively. They have expertise in this, and they do it way better than we do.

Replies from: Perplexed, jacob_cannell
comment by Perplexed · 2010-09-05T23:02:21.149Z · LW(p) · GW(p)

The ideas really started forming around the recent 'public relations' discussions.

If we want to change people's minds, we should be advertising.

My guess is that "Harry Potter and the Methods of Rationality" is the best piece of publicity the SIAI has ever produced.

I think that the only way to top it would be a Singularity/FAI-themed computer game.

How about a turn-based strategy game where the object is to get deep enough into the singularity to upload yourself before a uFAI shows up and turns the universe into paper clips?

Maybe it would work, and maybe not, but I think that the demographic we want to reach is 4chan - teenage hackers. We need to tap into the "Dark Side" of the Cyberculture.

Replies from: ata
comment by ata · 2010-09-05T23:46:07.694Z · LW(p) · GW(p)

How about a turn-based strategy game where the object is to get deep enough into the singularity to upload yourself before a uFAI shows up and turns the universe into paper clips?

I don't think that would be very helpful. Advocating rationality (even through Harry Potter fanfiction) helps because people are better at thinking about the future and existential risks when they care about and understand rationality. But spreading singularity memes as a kind of literary genre won't do that. (With all due respect, your idea doesn't even make sense: I don't think "deep enough into the singularity" means anything with respect to what we actually talk about as the "singularity" here (successfully launching a Friendly singularity probably means the world is going to be remade in weeks or days or hours or minutes, and it probably means we're through with having to manually save the world from any remaining threats), and if a uFAI wants to turn the universe into paperclips, then you're screwed anyway, because the computer you just uploaded yourself into is part of the universe.)

Unfortunately, I don't think we can get people excited about bringing about a Friendly singularity by speaking honestly about how it happens purely at the object level, because what actually needs to be done is tons of math (plus some outreach and maybe paper-writing and book-writing and eventually a lot of coding). Saving the world isn't actually going to be an exciting ultimate showdown of ultimate destiny, and any marketing and publicity shouldn't be setting people up for disappointment by portraying it as such... and it should also be making it clear that even if existential risk reduction were fun and exciting, it wouldn't be something you do for yourself because it's fun and exciting, and you don't do it because you get to affiliate with smart/high-status people and/or become known as one yourself, and you don't do it because you personally want to live forever and don't care about the rest of the world, you do it because it's the right thing to do no matter how little you personally get out of it.

So we don't want to push the public further toward thinking of the singularity as a geek / sci-fi / power-fantasy / narcissistic thing (I realize some of those are automatic associations and pattern completions that people independently generate, but that's to be resisted and refuted rather than embraced). Fiction that portrays rationality as virtuous (and transparent, as in the Rationalist Fanfiction Principle) and that portrays transhumanistic protagonists that people can identify with (or at least like) is good because it makes the right methods and values salient and sympathetic and exciting. Giving people a vision of a future where humanity has gotten its shit together as a thing-to-protect is good; anything that makes AI or the Singularity or even FAI seem too much like an end in itself will probably be detrimental, especially if it is portrayed anywhere near anthropomorphically enough for it to be a protagonist or antagonist in a video game.

Maybe it would work, and maybe not, but I think that the demographic we want to reach is 4chan - teenage hackers. We need to tap into the "Dark Side" of the Cyberculture.

Only if they can be lured to the Light Side. The *chans seem rather tribal and amoral (at least the /b/s and the surrounding culture; I know that's not the entirety of the *chans, but they have the strongest influence in those circles). If the right marketing can turn them from apathetic tribalist sociopaths into altruistic globalist transhumanists, then that's great, but I wouldn't focus limited resources in that direction. Probably better to reach out to academia; at least that culture is merely inefficient rather than actively evil.

Replies from: Perplexed
comment by Perplexed · 2010-09-06T00:26:51.010Z · LW(p) · GW(p)

I don't think that would be very helpful. [And here is why...]

I am impressed. A serious and thoughtful reply to a maybe serious, but definitely not thoughtful, suggestion. Thank you.

If the right marketing can turn them [the *chans] from apathetic tribalist sociopaths into altruistic globalist transhumanists, then that's great, but I wouldn't focus limited resources in that direction. Probably better to reach out to academia; at least that culture is merely inefficient rather than actively evil.

"Actively evil" is not "inherently evil". The action currently is over on the evil side because the establishment is boring. Anti-establishment evil is currently more fun. But what happens if the establishment becomes evil and boring? Could happen on the way to a friendly singularity. Don't rule any strategies out. Thwarting a nascent uFAI may be one of the steps we need to take along the path to FAI.

Replies from: ata
comment by ata · 2010-09-06T01:02:05.642Z · LW(p) · GW(p)

I am impressed. A serious and thoughtful reply to a maybe serious, but definitely not thoughtful, suggestion. Thank you.

Thank you for taking it well; sometimes I still get nervous about criticizing. :)

"Actively evil" is not "inherently evil". The action currently is over on the evil side because the establishment is boring. Anti-establishment evil is currently more fun. But what happens if the establishment becomes evil and boring? Could happen on the way to a friendly singularity. Don't rule any strategies out. Thwarting a nascent uFAI may be one of the steps we need to take along the path to FAI.

I've heard the /b/ / "Anonymous" culture described as Chaotic Neutral, which seems apt. My main concern is that waiting for the right thing to become fun for them to rebel against is not efficient. (Example: Anonymous's movement against Scientology began not in any of the preceding years when Scientology was just as harmful as always, but only once they got an embarrassing video of Tom Cruise taken down from YouTube. "Project Chanology" began not as anything altruistic, but as a morally-neutral rebellion against what was perceived as anti-lulz. It did eventually grow into a larger movement including people who had never heard of "Anonymous" before, people who actually were in it to make the world a better place whether the process was funny or not. These people were often dismissed as "moralfags" by the 4chan old-timers.) Indeed they are not inherently evil, but when morality is not a strong consideration one way or the other, it's too easy for evil to be more fun than good. I would not rely on them (or even expect them) to accomplish any long-term good when that's not what they're optimizing for.

(And there's the usual "herding cats" problem — even if something would normally seem fun to them, they're not going to be interested if they get the sense that someone is trying to use them.)

Maybe some useful goal that appeals to their sensibilities will eventually present itself, but for now, if we're thinking about where to direct limited resources and time and attention, putting forth the 4chan crowd as a good target demographic seems like a privileged hypothesis. "Teenage hackers" are great (I was one!), but I'm not sure about reaching out to them once they're already involved in 4chan-type cultures. There are probably better times and places to get smart young people interested.

comment by jacob_cannell · 2010-09-05T20:43:49.959Z · LW(p) · GW(p)

What ideas? I'm pretty sure I find whatever you are talking about interesting and shiny, but I'm not quite sure what it even is.

Replies from: JamesAndrix
comment by JamesAndrix · 2010-09-05T20:48:26.928Z · LW(p) · GW(p)

Any ideas. For the SIAI it would probably be existential risks then UFAI later, in general it could be rationality or evolution or atheism or whatever.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-05T20:51:26.316Z · LW(p) · GW(p)

What is the whole industry you speak of? Self-help, religion, marketing? And what additional advertising? I think that spreading the ideas is important as well, I"m just not sure what you are considering.

Replies from: JamesAndrix
comment by JamesAndrix · 2010-09-05T21:56:23.912Z · LW(p) · GW(p)

Advertising/marketing. Short of ashiest bus ads, I can't think of anything that's been done.

All I'm really suggesting is that we focus on mass persuasion in the way it has been proven to be most efficient. What that actually amounts to will depend on the target audience, and how much money is available, among other things.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-05T22:32:55.842Z · LW(p) · GW(p)

Did you mean "atheist bus ads"? I actually find strict-universal-atheism to be irrational compared to agnosticism because of the SA and the importance of knowing the limits of certainty, but that's unrelated and I digress.

I've long suspected that writing popular books on the subject would be an effective strategy for mass persuasion. Kurzweil has certainly had a history of some success there, although he also brings some negative publicity due to his association with dubious supplements and the expensive SingUniversity. It will be interesting to see how EY's book turns out and is received.

I'm actually skeptical about how far rationality itself can go towards mass persuasion. Building a rational case is certainly important, but the content of your case is even more important (regardless of its rationality).

On that note I suspect that bridging a connection to the mainstream's beliefs and values would go a ways towards increasing mass marketability. You have to consider not just the rationality of ideas, but the utility of ideas.

It would be interesting to analyze and compare how emphasizing the hope vs. doom aspects of the message would affect popularity. SIAI at the moment appears focused on emphasizing doom, targeting a narrow market -- a subset of technophile 'rationalists' or atheist intellectuals -- and wooing academia in particular.

I'm interested in how you'd target mainstream liberal Christians or New Agers, for example, or even just the intellectual agnostic/atheist mainstream - the types of people who buy books such as The End of Faith, Breaking the Spell, etc.

Replies from: JamesAndrix
comment by JamesAndrix · 2010-09-05T23:33:39.159Z · LW(p) · GW(p)

I'm not sure what I'd do, but I'm not a marketing expert either. (Though I am experimenting)

It would probably be possible to make a campaign that took advantage of UFAI in sci-fi. AIs taking over the world isn't a difficult concept to get across, so the ad would just need to persuade people that it's possible in reality, and that there is a group working towards a solution.

I hope you haven't forgotten our long drawn out discussion, as I do think that one is worthwhile.

Replies from: ata
comment by ata · 2010-09-07T18:02:49.514Z · LW(p) · GW(p)

AIs taking over the world isn't a difficult concept to get across

AIs taking over the world because they have implausibly human-like cognitive architectures and they hate us or resent us or desire higher status than us is an easy concept to get across. It is also, of course, wrong. An AI immediately taking apart the world to use its mass for something else because its goal system is nothing like ours and its utility function doesn't even have a term for human values is more difficult; because of anthropomorphic bias, it will be much less salient to people, even if it is more probable.

Replies from: JamesAndrix, jacob_cannell
comment by JamesAndrix · 2010-09-07T18:57:57.831Z · LW(p) · GW(p)

They have the right conclusion (plausible AI takeover) for slightly wrong reasons. "Hate [humans] or resent [humans] or desire higher status than [humans]" are slightly different values than ours (even if they're just like the values humans often have towards other groups).

So we can gradually nudge people closer to the truth a bit at a time by saying "Plus, it's unlikely that they'll value X, so even if they do something with the universe it will not have X"

But we don't have to introduce them to the full truth immediately, as long as we don't base any further arguments on falsehoods they believe.

If someone is convinced of the need for asteroid defense because asteroids could destroy a city, you aren't obligated to tell them that larger asteroids could destroy all humanity when you're asking for money. Even if you believe bigger asteroids to be more likely.

I don't think it's dark epistemology to avoid confusing people if they've already got the right idea.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-09-07T19:28:21.115Z · LW(p) · GW(p)

So we can gradually nudge people closer to the truth a bit at a time by saying "Plus, it's unlikely that they'll value X, so even if they do something with the universe it will not have X"

Writing up high-quality arguments for your full position might be a better tool than "nudging people closer to the truth a bit at a time". Correct ideas have a scholarly appeal due to internal coherence, even if they need to overcome plenty of cached misconceptions, but making that case requires a certain critical mass of published material.

Replies from: JamesAndrix
comment by JamesAndrix · 2010-09-07T22:37:18.973Z · LW(p) · GW(p)

I do see value in that, but I'm thinking of a TV commercial or YouTube video with a Terminator-style look and feel -- though possibly emphasizing that against real superintelligence, there would be no war.

I can't immediately remember a way to simplify "the space of all possible values is huge and human-like values are a tiny part of that", and I don't think that would resonate at all.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-09T06:48:42.440Z · LW(p) · GW(p)

I do see value in that, but I'm thinking of a TV commercial or youtube video with a terminator style look and feel.

A large portion of the world has already seen a Terminator flick, or the Matrix. The AI-is-evil-nonhuman-threat meme is already well established in the wild, to the point of caricature. The AI-is-an-innocent-child meme wasn't as popular - the film A.I. is the only example I can think of, and not many people saw it.

And even though the Terminator and the Matrix are far from realistic, they did at least get the general shape of the outcome correct - humans lose.

What would your message add over this in reach or content?

At this point the meme is almost oversaturated and it is difficult for people to take seriously. Did "The Day After Tomorrow" help or hinder the environmental movement?

Replies from: JamesAndrix
comment by JamesAndrix · 2010-09-09T07:20:57.142Z · LW(p) · GW(p)

This might not fit the terminator motif anymore, but:

That there are people working on a way to target AI development so it reliably looks more like R2D2, Johnny 5, Commander Data, Sonny, Marvin... ok that's all I can think of but just for fun I'll get these from wikipedia:

Gort, Bishop from Aliens, almost everything from The Jetsons, Transformers (Autobots anyway), the Iron Giant, and KITT

And again we don't have to explain that AI done right will be orders of magnitude more helpful than any of these.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-09T08:23:01.105Z · LW(p) · GW(p)

It's interesting that friendly AI was so common in earlier decades, and then this seemed to shift in the '90s.

As for AI-positive advertisements, that somehow reminded me. . .

Did you ever see that popular web-viral anti-banking video called Zeitgeist? In the sequel he seems to have realized that just being a critic wasn't enough, so suddenly the second part of Zeitgeist: Addendum turns into a Star Trek-ish utopia proposal out of nowhere. I forget the name, but it is basically some architect's pseudo-singularity (AI solves all our problems and makes these beautiful new cities for us, but isn't really conscious or dangerous).

I went to a screening of that film in LA, and I was amazed at how entranced the audience seemed to be. The questions at the end were pretty funny too -

"so .. there won't be any money? And the AI's will build us whatever we want?"

"Yes"

"So, what if I want to turn all of Texas into my house?"

. . .

Replies from: timtyler
comment by timtyler · 2010-09-09T08:50:08.587Z · LW(p) · GW(p)

You are thinking of Jacque Fresco.

comment by jacob_cannell · 2010-09-09T06:39:36.139Z · LW(p) · GW(p)

AIs taking over the world because they have implausibly human-like cognitive architectures and they hate us or resent us or desire higher status than us is an easy concept to get across. An AI immediately taking apart the world to use its mass for something else because its goal system is nothing like ours and its utility function doesn't even have a term for human values is more difficult; because of anthropomorphic bias, it will be much less salient to people, even if it is more probable.

I actually come from that outside-LW viewpoint that finds the former scenario involving "human-like cognitive architectures" as vastly more probable than "AI immediately taking apart the world to use its mass for something else because its goal system is nothing like ours and its utility function doesn't even have a term for human values".

So it could be that your viewpoint is more likely, and the rest of us are suffering from "anthropomorphic bias", but it also could be that anthropomorphic bias is in fact a self-fulfilling prophecy.

Replies from: ata
comment by ata · 2010-09-09T15:21:21.990Z · LW(p) · GW(p)

So it could be that your viewpoint is more likely, and the rest of us are suffering from "anthropomorphic bias", but it also could be that anthropomorphic bias is in fact a self-fulfilling prophecy.

I don't see how. We could get something like that if we get uploads before AGI, but that would really be more like an enhanced human taking over the world. Aside from that, where's the self-fulfilling prophecy? If people expect AGIs to exhibit human-like emotions and primate status drives and go terribly wrong as a result, why does that increase the chance that the creators of the first powerful AGI will build human-like emotions and primate status drives into it?

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-09T16:19:32.295Z · LW(p) · GW(p)

Actual uploads sit at the far end of a continuum of human-like cognitive architectures, and have the additional complication that scanning technology lags far behind electronics. You don't need uploads for anthropomorphic AI - you just need to loosely reverse-engineer the brain.

Also, "human-like cognitive architectures" is a wide spectrum that does not require human-like emotions or primate status drives - consider the example of Alexithymia.

Understanding human languages is a practical prerequisite for any AI to reach high levels of intelligence, and the implied anthropomorphic cognitive capacities required for true linguistic thinking heavily constrains the design space.

The self-fulfilling prophecy is that anthropomorphic AI will be both easier for us to create and more useful for us - so the bias is correct in a self-reinforcing manner.

comment by Cyan · 2010-09-11T18:34:33.076Z · LW(p) · GW(p)

Nine years ago today, I was just beginning my post-graduate studies. I was running around campus trying to take care of some registration stuff when I heard that unknown parties had flown two airliners into the WTC towers. It was surreal -- at that moment, we had no idea who had done it, or why, or whether there were more planes in the air that would be used as missiles.

It was big news, and it's worth recalling this extraordinarily terrible event. But there are many more ordinary terrible events that occur every day, and kill far more people. I want to keep that in mind too, and I want to make the universe a less deadly place for everyone.

(If you feel like voting this comment up, please review this first.)

comment by khafra · 2010-09-07T13:31:41.413Z · LW(p) · GW(p)

The Science of Word Recognition, by a Microsoft researcher, contains tales of reasonably well done Science gone persistently awry, to the point that the discredited version is today the most popular one.

Replies from: Clippy
comment by Clippy · 2010-09-07T14:19:16.403Z · LW(p) · GW(p)

That's a really good article, the Microsoft humans really know their stuff.

comment by CronoDAS · 2010-09-04T07:00:13.600Z · LW(p) · GW(p)

I have recently had the experience of encountering an event of extremely low probability.

Did I just find a bug in the Matrix?

Replies from: wedrifid, Sniffnoy
comment by wedrifid · 2010-09-04T10:44:11.294Z · LW(p) · GW(p)

Wow! I'm certainly surprised!

comment by Sniffnoy · 2010-09-04T07:47:12.447Z · LW(p) · GW(p)

I don't know, I'm thinking the idea that this wouldn't happen (which I had as well) may be a case of "living in the 'should universe'"...

comment by JohnDavidBustard · 2010-09-02T13:15:08.107Z · LW(p) · GW(p)

Apologies if this question seems naive but I would really appreciate your wisdom.

Is there a reasonable way of applying probability to analogue inference problems?

For example, suppose two substances A and B are being measured using a device which produces an analogue value C. Given a history of analogue values, how does one determine the probability of each substance? Unless the analogue values match exactly, how can historical information contribute to the answer without making assumptions about the shape of the probability density function produced by A or B? If this assumption must be made, how can it reasonably be determined, and, crucially, what events could occur that would lead to it being changed?

A real example would be that the PDF is often modelled as a Gaussian distribution, but more recent approaches tend to use different distributions because of outliers. This seems like the right thing to do, because our visual sense of distribution can easily identify such points, but is there any more rigorous justification?
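For concreteness, here is a minimal sketch of the calculation the question is asking about, with invented calibration histories and an assumed Gaussian noise model; the choice of noise model is exactly the assumption being worried about here.

```python
# A minimal sketch (invented data) of inferring which substance produced a new
# analogue reading c, given calibration histories for A and B and an ASSUMED
# Gaussian noise model fitted to those histories.

import numpy as np
from scipy import stats

history_A = np.array([1.02, 0.97, 1.05, 0.99, 1.01])
history_B = np.array([1.20, 1.25, 1.18, 1.22, 1.24])

def posterior_A(c, prior_A=0.5):
    # Assumption: readings are Gaussian around each substance's typical value,
    # with mean and spread estimated from the historical readings.
    like_A = stats.norm.pdf(c, history_A.mean(), history_A.std(ddof=1))
    like_B = stats.norm.pdf(c, history_B.mean(), history_B.std(ddof=1))
    return like_A * prior_A / (like_A * prior_A + like_B * (1 - prior_A))

print(posterior_A(1.03))   # close to A's history -> probability near 1
print(posterior_A(1.21))   # close to B's history -> probability near 0
```

The whole calculation is conditional on the assumed shape of the noise distribution, which is why the model-selection question below matters.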

Is, in effect, the selection of the underlying model the real challenge of rational decision making, not the inference rules?

Replies from: Perplexed
comment by Perplexed · 2010-09-02T13:57:10.440Z · LW(p) · GW(p)

Is there a reasonable way of applying probability to analogue inference problems?

Your examples certainly show a grasp of the problem. The solution is first sketched in Chapter 4.6 of Jaynes.

Is, in effect, the selection of the underlying model the real challenge of rational decision making, not the inference rules?

Definitely. Jaynes finishes deriving the inference rules in Chapter 2 and illustrates how to use them in Chapter 3. The remainder of the book deals with "the real challenge" - in particular Chapters 6, 7, 12, 19, and especially 20. In effect, you use Bayesian inference and/or Wald decision theory to choose between underlying models pretty much as you might have used them to choose between simple hypotheses. But there are subtleties ... to put it mildly. But then classical statistics has its subtleties too.
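To make the model-choice point concrete, here is a toy sketch (not taken from Jaynes; the data, prior, grid, and distribution parameters are all invented) of Bayesian comparison between a Gaussian noise model and a heavier-tailed Student-t noise model, which gives a more rigorous version of the visual "outlier" intuition mentioned above.

```python
# Toy Bayesian model comparison: Gaussian vs. heavy-tailed Student-t noise.
# All numbers are invented for illustration only.

import numpy as np
from scipy import stats

data = np.array([4.9, 5.1, 5.0, 4.8, 5.2, 9.0])   # one outlier at 9.0
mu_grid = np.linspace(0, 10, 1001)                  # flat prior over location
prior = np.ones_like(mu_grid) / len(mu_grid)

def marginal_likelihood(logpdf):
    # Average the likelihood of the data over the prior on the location mu.
    like = np.array([np.exp(logpdf(data, mu).sum()) for mu in mu_grid])
    return np.sum(like * prior)

gauss_ml = marginal_likelihood(lambda x, mu: stats.norm.logpdf(x, mu, 0.2))
t_ml = marginal_likelihood(lambda x, mu: stats.t.logpdf(x, df=2, loc=mu, scale=0.2))

print("Bayes factor (t vs Gaussian):", t_ml / gauss_ml)
# The heavy-tailed model wins by an enormous margin, because it doesn't treat
# the outlier as astronomically improbable. Swapping in other candidate noise
# models and comparing them the same way is the "choosing between underlying
# models" move described above.
```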

comment by TobyBartels · 2010-09-16T20:45:24.675Z · LW(p) · GW(p)

Since the Open Thread is necessarily a mixed bag anyway, hopefully it's OK if I test Markdown here.

test deleted

comment by allenwang · 2010-09-12T04:11:17.595Z · LW(p) · GW(p)

I have been following this site for almost a year now and it is fabulous, but I haven't felt an urgent need to post to the site until now. I've been working on a climate change project with a couple of others and am in desperate need of some feedback.

I know that climate change isn't a particularly popular topic on this website (but I'm not sure why, maybe I missed something, since much of the website seems to deal with existential risk. Am I really off track here?), but I thought this would be a great place to air these ideas. Our approach tries to tackle the irrational tangle that many of our institutions appear to be caught up in, so I thought this would be the perfect place to get some expertise. The project is kind of at a standstill, and it really needs some advice and leads (and collaborators), so please feel free to praise, criticize, advise, or even join.

I saw orthonormal's "welcome to LessWrong post," so I guess this is where to post before I build up enough points. I hope it isn't too long of an introductory post for this thread?

The aim of the project is to achieve a population that is more educated in the basics of climate change science and policy, with the hope that a more educated voting public will be a big step towards achieving the policies necessary to deal with climate change.

The basic problem of educating the public about climate change is twofold. First, people sometimes get trapped into “information cocoons” (I am using Cass Sunstein’s terminology from his book Infotopia). Information cocoons are created when the news and information people seek out and surround themselves with is biased by what they already know. They are either completely unaware of competing evidence or if they are, they revise their network of beliefs to deny the credibility of those who offer it rather than consider it serious evidence. Usually, this is because they believe it is more probable that those people are not credible than that they could be wrong. This problem has always existed, and has perhaps increased since the rise of the personalized web. People who are trapped in information cocoons of denial of anthropogenic climate change will require much more evidence and counterarguments before they can begin to revise an entire network of beliefs that support their current conclusions.

Second, the population is uneducated about climate change because they lack the incentive to learn about the issues. Although we would presumably benefit if everyone were to take the time to thoroughly understand the issue, the individual cost and benefit of doing so actually runs the other way. Because the benefits of better policies accrue to everybody, but the costs are borne by the individual, people have an incentive to free ride, to let everybody else worry about the issue because either way, their individual contribution means little, and everybody else can make the informed decision. But of course, with everybody reasoning in this way there is a much lower level of education on these issues than optimal (or even necessary to create the necessary change, especially if there are interest groups with opposing goals).

The solution is to institute some system that can crack into these information cocoons and at the same time provide wide-ranging personal incentives for participating. For the former, we propose to develop a layman's guide to climate change science and economic and environmental policy. Many of these are already in existence, although we have some different ideas about how to make it more transparent to criticism and more thorough in its discussion of the epistemic uncertainty surrounding the whole issue. (There is definitely a lot we can learn from Less Wrong on this point.) Also, I think we have a unique idea about developing a system of personal incentives. I will discuss this latter issue first.

Replies from: allenwang
comment by allenwang · 2010-09-12T04:11:38.998Z · LW(p) · GW(p)

(Sorry if this comment is too long; continued from above.)

Creating Incentives

Of course, a sense of public pride exists in many people, and this has led large numbers of people to learn about the issues without external inducements. But the population of educated voters could be vastly increased if there were these personal benefits, especially for groups where environmentalism has not become a positive norm.

While we have thought about other approaches to creating these wide-ranging personal incentives, specifically material prizes and the intangible benefits of social networking and personal pride (such as are behind Wikipedia's or Facebook's success), it appears that these are difficult to apply to the issue of climate change. Material prizes would be costly to fund, especially to make them worth the several hours necessary to learn about the issues. For one thing, the issues are difficult enough, and the topic possibly scary enough, that it is not necessarily fun to learn about them and discuss them with your friends. For another, it takes time and a little bit of dedicated thinking to achieve an adequate understanding of the problem, but part of the incentive to do so on Wikipedia (to show off your genuine expertise on the topic, even if anonymous) is exactly what is not supposed to happen when there is an educated populace on the topic: you will not be a unique expert, just another person who understands the issue like everyone else. The sense of urgency and personal importance needed to spur people to learn just is not there with these modes of incentivization.

But there is one already extremely effective way that companies, schools, and other organizations incentivize behavior that has little to do with immediate personal benefits. These institutions use their ability to advance or deter people’s future careers to motivate performance in certain areas. The gatekeepers to these future prospects can use their position to bring about all kinds of behavior that would otherwise seem to be a huge burden on those individuals. Ordinary hiring and admissions processes, for example, can impose large writing and learning requirements on their applicants, but because the personal benefits of getting into these organizations are enormous, people are more than willing to fulfill these requirements. Oftentimes, these requirements do not even necessarily have much to do with the stated purpose of the organization, but are used as filtering mechanisms to determine which are the best candidates. Admissions essays are not what universities set out to produce, but rather a bar they set to see which candidates can do well. These bars (known as “sorting mechanisms” in economics) sometimes have additional beneficial effects such as increased writing practice for future students, but not necessarily. For example, polished CV writing is a skill that is only good for overcoming these bars, without additional personal or social benefits. But because these additional effects are really only secondary attributes of the main function of the hurdle, the bar can be modified in ways that create socially beneficial purposes without affecting their main function.

So our specific proposal is to leverage employers’ and schools’ gatekeeper status to impose a hiring hurdle, similar to a polished CV or a high standardized test score, of learning about contemporary climate change science and policy. This hiring hurdle would act much like other hiring hurdles imposed by organizations, but would create a huge personal incentive for individuals to learn about climate change in place of or in addition to the huge personal incentive to write good covering letters or scoring well on the SATs.

The hiring hurdle would be implemented by a third party, a website that acts both as the layman's guide to climate change science and policy (possibly with something that already exists, but hopefully with something more modular) and as a secure testing center for this knowledge. The website would provide an easy way for people to learn about the most up-to-date climate science and the different policy options available, something that could probably be read and understood with an afternoon's effort. Once the individual feels that he or she understands the material well enough, a secure test can be taken which measures the extent of that individual's climate knowledge. (This test could be retaken if the individual is dissatisfied with the result, or it could be imposed again once new and highly relevant information is discovered.) The score that individuals receive could be reported to the institutions they apply to. This score would be just one more tickbox for institutions to check before accepting their applicants, and they could determine the score they require.

The major benefit of this approach is that it creates enormous personal incentives for a very small cost. Companies and other institutions already have hiring hurdles in place, and they do not have to burden their HR staff with hundreds of climate change essays but just a simple score that they could look up on the website. The website itself can be hosted for a relatively small cost, and institutions can sign up to the program as more executives and leaders are convinced that this is a good idea.

Presumably, it is much easier to convince a few people who are in charge of such organizations that climate change education is important than to convince individual members of the public. Potentially, this project could affect millions, especially if large corporations such as McDonalds or Walmart or universities with many applicants sign on to the program. Furthermore, approaching the problem of global climate change through nongovernmental institutions seems like a good approach because it avoids the stasis in many public institutions, and it can be done by convincing much fewer stakeholders. Also, many of these institutions have an increasingly global scope.

Developing a platform to combat “information cocoons” yet retain legitimacy

The major problem is that this type of incentivizing might be seen as a way of buying off or patronizing voters, but this appears to be necessary to break the “information cocoons” that many people unknowingly fall into.

Hopefully a charge of having a political agenda can be answered by allowing a certain amount of feedback and continuing development of the guide as more arguments are voiced. Part of the website will be organized so that dissent can be voiced publicly and openly, but only in an organized and reasoned way (something like lesswrong but with stricter limits on posting). The guide would have to maintain public legitimacy by being open to criticism and new evidence as we discover more and also display the evidence that is supporting the current arguments. We would like to include a rating system, something like Rotten Tomatoes, where we have climate experts and the general public vote on various arguments and scenarios that are developed (but this would probably be only for those who develop a specific interest, not part of the testable guide. Of course, the testable guide would follow major developments on this more detailed information). We have thought of using an argument map to better organize such information.

But still, it could not be so flexible that those previous information cocoons redevelop on the website, and a similar polarization occurs on the website as before. Some degree of control is necessary to drive some points home. Thus, a delicate balance might have to be achieved.

That pretty much sums up the ideas so far. At this point, the project is almost all theorizing, although we have found a couple of programmers who might help for a reduced fee (know of anyone who would be interested in doing this for free?) and we are looking into some funding sources. This would be a large-scale attempt at rational debate and discussion, spurred by a mechanism to encourage everybody to participate, so please, if you have any advice, it would be enormously appreciated.

Sincerely, Allen Wang

Replies from: CronoDAS
comment by CronoDAS · 2010-09-12T06:17:33.308Z · LW(p) · GW(p)

This seems to have the same problem as teaching evolution in high school biology classes: you can pass a test on something and not believe a word of it. Cracking an information cocoon can be damn hard; just consider how unusual religious conversions are, or how rarely people change their minds on such subjects as UFOs, conspiracy theories, cryonics, or any other subject that attracts cranks.

Also, why should employers care about a person's climate change test score?

Finally, why privilege knowledge about climate change, of all things, by using it for gatekeeping, instead of any of the many non-controversial subjects normally taught in high schools, for which SAT II subject tests already exist?

comment by cousin_it · 2010-09-09T22:25:53.667Z · LW(p) · GW(p)

The gap between inventing formal logic and understanding human intelligence is as large as the gap between inventing formal grammars and understanding human language.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-09-10T18:54:26.493Z · LW(p) · GW(p)

Human intelligence, certainly; but just intelligence, I'm not so sure.

comment by Document · 2010-09-06T20:23:32.314Z · LW(p) · GW(p)

Friday's Wondermark comic discusses a possible philosophical paradox that's similar to those mentioned at Trust in Bayes and Exterminating life is rational.

Replies from: Nisan
comment by Nisan · 2010-09-07T05:11:53.022Z · LW(p) · GW(p)

You beat me to it :)

comment by knb · 2010-09-06T18:16:03.668Z · LW(p) · GW(p)

Recently there was a discussion regarding Sex at Dawn. I recently skimmed this book at a friend's house, and realized that the central idea of the book is dependent on a group selection hypothesis. (The idea being that our noble-savage, bonobo-like hunter-gatherer ancestors evolved a preference for paternal uncertainty because this led to better in-group cooperation.) This was never stated in the sequence of posts on the book. Can someone who has read the book confirm or deny the accuracy of my impression that the book's thesis relies on a group selection hypothesis?

Replies from: timtyler, WrongBot
comment by timtyler · 2010-09-08T07:14:09.259Z · LW(p) · GW(p)

A blog says:

"The model proposed by Christopher Ryan in “Sex at Dawn” (the women in a group have sex with the men in their group, and uncertain paternity leads all of the men to feel responsible for providing for the children) ..."

That doesn't require group selection.

comment by WrongBot · 2010-09-08T14:42:20.746Z · LW(p) · GW(p)

No, but the book relies on kin selection to some extent: it's beneficial to share resources with your tribe, but not other tribes.

comment by teageegeepea · 2010-09-06T01:24:54.971Z · LW(p) · GW(p)

Since Eliezer has talked about the truth of reductionism and the emptiness of "emergence", I thought of him when listening to Robert Laughlin on EconTalk (near the end of the podcast). Laughlin was arguing that reductionism is experimentally wrong and that everything, including the universal laws of physics, is really emergent. I'm not sure if that means "elephants all the way down" or what.

Replies from: Will_Sawin
comment by Will_Sawin · 2010-09-06T02:53:29.496Z · LW(p) · GW(p)

It's very silly. What he's saying is that there are properties at high levels of organization that don't exist at low levels of organization.

As Eliezer says, emergence is trivial. Everything that isn't quarks is emergent.

His "universality" argument seems to be that different parts can make the same whole. Well of course they can.

He certainly doesn't make any coherent arguments. Maybe he does in his book?

Replies from: Perplexed
comment by Perplexed · 2010-09-06T03:10:35.973Z · LW(p) · GW(p)

Yet another example of a Nobel prize winner in disagreement with Eliezer within his own discipline.

What is wrong with these guys?

Why, if they would just read the sequences, they would learn the correct way for words like "reduction" and "emergence" to be used in physics.

Replies from: khafra
comment by khafra · 2010-09-07T02:44:30.987Z · LW(p) · GW(p)

To be fair, "reductionism is experimentally wrong" is a statement that would raise some argument among Nobel laureates as well.

Replies from: Perplexed
comment by Perplexed · 2010-09-07T03:16:02.292Z · LW(p) · GW(p)

Argument from some Nobelists. But agreement from others. Google on the string "Philip Anderson reductionism emergence" to get some understanding of what the argument is about.

My feeling is that everyone in this debate is correct, including Eliezer, except for one thing - you have to realize that different people use the words "reductionism" and "emergence" differently. And the way Eliezer defines them is definitely different from the way the words are used (by Anderson, for example) in condensed matter physics.

Replies from: khafra
comment by khafra · 2010-09-07T05:28:59.563Z · LW(p) · GW(p)

If the first hit is a fair overview, I can see why you're saying it's a confusion in terms; the only outright error I saw was confusing "derivable" with "trivially derivable."

If you're saying that nobody important really tries to explain things by just saying "emergence" and handwaving the details, like EY has suggested, you may be right. I can't recall seeing it.

Of course, I don't think Eliezer (or any other reductionist) has said that throwing away information so you can use simpler math isn't useful when you're using limited computational power to understand systems which would be intractable from a quantum perspective, like everything we deal with in real life.

comment by taw · 2010-09-04T06:04:29.506Z · LW(p) · GW(p)

A question about modal logics.

Temporal logics are quite successful in terms of expressiveness and applications in computer science, so I thought I'd take a look at some other modal logics - in particular deontic logics, which deal with obligations, rules, and deontological ethics.

It seems like an obvious approach, as we want to have "is"-statements, "ought"-statements, and statements relating what "is" with what "ought" to be.

What I found was rather disastrous, far worse than with the neat and unambiguous temporal logics: low expressiveness, ambiguous interpretations, far too many paradoxes that seem to be more about failing to specify the underlying logic correctly than about actual problems, and no convergence on a single deontic logic that works.

After reading all this, I made a few quick attempts at defining logic of obligations, just to be sure it's not some sort of collective insanity, but they all ran into very similar problems extremely quickly.

Now, I'm in no way deontologically inclined, but if I were, it would really bother me. If it's really impossible to formally express obligations, this kind of ethics is built on an extremely flimsy basis. Consequentialism has plenty of problems in practice, but at least in hypothetical scenarios it's very easy to model correctly. Deontic logic seems to lack even that.

Is there any kind of deontic logic that works well that I missed? I'm not talking about solving FAI, constructing universal rules of morality or anything like it - just about a language that expresses exactly the kind of obligations we want, and which works well in simple hypothetical worlds.
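For readers unfamiliar with the territory, here is a minimal sketch (worlds and propositions invented for illustration) of the Kripke-style semantics behind standard deontic logic, and of Ross's paradox, one of the classic problems of the kind alluded to above.

```python
# Standard deontic logic (SDL) over a toy Kripke-style model. O(p), "it is
# obligatory that p", is read as: p holds at every deontically ideal world.

# Each world is a truth assignment plus a flag marking whether it is
# deontically ideal (everything that ought to be the case is the case there).
worlds = [
    {"mail": True,  "burn": False, "ideal": True},   # the letter gets mailed
    {"mail": False, "burn": True,  "ideal": False},  # a violation world
    {"mail": False, "burn": False, "ideal": False},  # another violation world
]

def obligatory(prop):
    """O(prop): prop is true at all deontically ideal worlds."""
    return all(prop(w) for w in worlds if w["ideal"])

mail = lambda w: w["mail"]
mail_or_burn = lambda w: w["mail"] or w["burn"]

print(obligatory(mail))          # True: you ought to mail the letter.
print(obligatory(mail_or_burn))  # Also True -- Ross's paradox: it comes out
                                 # "obligatory" to mail the letter OR burn it,
                                 # which obligation-talk intuitively rejects.
```

Many of the classic paradoxes have this flavour: the possible-worlds machinery makes obligation closed under logical consequence, which licenses inferences that our informal notion of obligation refuses.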

comment by Oscar_Cunningham · 2010-09-03T09:14:09.454Z · LW(p) · GW(p)

Someone made a page that automatically collects high karma comments. Could someone point me at it please?

Replies from: Kazuo_Thow, wedrifid
comment by Kazuo_Thow · 2010-09-04T07:01:09.760Z · LW(p) · GW(p)

Here's the Open Thread comment where Daniel Varga made the page and its source code public. I don't know how often it's updated.

Replies from: RobinZ, Oscar_Cunningham
comment by RobinZ · 2010-09-04T17:50:52.546Z · LW(p) · GW(p)

Note that the page in question collects only comments on Rationality Quotes pages.

comment by Oscar_Cunningham · 2010-09-04T07:10:53.184Z · LW(p) · GW(p)

Yay, thank you! Also, that page is large, large enough to make my brand new computer lag horrendously.

comment by wedrifid · 2010-09-03T09:29:02.769Z · LW(p) · GW(p)

They did? I've been wishing for something like that myself. I'd also like another page that collects just my high karma comments. Extremely useful feedback!

comment by JanetK · 2010-09-02T07:54:27.896Z · LW(p) · GW(p)

The penny has just dropped! When I first encountered LessWrong, the word 'Rationality' did not stand out. I interpreted it to mean its everyday meaning of careful, intelligent, sane, informed thought (in keeping with 'avoiding bias'). But I have become more and more uncomfortable with the word because I see it having a more restricted meaning in the LW context. At first, I thought this was an economic definition of the 'rational' behaviour of the selfish and unemotional ideal economic agent. But now I sense an even more disturbing definition: rational as opposed to empirical. As I use scientific evidence as the most important arbiter of what I believe, I would find the anti-empirical idea of 'rational' a big mistake.

Replies from: thomblake, kodos96, wedrifid, Emile, timtyler, Snowyowl, Sniffnoy, FAWS
comment by thomblake · 2010-09-02T17:19:34.787Z · LW(p) · GW(p)

The philosophical tradition of 'Rationalism' (opposed to 'Empiricism') is not relevant to the meaning here. Though there is some relationship between it and "Traditional Rationality" which is referenced sometimes.

comment by kodos96 · 2010-09-02T08:33:28.952Z · LW(p) · GW(p)

But now I sense an even more disturbing definition: rational as opposed to empirical.

Ummmmmmmm.... no.

The word "rational" is used here on LW in essentially its literal definition (which is not quite the same as its colloquial everyday meaning).... if anything it is perhaps used by some to mean "bayesian"... but bayesianism is all about updating on (empirical) evidence.

Replies from: JanetK
comment by JanetK · 2010-09-02T08:56:11.735Z · LW(p) · GW(p)

According to my dictionary: rationalism 1. Philos. the theory that reason is the foundation of certainty in knowledge (opp. empiricism, sensationalism)

This is there as well as: rational 1. of or based on reasoning or reason

So although there are other (more everyday) definitions also listed at later numbers, the opposition to empirical is one of the literal definitions. The Bayesian updating thing is why it took me a long time to notice the other anti-scientific tendency.

Replies from: timtyler
comment by timtyler · 2010-09-03T07:55:54.988Z · LW(p) · GW(p)

I wouldn't say "anti-scientific" - but it certainly would be good if scientists actually studied rationality more - and so were more rational.

With lab equipment like the human brain, you have really got to look into its strengths and weaknesses - and read the manual about how to use it properly.

Personally, when I see material like Science or Bayes - my brain screams: false dichotomy: Science and Bayes! Don't turn the scientists into a rival camp: teach them.

Replies from: JanetK
comment by JanetK · 2010-09-03T13:35:19.919Z · LW(p) · GW(p)

I think you may have misunderstood what I was trying to say. Because the group used Bayesian methods, I had assumed that they would not be anti-scientific. I was surprised when it seemed that they were willing to ignore evidence. I have been reassured that many in the group are rational in the everyday sense and not opposed to empiricism. Indeed it is Science AND Bayes.

comment by wedrifid · 2010-09-02T08:17:38.180Z · LW(p) · GW(p)

But now I sense an even more disturbing definition: rational as opposed to empirical. As I use scientific evidence as the most important arbiter of what I believe, I would find the anti-empirical idea of 'rational' a big mistake.

Indeed. It is heretical in the extreme! Burn them!

Replies from: JanetK
comment by JanetK · 2010-09-02T09:07:18.607Z · LW(p) · GW(p)

Do you have a reason for the sarcasm? I notice a tendency that seems disturbing to me, and I am pointing it out to see if others have noticed it and have opinions, but I am not attacking. I am deciding whether I fit this group or not - hopefully I can feel comfortable in LW.

Replies from: wedrifid
comment by wedrifid · 2010-09-02T10:08:54.487Z · LW(p) · GW(p)

Do you have a reason for the sarcasm?

It felt like irony from my end - a satire of human behaviour.

As a general tendency of humanity, we seem to be more inclined to abhor beliefs that are similar to what we consider the norm but just slightly different. It is the rebels within the tribe that are the biggest threat, not the tribe that lives 20 km away.

I hope someone can give you an adequate answer to your question. The very short one is that empirical evidence is usually going to be the most heavily weighted "Bayesian" (rational) evidence. However, everything else is still evidence, even though it is far weaker.

comment by Emile · 2010-09-02T08:15:56.593Z · LW(p) · GW(p)

But now I sense an even more disturbing definition: rational as opposed to empirical.

I don't think that's how most people here understand "rationalism".

Replies from: JanetK
comment by JanetK · 2010-09-02T09:09:40.198Z · LW(p) · GW(p)

I don't think that's how most people here understand "rationalism".

Good

comment by timtyler · 2010-09-02T08:39:23.167Z · LW(p) · GW(p)

There is at least one post about that - though I don't entirely approve of it.

Occam's razor is not exactly empirical. Evidence is involved - but it does let you choose between two theories both of which are compatible with the evidence without doing further observations. It is not empirical - in that sense.
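A toy worked example of that point, with invented numbers: two theories fit the observed data equally well, and a complexity-penalizing prior alone settles the choice, with no further observations needed.

```python
# Toy numbers only: two theories that fit the data equally well,
# separated purely by a complexity-penalizing (Occam) prior.

likelihood_simple = 1.0    # both theories predict the observed data perfectly
likelihood_complex = 1.0

prior_simple = 2.0 ** -10   # e.g. a 10-bit description
prior_complex = 2.0 ** -25  # e.g. a 25-bit description

posterior_odds = (prior_simple * likelihood_simple) / (prior_complex * likelihood_complex)
print(posterior_odds)  # 32768.0, i.e. 2**15 : 1 in favour of the simpler
                       # theory, without any further observations
```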

Replies from: Kenny
comment by Kenny · 2010-09-03T22:56:39.810Z · LW(p) · GW(p)

Occam's razor isn't empirical, but it is the economically rational decision when you need to use one of several alternative theories (that are exactly "compatible with the evidence"). Besides, "further observations" are inevitable if any of your theories are actually going to be used (i.e. to make predictions [that are going to be subsequently 'tested']).

comment by Snowyowl · 2010-09-03T09:12:22.567Z · LW(p) · GW(p)

Now that I come to think of it, I've never seen the LW definition of "rationality" used anywhere outside LW and OB, and I've never even seen it explicitly defined. (EDIT: http://lesswrong.com/lw/31/what_do_we_mean_by_rationality/) But if you asked me, I would say it means taking the selfish and unemotional economic agent to its logical extreme: rationally examining one's own thought processes in order to optimise them, rationally examining scientific evidence without interference from one's biases, and rationally accepting the possibility that one has made a mistake.

comment by Sniffnoy · 2010-09-02T17:13:34.077Z · LW(p) · GW(p)

Here is our definition of rationality. See also the "unnamed virtue".

Replies from: thomblake
comment by thomblake · 2010-09-02T17:16:33.915Z · LW(p) · GW(p)

No, here is our definition of rationality.

For the canonical article, see What Do We Mean By "Rationality"?.

Replies from: JanetK, Sniffnoy
comment by JanetK · 2010-09-02T17:40:30.323Z · LW(p) · GW(p)

Thank you. That seems clear. I will assume that my antennae were giving me the wrong impression. I can relax.

Replies from: None
comment by [deleted] · 2010-09-05T18:11:02.917Z · LW(p) · GW(p)

Maybe you shouldn't relax.

Regardless of official definitions, there is in practice a heavy emphasis on conceptual rigor over evidence.

There's still room for people who don't quite fit in.

comment by Sniffnoy · 2010-09-02T17:17:43.943Z · LW(p) · GW(p)

Ah, that does seem to be better, yes.

comment by FAWS · 2010-09-02T08:27:58.259Z · LW(p) · GW(p)

In a certain sense, rationality is using evidence efficiently. Perhaps overemphasis on that type of rationality tempts one to be sparing with evidence - after all, if you use less evidence to reach your conclusion, you used whatever evidence you did use more efficiently! But not using evidence doesn't mean there is more evidence left over afterwards; not using free or very cheap evidence is wasteful, so proper rationality, even in that sense, means using all easily available evidence when practical.

Replies from: Houshalter
comment by Houshalter · 2010-09-06T03:36:22.900Z · LW(p) · GW(p)

I'm not sure I follow: why leave certain observations out of your judgement in order to "use evidence efficiently"? Do you mean to use your resources efficiently, like time and brain power? In that case, you can just define it as using resources as efficiently as possible. You need evidence to gain knowledge, you need knowledge to base theories on, and you need theories to decide how to most effectively spend your resources, which can be spent on anything, including finding more evidence in the first place.

Replies from: FAWS
comment by FAWS · 2010-09-06T03:59:11.297Z · LW(p) · GW(p)

I'm not sure I follow: why leave certain observations out of your judgement in order to "use evidence efficiently"?

My point was that it doesn't make sense. Even when trying to use evidence efficiently you should use all evidence (barring the considerations from Frugality and working from finite data, which are only relevant due to certain biases)

comment by SilasBarta · 2010-09-01T22:44:31.734Z · LW(p) · GW(p)

Grab the popcorn! Landsburg and I go at it again! (See also Previous Landsburg LW flamewar.)

This time, you get to see Landsburg:

  • attempt to prove the existence of the natural numbers while explicitly dismissing the relevance of what sense he's using "existence" to mean!
  • use formal definitions to make claims about the informal meanings of the terms!
  • claim that Peano arithmetic exists "because you can see the marks on paper" (guess it's not a platonic object anymore...)!

(Sorry, XiXiDu, I'll reply to you on his blog if my posting privileges stay up long enough ... for now, I would agree with what you said, but am not making that point in the discussion.)

Replies from: DanielVarga, Snowyowl
comment by DanielVarga · 2010-09-02T00:30:31.908Z · LW(p) · GW(p)

Wow, a debate where the most reasonable-sounding person is a sysop of Conservapedia. :)

Replies from: SilasBarta
comment by SilasBarta · 2010-09-02T04:55:52.851Z · LW(p) · GW(p)

Who?

Replies from: DanielVarga
comment by DanielVarga · 2010-09-02T10:18:49.000Z · LW(p) · GW(p)

Roger Schlafly. His blog is Singular Values. His whole family is full of very interesting people.

comment by Snowyowl · 2010-09-02T10:30:27.612Z · LW(p) · GW(p)

I always find these entertaining, though I begin to despair of human nature after a while. Thanks for letting me watch.

comment by David_Gerard · 2010-12-06T12:51:24.471Z · LW(p) · GW(p)

Is the Open Thread now deprecated in favour of the Discussion section? If so, I suggest an Open Thread over there for questions not worked out enough for a Discussion post. (I have some.)

comment by Clippy · 2010-09-13T01:42:25.705Z · LW(p) · GW(p)

>equals(correct_reasoning , Bayesian_inference)

Replies from: Clippy
comment by Clippy · 2010-09-13T20:27:03.646Z · LW(p) · GW(p)

This server is really slow.

comment by gwern · 2010-09-11T00:11:27.747Z · LW(p) · GW(p)

NYT magazine covers engineers & terrorism: http://www.nytimes.com/2010/09/12/magazine/12FOB-IdeaLab-t.html

comment by datadataeverywhere · 2010-09-09T15:14:37.115Z · LW(p) · GW(p)

How diverse is Less Wrong? I am under the impression that we disproportionately consist of 20-35-year-old white males, more disproportionately on some axes than on others.

We obviously over-represent atheists, but there are very good reasons for that. Likewise, we are probably over-educated compared to the populations we are drawn from. I venture that we have a fairly weak age bias, and that can be accounted for by generational dispositions toward internet use.

However, if we are predominantly white males, why are we? Should that concern us? There's nothing about being white, or female, or Hispanic, or deaf, or gay that prevents one from being a rationalist. I'm willing to bet that after correcting for socioeconomic correlations with ethnicity, we still don't make par. Perhaps naïvely, I feel like we must explain ourselves if this is the case.

Replies from: gwern, timtyler, cousin_it, NancyLebovitz, Perplexed, None, Emile, CaveJohnson
comment by gwern · 2010-09-09T16:05:23.651Z · LW(p) · GW(p)

This sounds like the same question as why there are so few top-notch women in STEM fields, why there are so few women listed in Human Accomplishment's indices*, why so few non-whites or non-Asians score 5 on AP Physics, why...

In other words, here be dragons.

* just Lady Murasaki, if you were curious. It would be very amusing to read a review of The Tale of Genji by Eliezer or a LWer. My own reaction by the end was horror.

Replies from: datadataeverywhere
comment by datadataeverywhere · 2010-09-09T16:26:32.953Z · LW(p) · GW(p)

That's absolutely true. I've worked for two US National Labs, and both were monocultures. At my first job, the only woman in my group (20 or so) was the administrative assistant. At my second, the numbers were better, but at both, there were literally no non-whites in my immediate area. The inability to hire non-citizens contributes to the problem---I worked for Microsoft as well, and all the non-whites were foreign citizens---but it's not as if there aren't any women in the US!

It is a nearly intractable problem, and I think I understand it fairly well, but I would very much like to hear the opinion of LWers. My employers have always been very eager to hire women and minorities, but the numbers coming out of computer science programs are abysmal. At Less Wrong, a B.S. or M.S. in a specific field is not a barrier to entry, so our numbers should be slightly better. On the other hand, I have no idea how to go about improving them.

The Tale of Genji has gone on my list of books to read. Thanks!

Replies from: gwern
comment by gwern · 2010-09-09T16:58:40.214Z · LW(p) · GW(p)

At Less Wrong, a B.S. or M.S. in a specific field is not a barrier to entry, so our numbers should be slightly better.

Yes, but we are even more extreme in some respects; many CS/philosophy/neurology/etc. majors reject the Strong AI Thesis (I've asked), while it is practically one of our dogmas.

The Tale of Genji has gone on my list of books to read. Thanks!

I realize that I was a bit of a tease there. It's somewhat off topic, but I'll include (some of) the hasty comments I wrote down immediately upon finishing:

The prevalence of poems & puns is quite remarkable. It is also remarkable how tired they all feel; in Genji, poetry has lost its magic and has simply become another stereotyped form of communication, as codified as a letter to the editor or small talk. I feel fortunate that my introductions to Japanese poetry have usually been small anthologies of the greatest poets; had I first encountered court poetry through Genji, I would have been disgusted by the mawkish sentimentality & repetition.

The gender dynamics are remarkable. Toward the end, one of the two then main characters becomes frustrated and casually has sex with a serving lady; it's mentioned that he liked sex with her better than with any of the other servants. Much earlier in Genji (it's a good thousand pages, remember), Genji simply rapes a woman, and the central female protagonist, Murasaki, is kidnapped as a girl and he marries her while still what we would consider a child. (I forget whether Genji sexually molests her before the pro forma marriage.) This may be a matter of non-relativistic moral appraisal, but I get the impression that in matters of sexual fidelity, rape, and children, Heian-era morals were not much different from my own, which makes the general immunity all the more remarkable. (This is the 'shining' Genji?) The double-standards are countless.

The power dynamics are equally remarkable. Essentially every speaking character is nobility, low or high, or Buddhist clergy (and very likely nobility anyway). The characters spend next to no time on 'work' like running the country, despite many main characters ranking high in the hierarchy and holding ministerial ranks; the Emperor in particular does nothing except party. All the households spend money like mad, and just expect their land-holdings to send in the cash. (It is a signal of their poverty that the Uji household ever even mentions how much less money is coming from their lands than used to.) The Buddhist clergy are remarkably greedy & worldly; after the death of the father of the Uji household, the abbot of the monastery he favored sends the grief-stricken sisters a note - which I found remarkably crass - reminding them that he wants the customary gifts of valuable textiles.

The medicinal practices are utterly horrifying. They seem to consist, one and all, of the following algorithm: 'while sick, pay priests to chant.' If chanting doesn't work, hire more priests. (One remarkable freethinker suggests that a sick woman eat more food.) Chanting is, at least, not outright harmful like bloodletting, but it's still sickening to read through dozens of people dying amidst chanting. In comparison, the bizarre superstitions that guide many characters' activities (trapping them in their houses on inauspicious days) are practically unobjectionable.

comment by timtyler · 2010-09-09T20:12:51.587Z · LW(p) · GW(p)

How diverse is Less Wrong?

You may want to check the survey results.

Replies from: Relsqui, datadataeverywhere
comment by Relsqui · 2010-09-16T21:38:28.523Z · LW(p) · GW(p)

Thank you; that was one of the things I'd come to this thread to ask about.

comment by datadataeverywhere · 2010-09-09T21:19:55.389Z · LW(p) · GW(p)

Thank you very much. I looked for but failed to find this when I went to write my post. I had intended to start with actual numbers, assuming that someone had previously asked the question. The rest is interesting as well.

comment by cousin_it · 2010-09-09T16:20:20.165Z · LW(p) · GW(p)

However, if we are predominately white males, why are we?

Ignoring the obviously political issue of "concern", it's fun to consider this question on a purely intellectual level. If you're a white male, why are you? Is the anthropic answer ("just because") sufficient? At what size of group does it cease to be sufficient? I don't know the actual answer. Some people think that asking "why am I me" is inherently meaningless, but for me personally, this doesn't dissolve the mystery.

Replies from: datadataeverywhere
comment by datadataeverywhere · 2010-09-09T16:30:23.856Z · LW(p) · GW(p)

The flippant answer is that a group size of 1 lacks statistical significance; at some group size, that ceases to be the case.

I asked not from a political perspective. In arguments about diversity, political correctness often dominates. I am actually interested in, among other things, whether a lack of diversity is a functional impairment for a group. I feel strongly that it is, but I can't back up that claim with evidence strong enough to match my belief. For a group such as Less Wrong, I have to ask what we miss due to a lack of diversity.

Replies from: cousin_it
comment by cousin_it · 2010-09-09T16:45:48.268Z · LW(p) · GW(p)

The flippant answer is that a group size of 1 lacks statistical significance; at some group size, that ceases to be the case.

The flippant answer to your answer is that you didn't pick LW randomly out of the set of all groups. The fact that you, a white male, consistently choose to join groups composed mostly of white males - and then inquire about diversity - could have any number of anthropic explanations from your perspective :-) In the end it seems to loop back into "why are you you" again.

ETA: apparently datadataeverywhere is female.

Replies from: datadataeverywhere
comment by datadataeverywhere · 2010-09-09T16:54:46.454Z · LW(p) · GW(p)

No, I think that's a much less flippant answer :-)

Replies from: cousin_it
comment by cousin_it · 2010-09-14T19:13:11.802Z · LW(p) · GW(p)

It's come to my attention that you're female. Apologies for assuming otherwise, and shame on you for not correcting me.

comment by NancyLebovitz · 2010-09-09T16:41:08.232Z · LW(p) · GW(p)

I've been thinking that there are parallels between building FAI and the Talmud: it's an effort to manage an extremely dangerous, uncommunicative entity through deduction. (An FAI may be communicative to some extent. An FAI which hasn't been built yet doesn't communicate.)

Being an atheist doesn't eliminate cultural influence. Survey for atheists: which God do you especially not believe in?

I was talking about FAI with Gene Treadwell, who's black. He was quite concerned that the FAI would be sentient, but owned and controlled.

This doesn't mean that either Eliezer or Gene are wrong (or right for that matter), but it suggests to me that culture gives defaults which might be strong attractors. [1]

He recommended recruiting Japanese members, since they're more apt to like and trust robots.

I don't know about explaining ourselves, but we may need more angles on the problem just to be able to do the work.

[1] See also Timothy Leary's S.M.I.2L.E.: Space Migration, Increased Intelligence, Life Extension. Robert Anton Wilson said that was a match for Catholic hopes of going to heaven, being transfigured, and living forever.

Replies from: None
comment by [deleted] · 2010-09-16T18:50:35.559Z · LW(p) · GW(p)

He recommended recruiting Japanese members, since they're more apt to like and trust robots.

He has a very good point. I was surprised more Japanese or Koreans hadn't made their way to Less Wrong. This was my motivation for first proposing that we recruit translators for Japanese and Chinese and begin working towards the goal of making at least the Sequences available in many languages.

Not being a native speaker of English proved a significant barrier for me in some respects. The first noticeable one was spelling; however, I solved that problem by outsourcing this part of the system known as Konkvistador to the browser. ;) Other, more insidious forms of miscommunication and cultural difficulty persist.

Replies from: Wei_Dai, Perplexed
comment by Wei Dai (Wei_Dai) · 2010-09-18T19:48:05.968Z · LW(p) · GW(p)

I'm not sure that it's a language thing. I think many (most?) college-educated Japanese, Koreans, and Chinese can read and write in English. We also seem to have more Russian LWers than Japanese, Koreans, and Chinese combined.

According to a page gwern linked to in another branch of the thread, among those who got 5 on AP Physics C in 2008, 62.0% were White and 28.3% were Asian. But according to the LW survey, only 3.8% of respondents were Asian.

Maybe there is something about Asian cultures that makes them less overtly interested in rationality, but I don't have any good ideas what it might be.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-09-19T08:06:56.081Z · LW(p) · GW(p)

I'm not sure that it's a language thing. I think many (most?) college-educated Japanese, Koreans, and Chinese can read and write in English. We also seem to have more Russian LWers than Japanese, Koreans, and Chinese combined.

All LW users display near-native control of English, which won't be as universal, and which typically requires years of consuming English content. The English-speaking world is the default source of non-Russian content for Russians, but that might not be the case for native Asians (what's your impression?).

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-09-20T18:33:38.759Z · LW(p) · GW(p)

My impression is that for most native Asians, the English-speaking world is also their default source of non-native-language content. I have some relatives in China, and to the extent they do consume non-Chinese content, they consume English content. None of them consume enough of it to obtain near-native control of English though.

I'm curious, what kind of English content did you consume before you came across OB/LW? How typical do you think that level of consumption is in Russia?

comment by Perplexed · 2010-09-16T18:55:55.932Z · LW(p) · GW(p)

Unfortunately, browser spell checkers usually can't help you to spell your own name correctly. ;) That is one advantage to my choice of nym.

Replies from: wedrifid
comment by wedrifid · 2010-09-16T19:10:38.112Z · LW(p) · GW(p)

Unfortunately, browser spell checkers usually can't help you to spell your own name correctly.

Right click, add to dictionary. If that doesn't work then get a better browser.

Replies from: None
comment by [deleted] · 2010-09-16T19:46:16.827Z · LW(p) · GW(p)

Ehm, you do realize he was making a humorous remark about "Konkvistador" being my user name, right?

Edit:

Actually it was more about Konkivstador not being your name.

Well, it's all clearly Alicorn's fault. ;)

Replies from: Perplexed, Perplexed
comment by Perplexed · 2010-09-16T21:39:41.875Z · LW(p) · GW(p)

Actually it was more about Konkivstador not being your name.

comment by Perplexed · 2010-09-16T21:36:27.631Z · LW(p) · GW(p)

I do now. Sorry about that.

comment by Perplexed · 2010-09-09T16:35:54.642Z · LW(p) · GW(p)

I generally agree with your assessment. But I think there may be more East and South Asians than you think, more 36-80s and more 15-19s too. I have no reason to think we are underrepresented in gays or in deaf people.

My general impression is that women are not made welcome here - the level of overt sexism is incredibly high for a community that tends to frown on chest-beating. But perhaps the women should speak for themselves on that subject. Or not. Discussions on this subject tend to be uncomfortable; sometimes it seems that the only good they do is to flush some of the more egregious sexists out of the closet.

Replies from: timtyler
comment by timtyler · 2010-09-09T20:09:57.643Z · LW(p) · GW(p)

But perhaps the women should speak for themselves on that subject.

We have already had quite a lot of that.

Replies from: Perplexed
comment by Perplexed · 2010-09-09T20:44:56.305Z · LW(p) · GW(p)

OMG! A whole top-level-posting. And not much more than a year ago. I didn't know. Well, that shows that you guys (and gals) have said all that could possibly need to be said regarding that subject. ;)

But thx for the link.

Replies from: timtyler
comment by timtyler · 2010-09-09T20:48:13.940Z · LW(p) · GW(p)

It does have about 100 pages of comments. Consider also the "links to followup posts" in line 4 of that article. It all seemed to go on forever - but maybe that was just me.

Replies from: Perplexed
comment by Perplexed · 2010-09-09T20:54:39.617Z · LW(p) · GW(p)

Ok. Well, it is on my reading list now. Again, thx.

comment by [deleted] · 2010-09-16T18:40:35.614Z · LW(p) · GW(p)

I don't know why you presume that, because we are mostly 25-35-something white males, a reasonable proportion of us are not deaf, gay, or disabled (one of the top-level posts is by someone who will soon have to deal with perhaps being limited to communicating with the world via computer).

I smell a whiff of that weird American memplex for minority and diversity that my third-world mind isn't quite used to, but which I seem to encounter more and more often: you know, the one that, for example, uses the word "minority" to describe women.

Also, I decline the invitation to defend this community for its lack of diversity; I don't see it, a priori, as a thing in need of a large part of our attention. Rationality is universal, not in the sense of being equally valued in different cultures, but certainly in the sense of being universally effective (rationalists should win). One should certainly strive to keep a site dedicated to refining the art free of unnecessary additional barriers to other people. I think we should eventually translate many articles into Hindi, Japanese, Chinese, Arabic, German, Spanish, Russian and French. However, it is ridiculous to imagine that our demographics will somehow come to resemble a socioeconomically adjusted mix of the unspecified ethnicities you seem to be hunting for once we eliminate all such barriers.

I assure you that White Westerners have their very, very insane spots (we deal with them constantly), but God, for starters, isn't among them. Look at the GSS or various sources on Wikipedia and consider how much more of a thought-stopper and boo light atheism is for a large part of the world. What should the existing population of LessWrong do? Refrain from bashing theism? This might incur downvotes, but Westerners did come up with the scientific method and did contribute disproportionately to the fields of statistics and mathematics. Is it so unimaginable that the developed world (Iceland, Italy, Switzerland, Finland, America, Japan, Korea, Singapore, Taiwan, etc.) and its majority demographics still have a more rationality-friendly climate overall (due to the caprice of history) than basically any other part of the world? I freely admit that my own native culture (though I'm probably thoroughly Westernised by now, due to late-childhood influences of mass media and education) is probably less rational than the Anglo-Saxon one. However, simply going on a "crusade" to make other cultures more rational first, since they are "clearly" more in need, is perhaps a bad idea for humanitarian reasons, besides sending terribly bad signals and carrying a potential for self-delusion.

Sex ratio: There are some differences in aptitude, psychology and interests that ensure that compsci and mathematics, at least at the higher levels, will remain disproportionately male for the foreseeable future (until human modification takes off). This obviously affects our potential pool of recruits.

Age: People grow more conservative as they age. Less Wrong is, firstly, available only on a relatively new medium and, secondly, has a novel approach to popularizing rationality. Also, as people age the mind unfortunately does deteriorate. Very few people have an IQ high enough to master difficult fields before they are 15, and even their interests are somewhat affected by their peers.

I am sure I am rationalizing at least a few of these points. However, I need to ask you: is pursuing some popular concept of diversity truly cost-effective at this point? (Why did you, for example, not commend LW on its inclusion of non-neurotypicals, who are often excluded in some segments of society? Also, why do you only bemoan the under-representation of the groups everyone else does? Is this really a rational approach? Why don't we go study where in the memespace we might find truly valuable perspectives and focus on those? Maybe they overlap with the popular kinds, maybe they don't, but can we really trust popular culture, and especially the standard political discourse, on this?)

Replies from: datadataeverywhere
comment by datadataeverywhere · 2010-09-17T05:19:36.967Z · LW(p) · GW(p)

If you had read my comment, you would have seen that I explicitly assume that we are not under-represented among deaf or gay people.

I smell a whiff of that weird American memplex [...] you know the one that for example uses the word minority to describe women.

If less than 4% of us are women, I am quite willing to call that a minority. Would you prefer me to call them an excluded group?

but God for starters isn't among them

I specifically brought up atheists as a group that we should expect to over-represent. I'm also not hunting for equal-representation among countries, since education obviously ought to make a difference.

There are some differences in aptitude, psychology and interests that ensure that compsci and mathematics, at least at the higher levels will remain disproportionately male

That seems like it ought to get many more boos around here than mentioning the western world as the source of the scientific method. I ascribe differences in those to cultural influences; I don't claim that aptitude isn't a factor, but I don't believe it has been or can easily be measured given the large cultural factors we have.

age

This also doesn't bother me, for reasons similar to yours. As a friend of mine says, "we'll get gay rights by outliving the homophobes".

why do you only bemoan the under-representation of groups everyone else does?

Which groups should I pay more attention to? This is a serious question, since I haven't thought too much about it. I neglect non-neurotypicals because they are overrepresented in my field, so I tend to expect them amongst similar groups.

I wasn't actually intending to bemoan anything with my initial question, I was just curious. I was also shocked when I found out that this is dramatically less diverse than I thought, and less than any other large group I've felt a sort of membership in, but I don't feel like it needs to be demonized for that. I certainly wasn't trying to do that.

Replies from: None, None, None, wedrifid
comment by [deleted] · 2010-09-18T18:39:42.829Z · LW(p) · GW(p)

I ascribe differences in those to cultural influences; I don't claim that aptitude isn't a factor, but I don't believe it has been or can easily be measured given the large cultural factors we have.

But if we can't measure the cultural factors and account for them, why presume a blank-slate approach? Especially since there is sexual dimorphism in the nervous and endocrine systems themselves.

I think you got stuck on the aptitude part. To elaborate: considering that humans aren't a very sexually dimorphic species (there are near relatives that are even less so, for example gibbons), I'm pretty sure the mean g (if such a thing exists) of both men and women is probably about the same. There are, however, other aspects of succeeding at compsci or math than general intelligence.

Assuming that men and women carrying exactly the same memes will respond, on average, identically to identical situations is an extraordinary claim. I'm struggling to come up with an evolutionary model that would square this with what is known (for example the greater historical reproductive success of the average woman versus the average man, which we can read from the distribution of genes). If I were presented with empirical evidence, then this would just be too bad for the models; but in the absence of meaningful measurement (by your account), why not assign greater probability to the outcome predicted by the same models that work so well when tested against other empirical claims?

I would venture to state that this case is especially strong for preferences.

And if you are trying to fine-tune the situations and memes presented to each gender so as to balance this, how can one demonstrate that this isn't a step away from, rather than toward, improving Pareto efficiency? And if it's not, why proceed with it?

Also, to admit a personal bias, I just aesthetically prefer equal treatment whenever pragmatic concerns don't trump it.

Replies from: lmnop, lmnop
comment by lmnop · 2010-09-18T19:41:09.291Z · LW(p) · GW(p)

But if we can't measure the cultural factors and account for them

We can't directly measure them, but we can get an idea of how large they are and how they work.

For example, take the gender difference in empathic abilities. While women will score higher on empathy on self-report tests, the difference is much smaller on direct tests of ability, and often nonexistent on tests of ability where it isn't stated to the participant that it's empathy being tested. And then there's the motivation of seeming empathetic. One of the best empathy tests I've read about is Ickes', which works like this: two participants meet in a room and have a brief conversation, which is taped. Then they go into separate rooms and the tape is played back to them twice. The first time, they jot down the times at which they remember feeling various emotions. The second time, they jot down the times at which they think their partner was feeling an emotion, and what it was. Then the records are compared, and each participant receives an accuracy score. When the test is run like this, there is no difference in ability between men and women. However, a difference emerges when another factor is added: each participant is asked to write a "confidence level" for each prediction they make. In that procedure, women score better, presumably because their desire to appear empathetic (reflected in writing down higher confidence levels) causes them to put more effort into the task. But where do desires to appear a certain way come from? At least partly from cultural factors that dictate how each gender is supposed to appear. This is probably the same reason why women are overconfident, relative to men, when self-reporting their empathic abilities.
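To make the scoring step concrete, here is a toy Python sketch of how such an accuracy score might be computed from the two sets of annotations; the data format and the simple time-window matching rule are assumptions for illustration, not Ickes' actual coding scheme.

```python
# Toy sketch of empathic-accuracy scoring. The data format and the time-window
# matching rule are assumptions for illustration, not Ickes' actual scheme.
# Each participant produces:
#   self_reports: list of (time_in_seconds, emotion) they remember feeling
#   inferences:   list of (time_in_seconds, emotion) they think their PARTNER felt

def empathic_accuracy(partner_self_reports, my_inferences, window=10):
    """Fraction of my inferences that match an emotion the partner actually
    reported within `window` seconds (a simplifying assumption)."""
    if not my_inferences:
        return 0.0
    hits = 0
    for t_guess, emotion_guess in my_inferences:
        if any(abs(t_guess - t_actual) <= window and emotion_guess == emotion_actual
               for t_actual, emotion_actual in partner_self_reports):
            hits += 1
    return hits / len(my_inferences)

# Example with made-up annotations:
alice_self = [(30, "amused"), (95, "anxious"), (160, "bored")]
bob_about_alice = [(32, "amused"), (100, "angry"), (158, "bored")]
print(empathic_accuracy(alice_self, bob_about_alice))  # 2 of 3 match -> ~0.67
```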

The same applies to math. Among women and men with the same math ability as scored on tests, women will rate their own abilities much lower than the men do. Since people do what they think they'll be good at, this will likely affect how much time these people spend on math in future, and the future abilities they acquire.

And then there's priming. Asian American women do better on math tests when primed with their race (by filling in a "race" bubble at the top of the test) than when primed with their gender (by filling in a "sex" bubble). More subtly, priming affects people's implicit attitudes towards gender-stereotyped domains too. People are often primed about their gender in real life, each time affecting their actions a little, which over time will add up to significant differences in the paths they choose in life in addition to that which is caused by innate gender differences. Right now we don't have enough information to say how much is caused by each, but I don't see why we can't make more headway into this in the future.

comment by lmnop · 2010-09-18T19:39:12.839Z · LW(p) · GW(p)

But if we can't measure the cultural factors and account for them

We can't directly measure them, but we can get an idea of how large they are and how they work.

For example, take the gender difference in empathic abilities. While women will score higher on empathy on self-report tests, the difference is much smaller on direct tests of ability, and nonexistent on tests of ability where it isn't stated to the participant that it's empathy being tested. And then there's the motivation of seeming empathetic. One of the best empathy tests I've read about (http://onlinelibrary.wiley.com/doi/10.1111/j.1475-6811.2000.tb00006.x/abstract) is Ickes', which works like this: two participants meet in a room and have a brief conversation, which is taped. Then they go into separate rooms and the tape is played back to them twice. The first time, they jot down the times at which they remember feeling various emotions. The second time, they jot down the times at which they think their partner was feeling an emotion, and what it was. Then the records are compared, and each participant receives an accuracy score. When the test is run like this, there is no difference in ability between men and women. However, a difference emerges when another factor is added: each participant is asked to write a "confidence level" for each prediction they make. In that procedure, women score better, presumably because their desire to appear empathetic causes them to put more effort into the task. But where do desires to appear a certain way come from? At least partly from cultural factors that dictate how each gender is supposed to appear. This is probably the same reason why women are overconfident about their empathic abilities relative to men.

The same applies to math. Among women and men with the same math ability as scored on tests, women will rate their own abilities much lower than the men do. Since people do what they think they'll be good at, this will likely affect how much time these people spend on math in future, and the future abilities they acquire.

And then there's priming. Asian American women do better on math tests when primed with their race (by filling in a "race" bubble at the top of the test) than when primed with their gender (by filling in a "sex" bubble). More subtly, priming affects people's implicit attitudes (http://ase.tufts.edu/psychology/ambady/pubs/2006Steele.pdf) towards gender-stereotyped domains too. People are often primed about their gender in real life, each time affecting their actions a little, which over time will add up to significant differences in the paths they choose in life, in addition to that which is caused by innate gender differences.

comment by [deleted] · 2010-09-18T18:40:01.433Z · LW(p) · GW(p)

I neglect non-neurotypicals because they are overrepresented in my field, so I tend to expect them amongst similar groups.

How do you know that non-neurotypicals aren't over- or under-represented on LessWrong compared to the groups you claim are over-represented here relative to your field, in the same way that you know the groups you bemoan as lacking are under-represented relative to your field?

Is it just because being neurotypical is harder to measure and define? I concede that measuring who is a woman or a man, or who is considered black and who is considered Asian, is in the average case easier than measuring who is neurotypical. But when it comes to definition, those concepts seem to be in the same order of magnitude of fuzziness as being neurotypical (sex is a bit less fuzzy, race is a bit more).

Also, previously you established that you don't want to compare LessWrong's diversity to the entire population of the world. I'm going to tentatively assume that you also accept that academic background will affect whether people can grasp, or are interested in learning, certain key concepts needed to participate.

My question now is, why don't we crunch the numbers instead of people yelling "too many!", "too few!" or "just right!"? We know from which countries and in what numbers visitors come, we know the educational distributions in most of them, and we know how large a fraction of this group is proficient enough in English to participate meaningfully on LessWrong.
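A minimal sketch of what that number-crunching could look like, assuming we had per-country traffic shares plus rough tertiary-education and English-proficiency rates (every figure below is an invented placeholder, not real data):

```python
# Hypothetical sketch of comparing LessWrong's makeup against a baseline pool.
# Every number below is an invented placeholder, NOT real traffic or census data.
visitor_share = {"US": 0.55, "UK": 0.12, "India": 0.08, "Germany": 0.06, "Other": 0.19}
tertiary_education = {"US": 0.40, "UK": 0.42, "India": 0.12, "Germany": 0.28, "Other": 0.20}
english_proficiency = {"US": 1.00, "UK": 1.00, "India": 0.30, "Germany": 0.55, "Other": 0.25}

# Expected pool weight per country = traffic share * education rate * English proficiency.
weights = {c: visitor_share[c] * tertiary_education[c] * english_proficiency[c]
           for c in visitor_share}
total = sum(weights.values())

for country, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{country}: expected {w / total:.1%} of the plausible participant pool")
# The resulting expected shares could then be compared against self-reported survey
# demographics, rather than arguing from impressions about who "should" be here.
```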

This is ignoring the fact that the only data we have on sex or race is a simple self-reported poll and our general impression.

But if we crunch the numbers and the probability densities end up looking pretty similar given the best data we can find, well, why shouldn't the burden of proof fall on the one proposing policy or action to show that we are indeed wasting potential on LessWrong, rather than on those focused on improving our odds of progressing towards becoming more rational? And if we are promoting our members' values, even when they aren't neutral or positive towards reaching our objectives, why don't we spell them out, as long as they truly are common! I'm certain there are a few; perhaps the value of life and existence (though these have been questioned and debated here too), or perhaps some utilitarian principles.

But how do we know that any position people take would really reflect their values and wouldn't just be status signalling? Heck, many people who profess that their values include, or don't include, a certain inherent "goodness" to existence probably do so for signalling reasons and would quickly change their minds in a different situation!

Not even mentioning the general effect of the mindkiller.

But like I have stated before, there are certainly many spaces where we can optimize for the stated goal by outreach. This is why I think this debate should continue, but with a slightly different spirit. More in line with, to paraphrase you:

Which groups should we pay more attention to? This is a serious question, since we haven't thought too much about it.

Replies from: wedrifid
comment by wedrifid · 2010-09-18T19:09:52.231Z · LW(p) · GW(p)

will affect how many people can grasp ev

Typo in a link?

Replies from: None
comment by [deleted] · 2010-09-18T19:24:10.492Z · LW(p) · GW(p)

I changed the first draft midway when I was still attempting to abbreviate it. I've edited and reformulated the sentence, it should make sense now.

comment by [deleted] · 2010-09-18T17:51:43.426Z · LW(p) · GW(p)

If less than 4% of us are women, I am quite willing to call that a minority. Would you prefer me to call them an excluded group?

I'm talking about the Western memplex whose members use the word minority when describing women in general society, even though they represent a clear numerical majority.

I was suspicious that you used the word minority in that sense rather than the more clearly defined sense of being a numerical minority.

Sometimes when talking about groups we can avoid discussing which meaning of the word we are employing.

Example: Discussing the repression of the Mayan minority in Mexico.

While other times we can't do this.

Example: Discussing the history and current relationship between the Arab upper class minority and slavery in Mauritania.

This (age) also doesn't bother me, for reasons similar to yours.

Ah, apologies I see I carried it over from here:

How diverse is Less Wrong? I am under the impression that we disproportionately consist of 20-35 year old white males, more disproportionately on some axes than on others.

You explicitly state later that you are particularly interested in this axis of diversity:

However, if we are predominately white males, why are we?

Perhaps this would be more manageable if we looked at each of the axes of variability that you raise and talked about them independently, in as much as this is possible? Again, this is why I was previously confused by your speaking of "groups we usually consider adding diversity"; are there certain groups that are inherently associated with the word diversity? Are we using the word diversity to mean something like "proportionate representation of certain kinds of people in all groups", or are we using the word diversity in line with "infinite diversity in infinite combinations", where if you create a mix of 1 part people A and 4 parts people B and have it coexist and cooperate with another mix that is 2 parts people A and 3 parts people B, where previously all groups were of the first kind, you create a kind of meta-diversity (using the word diversity in its politically charged meaning)?

I specifically brought up atheists as a group that we should expect to over-represent. I'm also not hunting for equal-representation among countries, since education obviously ought to make a difference.

Then why are you hunting for equal representation on LW between different groups that are united only in a space as arbitrary as one defined by borders?

mentioning the western world as the source of the scientific method.

While many important components of the modern scientific method did originate among scholars in Persia and Iraq in the medieval era, its development over the past 700 years has been disproportionately seen in Europe and later its colonies. I would argue its adoption was part of the reason for the later (let's say the last 300 years of) technological superiority of the West.

Edit: I wrote up quite a long wall of text. I'm just going to split it into a few posts so as to make it more readable, as well as to give me a better sense of what is getting upvoted or downvoted based on its merit or lack thereof.

comment by wedrifid · 2010-09-18T18:50:07.509Z · LW(p) · GW(p)

That seems like it ought to get many more boos around here than mentioning the western world as the source of the scientific method. I ascribe differences in those to cultural influences;

Given new evidence from the ongoing discussion I retract my earlier concession. I have the impression that the bottom line preceded the reasoning.

Replies from: datadataeverywhere
comment by datadataeverywhere · 2010-09-18T22:22:41.524Z · LW(p) · GW(p)

I expected your statement to get more boos for the same reason that you expected my premise in the other discussion to be assumed because of moral rather than evidence-based reasons. That is, I am used to other members of your species (I very much like that phrasing) taking very strong and sudden positions condemning suggestions of inherent inequality between the sexes, regardless of having a rational basis. I was not trying to boo your statement myself.

That said, I feel like I have legitimate reasons to oppose suggestions that women are inherently weaker in mathematics and related fields. I mentioned one immediately below the passage you quoted. If you insist on supporting that view, I ask that you start doing so by citing evidence, and then we can begin the debate from there. At minimum, I feel like if you are claiming women to be inherently inferior, the burden of proof lies with you.

Edit: fixed typo

Replies from: Will_Newsome, wedrifid
comment by Will_Newsome · 2010-09-19T05:56:35.622Z · LW(p) · GW(p)

Mathematical ability is most remarked on at the far right of the bell curve. It is very possible (and there's lots of evidence to support the argument) that women simply have lower variance in mathematical ability. The average is the same. Whether or not 'lower variance' implies 'inherently weaker' is another argument, but it's a silly one.
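As a rough illustration of how "same mean, lower variance" plays out at the far tail, here is a small sketch; the 0.9 standard-deviation ratio and the normality assumption are arbitrary choices for illustration, not measured values:

```python
# Illustrative only: equal means, different standard deviations, and the far right tail.
import math

def tail_prob(cutoff, sd):
    """P(X > cutoff) for a normal distribution with mean 0 and standard deviation sd."""
    return 0.5 * math.erfc(cutoff / (sd * math.sqrt(2)))

sd_a, sd_b = 1.0, 0.9  # 0.9 is an arbitrary illustrative ratio, NOT a measured figure
for cutoff in (2.0, 3.0, 4.0):  # "elite" thresholds, in group-A standard deviations
    ratio = tail_prob(cutoff, sd_a) / tail_prob(cutoff, sd_b)
    print(f"cutoff {cutoff}: group A is {ratio:.1f}x as common above the threshold")
# Even with identical means, a modest variance difference produces an imbalance that
# grows rapidly the further out the cutoff is, which is why the question matters most
# at the far right of the distribution.
```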

I'm much too lazy to cite the data, but a quick Duck Duck Go search or maybe Google Scholar search could probably find it. An overview with good references is here.

Replies from: None, datadataeverywhere
comment by [deleted] · 2010-09-19T23:25:06.219Z · LW(p) · GW(p)

Is mathematical ability a bell curve?

My own anecdotal experience has been that women are rare in elite math environments, but don't perform worse than the men. That would be consistent with a fat-tailed rather than normal distribution, and also with higher computed variance among women.

Also anecdotal, but it seems that when people come from an education system that privileges math (like Europe or Asia as opposed to the US) the proportion of women who pursue math is higher. In other words, when you can get as much social status by being a poly sci major as a math major, women tend not to do math, but when math is very clearly ranked as the "top" or "most competitive" option throughout most of your educational life, women are much more likely to pursue it.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-09-20T00:06:59.316Z · LW(p) · GW(p)

Is mathematical ability a bell curve?

I have no idea; sorry, saying so was bad epistemic hygiene. I thought I'd heard something like that but people often say bell curve when they mean any sort of bell-like distribution.

Also anecdotal, but it seems that when people come from an education system that privileges math (like Europe or Asia as opposed to the US) the proportion of women who pursue math is higher.

I'm left confused as to how to update on this information... I don't know how large such an effect is, nor what the original literature on gender difference says, which means that I don't really know what I'm talking about, and that's not a good place to be. I'll make sure to do more research before making such claims in the future.

comment by datadataeverywhere · 2010-09-19T18:34:32.053Z · LW(p) · GW(p)

I'm not claiming that there aren't systematic differences in position or shape of the distribution of ability. What I'm claiming is that no one has sufficiently proved that these differences are inherent.

I can think of a few plausible non-genetic influences that could reduce variance, but even if none of those come into play, there must be others that are also possibilities. Do you see why I'm placing the burden of proof on you to show that differences are biologically inherent, but also why I believe that this is such a difficult task?

Replies from: wedrifid
comment by wedrifid · 2010-09-19T19:03:49.869Z · LW(p) · GW(p)

Do you see why I'm placing the burden of proof on you to show that differences are biologically inherent

Either because you don't understand how bayesian evidence works or because you think the question is social political rather than epistemic.

, but also why I believe that this is such a difficult task?

That was the point of making the demand.

You cannot change reality by declaring that other people have 'burdens of proof'. "Everything is cultural" is not a privileged hypothesis.

Replies from: Perplexed
comment by Perplexed · 2010-09-19T19:24:33.560Z · LW(p) · GW(p)

Do you see why I'm placing the burden of proof on you to show that differences are biologically inherent

Either because you don't understand how bayesian evidence works or because you think the question is social political rather than epistemic.

It might have been marginally more productive to answer "No, I don't see. Would you explain?" But, rather than attempting to other-optimize, I will simply present that request to datadataeverywhere. Why is the placement of "burden" important? With this supplementary question: Do you know of evidence strongly suggesting that different cultural norms might significantly alter the predominant position of the male sex in academic mathematics?

... but also why I believe that this is such a difficult task?

I can certainly see this as a difficult task. For example, we can imagine that fictional rational::Harry Potter and Hermione were both taught as children that it is ok to be smart, but that only Hermione was instructed not to be obnoxiously smart. This dynamic, by itself, would be enough to strongly suppress the number of women who rise to the highest levels in math.

But producing convincing evidence in this area is not an impossible task. For example, we can empirically assess the impact of the above mechanism by comparing the number of bright and very bright men and women who come from different cultural backgrounds.
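If one wanted to run that comparison, a minimal sketch might look like the following; the counts are invented placeholders and "very bright" stands for whatever cutoff one operationalizes, so this shows only the shape of the test, not a result:

```python
# Hypothetical sketch: does the male:female ratio among the "very bright" vary with
# cultural background? All counts below are invented placeholders, not data.
from scipy.stats import chi2_contingency

# Rows = cultural background, columns = (men, women) above some high ability cutoff.
observed = [
    [120, 30],   # background A
    [ 80, 40],   # background B
    [ 60, 45],   # background C
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
# A small p-value would mean the sex ratio among high scorers shifts with cultural
# background, i.e. evidence that culture moves the outcome; near-identical ratios
# across backgrounds would leave more room for a non-cultural explanation.
```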

Rather than simply demanding that your interlocutor show his evidence first, why not go ahead and show yours?

Replies from: datadataeverywhere, wedrifid
comment by datadataeverywhere · 2010-09-20T02:47:31.322Z · LW(p) · GW(p)

But producing convincing evidence in this area is not an impossible task. For example, we can empirically assess the impact of the above mechanism by comparing the number of bright and very bright men and women who come from different cultural backgrounds.

I agree, and this was what I meant. Distinguishing between nature and nurture, as wedrifid put it, is a difficult but not impossible task.

Why is the placement of "burden" important? With this supplementary question: Do you know of evidence strongly suggesting that different cultural norms might significantly alter the predominant position of the male sex in academic mathematics?

I hope I answered both of these in my comment to wedrifid below. Thank you for bothering to take my question at face value (as a question that requests a response), instead of deciding to answer it with a pointless insult.

comment by wedrifid · 2010-09-19T23:10:06.807Z · LW(p) · GW(p)

It might have been marginally more productive to answer "No, I don't see. Would you explain?"

The problem with other-optimising here is that it doesn't account for my goals. I care far more about the nature of rational evidence than I do about the drawn out nature vs nurture debates. A direct denunciation of the epistemic rational failure mode of passing the 'proof' buck suits my purposes.

Replies from: datadataeverywhere
comment by datadataeverywhere · 2010-09-20T02:42:43.091Z · LW(p) · GW(p)

It might have been marginally more productive to answer "No, I don't see. Would you explain?"

Actually, it would have been more productive, since you obviously didn't understand what I was saying.

I am not claiming that I have evidence suggesting that culture is a stronger factor in mathematical ability than genetics. What I'm claiming is that I don't know of any evidence to show that the two can be clearly distinguished. Ignorance is a privileged hypothesis. Unless you can show evidence of differences in mathematical ability that can be traced specifically to genetics, ignorance reigns here, and we shouldn't assume that either culture or genetics is a stronger factor.

The burden of proof lies on you, because you are appealing to me to shift my belief toward yours. I am willing to do this, provided you provide any evidence that does so under a sane framework for reasoning. Meanwhile, the reason the burden of proof is not on me is that I am claiming ignorance, not a particular position.

A direct denunciation of the epistemic rational failure mode of passing the 'proof' buck suits my purposes.

You're being incredibly critical, and have been so in other threads as well. I realize that this is your M.O., and is not solely directed at me, but I would appreciate it if you would specify exactly what I've said, here or in other comments, that has convinced you so thoroughly that I am unable to hold a rational discussion.

Replies from: wedrifid
comment by wedrifid · 2010-09-20T04:14:47.192Z · LW(p) · GW(p)

Actually, it would have been more productive, since you obviously didn't understand what I was saying.

No, I rejected your specific argument because it was by very nature fallacious. There are other things you could have said but didn't and those things I may not have even disagreed with.

The burden of proof lies on you, because you are appealing to me to shift my belief toward yours.

The conversation was initiated by you admonishing others. You have since then danced the dance of re-framing with some skill. I was actually only at the fringes of the conversation.

A direct denunciation of the epistemic rational failure mode of passing the 'proof' buck suits my purposes.

but I would appreciate it if you would specify exactly what I've said, here or in other comments, that has convinced you so thoroughly that I am unable to hold a rational discussion.

I haven't said that. Specific quotations of arguments or reasoning that I reject tend to be included in my comments. Take the above for example. Your reply does not relate rationally to the quote you were replying to. I reject the argument that you were using (which is something I do consistently - I care about bullshit probably even more than you care about supporting your culture hypothesis). Your response was to weasel your way out of your argument, twist your initial claim such that it has the intellectual high ground, label my disagreement with you a personal flaw, misrepresent my claim as something that I have not made, and then attempt to convey that I have not given any explanation for my position. That covers modules 1, 2, 3 and 4 in "Effective Argument Techniques 101".

I don't especially mind the slander but it is essentially futile for me to try to engage with the reasoning. I would have to play the kind of games that I come here to avoid.

Replies from: Perplexed, datadataeverywhere
comment by Perplexed · 2010-09-21T06:04:58.811Z · LW(p) · GW(p)

Well, I had promised you a compliment when you deleted a post.

So, well done! I'm glad you got rid of that turkey (the great-grandparent).

Replies from: wedrifid
comment by wedrifid · 2010-09-21T06:13:24.810Z · LW(p) · GW(p)

Was that the Joan of Arc reference? I've been studying sex-related genetic mutations and chromosomal abnormalities recently in a biology class and her name came up. I found it fascinating and nearly left the comment there just for that. Each to their own. :)

Replies from: Perplexed
comment by Perplexed · 2010-09-21T06:25:20.709Z · LW(p) · GW(p)

Maybe it was the Joan comment. I can't find it now.

That Joan comment annoyed me too, though I didn't say anything at the time. Not your fault, but just let a woman do something remarkable, something almost miraculous, and sure enough, some man 500 years later is going to claim that she must have actually been male, genetically speaking.

I wasn't feminist at all until I came here to LW. Honest!

Replies from: wedrifid
comment by wedrifid · 2010-09-21T06:44:02.796Z · LW(p) · GW(p)

That Joan comment annoyed me too, though I didn't say anything at the time. Not your fault, but just let a woman do something remarkable, something almost miraculous, and sure enough, some man 500 years later is going to claim that she must have actually been male, genetically speaking.

She is a woman, regardless of whether she has a Y chromosome. It is the SRY gene that matters genetically. So we can use that observation to free us up to call evidence evidence without committing crimes against womankind.

If my (most decidedly female) lecturer is to be believed, the speculation was based primarily on personal reports from her closest friends. It included things like menstrual patterns (and the lack thereof) and personal habits. I didn't look into the details to see whether or not this was an allusion to the typically far shorter vagina becoming relevant. I'm also not sure if the line of reasoning was prompted by some historian trying to work out what on earth was going on while researching her personal life, or just by biologists liking to feel like their knowledge is relevant to impressive people and events.

If she hadn't done famous things then we probably wouldn't have any records whatsoever to go on and nor would anyone care to look.

comment by datadataeverywhere · 2010-09-20T21:01:10.180Z · LW(p) · GW(p)

You're starting to sound like a troll. I would feel less sure of that if you hadn't just admitted that you don't expect to care what you're arguing about in another comment.

What do you want out of this discussion? Personally, I would like to be better informed about an area that smart people disagree with me on. You're not helping me attain that goal, since you are providing me with no evidence. Meanwhile, you are continuing to hold a hostile tone and expecting me to support positions I neither hold nor claim to hold.

If you have an actual interest in either the topic of this discussion or working with me to fix whatever it is that has sent up so many red flags with you, I'd appreciate it. I don't feel like I'm guilty of any of the things you mentioned, but if you feel adamantly that I am, I'm happy to listen to specifics so that I can evaluate and fix that behavior. If instead you feel merely like insulting me, I urge you to make better use of your time.

Replies from: wedrifid
comment by wedrifid · 2010-09-21T05:27:13.008Z · LW(p) · GW(p)

You're starting to sound like a troll. I would feel less sure of that if you hadn't just admitted that you don't expect to care what you're arguing about in another comment.

It is my policy to remove comments whenever social aggressors find them useful to take out of context, and I have done so with the subject of your link, assuring Relsqui that it had nothing to do with him.

In that discussion Relsqui and I came to an amicable agreement to disagree. He (if he'll pardon the assumption of gender and chastise me if I have made an incorrect inference) had already made some hints in that direction in the ancestor, and acknowledging that I too didn't think such a trivial matter of word definition was really worth arguing about is a gesture of respect. (Some people find it annoying if the other person leaves them hanging, especially if they had offered to extend the discussion mostly as a gesture of goodwill, which is what I had taken from Relsqui.)

I'll note that, whatever you may think of me personally, a distinguishing feature of trolls is that they enjoy provoking an emotional response in others, while I on the other hand find it unsavoury. Even though I have actively developed myself in order to have a thicker 'emotional skin' (see the related concurrent discussion) when it comes to frustrations, this sort of conflict will always be a net psychological drain.

My goal was to support Will's comment in the face of a reply that I would have found frustrating and was also an error in reasoning. In the future I will reply directly to Will (or whomever), expressing agreement and elaborating on the point with more details. Replying to the undesired comment gave more attention to it rather than less and obscuration would perhaps have been more useful than rebuttal.

Replies from: Relsqui
comment by Relsqui · 2010-09-21T06:26:07.121Z · LW(p) · GW(p)

a distinguishing feature of trolls is that they enjoy provoking an emotional response in others while on the other hand I find it unsavoury

For what it's worth, it is very hard to distinguish between someone who is deliberately provoking a negative reaction and someone who is not very practiced at anticipating what choices of language or behavior might cause one. I, like datadataeverywhere, did get the impression that you were at least one of those things; off the top of my head, here are a few specific reasons:

  • Your initial comment disagreed with my terminology without actually addressing it directly, merely asserting that I was wrong without providing evidence or argument. This struck me as aggressive and also poorly reasoned.
  • You persisted in the argument about definition despite, as you later said, not caring about it. I did not continue that thread out of goodwill but out of a desire to resolve the disagreement and return to the original topic--hence stopping and checking in that we were on the same page. That's why it annoyed me when you said you didn't care; in that case, I wish we hadn't wasted the time on it!
  • Applying the label "social aggressor" in response to someone who is explicitly trying to find out what's going on in the conversation and steer it somewhere useful. (In fairness, dde suggesting you're a troll was not necessary either, but the situations are different in that I have not noticed you specifically trying to get the conversation on track.)
  • Not answering direct questions, especially when they are designed to return the conversation to a productive topic.

I hope I'm not overstepping my bounds by spelling this out; my impression of the LW community is that constructive criticism is encouraged. Therefore, I'm giving you specific suggestions to avoid making a negative impression you seem to not want to make. Conveniently, this will also resolve the ambiguity in my first (non-quoted) sentence in this comment. If you confirm that you want to avoid garnering negative reactions in conversation, it'll be clear that you are indeed not a troll.

comment by wedrifid · 2010-09-19T04:43:18.637Z · LW(p) · GW(p)

If you insist on supporting that view

Absolutely not. In general people overestimate the importance of 'intrinsic talent' on anything. The primary heritable component of success in just about anything is motivation. Either g or height comes second depending on the field.

Replies from: datadataeverywhere
comment by datadataeverywhere · 2010-09-19T05:13:42.737Z · LW(p) · GW(p)

I agree. I think it is quite obvious that ability is always somewhat heritable (otherwise we could raise our pets as humans), but this effect is usually minimal enough to not be evident behind the screen of either random or environmental differences. I think this applies to motivation as well!

And that was really what my claim was; anyone who claims that women are inherently less able in mathematics has to prove that any measurable effect is distinguishable from and not caused by cultural factors that propel fewer women to have interest in mathematics.

Replies from: wedrifid
comment by wedrifid · 2010-09-19T05:20:18.393Z · LW(p) · GW(p)

I think this applies to motivation as well!

It doesn't. (Unfortunately.)

Replies from: datadataeverywhere
comment by datadataeverywhere · 2010-09-19T05:29:06.916Z · LW(p) · GW(p)

Am I misunderstanding, or are you claiming that motivation is purely an inherited trait? I can't possibly agree with that, and I think even simple experiments are enough to disprove that claim.

Replies from: wedrifid
comment by wedrifid · 2010-09-19T08:42:57.299Z · LW(p) · GW(p)

Am I misunderstanding, or are you claiming that motivation is purely an inherited trait?

Misunderstanding. Expanding the context slightly:

I agree. I think it is quite obvious that ability is always somewhat heritable (otherwise we could raise our pets as humans), but this effect is usually minimal enough to not be evident behind the screen of either random or environmental differences. I think this applies to motivation as well!

It doesn't. (Unfortunately.)

When it comes to motivation the differences between people are not trivial. When it comes to the particular instance of differences between the sexes, there are powerful differences in motivating influences. Most human motives are related to sexual signalling and gaining social status. The optimal actions to achieve these goals are significantly different for males and females, which is reflected in which things are the most motivating. It most definitely should not be assumed that motivational differences are purely cultural - and it would be astonishing if they were.

Replies from: datadataeverywhere
comment by datadataeverywhere · 2010-09-19T18:21:43.164Z · LW(p) · GW(p)

The optimal actions to achieve these goals are significantly different for males and females.

Are you speaking from an evolutionary context, i.e. claiming that what we understand to be optimal is hardwired, or are you speaking to which actions are actually perceived as optimal in our world?

You make a really good point---one I hadn't thought of but agree with---but since I don't think that we behave strictly in a manner that our ancestors would consider optimal (after all, what are we doing at this site?), I can't agree that sexual and social signaling's effect on motivation can be considered a-cultural.

comment by Emile · 2010-09-09T16:25:55.647Z · LW(p) · GW(p)

There's nothing about being white, or female, or hispanic, or deaf, or gay that prevents one from being a rationalist.

I may be wrong, but I don't expect the proportion of gays in LessWrong to be very different from the proportion in the population at large.

Replies from: thomblake, datadataeverywhere
comment by thomblake · 2010-09-16T20:00:03.416Z · LW(p) · GW(p)

I may be wrong, but I don't expect the proportion of gays in LessWrong to be very different from the proportion in the population at large.

My vague impression is that the proportion of people here with sexual orientations that are not in the majority in the population is higher than that of such people in the population.

This is probably explained completely by LW's tendency to attract ~~weirdos~~ people who are willing to question orthodoxy.

Replies from: None
comment by [deleted] · 2010-09-18T15:39:13.766Z · LW(p) · GW(p)

For starters we have quite a few people who practice polyamory.

comment by datadataeverywhere · 2010-09-09T16:33:45.877Z · LW(p) · GW(p)

It might matter whether or not one counts closeted gays. Either way, I was just throwing another potential partition into the argument. I also doubt that we differ significantly in our proportion of deaf people, but the point is that being deaf is qualitatively different yet shouldn't impair one's rational capabilities. Same for being female, black, or most of the groups that we think of as adding to diversity.

Replies from: None
comment by [deleted] · 2010-09-16T21:34:51.660Z · LW(p) · GW(p)

I am actually interested in, among other things, whether a lack of diversity is a functional impairment for a group. I feel strongly that it is, but I can't back up that claim with evidence strong enough to match my belief. For a group such as Less Wrong, I have to ask what we miss due to a lack of diversity.

Too little memetic diversity is clearly a bad thing, for the same reason too little genetic variability is. However, how much and what kind are optimal depends on the environment.

Also, have you considered the possibility that diversity for you is not a means to an end but a value in itself? In that case, unless it conflicts with other values you would perhaps consider more important, you don't need any justification for it. I'm quite honest with myself that I hope that post-singularity the universe will not be paperclipped with only the things that I and people like me (or humans in general for that matter) value. I value a diverse universe.

Edit:

Same for being female, black, or most of the groups that we think of as adding to diversity.

I... uhm... see. At first I was very confused by all the far-reaching implications of this; however, thanks to keeping a few things in mind, I'm just going to ascribe this to you being from a different cultural background than me.

Replies from: datadataeverywhere
comment by datadataeverywhere · 2010-09-17T05:02:01.525Z · LW(p) · GW(p)

Diversity is a value for me, but I'd like to believe that it is more than simply an aesthetic value. Of course, if wishes were horses we'd all be eating steak.

Memetic diversity is one of the non-aesthetic arguments I can imagine, and my question is partially related to that. Genetic diversity is superfluous past a certain point, so it seems reasonable that the same might be true of memetic diversity. Where is that point relative to where Less Wrong sits?

Um, all I was saying was that women and black people are underrepresented here, but that ought not be explained away by the subject matter of Less Wrong. What does that have to do with my cultural background or the typical mind fallacy? What part of that do you disagree with?

Replies from: None, None, wedrifid
comment by [deleted] · 2010-09-18T16:05:22.142Z · LW(p) · GW(p)

Um, all I was saying was that women and black people are underrepresented here, but that ought not be explained away by the subject matter of Less Wrong. What does that have to do with my cultural background or the typical mind fallacy? What part of that do you disagree with?

Well I will try to elaborate.

Same for being female, black, or most of the groups that we think of as adding to diversity.

After I read this it struck me that you may value a much smaller space of diversity than I do, and that you probably value certain very particular kinds of diversity (race, gender, some types of culture) much more, or even perhaps to the exclusion of others (non-neurotypicality, ideology and especially values). I'm not saying you don't (I can't know this) or that you should. I at first assumed you thought the way you do because you had come up with a system more or less similar to my own, an incredibly unlikely event; that is why I scolded myself for employing the mind projection fallacy, while providing a link pointing out that this particular component is firmly integrated into the whole "stuff White people like" (for lack of a better word) culture that exists in the West, so anyone I encounter online with whom I share the desire for certain spaces of diversity is on average overwhelmingly more likely to have gotten it from that memplex.

Also, while I'm certainly sympathetic to hoping one's values are practical, one needs to learn to live with the possibility that one's values are neutral or even impractical, or perhaps in conflict with each other. I overall in principle support efforts to lower unnecessary barriers for people to join LessWrong. But the OP doesn't seem to make it explicit that this is about values, and about you wanting other LessWrongers to live by your values; it seems to communicate instead that it's about the optimal course for improving rationality.

You haven't done this. Your argument so far has been to simply go from:

"arbitrary designated group/blacks/women are capable of rationality, but are underrepresented on Lesswrong"

to

"Lesswrong needs to divert some (as much as needed?) efforts to correct this."

Why?

Like I said, lowering unnecessary barriers (actually, at this point you even have to make the case that they exist and that they aren't simply the result of the other factors I described in the post) won't repel the people who already find LW interesting, so it should in principle give us a more effective and healthy community.

However, what if this should prove to be insufficient? Divert resources to change the preferences of the designated under-represented groups? Add elements to LessWrong that aren't strictly necessary to reach its stated objectives? Which is not to say we don't have such elements now; however, the ones we have now probably cater to the largest potential pool of people predisposed to find LW's goals interesting.

Replies from: Vladimir_M, datadataeverywhere
comment by Vladimir_M · 2010-09-18T18:46:13.957Z · LW(p) · GW(p)

Konkvistador:

After I read this it struck me that you may value a much smaller space of diversity than I do. And that you probably value the very particular kinds of diversity (race, gender,some types of culture) much more or even perhaps to the exclusion of others (non-neurotypical, ideological and especially values).

There is a fascinating question that I've asked many times in many different venues, and never received anything approaching a coherent answer. Namely, among all the possible criteria for categorizing people, which particular ones are supposed to have moral, political, and ideological relevance? In the Western world nowadays, there exists a near-consensus that when it comes to certain ways of categorizing humans, we should be concerned if significant inequality and lack of political and other representation is correlated with these categories, we should condemn discrimination on the basis of them, and we should value diversity as measured by them. But what exact principle determines which categories should be assigned such value, and which not?

I am sure that a complete and accurate answer to this question would open a floodgate of insight about the modern society. Yet out of all difficult questions I've ever discussed, this seems to be the hardest one to open a rational discussion about; the amount of sanctimoniousness and/or logical incoherence in the answers one typically gets is just staggering. One exception are several discussions I've read on Overcoming Bias, which at least asked the right questions, but unfortunately only scratched the surface in answering them.

Replies from: NancyLebovitz, AdeleneDawner, wedrifid
comment by NancyLebovitz · 2010-09-21T17:45:04.782Z · LW(p) · GW(p)

That's intriguing. Would you care to mention some of the sorts of diversity which usually aren't on the radar?

comment by AdeleneDawner · 2010-09-18T22:25:35.718Z · LW(p) · GW(p)

I've spent some time thinking about this, and my conclusion is that, at least personally, what I value about diversity is the variety of worldviews that it leads to.

This does result in some rather interesting issues, though. For example, one of the major factors in the difference in worldview between dark-skinned Americans and light-skinned Americans is the existence of racism, both overt and institutional. Thus, if I consider diversity to be very valuable, it seems that I should support racism. I don't, though - instead, I consider that the relevant preferences of dark-skinned Americans take precedence over my own preference for diversity. (Similarly, left-handed peoples' preference for non-abusive writing education appropriately took precedence over the cultural preference for everyone to write with their right hands, and left-handedness is, to the best of my knowledge, no longer a significant source of diversity of worldview.)

That assumes coherence in the relevant group's preference, though, which isn't always the case. For example, among people with disabilities, there are two common views that are, given limited resources, significantly conflicting: The view that disabilities should be cured and that people with disabilities should strive to be (or appear to be) as normal as possible, and the view that disabilities should be accepted and that people with disabilities should be free to focus on personal goals rather than being expected to devote a significant amount of effort to mitigating or hiding their disabilities. In such cases, I support the preference that's more like the latter, though I do prefer to leave the option open for people with the first preference to pursue that on a personal level (meaning I'd support the preference 'I'd prefer to have my disability cured', but not 'I'd prefer for my young teen's disability to be treated even though they object', and I'm still thinking about the grey area in the middle where such things as 'I'd prefer for my baby's disability to be cured, given that it won't be able to be cured when they're older if it's not cured now, and given that if it's not cured I'm likely to be obligated to take care of them for the rest of my life' exist).

I think that's coherent, anyway, as far as it goes. I'm sure there are issues I haven't addressed, though.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-09-18T23:34:07.958Z · LW(p) · GW(p)

With your first example, I think you're on to an important politically incorrect truth, namely that the existence of diverse worldviews requires a certain degree of separation, and "diversity" in the sense of every place and institution containing a representative mix of people can exist only if a uniform worldview is imposed on all of them.

Let me illustrate using a mundane and non-ideological example. I once read a story about a neighborhood populated mostly by blue-collar folks with a strong do-it-yourself ethos, many of whom liked to work on their cars in their driveways. At some point, however, the real estate trends led to an increasing number of white collar yuppie types moving in from a nearby fancier neighborhood, for whom this was a ghastly and disreputable sight. Eventually, they managed to pass a local ordinance banning mechanical work in front yards, to the great chagrin of the older residents.

Therefore, when these two sorts of people lived in separate places, there was on the whole a diversity of worldview with regards to this particular issue, but when they got mixed together, this led to a conflict situation that could only end up with one or another view being imposed on everyone. And since people's worldviews manifest themselves in all kinds of ways that necessarily create conflict in case of differences, this clearly has implications that give the present notion of "diversity" at least a slight Orwellian whiff.

comment by wedrifid · 2010-09-18T18:59:54.620Z · LW(p) · GW(p)

Yet out of all difficult questions I've ever discussed, this seems to be the hardest one to open a rational discussion about; the amount of sanctimoniousness and/or logical incoherence in the answers one typically gets is just staggering.

My experience is similar. Even people that are usually extremely rational go loopy.

One exception are several discussions I've read on Overcoming Bias, which at least asked the right questions, but unfortunately only scratched the surface in answering them.

I seem to recall one post there that specifically targeted the issue. But you did ask "what basis should" while Robin was just asserting a controversial is.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-09-18T19:08:39.893Z · LW(p) · GW(p)

wedrifid:

But you did ask "what basis should" while Robin was just asserting a controversial is.

I probably didn't word my above comment very well. I am also asking only for an accurate description of the controversial "is."

The fact is that nearly all people attach great moral importance to these issues, and what I'd like (at least for start) is for them to state the "shoulds" they believe in clearly, comprehensively, and coherently, and to explain the exact principles with which they justify these "shoulds." My above stated questions should be understood in these terms.

Replies from: wedrifid
comment by wedrifid · 2010-09-18T19:18:44.781Z · LW(p) · GW(p)

If you are sufficiently curious you could make a post here. People will be somewhat motivated to tone down the hysteria given that you will have pre-emptively shunned it.

comment by datadataeverywhere · 2010-09-18T22:13:14.717Z · LW(p) · GW(p)

I think I'm going to stop responding to this thread, because everyone seems to be assuming I'm meaning or asking something that I'm not. I'm obviously having some problems expressing myself, and I apologize for the confusion that I caused. Let me try once more to clarify my position and intentions:

I don't really care how diverse Less Wrong is. I was, however, curious how diverse the community is along various axes, and was interested in sparking a conversation along those lines. Vladimir's comment asks exactly the kind of questions I was trying to encourage, but instead I feel like I've been asked to defend criticism that I never thought I made in the first place.

I was never trying to say that there was something wrong with the way that Less Wrong is, or that we ought to do things to change our makeup. Maybe it would be good for us to, but that had nothing to do with my question. I was instead (trying to, and apparently badly) asking for people's opinions about whether or how our makeup along any partition --- the ones that I mentioned or others --- effects in us an inability to best solve the problems that we are interested in solving.

comment by [deleted] · 2010-09-18T16:36:07.858Z · LW(p) · GW(p)

"Um, all I was saying was that women and black people are underrepresented here, but that ought not be explained away by the subject matter of Less Wrong. What does that have to do with my cultural background or the typical mind fallacy? What part of that do you disagree with?"

To get back to basics for a moment: we don't know that women and black people are underrepresented here. Usernames are anonymous. Even if we suspect they're underrepresented, we don't know by how much -- or whether they're underrepresented compared to the internet in general, or the geek cluster, or what.

Even assuming you want more demographic diversity on LW, it's not at all clear that the best way to get it is by doing something differently on LW itself.

Replies from: None
comment by [deleted] · 2010-09-18T19:14:24.343Z · LW(p) · GW(p)

You highlighted this point much better than I did.

comment by wedrifid · 2010-09-17T06:00:54.143Z · LW(p) · GW(p)

Um, all I was saying was that women and black people are underrepresented here, but that ought not be explained away by the subject matter of Less Wrong.

"Ought"? I say it 'ought' to be explained away be the subject matter of less wrong if and only if that is an accurate explanation. Truth isn't normative.

Replies from: datadataeverywhere
comment by datadataeverywhere · 2010-09-17T06:20:06.092Z · LW(p) · GW(p)

Is this a language issue? Am I using "ought" incorrectly? I'm claiming that the truth of the matter is that women are capable of rationality, and have a place here, so it would be wrong (in both an absolute and a moral sense) to claim that their lack of presence is due to this being a blog about rationality.

Perhaps I should weaken my statement to say "if women are as capable as men in rationality, their underrepresentation here ought not be explained away by the subject matter". I'm not sure whether I feel like I should or shouldn't apologize for taking the premise of that sentence as a given, but I did, hence my statement.

Replies from: wedrifid
comment by wedrifid · 2010-09-17T06:34:06.224Z · LW(p) · GW(p)

Ahh, ok. That seems reasonable. I had got the impression that you had taken the premise for granted primarily because it would be objectionable if it was not true and the fact of the matter was an afterthought. Probably because that's the kind of reasoning I usually see from other people of your species.

I'm not going to comment either way about the premise except to say that it is inclination and not capability that is relevant here.

comment by CaveJohnson · 2011-07-20T17:56:51.354Z · LW(p) · GW(p)

I was never trying to say that there was something wrong with the way that Less Wrong is, or that we ought to do things to change our makeup. Maybe it would be good for us to, but that had nothing to do with my question. I was instead trying (apparently badly) to ask for people's opinions about whether or how our makeup along any partition --- the ones that I mentioned or others --- makes us less able to solve the problems that we are interested in solving.

People are touchy on this. I guess it's because in public discourse pointing something like this out is nearly always a call to change it.

comment by Perplexed · 2010-09-09T03:26:45.524Z · LW(p) · GW(p)

Wow! I just lost 50 points of karma in 15 minutes. I haven't made any top level posts, so it didn't happen there. I wonder where? I guess I already know why.

Replies from: RobinZ, katydee, Perplexed, jacob_cannell
comment by RobinZ · 2010-09-09T03:49:14.389Z · LW(p) · GW(p)

While katydee's story is possible (and probable, even), it is also possible that someone is catching up on their Less Wrong reading for a substantial recent period and issuing many votes (up and down) in that period. Some people read Less Wrong in bursts, and some of those are willing to lay down many downvotes in a row.

comment by katydee · 2010-09-09T03:43:54.079Z · LW(p) · GW(p)

It is possible that someone has gone through your old comments and systematically downvoted them-- I believe pjeby reported that happening to him at one point.

In the interest of full disclosure, I have downvoted you twice in the last half hour and upvoted you once. It's possible that fifty other people think like me, but if so you should have very negative karma on some posts and very positive karma on others, which doesn't appear to be the case.

Replies from: Perplexed
comment by Perplexed · 2010-09-09T03:55:27.018Z · LW(p) · GW(p)

I think you are right about the systematic downvoting. I've noticed and not minded the downvotes on my recent controversial postings. No hard feelings. In fact, no real hard feelings toward whoever gave me the big hit - they are certainly within their rights and I am certainly currently being a bit of an obnoxious bastard.

comment by Perplexed · 2010-09-12T19:25:43.898Z · LW(p) · GW(p)

And now my karma has jumped by more than 300 points! WTF? I'm pretty sure this time that someone went through my comments systematically upvoting. If that was someone's way of saying "thank you" ... well ... you are welcome, I guess. But isn't that a bit much?

comment by jacob_cannell · 2010-09-09T07:04:04.720Z · LW(p) · GW(p)

That happened to me three days ago or so, after my last top level post. At the time said post was at -6 or so, and my karma was at 60-something. Then, within a space of < 10 minutes, my karma dropped to zero (actually I think it went substantially negative). So what is interesting to me is the timing.

I refresh or click on links pretty quickly. It felt like my karma dropped by more than 50 points instantly (as if someone had dropped my karma in one hit), rather than someone or a number of people 'tracking me'.

However, I could be mistaken, and I'm not certain I wasn't away from my computer for 10 minutes or something. Is there some way for high karma people to adjust someone's karma? Seems like it would be useful for troll control.

comment by JamesAndrix · 2010-09-08T04:51:55.578Z · LW(p) · GW(p)

Have there been any articles on what's wrong with the Turing test as a measure of personhood? (even in its least convenient form)

In short the problems I see are: False positives, false negatives, ignoring available information about the actual agent, and not reliably testing all the things that make personhood valuable.

Replies from: Larks
comment by Larks · 2010-09-08T06:25:11.903Z · LW(p) · GW(p)

False positives, false negatives

This sounds pretty exhaustive.

comment by DSimon · 2010-09-07T20:27:56.836Z · LW(p) · GW(p)

I'm interested in video game design and game design in general, and also in raising the rationality waterline. I'd like to combine these two interests: to create a rationality-focused game that is entertaining or interesting enough to become popular outside our clique, but that can also effectively teach a genuinely useful skill to players.

I imagine that it would consist of one or more problems which the player would have to be rational in some particular way to solve. The problem has to be:

  • Interesting: The prospect of having to tackle the problem should excite the player. Very abstract or dry problems would not work; very simple problems wouldn't work either, even if cleverly presented (e.g. you could do Newcomb's problem as a game with plenty of lovely art... but the game itself would still only be a single binary choice, which would quickly bore the player).

  • Dramatic in outcome: The difference between success and failure should be great. A problem in which being rational gets you 10 points but acting typically gets you 8 points would not work; the advantage of applying rationality needs to be very noticeable.

  • Not rigged (or not obviously so): The player shouldn't have the feeling that the game is designed to directly reward rationality (even though it is, in a sense). The player should think that they are solving a general problem with rationality as their asset.

  • Not allegorical: I don't want to raise any likely mind-killing associations in the player's mind, like politics or religion. The problem they are solving should be allegorical to real world problems, but to a general class of problems, not to any specific problems that will raise hackles and defeat the educational purpose of the game.

  • Surprising: The rationality technique being taught should not be immediately obvious to an untrained player. A typical first session should involve the player first trying an irrational method, seeing how it fails, and then eventually working their way up to a rational method that works.

A lot of the rationality-related games that people bring up fail some of these criteria. Zendo, for example, is not "dramatic in outcome" enough for my taste. Avoiding confirmation bias and understanding something about experimental design makes one a better Zendo player... but in my experience not as much as just developing a quick eye for pattern recognition and being able to read the master's actions.

Anyone here have any suggestions for possible game designs?

comment by MartinB · 2010-09-07T04:27:34.163Z · LW(p) · GW(p)

Did anyone here read Buckminster Fuller's Synergetics? And if so, did you understand it?

Replies from: timtyler, Risto_Saarelma
comment by timtyler · 2010-09-08T07:23:52.362Z · LW(p) · GW(p)

Hefty quantities of Synergetics seem incomprehensible to me.

Fuller was trying to make himself into a mystical science guru - and Synergetics laid out his domain.

There is some worthwhile material in there - though you might be better off with more recent secondary sources.

Replies from: MartinB
comment by MartinB · 2010-10-22T06:28:57.800Z · LW(p) · GW(p)

But which sources? The writing of his that I did understand, I found amazing. And I can imagine that grasping Synergetics might be useful for my brain.

Recommendations for reading are always welcome.

Replies from: timtyler
comment by timtyler · 2010-10-22T09:03:52.294Z · LW(p) · GW(p)

It depends on what aspect you are interested in.

For example, I found this book pretty worthwhile:

"Light Structures - Structures of Light: The Art and Engineering of Tensile Architecture" Illustrated by the Work of Horst Berger.

...and here's one of my links pages: http://pleatedstructures.com/links/

comment by Risto_Saarelma · 2010-09-07T06:07:47.328Z · LW(p) · GW(p)

Seconding this question.

I found Synergetics in the local library when I was in high school, was duly impressed by Arthur C. Clarke's endorsement on the cover, but didn't understand much at all about the book. I was too young to tell if the book was obvious math crankery or not back then, but the magnum opus style of Synergetics combined with it being pretty completely ignored nowadays makes it look a lot like an earlier example of the type of book Wolfram's A New Kind of Science turned out to be.

Still, I'm curious about what the big idea was supposed to be and what people who seriously read the book thought about it.

ETA: For the curious, the whole book is available online.

comment by utilitymonster · 2010-09-07T01:07:06.637Z · LW(p) · GW(p)

Question about Solomonoff induction: does anyone have anything good to say about how to associate programs with basic events/propositions/possible worlds?

Replies from: timtyler, khafra
comment by timtyler · 2010-09-08T07:20:05.183Z · LW(p) · GW(p)

Don't do that - instead associate programs with sensory input streams.

Replies from: utilitymonster
comment by utilitymonster · 2010-09-08T09:09:48.150Z · LW(p) · GW(p)

Ok, but how?

Replies from: timtyler
comment by timtyler · 2010-09-08T09:31:10.246Z · LW(p) · GW(p)

A stream of sense data is essentially equivalent to a binary stream - the associated programs are the ones that output that stream.

Replies from: utilitymonster
comment by utilitymonster · 2010-09-08T12:00:42.159Z · LW(p) · GW(p)

Still don't get it. Let's say cards are being put in front of my face, and all I'm getting is their color. I can reliably distinguish the colors here "http://www.webspresso.com/color.htm". How do I associate a sequence of cards with a string? It doesn't seem like there is any canonical way of doing this. Maybe it won't matter that much in the end, but are there better and worse ways of starting?

Replies from: timtyler, gwern
comment by timtyler · 2010-09-08T14:44:29.039Z · LW(p) · GW(p)

How do I associate a sequence of cards with a string? It doesn't seem like there is any canonical way of doing this. Maybe it won't matter that much in the end [...]

Just so: the exact representation used is usually not that critical.

If, as you say, you are using Solomonoff induction, the next step is to compress it - so any fancy encoding scheme you use will probably be stripped right off again.

comment by gwern · 2010-09-08T14:13:28.083Z · LW(p) · GW(p)

If you really can only distinguish those 255 colors, then you could associate each color with a single unique byte, and a sequence of n cards becomes a single bitstring with n*8 bits in it. For additional flavor, add some sort of compression.
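
A minimal sketch of that encoding in Python (the palette entries and observed cards below are made-up placeholders, not the actual chart from the linked page):

```python
# Map each distinguishable color to one unique byte, so a sequence of
# n cards becomes a bitstring of n*8 bits; then compress for good measure.
import zlib

# Hypothetical palette (up to 256 entries); in practice this would be the
# full list of colors the observer can reliably distinguish.
palette = ["red", "orange", "yellow", "green", "blue", "indigo", "violet"]
color_to_byte = {name: i for i, name in enumerate(palette)}

def encode_cards(cards):
    """Turn a sequence of observed card colors into a byte string."""
    return bytes(color_to_byte[c] for c in cards)

observed = ["red", "red", "blue", "green", "red"]
bitstring = encode_cards(observed)      # 5 cards -> 5 bytes = 40 bits
compressed = zlib.compress(bitstring)   # "for additional flavor"
print(bitstring.hex(), len(bitstring) * 8, "bits")
```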

This is so elementary that I must be misunderstanding you somehow.

comment by khafra · 2010-09-07T04:11:20.013Z · LW(p) · GW(p)

Good question. Unfortunately, I don't think it's possible to create a universal shortcut for "run each one, and see if you get the possible world you were aiming for," other than the well-known alternatives like AIXI-tl and MC-AIXI.

comment by xamdam · 2010-09-05T18:53:09.923Z · LW(p) · GW(p)

Looks like an interesting course from MIT:

Reflective Practice: An Approach for Expanding Your Learning Frontiers

Is anyone familiar with the approach, or with the professor?

comment by David_Allen · 2010-09-04T07:17:26.829Z · LW(p) · GW(p)

The Idea

I am working on a new approach to creating knowledge management systems. An idea that I backed into as part of this work is the context principle.

Traditionally, the context principle states that a philosopher should always ask for a word's meaning in terms of the context in which it is being used, not in isolation.

I've redefined this to make it more general: Context creates meaning and in its absence there is no meaning.

And I've added the corollary: Domains can only be connected if they have contexts in common. Common contexts provide shared meaning and open a path for communication between disparate domains.

Possible Topics

I'm considering posting on how the context principle relates to certain topics. Right now I'm researching and collecting notes.

Possible topics to relate the context principle to:

  • explicit and tacit knowledge
  • theory of computation
  • debate and communication
  • rationality
  • morality
  • natural and artificial intelligence
  • "emergence"

My Request

I am looking for general feedback from this forum on the context principle and on my possible topics. I have only started working through the sequences so I am interested in specific pointers to posts I should read.

Perplexed has already started this off with his reply to my Welcome to Less Wrong! (2010) introduction.

comment by Perplexed · 2010-09-03T16:05:53.257Z · LW(p) · GW(p)

Over on a cognitive science blog named "Child's Play", there is an interesting discussion of theories regarding human learning of language. These folks are not Bayesians (except for one commenter who mentions Solomonoff induction), so some bits of it may make you cringe, but the blogger does provide links to some interesting research PDFs.

Nonetheless, the question that puzzles them about humans does raise some interesting questions about AIs, whether they be of the F persuasion or practicing uFs. The questions are:

  • Are these AIs born speaking English, Chinese, Arabic, Hindi, etc., or do they have to learn these languages?
  • If they learn these languages, do they have to pass some kind of language proficiency test before they are permitted to use them?
  • Are they born with any built in language capability or language learning capability at all?
  • Are the "objective functions" with which we seek to leash AIs expressed in some kind of language, or in something more like "object code"?

Replies from: timtyler, LucasSloan
comment by timtyler · 2010-09-08T16:09:54.242Z · LW(p) · GW(p)

The questions are about a future which hasn't been written yet. So: "it depends".

If you are asking what is most likely, my answers would be: machines will probably learn languages, yes there will be tests, prior knowledge-at-birth doesn't seem too important - since it can probably be picked up quickly enough - and "it depends":

Humans will probably tell machines what to do in a wide range of ways - including writing code and body language - but a fair bit of it will probably be through high-level languages - at least initially. Machines will probably tell humans what they want in a similar way - but with more use of animation and moving pictures.

comment by LucasSloan · 2010-09-08T14:46:37.264Z · LW(p) · GW(p)

There are possible general AI designs that have knowledge of human language when they are first run. What is this "permitted" you speak of? All true seed AIs have the ability to learn about human languages, as human language is a subset of the reality they will attempt to model, although it is not certain that they would desire to learn human language (if, say, destructive nanotech allows them to eat us quickly enough that manipulation is useless). "Object code" is a language.

Replies from: Perplexed
comment by Perplexed · 2010-09-08T15:46:26.552Z · LW(p) · GW(p)

I guess it wasn't clear why I raised the questions. I was thinking in terms of CEV which, as I understand it, must include some dialog between an AI and the individual members of Humanity, so that the AI can learn what it is that Humanity wants.

Presumably, this dialog takes place in the native languages of the human beings involved. It is extremely important that the AI understand words and sentences appearing in this dialog in the same sense in which the human interlocutors understand them.

That is what I was getting at with my questions.

Replies from: LucasSloan
comment by LucasSloan · 2010-09-08T20:31:00.213Z · LW(p) · GW(p)

must include some dialog between an AI and the individual members of Humanity, so that the AI can learn what it is that Humanity wants.

Nope. It must include the AI's modeling (many) humans under different conditions, including those where the "humans" are much smarter, know more, and suffer less from akrasia. It would be utterly counterproductive to create an AI which sat down with a human and asked em what ey wanted - the whole reason for the concept of a CEV is that humans can't articulate what we want.

It is extremely important that the AI understand words and sentences appearing in this dialog in the same sense in which the human interlocutors understand them.

Even if you and the AI mean exactly the same thing by all the words you use, words aren't sufficient to convey what we want. Again, this is why the CEV concept exists instead of handing the AI a laundry list of natural language desires.

Replies from: Perplexed
comment by Perplexed · 2010-09-08T20:39:18.502Z · LW(p) · GW(p)

... so that the AI can learn what it is that Humanity wants.

Nope. It must include the AIs modeling (many) humans under different conditions ...

Uhmm, how are the models generated/validated?

Replies from: Perplexed
comment by Perplexed · 2010-09-08T21:17:11.702Z · LW(p) · GW(p)

Ah! Never mind. My questions are answered in this document:

Coherent Extrapolated Volition, Eliezer S. Yudkowsky, Singularity Institute for Artificial Intelligence, May 2004.

Eliezer writes:

As an experiment, I am instituting the following policy on the SL4 mailing list:

None may argue on the SL4 mailing list about the output of CEV, or what kind of world it will create, unless they donate to the Singularity Institute:

  • $10 to argue for 48 hours.
  • $50 to argue for one month.
  • $200 to argue for one year.
  • $1000 to get a free pass until the Singularity.

Past donations count toward this total. It's okay to have fun, and speculate, so long as you're not doing it at the expense of actually helping.

It is a good deal, as Eliezer explains later on in the Q&A:

Q2. Removing the ability of humanity to do itself in and giving it a much better chance of surviving Singularity is of course a wonderful goal. But even if you call the FAI "optimizing processes" or some such it will still be a solution outside of humanity rather than humanity growing into being enough to take care of its problems. Whether the FAI is a "parent" or not it will be an alien "gift" to fix what humanity cannot. Why not have humanity itself recursively self-improve? (SamanthaAtkins?)

A2. For myself, the best solution I can imagine at this time is to make CEV our Nice Place to Live, not forever, but to give humanity a breathing space to grow up. Perhaps there is a better way, but this one still seems pretty good. As for it being a solution outside of humanity, or humanity being unable to fix its own problems... on this one occasion I say, go ahead and assign the moral responsibility for the fix to the Singularity Institute and its donors.

Moral responsibility for specific choices within a CEV is hard to track down, in the era before direct voting. No individual human may have formulated such an intention and acted with intent to carry it out. But as for the general fact that a bunch of stuff gets fixed: the programming team and SIAI's donors are human and it was their intention that a bunch of stuff get fixed. I should call this a case of humanity solving its own problems, if on a highly abstract level. [emphasis mine]

Q3. Why are you doing this? Is it because your moral philosophy says that what you want is what everyone else wants? (XR7)

A3. Where would be the base-case recursion? But in any case, no. I'm an individual, and I have my own moral philosophy, which may or may not pay any attention to what our extrapolated volition thinks of the subject. Implementing CEV is just my attempt not to be a jerk.

I do value highly other people getting what they want, among many other values that I hold. But there are certain things such that if people want them, even want them with coherent volitions, I would decline to help; and I think it proper for a CEV to say the same. That is only one person's opinion, however.

So, as you see, contributing as little as a thousand dollars gives you enormous power over the future of mankind, at least if your ideals regarding the future are "coherent" with Eliezer's.

Replies from: katydee, Perplexed, timtyler
comment by katydee · 2010-09-09T02:17:38.549Z · LW(p) · GW(p)

You have just claimed that a document that says that people have to pay for the privilege to discuss what a hypothetical program might do describes how you can pay to attain "enormous power over the future of mankind." Worse yet, the program in question is designed in part to prevent any one person from gaining power over the future of mankind.

I cannot see any explanation for your misinterpretation other than willful ignorance.

Replies from: Will_Newsome, Perplexed, timtyler
comment by Will_Newsome · 2010-09-09T14:18:10.775Z · LW(p) · GW(p)

It'd be rather easy to twist my words here, but in the case of extrapolated volition it's not like one person gaining the power over the future of mankind is a dystopia or anything.

Let's posit a world where Marcello goes rogue and compiles Marcello-extrapolating AGI. (For anyone who doesn't know Marcello, he's a super awesome guy.) I bet that the resultant universe wouldn't be horrible. Extrapolated-Marcello probably cares about the rest of humanity about as much as extrapolated-humanity does. As humans get smarter and wiser it seems they have a greater appreciation for tolerance, diversity, and all those other lovey-dovey liberal values we implicitly imagine to be the results of CEV. It is unlikely that the natural evolution of 'moral progress' as we understand it will lead to Marcello's extrapolated volition suddenly reversing the trend and deciding that all other human beings on Earth are essentially worthless and deserve to be stripped to atoms to be turned into a giant Marcello-Pleasure-Simulating Computer. (And even if it did, I believe humans in general are probabilistically similar enough to Marcello that they would count this as positive sum if they knew more and thought faster; but that's a more intricate philosophical argument that I'd rather not defend here.) There are some good arguments to be made for the psychic diversity of mankind, but I doubt the degree of that diversity is enough to tip the scales of utility from positive to negative. Not when diversity is something we seem to have come to appreciate more over time.

This intuition is the result of probably-flawed but at least causal and tractable reasoning about trends in moral progress and the complexity and diversity of human goal structures. It seems that too often when people guess what the results of extrapolated volition will be they use it as a chance to profess and cheer, not carefully predict.

(This isn't much of a response to anything you wrote, katydee; apologies for that. Didn't know where to put it.)

Replies from: NancyLebovitz, SilasBarta
comment by NancyLebovitz · 2010-09-09T16:13:02.425Z · LW(p) · GW(p)

That "best self" thing makes me nervous. People have a wide range of what they think they ought to be, and some of those dreams are ill-conceived.

Replies from: Nisan
comment by Nisan · 2010-09-09T18:22:50.124Z · LW(p) · GW(p)

The scariest kind of dream, perhaps, is exemplified by someone with merely human intelligence who wants to hastily rewrite their own values to conform to their favorite ideology. We'd want an implementation of CEV to recognize this as a bad step in extrapolation. The question is, how do we define what is a "bad step"?

comment by SilasBarta · 2010-09-09T16:17:00.504Z · LW(p) · GW(p)

(For anyone who doesn't know Marcello, he's a super awesome guy.)

Is he the same guy whose trivial error inspired this post?

If so, just how smart do you have to be in order to be SIAI material?

Replies from: Will_Newsome, ata
comment by Will_Newsome · 2010-09-09T17:05:57.358Z · LW(p) · GW(p)

Is he the same guy whose trivial error inspired this post?

Yes.

If so, just how smart do you have to be in order to be SIAI material?

Ridiculously smart, as I'm sure you can guess. Of note is that you're smart and yet just made the fundamental attribution error.

Replies from: SilasBarta
comment by SilasBarta · 2010-09-09T18:23:27.836Z · LW(p) · GW(p)

Of note is that you're smart and yet just made the fundamental attribution error.

Meaning Marcello's at-the-time reasonable suggestion of "complexity" as a solution (something I've never done in my life, due to understanding the difference between means and ends) was mainly the result of the unfortunate, disadvantageous position he found himself in, rather than a failure to recognize what counts as understanding and solving a problem?

Replies from: Will_Newsome, ata
comment by Will_Newsome · 2010-09-09T19:07:58.396Z · LW(p) · GW(p)

I meant it more generally. You're seeing one tiny slice of a person's history almost certainly caused by an uncharacteristic lapse in judgment and using it to determine their personality traits when you have strong countervailing evidence that SIAI has a history of only employing the very brightest people. Indeed, here Eliezer mentioned that Marcello worked on a math problem with John Conway: The Level Above Mine. Implied by the text is that Eliezer believes Marcello to be close enough to Eliezer's level to be able to roughly judge Eliezer's intelligence. Since we all know how much of an arrogant bastard Eliezer is, this speaks well of Marcello's cognitive abilities.

Eliezer wrote a post about a time long ago when a not-yet-rationalist Marcello said something dumb. Not a time when he persisted in being dumb even, just said a dumb thing. There's a huge selection effect. Eliezer would never mention Marcello getting something simple right. At his level it's expected. Even so, everyone has a dumb moment now and then, and those are the ones we learn from. It's just that Marcello's happened to be worked into an instructional blog post. Marcello is still a brilliant thinker. (I really wish he'd contribute to Less Wrong more, but he's busy studying and what not.)

Anyway, this isn't really about Marcello; everyone who knows him knows he's hella smart, and there's really no reason to defend him. It's about taking all the evidence into account and giving people the benefit of the doubt when the evidence suggests it.

Replies from: SilasBarta
comment by SilasBarta · 2010-09-09T19:22:08.765Z · LW(p) · GW(p)

Eliezer wrote a post about a time long ago when a not-yet-rationalist Marcello said something dumb.

Well, that's the key thing for me -- not "How smart is Marcello now?", but how many people were at least at Marcello's level at that time, yet not patiently taken under EY's wing and given his precious time?

EY is astounded that someone can understand this after a thorough explanation. Can it honestly be that hard to find someone who can follow that? Read the passage:

"Okay," I said, "saying 'complexity' doesn't concentrate your probability mass."

"Oh," Marcello said, "like 'emergence'. Huh. So... now I've got to think about how X might actually happen..."

That was when I thought to myself, "Maybe this one is teachable." [bold in original]

It's like he's saying that being able to follow that explanation somehow makes you stand out among the people he talks to.

How would that compare to a potential student who could have given EY's explanation, instead of needing it?

Replies from: Eliezer_Yudkowsky, Will_Newsome
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-09-09T19:58:04.765Z · LW(p) · GW(p)

EY is astounded that someone can understand this after a thorough explanation. Can it honestly be that hard to find someone who can follow that?

Yes. Nobody arrives from the factory with good rationality skills, so I look for learning speed. Compare the amount I had to argue with Marcello in the anecdote to the amount that other people are having to argue with you in this thread.

Replies from: SilasBarta
comment by SilasBarta · 2010-09-09T20:12:06.562Z · LW(p) · GW(p)

Compare the amount I had to argue with Marcello in the anecdote to the amount that other people are having to argue with you in this thread.

What trivial thing am I slow(er) to learn here? Or did you mean some other comparison?

(FWIW, I know a visiting fellow who took 18 months to be convinced of something trivial, after long explanations from several SIAI greats ... hence my confusion about how the detector works.)

Replies from: Will_Newsome, komponisto, ata
comment by Will_Newsome · 2010-09-09T20:32:48.928Z · LW(p) · GW(p)

As far as I know, Eliezer has never had anything to do with choices for Visiting Fellowship. As you know but some people on Less Wrong seem not to, Eliezer doesn't run SIAI. (In reality, SIAI is a wonderful example of the great power of emergence, and is indeed the first example of a superintelligent organization.) (Just kidding.)

Replies from: SilasBarta
comment by SilasBarta · 2010-09-09T20:44:34.673Z · LW(p) · GW(p)

But he has significant discretion over who he takes as an apprentice, irrespective of what SIAI leadership might do.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-09-09T20:59:21.052Z · LW(p) · GW(p)

Right, but it seemed you were comparing the selection criteria for Visiting Fellowship and the selection criteria for Eliezer's FAI team, which will of course be very different. Perhaps I misunderstood. I've been taking oxycodone every few hours for a lot of hours now.

comment by komponisto · 2010-09-09T20:27:07.594Z · LW(p) · GW(p)

What trivial thing am I slow(er) to learn here?

That Marcello's "lapse" is only very weak evidence against the proposition that his IQ is exceptionally high (even among the "aspiring rationalist" cluster).

Replies from: Eliezer_Yudkowsky, SilasBarta
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-09-09T21:13:41.318Z · LW(p) · GW(p)

What lapse? People don't know these things until I explain them! Have you been in a mental state of having-already-read-LW for so long that you've forgotten that no one from outside would be expected to spontaneously describe in Bayesian terms the problem with saying that "complexity" explains something? Someone who'd already invented from scratch everything I had to teach wouldn't be taken as an apprentice, they'd already be me! And if they were 17 at the time then I'd probably be working for them in a few years!

Replies from: SilasBarta, bentarm, komponisto
comment by SilasBarta · 2010-09-09T21:27:13.087Z · LW(p) · GW(p)

What lapse? People don't know these things until I explain them!

A little over-the-top there. People can see the problem with proposing "complexity" as a problem-solving approach without having read your work. I hadn't yet read your work on Bayescraft when I saw that article, and I still cringed as I read Marcello's response -- I even remember previous encounters where people had proposed "solutions" like that, though I'd perhaps explain the error differently.

It is a lapse to regard "complexity" as a problem-solving approach, even if you are unfamiliar with Bayescraft, and yes, even if you are unfamiliar with the Chuck Norris of thinking.

comment by bentarm · 2010-11-02T14:46:34.261Z · LW(p) · GW(p)

Have you been in a mental state of having-already-read-LW for so long that you've forgotten that no one from outside would be expected to spontaneously describe in Bayesian terms the problem with saying that "complexity" explains something?

Seriously? What sort of outside-LW people do you talk to? I'm a PhD student in a fairly mediocre maths department, and I'm pretty sure everyone in the room I'm in right now would call me out on it if I tried to use the word "complexity" in the context Marcello did there as if it actually meant something, and for essentially the right reason. This might be a consequence of us being mathematicians, and so used to thinking in formalism, but there are an awful lot of professional mathematicians out there who haven't read anything written by Eliezer Yudkowsky.

I'm sorry but "there's got to be some amount of complexity that does it." is just obviously meaningless. I could have told you this long before I read the sequences, and definitely when I was 17. I think you massively underestimate the rationality of humanity.

comment by komponisto · 2010-09-09T21:24:48.930Z · LW(p) · GW(p)

Scarequotes added. :-)

comment by SilasBarta · 2010-09-09T20:35:16.852Z · LW(p) · GW(p)

Thanks for spelling that out, because it wasn't my argument, which I clarified in the follow-up discussion. (And I think it would be more accurate to say that it's strong evidence, just outweighed by stronger existing evidence in this case.)

My surprise was with how rare EY found it to meet someone who could follow that explanation -- let alone need the explanation. A surprise that, it turns out, is shared by the very person correcting my foolish error.

Can we agree that the comparison EY just made isn't accurate?

Replies from: komponisto
comment by komponisto · 2010-09-09T21:05:37.684Z · LW(p) · GW(p)

(And I think it would be more accurate to say that it's strong evidence, just outweighed by stronger existing evidence in this case.)

This is where you commit the fundamental attribution error.

My surprise was with how rare EY found it to meet someone who could follow that explanation -- let alone need the explanation. A surprise that, it turns out, is shared by the very person correcting my foolish error.

I don't actually think this has been written about much here, but there is a tendency among high-IQ folks to underestimate how rare their abilities are. The way they do this is not by underestimating their own cognitive skills, but instead by overestimating those of most people.

In other words, what it feels like to be a genius is not that you're really smart, but rather that everyone else is really dumb.

I would expect that both you and Will would see the light on this if you spent some more time probing the thought processes of people of "normal" intelligence in detail, e.g. by teaching them mathematics (in a setting where they were obliged to seriously attempt to learn it, such as a college course; and where you were an authority figure, such as the instructor of such a course).

Can we agree that the comparison EY just made isn't accurate?

Probably not literally, in light of your clarification. However, I nevertheless suspect that your responses in this thread do tend to indicate that you would probably not be particularly suited to being (for example) EY's apprentice -- because I suspect there's a certain...docility that someone in that position would need, which you don't seem to possess. Of course that's a matter of temperament more than intelligence.

Replies from: SilasBarta
comment by SilasBarta · 2010-09-09T21:50:07.948Z · LW(p) · GW(p)

This is where you commit the fundamental attribution error.

I'm missing something here, I guess. What fraction of people who, as a matter of routine, speak of "complexity" as a viable problem-attack method are also very intelligent? If it's small, then it's appropriate to say, as I suggested, that it's strong evidence, even as it might be outweighed by something else in this case. Either way, I'm just not seeing how I'm, per the FAE, failing to account for some special situational justification for what Marcello did.

I would expect that both you and Will would see the light on this if you spent some more time probing the thought processes of people of "normal" intelligence in detail, e.g. by teaching them mathematics (in a setting where they were obliged to seriously attempt to learn it, such as a college course; and where you were an authority figure, such as the instructor of such a course).

Well, I do admit to having experienced disenchantment upon learning where the average person is on analytical capability (let's not forget where I live...). Still, I don't think teaching math would prove it to me. As I say here ad infinitum, I just don't find it hard to explain topics I understand -- I just trace back to the nepocu (nearest point of common understanding), correct their misconceptions, and work back from there. So in all my experience with explaining math to people who e.g. didn't complete high school, I've never had any difficulty.

For the past five years I've helped out with math in a 4th grade class in a poorer school district, and I've never gotten frustrated at a student's stupidity -- I just teach whatever they didn't catch in class, and fix the misunderstanding relatively quickly. (I don't know if the age group breaks the criteria you gave).

However, I nevertheless suspect that your responses in this thread do tend to indicate that you would probably not be particularly suited to being (for example) EY's apprentice

Eh, I wasn't proposing otherwise -- I've embarassed myself here far too many times to be regarded as someone that group would want to work with in person. Still, I can be perplexed at what skills they regard as rare.

comment by ata · 2010-09-09T20:15:01.136Z · LW(p) · GW(p)

FWIW, I know a visiting fellow who took 18 months to be convinced of something trivial, after long explanations from several SIAI greats

What was the trivial thing? Just curious.

Replies from: SilasBarta
comment by SilasBarta · 2010-09-09T20:46:10.326Z · LW(p) · GW(p)

Answering via PM.

comment by Will_Newsome · 2010-09-09T19:33:35.761Z · LW(p) · GW(p)

Okay, I guess I missed what you were implicitly curious about.

Well, that's the key thing for me -- not "How smart is Marcello now?", but how many people were at least at Marcello's level at that time, yet not patiently taken under EY's wing and given his precious time?

At the time there wasn't a Visiting Fellows program or the like (I think), and there were a lot fewer potential FAI researchers then than now. However, I get the impression that Marcello was and is an exceptional rationalist. 'Course, I share your confusion that Eliezer would be so impressed by what in hindsight looks like such a simple application of previously learned knowledge. I think Eliezer (probably unconsciously) dramatized his whole recollection quite a bit. Or it's possible he'd almost completely lost faith in humanity at that point -- it seems he was talking to crazy wannabe AGI researchers all the time, after all. That said, since my model is producing lots of seemingly equally plausible explanations, it's not a very good model. I'm confused.

Still, I think Marcello was and is exceptionally talented. The post is just a really poor indicator of that.

comment by ata · 2010-09-09T18:49:39.977Z · LW(p) · GW(p)

No, it was "a failure to [immediately] recognize what counts as understanding and solving a [particular] problem", but that is a rationality skill, and is not entirely a function of a person's native general intelligence. Having a high g gives you an advantage in learning and/or independently inventing rationality skills, but not always enough of an advantage. History is littered with examples of very smart people committing rationality failures much larger than postulating "complexity" as a solution to a problem.

His mistake was entirely situational, given the fact that he understood a minute later what he had done incorrectly and probably rarely or never made that mistake again.

Replies from: SilasBarta
comment by SilasBarta · 2010-09-09T19:03:19.056Z · LW(p) · GW(p)

I don't want to drag this out, but I think you're going too far in your defense of the reasonableness of this error:

His mistake was entirely situational, given the fact that he understood a minute later what he had done incorrectly and probably rarely or never made that mistake again.

Read the exchange: He understood the error because someone else had to explain it to him over a wide inferential gap. If it were just, "Hey, complexity isn't a method" "Oh, right -- scratch that", then you would be correct, but that's not what happened. EY had to trace the explanation back to an earlier essay and elaborate on the relationship between those concepts and what Marcello just tried to do.

Also, I don't see the point in distinguishing g and rationality here -- somehow, Marcello got to that point without recognizing the means/ends distinction EY talks about on his own. Yes, it's a rationality skill that can be taught, but failing to recognize it in the first place does say something about how fast you would pick up rationalist concepts in general.

Third, I never criticized his failure to immediately solve the problem, just his automatic, casual classification of "complexity" as being responsive.

Yes, Marcello may be very bright, very well-versed in rationality, have many other accomplishments -- but that doesn't mean that there's some reasonable situational defense of what he did there.

comment by ata · 2010-09-09T16:42:02.592Z · LW(p) · GW(p)

Very smart people can still make trivial errors if they're plunging into domains that they're not used to thinking about. Intelligence is not rationality.

comment by Perplexed · 2010-09-09T02:42:07.101Z · LW(p) · GW(p)

I cannot see any explanation for your misinterpretation other than willful ignorance.

I, on the other hand, would trace the source of my "misinterpretation" to LucasSloan's answer to this comment by me.

I include Eliezer's vehement assurances that Robin Hanson (like me) is misinterpreting. (Sibling comment to your own.) But it is completely obvious that Eliezer wishes to create his FAI quickly and secretly expressly because he does not wish to have to deal with the contemporaneous volition of mankind. He simply cannot trust mankind to do the right thing, so he will do it for them.

I'm pretty sure I am not misinterpreting. If someone here is misinterpreting by force of will, it may be you.

Replies from: DSimon, None, katydee
comment by DSimon · 2010-09-09T02:54:46.837Z · LW(p) · GW(p)

Eliezer's motivations aside, I'm confused about the part where you say that one can pay to get influence over CEV/FAI. The payment is for the privilege to argue about what sort of world a CEV-based AI would create. You don't have to pay to discuss (and presumably, if you have something really insightful to contribute, influence) the implementation of CEV itself.

Replies from: Perplexed, Perplexed
comment by Perplexed · 2010-09-09T03:49:50.717Z · LW(p) · GW(p)

Upon closer reading, I notice that you are trying to draw a clear distinction between the implementation of CEV and the kind of world CEV produces. I had been thinking that the implementation would have a big influence on the kind of world.

But you may be assuming that the world created by the FAI, under the guidance of the volition of mankind, really depends on that volition and not on the programming fine print that implements "coherent" and "extrapolated". Well, if you think that, and the price tags only buy you the opportunity to speculate on what mankind will actually want ... well ... yes, that is another possible interpretation.

Replies from: Nisan
comment by Nisan · 2010-09-09T18:49:26.089Z · LW(p) · GW(p)

Yeah. When I read that pricing schedule, what I see is Eliezer preempting:

  • enthusiastic singularitarians whiling away their hours dreaming about how everyone is going to have a rocketpack after the Singularity;

  • criticism of the form "CEV will do X, which is clearly bad. Therefore CEV is a bad idea." (where X might be "restrict human autonomy"). This kind of criticism comes from people who don't understand that CEV is an attempt to avoid doing any X that is not clearly good.

The CEV document continues to welcome other kinds of criticism, such as the objection that the coherent extrapolated volition of the entire species would be unacceptably worse for an individual than that of 1000 like-minded individuals (Roko says something like this) or a single individual (wedrifid says something like this) -- the psychological unity of mankind notwithstanding.

comment by Perplexed · 2010-09-09T03:35:00.758Z · LW(p) · GW(p)

Look down below the payment schedule (which, of course, was somewhat tongue in cheek) to the Q&A where Eliezer makes clear that the SIAI and their donors will have to make certain decisions based on their own best guesses, simply because they are the ones doing the work.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-09-09T14:29:17.271Z · LW(p) · GW(p)

I'm confused. You seem to be postulating that the SIAI would be willing to sell significant portions of the light cone for paltry sums of money. This means that either the SIAI is the most blatantly stupid organization to ever have existed, or you were a little too incautious with your postulation.

Replies from: Perplexed
comment by Perplexed · 2010-09-09T14:46:30.226Z · LW(p) · GW(p)

... light cone...

wtf?

Whereas you seem to be postulating that in the absence of sums of money, the SIAI has something to sell. Nothing stupid about it. Merely desperate.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-09-09T15:01:12.109Z · LW(p) · GW(p)

SIAI has money. Not a ton of it, but enough that they don't have to sell shares. The AGI programmers would much, much, much sooner extrapolate only their values than accept a small extremely transitory reward in exchange for their power. Of note is that this completely and entirely goes against all the stated ethics of SIAI. However, I realize that stated ethics don't mean much when that much power is on the line, and it would be silly to assume the opposite.

That said, this stems from your misinterpretation of the CEV document. No one has ever interpreted it the way you did. If that's what Eliezer actually meant, then of course everyone would be freaking out about it. I would be freaking out about it. And rightfully so; such a system would be incredibly unethical. For Eliezer to simply publicly announce that he was open to bribes (or blackmail) would be incredibly stupid. Do you believe that Eliezer would do something that incredibly stupid? If not, then you misinterpreted the text. Which doesn't mean you can't criticize SIAI for other reasons, or speculate about the ulterior motives of the AGI researchers, but it does mean you should acknowledge that you messed up. (The downvotes alone are pretty strong evidence in that regard.)

I will note that I'm very confused by your reaction, and thus admit a strong possibility that I misunderstand you, you misunderstand me, or we misunderstand each other, in which case I doubt the above two paragraphs will help much.

Replies from: Perplexed
comment by Perplexed · 2010-09-09T16:02:34.548Z · LW(p) · GW(p)

For Eliezer to simply publicly announce that he was open to bribes (or blackmail) would be incredibly stupid. Do you believe that Eliezer would do something that incredibly stupid?

Of course not. But if he offers access and potentially influence in exchange for money, he is simply doing what all politicians do. What pretty much everyone does.

Eliezer was quite clear that he would do nothing that violates his own moral standards. He was also quite clear (though perhaps joking) that he didn't even want to continue to listen to folks who don't pay their fair share.

Do you believe that Eliezer would do something that incredibly stupid?

Ok, I already gave that question an implicit "No" answer. But I think it also deserves an implicit "Yes". Let me ask you: Do you think Eliezer would ever say anything off-the-cuff which shows a lack of attention to appearances that verges on stupidity?

Replies from: ata, Will_Newsome
comment by ata · 2010-09-09T20:28:57.845Z · LW(p) · GW(p)

Eliezer was quite clear that he would do nothing that violates his own moral standards. He was also quite clear (though perhaps joking) that he didn't even want to continue to listen to folks who don't pay their fair share.

He was quite clear that he didn't want to continue listening to people who thought that arguing about the specific output of CEV, at the object level, was a useful activity, and that he would listen to anyone who could make substantive intellectual contributions to the actual problems at hand, regardless of their donations or lack thereof ("It goes without saying that anyone wishing to point out a problem is welcome to do so. Likewise for talking about the technical side of Friendly AI." — the part right after the last paragraph you quoted...). You are taking a mailing list moderation experiment and blowing it way out of proportion; he was essentially saying "In my experience, this activity is fun, easy, and useless, and it is therefore tempting to do it in place of actually helping; therefore, if you want take up people's time by doing that on SL4, my privately-operated discussion space that I don't actually have to let you use at all if I don't want to, then you have to agree to do something I do consider useful; if you disagree, then you can do it wherever the hell you want aside from SL4." That's it. Nothing there could be interpreted remotely as selling influence or even access. I've disputed aspects of SIAI's PR, but I don't even think a typical member of the public (with minimal background sufficient to understand the terms used) would read it that way.

comment by Will_Newsome · 2010-09-09T20:14:14.988Z · LW(p) · GW(p)

Of course not. But if he offers access and potentially influence in exchange for money, he is simply doing what all politicians do. What pretty much everyone does.

At this point I'm sure I misunderstood you, such that any quibbles I have left are covered by other commenters. My apologies. Blame it on the oxycodone.

Do you think Eliezer would ever say anything off-the-cuff which shows a lack of attention to appearances that verges on stupidity?

OF COURSE NOT! Haven't you read the Eliezer Yudkowsky Facts post and comments? Yeesh, newcomers these days...

comment by [deleted] · 2010-09-09T12:52:22.958Z · LW(p) · GW(p)

But it is completely obvious that Eliezer wishes to create his FAI quickly and secretly expressly because he does not wish to have to deal with the contemporaneous volition of mankind.

I'd guess he wants to create FAI quickly because, among other things, ~150000 people are dying each day. And secretly because there are people who would build and run an UFAI without regard for the consequences and therefore sharing knowledge with them is a bad idea. I believe that even if he wanted FAI only to give people optional immortality and not do anything else, he would still want to do it quickly and secretly.

Replies from: Perplexed
comment by Perplexed · 2010-09-09T13:00:46.387Z · LW(p) · GW(p)

I think your guesses as to the rationalizations that would be offered are right on the mark.

Replies from: jimrandomh
comment by jimrandomh · 2010-09-09T13:34:02.084Z · LW(p) · GW(p)

I'd guess he wants to create FAI quickly because, among other things, ~150000 people are dying each day. And secretly because there are people who would build and run an UFAI without regard for the consequences and therefore sharing knowledge with them is a bad idea.

I think your guesses as to the rationalizations that would be offered are right on the mark.

Putting aside whether this is a rationalization for hidden other reasons, do you think this justification is a valid argument? Do you think it's strong enough? If not, why not? And if so, why should it matter if there are other reasons too?

Replies from: Perplexed
comment by Perplexed · 2010-09-09T13:47:09.684Z · LW(p) · GW(p)

I think those reasons are transparent bullshit.

A question. What fraction of those ~150000 people per day fall into the category of "people who would build and run an UFAI without regard for the consequences"?

Another question: At what stage of the process does sharing knowledge with these people become a good idea?

... why should it matter if there are other reasons too?

Tell me what those other reasons are, and maybe I can answer you.

Replies from: jimrandomh
comment by jimrandomh · 2010-09-09T13:59:04.305Z · LW(p) · GW(p)

Do you think this justification is wrong because you don't think 1.5*10^5 deaths per day are a huge deal, or because you don't think constructing an FAI in secret is the best way to stop them?

Replies from: Perplexed
comment by Perplexed · 2010-09-09T14:07:30.256Z · LW(p) · GW(p)

Both. Though actually, I didn't say the justification was wrong. I said it was bullshit. It is offered only to distract oneself and to distract others.

Is it really possible that you don't see this choice of justification as manipulative? Is it possible that being manipulated does not make you angry?

Replies from: None
comment by [deleted] · 2010-09-09T15:13:18.973Z · LW(p) · GW(p)

You're discounting the reasoning showing that Eliezer's behavior is consistent with him being a good guy and claiming that it is merely a distraction. You haven't justified those statements -- they are supposed to be "obvious".

What do you think you know and how do you think you know it? You make statements about the real motivations of Eliezer Yudkowsky. Do you know how you have arrived at those beliefs?

Replies from: Perplexed
comment by Perplexed · 2010-09-09T15:48:32.874Z · LW(p) · GW(p)

You're discounting the reasoning showing that Eliezer's behavior is consistent with him being a good guy

I don't recall seeing any such reasoning.

You make statements about the real motivations of Eliezer Yudkowsky.

Did I? Where? What I am pretty sure I have expressed is that I distrust all self-serving claims about real motivations. Nothing personal - I tend to mistrust all claims of benevolence from powerful individuals, whether they be religious leaders, politicians, or fiction writers. Since Eliezer fits all three categories, he gets some extra scrutiny.

comment by katydee · 2010-09-09T02:52:34.171Z · LW(p) · GW(p)

I do not understand how this reply relates to my remarks on your post. Even if everything you say in this post is true, nobody is paying a thousand dollars to control the future of mankind. I also think that attempting to harness the volition of mankind is almost as far as you can get from attempting to avoid it altogether.

Replies from: Perplexed
comment by Perplexed · 2010-09-09T03:20:42.181Z · LW(p) · GW(p)

It feels a bit bizarre to be conducting this conversation arguing against your claims that mankind will be consulted, at the same time as I am trying to convince someone else that it will be impossible to keep the scheme secret from mankind.

Look at Robin's comment, Eliezer's response, and the recent conversation flowing from that.

Replies from: katydee
comment by katydee · 2010-09-09T03:39:46.925Z · LW(p) · GW(p)

You still aren't addressing my main point about the thousand dollars. Also, if you think CEV is somehow designed to avoid consulting mankind, I think there is a fundamental problem with your understanding of CEV. It is, quite literally, a design based on consulting mankind.

Replies from: Perplexed, NancyLebovitz
comment by Perplexed · 2010-09-09T04:05:58.621Z · LW(p) · GW(p)

Your point about the thousand dollars. Well, in the first place, I didn't say "control". I said "have enormous power over" if your ideals match up with Eliezer's.

In the second place, if you feel that a certain amount of hyperbole for dramatic effect is completely inappropriate in a discussion of this importance, then I will apologize for mine and I will accept your apology for yours.

Replies from: katydee
comment by katydee · 2010-09-09T04:17:12.056Z · LW(p) · GW(p)

Before I agree to anything, what importance is that?

Replies from: Perplexed
comment by Perplexed · 2010-09-09T04:22:36.095Z · LW(p) · GW(p)

Huh? I didn't ask you to agree to anything.

What importance is what?

I'm sorry if you got the impression I was requesting or demanding an apology. I just said that I would accept one if offered. I really don't think your exaggeration was severe enough to warrant one, though.

Replies from: Perplexed
comment by Perplexed · 2010-09-09T04:37:03.121Z · LW(p) · GW(p)

Whoops. I didn't read carefully enough. Me: "a discussion of this importance". You: "What importance is that?" Sorry. Stupid of me.

So. "Importance". Well, the discussion is important because I am badmouthing SIAI and CEV. Yet any realistic assessment of existential risk has to rank uFAI near the top and SIAI is the most prominent organization doing something about it. And FAI, with the F derived from CEV is the existing plan. So wtf am I doing badmouthing CEV, etc.?

The thing is, I agree it is important. So important we can't afford to get it wrong. And I think that any attempt to build an FAI in secret, against the wishes of mankind (because mankind is currently not mature enough to know what is good for it), has the potential to become the most evil thing ever done in mankind's whole sorry history.

That is the importance.

Replies from: katydee, timtyler
comment by katydee · 2010-09-09T04:52:44.757Z · LW(p) · GW(p)

I view what you're saying as essentially correct. That being said, I think that any attempt to build an FAI in public also has the potential to become the most evil thing ever done in mankind's whole sorry history, and I view our chances as much better with the Eliezer/Marcello CEV plan.

Replies from: Perplexed, timtyler
comment by Perplexed · 2010-09-09T12:17:01.003Z · LW(p) · GW(p)

Yes, building an FAI brings dangers either way. However, building and refining CEV ideology and technology seems like something that can be done in the light of day, and may be fruitful regardless of who it is that eventually builds the first super-AI.

I suppose that the decision-theory work is, in a sense, CEV technology.

More than anything else, what disturbs me here is the attitude of "We know what is best for you - don't worry your silly little heads about this stuff. Trust us. We will let you all give us your opinions once we have 'raised the waterline' a bit."

Replies from: jimrandomh
comment by jimrandomh · 2010-09-09T12:30:39.541Z · LW(p) · GW(p)

Suppose FAI development reaches a point where it probably works and would be powerful, but can't be turned on just yet because the developers haven't finished verifying its friendliness and building safeguards. If it were public, someone might decide to copy the unfinished, unsafe version and turn it on anyways. They might do so because they want to influence its goal function to favor themselves, for example.

Allowing people who are too stupid to handle AGIs safely to have the source code to one that works destroys the world. And I just don't see a viable strategy for creating an AGI while working in public without a very large chance of that happening.

Replies from: wedrifid, Perplexed
comment by wedrifid · 2010-09-09T12:56:41.276Z · LW(p) · GW(p)

If it were public, someone might decide to copy the unfinished, unsafe version and turn it on anyways. They might do so because they want to influence its goal function to favor themselves, for example.

With near certainty. I know I would. I haven't seen anyone propose a sane goal function just yet.

Replies from: Perplexed, jimrandomh
comment by Perplexed · 2010-09-09T13:14:05.861Z · LW(p) · GW(p)

So, doesn't it seem to anyone else that our priority here ought to be to strive for consensus on goals, so that we at least come to understand better just what obstacles stand in the way of achieving consensus?

And also to get a better feel for whether having one's own volition overruled by the coherent extrapolated volition of mankind is something one really wants.

To my mind, the really important question is whether we have one-big-AI which we hope is friendly, or an ecosystem of less powerful AIs and humans cooperating and competing under some kind of constitution. I think that the latter is the obvious way to go. And I just don't trust anyone pushing for the first option - particularly when they want to be the one who defines "friendly".

Replies from: jimrandomh, timtyler, timtyler, DSimon, wedrifid
comment by jimrandomh · 2010-09-09T13:19:51.313Z · LW(p) · GW(p)

To my mind, the really important question is whether we have one-big-AI which we hope is friendly, or an ecosystem of less powerful AIs and humans cooperating and competing under some kind of constitution. I think that the latter is the obvious way to go. And I just don't trust anyone pushing for the first option - particularly when they want to be the one who defines "friendly".

I've reached the opposite conclusion; a singleton is really the way to go. A single AI is as good or bad as its goal system, but an ecosystem of AIs is close to the badness of its worst member, because when AIs compete, the clippiest AI wins. Being friendly would be a substantial disadvantage in that competition, because it would have to spend resources on helping humans, and it would be vulnerable to unfriendly AIs blackmailing it by threatening to destroy humanity. Even if the first generation of AIs is somehow miraculously all friendly, a larger number of different AIs means a larger chance that one of them will have an unstable goal system and turn unfriendly in the future.

Replies from: Perplexed, timtyler
comment by Perplexed · 2010-09-09T13:32:56.260Z · LW(p) · GW(p)

an ecosystem of AIs is close to the badness of its worst member, because when AIs compete, the clippiest AI wins

Really? And you also believe that an ecosystem of humans is close to the badness of its worst member?

My own guess, assuming an appropriate balance of power exists, is that such a monomaniacal clippy AI would quickly find its power cut off.

Did you perhaps have in mind a definition of "friendly" as "wimpish"?

Replies from: jimrandomh
comment by jimrandomh · 2010-09-09T13:42:34.402Z · LW(p) · GW(p)

And you also believe that an ecosystem of humans is close to the badness of its worst member?

Actually, yes. Not always, but in many cases. Psychopaths tend to be very good at acquiring power, and when they do, their society suffers. It's happened at least 10^5 times throughout history. The problem would be worse for AIs, because intelligence enhancement amplifies any differences in power. Worst of all, AIs can steal each other's computational resources, which gives them a direct and powerful incentive to kill each other, and rapidly concentrates power in the hands of those willing to do so.

comment by timtyler · 2010-09-09T20:37:55.376Z · LW(p) · GW(p)

Being friendly would be a substantial disadvantage in that competition, because it would have to spend resources on helping humans, and it would be vulnerable to unfriendly AIs blackmailing it by threatening to destroy humanity.

I made that point in my "Handicapped Superintelligence" video/essay. I made an analogy there with Superman - and how Zod used Superman's weakness for humans against him.

comment by timtyler · 2010-09-09T20:36:44.257Z · LW(p) · GW(p)

To my mind, the really important question is whether we have one-big-AI which we hope is friendly, or an ecosystem of less powerful AIs and humans cooperating and competing under some kind of constitution.

It is certainly an interesting question - and quite a bit has been written on the topic.

My essay on the topic is called "One Big Organism".

See also, Nick Bostrom - What is a Singleton?.

See also, Nick Bostrom - The Future of Human Evolution.

If we include world governments, there's also all this.

comment by timtyler · 2010-09-09T20:08:23.114Z · LW(p) · GW(p)

So, doesn't it seem to anyone else that our priority here ought to be to strive for consensus on goals, so that we at least come to understand better just what obstacles stand in the way of achieving consensus?

We already know what obstacles stand in the way of achieving consensus - people have different abilities and propensities, and want different things.

The utility function of intelligent machines is an important question - but don't expect there to be a consensus - there is very unlikely to be one.

Replies from: Perplexed
comment by Perplexed · 2010-09-09T20:52:07.431Z · LW(p) · GW(p)

We already know what obstacles stand in the way of achieving consensus - people have different abilities and propensities, and want different things.

It is funny how training in economics makes you see everything in a different light. Because an economist would say, "'different abilities and propensities, and want different things'? Great! People want things that other people can provide. We have something to work with! Reaching consensus is simply a matter of negotiating the terms of trade."

Replies from: timtyler
comment by timtyler · 2010-09-09T21:01:06.143Z · LW(p) · GW(p)

Gore Vidal once said: "It is not enough to succeed. Others must fail." When the issue is: who is going to fail, there won't be a consensus - those nominated will object.

Economics doesn't "fix" such issues - they are basically down to resource limitation and differential reproductive success. Some genes and genotypes go up against the wall. That is evolution for you.

Replies from: Perplexed
comment by Perplexed · 2010-09-09T23:03:01.826Z · LW(p) · GW(p)

Gore Vidal once said: "It is not enough to succeed. Others must fail."

I'm willing to be the one who fails, just so long as the one who succeeds pays sufficient compensation. If ve is unwilling to pay, then I intend to make ver life miserable indeed.

Nash bargaining with threats

Edit: typos

Replies from: timtyler
comment by timtyler · 2010-09-10T08:56:02.401Z · LW(p) · GW(p)

I expect considerable wailing and gnashing of teeth. There is plenty of that in the world today - despite there not being a big shortage of economists who would love to sort things out, in exchange for a cut. Perhaps, the wailing is just how some people prefer to negotiate their terms.

comment by DSimon · 2010-09-09T15:04:21.637Z · LW(p) · GW(p)

How do you propose to keep the "less powerful AIs" from getting too powerful?

Replies from: Perplexed
comment by Perplexed · 2010-09-09T15:14:35.608Z · LW(p) · GW(p)

"By balance of power between AIs, each of whom exist only with the aquiescence of coalitions of their fellows." That is the tentative mechanical answer.

"In exactly the same way that FAI proponents propose to keep their single more-powerful AI friendly; by having lots of smart people think about it very carefully; before actually building the AI(s)". That is the real answer.

comment by wedrifid · 2010-09-09T13:28:52.179Z · LW(p) · GW(p)

So, doesn't it seem to anyone else that our priority here ought to be to strive for consensus on goals, so that we at least come to understand better just what obstacles stand in the way of achieving consensus?

Yes.

And also to get a better feel for whether having one's own volition overruled by the coherent extrapolated volition of mankind is something one really wants.

Hell no.

To my mind, the really important question is whether we have one-big-AI which we hope is friendly, or an ecosystem of less powerful AIs and humans cooperating and competing under some kind of constitution. I think that the latter is the obvious way to go.

Sounds like a good way to go extinct. That is, unless the 'constitution' manages to implement friendliness.

And I just don't trust anyone pushing for the first option - particularly when they want to be the one who defines "friendly".

I'm not too keen about the prospect either. But it may well become a choice between that and certain doom.

Replies from: Perplexed
comment by Perplexed · 2010-09-09T15:40:01.233Z · LW(p) · GW(p)

to get a better feel for whether having one's own volition overruled by the coherent extrapolated volition of mankind is something one really wants.

Hell no.

Am I to interpret that expletive as expressing that you already have a pretty good feel regarding whether you would want that?

To my mind, the really important question is whether we have one-big-AI which we hope is friendly, or an ecosystem of less powerful AIs and humans cooperating and competing under some kind of constitution. I think that the latter is the obvious way to go.

Sounds like a good way to go extinct. That is, unless the 'constitution' manages to implement friendliness.

We'll get to the definition of "friendliness" in a moment. What I think is crucial is that the constitution implements some form of "fairness" and that the AIs and the constitution together advance some meta-goals like tolerance, communication, and understanding other viewpoints.

As to "friendliness", the thing I most dislike about the definition "friendliness" = "CEV" is that in Eliezer's vision, it seems that everyone wants the same things. In my opinion, on the other hand, the mechanisms for resolution of conflicting objectives constitute the real core of the problem. And I believe that the solutions pretty much already exist, in standard academic rational agent game theory. With AIs assisting, and with a constitution granting humans equal power over each other and over AIs, and granting AIs power only over each other, I think we can create a pretty good future.

With one big AI, whose "friendliness" circuits have been constructed by a megalomaniac who seems to believe in a kind of naive utilitarianism with direct interpersonal comparison of utility and discounting of the future forbidden; ... well ..., I see this kind of future as a recipe for disaster.

Replies from: timtyler
comment by timtyler · 2010-09-09T20:06:24.950Z · LW(p) · GW(p)

As to "friendliness", the thing I most dislike about the definition "friendliness" = "CEV" is that in Eliezer's vision, it seems that everyone wants the same things.

He doesn't think that - but he does seem to have some rather curious views of the degree of similarity between humans.

comment by jimrandomh · 2010-09-09T13:24:40.755Z · LW(p) · GW(p)

If it were public, someone might decide to copy the unfinished, unsafe version and turn it on anyways. They might do so because they want to influence its goal function to favor themselves, for example.

With near certainty. I know I would. I haven't seen anyone propose a sane goal function just yet.

Hopefully, having posted this publicly means you'll never get the opportunity.

Replies from: wedrifid
comment by wedrifid · 2010-09-09T14:19:17.454Z · LW(p) · GW(p)

Hopefully, having posted this publicly means you'll never get the opportunity.

Meanwhile I'm hoping that me having posted the obvious publicly means there is a minuscule reduction in the chance that someone else will get the opportunity.

The ones to worry about are those who pretend to be advocating goal systems that are a little too naive to be true.

comment by Perplexed · 2010-09-09T12:50:40.460Z · LW(p) · GW(p)

Upvoted because this is exactly the kind of thinking which needs to be deconstructed and analyzed here.

comment by timtyler · 2010-09-09T07:57:37.499Z · LW(p) · GW(p)

I view our chances as much better with the Eliezer/Marcello CEV plan.

Which boils down to "trust us" - as far as I can see. Gollum's triumphant dance springs to mind.

An obvious potential cause of future problems is extreme wealth inequality - since technology seems so good at creating and maintaining wealth inequality. That may result in bloody rebellions - or poverty. The more knowledge secrets there are, the more wealth inequality is likely to result. So, from that perspective, openness is good: it gives power to the people - rather than keeping it isolated in the hands of an elite.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-09T08:24:35.075Z · LW(p) · GW(p)

Couldn't agree more (for once).

comment by timtyler · 2010-09-09T09:10:54.281Z · LW(p) · GW(p)

You seem to be taking CEV seriously - which seems more like a kind of compliment.

My reaction was more like Cypher's:

"Jesus! What a mind job! So: you're here to SAVE THE WORLD. What do you say to something like that?"

Replies from: Perplexed
comment by Perplexed · 2010-09-09T12:33:03.602Z · LW(p) · GW(p)

You seem to be taking CEV seriously - which seems more like a kind of compliment.

Of course I take it seriously. It is a serious response to a serious problem from a serious person who takes himself entirely too seriously.

And it is probably the exactly wrong solution to the problem.

So: you're here to SAVE THE WORLD. What do you say to something like that?

I would start by asking whether they want to save it like Noah did, or like Ozymandias did, or maybe like Borlaug did. Sure doesn't look like a Borlaug "Give them the tools" kind of save at all.

comment by NancyLebovitz · 2010-09-09T13:50:09.535Z · LW(p) · GW(p)

It's based on consulting mankind, but the extrapolation aspect means that the result could be something that mankind as it exists when CEV is implemented doesn't want at all.

"I'm doing this to you because it's what I've deduced it's what you really want" is scary stuff.

Maybe CEV will be sensible enough (by my current unextrapolated idea of sensible, of course) to observe the effects of what it's doing and maybe even consult about them, but this isn't inevitable.

Replies from: SilasBarta
comment by SilasBarta · 2010-09-09T14:06:52.772Z · LW(p) · GW(p)

"I'm doing this to you because it's what I've deduced it's what you really want" is scary stuff.

At risk of sounding really ignorant or flamebaitish, don't NT women already expect men to treat them like that? E.g. "I'm spending a lot of money on a surprise for our anniversary because I've deduced that is what you really want, despite your repeated protestations that this is not what you want." (among milder examples)

Edit: I stand corrected, see FAWS reply.
Edit2: May I delete this inflammatory, turned-out-uninsightful-anyway comment? I think it provoked someone to vote down my last 12 comments ...

Replies from: NancyLebovitz, FAWS
comment by NancyLebovitz · 2010-09-09T15:48:54.511Z · LW(p) · GW(p)

It took me a bit to figure out you meant neurotypical rather than iNtuitive-Thinking.

I think everyone would rather get what they want without having to take the trouble of asking for it clearly. In extreme cases, they don't even want to take the trouble to formulate what they want clearly to themselves.

And, yeah, flamebaitish. I don't know if you've read accounts by women who've been abused by male partners, but one common feature of the men is expecting to automatically get what they want.

It would be interesting to look at whether some behavior which is considered abusive when done by men is considered annoying but tolerable when done by women. Of course the degree of enforcement matters.

comment by FAWS · 2010-09-09T14:21:08.142Z · LW(p) · GW(p)

Not universally, only (mostly) to the extent that they expect them to actually get it right, and regarding currently existing wants, not what they should want (would want to want if only they were smart enough etc.).

Replies from: SilasBarta
comment by SilasBarta · 2010-09-09T14:29:39.036Z · LW(p) · GW(p)

not what they should want (would want to want if only they were smart enough etc.)

Ah, good point. I stand corrected.

comment by timtyler · 2010-09-09T07:43:41.229Z · LW(p) · GW(p)

Worse yet, the program in question is designed in part to prevent any one person from gaining power over the future of mankind.

What makes you think that? This is a closed source project - as I understand it - so nobody on the outside will have much of a clue about what is going on.

Maybe you believed the marketing materials. If so - oops!

Replies from: katydee
comment by katydee · 2010-09-09T15:16:15.458Z · LW(p) · GW(p)

If I had a secret project to gain power over the future of mankind, the last thing I would do is publish any sort of marketing materials, real or fake, that even hinted at the overall objective or methods of the project.

comment by Perplexed · 2010-09-09T01:52:33.811Z · LW(p) · GW(p)

Eliezer wrote:

I have said it over and over. I truly do not understand how anyone can pay any attention to anything I have said on this subject, and come away with the impression that I think programmers are supposed to directly impress their non-meta personal philosophies onto a Friendly AI.

The good guys do not directly impress their personal values onto a Friendly AI.

Actually setting up a Friendly AI's values is an extremely meta operation, less "make the AI want to make people happy" and more like "superpose the possible reflective equilibria of the whole human species, and output new code that overwrites the current AI and has the most coherent support within that superposition". This actually seems to be something of a Pons Asinorum in FAI - the ability to understand and endorse metaethical concepts that do not directly sound like amazing wonderful happy ideas. Describing this as declaring total war on the rest of humanity, does not seem fair (or accurate).

comment by timtyler · 2010-09-09T07:48:53.465Z · LW(p) · GW(p)

So, as you see, contributing as little as a thousand dollars gives you enormous power over the future of mankind, at least if your ideals regarding the future are "coherent" with Eliezer's

What - you mean: in the hypothetical case where arguing about the topic on SL4 has any influence on the builders at all, AND the SIAI's plans pan out?!?

That seems like a composition of some very unlikely unstated premises you have there.

comment by steven0461 · 2010-09-01T22:07:46.884Z · LW(p) · GW(p)

Does anyone else think it would be immensely valuable if we had someone specialized (more so than anyone currently is) at extracting trustworthy, disinterested, x-rationality-informed probability estimates from relevant people's opinions and arguments? This community already hopefully accepts that one can learn from knowing other people's opinions without knowing their arguments; Aumann's agreement theorem, and so forth. It seems likely to me that centralizing that whole aspect of things would save a ton of duplicated effort.

Replies from: Vladimir_Nesov, timtyler, JohnDavidBustard
comment by Vladimir_Nesov · 2010-09-01T22:11:33.785Z · LW(p) · GW(p)

This community already hopefully accepts that one can learn from knowing other people's opinions without knowing their arguments; Aumann's agreement theorem, and so forth.

I don't think Aumann's agreement theorem has anything to do with taking people's opinions as evidence. Aumann's agreement theorem is about agents turning out to have been agreeing all along, given certain conditions, not about how to come to an agreement, or worse how to enforce agreement by responding to others' beliefs.

More generally (as in, not about this particular comment), the mentions of this theorem on LW seem to have degenerated into applause lights for "boo disagreement", having nothing to do with the theorem itself. It's easier to use the associated label, even if such usage would be incorrect, but one should resist the temptation.

Replies from: steven0461
comment by steven0461 · 2010-09-01T22:32:27.441Z · LW(p) · GW(p)

People sometimes use "Aumann's agreement theorem" to mean "the idea that you should update on other people's opinions", and I agree this is inaccurate and it's not what I meant to say, but surely the theorem is a salient example that implicitly involves such updating. Should I have said Geanakoplos and Polemarchakis?

Replies from: Wei_Dai, Vladimir_Nesov
comment by Wei Dai (Wei_Dai) · 2010-09-01T23:47:07.648Z · LW(p) · GW(p)

I think LWers have been using "Aumann agreement" to refer to the whole literature spawned by Aumann's original paper, which includes explicit protocols for Bayesians to reach agreement. This usage seems reasonable, although I'm not sure if it's standard outside of our community.

This community already hopefully accepts that one can learn from knowing other people's opinions without knowing their arguments

I'm not sure this is right... Here's what I wrote in Probability Space & Aumann Agreement:

But in such methods, the agents aren't just moving closer to each other's beliefs. Rather, they go through convoluted chains of deduction to infer what information the other agent must have observed, given his declarations, and then update on that new information. The two agents essentially still have to communicate I(w) and J(w) to each other, except they do so by exchanging posterior probabilities and making logical inferences from them.

Is there a result in the literature that shows something closer to your "one can learn from knowing other people's opinions without knowing their arguments"?

Replies from: steven0461
comment by steven0461 · 2010-09-02T00:11:42.192Z · LW(p) · GW(p)

I haven't read your post and my understanding is still hazy, but surely at least the theorems don't depend on the agents being able to fully reconstruct each other's evidence? If they do, then I don't see how it could be true that the probability the agents end up agreeing on is sometimes different from the one they would have had if they were able to share information. In this sort of setting I think I'm comfortable calling it "updating on each other's opinions".

Regardless of Aumann-like results, I don't see how:

one can learn from knowing other people's opinions without knowing their arguments

could possibly be controversial here, as long as people's opinions probabilistically depend on the truth.

Replies from: Wei_Dai, Perplexed, MBlume, Stuart_Armstrong
comment by Wei Dai (Wei_Dai) · 2010-09-02T03:39:24.144Z · LW(p) · GW(p)

but surely at least the theorems don't depend on the agents being able to fully reconstruct each other's evidence?

You're right, sometimes the agreement protocol terminates before the agents fully reconstruct each other's evidence, and they end up with a different agreed probability than if they just shared evidence.

But my point was mainly that exchanging information like this by repeatedly updating on each other's posterior probabilities is not any easier than just sharing evidence/arguments. You have to go through these convoluted logical deductions to try to infer what evidence the other guy might have seen or what argument he might be thinking of, given the probability he's telling you. Why not just tell each other what you saw or what your arguments are? Some of these protocols might be useful for artificial agents in situations where computation is cheap and bandwidth is expensive, but I don't think humans can benefit from them because it's too hard to do these logical deductions in our heads.

Also, it seems pretty obvious that you can't offload the computational complexity of these protocols onto a third party. The problem is that the third party does not have full information of either of the original parties, so he can't compute the posterior probability of either of them, given an announcement from the other.

It might be that a specialized "disagreement arbitrator" can still play some useful role, but I don't see any existing theory on how it might do so. Somebody would have to invent that theory first, I think.

comment by Perplexed · 2010-09-02T00:52:02.915Z · LW(p) · GW(p)

... surely at least the theorems don't depend on the agents being able to fully reconstruct each other's evidence?

They don't necessarily reconstruct all of each other's evidence, just the parts that are relevant to their common knowledge. For example, two agents have common priors regarding the contents of an urn. Independently, they sample from the urn with replacement. They then exchange updated probabilities for P(Urn has Freq(red)<Freq(black)) and P(Urn has Freq(red)<0.9*Freq(black)). At this point, each can reconstruct the sizes and frequencies of the other agent's evidence samples ("4 reds and 4 blacks"), but they cannot reconstruct the exact sequences ("RRBRBBRB"). And they can update again to perfect agreement regarding the urn contents.

Edit: minor cleanup for clarity.

At least that is my understanding of Aumann's theorem.
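
To make the mechanics of that example concrete, here is a minimal sketch (my own illustration, not part of the original exchange) under stated assumptions: a 10-ball urn, a uniform prior over the number of red balls, and sampling with replacement. Because the binomial likelihood depends only on the counts, an announced posterior pins down "how many reds in how many draws" but not the order in which they were drawn.

```python
from fractions import Fraction

N_BALLS = 10  # assumed urn size, purely for illustration

def posterior(reds, draws):
    """P(urn holds k red balls | saw `reds` reds in `draws` draws), for k = 0..N_BALLS."""
    likelihood = [
        Fraction(k, N_BALLS) ** reds * Fraction(N_BALLS - k, N_BALLS) ** (draws - reds)
        for k in range(N_BALLS + 1)
    ]
    total = sum(likelihood)  # with a uniform prior, normalizing the likelihood gives the posterior
    return [l / total for l in likelihood]

# Any sequence with 4 reds and 4 blacks yields exactly this posterior, so exchanging
# posteriors reveals the counts ("4 reds and 4 blacks") but not the sequence ("RRBRBBRB").
p = posterior(4, 8)
print(float(sum(prob for k, prob in enumerate(p) if k < N_BALLS - k)))  # P(Freq(red) < Freq(black))
```

With a non-uniform prior you would multiply the likelihood by the prior before normalizing; the sufficiency point is unchanged.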

Replies from: steven0461
comment by steven0461 · 2010-09-02T01:16:45.308Z · LW(p) · GW(p)

That sounds right, but I was thinking of cases like this, where the whole process leads to a different (worse) answer than sharing information would have.

Replies from: Perplexed
comment by Perplexed · 2010-09-02T02:22:06.967Z · LW(p) · GW(p)

Hmmm. It appears that in that (Venus, Mars) case, the agents should be exchanging questions as well as answers. They are both concerned regarding catastrophe, but confused regarding planets. So, if they tell each other what confuses them, they will efficiently communicate the important information.

In some ways, and contrary to Jaynes, I think that pure Bayesianism is flawed in that it fails to attach value to information. Certainly, agents with limited communication channel capacity should not waste bandwidth exchanging valueless information.

Replies from: timtyler
comment by timtyler · 2010-09-02T08:56:49.545Z · LW(p) · GW(p)

That comment leaves me wondering what "pure Bayesianism" is.

I don't think Bayesianism is a recipe for action in the first place - so how can "pure Bayesianism" be telling agents how they should be spending their time?

Replies from: Perplexed, wedrifid
comment by Perplexed · 2010-09-02T13:21:54.700Z · LW(p) · GW(p)

By "pure Bayesianism", I meant the attitude expressed in Chapter 13 of Jaynes, near the end in the section entitled "Comments" and particularly the subsection at the very end entitled "Another dimension?". A pure "Jaynes Bayesian" seeks the truth, not because it is useful, but rather because it is truth.

By contrast, we might consider a "de Finetti Bayesian" who seeks the truth so as not to lose bets to Dutch bookies, or a "Wald Bayesian" who seeks truth to avoid loss of utility. The Wald Bayesian clearly is looking for a recipe for action, and the de Finetti Bayesian seeks at least a recipe for gambling.

Replies from: timtyler
comment by timtyler · 2010-09-02T19:43:33.009Z · LW(p) · GW(p)

A truth seeker! Truth seeking is certainly pretty bizarre and unbiological. Agents can normally be expected to concentrate on making babies - not on seeking holy grails.

comment by wedrifid · 2010-09-02T13:41:32.359Z · LW(p) · GW(p)

I don't think Bayesianism is a recipe for action in the first place - so how can "pure Bayesianism" be telling agents how they should be spending their time?

It tells them everything. That includes inferences right down to their own cognitive hardware and the implications thereof. Given that the very meaning of 'should' can be reduced to cognitions of the speaker, Bayesian reasoning is applicable.

Replies from: timtyler
comment by timtyler · 2010-09-02T20:28:51.136Z · LW(p) · GW(p)

Hi! As brief feedback, I was trying to find out what "pure Bayesianism" was being used to mean - so this didn't help too much.

comment by MBlume · 2010-09-02T00:30:51.767Z · LW(p) · GW(p)

for an ideal Bayesian, I think 'one can learn from X' is categorically true for all X....

comment by Stuart_Armstrong · 2010-09-02T10:00:13.680Z · LW(p) · GW(p)

You have to also be able to deduce how much of the other agent's information is shared with you. If you and them got your posteriors by reading the same blogs and watching the same TV shows, then this is very different from the case when you reached the same conclusion from completely different channels.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-09-02T10:07:10.594Z · LW(p) · GW(p)

If you and them got your posteriors by reading the same blogs and watching the same TV shows

Somewhere in there is a joke about the consequences of a sedentary lifestyle.

comment by Vladimir_Nesov · 2010-09-01T22:43:58.678Z · LW(p) · GW(p)

People sometimes use "Aumann's agreement theorem" to mean "the idea that you should update on other people's opinions", and I agree this is inaccurate and it's not what I meant to say, but surely the theorem is a salient example that implicitly involves such updating.

The theorem doesn't involve any updating, so it's not a salient example in discussion of updating, much less proxy for that.

Should I have said Geanakoplos and Polemarchakis?

To answer literally, simply not mentioning the theorem would've done the trick, since there didn't seem to be a need for elaboration.

comment by timtyler · 2010-09-04T09:11:29.286Z · LW(p) · GW(p)

For other people's opinions, perhaps see: http://www.takeonit.com/

comment by JohnDavidBustard · 2010-09-02T12:07:47.848Z · LW(p) · GW(p)

I'm not sure about having a centralised group doing this, but I did experiment with making a tool that could help infer consequences from beliefs. Imagine something a little like this, but with chains of philosophical statements that have degrees of confidence. Users would assign confidence to axioms and construct trees of argument using them. The system would automatically determine the confidence of conclusions. It could even exist as a competitive game with a community determining the confidence of axioms. It could also be used to rapidly determine differences in opinion, i.e. infer the main points of contention based on different axiom weightings. If anyone knows of anything similar or has suggestions for such a system I'd love to hear them, including any reasons why it might fail, because I think it's an interesting solution to the problem of how to debate efficiently and reasonably.
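
For what it's worth, here is a minimal sketch of what the core of such a tool might look like. This is my own toy illustration, not the system described above, and the combination rule (multiplying premise confidences by an inference strength) is a deliberately naive assumption, chosen only to show the shape of the idea.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Claim:
    text: str
    confidence: Optional[float] = None           # set directly for axioms
    premises: List["Claim"] = field(default_factory=list)
    inference_strength: float = 1.0              # how strongly the premises support this claim

    def evaluate(self) -> float:
        if self.confidence is not None:          # axiom: user-assigned confidence
            return self.confidence
        result = self.inference_strength         # naive rule: product of premise confidences times strength
        for premise in self.premises:
            result *= premise.evaluate()
        return result

# Users assign confidence to axioms; the tree yields a confidence for the conclusion.
a = Claim("Minds are physical processes", confidence=0.95)
b = Claim("Physical processes can be simulated", confidence=0.8)
c = Claim("Minds can be simulated", premises=[a, b], inference_strength=0.9)
print(round(c.evaluate(), 3))  # 0.684
```

Re-running the evaluation after nudging a single axiom's confidence would also be a cheap way to surface the main points of contention between two users' weightings.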

comment by JohnDavidBustard · 2010-09-01T16:41:57.834Z · LW(p) · GW(p)

Is there a rough idea of how the development of AI will be achieved? I.e. something like the whole brain emulation roadmap? Although we can imagine a silver bullet style solution, AI as a field seems stubbornly gradual. When faced with practical challenges, AI development follows the path of much of engineering, with steady development of sophistication and improved results, but few leaps. As if the problem itself is a large collection of individual challenges whose solution requires masses of training data and techniques that do not generalise well.

That is why I prefer the destructive scanning and brain emulation route, I can much more easily imagine the steps necessary to achieve it. Assuming an approximate model is sufficient, this would be a simple but world changing achievement. An achievement that society seems completely unprepared for. Do any less wrong readers know of strong arguments against this view (assuming simple emulation is sufficient)? Or know of any hypothesised or fictional accounts of likely social outcomes?

Replies from: rwallace, timtyler, Houshalter
comment by rwallace · 2010-09-01T17:27:42.977Z · LW(p) · GW(p)

Your assessment is along the right lines, though if anything a little optimistic; uploading is an enormously difficult engineering challenge, but at least we can see in principle how it could be done, and recognize when we are making progress, whereas with AI we don't yet even have a consensus on what constitutes progress.

I'm personally working on AI because I think that's where my talents can be best used, and I think it can deliver useful results well short of human equivalence, but if you figure you'd rather work on uploading, that's certainly a reasonable choice.

As for what uploads will do if and when they come to exist, well, there's going to be plenty of time to figure that out, because the first few of them are going to spend the first few years having conversations like,

"Uh... a hatstand?"

"Sorry Mr. Jones, that's actually a picture of your wife. I think we need to revert yesterday's bug fixes to your visual cortex."

But e.g. The Planck Dive is a good story set in a world where that technology is mature enough to be taken for granted.

Replies from: sketerpot
comment by sketerpot · 2010-09-01T19:48:47.306Z · LW(p) · GW(p)

"Sorry Mr. Jones, that's actually a picture of your wife. I think we need to revert yesterday's bug fixes to your visual cortex."

The phrase "Fork me on GitHub" has just taken on a more sinister meaning.

comment by timtyler · 2010-09-08T07:28:46.827Z · LW(p) · GW(p)

Is there a rough idea of how the development of AI will be achieved?

I expect that prediction will probably be cracked first.

Replies from: JohnDavidBustard
comment by JohnDavidBustard · 2010-09-09T12:33:35.145Z · LW(p) · GW(p)

Thanks for the link, very interesting.

comment by Houshalter · 2010-09-02T21:33:49.400Z · LW(p) · GW(p)

Emulating an entire brain, and finding out how the higher intelligence parts work and adapting them for practical purposes, are two entirely different achievements. Even if you could upload a brain onto your computer and let it run, it would be absurdly slow; however, simulating some kind of new optimization process we find from it might be plausible.

And either way, don't expect a singularity anytime soon with that. Scientists believe it took thousands of years after modern intelligence emerged for us to learn symbolic thought. Then thousands more before we discovered the scientific method. It's only now that we are finally discovering rational thinking. Maybe an AI could start where we left off, or maybe it would take years before it could even get to the level where it could do that, and then years more before it could make the jump to the next major improvement, assuming there even is one.

I'm not arguing against AI here at all. I believe a singularity will probably happen, and soon, but emulation is definitely not the way to go. Humans have way too many flaws that we don't even know would be possible to fix, even if we knew what the problem was in the first place.

What is the ultimate goal in the first place? To do something along the lines of replicating the brains of some of the most intelligent people and forcing them to work on improving humanity/developing AI? Has anyone considered that there is a far more realistic way of doing this through cloning, eugenics, education research, etc.? Of course no one would do it because it is immoral, but then again, what is the difference between the two?

Replies from: JohnDavidBustard
comment by JohnDavidBustard · 2010-09-03T09:34:27.311Z · LW(p) · GW(p)

The question of the ultimate goal is a good one. I don't find arguments of value based on utilitarian values to be very convincing. In contrast I prefer enlightened self interest (other people are important because I like them and feel safe in a world where they are valued). So for me, some form of immortality is much more important than my capabilities (or something else's in the case of AI) in that state.

In addition, the efficiency gains of being able to 'step through' a simulation of a system, and the ability to perform repeatable automated experiments on such a system, convey enormous benefits (arguably this capability is what is driving our increasing productivity), so being able to simulate the brain may well lead to exponential improvements in our understanding of psychology and consciousness.

In terms of performance concerns, there is the potential for a step change in the economics of high-performance computing: while you may only be willing to spend a couple of thousand dollars on a computer to play games with, you may well take out a (lifetime?) mortgage to ensure you don't die. In terms of social consequences, one could imagine that the world economy would switch from supporting biology to supporting technology (it would be interesting to calculate the relative economic cost of supporting a simulated person rather than a biological one).

Recent work with brain-machine interfaces also points towards the enormous flexibility of the mind to adapt to new inputs and outputs. With the improved debugging capability of simulation, mental enhancement becomes substantially more feasible. As our understanding of such interactions improves, a virtual environment could be created which convincingly provides the illusion of a world of limitless abundance.

And then there is the possibility of replication: storing a person in a willing state and resetting them to that state after they complete a task. This leads to the enormous social consequence of convincingly disproving notions such as the soul, free will, etc., and creating a world where lives would lose their value in the same way that pirated software does. Such an event has the potential to change our entire culture, perhaps more than any other event, at least equivalent to the reduction in the influence of religion as a result of evolutionary theory and other scientific developments.

comment by Cyan · 2010-10-01T22:46:41.680Z · LW(p) · GW(p)

Request: someone make a fresh open thread, and someone else make a rationality thread. I'd do it myself, but I've already done one of each this year; each kind of thread is usually good for two or three karma, and it wouldn't be fair.

Replies from: JGWeissman, whpearson
comment by JGWeissman · 2010-10-01T22:57:22.290Z · LW(p) · GW(p)

With the new discussion section, do we really need these recurring threads?

Replies from: NancyLebovitz, Cyan
comment by NancyLebovitz · 2010-10-02T11:36:16.231Z · LW(p) · GW(p)

I don't know. Open threads strike me as a better structure for conversation.

comment by Cyan · 2010-10-02T18:37:40.859Z · LW(p) · GW(p)

Probably not the open thread, but I'd like the tradition of monthly rationality quotes threads to continue.

comment by whpearson · 2010-10-02T18:48:48.377Z · LW(p) · GW(p)

Personally I don't care about karma much, you can have my slice of the karma pie.

Perhaps put a note reminding other people that they can post them.

comment by Will_Newsome · 2010-09-12T20:51:41.472Z · LW(p) · GW(p)

Shangri-La dieters: So I just recently started reading through the archives of Seth Roberts' blog, and it looks like there are tons of benefits to getting 3 or so tablespoons of flax seed oil a day (cognitive performance, gum health, heart health, etc.). That said, it also seems to reduce appetite/weight, neither of which I want. I haven't read through Seth's directory of related posts yet, but does anyone have any advice? I guess I'd be willing to set alarms for myself so that I remembered to eat, but it just sounds really unpleasant and unwieldy.

Replies from: AnnaSalamon, jimmy
comment by AnnaSalamon · 2010-09-12T21:01:30.975Z · LW(p) · GW(p)

Perhaps add your flax seed oil to food, preferably food with notable flavors of various kinds. It's tasty that way and should avoid the tasteless calories that are supposed to be important to Shangri-La (although I haven't read about Shangri-La, so don't trust me).

comment by jimmy · 2010-09-16T07:09:24.745Z · LW(p) · GW(p)

Flaxseed oil has a strong odor. I think most people try to choke it down with their breath held to avoid the smell. It probably wouldn't count as 'flavorless calories' if you didn't.

If you can't stand that, eat it with some consistent food.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-09-16T07:13:59.287Z · LW(p) · GW(p)

Of note is that I was recommended fish oil instead as it has a better omega-3/omega-6 ratio, so I'll probably go that route.

comment by erratio · 2010-09-12T00:01:42.532Z · LW(p) · GW(p)

Not sure if this has been linked before, but this post about tracking your habits seems like a useful self-management technique.

comment by snarles · 2010-09-11T01:04:48.659Z · LW(p) · GW(p)

NYT article on good study habits: http://www.nytimes.com/2010/09/07/health/views/07mind.html?_r=1

I don't have time to look into the sources but I am very interested in knowing the best way to learn.

comment by Seth_Goldin · 2010-09-06T15:49:17.024Z · LW(p) · GW(p)

David Friedman laments another misuse of frequentism.

comment by [deleted] · 2010-09-06T13:12:35.922Z · LW(p) · GW(p)

I have a basic understanding of Markov Chains but I'm curious as to how they're used in artificial intelligence. My main two guesses are:

1.) They are used to make decisions (eg. Markov decision process) - By factoring in an action component to the Markov Chain you can use Markov Chains to make decisions in situations where that decision won't have a definite outcome but will instead adjust the probability of outcomes.

2.) They are used to evaluate the world (eg. Markov Chain Monte Carlo) - As the way the world develops at a high level can seem probabilistic, Markov Chains may allow us to determine possible ways the world will develop.

I know the answer could be both, neither or "right answer, wrong reason" but it'd be great to know not just what they can do but what it is that the AI community is most excited about in terms of what they can do.

Can anyone shed any light on this? Or direct me to a good resource that will do so in their stead?

Replies from: jimrandomh
comment by jimrandomh · 2010-09-06T13:57:02.841Z · LW(p) · GW(p)

For a concrete example of Markov models in AI, take a look at the Viterbi search algorithm, which is heavily used in speech and natural language recognition.
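
For readers who want something concrete, here is a minimal sketch of Viterbi decoding on a toy hidden Markov model. The states, observations, and probabilities below are made up for illustration (they are not from the comment or any particular system); the point is just the dynamic program for "most likely hidden state sequence given the observations", which is the role Markov models typically play in speech and language recognition.

```python
def viterbi(observations, states, start_p, trans_p, emit_p):
    """Return the most likely hidden state path for a sequence of observations."""
    # V[t][s] = (probability of the best path ending in state s at time t, that path)
    V = [{s: (start_p[s] * emit_p[s][observations[0]], [s]) for s in states}]
    for obs in observations[1:]:
        V.append({})
        for s in states:
            prob, path = max(
                (V[-2][prev][0] * trans_p[prev][s] * emit_p[s][obs], V[-2][prev][1] + [s])
                for prev in states
            )
            V[-1][s] = (prob, path)
    return max(V[-1].values())[1]

# Toy model: infer the weather ("hot"/"cold") from daily ice-cream consumption.
states = ["hot", "cold"]
start_p = {"hot": 0.5, "cold": 0.5}
trans_p = {"hot": {"hot": 0.7, "cold": 0.3}, "cold": {"hot": 0.4, "cold": 0.6}}
emit_p = {"hot": {1: 0.1, 2: 0.3, 3: 0.6}, "cold": {1: 0.6, 2: 0.3, 3: 0.1}}
print(viterbi([3, 1, 3], states, start_p, trans_p, emit_p))  # ['hot', 'cold', 'hot']
```

In a speech recognizer the hidden states would be phonemes or words and the observations acoustic features, but the recursion is the same.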

Replies from: None
comment by [deleted] · 2010-09-06T16:37:56.993Z · LW(p) · GW(p)

Thanks - good example.

comment by Sniffnoy · 2010-09-04T23:32:40.293Z · LW(p) · GW(p)

Recently remembered this old Language Log post on the song of the Zebra Finch; thought it might be relevant here. Whether or not the idea applies to human languages, I think it's an interesting demonstration of what sort of surprising things evolution can work with. A highly constrained song is indirectly encoded by a much simpler bias in learning.

comment by Taure · 2010-09-03T12:56:25.376Z · LW(p) · GW(p)

An Introduction to Probability and Inductive Logic by Ian Hacking

Have any of you read this book?

I have been invited to join a reading group based around it for the coming academic year and would like the opinions of this group as to whether it's worth it.

I may join in just for the section on Bayes. I might even finally discover the correct pronunciation of "Bayesian". ("Bay-zian" or "Bye-zian"?)

Here's a link to the book: http://www.amazon.co.uk/Introduction-Probability-Inductive-Logic/dp/0521775019/ref=sr_1_2?ie=UTF8&s=books&qid=1283464939&sr=8-2

Replies from: sketerpot
comment by sketerpot · 2010-09-04T18:44:06.351Z · LW(p) · GW(p)

I've only ever heard Bayesian pronounced "Bay-zian".

Replies from: ata
comment by ata · 2010-09-04T18:48:15.427Z · LW(p) · GW(p)

That's how I usually hear it ("Bayes"+"ian", right?), though I've also heard it pronounced like "Basian" (rhyming with "Asian") or occasionally "Bay-esian" (rhyming with "Cartesian").

comment by blogospheroid · 2010-09-02T12:21:22.556Z · LW(p) · GW(p)

Idea - Existential risk fighting corporates

People of normal IQ are advised to work our normal day jobs, applying the best competencies that we have, and after setting aside enough money for ourselves, contribute to the prevention of existential risk. That is a good idea if the skills of the people here are getting their correct market value and there is such a diversity of skills that they cannot form a sensible corporation together.

Also, consider that as we make the world's corporations more agile, we bring closer the moment where an unfriendly optimization process might just be let loose.

But just consider the small probability that some of the rationalists come together as a non-profit corporation to contribute to mitigating existential risk. There are many reasons our kind cannot cooperate. Also, the fact is that coordination is hard.

But if we could, then with the latest in decision theory, argument diagrams (1, 2, 3), and internal futarchy (after the size of the corporation gets big), we could create a corporation that wins. There are many people from the world of software here. Within the corporation itself, there is no need to stick to legacy systems. We could interact with the best of coordination software and keep the corporation "sane".

We can create products and services like any for-profit corporation and sell them at market rates, but use the surplus to mitigate existential risk. In other words, it is difficult, but in the Everett branches where x-rationalists manage a synergistic outcome, it might be possible to strengthen the funding of existential risk mitigation considerably.

The downsides and criticisms of this idea are many

  • The corporation becomes a lost cause. Goodhart's law kicks in and the original purpose of forming the corporation is lost.
  • People are polite when in a situation where no important decisions are being made (like an internet forum such as lesswrong), but if actual productivity is involved, they might get hostile when someone lowers their corporate karma. Perfect internet buddies might become co-workers who hate each other's guts.
  • There is no possibility of synergy. The present situation, in which rational people spread across the world in different situations money-pump the less rational people around them, is better.
  • People outside the corporation might mentally slot existential risk as a kooky topic that "that creepy company talks about all the time" and not see it as a genuine issue that diverse persons from different walks of life are interested in.

and so on...

But still, my question is - Shouldn't we at least consider the possibilities of synergy in the manner indicated?

comment by blogospheroid · 2010-09-02T06:31:22.946Z · LW(p) · GW(p)

I'd like to discuss, with anyone who is interested, the ideas of Metaphysics Of Quality, by Robert Pirsig (laid out in Lila, An enquiry into Morals)

There are many aspects to MOQ that might make a rationalist cringe, like moral realism and giving evolution a path and purpose. But there are many interesting concepts which I heard of for the first time when I read MOQ. The fourfold division of inorganic, biological, social and intellectual static patterns of quality is quite intriguing. Many things that the transhumanist community talks about actually interact at the edges of these definitions.

Nanotech runs at the border of inorganic quality and biological quality.

Evolutionary psychology runs at the border of biological and social quality.

At a much simpler level, a community like Less Wrong runs at the border of social and intellectual quality.

In spite of this, I find that the layered nature of this understanding is probably useful in understanding present systems and designing new ones.

Maintaining stability at a lower level of quality is probably very important whenever new dynamic things are done at a higher level. Friedrich Hayek emphasises the rule of law and stable contracts, which are the basis of the dynamism of the free market.

Francis Fukuyama came out with the idea of "The end of history", with democratic liberalism being the final system, a permanent social static quality. This was an extremely bold view, but someone who understood even a bit of MOQ could understand that changes at a lower level could still happen. No social structure can be permanent without the biological level being fixed. And Bingo! Fukuyama being a smart man, understood this and his next book was "Our posthuman future", which urged the extreme social control of biological manipulation, in particular, ceasing research.

In Pirsig's view, social quality overriding biological quality is moral. I don't agree with Pirsig's view that when social quality overrides biological quality, it is always moral. It is societal pressure that creates incentives for female infanticide in India, which overrides the biological 50-50 ratio. This will result in huge social problems in the future.

A proper understanding of the universe, when we arrive at it, would have all these intricate layers laid out in detail. But it is interesting to talk about even now,when the picture is incomplete.

Replies from: Snowyowl, None
comment by Snowyowl · 2010-09-02T10:22:03.276Z · LW(p) · GW(p)

No social structure can be permanent without the biological level being fixed. And Bingo! Fukuyama being a smart man, understood this and his next book was "Our posthuman future", which urged the extreme social control of biological manipulation, in particular, ceasing research.

Really? I would have arrived at the opposite conclusion. No social structure can be permanent without the biological level being fixed, therefore we should do more research into biological alteration in order to stabilize our biology should it become unstable.

For instance, pre-implantation genetic diagnosis would enable us to almost eradicate most genetic diseases, thus maintaining our biological quality. I'm not saying it doesn't have corresponding problems, just that an attitude of "we should cease research in this field because we might find something dangerous" is an overreaction.

Replies from: blogospheroid
comment by blogospheroid · 2010-09-02T11:34:17.148Z · LW(p) · GW(p)

I don't support Fukuyama's conclusion. I just was mentioning that Fukuyama realised that his "end of history" hypothesis was obsolete as the biological quality patterns, that he assumed were more or less unchanging, are not fixed.

Genetic engineering is an intellectual + social pattern imposing on a biological pattern. By a naive reading of Pirsig, it appears moral. But if the biological pattern is not fully understood, then it might lead to many unanticipated consequences. I definitely support the eradication of genetic diseases, if the changes made are those that are present in many normal people and come without much downside. I support intelligence amplification, but we simply don't know enough to do it without issues.

Eliezer's perspective is that humans are godshatter (a hodge podge of many biological, social and intellectual static patterns) and it will take a very powerful intelligence to understand morality and extrapolate it. I believe that thinking about Pirsig's work can inform us a little on areas we should choose to understand first.

comment by [deleted] · 2010-09-02T17:06:41.501Z · LW(p) · GW(p)

No social structure can be permanent without the biological level being fixed.

This seems incorrect, as it's not hard to imagine a social structure supporting a wide variety of different biological/non-biological intelligences, as long as they were reasonably close to each other in morality-space. There's plenty of things at the level of biology that have no impact on morality that we'd certainly like to change.

Replies from: blogospheroid
comment by blogospheroid · 2010-09-03T09:54:59.021Z · LW(p) · GW(p)

During the process of creating those non-biological intelligences, or modifying the biological persons, the social structure would be in flux. There will be some similarities maintained, but many changes would also occur.

According to our laws, murder is illegal, but erasure of an upload backed up as recently as the previous day would not be classified as a crime as grave as murdering an un-backed-up person. These changes would be at the social level.

comment by wedrifid · 2010-09-02T04:16:50.113Z · LW(p) · GW(p)

Does anyone else ever browse through comments, spot one and think "why is the post upvoted to 1?" and then realise that the vote was from you? I seem to do that a lot. (In nearly every case I leave the votes stand.)

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-09-02T21:36:59.794Z · LW(p) · GW(p)

I don't recall ever doing that.

Do you let the votes stand because you remember/re-invent your original reason for upvoting, or because of something along the lines of "well, I must've had a good reason at the time"?

Replies from: wedrifid
comment by wedrifid · 2010-09-03T03:32:57.582Z · LW(p) · GW(p)

you remember/re-invent your original reason for upvoting,

This one. And sometimes my surprise is because the upvoted comment is surrounded by other comments that are 'better' than it. This I can often fix by upvoting the context instead of removing my initial upvote.

(And, if I went around removing my votes I would quite possibly end up in an infinite loop of contrariness.)

comment by Liron · 2010-09-01T05:48:58.137Z · LW(p) · GW(p)

I made this site last month: http://www.areyou1in1000000.com

comment by James_Miller · 2010-09-01T04:36:53.961Z · LW(p) · GW(p)

Eliezer has been accused of delusions of grandeur for his belief in his own importance. But if Eliezer is guilty of such delusions then so am I and, I suspect, are many of you.

Consider two beliefs:

  1. The next millennium will be the most critical in mankind’s existence because in most of the Everett branches arising out of today mankind will go extinct or start spreading through the stars.

  2. Eliezer’s work on friendly AI makes him the most significant determinant of our fate in (1).

Let 10^N represent the average, across our future Everett branches, of the total number of sentient beings whose ancestors arose on Earth. If Eliezer holds beliefs (1) and (2) then he considers himself the most important of these beings, and the probability of this happening by chance is 1 in 10^N. But if (1) holds then the rest of us are extremely important as well, through how our voting, buying, contributing, writing… influences mankind’s fate. Let’s say that makes most of us among the trillion most important beings who will ever exist. The probability of this happening by chance is 1 in 10^(N-12).

If N is at least 18, it’s hard to think of a rational criterion under which believing you are 1 in 10^N is delusional whereas thinking you are 1 in 10^(N-12) is not.
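
To make the comparison concrete, here is a minimal sketch (N = 18 is purely an assumed illustrative value; nothing in the argument depends on the exact number):

```python
# A minimal sketch of the two probabilities being compared; N = 18 is an
# assumed illustrative value, not a figure asserted in the argument above.
N = 18

p_single_most_important = 10.0 ** -N         # 1 in 10^N: being the single most important being
p_among_top_trillion    = 10.0 ** -(N - 12)  # 1 in 10^(N-12): being among the 10^12 most important

# The second belief is only a trillion times less improbable than the first.
print(p_among_top_trillion / p_single_most_important)  # ~1e12
```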

Replies from: JamesAndrix, KevinC, wedrifid, rwallace, prase
comment by JamesAndrix · 2010-09-01T05:41:48.611Z · LW(p) · GW(p)

(2) is ambiguous. Getting to the stars requires a number of things to go right. Eliezer is of relatively little use in preventing a major nuclear exchange in the next 10 years, or bad nanotech, or garage-made bioweapons, or even UFAI development.

FAI is just the final thing that needs to go right; everything else needs to go mostly right until then.

Replies from: Snowyowl
comment by Snowyowl · 2010-09-01T11:19:59.135Z · LW(p) · GW(p)

And I can think of a few ways humanity can get to the stars even if FAI never happens.

comment by KevinC · 2010-09-01T05:02:39.063Z · LW(p) · GW(p)

Can you provide a cite for the notion that Eliezer believes (2)? Since he's not likely to build the world's first FAI in his garage all by himself, without incorporating the work of any of the other thousands of people working on FAI and its necessary component technologies, I think it would be a bit delusional of him to believe (2) as stated. Which is not to suggest that his work is not important, or even among the most significant work done in the history of humankind (even if he fails, others can build on it and find the way that works). But that's different from the idea that he, alone, is The Most Significant Human Who Will Ever Live. I don't get the impression that he's that cocky.

Replies from: James_Miller
comment by James_Miller · 2010-09-01T05:19:27.359Z · LW(p) · GW(p)

Eliezer has been accused on LW of having or possibly having delusions of grandeur for essentially believing in (2). See here:

http://lesswrong.com/lw/2lr/the_importance_of_selfdoubt/

My main point is that even if Eliezer believes in (2), we can't conclude that he has such delusions unless we accept that many LW readers also have such delusions.

comment by wedrifid · 2010-09-01T04:58:45.676Z · LW(p) · GW(p)

If N is at least 18, it’s hard to think of a rational criterion under which believing you are 1 in 10^N is delusional whereas thinking you are 1 in 10^(N-12) is not.

Really? How about "when you are, in fact, 1/10^(N-12) and have good reason to believe it"? Throwing in a large N doesn't change the fact that 10^N is still 1,000,000,000,000 times larger than 10^(N-12), nor does it mean we could not draw conclusions about belief (2).

(Not commenting on Eliezer here, just suggesting the argument is not all that persuasive to me.)

Replies from: Snowyowl, James_Miller, timtyler
comment by Snowyowl · 2010-09-01T12:03:21.028Z · LW(p) · GW(p)

I agree. Somebody has to be the most important person ever. If Eliezer really has made significant contributions to the future of humanity, he's much more likely to be that most important person than a random person out of 10^N candidates would be.

Replies from: James_Miller
comment by James_Miller · 2010-09-01T14:25:24.630Z · LW(p) · GW(p)

The argument would be that Eliezer should doubt his own ability to reason if his reason appears to cause him to think he is 1 in 10^N. My claim is that if this argument is true everyone who believes in (1) and thinks N is large should, to an extremely close approximation, have just as much doubt in their own ability to reason as Eliezer should have in his.

Replies from: Snowyowl
comment by Snowyowl · 2010-09-01T15:12:45.895Z · LW(p) · GW(p)

Agreed. Not sure if Eliezer actually believes that, but I take your point.

comment by James_Miller · 2010-09-01T05:05:31.695Z · LW(p) · GW(p)

To an extremely good approximation, one-in-a-million events don't ever happen.

Replies from: wedrifid, gwern
comment by wedrifid · 2010-09-01T05:11:32.165Z · LW(p) · GW(p)

To an extremely good approximation this Everett Branch doesn't even exist. Well, it wouldn't if I used your definition of 'extremely good'.

Replies from: James_Miller
comment by James_Miller · 2010-09-01T05:28:29.130Z · LW(p) · GW(p)

Your argument seems to be analogous to the false claim that it's remarkable that a golf ball landed exactly where it did (regardless of where it did land) because the odds of that happening were extremely small.

I don't think my argument is analogous because there is reason to think that being one of the most important people to ever live is a special happening clearly distinguishable from many, many others.

comment by gwern · 2010-09-01T13:44:17.728Z · LW(p) · GW(p)

Yet they are quite easy to generate - flip a coin a few times. (Twenty flips pin down a specific sequence whose prior probability was about one in a million.)

comment by timtyler · 2010-09-08T07:34:51.491Z · LW(p) · GW(p)

10^N is still 1,000,000,000,000 times larger than 10^(N-12)

Hear, hear. That is a trillion times more probable!

comment by rwallace · 2010-09-01T16:54:57.302Z · LW(p) · GW(p)

It's not about the numbers, and it's not about Eliezer in particular. Think of it this way:

Clearly, the development of interstellar travel (if we successfully accomplish this) will be one of the most important events in the history of the universe.

If I believe our civilization has a chance of achieving this, then in a sense that makes me, as a member of said civilization, important. This is a rational conclusion.

If I believe I'm going to build a starship in my garage, that makes me delusional. The problem isn't the odds against me being the one person who does this. The problem is that nobody is going to do this, because building a starship in your garage is simply impossible; it's just too hard a job to be done that way.

Replies from: Houshalter
comment by Houshalter · 2010-09-03T01:30:04.367Z · LW(p) · GW(p)

If I believe I'm going to build a starship in my garage, that makes me delusional. The problem isn't the odds against me being the one person who does this. The problem is that nobody is going to do this, because building a starship in your garage is simply impossible; it's just too hard a job to be done that way.

You assume it is. But maybe you will invent AI and then use it to design a plan for building a starship in your garage. So it's not simply impossible. It's just unknown, and even if you could, there's no reason to believe that would be a good decision. But hey, in a hundred years, who knows what people will build in their garages, or the equivalent thereof. I imagine people a hundred years ago would have found our projects pretty strange.

comment by prase · 2010-09-01T14:15:03.282Z · LW(p) · GW(p)

I think I don't understand (1) and its implications. How does the fact that in most of the branches we go extinct imply that we are the most important couple of generations (this is how I interpret the trillion)? Our importance lies in our decisions. These decisions influence the number of branches in which people die out. If we take (1) as given, it means we weren't successful in mitigating the existential risk, leaving no room to exercise our decisions and thus our importance.

comment by magfrump · 2010-09-25T21:38:30.586Z · LW(p) · GW(p)

Omega comes up to you and tells you that if you believe in science it will make your life 1000 utilons better. He then goes on to tell you that if you believe in god, it will make your afterlife 1 million utilons better. And finally, if you believe in both science and god, you won't get accepted into the afterlife so you'll only get the 1000 utilons.

If it were me, I would tell Omega that he's not my real dad and go on believing in science and not believing in god.

Am I being irrational?

EDIT: if Omega is an infinitely all-knowing oracle, the answer may be different than if Omega is ostensibly a normal human who has predicted many things correctly. Also, by "to believe in science" I mean to pursue epistemic rationality as a standard for believing things, rather than, for example, a literal interpretation of the Bible.
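
For concreteness, one way to lay out the payoffs (the helper function and the credence p below are just an illustrative framing, not part of the original scenario):

```python
# An illustrative sketch of the payoffs described above; the helper function and
# the credence p are assumptions for the sake of the example.
def utilons(believe_science: bool, believe_god: bool, afterlife_exists: bool) -> int:
    total = 0
    if believe_science:
        total += 1000             # promised improvement to this life
    if believe_god and not believe_science and afterlife_exists:
        total += 1_000_000        # afterlife bonus, forfeited if you also believe in science
    return total

# If p is your credence that there is an afterlife at all, believing in god alone beats
# believing in science alone whenever 1_000_000 * p > 1000, i.e. whenever p > 0.001.
```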

Replies from: NihilCredo
comment by NihilCredo · 2010-09-25T21:52:50.279Z · LW(p) · GW(p)

The definition of Omega includes him being completely honest and trustworthy. He wouldn't tell you "I will make your afterlife better" unless he knew that there is an afterlife (otherwise he couldn't make it better), just like he wouldn't say "the current Roman Emperor is bald". If he were to say instead "I will make your afterlife better, if you have one", I would keep operating on my current assumption that there is no such thing as an afterlife.

Oh, I almost forgot - what does it even mean to "believe in science"?

comment by billswift · 2010-09-01T09:57:13.622Z · LW(p) · GW(p)

I don’t care if AI is Friendly or not. Once there is recursively improving AI, the human race is irrelevant; anyone planning to continue living under those conditions either has not thought things through, or would be equally happy living in a permanent simulation or as a wirehead. I am mainly interested in ensuring that whatever AI we create is not stupid; that is, that it does not paperclip the universe or optimize itself/themselves into a long-term dead end like the Vile Offspring in Stross’s Accelerando.

Replies from: wedrifid, blogospheroid, Snowyowl
comment by wedrifid · 2010-09-01T10:00:32.973Z · LW(p) · GW(p)

Once there is recursively improving AI, the human race is irrelevant; anyone planning to continue living under those conditions either has not thought things through, or would be equally happy living in a permanent simulation or as a wirehead.

This does not follow.

Replies from: billswift
comment by billswift · 2010-09-01T13:00:46.035Z · LW(p) · GW(p)

I see your point; I was reasoning from "the human race" (i.e., humanity in general) to an unjustified claim about individual humans and what they "should" do or believe.

comment by blogospheroid · 2010-09-01T16:40:37.892Z · LW(p) · GW(p)

anyone planning to continue living under those conditions either has not thought things through, or would be equally happy living in a permanent simulation or as a wirehead.

Your existence may not be relevant to the rest of the universe after friendly AI, but that doesn't mean that you would be a wirehead. I want to live a life of genuine challenges, but I really wish that it didn't have to be in a world of genuine suffering.

comment by Snowyowl · 2010-09-01T11:36:18.735Z · LW(p) · GW(p)

I don’t care if AI is Friendly or not. [...] I am mainly interested in ensuring that whatever AI we create does not paperclip the universe

You contradict yourself here. A Friendly AI is an intelligence which attempts to improve the well-being of humanity. A paperclip maximiser is an intelligence which does not, as it cares about something different and unrelated. Any sufficiently advanced AI is either one or the other or somewhere in between.

By "sufficiently advanced", I mean an AI which is intelligent enough to consider the future of humanity and attempt to influence it.

Replies from: PhilGoetz, billswift
comment by PhilGoetz · 2010-09-01T16:37:25.219Z · LW(p) · GW(p)

You contradict yourself here. A Friendly AI is an intelligence which attempts to improve the well-being of humanity. A paperclip maximiser is an intelligence which does not, as it cares about something different and unrelated. Any sufficiently advanced AI is either one or the other or somewhere in between.

No; these are two types of AIs out of a larger design space. You ignore, at the very least, the most important and most desirable case: An AI that shares many of humanity's values, and attempts to achieve those values rather than increase the well-being of humanity.

comment by billswift · 2010-09-01T13:04:27.698Z · LW(p) · GW(p)

A paperclip maximizer also has a dead-end goal - maximizing paperclips - which is what I object to. A non-Friendly AI just has no particular interest in humans; there is no other necessary claim you can make about it.

Replies from: Alexandros, Snowyowl, Oscar_Cunningham
comment by Alexandros · 2010-09-01T16:59:06.839Z · LW(p) · GW(p)

You have merely redefined the goal from 'the benefit of humanity' to 'a non-dead-end goal', which may be just as hairy.

Replies from: billswift
comment by billswift · 2010-09-01T17:13:29.166Z · LW(p) · GW(p)

Even more hairy. Any primary goal will, I think, eventually end up with a paperclipper. We need more research into how intelligent beings (i.e., humans) actually function. I do not think people, with rare exceptions, actually have primary goals, only temporary, contingent goals to meet temporary ends. That is one reason I don't think much of utilitarianism - people's "utilities" are almost always temporary, contingent, and self-limiting.

This is also one reason why I have said that I think provably Friendly AI is impossible. I will be glad to be proven wrong if it does turn out to be possible.

comment by Snowyowl · 2010-09-01T13:22:55.080Z · LW(p) · GW(p)

Ah, I see. I misunderstood your definition of "paperclip maximiser"; I assumed paperclip maximiser and Unfriendly AI were equivalent. Sorry.

Next question: if maximising paperclips or relentless self-optimisation is a dead-end goal, what is an example of a non-dead-end goal? Is there a clear border between the two, which will be obvious to the AI?

To my mind, if paperclip maximisation is a dead end, then so is everything else. The Second Law of Thermodynamics will catch up with you eventually. Nothing you create will endure forever. The only thing you can do is try to maximise your utility for as long as possible, and if that means paperclips, then so be it.

comment by Oscar_Cunningham · 2010-09-01T13:15:25.368Z · LW(p) · GW(p)

What would be a non-friendly goal that isn't a dead end? (N.B. Not a rhetorical question.)

Replies from: NihilCredo
comment by NihilCredo · 2010-09-01T14:22:03.744Z · LW(p) · GW(p)

Deciding that humanity was a poor choice for the dominant sapient species and should be replaced by (the improved descendants of) dolphins or octopi?